\section{Introduction}
In order to study open physical systems, that is, systems in interaction with a large environment, two main approaches are usually considered. The first one is Hamiltonian: it consists in describing the small system, the environment and their interactions in a completely Hamiltonian way, and then studying the associated dynamical system. The other approach is Markovian: it consists in giving up the description of the environment (which is too complicated or unknown) and describing only the effective action of the environment on the small system. Under some assumptions on the environment, the evolution of the small system is a Markov process. One can then study this Markov process with the associated probabilistic tools (invariant measure, etc.).
\smallskip
In the context of quantum mechanical systems, S. Attal and Y. Pautrat propose in \cite{AP} a new type of model for the interaction of a small system with the environment: the \emph{scheme of repeated interactions}. In this setup, the environment is regarded as an infinite assembly of small identical pieces which act independently, one after the other, on the system during a small time step $h$. This approach has the advantage of standing in between the two previous ones: it is Hamiltonian, for each interaction of the small system with one piece of the environment is described by a full Hamiltonian; it is also Markovian in its structure of independent and repeated interactions with fresh pieces of the environment.
In the quantum context, this approach also has the advantage of giving rise to a rather workable way of implementing the dissipation. It is physically realistic, for it is shown in \cite{AP} that, in the continuous interaction limit ($h$ tends to 0), the associated dynamics converges to the usual quantum Langevin equations associated to open quantum systems.
Our aim in this article is to consider this scheme of repeated interactions, and its continuous time limit, for classical physical systems.
\smallskip
S. Attal describes in \cite{A} a mathematical framework for classical systems of repeated interactions. His construction is based on a strong connection between Markov processes and dynamical systems that he describes; we present it in Section 2. The main idea is that Markov processes are all obtained from deterministic dynamical systems on product spaces when ignoring one of the two components. In particular stochastic differential equations can be seen as deterministic dynamical systems on the state space times the Wiener space.
The dynamical system associated to repeated classical interactions is discrete in time, depending on the time parameter $h$. In order to consider these dynamical systems for all $h$, together with their continuous limit as $h$ tends to 0, we have to embed discrete time and continuous time dynamical systems into a common natural setup. This is what we develop in Section 3.
\smallskip
Section 4 is devoted to presenting several physical examples to which our main theorems will be applied at the end of the article.
\smallskip
The convergence of the discrete dynamical systems associated to classical repeated interactions is carefully studied in Section 5. More precisely, the evolution of the system undergoing repeated interactions shall be represented by a Markov chain $(X^h_{nh})$. The study of a limit evolution comes down to the convergence of a linearly interpolated Markov chain to the solution of a stochastic differential equation. Indeed, the embedded dynamics lead us to consider the linear interpolation of $(X^h_{nh})$, i.e. the process $(X_t^h)$ defined by
\begin{align*}
X_t^h = X_{\lfloor t/h\rfloor h}^h +\dfrac{t-\lfloor t/h\rfloor h}{h} \Big{\{}& X_{(\lfloor t/h\rfloor +1)h}^h-X_{\lfloor t/h\rfloor h}^h\Big{\}}\,.
\end{align*}
The main theorems of Section 5 are concerned with the convergence of this process $(X_t^h)$ under some assumptions. Theorem 5.2 shows the $L^{p}$ and almost sure convergences when the evolution of the Markov chain is given by
\[X^h_{(n+1)h}= X_{nh}^h+ \sigma(X_{nh}^h)(W_{(n+1)h}- W_{nh}) + h b(X_{nh}^h) + h \eta^{(h)}(X_{nh}^h,W_{(n+1)h}- W_{nh})\,.\]
The limit process is the solution of the stochastic differential equation
\[dX_t = b(X_t)\,dt + \sigma(X_t)\,d W_t\,\]
when the applications $b$ and $\sigma$ are Lipschitz, and when $\eta^{(h)}$ satisfies
\[\left|\eta^{(h)}(x,y) \right| \leq K (h^{\alpha}\left|x\right| + \left|y\right|)\,.\]
All these results are then applied in Section 6 to the physical examples previously presented in Section 4.
\section{Dynamical Systems and Markov Processes}
\subsection{Discrete Time}
A connection between deterministic dynamical systems and Markov chains has been recently presented by Attal in \cite{A}. He shows that randomness in Markov chains appears naturally from deterministic dynamical systems when losing some information about one part of the system. In this section, we present the context and the main results of \cite{A}.
\smallskip
A \emph{discrete time dynamical system} is a measurable map $\widehat{T}$ on a measurable space $(F,\mathcal{F})$. The dynamics of this system is given by the sequence $(\widehat{T}^n)_{n \in \mathbb{N}^*}$ of iterations of $\widehat T$. From this map $\widehat T$, one can naturally define a map ${T}$ on $\mathcal{L}^{\infty} (F,\mathcal{F})$, the set of bounded functions on $F$, by
\[ {T}f(x)=f(\widehat{T}(x))\,,\]
for all $f$ in $\mathcal{L}^{\infty} (F,\mathcal{F})$ and all $x$ in $F$.
\smallskip
Now consider a dynamical system $\widehat{T}$ on a product space $S \times E$ where $(S,\mathcal{S})$ and $(E, \mathcal{E})$ are two measurable spaces. Furthermore, assume that $(E,\mathcal{E})$ is equipped with a probability measure $\mu$. Physically, $S$ is understood as the phase space of a ``small'' system and $E$ as the one of the environment. Let $T$ be the application on $\mathcal{L}^{\infty} (S \times E)$ induced by $\widehat{T}$.
The idea of the construction developed in \cite{A} is to consider the situation where one has only access to the system $S$ and not to the environment $E$ (for example $E$ might be too complicated, or unknown, or unaccessible to measurement). One wants to understand what kind of dynamics is obtained from $T$ when restricting it to $S$ only.
\smallskip
Consider a bounded function $f$ on $S$, we naturally extend it as a (bounded) function on $S \times E$ by considering
\[(f \otimes \mathds{1}) (x,y)= f(x),\]
for all $x$ in $S$, all $y$ in $E$. That is, the function $f$ made for acting on $S$ only is seen as being part of a larger world $S\times E$.
The dynamical system $T$ can now act on $f\otimes \mathds{1}$. We assume that what the observer of the system $S$ sees from the whole dynamics on $S\times E$ is an average on $E$ along the given probability measure $\mu$.
Therefore, we have to consider the application $L$ on $\mathcal{L}^\infty(S)$ defined by
\[Lf(x)= \int_E T(f \otimes \mathds{1}) (x,y)\, d\mu(y)\,.\]
Note that $Lf$ also belongs to $\mathcal{L}^\infty(S)$. This operator $L$ on $\mathcal{L}^\infty(S)$ represents the restriction of the dynamics $\widehat T$ on $S$.
In other words, we have the following commuting diagram:
\[\begin{CD}
\mathcal{L}^{\infty} (S \times E) @>T>> \mathcal{L}^{\infty} (S \times E)\\
@A{\otimes \mathds{1}}AA @VV{\int_E \cdot \hspace{1mm} d\mu}V\\
\mathcal{L}^{\infty} (S) @>>L> \mathcal{L}^{\infty} (S)
\end{CD}\]
One may now wonder what kind of operator $L$ is obtained this way. In \cite{A} the following result is proved.
\begin{theo}
There exists a Markov transition kernel $\Pi$ such that $L$ is of the form
\[ Lf(x) = \int_S f(z) \,\Pi(x,dz)\,,\]
for all $f\in\mathcal{L}^\infty(S)$.
\smallskip
Conversely, if $S$ is a Lusin space and $\Pi$ is any Markov transition kernel on $S$, then there exist a probability space $(E,\mathcal{E},\mu)$ and a dynamical system $\widehat T$ on $S\times E$ such that the operator
\[ Lf(x) = \int_S f(z) \,\Pi(x,dz)\,,\]
is of the form
\[Lf(x)= \int_E T(f \otimes \mathds{1}) (x,y)\, d\mu(y)\]
for all $f\in\mathcal{L}^\infty(S)$.
\end{theo}
The mapping $L$ on the system $S$ is not a dynamical system anymore. It represents the part of the dynamics on the large system $S\times E$ which is obtained when observing $S$ only. The main fact of the above result is that $L$ encodes some randomness, while $T$ was completely deterministic. This randomness arises from the lack of knowledge on the environment.
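To make this averaging concrete, here is a minimal numerical sketch, in Python, of the Monte Carlo estimation of $Lf(x)$; the map $U(x,y)=ax+y$ and the standard Gaussian measure $\mu$ are toy choices of ours, not taken from \cite{A}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy map T_hat(x, y) = (U(x, y), V(x, y)) on S x E = R x R;
# only U matters for the dynamics projected on S.
a = 0.5
def U(x, y):
    return a * x + y

def L(f, x, n_samples=100_000):
    """Monte Carlo estimate of Lf(x) = int_E f(U(x, y)) dmu(y),
    with mu the standard Gaussian measure on E."""
    y = rng.standard_normal(n_samples)   # samples from mu
    return np.mean(f(U(x, y)))

# Here L cos(x) should be close to cos(a x) exp(-1/2).
print(L(np.cos, 1.0), np.cos(a * 1.0) * np.exp(-0.5))
\end{verbatim}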
\smallskip
Note that in the converse part above, the dynamical system $T$ which dilates $L$ is not unique.
\bigskip
The Markov operator $L$ obtained above comes from the projection of only one iteration of the dynamical system $T$. It is not true in general that if one projects the mapping $T^k$ on $S$ one would obtain $L^k$ (see \cite{A} for a counter-example). It would be very interesting to be able to construct a dynamical system $T$ which dilates a Markov operator $L$ not only for one step, but for all their powers $T^k$ and $L^k$. This would mean that we have a whole dynamical system $(T^k)_{k\in\mathbb{N}}$ which dilates a whole Markov chain $(L^k)_{k\in\mathbb{N}}$ when restricting it to $S$. This is to say that one wants the following diagram to commute for all $k\in\mathbb{N}$:
$$
\begin{CD}
\mathcal{L}^{\infty} (S \times E) @>T^k>> \mathcal{L}^{\infty} (S \times E)\\
@A{\otimes \mathds{1}}AA @VV{\int_E \cdot \hspace{1mm} d\mu}V\\
\mathcal{L}^{\infty} (S) @>>L^k> \mathcal{L}^{\infty} (S)
\end{CD}
$$
As explained in \cite{A} this can be obtained through the scheme of repeated interactions, as follows.
Let $\widehat{T}$ be a dynamical system on $S \times E$ which dilates a Markov operator $L$. Since $\widehat{T}$ is a measurable map from $S \times E$ to $S \times E$, there exist two measurable applications $U$ and $V$ such that, for all $x$ in $S$, all $y$ in $E$,
\[\widehat{T}(x,y) = (U(x,y),V(x,y))\,.\]
Now, consider the space $\widetilde{E} = E^{\mathbb{N}^*}$ endowed with the usual $\sigma$-field $\mathcal{E}^{\otimes \mathbb{N}^*}$ and the product measure $\widetilde{\mu}=\mu^{\otimes \mathbb{N}^*}$. From the map $\widehat{T}$, one defines a dynamical system $\widetilde{T}$ on the space $S \times \widetilde{E}$ by
\[\widetilde{T}(x,y)=(U(x,y_1),\theta(y))\,,\]
for all $x$ in $S$, all sequence $y=(y_m)_{m \in \mathbb{N}^*}$ in $\widetilde{E}$, where $\theta$ is the shift on $\widetilde{E}$, that is,
$$
\theta(y)=(y_{m+1})_{m \in \mathbb{N}^*}\,.
$$
Physically, this construction can be understood as follows.
The system $S$ is in interaction with
a large environment. This large environment is a chain $\widetilde{E}$ made of copies of a small piece $E$. One after the other, each part $E$ of this environment interacts with the system $S$, independently from the others. This is the so-called ``\emph{repeated interactions scheme}''.
\smallskip
Note that the application $V$ does not appear in the definition of $\widetilde{T}$.
From the physical point of view this is natural: the map $V$ gives the evolution of the piece $E$ of the environment which has just acted on the system. As this piece shall not be involved in the dynamics of the system $S$ anymore, its new state has no importance for the system.
\smallskip
As previously, the mapping $\widetilde{T}$ induces an operator $T$ on $\mathcal{L}^{\infty}(S \times \widetilde{E})$.
Then, the following theorem (proved in \cite{A}) shows that the sequence $(T^k)$ of iterations of $T$ dilates the whole Markov chain $(L^k)_{k\in\mathbb{N}}$.
\begin{theo}
For all $m$ in $\mathbb{N}^*$, all $x$ in $S$, and all $f$ in $\mathcal{L}^{\infty} (S)$,
\[(L^m f) (x)= \int_{\widetilde{E}} T^m(f \otimes \mathds{1}) (x,y)\, d\widetilde{\mu}(y)\,.\]
\end{theo}
The operator $L$ represents the action of a Markov kernel $\Pi$ on bounded applications and the restriction of the whole dynamics given by $T$ to $S$.
On the other hand, the Markovian behaviour on applications associated to the action of $\Pi$ can be also seen on points when focusing on the dynamical system $\widetilde{T}$. Indeed, for all initial state $x$ of the system in $S$, and all state of the environment $y$ in $\widetilde{E}$, the evolution of the system is given by the sequence $(X_n^x(y))_{n \in \mathbb{N}}$ defined by \[X_{n+1}^x (y) = U(X_n^x(y), y_{n+1})\,,\]
with $X_0=x$.
Now if the state of the environment is unknown, the dynamics of the system is represented by the sequence $(X_n^x)_{n \in \mathbb{N}}$ of applications from $\widetilde{E}$ to $S$.
This sequence $(X_n^x)$ is a Markov chain. Furthermore, the Markov transition kernel of $(X_n^x)$
is $\Pi$ again.
\smallskip
Note that the whole dynamics of $\widetilde{T}$ can be expressed from this sequence $(X_n^x)$ as follows. For all $k$ in $\mathbb{N}$, all $x$ in $S$, and all $y$ in $\widetilde{E}$,
\[\widetilde{T}^k(x,y)=(X^x_k(y),\theta^k(y))\,,\]
where $\theta^k(y)$ is the sequence $y$ which is shifted $k$ times, i.e. $\theta^k(y)=(y_{n+k})_{n \in \mathbb{N}^*}$.
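As an illustration, the following sketch, in Python and with the same toy map $U(x,y)=ax+y$ as above (an assumption of ours), simulates this Markov chain: at each step, one fresh, independent piece of the environment is drawn and consumed.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
a = 0.5
def U(x, y):
    return a * x + y

def repeated_interactions(x0, n_steps):
    """X_{n+1} = U(X_n, y_{n+1}), each y_{n+1} drawn independently
    from mu: the first component of T_tilde iterated n_steps times."""
    x, traj = x0, [x0]
    for _ in range(n_steps):
        y = rng.standard_normal()   # the piece of E used at this step
        x = U(x, y)
        traj.append(x)
    return np.array(traj)

print(repeated_interactions(0.0, 10))
\end{verbatim}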
\bigskip
Note that sequences are indexed by $\mathbb{N}^*$ in this scheme of repeated interactions. Physically, this can be understood as repeated interactions between the systems, each lasting for a time duration $1$. Now, a new parameter $h$ is added. It represents the time step of the interactions. Henceforth, all the sequences are indexed by $h \mathbb{N}^*$.
For instance, the dynamics of the system is now given by the Markov chain $(X_{nh}^{(h)})_{n \in \mathbb{N}}$ defined for all starting state $x$ by
\[X_{(n+1)h}^{(h)} (y) = U^{(h)}(X_{nh}^{(h)}(y), y_{(n+1)h})\,,\]
with $X_0^{(h)}=x$.
The dynamical system given by the scheme of repeated interactions is now
\[\widetilde{T}^{(h)}(x,y)=(U^{(h)}(x,y_h),\theta^{(h)}(y))\,,\]
where $\theta^{(h)}$ is the shift, i.e. $\theta^{(h)}(y)=(y_{(n+1)h})_{n \in \mathbb{N}^*}$.
\bigskip
Note that the application $U^{(h)}$ can also depend on the time step $h$. We shall see in the examples of Section 4 explicit expressions for this map $U^{(h)}$ in some physical situations.
\smallskip
Our aim is now to understand what can be the limit dynamics of these repeated interactions when the time step $h$ goes to $0$.
Attal and Pautrat, in \cite{AP}, have studied open quantum systems with an equivalent setup of repeated interactions. They show the convergence of repeated interactions to quantum stochastic differential equations.
In Section 5, we shall see that the limit evolutions in the classical case are solutions of some stochastic differential equations. Therefore, we shall need to extend our parallel between Markov chains and dynamical systems to the continuous time setting, and in particular to solutions of stochastic differential equations.
\subsection{Continuous Time}
As previously seen, from a Markov operator $L$, with the help of repeated interactions, one can construct a dynamical system which dilates the whole sequence~$(L^k)$. Moreover, the evolution of the first component is given by a Markov chain associated to the Markov transition kernel. We wish to extend this idea to Markov processes which are solutions of stochastic differential equations.
\smallskip
A \emph{continuous time dynamical system} is a semigroup $(T_t)_{t \in \mathbb{R}^+}$ of measurable applications, that is, a family satisfying $T_s \circ T_t = T_{s+t}$ for all $s$, $t$ in $\mathbb{R}^+$.
Consider now a stochastic differential equation (SDE) on $\mathbb{R}^m$,
\begin{equation}
dX_t = b(X_t)\, dt + \sigma(X_t) \,dW_t \label{eq:A}\,,
\end{equation}
where $W_t$ is a $d$-dimensional standard Brownian motion, $b$ and $\sigma$ are measurable functions respectively from $\mathbb{R}^m$ to $\mathbb{R}^m$, and from $\mathbb{R}^m$ to $\mathcal{M}_{m,d} (\mathbb{R})$, the space of $m \times d$ real matrices.
We want to create an environment $\Omega$ and a deterministic dynamical system $(T_t)$ on $\mathbb{R}^m \times \Omega$ which dilates the solution and such that the first component of $T_t$ is the solution of
\eqref{eq:A} at time $t$.
\smallskip
However, we shall need existence and uniqueness of the solution of \eqref{eq:A} at all times $t$ and for all initial conditions. In order to guarantee these properties we have to make some assumptions on the functions $b$ and $\sigma$. We require them to be either globally Lipschitz,
\begin{enumerate}
\item[(H1)] There exists $K_0 >0$ such that for all $x,y$ we have
\[\left|b(x)-b(y)\right|\leq K_0 \left|x-y\right|\, \text{ and} \hspace{5mm} \|\sigma(x)-\sigma(y)\| \leq K_0 \left|x-y\right|\,,\]
\end{enumerate}
or locally Lipschitz and linearly bounded,
\begin{enumerate}
\item[(H2)] The functions $b$ and $\sigma$ are locally Lipschitz:
for all $N>0$, there exists $K_N >0$ such that for all $x,y \in \mathbb{B}(0,N)$ we have
\[\left|b(x)-b(y)\right|\leq K_N \left|x-y\right|\, \text{ and} \hspace{5mm} \|\sigma(x)-\sigma(y)\| \leq K_N \left|x-y\right|\,.\]
\item[(H3)] Linear growth bound: There exists a constant $K_1>0$ such that
\[\left|b(x)\right| \leq K_1(1 + \left|x\right|)\qquad\mbox{and}\qquad
\|\sigma(x)\|\leq K_1(1 + \left|x\right|)\,,\]
\end{enumerate}
where $\left| \cdot \right|$ is the euclidean norm on $\mathbb{R}^m$ and $\| \cdot \|$ is the Hilbert-Schmidt norm on $\mathcal{M}_{m,d} (\mathbb{R})$ defined by
\[\| \sigma \|^2 = \sum_{j = 1}^d \sum_{i=1}^m (\sigma^{i,j})^2\,.\]
Note that Assumption (H1) implies (H2) and (H3). We distinguish the two cases because the convergences studied in Section 5 shall be stronger for globally Lipschitz functions $b$ and $\sigma$.
Note also that Assumption (H1) or Assumptions (H2) and (H3) are sufficient (see \cite{IW}) but not necessary.
\smallskip
One can now construct the environment and the dynamical system. Consider the Wiener space $(\Omega,\mathcal{F},\mathbb{P})$ associated to the canonical Brownian motion $(W_t)$. This is to say that $\Omega=C_0 (\mathbb{R}_+, \mathbb{R}^d)$ is the space of continuous functions from $\mathbb{R}_+$ to $\mathbb{R}^d$ vanishing at the origin. The canonical Brownian motion $(W_t)$ is then defined by $W_t (\omega)=\omega(t)$, for all $\omega\in\Omega$ and all $t\in\mathbb{R}_+$.
Consider the \emph{shift} $\theta_t$ defined on $\Omega$ by
\[\theta_t(\omega)(s)=\omega (t+s) - \omega(t)\,,\]
and the family of mappings $(T_t)_{t\in \mathbb{R}^+}$ on $\mathbb{R}^m\times \Omega$ defined by
\[T_t(x,\omega) = (X^x_t(\omega),\theta_t(\omega))\,,\]
for all $x\in\mathbb{R}^m$, all $\omega\in\Omega$, where $X^x_t$ is the solution of \eqref{eq:A} at time $t$, starting at $x$. In \cite{A}, Attal shows the following.
\begin{theo}
The family $(T_t)$ on $\mathbb{R}^m \times \Omega$ is a continuous time dynamical system.
\end{theo}
Let us denote by $\mathcal{T}_t$ the map induced by $T_t$ on $\mathcal{L}^\infty(\mathbb{R}^m \times \Omega)$. It induces a semigroup $(P_t)_{t \in \mathbb{R}^+}$ on $\mathcal{L}^\infty(\mathbb{R}^m)$ by
\[P_t(f)(x)=\mathbb{E}\left[\mathcal{T}_t(f \otimes \mathds{1})(x, \cdot )\right]\,.\]
The generator of this semigroup $(P_t)$ is
\[\mathcal{A}= \frac{1}{2} \sum_{i,j=1}^m a^{i,j}(x) \frac{\partial^2}{\partial x^i \partial x^j} + \sum_{i=1}^m b^i(x) \frac{\partial}{\partial x^i}\,,\]
where $a$ is the symmetric matrix $\sigma \sigma^t$.
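As a sanity check of this generator formula, one can compare a Monte Carlo estimate of $(P_tf(x)-f(x))/t$, for small $t$, with $\mathcal{A}f(x)$. The sketch below, in Python, uses a toy one-dimensional case of ours, $b(x)=-x$, $\sigma \equiv 1$ and $f(x)=x^2$, for which $\mathcal{A}f(x)=1-2x^2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)

b = lambda x: -x                     # toy drift
f = lambda x: x ** 2
Af = lambda x: 1.0 - 2.0 * x ** 2    # (1/2) f'' + b f', since sigma = 1

def Pt_f(t, x, n_paths=200_000, n_sub=50):
    """Monte Carlo estimate of P_t f(x) = E[f(X_t^x)], via Euler substeps."""
    dt = t / n_sub
    X = np.full(n_paths, float(x))
    for _ in range(n_sub):
        X += b(X) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    return f(X).mean()

x, t = 0.0, 0.01
print((Pt_f(t, x) - f(x)) / t, Af(x))   # both values close to 1
\end{verbatim}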
\section{Embedding Discrete Time into Continuous Time}
In order to prove the convergence of the discrete time dynamical systems, associated to repeated interactions, to the continuous time dynamical systems associated to solutions of stochastic differential equations, we need to explicitly embed the discrete time dynamical systems into a continuous time setup.
The state space on which the repeated interaction dynamical system $\widetilde{T}^{(h)}$ acts is $\mathbb{R}^m \times (\mathbb{R}^d)^{h\mathbb{N}^*}$, whereas the one of $(T_t)_{t \in \mathbb{R}_+}$ is $\mathbb{R}^m \times \Omega$. The first step in constructing the embedding of dynamical systems is to construct an embedding of $(\mathbb{R}^d)^{h\mathbb{N}^*}$ into $\Omega$.
\subsection{Discrete Approximation of $\Omega$}
Let $\phi_I^{(h)}$ be the map from $(\mathbb{R}^d)^{h \mathbb{N}^*}$ to $\Omega$ defined by
\[\phi_I^{(h)}(y) (t) = \sum_{n=0} ^{\lfloor t/h \rfloor} y_{nh} + \frac{t - \lfloor t/h \rfloor h}{h} y_{(\lfloor t/h \rfloor +1)h}\, ,\]
where $y_0 =0$.
This map $\phi_I^{(h)}$ actually builds a continuous, piecewise linear, function whose increments are the elements of the sequence $y$. The range of $\phi_I^{(h)}$ is denoted by $\Omega^{(h)}$; it is a subspace of $\Omega$.
\smallskip
Conversely, define the map $\phi_P^{(h)}$ from $\Omega$ to $(\mathbb{R}^d)^{h \mathbb{N}^*}$ by
\[\phi_P^{(h)}(\omega)=(W_{nh}(\omega) - W_{(n-1)h}(\omega))_{n \in \mathbb{N}^*}= (\omega (nh) - \omega((n-1)h))_{n \in \mathbb{N}^*} \,.\]
In other words, the image of an element of $\Omega$ under $\phi_P^{(h)}$ is the sequence of its increments at the times $nh$, for all $n\in\mathbb{N}^*$.
Note that these applications $\phi_I^{(h)}$ and $\phi_P^{(h)}$ satisfy
\[ \phi_P^{(h)} \circ \phi_I^{(h)} = \mbox{Id}_{(\mathbb{R}^d)^{h \mathbb{N}^*}}\,.\]
In particular the map $\phi_I^{(h)}$ is an injection from $(\mathbb{R}^d)^{h \mathbb{N}^*}$ to $\Omega$; the spaces $(\mathbb{R}^d)^{h \mathbb{N}^*}$ and $\Omega^{(h)}$ are in bijection through the applications $\phi_I^{(h)}$ and $(\phi_P^{(h)})_{\vert \Omega^{(h)}}$.
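Both maps are straightforward to implement. The following sketch, in Python and for $d=1$ (the code is ours, written directly from the definitions), also checks the identity $\phi_P^{(h)} \circ \phi_I^{(h)} = \mathrm{Id}$ on a random finite sequence.
\begin{verbatim}
import numpy as np

def phi_P(path_vals):
    """Increments (omega(nh) - omega((n-1)h))_{n >= 1}, given the values
    path_vals = (omega(0), omega(h), omega(2h), ...)."""
    return np.diff(path_vals)

def phi_I(y, h, t):
    """Piecewise linear path with increments y = (y_h, y_2h, ...),
    evaluated at time t (convention y_0 = 0)."""
    k = int(np.floor(t / h))
    lam = (t - k * h) / h
    tail = lam * y[k] if lam > 1e-12 else 0.0   # grid points need no y[k]
    return y[:k].sum() + tail

h = 0.1
y = np.random.default_rng(2).standard_normal(20)
vals = np.array([phi_I(y, h, n * h) for n in range(21)])
print(np.allclose(phi_P(vals), y))              # True: phi_P o phi_I = Id
\end{verbatim}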
\smallskip
We now show that $(\mathbb{R}^d)^{h \mathbb{N}^*}$ can be viewed as an approximation of $\Omega$ for the usual metric associated to the topology of uniform convergence on compact sets. For two elements $\omega$ and $\omega'$ in $\Omega$ define the distance
\[D(\omega,\omega') = \sum_{n=1}^{\infty} \frac{1}{2^n} \hspace{3mm} \dfrac{\underset{0\leq t \leq n}{\sup} \left| \omega(t) - \omega '(t) \right|}{1+\underset{0\leq t \leq n}{\sup} \left| \omega(t) - \omega '(t) \right|}\,,\]
where $\left| \cdot \right|$ is the euclidean norm on $\mathbb{R}^d$.
The space $\Omega$ endowed with this metric is a Polish space.
\begin{lem}
For all $\omega \in \Omega$, \[\underset{h \rightarrow 0}{\lim} \hspace{2mm} D(\omega,\phi_I^{(h)} \circ \phi_P^{(h)} (\omega) )=0\,.\]
\end{lem}
\begin{proof}
This result is based on the uniform convergence of piecewise linear applications to a continuous one on compact sets.
Let $\omega$ be a function in $\Omega$. Then, by definition,
$$
\phi_P^{(h)}(\omega)=(\omega (nh) - \omega((n-1)h))_{n \in \mathbb{N}^*}\,.
$$
Now the mapping $\phi_I^{(h)}$ is applied to the sequence $\phi_P^{(h)}(\omega)$. For all $t$ in $\mathbb{R}_+$, we have
\begin{multline*}
\phi_I^{(h)} \circ \phi_P^{(h)} (\omega) (t)= \sum_{n=0} ^{\lfloor t/h \rfloor} (\omega (nh) - \omega((n-1)h)) +\hfill \\
\hfill +\frac{t - \lfloor t/h \rfloor h}{h} \Big{\{}\omega((\lfloor t/h \rfloor +1)h)- \omega(\lfloor t/h \rfloor h)\Big{\}}\\
\hphantom{\phi_I^{(h)} \circ \phi_P^{(h)} (\omega) (t) \ \ \ }=\,\omega(\lfloor t/h \rfloor h) + \frac{t - \lfloor t/h \rfloor h}{h} \Big{\{}\omega((\lfloor t/h \rfloor +1)h)- \omega(\lfloor t/h \rfloor h)\Big{\}} \,.\hfill
\end{multline*}
Note that $\phi_I^{(h)} \circ \phi_P^{(h)} (\omega) (t)$ is a point in the segment
$
\Big{[}\omega(\lfloor t/h \rfloor h)\,,\,\omega((\lfloor t/h \rfloor +1)h)\Big{]}\,.
$
Since $\omega$ is continuous we have
\[\lim_{h \rightarrow 0}\omega(\lfloor t/h \rfloor h)=\lim_{h \rightarrow 0}\omega((\lfloor t/h \rfloor +1)h)= \omega(t)\,.\]
Therefore,
\[ \lim_{h \rightarrow 0} \phi_I^{(h)} \circ \phi_P^{(h)} (\omega) (t)= \omega (t)\,.\]
Let $n\in\mathbb{N}$. As $\left[0,n\right]$ is compact, the function $\omega$ is uniformly continuous on this interval.
Thus,
\[ \lim_{h \rightarrow 0}\ \sup_{0\leq t \leq n} \left| \omega(t) - \phi_I^{(h)} \circ \phi_P^{(h)} (\omega) (t) \right| = 0\,.\]
On the other hand, note that
\[\frac{\underset{0\leq t \leq n}{\sup} \left| \omega(t) - \phi_I^{(h)} \circ \phi_P^{(h)} (\omega) (t) \right|}{1+\underset{0\leq t \leq n}{\sup} \left| \omega(t) - \phi_I^{(h)} \circ \phi_P^{(h)} (\omega) (t) \right|} \leq 1\,.\]
Hence, by Lebesgue's theorem,
\[ \lim_{h \rightarrow 0} D(\omega,\phi_I^{(h)} \circ \phi_P^{(h)} (\omega)) = 0\,.\]
\end{proof}
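A small numerical experiment, ours and in Python, illustrates this lemma: for a Brownian path sampled on a fine grid, the uniform distance on a compact interval between the path and its reconstruction $\phi_I^{(h)} \circ \phi_P^{(h)}(\omega)$ decreases with $h$; controlling this supremum is what drives the metric $D$ truncated to that interval.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

# A Brownian path on [0, T], sampled on a fine grid, stands in for omega.
T, dt = 4.0, 1e-4
t_fine = np.arange(0, T + dt, dt)
w = np.concatenate([[0.0],
    np.cumsum(np.sqrt(dt) * rng.standard_normal(len(t_fine) - 1))])

def sup_error(h):
    """sup_{0 <= t <= T} |omega(t) - phi_I o phi_P(omega)(t)|."""
    knots = np.arange(0, T + h, h)
    w_knots = np.interp(knots, t_fine, w)       # omega at the times nh
    w_lin = np.interp(t_fine, knots, w_knots)   # piecewise linear rebuild
    return np.abs(w - w_lin).max()

for h in [0.5, 0.1, 0.02, 0.004]:
    print(h, sup_error(h))   # decreases towards 0, as the lemma asserts
\end{verbatim}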
\subsection{Embedding the Discrete Time Dynamics}
The first step in the construction of a continuous dynamics from $\widetilde{T}^{(h)}$ is to relate it to a discrete time evolution on $\mathbb{R}^m \times \Omega$.
Since the applications $\phi_I^{(h)}$ and $\phi_P^{(h)}$ are only defined on $(\mathbb{R}^d)^{h \mathbb{N}^*}$ or $\Omega$, we extend them to $\mathbb{R}^m \times (\mathbb{R}^d)^{h \mathbb{N}^*}$ and $\mathbb{R}^m \times \Omega$, respectively, by
\[\Phi_I^{(h)} = (Id,\phi_I^{(h)})\,,\qquad \Phi_P^{(h)} = (Id,\phi_P^{(h)}) \,.\]
Consider the dynamical system $\overline{T}^{(h)}$ on $\mathbb{R}^m \times \Omega$ given by
\[\overline{T}^{(h)}= \Phi_I^{(h)} \circ \widetilde{T}^{(h)} \circ \Phi_P^{(h)}\,.\]
As we have
\[ \Phi_P^{(h)} \circ \Phi_I^{(h)} = \mbox{Id}_{\mathbb{R}^{m} \times \Omega}\,,\]
the following diagram is commuting for all $n \geq 1$:
\[\begin{CD}
\mathbb{R}^m \times \Omega @>(\overline{T}^{(h)})^n>> \mathbb{R}^m \times \Omega \\
@V\Phi_P^{(h)}VV @AA\Phi_I^{(h)}A\\
\mathbb{R}^m \times (\mathbb{R}^d)^{h \mathbb{N}^*} @>>(\widetilde{T}^{(h)})^n> \mathbb{R}^m \times (\mathbb{R}^d)^{h \mathbb{N}^*}
\end{CD}\]
We now relate this dynamical system to a continuous time dynamics, by linearly interpolating in time. Define a new family of applications $(\overline{T}^{(h)}_t)_{t \in \mathbb{R}^+}$ by
\begin{equation*}
\begin{split}
\overline{T}^{(h)}_t &=(\overline{T}^{(h)})^{\lfloor t/h\rfloor}+ \dfrac{t-\lfloor t/h\rfloor h}{h}\hspace{3mm} \Big{\{}(\overline{T}^{(h)})^{(\lfloor t/h\rfloor+1) } - (\overline{T}^{(h)})^{\lfloor t/h\rfloor}\Big{\}}\\
&=\frac{(\lfloor t/h\rfloor+1)h - t}{h}\hspace{3mm} (\overline{T}^{(h)})^{\lfloor t/h\rfloor } + \frac{t-\lfloor t/h\rfloor h}{h}\hspace{3mm} (\overline{T}^{(h)})^{(\lfloor t/h\rfloor+1)}\,,
\end{split}
\end{equation*}
where, by convention, $(\overline{T}^{(h)})^0=\mbox{Id}$.
The projections on the first and the second component of $\overline{T}^{(h)}_t$ are respectively denoted by $\overline{X}^{(h)}_t$ and $\overline{\theta}_t^{(h)}$.
More precisely, for all initial $x \in \mathbb{R}^m$ and $\omega \in \Omega$, the value of $\overline{T}^{(h)}_t(x,\omega)$ can also be expressed as
\[\overline{T}^{(h)}_t(x,\omega)= (\overline{X}^{(h)}_t (\phi_P^{(h)}(\omega)),\overline{\theta}^{(h)}_t(\omega))\,.\]
\smallskip
Note that, in general, the family $(\overline{T}^{(h)}_t)_{t \in \mathbb{R}_+}$ is not a semigroup, because of the linear interpolation. Likewise, the random process $\overline{X}^{(h)}_t$ is not Markovian, but a linearly interpolated Markov chain.
Also note that the state of the environment for the evolution of the system is given by $\phi_P^{(h)}(\omega)$, i.e. by the increments of a continuous application of the Wiener space $\Omega$.
Finally, the convergence of the dynamical system $\widetilde{T}^{(h)}$ to a continuous one can be studied by examining the dynamics of $(\overline{T}^{(h)}_t)_{t \in \mathbb{R}_+}$: more exactly, the convergence of the random process $\overline{X}^{(h)}$ to a solution of a stochastic differential equation, and the convergence of $\overline{\theta}_t^{(h)}$ to the shift $\theta_t$ on $\Omega$ according to the metric $D$.
\smallskip
However, beforehand, we want to illustrate this framework, and in particular the scheme of repeated interactions, through physical examples.
\section{Application to some Physical Systems}
As explained in the introduction, the main motivation for the study of repeated interaction schemes and their continuous limit is to try to obtain physically justified and workable models for the dissipation of a simple system into a large environment, such as a heat bath for example.
\subsection{Charged Particle in a Uniform Electric Field}
Consider a particle of charge $q$ and mass $m$ in a uniform electric field $E$ in dimension $1$. Its energy without interaction with $E$ is just kinetic, i.e. $p^2/2m$. In the presence of the exterior electric field, the particle has a potential energy $-qxE$, where $x$ is the position.
Thus, the Hamiltonian of the particle is
\[H(x,p) = \frac{p^2}{2m} -qxE\,.\]
The dynamics of the particle is governed by Hamilton's equations of motion,
\[\begin{cases}
\dot{x}= \dfrac{p}{m}\\
\dot{p}= qE\,.
\end{cases}\]
These equations can be easily solved and give
\[\begin{cases}
x(t)= \dfrac{qE}{2m} t^2+\dfrac{p(0)}{m}t+ x(0)\\
p(t)= qE t + p(0)\,.
\end{cases}\]
Now we set up the scheme of repeated interactions. The small system is the particle. As it moves in dimension $1$, the space $S$ is $\mathbb{R}^2$ endowed with its Borel $\sigma$-algebra.
The environment is the exterior electric field. Hence, the space $E$ is $\mathbb{R}$. Intuitively, the limit process shall be a solution of a SDE driven by a $1$-dimensional Brownian motion $(W_t)$.
The environment of the discrete time dynamics at time step $h$ is the set $\mathbb{R}^{h \mathbb{N}^*}$.
The interaction between the system and the environment is described as follows. At each time $(n-1)h$, the value of the electric field $E(nh)$ is sampled from the increment $W(nh)-W((n-1)h)$ of the $1$-dimensional Brownian motion. The system then evolves during a time $h$ according to the solution of the equations of motion with initial values $x((n-1)h)$ and $p((n-1)h)$ and with electric field $E(nh)$. After this time $h$, the interaction is stopped and one repeats the procedure.
In particular the evolution of the system is given by the following Markov chain
\[\begin{cases}
x(nh)= x((n-1)h)+h\dfrac{p((n-1)h)}{m}+ h^2\dfrac{qE(nh)}{2m}\\
p(nh)=p((n-1)h) + h qE(nh)\,.
\end{cases}\]
However, one has to make some renormalization somewhere. Indeed, when the time step $h$ decreases, the effect of the electric field on the particle becomes smaller and smaller. To counter that, the interactions have to be reinforced. The electric field $E$ is renormalized by multiplying by a factor $1/h$.
This normalization factor can be intuitively understood as follows. First, if one wants to keep the same intensity (at least in law) for the interactions, a factor $1/\sqrt{h}$ is needed. On the other hand, one needs to renormalize by another $1/\sqrt{h}$, as in the quantum case of \cite{AP}, since the influence of the environment decreases with the time step $h$. Thus, the value of the electric field is now sampled from $1/{h}\,\big(W(nh)-W((n-1)h)\big)$.
Therefore, the new dynamics of the system is
\[\begin{cases}
x(nh)= x((n-1)h)+h\dfrac{p((n-1)h)}{m}+ h\dfrac{qE(nh)}{2m}\\
p(nh)=p((n-1)h) + qE(nh)\,.
\end{cases}\]
In other words, using vector notations, the dynamics is
\[X(nh)= U^{(h)}(X((n-1)h),E(nh))\,,\]
where the map $U^{(h)}$ is defined by
\[ U^{(h)}(x,y)= x + \sigma(x) y +h b(x) + h\eta^{(h)}(x,y)\,,\]
with
\[\sigma \left(\begin{array}{c} x_1\\ x_2 \end{array}\right)=\left(\begin{array}{c} 0\\ q \end{array}\right)\,, \hspace{10mm}b\left(\begin{array}{c} x_1\\ x_2 \end{array}\right)=\left(\begin{array}{c} \dfrac{x_2}{m}\\ 0 \end{array}\right)\,,\hspace{10mm}\eta^{(h)}(x,y)=\left(\begin{array}{c} \dfrac{qy}{2m}\\ 0 \end{array}\right)\,.\]
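The following sketch, ours and in Python, runs this repeated interaction chain and compares it with the limit dynamics suggested by the form of $U^{(h)}$, namely $dp_t = q\,dW_t$ and $dx_t = (p_t/m)\,dt$, driven by the same Brownian increments.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
q, m = 1.0, 1.0
T, h = 1.0, 1e-3
n = int(T / h)

dW = np.sqrt(h) * rng.standard_normal(n)   # E(nh) sampled from W increments

x, p = 0.0, 0.0
for k in range(n):                         # the repeated interaction chain
    x, p = x + h * p / m + h * q * dW[k] / (2 * m), p + q * dW[k]

# Limit SDE dp = q dW, dx = (p/m) dt, driven by the same noise.
W = np.concatenate([[0.0], np.cumsum(dW)])
p_lim = q * W[-1]
x_lim = h * np.sum(q * W[:-1] / m)         # Riemann sum of int_0^T p_s/m ds

print(p, p_lim)   # equal up to rounding: the p-component is exact here
print(x, x_lim)   # close for small h
\end{verbatim}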
\subsection{Harmonic Interaction}
Our second example is another case where Hamilton's equations can be explicitly solved.
Consider two unit mass systems linked by a spring whose spring constant is $1$. Let $l$ be the length for which the potential energy is minimal.
We assume that the two objects can only move horizontally, without friction.
In this case, the Hamiltonian of the two objects is
\[H \left[ \left(\begin{array}{c}
Q_1\\
P_1
\end{array}\right), \left(\begin{array}{c}
Q_2\\
P_2
\end{array}\right) \right] =\frac{P_1^2}{2} + \frac{P_2^2}{2} + \dfrac{1}{2}( Q_2 - Q_1 - l)^2 \,,\]
where ${P_1^2}/{2}$ is the kinetic energy of the system $1$ and ${P_2^2}/{2}$ the kinetic energy of the system $2$.
The dynamics of the whole system is given by Hamilton's equations
\[\begin{cases}
\dot{Q_1} = \dfrac{\partial H}{\partial P_1} = P_1\qquad\mbox{and}\qquad
\dot{P_1} = -\dfrac{\partial H}{\partial Q_1} = Q_2 - Q_1 -l\\
\\
\dot{Q_2} = \dfrac{\partial H}{\partial P_2} = P_2\qquad\mbox{and}\qquad
\dot{P_2} = -\dfrac{\partial H}{\partial Q_2} = - (Q_2 -Q_1-l)\,.
\end{cases}
\]
These equations can be solved and give the following evolution of the whole system in terms of the initial conditions:
\begin{align*}
Q_1(t) =& \dfrac{1}2\left(P_1(0)+P_2(0)\right) t +\dfrac12(Q_1(0) + Q_2(0)- l)+ \\
&\ \ \ +\dfrac12(Q_1(0)-Q_2(0)+l)\, \cos(\sqrt{2}t) +\dfrac1{2\sqrt2}(P_1(0)-P_2(0))\,\sin(\sqrt{2}t) \\
P_1(t) =& \dfrac12(P_1(0)+P_2(0))- \dfrac1{\sqrt2}(Q_1(0)-Q_2(0)+l)\, \sin(\sqrt{2}t) + \\
&\ \ \ +\dfrac12(P_1(0)-P_2(0))\, \cos(\sqrt{2}t)
\end{align*}
\begin{align*}
Q_2(t) =& \dfrac12(P_1(0)+P_2(0)) t +\dfrac12(Q_1(0)+Q_2(0)+ l)+ \\
&\ \ \ -\dfrac12(Q_1(0)-Q_2(0)+l)\, \cos(\sqrt{2}t) - \dfrac1{2\sqrt2}(P_1(0)-P_2(0))\,\sin(\sqrt{2}t)\\
P_2(t) =& \dfrac12(P_1(0)+P_2(0)) + \dfrac1{\sqrt2}(Q_1(0)-Q_2(0)+l)\,\sin(\sqrt{2}t)+ \\
&\ \ \ -\dfrac12(P_1(0)-P_2(0))\, \cos(\sqrt{2}t)\,.
\end{align*}
Let $h$ be a fixed time step. As $h$ is supposed sufficiently small,
an approximate expression of the discrete time dynamical system can be computed by a Taylor expansion.
On the other hand, as we shall be only interested in the dynamics of the system 1, the evolutions of $Q_2$ and $P_2$ are forgotten.
We get
\begin{multline*}
Q_1(h) = Q_1(0)+h P_1(0) + \dfrac{1}{2}\left(Q_2(0)-Q_1(0)-l\right)h^2+ \hfill \\
\hfill - \dfrac{1}{6}\left(P_1(0)-P_2(0)\right) h^3 +o(h^3) \\
\ \ P_1(h) = P_1(0) +\left(Q_2(0)-Q_1(0)-l\right) h+ \dfrac{1}{2} \left(P_2(0)-P_1(0)\right) h^2+\hfill \\
\hfill+ \dfrac{1}{3}\left(Q_1(0)-Q_2(0)+l\right) h^3+o(h^3)\,.
\end{multline*}
We now introduce the scheme of repeated interactions.
System $1$ is chosen as the small system. Thus, the space $S$ is $\mathbb{R}^2$.
System $2$ plays the role of one piece of the environment. The space $E$ is also in this case $\mathbb{R}^2$.
At time $nh$, the small system is in the state $Q_1(nh)$ and $P_1(nh)$. The values of $Q_2((n+1)h)$ and $P_2((n+1)h)$ are sampled from the increments of a $2$-dimensional Brownian motion $W_t$. One makes the system and the environment interact during a time $h$ with the initial conditions for the environment $Q_2((n+1)h)$ and $P_2((n+1)h)$. The interaction is stopped after a time $h$. One then repeats the procedure.
\smallskip
However, for the same reasons as for the first example, interactions need to be renormalized by a factor ${1}/{h}$.
The Markov chain which gives the evolution of the system becomes
\begin{multline*}
Q_1(nh) = Q_1((n-1)h)+ \Big(P_1((n-1)h) + \dfrac{1}{2} Q_2(nh)\Big) h +\hfill \\
\hfill- \dfrac{1}{2} \Big(Q_1((n-1)h)+l - \dfrac{P_2(nh)}{3}\Big) h^2+ o(h^2) \\
\ \ P_1(nh) = P_1((n-1)h) + Q_2(nh)-\Big(Q_1((n-1)h)+l -\dfrac{1}{2} P_2(nh)\Big) h +\hfill \\
\hfill - \Big(\dfrac{P_1((n-1)h)}{2} + \dfrac{Q_2(nh)}{3}\Big)h^2 + o(h^2)\,,
\end{multline*}
or, equivalently
\[X(nh)=U^{(h)}(X((n-1)h), Y(nh))\,\]
where
\begin{align*}
U^{(h)}(X,Y)= X + \sigma(X) Y + h b(X) +h \eta^{(h)}(X,Y),
\end{align*}
with
\[b\left(\begin{array}{c} x_1\\ x_2 \end{array} \right) = \left(\begin{array}{c} x_2\\ -(x_1+l) \end{array} \right), \hspace{5mm} \sigma \left(\begin{array}{c} x_1\\ x_2 \end{array} \right) =\left(\begin{array}{cc} 0& 0\\ 1& 0 \end{array} \right)\,,\]
and
\[\eta^{(h)}\left[\left(\begin{array}{c} x_1\\ x_2 \end{array}\right), \left(\begin{array}{c} y_1\\ y_2 \end{array}\right) \right] = \dfrac{1}{2}\left(\begin{array}{c} y_1\\ y_2 \end{array}\right)-\dfrac{h}{2} \left(\begin{array}{c} x_1+l-y_2/3 \\ x_2+2y_1/3 \end{array}\right) + o(h)\,.\]
Note that this application $U^{(h)}$ is of the same form as in the case of the particle in a uniform electric field.
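For illustration, here is a short simulation of this chain, ours and in Python, keeping the terms of $U^{(h)}$ up to order $h$ and dropping the $O(h^2)$ part of $\eta^{(h)}$; the expected limit dynamics is $dQ_1 = P_1\,dt$, $dP_1 = -(Q_1+l)\,dt + dW^1_t$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
l, T, h = 0.0, 10.0, 1e-3
n = int(T / h)

Q1, P1 = 1.0, 0.0
for k in range(n):
    # Renormalized (Q_2, P_2) samples: increments of a 2-d Brownian motion.
    y1, y2 = np.sqrt(h) * rng.standard_normal(2)
    Q1, P1 = (Q1 + h * (P1 + y1 / 2),
              P1 + y1 + h * (-(Q1 + l) + y2 / 2))

print(Q1, P1)   # one sample of the chain at time T
\end{verbatim}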
\bigskip
Note also that the renormalization can be understood in a more general way as follows. In the two examples above, the interactions are reinforced by changing the states of the environment in the scheme of repeated interactions. But, in the same way as in the quantum case, the renormalization can be understood as a reinforcement of the interaction in the Hamiltonian.
Indeed, the Hamiltonian of the whole system can be written in a general way,
\[H =\underbrace{H_1}_{System}+ \underbrace{H_2}_{Environment} + \underbrace{I}_{Interaction},\]
where $H_1$ is the part of the Hamiltonian which only depends on the state of the system (${P_1^2}/{2} +{Q_1^2}/{2}$ for instance in the case of the harmonic interaction), $H_2$ the part which only depends on the state of the environment (${P_2^2}/{2} +{Q_2^2}/{2}$), and $I$ the interaction part, which really depends on the two states ($- Q_1 Q_2$). Our renormalization factor can be viewed as a reinforcement of the interaction term $I$, obtained by multiplying it by this factor ${1}/{h}$. Then, the scheme of repeated interactions is set up from this new Hamiltonian, which depends on $h$.
\subsection{Damped Harmonic Oscillator}
The formalism developed in Section 2 is general and it can also be applied to non-Hamiltonian systems. An example obtained by modifying the harmonic interaction, adding a friction term for the system, is presented in this part.
Consider the same system as previously, i.e. two harmonically interacting objects of mass $1$. We assume that the system 1 also undergoes a fluid friction force $-f P_1$, where $f$ is the friction coefficient. Because of this force, the energy is not conserved and, therefore, the system is not Hamiltonian. Moreover, in order to simplify the interaction, the length $l$ of the minimum of the potential energy is supposed to be $0$.
The evolution of the system follows Newton's law of motion,
\[\begin{cases}
\dot{Q_1}= P_1\\
\dot{P_1}= - f P_1 + Q_2-Q_1\,.
\end{cases}\]
On the other hand, the environment part stays Hamiltonian and evolves according to the equations
\[\begin{cases}
\dot{Q_2}= P_2\\
\dot{P_2}= Q_1-Q_2\,.
\end{cases}\]
For a small time $h>0$, Taylor expansions and the previous equations lead to the following state of the system after a time $h$:
\[\begin{cases}
Q_1(h)=Q_1(0) + h P_1(0)+ {\cal O} (h^2)\\
P_1(h)= P_1(0) +h(- f P_1(0) + Q_2(0)-Q_1(0))+{\cal O}(h^2)\,.
\end{cases}\]
We now set up the repeated interactions framework.
The space of the system is $\mathbb{R}^2$. The environment is represented by the chain $(\mathbb{R}^2)^{h\mathbb{N}^*}$.
The motion of the system is given by the following Markov chain
\[\begin{cases}
Q_1((n+1)h)=Q_1(nh) + h P_1(nh)+ {\cal O} (h^2)\\
P_1((n+1)h)= P_1(nh) +h(- f P_1(nh) + Q_2(nh)-Q_1(nh))+{\cal O}(h^2)\,.
\end{cases}\]
The sequence $(Q_2(nh), P_2(nh))_{n \in \mathbb{N}}$ is sampled from the increments of a $2$-dimensional Brownian motion. However, as previously, the states of the environment are reinforced by a factor $1/h$.
Hence, the Markov chain $(X(nh))$ is defined by
\[X(nh)=U^{(h)}(X((n-1)h), Y(nh))\,\]
where
\begin{align*}
U^{(h)}(X,Y)= X + \sigma(X) Y + h b(X) +h \eta^{(h)}(X,Y),
\end{align*}
with
\[b\left(\begin{array}{c} x_1\\ x_2 \end{array} \right) = \left(\begin{array}{c} x_2\\ -x_1 - f x_2 \end{array} \right), \hspace{5mm} \sigma \left(\begin{array}{c} x_1\\ x_2 \end{array} \right) =\left(\begin{array}{cc} 0& 0\\ 1& 0 \end{array} \right)\,,\]
and the remaining terms of Taylor's expansion are grouped together in the application $\eta^{(h)}$.
Note that this last function could be explicitly determined from Newton's equations of motion, from which higher derivatives can be expressed.
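A simulation sketch for this example, ours and in Python, keeping only the displayed first-order terms; the expected limit is the Langevin-type SDE $dQ_1 = P_1\,dt$, $dP_1 = (-Q_1 - f P_1)\,dt + dW^1_t$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
f_coef, T, h = 0.5, 10.0, 1e-3   # f_coef is the friction coefficient f
n = int(T / h)

Q1, P1 = 1.0, 0.0
for k in range(n):
    y1 = np.sqrt(h) * rng.standard_normal()   # renormalized Q_2 sample
    Q1, P1 = Q1 + h * P1, P1 + y1 + h * (-Q1 - f_coef * P1)

print(Q1, P1)   # one sample of the damped, noise-driven oscillator
\end{verbatim}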
\section{Convergence of Dynamics}
We leave for the moment our physical examples and come back to the general setup as introduced in Sections 2 and 3.
In Subsection 3.2 a continuous dynamics $\overline{T}^{(h)}$ related to $\widetilde{T}^{(h)}$ was defined on $\mathbb{R}^m \times \Omega$.
The convergence of $\overline{T}^{(h)}$ to a continuous time dynamical system like $(T_t)$ is now studied in this part.
Each dynamics acts on a product space. Therefore, we examine the behaviour on each component separately. More precisely, we first study, in the next subsection, the convergence of the application $\overline{\theta}_t^{(h)}$ to $\theta_t$ on $\Omega$, for all $t$. Then we study the convergence of the processes $\overline{X}^{(h)}_t$ to the solution of a stochastic differential equation which depends, in particular, on the application $U^{(h)}$.
\subsection{Convergence of Shift}
The space of the environment is $\Omega$, the set of continuous functions from $\mathbb{R}_+$ to $\mathbb{R}^d$ vanishing at the origin. If the limit dynamical system exists, the space on which the shift acts has to be $\Omega$ too. Therefore, we now consider the shift $\theta_t$ on $\Omega$. Recall that it is defined by
\[\theta_t(\omega)(s)=\omega (t+s) - \omega(t)\,,\]
for all $t$, $s$ in $\mathbb{R}_+$ and all $\omega$.
\smallskip
The convergence of $\overline{\theta}_t^{(h)}$ to $\theta_t$ according to the natural metric $D$ on $\Omega$ for all $t$ is shown in the next theorem.
\begin{theo}
Let $\omega$ be a function in $\Omega$. For all $t \in \mathbb{R}_+$,
$$
\lim_{h \rightarrow 0}\,D(\theta_t(\omega), \overline{\theta}^{(h)}_t(\omega)) = 0\,.
$$
\end{theo}
\begin{proof}
As for the proof of Lemma 1, this result is based on the uniform convergence of piecewise linear functions to a continuous one on compact sets. However, note that the definition of $\overline{\theta}_t^{(h)}$ involves two linear interpolations instead of one: one is due to the injection $\phi_I^{(h)}$ and the other one is due to the construction of the continuous dynamics on $\mathbb{R}^m \times \Omega$.
\smallskip
For all $\omega \in \Omega$ and for all $t$ and $s$ in $\mathbb{R}_+$,
we start by computing $\theta_t(\omega)(s) - \overline{\theta}^{(h)}_t(\omega)(s)$.
By definition of $\overline{T}^{(h)}_t$, the point $\overline{\theta}^{(h)}_t(\omega)(s)$ is obtained by linear interpolation between $\phi_I^{(h)} \circ (\theta^{(h)})^{\lfloor t/h\rfloor} \circ \phi_P^{(h)}(\omega) (s)$ and $\phi_I^{(h)} \circ (\theta^{(h)})^{(\lfloor t/h\rfloor +1)} \circ \phi_P^{(h)} (\omega) (s)$.
Therefore, let us compute these two values. We have
\begin{multline*}
\phi_I^{(h)} \circ (\theta^{(h)})^{\lfloor t/h\rfloor} \circ \phi_P^{(h)} (\omega) (s) =
\omega((\lfloor t/h\rfloor +\lfloor s/h\rfloor)h) - \omega(\lfloor t/h\rfloor h)+\hfill\\
\hfill + \frac{s-\lfloor s/h\rfloor h}{h}\Big{\{}\omega((\lfloor t/h \rfloor + \lfloor s/h \rfloor +1)h) - \omega((\lfloor t/h \rfloor+\lfloor s/h \rfloor)h)\Big{\}}\,.
\end{multline*}
On the other hand,
\begin{multline*}
\phi_I^{(h)} \circ (\theta^{(h)})^{(\lfloor t/h\rfloor +1)} \circ \phi_P^{(h)} (\omega) (s) =
\omega((\lfloor t/h\rfloor +\lfloor s/h\rfloor+1)h) - \omega((\lfloor t/h\rfloor+1) h)+\hfill\\
\hfill+ \frac{s-\lfloor s/h\rfloor h}{h}\Big{\{}\omega((\lfloor t/h \rfloor + \lfloor s/h \rfloor +2)h) - \omega((\lfloor t/h \rfloor+\lfloor s/h \rfloor +1)h)\Big{\}}\,.
\end{multline*}
Since $\overline{\theta}^{(h)}_t(\omega)(s)$ is a barycenter of these two points, with coefficients given by the linear interpolation in $t$, we get
\begin{align*}
\overline{\theta}^{(h)}_t(\omega)(s) &=
\frac{(\lfloor t/h\rfloor+1)h - t}{h}(\phi_I^{(h)} \circ (\theta^{(h)})^{\lfloor t/h\rfloor} \circ \phi_P^{(h)})(\omega) (s)+\\
&\ + \frac{t-\lfloor t/h\rfloor h}{h} (\phi_I^{(h)} \circ (\theta^{(h)})^{(\lfloor t/h\rfloor +1)} \circ \phi_P^{(h)})(\omega) (s)\\
&=\frac{(\lfloor t/h\rfloor+1)h - t}{h} \Big{\{} \omega((\lfloor t/h\rfloor+\lfloor s/h\rfloor)h)+\\
&\ +\frac{s-\lfloor s/h\rfloor h}{h}\Big{[}\omega((\lfloor t/h \rfloor + \lfloor s/h \rfloor +1)h) - \omega((\lfloor t/h \rfloor+\lfloor s/h \rfloor)h)\Big{]}\Big{\}}+\\
&\ + \frac{t-\lfloor t/h\rfloor h}{h}\Big{\{}\omega((\lfloor t/h\rfloor +\lfloor s/h\rfloor+1)h)+\\
&\ +\frac{s-\lfloor s/h\rfloor h}{h}\Big{[}\omega((\lfloor t/h \rfloor+ \lfloor s/h \rfloor +2)h) - \omega((\lfloor t/h \rfloor+\lfloor s/h \rfloor+1)h)\Big{]}\Big{\}}+\\
&\ - \frac{( \lfloor t/h\rfloor +1)h - t}{h}\,\omega(\lfloor t/h\rfloor h)- \frac{t-\lfloor t/h\rfloor h}{h} \,\omega((\lfloor t/h\rfloor+1) h)\,.
\end{align*}
As seen in the proof of Lemma 1, the term
$$- \frac{(\lfloor t/h\rfloor+1)h - t}{h}\,\omega(\lfloor t/h\rfloor h)- \frac{t-\lfloor t/h\rfloor h}{h}\,\omega((\lfloor t/h\rfloor+1) h)$$
tends to $-\omega(t)$ when $h$ goes to $0$.
We just have to prove now that the other terms converge to $\omega(t+s)$.
As the function $\omega$ is continuous, we have
\[ \lim_{h \rightarrow 0}\left|\omega((\lfloor t/h \rfloor + \lfloor s/h \rfloor +1)h) - \omega((\lfloor t/h \rfloor+\lfloor s/h \rfloor)h)\right| =0\,.\]
For the same reason,
\[\lim_{h \rightarrow 0}\left|\omega((\lfloor t/h \rfloor+ \lfloor s/h \rfloor +2)h) - \omega((\lfloor t/h \rfloor+\lfloor s/h \rfloor+1)h)\right|=0\,.\]
On the other hand,
\[0 \leq \frac{(\lfloor t/h\rfloor+1)h - t}{h} \leq 1\,,\hspace{5mm} 0 \leq \frac{t-\lfloor t/h\rfloor h}{h} \leq 1\,, \]
and, obviously,
\[ 0 \leq \frac{(\lfloor s/h\rfloor+1)h - s}{h} \leq 1\,,\hspace{5mm} 0 \leq \frac{s-\lfloor s/h\rfloor h}{h} \leq 1\,. \]
Therefore,
\begin{multline*}
\lim_{h \rightarrow 0}\, \left|\frac{(\lfloor t/h\rfloor+1)h - t}{h} \right|\,\left|\frac{s-\lfloor s/h\rfloor h}{h}\right|\times\hfill\\
\hfill\times\left|\omega((\lfloor t/h \rfloor+ \lfloor s/h \rfloor +2)h) - \omega((\lfloor t/h \rfloor+\lfloor s/h \rfloor+1)h)\right| =0\,,
\end{multline*}
and
\begin{multline*}
\lim_{h \rightarrow 0} \, \left|\frac{t-\lfloor t/h\rfloor h}{h} \right|\,\left|\frac{s-\lfloor s/h\rfloor h}{h}\right| \times \hfill \\
\hfill \times \left|\omega((\lfloor t/h \rfloor + \lfloor s/h \rfloor +1)h) - \omega((\lfloor t/h \rfloor+\lfloor s/h \rfloor)h)\right| =0\,.
\end{multline*}
For the same reasons as previously, the remaining terms
$$\frac{(\lfloor t/h\rfloor+1)h - t}{h} \omega((\lfloor t/h\rfloor+\lfloor s/h\rfloor)h)+\frac{t-\lfloor t/h\rfloor h}{h}\omega((\lfloor t/h\rfloor +\lfloor s/h\rfloor+1)h)$$
tend to $\omega(t+s)$.
As a conclusion, for all $t$, $s$ in $\mathbb{R}_+$, we have
\[\lim_{h \rightarrow 0} \, \left| \theta_t(\omega)(s) - \overline{\theta}^{(h)}_t(\omega)(s) \right| = 0\,.\]
Since the interval $[0,n]$ is compact, by uniform continuity of these applications
\[\lim_{h \rightarrow 0}\,\underset{0\leq s \leq n}{\sup} \left| \theta_t(\omega)(s) - \overline{\theta}^{(h)}_t(\omega) (s) \right| =0\,.\]
Finally, by Lebesgue's Theorem,
\[\lim_{h \rightarrow 0}\,D (\theta_t(\omega), \overline{\theta}^{(h)}_t(\omega)) = 0\,.\]
The theorem is proved. \end{proof}
In conclusion, the application $\overline{\theta}^{(h)}_t$ converges, for all $t$, to the shift $\theta_t$ on $\Omega$ when $h$ goes to $0$, whatever the limit continuous time dynamical system is, as long as the noise of the stochastic differential equation is a $d$-dimensional Brownian motion whose canonical space is $\Omega$.
\subsection{$L^{p}$ and Almost-Sure Convergence}
After having studied the convergence of the shift, we now want to give conditions on $U^{(h)}$ for the $L^{p}$ and almost sure convergence, on every time interval $[0,\tau]$, of the process $\overline{X}^h_t$, the first component of $\overline{T}^h_t$, to the solution $X_t$ of a SDE.
\smallskip
As the process $\overline{X}^h_t$ is just a linearly interpolated Markov chain, this convergence boils down to the convergence of some schemes of stochastic numerical analysis. Thus, our result addresses a more general problem; we shall apply it later on to the process $\overline{X}^h_t$ in Theorem 5.3.
\smallskip
Consider the solution $X_t$ in $\mathbb{R}^m$ starting in $X_0$ of the SDE \eqref{eq:A}
\[dX_t = b(X_t) dt + \sigma(X_t) dW_t\,,\]
where $(W_t)$ is a $d$-dimensional Brownian motion, and where the applications $b$ and $\sigma$, respectively from $\mathbb{R}^m$ to $\mathbb{R}^m$ and from $\mathbb{R}^m$ to $\mathcal{M}_{m,d}(\mathbb{R})$, are Lipschitz. Recall that this assumption guarantees the existence and the uniqueness of the solution of the SDE on every time interval $[0, \tau]$ and for all initial conditions.
Let $(X^h_{nh})_{n \in \mathbb{N}}$ be a Markov chain for a time step $h$ whose evolution is given by
\[X^h_{(n+1)h}= X_{nh}^h+ \sigma(X_{nh}^h)(W_{(n+1)h}- W_{nh}) + h b(X_{nh}^h) + h \eta^{(h)}(X_{nh}^h,W_{(n+1)h}- W_{nh}),\]
with $X^h_0 = x_0$ and where $\eta^{(h)}$ is a measurable application.
Note that the form of the equation above is identical to the ones previously seen in the physical examples.
Also note that, without the term $\eta^{(h)}$, the scheme above is the usual stochastic Euler scheme. Hence our context is more general than the usual Euler scheme for the discrete-time approximation of SDEs. We have to adapt the convergence theorem to this situation.
\bigskip
The Markov chain $(X_{nh}^{h})$ is now linearly interpolated to obtain a continuous time process $(X_t^{h})_{t \in \mathbb{R}_+}$ defined by
\begin{multline*}
X_t^h = X_{\lfloor t/h\rfloor h}^h +\dfrac{t-\lfloor t/h\rfloor h}{h} \big{\{} X_{(\lfloor t/h\rfloor +1)h}^h-X_{\lfloor t/h\rfloor h}^h\big{\}}\\
\phantom{X_t^h\ \ \ }= X_{\lfloor t/h\rfloor h}^h +\dfrac{t-\lfloor t/h\rfloor h}{h} \big{\{}\sigma(X_{\lfloor t/h\rfloor h}^h)(W_{(\lfloor t/h\rfloor+1)h}- W_{\lfloor t/h\rfloor h})+\hfill \\
\hfill +h b(X_{\lfloor t/h\rfloor h}^h)+h \eta^{(h)}(X_{\lfloor t/h\rfloor h}^h,W_{(\lfloor t/h\rfloor+1)h}- W_{\lfloor t/h\rfloor h})\big{\}}\,,
\end{multline*}
for all $t$, and with $X^h_0=x_0$.
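In code, the scheme and its linear interpolation read as follows; this is a generic sketch of ours, in Python, and the signature chosen for $\eta^{(h)}$ is a convention of the sketch only.
\begin{verbatim}
import numpy as np

def simulate_scheme(b, sigma, eta, x0, h, tau, d, rng):
    """One trajectory of the perturbed Euler scheme
    X_{(n+1)h} = X_{nh} + sigma(X) dW + h b(X) + h eta_h(X, dW),
    together with its linear interpolant on [0, tau]."""
    n = int(np.floor(tau / h))
    X = np.empty((n + 1, len(x0)))
    X[0] = x0
    for k in range(n):
        dW = np.sqrt(h) * rng.standard_normal(d)
        X[k + 1] = (X[k] + sigma(X[k]) @ dW
                    + h * b(X[k]) + h * eta(X[k], dW, h))

    def X_interp(t):
        """Linear interpolation between grid values, as in the text."""
        k = min(int(np.floor(t / h)), n - 1)
        lam = (t - k * h) / h
        return (1 - lam) * X[k] + lam * X[k + 1]

    return X, X_interp
\end{verbatim}
For instance, the charged particle of Section 4 corresponds to $b(x)=(x_2/m,0)^t$, $\sigma(x)=(0,q)^t$ and $\eta^{(h)}(x,y)=(qy/(2m),0)^t$.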
For the convergence of the process $X^h_t$, we make one more assumption to control the term in $\eta^{(h)}$.
\begin{enumerate}
\item[(H4)] There exist $\alpha \in ]0, + \infty]$ and $K_2$ such that
\[\left|\eta^{(h)}(x,y) \right| \leq K_2 (h^{\alpha}\left|x\right| + \left|y\right|)\,.\]
\end{enumerate}
We start with the main result on the convergence of processes in the case of globally Lipschitz functions.
\begin{theo}\label{globally}
For all $\tau>0$ and for all $p>2$, under Assumptions (H1) and (H4), the process $(X_t^h)$ converges in $L^{p}$ to the solution $(X_t)$ of the SDE \eqref{eq:A} on $\left[0,\tau\right]$.
More precisely, for $h$ small enough and $p=2q$ with $q>1$,
\[\mathbb{E}\Big{[}\big( \sup_{t \in \left[0,\tau\right]} \left| X_t - X^h_t \right|\big)^{2q}\Big{]} \leq C (h^{2q\alpha} + h^{q-1} (-\log h)^q)\,\]
where $C$ is a constant depending only on $\tau$, $q$, $K_0$ and $K_2$.
Moreover, if $q$ is such that $q>2$ and $2q\alpha >1$, then the convergence is almost sure on $\left[0,\tau\right]$.
\end{theo}
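Before turning to the proof, the statement can be probed numerically. The sketch below, ours and in Python, measures the mean terminal error of the perturbed scheme against a reference computed by the Euler scheme on a much finer grid with the same Brownian increments; the coefficients $b(x)=-x$, $\sigma \equiv 1$ and $\eta^{(h)}(x,y)=hx/2$ (so that (H4) holds with $\alpha=1$) are illustrative choices, and the terminal error is a simplification of the supremum appearing in the theorem.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(8)
b = lambda x: -x
tau, h_fine = 1.0, 2.0 ** -12
n_fine = int(tau / h_fine)

def mean_error(h, n_paths=1000):
    r = int(round(h / h_fine))      # one coarse step = r fine steps
    dW = np.sqrt(h_fine) * rng.standard_normal((n_paths, n_fine))
    x_ref = np.zeros(n_paths)       # reference: Euler on the fine grid
    for k in range(n_fine):
        x_ref += b(x_ref) * h_fine + dW[:, k]
    x = np.zeros(n_paths)           # perturbed scheme on the coarse grid
    for k in range(0, n_fine, r):
        dWc = dW[:, k:k + r].sum(axis=1)
        x += b(x) * h + dWc + h * (h * x / 2)
    return np.abs(x - x_ref).mean()

for h in [2.0 ** -4, 2.0 ** -6, 2.0 ** -8]:
    print(h, mean_error(h))   # decays roughly like sqrt(h)
\end{verbatim}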
In order to prove this theorem, we use the same strategy as the one considered by Faure in his PhD thesis (\cite{F}) for the convergence of the explicit Euler scheme, i.e. without the term $h\eta^{(h)}$.
Beforehand, we need two long and technical lemmas, \ref{le2} and \ref{le4}, and a property of solutions of stochastic differential equations, Lemma \ref{le3}. These lemmas shall be applied in the proof in the case of locally Lipschitz and linearly bounded applications, because the linear growth condition shall be the key property.
\smallskip
The first lemma gives an inequality on the $L^{p}$-norm of some stochastic processes.
In order to state it, we first introduce the definition of an \emph{$L^{p}$-continuous} process.
A process $Y_t$ is \emph{$L^{p}$-continuous} if the function $t \longmapsto \mathbb{E}\left[\left|Y_t\right|^{p}\right]$ is continuous.
\begin{lemm}\label{le2}
Let $Y_t$ be a process defined by
$Y_t= Y_0 + \int_0^t A_s\, dW_s + \int_0^t B_s\, ds\,,$
where $A_s$ and $B_s$ are $L^{2p}$-continuous, and $\mathbb{E} \left[\left|Y_0\right|^{p}\right] < \infty$.
Then $Y_t$ is $L^{p}$-continuous.
Moreover,
\begin{align}
\mathbb{E}\left[\left|Y_t\right|^{p}\right] \leq \mathbb{E}\left[\left|Y_0\right|^{p}\right] + C \int_0^t \mathbb{E}\left[\left|Y_s\right|^{p} + \|A_s\|^{p} + \left|B_s\right|^{p}\right] \,ds\,.\label{22}
\end{align}
\end{lemm}
\begin{proof}
First, by the convexity of the function $x \longmapsto x^{2q}$, we have for all non-negative reals $x$, $y$ and $z$
\[(x+y+z)^{2q} \leq C_{2q} (x^{2q}+ y^{2q}+z^{2q})\,.\]
Hence,
\[\mathbb{E}\left[\left|Y_t\right|^{2q}\right] \leq C_{2q} \Big{(}\mathbb{E}\left[\left|Y_0\right|^{2q}\right] + \mathbb{E}\Big{[}\Big{|}\int_0^t A_s\,dW_s \Big{|}^{2q}\,\Big{]} + \mathbb{E}\Big{[}\Big| \int_0^t B_s\,ds\, \Big| ^{2q}\,\Big{]}\Big{)} \,.\]
By H\"older's inequality we claim that
\begin{equation}\label{holder}
\mathbb{E}\Big[\Big|\int_0^t B_s\,ds\, \Big|^{2q}\,\Big] \leq C_0\, t^{2q-1} \int_0^t \mathbb{E}\left[\left| B_s\right|^{2q}\,\right]\,ds\,.
\end{equation}
Indeed, first note that
\[\Big| \int_0^t B_s\,ds\, \Big|^{2q}= \Big( \sum_{i=1}^m \Big|\int_0^t B^i_s\,ds\, \Big|^{2}\,\Big)^q\,.\]
Component by component, we have by H\"older's inequality,
$$
\mathbb{E}\Big[\Big|\int_0^t B_s^i\,ds\, \Big|^{2q}\,\Big] \leq t^{2q-1}\mathbb{E}\Big[\int_0^t \left|B_s^i \right|^{2q} \,ds\Big]\,.$$
By the convexity of the application $t\longmapsto t^q$ we have the announced inequality (\ref{holder}).
For the second term, the Burkholder inequality, which rests on It\^o's formula, gives a similar bound:
\[\mathbb{E}\Big[\Big|\int_0^t A_s\,dW_s \,\Big|^{2q}\,\Big] \leq C_1 t^{q-1} \int_0^t \mathbb{E}\left[\| A_s\|^{2q}\right]\,ds\,.\]
Finally, we have obtained the following bound
\[\mathbb{E} \left[\left|Y_t\right|^{2q}\right] \leq C_{2q} \Big{(}\mathbb{E}\left[\left|Y_0\right|^{2q}\right] + C_0 t^{2q-1} \int_0^t \mathbb{E}\left[\left| B_s\right|^{2q}\right]\,ds + C_1 t^{q-1} \int_0^t \mathbb{E}\left[\| A_s\|^{2q}\right]\,ds \Big{)}\,.\]
Now, for the $L^{p}$-continuity of this process, It\^o's formula is applied to the process $(Y_t)$ between two times $s$ and $t$ with $s \leq t$.
Indeed, since the application $x \longmapsto \left|x\right|^{2q}$ is twice differentiable for $q\geq 1$, we get
\begin{align*}
\left|Y_t\right|^{2q} =\left|Y_s\right|^{2q} &+ \sum_{i=1}^m \int_s^t 2q \left|Y_u\right|^{2q-2} Y_u^i \,dY_u^i \ +\\
& + \frac{1}{2} \sum_{i\neq j} \int_s^t 2q(2q-2)\left|Y_u\right|^{2q-4}Y_u^i Y_u^j \,d\langle Y^i,Y^j \rangle_u \ +\\
&+ \frac{1}{2} \sum_{i} \int_s^t \Big[ 2q(2q-2) (Y_u^i)^2 \left|Y_u\right|^{2q-4} + 2q\left|Y_u\right|^{2q-2}\Big]\, d\langle Y^i,Y^i \rangle_u
\end{align*}
where $Y_u=(Y_u^i)_{i=1,\cdots,m}$\,.
From the definition of $Y_u$,
$$dY_u^i= \sum_{j=1}^d A_u^{i,j}\,dW^j_u + B_u^i\, du\,,$$
we have
$$d\langle Y^i,Y^j \rangle_u = \sum_{k=1}^d \sum_{l=1}^d A_u^{i,k} A_u^{j,l}\,d\langle W^k, W^l \rangle_u = \sum_{k=1}^d A_u^{i,k} A_u^{j,k}\, du\,.$$
Hence we get
\begin{align*}
\left|Y_t\right|^{2q} = \left|Y_s\right|^{2q} &+ 2q \sum_{i=1}^m \int_s^t \left|Y_u\right|^{2q-2} Y_u^i B_u^i\, du +2q \sum_{i=1}^m \sum_{j=1}^d \int_s^t \left| Y_u\right|^{2q-2} Y_u^i A_u^{i,j}\,dW^j_u +\\
&+ 2q(q-1) \sum_{i,j} \sum_{k=1}^d \int_s^t \left|Y_u\right|^{2q-4}Y_u^i Y_u^j A_u^{i,k} A_u^{j,k} \,du +\\
&+ q \sum_{i=1}^m \sum_{k=1}^d \int_s^t \left|Y_u\right|^{2q-2}( A_u^{i,k})^2 \,du\,.
\end{align*}
Taking the expectation, we obtain
\begin{align*}
\mathbb{E}\left[\left|Y_t\right|^{2q}\right] =& \mathbb{E}\left[\left|Y_s\right|^{2q}\right] + 2q \sum_{i=1}^m \int_s^t
\mathbb{E}\left[\left|Y_u\right|^{2q-2} Y_u^i B_u^i\right]\,du+ \nonumber \\
&\qquad+ 2q(q-1) \sum_{i,j} \sum_{k=1}^d \int_s^t \mathbb{E}\left[\left|Y_u\right|^{2q-4}Y_u^i Y_u^j A_u^{i,k} A_u^{j,k}\right] \,du + \nonumber\\
&\qquad+ q \sum_{i=1}^m \sum_{k=1}^d \int_s^t \mathbb{E}\left[\left|Y_u\right|^{2q-2}( A_u^{i,k})^2 \right]\,du\,.
\end{align*}
Hence,
\begin{multline}
\left|\mathbb{E}\left[\left|Y_t\right|^{2q}\right] - \mathbb{E}\left[\left|Y_s\right|^{2q}\right]\right|\leq C_1 \int_s^t
\mathbb{E}\left[\left|Y_u\right|^{2q-1} \left| B_u\right|\right]\,du + \hfill \\
\hfill + C_2 \int_s^t \mathbb{E}\left[\left|Y_u\right|^{2q-2} \|A_u\|^2\right] \,du\,.\label{44}
\end{multline}
Consider the first term in Inequality (\ref{44}). By H\"older's inequality we have
\[\mathbb{E}\left[\left|Y_u\right|^{2q-1} \left| B_u\right|\right] \leq \mathbb{E}\left[\left|Y_u\right|^{2q}\right]^{(2q-1)/2q}\ \mathbb{E}\left[\left| B_u\right|^{2q}\right]^{1/2q}\,.\]
In the same way for the second term in (\ref{44}) we get
\[\mathbb{E}\left[\left|Y_u\right|^{2q-2} \|A_u\|^2\right] \leq \mathbb{E}\left[\left|Y_u\right|^{2q}\right]^{(q-1)/q}\ \mathbb{E}\left[\|A_u\|^{2q}\right]^{1/q}\,.\]
Therefore,
\begin{multline*}
\left|\mathbb{E}\left[\left|Y_t\right|^{2q}\right] - \mathbb{E}\left[\left|Y_s\right|^{2q}\right]\right|\leq C_1 \int_s^t
\mathbb{E}\left[\left|Y_u\right|^{2q}\right]^{(2q-1)/2q}\ \mathbb{E}\left[\left| B_u\right|^{2q}\right]^{1/2q}\,du + \hfill\\
\hfill + C_2 \int_s^t \mathbb{E}\left[\left|Y_u\right|^{2q}\right]^{(q-1)/q}\ \mathbb{E}\left[\|A_u\|^{2q}\right]^{1/q} \,du\,.
\end{multline*}
Since $\mathbb{E}\left[\left|Y_u\right|^{2q}\right]$ is bounded for all $s\leq u \leq t$ and by the $L^{p}$-continuity of $A_u$ and $B_u$, one can conclude that the process $Y_t$ is $L^{p}$-continuous.
Let us proceed now with the proof of Inequality (\ref{22}).
By It\^o's formula between $t$ and $0$ we have
\begin{align*}
\left|Y_t\right|^{2q} =\left|Y_0\right|^{2q} &+ \sum_{i=1}^m \int_0^t 2q \left|Y_s\right|^{2q-2} Y_s^i dY_s^i + \\
&+\frac{1}{2} \sum_{i\neq j} \int_0^t 2q(2q-2)\left|Y_s\right|^{2q-4}Y_s^i Y_s^j d\langle Y^i,Y^j \rangle_s +\\
&+ \frac{1}{2} \sum_{i} \int_0^t \big( 2q(2q-2) (Y_s^i)^2 \left|Y_s\right|^{2q-4} + 2q\left|Y_s\right|^{2q-2}\big)\, d\langle Y^i,Y^i \rangle_s\,.
\end{align*}
Hence, taking the expectation we get
\begin{align}
\mathbb{E}\left[\left|Y_t\right|^{2q}\right] = \mathbb{E}\left[\left|Y_0\right|^{2q}\right] &+ 2q \sum_{i=1}^m \int_0^t
\mathbb{E}\big[\left|Y_s\right|^{2q-2} Y_s^i B_s^i\big]\,ds +\nonumber \\
&+ 2q(q-1) \sum_{i,j} \sum_{k=1}^d \int_0^t \mathbb{E}\left[\left|Y_s\right|^{2q-4}Y_s^i Y_s^j A_s^{i,k} A_s^{j,k}\right] \,ds +\nonumber\\
&+ q \sum_{i=1}^m \sum_{k=1}^d \int_0^t \mathbb{E}\left[\left|Y_s\right|^{2q-2}( A_s^{i,k})^2 \right]\,ds\,.\label{55}
\end{align}
Let us start with the last term of (\ref{55}).
First recall that
\[\sum_{i=1}^m \sum_{k=1}^d ( A_s^{i,k})^2 = \|A_s \|^2\,.\]
Note that, for all $k$ in $\left[\!\left| 0 , 2q \right|\!\right]$ and all $x,y \geq 0$, we have $x^{2q-k}y^{k} \leq x^{2q} + y^{2q}$. Indeed, $Y^k \leq 1+ Y^{2q}$ for all $Y\geq 0$; applying this to $Y= y/x$ and multiplying by $x^{2q}$ gives the claim (if $x=0$, the inequality is clearly true).
Therefore we get
\[\sum_{i=1}^m \sum_{k=1}^d \left|Y_s\right|^{2q-2} ( A_s^{i,k})^2 \leq \left|Y_s\right|^{2q} + \|A_s \|^{2q}\,.\]
Now consider the first term in (\ref{55}). If the scalar product on $\mathbb{R}^m$ is denoted by $(\,,\,)$, then
\[
\sum_{i=1}^m \left|Y_s\right|^{2q-2} Y_s^i B_s^i = \left|Y_s\right|^{2q-2} (Y_s,B_s) \leq \left|Y_s\right|^{2q-1} \left|B_s \right|\,.\]
From the previous inequality, one obtains
\[\sum_{i=1}^m \left|Y_s\right|^{2q-2} Y_s^i B_s^i \leq \left|Y_s\right|^{2q} + \left|B_s\right|^{2q}\,.\]
Let us now consider the remaining (second) term of (\ref{55}).
Note that,
\[ \left|Y_s\right|^{2q-4} Y_s^i Y_s^j A_s^{i,k} A_s^{j,k} \leq \left|Y_s\right|^{2q-2} \|A_s \|^{2}\,.\]
Hence,
\[\sum_{i,j} \sum_{k=1}^d \left|Y_s\right|^{2q-4} Y_s^i Y_s^j A_s^{i,k} A_s^{j,k} \leq C_3( \left|Y_s\right|^{2q} + \|A_s\|^{2q})\,.\]
In conclusion, there exists a constant $C$ such that
\[\mathbb{E}\left[\left|Y_t\right|^{2q}\right] \leq \mathbb{E}\left[\left|Y_0\right|^{2q}\right] + C \int_0^t \mathbb{E}\left[\left|Y_s\right|^{2q} + \|A_s\|^{2q} + \left|B_s\right|^{2q}\right]\,ds\,.\]
The lemma is proved.
\end{proof}
The next lemma (proved in \cite{KP}) gives some regularity properties of the trajectories of the solution $X_t$ of the SDE \eqref{eq:A}.
\begin{lemm}\label{le3}
Let $(X_t)$ be the solution of \eqref{eq:A} for all $t \in \left[0,\tau\right]$. Suppose that the maps $b$ and $\sigma$ are locally Lipschitz (H2) and linearly bounded (H3).
Then, for all $t \in \left[0,\tau\right]$ and for all $q\geq 1$,
\begin{align}
\mathbb{E}\left[\left| X_{t}\right|^{2q}\right] \leq (1+ \mathbb{E}\left[\left| x_0 \right|^{2q}\right]) e^{Ct},\label{3311}
\end{align}
and, for all $t$, $s$ such that $t\geq s$,
\begin{align}
\mathbb{E}\left[\left| X_{t}-X_s\right|^{2q}\right] \leq D (1+ \mathbb{E}\left[\left| x_0 \right|^{2q}\right]) (t-s)^q e^{C (t-s)},\label{3322}
\end{align}
where $C$ and $D$ are positive constants depending only on $\tau$, $q$ and $K_1$.
Moreover, for all $\tau>0$
\begin{align*}
\mathbb{E}\left[\sup_{t \in \left[ 0,\tau\right]}\left| X_{t}\right|^{2q}\right] < + \infty
\end{align*}
\end{lemm}
For the convergence of the process $X^h_t$ to $X_t$, the main tool shall be Lemma \ref{le2}.
However, notice that the evolution of $X^h_t$ does not directly allow us to apply this lemma, because of the linear interpolation. More precisely, for all $n$, between the times $nh$ and $(n+1)h$, this process is not of the form $X_{nh}^h + \int_{nh}^t A_s\, dW_s + \int_{nh}^t B_s\, ds$. Therefore, in order to apply the lemma, it is natural to introduce the new process $Y^h_t$ defined by
\begin{multline*}
Y^h_{t} = Y^h_{\lfloor t/h \rfloor h} + \int_{\lfloor t/h \rfloor h}^{t} \big( b(Y^h_{\lfloor t/h \rfloor h}) +\eta^h( Y^h_{\lfloor t/h \rfloor h}, W_{(\lfloor t/h \rfloor +1)h} - W_{\lfloor t/h \rfloor h})\big)\,ds + \hfill\\
\hfill +\int_{\lfloor t/h \rfloor h}^{t} \sigma(Y^h_{\lfloor t/h \rfloor h}) \,dW_s\,.
\end{multline*}
Note that $X^h_{(n +1)h}=Y^h_{(n +1)h}$ for all $n$.
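For readers who wish to experiment numerically, the following minimal Python sketch (given for illustration only, with hypothetical user-supplied maps \texttt{b}, \texttt{sigma} and \texttt{eta} standing for $b$, $\sigma$ and $\eta^h$) simulates the chain $(X^h_{nh})$; the interpolated process $X^h_t$ is then obtained by joining the grid values linearly.
\begin{verbatim}
import numpy as np

def simulate_chain(x0, b, sigma, eta, h, tau, seed=0):
    # Repeated-interaction chain:
    # X_{(n+1)h} = X + sigma(X) dW + h b(X) + h eta(X, dW),
    # where b, sigma, eta are hypothetical user-supplied maps.
    rng = np.random.default_rng(seed)
    n_steps = int(tau / h)
    X = [np.asarray(x0, dtype=float)]
    d = sigma(X[0]).shape[1]          # dimension of the Brownian motion
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h), size=d)   # Brownian increment
        X.append(X[-1] + sigma(X[-1]) @ dW
                 + h * b(X[-1]) + h * eta(X[-1], dW))
    return np.array(X)   # join linearly between grid times to get X^h_t
\end{verbatim}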
\smallskip
The last lemma gives bounds analogous to those of Lemma \ref{le3}, but for the processes $(X^h_t)$ and $(Y_t^h)$.
\begin{lemm}\label{le4}
Let $(X^h_t)$ and $(Y^h_t)$ be the processes defined above.
Suppose that the maps $b$ and $\sigma$ are locally Lipschitz and linearly bounded. Moreover, suppose that Hypothesis (H4) is satisfied. Then, for all $t \in \left[0,\tau\right]$ and all $q \geq 1$,
\begin{align*}
\mathbb{E}\big[\left|X^h_t\right|^{2q}\big] \leq C_0\big(1+\mathbb{E}\big[\left|X^h_0\right|^{2q}\big]\big)e^{C_1t}\,,
\end{align*}
and,
\begin{align}
\mathbb{E}\big[\left|Y^h_t\right|^{2q}\big] \leq C_0\big(1+\mathbb{E}\big[\left|X^h_0\right|^{2q}\big]\big)e^{C_1t}\,.\label{411}
\end{align}
Moreover, for all $t$ and for $h$ small enough,
\begin{align*}
\mathbb{E}\big[\left|X^h_t-X^h_{\lfloor t/h\rfloor h}\right|^{2q}\big] \leq C_2 (h^{2q} + h^q(-\log h)^q)\,,
\end{align*}
and,
\begin{align}
\mathbb{E}\big[\left|Y^h_t-Y^h_{\lfloor t/h\rfloor h}\right|^{2q}\big] \leq C_2 (h^{2q} + h^q(-\log h)^q)\,.\label{422}
\end{align}
On the other hand, for all $h\leq h_0$,
\begin{align*}
\mathbb{E}\big[ (\sup_{ t \leq \tau} \left| Y^h_t \right|)^{2q} \big] \leq C_3 \big(1+\mathbb{E}\big[\left|X^h_0\right|^{2q}\big]\big)\,,
\end{align*}
where $(C_i)$ are constants depending only on $\tau$, $q$, $K_1$ and $K_2$.
\end{lemm}
\begin{proof} Before starting the proof, we note that Assumption (H1) implies that the functions $b$ and $\sigma$ are linearly bounded. More precisely, there exists $K_1\geq 0$ such that
\[\left|b(x)\right| \leq K_1(1 + \left|x\right|)\qquad\mbox{and}\qquad
\|\sigma(x)\|\leq K_1(1 + \left|x\right|)\,.\]
This linear growth property shall be often used in the following proofs.
\smallskip
The proof of this lemma is achieved in several steps.
The first step is to bound $\mathbb{E}\big[\left|X^h_t\right|^{2q}\big]$ according to $\mathbb{E}\big[\big|X^h_{\lfloor t/h\rfloor h}\big|^{2q}\,\big]$. From the definition of $X^h_t$, we have
\begin{align*}
\left|X^h_t\right|^{2q} \leq C_0 \big(\left|X^h_{\lfloor t/h\rfloor h}\right|^{2q} + &h^{2q} \left|b(X^h_{\lfloor t/h\rfloor h})\right|^{2q} +\\
&\qquad+ \|\sigma(X^h_{\lfloor t/h\rfloor h})\|^{2q}\left|W_{(\lfloor t/h\rfloor +1)h} - W_{\lfloor t/h\rfloor h}\right|^{2q}\\
&\qquad+ h^{2q} \left|\eta^h( X^h_{\lfloor t/h\rfloor h}, W_{(\lfloor t/h\rfloor +1)h} - W_{\lfloor t/h\rfloor h})\right|^{2q}\big)\,.
\end{align*}
The process $X^h_t$ at time $\lfloor t/h\rfloor h$ is independent of $W_{(\lfloor t/h\rfloor +1)h} - W_{\lfloor t/h\rfloor h}$. Therefore, by the linear growth property, we get
\begin{multline}
\mathbb{E}\big[\left|X^h_t\right|^{2q}\big] \leq C_0 \Big{(} \mathbb{E}\big[\left|X^h_{\lfloor t/h\rfloor h}\right|^{2q}\big] +C_qK_1^{2q} h^{2q} (1+\mathbb{E}\big[\left|X^h_{\lfloor t/h\rfloor h}\right|^{2q}\big])+\hfill \\
\hfill + C_qK_1^{2q} (1+\mathbb{E}\big[\left|X^h_{\lfloor t/h\rfloor h}\right|^{2q}\big]) \mathbb{E}\big[\left|W_{(\lfloor t/h\rfloor+1)h} - W_{\lfloor t/h\rfloor h}\right|^{2q}\big] +\\
\hfill + h^{2q} \mathbb{E}\big[\left|\eta^h( X^h_{\lfloor t/h\rfloor h}, W_{(\lfloor t/h\rfloor+1)h} - W_{\lfloor t/h\rfloor h})\right|^{2q}\big]\Big{)}\,.\label{433}
\end{multline}
In the same way as above, the following bound can be found for $(Y_t^h)$:
\begin{multline}
\mathbb{E}\big[\left|Y^h_t\right|^{2q}\big] \leq C_0 \Big{(} \mathbb{E}\big[\left|Y^h_{\lfloor t/h\rfloor h}\right|^{2q}\big] +C_qK_1^{2q} h^{2q} (1+\mathbb{E}\big[\left|Y^h_{\lfloor t/h\rfloor h}\right|^{2q}\big]) + \hfill \\
\hfill + C_qK_1^{2q} (1+\mathbb{E}\big[\left|Y^h_{\lfloor t/h\rfloor h}\right|^{2q}\big]) \mathbb{E}\big[\left|W_{t} - W_{\lfloor t/h\rfloor h}\right|^{2q}\big]+ \\
\hfill+ h^{2q} \mathbb{E}\big[\left|\eta^h( Y^h_{\lfloor t/h\rfloor h}, W_{(\lfloor t/h\rfloor+1)h} - W_{\lfloor t/h\rfloor h})\right|^{2q}\big]\Big{)}\,.\label{444}
\end{multline}
The next step is now to bound $\mathbb{E}\big[\left|W_{(\lfloor t/h\rfloor +1)h} - W_{\lfloor t/h\rfloor h}\right|^{2q}\big]$ and\\ $\mathbb{E}\big[\left|W_{t} - W_{\lfloor t/h\rfloor h}\right|^{2q}\big]$.
By definition of the norm, one can find an upper bound by considering the suprema of $d$ $1$-dimensional standard Brownian motions $(B^k_t)_{k=1 \cdots d}$ on the time interval $\left[0,h\right]$.
Thus, consider the process $M_h$ defined by
\[M_h=\max_{k \in \left[\left|1,d\right|\right]} \sup_{t \in \left[0,h\right]} \left|B^k_t \right|\,.\]
The aim is to find a bound on $\mathbb{E}\left[M_h^{2q}\right]$. Firstly, note that,
\[\mathbb{E}\left[M_h^{2q}\right] \leq \mathbb{E}\Big[M_h^{2q}\ \mathds{1}_{\left\{M_h>2\sqrt{h(-\log h)}\right\}}\Big]+ C_1 h^q(-\log h)^q\,.\]
But,
\[\mathbb{E}\Big[M_h^{2q}\ \mathds{1}_{\left\{M_h>2\sqrt{h(-\log h)}\right\}}\Big] \leq \sum_{k=1}^{d} \mathbb{E}\Big[(\sup_{t \in \left[0,h\right]} \left|B^k_t \right|)^{2q} \ \mathds{1}_{\big\{\underset{t \in \left[0,h\right]}{\sup} \left|B^k_t \right|>2\sqrt{h(-\log h)}\big\}}\Big]\,.\]
By using the reflection principle we get
\begin{align*}
\mathbb{E}\Big[M_h^{2q}\ \mathds{1}_{\left\{M_h>2\sqrt{h(-\log h)}\right\}}\Big] &\leq 2\sum_{k=1}^{d} \mathbb{E}\Big[ (\sup_{t \in \left[0,h\right]} B^k_t)^{2q}\ \mathds{1}_{\big\{\underset{t \in \left[0,h\right]}{\sup} B^k_t >2\sqrt{h(-\log h)}\big\}}\Big]\\
&\leq C_2\int_{x\geq2\sqrt{h(-\log h)}} x^{2q} g(x)\,dx\,,
\end{align*}
where $g(x)= 2\, e^{-x^2/(2h)}/ \sqrt{2\pi h}$ is the density of $\sup_{t \in \left[0,h\right]} B^k_t$\,.
Hence,
\[\mathbb{E}\Big[M_h^{2q}\ \mathds{1}_{\left\{M_h>2\sqrt{h(-\log h)}\right\}}\Big] \leq C_3 h^{q-1/2} \int_{u\geq2\sqrt{-\log h}} u^{2q}e^{-u^2/2}\,du\,.\]
Let us compute now this integral.
By integration by parts, we have
\begin{align*}
I_{q}=\int_{u\geq2\sqrt{-\log h}} u^{2q}e^{-u^2/2}\,du =& \left[u^{2q-1}(-e^{-u^2/2})\right]^{\infty}_{2\sqrt{-\log h}}\\ &+ (2q-1)\int_{u\geq2\sqrt{-\log h}} u^{2q-2}e^{-u^2/2}\,du\\
=&(2\sqrt{-\log h})^{2q-1} e^{-(2\sqrt{-\log h})^2/2} + (2q-1) I_{q-1}\\
=&(2\sqrt{-\log h})^{2q-1} h^2 +(2q-1) I_{q-1}\,.
\end{align*}
By induction, $I_q \leq C_4 (\sqrt{-\log h})^{2q-1} h^2$.
Therefore, for $h$ small enough,
\[\mathbb{E}\left[M_h^{2q}\right] \leq C_5 h^q(-\log h)^q\,.\]
Thus, we obtain
\[\mathbb{E}\big[\left|W_{(\lfloor t/h\rfloor +1)h} - W_{\lfloor t/h\rfloor h}\right|^{2q}\big] \leq C_6 h^q(-\log h)^q\,,\]
and
\[\mathbb{E}\big[\left|W_{t} - W_{\lfloor t/h\rfloor h}\right|^{2q}\big] \leq C_6 h^q(-\log h)^q\,.\]
Then, from Assumption (H4),
\begin{multline*}
\mathbb{E}\big[\left|\eta^h( X^h_{\lfloor t/h\rfloor h}, W_{(\lfloor t/h\rfloor +1)h} - W_{\lfloor t/h\rfloor h})\right|^{2q}\big] \leq C_7 (h^{2q\alpha} \mathbb{E}\big[\left|X^h_{\lfloor t/h\rfloor h}\right|^{2q}\big] +\hfill \\
\hfill+ \mathbb{E}\big[\left|W_{(\lfloor t/h\rfloor +1)h} - W_{\lfloor t/h\rfloor h}\right|^{2q}\big])\,
\end{multline*}
and with the previous inequality on $\mathbb{E}\big[\left|W_{(\lfloor t/h\rfloor +1)h} - W_{\lfloor t/h\rfloor h}\right|^{2q}\big]$,
\begin{multline}
\mathbb{E}\big[\left|\eta^h( X^h_{\lfloor t/h\rfloor h}, W_{(\lfloor t/h\rfloor +1)h} - W_{\lfloor t/h\rfloor h})\right|^{2q}\big] \leq C_7 (h^{2q\alpha} \mathbb{E}\big[\left|X^h_{\lfloor t/h\rfloor h}\right|^{2q}\big] +\hfill \\
\hfill+ C_6 h^q(-\log h)^q)\,.\label{466}
\end{multline}
Hence, for $h$ small enough, Inequalities (\ref{433}) and (\ref{444}) become
\begin{align}
\mathbb{E}\big[\left|X^h_t\right|^{2q}\big] \leq& C_8 \Big{(}(1 + h^{2q}+ h^q(-\log h)^q)\ \mathbb{E}\big[\left|X^h_{\lfloor t/h\rfloor h}\right|^{2q}\big]+ h^{2q} + h^q(-\log h)^q \Big{)}\,,\label{477}
\end{align}
and,
\begin{align}
\mathbb{E}\big[\left|Y^h_t\right|^{2q}\big] \leq& C_8 \Big{(}(1 + h^{2q}+ h^q(-\log h)^q)\ \mathbb{E}\big[\left|Y^h_{\lfloor t/h\rfloor h}\right|^{2q}\big]+ h^{2q} + h^q(-\log h)^q \Big{)}\,.\label{488}
\end{align}
These inequalities allow us to bound the expectation of the norm of the processes $X^h_t$ and $Y^h_t$ in terms of its value at the last grid time in $h\mathbb{N}^*$.
The next step is to understand how the norm of this process evolves between two successive times in $h\mathbb{N}^*$. In other words, we want to study the evolution of the norm of the Markov chain $(X_{nh}^h)=(Y_{nh}^h)$.
Recall that
\begin{multline*}
X^h_{(n+1)h}= X_{n h}^h +\sigma(X_{n h}^h)(W_{(n +1)h}- W_{n h}) + h b(X_{nh}^h)+ \hfill \\
\hfill +h \eta^{(h)}(X_{n h}^h,W_{(n+1)h}- W_{n h}) \,.
\end{multline*}
This can be also written
\begin{multline*}
X^h_{(n +1)h} = X^h_{n h} + \int_{n h}^{(n +1)h} \big( b(X^h_{n h}) +\eta^h( X^h_{n h}, W_{(n +1)h} - W_{n h})\big)\,ds +\hfill \\
\hfill +\int_{n h}^{(n +1)h} \sigma(X^h_{n h}) \,dW_s\,.
\end{multline*}
Applying Lemma \ref{le2} between the times $nh$ and $(n+1)h$ to the process $Y^h_{t}$, one obtains
\begin{multline*}
\mathbb{E}\big[\left|X^h_{(n +1)h}\right|^{2q}\big] \leq \mathbb{E}\big[\left|X^h_{n h}\right|^{2q}\big] + C_9 \int_{n h}^{(n +1)h} \big{(}\mathbb{E}\big[\left|Y^h_s\right|^{2q}\big]+ \mathbb{E}\big[\|\sigma(X^h_{n h})\|^{2q}\big] + \hfill \\
\hfill + \mathbb{E}\big[\left| b(X^h_{n h}) +\eta^h( X^h_{n h}, W_{(n +1)h} - W_{n h})\right|^{2q}\big]\big{)} \,ds\,.
\end{multline*}
The value of $\mathbb{E}\big[\left|Y^h_s\right|^{2q}\big]$ can be bounded with (\ref{488}).
Hence, for $h$ sufficiently small, and from the linear growth property, it follows that
\begin{align*}
\mathbb{E}\big[\left|X^h_{(n +1)h}\right|^{2q}\big]
\leq& (1+C_{10}h)\mathbb{E}\big[\left|X^h_{n h}\right|^{2q}\big] + C_{11} h\,.
\end{align*}
Note that the sequence $(\mathbb{E}\big[\left| X_{nh}^h\right|^{2q}\big])_{n \in \mathbb{N}}$ is subarithmetico-geometric, that is, it satisfies an inequality of the form $x_{n+1} \leq \beta x_n +\gamma$ with $\beta \geq 1$ and $\gamma\geq 0$.
Thus, each $\mathbb{E}\big[\left| X_{nh}^h\right|^{2q}\big]$ can be controlled by only $\mathbb{E}\big[\left| X_{0}^h\right|^{2q}\big]$ and the time $n$.
Indeed, if a sequence $(x_n)$ satisfies the previous inequality, then, for all $n$,
\[x_n \leq \beta^n x_0 + n e^{n(\beta-1)} \gamma\,.\]
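For completeness, here is the short computation behind this bound: iterating the inequality yields
\[x_n \leq \beta^n x_0 + \gamma \sum_{k=0}^{n-1} \beta^k \leq \beta^n x_0 + n\,\beta^{n}\gamma \leq \beta^n x_0 + n\, e^{n(\beta-1)} \gamma\,,\]
where we used $\beta^k\leq\beta^n$ for $k\leq n$ and $\beta^n = (1+(\beta-1))^n \leq e^{n(\beta-1)}$.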
Therefore, for all $t$ in $[0, \tau]$,
\begin{align*}
\mathbb{E}\big[\left|X^h_{\lfloor t/h\rfloor h}\right|^{2q}\big] \leq& (1+C_{10}h)^{t/h} \mathbb{E}\big[\left|X^h_0\right|^{2q}\big] + \frac{t}{h} e^{C_{10}t} C_{11} h \\
\leq& (\mathbb{E}\big[\left|X^h_0\right|^{2q}\big]+ C_{11}t)e^{C_{10}t}\,.
\end{align*}
Hence with Inequality (\ref{477}),
\begin{align*}
\mathbb{E}\big[\left|X^h_t\right|^{2q}\big] &\leq C_{12}(\mathbb{E}\big[\left|X^h_0\right|^{2q}\big]+ C_{11}t)e^{C_{10}t}\\
&\leq C_{13}(1+ \mathbb{E}\big[\left|X^h_0\right|^{2q}\big])e^{C_{10}t}\,,
\end{align*}
and, for the same reason,
\begin{align*}
\mathbb{E}\big[\left|Y^h_t\right|^{2q}\big] \leq C_{13}(1+ \mathbb{E}\big[\left|X^h_0\right|^{2q}\big])e^{C_{10}t}\,.
\end{align*}
Let us proceed now with the proof of Inequality (\ref{422}).
For all $t$, we get
\begin{align*}
\mathbb{E}\big[\left|X^h_t-X^h_{\lfloor t/h\rfloor h}\right|^{2q}\big] \leq C_0 \mathbb{E} \big[& h^{2q} \left|b(X^h_{\lfloor t/h\rfloor h})\right|^{2q} + \\
& \quad+ \|\sigma(X^h_{\lfloor t/h\rfloor h})\|^{2q}\left|W_{(\lfloor t/h\rfloor +1)h} - W_{\lfloor t/h\rfloor h}\right|^{2q} +\\
& \quad+ h^{2q} \left|\eta^h( X^h_{\lfloor t/h\rfloor h}, W_{(\lfloor t/h\rfloor +1)h} - W_{\lfloor t/h\rfloor h})\right|^{2q}\big]\,.
\end{align*}
Then, we have
\begin{multline*}
\mathbb{E}\big[\big|X^h_t-X^h_{\lfloor t/h\rfloor h}\big|^{2q}\big] \leq C_8 \Big{(}( h^{2q}+ h^q(-\log h)^q)
\mathbb{E}\big[\big|X^h_{\lfloor t/h\rfloor h}\big|^{2q}\big]+\hfill \\
\hfill+ h^{2q} + h^q(-\log h)^q \Big{)}\,.
\end{multline*}
Finally, from the bound (\ref{411}) on $\mathbb{E}\big[\big|X^h_{\lfloor t/h\rfloor h}\big|^{2q}\big]$, we obtain
\begin{align*}
\mathbb{E}\big[\left|X^h_t-X^h_{\lfloor t/h\rfloor h}\right|^{2q}\big] \leq C_{14} ( h^{2q}+ h^q(-\log h)^q)\,
\end{align*}
and, for the same reasons,
\begin{align*}
\mathbb{E}\big[\left|Y^h_t-Y^h_{\lfloor t/h\rfloor h}\right|^{2q}\big] \leq C_{14} ( h^{2q}+ h^q(-\log h)^q)\,.
\end{align*}
We now proceed with the proof of the last inequality.
Note that the process $Y^h_t$ can also be written as
\begin{align*}
Y^h_t = X_0&+\int_0^t \sum_{k=0}^{\lfloor \tau/h\rfloor} \mathds{1}_{\left] kh,(k+1)h \right]} (s) \big(b(Y_{kh}^h) + \eta^{(h)} (Y_{kh}^h,W_{(k+1)h} - W_{kh})\big)\, ds +\\
&+ \int_0^t \sum_{k=0}^{\lfloor \tau/h\rfloor} \mathds{1}_{\left] kh,(k+1)h \right]}(s)\sigma(Y_{kh}^h)\, d W_s\,.
\end{align*}
We define the process $Z^h_t$ by
\[Z^h_t = \sup_{s \leq t} \big| Y^h_s \big|\,.\]
The aim is to find a bound on $\mathbb{E}\big[(Z^h_t)^{2q}\big]$ independent of $h$.
Note that from the definition of the norm and the convexity of the map~$x \longmapsto \left| x \right|^q$,
\begin{align*}
\mathbb{E}\big[(Z^h_t)^{2q}\big] \leq C_{15} \sum_{i=1}^m \mathbb{E}\big[\big( Z^{h,i}_t \big)^{2q}\big]\,,
\end{align*}
where $ Z^{h,i}_t$ is defined by $ Z^{h,i}_t= \sup_{s \leq t} \big| Y^{h,i}_s \big|$.
Therefore it is sufficient to bound component by component.
For all $i \in \left[\!\left|1,m\right|\!\right]$, by the triangle inequality,
\begin{multline}\label{sup}
\mathbb{E}\big[\big|Z^{h,i}_t\big| ^{2q}\big] \leq C_{16} \big( \mathbb{E} \big[ \left| X_0 \right| ^{2q} \big]+\hfill \\ \hfill+ \mathbb{E}\big[\big( \sup_{s \leq t} \big| \int_0^s \sum_{k=0}^{\lfloor \tau/h\rfloor} \mathds{1}_{\left] kh,(k+1)h \right]} (u) \big(b^i(Y_{kh}^h) + \eta^{(h),i} (Y_{kh}^h,W_{(k+1)h} - W_{kh})\big)\, du \big| \big)^{2q}\,\big] +\\
\hfill+\mathbb{E}\big[\big( \sup_{s \leq t} \big| \int_0^s \sum_{k=0}^{\lfloor \tau/h\rfloor} \sum_{j=0}^{d}\mathds{1}_{\left] kh,(k+1)h \right]}(u)\sigma^{i,j}(Y_{kh}^h)\, d W_u^j\big| \big)^{2q}\, \big]\big)\,.
\end{multline}
Consider the second term in this inequality. By H\"older's inequality,
\begin{multline*}\big| \int_0^s \sum_{k=0}^{\lfloor \tau/h\rfloor} \mathds{1}_{\left] kh,(k+1)h \right]} (u) \big(b^i(Y_{kh}^h) + \eta^{(h),i} (Y_{kh}^h,W_{(k+1)h} - W_{kh})\big)\, du \big|^{2q} \leq \hfill \\
\hfill \leq s^{2q-1} \int_0^s \big| \sum_{k=0}^{\lfloor \tau/h\rfloor} \mathds{1}_{\left] kh,(k+1)h \right]} (u) \big(b^i(Y_{kh}^h) + \eta^{(h),i} (Y_{kh}^h,W_{(k+1)h} - W_{kh})\big)\big|^{2q}\, du\,.
\end{multline*}
Note that this sum over $k$ is reduced to one term for each $s$. Hence,
\begin{multline*}
\int_0^s \big| \sum_{k=0}^{\lfloor \tau/h\rfloor} \mathds{1}_{\left] kh,(k+1)h \right]} (u) \big(b^i(Y_{kh}^h) + \eta^{(h),i} (Y_{kh}^h,W_{(k+1)h} - W_{kh})\big)\big|^{2q}\, du \leq \hfill \\
\hfill \leq C_{17}\int_0^s \sum_{k=0}^{\lfloor \tau/h\rfloor} \mathds{1}_{\left] kh,(k+1)h \right]} (u)\big( \big| b^i(Y_{kh}^h) \big|^{2q}+\big| \eta^{(h),i} (Y_{kh}^h,W_{(k+1)h} - W_{kh})\big|^{2q}\big)\, du\,.
\end{multline*}
From Assumption (H4) and the linear growth bound on the function $b$,
\begin{multline*}
\int_0^s \sum_{k=0}^{\lfloor \tau/h\rfloor} \mathds{1}_{\left] kh,(k+1)h \right]} (u)\big( \big| b^i(Y_{kh}^h) \big|^{2q}+\big| \eta^{(h),i} (Y_{kh}^h,W_{(k+1)h} - W_{kh})\big|^{2q}\big)\, du\leq \hfill \\
\hfill \leq C_{18} \int_0^s \sum_{k=0}^{\lfloor \tau/h\rfloor} \mathds{1}_{\left] kh,(k+1)h \right]} (u)\big( 1 + \big(Z_u^h\big)^{2q} + h^{2q\alpha}\big( Z_u^h \big)^{2q} + \big| W_{(k+1)h} - W_{kh}\big|^{2q}\big)\, du\,.
\end{multline*}
Since we are interested in $h$ small, we can consider $h \leq 1$. Therefore, since $t \leq \tau$, we obtain
\begin{multline*}
\mathbb{E}\big[\big( \sup_{s \leq t} \big| \int_0^s \sum_{k=0}^{\lfloor \tau/h\rfloor} \mathds{1}_{\left] kh,(k+1)h \right]} (u) \big(b^i(Y_{kh}^h) + \eta^{(h),i} (Y_{kh}^h,W_{(k+1)h} - W_{kh})\big)\, du \big| \big)^{2q}\,\big] \leq \hfill \\
\hfill \leq C_{19} \Big( 1 + \int_0^t \Big(\mathbb{E}\big[\big( Z_u^h \big)^{2q}\,\big] +\mathbb{E}\big[\sum_{k=0}^{\lfloor \tau/h\rfloor} \mathds{1}_{\left] kh,(k+1)h \right]}(u) \big| W_{(k+1)h} - W_{kh}\big|^{2q}\,\big]\Big) \, du\Big)\,.
\end{multline*}
Recall that for each $u$ the sum over $k$ is reduced to one term. Since each term can be bounded by $h^q(-\log h)^q$ as previously shown, then, for $h$ sufficiently small,
\begin{multline*}
\mathbb{E}\big[\big( \sup_{s \leq t} \big| \int_0^s \sum_{k=0}^{\lfloor \tau/h\rfloor} \mathds{1}_{\left] kh,(k+1)h \right]} (u) \big(b^i(Y_{kh}^h) + \eta^{(h),i} (Y_{kh}^h,W_{(k+1)h} - W_{kh})\big)\, du \big| \big)^{2q}\,\big] \leq \hfill \\
\hfill \leq C_{20} \big( 1 + \int_0^t \mathbb{E}\big[\big( Z_u^h \big)^{2q}\,\big] \, du \big) \,.
\end{multline*}
Consider now the last term in Inequality (\ref{sup}). By Burkholder's inequality, we get
\begin{multline*}
\mathbb{E}\big[\big( \sup_{s \leq t} \big| \int_0^s \sum_{k=0}^{\lfloor \tau/h\rfloor} \sum_{j=0}^{d}\mathds{1}_{\left] kh,(k+1)h \right]}(u)\sigma^{i,j}(Y_{kh}^h)\, d W_u^j\big| \big)^{2q}\, \big] \leq \hfill \\
\hfill \leq C_{21} t^{q-1} \big( \int_0^t \mathbb{E}\big[\sum_{k=0}^{\lfloor \tau/h\rfloor} \sum_{j=0}^{d}\mathds{1}_{\left] kh,(k+1)h \right]}(u)\big|\sigma^{i,j}(Y_{kh}^h)\big|^{2q}\,\big]\,du \big)\,.
\end{multline*}
From the linear growth bound on $\sigma$, we obtain
\begin{multline*}
\mathbb{E}\big[\big( \sup_{s \leq t} \big| \int_0^s \sum_{k=0}^{\lfloor \tau/h\rfloor} \sum_{j=0}^{d}\mathds{1}_{\left] kh,(k+1)h \right]}(u)\sigma^{i,j}(Y_{kh}^h)\, d W_u^j\big| \big)^{2q}\, \big] \leq \hfill \\
\hfill \leq C_{22} \big( 1+ \int_0^t \mathbb{E}\big[\big( Z^h_u\big)^{2q}\,\big]\,du \big)\,.
\end{multline*}
Finally, we get
$$
\mathbb{E}\big[\big|Z^{h,i}_t\big| ^{2q}\big] \leq C_{23 }\big( 1+ \int_0^t \mathbb{E}\big[\big( Z^h_u\big)^{2q}\,\big]\,du \big)\,,
$$
and then,
$$
\mathbb{E}\big[\big(Z^{h}_t\big) ^{2q}\big] \leq C_{24}\big( 1+ \int_0^t \mathbb{E}\big[\big( Z^h_u\big)^{2q}\,\big]\,du \big)\,.
$$
Hence, by Gr\"onwall's lemma,
$$\mathbb{E}\big[\big(Z^{h}_{\tau}\big) ^{2q}\big] \leq C_{25}\,,$$
where $C_{25}$ is independent of $h$.
\end{proof}
\begin{proof}[Proof of Theorem 5.2]
For all positive $\tau$, the same strategy as in the proof of Lemma \ref{le4} is used to show the convergence on the time interval $\left[0, \tau\right]$.
The error between the solution $X_t$ of the SDE and the process $X_t^h$ is denoted by~$\epsilon_t$.
Let us begin with a formula which links the errors at two consecutive points of $h\mathbb{N}$:
\begin{align*}
\epsilon_{(n+1)h} =& X_{(n+1)h} - X_{(n+1)h}^h\\
=&\epsilon_{nh} +\int_{nh}^{(n+1)h} \sigma(X_s) - \sigma(X_{nh}^h)\, d W_s + \\
&\quad \ +\int_{nh}^{(n+1)h} b(X_s)- b(X_{nh}^h)-\eta^h (X_{nh}^h, W_{(n+1)h}-W_{nh})\, ds\,.
\end{align*}
As previously seen, Lemma \ref{le2} is applied to the process $(X_{nh+t} - Y^h_{nh+t})$ instead of $\epsilon_{nh+t}$ because of the linear interpolation in the definition of $X_t^h$.
Then, we get
\begin{align*} \mathbb{E}\big[\big| \epsilon_{(n+1)h} \big|^{2q}\big] \leq &\mathbb{E}\big[\left| \epsilon_{nh} \right|^{2q}\big] + C_0
\int_{nh}^{(n+1)h} \big{(} \mathbb{E}\big[ \left| X_{s} - Y^h_{s} \right|^{2q}\big]+ \\
&\quad+ \mathbb{E}\big[ \|\sigma(X_s)-\sigma(X_{nh}^h)\|^{2q}\big] +\\
&\quad+ \mathbb{E}\big[\left|b(X_s)- b(X_{nh}^h) - \eta^h (X_{nh}^h, W_{(n+1)h}-W_{nh}) \right|^{2q}\big]\big{)}\, ds\,.
\end{align*}
Now,
\[X_{s} - Y^h_{s} = \epsilon_{nh} + X_{s} - X_{nh} + Y_{nh}^h - Y_{s}^h\,.\]
Then, we obtain
\[\mathbb{E}\big[\left| X_{s} - Y^h_{s} \right|^{2q}\big] \leq C_1 \big(\mathbb{E}\big[\left| \epsilon_{nh} \right|^{2q}\big] + \mathbb{E}\big[\left| X_{s} - X_{nh}\right|^{2q}\big] + \mathbb{E}\big[\left| Y_{s}^h - Y_{nh}^h\right|^{2q}\big]\big)\,.\]
From Lemma \ref{le3},
\[\mathbb{E}\big[\left| X_{s} - X_{nh}\right|^{2q}\big] \leq C_2 (1+ \mathbb{E}\big[\left| X_0 \right|^{2q}\big]) (s-nh)^q
\leq C_3 h^q\,.\]
For the process $Y^h_t$, Lemma \ref{le4} gives the inequality
\[\mathbb{E}\big[\left| Y^h_{s} - Y^h_{nh}\right|^{2q}\big] \leq C_4 (h^{2q} + h^q(-\log h)^q)\,.\]
Therefore, for $h$ small enough,
\begin{align*}
\mathbb{E}\big[ \left| X_{s} - Y^h_{s} \right|^{2q}\big] \leq C_5 (\mathbb{E}\big[\left| \epsilon_{nh} \right|^{2q}\big] + h^q(-\log h)^q)\,.
\end{align*}
Since $\sigma$ is a Lipschitz function,
\begin{align*}
\mathbb{E}\big[\|\sigma(X_s)-\sigma(X_{nh}^h)\|^{2q}\big] &\leq K_0^{2q}\ \mathbb{E}\big[ \left|X_s-X_{nh}^h\right|^{2q}\big]\\
&\leq K_0^{2q}\ \mathbb{E}\big[\left|X_s- X_{nh} + X_{nh} - X_{nh}^h\right|^{2q}\big]\\
&\leq C_6 (\mathbb{E}\big[\left| \epsilon_{nh} \right|^{2q}\big] + h^q)\,.
\end{align*}
On the other hand, the last term can also be bounded:
\begin{multline*}
\mathbb{E}\big[\left|b(X_s)- b(X_{nh}^h) - \eta^h (X_{nh}^h, W_{(n+1)h}-W_{nh}) \right|^{2q}\big] \hfill \\
\hfill \leq C_7 \big(\mathbb{E}\big[\left|b(X_s)- b(X_{nh}^h)\right|^{2q}\big] +
\mathbb{E}\big[\left|\eta^h (X_{nh}^h, W_{(n+1)h}-W_{nh}) \right|^{2q}\big]\big)\,.
\end{multline*}
As $\mathbb{E}\big[\left| X_{nh}^h \right|^{2q}\big]$ is finite (uniformly in $n$ and $h$, by Lemma \ref{le4}),
\[\mathbb{E}\big[\left|\eta^h (X_{nh}^h, W_{(n+1)h}-W_{nh}) \right|^{2q}\big] \leq K (h^q(-\log h)^q + h^{2q\alpha})\,.\]
And, finally, we have
\begin{multline*}
\mathbb{E}\big[\left|b(X_s)- b(X_{nh}^h) - \eta^h (X_{nh}^h, W_{(n+1)h}-W_{nh}) \right|^{2q}\big] \leq C_8 (\mathbb{E}\big[\left| \epsilon_{nh} \right|^{2q}\big]+\hfill\\
\hfill + h^q(-\log h)^q + h^{2q\alpha})\,.
\end{multline*}
Thus, for $h$ sufficiently small,
\begin{multline*}
\mathbb{E}\big[\left| \epsilon_{(n+1)h} \right|^{2q}\big]\leq \mathbb{E}\big[\left| \epsilon_{nh} \right|^{2q}\big] + C_0 h\Big{(}C_5 (\mathbb{E}\big[\left| \epsilon_{nh} \right|^{2q}\big] + h^q(-\log h)^q)+ \hfill \\
\hfill + C_6 (\mathbb{E}\big[\left| \epsilon_{nh} \right|^{2q}\big] + h^q) +C_8 (\mathbb{E}\big[\left| \epsilon_{nh} \right|^{2q}\big] + h^q(-\log h)^q + h^{2q\alpha})\Big{)}\\
\phantom{\mathbb{E}\big[\left| \epsilon_{(n+1)h} \right|^{2q}\big]\ \ \ } \leq (1 + C_9h)\mathbb{E}\big[\left| \epsilon_{nh} \right|^{2q}\big] + C_{10} h^{q+1}(-\log h)^q + C_{11} h^{1+2q\alpha} \,.\hfill
\end{multline*}
Note that $(\mathbb{E}\big[\left| \epsilon_{nh} \right|^{2q}\big])$ is another subarithmetico-geometric sequence.
Hence, as $\epsilon_0 =0$,
\begin{align*}
\mathbb{E}\left[\left| \epsilon_{nh} \right|^{2q}\right] \leq& n e^{nC_9 h} (C_{10} h^{q+1}(-\log h)^q + C_{11} h^{1+2q\alpha})\,.
\end{align*}
On the other hand,
\[\epsilon_t= \epsilon_{\lfloor t/h \rfloor h} + X_t-X_{\lfloor t/h \rfloor h} + X_{\lfloor t/h \rfloor h}^h- X_t^h\,.\]
Therefore, for a time $t$ in $[0,\tau]$,
\begin{align}
\mathbb{E}\big[\left| \epsilon_t \right|^{2q}\big] \leq& C_5 (\mathbb{E}\left[\left| \epsilon_{\lfloor t/h \rfloor h} \right|^{2q}\right] + h^q(-\log h)^q )\nonumber \\
\leq& C_5 \big[ \frac{t}{h}\, e^{C_9 t} (C_{10} h^{q+1}(-\log h)^q + C_{11} h^{1+2q\alpha}) + h^q (-\log h)^q\big] \nonumber \\
\leq& C_{12}( h^q(-\log h)^q + h^{2q\alpha})\,. \label{51}
\end{align}
Thus, a bound on $\mathbb{E}\big[\left| X_t - X_t^h \right|^{2q}\big]$ is obtained.
However, we want to prove the inequality on the supremum
\[\mathbb{E}\big{[}\big( \sup_{t \in [0,\tau]} \left| \epsilon_t \right|\big)^{2q}\big{]} \leq C (h^{2q\alpha} + h^{q-1} (-\log h)^q)\,.\]
Let us proceed with the proof of this inequality.
From the definition of the norm and the convexity of the map $x \longmapsto \left| x \right|^{q}$,
\begin{align*}
\mathbb{E}\big[\big(\sup_{t \in [0,\tau]} \left| \epsilon_t \right|\big)^{2q}\big] \leq C_{13} \sum_{i=1}^m \mathbb{E}\big[\big(\sup_{t \in [0,\tau]} \left|\epsilon_t^i\right|\big)^{2q}\big]\,.
\end{align*}
Thus, it is sufficient to control the supremum of each component.
For all $i$ in $\left[\!\left|1,m\right|\!\right]$,
\begin{align*}
\epsilon_t^i =& \int_0^t \sum_{k=0}^{\lfloor \tau/h\rfloor} \big[b^i(X_s)-b^i(X_{kh}^h) - (\eta^{(h)})^i (X_{kh}^h,W_{(k+1)h} - W_{kh})\big] \mathds{1}_{\left] kh,(k+1)h \right]} (s)\, ds\\
&+ \int_0^t \sum_{k=0}^{\lfloor \tau/h\rfloor} \sum_{j=1}^d \big[\sigma^{i,j}(X_s)-\sigma^{i,j}(X_{kh}^h)\big] \mathds{1}_{\left] kh,(k+1)h \right]} (s)\, d W^j_s\\
&+ \sum_{j=1}^d \sigma^{i,j}(X_{\lfloor t/h\rfloor h}^h)\big[W^j_t - W^j_{\lfloor t/h\rfloor h} - \frac{t - \lfloor t/h\rfloor h}{h} (W^j_{(\lfloor t/h\rfloor +1)h} - W^j_{\lfloor t/h\rfloor h})\big]\,.
\end{align*}
Then,
\begin{multline}
\mathbb{E}\big{[}\big(\sup_{t \in [0,\tau]} \left|\epsilon_t^i\right|\big)^{2q}\big{]} \hfill \\
\leq C_{14}\big( \mathbb{E}\big{[}\big(\sup_{t \in [0,\tau]} \big| \int_0^t \sum_{k=0}^{\lfloor \tau/h\rfloor} \big\{b^i(X_s)-b^i(X_{kh}^h)+\hfill \\
\hfill - (\eta^{(h)})^i (X_{kh}^h,W_{(k+1)h} - W_{kh})\big\} \mathds{1}_{\left] kh,(k+1)h \right]} (s)\, ds \big|\big)^{2q} \big{]} \\
+\mathbb{E} \big{[}\big(\sup_{t \in [0,\tau]} \big| \int_0^t \sum_{k=0}^{\lfloor \tau/h\rfloor} \sum_{j=1}^d \big\{\sigma^{i,j}(X_s)-\sigma^{i,j}(X_{kh}^h)\big\} \mathds{1}_{\left] kh,(k+1)h \right]} (s)\, d W^j_s \big|\big)^{2q} \big{]} \hfill \\
+ \mathbb{E}\big{[}\big(\sup_{t \in [0,\tau]} \big|\sum_{j=1}^d \sigma^{i,j}(X_{\lfloor t/h\rfloor h}^h)\big\{W^j_t - W^j_{\lfloor t/h\rfloor h} + \hfill \\
\hfill- \frac{t - \lfloor t/h\rfloor h}{h} (W^j_{(\lfloor t/h\rfloor +1)h} - W^j_{\lfloor t/h\rfloor h})\big\}\big|\big)^{2q} \big{]} \big)\,.\label{52}
\end{multline}
Consider the first term of (\ref{52}). By H\"older's inequality, we get
\begin{multline*}
\mathbb{E}\big{[}\big(\sup_{t \in [0,\tau ]} \big| \int_0^t \sum_{k=0}^{\lfloor \tau/h\rfloor} \big\{b^i(X_s)-b^i(X_{kh}^h) + \hfill \\
\hfill- (\eta^{(h)})^i (X_{kh}^h,W_{(k+1)h} - W_{kh})\big\} \mathds{1}_{\left] kh,(k+1)h \right]} (s)\, ds \big|\big)^{2q}\big{]} \\
\leq C_{15}\ \mathbb{E}\big{[}\int_0^{\tau} \big| \sum_{k=0}^{\lfloor \tau /h\rfloor} \big\{b^i(X_s)-b^i(X_{kh}^h) +\hfill \\
\hfill - (\eta^{(h)})^i (X_{kh}^h,W_{(k+1)h} - W_{kh})\big\} \mathds{1}_{\left] kh,(k+1)h \right]} (s)\big|^{2q}\, ds\big{]}\,.
\end{multline*}
Note that, for all $s$, the sum over $k$ reduces to a single term, thus
\begin{multline*}
\mathbb{E}\big{[}\sup_{t \in [0,\tau]} \big| \int_0^t \sum_{k=0}^{\lfloor \tau /h\rfloor} \big\{b^i(X_s)-b^i(X_{kh}^h)+\hfill \\
\hfill - (\eta^{(h)})^i (X_{kh}^h,W_{(k+1)h} - W_{kh})\big\} \mathds{1}_{\left] kh,(k+1)h \right]} (s)\, ds \big|^{2q}\big{]}\\
\leq C_{15} \int_0^{\tau } \mathbb{E}\big{[}\sum_{k=0}^{\lfloor \tau /h\rfloor} \big| b^i(X_s)-b^i(X_{kh}^h) +\hfill \\
\hfill- (\eta^{(h)})^i (X_{kh}^h,W_{(k+1)h} - W_{kh})\big|^{2q} \mathds{1}_{\left] kh,(k+1)h \right]} (s)\big{]}\, ds\,.
\end{multline*}
Now, from the previous inequalities on the processes,
\begin{multline*}
\mathbb{E}\big{[}\sup_{t \in [0,\tau]} \big| \int_0^t \sum_{k=0}^{\lfloor \tau /h\rfloor} \big\{b^i(X_s)-b^i(X_{kh}^h)+\hfill \\
\hfill - (\eta^{(h)})^i (X_{kh}^h,W_{(k+1)h} - W_{kh})\big\} \mathds{1}_{\left] kh,(k+1)h \right]} (s)\, ds \big|^{2q}\big{]} \\
\hfill \leq C_{16} ( h^q(-\log h)^q + h^{2q\alpha})\,.
\end{multline*}
Consider now the second term of (\ref{52}). By Burkholder's inequality, we obtain
\begin{align*}
\mathbb{E}\big{[} &\big(\sup_{t \in [0,\tau ]} \big| \int_0^t \sum_{k=0}^{\lfloor \tau/h\rfloor} \sum_{j=1}^d (\sigma^{i,j}(X_s)-\sigma^{i,j}(X_{kh}^h)) \mathds{1}_{\left] kh,(k+1)h \right]} (s)\, d W^j_s \big|\big)^{2q}\big{]}\\
&\quad\leq C_{17}\mathbb{E}\big{[} \int_0^{\tau} \big| \sum_{k=0}^{\lfloor \tau /h\rfloor} \sum_{j=1}^d (\sigma^{i,j}(X_s)-\sigma^{i,j}(X_{kh}^h)) \mathds{1}_{\left] kh,(k+1)h \right]} (s)\big|^{2q} \,ds \big{]}\,\\
&\quad \leq C_{17} \int_0^{\tau} \mathbb{E}\big[ \sum_{k=0}^{\lfloor \tau /h\rfloor} \big| \sum_{j=1}^d (\sigma^{i,j}(X_s)-\sigma^{i,j}(X_{kh}^h))\big|^{2q}\mathds{1}_{\left] kh,(k+1)h \right]} (s)\big]\,ds \,.
\end{align*}
Therefore, from Assumption (H1) and the bound on $\epsilon_t$,
\begin{multline*}
\mathbb{E}\big{[}\big( \sup_{t \in [0,\tau ]} \big{|} \int_0^t \sum_{k=0}^{\lfloor \tau /h\rfloor} \sum_{j=1}^d (\sigma^{i,j}(X_s)-\sigma^{i,j}(X_{kh}^h)) \mathds{1}_{\left] kh,(k+1)h \right]} (s)\, d W^j_s \big{|}\big)^{2q} \big{]} \hfill \\
\hfill \leq C_{18}(h^q(-\log h)^q + h^{2q\alpha})\,.
\end{multline*}
Let us proceed with the remaining term of (\ref{52}). Note that
\begin{multline*}
\mathbb{E}\big{[}\big(\sup_{t \in [0,\tau]} \big|\sum_{j=1}^d \sigma^{i,j}(X_{\lfloor t/h\rfloor h}^h)\big\{W^j_t - W^j_{\lfloor t/h\rfloor h} + \hfill \\
\hfill- \frac{t - \lfloor t/h\rfloor h}{h} (W^j_{(\lfloor t/h\rfloor +1)h} - W^j_{\lfloor t/h\rfloor h})\big\}\big|\big)^{2q} \big{]}\leq \\
\hfill \leq C_{19}\mathbb{E}\big{[}\sup_{t \in [0,\tau]} \big(\sum_{j=1}^d \big|\sigma^{i,j}(X_{\lfloor t/h\rfloor h}^h)\big|^{2q} \sup_{s \in [\lfloor t/h\rfloor h,(\lfloor t/h\rfloor +1) h]}\big|W^j_s - W^j_{\lfloor t/h\rfloor h}\big|^{2q}\big)\big{]}\\
\hfill \leq C_{19}\mathbb{E}\big{[}\sup_{k \in [|0,\lfloor \tau /h \rfloor|]} \big(\sum_{j=1}^d \big|\sigma^{i,j}(X_{kh}^h)\big|^{2q} \sup_{s \in [k h,(k +1) h]}\big|W^j_s - W^j_{kh}\big|^{2q}\big)\big{]}\,.
\end{multline*}
Since the supremum of non-negative elements is less than the sum of these elements, and from the linear growth bound on $\sigma$, we get
\begin{multline*}
\mathbb{E}\big{[}\sup_{k \in [|0,\lfloor \tau /h \rfloor|]} \big(\sum_{j=1}^d \big|\sigma^{i,j}(X_{kh}^h)\big|^{2q} \sup_{s \in [k h,(k +1) h]}\big|W^j_s - W^j_{kh}\big|^{2q}\big)\big{]} \leq \hfill \\
\hfill \leq C_{20}\, \mathbb{E}\big{[} \sum_{k=0}^{\lfloor \tau /h \rfloor} \big(1+ \big|X_{kh}^h\big|^{2q}\big) \sum_{j=1}^d \sup_{s \in [k h,(k +1) h]}\big|W^j_s - W^j_{kh}\big|^{2q} \big{]}
\end{multline*}
Since $ \mathbb{E}\big[ \underset{t \in [0,\tau ]}{\sup} \big|\sigma^{i,j}(X_{\lfloor t/h\rfloor h}^h)\big|^{2q}\big]$ is finite (Lemma \ref{le4}), by independence of the Brownian increments from the past, and from the previous bound on the supremum of a Brownian motion over a time interval of length $h$, we obtain
\begin{multline*}
\mathbb{E}\big{[}\big(\sup_{t \in [0,\tau]} \big|\sum_{j=1}^d \sigma^{i,j}(X_{\lfloor t/h\rfloor h}^h)\big\{W^j_t - W^j_{\lfloor t/h\rfloor h} + \hfill \\
\hfill- \frac{t - \lfloor t/h\rfloor h}{h} (W^j_{(\lfloor t/h\rfloor +1)h} - W^j_{\lfloor t/h\rfloor h})\big\}\big|\big)^{2q} \big{]}\leq \\
\hfill \leq C_{21} \lfloor \tau /h\rfloor h^{q} (-\log h)^q\,.
\end{multline*}
Finally, we obtain
\begin{align*}
\mathbb{E}\big[ \big( \sup_{t \in [0,\tau]} \left|\epsilon_t^i\right|\big)^{2q}\big] \leq C_{22} ( h^{q-1} (-\log h)^q + h^{2q\alpha})\,.
\end{align*}
And, therefore, we have
\[\mathbb{E}\big[ \big(\sup_{t \in [0,\tau]} \left| \epsilon_t \right|\big)^{2q}\big] \leq C_{22} ( h^{q-1} (-\log h)^q + h^{2q\alpha})\,.\]
This inequality implies the convergence in $L^{p}$ of the process $(X^{h}_t)$ to $(X_t)$ on $[0,\tau]$ when $h$ tends to $0$.
\smallskip
For the almost sure convergence, we suppose also that the exponent $p$ is such that $p>2$ and $p \alpha >1$.
Thus, the previous result gives
\[\sum_{k=1}^{\infty} \mathbb{E}\big[\big(\sup_{t \in [0,\tau]} \big|X_t - X_t^{1/k}\big|\big)^{2q}\big] < \infty\,.\]
As the random variables $\big(\sup_{t \in [0,\tau]} \big|X_t - X_t^{1/k}\big|\big)^{2q}$ are non-negative with summable expectations, their sum is almost surely finite, and therefore
\[\lim_{h\rightarrow 0} \sup_ {t \in [0,\tau]} \left|\epsilon_t\right| =0 \hspace{3mm} \text{a.s.}\,.\]
\end{proof}
From the convergence of this numerical scheme, the main result on the convergence of the process $\overline{X}^h_t$ to a solution of a SDE is deduced.
\begin{theo}\label{Main}
Suppose that there exist measurable maps $b$, $\sigma$, and $\eta^{(h)}$ which verify Assumption (H1) for $b$ and $\sigma$, and (H4) for $\eta^{(h)}$, such that the map on the first component of $\widetilde{T}^{(h)}$ is of the following form,
\[ U^{(h)}(x,y) = x+ \sigma(x)y+ hb(x)+h \eta^{(h)}(x,y)\,.\]
For all $x_0$ in $\mathbb{R}^m$, and all $\tau>0$, let $X_t^{x_0}$ be the solution on $[0,\tau]$ of the SDE
\[dX^{x_0}_t= b(X^{x_0}_t)dt + \sigma(X_t^{x_0})dW_t\,.\]
Then, the process $(\overline{X}^h_t)$, starting in $x_0$, converges to $(X_t^{x_0})$ when $h$ tends to $0$ in $L^{p}$, for all $p>2$ on $\left[0, \tau\right]$.
\smallskip
Moreover, the convergence is almost sure on $\left[0, \tau\right]$.
\end{theo}
We now present a similar result in the case of locally Lipschitz and linearly bounded applications $b$ and $\sigma$, that is to say satisfying Assumptions (H2) and (H3).
\begin{theo}\label{locally}
For all $\tau>0$, under Assumptions (H2), (H3), and (H4), the process $(X_t^h)$ converges in $L^{p}$ to the solution $(X_t)$ of the SDE \eqref{eq:A} on $\left[0,\tau\right]$ for all $p > 2$.
More precisely,
\[\lim_{h \rightarrow 0}\mathbb{E}\big[ (\sup_{t \in [0,\tau]} \big| X_t -X_t^h\big|)^p \big] = 0\,.\]
\end{theo}
\begin{proof}
For all $\tau>0$ and all natural $N$, we define two stopping times $\tau_N$ and $\tau_N^{(h)}$ by
\[\tau_N = \inf\{t; \left| X_t \right| \geq N \} \ \text{ and }\ \tau_N^{(h)} = \inf\{t; \big| X_t^{(h)} \big| \geq N \}\,.\]
Note that, from Theorem \ref{globally},
\[\mathbb{E}\big[ \big(\sup_{t \in \left[0,\tau\right]} \big|X^h_{t \wedge \tau_N^{(h)} \wedge \tau_N} - X_{t \wedge \tau_N^{(h)} \wedge \tau_N} \big|)^{2q} \big] \leq C_N (h^{q-1}(-\log h)^q+ h^{2q\alpha})\,,\]
where $C_N$ notably depends on $K_N$. Moreover,
\begin{multline*}
\mathbb{E}\big[\big(\sup_{t \in \left[0,\tau\right]} \big|X^h_{t} - X_{t} \big|\big)^{2q} \big]\leq \mathbb{E}\big[\big(\sup_{t \in \left[0,\tau\right]} \big|X^h_{t \wedge \tau_N^{(h)} \wedge \tau_N} - X_{t \wedge \tau_N^{(h)} \wedge \tau_N} \big|\big)^{2q} \big]+\hfill \\
\hfill+\mathbb{E}\big[\sup_{t \in \left[0,\tau\right]} \big(\big|X^h_{t}\big|^{2q}+\big| X_{t} \big|^{2q}\big)\,;\, \tau_N^{(h)} \leq \tau \big] +\\
\hfill+ \mathbb{E}\big[\sup_{t \in \left[0,\tau\right]} \big(\big|X^h_{t}\big|^{2q}+\big| X_{t} \big|^{2q}\big)\,;\, \tau_N \leq \tau \big]\,.
\end{multline*}
Hence, we get for all $N$
\begin{multline*}
\mathbb{E}\big[\big(\sup_{t \in \left[0,\tau\right]} \big|X^h_{t} - X_{t} \big|\big)^{2q} \big]\leq \mathbb{E}\big[\sup_{t \in \left[0,\tau\right]} \big|X^h_{t \wedge \tau_N^{(h)} \wedge \tau_N} - X_{t \wedge \tau_N^{(h)} \wedge \tau_N} \big|^{2q} \big]+\hfill \\
\hfill+\dfrac2N \mathbb{E}\big[\sup_{t \in \left[0,\tau\right]} \big|X^h_{t}\big|^{2q+1}+\sup_{t \in \left[0,\tau\right]} \big| X_{t} \big|^{2q+1} \big]\,.
\end{multline*}
Since the expectation of the supremum of $X_t$ (Lemma \ref{le3}) and that of the supremum of $X_t^h$ (Lemma \ref{le4}) can be bounded by a constant independent of $h$ for all $h$ sufficiently small, the second term can be made arbitrarily small, uniformly in $h$, by choosing $N$ large; letting then $h$ tend to $0$ yields
\[\lim_{h \rightarrow 0}\mathbb{E}\big[ (\sup_{t \in [0,\tau]} \big| X_t -X_t^h\big|)^{2q} \big] = 0\,.\]
\end{proof}
Theorem \ref{locally} implies the following result on the limit evolution of the system when the time step $h$ of the interactions goes to $0$.
\begin{theo}\label{Main2}
Suppose that there exist measurable maps $b$, $\sigma$, and $\eta^{(h)}$ which verify Assumptions (H2) and (H3) for $b$ and $\sigma$, and (H4) for $\eta^{(h)}$, such that the map on the first component of $\widetilde{T}^{(h)}$ is of the following form,
\[ U^{(h)}(x,y) = x+ \sigma(x)y+ hb(x)+h \eta^{(h)}(x,y)\,.\]
For all $x_0$ in $\mathbb{R}^m$, and all $\tau>0$, let $X_t^{x_0}$ be the solution on $[0,\tau]$ of the SDE
\[dX^{x_0}_t= b(X^{x_0}_t)dt + \sigma(X_t^{x_0})dW_t\,.\]
Then, the process $(\overline{X}^h_t)$, starting in $x_0$, converges to $(X_t^{x_0})$ when $h$ tends to $0$ in $L^{p}$, for all $p>2$ on $\left[0, \tau\right]$.
\end{theo}
Physically, the results of these two theorems of convergence can be understood as follows. If the effective action of the environment on the system is roughly linear, the limit evolution of the system is given by a solution of a stochastic differential equation. This SDE is deduced from a Taylor expansion of the application $U^{(h)}$.
With the quantum repeated interactions scheme, Attal and Pautrat (in \cite{AP}) find quantum Langevin equations as limits of some Hamiltonian systems. There are some similarities with their results, particularly in the form of the interactions considered.
\section{Back to the Physical Systems}
\subsection{Charged Particle in a Uniform Electric Field}
The first example was a charged particle in a uniform electric field.
Recall that the evolution for a time step $h$ is given by
\[X(nh)= U^{(h)}(X((n-1)h),E(nh))\,,\]
where the map $U^{(h)}$ is defined by
\[ U^{(h)}(x,y)= x + \sigma(x) y +h b(x) + h\eta^{h}(x,y)\,,\]
with
\[\sigma \left(\begin{array}{c} x_1\\ x_2 \end{array}\right)=\left(\begin{array}{c} 0\\ q \end{array}\right) \hspace{10mm}b\left(\begin{array}{c} x_1\\ x_2 \end{array}\right)=\left(\begin{array}{c} \dfrac{x_2}{m}\\ 0 \end{array}\right)\hspace{10mm}\eta^{h}(x,y)=\left(\begin{array}{c} \dfrac{qy}{2m}\\ 0 \end{array}\right)\,.\]
Note that this map $U^{(h)}$ is of the form required in Theorem \ref{Main}.
The function $\sigma$ is constant and the function $b$ is linear. Finally, they are Lipschitz functions.
For the application $\eta^{h}$, Assumption (H4) is also verified with $\alpha = + \infty$ and $K_2 = \dfrac{q}{2m}$.
Therefore, Theorem \ref{Main} can be applied to this system.
For all initial $x_0$ and $p_0$, and all $\tau>0$, the limit process which gives the evolution of the charged particle is almost surely the solution $X_t=\left(\begin{array}{c}
X_t^1\\
X_t^2
\end{array}\right)$ on $[0,\tau]$ of the stochastic differential equation
\[dX_t =\left(\begin{array}{c}
\frac{X_t^2}{m}\\
0
\end{array}\right) dt + \left(\begin{array}{c}
0\\
q
\end{array} \right)dW_t,\]
where $W_t$ is a $1$-dimensional standard Brownian motion and with $X_0 = (x_0,p_0)$.
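As a sanity check (a minimal Python sketch, purely illustrative, with the hypothetical choice $q=m=1$ and zero initial data), one may simulate the repeated-interaction chain and observe that its momentum component coincides with $p_0 + qW_\tau$, since the update is exact in the momentum variable:
\begin{verbatim}
import numpy as np

# Illustrative sketch: repeated-interaction chain for the charged particle.
# The momentum matches p0 + q*W_tau, because the update
# U^h(x,y) = x + sigma(x) y + h b(x) + h eta^h(x,y) is exact in p.
q, m = 1.0, 1.0                       # charge and mass (hypothetical values)
x, p, tau, h = 0.0, 0.0, 1.0, 1e-4   # initial state, horizon, time step
rng = np.random.default_rng(1)
W = 0.0
for _ in range(int(tau / h)):
    dW = rng.normal(0.0, np.sqrt(h))
    x, p = x + h * p / m + h * q * dW / (2 * m), p + q * dW
    W += dW
print(p, q * W)                      # equal up to rounding: p = p0 + q*W_tau
\end{verbatim}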
\subsection{Harmonic Interaction}
The second example was a harmonic interaction between the system and the environment.
The evolution of the system was described by the Markov chain
\[X(nh)=U^{(h)}(X((n-1)h), Y(nh))\,\]
where
\begin{align*}
U^{(h)}(X,Y)= X + \sigma(X) Y + h b(X) +h \eta^{(h)}(X,Y)\,,
\end{align*}
with
\[b\left(\begin{array}{c} x_1\\ x_2 \end{array} \right) = \left(\begin{array}{c} x_2\\ -(x_1+l) \end{array} \right), \hspace{5mm} \sigma \left(\begin{array}{c} x_1\\ x_2 \end{array} \right) =\left(\begin{array}{cc} 0& 0\\ 1& 0 \end{array} \right)\,,\]
and
\[\eta^{(h)}\left[\left(\begin{array}{c} x_1\\ x_2 \end{array}\right), \left(\begin{array}{c} y_1\\ y_2 \end{array}\right) \right] = \dfrac{1}{2}\left(\begin{array}{c} y_1\\ y_2 \end{array}\right)-\dfrac{h}{2} \left(\begin{array}{c} x_1+l-y_2/3 \\ x_2+2y_1/3 \end{array}\right) + o(h)\,.\]
The functions $b$ and $\sigma$ are Lipschitz.
For the map $\eta^{(h)}$, Assumption (H4) is also verified, with $\alpha=1$.
Theorem \ref{Main} can therefore be applied to this system.
Therefore, we conclude that for all $\tau>0$ and all $Q_1(0)$, $P_1(0)$ in $\mathbb{R}$, the limit evolution on $\left[0,\tau \right]$ of the system is given almost surely by the solution of the stochastic differential equation
\[dX_t = \left( \begin{array}{c}
X_t^2\\
-(X^1_t +l)
\end{array} \right)dt +\left(\begin{array}{cc}
0 &0\\
1& 0
\end{array} \right)dW_t\,,\]
starting at time $0$ in $X_0 = \left(\begin{array}{c} Q_1(0)\\ P_1(0) \end{array}\right)$.
Note that this SDE is the equation of a harmonic oscillator perturbed by a Brownian noise. This kind of SDE was already considered in \cite{T} for instance.
\smallskip
Let us now focus on the asymptotic behaviour of this process. This stochastic differential equation has no stationary measure, since the unperturbed differential equation has no asymptotically stable point. Physically, the energy of the system whose evolution is governed by this SDE increases with time. This rise of energy can be explained by the fact that the repeated interactions bring energy to the system and there is no way to dissipate it. The next example shall be different.
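Before turning to it, the linear growth of the mean energy can be checked by a short It\^o computation (given here for illustration): setting $H_t := \frac{1}{2}\big((X^1_t+l)^2 + (X^2_t)^2\big)$, the SDE above gives
\[dH_t = (X^1_t+l)\,dX^1_t + X^2_t\,dX^2_t + \tfrac{1}{2}\,d\langle X^2 \rangle_t = \tfrac{1}{2}\,dt + X^2_t\,dW^1_t\,,\]
so that $\mathbb{E}\left[H_t\right] = \mathbb{E}\left[H_0\right] + t/2$ indeed grows linearly with time.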
\subsection{Damped Harmonic Oscillator}
The last example is the damped harmonic oscillator. The system undergoing repeated interactions evolves following the Markov chain $(X(nh))$ defined by
\[X(nh)=U^{(h)}(X((n-1)h), Y(nh))\,\]
where
\begin{align*}
U^{(h)}(X,Y)= X + \sigma(X) Y + h b(X) +h \eta^{(h)}(X,Y),
\end{align*}
with
\[b\left(\begin{array}{c} x_1\\ x_2 \end{array} \right) = \left(\begin{array}{c} x_2\\ -x_1 - f x_2 \end{array} \right), \hspace{5mm} \sigma \left(\begin{array}{c} x_1\\ x_2 \end{array} \right) =\left(\begin{array}{cc} 0& 0\\ 1& 0 \end{array} \right)\,.\]
The map $\eta^{(h)}$ is not explicitly expressed. But, as explained in Section 4, from Newton's law of motion, the higher derivatives of $P_1$ and $Q_1$ can be bounded in terms of $Q_1$, $P_1$, $Q_2$ and $P_2$. Then the required bound on $\eta^{(h)}$ can be found. Moreover, the maps $b$ and $\sigma$ are Lipschitz. Therefore Theorem \ref{Main} can be applied.
Hence, the limit evolution of the system is governed by the solution of the stochastic differential equation
\[\begin{cases}
d\, Q_1 = P_1\, dt\\
d\,P_1 = -( Q_1 + f\, P_1)\, dt + d\,W_t^1\,.
\end{cases}\]
Note that in all these examples, the states of the parts of the environment are sampled from the increments of a Brownian motion whose variance parameter is $1$.
A new parameter, which can be interpreted as the temperature of the environment, can be added by taking a variance $T$. Mathematically, it can be introduced by multiplying the states of the environment in the setup of repeated interactions by a factor $\sqrt{T}$.
Physically, the higher the temperature of the environment, the more the interaction with the environment influences the dynamics of the system.
The effect of this temperature is a change of the diffusion term. The new stochastic differential equation is
\[\begin{cases}
d\, Q_1 = P_1\, dt\\
d\,P_1 = -( Q_1 + f\, P_1)\, dt + \sqrt{T}\, d\,W_t^1\,.
\end{cases}\]
Then, we examine the asymptotic behaviour of the system.
This stochastic differential equation is a Langevin equation whose stationary measure is given by the Gibbs measure
$$ d\,\mu = \frac{e^{-f \frac{(Q_1)^2+(P_1)^2}{T}}}{\mathcal{Z}}\,d\,Q_1\,d\,P_1\,,$$
where $\mathcal{Z}$ is a normalizing constant.
Contrary to the previous example, the friction force allows the system to dissipate a part of its energy, which ensures the convergence of the dynamics to the stationary state (see \cite{MSH}). Physically, this convergence to the Gibbs measure can be understood as follows: the repeated interactions with the environment at temperature $T$ lead the system to the thermal state associated with the same temperature $T$. Finally, the system is thermalised by the environment.
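As a hedged numerical illustration (a minimal Euler--Maruyama sketch, not part of the argument; the parameter values, step size and horizon are arbitrary choices), one can simulate the equation above and compare the long-run empirical variance of $Q_1$ with the value $T/(2f)$ read off the Gibbs measure:
\begin{verbatim}
import numpy as np

# Euler--Maruyama sketch for dQ = P dt, dP = -(Q + f P) dt + sqrt(T) dW.
f, T, dt, n_steps, burn_in = 0.5, 2.0, 1e-3, 2_000_000, 200_000
rng = np.random.default_rng(0)
q_, p_ = 0.0, 0.0
qs = np.empty(n_steps)
for k in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    q_, p_ = q_ + p_ * dt, p_ - (q_ + f * p_) * dt + np.sqrt(T) * dW
    qs[k] = q_
print(np.var(qs[burn_in:]), T / (2 * f))  # empirical vs Gibbs variance
\end{verbatim}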
\section{Analysis of equilibrium points and their stability}
\subsection{Equilibrium points with weak injected fields}
\label{sec:weak-fields}
In this section, we study equilibrium points of
system~\eqref{eq:Martin-Regalado} (i.e., points $(E_\pm, N, n)$ at
which the right-hand side of~\eqref{eq:Martin-Regalado} vanishes)
under the assumption that the injected field $u$ is weak and constant
in time. Specifically, we consider injected fields $u$ of the form
\begin{equation}
\label{eq:lambda-uhat}
u = \lambda\widehat{u},
\end{equation}
where $\widehat{u}\in\mathbb{C}^2\setminus\{0\}$ is fixed and
$\lambda\in\mathbb{C}\setminus\{0\}$ is a small parameter, and we are
interested in the behavior of the equilibrium points as a function of
the parameter $\lambda$.
We assume without loss of generality that $\eta=1$, as this constant
can be incorporated in the injected field $u$. Then we can write
system~\eqref{eq:Martin-Regalado} in an equivalent form
\begin{subequations}
\label{eq:system}
\begin{align}
\frac{d}{dt} E(t) &= -\kappa\big( (1+i\alpha)\, X(N(t),n(t)) E(t) - u \big),\\
\frac{d}{dt}
\begin{bmatrix}
N(t) \\n(t)
\end{bmatrix}
&= -\gamma\left(Y(E(t))
\begin{bmatrix}
N(t)\\n(t)
\end{bmatrix}
-
\begin{bmatrix}
\mu\\0
\end{bmatrix}\right),
\end{align}
\end{subequations}
where $E(t) = (E_-(t),E_+(t))$ is a $\mathbb{C}^2$-valued function, and $X$
and $Y$ are matrix-valued functions defined for a vector
$z=(z_1,z_2)\in\mathbb{C}^2$ by
\begin{subequations}
\label{eq:XY}
\begin{align}
X(z) &:=
\begin{bmatrix}
1 - (z_1-z_2) & 0\\
0 & 1 - (z_1+z_2)
\end{bmatrix},\label{eq:X}\\
Y(z) &:=
\begin{bmatrix}
1+|z|^2 & |z_2|^2 - |z_1|^2\\
|z_2|^2 - |z_1|^2 & \delta + |z|^2
\end{bmatrix}\label{eq:Y}
\end{align}
\end{subequations}
(we use everywhere $(a_1,\ldots,a_n)$ as an alternative notation for a
column vector $\begin{bmatrix}a_1&\cdots&a_n\end{bmatrix}^T$). Above
$|\cdot|$ denotes the absolute value on $\mathbb{C}$ and norm on $\mathbb{C}^2$, and
$\delta := \gamma_s/\gamma>0$ is a dimensionless parameter. The
parameters satisfy ${\delta, \gamma, \kappa}\in(0,\infty)$,
$\alpha\in\mathbb{R}$, and $\mu>1$, and throughout this paper we take them to
be fixed, so that various constants explicit or implicit (as in the
little $o$-notation) in the equations below may depend on them.
\begin{figure}[t!]
\centering \input{gp_simu.tikz}
\caption[ODE-solution]{Time evolution of the slowly varying
amplitude $E(t)$ (in circularly polarized basis, blue lines) of an
electric field emitted by a laser in a case where the slowly
varying amplitude of an external electric field injected into the
laser is piecewise constant in time, and corresponding time
evolution of the parameters $N(t)$ and $n(t)$ (red lines) of the
laser.
The zero initial value at $t = -4$~ns was used, yet the solution
is plotted only for $t\ge 0$. In this figure, the injected field
$u(t) = \lambda(t)\widehat{u}(t)$ has been chosen so that
$\im(E_\pm(t)) = 0$ for real-valued initial values. Here
$\lambda(t)=0.25$ for $t\in[-4,0]$ and $t\in[8k, 4(2k+1))$,
$k\in\{0,1,2\}$, and $\lambda(t)=0.01$ otherwise, and
$\widehat{u}(t) = \sqrt{\mu-1}\, (\cos\theta(t),\sin\theta(t))$, where
$\theta(t)=\pi/6$ (corresponding to elliptical polarization) for
$t\in[-4, 8)$, $\theta(t)=\pi/4$ (linear polarization) for
$t\in[8, 16)$, and $\theta(t)=11\pi/24$ (nearly circular
polarization) for $t\in[16, 24)$. After every change in the
injected field $u$, the laser is seen to quickly stabilize at a
new equilibrium point. Black dotted lines correspond to the stable
equilibrium point $E^{(+\textsc{x})}_{\widehat{u}(t)}(\lambda(t))$ (cf.\
Theorems~\ref{thm:equilibrium-small-dynamics}
and~\ref{thm:stability}), i.e., they show values of $E$, $N$, and
$n$ of the laser after a successful injection locking.
In this figure, $\kappa=300$~ns$^{-1}$, $\mu=1.2$, $\alpha=0$,
$\gamma=1$~ns$^{-1}$, and $\delta=\gamma_s/\gamma=1.4$.}
\label{fig:ODE-solution}
\end{figure}
Figure~\ref{fig:ODE-solution} shows an example of a solution to
system~\eqref{eq:system} with an injected field $u$ that is piecewise
constant.\footnote{All numerical calculations in this article were
done with Julia~\cite{Julia-2017}. In Figure~\ref{fig:ODE-solution}
the suite
DifferentialEquations.jl~\cite{rackauckas2017differentialequations}
was used.} After every abrupt change of the injected field $u$, the
solution is seen to quickly settle at a new value (an equilibrium
point of the system).
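For the curious reader, here is a minimal explicit-Euler sketch in Python of system~\eqref{eq:system} with the parameters of Figure~\ref{fig:ODE-solution} and a constant injected field (an illustration only; as noted above, the figures themselves were computed in Julia):
\begin{verbatim}
import numpy as np

# Explicit Euler for system (eq:system) with a constant injected field u.
kappa, mu, alpha, gamma, delta = 300.0, 1.2, 0.0, 1.0, 1.4   # as in Fig. 1
u = 0.25 * np.sqrt(mu - 1) * np.array([np.cos(np.pi/6), np.sin(np.pi/6)])
E = np.zeros(2, dtype=complex)       # E = (E_-, E_+)
N, n = 0.0, 0.0
dt = 1e-5                            # ns; small because kappa is large
for _ in range(int(20.0 / dt)):      # integrate over 20 ns
    I_m, I_p = abs(E[0])**2, abs(E[1])**2
    X = np.array([[1 - (N - n), 0.0], [0.0, 1 - (N + n)]])
    Y = np.array([[1 + I_m + I_p, I_p - I_m],
                  [I_p - I_m, delta + I_m + I_p]])
    dE = -kappa * ((1 + 1j * alpha) * (X @ E) - u)
    dNn = -gamma * (Y @ np.array([N, n]) - np.array([mu, 0.0]))
    E = E + dt * dE
    N, n = N + dt * dNn[0], n + dt * dNn[1]
print(E, N, n)   # should settle near a stable equilibrium point
\end{verbatim}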
\begin{proposition}
\label{prop:system-solution}
For every initial value $(E_0,N_0,n_0)\in\mathbb{C}^2\times\mathbb{R}\times\mathbb{R}$,
there exists a unique maximal solution (i.e., a solution that has no
proper extension that is also a solution) to
system~\eqref{eq:system} satisfying the initial value at $t=0$. The
solution is global in forward time, that is, its domain includes
$[0,\infty)$.
\end{proposition}
\begin{proof}
A straightforward calculation shows that the right-hand side of
system~\eqref{eq:system} is locally Lipschitz, which implies that
for any given initial value, there exists a unique maximal solution
satisfying the value at $t=0$.
Consider an arbitrary maximal solution
$(E,N,n):I\to\mathbb{C}^2\times\mathbb{R}\times\mathbb{R}$, where $0\in I\subset\mathbb{R}$, and for
the sake of a contradiction assume that $[0,\infty)\not\subset
I$. If $\omega\in\mathbb{R}$ denotes the right endpoint of $I$, then
$\omega\notin I$ and either
\begin{equation}\label{eq:blow-up}
\lim_{
\substack{
t \to \omega,\\ t\in I}}
|E(t)| = \infty\text{ or }
\lim_{
\substack{
t \to \omega,\\ t\in I}}
\big|\big(N(t),n(t)\big)\big|=\infty
\end{equation}
(see \cite[Theorem~{7.6}]{MR1071170}).
Denote $\nu(t):=(N(t),n(t))\in\mathbb{R}^2$. The function $Y$ is uniformly
bounded from below, in the sense that there exists $c>0$ such that
for every $z\in\mathbb{C}^2$ and $y\in\mathbb{R}^2$ it holds that
\begin{equation}
\label{eq:Y-lower-bound}
Y(z)y\cdot y \ge c|y|^2.
\end{equation}
With~\eqref{eq:Y-lower-bound} we can estimate
\begin{equation*}
\begin{split}
\frac{1}{2}\Big(\frac{d}{dt}|\nu|^2\Big)(t)
&= \dot{\nu}(t)\cdot\nu(t)\\
&= \gamma\big(-Y(E(t))\nu(t)\cdot\nu(t) + \mu N(t)\big)\\
&\le C_1(1+|\nu(t)|^2),
\end{split}
\end{equation*}
where $0<t<\omega$ and $C_1\in\mathbb{R}$ is a constant. This inequality
together with Gr\"onwall's lemma yields $|\nu(t)| \le C_2$ for every
$0\le t<\omega$, where $C_2\ge 0$ is another constant.
The fact that $\nu$ is bounded on $[0,\omega)$ implies that the
function $t\mapsto X(\nu(t))$ is also bounded there. Then similar
reasoning as above (involving Gr\"onwall's lemma) shows that $E$ is
bounded on $[0,\omega)$. This contradicts~\eqref{eq:blow-up},
and therefore $[0,\infty)\subset I$.
\end{proof}
The following theorem is the main result of this section. Its essential
content is that with sufficiently weak injected fields of the form
$u=\lambda\widehat{u}$ system~\eqref{eq:system} has nine distinct
equilibrium points, and that the equilibrium points depend
continuously on $\lambda\in\mathbb{C}\setminus\{0\}$ with asymptotics given
by~\eqref{eq:E-approximations}. In the statement of the theorem, the
requirement that $\widehat{u}_-\neq 0$ and $\widehat{u}_+\neq 0$ means physically
that the field is not \emph{circularly polarized}, while
$|\widehat{u}_-|=|\widehat{u}_+|$ means that the field is \emph{linearly
polarized}. The function $y:\mathbb{R}^2\to\mathbb{R}^2$ is defined by
\begin{align}
y(x) &:= Y(x)^{-1}
\begin{bmatrix}
\mu\\0
\end{bmatrix}
= \frac{\mu}{\det Y(x)}
\begin{bmatrix}
\delta+|x|^2\\
x_1^2-x_2^2
\end{bmatrix}, \text{ where} \label{eq:def-y}\\
\det Y(x) &= \delta + (1+\delta)|x|^2+4x_1^2x_2^2>0\label{eq:detY}
\end{align}
(the function $Y$ is defined in~\eqref{eq:Y}).
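As a quick sanity check of~\eqref{eq:def-y} and~\eqref{eq:detY} (given for illustration only, with arbitrary parameter values), one can verify numerically that $Y(x)y(x)=(\mu,0)$:
\begin{verbatim}
import numpy as np

# Check that y(x) from (eq:def-y) solves Y(x) y(x) = (mu, 0).
mu, delta = 1.2, 1.4
x1, x2 = np.random.default_rng(2).normal(size=2)
detY = delta + (1 + delta) * (x1**2 + x2**2) + 4 * x1**2 * x2**2
y = mu / detY * np.array([delta + x1**2 + x2**2, x1**2 - x2**2])
Y = np.array([[1 + x1**2 + x2**2, x2**2 - x1**2],
              [x2**2 - x1**2, delta + x1**2 + x2**2]])
print(Y @ y)   # expected: approximately [mu, 0]
\end{verbatim}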
\begin{theorem}
\label{thm:equilibrium-small-dynamics}
Consider injected external field with amplitude $\lambda\widehat{u}$,
where $\lambda\in\mathbb{C}$ and $\widehat{u}=(\widehat{u}_-, \widehat{u}_+)\in\mathbb{C}^2$ satisfies
$\widehat{u}_-\neq 0$ and $\widehat{u}_+\neq 0$. There exists a constant
$\ell=\ell(\widehat{u})>0$ and a family $\{E_{\widehat{u}}^{(j)}\}_{j\in\mathcal{J}}$ of
nine continuous functions
\begin{equation}
\label{eq:E-lambda-j}
E_{\widehat{u}}^{(j)} : \{\lambda\in\mathbb{C} : 0<|\lambda|<\ell\}\to\mathbb{C}^2,\,
j\in\mathcal{J}
:=\{\textsc{0}, \pm\textsc{l},\pm\textsc{r}, \pm\textsc{x}, \pm\textsc{y}\},
\end{equation}
with pairwise distinct values that have the following properties:
\begin{enumerate}[(i)]
\item If in system~\eqref{eq:system} the injected field is of the
form $u=\lambda\widehat{u}$ with $0<|\lambda|<\ell$, then a triple
$(E, N, n)\in\mathbb{C}^2\times\mathbb{R}\times\mathbb{R}$ is an equilibrium point (a
time-independent solution) of the system, if and only if
\begin{equation*}
E = E_{\widehat{u}}^{(j)}(\lambda)\text{ for some } j\in\mathcal{J},
\text{ and } (N,n) = y(|E_-|,|E_+|).
\end{equation*}
\item The functions $E_{\widehat{u}}^{(j)}$ have following asymptotics as
$\lambda\to 0$:
\begin{subequations}
\label{eq:E-approximations}
\begin{align}
E_{\widehat{u}}^{(\textsc{0})}(\lambda)
&= e^{i\theta}\frac{\lambda}{|\lambda|}
\left( |\lambda| \widehat{w}^{(\textsc{0})}
+ o(\lambda) \right),\label{eq:E0-approximation}\\
%
E_{\widehat{u}}^{(\pm\textsc{l})}(\lambda)
&= e^{i\theta}\frac{\lambda}{|\lambda|}
\left( \pm\sqrt{\frac{\delta(\mu-1)}{1+\delta}}
\begin{bmatrix}
\widehat{u}_-/|\widehat{u}_-| \\
0
\end{bmatrix}
+ |\lambda|\,\,\widehat{w}^{(\textsc{l})}
+ o(\lambda) \right),\\
%
E_{\widehat{u}}^{(\pm\textsc{r})}(\lambda)
&= e^{i\theta}\frac{\lambda}{|\lambda|}
\left( \pm\sqrt{\frac{\delta(\mu-1)}{1+\delta}}
\begin{bmatrix}
0 \\
\widehat{u}_+/|\widehat{u}_+|
\end{bmatrix}
+ |\lambda|\,\widehat{w}^{(\textsc{r})}
+ o(\lambda)\right),\\
%
E_{\widehat{u}}^{(\pm\textsc{x})}(\lambda)
&= e^{i\theta}\frac{\lambda}{|\lambda|}
\left( \pm\sqrt{\frac{\mu-1}{2}}
\begin{bmatrix}
\widehat{u}_-/|\widehat{u}_-|\\
\widehat{u}_+/|\widehat{u}_+|
\end{bmatrix}
+|\lambda|\,\widehat{w}^{(\textsc{x})}
+ o(\lambda)\right),\\
%
E_{\widehat{u}}^{(\pm\textsc{y})}(\lambda)
&= e^{i\theta}\frac{\lambda}{|\lambda|}
\left(\pm\sqrt{\frac{\mu-1}{2}}
\begin{bmatrix}
\phantom{-}\widehat{u}_-/|\widehat{u}_-|\\
-\widehat{u}_+/|\widehat{u}_+|
\end{bmatrix}
+|\lambda|\,\widehat{w}^{(\textsc{y})}
+ o(\lambda)\right),
\end{align}
\end{subequations}
where $\theta := -\arg(1+i\alpha)$ and
\begin{align*}
\widehat{w}^{(\textsc{0})}
& := \frac{-1}{|1+i\alpha|(\mu-1)} \widehat{u}\\
%
\widehat{w}^{(\textsc{l})}
&:= \frac{1}{2|1+i\alpha|(\mu-1)}
\begin{bmatrix*}[r]
\mu\,\widehat{u}_-\\
-(1+\delta)\,\widehat{u}_+
\end{bmatrix*},\\
%
\widehat{w}^{(\textsc{r})}
&:= \frac{1}{2|1+i\alpha|(\mu-1)}
\begin{bmatrix*}[r]
-(1+\delta)\,\widehat{u}_-\\
\mu\,\widehat{u}_+
\end{bmatrix*},\\
%
\widehat{w}^{(\textsc{x})}
&:= \frac{1}{4|1+i\alpha|(\mu-1)}
\begin{bmatrix*}[r]
\big(2\mu+\delta-1 + (1-\delta)|\widehat{u}_+|/|\widehat{u}_-|\big)\,\widehat{u}_- \\
\big((1-\delta)|\widehat{u}_-|/|\widehat{u}_+| +
2\mu+\delta-1\big)\,\widehat{u}_+
\end{bmatrix*},\\
%
\widehat{w}^{(\textsc{y})}
&:= \frac{1}{4|1+i\alpha|(\mu-1)}
\begin{bmatrix*}[r]
\big(2\mu+\delta-1 + (\delta-1)|\widehat{u}_+|/|\widehat{u}_-|\big)\,\widehat{u}_- \\
\big((\delta-1)|\widehat{u}_-|/|\widehat{u}_+| +
2\mu+\delta-1\big)\,\widehat{u}_+
\end{bmatrix*}.
\end{align*}
\item Furthermore, if $|\widehat{u}_-|=|\widehat{u}_+|$ and
$j\in\{\textsc{0},\pm\textsc{x}\}$, then for every $\lambda$ with
$0<|\lambda|<\ell$ it holds that
\begin{equation*}
E_{\widehat{u}}^{(j)}(\lambda) = \rho^{(j)}(\lambda)\widehat{u}
\end{equation*}
for some $\rho^{(j)}(\lambda)\in\mathbb{C}$.
\end{enumerate}
\end{theorem}
\begin{remark}
As $\lambda\to 0$, the amplitude $E_{\widehat{u}}^{(\textsc{0})}(\lambda)$
vanishes, the amplitudes $E_{\widehat{u}}^{(\pm\textsc{l})}(\lambda)$ and
$E_{\widehat{u}}^{(\pm\textsc{r})}(\lambda)$ become left and right
circularly polarized, respectively, and the amplitudes
$E_{\widehat{u}}^{(\pm\textsc{x})}(\lambda)$ and
$E_{\widehat{u}}^{(\pm\textsc{y})}(\lambda)$ become linearly polarized and
orthogonal to each other. The index set $\mathcal{J}$ is chosen to reflect
this fact. Note that as $\lambda\to 0$, on the normalized Poincar\'e
sphere the amplitudes $E_{\widehat{u}}^{(\pm\textsc{x})}(\lambda)$
approach the projection of $\widehat{u}$ onto the equator, and the
amplitudes $E_{\widehat{u}}^{(\pm\textsc{y})}(\lambda)$ approach the
antipodal point of that projection.
\end{remark}
\begin{remark}
At the expense of a more complicated statement, the theorem can be
modified to hold also in the case $\widehat{u}_-=0$ or $\widehat{u}_+=0$. The
reason why this case is special is that if a point $(E_-,E_+,N,n)$
is an equilibrium point of system~\eqref{eq:system} with injected
field (say) $u=(0,\lambda\widehat{u}_+)$, then for every $\phi\in\mathbb{R}$ the
point $(e^{i\phi}E_-, E_+, N, n)$ is also an equilibrium point of the
system. Thus, instead of distinct equilibrium points, there
will be disjoint sets of equilibrium points. See also
Proposition~\ref{prop:algebraic-version},
Remark~\ref{remark:non-uniqueness}, and
Theorem~\ref{thm:solution-from-IVP} below.
\end{remark}
\begin{figure}[tp]
\centering \input{ax_E.tikz}
\caption[Equilibrium dynamics]{Values on the $\re(E_\pm)$-plane
(circularly polarized basis) of the slowly varying amplitude $E$
of an electric field of a laser at the stable and unstable
equilibrium points as the magnitude $\lambda$ of an external
optical injection $u=\lambda\widehat{u}$ varies.
For small $|\lambda|$ the laser has nine equilibrium points
(Theorem~\ref{thm:equilibrium-small-dynamics}). Solid lines denote
paths traced by real parts of the points when
$\widehat{u} = \sqrt{\mu-1}(\cos\theta, \sin\theta)$ and
$\theta = \pi/4$ (linear polarization), and
$\lambda\in[-1/4, 1/4]$ varies. In this figure, $\widehat{u}$ has been
chosen so that the equilibrium points are real-valued for
$\lambda\in\mathbb{R}$ and so that the intensity of the injected field
$u=\lambda\widehat{u}$ at $\lambda=1$ is equal to the intensity of the
emitted field $E$ of the free-running laser.
As $\lambda$ increases, the points move in the directions indicated
by the arrows. At $\lambda = -1/4$ only one of the points exists
(it is located at {(i)}). As $\lambda$ increases, eight new points
appear. First at $\lambda\approx -0.072$ two points appear at {(a)}
and start moving in opposite directions. At $\lambda\approx -0.071$
one of these points has moved to {(b)}, where it splits into three.
At $\lambda\approx -0.057$ two points appear at each {(c)}. The
circled dots denote locations of the points at $\lambda=0$. As
$\lambda$ grows, eight of the points disappear (at {(d)}
($\lambda\approx 0.057$), {(e)} ($\lambda\approx 0.071$), and {(f)}
($\lambda\approx 0.072$)). The paths were calculated from the
functions $h^{(j)}_{\widehat{r}}$ (cf. Figure~\ref{fig:h-solution})
via~\eqref{eq:E-N-n-u} and~\eqref{eq:u-from-r-phi}.
For $-1/4\le\lambda<0$ only the equilibrium point on the path from
{(i)} to {(ii)} is stable, for $0<\lambda\le 1/4$ the same is
true for the equilibrium point on the path from {(iii)} to {(iv)}
(cf.\ Figure~\ref{fig:stability}). Consequently, at $\lambda=0$,
the unique stable equilibrium point of the system jumps from
{(ii)} to {(iii)}.
The parameters $\kappa$, $\mu$, $\alpha$, $\gamma$, and $\delta$
are those of Figure~\ref{fig:ODE-solution}. The dotted and dashed
paths are interpreted analogously. In these paths
$\theta\in\{\pi/6, 11\pi/24\}$ (elliptical polarizations).}
\label{fig:paths}
\end{figure}
Figure~\ref{fig:paths} shows values of the nine equilibrium points
from Theorem~\ref{thm:equilibrium-small-dynamics} as the magnitude
$\lambda\in\mathbb{R}$ of an external optical injection $u=\lambda\widehat{u}$
varies. In the dimensionless units of system~\eqref{eq:system} the
intensity of the free-running laser, i.e., $|E|^2$ at a stable
equilibrium point of~\eqref{eq:system} when $u=0$, is $\mu-1$. In the
figure $\widehat{u}$ has been chosen so that at $|\lambda|=1$ the intensity
$|u|^2=|\widehat{u}|^2$ of the external injected field is also $\mu-1$. For
the laser parameters used in the figure, the injected field is
sufficiently weak in the sense of
Theorem~\ref{thm:equilibrium-small-dynamics}, namely, the nine
equilibrium points of the theorem exist if $|\lambda| < 0.057$,
i.e., if the injected field does not exceed in magnitude 5.7~\% of
the emitted field of the free-running laser. In
practice this value would depend also on experimental setup details
such as the coupling efficiency.
As a real-valued amplitude $E=(E_-,E_+)\in\mathbb{R}^2$ is linearly polarized
if and only if $E_-=\pm E_+$, it is seen from Figure~\ref{fig:paths}
that even if the injected field is linearly polarized, only three of
the nine equilibrium points have a linear state of polarization, while
the remaining six equilibrium points have an elliptical state of
polarization.
We prove Theorem~\ref{thm:equilibrium-small-dynamics} at the end of
this section after developing some preliminary results. We begin by
transforming the problem of finding equilibrium points of
system~\eqref{eq:system} from $\mathbb{C}^2\times\mathbb{R}\times\mathbb{R}$ into the problem of
finding solutions $x\in\mathbb{R}^2$ of a system of two bivariate
polynomial equations:
\begin{proposition}
\label{prop:algebraic-version}
Let $X$ and $y$ be the functions defined in~\eqref{eq:X}
and~\eqref{eq:def-y}.
\begin{enumerate}[(i)]
\item Fix a vector $r=(r_1,r_2)\in[0,\infty)\times[0,\infty)$, and
suppose $x=(x_1,x_2)\in\mathbb{R}^2$ satisfies
\begin{equation}
\label{eq:X-y-x-x-r}
X(y(x))x = r.
\end{equation}
Let $\phi_\pm\in\mathbb{R}$, and define a vector $E\in\mathbb{C}^2$ and numbers
${N,n}\in\mathbb{R}$ by
\begin{equation}
\label{eq:E-N-n-u}
E :=
\begin{bmatrix}
x_1\,e^{i\phi_-}\\
x_2\,e^{i\phi_+}
\end{bmatrix}\text{ and }
\begin{bmatrix}
N\\n
\end{bmatrix}
:= y(x).
\end{equation}
Then the triple $(E,N,n)$ is an equilibrium point of
system~\eqref{eq:system} with the injected electric field
\begin{equation}
\label{eq:u-from-r-phi}
u := (1+i\alpha)
\begin{bmatrix}
r_1\,e^{i\phi_-}\\
r_2\,e^{i\phi_+}
\end{bmatrix}.
\end{equation}
\item Suppose a triple $(E,N,n)\in\mathbb{C}^2\times\mathbb{R}\times\mathbb{R}$ is an
equilibrium point of system~\eqref{eq:system} with some injected
electric field $u\in\mathbb{C}^2$. Then there exist numbers
$\phi_\pm\in\mathbb{R}$ and vectors $r\in[0,\infty)\times[0,\infty)$ and
$x\in\mathbb{R}^2$ such that equations~\eqref{eq:X-y-x-x-r}
to~\eqref{eq:u-from-r-phi} hold.
\end{enumerate}
\end{proposition}
\begin{remark}
\label{remark:non-uniqueness}
An arbitrary field $u=(u_-,u_+)\in\mathbb{C}^2$ uniquely determines the
numbers $r_j\ge 0$ in~\eqref{eq:u-from-r-phi}. If $u_-\neq 0$ and
$u_+\neq 0$, then also the numbers $e^{i\phi_\pm}$ are uniquely
determined, and therefore a solution $x\in\mathbb{R}^2$
of~\eqref{eq:X-y-x-x-r} corresponds via~\eqref{eq:E-N-n-u} to a
unique equilibrium point of system~\eqref{eq:system}. But if (say)
$u_-=0$ and $x$ is a solution of~\eqref{eq:X-y-x-x-r} with
$x_1\neq 0$, then there exists a continuum of equilibrium points of
system~\eqref{eq:system} corresponding to $x$ due to the arbitrary
choice of $\phi_-\in\mathbb{R}$ in~\eqref{eq:u-from-r-phi}.
\end{remark}
\begin{proof}[Proof of Proposition~\ref{prop:algebraic-version}]
For a vector $\phi=(\phi_-,\phi_+)\in\mathbb{R}^2$ denote
\begin{equation*}
J_\phi :=
\begin{bmatrix}
e^{i\phi_-}&0\\
0&e^{i\phi_+}
\end{bmatrix}
\in\mathbb{C}^{2\times 2}.
\end{equation*}
Then for every $z\in\mathbb{C}^2$ the matrices $X(z)$ and $J_\phi$ commute,
and $Y(J_\phi z)=Y(z)$.
For proving the first part of the proposition assume that $x\in\mathbb{R}^2$
and $r\in[0,\infty)\times[0,\infty)$ satisfy~\eqref{eq:X-y-x-x-r},
and let $E = J_\phi x$, $(N,n)=y(x)$, and $u=(1+i\alpha)J_\phi r$ be
as in~\eqref{eq:E-N-n-u} and~\eqref{eq:u-from-r-phi}. Then
\begin{align*}
-\kappa\big((1+i\alpha)X(N,n)E-u\big)
&= -\kappa(1+i\alpha)J_\phi\big(X(y(x))x-r\big)
= 0,\text{ and}\\
-\gamma\left(Y(E)
\begin{bmatrix}
N\\n
\end{bmatrix}
-
\begin{bmatrix}
\mu\\0
\end{bmatrix}\right)
&= -\gamma\left(Y(x)Y(x)^{-1}
\begin{bmatrix}
\mu\\0
\end{bmatrix}
-
\begin{bmatrix}
\mu\\0
\end{bmatrix}
\right)
=0,
\end{align*}
so the point $(E,N,n)$ is an equilibrium point of
system~\eqref{eq:system} with the injected electric field $u$.
For proving the second part of the proposition assume a point
$(E,N,n)\in\mathbb{C}^2\times\mathbb{R}\times\mathbb{R}$ is an equilibrium point of
system~\eqref{eq:system} with injected electric field $u\in\mathbb{C}^2$,
and find vectors $x\in\mathbb{R}^2$ and $\phi=(\phi_-,\phi_+)\in\mathbb{R}^2$ such
that $E=J_\phi x$ and
\begin{equation}
\label{eq:re-is-pos}
\re\left(\frac{e^{-i\phi_\pm}u_\pm}{1+i\alpha}\right) \ge 0.
\end{equation}
Then from above and the definition of an equilibrium point it
follows that
\begin{equation*}
\begin{bmatrix}
\mu\\0
\end{bmatrix}
= Y(E)
\begin{bmatrix}
N\\n
\end{bmatrix}
= Y(x)
\begin{bmatrix}
N\\n
\end{bmatrix}
\text{ and }
u = (1+i\alpha)X(N,n)J_\phi x.
\end{equation*}
This implies $(N,n)=y(x)$, and consequently
$u = (1+i\alpha)J_\phi X(y(x))x$.
Now define $r:=X(y(x))x\in\mathbb{R}^2$. Then it only remains to show that
$r_j\ge 0$, but this follows from~\eqref{eq:re-is-pos}, since
$r=(1+i\alpha)^{-1}J_{-\phi}u$.
\end{proof}
\begin{proposition}
\label{prop:zeros-at-0}
A vector $x\in\mathbb{R}^2$ satisfies $X(y(x))x=0$, if and only if
$x = x^{(j)}$ for some $j\in\mathcal{J}$ (the index set $\mathcal{J}$ is defined
in~\eqref{eq:E-lambda-j}), where
\begin{subequations}
\label{eq:xk}
\begin{align}
x^{(\textsc{0})} &:=
\begin{bmatrix}
0\\0
\end{bmatrix},\\
x^{(\pm\textsc{l})} &:= \pm\sqrt{\frac{\delta(\mu-1)}{1+\delta}}
\begin{bmatrix}
1\\0
\end{bmatrix},\\
x^{(\pm\textsc{r})} &:= \pm\sqrt{\frac{\delta(\mu-1)}{1+\delta}}
\begin{bmatrix}
0\\1
\end{bmatrix},\\
x^{(\pm\textsc{x})} &:= \pm\sqrt{\frac{\mu-1}{2}}
\begin{bmatrix}
1\\1
\end{bmatrix},\\
x^{(\pm\textsc{y})} &:= \pm\sqrt{\frac{\mu-1}{2}}
\begin{bmatrix}
\phantom{-}1\\-1
\end{bmatrix}.
\end{align}
\end{subequations}
\end{proposition}
\begin{proof}
Suppose that $X(y(x))x=0$, or equivalently that
\begin{equation}
\label{eq:y-x-equality}
y_2(x)
\begin{bmatrix}
1 & \phantom{-}0 \\ 0 & -1
\end{bmatrix}
x = (y_1(x)-1)x,
\end{equation}
where $y(x) = (y_1(x), y_2(x))$. It follows that if
$x\neq x^{(\textsc{0})}$, then $|y_2(x)|=|y_1(x)-1|$.
Consider first the case $y_1(x)-1=y_2(x)\neq
0$. Then~\eqref{eq:y-x-equality} implies that $x$ is of the form
$(c,0)$ for some $c\in\mathbb{R}$. To find the possible values of $c$,
insert the candidate vector into $y_1(x)-1=y_2(x)$ and solve for
$c$. This shows that
$x\in \{{x^{(+\textsc{l})},x^{(-\textsc{l})}}\}$.
If $1-y_1(x)=y_2(x)\neq 0$, an analogous reasoning shows that then
$x\in \{{x^{(+\textsc{r})},x^{(-\textsc{r})}}\}$.
Consider the last case, namely $y_2(x)=0$ and $y_1(x)=1$. Then
$x_1^2=x_2^2$, and inserting $x=(x_1,\pm x_1)$ into $y_1(x)=1$ and
solving for $x_1$ shows that $x_1^2=x_2^2 = (\mu-1)/2$. Taking into
account all possible sign combinations yields
$x\in\{
{x^{(+\textsc{x})},x^{(-\textsc{x})},x^{(+\textsc{y})},x^{(-\textsc{y})}}
\}$.
On the other hand, a direct calculation shows that
$X(y(x^{(j)}))x^{(j)}=0$ for every $j\in\mathcal{J}$.
\end{proof}
Fix nonzero $\widehat{r}=(\widehat{r}_1,\widehat{r}_2)\in\mathbb{R}^2$ and define a function
\begin{equation}
\label{eq:def-F}
F_{\widehat{r}}(s,x) := X(y(x))x-s\widehat{r} \qquad(s\in\mathbb{R}, \, x\in\mathbb{R}^2).
\end{equation}
Our plan is to first find all zeros of $F_{\widehat{r}}(s,\cdot)$ for small
$s$, and then, assuming that the injected field $u$ in
system~\eqref{eq:system} is sufficiently weak, to convert these zeros
into equilibrium points of the system via
Proposition~\ref{prop:algebraic-version}.
The Jacobian matrix of $F_{\widehat{r}}$ with respect to $x$ will be denoted
by $D_x F_{\widehat{r}}(x)$ (as the Jacobian is independent of $s$, the
variable $s$ is suppressed from the notation). A straightforward calculation shows
that
\begin{equation}
\label{eq:DxF-expression}
D_x F_{\widehat{r}}(x)
= I_2 + \frac{1}{\det Y(x)^2}
\begin{bmatrix}
p_{11}(x) & p_{12}(x)\\
p_{21}(x) & p_{22}(x)
\end{bmatrix},
\end{equation}
where $I_2\in\mathbb{R}^{2\times 2}$ is the identity matrix,
\begin{align*}
p_{11}(x_1,x_2) &:= \mu(\delta+2x_2^2)(-\delta+(1+\delta)(x_1^2-x_2^2)+4x_1^2x_2^2),\\
p_{12}(x_1,x_2) &:= 2\mu(\delta-1)(\delta+2x_1^2)x_1x_2,\\
p_{21}(x_1,x_2) &:= p_{12}(x_2, x_1),\text{ and}\\
p_{22}(x_1,x_2) &:= p_{11}(x_2, x_1)
\end{align*}
(an expression for $\det Y(x)$ is given in \eqref{eq:detY}).
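For readers who wish to reproduce the computations of this section
numerically, a minimal Python sketch of $F_{\widehat{r}}$ and
$D_xF_{\widehat{r}}$ is given below. We do not repeat the
definitions~\eqref{eq:X} and~\eqref{eq:def-y} here; instead,
$X(y(x))x$ is written in the componentwise form consistent
with~\eqref{eq:detY}, with~\eqref{eq:DxF-expression}, and
with~\eqref{eq:scalar-implicit-fn} below, and the parameter values are
placeholders. The final assertion checks
Proposition~\ref{prop:zeros-at-0} numerically.
\begin{verbatim}
import numpy as np

mu, delta = 2.0, 0.5  # placeholder parameters (mu > 1, delta > 0)

def detY(x):
    # det Y(x) = delta + (1+delta)|x|^2 + 4 x1^2 x2^2, cf. (eq:detY)
    return delta + (1+delta)*(x[0]**2 + x[1]**2) + 4*x[0]**2*x[1]**2

def Xyx(x):
    # X(y(x))x, componentwise; for x = (eta, eta) this reduces to the
    # left-hand side of (eq:scalar-implicit-fn)
    D = detY(x)
    return np.array([x[0]*(1 - mu*(delta + 2*x[1]**2)/D),
                     x[1]*(1 - mu*(delta + 2*x[0]**2)/D)])

def F(s, x, rhat):
    # F_rhat(s, x), cf. (eq:def-F)
    return Xyx(x) - s*np.asarray(rhat)

def DxF(x):
    # Jacobian of F with respect to x, cf. (eq:DxF-expression)
    x1, x2 = x
    p11 = mu*(delta + 2*x2**2)*(-delta + (1+delta)*(x1**2 - x2**2)
                                + 4*x1**2*x2**2)
    p12 = 2*mu*(delta - 1)*(delta + 2*x1**2)*x1*x2
    p21 = 2*mu*(delta - 1)*(delta + 2*x2**2)*x1*x2  # p12, args swapped
    p22 = mu*(delta + 2*x1**2)*(-delta + (1+delta)*(x2**2 - x1**2)
                                + 4*x1**2*x2**2)
    return np.eye(2) + np.array([[p11, p12], [p21, p22]])/detY(x)**2

# spot check: the nine vectors x^(j) of (eq:xk) are zeros of F(0, .)
c = np.sqrt(delta*(mu - 1)/(1 + delta))
e = np.sqrt((mu - 1)/2)
zeros = [(0, 0), (c, 0), (-c, 0), (0, c), (0, -c),
         (e, e), (-e, -e), (e, -e), (-e, e)]
assert all(np.allclose(Xyx(np.array(z, float)), 0) for z in zeros)
\end{verbatim}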
\begin{proposition}
\label{prop:F-properties}
\begin{enumerate}[(i)]
\item The matrices $D_xF_{\widehat{r}}(x^{(j)})$, $j\in\mathcal{J}$, are
invertible, and
\begin{align*}
\big[D_xF_{\widehat{r}}(x^{(\textsc{0})})\big]^{-1}
&= -\frac{1}{\mu-1} I_2,\\
\big[D_xF_{\widehat{r}}(x^{(\pm\textsc{l})})\big]^{-1}
&= \frac{1}{2}\frac{1}{\mu-1}
\begin{bmatrix}
\mu & 0\\
0 & -(1+\delta)
\end{bmatrix},\\
\big[D_xF_{\widehat{r}}(x^{(\pm\textsc{r})})\big]^{-1}%
&=\frac{1}{2}\frac{1}{\mu-1}
\begin{bmatrix}
-(1+\delta) & 0\\
0 & \mu
\end{bmatrix},\\
\big[D_xF_{\widehat{r}}(x^{(\pm\textsc{x})})\big]^{-1}%
&=\frac{1}{4}\frac{1}{\mu-1}
\begin{bmatrix}
2\mu+\delta-1 & 1-\delta\\
1-\delta & 2\mu+\delta-1
\end{bmatrix},\\
\big[D_xF_{\widehat{r}}(x^{(\pm\textsc{y})})\big]^{-1}%
&=\frac{1}{4}\frac{1}{\mu-1}
\begin{bmatrix}
2\mu+\delta-1 & \delta-1\\
\delta-1 & 2\mu+\delta-1
\end{bmatrix}.
\end{align*}
\item For nonzero $x\in\mathbb{R}^2$ denote
\begin{equation*}
\widehat{x} := |x|^{-1} x\text{ and }
\widehat{x}_\perp := |x|^{-1}
\begin{bmatrix}
\phantom{-}x_2\\
-x_1
\end{bmatrix}.
\end{equation*}
Then
\begin{equation*}
X(y(x))x = |x|(a(x)\widehat{x} + b(x)\widehat{x}_\perp)
\qquad(x\in\mathbb{R}^2\setminus\{0\}),
\end{equation*}
where the following estimates hold for the functions
${a,b}:\mathbb{R}^2\setminus\{0\}\to\mathbb{R}$:
\begin{subequations} \label{eq:ab-estimates}
\begin{align}
0 \le 1 - a(x) &< \mu\,\min\left\{1, \frac{1}{|x|^2}\right\}
\text{, and }\label{eq:a-estimate}\\
|b(x)| &< \mu\,\min\left\{\frac{1}{1+\delta}, \frac{1}{(1+\delta)^{2/3}|x|^{2/3}}\right\}\label{eq:b-estimate}
\end{align}
\end{subequations}
(recall that $\mu>1$). In particular, $a(x)\to 1$ and $b(x)\to 0$
as $|x|\to\infty$.
\end{enumerate}
\end{proposition}
\begin{proof}
Inserting the value of $x^{(j)}$ from~\eqref{eq:xk} into the
expression~\eqref{eq:DxF-expression} of $D_xF_{\widehat{r}}$ and inverting
yields {(i)}.
For {(ii)}, consider a vector $x\in\mathbb{R}^2\setminus\{0\}$. A
calculation shows that
\begin{equation*}
1-a(x)
=\frac{\mu}{|x|^2}\,\frac{\delta|x|^2+4x_1^2x_2^2}{\delta + (1+\delta)|x|^2+4x_1^2x_2^2}
\in\left(0, \frac{\mu}{|x|^2}\right).
\end{equation*}
On the other hand, the above together with the inequality
$4x_1^2x_2^2/|x|^2\le|x|^2$ yields
\begin{equation*}
1-a(x)
= \mu \frac{\delta+4x_1^2x_2^2/|x|^2}{\delta+(1+\delta)|x|^2+4x_1^2x_2^2}
< \mu.
\end{equation*}
Inequality~\eqref{eq:a-estimate} is now proved.
Regarding the second inequality, note that
\begin{equation}
\label{eq:b-expression}
|b(x)|
= 2\mu\frac{|x_1x_2|}{|x|^2}\frac{|x_1^2-x_2^2|}{\delta + (1+\delta)|x|^2+4x_1^2x_2^2}.
\end{equation}
Let $c\ge 0$ be a parameter and consider two cases: If
$|x_1x_2| < c|x|/2$, then
\begin{equation}
\label{eq:b-1}
|b(x)| < \mu\frac{c}{|x|}\frac{1}{1+\delta}.
\end{equation}
If $|x_1x_2|\ge c|x|/2$, applying the inequality $2|x_1x_2|\le|x|^2$
to~\eqref{eq:b-expression} shows that
\begin{equation}
\label{eq:b-2}
|b(x)|
\le\mu\frac{|x_1^2-x_2^2|}{\delta+(1+\delta)|x|^2+c^2|x|^2}
< \mu\frac{1}{1+\delta+c^2}.
\end{equation}
Inequalities~\eqref{eq:b-1} and~\eqref{eq:b-2} hold for every
$c\ge0$. Choosing $c=0$ yields one part of~\eqref{eq:b-estimate},
choosing $c=(1+\delta)^{1/3}|x|^{1/3}$ yields the other part.
\end{proof}
\begin{proposition}
\label{prop:implicit-function-theorem}
There exists $\ell>0$ and smooth functions
$h^{(j)}_{\widehat{r}}:(-\ell,\ell)\to\mathbb{R}^2$, $j\in\mathcal{J}$, such that the
following holds: $h^{(j)}_{\widehat{r}}(0)=x^{(j)}$ for every
$j\in\mathcal{J}$, and if $s\in(-\ell,\ell)$, then
\begin{equation}
\label{eq:F-h-iff}
F_{\widehat{r}}(s,x) = 0,\text{ if and only if } x = h^{(j)}_{\widehat{r}}(s)\text{ for some } j\in\mathcal{J}.
\end{equation}
($F_{\widehat{r}}$ is defined in~\eqref{eq:def-F}, $x^{(j)}$
in~\eqref{eq:xk}, and $\mathcal{J}$ in~\eqref{eq:E-lambda-j}.) Furthermore,
if $\widehat{r}_1=\widehat{r}_2$ and $j\in\{\textsc{0}, \pm\textsc{x}\}$, then
$h_{\widehat{r}}^{(j)}$ is of the form
\begin{equation}
\label{eq:r1r2}
h^{(j)}_{\widehat{r}}(s) = (\eta^{(j)}(s), \eta^{(j)}(s))
\end{equation}
for some function $\eta^{(j)}:(-\ell,\ell)\to\mathbb{R}$.
\end{proposition}
\begin{proof}
Recall that $\widehat{r}\neq 0$ by assumption. By {(i)} of
Proposition~\ref{prop:F-properties} and the implicit function
theorem there exist neighborhoods $V^{(j)}\subset\mathbb{R}$ of $0\in\mathbb{R}$
and $W^{(j)}\subset\mathbb{R}^2$ of $x^{(j)}$ and smooth functions
$h^{(j)}_{\widehat{r}}:V^{(j)}\to W^{(j)}$ with
$h^{(j)}_{\widehat{r}}(0)=x^{(j)}$ such that $F_{\widehat{r}}(s,x)=0$
for $(s,x)\in V^{(j)}\times W^{(j)}$, if and only if
$x=h^{(j)}_{\widehat{r}}(s)$.
Regarding the other direction of~\eqref{eq:F-h-iff}, it is enough to
show that there exists $\ell>0$ such that
\begin{equation}
\label{eq:subset-condition}
(-\ell,\ell)\subset\bigcap_{j\in\mathcal{J}} V^{(j)}
\end{equation}
and that $F_{\widehat{r}}(s,x)= 0$ implies that either
$(s,x)\in\bigcup_{j\in\mathcal{J}} V^{(j)}\times W^{(j)}$ or $|s|\ge\ell$.
If a pair $(s,x)\in\mathbb{R}\times\mathbb{R}^2$ satisfies $F_{\widehat{r}}(s,x)=0$ and
$|x|>\sqrt{2\mu}$, then by the Pythagorean theorem (with the
notation of Proposition~\ref{prop:F-properties}) we have
\begin{equation*}
s^2|\widehat{r}|^2
= |X(y(x))x|^2
= |x|^2(a(x)^2+b(x)^2)
> \frac{|x|^2}{4},
\end{equation*}
where the last inequality holds because $a(x)>1/2$
by~\eqref{eq:a-estimate}. This implies that
$|s|>\sqrt{\mu}/(\sqrt{2}|\widehat{r}|)$, which together with the
continuity of $F_{\widehat{r}}$ shows that the set
\begin{equation*}
K:=
F_{\widehat{r}}^{-1}(\{0\})
\cap\left\{(s,x) : |s|\le\sqrt{\mu}/(\sqrt{2}|\widehat{r}|)\right\}
\cap\Big(\bigcup_{j\in\mathcal{J}} V^{(j)}\times W^{(j)}\Big)^\complement
\subset\mathbb{R}\times\mathbb{R}^2
\end{equation*}
is compact.
By Proposition~\ref{prop:zeros-at-0} the set $K$ and the closed set
$\{0\}\times\mathbb{R}^2$ are disjoint. Let $d>0$ be the distance between
those sets ($d=+\infty$ if $K=\emptyset$), and consider a pair
$(s,x)$ such that $|s|\le\sqrt{\mu}/(\sqrt{2}|\widehat{r}|)$ and
$F_{\widehat{r}}(s,x)=0$. Now if $(s,x)\in K$, then $|s|\ge d$, and if
$(s,x)\notin K$, then
$(s,x)\in\bigcup_{j\in\mathcal{J}} V^{(j)}\times W^{(j)}$. Consequently, if
we choose $\ell>0$ small enough so that~\eqref{eq:subset-condition}
and $\ell<\min\{d,\sqrt{\mu}/(\sqrt{2}|\widehat{r}|)\}$ hold,
then~\eqref{eq:F-h-iff} holds for every $|s|<\ell$.
Finally, if $\widehat{r}_1=\widehat{r}_2$ and $\eta\in\mathbb{R}$, then
$F_{\widehat{r}}(s, (\eta,\eta)) = 0$, if and only if
\begin{equation}
\label{eq:scalar-implicit-fn}
\eta\left(1-\frac{\mu(\delta+2\eta^2)}{\delta+2(1+\delta)\eta^2+4\eta^4}\right)
- s\widehat{r}_1 = 0.
\end{equation}
The implicit function theorem shows that in some neighborhoods of
$(0,0)\in\mathbb{R}\times\mathbb{R}$ and $(0,\pm\sqrt{(\mu-1)/2})\in\mathbb{R}\times\mathbb{R}$
equality~\eqref{eq:scalar-implicit-fn} implicitly defines
$\eta=\eta(s)$, and consequently, if $j\in\{\textsc{0}, \pm\textsc{x}\}$ and
$s$ is small enough, then $h^{(j)}_{\widehat{r}}(s)=(\eta(s),\eta(s))$.
\end{proof}
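As a numerical complement to the proof, the scalar
branches~\eqref{eq:r1r2} are easy to trace: the following sketch
(with placeholder parameters and $\widehat{r}_1=1$) follows the solution
$\eta(s)$ of~\eqref{eq:scalar-implicit-fn} through the initial point
$(0,\sqrt{(\mu-1)/2})$ with SciPy's Newton iteration, warm-starting
each step from the previous value of $s$.
\begin{verbatim}
import numpy as np
from scipy.optimize import newton

mu, delta = 2.0, 0.5  # placeholder parameters

def g(eta, s, r1):
    # left-hand side of (eq:scalar-implicit-fn)
    D = delta + 2*(1+delta)*eta**2 + 4*eta**4
    return eta*(1 - mu*(delta + 2*eta**2)/D) - s*r1

eta = np.sqrt((mu - 1)/2)       # branch through (0, +sqrt((mu-1)/2))
branch = []
for s in np.linspace(0.0, 0.2, 21):
    eta = newton(g, eta, args=(s, 1.0))
    branch.append((s, eta))
\end{verbatim}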
The following theorem shows that system~\eqref{eq:system} has at least
nine disjoint families of equilibrium points provided that the
injected field $u$ is weak enough. These families correspond to nine
distinct solutions of $F_{\widehat{r}}(s,\cdot)=0$, where $s>0$ is a fixed
parameter related to the strength of the field $u$. These solutions
can be found by solving an initial value problem for an ordinary
differential equation in $s$. As the initial value problem is easy to
solve numerically, the theorem provides a computational method for
obtaining numerical values for the nine families of equilibrium
points.
\begin{theorem}
\label{thm:solution-from-IVP}
Fix $\widehat{u}=(\widehat{u}_-,\widehat{u}_+)\in\mathbb{C}^2$ (with the possibility
$\widehat{u}_-=0$ or $\widehat{u}_+=0$ allowed), and consider
system~\eqref{eq:system} with $u=\lambda\widehat{u}$. Define
\begin{equation*}
\widehat{r} := \frac{1}{|1+i\alpha|}
\begin{bmatrix}
|\widehat{u}_-|\\
|\widehat{u}_+|
\end{bmatrix}
\in[0,\infty)\times[0,\infty)
\end{equation*}
and choose numbers $\phi_\pm\in\mathbb{R}$ such that
\begin{equation}
\label{eq:phi-choice}
\widehat{u}_\pm = |\widehat{u}_\pm|e^{i\phi_\pm}.
\end{equation}
Let $y:\mathbb{R}^2\to\mathbb{R}^2$ and $\mathcal{J}$ be as defined in~\eqref{eq:def-y}
and~\eqref{eq:E-lambda-j}, respectively, and define
$\theta:=-\arg(1+i\alpha)$.
Fix $j\in\mathcal{J}$. Suppose $I\subset\mathbb{R}$ is an interval containing the
origin and
\begin{equation*}
h = (h_1, h_2) : I\to\{x\in\mathbb{R}^2 : \det D_xF_{\widehat{r}}(x)\neq 0\}
\end{equation*}
is a solution to the initial value problem
\begin{subequations}
\label{eq:h-ode}
\begin{align}
\dot{h}(s) &= \big[D_xF_{\widehat{r}}(h(s))\big]^{-1}\widehat{r},\label{eq:ode}\\
h(0)&= x^{(j)}.\label{eq:initial-value}
\end{align}
\end{subequations}
Then for every $\lambda\in\mathbb{C}\setminus\{0\}$ such that
$|\lambda|\in I$ the triple $(E(\lambda),N(\lambda),n(\lambda))$
defined by
\begin{equation}
\label{eq:solution-from-IVP}
E(\lambda)
:= e^{i\theta}\frac{\lambda}{|\lambda|}
\begin{bmatrix}
h_1(|\lambda|)\,e^{i\phi_-}\\
h_2(|\lambda|)\,e^{i\phi_+}
\end{bmatrix}
\text{ and }
\begin{bmatrix}
N(\lambda)\\n(\lambda)
\end{bmatrix}
:= y(h(|\lambda|))
\end{equation}
is an equilibrium point of system~\eqref{eq:system} with injected
field $u=\lambda\widehat{u}$.
\end{theorem}
\begin{remark}
Initial value problem~\eqref{eq:h-ode} is straightforward to solve
numerically using the explicit expressions for $x^{(j)}$ and
$D_xF_{\widehat{r}}$ given in~\eqref{eq:xk} and~\eqref{eq:DxF-expression},
respectively. Therefore Theorem~\ref{thm:solution-from-IVP} provides
an easy method to trace the trajectories of the equilibrium points
$E^{(j)}_{\widehat{u}}$ starting from $\lambda=0$ for as long as
$|\lambda|$ is in the domain $I$ of existence of a solution
of~\eqref{eq:h-ode}. Also, the asymptotics of $E^{(j)}_{\widehat{u}}$ as
$\lambda\to 0$ immediately follow from the initial value
problem~\eqref{eq:h-ode}. On the other hand, if $I$ is a finite
interval, it may be possible to continue the trajectories even
beyond the interval $I$. In that case one can use numerical
continuation techniques, such as pseudo-arclength continuation, to
solve for the functions $h^{(j)}_{\widehat{r}}(s)$ in~\eqref{eq:F-h-iff} and
use them in~\eqref{eq:solution-from-IVP} instead (cf.\
Figure~\ref{fig:h-solution}).
\end{remark}
\begin{remark}
Note that if $\widehat{u}_-=0$, then every $\phi_-\in\mathbb{R}$
satisfies~\eqref{eq:phi-choice}, and each one of these yields an
equilibrium point when plugged into~\eqref{eq:solution-from-IVP}. If
$\widehat{u}_+=0$, an analogous statement holds for $\phi_+$.
\end{remark}
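As an illustration, the following Python sketch integrates the
initial value problem~\eqref{eq:h-ode} with SciPy; it is a
lightweight alternative to the BifurcationKit.jl computation behind
Figure~\ref{fig:h-solution} below. The parameters and the direction
$\widehat{r}$ are placeholders, and \texttt{DxF} implements the
Jacobian~\eqref{eq:DxF-expression} as in the earlier sketch.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

mu, delta = 2.0, 0.5                      # placeholder parameters
rhat = np.array([1.0, 1.0])/np.sqrt(2)    # placeholder direction

def DxF(x):
    # the Jacobian (eq:DxF-expression), as in the earlier sketch
    x1, x2 = x
    D = delta + (1+delta)*(x1**2 + x2**2) + 4*x1**2*x2**2
    p11 = mu*(delta + 2*x2**2)*(-delta + (1+delta)*(x1**2 - x2**2)
                                + 4*x1**2*x2**2)
    p12 = 2*mu*(delta - 1)*(delta + 2*x1**2)*x1*x2
    p21 = 2*mu*(delta - 1)*(delta + 2*x2**2)*x1*x2
    p22 = mu*(delta + 2*x1**2)*(-delta + (1+delta)*(x2**2 - x1**2)
                                + 4*x1**2*x2**2)
    return np.eye(2) + np.array([[p11, p12], [p21, p22]])/D**2

def ode(s, h):
    # right-hand side of (eq:ode)
    return np.linalg.solve(DxF(h), rhat)

x0 = np.sqrt((mu - 1)/2)*np.array([1.0, 1.0])   # x^(+X), cf. (eq:xk)
sol = solve_ivp(ode, (0.0, 0.25), x0, dense_output=True, rtol=1e-10)
h = sol.sol(0.1)   # h^(+X)(0.1); E, N and n then follow from
                   # (eq:solution-from-IVP)
\end{verbatim}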
\begin{figure}[t]
\centering \input{ax_h.tikz}
\caption[Solutions $h$]{Paths traced by solutions
$h^{(j)}_{\widehat{r}}(s)$, $j\in\mathcal{J}$, of equation~\eqref{eq:F-h-iff} as
$s\ge 0$ increases. Black dots denote the initial values
$h^{(j)}_{\widehat{r}}(0)=x^{(j)}$. The paths were solved with
BifurcationKit.jl~\cite{veltz:hal-02902346}.
Black lines denote the complement of the domain of the initial value
problem~\eqref{eq:h-ode}. A solution of~\eqref{eq:h-ode} with
initial value $x^{(j)}$ coincides with $h^{(j)}_{\widehat{r}}(s)$ for as
long as it does not hit the boundary of the domain (at which point
the right-hand side of~\eqref{eq:ode} ceases to exist). This means
that the solution of~\eqref{eq:h-ode} starting from
$x^{(-\textsc{x})}$ follows the blue path up to the point where
the path first crosses the black line, and then ends there. All
other paths can be solved in full from the initial value
problem~\eqref{eq:h-ode}.
The solutions $h^{(j)}_{\widehat{r}}$ correspond
via~\eqref{eq:solution-from-IVP} to the equilibrium points
(depicted in Figure~\ref{fig:paths}) of a laser with injected
external optical field. The parameters used are those of
Figures~\ref{fig:ODE-solution} and~\ref{fig:paths}.}
\label{fig:h-solution}
\end{figure}
\begin{proof}[Proof of Theorem~\ref{thm:solution-from-IVP}]
Let $h:I\to\mathbb{R}^2$ solve~\eqref{eq:h-ode}. Then by~\eqref{eq:ode} and
the chain rule
\begin{equation}
\label{eq:F-derivative}
\frac{d}{ds} F_{\widehat{r}}(s,h(s)) =
\begin{bmatrix}
-\widehat{r} & D_xF_{\widehat{r}}(h(s))
\end{bmatrix}
\begin{bmatrix}
1 \\
\dot{h}(s)
\end{bmatrix}
= 0,
\end{equation}
so the map $I\ni s\mapsto F_{\widehat{r}}(s,h(s))$ is constant, and
by~\eqref{eq:initial-value} and Proposition~\ref{prop:zeros-at-0}
the constant is zero.
Consider $\lambda\neq 0$ such that $|\lambda|\in I$, and choose
$\phi'_\pm\in\mathbb{R}$ such that
\begin{equation*}
e^{i\phi'_\pm} = \frac{\lambda}{|\lambda|}e^{i(\theta+\phi_\pm)}.
\end{equation*}
Because $F_{\widehat{r}}(|\lambda|, h(|\lambda|)) = 0$, it follows that
$x:=h(|\lambda|)$ satisfies $X(y(x))x=|\lambda|\widehat{r}$. Therefore by
Proposition~\ref{prop:algebraic-version} the triple
$(E(\lambda),N(\lambda),n(\lambda))$ with
\begin{equation}
\label{eq:solution-from-IVP-2}
E(\lambda) :=
\begin{bmatrix}
x_1\, e^{i\phi'_-}\\
x_2\, e^{i\phi'_+}
\end{bmatrix}
\text{ and }
\begin{bmatrix}
N(\lambda)\\n(\lambda)
\end{bmatrix}
:= y(x)
\end{equation}
is an equilibrium point of system~\eqref{eq:system} with injected
field
\begin{equation}
\label{eq:lambda-v}
u := (1+i\alpha)
|\lambda|
\begin{bmatrix}
\widehat{r}_1\,e^{i\phi'_-}\\
\widehat{r}_2\,e^{i\phi'_+}
\end{bmatrix}.
\end{equation}
Noticing that~\eqref{eq:solution-from-IVP}
and~\eqref{eq:solution-from-IVP-2} coincide and that the right-hand
side of~\eqref{eq:lambda-v} is equal to $\lambda\widehat{u}$ finishes the
proof.
\end{proof}
We are now ready to prove
Theorem~\ref{thm:equilibrium-small-dynamics}:
\begin{proof}[Proof of Theorem~\ref{thm:equilibrium-small-dynamics}]
We will first prove that there exists a constant $\ell>0$ and nine
continuous functions $E^{(j)}_{\widehat{u}}$, $j\in\mathcal{J}$, that are of the
form~\eqref{eq:E-lambda-j}, for which the points $(E,N,n)$ with
\begin{equation}
\label{eq:E-N-n}
E=E^{(j)}_{\widehat{u}}(\lambda)\text{ and }(N,n)=y(|E_-|,|E_+|)
\end{equation}
are equilibrium points of system~\eqref{eq:system} with
$u=\lambda\widehat{u}$, and that satisfy the
asymptotics~\eqref{eq:E-approximations} as $\lambda\to 0$.
Define
\begin{equation}
\label{eq:r-hat-final}
\widehat{r} := \frac{1}{|1+i\alpha|}
\begin{bmatrix}
|\widehat{u}_-|\\
|\widehat{u}_+|
\end{bmatrix}
\in(0,\infty)\times(0,\infty),
\end{equation}
and let $\ell>0$ be the constant and
$h_{\widehat{r}}^{(j)}:(-\ell,\ell)\to\mathbb{R}^2$, $j\in\mathcal{J}$, the smooth
functions from
Proposition~\ref{prop:implicit-function-theorem}. Define
\begin{equation}
\label{eq:E-lambda}
E_{\widehat{u}}^{(j)}(\lambda)
:= \frac{\lambda}{|\lambda|}e^{i\theta}
\begin{bmatrix}
\frac{\widehat{u}_-}{|\widehat{u}_-|} & 0\\
0 & \frac{\widehat{u}_+}{|\widehat{u}_+|}
\end{bmatrix}
h^{(j)}_{\widehat{r}}(|\lambda|)
\qquad(j\in\mathcal{J}, 0<|\lambda|<\ell).
\end{equation}
Note that if $|\widehat{u}_-|=|\widehat{u}_+|$, then $\widehat{r}_1=\widehat{r}_2$, and for
$j\in\{0, \pm\textsc{x}\}$ it follows from~\eqref{eq:E-lambda}
and~\eqref{eq:r1r2} that
$E_{\widehat{u}}^{(j)}(\lambda) = \rho(\lambda)\widehat{u}$ for some
$\rho(\lambda)\in\mathbb{C}$.
Fix $j\in\mathcal{J}$. If $0<|\lambda|<\ell$, then
$x:=h^{(j)}_{\widehat{r}}(|\lambda|)$ satisfies $X(y(x))x=|\lambda|\widehat{r}$,
and therefore from Proposition~\ref{prop:algebraic-version} it
follows that a point $(E,N,n)$ defined by~\eqref{eq:E-N-n} is an
equilibrium point of system~\eqref{eq:system} with
\begin{equation*}
u = (1+i\alpha)\frac{\lambda}{|\lambda|}e^{i\theta}
\begin{bmatrix}
\frac{\widehat{u}_-}{|\widehat{u}_-|} & 0\\
0 & \frac{\widehat{u}_+}{|\widehat{u}_+|}
\end{bmatrix}
|\lambda|\widehat{r} = \lambda\widehat{u}.
\end{equation*}
Because the function $h^{(j)}_{\widehat{r}}$ is differentiable, it holds
that
\begin{equation}
\label{eq:h-asymptotic}
h^{(j)}_{\widehat{r}}(s)
= h^{(j)}_{\widehat{r}}(0) + s\cdot\frac{d}{ds}h^{(j)}_{\widehat{r}}(0) + o(s)
\text{ as } s\to 0.
\end{equation}
The function $s\mapsto F_{\widehat{r}}(s,h^{(j)}_{\widehat{r}}(s))$ vanishes
identically, so differentiating it and simplifying
(see~\eqref{eq:F-derivative}) gives
\begin{equation*}
D_xF_{\widehat{r}}(h^{(j)}_{\widehat{r}}(s))\frac{d}{ds}h^{(j)}_{\widehat{r}}(s)
=\widehat{r},
\end{equation*}
which by Proposition~\ref{prop:F-properties} can be solved at $s=0$
to yield
\begin{equation}
\label{eq:ds}
\frac{d}{ds}h^{(j)}_{\widehat{r}}(0) = \big[D_xF_{\widehat{r}}(x^{(j)})\big]^{-1}\widehat{r}.
\end{equation}
The matrix $[D_xF_{\widehat{r}}(x^{(j)})]^{-1}$ in~\eqref{eq:ds} was
calculated in
Proposition~\ref{prop:F-properties}. Substituting~\eqref{eq:ds} and
the value of $h^{(j)}_{\widehat{r}}(0)=x^{(j)}$ from
Proposition~\ref{prop:zeros-at-0} into~\eqref{eq:h-asymptotic}, and
then inserting the resulting expression into~\eqref{eq:E-lambda},
shows that the function $E^{(j)}_{\widehat{u}}$ satisfies
asymptotics~\eqref{eq:E-approximations} as $\lambda\to 0$. It then
follows from~\eqref{eq:E-approximations} and the continuity of
$E_{\widehat{u}}^{(j)}$ that by decreasing $\ell>0$ if necessary, the
family $\{E_{\widehat{u}}^{(j)}\}_{j\in\mathcal{J}}$ of functions can be made to
have pairwise distinct values.
It only remains to prove that if a triple $(E,N,n)$ is an
equilibrium point of system~\eqref{eq:system} with injected field
$\lambda\widehat{u}$, where $0<|\lambda|<\ell$, then $E=E_{\widehat{u}}^{(j)}(\lambda)$
for some $j\in\mathcal{J}$, and $(N,n)=y(|E_-|,|E_+|)$. To that end, consider
an arbitrary equilibrium point $(E,N,n)$ of system~\eqref{eq:system}
with $u=\lambda\widehat{u}$, where $0<|\lambda|<\ell$. By
Proposition~\ref{prop:algebraic-version} there exist $x\in\mathbb{R}^2$,
$r\in[0,\infty)\times[0,\infty)$ and $\phi_\pm\in\mathbb{R}$ such that
\begin{subequations}
\begin{align}
X(y(x))x &= r,\label{eq:necessary-1}\\
E &=
\begin{bmatrix}
x_1\,e^{i\phi_-}\\
x_2\,e^{i\phi_+}
\end{bmatrix}
,\label{eq:necessary-2}\\
\begin{bmatrix}
N\\n
\end{bmatrix}
& = y(x),\text{ and}\label{eq:necessary-3}\\
\lambda\widehat{u} &= (1+i\alpha)
\begin{bmatrix}
r_1\, e^{i\phi_-}\\
r_2\, e^{i\phi_+}
\end{bmatrix}
\label{eq:necessary-4}.
\end{align}
\end{subequations}
Equalities~\eqref{eq:necessary-2} and~\eqref{eq:necessary-3} imply
that $(N,n)=y(|E_-|,|E_+|)$. Also, the nonnegativity of the components
of $r$ together with~\eqref{eq:r-hat-final} and~\eqref{eq:necessary-4}
implies that $r=|\lambda|\widehat{r}$. Then~\eqref{eq:necessary-1} implies
that $F_{\widehat{r}}(|\lambda|,x)=0$, so $x=h^{(j)}_{\widehat{r}}(|\lambda|)$
for some $j\in\mathcal{J}$ by
Proposition~\ref{prop:implicit-function-theorem}. Finally, dividing
the components of~\eqref{eq:necessary-4} by their modulus shows that
\begin{equation*}
\frac{\lambda}{|\lambda|}\frac{\widehat{u}_\pm}{|\widehat{u}_\pm|}
= \frac{1+i\alpha}{|1+i\alpha|}e^{i\phi_\pm} = e^{-i\theta}e^{i\phi_\pm}.
\end{equation*}
Solving for $e^{i\phi_\pm}$ and inserting these values
into~\eqref{eq:necessary-2} shows that $E$ is equal to the
right-hand side of~\eqref{eq:E-lambda}.
\end{proof}
\subsection{Stability of equilibrium points with weak injected fields}
\label{sec:stability}
\begin{figure}[tp]
\centering \input{ax_sigma.tikz}
\caption[Stability]{Linear stability analysis of equilibrium points
of a laser subject to external optical injection
$u=\lambda\widehat{u}$. Each line represents an equilibrium point, and
$\max\{\re\sigma(Df)\}$ denotes the maximum real part of
eigenvalues of the linearized system at an equilibrium point. A
positive value indicates that the equilibrium point is unstable,
while a negative value indicates that the equilibrium point is
asymptotically stable. The parameters used, the color and style
of the lines, as well as the labels {(a)}--{(f)} and {(i)}--{(iv)}
match those of Figure~\ref{fig:paths}.
At $\lambda=0$ the blue line from {(i)} to {(f)} and the red line
from {(a)} to {(iv)} change signs; this corresponds to a jump of
the stable equilibrium point from {(ii)} to {(iii)} in
Figure~\ref{fig:paths}. The other lines with shorter intervals of
existence are positive for all $\lambda\neq 0$; they are displayed
on the axis on the right-hand side (note the different scales on
the $\lambda$-axes).}
\label{fig:stability}
\end{figure}
In this section, we consider stability properties of the nine
equilibrium points from
Theorem~\ref{thm:equilibrium-small-dynamics}. We will prove in
Theorem~\ref{thm:stability} below that if $\alpha=0$ and the injected
field $u$ in system~\eqref{eq:system} is sufficiently weak, then the
system has exactly one \emph{asymptotically stable} (in the sense of
Lyapunov) equilibrium point, while the remaining equilibrium points
are \emph{unstable} (for the definitions of asymptotic stability and
instability of an equilibrium point, we refer the reader
to~\cite{MR1071170}).
By splitting the complex-valued functions $E_\pm(t)$ into their real
and imaginary parts, i.e., writing
$E_\pm(t)= E_\pm^{(\mathrm{re})}(t) + iE_\pm^{(\mathrm{im})}(t)$ with
${E_\pm^{(\mathrm{re})}(t), E_\pm^{(\mathrm{im})}(t)}\in\mathbb{R}$, we can
write system~\eqref{eq:system} in terms of real-valued functions as
\begin{equation}
\label{eq:real-system}
\frac{d}{dt}(E_-^{(\mathrm{re})},E_+^{(\mathrm{re})},E_-^{(\mathrm{im})},E_+^{(\mathrm{im})},N,n)
= f(E_-^{(\mathrm{re})},E_+^{(\mathrm{re})},E_-^{(\mathrm{im})},E_+^{(\mathrm{im})},N,n),
\end{equation}
where the function $f:\mathbb{R}^6\to\mathbb{R}^6$ is determined by
system~\eqref{eq:system}. A calculation shows that $Df$, the Jacobian
matrix of $f$, is given by the block matrix
\begin{equation}
\begin{split}
&Df(E_-^{(\mathrm{re})},E_+^{(\mathrm{re})},E_-^{(\mathrm{im})},E_+^{(\mathrm{im})},N,n) =\\
&-\!\!
\begin{bmatrix}
\label{eq:Df}
\kappa X(N,n)&-\alpha\kappa X(N,n)&-\kappa(F^{(\mathrm{re})}-\alpha F^{(\mathrm{im})})\\
\alpha\kappa X(N,n)&\kappa X(N,n)&-\kappa (\alpha F^{(\mathrm{re})}+F^{(\mathrm{im})})\\
2\gamma(F^{(\mathrm{re})})^T(I_2-X(N,n))&2\gamma(F^{(\mathrm{im})})^T(I_2-X(N,n))&\gamma
Y(E)
\end{bmatrix}\!\!,
\end{split}
\end{equation}
where the superscript $T$ denotes the transpose of a matrix, and
\begin{equation*}
F^{(j)} :=
\begin{bmatrix*}[r]
E_-^{(j)} & -E_-^{(j)}\\
E_+^{(j)} & E_+^{(j)}
\end{bmatrix*}
\qquad(j\in\{{\mathrm{im},\mathrm{re}}\}).
\end{equation*}
We proved in Theorem~\ref{thm:solution-from-IVP} a method for
calculating numerical values for the nine equilibrium points of
system~\eqref{eq:system} from
Theorem~\ref{thm:equilibrium-small-dynamics}. Inserting the value of
an equilibrium point into the expression~\eqref{eq:Df} for $Df$ and
finding the eigenvalues of the so obtained $6\times 6$-matrix is an
easy numerical method to test the stability of the equilibrium
point. Recall that if all the eigenvalues of $Df$ at an equilibrium
point have strictly negative real parts, then the equilibrium point is
asymptotically stable, while if at least one of the eigenvalues has a
strictly positive real part, then the equilibrium point is
unstable~\cite{MR1071170}. Only if none of the eigenvalues have
strictly positive real parts but at least one of them has real part
equal to zero, then this test for stability is inconclusive.
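In code, the test amounts to a single call to a dense eigenvalue
routine. A minimal sketch, assuming the $6\times 6$ matrix has already
been assembled according to~\eqref{eq:Df}:
\begin{verbatim}
import numpy as np

def max_re_eig(Df):
    # largest real part among the eigenvalues of the Jacobian (eq:Df);
    # < 0: asymptotically stable, > 0: unstable, = 0: inconclusive
    return np.linalg.eigvals(Df).real.max()
\end{verbatim}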
In Figure~\ref{fig:stability} we have used the above test to determine
the stability of the equilibrium points in Figure~\ref{fig:paths}. As
illustrated in Figure~\ref{fig:stability}, for each
$\lambda\in[-1/4,1/4]\setminus\{0\}$ and each $\widehat{u}$ of
Figure~\ref{fig:paths}, exactly one of the nine equilibrium points
corresponding to the injected field $u=\lambda\widehat{u}$ is
asymptotically stable, while the others are unstable.
\begin{lemma}
\label{lemma:eigs-of-Df}
Assume that $\alpha=0$ in system~\eqref{eq:system}, and consider the
Jacobian matrix $Df$ of the corresponding
system~\eqref{eq:real-system}. (An expression for $Df$ is given
in~\eqref{eq:Df}.)
\begin{enumerate}[(i)]
\item For arbitrary numbers
${E_\pm^{(\mathrm{re})}, E_\pm^{(\mathrm{im})}, N, n}\in\mathbb{R}$, and
for the matrix
\begin{equation*}
Df = Df(E_-^{(\mathrm{re})},E_+^{(\mathrm{re})},E_-^{(\mathrm{im})},E_+^{(\mathrm{im})},N,n)
\end{equation*}
the following hold:
\begin{align}
(E_-^{(\mathrm{im})}, 0, -E_-^{(\mathrm{re})}, 0, 0, 0)&\in\ker(Df+\kappa [1-(N-n)]I_6)\text{ and}
\label{eq:eigs-of-Df-1}\\
(0, E_+^{(\mathrm{im})}, 0, -E_+^{(\mathrm{re})}, 0, 0)&\in\ker(Df+\kappa [1-(N+n)]I_6).
\label{eq:eigs-of-Df-2}
\end{align}
\item Let $\theta_1$ and $\theta_2$ be the two roots of the
polynomial
\begin{equation}
\label{eq:eigs-of-Df-p1}
s^2 + \gamma\mu s + 2\kappa\gamma(\mu-1),
\end{equation}
and let $\theta_3$ and $\theta_4$ be the two roots of the
polynomial
\begin{equation}
\label{eq:eigs-of-Df-p2}
s^2 + \gamma(\delta+\mu-1)s + 2\kappa\gamma(\mu-1).
\end{equation}
Furthermore, let
${E_\pm^{(\mathrm{re})},E_\pm^{(\mathrm{im})}}\in\mathbb{R}$ be any
numbers such that
\begin{equation*}
|E_-^{(\mathrm{re})}+iE_-^{(\mathrm{im})}|=|E_+^{(\mathrm{re})}+iE_+^{(\mathrm{im})}|=\sqrt{
\frac{\mu-1}{2}}.
\end{equation*}
Then for the matrix
\begin{equation*}
Df = Df(E_-^{(\mathrm{re})},E_+^{(\mathrm{re})},E_-^{(\mathrm{im})},E_+^{(\mathrm{im})},1,0)
\end{equation*}
the following holds:
\begin{align}
(E_-^{(\mathrm{re})}, E_+^{(\mathrm{re})}, E_-^{(\mathrm{im})}, E_+^{(\mathrm{im})}, \theta_j/\kappa, 0)
&\in\ker(Df-\theta_jI_6), \, j = 1, 2,\text{ and}
\label{eq:eigs-of-Df-3}\\
(-E_-^{(\mathrm{re})}, E_+^{(\mathrm{re})}, -E_-^{(\mathrm{im})}, E_+^{(\mathrm{im})}, 0, \theta_j/\kappa)
&\in\ker(Df-\theta_jI_6), \, j = 3, 4.
\label{eq:eigs-of-Df-4}
\end{align}
\end{enumerate}
\end{lemma}
\begin{proof}
The straightforward calculation using expression~\eqref{eq:Df} for
$Df$ is omitted.
\end{proof}
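Numerically, the roots $\theta_1,\dots,\theta_4$ are obtained directly
from the coefficients of~\eqref{eq:eigs-of-Df-p1}
and~\eqref{eq:eigs-of-Df-p2}. The following sketch (with placeholder
parameter values) also confirms that their real parts are strictly
negative, a fact used in the proof of Theorem~\ref{thm:stability}
below.
\begin{verbatim}
import numpy as np

kappa, gamma, mu, delta = 100.0, 1.0, 2.0, 0.5  # placeholder values

theta12 = np.roots([1.0, gamma*mu, 2*kappa*gamma*(mu - 1)])
theta34 = np.roots([1.0, gamma*(delta + mu - 1),
                    2*kappa*gamma*(mu - 1)])
assert all(t.real < 0 for t in np.concatenate([theta12, theta34]))
\end{verbatim}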
Given $\widehat{u}\in\mathbb{C}^2$ such that $\widehat{u}_-\neq 0$ and $\widehat{u}_+\neq 0$,
let $E_{\widehat{u}}^{(j)}$, $j\in\mathcal{J}$, be the functions from
Theorem~\ref{thm:equilibrium-small-dynamics}. By {(ii)} of
Proposition~\ref{prop:algebraic-version}, if
$(E,N,n)\in\mathbb{C}^2\times\mathbb{R}\times\mathbb{R}$ is an equilibrium point of
system~\eqref{eq:system}, then $(N,n)=y(|E_-|,|E_+|)$. Therefore we
can define functions
\begin{equation*}
\lambda\mapsto N_{\widehat{u}}^{(j)}(\lambda),\,
\lambda\mapsto n_{\widehat{u}}^{(j)}(\lambda),\text{ and }
\lambda\mapsto(Df)_{\widehat{u}}^{(j)}(\lambda)
\end{equation*}
in a punctured neighborhood of the origin of the complex plane by
requiring that the point
$(E_{\widehat{u}}^{(j)}(\lambda),N_{\widehat{u}}^{(j)}(\lambda),n_{\widehat{u}}^{(j)}(\lambda))\in\mathbb{C}^2\times\mathbb{R}\times\mathbb{R}$
is an equilibrium point of system~\eqref{eq:system}, and that
$(Df)_{\widehat{u}}^{(j)}(\lambda)$ is the Jacobian matrix of
system~\eqref{eq:real-system} at that point. In other words, if
$\lambda\neq 0$ is sufficiently small and
${E_\pm^{(\mathrm{re})},E_\pm^{(\mathrm{im})}}\in\mathbb{R}$ are such that
$E_{\widehat{u}}^{(j)}(\lambda)=(E_-^{(\mathrm{re})}+iE_-^{(\mathrm{im})},E_+^{(\mathrm{re})}+iE_+^{(\mathrm{im})})$,
then
\begin{align}
\begin{bmatrix*}
N_{\widehat{u}}^{(j)}(\lambda)\\
n_{\widehat{u}}^{(j)}(\lambda)
\end{bmatrix*}
&= y(|E_-^{(\mathrm{re})}+iE_-^{(\mathrm{im})}|,|E_+^{(\mathrm{re})}+iE_+^{(\mathrm{im})}|),\text{ and}\label{eq:Nn-def}\\
(Df)_{\widehat{u}}^{(j)}(\lambda)
&= Df(E_-^{(\mathrm{re})},E_+^{(\mathrm{re})},E_-^{(\mathrm{im})},E_+^{(\mathrm{im})},N_{\widehat{u}}^{(j)}(\lambda),n_{\widehat{u}}^{(j)}(\lambda)).\label{eq:Df-lambda}
\end{align}
We call an equilibrium point
$(E_{\widehat{u}}^{(j)}(\lambda),N_{\widehat{u}}^{(j)}(\lambda),n_{\widehat{u}}^{(j)}(\lambda))$
the \emph{equilibrium point corresponding to
$E_{\widehat{u}}^{(j)}(\lambda)$}.
We can now prove instability for five of the equilibrium points from
Theorem~\ref{thm:equilibrium-small-dynamics}:
\begin{lemma}
\label{lemma:unstable-points}
Assume $\alpha=0$ in system~\eqref{eq:system}. Fix $\widehat{u}\in\mathbb{C}^2$
with $\widehat{u}_-\neq 0$ and $\widehat{u}_+\neq 0$, and let $\ell>0$ and
$E_{\widehat{u}}^{(j)}$, $j\in\mathcal{J}$, be as in
Theorem~\ref{thm:equilibrium-small-dynamics}. Then there exists
$0<\ell_0\le\ell$ such that if $0<|\lambda|<\ell_0$, then the
equilibrium points corresponding to $E_{\widehat{u}}^{(j)}(\lambda)$ with
$j\in\{\textsc{0},\pm\textsc{l},\pm\textsc{r}\}$ are unstable.
\end{lemma}
\begin{proof}
Choose $j\in\mathcal{J}$ and sufficiently small $\lambda\neq 0$, and set
$(E_-,E_+):=E_{\widehat{u}}^{(j)}(\lambda)$. By {(i)} of
Lemma~\ref{lemma:eigs-of-Df}, the number $-\kappa[1-(N-n)]$ is an
eigenvalue of $(Df)_{\widehat{u}}^{(j)}(\lambda)$ if $E_-\neq 0$, and
$-\kappa[1-(N+n)]$ is an eigenvalue of $(Df)_{\widehat{u}}^{(j)}(\lambda)$
if $E_+\neq 0$. It follows from the
asymptotics~\eqref{eq:E-approximations} that there exists
$0<\ell_1\le\ell$ such that $E_-\neq 0$ and $E_+\neq 0$ if
$0<|\lambda|<\ell_1$, and therefore the numbers
$-\kappa[1-(N\pm n)]$ are eigenvalues of
$(Df)_{\widehat{u}}^{(j)}(\lambda)$ for $0<|\lambda|<\ell_1$.
The limits of $N_{\widehat{u}}^{(j)}(\lambda)$ and
$n_{\widehat{u}}^{(j)}(\lambda)$ as $\lambda\to 0$ can be calculated using
the asymptotics~\eqref{eq:E-approximations} of
$E_{\widehat{u}}^{(j)}(\lambda)$ and~\eqref{eq:Nn-def}. In particular,
\begin{align*}
\lim_{\lambda\to 0} -\kappa\big[1-(N_{\widehat{u}}^{(\textsc{0})}(\lambda)\pm n_{\widehat{u}}^{(\textsc{0})}(\lambda))\big]
&= \kappa(\mu-1) > 0,\\
\lim_{\lambda\to 0} -\kappa\big[1-(N_{\widehat{u}}^{(\pm\textsc{l})}(\lambda)+ n_{\widehat{u}}^{(\pm\textsc{l})}(\lambda))\big]
&= 2\kappa\,\frac{\mu-1}{1+\delta}>0,\\
\lim_{\lambda\to 0} -\kappa\big[1-(N_{\widehat{u}}^{(\pm\textsc{r})}(\lambda)-n_{\widehat{u}}^{(\pm\textsc{r})}(\lambda))\big] &= 2\kappa\,\frac{\mu-1}{1+\delta}>0.
\end{align*}
It follows that there exists $0<\ell_0\le\ell_1$ such that if
$0<|\lambda|<\ell_0$ and
$j\in\{\textsc{0},\pm\textsc{l},\pm\textsc{r}\}$, then at least one
of the eigenvalues of $(Df)_{\widehat{u}}^{(j)}(\lambda)$ is strictly
positive.
We have shown that the linearization $(Df)_{\widehat{u}}^{(j)}(\lambda)$
of system~\eqref{eq:real-system} at an equilibrium point
corresponding to $E_{\widehat{u}}^{(j)}(\lambda)$ with
$0<|\lambda|<\ell_0$ and
$j\in\{\textsc{0},\pm\textsc{l},\pm\textsc{r}\}$ has at least one
strictly positive eigenvalue. Therefore the nonlinear
system~\eqref{eq:system} is unstable at such a
point~\cite[Theorem~{15.6}]{MR1071170}.
\end{proof}
Let $\mathbb{C}^6_{\mathrm{sym}}$ denote the quotient space of $\mathbb{C}^6$ by the
equivalence relation that identifies vectors whose coordinates are
permutations of each other, and let
\begin{equation}
\label{eq:map-sigma}
\sigma:\mathbb{C}^{6\times 6}\to\mathbb{C}^6_{\mathrm{sym}}
\end{equation}
denote the map that takes a matrix to the unordered $6$-tuple of its
eigenvalues (repeated according to their algebraic multiplicities).
Then $(\mathbb{C}^6_{\mathrm{sym}},d)$ is a metric space with the
\emph{optimal matching distance}~\cite{MR1477662}
\begin{equation*}
d([a],[b]) := \min_\beta\max_{1\le k\le 6} |a_k-b_{\beta(k)}|,
\end{equation*}
where $[a]$ and $[b]$ denote the equivalence classes of ${a,b}\in\mathbb{C}^6$
in $\mathbb{C}^6_{\mathrm{sym}}$, and the minimum is taken over all
permutations $\beta$ of $\{1,2,\ldots,6\}$. The map $\sigma$ is
continuous in this topology~\cite{MR1477662}.
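Since the tuples considered here have only six entries, the optimal
matching distance can be evaluated by brute force over the $6!=720$
permutations; a minimal sketch:
\begin{verbatim}
import itertools
import numpy as np

def matching_distance(a, b):
    # d([a],[b]) = min over permutations beta of max_k |a_k - b_beta(k)|
    a, b = np.asarray(a), np.asarray(b)
    return min(np.abs(a - b[list(beta)]).max()
               for beta in itertools.permutations(range(len(a))))
\end{verbatim}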
Let $\widehat{u}=(\widehat{u}_-,\widehat{u}_+)\in\mathbb{C}^2$ be such that $\widehat{u}_-\neq 0$ and
$\widehat{u}_+\neq 0$. For $\lambda\neq 0$ and
$j\in\{\pm\textsc{x}, \pm\textsc{y}\}$ define
\begin{equation}
\label{eq:H-def}
H_{\widehat{u}}^{(j)}(\lambda) :=
Df(E_-^{(\mathrm{re})},E_+^{(\mathrm{re})},E_-^{(\mathrm{im})},E_+^{(\mathrm{im})},1,0),
\end{equation}
where the arguments $E_\pm^{(\mathrm{re})}\in\mathbb{R}$ and
$E_\pm^{(\mathrm{im})}\in\mathbb{R}$ are defined by
\begin{equation*}
\begin{bmatrix}
E_-^{(\mathrm{re})}+iE_-^{(\mathrm{im})}\\
E_+^{(\mathrm{re})}+iE_+^{(\mathrm{im})}
\end{bmatrix}
:=
\begin{cases}
\pm\frac{\lambda}{|\lambda|}\sqrt{\frac{\mu-1}{2}}
\begin{bmatrix}
\widehat{u}_-/|\widehat{u}_-|\\
\widehat{u}_+/|\widehat{u}_+|
\end{bmatrix},\text{ if } j = \pm\textsc{x},\\[1em]
\pm\frac{\lambda}{|\lambda|}\sqrt{\frac{\mu-1}{2}}
\begin{bmatrix*}[r]
\widehat{u}_-/|\widehat{u}_-|\\
-\widehat{u}_+/|\widehat{u}_+|
\end{bmatrix*},\text{ if } j = \pm\textsc{y}.
\end{cases}
\end{equation*}
In other words, $H_{\widehat{u}}^{(j)}(\lambda)$ is defined as
$(Df)_{\widehat{u}}^{(j)}(\lambda)$ in~\eqref{eq:Df-lambda}, except that
$E_{\widehat{u}}^{(j)}(\lambda)$, $N_{\widehat{u}}^{(j)}(\lambda)$ and
$n_{\widehat{u}}^{(j)}(\lambda)$ are replaced by their zeroth order
approximations from~\eqref{eq:E-approximations} (as we are considering
the case $\alpha=0$, we have $e^{i\theta}=1$
in~\eqref{eq:E-approximations}).
Our plan is to determine the stability of the remaining equilibrium points
corresponding to $E_{\widehat{u}}^{(j)}(\lambda)$ with
$j\in\{\pm\textsc{x}, \pm\textsc{y}\}$ by finding all eigenvalues of
$(Df)_{\widehat{u}}^{(j)}(\lambda)$. In the following lemma we will first
show that for small $\lambda\neq0$ the eigenvalues of
$H_{\widehat{u}}^{(j)}(\lambda)$ approximate those of
$(Df)_{\widehat{u}}^{(j)}(\lambda)$, and after that in
Lemma~\ref{lemma:eigs-of-H} we will determine the eigenvalues of
$H_{\widehat{u}}^{(j)}(\lambda)$. Combining these results will then make it
possible for us to conclude stability of the equilibrium points.
\begin{lemma}
\label{lemma:Df-limit}
For every $j\in\{\pm\textsc{x},\pm\textsc{y}\}$,
\begin{equation}
\label{eq:limit-d}
\lim_{\lambda\to 0} d\big(\sigma\big((Df)_{\widehat{u}}^{(j)}(\lambda)\big),\sigma\big(H_{\widehat{u}}^{(j)}(\lambda)\big)\big) = 0.
\end{equation}
Here $d$ is the optimal matching distance on $\mathbb{C}^6_{\mathrm{sym}}$
and $\sigma$ is the map~\eqref{eq:map-sigma}.
\end{lemma}
\begin{proof}
A calculation shows that for every
$j\in\{\pm\textsc{x},\pm\textsc{y}\}$,
\begin{equation}
\label{eq:limit-GH}
\lim_{\lambda\to 0}
\big\|(Df)_{\widehat{u}}^{(j)}(\lambda)-H_{\widehat{u}}^{(j)}(\lambda)\big\|
= 0.
\end{equation}
There exist numbers $r>0$ and $R>0$ such that if $0<|\lambda|<r$
and $j\in\{\pm\textsc{x},\pm\textsc{y}\}$, then
$(Df)_{\widehat{u}}^{(j)}(\lambda)\in\overbar{B}_R$ and
$H_{\widehat{u}}^{(j)}(\lambda)\in\overbar{B}_R$, where
$\overbar{B}_R\subset\mathbb{R}^{6\times 6}$ is the closed ball of radius
$R$ centered at the origin. Because the continuous map $\sigma$ is
uniformly continuous on the compact set $\overbar{B}_R$,
from~\eqref{eq:limit-GH} it follows that the
limit~\eqref{eq:limit-d} holds.
\end{proof}
\begin{lemma}
\label{lemma:eigs-of-H}
Let $\theta_i\in\mathbb{C}$, $i\in\{1,2,3,4\}$, be the roots in {(ii)} of
Lemma~\ref{lemma:eigs-of-Df}. If
$j\in\{\pm\textsc{x},\pm\textsc{y}\}$ and $\lambda\neq 0$, then
$(0,0,\theta_1,\theta_2,\theta_3,\theta_4)$ is a sequence of all
eigenvalues of $H_{\widehat{u}}^{(j)}(\lambda)$ (repeated according to
their algebraic multiplicities).
\end{lemma}
\begin{proof}
Because $N=1$ and $n=0$ in the definition~\eqref{eq:H-def} of
$H_{\widehat{u}}^{(j)}(\lambda)$, {(i)} of Lemma~\ref{lemma:eigs-of-Df}
implies that zero is an eigenvalue of $H_{\widehat{u}}^{(j)}(\lambda)$. By
{(ii)} of the same lemma, also the four roots $\theta_i$ are
eigenvalues of $H_{\widehat{u}}^{(j)}(\lambda)$.
If $\theta_1\neq\theta_2$ and $\theta_3\neq\theta_4$, it can be
calculated that the six vectors on the left-hand sides
of~\eqref{eq:eigs-of-Df-1}, \eqref{eq:eigs-of-Df-2},
\eqref{eq:eigs-of-Df-3}, and \eqref{eq:eigs-of-Df-4} form a linearly
independent set. It follows that in this case
$(0,0,\theta_1,\theta_2,\theta_3,\theta_4)$ is a sequence of all
eigenvalues of $H_{\widehat{u}}^{(j)}(\lambda)$ (repeated according to
their algebraic multiplicities).
If $\theta_1=\theta_2$ or $\theta_3=\theta_4$, we proceed as
follows. So far $\gamma>0$ has been fixed; let us now temporarily
write $H_{\widehat{u}}^{(j)}(\lambda,\gamma)$ to consider
$H_{\widehat{u}}^{(j)}$ as a function of both $\lambda$ and
$\gamma>0$. Also, denote by $\theta_i(\gamma)$ the roots of the
polynomials~\eqref{eq:eigs-of-Df-p1} and~\eqref{eq:eigs-of-Df-p2}
for given $\gamma$ (in arbitrary order).
For $\lambda\neq 0$ fixed, both of the maps
$(0,\infty)\ni\gamma\mapsto\sigma(H_{\widehat{u}}^{(j)}(\lambda,\gamma))\in\mathbb{C}^6_{\mathrm{sym}}$
and
$(0,\infty)\ni\gamma\mapsto[(0, 0, \theta_1(\gamma),
\theta_2(\gamma),
\theta_3(\gamma),\theta_4(\gamma))]\in\mathbb{C}^6_{\mathrm{sym}}$ are
continuous. By the first part of the proof these maps agree except
possibly for the finite set of $\gamma$ where one of the
polynomials~\eqref{eq:eigs-of-Df-p1} and~\eqref{eq:eigs-of-Df-p2}
has a double root. But by continuity they then agree everywhere.
\end{proof}
We can now prove the main result of this section.
\begin{theorem}
\label{thm:stability}
Consider system~\eqref{eq:system} under the assumption that
$\alpha=0$ and that the injected field $u$ is of the form
$u=\lambda\widehat{u}$, where $\lambda\in\mathbb{C}\setminus\{0\}$ and
$\widehat{u}=(\widehat{u}_-,\widehat{u}_+)\in\mathbb{C}^2$ satisfies $\widehat{u}_-\neq 0$ and
$\widehat{u}_+\neq 0$. With reference to
Theorem~\ref{thm:equilibrium-small-dynamics}, let $\ell>0$ be a
constant and $E_{\widehat{u}}^{(j)}(\lambda)$, $j\in\mathcal{J}$, the functions
with asymptotics~\eqref{eq:E-approximations} such that for
$0<|\lambda|<\ell$ they determine the nine equilibrium points of
system~\eqref{eq:system} with injected field $u=\lambda\widehat{u}$.
There exists a constant $0<\ell_0\le\ell$ such that for every
$0<|\lambda|<\ell_0$ the equilibrium point corresponding to
$E_{\widehat{u}}^{(+\textsc{x})}(\lambda)$ is asymptotically stable, and
the other eight equilibrium points corresponding to
$E_{\widehat{u}}^{(j)}(\lambda)$ with
$j\in\{\textsc{0},\pm\textsc{l},\pm\textsc{r},-\textsc{x},\pm\textsc{y}\}$
are unstable.
\end{theorem}
\begin{proof}
By Lemma~\ref{lemma:unstable-points} we know that the equilibrium
points corresponding to $E_{\widehat{u}}^{(j)}(\lambda)$ with
$j\in\{\textsc{0},\pm\textsc{l},\pm\textsc{r}\}$ and $\lambda\neq0$
sufficiently small are unstable. By decreasing $\ell>0$ if
necessary, we can assume that this is the case for all
$0<|\lambda|<\ell$.
To prove the theorem, we will show that for sufficiently small
$\lambda\neq0$ all of the eigenvalues of
$(Df)_{\widehat{u}}^{(+\textsc{x})}(\lambda)$ have strictly negative real
parts, and that at least one of the eigenvalues of each of
$(Df)_{\widehat{u}}^{(j)}(\lambda)$ with
$j\in\{-\textsc{x},\pm\textsc{y}\}$ has a strictly positive real
part. By~\cite[Theorem~{15.6}]{MR1071170} this will imply the
result.
Let $\theta_i$, $i\in\{1,2,3,4\}$, be the roots of the
polynomials~\eqref{eq:eigs-of-Df-p1} and~\eqref{eq:eigs-of-Df-p2} in
Lemma~\ref{lemma:eigs-of-Df}. Because all of the coefficients in the
polynomials are strictly positive, $\re\theta_i<0$ for every
$i$. Therefore it is possible to find a radius $r>0$ such that
$\cup_{i=1}^4B_r(\theta_i)\subset\mathbb{C}_-:=\{z\in\mathbb{C}:\re z<0\}$, and such
that this union is disjoint from $B_r(0)$. Here $B_r(z)\subset\mathbb{C}$
denotes the open disk of radius $r$ centered at $z\in\mathbb{C}$.
Fix $j\in\{\pm\textsc{x},\pm\textsc{y}\}$. By
Lemmas~\ref{lemma:Df-limit} and~\ref{lemma:eigs-of-H} and the
definition of the optimal matching distance $d$, we can find
$0<\ell_1\le\ell$ such that if $0<|\lambda|<\ell_1$, then
$(Df)_{\widehat{u}}^{(j)}(\lambda)$ has two eigenvalues in $B_r(0)$ and
four eigenvalues in $\cup_{i=1}^4B_r(\theta_i)$. A calculation shows
that
\begin{equation*}
\lim_{\lambda\to 0} \kappa\big[1-(N_{\widehat{u}}^{(j)}(\lambda)\pm n_{\widehat{u}}^{(j)}(\lambda))\big] = 0,
\end{equation*}
so by {(i)} of Lemma~\ref{lemma:eigs-of-Df} the two eigenvalues of
$(Df)_{\widehat{u}}^{(j)}(\lambda)$ contained in $B_r(0)$ are
\begin{equation}
\label{eq:relevant-eigs}
-\kappa\big[1-(N_{\widehat{u}}^{(j)}(\lambda)\pm n_{\widehat{u}}^{(j)}(\lambda))\big].
\end{equation}
Because $\cup_{i=1}^4B_r(\theta_i)\subset\mathbb{C}_-$, only the two
eigenvalues~\eqref{eq:relevant-eigs} are relevant for determining
the stability for small $\lambda\neq0$.
Consider Theorem~\ref{thm:solution-from-IVP} and let $h$ be a
solution to the initial value problem~\eqref{eq:h-ode}. For
sufficiently small $\lambda\neq0$ let $E(\lambda)$, $N(\lambda)$ and
$n(\lambda)$ be defined in terms of $h$
by~\eqref{eq:solution-from-IVP}. Then by
Theorem~\ref{thm:equilibrium-small-dynamics} the vector $E(\lambda)$
is equal to $E_{\widehat{u}}^{(k)}(\lambda)$ for some $k\in\mathcal{J}$, and an
inspection shows that $k=j$ is the only possibility. If $y_1$ and
$y_2$ are the component functions of the function $y$
from~\eqref{eq:def-y}, i.e., $y(x)=(y_1(x),y_2(x))$, the above
implies that
\begin{equation}
\label{eq:eig-equality}
-\kappa\big[1-(N_{\widehat{u}}^{(j)}(\lambda)\pm n_{\widehat{u}}^{(j)}(\lambda))\big]
= -\kappa\big[1-(y_1(h(|\lambda|))\pm y_2(h(|\lambda|)))\big].
\end{equation}
The functions $s\mapsto y_k\circ h(s)$ are defined and
differentiable in a neighborhood of the origin, and
\begin{equation}
\label{eq:dds}
\begin{split}
\frac{d}{ds}\big(y_1\circ h\pm y_2\circ h\big)(0)
&=\nabla (y_1\pm y_2)(h(0))\cdot \frac{d}{ds} h(0)\\
&=\nabla (y_1\pm
y_2)(x^{(j)})\cdot\big[D_xF_{\widehat{r}}(x^{(j)})\big]^{-1}\widehat{r},
\end{split}
\end{equation}
where $\widehat{r} = (|\widehat{u}_-|, |\widehat{u}_+|)\in(0,\infty)\times(0,\infty)$.
Calculating the gradient and applying the value of
$[D_xF_{\widehat{r}}(x^{(j)})]^{-1}$ obtained in {(i)} of
Proposition~\ref{prop:F-properties} to~\eqref{eq:dds}, we can
calculate that
\begin{align}
\frac{d}{ds}\big(-\kappa\big[1-(y_1\circ h-y_2\circ h)\big]\big)(0)
&=-\left(\frac{2\kappa|\widehat{u}_-|}{\mu-1}\right)x_1^{(j)},\text{ and}\label{eq:d-eig-1}\\
\frac{d}{ds}\big(-\kappa\big[1-(y_1\circ h+y_2\circ h)\big]\big)(0)
&=-\left(\frac{2\kappa|\widehat{u}_+|}{\mu-1}\right)x_2^{(j)}\label{eq:d-eig-2}.
\end{align}
The numbers in the parentheses on the right-hand sides
of~\eqref{eq:d-eig-1} and~\eqref{eq:d-eig-2} are strictly
positive. If $j=+\textsc{x}$, then $x_1^{(j)}>0$ and $x_2^{(j)}>0$,
so both~\eqref{eq:d-eig-1} and~\eqref{eq:d-eig-2} are strictly
negative. This and~\eqref{eq:eig-equality} imply that there exists
$0<\ell_0\le\ell_1$ such that for $0<|\lambda|<\ell_0$,
\begin{equation*}
-\kappa\big[1-(N_{\widehat{u}}^{(+\textsc{x})}(\lambda)\pm n_{\widehat{u}}^{(+\textsc{x})}(\lambda))\big]<0.
\end{equation*}
Therefore for these $\lambda$ these two eigenvalues of
$(Df)_{\widehat{u}}^{(+\textsc{x})}(\lambda)$ are strictly negative, and
consequently the equilibrium point corresponding to
$E_{\widehat{u}}^{(+\textsc{x})}(\lambda)$ is asymptotically stable.
If $j\in\{-\textsc{x},\pm\textsc{y}\}$, then at least one of the
nonzero numbers $x_1^{(j)}$ and $x_2^{(j)}$ in~\eqref{eq:d-eig-1}
and~\eqref{eq:d-eig-2} is negative. An analogous reasoning as above
shows that by decreasing $\ell_0>0$ if necessary, we can conclude
that for $0<|\lambda|<\ell_0$ at least one of the
eigenvalues~\eqref{eq:relevant-eigs} of
$(Df)_{\widehat{u}}^{(j)}(\lambda)$ is strictly positive, and therefore
the equilibrium point corresponding to $E_{\widehat{u}}^{(j)}(\lambda)$ is
unstable.
\end{proof}
\subsection{Equilibrium points with strong injected fields}
\label{sec:strong-fields}
In this section, we consider equilibrium points of
system~\eqref{eq:system} under the assumption that the injected
electric field $u$ is strong (large in magnitude). We assume that the
injected field is of the form
\begin{equation*}
u=\lambda\widehat{u},
\end{equation*}
where $\lambda\in\mathbb{C}$ is a large parameter and
$\widehat{u}=(\widehat{u}_-,\widehat{u}_+)\in\mathbb{C}^2$ satisfies $\widehat{u}_-\neq 0$ and
$\widehat{u}_+\neq 0$, and we are interested in the behavior of the
equilibrium points as a function of the parameter $\lambda$.
For a number $0<\eta<1$ and a vector $\widehat{r}\in\mathbb{R}^2$ such that
\begin{equation}
\label{eq:rhat-assumptions}
\widehat{r}_1>0,\, \widehat{r}_2>0,\text{ and } |\widehat{r}|=1,
\end{equation}
let us define the compact set
\begin{equation*}
K(\eta,\widehat{r}) := \left\{
w\in\mathbb{R}^2 : w\cdot \widehat{r} \ge \eta|w|\text{ and } \frac{1}{2}\le|w|\le \frac{3}{2}
\right\}.
\end{equation*}
We will prove that given the vector $\widehat{r}\in\mathbb{R}^2$, we can choose a
number $\eta=\eta(\widehat{r})\in(0,1)$ and a constant $L=L(\widehat{r})>0$ so
that for the function $F_{\widehat{r}}$ defined in~\eqref{eq:def-F} the
following holds: If $s\ge L$, then
\begin{enumerate}[(i)]
\item $F_{\widehat{r}}(s,x)=0$ implies $x\in sK(\eta,\widehat{r})$, and
\item the map $\mathbb{R}^2\ni x\mapsto x-F_{\widehat{r}}(s,x)\in\mathbb{R}^2$ maps
$sK(\eta,\widehat{r})$ contractively into itself.
\end{enumerate}
Recall that by Proposition~\ref{prop:algebraic-version} the zeros of
$F_{\widehat{r}}(s,\cdot)$ and the equilibrium points of
system~\eqref{eq:system} are in one-to-one correspondence. Once {(i)}
and {(ii)} are proved, we can conclude from {(i)} that for $s\ge L$
every zero of $F_{\widehat{r}}(s,\cdot)$ is contained in $sK(\eta,\widehat{r})$,
and from {(ii)} and the Banach fixed-point theorem that there exists
exactly one such zero in $sK(\eta,\widehat{r})$. From this it follows that
if the injected field $u$ is strong enough, then there exists a unique
equilibrium point of system~\eqref{eq:system}.
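In numerical terms, {(ii)} suggests locating the equilibrium point by
direct fixed-point iteration. The following sketch (an illustration
only, not used in the proofs) assumes a callable \texttt{F}
implementing $(s,x)\mapsto F_{\widehat{r}}(s,x)$; since
$\widehat{r}\in K(\eta,\widehat{r})$, the starting point $s\widehat{r}$ lies in
$sK(\eta,\widehat{r})$, and for $s\ge L$ the contraction property
guarantees convergence to the unique zero of $F_{\widehat{r}}(s,\cdot)$.
\begin{verbatim}
import numpy as np

def fixed_point_equilibrium(F, s, rhat, tol=1e-12, max_iter=10000):
    """Iterate x <- x - F(s, x), i.e., apply the map G_{s rhat}.

    F is assumed to implement (s, x) -> F_rhat(s, x); for s >= L
    the map is a contraction on s*K(eta, rhat), so the iterates
    converge to the unique zero of F(s, .).
    """
    x = s * np.asarray(rhat, dtype=float)  # start inside s*K(eta, rhat)
    for _ in range(max_iter):
        x_new = x - F(s, x)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    raise RuntimeError("no convergence; s may be below the threshold L")
\end{verbatim}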
\begin{lemma}
\label{lemma:i-of-large-lemma}
Let $0<\eta<1$. There exists a constant $L=L(\eta)>0$ such that if
$s\ge L$, $\widehat{r}\in\mathbb{R}^2$ is a vector that
satisfies~\eqref{eq:rhat-assumptions}, and $F_{\widehat{r}}(s,x)=0$, then
$x\in sK(\eta,\widehat{r})$. (The function $F_{\widehat{r}}$ is defined
in~\eqref{eq:def-F}.)
\end{lemma}
\begin{proof}
Recall the functions $a$ and $b$ defined in
Proposition~\ref{prop:F-properties}. Note that by
inequalities~\eqref{eq:a-estimate} and~\eqref{eq:b-estimate}, for
every $x\in\mathbb{R}^2\setminus\{0\}$ the inequality
\begin{equation*}
a(x)^2+b(x)^2 < 2\mu^2
\end{equation*}
holds, and that it is possible to find a constant $L_1=L_1(\eta)>0$
so that $|x|\ge L_1$ implies
\begin{equation}
\label{eq:i-of-large-ineq}
\frac{1}{4} \le \frac{1}{a(x)^2+b(x)^2} \le \frac{9}{4}
\end{equation}
and
\begin{equation}
\label{eq:i-of-large-eta}
\frac{a(x)^2}{a(x)^2+b(x)^2} \ge \eta^2.
\end{equation}
Now if $x\in\mathbb{R}^2\setminus\{0\}$ satisfies $F_{\widehat{r}}(s,x)=0$, that
is, $X(y(x))x=s\widehat{r}$, then
\begin{equation}
\label{eq:i-of-large-Pythagoras}
s^2 = |x|^2(a(x)^2+b(x)^2) < 2\mu^2|x|^2.
\end{equation}
Therefore if $s\ge\sqrt{2}\mu L_1$ and $F_{\widehat{r}}(s,x)=0$, then
$|x|>L_1$, and by~\eqref{eq:i-of-large-ineq}
and~\eqref{eq:i-of-large-Pythagoras}
\begin{equation*}
\frac{1}{2} \le \frac{|x|}{s} \le \frac{3}{2},
\end{equation*}
and by~\eqref{eq:i-of-large-ineq} and~\eqref{eq:i-of-large-eta}
\begin{equation*}
x\cdot\widehat{r}
= \frac{|x|}{s}\,\widehat{x}\cdot X(y(x))x
= \frac{|x|a(x)}{\sqrt{a(x)^2+b(x)^2}}
\ge \eta|x|.
\end{equation*}
It follows that $x/s\in K(\eta,\widehat{r})$, and consequently the lemma
holds if $L\ge \sqrt{2}\mu L_1$.
\end{proof}
Given $s>0$ and $\widehat{r}\in\mathbb{R}^2$, define a mapping
$G_{s\widehat{r}}:\mathbb{R}^2\to\mathbb{R}^2$ by
\begin{equation}
\label{eq:G-def}
G_{s\widehat{r}}(x) := x-F_{\widehat{r}}(s,x) = x - X(y(x))x + s\widehat{r}.
\end{equation}
Obviously, for every $s>0$ the set of zeros of $F_{\widehat{r}}(s,\cdot)$
and the set of fixed points of $G_{s\widehat{r}}$ coincide.
\begin{lemma}
Let $0<\eta<1$. There exists a constant $L=L(\eta)>0$ such that if
$s\ge L$ and $\widehat{r}\in\mathbb{R}^2$ is a vector that
satisfies~\eqref{eq:rhat-assumptions}, then
\begin{equation}
\label{eq:K-inclusion}
G_{s\widehat{r}}\big[sK(\eta,\widehat{r})\big]\subset sK(\eta,\widehat{r}).
\end{equation}
\end{lemma}
\begin{proof}
Let ${w,\widehat{r}}\in\mathbb{R}^2$ satisfy $1/2\le|w|\le3/2$ and $|\widehat{r}|=1$.
With the notation of Proposition~\ref{prop:F-properties}, for $s>0$,
\begin{equation*}
\frac{1}{s}G_{s\widehat{r}}(sw) - \widehat{r}
= \big(1-a(sw)\big)w - b(sw)|w|\widehat{w}_\perp
=: e(s,w,\widehat{r}).
\end{equation*}
From inequalities~\eqref{eq:a-estimate} and~\eqref{eq:b-estimate} it
follows that $e(s,w,\widehat{r})\to0$ as $s\to\infty$, uniformly in $w$ and
$\widehat{r}$. Hence there exists $L>0$ such that if $s\ge L$
and $w\in K(\eta,\widehat{r})$, then $G_{s\widehat{r}}(sw)/s\in
K(\eta,\widehat{r})$. This implies~\eqref{eq:K-inclusion}.
\end{proof}
Below $D_xG_{s\widehat{r}}$ denotes the Jacobian matrix of the map
$G_{s\widehat{r}}$ defined in~\eqref{eq:G-def}.
\begin{lemma}
\label{lemma:small-D}
Let $\widehat{r}\in\mathbb{R}^2$ satisfy~\eqref{eq:rhat-assumptions}. There exist
numbers $\eta = \eta(\widehat{r})\in(0,1)$ and $L=L(\widehat{r})>0$ such that if
$s\ge L$, ${x,x'}\in sK(\eta,\widehat{r})$, and $0\le\nu\le 1$, then
\begin{equation}
\label{eq:small-D}
\big\|D_xG_{s\widehat{r}}((1-\nu)x+\nu x')\big\| \le \frac{1}{2}.
\end{equation}
Here the norm is the operator norm on $\mathbb{R}^{2\times 2}$.
\end{lemma}
\begin{proof}
An expression for $D_xG_{s\widehat{r}}(x)$ is readily obtained from that
of $D_xF_{\widehat{r}}(x)$, which was calculated
in~\eqref{eq:DxF-expression}. Observe that all of the polynomials
$p_{ij}$ in~\eqref{eq:DxF-expression} have total degrees at most
six.
Let $C>0$ be large enough so that
$\|D_xG_{s\widehat{r}}(x)\|\le C|x|^6/\det Y(x)^2$ for every $x\in\mathbb{R}^2$
with $|x|\ge 1$. Next, choose a constant $\eta=\eta(\widehat{r})\in(0,1)$
so that if $x\in\mathbb{R}^2$ and $x\cdot\widehat{r}\ge\eta|x|$, then
$x_1\ge|x|\widehat{r}_1/\sqrt{2}$ and $x_2\ge|x|\widehat{r}_2/\sqrt{2}$. With
these constants, for every $x\in\mathbb{R}^2$ with $x\cdot\widehat{r}\ge\eta|x|$
and $|x|\ge 1$, it holds that
\begin{equation}
\label{eq:small-D-ineq}
\big\|D_xG_{s\widehat{r}}(x)\big\|
\le \frac{C|x|^6}{(\widehat{r}_1\widehat{r}_2)^4\, |x|^8}=\frac{C'}{|x|^2},
\end{equation}
where $C':=C/(\widehat{r}_1\widehat{r}_2)^4>0$.
Now consider $x=sw$ and $x'=sw'$, where $s>0$ and
${w,w'}\in K(\eta,\widehat{r})$. If $0\le\nu\le 1$, then for
$w_\nu:=(1-\nu)w+\nu w'$ both of the inequalities $|w_\nu|^2\ge 1/8$
and $w_\nu\cdot\widehat{r}\ge\eta|w_\nu|$ hold (the first because the
components of $w$ and $w'$ are positive, so $w\cdot w'\ge 0$; the
second by the triangle inequality). Therefore, if $s^2\ge 8$,
it follows from~\eqref{eq:small-D-ineq} that
\begin{equation*}
\big\|D_xG_{s\widehat{r}}((1-\nu)x+\nu x')\big\|
= \big\|D_xG_{s\widehat{r}}(sw_\nu)\big\|
\le \frac{C'}{|sw_\nu|^2}
\le \frac{8C'}{s^2}.
\end{equation*}
Consequently, for $s\ge\max\{2\sqrt{2},4\sqrt{C'}\}$
inequality~\eqref{eq:small-D} holds.
\end{proof}
\begin{proposition}
\label{prop:fixed-point}
Let $\widehat{r}\in\mathbb{R}^2$ satisfy~\eqref{eq:rhat-assumptions}. There exists
a constant $L=L(\widehat{r})>0$ such that the following hold:
\begin{enumerate}[(i)]
\item For every $s\ge L$ the function $G_{s\widehat{r}}$ has a unique
fixed point in $\mathbb{R}^2$.
\item If $h:[L,\infty)\to\mathbb{R}^2$ denotes the function that maps $s$ to
the unique fixed point of $G_{s\widehat{r}}$, then $h$ is differentiable
on $(L,\infty)$.
\item There exists a constant $C>0$ (independent of $\widehat{r}$) such
that the function $h$ from {(ii)} satisfies
\begin{equation}
\label{eq:xs-estimate}
h(s) = s(\widehat{r}+e(s)),\text{ where } |e(s)|\le\frac{C}{s^{2/3}}.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
Let $\eta=\eta(\widehat{r})\in(0,1)$ and $L=L(\widehat{r})>0$ be such that for
$s\ge L$ inequality~\eqref{eq:small-D} holds for every
${x,x'}\in sK(\eta,\widehat{r})$ and $0\le\nu\le 1$. If necessary,
increase $L$ so that in addition for $s\ge L$
inclusion~\eqref{eq:K-inclusion} holds and equality
$F_{\widehat{r}}(s,x)=0$ implies that $x\in sK(\eta,\widehat{r})$ (cf.\
Lemma~\ref{lemma:i-of-large-lemma}).
Let $s\ge L$. Then $G_{s\widehat{r}}$ maps $sK(\eta,\widehat{r})$ into itself,
and if ${x,x'}\in sK(\eta,\widehat{r})$, applying the fundamental theorem
of calculus and estimating with~\eqref{eq:small-D} shows that
\begin{equation*}
|G_{s\widehat{r}}(x)-G_{s\widehat{r}}(x')|
\le |x-x'|\sup_{0\le\nu\le 1}\big\|D_xG_{s\widehat{r}}((1-\nu)x+\nu x')\big\|
\le\frac{|x-x'|}{2}.
\end{equation*}
Thus, the restriction of $G_{s\widehat{r}}$ to $sK(\eta,\widehat{r})$ is a
contraction.
By the Banach fixed-point theorem the function $G_{s\widehat{r}}$ has a
unique fixed point in $sK(\eta,\widehat{r})$. Because $G_{s\widehat{r}}(x)=x$ if
and only if $F_{\widehat{r}}(s,x)=0$, this fixed point is unique in
$\mathbb{R}^2$, also. Part {(i)} is now proved.
Let $s_0>L$ and $h$ be as in {(ii)}. Consider the function
$(L,\infty)\times\mathbb{R}^2\ni(s,x)\mapsto F_{\widehat{r}}(s,x)\in\mathbb{R}^2$ in a
neighborhood of its zero $(s_0,h(s_0))$. Since
\begin{equation*}
D_xF_{\widehat{r}}(x) = I_2 - D_xG_{s\widehat{r}}(x),
\end{equation*}
it follows from inequality~\eqref{eq:small-D} that at the point
$(s,x)=(s_0,h(s_0))$ the derivative $D_xF_{\widehat{r}}(x)$ is
invertible. Then by the implicit function theorem in some
neighborhood $(s_0-\epsilon,s_0+\epsilon)$ the zero of
$F_{\widehat{r}}(s,\cdot)$, i.e., $h(s)$, depends differentiably on
$s$. Because $s_0>L$ was arbitrary, the function $s\mapsto h(s)$ is
differentiable, and {(ii)} is proved.
If we write $h(s)$ as in~\eqref{eq:xs-estimate} and denote
$x:=h(s)$, then in the notation of
Proposition~\ref{prop:F-properties} we have
\begin{equation*}
e(s)
= \frac{1}{s}x-\widehat{r}
= \frac{1}{s}G_{s\widehat{r}}(x) - \widehat{r}
= \frac{|x|}{s}\big[\big(1-a(x)\big)\widehat{x}-b(x)\widehat{x}_{\perp}\big].
\end{equation*}
Because $x\in sK(\eta,\widehat{r})$ implies $1/2\le|x|/s\le3/2$, we obtain
from~\eqref{eq:a-estimate} and~\eqref{eq:b-estimate} that for some
constant $C>0$ depending only on $\mu$ it holds that
$|e(s)|\le C/s^{2/3}$, for every $s\ge L$. This proves {(iii)}.
\end{proof}
With the previous proposition in hand, we can now prove the main
theorem of this section. Note that, among other things, the theorem
states that, unlike in the case of weak injected fields, where
system~\eqref{eq:system} has nine equilibrium points
(Theorem~\ref{thm:equilibrium-small-dynamics}), in the case of strong
injected fields the system has a single equilibrium point.
\begin{theorem}
\label{thm:strong-injection}
Consider $\widehat{u}=(\widehat{u}_-, \widehat{u}_+)\in\mathbb{C}^2$ with $\widehat{u}_-\neq 0$ and
$\widehat{u}_+\neq 0$. There exists a constant $L=L(\widehat{u})>0$ and a
continuous function
\begin{equation*}
E_{\widehat{u}} : \{\lambda\in\mathbb{C} : |\lambda|\ge L\}\to\mathbb{C}^2
\end{equation*}
with the following property: If in system~\eqref{eq:system} the
injected field $u$ is of the form $u=\lambda\widehat{u}$ with
$|\lambda|\ge L$, then a triple $(E, N, n)\in\mathbb{C}^2\times\mathbb{R}\times\mathbb{R}$
is an equilibrium point of the system if and only if
\begin{equation*}
E = E_{\widehat{u}}(\lambda) \text{ and } (N,n) = y(|E_-|,|E_+|)
\end{equation*}
(the function $y$ is defined in~\eqref{eq:def-y}). Furthermore,
there exists a constant $C=C(\widehat{u})>0$ such that the function
$E_{\widehat{u}}$ satisfies
\begin{equation}
\label{eq:Eu-estimate}
E_{\widehat{u}}(\lambda)
= \frac{\lambda e^{i\theta}}{|1+i\alpha|}(\widehat{u}+e(\lambda))
\text{, where } |e(\lambda)| \le\frac{C}{|\lambda|^{2/3}}
\text{ and }\theta:=-\arg(1+i\alpha).
\end{equation}
\end{theorem}
\begin{remark}
It follows from~\eqref{eq:Eu-estimate} that the magnitudes of the
emitted field $E_{\widehat{u}}(\lambda)$ and the injected field
$u=\lambda\widehat{u}$ are asymptotically related by
\begin{equation*}
\lim_{|\lambda|\to\infty}\frac{|E_{\widehat{u}}(\lambda)|}{|\lambda\widehat{u}|}
= \frac{1}{|1+i\alpha|},
\end{equation*}
and that as $\lambda$ grows, the polarization of the emitted field
$E_{\widehat{u}}(\lambda)$ approaches, on the normalized Poincar\'e sphere,
that of $\widehat{u}$.
\end{remark}
\begin{proof}
Define
\begin{equation}
\label{eq:rhat-strong}
\widehat{r} := \frac{1}{|\widehat{u}|}
\begin{bmatrix}
|\widehat{u}_-|\\|\widehat{u}_+|
\end{bmatrix}.
\end{equation}
Then $\widehat{r}$ satisfies~\eqref{eq:rhat-assumptions}; let
$L'=L'(\widehat{r})>0$ be a constant and $h:[L',\infty)\to\mathbb{R}^2$ a function
as in Proposition~\ref{prop:fixed-point}.
Fix a constant $L>|1+i\alpha||\widehat{u}|^{-1} L'$, and define for
$\lambda\in\mathbb{C}$ with $|\lambda|\ge L$ a function $E_{\widehat{u}}$ by
\begin{equation*}
E_{\widehat{u}}(\lambda)
:= e^{i\theta}\frac{\lambda}{|\lambda|}
\begin{bmatrix}
\frac{\widehat{u}_-}{|\widehat{u}_-|} & 0\\
0 & \frac{\widehat{u}_+}{|\widehat{u}_+|}
\end{bmatrix}
h\left(\tfrac{|\lambda\widehat{u}|}{|1+i\alpha|}\right).
\end{equation*}
As $h$ is differentiable on $(L',\infty)$, the function
$E_{\widehat{u}}(\lambda)$ is continuous on its domain. Also,
estimate~\eqref{eq:Eu-estimate} follows directly
from~\eqref{eq:xs-estimate}.
Now with $s:=|1+i\alpha|^{-1}|\lambda\widehat{u}|$ and $x:=h(s)$ it holds
that $X(y(x))x=s\widehat{r}$, so by
Proposition~\ref{prop:algebraic-version} the triple
$(E,N,n)\in\mathbb{C}^2\times\mathbb{R}\times\mathbb{R}$ with $E=E_{\widehat{u}}(\lambda)$ and
$(N,n)=y(x)=y(|x_1|,|x_2|)=y(|E_-|,|E_+|)$ is an equilibrium point
of system~\eqref{eq:system} with injected field
\begin{equation*}
u = (1+i\alpha)
e^{i\theta}\frac{\lambda}{|\lambda|}
\begin{bmatrix}
\frac{\widehat{u}_-}{|\widehat{u}_-|} & 0\\
0 & \frac{\widehat{u}_+}{|\widehat{u}_+|}
\end{bmatrix}
s\widehat{r}
= \lambda\widehat{u}.
\end{equation*}
On the other hand, consider an arbitrary equilibrium point $(E,N,n)$
of system~\eqref{eq:system} with $u=\lambda\widehat{u}$, where
$|\lambda|\ge L$. By Proposition~\ref{prop:algebraic-version} there
exist $x\in\mathbb{R}^2$, $s\ge 0$, $\widehat{r}\in[0,\infty)\times[0,\infty)$
with $|\widehat{r}|=1$, and $\phi_\pm\in\mathbb{R}$ such that
\begin{subequations}
\begin{align}
X(y(x))x &= s\widehat{r},\label{eq:necessary2-1}\\
E &=
\begin{bmatrix}
x_1\,e^{i\phi_-}\\
x_2\,e^{i\phi_+}
\end{bmatrix}
,\label{eq:necessary2-2}\\
\begin{bmatrix}
N\\n
\end{bmatrix}
& = y(x),\text{ and}\label{eq:necessary2-3}\\
\lambda\widehat{u} &= (1+i\alpha)
\begin{bmatrix}
s\widehat{r}_1\, e^{i\phi_-}\\
s\widehat{r}_2\, e^{i\phi_+}
\end{bmatrix}
\label{eq:necessary2-4}.
\end{align}
\end{subequations}
Equation~\eqref{eq:necessary2-4} implies that
$s=|1+i\alpha|^{-1}|\lambda\widehat{u}|>L'$ and that $\widehat{r}$
satisfies~\eqref{eq:rhat-strong}. Then from~\eqref{eq:necessary2-1}
it follows that $G_{s\widehat{r}}(x)=x$, so $x=h(s)$ by
Proposition~\ref{prop:fixed-point}. The numbers $e^{i\phi_\pm}$ can
be determined from~\eqref{eq:necessary2-4}; inserting them
into~\eqref{eq:necessary2-2} shows that
$E=E_{\widehat{u}}(\lambda)$. Finally, from~\eqref{eq:necessary2-3}
and~\eqref{eq:necessary2-2} it follows that $(N,n)=y(|E_-|,|E_+|)$.
\end{proof}
\section*{Appendix: Approximation theorem for complex-valued neural
networks}
In this appendix, we generalize the recent universal approximation
theorem for complex-valued neural networks by
F.~Voigtlaender~\cite{voigtlaender2020universal} to the case of
activation functions defined locally in an open subset $U\subset\mathbb{C}$,
instead of globally on the whole complex plane. The gist of the proof,
namely the use of Wirtinger calculus~\cite{MR716497} to show that the
functions $z^\alpha\overline{z}^\beta$ ($\overline{z}$ is the complex conjugate of
$z$) can be approximated by neural networks, is the same as in the
proof of Voigtlaender's theorem. However, the proof is complicated by
the fact that parameters for the network need to be chosen so that all
inputs to the activation function stay within $U$.
Let $\overbar{B}_R:=\{z\in\mathbb{C}^{i_0}:|z|_{\mathbb{C}^{i_0}}\le R\}$. We consider
(shallow) complex-valued neural networks
$\mathcal{N}:\overbar{B}_R\to\mathbb{C}^{k_0}$, whose $k$:th component function is of
the form
\begin{equation}
\label{eq:C-NN}
\mathcal{N}_k(z) := \sum_{j=1}^{j_0}c_{kj}\rho(a_j\cdot z + b_j),
\end{equation}
where $a_j\cdot z := \sum_i a_{ji}z_i$. Here the integers $i_0>0$,
$j_0>0$, and $k_0>0$ are the number of inputs of the network, the
width of the network, and the number of outputs of the network,
respectively, and $\rho:U\to\mathbb{C}$, where $U\subset\mathbb{C}$ is an open set, is
the activation function. The parameters
$a_j=(a_{j1}, a_{j2},\ldots,a_{ji_0})\in\mathbb{C}^{i_0}$, $j=1,2,\ldots,j_0$,
$b\in\mathbb{C}^{j_0}$, and $(c_{kj})\in\mathbb{C}^{k_0\times j_0}$ are required to
satisfy
\begin{equation}
\label{eq:parameter-requirement}
a_j\cdot z + b_j\in U\text{ for every } z
\in\overbar{B}_R\text{ and } j=1,2,\ldots,j_0.
\end{equation}
The following theorem is a local version of Voigtlaender's universal
approximation theorem for complex-valued neural
networks~\cite[Theorem~{1.3}]{voigtlaender2020universal}:
\begin{theorem}
\label{thm:UAT}
Let $i_0$, $k_0$, $R$, and $\rho$ be as above, and suppose that
\begin{enumerate}[(i)]
\item $\rho$ is locally bounded and continuous almost everywhere in
the nonempty open set $U\subset\mathbb{C}=\mathbb{R}^2$ (the measure is the
two-dimensional Lebesgue measure), and
\item $\Delta^m\rho$ does not vanish identically in $U$ for any
$m=0,1,2,\ldots$ (here
$\Delta=\partial^2/\partial x^2+\partial^2/\partial y^2$,
$z=x+iy$, is the Laplace operator defined in the sense of
distributions).
\end{enumerate}
If $f:\overbar{B}_R\to\mathbb{C}^{k_0}$ is continuous and $\epsilon>0$, then there
exists an integer $j_0>0$ and parameters $a_j\in\mathbb{C}^{i_0}$,
$j=1,2,\ldots,j_0$, $b\in\mathbb{C}^{j_0}$, and
$(c_{kj})\in\mathbb{C}^{k_0\times j_0}$ such
that~\eqref{eq:parameter-requirement} holds, and that the
complex-valued neural network $\mathcal{N}$ defined componentwise
by~\eqref{eq:C-NN} satisfies
\begin{equation}
\label{eq:universal-approximation}
\sup_{z\in\overbar{B}_R} \big|
\mathcal{N}(z) - f(z)
\big|_{\mathbb{C}^{k_0}}
\le \epsilon.
\end{equation}
\end{theorem}
There is a slight difference in the continuity assumption for the
activation function $\rho$ between Theorem~\ref{thm:UAT}
and~\cite[Theorem~{1.3}]{voigtlaender2020universal}. Here we require
that $\rho$ is continuous almost everywhere, i.e., that the set
$D\subset\mathbb{C}$ of its discontinuities is a null
set. In~\cite{voigtlaender2020universal} it is required that also the
closure of $D$ is a null set. The difference is due to how the
(potentially nonsmooth) activation function is smoothly approximated;
our approximation method is contained in the following two lemmas. Our
approach is similar to~\cite[Lemma~4]{hornik1993some}, in which
real-valued activation functions are considered. Theorem~\ref{thm:UAT}
will be proved after the lemmas.
\begin{lemma}
\label{lemma:Riemann-type}
For $\eta>0$, let $\mathcal{P}(\eta)$ denote the set of countable partitions
of $\mathbb{R}^2$ into measurable subsets with diameter at most $\eta$, and
let $\psi:\mathbb{R}^2\to\mathbb{C}$ be a bounded and almost everywhere continuous
function with compact support. Then
\begin{equation}
\label{eq:Riemann-like}
\lim_{\eta\to 0}\left(
\sup\left\{\sum_{j=1}^\infty\lambda_2(C_j)\,\sup_{{y,y'}\in C_j}|\psi(y)-\psi(y')|
: (C_j)_{j=1}^\infty\in\mathcal{P}(\eta)\right\}
\right)
= 0,
\end{equation}
where $\lambda_2$ denotes the Lebesgue measure on $\mathbb{R}^2$.
\end{lemma}
\begin{proof}
Choose a sequence of partitions
$((C_j(k))_{j=1}^\infty)_{k=1}^\infty\in\mathcal{P}(1/k)$, and define
\begin{equation*}
d_k(x) := \sum_{j=1}^\infty
\sup_{y,y'\in C_j(k)}|\psi(y)-\psi(y')|\, 1_{C_j(k)}(x),
\end{equation*}
where $1_{C_j(k)}$ is the characteristic function of the set
$C_j(k)$.
The functions $d_k$ are measurable, uniformly bounded by
$2\|\psi\|_\infty$, and they are all supported in a fixed compact
set. If $x\in\mathbb{R}^2$ is a point of continuity of $\psi$, then
$d_k(x)\to 0$. As a consequence, $d_k\to 0$ as $k\to\infty$ almost
everywhere in $\mathbb{R}^2$, and by Lebesgue's dominated convergence
theorem
\begin{equation}
\label{eq:Riemann-like-2}
0
= \lim_{k\to\infty} \int_{\mathbb{R}^2} d_k(x)\,dx
= \lim_{k\to\infty}\left(
\sum_{j=1}^\infty\lambda_2(C_j(k))\sup_{{y,y'}\in C_j(k)}|\psi(y)-\psi(y')|
\right).
\end{equation}
This proves the lemma as the sequence
$((C_j(k))_{j=1}^\infty)_{k=1}^\infty\in\mathcal{P}(1/k)$ was
arbitrary. Namely, if~\eqref{eq:Riemann-like} did not hold, it would
be possible to construct a sequence
$((C_j(k))_{j=1}^\infty)_{k=1}^\infty\in\mathcal{P}(1/k)$ for
which~\eqref{eq:Riemann-like-2} fails.
\end{proof}
\begin{lemma}
\label{lemma:convolution-approximation}
Consider $\varphi\in C_c(\mathbb{R}^2)$ and let $\psi$ be as in
Lemma~\ref{lemma:Riemann-type}. Then
\begin{equation*}
\sum_{k\in\mathbb{Z}^2}\psi(x-kh)h^2\varphi(kh)\to\psi*\varphi(x)
\text{ as } h\to 0,
\end{equation*}
uniformly in $x\in\mathbb{R}^2$.
\end{lemma}
\begin{proof}
We can estimate
\begin{equation*}
\Big|\psi*\varphi(x) - \sum_{k\in\mathbb{Z}^2}\psi(x-kh)h^2\varphi(kh)\Big|
\le A + B,
\end{equation*}
where
\begin{align*}
A &:=\|\varphi\|_\infty\sum_{k\in\mathbb{Z}^2}
\int_{kh+[0,h)^2}\big|\psi(x-y)-\psi(x-kh)\big|\,dy,\text{ and }\\
B &:= \|\psi\|_\infty\sum_{k\in\mathbb{Z}^2}
\int_{kh+[0,h)^2}\big|\varphi(y)-\varphi(kh)\big|\,dy.
\end{align*}
The sum in $A$ can be bounded from above by
\begin{equation*}
\begin{split}
&\sum_{k\in\mathbb{Z}^2} h^2\,\sup\big\{|\psi(z)-\psi(z')|
: {z,z'}\in x-kh-[0,h)^2\big\}\\
&\qquad\le \sup\Big\{\sum_{j=1}^\infty \lambda_2(C_j)
\sup_{{z,z'}\in C_j}|\psi(z)-\psi(z')| :
(C_j)_{j=1}^\infty\in\mathcal{P}(\sqrt{2}h) \Big\}.
\end{split}
\end{equation*}
By Lemma~\ref{lemma:Riemann-type} this tends to zero as
$h\to 0$.
The number of nonzero terms in $B$ is bounded from above by
$C/h^2$, where $C>0$ is a constant independent of $h$. Consequently,
$B$ can be estimated from above by
$C'\sup\{|\varphi(z)-\varphi(z')|:|z-z'|^2\le 2h^2\}$, which tends
to zero as $h\to 0$ by the uniform continuity of $\varphi$.
\end{proof}
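As a sanity check (an illustration with ad hoc choices, not needed
for the proofs), the lemma can be observed numerically by taking for
$\psi$ the discontinuous indicator of the unit disk and for $\varphi$
a continuous compactly supported bump:
\begin{verbatim}
import numpy as np

# psi: bounded, compactly supported, a.e. continuous
psi = lambda p: (p[..., 0]**2 + p[..., 1]**2 <= 1.0).astype(float)
# varphi: continuous bump supported in [-1, 1]^2
varphi = lambda p: np.prod(np.clip(1.0 - np.abs(p), 0.0, None), axis=-1)

def riemann_sum(x, h, radius=3.0):
    """Compute sum_k psi(x - k h) h^2 varphi(k h) over a grid
    large enough to cover the support of varphi."""
    k = np.arange(-int(radius / h), int(radius / h) + 1)
    kh = h * np.stack(np.meshgrid(k, k, indexing="ij"), -1).reshape(-1, 2)
    return np.sum(psi(x - kh) * h**2 * varphi(kh))

x = np.array([0.5, 0.2])
for h in (0.2, 0.1, 0.05, 0.025):
    print(h, riemann_sum(x, h))  # values stabilize as h -> 0
\end{verbatim}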
\begin{proof}[Proof of Theorem~\ref{thm:UAT}]
It is enough to consider the case with a single output ($k_0=1$),
for the general case follows from a componentwise construction of
$\mathcal{N}$.
For any parameters $(a,b)\in\mathbb{C}^{i_0}\times U$ such that
\begin{equation}
\label{eq:smalness-condition}
a\cdot z+b\in U \text{ for every } z\in\overbar{B}_R,
\end{equation}
define a bounded function $f_{a,b}:\overbar{B}_R\to\mathbb{C}$ by setting
$f_{a,b}(z):=\rho(a\cdot z+b)$. Then define
\begin{equation}
\label{eq:Sigma}
\Sigma(\rho)
:= \overbar{\linspan}\{f_{a,b}:
(a,b)\in\mathbb{C}^{i_0}\times U\text{ satisfies~\eqref{eq:smalness-condition}}\}
\subset\mathcal{B}(\overbar{B}_R).
\end{equation}
Here $\mathcal{B}(\overbar{B}_R)$ is the complex algebra of bounded
functions on $\overbar{B}_R$ equipped with the supremum norm, and the
closure of the span is with respect to that norm. The theorem will
be proved by showing that $\Sigma(\rho)$ contains all
continuous functions in $\mathcal{B}(\overbar{B}_R)$.
Let $\varphi$ be a mollifier on $\mathbb{R}^2$ and define
$\varphi_p(s):=p^2\varphi(ps)$ for $p=1,2,\ldots$
Fix an integer $m\ge 0$ and find open sets $V$ and $W$ such that
$\emptyset\neq V\subset\subset W\subset\subset U$ and that
$\Delta^m\rho$ does not vanish identically in $V$. Let
$\chi\in C_c(U)$ be such that $\chi\equiv 1$ on $W$. The convolution
\begin{equation*}
(\chi\rho)*\varphi_p(s) := \int_{\mathbb{R}^2}(\chi\rho)(s-y)\varphi_p(y)\,dy
\end{equation*}
is then defined everywhere, and
$(\chi\rho)*\varphi_p|_{V}\to\rho|_{V}$ as $p\to\infty$ in the sense
of distributions in $V$. Consequently, there exists an index $p_0$
such that $V-\supp\varphi_{p_0}\subset W$ and
$\Delta^m(\chi\rho)*\varphi_{p_0}$ does not vanish identically in
$V$. Define $\widetilde{\rho}:\mathbb{C}\to\mathbb{C}$ by
$\widetilde{\rho}(s) := (\chi\rho)*\varphi_{p_0}(s)$. Then $\widetilde{\rho}$ is
smooth everywhere (in the sense of real differentiability), and
$\Delta^m\widetilde{\rho}$ does not vanish identically in $V$.
Fix $b\in V$ and choose $\epsilon>0$ such that if $a\in\mathbb{C}^{i_0}$ and
$|a|_{\mathbb{C}^{i_0}}<\epsilon$, then $a\cdot z+b\in V$ for every
$z\in\overbar{B}_R$. Denote $\mathbb{N}_0:=\{0,1,2,\ldots\}$, and for any
multiindices ${\alpha,\beta}\in\mathbb{N}_0^{i_0}$ define
\begin{equation}
\label{eq:F}
F_{\alpha,\beta}(a,z)
:= z^\alpha\,\overline{z}^\beta\,
(\partial^{|\alpha|}\overbar{\partial}^{|\beta|}\widetilde{\rho})(a\cdot z + b),
\end{equation}
where $z\in\overbar{B}_R$ and $|a|_{\mathbb{C}^{i_0}}<\epsilon$. Here
$z^\alpha=z_1^{\alpha_1}z_2^{\alpha_2}\cdots z_{i_0}^{\alpha_{i_0}}$
(and analogously for $\overline{z}$, where the bar denotes elementwise
complex conjugation), and $\partial:=(\partial_x-i\partial_y)/2$ and
$\overbar{\partial}:=(\partial_x+i\partial_y)/2$ are the Wirtinger derivatives
operating on the complex function $\widetilde{\rho}(x+iy)$.
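For orientation, these operators act on the coordinate functions as
\begin{equation*}
\partial z = 1,\quad \overbar{\partial} z = 0,\quad
\partial\overline{z} = 0,\quad \overbar{\partial}\,\overline{z} = 1,
\end{equation*}
and a direct computation gives $\Delta=4\partial\overbar{\partial}$, an identity
used below.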
If $|\alpha|=|\beta|=0$, then
\begin{equation}
\label{eq:induction}
F_{\alpha,\beta}(a,\cdot)\in\Sigma(\rho)
\text{ for every } a\text{ with }|a|_{\mathbb{C}^{i_0}}<\epsilon.
\end{equation}
Namely, suppose $|a|_{\mathbb{C}^{i_0}}<\epsilon$ and let $h\in\mathbb{R}$ and
$k\in\mathbb{Z}^2$ be such that $\varphi_{p_0}(kh)\neq 0$. Then
$a\cdot z+b-kh\in W$ for every $z\in\overbar{B}_R$, so the parameters
$(a, b-kh)$ satisfy~\eqref{eq:smalness-condition}, and
$\chi(a\cdot z+b-kh)=1$. Consequently,
\begin{equation*}
h^2\sum_{k\in\mathbb{Z}^2} \varphi_{p_0}(kh)f_{a,b-kh}(z) =
\sum_{k\in\mathbb{Z}^2} (\chi\rho)(a\cdot z+b-kh)h^2\varphi_{p_0}(kh) \to
F_{0,0}(a,z)
\end{equation*}
as $h\to 0$, uniformly in $z\in\overbar{B}_R$, by
Lemma~\ref{lemma:convolution-approximation}, and therefore
$F_{0,0}(a,\cdot)\in\Sigma(\rho)$.
Next we will use Wirtinger calculus similarly
to~\cite[Lemma~{4.2}]{voigtlaender2020universal} to show
that~\eqref{eq:induction} holds for every $\alpha$ and $\beta$. For
a function of $a\in\mathbb{C}^{i_0}$, let us denote by $\partial_{a_i}$ and
$\overbar{\partial}_{a_i}$ the partial Wirtinger derivatives with respect to the
variable $a_i\in\mathbb{C}$. Fix ${\alpha,\beta}\in\mathbb{N}_0^{i_0}$, denote
$F:=F_{\alpha,\beta}$, and assume that~\eqref{eq:induction} holds
for $F$. The directional derivative of $F$ in the $a$-variable along
a direction $v\in\mathbb{C}^{i_0}$, denoted by $(\partial/\partial v)F$,
exists, and a calculation shows that
\begin{equation}
\label{eq:convergence}
\frac{F(a+hv,z)-F(a,z)}{h}\to \frac{\partial}{\partial v}F(a,z)\text{ as } h\to 0,
\end{equation}
uniformly in $z\in\overbar{B}_R$. For fixed $a$ and small $h\neq 0$, by
assumption the left-hand side of~\eqref{eq:convergence} as a
function of $z$ is in $\Sigma(\rho)$. Because of the uniform convergence
and closedness of $\Sigma(\rho)$, also the right-hand side
of~\eqref{eq:convergence} is in $\Sigma(\rho)$. It follows that
$\partial_{a_i} F(a,\cdot)\in\Sigma(\rho)$ and
$\overbar{\partial}_{a_i} F(a,\cdot)\in\Sigma(\rho)$, for every
$i=1,2,\ldots,{i_0}$. But by the chain rule for the Wirtinger
derivatives,
\begin{align*}
\partial_{a_i}F(a,z)
&= z_iz^\alpha\overline{z}^\beta(\partial\partial^{|\alpha|}\overbar{\partial}^{|\beta|}\widetilde{\rho})(a\cdot z + b)
= F_{\alpha+e_i,\beta}(a,z), \text{ and }\\
\overbar{\partial}_{a_i}F(a,z)
&= \overline{z}_iz^\alpha\overline{z}^\beta(\overbar{\partial}\partial^{|\alpha|}\overbar{\partial}^{|\beta|}\widetilde{\rho})(a\cdot z + b)
= F_{\alpha,\beta+e_i}(a,z).
\end{align*}
Consequently, \eqref{eq:induction} is true for every $\alpha$ and
$\beta$.
Because $\Delta^m\widetilde{\rho}=(4\partial\overbar{\partial})^m\widetilde{\rho}$ does not vanish
identically in $V$, for every $\alpha$ and $\beta$ such that
$|\alpha|\le m$ and $|\beta|\le m$ there exists
$b_{\alpha,\beta}\in V$ such that
$\partial^{|\alpha|}\overbar{\partial}^{|\beta|}\widetilde{\rho}(b_{\alpha,\beta})\neq
0$. Then \eqref{eq:F} and~\eqref{eq:induction} with $a=0$ and
$b=b_{\alpha,\beta}$ imply that
$z^\alpha\,\overline{z}^\beta\in\Sigma(\rho)$. Consequently, $\Sigma(\rho)$ contains
all functions of the form
\begin{equation}
\label{eq:SW-polys}
p(z)
= \sum_{\substack{|\alpha|\le m,\\|\beta|\le m}}
c_{\alpha\beta}z^\alpha\overline{z}^\beta,
\end{equation}
where $z\in\overbar{B}_R$, $m\in\mathbb{N}$ and $c_{\alpha\beta}\in\mathbb{C}$ are
arbitrary. Functions of the form~\eqref{eq:SW-polys} form a
self-adjoint algebra of continuous complex functions on the compact
set $\overbar{B}_R$, and that algebra separates points on $\overbar{B}_R$ and
vanishes at no point of $\overbar{B}_R$. By the Stone--Weierstrass theorem
\cite{MR0385023} such an algebra contains all continuous complex
functions in its uniform closure, and therefore so does $\Sigma(\rho)$.
\end{proof}
\section{Introduction}
Self-sustained oscillatory systems will synchronize with an external
source of periodic perturbation, provided that the frequency and the
strength of the injection lie within the locking range. A laser
subject to external optical injection behaves the
same~\cite{lau2009enhanced}. What sets optical oscillators apart from
electronic ones is the nature of the propagating electromagnetic
field, which has two orthogonal polarization modes that can be
observed with a pair of basis polarization components (a meaningful
reference coordinate system), be it linear, circular, or elliptical.
In the following treatment, we choose to express polarization in
terms of a complex amplitude $E=(E_-, E_+)\in\mathbb{C}^2$ that multiplies a
carrier wave
of the form $e^{-i ( k x - \omega t )}$, where $k$ is the wave vector,
$x$ is the spatial coordinate, $\omega$ is the angular frequency, and
$t$ is the time, such that ${k,x,\omega,t}\in\mathbb{R}$. The coordinates $E_\pm$
of $E$ are the \emph{right} $(+)$ and \emph{left} $(-)$
\emph{circularly polarized} components; they are related to the
orthogonal linear components $E_x$ and $E_y$ of the electric field by
\begin{equation*}
E_x = \frac{E_++E_-}{\sqrt{2}}\text{ and }
E_y = -i\frac{E_+-E_-}{\sqrt{2}}.
\end{equation*}
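Conversely, inverting these relations gives
\begin{equation*}
E_- = \frac{E_x-iE_y}{\sqrt{2}}\text{ and }
E_+ = \frac{E_x+iE_y}{\sqrt{2}}.
\end{equation*}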
The electric field emitted by a laser is
\begin{equation*}
\mathcal E(x,t)=\re\left( E(t)e^{-i(kx-\omega t)} \right),
\end{equation*}
where $E(t)$ is called a \emph{slowly varying amplitude}.
In the absence of laser cavity anisotropies, the temporal behavior of a
semiconductor laser under external optical injection can be expressed
with the spin-flip rate
equations~\cite{san1995light,martin1997polarization}, which describe the
complex-valued components $E_\pm(t)$ of the slowly varying amplitude
$E(t)$ as
\begin{subequations}
\label{eq:Martin-Regalado}
\begin{align}
\frac{d}{dt} E_\pm(t)
&=\kappa(1+i\alpha)\, (N(t) \pm n(t)-1) E_\pm(t) + \kappa \eta u_\pm(t),\\
\frac{d}{dt} N(t)
&= -\gamma(N(t)-\mu) - \gamma(N(t)+n(t))|E_+(t)|^2
- \gamma(N(t)-n(t))|E_-(t)|^2 ,\\
\frac{d}{dt} n(t)
&= -\gamma_sn(t) - \gamma(N(t)+n(t))|E_+(t)|^2
+ \gamma(N(t)-n(t))|E_-(t)|^2 ,
\end{align}
\end{subequations}
where $N(t)$ and $n(t)$ are real-valued functions; $N$ is the
difference between the normalized upper and lower state populations,
i.e., the normalized total carrier number in excess of its value at
transparency; $n$ is the normalized imbalance between the population
inversions (in reference to the populations of the magnetic
sublevels), $u_\pm$ are the circularly polarized components of the
electric field of an external injection $u=(u_-, u_+)\in\mathbb{C}^2$, that
is, the amplitude of the external light that goes into the laser,
$\eta$ is the coupling efficiency factor, $\alpha$ is the linewidth
enhancement factor that refers to saturable dispersion (Henry factor),
$\mu$ is the normalized injection current, $\kappa$ is the decay rate
of the cavity electric \emph{field} whence $(2\kappa)^{-1}$ is the
cavity photon lifetime, $\gamma$ is the decay rate of the total
carrier number, and $\gamma_s$ is the excess in the decay rate that
accounts for the mixing in the carriers with opposite spins.
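For readers who wish to experiment with
system~\eqref{eq:Martin-Regalado}, it can be integrated numerically,
for instance with SciPy. In the sketch below the complex fields are
split into real and imaginary parts; the parameter values are
illustrative placeholders only and are not the values used in our
figures.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative placeholder parameters
kappa, alpha, eta = 300.0, 0.0, 1.0  # field decay, Henry factor, coupling
gamma, gamma_s, mu = 1.0, 50.0, 2.0  # carrier decays, injection current
u_m, u_p = 0.01, 0.01                # injected field (u_-, u_+)

def rhs(t, y):
    Em, Ep = y[0] + 1j * y[1], y[2] + 1j * y[3]
    N, n = y[4], y[5]
    dEm = kappa * (1 + 1j * alpha) * (N - n - 1) * Em + kappa * eta * u_m
    dEp = kappa * (1 + 1j * alpha) * (N + n - 1) * Ep + kappa * eta * u_p
    dN = (-gamma * (N - mu) - gamma * (N + n) * abs(Ep)**2
          - gamma * (N - n) * abs(Em)**2)
    dn = (-gamma_s * n - gamma * (N + n) * abs(Ep)**2
          + gamma * (N - n) * abs(Em)**2)
    return [dEm.real, dEm.imag, dEp.real, dEp.imag, dN, dn]

y0 = [0.1, 0.0, 0.1, 0.0, 1.0, 0.0]  # (Re E-, Im E-, Re E+, Im E+, N, n)
sol = solve_ivp(rhs, (0.0, 50.0), y0, rtol=1e-8, atol=1e-10)
\end{verbatim}
A successfully injection-locked solution is recognized by the state
settling to a time independent value, i.e., to an equilibrium point
of the system.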
The rate equations~\eqref{eq:Martin-Regalado} were derived to model and
explore polarization properties of Vertical-Cavity Surface-Emitting
Lasers (VCSELs). The rate equations use a normalized injection current
such that the unitless injection $\mu=1$ corresponds to laser
threshold operation, and $\mu\approx 3$ corresponds to an output emission of
$1$ mW for a typical VCSEL. In the physical world, an array of VCSELs
is produced on a semiconductor wafer, where stacks of dielectric
materials form high-reflectivity Bragg mirrors on the top and bottom
sides of the wafer. The mirrors confine an active region in between,
comprising just a few quantum wells with a thickness of some tens of
nanometers. Depending on the active region diameter, the threshold
current and the maximum emission power may be tailored for specific
applications.
Lasers are known to exhibit a rich dynamical behavior under external
optical injection~\cite{wieczorek2005dynamical, kane2005unlocking,
MR2346863, MR2313730, al2013dynamics}. Depending on laser
properties and the injected optical power and its frequency, the
differential equation system may converge toward an \emph{equilibrium
point} (a time independent solution, also called \emph{steady
state}, \emph{stationary point}, or \emph{critical point}) with
locked phase synchronization. This phenomenon is called
\emph{injection locking}~\cite{siegman1986lasers}. Alternatively, the
system may manifest periodic oscillations, or
chaos~\cite{thornburg1997chaos, MR2299634}. In this work, we explore
equilibrium points of system~\eqref{eq:Martin-Regalado} and study
their stability. While in a physical system injection locking is
possible only at a stable equilibrium, understanding the unstable
equilibrium points provides important insight about the phase space of
the system.
In our previous work~\cite{vonLerber2019alloptical} we concluded that
in the case of linear polarization, a stably injection-locked laser
approximates a normalization operation that can be used for arithmetic
computations. In this paper, we widen the scope and explore the
equilibrium points in greater detail. Our main results regarding the
dynamics of system~\eqref{eq:Martin-Regalado} are:
\begin{enumerate}[(i)]
\item If the injected field $u\in\mathbb{C}^2$ is sufficiently weak (small in
magnitude), then system~\eqref{eq:Martin-Regalado} has nine
equilibrium points
(Theorem~\ref{thm:equilibrium-small-dynamics}). If $u$ is
sufficiently strong (large in magnitude), then it only has a single
equilibrium point (Theorem~\ref{thm:strong-injection}).
\item Dependence of the equilibrium points on the injected field $u$
is described in an asymptotic sense in the limits $|u|\to 0$ and
$|u|\to\infty$ (Theorems~\ref{thm:equilibrium-small-dynamics}
and~\ref{thm:strong-injection}). A method for calculating the exact
values of the equilibrium points is provided for weak $u$ in terms
of an ordinary differential equation
(Theorem~\ref{thm:solution-from-IVP}).
\item Under the assumption that $\alpha=0$ and that the injected field
$u$ is weak, it is proved that one of the nine equilibrium points is
asymptotically stable, while the remaining eight are unstable
(Theorem~\ref{thm:stability}).
\end{enumerate}
The consequence of the aforementioned results is that under weak
injection of elliptically polarized light the injection-locked laser
will emit linearly polarized output such that the input state of
polarization is projected to a linear state of polarization (see
Figure~\ref{fig:Poincare_sphere}a). Under strong injection of
elliptically polarized light, the injection-locked laser will emit
light with an elliptical state of polarization, yet, the polarization
is shifted toward a linear state of polarization, as shown in
Figure~\ref{fig:Poincare_sphere}b.
\begin{figure}
\centering
\begin{picture}(100,100)
\put(-50,0){\includegraphics[width=7cm]{Poincare_sphere.png}}
\put(-10,100){a} \put(92,105){b}
\end{picture}
\caption{The state of polarization is transformed by the
injection-locked laser. Schematic illustration on Poincar\'e
sphere~\cite{shurcliff}: {(a)} A weak injected arbitrary state of
elliptical polarization ($\bullet$) is projected on equator
($\circ$) by the injection-locked laser emission. {(b)} Under a
strong elliptical state of polarization input, the state of
polarization of the injection-locked output emission is shifted
toward the equator, yet, will not reach it.}
\label{fig:Poincare_sphere}
\end{figure}
In the last section of this paper, we will investigate a possibility
to use lasers as nodes of an optical neural network. In general,
optical technologies are commonly used for linear operations, such as
Fourier transformation and matrix multiplications, which come
virtually free by use of lenses, mirrors, and other common light
transforming elements. In this respect, optical solutions have been
proposed for matrix multiplications in optical neural
networks~\cite{shen2017deep, harris2018linear}. However, a neural
network consisting of linear transformations only is insufficient, as
such a network is itself linear. As recognized by the optics
community, the nonlinear functions are difficult to realize in
practice, as noted in a recent publication:
\begin{quote}
\emph{Despite these positive results, the scheme faces major
challenges. [...] Then there is the question of the nonlinear
operation needed to link one set of [Mach-Zehnder Interferometers]
with another, which [was] simply simulated using a normal
computer.}~\cite{cartlidge2020optical}
\end{quote}
In this respect, we propose that a laser could provide a useful
nonlinearity. More specifically, a nonlinear activation function of a
node is provided by injection locking; a laser nonlinearly transforms
an injected field (input) into an injection-locked emitted field
(output). As the fields are complex-valued, this also leads in a
natural way to a \emph{complex-valued neural network}.
Complex-valued neural networks are a less studied object than their
real counterpart, nevertheless, they have attracted a considerable
amount of research~\cite{hirose2012complex, aizenberg2011complex,
hirose2014guest}. A desired quality of any class of neural networks
is the \emph{universal approximation property}, namely, that any
continuous function can be approximated to any degree of accuracy by a
network from that class. For real-valued neural networks, necessary
and sufficient conditions for an activation function to generate a
class of neural networks with the universal approximation property are
known~\cite{leshno1993multilayer,hornik1993some}, and also
quantitative bounds for the approximation exist~\cite{mhaskar,
yarotsky2017error}. Besides for the theoretical expressiveness of
neural networks, the choice of an activation function affects their
empirical performance, as, among others, it affects the efficacy of
the training
algorithms~\cite{MR3617773}. In~\cite{2017arXiv170907900V} we
considered universality of laser based neural networks with a
complex-valued activation function.
The recent \emph{universal approximation theorem} for complex-valued
neural networks by F.~Voigtlaender~\cite{voigtlaender2020universal}
characterizes those activation functions for which the associated
complex-valued neural networks have the universal approximation
property. In this theorem, the activation function is required to be
defined globally on the complex plane. As the activation function
induced by injection locking is defined only locally in a neighborhood
of the origin, we extend Voigtlaender's theorem by proving a local
version of the universal approximation theorem (Theorem~\ref{thm:UAT}
stated in the Appendix). This theorem and the results about dynamics
of system~\eqref{eq:Martin-Regalado} will prove the following:
\begin{enumerate}
\item[] The class of complex-valued optical neural networks with nodes
composed of optically injected semiconductor lasers and an
activation function based on injection locking has the universal
approximation property, namely, it can approximate any
complex-valued continuous function to any degree of accuracy
(Theorem~\ref{thm:ONN}).
\end{enumerate}
The paper is organized as follows. In Sections~\ref{sec:weak-fields}
and~\ref{sec:stability} we assume that the injected field $u$ is weak
and consider equilibrium points of system~\eqref{eq:Martin-Regalado}
and their stability, respectively. In Section~\ref{sec:strong-fields}
we consider the case of a strong injected field. In
Section~\ref{sec:neural-network} we propose a design for an optical
neural network with working principle based on injection locking,
provide a mathematical model for such a network, and prove that these
networks have the universal approximation property. In the Appendix,
we prove a local version of the universal approximation theorem for
complex-valued neural networks.
\section*{Acknowledgment}
ML and LY were supported by the Academy of Finland (Finnish Centre of
Excellence in Inverse Modelling and Imaging and projects 273979,
284715, and 312110).
\section{Optical neural networks based on injection locking}
\label{sec:neural-network}
We now describe a design of an optical neural network that can be
implemented with a network of lasers, and whose working principle is
based on injection locking (see Figure~\ref{fig:neural-network}). The
network consists of an input layer (Layer~$I$), an output layer
(Layer~$K$), and one hidden layer (Layer~$J$) in between (the working
principle naturally generalizes to a network with several hidden
layers):
\begin{enumerate}[(i)]
\item In the input layer, each node (artificial neuron) is a
laser. The nodes in this layer are not connected to each other, and
the output of a node is the electric field emitted by the
corresponding laser.
\item In the hidden layer, the nodes are lasers that are coupled to
injected electric fields. The injected fields are composed of fixed
external electric fields together with outputs of the input layer
modified by some passive optical elements, e.g., polarizers or
mirrors, optical isolators, and absorbing components. Due to
injection locking, each laser in the hidden layer stabilizes to some
equilibrium point determined by the injected field, and the output
of a node is the emitted electric field.
The coupling between layers $I$ and $J$ is unidirectional; we note
that one can use lasers of varying powers to replace the use of
optical isolators.
\item Between the hidden layer and the output layer, the electric
fields from the hidden layer are first modified by passive optical
elements, and then joined to form the output of the network. The
nodes in the output layer correspond to exits of optical cables or
waveguides in integrated optics.
\end{enumerate}
The relation between inputs and outputs of the network is set by
choosing the external electric fields that are part of the injected
fields in the hidden layer, and the passive optical elements on both
sides of the hidden layer. We will show that an arbitrary continuous
function can be approximated within any given accuracy by networks of
this form.
\begin{figure}[t]
\hspace{-1.25em}
\begin{subfigure}[b]{0.54\textwidth}
\centering \scalebox{0.9}{\input{nn.tikz}}
\caption{Optical neural network}
\label{fig:neural-network}
\end{subfigure}
\hspace{1.5em}
\begin{subfigure}[b]{0.45\textwidth}
\centering \scalebox{0.95}{\input{ax_actfn.tikz}}
\caption{Activation function}
\label{fig:activation-function}
\end{subfigure}
\caption[Optical neural network]{Schematic illustration of an
optical neural network and a complex-valued activation function
$\rho$ based on injection locking. The parameters in {(b)} are
those of Figures~\ref{fig:ODE-solution} to~\ref{fig:stability}. In
this figure, polarization $\widehat{u}$ of the electric fields in the
network has been chosen so that $\im\rho(\lambda)=0$ for
$\lambda\in\mathbb{R}$. Labels {(i)}--{(iv)} in {(b)} match those of
Figures~\ref{fig:paths} and~\ref{fig:stability}.
Fields $E_i^{(I)}=\lambda_i^{(I)}\widehat{u}$ in the input layer $I$ are
inputs to the network. They are passed through passive optical
elements (which correspond to multiplication by $a_{ji}\in\mathbb{C}$) and
joined with fixed external fields $E_j^{(\mathrm{ext})}=b_j\widehat{u}$
to form a field
$(\sum_i a_{ji}\lambda_i^{(I)}+b_j)\widehat{u} = \lambda_j^{(J)}\widehat{u}$
injected into the $j$:th laser in the hidden layer $J$. Due to
injection locking, the corresponding emitted field $E_j^{(J)}$ is
$\rho(\lambda_j^{(J)})\widehat{u}$. The fields from the hidden layer are
passed through passive optical elements and joined to form outputs
$E_k^{(K)} = \lambda_k^{(K)}\widehat{u}$ of the network.}
\label{fig:neural-network-ab}
\end{figure}
The optical neural network is modeled mathematically as follows.
Indexes of lasers in the input layer are denoted by
$I=\{1,2,\dots,i_0\}$. The output of $i$:th laser is a linearly
polarized electric field $E_i^{(I)}\in\mathbb{C}^2$, and all electric fields
in this layer are assumed to share the same linear polarization, i.e.,
for all $i=1,2,\ldots,i_0$,
\begin{equation}
\label{eq:EJ}
E_i^{(I)} = \lambda_i^{(I)}\widehat{u},
\end{equation}
where $\lambda_i^{(I)}\in\mathbb{C}$, and
$\widehat{u}=(\widehat{u}_-, \widehat{u}_+)\in\mathbb{C}^2\setminus\{0\}$ is fixed and satisfies
$|\widehat{u}_-|=|\widehat{u}_+|$. It is also assumed that the set of all possible
inputs is bounded, i.e., there exists $R>0$ such that whenever
$(\lambda_i^{(I)}\widehat{u})_{i=1}^{i_0}$ is an input to the network, then
$|(\lambda_i^{(I)})_{i=1}^{i_0}|_{\mathbb{C}^{i_0}}\le R$. Here
$|\cdot|_{\mathbb{C}^{i_0}}$ denotes the Euclidean norm on $\mathbb{C}^{i_0}$.
In the hidden layer indexes of lasers are denoted by
$J=\{1,2,\dots,j_0\}$. The passive optical elements between the input
layer and the hidden layer may induce scaling and phase shift to the
electric fields, i.e., field $E_i^{(I)}$ from the $i$:th laser of the
input layer to the $j$:th laser of the hidden layer transforms to
$a_{ji}E_i^{(I)}$, where $a_{ji}\in\mathbb{C}$. The total injected field
$u_j\in\mathbb{C}^2$ to the $j$:th laser in the hidden layer is then the sum
of the modified fields and an external electric field
$E_j^{\mathrm{(ext)}}$, which is assumed to share the same
polarization with the lasers in the input layer:
$E_j^{\mathrm{(ext)}}=b_j\widehat{u}$ for some $b_j\in\mathbb{C}$. Thus,
\begin{equation}
u_j
= \sum_{i=1}^{i_0} a_{ji} E_i^{(I)} + E_j^{\mathrm{(ext)}}
= \left(\sum_{i=1}^{i_0} a_{ji}\lambda_i^{(I)} + b_j\right)\widehat{u}.
\end{equation}
By Theorems~\ref{thm:equilibrium-small-dynamics}
and~\ref{thm:stability}, if the linewidth enhancement factor $\alpha$
of the laser is zero (i.e., $\alpha=0$ in system~\eqref{eq:system})
and the injected field $u_j$ to the $j$:th laser is written as
$u_j=\lambda_j^{(J)}\widehat{u}$, then for some constant $\ell>0$ it holds
that whenever $0<|\lambda_j^{(J)}|<\ell$, the $j$:th laser has
a unique stable equilibrium point (denoted by
$E_{\widehat{u}}^{(+\textsc{x})}(\lambda_j^{(J)})$ in
Theorems~\ref{thm:equilibrium-small-dynamics}
and~\ref{thm:stability}). If $\alpha>0$, then this point is still an
equilibrium point, and it was shown in Section~\ref{sec:weak-fields}
how to check numerically whether for weak enough injected fields it is a
unique stable equilibrium point. Assuming this is the case, after a
successful injection locking the emitted field $E_j^{(J)}\in\mathbb{C}^2$ of
the $j$:th laser in the hidden layer with small enough injected field
$u_j=\lambda_j^{(J)}\widehat{u}\neq 0$ stabilizes to
\begin{equation*}
E_j^{(J)}
= \rho(\lambda_j^{(J)})\widehat{u},
\end{equation*}
where the function
\begin{equation}
\label{eq:actfn}
\rho:=\rho^{(+\textsc{x})}:\{\lambda\in\mathbb{C} : 0<|\lambda|<\ell\} \to \mathbb{C}
\end{equation}
is defined in Theorem~\ref{thm:equilibrium-small-dynamics}.
Figure~\ref{fig:neural-network-ab}b illustrates the function $\rho$
corresponding to the system in Figure~\ref{fig:ODE-solution}.
In the output layer nodes are indexed by $K=\{1,2,\ldots,k_0\}$, and
the $k$:th output $E_k^{(K)}\in\mathbb{C}^2$ of the network is a superposition
of the emitted fields $E_j^{(J)}$ of lasers in the hidden layer
modified by passive optical elements represented by complex numbers
$c_{kj}$:
\begin{equation}
\label{eq:El}
E_k^{(K)}
= \sum_{j=1}^{j_0} c_{kj} E_j^{(J)}
= \sum_{j=1}^{j_0} c_{kj}
\rho(\lambda_j^{(J)})\widehat{u},
\end{equation}
whenever $0<|\lambda_j^{(J)}|<\ell$ for all $j=1,2,\ldots,j_0$.
As the input to the network is of the form
$(\lambda_i^{(I)}\widehat{u})_{i=1}^{i_0}\in(\mathbb{C}^2)^{i_0}$,
$\lambda_i^{(I)}\in\mathbb{C}$, and the output is by~\eqref{eq:El} of the form
$(\lambda_k^{(K)}\widehat{u})_{k=1}^{k_0}\in(\mathbb{C}^2)^{k_0}$,
$\lambda_k^{(K)}\in\mathbb{C}$, the network essentially computes the map
\begin{equation*}
(\lambda^{(I)}_i)_{i=1}^{i_0}
\mapsto
(\lambda^{(K)}_k)_{k=1}^{k_0}
=:\mathcal{M}((\lambda_i^{(I)})_{i=1}^{i_0}).
\end{equation*}
It follows from equations~\eqref{eq:EJ}--\eqref{eq:El} that the $k$:th
component function $\mathcal{M}_k$ of $\mathcal{M}$ is
\begin{equation}
\label{eq:M-l}
\mathcal{M}_k((\lambda_i^{(I)})_{i=1}^{i_0})
= \sum_{j=1}^{j_0}c_{kj}\rho\left(\sum_{i=1}^{i_0} a_{ji}\lambda^{(I)}_i + b_j\right),
\end{equation}
where it is assumed that
\begin{equation}
\label{eq:ONN-assumption}
0<\left|\sum_{i=1}^{i_0} a_{ji}\lambda^{(I)}_i + b_j\right|
<\ell
\text{ for every } j=1,2,\ldots,j_0.
\end{equation}
In~\eqref{eq:M-l} and~\eqref{eq:ONN-assumption} parameters
${a_{ji},c_{kj}}\in\mathbb{C}$ correspond to the passive optical elements
between the layers, and parameters $b_j\in\mathbb{C}$ correspond to the fixed
external electric fields.
\begin{remark}
The lasers in the input layer are not connected with each other,
yet, the formulation assumes that the phase differences remain
constant at the equilibrium point. As is well known, all oscillatory signal
sources, lasers included, fluctuate in phase. This drift will
inevitably invalidate the assumption of the constant phase
difference between two lasers unless they share a common reference
(seed) signal. Therefore, a practical implementation of a
laser-based optical neural network will require a common
narrow-linewidth reference signal that is used to lock enough lasers
in the network. At the bare minimum, all lasers of the first layer
must be injected from the same source. The phase of the injected
reference light may be controlled individually for each network
node, but the natural fluctuations of the reference must be
experienced equally among the injected lasers. This arrangement is
not unlike the clock signal of a digital computer that is used to
synchronize operations between individual circuits.
\end{remark}
Below $\overbar{B}_R\subset\mathbb{C}^{i_0}$ is the closed ball of radius $R$ centered
at the origin.
\begin{theorem}
\label{thm:ONN}
Fix integers $i_0>0$ and $k_0>0$ and a number $R>0$, let $\rho$ be
as in~\eqref{eq:actfn}, and consider an arbitrary continuous
function $f:\overbar{B}_R\to\mathbb{C}^{k_0}$. Let $\epsilon>0$. There exists an
integer $j_0>0$ and numbers ${a_{ji},b_j,c_{kj}}\in\mathbb{C}$,
$j=1,2,\ldots,j_0$, $i=1,2,\ldots,i_0$, $k=1,2,\ldots,k_0$, such
that the following holds:
\begin{enumerate}[(i)]
\item The inequalities~\eqref{eq:ONN-assumption} hold for a.e.\
$(\lambda_i^{(I)})_{i=1}^{i_0}\in\overbar{B}_R$ (the measure on
$\overbar{B}_R\subset\mathbb{C}^{i_0}=\mathbb{R}^{2i_0}$ is the $2i_0$-dimensional
Lebesgue measure), and
\item the function $\mathcal{M}$ defined componentwise a.e.\ in $\overbar{B}_R$
by~\eqref{eq:M-l} is measurable and satisfies
\begin{equation}
\label{eq:ONN}
\big\| \mathcal{M} - f \big\|_{L^\infty(\overbar{B}_R;\mathbb{C}^{k_0})} \le \epsilon.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
Let $U:=\{\lambda\in\mathbb{C}:|\lambda|<\ell\}$ and extend the function
$\rho$ defined in~\eqref{eq:actfn} to a function $\rho:U\to\mathbb{C}$ by
setting $\rho(0):=0$. Then $\rho$ is locally bounded on $U$ and
continuous on $U\setminus\{0\}$, and by
Theorem~\ref{thm:equilibrium-small-dynamics}
\begin{equation*}
\lim_{\substack{\lambda\in\mathbb{R},\\\lambda\to 0^+}}\rho(\lambda)
= -\lim_{\substack{\lambda\in\mathbb{R},\\\lambda\to 0^-}}\rho(\lambda)
\neq 0.
\end{equation*}
In particular $\rho$ is not a.e.\ equal to a continuous function,
and consequently it satisfies both {(i)} and {(ii)} of
Theorem~\ref{thm:UAT} stated in the Appendix (note that if
$\Delta^m\rho\equiv 0$ for some $m\in\mathbb{N}$ in the sense of
distributions, then $\rho$ is a.e.\ equal to a smooth function by
elliptic regularity~\cite{MR1157815}).
Let $f:\overbar{B}_R\to\mathbb{C}^{k_0}$ be a continuous function and fix
$\epsilon>0$. By Theorem~\ref{thm:UAT} there exists an integer
$j_0>0$ and parameters ${a_{ji},b_j,c_{kj}}\in\mathbb{C}$ such that
\begin{equation*}
\sum_{i=1}^{i_0} a_{ji}\lambda_i+b_j \in U
\end{equation*}
for every $j=1,2,\ldots,j_0$ and $(\lambda_i)_{i=1}^{i_0}\in\overbar{B}_R$,
and such that the network $\mathcal{N}:\overbar{B}_R\to\mathbb{C}^{k_0}$ defined
componentwise by~\eqref{eq:C-NN} satisfies
\begin{equation*}
\sup_{(\lambda_i)\in\overbar{B}_R} \big| \mathcal{N}((\lambda_i)_{i=1}^{i_0}) - f((\lambda_i)_{i=1}^{i_0}) \big|_{\mathbb{C}^{k_0}} \le\epsilon.
\end{equation*}
Furthermore, it may be assumed that for every $j$ either
$(a_{j1},a_{j2},\ldots,a_{ji_0})\neq 0$ or $b_j\neq 0$, since
otherwise the corresponding term does not affect the value of
$\mathcal{N}$. Observe that $\mathcal{N}$ is measurable, because the
set
\begin{equation*}
N := \bigcup_{j=1}^{j_0}
\Big\{(\lambda_i)_{i=1}^{i_0}\in\mathbb{C}^{i_0} :
\sum_{i=1}^{i_0 }a_{ji}\lambda_i + b_j = 0\Big\}
\end{equation*}
has $2i_0$-dimensional Lebesgue measure zero and the restriction of
$\mathcal{N}$ to $\overbar{B}_R\setminus N$ is continuous.
Let us define $\mathcal{M}$ by the same parameters $j_0$, $a_{ji}$, $b_j$ and
$c_{kj}$ as $\mathcal{N}$. Because
inequalities~\eqref{eq:ONN-assumption} hold on $\overbar{B}_R\setminus N$,
the function $\mathcal{M}$ is defined a.e.\ in $\overbar{B}_R$. Furthermore,
$\mathcal{M}=\mathcal{N}$ a.e.\ in $\overbar{B}_R$, so $\mathcal{M}$ is measurable and
inequality~\eqref{eq:ONN} holds.
\end{proof}
\section{Introduction }
Some of the results of this study have already been reported in \citeauthor{denis1}
\citeyear{denis1} (Paper I). For a comprehensive
description, the goals and earlier results of this project are
repeated here, but the reader is referred to Paper I for
details on the previously presented results.
About 25\% of the optically visible extragalactic sky is obscured by the dust
and stars of our Milky Way. Dynamically important structures --- individual
nearby galaxies ({\it cf.\,}\ \citeauthor{Dw1} \citeyear{Dw1}) as well as large clusters
and superclusters ({\it cf.\,}\ \citeauthor{A3627} \citeyear{A3627}) --- might still lie
hidden in this zone.
Complete whole-sky mapping of the galaxy and mass distribution is
required in explaining the origin of the peculiar velocity of the
Local Group and the dipole in the Cosmic Microwave
Background.
Various approaches are presently being employed to uncover the galaxy
distribution in the ZOA: deep optical searches, far-infrared
(FIR) surveys (\eg IRAS), and blind \HI\ searches. All methods produce new
results, but all suffer from (different) limitations and selection
effects. Here, the near infrared (NIR) surveys such as 2MASS \cite{2m}
and DENIS \cite{den,denmes} in the southern sky could provide
important complementary data. NIR surveys will:\\
\indent $\bullet$ be sensitive to early-type galaxies --- tracers of
massive groups and clusters --- which are missed in IRAS and \HI\ surveys,\\
\indent $\bullet$ have less confusion with Galactic objects compared to FIR
surveys,\\
\indent $\bullet$ be less affected by absorption than optical surveys.\\
But can we detect galaxies and obtain accurate magnitudes in crowded
regions and at high foreground extinction using NIR surveys? To assess
the performance of the DENIS survey at low Galactic latitudes we
addressed the following questions:
(1) How many galaxies visible in the $B_J$ band ($B_{\rm lim} \approx 19\fm0$)
can we recover in {$I_c$}\ ($0.8\mu \rm m$), {$J$} ($1.25\mu \rm m$) and {$K_s$}
($2.15\mu \rm m$)? Although less affected by extinction (45\%, 21\% and 9\%
as
compared to $B_J$), their respective completeness limits are lower ($16\fm0,
14\fm5$, and $12\fm2$, \citeauthor{gam3} \citeyear{gam3,gam4}).
(2) Can we determine the {$I_c$}, {$J$}, and {$K_s$}\ band
luminosity functions?
(3) Can we map the Galactic extinction from NIR colours of galaxies
behind the Milky Way?
(4) Can we identify galaxies at high extinction ($A_B > 4-5^{\rm
m}$) where optical surveys fail and FIR surveys are plagued by confusion?
(5) Can we recover heavily obscured spiral galaxies detected in a
blind \HI\ search and hence extend the peculiar velocity field
into the ZOA via the NIR Tully\,--\,Fisher relation ?
We pursued these questions by comparing available DENIS data with results
from a deep optical survey in the southern ZOA (Kraan-Korteweg \& Woudt 1994,
Kraan-Korteweg {\it et~al.}\ 1995, 1996, and references therein). In this region
(\mbox{$265{^\circ} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} \ell \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 340{^\circ}$,} $|b| \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 10{^\circ}$), over 11\,000
previously unknown galaxies above a diameter limit of $D\!=\!0\farcm2$ and
with $B \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 19\fm0-19\fm5$ have been identified ({\it cf.\,}\ Fig.~1 in Paper I).
Many of the faint low-latitude galaxies are intrinsically bright
galaxies. Within the survey region, we investigated DENIS data at what seems
to be the core of the Great Attractor (GA), \ie\ in the low-latitude
($\ell\!=\!325{^\circ}$, $b\!=\!-7{^\circ}$), rich cluster Abell
3627, where the Galactic
extinction is well determined \cite{Sey}, and in its extension across the
Galactic Plane where the Milky Way is fully opaque.
\section {Expectation from DENIS}
What are the predictions for DENIS at low latitudes? In unobscured
regions, the density of galaxies per square degree is 110 in the blue
for $B_J\le19\fm0$ \cite{gar}, and 30, 11, and 2 in the {$I_c$} , {$J$}\ and
{$K_s$}\ bands for their respective completeness limits of
$I_{\rm lim}\!=\!16\fm0$, $J_{\rm lim}\!=\!14\fm0$, $K_{\rm lim}\!=\!12\fm2$
(\citeauthor{gam3} \citeyear{gam3,gam4}).
The number counts in the blue decrease with increasing obscuration as
$N(<\!B) \simeq 110 \times {\rm dex} (0.6\,[B-19])\,$deg$^{-2}$. According to
\citeauthor{Car} \shortcite{Car}, the extinctions in the NIR passbands are
$A_{I_c}\!=\!0\fm45$, $A_J\!=\!0\fm21$, and $A_{K_s}\!=\!0\fm09$ for
$A_B\!=\!1\fm0$,
hence the decrease in number counts as a function of extinction is
considerably slower. Figure~\ref{galctsplot} shows the predicted surface
number density of
galaxies for DENIS and for $B < 19$, as a
function of Galactic foreground extinction.
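The scaling behind these predictions is easy to reproduce. The following Python sketch (our illustration, not part of the original analysis) assumes that the Euclidean count slope of 0.6\,dex per magnitude holds in all four passbands and uses only the unobscured surface densities and extinction ratios quoted above.
\begin{verbatim}
# Predicted surface density (galaxies per square degree) behind
# foreground extinction A_B, generalising N(<B) = 110 dex(0.6[B-19]).
counts0 = {'B': 110.0, 'I': 30.0, 'J': 11.0, 'K': 2.0}   # at A_B = 0
ratio   = {'B': 1.00,  'I': 0.45, 'J': 0.21, 'K': 0.09}  # A_X / A_B

def counts(band, A_B):
    """Counts at the band's completeness limit, assuming slope 0.6."""
    return counts0[band] * 10.0 ** (-0.6 * ratio[band] * A_B)

for A_B in (0, 2, 4, 6, 10):
    print(A_B, {b: round(counts(b, A_B), 2) for b in counts0})
\end{verbatim}
At $A_B \simeq 3^{\rm m}$ the sketch already yields higher counts in {$I_c$}\ than in the blue, and at $A_B \simeq 6^{\rm m}$ both {$J$}\ and {$K_s$}\ overtake {$I_c$}, in agreement with the behaviour described below.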
\begin{figure} [hbt]
\centerline {\epsfxsize=9.5cm \epsfbox[20 161 564 532]{gal_cts.eps}}
\caption{Predicted galaxy counts in {$B$} , {$I_c$} , {$J$}\ and {$K_s$}\ as a function of
absorption in {$B$} , for highly complete and reliable DENIS galaxy samples and
a $B_J \leq
19^{\rm m}$ optical sample. }
\label{galctsplot}
\end{figure}
The NIR becomes notably more efficient at $A_B\simeq 2-3^{\rm m}$, while
the Milky Way becomes opaque at $A_B \ge 4^{\rm m}$. At an extinction of $A_B
\simeq 6^{\rm m}$, {$J$}~and {$K_s$}\ become superior to the {$I_c$}\ band, and we can
expect to
find galaxies in {$J$}\ and {$K_s$}, even at $A_B \!=\! 10^{\rm m}$. These are very
rough predictions and do not take into account any dependence on morphological
type, surface brightness, orientation and crowding, which will surely lower the
counts of actually detectable galaxies \cite{gam}.
In April 1997, a new cooling system for the focal instrument of DENIS was
mounted. This appears to increase the {$K_s$}\ band limiting magnitude by $\sim$ 0.5
magnitude and therewith the
number of galaxies detectable in the deepest obscuration layer of the Milky
Way by a factor of about 2.
Consequently,
the {\it long dashed curve\/} representing
the {$K_s$}\ counts in Figure~\ref{galctsplot} should be moved up by roughly a factor
of 2, which
would make the {$K_s$}\ passband competitive with {$J$}\ starting at $A_B \simeq
7^{\rm m}$.
\section{DENIS-data in the Norma cluster A3627}
\subsection{Recovery of galaxies found in the {$B$}\/ band } \label{recov}
Three high-quality DENIS strips cross the cluster Abell 3627
practically through its center. We inspected 66 images
which cover about one-eighth of the cluster area within
its Abell-radius of $R_A = 1\fdg75$ (each DENIS image is
$12\hbox{$^\prime$}\times12\hbox{$^\prime$}$, offset by $10\hbox{$^\prime$}$ in declination and
right ascension). The extinction over the regarded cluster area varies
as $1\fm2 \le$ A$_B \le 2\fm0$.
We cross-identified the galaxies found in the optical survey with the
DENIS {$I_c$}, {$J$}, and {$K_s$}\ images. An example of a DENIS image in
the central part of the cluster is given in Figure~3 of Paper I.
On the 66 images, 151 galaxies had been identified in the optical. We
have recovered 122 galaxies in the {$I_c$}\ band, 100 in the {$J$}\ band, and
74 in the {$K_s$}\ band (not including galaxies visible on more than one
image). As suggested by Figure~\ref{galctsplot}, the {$K_s$}\
band indeed is not optimal for identifying obscured galaxies at
these latitudes due to its shallow magnitude limit. Most
of the galaxies not re-discovered in {$K_s$}\ are low surface brightness
spiral galaxies.
Surprisingly, the {$J$}\ band provides better galaxy detection than the {$I_c$}\ band.
In the latter, the severe star crowding makes identification of faint
galaxies very difficult. At these extinction levels, the optical survey does
remain the most efficient in {\it identifying} obscured galaxies.
\subsection{Photometry of galaxies in the Norma cluster }
We have used a preliminary galaxy pipeline \cite{gam3,gam4}, based upon the
SExtractor package \cite{bertinarnouts},
on the DENIS data in the Norma cluster to obtain {$I_c$} , {$J$}\
and {$K_s$}\ Kron photometry. Although many of the galaxies have a considerable
number of stars superimposed on their images, magnitudes derived
from this fairly automated algorithm agree well with the few known, independent
measurements.
Magnitudes could be determined for 109, 98 and 64 galaxies of the 122,
100, 74 galaxies re-discovered in {$I_c$}, {$J$}, and {$K_s$}. Figure~\ref{lfnorplot} shows
the luminosity function (LF) of these galaxies together with the {$B$}\ band
LF of the 151 galaxies visible on the same 66 DENIS
images. The histograms are normalised to the area covered
by the 66 images. The hashed area marks the 60 galaxies common to all
4 passbands. This subsample is mainly restricted by the {$K_s$}\ band.
The magnitudes in the bottom row are corrected for extinction. The
corrections are derived from Mg$_2$-indices
of elliptical galaxies in the cluster (Woudt {\it et~al.}\ in prep.)
and interpolations according to the Galactic \HI\ distribution.
\begin{figure} [ht]
\vspace{-4.5cm}
\centerline {\epsfxsize=12.cm \epsfbox{lf_n_all.eps}}
\caption{The luminosity function for the observed Norma galaxies in {$B$}, {$I_c$},
{$J$},
and {$K_s$}. The bottom panels display magnitudes corrected for foreground
extinction. The {\it hashed histograms\/} represent the sample common to all 4
passbands ($N=60$).}
\label{lfnorplot}
\end{figure}
To assess whether the LFs displayed here are, in fact, representative of the
cluster as a whole --- and therefore the extinction corrected NIR {$I_c$}, {$J$},
and {$K_s$}\ band LFs displayed in the lower panels characteristic for rich
clusters --- we compared the {$B$}\ band LF of the 151 galaxies on the 66
DENIS-images with the cluster LF as a whole ({\it cf.\,}\ \citeauthor{P_dis}, \citeyear{P_dis}).
The extinction-corrected blue cluster LF of the 609 galaxies within the Abell
radius, scaled to the Abell area, actually has lower number counts than the
{$B$}$^o$ band LF displayed in the bottom panel of Figure~\ref{lfnorplot}. This is
explained by the fact that our three strips cross the center of the cluster
and therewith the region of highest density. The comparison indicates that we
are fairly complete to a magnitude of $B^o = 16\fm5$, which is more or less
the shaded area, and that the shape of the total LF is very similar to the
distribution of the common subsample.
Even though these LFs are still preliminary (we have so
far covered only a small area of the Norma cluster and will have missed
dwarf galaxies and other LSB galaxies due to the foreground
obscuration) the here determined extinction-corrected
LFs of the galaxies common to all passbands can be regarded as a first
indication of the bright end of the NIR \mbox{{$I_c$},} {$J$}, and {$K_s$}\ band
LFs in rich clusters.
From the colours of the Norma galaxies discussed below, we know that
the extinction corrections are of the correct order.
Adopting a distance to A3627 of 93 Mpc \cite{A3627},
thus $m\!-\!M = 34\fm8$, the 60 galaxies cover a luminosity range
in {$K_s$}\ of \mbox{$-25\fm3 < M_K^o < -21\fm8$.} This
compares well with the bright end of the \mbox{{$K_s$}\ band} LF of the Coma
cluster core derived by \citeauthor{mobasher} \shortcite{mobasher}, although it remains
puzzling why the number counts derived by them ({\it cf.\,}\ their Table~1)
are so much lower compared to the A3627 cluster.
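For reference, the quoted distance modulus follows directly from the adopted distance:
\[ m - M = 5\log_{10}\left(\frac{93\times10^{6}\,{\rm pc}}{10\,{\rm pc}}\right)
= 5\log_{10}\left(9.3\times10^{6}\right) \approx 34\fm8 . \]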
The NIR magnitudes have been used to study the colour\,--\,colour diagram
\mbox{$I-J$}\ versus \mbox{$J-K$}. This has been presented and discussed in detail in Paper~I.
Here it suffices to state that the extinction-corrected colours of the
cluster galaxies match the colours of galaxies in unobscured high latitude
regions \cite{gam3} extremely well, suggesting that our preliminary
photometry is reasonably accurate. Moreover, the shift in colour can be
fully explained by the foreground extinction or, more interestingly, the NIR
colours of obscured galaxies provide, in principle, an independent way of
mapping the extinction in the ZOA (see also \citeauthor{gam2}, \citeyear{gam2}).
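As a simple illustration of this last point (our sketch, using only the \citeauthor{Car} extinction ratios quoted in Section~2), an observed NIR colour excess converts directly into an estimate of $A_B$:
\begin{verbatim}
ratio = {'I': 0.45, 'J': 0.21, 'K': 0.09}   # A_X / A_B

def A_B_from_excess(excess, band1, band2):
    """Foreground A_B implied by the reddening E(band1 - band2)."""
    return excess / (ratio[band1] - ratio[band2])

print(A_B_from_excess(0.6, 'I', 'J'))   # E(I-J) = 0.6 mag -> A_B = 2.5 mag
\end{verbatim}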
\section{`Blind' search for galaxies}
The GA is suspected to cross the Galactic Plane from the Norma cluster in the
south towards the Centaurus cluster in the north. In this region, we
performed a search for highly obscured galaxies on the so far existing
DENIS survey images. The search area within the GA-region
--- marked as a {\it dashed box\/} in Figure~\ref{bsrchplot} --- is defined as
$320{^\circ} \leq \ell \leq 325^{^\circ}$ and $|b|\leq 5{^\circ}$.
\begin{figure} [ht]
\hfil{\epsfxsize=12.5cm \epsfbox{bsrch3.ps}}\hfil
\caption{Galaxy distribution in the GA region displaying Lauberts galaxies
($D \ge 1\farcm0$, Lauberts 1982) and galaxies from the deep optical search
($D \ge 0\farcm2$, outlined area). The superimposed {\it contours\/} represent
absorption levels of $A_B=1\fm5, 2\fm5, 5\fm0$ ({\it thick line\/}), $7\fm5$
and $10\fm0$, as determined from \hi\ column densities and assuming a constant
gas/dust ratio. The {\it box\/} marks the DENIS blind search area with the
results shown enlarged in the right panel: optical galaxies re-identified on
DENIS images ($N\!=\!31$, including 3 uncertain identifications) as {\it
large encircled crosses\/}, optical galaxies not seen by DENIS ($N\!=\!6$) as
{\it triangles\/}, and newly identified, optically invisible galaxies
($N\!=\!15$) as {\it filled dots\/}.}
\label{bsrchplot}
\end{figure}
Of the 1800 images in this area we have inspected 385 by eye (308 in {$K_s$}).
37 galaxies at higher latitudes were known from the optical survey.
28 of these could be re-identified in the {$I_c$}\ band, 26 in the
{$J$}\ band, and 14 in the {$K_s$}\ band. They are plotted as {\it encircled
crosses\/} in Figure~\ref{bsrchplot}. In addition, we found 15 new galaxies
in {$I_c$}\ and {$J$}, 11 of which also appear in the {$K_s$}\ band ({\it filled
circles\/}). The ratios of galaxies found in {$I_c$}\ compared
to {$B$}, and of {$K_s$}\ compared to {$I_c$}, are higher than in the Norma
cluster. This is due to the higher obscuration level (starting
with A$_B \simeq 2\fm3 -3\fm1$ at the high-latitude border of the
search area, {\it cf.\,}\ {\it contours\/} of Fig.~\ref{bsrchplot}).
On average, we have found about 3.5 galaxies per square degree in the
{$I_c$}\ band. This roughly agrees with the predictions of
Figure~\ref{galctsplot}, although the number of the inspected images and
detected galaxies are too low to allow a statistical conclusion. Since we
looked in an overdense region we expect {\it a priori} more
galaxies. On the other hand, we do not expect to find galaxies below
latitudes of $b \simeq 1{^\circ}-2{^\circ}$ in this longitude
range \cite{gam}. The visual impression of the low-latitude
images substantiates this --- the images are nearly fully covered with stars.
\begin{figure} [htb]
\centerline {\epsfxsize=12.cm \epsfbox{bs_ex_b.ps}}
\caption{DENIS survey images (before bad pixel filtering)
of four galaxies found in
the deepest extinction layer of the Milky Way; the {$I_c$}\ band image
is at the {\it top\/}, {$J$}\ in the {\it middle\/} and {$K_s$}\ at the {\it
bottom\/}.}
\label{bsexplot}
\end{figure}
Figure~\ref{bsexplot} shows a few characteristic examples of highly
obscured galaxies found in the DENIS blind search. {$I_c$}\ band images are
at the top, {$J$}\ in the middle and {$K_s$}\ at the
bottom. The left-most galaxy is located at $(l,b) = (324\fdg6,-4\fdg5$),
with $A_B = 2\fm8$ as estimated from \mbox{\HI -column} densities
\cite{kerr} following the precepts of \citeauthor{BH} \shortcite{BH}.
It is barely visible in the {$J$}\ band, although its
{$B$}\ band image is similar to the {$B$}\ of the second galaxy. This galaxy
at $(l,b) = (324\fdg7,-3\fdg5$) is, however, subject to heavier extinction
($A_B = 3\fm7$) and hence easier to recognise in the NIR. The most
distinct image is the {$J$}\ band. The third galaxy
at even higher extinction $(l,b,A_B) = (320\fdg1,+2\fdg5,4\fm6$) is
not visible anymore in the {$B$}\ band. Neither is the fourth galaxy:
at $b=+1\fdg9$ and $A_B = 6\fm3$ this galaxy is not even
visible in the {$I_c$}\ band and very faint in {$J$}\ and {$K_s$}.
The most important result from this search is that {\it
highly obscured, optically
invisible galaxies can indeed be unveiled in the NIR\/} and --- as
indicated with the distribution in the right panel of
Figure~\ref{bsrchplot} --- found at lower latitudes than the deep optical
survey. The lowest Galactic latitude at which we found a galaxy is
$b \simeq 1.5{^\circ}$ and $A_B \simeq 7\fm5$.
\section{Galaxies detected in H\thinspace\protect\footnotesize\bf I\protect\normalsize }
NIR surveys are the only tools that will identify
early-type galaxies and therewith uncover the
cores of massive groups and clusters at very low-latitudes.
In addition, highly obscured spiral galaxies should be detectable with
these surveys as well. Such identifications will prove important
in connection with the systematic blind \HI\ survey currently
conducted with the Multibeam Receiver (13 beams in the focal plane
array) at the 64\,m Parkes telescope: a deep survey with a $5 \sigma$
detection limit of 10\,mJy is being performed in the most opaque
region of the southern Milky Way ($213{^\circ} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} \ell \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}
33{^\circ}$; $|b| \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 5{^\circ}$) for the velocity range of
$-1000 \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} v \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 12000$\,km\,s$^{-1}$ \cite{MB}. Roughly
3000 detections are predicted. Hardly any of
them will have an optical counterpart. However, at these latitudes
many might be visible
in the NIR. The combination of data from these two surveys, \ie\
NIR photometry with \HI -data (velocity and linewidth) will
prove particularly interesting because it will allow
the extension of peculiar velocity data {\it into} the ZOA
via the NIR Tully\,--\,Fisher relation.
Only a few cross-identifications were possible with the data
available from both surveys by June 1997. But we could identify thirteen
galaxies detected blindly in \HI\ on existing DENIS images. Four of
them are visible in the {$B$}, {$I_c$}, {$J$}, and {$K_s$}\ bands. The other
galaxies are only seen in the NIR. Four of them need
further confirmation.
\begin{figure} [htb]
\centerline {\epsfxsize=12.cm \epsfbox{mb_zoa_4.ps}}
\vspace{-.5cm}
\caption{DENIS survey images (before bad pixel filtering)
of four galaxies detected
blindly in \hi\ at $|b| \le 5{^\circ}$; the {$I_c$}\ band image
is at the {\it top
\/}, {$J$}\ in the {\it middle\/} and {$K_s$}\ at the {\it bottom\/}.}
\label{hidetplot}
\end{figure}
Figure~\ref{hidetplot} shows four examples of the candidates.
The first galaxy is a nearby ($v\!=\!1450 \, \rm km \, s^{-1}$) ESO-Lauberts
galaxy (L223-12) at
$b=+4\fdg8$ and $A_B = 3\fm2$. It is very impressive
in all three NIR passbands (note the larger image scale for this galaxy,
\ie $3\farcm3$ instead of $1\farcm7$).
The second galaxy at $(l,b,A_B) = (306\fdg9,+3\fdg6,3\fm3$) is
slightly more distant ($v\!=\!2350\, \rm km \, s^{-1}$).
This galaxy has also been
identified in {$B$}\ and is quite distinct in {$I_c$}\ and {$J$} .
The third galaxy at $(b,A_B) \simeq (-2\fdg9, 4\fm6)$ had been detected by us
as an OFF-signal at $v\!=\!2900\, \rm km \, s^{-1}$
during pointed \HI\ observations
in the ZOA. It has no optical counterpart but can be clearly
seen in all three NIR passbands. The last example is an uncertain
NIR counterpart at $(b,A_B) \simeq (+1\fdg5,7\fm5$) of a galaxy
detected in \HI\ at $v\!=\!1450\, \rm km \, s^{-1}$.
It is barely visible in the {$I_c$}\ band.
Although the present data is scarce, NIR counterparts of \HI\ detected,
highly obscured galaxies certainly seem to merit a systematic
exploitation for large-scale structure investigations.
\section{Conclusion }
Our pilot study illustrates the promises of using the NIR surveys for
extragalactic large-scale studies behind the ZOA as well as for the mapping
of the Galactic extinction.
{\sl At intermediate latitudes and extinction}
($5{^\circ} < |b| < 10{^\circ}$, $1^{\rm m} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} A_B \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 4-5^{\rm m}$)
optical surveys remain superior for identifying
galaxies. However, the NIR luminosities and colours together with
extinction data from the NIR colours will prove invaluable in
analysing the optical survey data and their distribution in redshift
space, and in the final merging of these data with existing sky
surveys. Despite the high extinction and the star crowding
at these latitudes, {$I_c$} , {$J$}\ and {$K_s$}\ photometry from the survey
data can be successfully performed at these low latitudes and lead,
for instance, to the preliminary $I_c^o$, $J^o$ and $K_s^o$ galaxy
luminosity functions in A3627.
{\sl At low latitudes and high extinction}
($|b| < 5{^\circ}$ and $A_B \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 4-5^{\rm m}$)
the search for `invisible' obscured galaxies on existing DENIS images
implies that NIR surveys can trace galaxies down to about $|b| \simeq
1\fdg5$. The {$J$}\ band was found to be optimal for identifying galaxies up to
$A_B \simeq 7^{\rm m}$, although this might change in favour of {$K_s$}\
with the new cooling system. NIR surveys can hence further reduce the
width of the ZOA. They are furthermore the only tool that permits the
mapping of early-type galaxies --- tracers of density peaks --- at
high extinction.
The combination of two different surveys, \ie NIR data for highly obscured
spiral galaxies detected in a systematic blind \HI\ survey --- a fair
fraction could indeed be re-identified on DENIS-images --- allows the mapping
of the peculiar velocity field in the ZOA through the NIR Tully\,--\,Fisher relation. This will be
pursued as well at intermediate latitudes ($5{^\circ} < |b| < 10{^\circ}$) with
pointed \HI\ observations of optically identified spiral galaxies. About 300
spiral galaxies have already been detected (\citeauthor{HIw} \citeyear{HIw}).
Whether the systematic identification of ZOA galaxies from the DENIS survey
must be performed by visual examination or whether galaxies can be
successfully extracted using classical algorithms (\citeauthor{gam3} \citeyear{gam3,gam4}) or
artificial neural
networks (\citeauthor{bertinarnouts} \citeyear{bertinarnouts}, Bertin, in these
proceedings) or a combination of both requires further
exploration.
\acknowledgements{We thank Jean Borsenberger for providing bias subtracted,
flat fielded DENIS
images, Emmanuel Bertin for supplying recent updates of his
SExtractor software package, and Eric Copet for providing software to display
Figures 4 and 5.}
\section{Introduction}
Let us begin with an example.
\begin{example}
Prove that $f\ge 0$ under the constraints that $a\ge 0, b\ge 0, c\ge 0, abc-1=0,$ where
\[\begin{array}{rl}
f = & 2b^4c^4+2b^3c^4a+2b^4c^3a+2b^3c^3a^2+2a^3c^3b^2+2a^4c^3b+2a^3c^4b+2a^4c^4\\
& +2a^3b^4c+2a^4b^4+2a^3b^3c^2+2a^4b^3c-3b^5c^4a^3-6b^4c^4a^4-3b^5c^3a^4\\
& -3b^4c^3a^5-3b^4c^5a^3-3b^3c^5a^4-3b^3c^4a^5.
\end{array}
\]
To prove the inequality by Maple, we first start Maple and load the \RC\ library together with two of its subpackages as follows.
\noindent\verb|> with(RegularChains):|\\
\verb|> with(ParametricSystemTools):|\\
\verb|> with(SemiAlgebraicSetTools):|
Then define an order of the unknowns:
\noindent\verb|> R := PolynomialRing([a, b, c]);|
Now, by calling
\noindent\verb|> RealRootClassification([a*b*c-1], [a, b, c], [-f], [ ], 2, 0, R);|\\
we will know at once that the inequality holds.
\end{example}
In this paper, we give a detailed introduction to using the function \RRC\ ({\tt RRC} for short) to prove that a polynomial is nonnegative under polynomial inequality and/or equation constraints. Before we start, we would like to give some historical remarks.
It is well-known that Tarski \cite{tarski} proved that all elementary
algebraic and geometric propositions are decidable and gave an
algorithm for deciding whether or not a given elementary algebraic
and geometric proposition is true. Although Tarski's method cannot be applied to any non-trivial theorem
proving due to its high complexity, it is a milestone since, for the first time, it told us quantifier elimination (QE) in real closed fields is decidable. Collins \cite{CAD} proposed a so-called Cylindrical
Algebraic Decomposition (CAD) method in 1975. Although the CAD
method is of doubly exponential complexity, it has been successfully
applied to many non-trivial theorem proving and discovering. There
are many subsequent work which improved the CAD algorithm and have
been implemented as several well-known tools for solving general
QE problems, {\it e.g.}, {\tt QEPCAD}.
Yang {\it et al.} \cite{yhz} gave a theorem for explicitly
determining the condition for a given polynomial to have a given
number of real (and/or complex) zeros.
Sometimes the conditions are called the
root-classification of the polynomial. With this theorem and its
generalization to the case of semi-algebraic systems, Yang {\it et al.}
proposed an algorithm for proving and discovering inequality-type theorems
automatically \cite{yhx, yang, xia}. Indeed, the algorithm solves a special kind of QE problems which have at least one polynomial equation. A key concept of the method is {\em border polynomial}.
This algorithm has been improved
and implemented by Xia as a Maple package DISCOVERER
\cite{discover}. Since 2009, the main functions of DISCOVERER have
been integrated into the {\tt RegularChains} library of Maple.
Since then, the implementation has been improved by Chen {\it et al.} \cite{chen12a,chen12b,chen13}. All the examples reported in this paper can be solved with Maple version 13 or higher.
There are many other methods based on different principles for polynomial inequality proving. Since this paper is more or less a user guide on using \RRC\ to prove the nonnegativity of polynomials, we omit the introduction to those methods.
The rest of the paper is organized as follows. Section \ref{RC} describes the usage of the function
\RRC\ of {\tt RegularChains}. Section \ref{3} shows by examples how to use \RRC\ to prove a polynomial is nonnegative subject to some polynomial inequality and/or equation constraints. Some tricks for using the tool are also provided.
\section{\RRC}\label{RC}
In this section we describe in detail
the calling sequence, the input and output of {\tt
RealRootClassification} ({\tt RRC} for short).
First of all, you should install Maple in your computer. The
version of Maple should be at least Maple 13. Then, when Maple is
started, you should load the \RC\ library as follows before using
{\tt RRC}.
\noindent\verb|> with(RegularChains):|\\
\verb|> with(ParametricSystemTools):|\\
\verb|> with(SemiAlgebraicSetTools):|
The calling sequence of {\tt RealRootClassification} is
\[{\tt RealRootClassification}(F, N, P, H, d, a, R);\]
where the first four parameters $F, N, P$ and $H$ represent a semi-algebraic system of the following form
\[F=0, N\geq 0, P>0, H\neq 0.\]
Herein, each of $F, N, P$ and $H$ is a set of polynomials in unknowns
$x_1,...,x_n$ with rational coefficients. If $F=[f_1,...,f_s],$
$N=[g_1,...,g_t],$ $P=[p_1,...,p_k],$ and $H=[h_1,...,h_m],$ then
$F=0, N\geq 0, P>0, H\neq 0$ is a short form for the following
system
\[\left\{\begin{array}{l}
f_1=0,...,f_s=0,\\
g_1\geq 0,...,g_t\geq 0,\\
p_1>0,...,p_k>0,\\
h_1\ne 0,...,h_m\ne 0.
\end{array}\right.\]
It should be pointed out that $s$ must be positive, {\it i.e.}, the system must have at least one equation.
The last formal parameter $R$ is a list of the variables
$x_1,...,x_n$, which defines an order of the variables and should be defined as a type {\em PolynomialRing} (see Example \ref{ex:1}).
The formal
parameter $d$ is a positive integer which indicates the last $d$
elements in $R$ are to be viewed as parameters of the given system.
The formal parameter $a$ has two possible forms. If $a$ is a
nonnegative integer, then \RRC\ will output the conditions for the
system $[F=0, N\geq 0, P>0, H\neq 0]$ to have exactly $a$ distinct
real solutions. If $a$ is a range, {\it e.g.} $2..3$, then \RRC\ will
output the conditions for the number of distinct real solutions of
the system $[F=0, N\geq 0, P>0, H\neq 0]$ to fall into the range $a$.
If the second element of a range is an unassigned name, it means positive infinity.
We illustrate the usage of \RRC\ by the following simple example.
\begin{example}\label{ex:1}
We want to know the conditions on the coefficients of $f=ax^2+bx+c$
for $f$ to have real roots if $a\ne 0.$
\end{example}
After loading the \RC\ library and the two subpackages, we define the system as follows.
\noindent\verb|> f:=a*x^2+b*x+c;|\\
\verb|> F:=[f]; N:=[ ]; P:=[ ]; H:=[a];|\\
\verb|> R:=PolynomialRing([x,a,b,c]);|
To get more information from the output of the function directly, we
type in:\\
\verb|> infolevel[RegularChains]:=1;|
Then, we call\\
\verb|> RealRootClassification(F, N, P, H, 3, 1..n, R);|\\
where the range $1..n$ means ``the polynomial has at
least one real root''.
The output is: $R_1>0$ where $R_1=b^2-4ac$ provided that $a\ne 0$
and $R_1\ne 0.$ To discuss the case when $R_1=0,$ we can add this
equation into the original system and call \RRC\ again.\\
\noindent\verb|> RealRootClassification([b^2-4*a*c, op(F)], N, P, H, 3, 1..n, R);|
In this way, we finally know that the condition is $R_1\geq 0.$
\section{Deciding nonnegativity by {\tt RRC}}\label{3}
We first give a detailed explanation of Example 1.
\noindent{\bf Example 1} (continued).\ Obviously,
\[a\ge 0 \wedge b\ge 0 \wedge c\ge 0 \wedge abc-1=0 \Longrightarrow f\ge 0 \]
is equivalent to the following system is inconsistent
\[a\ge 0 \wedge b\ge 0 \wedge c\ge 0 \wedge abc-1=0 \wedge f<0.\]
So, in Example 1, we call
\noindent\verb|> RealRootClassification([a*b*c-1], [a, b, c], [-f], [ ], 2, 0, R);|\\
where the ``0" means we want to compute the conditions for the system to have no real solutions.
The output is:
\begin{center}
There is always given number of real solution(s)!\\
PROVIDED THAT\\
$\phi(b,c)\ne 0,$
\end{center}
where $\phi(b,c)$ is a polynomial in $b$ and $c$ with $19$ terms and of degree $18.$
The output means that the system always has no real solutions provided that the polynomial $\phi(b,c)$ does not vanish. In other words, {\tt RRC} proves that the proposition holds for almost all $a,b$ and $c$ except those such that $\phi(b,c)=0$.
Because the inequality to be proved is a non-strict inequality ($f\ge 0$), by continuity, we know at once that $f\ge 0$ holds for all $a,b$ and $c$ such that $a\ge 0 \wedge b\ge 0 \wedge c\ge 0 \wedge abc-1=0$. Thus, the proposition is proved.
\begin{example}
Prove that \[a\ge 0 \wedge b\ge 0 \wedge c\ge 0 \wedge ab+bc+ca-1=0 \Longrightarrow g\ge 0\]
where
\[\begin{array}{rl}
g = & -10a^3b^3-10b^3c^3-10a^3c^3-5a^4b^2-5c^2a^4-5c^4a^2-5a^2b^4+4c^3a\\
& -5b^4c^2-5b^2c^4+4ca^3+2a^4+2b^4+2c^4-10cab^4-30c^2a^3b-10ca^4b\\
& -10c^4ab+4a^3b^4c+16a^3b^3c^2+4a^4b^3c+16b^3c^3a^2+16a^3c^3b^2\\
& +4a^3c^4b+4b^3c^4a+4b^4c^3a+4a^4c^3b+6b^2c^2a^4-30b^3c^2a\\
& -30c^3a^2b+6b^2c^4a^2+16c^2ab+16ca^2b-50b^2c^2a^2+16cab^2-30b^2c^3a\\
& -30ca^3b^2-30a^2b^3c+6b^4c^2a^2+6c^2a^2+6a^2b^2+6b^2c^2+4c^3b+4b^3c\\
& +4b^3a+4a^3b+2a^4b^4+2a^4c^4+2b^4c^4.
\end{array}\]
\end{example}
\begin{example}
Prove that \[x\ge 0 \wedge y\ge 0 \wedge z\ge 0 \wedge r \ge 0 \wedge (r+1)^2-4/3 \ge 0 \wedge x+y+z-3=0 \Longrightarrow h\ge 0\]
where
\[\begin{array}{rl}
h = & -3+z-3r^3y^2z^2x^2+ry^3+r^2z^3+rz^3-3ry^2-3rz^2+r^2x^3+yr\\
& +r^2y^3+zr+rx^3+rx-3rx^2+xr^2z^2+yrx^2+xrz^2+r^3y^3x^2\\
& +r^2y^3x^2-3r^2y^2z^2+r^2y^2z^3+r^3z^2x^3+r^2z^2x^3+zr^2y^2+zry^2\\
& +yr^2x^2-3r^2z^2x^2-3r^2y^2x^2+r^3y^2z^3+y+x.
\end{array}\]
\end{example}
\begin{example}
Prove that \[a\ge 0 \wedge b\ge 0 \wedge c\ge 0 \wedge d \ge 0\wedge a+b+c+d-1=0 \Longrightarrow p\ge 0\]
where
\[p = 1+176abcd-27(bcd+cda+dab+abc).\]
\end{example}
\begin{example}
Prove that for any given integer $n\ge 3$,
\[-1\le x_i\le 1\ (1\le i\le n) \wedge \sum{x_i^3}=0 \Longrightarrow \sum{x_i}\le \frac{n}{3}.\]
Although the problem is not so hard for a mathematician, it is really hard for a computer. We proved the proposition for $n=3,4,5$ by Maple.
\end{example}
\begin{example}
Prove that \[a\ge 0 \wedge b\ge 0 \wedge c\ge 0 \wedge a^3b+b^3c+c^3a-3=0 \Longrightarrow q\ge 0\]
where
\[q = -75a^4b^4c^4-5a^4b^4-5a^4c^4-5b^4c^4+21a^4+21b^4+21c^4+27.\]
\end{example}
\begin{example}\footnote{http://www.artofproblemsolving.com/Forum/viewtopic.php?f=52\&t=432676}
Prove that \[a\ge 0 \wedge b\ge 0 \wedge c\ge 0 \wedge a^3b+ac^3+b^3c+abc-4=0 \Longrightarrow w\ge 0\]
where
\[w = 27(a+b+c)^4-1024.\]
\end{example}
Examples 3-8 have the common property that the systems themselves contain at least one equation, so \RRC\ can be applied directly.
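For instance, the claim of Example 3 can be checked with the following calls (a minimal sketch, assuming the polynomial $g$ from Example 3 has already been assigned in the Maple session):

\noindent\verb|> R := PolynomialRing([a, b, c]);|\\
\verb|> RealRootClassification([a*b+b*c+c*a-1], [a, b, c], [-g], [ ], 2, 0, R);|

As in Example 1, the call asks for the conditions under which the system $a\ge 0 \wedge b\ge 0 \wedge c\ge 0 \wedge ab+bc+ca-1=0 \wedge g<0$ has no real solutions.

We show by the following two examples how to deal with the situation where no equations appear in the system.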
\begin{example}
Prove that \[a\ge 0 \wedge b\ge 0 \wedge c\ge 0 \wedge d\ge 0 \Longrightarrow u\ge 0\]
where
\[\begin{array}{rl}
u = & 1280bd^3c+624bc^2d^2+320ab^4+464ac^4-112ad^4-112a^4b+464a^4c\\
& -112b^4c+464b^4d+208c^3b^2+1072d^3b^2-224b^3c^2+1072b^3d^2\\
& +320bc^4+464bd^4-112c^4d+208d^3c^2-224c^3d^2+320cd^4+128ad^3c\\
& +624ab^2c^2+740b^3cd+1812ab^2d^2+516ac^2d^2+1812b^2cd^2\\
& +128bc^3d+516b^2c^2d+128a^3bd+624a^2b^2d+516a^2bd^2+1280a^3cd\\
& +1812a^2c^2d+624a^2cd^2+128ab^3c+1280ab^3d+1280ac^3b+740ac^3d\\
& +740ad^3b+1812a^2bc^2+740a^3bc+516a^2b^2c+1896ab^2cd+1896abc^2d\\
& +1896abcd^2+1896a^2bcd+320a^4d+208b^3a^2+1072c^3a^2-224d^3a^2\\
& -224a^3b^2+1072a^3c^2+208a^3d^2+64a^5+64b^5+64c^5+64d^5.
\end{array}\]
As usual, we want to prove that the following system has no real solutions
\[a\ge 0 \wedge b\ge 0 \wedge c\ge 0 \wedge d\ge 0 \wedge u<0.\]
However, the system does not contain equations and thus {\tt RRC} cannot be applied directly.
We introduce a new variable $T$; the inconsistency of the system above is then equivalent to the inconsistency of the following new system
\[a\ge 0 \wedge b\ge 0 \wedge c\ge 0 \wedge d\ge 0 \wedge u+T=0 \wedge T>0.\]
For this new problem, we first define
\noindent\verb|> R := PolynomialRing([T, a, b, c, d]);|\\
and then call
\noindent\verb|> RealRootClassification([u+T], [a, b, c, d], [T], [ ], 4, 0, R);|\\
The problem is solved immediately.
\end{example}
\begin{example}
Prove that \[a\ge 0 \wedge b\ge 0 \wedge c\ge 0 \Longrightarrow v\ge 0\]
where {\small
\[\begin{array}{rl}
v = & 104976a^{12}+1679616a^{11}b+1469664a^{11}c+10850112a^{10}b^2\\
& +19046016a^{10}bc+8076024a^{10}c^2+36149760a^9b^3+95364864a^9b^2c\\
& +80561952a^9bc^2+22935528a^9c^3+65762656a^8b^4+228601856a^8b^3c\\
& +282635520a^8b^2c^2+162625040a^8bc^3+42710593a^8c^4+63474176a^7b^5\\
& +251921856a^7b^4c+354740704a^7b^3c^2+288770224a^7b^2c^3\\
& +207550776a^7bc^4+83017484a^7c^5+29076288a^6b^6+60534016a^6b^5c\\
& -155234320a^6b^4c^2-380047056a^6b^3c^3+3130676a^6b^2c^4\\
& +375984436a^6bc^5+181119606a^6c^6+8313344a^5b^7-89738240a^5b^6c\\
& -760459488a^5b^5c^2-1768157568a^5b^4c^3-1403613720a^5b^3c^4\\
& +236428572a^5b^2c^5+824797636a^5bc^6+291288188a^5c^7\\
& +13943056a^4b^8-3628032a^4b^7c-514131904a^4b^6c^2-1869896304a^4b^5c^3\\
& -2495402586a^4b^4c^4-783163260a^4b^3c^5+1171287578a^4b^2c^6\\
& +1122586500a^4bc^7+288706561a^4c^8+18028800a^3b^9+116005472a^3b^8c\\
& +171678496a^3b^7c^2-347011440a^3b^6c^3-1231272792a^3b^5c^4\\
& -894635820a^3b^4c^5+731754984a^3b^3c^6+1497257080a^3b^2c^7\\
& +851454308a^3bc^8+170469720a^3c^9+10593792a^2b^{10}+100409472a^2b^9c\\
& +365510616a^2b^8c^2+624203728a^2b^7c^3+480156788a^2b^6c^4\\
& +215762988a^2b^5c^5+511667522a^2b^4c^6+990571720a^2b^3c^7\\
& +861820134a^2b^2c^8+356931720a^2bc^9+58375800a^2c^{10}\\
& +2985984ab^{11}+34730496ab^{10}c+165207744ab^9c^2+415788248ab^8c^3\\
& +606389880ab^7c^4+560561092ab^6c^5+437187748ab^5c^6+422470380ab^4c^7\\
& +390424292ab^3c^8+235263240ab^2c^9+77497200abc^{10}+10692000ac^{11}\\
& +331776b^{12}+4478976b^{11}c+25292160b^{10}c^2+77899104b^9c^3\\
& +144247489b^8c^4+170606684b^7c^5+141892350b^6c^6+102086036b^5c^7\\
& +76748161b^4c^8+52182360b^3c^9+24766200b^2c^{10}+6804000bc^{11}\\
& +810000c^{12}.
\end{array}\] }
Similar to Example 9, the inequality is proved by first defining
\noindent\verb|> R := PolynomialRing([T, a, b, c]);|\\
and then calling
\noindent\verb|> RealRootClassification([v+T], [a, b, c], [T], [ ], 3, 0, R);|
\end{example}
We report the timings on the examples in the following table.
All the computations were performed on a computer (CPU 3.2GHz, 2G RAM, Windows XP) with Maple 13.
\begin{center}
\begin{tabular}{|r|r|r|}
\hline
No. & timing & memory\\
\hline
{\em EX1} &0.06s& 0.81M\\
{\em EX3} &0.04s& 0.81M\\
{\em EX4} &6.04s& 53.55M\\
{\em EX5} &0.03s& 0.81M\\
{\em EX6(n=5)} &377.35s& 118.60M\\
{\em EX7} &16.67s& 63.11M\\
{\em EX8} &2.98s& 44.67M\\
{\em EX9} &1.26s& 39.36M\\
{\em EX10}&0.57s& 38.86M\\
\hline
\end{tabular}
\end{center}
\section{Introduction}\label{sect:intro}
Since the pioneering paper \cite{ZK65}, the so-called KdV limit of atomic chains with nearest neighbor interactions -- often called Fermi-Pasta-Ulam or FPU-type chains -- has attracted a lot of interest in both the physics and the mathematics community, see \cite{FM14} for a recent overview. The key observation is that in the limiting case of long-wave-length data with small amplitudes the dynamics of the nonlinear lattice system is governed by the Korteweg-de Vries (KdV) equation, which is a completely integrable PDE and hence well understood. For rigorous results concerning initial value problems
we refer to \cite{SW00} and to \cite{CCPG12,GMWZ14} for similar results in chains with periodically varying masses.
\par
Of particular interest are the existence of KdV-like solitary waves and their stability with respect to the FPU dynamics. Both problems have been investigated by Friesecke and Pego in the seminal four-paper series \cite{FP99,FP02,FP04a,FP04b}, see also \cite{HW13} for simplifications in the stability proof and \cite{FM14} concerning the existence of periodic KdV-type waves. The more general cases of two or finitely many solitary waves have been studied in \cite{HW08,HW09} and \cite{Miz11,Miz13}, respectively. In this paper we generalize the existence result from \cite{FP99} and prove that chains with interactions beyond nearest neighbors also admit KdV-type solitary waves. The corresponding stability problem is beyond the scope of this paper and left for future research.
\subsection{Setting of the problem}
We consider an infinite chain of identical particles which interact with up to $M$ neighbors on both sides. Assuming unit mass, the equations of motion are therefore given by
\begin{align}
\label{LawOfMotion}
\ddot{u}_j=\sum_{m=1}^M\Phi_m^\prime\at{u_{j+m}-u_j}-
\Phi_m^\prime\at{u_{j}-u_{j-m}}\,,
\end{align}
where $u_j\at{t}$ denotes the position of particle $j$ at time $t$. Moreover, the potential $\Phi_1$ describes the interactions between nearest-neighbors, $\Phi_2$ between the next-to-nearest-neighbors, and so on.
\par
A traveling wave is an exact solution to \eqref{LawOfMotion} which satisfies
\begin{align}
\label{eqn:TW.ansatz}
u_j\at{t}=r_*j + v_*t+{\varepsilon}\,U_{\varepsilon} \at{x}\,,\qquad x :={\varepsilon}{j}-{\varepsilon}{c_{\varepsilon} t}\,,
\end{align}
where the parameters $r_*$ and $v_*$ denote the prescribed background strain and background velocity, respectively. Moreover, ${\varepsilon}>0$ is an additional scaling parameter which will be identified below and becomes small in the KdV limit. A direct computation reveals that the wave speed $c_{\varepsilon}$ as well as the rescaled wave profile $U_{\varepsilon}$ must solve the rescaled traveling wave equation
\begin{align}
\label{eq:scaledfpu1}
{\varepsilon}^3c_{\varepsilon}^2\,U^{\prime\prime}_{\varepsilon}=\sum_{m=1}^M m{\varepsilon}\nabla_{-m{\varepsilon}}\Phi^\prime_m\at{mr_*+ m{\varepsilon}^2\nabla_{+m{\varepsilon}}U_{\varepsilon}}\,,
\end{align}
where the discrete differential operators are defined by
\begin{align}
\label{eq:diffoperators}
\at{\nabla_{+m{\varepsilon}}Y}\at{x}:=\frac{Y\at{x+m{\varepsilon}}-Y\at{x}}{m{\varepsilon}}\,,\qquad
\at{\nabla_{-m{\varepsilon}}Y}\at{x}:=\frac{Y\at{x}-Y\at{x-m{\varepsilon}}}{m{\varepsilon}}\,.
\end{align}
Note that $v_*$ does not appear in \eqref{eq:scaledfpu1} due to the Galilean invariance of the problem and that the solution set is invariant under the addition of constants to $U_{\varepsilon}$. It is therefore natural to interpret \eqref{eq:scaledfpu1} as an equation for the rescaled velocity profile $W_{\varepsilon}:=U^\prime_{\varepsilon}$; the corresponding distance or strain profile $\nabla_{+{\varepsilon}}U_{\varepsilon}$ can then be computed by convolving $W_{\varepsilon}$ with the rescaled indicator function of an interval, see formula \eqref{eq:scaledfpu1a.px1} below.
\par
For $M=1$ and fixed ${\varepsilon}>0$ there exist -- depending on the properties of $\Phi_1$ -- many different types of traveling waves with periodic, homoclinic, heteroclinic, or even more complex shape of the profile $W_{\varepsilon}$, see for instance \cite{Her10,HR10, HMSZ13} and references therein. In the limit ${\varepsilon}\to0$, however, the most fundamental waves are periodic and solitary waves, for which $W_{\varepsilon}$ is either periodic or decays to $0$
as $x\to\pm\infty$.
\par
In this paper we suppose $r_*=0$ -- this condition can always be ensured by elementary transformations -- and split off both the linear and the quadratic terms from the force functions $\Phi^\prime_m$. This reads
\begin{align}
\label{eqn:ForceTerms}
\Phi_m^\prime\at{r}={\alpha}_m r + \beta_m r^2 + \Psi_m^\prime\at{r}\,,\qquad
\Psi_m^{\prime}\at{r}=\DO{r^3}\,,\qquad m=1\tdots M
\end{align}
or, equivalently, $\Phi_m\at{r}=\tfrac12 {\alpha}_mr^2 + \tfrac13 \beta_m r^3 + \Psi_m\at{r}$ with $\Psi_m\at{r}=\DO{r^4}$. In order to keep the presentation as simple as possible, we restrict our considerations to solitary waves -- the case of periodic profiles can be studied along the same lines -- and rely on the following standing assumption.
\begin{assumption}[properties of the interaction potentials]
\label{MainAssumption}
For all $m=1\tdots M$, the coefficients $\alpha_m$ and $\beta_m$ are positive. Moreover, $\Psi_m^\prime$ is continuously differentiable with $\Psi_m^\prime\at{0}=0$ and
\begin{align}
\label{MainAssumption.Eqn1}
\abs{\Psi_m^{\prime\prime}\at{r}}\leq {\gamma}_m r^2
\end{align}
for some constants ${\gamma}_m$ and all $r$ with $\abs{r}\leq 1$.
\end{assumption}
Note that the usual requirements for $M=1$ are ${\alpha}_1>0$ and ${\beta}_1\neq0$ but the case ${\beta}_1<0$ can be traced back to the case ${\beta}_1>0$ by a simple reflection argument with respect to the strain variable $r$. Below we discuss possible generalizations of Assumption \ref{MainAssumption} including cases in which the coefficients come with different signs.
\begin{figure}[t!]%
\centering{%
\includegraphics[width=0.6\textwidth]{profile}%
}%
\caption{
Sketch of the rescaled velocity profile $W_{\varepsilon}$ for ${\varepsilon}>0$ (black) and ${\varepsilon}=0$ (gray) as function of the rescaled phase variable $x$. The grid with spacing ${\varepsilon}>0$ describes the rescaled particle index ${\varepsilon} j$ while the dashed arrows indicate the height and the width of the pulse $W_{\varepsilon}$. The rescaled distance profile $\mathcal{A}_{\varepsilon} W_{\varepsilon}$ has a similar shape.
}%
\label{Fig0}%
\end{figure}%
\subsection{Overview on the main result and the proof strategy}
The overall strategy for proving the existence of KdV-type solitary waves in the lattice system \eqref{LawOfMotion} is similar to the approach in \cite{FP99} but many aspects are different due to the nonlocal coupling. In particular, we base our analysis on the velocity profile
\begin{align}
\label{Eqn:Def.Vel.prof}
W_{\varepsilon}:=U_{\varepsilon}^\prime
\end{align}
instead of the distance profile $\nabla_{{\varepsilon}}U_{\varepsilon}$, deviate in the justification of the key asymptotic estimates, and solve the final nonlinear corrector problem by the Banach fixed-point theorem. A more detailed comparison is given throughout the paper.
\par
As for the classical case $M=1$, we prescribe a wave speed $c_{\varepsilon}$ that is slightly larger than the sound speed $c_0$ and construct profile functions that satisfy \eqref{eq:scaledfpu1} and decay for $x\to\pm\infty$. More precisely, we set
\begin{align}
\label{eq:speedscaling}%
c_{\varepsilon}^2 := c_0^2+{\varepsilon}^2\, ,\qquad
c_0^2:=\sum_{m=1}^M {\alpha}_m m^2>0\,,
\end{align}
i.e., the small parameter ${\varepsilon}>0$ quantifies the supersonicity of the wave. Note that the subsonic case $c_{\varepsilon}<c_0$ is also interesting but not related to solitary waves, see discussions at the end of \S\ref{sect:prelim} and the end of \S\ref{sect:proof}.
\par
The asymptotic analysis from \S\ref{sect:prelim} reveals that the limiting problem as ${\varepsilon}\to0$ is the nonlinear ODE
\begin{align}
\label{Eqn:WaveODE}
W_0^{\prime\prime} = d_1 W_0- d_2 W_0^2\,,
\end{align}
where the positive constants $d_1$ and $d_2$ depend explicitly on the coefficient ${\alpha}_m$ and ${\beta}_m$, see formula \eqref{Eqn:LeadingOrderProblem.2x} below. This equation admits a homoclinic solution, which is unique up to shifts (see \S\ref{sect:proof.1}) and provides via $w\pair{t}{x}=W_0\at{x- t}$ a solitary wave to the KdV equation
\begin{align*}
d_1\,\partial_t w + d_2\,\partial_x w^2 + \partial_x^3 w=0\,.
\end{align*}
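In fact, the homoclinic orbit admits the familiar explicit representation
\begin{align*}
W_0\at{x}=\frac{3d_1}{2d_2}\,\operatorname{sech}^2\at{\tfrac{1}{2}\sqrt{d_1}\,x}\,,
\end{align*}
as one verifies by direct computation; in particular, $W_0$ is even, positive, and decays exponentially as $x\to\pm\infty$.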
For ${\varepsilon}>0$ we start with the ansatz
\begin{align}
\label{eqn.def.corrector}
W_{\varepsilon} = W_0+{\varepsilon}^2 V_{\varepsilon} \in\fspace{L}^2_\mathrm{even}\at{\mathbb{R}}
\end{align}
and derive in \S\ref{sect:proof} the fixed point equation
\begin{align}
\label{Eqn:FixedPoint}
V_{\varepsilon} = \mathcal{F}_{\varepsilon}\ato{V_{\varepsilon}}
\end{align}
for the corrector $V_{\varepsilon}$, where the operator $\mathcal{F}_{\varepsilon}$ is introduced in \eqref{Thm:FixedPoints.Eqn1}. The definition of $\mathcal{F}_{\varepsilon}$ requires inverting a linear operator $\mathcal{L}_{\varepsilon}$, which is defined in \eqref{Eqn:DefLandM} and admits a singular limit as ${\varepsilon}\to0$. The linear leading order operator $\mathcal{L}_0$ stems from the linearization of \eqref{Eqn:WaveODE} around the KdV wave $W_0$ and can be inverted on the space $\fspace{L}^2_\mathrm{even}\at{\mathbb{R}}$ but not on $\fspace{L}^2\at{\mathbb{R}}$ due to the shift invariance of the problem. The first technical issue in our perturbative existence proof is to show that this invertibility property persists for small ${\varepsilon}>0$, see Theorem \ref{Lem:InvertibilityOfLeps}. The second one is to guarantee that $\mathcal{F}_{\varepsilon}$ is contractive on some ball in $\fspace{L}^2_\mathrm{even}\at{\mathbb{R}}$, see Theorem \ref{Thm:FixedPoints}. Our main findings are illustrated in Figure~\ref{Fig0} and can be summarized as follows, see also Corollary~\ref{Cor:Summary}.
\begin{result*}
For any sufficiently small ${\varepsilon}>0$ there exists a unique even and nonnegative solution $W_{\varepsilon}$ to the rescaled traveling wave equation \eqref{eq:scaledfpu1} with \eqref{eq:speedscaling} such that
\begin{align*}
\norm{W_{\varepsilon}-W_0}_2+\norm{W_{\varepsilon}-W_0}_\infty\leq C {\varepsilon}^2
\end{align*}
holds for some constant $C$ independent of ${\varepsilon}$, where $W_0$ is the unique even solution to \eqref{Eqn:WaveODE}.
\end{result*}
The asymptotic analysis presented below can -- for the price of more notational and technical effort -- be applied to a wider class of chains. Specifically, we expect that the following generalizations are feasible:
\begin{enumerate}
\item
We can allow for $M=\infty$ provided that the coefficients
${\alpha}_m$, ${\beta}_m$ and ${\gamma}_m$ decay sufficiently fast with respect to $m$ (say, exponentially).
\item
Some of the coefficients $\alpha_m$ and ${\beta}_m$ might even be negative. In this case, however, one has to ensure that the contributions from the negative coefficients are compensated by those from the positive ones. A first natural condition is
\begin{align*}
\sum_{m=1}^M {\alpha}_m m^2>0
\end{align*}
which ensures that uniform states are stable under small amplitude perturbations and that the sound speed $c_0$ from \eqref{eq:speedscaling} is positive. A further minimal requirement is
\begin{align*}
\sum_{m=1}^M {\alpha}_m m^4>0\,,\qquad
\sum_{m=1}^M {\beta}_m m^3\neq 0
\end{align*}
because otherwise the leading order problem -- see \eqref{Eqn:WaveODE} and \eqref{Eqn:LeadingOrderProblem.2x} below -- degenerates and does not admit exponentially decaying homoclinic orbits.
\item
The non-quadratic contributions to the forces might be less regular in the sense of
\begin{align*}
\abs{\Psi^{\prime\prime}\at{r}}\leq {\gamma}_m \abs{r}^{1+{\kappa}_m}
\end{align*}
for some constants ${\gamma}_m$ and exponents $0<\kappa_m<1$.
\end{enumerate}
The paper is organized as follows: In \S\ref{sect:prelim} we introduce a family of convolution operators and reformulate \eqref{eq:scaledfpu1} as an eigenvalue problem for $W_{\varepsilon}$. Afterwards we provide singular asymptotic expansions for a linear auxiliary operator $\mathcal{B}_{\varepsilon}$, which is defined in \eqref{Eqn:OperatorBeps} and plays a prominent role in our method. \S\ref{sect:proof} is devoted to the proof of the existence theorem. We first study the leading order problem in \S\ref{sect:proof.1} and show afterwards in \S\ref{sect:proof.2} that the linear operator $\mathcal{L}_{\varepsilon}$ is invertible. In \S\ref{sect:proof.3} we finally employ Banach's contraction mapping principle to construct solutions $V_{\varepsilon}$ to the nonlinear fixed-point problem \eqref{Eqn:FixedPoint} and conclude with a brief outlook. A list of all important symbols is given in the appendix.
\section{Preliminaries and linear integral operators}\label{sect:prelim}
In this section we reformulate the nonlinear advance-delay-differential equation \eqref{eq:scaledfpu1} as an integral equation and provide asymptotic estimates for the arising linear integral operators.
\subsection{Reformulation in terms of convolution operators}
For any $\eta>0$, we define the operator $\mathcal{A}_{\eta}$ by
\begin{align}
\label{eq:intoperator}
\at{\mathcal{A}_{\eta}Y}\at{x}:=\frac{1}{\eta}\int_{x-\eta/2}^{x+\eta/2}Y\at{\xi}\dint\xi
\end{align}
and regard \eqref{eq:scaledfpu1} as an equation for the rescaled velocity profile \eqref{Eqn:Def.Vel.prof}. Notice that $\mathcal{A}_\eta$ can be viewed as the convolution with the rescaled indicator function of the interval $\ccinterval{-\eta/2}{+\eta/2}$.
\begin{lemma}[reformulation as nonlinear eigenvalue problem]
Suppose that $W_{\varepsilon}=U^\prime_{\varepsilon}$ belongs to $\fspace{L}^2\at{\mathbb{R}}$. Then, the nonlinear eigenvalue problem
\begin{align}
\label{eq:scaledfpu1a}%
{\varepsilon}^2 c_{\varepsilon}^2 W_{\varepsilon} = \sum_{m=1}^M m \mathcal{A}_{m{\varepsilon}}\Phi_m^\prime\at{m{\varepsilon}^2 \mathcal{A}_{m{\varepsilon}} W_{\varepsilon}}
\end{align}
is equivalent to the traveling wave equation \eqref{eq:scaledfpu1}.
\end{lemma}
\begin{proof}
The operators defined in \eqref{eq:diffoperators} and \eqref{eq:intoperator} satisfy
\begin{align}
\label{eq:scaledfpu1a.px1}
\at{\nabla_{\pm m{\varepsilon}}U_{\varepsilon}}\at{x}= \bat{
\mathcal{A}_{m{\varepsilon}}U_{\varepsilon}}^\prime\at{x\pm\tfrac12m{\varepsilon}}=\bat{
\mathcal{A}_{m{\varepsilon}}W_{\varepsilon}}\at{x\pm\tfrac12m{\varepsilon}}\,,
\end{align}
so \eqref{eq:scaledfpu1} follows from \eqref{eq:scaledfpu1a} after differentiation with respect to $x$ and defining $U_{\varepsilon}$ as the primitive of $W_{\varepsilon}$. In order to derive \eqref{eq:scaledfpu1a} from \eqref{eq:scaledfpu1}, we first notice that $W_{\varepsilon}\in\fspace{L}^2\at{\mathbb{R}}$ implies $\mathcal{A}_{m{\varepsilon}}W_{\varepsilon}\in\fspace{W}^{1,2}\at{\mathbb{R}}$ (cf. Corollary \ref{Cor:RegularityOperatorA} below) and hence that $\at{\mathcal{A}_{m{\varepsilon}}W_{\varepsilon}}\at{x}$ tends to $0$ as $x\to\pm\infty$. Afterwards we integrate \eqref{eq:scaledfpu1} with respect to $x$ and eliminate the constant of integration by means of the decay condition at infinity.
\end{proof}
In the case $M=1$, we can derive from \eqref{eq:scaledfpu1a} the identity
\begin{align*}
{\varepsilon}^2 c_{\varepsilon}^2 \mathcal{A}_{\varepsilon} W_{\varepsilon} = \mathcal{A}_{{\varepsilon}}^2\Phi_1^\prime\at{{\varepsilon}^2 \mathcal{A}_{{\varepsilon}} W_{\varepsilon}}\,,
\end{align*}
which is the equation for the distance profile $\mathcal{A}_{{\varepsilon}}W_{\varepsilon}$ and has been studied in \cite{FP99} (see equation (2.7) there for the function $\phi=\mathcal{A}_{\varepsilon} W_{\varepsilon}$). For $M>1$, however, we have to work with the velocity profile $W_{\varepsilon}$ since for a general function $W$ it is not possible to express $\mathcal{A}_{m{\varepsilon}}W$ for $m>1$ in terms of $\mathcal{A}_{\varepsilon} W$.
\par\quad\newline\noindent
We next summarize important properties of the convolution operators defined in \eqref{eq:intoperator}.
\begin{lemma}[properties of $\mathcal{A}_\eta$]
\label{Lem:PropertiesOperatorA}
For each $\eta>0$, the integral operator $\mathcal{A}_\eta$ has the following properties:
\begin{enumerate}
\item
For any $W\in\fspace{L}^2\at{\mathbb{R}}$, we have
$\mathcal{A}_\eta W\in\fspace{L}^2\cap\fspace{L}^\infty\at{\mathbb{R}}$ with
\begin{align}
\label{Lem:PropertiesOperatorA.Eqn1}
\norm{\mathcal{A}_\eta W}_\infty \leq \eta^{-1/2}\norm{W}_2\,,\qquad \norm{\mathcal{A}_\eta W}_2 \leq \norm{W}_2\,.
\end{align}
Moreover, $\mathcal{A}_\eta W$ admits a weak derivative with $\norm{\at{\mathcal{A}_\eta W}^\prime}_2\leq 2\eta^{-1}\norm{W}_2$.
\item
For any $W\in\fspace{L}^\infty\at{\mathbb{R}}$, we have $\norm{\mathcal{A}_\eta W}_\infty\leq \norm{W}_\infty$.
\item
$\mathcal{A}_\eta$ respects the even-odd parity, the nonnegativity, and the unimodality of functions. The latter means monotonicity for both negative and positive arguments.
\item
$\mathcal{A}_\eta$ diagonalizes in Fourier space and corresponds to
the symbol function
\begin{align}
\label{Eqn:SymbolFct}
a_\eta\at{k}=\sinc\at{\eta k/2}
\end{align}
with $\sinc\at{z}:=\sin\at{z}/z$.
\item
$\mathcal{A}_\eta$ is self-adjoint in the $\fspace{L}^2$-sense.
\end{enumerate}
\end{lemma}
\begin{proof}
All assertions follow immediately from the definition of $\mathcal{A}_\eta$; see \cite{Her10} for the details.
\end{proof}
\begin{corollary}[regularity of $\mathcal{A}_\eta W$]
\label{Cor:RegularityOperatorA}
$W\in\fspace{L}^2\at{\mathbb{R}}$ implies
$\mathcal{A}_\eta W\in\fspace{W}^{1,2}\at{{\mathbb{R}}}\subset\fspace{BC}\at{{\mathbb{R}}}$ and hence
$\at{\mathcal{A}_\eta W}\at{x}\to0$ as $x\to\pm\infty$.
\end{corollary}
\subsection{Asymptotic analysis for the convolution operators \texorpdfstring{$\mathcal{A}_\eta$}{}}
The symbol function $a_\eta$ from \eqref{Eqn:SymbolFct} is analytic with respect to $z=\eta k/2$ and in view of
\begin{align*}
\sinc\at{z}=\sum_{j=0}^\infty \frac{\at{-1}^j z^{2j}}{\at{2j+1}!}
\end{align*}
we readily verify
\begin{align*}
\mathcal{A}_{\eta} \mhexp{\mathtt{i} k{x}}=\sinc\at{\eta k/2} \mhexp{\mathtt{i} k{x}}=
\sum_{j=0}^{\infty} (-1)^j \frac{\eta^{2j}k^{2j}\mhexp{\mathtt{i} k{x}}}{2^{2j}\at{2j+1}!}=
\sum_{j=0}^\infty\frac{\eta^{2j}\partial_{x}^{2j}\mhexp{\mathtt{i} k{x}}}{2^{2j}\at{2j+1}!}\,.
\end{align*}
The integral operator \eqref{eq:intoperator} therefore admits the \emph{formal} expansion
\begin{align}
\label{Eqn:AsymptoticsA}
\mathcal{A}_\eta=\sum_{j=0}^\infty\frac{\eta^{2j}\partial_{x}^{2j}}{2^{2j}\at{2j+1}!}\qquad
\text{and hence}\qquad
\mathcal{A}_{m{\varepsilon}}=\id +{\varepsilon}^2\frac{m^2}{24}\partial_{x}^2+\DO{{\varepsilon}^4}\,,
\end{align}
which reveals that $\mathcal{A}_{m{\varepsilon}}$ should be regarded as a \emph{singular perturbation} of the identity operator $\id$. This singular nature complicates the analysis because the error terms in \eqref{Eqn:AsymptoticsA} can only be bounded in terms of higher derivatives.
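As a quick numerical sanity check of this symbol computation (our illustration only), one may compare the sliding average of a plane wave with the predicted multiplier:
\begin{verbatim}
import numpy as np

# (A_eta Y)(x) = (1/eta) * integral over [x-eta/2, x+eta/2] of Y,
# applied to Y(x) = exp(i k x), should give sinc(eta*k/2) * exp(i k x).
eta, k, x = 0.3, 2.0, 1.234
xi = np.linspace(x - eta / 2, x + eta / 2, 20001)
avg = np.trapz(np.exp(1j * k * xi), xi) / eta

# note: numpy normalises sinc, np.sinc(z) = sin(pi z)/(pi z)
symbol = np.sinc(eta * k / (2 * np.pi))
print(abs(avg - symbol * np.exp(1j * k * x)))   # ~ quadrature error
\end{verbatim}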
\par%
One key observation for dealing with the limit ${\varepsilon}\to0$ is -- roughly speaking -- that the resolvent-type operator
\begin{align}
\nota
\at{\id + {\kappa} \frac{\id - \mathcal{A}_{m{\varepsilon}}^2}{{\varepsilon}^2}}^{-1}
\end{align}
is well-defined and almost compact as long as ${\kappa}>0$. It thus exhibits nice regularizing properties which allow us
to compensate bad terms stemming from the expansion \eqref{Eqn:AsymptoticsA}. The same idea has been employed in \cite{FP99} in the context of the distance profile $\mathcal{A}_{\varepsilon} W$, showing that
the Yosida-type operator
\begin{align*}
\at{\id +{\kappa} \frac{\id - \mathcal{A}_{{\varepsilon}}^2}{{\varepsilon}^2}}^{-1} \mathcal{A}_{{\varepsilon}}^2
\end{align*}
behaves nicely since the corresponding Fourier symbol
\begin{align*}
\frac{{\varepsilon}^2 a_{\varepsilon}^2\at{k}}{{\varepsilon}^2+{\kappa}\at{1-a_{\varepsilon}^2\at{k}}}
\end{align*}
is well-defined and bounded by $C/\at{1+{\varepsilon}^2k^2}$, cf. \cite[Corollary 3.4.]{FP99}. Before we establish a related but weaker result in next subsection, we derive explicit error bounds for the singular expansion of $\mathcal{A}_{m{\varepsilon}}$.
\begin{figure}[t!]%
\centering{%
\includegraphics[width=0.8\textwidth]{symbol_a}%
}%
\caption{ %
\emph{Left panel:} Graph of the $\sinc$ function
$z\mapsto \sin\at{z}/z$. \emph{Right panel:} Lower bound for $1-\sinc^2$ as used in the proof of Lemma \ref{Lem:InversOfB}.
}%
\label{Fig1}%
\end{figure}%
\begin{lemma}[small-parameter asymptotics of $\mathcal{A}_{\eta}$ ]
\label{Lem:LimitOperatorA}
There exists a constant $C$, which does not depend on $\eta$,
such that the estimates
\begin{align}
\label{Lem:LimitOperatorA.EqnA}
\norm{\mathcal{A}_{\eta}W-W}_2 \leq C\eta^2 \norm{W^{\prime\prime}}_2\,,\qquad
\norm{\mathcal{A}_{\eta}W-W}_\infty\leq C\eta^2 \norm{W^{\prime\prime}}_\infty
\end{align}
and
\begin{align}
\label{Lem:LimitOperatorA.EqnB}
\norm{\mathcal{A}_{\eta}W-W-\frac{\eta^2}{24}W^{\prime\prime}}_2 \leq C\eta^4 \norm{W^{\prime\prime\prime\prime}}_2\,,\qquad
\norm{\mathcal{A}_{\eta}W-W-\frac{\eta^2}{24}
W^{\prime\prime}}_\infty\leq C\eta^4 \norm{W^{\prime\prime\prime\prime}}_\infty
\end{align}
hold for any sufficiently regular $W$. In particular, we have
\begin{align}
\label{Lem:LimitOperatorA.Eqn0}
\mathcal{A}_{\eta} W\quad\xrightarrow{\;\eta\to0\;}\quad W\qquad\text{strongly}\quad\text{in}\quad \fspace{L}^2\at{\mathbb{R}}
\end{align}
for any $W\in\fspace{L}^2\at{\mathbb{R}}$.
\end{lemma}
\begin{proof}
\emph{\ul{$\fspace{L}^\infty$-estimates}}:
For any $W\in\fspace{W}^{4,\infty}\at{\mathbb{R}}$, the weak variant of Taylor's expansion theorem implies
\begin{align*}
\Babs{P\pair{x}{\xi}}\leq
\norm{W^{\prime\prime\prime\prime}}_\infty
\frac{\at{x-\xi}^4}{24}
\end{align*}
for almost all $x,\xi\in{\mathbb{R}}$, where
\begin{align*}
P\pair{x}{\xi}:=
W\at\xi-W\at{x}-W^\prime\at{x}\bat{x-\xi}-
\tfrac{1}{2}W^{\prime\prime}\at{x}\bat{x-\xi}^2-
\tfrac{1}{6}W^{\prime\prime\prime}\at{x}\bat{x-\xi}^3\,.
\end{align*}
Integrating $P\pair{x}{\xi}$ with respect to $\xi\in\ccinterval{x-\eta/2}{x+\eta/2}$ -- and noting that the odd powers of $\xi-x$ integrate to zero -- we therefore get
\begin{align*}
\abs{\eta \mathcal{A}_\eta W\at{x}-\eta W\at{x}-\frac{\eta^3}{24}W^{\prime\prime}\at{x}}&=\abs{\int_{x-\eta/2}^{x+\eta/2}P\pair{x}{\xi}\dint\xi}\\&\leq
\frac{\norm{W^{\prime\prime\prime\prime}}_\infty}{24}\int_{x-\eta/2}^{x+\eta/2}\at{x-\xi}^4\dint\xi=C\norm{W^{\prime\prime\prime\prime}}_\infty\eta^5\,,
\end{align*}
and \eqref{Lem:LimitOperatorA.EqnB}$_2$ follows immediately after division by $\eta$. The derivation of \eqref{Lem:LimitOperatorA.EqnA}$_2$ is similar.
\par
\emph{\ul{$\fspace{L}^2$-estimates}}:
Now let $W\in\fspace{W}^{4,2}\at{\mathbb{R}}$ be arbitrary.
By Parseval's Theorem -- and employing that
$\abs{1-\sinc\at{z}- z^2/6}\leq C z^4$ holds for some constant $C$ and all $z\in{\mathbb{R}}$ -- we find
\begin{align*}
\norm{\mathcal{A}_\eta W -W-\frac{\eta^2}{24}W^{\prime\prime}}_2^2 &=
\norm{\widehat{W}-\widehat{\mathcal{A}_\eta W}+\frac{\eta^2}{24}\widehat{W^{\prime\prime}}}_2^2
\\&=\int_{\mathbb{R}} \at{1-\sinc\at{\eta k/2}-\frac{\eta^2k^2}{24}}^2\abs{\widehat{W}\at{k}}^2\dint{k}
\\&\leq C\eta^8\int_{\mathbb{R}} \abs{k^4\widehat{W}\at{k}}^2\dint{k}=C\eta^8\norm{W^{\prime\prime\prime\prime}}_2^2\,,
\end{align*}
and this implies \eqref{Lem:LimitOperatorA.EqnB}$_1$. The estimate \eqref{Lem:LimitOperatorA.EqnA}$_1$
can be proven analogously since we have $\abs{1-\sinc\at{z}}\leq z^2/6$ for all $z\in{\mathbb{R}}$.
\par
\emph{\ul{Final argument}}:
Let $W\in\fspace{L}^2\at{\mathbb{R}}$ be arbitrary but fixed. Since $\mathcal{A}_\eta$ is self-adjoint, see Lemma \ref{Lem:PropertiesOperatorA}, and in view of \eqref{Lem:LimitOperatorA.EqnA} we readily demonstrate
\begin{align}
\label{Lem:LimitOperatorA.PEqn0}
\mathcal{A}_{\eta} W\quad\xrightarrow{\;\eta\to0\;}\quad W\qquad \text{weakly in}\quad \fspace{L}^2\at{\mathbb{R}}\,,
\end{align}
and this implies $\norm{W}_2\leq \liminf_{\eta\to0}\norm{\mathcal{A}_\eta W}_2$. On the other hand, the estimate \eqref{Lem:PropertiesOperatorA.Eqn1}$_2$ ensures that $\limsup_{\eta\to0}\norm{\mathcal{A}_\eta W}_2\leq\norm{W}_2$.
We therefore have $\norm{W}_2=\lim_{\eta\to0}\norm{\mathcal{A}_\eta W}_2$ and combining this with the weak convergence
\eqref{Lem:LimitOperatorA.PEqn0} we arrive at \eqref{Lem:LimitOperatorA.Eqn0} since $\fspace{L}^2\at{\mathbb{R}}$ is a Hilbert space.
\end{proof}
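The quartic scaling in \eqref{Lem:LimitOperatorA.EqnB} is easily illustrated numerically. The following sketch -- a plain sanity check that is not part of any proof, assuming NumPy and the test function $W\at{x}={\mathrm{sech}}^2\at{x}$ with antiderivative $\tanh$ -- confirms that the error stays proportional to $\eta^4$:
\begin{verbatim}
import numpy as np

def A_eta(W_anti, x, eta):
    # midpoint average of W, evaluated exactly via an antiderivative
    return (W_anti(x + eta / 2) - W_anti(x - eta / 2)) / eta

x = np.linspace(-10.0, 10.0, 2001)
W = lambda x: 1.0 / np.cosh(x) ** 2                           # sech^2
W2 = lambda x: 4.0 / np.cosh(x) ** 2 - 6.0 / np.cosh(x) ** 4  # (sech^2)''

for eta in [0.4, 0.2, 0.1, 0.05]:
    err = np.max(np.abs(A_eta(np.tanh, x, eta)
                        - W(x) - eta ** 2 / 24 * W2(x)))
    print(eta, err / eta ** 4)   # ratio should remain bounded
\end{verbatim}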
\subsection{Asymptotic properties of the auxiliary operator \texorpdfstring{$\mathcal{B}_{\varepsilon}$}{}}
As already outlined above, we introduce for any given ${\varepsilon}>0$ the operator
\begin{align}
\label{Eqn:OperatorBeps}
\mathcal{B}_{\varepsilon}:= \id + \sum_{m=1}^M{\alpha}_m m^2 \frac{\id-\mathcal{A}_{m{\varepsilon}}^2}{{\varepsilon}^2}\,,
\end{align}
which appears in \eqref{eq:scaledfpu1a} if we collect all linear terms on the left hand side, insert the wave-speed scaling \eqref{eq:speedscaling}, and divide the equation by ${\varepsilon}^4$. We further define the operator
\begin{align}
\label{Eqn:OperatorB0}
\mathcal{B}_0:=\id - \frac{\sum_{m=1}^M {\alpha}_m m^4}{12}\,\partial_x^2\,,
\end{align}
which can -- thanks to Lemma \ref{Lem:LimitOperatorA} -- be regarded as the formal limit of $\mathcal{B}_{\varepsilon}$ as ${\varepsilon}\to0$. In Fourier space, these operators correspond to the symbol functions
\begin{align}
\label{Eqn:OperatorBSymb}
b_{\varepsilon}\at{k} =1+ \sum_{m=1}^M{\alpha}_m m^2 \frac{1-\sinc^2\at{mk{\varepsilon}/2}}{{\varepsilon}^2}\,,\qquad
b_0\at{k} =1+ \frac{\sum_{m=1}^M {\alpha}_m m^4}{12}k^2\,,
\end{align}
which are illustrated in Figure~\ref{Fig2} and satisfy
\begin{align*}
b_{\varepsilon}\at{k}\quad \xrightarrow{{\varepsilon}\to0}\quad b_0\at{k}
\end{align*}
for any fixed $k\in{\mathbb{R}}$. This convergence, however, does not hold uniformly in $k$ since $\mathcal{B}_{\varepsilon}$ is a singular perturbation of $\mathcal{B}_0$. Using the uniform positivity of these symbol functions, we easily demonstrate the existence of the inverse operators
\begin{align*}
\mathcal{B}_{\varepsilon}^{-1},\,\mathcal{B}_0^{-1} \;:\;\fspace{L}^{2}\at{\mathbb{R}}\to\fspace{L}^{2}\at{\mathbb{R}}\,,
\end{align*}
where $\mathcal{B}_0^{-1}$ maps actually into the Sobolev space $\fspace{W}^{2,2}\at{\mathbb{R}}$ and is hence smoothing because $1/b_0\at{k}$ decays quadratically at infinity. The inverse of $\mathcal{B}_{\varepsilon}$, however, is less regularizing because $b_{\varepsilon}\at{k}$ remains bounded as $k\to\pm\infty$. In order to obtain asymptotic estimates for $\mathcal{B}_{\varepsilon}^{-1}$, we introduce the cut-off operator
\begin{align*}
\Pi_{\varepsilon}\;:\;\fspace{L}^2\at{\mathbb{R}}\to\fspace{L}^2\at{\mathbb{R}}
\end{align*}
by defining its symbol function $\pi_{\varepsilon}$ as follows
\begin{align}
\label{eqn.cutoff.def}
\pi_{\varepsilon} \at{k}:=\left\{\begin{array}{lcl}
1&&\text{for \;$\abs{k}\leq\displaystyle \frac{4}{{\varepsilon}}$}\,,\\0&&\text{else}\,.
\end{array}\right.
\end{align}
One of our key technical results is the following characterization of $\mathcal{B}_{\varepsilon}^{-1}$, which reveals that $\mathcal{B}_{\varepsilon}$ admits an almost compact inverse. For $m=1$, a similar but slightly stronger result has been given in \cite[Corollary 3.5]{FP99} using a careful Fourier-pole analysis of the involved integral operators. For $m>1$, however, the symbol functions possess more poles in the complex plane and hence we argue differently.
\begin{figure}[t!]%
\centering{%
\includegraphics[width=0.8\textwidth]{symbol_b}%
}%
\caption{Sketch of the symbol function $b_{\varepsilon}$ from \eqref{Eqn:OperatorBSymb}, depicted on two intervals for ${\varepsilon}>0$ (black) and ${\varepsilon}=0$ (gray). Notice that $b_{\varepsilon}\at{0}=\min_{k\in{\mathbb{R}}} b_{\varepsilon}\at{k}=1$ holds for all ${\varepsilon}\geq0$.
}%
\label{Fig2}%
\end{figure}%
\begin{lemma}[asymptotic estimates for $\mathcal{B}_{\varepsilon}^{-1}$]
\label{Lem:InversOfB}
For any ${\varepsilon}>0$, the operator $\mathcal{B}_{\varepsilon}$ respects the even-odd parity and is both
self-adjoint and invertible on $\fspace{L}^2\at{\mathbb{R}}$. Moreover, there exists a constant
$C$ such that
\begin{align}
\label{Lem:InversOfB.Eqn1}
\norm{\Pi_{\varepsilon} \mathcal{B}_{\varepsilon}^{-1} G}_{2,2}+{\varepsilon}^{-2}\norm{\at{\id-\Pi_{\varepsilon}}\mathcal{B}_{\varepsilon}^{-1} G}_{2}\leq C \norm{G}_{2}
\end{align}
holds for all $G\in\fspace{L}^2\at{\mathbb{R}}$ and all $0<{\varepsilon}\leq1$. Here,
$\norm{\cdot}_{2,2}$ denotes the usual norm in $\fspace{W}^{2,2}\at{\mathbb{R}}$.
\end{lemma}
\begin{proof}
In view of \eqref{Eqn:OperatorBeps}, \eqref{Eqn:OperatorBSymb} and Lemma \ref{Lem:PropertiesOperatorA}, it remains to show \eqref{Lem:InversOfB.Eqn1}. Using the properties of the $\sinc$ function, see Figure \ref{Fig1}, we readily demonstrate
\begin{align*}
1\geq 1-\sinc^2\at{mz}\geq \frac{\at{\min\{\abs{z},\,2\}}^2}{6}\qquad \text{for all} \quad z\in{\mathbb{R}}\quad\text{and} \quad m\in{\mathbb{N}}\,.
\end{align*}
Consequently, we get
\begin{align*}
1-\sinc^2\at{m{\varepsilon} k /2}\geq
\frac{1}{24}\left\{\begin{array}{lcl}%
\displaystyle{\varepsilon}^2k^2&& \text{for $\;\abs{k}\leq \frac{4}{{\varepsilon}}$}
\\
16&& \text{else}
\end{array}\right.%
\end{align*}
for all $m$, and hence
\begin{align*}
b_{\varepsilon}\at{k}\geq c \left\{\begin{array}{lcl}%
1+k^2&& \text{for $\;\abs{k}\leq \frac{4}{{\varepsilon}}$}
\\
\D1/{\varepsilon}^2&& \text{else}
\end{array}\right.%
\end{align*}
for some constant $c>0$. Moreover, noting that
\begin{align*}
\widehat{\mathcal{B}_{\varepsilon}^{-1}G}\at{k}= \frac{\widehat{G}\at{k}}{b_{\varepsilon}\at{k}}
\end{align*}
and using Parseval's theorem we estimate
\begin{align*}
\norm{\Pi_{\varepsilon} \mathcal{B}_{\varepsilon}^{-1} G}_{2,2}^2&= \int_{\abs{k}\leq \frac 4{\varepsilon}}
\at{1+k^2+k^4}\frac{\babs{\widehat{G}(k)}^2}{b_{\varepsilon}\at{k}^2}\dint k
\leq \frac{1}{c^2}
\int_{\abs{k}\leq \frac4{\varepsilon}}\frac{1+k^2+k^4}{1+2k^2+k^4}
\babs{\widehat{G}(k)}^2\dint{k}\leq
\frac{1}{c^2}\norm{G}_2^2
\end{align*}
as well as
\begin{align*}
\norm{\at{\id-\Pi_{\varepsilon}}\mathcal{B}_{\varepsilon}^{-1} G}_{2}^2=
\int_{\abs{k}\geq \frac{4}{{\varepsilon}}}{\frac{\babs{\widehat{G}(k)}^2}{b_{\varepsilon}\at{k}^2}}\dint k\leq \frac{{\varepsilon}^4}{c^2} \norm{G}_2^2\,,
\end{align*}
so \eqref{Lem:InversOfB.Eqn1} follows immediately.
\end{proof}
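The symbol bounds used above can be checked numerically as well. The following sketch -- again only a sanity check, with hypothetical coefficients ${\alpha}_1={\alpha}_2=1$ and $M=2$ -- verifies that $b_{\varepsilon}\at{k}/\at{1+k^2}$ stays bounded away from zero for $\abs{k}\leq4/{\varepsilon}$:
\begin{verbatim}
import numpy as np

alpha = [1.0, 1.0]                    # hypothetical coefficients alpha_m
sinc = lambda z: np.sinc(z / np.pi)   # numpy's sinc is sin(pi z)/(pi z)

def b(k, eps):
    return 1.0 + sum(a * m ** 2 * (1 - sinc(m * k * eps / 2) ** 2) / eps ** 2
                     for m, a in enumerate(alpha, start=1))

for eps in [0.5, 0.1, 0.02]:
    k = np.linspace(0.0, 4.0 / eps, 200001)
    print(eps, np.min(b(k, eps) / (1 + k ** 2)))  # bounded away from zero
\end{verbatim}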
There exists another useful characterization of $\mathcal{B}_{\varepsilon}^{-1}$, which relies on the non-expansive estimate $\norm{\mathcal{A}_{m{\varepsilon}}W}_\infty \leq \norm{W}_\infty$, see Lemma \ref{Lem:PropertiesOperatorA}.
\begin{lemma}[von Neumann representation]
\label{Lem:vonNeumann}
We have
\begin{align*}
\mathcal{B}_{\varepsilon}^{-1}={\varepsilon}^2\sum_{i=0}^\infty \frac{\at{\sum_{m=1}^M{\alpha}_m m^2\mathcal{A}_{m{\varepsilon}}^2}^{i}}{\at{{\varepsilon}^2+\sum_{m=1}^M {\alpha}_m m^2}^{i+1}}\,,
\end{align*}
where the series on the right-hand side converges for any $W\in\fspace{L}^2\at{\mathbb{R}}$.
\end{lemma}
\begin{proof}
In the first step we regard all operators as defined on and taking values in $\fspace{L}^\infty\at{\mathbb{R}}$. We also use the abbreviation
\begin{align*}
\mathcal{I}_{\varepsilon} := \frac{\sum_{m=1}^M {\alpha}_m m^2 \mathcal{A}_{m{\varepsilon}}^2}{{\varepsilon}^2+c_0^2}
\end{align*}
and notice that \eqref{eq:speedscaling} and \eqref{Eqn:OperatorBeps} imply
\begin{align*}
\mathcal{B}_{\varepsilon}=
\frac{{\varepsilon}^2+c_0^2}{{\varepsilon}^2}\at{\id-\mathcal{I}_{\varepsilon}}\,.
\end{align*}
Since the operator norm of $\mathcal{I}_{\varepsilon}$ -- computed with respect to the $\infty$-norm -- satisfies
\begin{align*}
\norm{\mathcal{I}_{\varepsilon}}_{{\mathrm{op}}}\leq\frac{c_0^2}{{\varepsilon}^2+c_0^2}<1\,,
\end{align*}
the von Neumann formula provides
\begin{align}
\label{Lem:vonNeumann.PEqn1}
\mathcal{B}_{\varepsilon}^{-1}=\frac{{\varepsilon}^2}{{\varepsilon}^2+c_0^2}\Bat{\id +\mathcal{I}_{\varepsilon}+\mathcal{I}_{\varepsilon}^2+\tdots}=
\frac{{\varepsilon}^2}{{\varepsilon}^2+c_0^2}\id +\frac{{\varepsilon}^2}{{\varepsilon}^2+c_0^2}\Bat{\id+\mathcal{I}_{\varepsilon}+\mathcal{I}_{\varepsilon}^2+\tdots}\mathcal{I}_{\varepsilon}
\end{align}
in the sense of an absolutely convergent series of $\fspace{L}^\infty$-operators. In the second step we generalize this result using the estimates from Lemma \ref{Lem:PropertiesOperatorA}. In particular, the right-hand side in \eqref{Lem:vonNeumann.PEqn1} is well-defined for any $W\in\fspace{L}^2\at{\mathbb{R}}$ since Lemma \ref{Lem:PropertiesOperatorA} ensures $\mathcal{I}_{\varepsilon} W\in\fspace{L}^\infty\at{\mathbb{R}}$.
\end{proof}
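On the level of Fourier symbols, the von Neumann representation can be tested directly: the partial sums of the series, evaluated at the symbols $a_{m{\varepsilon}}\at{k}=\sinc\at{mk{\varepsilon}/2}$, must approach $1/b_{\varepsilon}\at{k}$. A sketch with the same hypothetical coefficients as before:
\begin{verbatim}
import numpy as np

alpha = [1.0, 1.0]                    # hypothetical coefficients alpha_m
c0sq = sum(a * m ** 2 for m, a in enumerate(alpha, start=1))
sinc = lambda z: np.sinc(z / np.pi)

eps, k = 0.1, np.linspace(0.0, 50.0, 5001)
a2 = sum(a * m ** 2 * sinc(m * k * eps / 2) ** 2
         for m, a in enumerate(alpha, start=1))
b = 1.0 + sum(a * m ** 2 * (1 - sinc(m * k * eps / 2) ** 2) / eps ** 2
              for m, a in enumerate(alpha, start=1))

partial = np.zeros_like(k)
term = eps ** 2 / (eps ** 2 + c0sq) * np.ones_like(k)
for _ in range(20000):                # slow geometric decay near k = 0
    partial += term
    term *= a2 / (eps ** 2 + c0sq)
print(np.max(np.abs(partial - 1.0 / b)))  # should be negligibly small
\end{verbatim}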
\begin{corollary}[invariance properties of $\mathcal{B}_{\varepsilon}^{-1}$]
\label{Cor:InvarianceProperties}
The operator $\mathcal{B}_{\varepsilon}^{-1}$ respects for both ${\varepsilon}>0$ and ${\varepsilon}=0$ the
nonnegativity, the evenness, and the unimodality of functions.
\end{corollary}
\begin{proof}
For ${\varepsilon}>0$, all assertions follow from the representation formula in Lemma \ref{Lem:vonNeumann} and the corresponding properties of the operators $\mathcal{A}_{m{\varepsilon}}$, see Lemma \ref{Lem:PropertiesOperatorA}. For ${\varepsilon}=0$ we additionally employ the approximation results from Lemma \ref{Lem:LimitOperatorA} as well as the estimates from Lemma \ref{Lem:InversOfB}.
\end{proof}
Note that all results concerning $\mathcal{B}_{\varepsilon}^{-1}$ are intimately related to the supersonicity condition $c_{\varepsilon}^2> c_0^2$. In a subsonic setting, one can still establish partial inversion formulas but the analysis is completely different, cf. \cite{HMSZ13} for an application in a different context.
\section{Proof of the main result}\label{sect:proof}
In view of the wave-speed scaling \eqref{eq:speedscaling} and the fixed point formulation \eqref{eq:scaledfpu1a}, the rescaled traveling wave problem consists in finding solutions $W_{\varepsilon}\in\fspace{L}^2\at{\mathbb{R}}$ to the operator equation
\begin{align}
\label{Eqn:RescaledTWEqn}
\mathcal{B}_{\varepsilon} W_{\varepsilon} = \mathcal{Q}_{\varepsilon}\ato{W_{\varepsilon}}+{\varepsilon}^2 \mathcal{P}_{\varepsilon}\ato{W_{\varepsilon}}\,,
\end{align}
where the linear operator $\mathcal{B}_{\varepsilon}$ has been introduced
in \eqref{Eqn:OperatorBeps}. Moreover, the
nonlinear operators
\begin{align}
\label{Eqn:NonlinOp.W}
\mathcal{Q}_{\varepsilon}\ato{W}:= \sum_{m=1}^M {\beta}_mm^3 \mathcal{A}_{m{\varepsilon}}\at{\mathcal{A}_{m{\varepsilon}}W}^2\,,\qquad
\mathcal{P}_{\varepsilon}\ato{W}:=
\frac{1}{{\varepsilon}^6}\sum_{m=1}^M m \mathcal{A}_{m{\varepsilon}} \Psi_m^\prime\at{m {\varepsilon}^2 \mathcal{A}_{m{\varepsilon}} W}
\end{align}
encode the quadratic and cubic nonlinearities, respectively, and are scaled such that the respective formal ${\varepsilon}$-expansions involve nontrivial leading order terms. In particular, we have
\begin{align}
\label{Eqn:NonlinOp.Q}
\mathcal{Q}_{\varepsilon}\ato{W}\quad\xrightarrow{\;\;{\varepsilon}\to0\;\;}\quad
\mathcal{Q}_0\ato{W}:=\at{\sum_{m=1}^M {\beta}_mm^3} W^2\,,
\end{align}
for any fixed $W\in\fspace{L}^2\at{\mathbb{R}}$, see \eqref{Lem:LimitOperatorA.Eqn0}. Note also
that \eqref{Eqn:RescaledTWEqn} always admits the trivial solution $W_{\varepsilon}\equiv0$.
\par
In what follows we solve the leading order problem to obtain the KdV wave $W_0$, transform \eqref{Eqn:RescaledTWEqn} via the ansatz \eqref{eqn.def.corrector} into another fixed point equation, and employ the contraction mapping principle to prove the existence of a corrector $V_{\varepsilon}\in\fspace{L}^2_\mathrm{even}\at{\mathbb{R}}$ for all sufficiently small ${\varepsilon}>0$. In \cite{FP99}, the last step has been solved using an operator-valued variant of the implicit function theorem.
\subsection{The leading order problem and the KdV wave}\label{sect:proof.1}
Passing formally to the limit ${\varepsilon}\to0$ in \eqref{Eqn:RescaledTWEqn}, we obtain the leading order equation
\begin{align}
\label{Eqn:LeadingOrderProblem.1}
\mathcal{B}_0 W_0 = \mathcal{Q}_0\ato{W_0}\,,
\end{align}
which is the ODE \eqref{Eqn:WaveODE} with parameters
\begin{align}
\label{Eqn:LeadingOrderProblem.2x}
d_1 := \frac{12}{\sum_{m=1}^M {\alpha}_m m^4}\,,\qquad
d_2 := \frac{12\sum_{m=1}^M {\beta}_mm^3}{\sum_{m=1}^M {\alpha}_m m^4}\,.
\end{align}
In particular, the leading order problem is a planar Hamiltonian ODE with conserved quantity $E=\tfrac12\at{W^\prime}^2+\tfrac13d_2W^3-\tfrac12d_1W^2$
and admits precisely one homoclinic orbit as shown in Figure \ref{Fig3}.
\begin{figure}[t!]%
\centering{%
\includegraphics[width=0.8\textwidth]{tw_pot}%
}%
\caption{%
Potential energy (\emph{left panel}) and phase diagram (\emph{right panel}) for the nonlinear oscillator ODE \eqref{Eqn:WaveODE} with coefficients \eqref{Eqn:LeadingOrderProblem.2x}, which determines the KdV wave $W_0$. There exists precisely one homoclinic orbit (solid black curve in the right panel) which corresponds to the solitary wave $W_0$. The closed loops inside the homoclinic orbits correspond to periodic KdV waves, see \cite{FM14}.
}%
\label{Fig3}%
\end{figure}%
\begin{lemma}[linear and nonlinear leading-order problem]
\label{Lem:LeadingOrder}
There exists a unique solution $W_0\in\fspace{L}^2_\mathrm{even}\at{\mathbb{R}}$ to \eqref{Eqn:LeadingOrderProblem.1}, which is
moreover smooth, pointwise positive, and exponentially decaying. Moreover, the $\fspace{L}^2$-kernel of the linear operator $\mathcal{L}_0$ with
\begin{align}
\label{Lem:LeadingOrder.Eqn1}
\mathcal{L}_0 V:= \mathcal{B}_0 V- \mathcal{M}_0V\,,\qquad \mathcal{M}_0V:=2\at{\sum_{m=1}^M \beta_m m^3}W_0 V
\end{align}
is simple and spanned by the odd function $W_0^\prime$.
\end{lemma}
\begin{proof}
The existence and uniqueness of $W_0$ follow from standard ODE arguments, and the identity
$\mathcal{L}_0W_0^\prime=0$ holds by construction. Moreover, the simplicity of the $\fspace{L}^2$-kernel of the differential operator $\mathcal{L}_0$ can be proven by the following Wronski-type argument: Suppose for contradiction that $V_1, V_2\in\fspace{L}^2\at{\mathbb{R}}$ are two linearly independent kernel functions of $\mathcal{L}_0$; the uniqueness of solutions to initial value problems then guarantees ${\omega}\at{0}\neq0$, where
\begin{align*}
,\qquad {\omega}\at{x}:=\det\begin{pmatrix} V_1\at{x}& V_2\at{x}\\
V_1^\prime\at{x}& V_2^\prime\at{x}
\end{pmatrix}\,.
\end{align*}
The ODE $\mathcal{L}_0V_i=0$ combined with $V_i\in\fspace{L}^2\at{\mathbb{R}}$ implies that $V_i$ and $V_i^\prime$ are continuous functions with
\begin{align*}
\abs{V_i\at{x}}+\abs{V_i^\prime\at{x}}\quad\xrightarrow{\;\abs{x}\to\infty\;}\quad0\,,
\end{align*}
and we conclude that ${\omega}\at{x}\to0$ as $\abs{x}\to\infty$. On the other hand, since $V_1$ and $V_2$ solve the same second-order ODE without first-order term, we easily compute ${\omega}^\prime\at{x}=V_1\at{x}V_2^{\prime\prime}\at{x}-V_1^{\prime\prime}\at{x}V_2\at{x}=0$ and obtain the desired contradiction.
\end{proof}
Since $W_0$ is smooth, it satisfies \eqref{Eqn:RescaledTWEqn} up to small error terms.
In particular, the corresponding linear and the quadratic terms almost cancel due to \eqref{Eqn:LeadingOrderProblem.1}.
\begin{lemma}[${\varepsilon}$-residual of $W_0$]
\label{Lem.epsResidual}
There exists a constant $C$ such that
\begin{align}
\label{Lem.epsResidual.Eqn1}
\norm{R_{\varepsilon}}_2+\norm{S_{\varepsilon}}_2\leq C\qquad \text{with}\qquad R_{\varepsilon}:=\frac{\mathcal{Q}_{\varepsilon}\ato{W_0}-\mathcal{B}_{\varepsilon} W_0}{{\varepsilon}^2} \,,\qquad S_{\varepsilon}:=\mathcal{P}_{\varepsilon}\ato{W_0}
\end{align}
holds for all $0<{\varepsilon}\leq 1$.
\end{lemma}
\begin{proof}
We first notice that Lemma \ref{Lem:PropertiesOperatorA} ensures
\begin{align*}
\norm{\mathcal{A} _{m{\varepsilon}}^2W_0}_2\leq\norm{\mathcal{A} _{m{\varepsilon}}W_0}_2\leq\norm{W_0}_2\,,\qquad
\norm{\mathcal{A} _{m{\varepsilon}}W_0}_\infty\leq \norm{W_0}_\infty
\end{align*}
and in view of Assumption \ref{MainAssumption} we find
\begin{align*}
\norm{S_{\varepsilon}}_2&\leq\frac{1}{{\varepsilon}^6}\sum_{m=1}^M m {\gamma}_m\norm{m{\varepsilon}^2\mathcal{A}_{m{\varepsilon}}W_0}_\infty^2\norm{m{\varepsilon}^2\mathcal{A}_{m{\varepsilon}}^2W_0}_2
\leq C\,.
\end{align*}
Thanks to the smoothness of $W_0$, Lemma \ref{Lem:LimitOperatorA}
provides a constant $C$ such that
\begin{align*}
\norm{\mathcal{A}_{m{\varepsilon}}W_0^j-W_0^j}_2+\norm{\mathcal{A}_{m{\varepsilon}}W_0^j-W_0^j}_\infty \leq Cm^2{\varepsilon}^2
\end{align*}
holds for $j\in\{1,2\}$, and this implies
\begin{align*}
\Bnorm{\mathcal{A}_{m{\varepsilon}}\bat{\mathcal{A}_{m{\varepsilon}}W_0}^2-W_0^2}_2&\leq
\Bnorm{\mathcal{A}_{m{\varepsilon}}\bat{\mathcal{A}_{m{\varepsilon}}W_0}^2-\mathcal{A}_{m{\varepsilon}}W_0^2}_2+\Bnorm{\mathcal{A}_{m{\varepsilon}}W_0^2-W_0^2}_2
\\&\leq
\Bnorm{\bat{\mathcal{A}_{m{\varepsilon}}W_0}^2-W_0^2}_2+
Cm^2{\varepsilon}^2
\\&\leq\bat{\norm{\mathcal{A}_{m{\varepsilon}}W_0}_\infty+\norm{W_0}_\infty}\norm{\mathcal{A}_{m{\varepsilon}}W_0-W_0}_2+
Cm^2{\varepsilon}^2
\\&\leq Cm^2{\varepsilon}^2
\end{align*}
and hence
\begin{align*}
\bnorm{\mathcal{Q}_{\varepsilon}\ato{W_0}-\mathcal{Q}_0\ato{W_0}}_2=
\norm{\sum_{m=1}^M {\beta}_m m^3 \mathcal{A}_{m{\varepsilon}}\at{\mathcal{A}_{m{\varepsilon}}W_0}^2-\at{\sum_{m=1}^{M}{\beta}_m m^3} W_0^2}_2\leq C{\varepsilon}^2.
\end{align*}
Therefore, and since $W_0$ satisfies \eqref{Eqn:LeadingOrderProblem.1}, we get
\begin{align}
\label{Lem.epsResidual.PEqn1}
\norm{R_{\varepsilon}}_2&\leq \frac{\norm{\mathcal{B}_{\varepsilon} W_0- \mathcal{B}_0W_0}_2}{{\varepsilon}^2}+C
\leq
\sum_{m=1}^M{\alpha}_mm^2\frac{\Bnorm{\mathcal{A}_{m{\varepsilon}}^2 W_0 - W_0- \frac{m^2{\varepsilon}^2}{12}W_0^{\prime\prime}}_2}{{\varepsilon}^4}+C\,,
\end{align}
where the second inequality stems from the definitions of $\mathcal{B}_{\varepsilon}$ and $\mathcal{B}_0$, see \eqref{Eqn:OperatorBeps} and \eqref{Eqn:OperatorB0}.
Lemma \ref{Lem:LimitOperatorA} also yields
\begin{align*}
\norm{\mathcal{A}_{m{\varepsilon}}W_0-W_0-\frac{{\varepsilon}^2m^2}{24}W_0^{\prime\prime}}_2\leq Cm^4{\varepsilon}^4\,,\qquad
\norm{\mathcal{A}_{m{\varepsilon}}W_0^{\prime\prime}-W_0^{\prime\prime}}_2\leq Cm^2{\varepsilon}^2
\end{align*}
and combining this with \eqref{Lem:PropertiesOperatorA.Eqn1}$_2$ and the identity
\begin{align*}
\mathcal{A}_{m{\varepsilon}}^2 W_0 - W_0- \frac{m^2{\varepsilon}^2}{12}W_0^{\prime\prime}
&=\at{\mathcal{A}_{m{\varepsilon}}+\mathrm{id}}\Bat{\mathcal{A}_{m{\varepsilon}} W_0 - W_0- \frac{m^2{\varepsilon}^2}{24}W_0^{\prime\prime}}+\frac{m^2{\varepsilon}^2}{24}\Bat{\mathcal{A}_{m{\varepsilon}}W_0^{\prime\prime}-W_0^{\prime\prime}}\,,
\end{align*}
we arrive at
\begin{align*}
\norm{\mathcal{A}_{m{\varepsilon}}^2W_0-W_0-\frac{{\varepsilon}^2m^2}{12}W_0^{\prime\prime}}_2&\leq 2\norm{\mathcal{A}_{m{\varepsilon}} W_0 - W_0- \frac{m^2{\varepsilon}^2}{24}W_0^{\prime\prime}}_2+\frac{m^2{\varepsilon}^2}{24}\norm{\mathcal{A}_{m{\varepsilon}}W_0^{\prime\prime}-W_0^{\prime\prime} }_2
\\&\leq Cm^4{\varepsilon}^4\,.
\end{align*}
The desired estimate for $R_{\varepsilon}$ is now a direct consequence of
\eqref{Lem.epsResidual.PEqn1}.
\end{proof}
For completeness we mention that
\begin{align*}
W_0\at{x}=\frac{3d_1}{2d_2}\,{\mathrm{sech}}^2\at{\tfrac12\sqrt{d_1}{x}}
\end{align*}
can be verified by direct calculations and that formulas for the spectrum of $\mathcal{L}_0$ can, for instance, be found in \cite[page 768]{MF53}; see also \cite[Lemma 4.2]{FP99}.
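For the reader's convenience we sketch the direct calculation: by the conserved quantity given above, \eqref{Eqn:WaveODE} takes the form $W^{\prime\prime}=d_1W-d_2W^2$, and inserting the ansatz $W\at{x}=A\,{\mathrm{sech}}^2\at{bx}$ yields
\begin{align*}
W^{\prime\prime}\at{x}=4b^2\,W\at{x}-\frac{6b^2}{A}\,W^2\at{x}\,,
\end{align*}
so matching coefficients gives $b=\tfrac12\sqrt{d_1}$ and $A=3d_1/\at{2d_2}$.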
\subsection{The linearized traveling wave equation \texorpdfstring{for ${\varepsilon}>0$}{}}\label{sect:proof.2}
For any ${\varepsilon}>0$, we define the linear operator $\mathcal{L}_{\varepsilon}$ on $\fspace{L}^2\at{\mathbb{R}}$ by
\begin{align}
\label{Eqn:DefLandM}
\mathcal{L}_{\varepsilon} V:= \mathcal{B}_{\varepsilon} V-\mathcal{M}_{\varepsilon} V\,,\qquad \mathcal{M}_{\varepsilon} V:= 2\sum_{m=1}^M \beta_m m^3 \mathcal{A}_{m{\varepsilon}}\Bat{\at{\mathcal{A}_{m{\varepsilon}}W_0}\at{\mathcal{A}_{m{\varepsilon}} V}},
\end{align}
where $W_0\in\fspace{L}^2_\mathrm{even}\at{\mathbb{R}}$ is the unique even KdV wave provided by Lemma \ref{Lem:LeadingOrder}. This operator appears naturally in the linearization of \eqref{Eqn:RescaledTWEqn} around $W_0$ as
\begin{align*}
\mathcal{B}_{\varepsilon}\at{W_0+{\varepsilon}^2V}-\mathcal{Q}_{\varepsilon}\ato{W_0+{\varepsilon}^2V} = - {\varepsilon}^2 R_{\varepsilon} + {\varepsilon}^2 \mathcal{L}_{\varepsilon} V - {\varepsilon}^4 \mathcal{Q}_{\varepsilon}\ato{V}
\end{align*}
holds due to the linearity of $\mathcal{B}_{\varepsilon}$ and the quadraticity of $\mathcal{Q}_{\varepsilon}$.
\begin{lemma}[elementary properties of $\mathcal{L}_{\varepsilon}$]
\label{Lem:PropertiesOfL}
For any ${\varepsilon}>0$, the operator $\mathcal{L}_{\varepsilon}$ is self-adjoint in $\fspace{L}^2\at{\mathbb{R}}$ and respects the even-odd parity. Moreover, we have
\begin{align*}
\mathcal{L}_{\varepsilon} W\quad\xrightarrow{\;{\varepsilon}\to0\;}\quad \mathcal{L}_0 W\qquad \text{strongly in}\quad \fspace{L}^2\at{\mathbb{R}}
\end{align*}
for any $W\in\fspace{W}^{2,2}\at{\mathbb{R}}$.
\end{lemma}
\begin{proof}
Since $W_0$ is smooth and even, all assertions follow directly from the properties
of $\mathcal{A}_{m{\varepsilon}}$ and $\mathcal{B}_{\varepsilon}$, see \eqref{Eqn:OperatorBeps} and Lemma \ref{Lem:LimitOperatorA}.
\end{proof}
Our perturbative approach requires to invert the operator $\mathcal{L}_{\varepsilon}$ on the space $\fspace{L}^2_\mathrm{even}\at{\mathbb{R}}$ -- see the fixed point problem in Theorem \ref{Thm:FixedPoints} below -- and in view of Lemma \ref{Lem:LeadingOrder} one easily shows that $\mathcal{L}_0$ has this property. The singularly perturbed case ${\varepsilon}>0$, however, is more involved and is addressed in the following theorem, which is
actually the key asymptotic result in our approach. Notice that the analogue for $M=1$ is not stated explicitly in \cite{FP99} although it could be derived from the asymptotic estimates therein.
\begin{theorem}[uniform invertibility of $\mathcal{L}_{\varepsilon}$]
\label{Lem:InvertibilityOfLeps}
There exists $0<{\varepsilon}_*\leq1$ such that for any $0<{\varepsilon}\leq{\varepsilon}_*$ the operator $\mathcal{L}_{\varepsilon}$ is continuously invertible on $\fspace{L}^2_\mathrm{even}\at{\mathbb{R}}$. More precisely, there exists a constant $C$ which depends on ${\varepsilon}_*$ but not on ${\varepsilon}$ such that
\begin{align*}
\norm{\mathcal{L}_{\varepsilon}^{-1}G}_2\leq C\norm{G}_2
\end{align*}
holds for all $0<{\varepsilon}\leq{\varepsilon}_*$ and any $G\in\fspace{L}^2_\mathrm{even}\at{\mathbb{R}}$.
\end{theorem}
\begin{proof}
\ul{\emph{Preliminaries}}: Our strategy is to
show the existence of a constant $c_*>0$ such that
\begin{align}
\label{Lem:InvertibilityOfLeps.PEqn10}
\norm{\mathcal{L}_{\varepsilon} V}_2\geq c_* \norm{V}_2
\end{align}
holds for all $V\in\fspace{L}^2_\mathrm{even}\at{\mathbb{R}}$ and all sufficiently small ${\varepsilon}>0$, because this implies the desired result. In fact, \eqref{Lem:InvertibilityOfLeps.PEqn10} ensures that the operator
\begin{align*}
\mathcal{L}_{\varepsilon}:\fspace{L}^2_\mathrm{even}\at{\mathbb{R}}\to \fspace{L}^2_\mathrm{even}\at{\mathbb{R}}
\end{align*}
has both trivial kernel and closed image. The symmetry of $\mathcal{L}_{\varepsilon}$ gives
\begin{align*}
\ker \mathcal{L}_{\varepsilon} = \mathrm{coker}\, \mathcal{L}_{\varepsilon}
\end{align*}
and due to the closed image we conclude that $\mathcal{L}_{\varepsilon}$ is not only injective but also surjective. Moreover,
the ${\varepsilon}$-uniform continuity of the inverse $\mathcal{L}_{\varepsilon}^{-1}$ is a further consequence of \eqref{Lem:InvertibilityOfLeps.PEqn10}.
\par
Now suppose for contradiction that such a constant $c_*$ does not exist. Then we can choose a sequence $\at{{\varepsilon}_n}_{n\in{\mathbb{N}}}\subset\ocinterval{0}{1}$ with ${\varepsilon}_n\to0$ as well as sequences $\at{V_n}_{n\in{\mathbb{N}}}\subset\fspace{L}^2_\mathrm{even}\at{\mathbb{R}}$ and $\at{G_n}_{n\in{\mathbb{N}}}\subset\fspace{L}^2_\mathrm{even}\at{\mathbb{R}}$ such that
\begin{align}
\label{Lem:InvertibilityOfLeps.PEqn8}
\mathcal{L}_{{\varepsilon}_n}V_n=G_n\,,\qquad \norm{V_n}_2=1\,,\qquad \norm{G_n}_2\quad\xrightarrow{n\to\infty}\quad0\,.
\end{align}
\par
\ul{\emph{Weak convergence to $0$}}:
By weak compactness we can assume that there exists $V_\infty\in\fspace{L}^2_\mathrm{even}\at{\mathbb{R}}$ such that
\begin{align}
\label{Lem:InvertibilityOfLeps.PEqn1}
V_n\quad \xrightharpoonup{\;n\to\infty\;}\quad V_\infty\qquad \text{weakly in $\fspace{L}^2\at{\mathbb{R}}$}\,,
\end{align}
and using Lemma \ref{Lem:PropertiesOfL} we find
\begin{align*}
\skp{V_\infty}{\mathcal{L}_{0}\phi}=
\lim_{n\to\infty}\skp{V_n}{\mathcal{L}_{{\varepsilon}_n}\phi}=
\lim_{n\to\infty}\skp{\mathcal{L}_{{\varepsilon}_n}V_n}{\phi}=
\lim_{n\to\infty}\skp{G_n}{\phi}=0
\end{align*}
for any sufficiently smooth test function $\phi$. In view of
the definition of the differential operator $\mathcal{L}_0$ -- see \eqref{Eqn:OperatorB0} and \eqref{Lem:LeadingOrder.Eqn1} -- we estimate
\begin{align*}
\abs{\int_{\mathbb{R}} W_0\at{x}\phi^{\prime\prime}\at{x}\dint{x}}\leq C \norm{\phi}_2
\end{align*}
for all $\phi\in\fspace{W}^{2,2}\at{\mathbb{R}}$ and conclude that $V_\infty$ belongs to $\fspace{W}^{2,2}\at{\mathbb{R}}$, where $\mathcal{L}_0V_\infty=0$ holds due to
\begin{align*}
\skp{\mathcal{L}_{0} V_\infty}{\phi}=\skp{V_\infty}{\mathcal{L}_{0}\phi}=0\,.
\end{align*}
In other words, the even function $V_\infty$ belongs to the kernel of $\mathcal{L}_0$ and
\begin{align*}
V_\infty=0
\end{align*}
follows from Lemma \ref{Lem:LeadingOrder}.
\par
\ul{\emph{Further notations:}}
For the remaining considerations we abbreviate the constant from Lemma \ref{Lem:InversOfB} by $D$ and denote by $C$ any generic constant (whose value may change from line to line) that is independent of $n$ and $D$. We further choose $K>M$ sufficiently large such that
\begin{align}
\label{Lem:InvertibilityOfLeps.PEqn5a}
\sup\limits_{\abs{\xi}\geq K-M} W_0\at{\xi}\leq \frac{1}{4 D\sum_{m=1}^M {\beta}_m m^3}\,,
\end{align}
and denote by $\chi_K$ the characteristic function of the interval
$I_K:=\ccinterval{-K}{K}$. In what follows we write $V_n=V_n^\upidx{1}+V_n^\upidx{2}+V_n^\upidx{3}$ with
\begin{align*}
V_n^\upidx{1} := \chi_K \,\Pi_{{\varepsilon}_n} V_n\,,\qquad
V_n^\upidx{2} := \at{1-\chi_K} \,\Pi_{{\varepsilon}_n} V_n\,,\qquad
V_n^\upidx{3} :=\at{\mathrm{id}- \Pi_{{\varepsilon}_n}}V_n
\end{align*}
and observe that these definitions imply
\begin{align}
\label{Lem:InvertibilityOfLeps.PEqn5b}
\max\limits_{i\in\{1,2,3\}}\bnorm{V_n^\upidx{i}}_{2}\leq \norm{V_n}_2=1\,.
\end{align}
We also set
\begin{align*}
U_n^\upidx{i}:=\mathcal{M}_{{\varepsilon}_n} V_n^\upidx{i}
\end{align*}
and combine Lemma \ref{Lem:PropertiesOperatorA} with the smoothness of $W_0$ to obtain
\begin{align}
\label{Lem:InvertibilityOfLeps.PEqn7}
\bnorm{U_n^\upidx{i}}_2\leq C\bnorm{V_n^\upidx{i}}_2\,.
\end{align}
Moreover, by construction we have
\begin{align*}
V_n=\mathcal{B}_{{\varepsilon}_n}^{-1}\Bat{U_n^\upidx{1}+U_n^\upidx{2}+U_n^\upidx{3}+G_n}\,,
\end{align*}
so the estimate
\begin{align}
\label{Lem:InvertibilityOfLeps.PEqn6a}
\bnorm{V_n^\upidx{1}+V_n^\upidx{2}}_{2,2} + {\varepsilon}_n^{-2} \bnorm{V_n^\upidx{3}}_{2}\leq D \at{\bnorm{U_n^\upidx{1}}_2+\bnorm{U_n^\upidx{2}}_2+\bnorm{U_n^\upidx{3}}_2 + \norm{G_n}_2}
\end{align}
is provided by Lemma \ref{Lem:InversOfB}.
\par
\ul{\emph{Strong convergence of $V_n^\upidx{1}$ and $V_n^\upidx{3}$}}:
Inserting \eqref{Lem:InvertibilityOfLeps.PEqn8}, \eqref{Lem:InvertibilityOfLeps.PEqn5b}, and \eqref{Lem:InvertibilityOfLeps.PEqn7} into \eqref{Lem:InvertibilityOfLeps.PEqn6a} gives
\begin{align}
\label{Lem:InvertibilityOfLeps.PEqn6b}
\bnorm{V_n^\upidx{1}+V_n^\upidx{2}}_{2,2} + {\varepsilon}_n^{-2} \bnorm{V_n^\upidx{3}}_{2}\leq CD
\end{align}
and hence
\begin{align}
\label{Lem:InvertibilityOfLeps.PEqn2}
V_n^\upidx{3}\quad\xrightarrow{n\to\infty}\quad 0\qquad\text{strongly in $\fspace{L}^2\at{\mathbb{R}}$}\,.
\end{align}
Thanks to
\begin{align}
\label{Lem:InvertibilityOfLeps.PEqn2a}
V_n^\upidx{2}=0\qquad \text{in}\qquad \fspace{L}^2\at{I_K}
\end{align}
we also infer from \eqref{Lem:InvertibilityOfLeps.PEqn6b} the estimate
\begin{align*}
\bnorm{V_n^\upidx{1}}_{2,2, I_K}\leq
\bnorm{V_n^\upidx{1}+V_n^\upidx{2}}_{2,2}\leq CD\,,
\end{align*}
where $\bnorm{\cdot}_{2,2, I_K}$ denotes the norm in $\fspace{W}^{2,2}\at{I_K}$.
Since $\fspace{W}^{2,2}\at{I_K}$ is compactly embedded into $\fspace{L}^{2}\at{I_K}$, we conclude that the sequence $\bat{V_n^\upidx{1}}_{n\in{\mathbb{N}}}$ is precompact in $\fspace{L}^2\at{I_K}$. On the other hand, the weak convergence \eqref{Lem:InvertibilityOfLeps.PEqn1} combined with \eqref{Lem:InvertibilityOfLeps.PEqn2} and \eqref{Lem:InvertibilityOfLeps.PEqn2a}
implies
\begin{align*}
V_n^\upidx{1}\quad \xrightharpoonup{\;n\to\infty\;}\quad V_\infty=0\qquad \text{weakly in}\quad \fspace{L}^2\at {I_K},
\end{align*}
and in summary we find $V_n^\upidx{1}\to0$ strongly in $\fspace{L}^2\at{I_K}$ by standard arguments.
This even implies
\begin{align}
\label{Lem:InvertibilityOfLeps.PEqn3}
V_n^\upidx{1}\quad\xrightarrow{n\to\infty}\quad 0\qquad\text{strongly in $\fspace{L}^2\at{{\mathbb{R}}}$}
\end{align}
as $V_n^\upidx{1}$ vanishes outside the interval $I_K$.
\par
\ul{\emph{Upper bounds for $\nnorm{U_n^\upidx{2}}_2$}}: %
Since the functions $V_n^\upidx{2}$ are supported in ${\mathbb{R}}\setminus I_K$, the functions $\mathcal{A}_{m{\varepsilon}_n}V_n^\upidx{2}$ are supported in ${\mathbb{R}}\setminus I_{K-m{\varepsilon}_n/2}=\{{x}:\abs{{x}}>K-m{\varepsilon}_n/2\}$. Moreover, we have
\begin{align*}
\babs{\at{\mathcal{A}_{m{\varepsilon}_n}W_0}\at{\xi}}\;\leq
\sup_{\abs{{x}-\xi}\leq m{\varepsilon}_n/2} {W_0\at{x}}\;\leq
\sup_{\abs{{x}-\xi}\leq M/2} {W_0\at{x}}
\end{align*}
for any given $\xi\in{\mathbb{R}}$. Therefore, and using
\begin{align*}
\Babs{\bat{\mathcal{A}_{m{\varepsilon}} V_n^\upidx2}\at{x}}\leq \Bat{\mathcal{A}_{m{\varepsilon}} \babs{V_n^\upidx2}}\at{x}\qquad\text{for all}\quad x\in{\mathbb{R}}\,,
\end{align*}
we estimate
\begin{align*}
\abs{\bat{\mathcal{A}_{m{\varepsilon}_n}W_0}\bat{\mathcal{A}_{m{\varepsilon}_n}V_n^\upidx2}}&\leq \at{\sup\limits_{\abs{\xi}\geq K-M/2}\babs{\at{\mathcal{A}_{m{\varepsilon}_n}W_0}\at\xi}}
\babs{\mathcal{A}_{m{\varepsilon}_n}V_n^\upidx{2}}\\&\leq
\at{\sup\limits_{\abs{\xi}\geq K-M}{W_0\at\xi}}
\mathcal{A}_{m{\varepsilon}_n}\babs{V_n^\upidx{2}}\,,
\end{align*}
so Lemma \ref{Lem:PropertiesOperatorA} gives
\begin{align*}
\norm{\mathcal{A}_{m{\varepsilon}_n}\at{\bat{\mathcal{A}_{m{\varepsilon}_n}W_0}\bat{\mathcal{A}_{m{\varepsilon}_n}V_n^\upidx2}}}_2\leq
\at{\sup\limits_{\abs{\xi}\geq K-M}{W_0\at\xi}}
\bnorm{V_n^\upidx{2}}_2
\end{align*}
and hence
\begin{align}
\label{Lem:InvertibilityOfLeps.PEqn4}
\bnorm{U_n^\upidx{2}}_2\leq \at{\sup\limits_{\abs{\xi}\geq K-M}{W_0\at\xi}}\at{2\sum_{m=1}^M{\beta}_mm^3}\bnorm{V_n^\upidx{2}}_2\leq
\frac{1}{2D}
\end{align}
due to \eqref{Lem:InvertibilityOfLeps.PEqn5a} and \eqref{Lem:InvertibilityOfLeps.PEqn5b}.
\par
\ul{\emph{Derivation of the contradiction}}:
Combining \eqref{Lem:InvertibilityOfLeps.PEqn6a} with
\eqref{Lem:InvertibilityOfLeps.PEqn7} gives
\begin{align*}
\bnorm{V_n}_2&\leq
\bnorm{V_n^\upidx{1}+V_n^\upidx{2}}_2+\bnorm{V_n^\upidx{3}}_2
\\&\leq
D\at{\bnorm{U_n^\upidx{1}}_2+
\bnorm{U_n^\upidx{2}}_2+\bnorm{ U_n^\upidx{3}}_2+
\bnorm{ G_n}_2}
\\&\leq
D\Bat{C\bnorm{V_n^\upidx{1}}_2+\bnorm{U_n^\upidx{2}}_2+C\bnorm{V_n^\upidx{3}}_2+\bnorm{ G_n}_2} \,,
\end{align*}
and passing to the limit $n\to\infty$ we get
\begin{align*}
\limsup_{n\to\infty}\norm{V_n}_2\leq D\limsup_{n\to\infty} \bnorm{U_n^\upidx{2}}_2\leq\tfrac12
\end{align*}
thanks to \eqref{Lem:InvertibilityOfLeps.PEqn8}$_3$, \eqref{Lem:InvertibilityOfLeps.PEqn2},
\eqref{Lem:InvertibilityOfLeps.PEqn3}, and \eqref{Lem:InvertibilityOfLeps.PEqn4}. This, however, contradicts the normalization condition \eqref{Lem:InvertibilityOfLeps.PEqn8}$_2$. In particular, we have shown the existence of a constant $c_*$ as in \eqref{Lem:InvertibilityOfLeps.PEqn10} and the proof is complete.
\end{proof}
\subsection{Nonlinear fixed point argument}\label{sect:proof.3}
Setting $W_{\varepsilon}=W_0+{\varepsilon}^2V_{\varepsilon}$, the nonlocal traveling wave equation \eqref{Eqn:RescaledTWEqn} is equivalent to
\begin{align*}
\mathcal{L}_{\varepsilon} V_{\varepsilon} = R_{\varepsilon} + S_{\varepsilon}+{\varepsilon}^2 \,\mathcal{Q}_{\varepsilon}\ato{V_{\varepsilon}}+{\varepsilon}^2\,\mathcal{N}_{\varepsilon}\ato{V_{\varepsilon}}
\end{align*}
with
\begin{align}
\label{Eqn:NonlinOP.V}
\mathcal{N}_{\varepsilon}\ato{V}:=
\frac{ \mathcal{P}_{\varepsilon} \ato{W_0+{\varepsilon}^2 V}-\mathcal{P}_{\varepsilon}\ato{W_0}}{{\varepsilon}^2}\,,
\end{align}
where $\mathcal{Q}_{\varepsilon}$, $\mathcal{P}_{\varepsilon}$ and $R_{\varepsilon}$, $S_{\varepsilon}$ have been introduced in \eqref{Eqn:NonlinOp.W} and \eqref{Lem.epsResidual.Eqn1}, respectively.
Since $\mathcal{L}_{\varepsilon}$ can be inverted for all sufficiently small ${\varepsilon}>0$, we finally arrive at
the following result.
\begin{theorem}
[existence and uniqueness of the corrector $V_{\varepsilon}$]
\label{Thm:FixedPoints}
There exist constants $D>0$ and $0<{\varepsilon}_*\leq1$ such that
the nonlinear operator $\mathcal{F}_{\varepsilon}$ with
\begin{align}
\label{Thm:FixedPoints.Eqn1}
\mathcal{F}_{\varepsilon} \ato{V} := \mathcal{L}_{\varepsilon}^{-1}\Bat{R_{\varepsilon} + S_{\varepsilon} + {\varepsilon}^2\, \mathcal{Q}_{\varepsilon}\ato{V}+{\varepsilon}^2\, \mathcal{N}_{\varepsilon}\ato{V}}
\end{align}
admits for any $0< {\varepsilon}\leq{\varepsilon}_*$ a unique fixed point $V_{\varepsilon}$ in the set $B_D=\{V\in\fspace{L}^2_\mathrm{even}\at{\mathbb{R}}\;:\; \norm{V}_2\leq{D}\}$.
\end{theorem}
\begin{proof}
Our strategy is to demonstrate that the operator $\mathcal{F}_{\varepsilon}$ maps $B_D$ contractively into itself provided that $D$ is sufficiently large and ${\varepsilon}$ sufficiently small; the desired result is then a direct consequence of the Banach fixed-point theorem. Within this proof we denote by $C$ any generic constant that is independent of $D$ and ${\varepsilon}$. We also observe that $\abs{\mathcal{A}_\eta Z}\leq \mathcal{A}_\eta\abs{Z}$ holds for any $Z\in\fspace{L}^1_{\mathrm{loc}}\at{\mathbb{R}}$ and $\eta>0$, and recall that
\begin{align*}
\norm{R_{\varepsilon}+S_{\varepsilon}}_2\leq C
\end{align*}
is provided by Lemma \ref{Lem.epsResidual}.
\par
\emph{\ul{Estimates for the quadratic terms}}:
For $V\in B_D$ we find
\begin{align*}
\babs{{\varepsilon}^2 \mathcal{Q}_{\varepsilon}\ato{V}}\leq{\varepsilon}^2 \sum_{m=1}^M
{\beta}_m m^3 \norm{\mathcal{A}_{m{\varepsilon}}V}_\infty \mathcal{A}_{m{\varepsilon}}^2\abs{V}\leq
{\varepsilon}^{3/2}D\sum_{m=1}^M \beta_mm^{5/2}\, \mathcal{A}_{m{\varepsilon}}^2\babs{V}\,,
\end{align*}
where we used the estimate \eqref{Lem:PropertiesOperatorA.Eqn1}$_1$, and
in view of \eqref{Lem:PropertiesOperatorA.Eqn1}$_2$ we obtain
\begin{align*}
\bnorm{{\varepsilon}^2 \mathcal{Q}_{\varepsilon}\ato{V}}_2
\leq {\varepsilon}^{3/2} C D \max_{1\leq m\leq M}\norm{\mathcal{A}_{m{\varepsilon}}^2V}_2
\leq {\varepsilon}^{3/2} C D \norm{V}_2\leq{\varepsilon}^{3/2} C D^2.
\end{align*}
In the same way we verify the estimate
\begin{align*}
\bnorm{{\varepsilon}^2 \mathcal{Q}_{\varepsilon}\ato{V_2}-{\varepsilon}^2 \mathcal{Q}_{\varepsilon}\ato{V_1}}_2&\leq \norm{{\varepsilon}^2\sum_{m=1}^M
{\beta}_m m^3 \Bat{\norm{\mathcal{A}_{m{\varepsilon}}V_2}_\infty+
\norm{\mathcal{A}_{m{\varepsilon}}V_1}_\infty}\mathcal{A}_{m{\varepsilon}}^2\babs{V_2-V_1}}_2
\\&\leq
{\varepsilon}^{3/2}CD\norm{\mathcal{A}_{m{\varepsilon}}^2\abs{V_2-V_1}}_2\leq
{\varepsilon}^{3/2}CD\norm{V_2-V_1}_2
\end{align*}
for arbitrary $V_1,V_2\in B_D$.
\par
\emph{\ul{Estimates for the higher order terms}}:
For $V_1,V_2\in B_D$ we set
$Z_{m,{\varepsilon},i}:={\varepsilon}^2m \mathcal{A}_{m{\varepsilon}}\at{W_0+{\varepsilon}^2 V_i}$ and employ \eqref{Lem:PropertiesOperatorA.Eqn1}$_1$ to estimate
\begin{align*}
\norm{Z_{m,{\varepsilon},i}}_\infty&\leq
{\varepsilon}^2m\norm{\mathcal{A}_{m{\varepsilon}}W_0}_\infty+
{\varepsilon}^4m\norm{\mathcal{A}_{m{\varepsilon}}V_i}_\infty
\\&\leq
{\varepsilon}^2m\norm{W_0}_\infty+
{\varepsilon}^{7/2}m^{1/2}\norm{V_i}_2
\\&\leq
{\varepsilon}^2 m\at{C+{\varepsilon}^{3/2}D}=:\zeta_{m,{\varepsilon}}\,.
\end{align*}
Due to the intermediate value theorem as well as the properties of $\Psi_m^{\prime\prime}$ we get
\begin{align*}
\Babs{{\varepsilon}^2\mathcal{N}_{\varepsilon}\ato{V_2}-{\varepsilon}^2\mathcal{N}_{\varepsilon}\ato{V_1}}&\leq
\sum_{m=1}^M m\abs{\frac{\Psi^\prime_m\bat{Z_{m,{\varepsilon},2}}-\Psi^\prime_m\bat{Z_{m,{\varepsilon},1}}}{{\varepsilon}^6}}
\\&
\leq
\sum_{m=1}^M
\frac{m{\gamma}_m \zeta _{m,{\varepsilon}}^2 \abs{Z_{m,{\varepsilon},2}-Z_{m,{\varepsilon},1}}}{{\varepsilon}^6}
\\&
\leq
\sum_{m=1}^M
\frac{m^2{\gamma}_m \zeta _{m,{\varepsilon}}^2 \abs{\mathcal{A}_{m{\varepsilon}}V_2 - \mathcal{A}_{m{\varepsilon}}V_1}}{{\varepsilon}^2}
\\&\leq {\varepsilon}^2\at{C+{\varepsilon}^{3/2}D}^2\at{\sum_{m=1}^M
{\gamma}_m m^{4}}
\mathcal{A}_{m{\varepsilon}}\babs{V_2-V_1}
\end{align*}
and hence
\begin{align*}
\bnorm{{\varepsilon}^2 \mathcal{N}_{\varepsilon}\ato{V_2}-{\varepsilon}^2\mathcal{N}_{\varepsilon}\ato{V_1}}_2\leq
{\varepsilon}^2C \at{C+{\varepsilon}^{3/2}D}^2\bnorm{V_2-V_1}_2\,
\end{align*}
after taking $\fspace{L}^2$-norms. A particular consequence is the estimate
\begin{align*}
\bnorm{{\varepsilon}^2\mathcal{N}_{\varepsilon}\ato{V}}_2\leq
{\varepsilon}^2CD \at{C+{\varepsilon}^{3/2}D}^2
\end{align*}
for any $V\in B_D$, where we used that $\mathcal{N}_{\varepsilon}\ato{0}=0$.
\par
\emph{\ul{Concluding arguments}}:
Combining all estimates derived so far with the definition of $\mathcal{F}_{\varepsilon}$ and the bounds for $\mathcal{L}_{\varepsilon}^{-1}$ -- see Lemma \ref{Lem:InvertibilityOfLeps} -- we verify
\begin{align*}
\norm{\mathcal{F}_{\varepsilon}\ato{V}}_2\leq C + {\varepsilon}^{3/2}CD^2 +
{\varepsilon}^2CD \at{C+{\varepsilon}^{3/2}D}^2
\end{align*}
for all $V\in B_D$ as well as
\begin{align*}
\norm{\mathcal{F}_{\varepsilon}\ato{V_2}-\mathcal{F}_{\varepsilon}\ato{V_1}}_2\leq\at{{\varepsilon}^{3/2}CD+
{\varepsilon}^2C \at{C+{\varepsilon}^{3/2}D}^2}\norm{V_2-V_1}_2
\end{align*}
for all $V_1,V_2\in B_D$. To complete the proof we first set $D:=2\,C$ and choose afterwards ${\varepsilon}>0$ sufficiently small.
\end{proof}
\begin{corollary}[main result from \S\ref{sect:intro}]
\label{Cor:Summary}
For any sufficiently small ${\varepsilon}>0$,
the reformulated traveling wave equation \eqref{eq:scaledfpu1a} admits a unique even solution $W_{\varepsilon}$ with speed $\sqrt{c_0^2+{\varepsilon}^2}$ such that
\begin{align*}
\norm{W_{\varepsilon}-W_0}_{2}+\norm{W_{\varepsilon}-W_0}_{\infty}\leq C{\varepsilon}^2
\end{align*}
holds for some constant $C$ independent of ${\varepsilon}$. Moreover, $W_{\varepsilon}$ is nonnegative and smooth.
\end{corollary}
\begin{proof}
The existence and local uniqueness of $W_{\varepsilon}=W_0+{\varepsilon}^2 V_{\varepsilon}$ along with the $\fspace{L}^2$-estimate is a direct consequence of Theorem \ref{Thm:FixedPoints}. Moreover, re-inspecting the arguments from the proof of Theorem \ref{Thm:FixedPoints} and using Lemma \ref{Lem:vonNeumann} we easily derive a uniform $\fspace{L}^\infty$-bound for the corrector $V_{\varepsilon}$. Finally, the right-hand side in \eqref{Eqn:RescaledTWEqn} is -- at least for sufficiently small ${\varepsilon}>0$ -- nonnegative due to the properties of the KdV wave $W_0$ and the potential $\Phi$, see Lemma \ref{Lem:LeadingOrder} and Assumption \ref{MainAssumption}. The nonnegativity of $W_{\varepsilon}$ is hence granted by Corollary~\ref{Cor:InvarianceProperties}.
\end{proof}
The constants in the proof of Theorem \ref{Thm:FixedPoints} are, of course, far from being optimal. In general, a solution branch ${\varepsilon}\mapsto W_{\varepsilon}\in\fspace{L}^2_\mathrm{even}\at{\mathbb{R}}$ on an interval $\ccinterval{0}{{\varepsilon}_*}$ can be continued for ${\varepsilon}>{\varepsilon}_*$ as long as the linearization of the traveling wave equation around $W_{{\varepsilon}_*}$ provides an operator $\mathcal{L}_{{\varepsilon}_*}$ that can be inverted on the space $\fspace{L}^2_\mathrm{even}\at{\mathbb{R}}$. Since the shift symmetry always implies that $W_{{\varepsilon}_*}^\prime$ is an odd kernel function of $\mathcal{L}_{{\varepsilon}_*}$, the unique continuation can hence only fail if the eigenvalue $c_{{\varepsilon}_*}^2$ of the linearized traveling wave operator
\begin{align}
\label{Eqn:LinTWOp}
V\mapsto \sum_{m=1}^M m^2 A_{m{\varepsilon}_*}\Phi_m^{\prime\prime}\at{m{\varepsilon}_*^2 A_{m{\varepsilon}_*} W_{{\varepsilon}_*}}A_{m{\varepsilon}_*} V
\end{align}
is not simple anymore. Unfortunately, almost nothing is known about the spectral properties of the operator \eqref{Eqn:LinTWOp} for moderate values of ${\varepsilon}_*$. It remains a challenging task to close this gap, especially since any result in this direction should have implications concerning the orbital stability of $W_{{\varepsilon}_*}$.
\par
For $M=1$ it has also been shown in \cite[Propositions 5.5 and 7.1]{FP99} that the distance profile $\mathcal{A}_{\varepsilon} W_{\varepsilon}$ is unimodal (`monotonic falloff') and decays exponentially for $x\to\pm\infty$. For $M>1$, it should be possible to apply a similar analysis to the velocity profile $W_{\varepsilon}$ but the technical details are much more involved. It remains open to identify alternative and more robust proof strategies. For instance, if one could show that the waves from Corollary \ref{Cor:Summary} can be constructed by some variant of the abstract iteration scheme
\begin{align*}
W\mapsto \mathcal{B}_{\varepsilon}^{-1}\at{\mathcal{Q}_{\varepsilon}\ato{W}+{\varepsilon}^2\mathcal{P}_{\varepsilon}\ato{W}}\,,
\end{align*}
the unimodality of $W_{\varepsilon}$ would be implied by the invariance properties of $\mathcal{A}_{m{\varepsilon}}$ and $\mathcal{B}_{\varepsilon}^{-1}$, see Lemma \ref{Lem:PropertiesOperatorA} and Corollary \ref{Cor:InvarianceProperties}. A similar argument could be used for the exponential decay because $\mathcal{A}_{m{\varepsilon}}$ maps a function with decay rate ${\lambda}$ to a function that decays with rate
\begin{align*}
\bar{\lambda} = \frac{\sinh\at{\tfrac12 {\varepsilon} m {\lambda} }}{\tfrac12 {\varepsilon} m {\lambda}}
\end{align*}
and since the von Neumann formula from Lemma \ref{Lem:vonNeumann} provides corresponding expressions for $\mathcal{B}_{\varepsilon}^{-1}$; see \cite{HR10} for a similar argument to identify the decay rates of front-like traveling waves. In this context we further emphasize that only supersonic waves can be expected to decay exponentially. For subsonic waves with speed $c_{\varepsilon}^2<c_0^2$, the linearization of the traveling wave equation \eqref{eq:scaledfpu1} predicts tail oscillations and hence non-decaying waves, see \cite{HMSZ13} for a similar analysis with non-convex interaction potentials.
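To illustrate this idea at leading order, the following sketch computes the KdV wave $W_0$ by iterating $W\mapsto\mathcal{B}_0^{-1}\mathcal{Q}_0\ato{W}$ in Fourier space on a large periodic box. Since the plain iteration is not expected to converge to the nontrivial fixed point, the sketch adds a Petviashvili-type normalization -- a swapped-in stabilization, not the scheme discussed above -- and uses the hypothetical coefficients $d_1=d_2=1$:
\begin{verbatim}
import numpy as np

d1, d2 = 1.0, 1.0                     # hypothetical ODE coefficients
N, L = 1024, 60.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
symbol = 1.0 + k ** 2 / d1            # Fourier symbol of W -> W - W''/d1

W = np.exp(-x ** 2)                   # even initial guess
for _ in range(100):
    Q = (d2 / d1) * W ** 2
    Wh, Qh = np.fft.fft(W), np.fft.fft(Q)
    gamma = np.sum(symbol * np.abs(Wh) ** 2) / np.sum((Wh.conj() * Qh).real)
    W = gamma ** 2 * np.real(np.fft.ifft(Qh / symbol))

print(W.max(), 3 * d1 / (2 * d2))     # both approximately 1.5
\end{verbatim}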
\section{Introduction}
In this paper we study a generalization of the metric dimension problem, which is a problem of finding
a landmark set or a resolving set of a graph
\cite{Slat75,HM1976,Babai,Chvatal,MT,KRR1996,CEJO00,SSH02,ST04,HSV11,ELW,HN12}.
A landmark set is a subset of vertices $L\subseteq V$ of an
undirected graph $G=(V,E)$, such that for any $u,v\in V$ ($u\neq
v$), there exists $\tau\in L$ with $d(u,\tau)\neq d(v,\tau)$,
where $d(x,y)$ denotes the number of edges in a shortest path
between $x$ and $y$. In this case, $\tau$ is called a separating
vertex for $u$ and $v$. Alternatively, it is equivalent to require
for $L$ to have a separating vertex for every pair $u,v\in
V\setminus L$.
This problem was introduced by Harary and Melter \cite{HM1976} and by Slater \cite{Slat75}. The problem was studied in the combinatorics literature \cite{Babai,CH+07,CEJO00,CZ03}, with respect to complexity \cite{KRR1996,BE+06,HSV11,HN12,DPL11,ELW}, and with respect to the design of polynomial-time algorithms for certain graph classes (sometimes even for the
weighted version), and in particular for paths and trees \cite{Slat75,HM1976,KRR1996,CEJO00,SSH02,ELW}.
The $k$-metric dimension problem (for an integer $k \geq 2$) \cite{kmetric1,AEfirst,kmetric2,kmetric3,kmetric4,kmetric5} is the problem of finding a
subset $L$ that has at least $k$ separating vertices for every
pair of distinct vertices $u$ and $v$.
If this is required for
every $u,v \in V$, the model is called AP (all-pairs model) (introduced in \cite{kmetric1} and independently in \cite{AEfirst}), and
if this is required for every $u,v \in V \setminus L$, the model is
called NL (non-landmarks model), which was introduced in \cite{AEfirst}.
In all cases, a valid solution is
called a landmark set. In the weighted case, a non-negative
rational cost is given for each vertex by a function
$c:V\rightarrow \mathbb{Q}^{+}$. For a set $U\subseteq V$,
$c(U)={\sum_{v\in U}}c(v)$ is the total cost of vertices in $U$,
and the goal is to find a landmark set $L$ with the minimum value
$c(L)$. For a given graph, the minimum cardinality of any landmark set is called the $k$-metric dimension, while the minimum cost of any landmark set is called the weighted $k$-metric dimension. Yero, Estrada-Moreno, and Rodr{\'{\i}}guez{-}Vel{\'{a}}zquez \cite{kmetric3} showed that computing the $k$-metric dimension of an arbitrary graph is NP-hard.
In this paper we study the case of trees for $k=2$, in the non-landmarks model (NL). In this model, a landmark set always exists for any $k$ as $V$ is a landmark set.
Let $T=(V,E)$ be a tree graph. Let $n=\left\vert V\right\vert $ be
the number of $T$'s vertices. The case of a path graph was completely analyzed in \cite{AEfirst} (for all values of $k$ and both models; see also \cite{kmetric4} for the all-pairs model), and therefore we will assume that $T$ has at least one vertex of degree greater than $2$. It was shown in \cite{AEfirst} that a minimum cost landmark set for a path graph and $k=2$ consists of two or three vertices. Note that (by definition) every solution for AP is a solution for NL. Any subset of three vertices of a path forms a landmark set (for each of the models), and the two endpoints of the path also form a landmark set for both models. However, the sets consisting of the first two vertices of the path or of the last two vertices of the path are landmark sets for NL but not for AP. For the case $k=1$, trees were studied in \cite{CEJO00,HM1976,KRR1996,Slat75,ELW}.
Given a tree, a vertex of degree at least $3$ is called a core vertex or a {\it core}. A vertex of degree $2$ is called a path vertex, and a vertex of degree $1$ is called a leaf.
As paths were completely studied in \cite{AEfirst}, we will consider trees that have at least one core.
For a core $v$, we often consider the subtrees creating by removing $v$ from the tree, and call them the subtrees of its neighbors (one subtree for each neighbor). We sometimes consider the BFS tree that is created by rooting the tree at $v$.
A subtree of a neighbor of a core $v$ that is a path (without any cores) is called a leg (or a standard leg) of $v$. If a leg consists of a single vertex (that is, $v$ is connected to a leaf), we call it a short leg, and otherwise it is called a long leg. For a leg $\ell$ of $v$ which consists of $j \geq 1$ vertices, the vertex of position $i$, also denoted by $\ell^i$ (for $1 \leq i \leq j$), is the vertex of the leg $\ell$ whose distance from $v$ is $i$.
Khuller et al. \cite{KRR1996} showed that a landmark set for $k=1$ can be created by selecting all leaves except for the leaf of one leg of each core. In \cite{ELW}, it was shown that a minimum cost landmark set is created by selecting one vertex of each leg of a core except for one leg, such that the selected vertices have total minimal cost.
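As an illustration, the rule of \cite{ELW} quoted above translates into the following sketch (hypothetical helper code; we assume that the legs of every core have already been extracted as lists of vertices):
\begin{verbatim}
# Sketch of the minimum-cost rule for k = 1 quoted above: for each
# core, pick a cheapest vertex on every one of its legs, then drop
# the most expensive of these picks (one exempted leg per core).
def min_cost_k1(legs_per_core, cost):
    total = 0
    for legs in legs_per_core:        # legs: list of vertex lists
        if not legs:                  # cores without legs contribute nothing
            continue
        picks = [min(cost[v] for v in leg) for leg in legs]
        total += sum(picks) - max(picks)
    return total
\end{verbatim}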
Given a tree, we define a {\it small core} to be a core vertex $v$ that satisfies all the following conditions: the core has degree exactly $3$, it has at least two legs, and one of its legs is short. The second leg of a small core may be either short or long, and the third subtree of a neighbor of $v$ can be a (short or long) leg or it can contain one or more cores. Other cores are called regular cores. If the third subtree of a neighbor of $v$ contains a core, the closest core to $v$, denoted $x$, is connected to $v$ by a path consisting of path vertices. For a small core $v$, if the closest core $x$ is a small core too, then since $x$ also has two legs (in addition to the subtree of a neighbor of $x$ that contains $v$), the tree has exactly two (small) cores and no regular cores (while in any other case, $x$ is a regular core). We will have two special cases. One special case is where the tree contains exactly two small cores (and no regular cores). The other special case is where the tree has exactly one small core (and no regular cores, that is, the tree consists of a core with three legs, at least one of which is short). The two special cases will be considered separately after some properties are discussed, while all cases where the tree has at least one regular core will be treated together.
Next, we define a {\it modified leg}. For a core vertex $v$, a subtree of a neighbor of $v$ that has exactly one core, and this core is a small core, is called a modified leg (in this case $v$ is the closest regular core to at least one small core). That is, a modified leg of $v$ consists of a path to a small core, a small core, and the standard legs of the small core (where the small core has one short leg and one leg that is short or long).
A g-leg of a core $v$ is a subtree of a neighbor of $v$ that is either a standard leg (short or long) or a modified leg (the terms short leg and long leg will refer only to standard legs). A position on a modified leg $\ell$ is defined exactly as it is defined for a standard leg, but if the small core (which is part of $\ell$) is in position $i\geq 1$, then there are two vertices with position $i+1$ on $\ell$, denoted by $\ell^a$ and $\ell^b$, where $\ell^b$ is the unique vertex of a short leg of the small core (if both its standard legs are short, the choice of which vertex of a short leg is $\ell^a$ and which one is $\ell^b$ is done arbitrarily).
As explained above, except for the special case treated later, if a small core $u$ belongs to a modified leg of $v$, then $v$ is a regular core.
A regular core is called minor if one of the two following conditions holds. The first condition is that it has at most one g-leg (as its degree is at least $3$, there are at least two other subtrees of its neighbors that are neither standard legs nor modified legs, and each such subtree contains a regular core).
The second condition is that its degree is at least $4$, it has no modified legs, it has exactly two standard legs, one of which is short (the other leg is either short or long) and the other (at least two) subtrees of its neighbors are not g-legs and contain regular cores. Thus, for a minor core $v$, no matter which condition out of the two is satisfied, there are always at least two subtrees of the neighbors of $v$ which are not g-legs, and it either has at most one g-leg (which is standard or modified), or it has two standard legs, at least one of which is short. If a regular core is not minor, then we call it a {\it main core}.
In the AP (all-pairs model), two separations in a landmark set $L$ are required for any pair of distinct vertices and not only for the vertices of $V\setminus L$. In this case, it is sufficient to analyze cores (without splitting them into small cores and regular cores) and standard legs. The condition for a set $L \subseteq V$ to be a landmark set is defined on the sets of legs of cores. For each core, if there is a leg that does not have any vertex in $L$, then any other leg must have at least two vertices in $L$. In particular, if a core has a single leg, it will not have any vertices in $L$ in a minimum cost landmark set (see \cite{kmetric1,kmetric2,kmetric3}).
\begin{figure} [h!]
\hspace{1.4in}
\includegraphics[angle=0,width=0.5\textwidth]{example1a}
\caption{An example of a tree with seven core vertices $a$, $b$, $c$, $d$, $e$, $f$, $g$. The vertices $b$ and $d$ are small cores, $c$ and $e$ are minor cores, and all others are main cores. Note that $c$ is a core with a single (modified) leg, while $e$ is a core without any g-legs. \label{exmpl}}
\end{figure}
\section{Properties}
In this section we analyze the structure of landmark sets. We prove a number of lemmas and claims that determine the requirements of minimal landmark sets (with respect to cost or to set inclusion). We will define required solution types for g-legs. This will allow us to design a relatively simple algorithm for computing a minimum cost landmark set for a given tree in the next section.
\begin{lemma} \label{lemma1}
Consider a tree and a regular core $v$.
Every subtree of a neighbor of $v$ that has a regular core must have a main core. In
particular, a tree that has a minor core also has at least two
main cores, and every tree with a regular core has a main core.
\end{lemma}
\begin{proof}
Root the tree at $v$, and consider a subtree of $v$ that has a regular core. Let $x$ be a regular core of a largest distance to $v$ in this subtree (that is, a regular core of largest depth in the rooted tree). The core $x$ must have at least two children in the rooted tree, as its degree in the tree is at least $3$. If the degree of $x$ in the tree is at least $4$, $x$ has at least three children in the rooted tree, and the subtrees of these children are g-legs, as these subtrees contain no regular cores (since otherwise $x$ is not a regular core of maximum depth). Since $x$ has at least three g-legs, it is not a minor core, and therefore it is a main core. Consider the case where the degree of $x$ is $3$ and it has exactly two children in the rooted tree, where the subtrees of these children are g-legs. A minor core of degree $3$ only has one g-leg, and therefore this case is impossible. We find that $x$ is a main core.
Consider a minor core $y$. By definition, the subtrees of at least two neighbors of $y$ are not g-legs; each of them has a regular core and thus it also has a main core, proving that there are at least two main cores in the tree.
Finally, consider a tree that has a regular core $u$. If $u$ is a main core, we are done. Otherwise, it is a minor core, and in this case there are at least two main cores in the tree.
\end{proof}
We define algorithms for finding subsets of vertices for g-legs of cores. We will call these sets {\it local sets}. We will show that for trees with at least one regular core, a minimum weight landmark set can be found by combining the local sets (in the two special cases of trees without regular cores, the conditions will still be necessary but they will not always be sufficient). In the algorithms, we will assume that a core with a set of g-legs is given. For a regular core this is simply the set of its g-legs. The case of small cores is relevant only for the two special cases. If the tree has a single core which is small, we consider the three standard legs of this core. In the case of a tree with two small cores, each one of them can be seen as a core with two standard legs (one of which is short), and one modified leg.
Each solution for the set of g-legs of one core $v$ will consist of finding a solution for each g-leg and combining such solutions, but the g-legs cannot be dealt with independently (and $v$ will never be selected as a part of a local set). We now define several types of solutions for standard legs and for modified legs. For standard legs, the solution types are as follows. A type $(s,0)$ solution is an empty set (that is, no vertices of the leg are selected). A type $(s,1)$ solution consists of one vertex of the leg whose position on the leg is at least $2$. A type $(s,2)$ solution consists of at least two vertices of the leg. Notice that solutions of types $(s,1)$ and $(s,2)$ are valid only for long legs. A type $(s,3)$ solution consists of the vertex with position $1$ in the leg.
For a modified leg $\ell$, whose small core is at position $i$, the solution types are as follows. A type $(m,1)$ solution contains exactly one vertex out of $\ell^a$ and $\ell^b$, that is, the solution is either $\{\ell^a\}$ or it is $\{\ell^b\}$. A type $(m,2)$ solution does not have any of the vertices $\ell^a$ and $\ell^b$, it consists of at least two vertices with positions $i+2$ or larger, and possibly other vertices (whose positions are not $i+1$). A type $(m,3)$ solution contains at least two vertices, one of which is either $\ell^a$ or $\ell^b$ (it is possible that it contains both these vertices).
A subset $S$ of vertices of the g-legs of a core $v$, inducing solutions for the g-legs, is called a local set if it satisfies the following conditions.
\begin{enumerate}
\item There is at most one standard leg whose solution is of type $(s,0)$, and the other standard legs have solutions of types $(s,1)$, $(s,2)$, and $(s,3)$.
\item All modified legs have solutions of types $(m,1)$, $(m,2)$, and $(m,3)$.
\item If there is a standard leg whose solution is of type $(s,0)$, then no modified leg has a solution of type $(m,1)$.
\item If there is a long leg $\ell$ whose solution is of type $(s,0)$, then every long leg except for $\ell$ has a solution of type $(s,2)$.
\item If there is a short leg whose solution is of type $(s,0)$, then every long leg has a solution of type $(s,2)$ or $(s,3)$.
\end{enumerate}
As mentioned above, a short leg cannot have a solution of type $(s,1)$ or $(s,2)$. Thus, if some standard leg $\ell$ has a solution of type $(s,0)$, then all short legs (except for $\ell$, if it is short) have solutions of type $(s,3)$.
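The five conditions are mechanical to verify for a proposed assignment of solution types. A small Python sketch of such a check, under a hypothetical encoding in which each g-leg is given by its kind and the type of its solution:
\begin{verbatim}
def is_local_set(legs):
    # legs: list of (kind, sol) pairs with kind in
    # {'short','long','modified'}; sol in {'s0','s1','s2','s3'} for
    # standard legs and in {'m1','m2','m3'} for modified legs.
    standard = [(k, s) for k, s in legs if k in ('short', 'long')]
    modified = [s for k, s in legs if k == 'modified']
    if any(s not in ('s0', 's1', 's2', 's3') for _, s in standard):
        return False
    # Short legs cannot carry type (s,1) or (s,2) solutions.
    if any(k == 'short' and s in ('s1', 's2') for k, s in standard):
        return False
    # Condition 2: modified legs have types (m,1), (m,2) or (m,3).
    if any(s not in ('m1', 'm2', 'm3') for s in modified):
        return False
    s0 = [k for k, s in standard if s == 's0']
    # Condition 1: at most one standard leg of type (s,0).
    if len(s0) > 1:
        return False
    if s0:
        # Condition 3: no modified leg of type (m,1).
        if 'm1' in modified:
            return False
        longs = [s for k, s in standard if k == 'long' and s != 's0']
        if s0[0] == 'long':
            # Condition 4: every other long leg has type (s,2).
            if any(s != 's2' for s in longs):
                return False
        else:
            # Condition 5: every long leg has type (s,2) or (s,3).
            if any(s not in ('s2', 's3') for s in longs):
                return False
    return True
\end{verbatim}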
\begin{lemma}\label{mustbelocal}
Consider a landmark set $L$, and a core $v$. The set $L$ contains a local set $S$ of $v$ as a subset.
\end{lemma}
\begin{proof}
First, we show that if the vertices of a g-leg that are in $L$ do not form any of the types of solutions defined above, then $L$ is not a landmark set. For standard legs, these types cover all possible solutions, thus we consider modified legs.
Consider a modified leg $\ell$ of a regular core $v$, let $u$ be its small core, and let $i$ be the position of $u$ on $\ell$. Any solution that contains at least one of $\ell^a$ and $\ell^b$ is either of type $(m,1)$ or of type $(m,3)$. If the solution does not contain any of the vertices $\ell^a$ and $\ell^b$, these two vertices cannot be separated by any vertex that is not on the legs of $u$, as their paths to such vertices traverse $u$ (or end at $u$), and their distances to $u$ are equal. To obtain two separations between $\ell^a$ and $\ell^b$ (if none of $\ell^a$ and $\ell^b$ is in $L$), $L$ must contain at least two vertices whose positions on $\ell$ are at least $i+2$ (these vertices are on the same standard leg of $u$ as $\ell^a$). Thus, in this case the solution is of type $(m,2)$. This also proves the second condition of local sets.
Consider two standard legs of $v$, $\ell_1$ and $\ell_2$. If none of the vertices $\ell_1^1$ and $\ell_2^1$ is in $L$, then they can only be separated by vertices of $\ell_1 \cup \ell_2$, as they have equal distances to $v$, and all paths from $\ell_1^1$ and $\ell_2^1$ to vertices not in $\ell_1 \cup \ell_2$ traverse $v$ (or end at $v$). Thus, the case that both legs have type $(s,0)$ solutions is impossible. This proves the first condition of local sets (and that except for at most one short leg with an $(s,0)$ type solution, the solutions of every short leg are of type $(s,3)$).
Consider a standard leg without any landmarks, $\ell_1$, and a leg $\ell_2$ that is either long or modified. We will show that $\ell_2$ has at least two landmarks unless $\ell_1$ is short and $\ell_2$ is long, in which case it may have a type $(s,3)$ solution, and in all other cases the possible types of solutions are those having at least two vertices, that is, $(s,2)$, $(m,2)$, and $(m,3)$ type solutions. Assume that $\ell_2$ has exactly one landmark.
The landmark of $\ell_2$ must be in the first position of $\ell_2$, as otherwise none of $\ell_1^1$ and $\ell_2^1$ is in $L$,
and there is just one separation between them (because only vertices of $\ell_1 \cup \ell_2$ can separate them). If $\ell_2$ is a modified leg, then as the positions of $\ell_2^a$ and $\ell_2^b$ on $\ell_2$ are at least $2$, this is not a solution of type $(m,1)$, and since it has a single vertex, it is not a solution of type $(m,2)$ or $(m,3)$, and we reach a contradiction. We are left with the case that $\ell_2$ is long, and its solution is of type $(s,3)$. However, if $\ell_1$ is long as well, since $\ell_1^2,\ell_2^2 \notin L$ (and $\ell_1$ has no vertices in $L$ while $\ell_2$ only has $\ell_2^1$ in $L$), these two vertices will only have one separation, showing that the solution is not valid. This proves the last three conditions.
\end{proof}
\begin{lemma} \label{lemma3}
Consider a set $L \subseteq V$. If for every core $v$, the subset of $L$ that is restricted to the vertices of the g-legs of $v$ is a local set, then every pair of vertices $x,y \notin L$ with equal positions on the g-legs of $v$ has at least two separations in $L$.
\end{lemma}
\begin{proof}
If $x$ and $y$ are on the same g-leg of $v$, then this must be a modified leg $\ell$ of $v$, and $\{x,y\}=\{\ell^a,\ell^b\}$. Since $x,y \notin L$, the solution for $\ell$ must be of type $(m,2)$. In this case there are two vertices of $L$ on the leg that contains $\ell^a$, the distance of each such vertex to $\ell^a$ is smaller than its distance to $\ell^b$, and there are at least two separations between $x$ and $y$.
Consider the case where $x$ and $y$ are on different g-legs of $v$. We claim that any vertex $z$ on the leg of $x$ is closer to $x$ than it is to $y$. Let $\tilde{\ell}$ be the leg of $x$. The path from $y$ to $z$ traverses $\tilde{\ell}^1$, thus $d(y,z)=d(y,v)+1+d(\tilde{\ell}^1,z)=d(x,v)+1+d(\tilde{\ell}^1,z)=d(x,\tilde{\ell}^1)+2+d(\tilde{\ell}^1,z)$. However, $d(x,z) \leq d(x,\tilde{\ell}^1)+d(\tilde{\ell}^1,z)$, proving $d(x,z)<d(y,z)$. Thus, if the two g-legs (of $x$ and $y$) have at least two vertices of $L$ in total, then there are at least two separations between $x$ and $y$. Any modified leg has at least one vertex in any local set, and if there is a standard leg with a $(s,0)$ type solution, then any modified leg has at least two vertices in any local set. We find that the only case where the two g-legs have at most one vertex of $L$ (together) is where both these g-legs are standard legs, one leg has a type $(s,0)$ solution, and the other leg has a type $(s,3)$ solution. We show that such a solution is not possible. If at least one of the legs of $x$ and $y$ is short, then it only has a vertex in position $1$. Thus, the positions of $x$ and $y$ are equal to $1$. This last case is impossible as $x,y \notin L$ implies that none of these two legs has a type $(s,3)$ solution if the positions of $x$ and $y$ are $1$. We find that both these legs are long, but then the leg that does not have a type $(s,0)$ solution must have a type $(s,2)$ solution, and the two legs have at least two vertices of $L$ in total, so this case is impossible too.
\end{proof}
\begin{lemma}\label{twotwo}
A local set of a main core has at least two vertices.
\end{lemma}
\begin{proof}
Consider a main core $v$ where the local set of its g-legs has at most one vertex.
Any local set of $v$ contains at least one vertex on every g-leg except for possibly one g-leg, and it has at least one vertex of every modified leg. Thus, $v$ has at most two g-legs.
If $v$ has at most one g-leg, then it is a minor core. If it has exactly two g-legs, then one of them has no vertices in the local set, while the other one has one vertex in this set. The g-leg with one vertex in the local set cannot be a modified leg, as in this case (where there is a standard leg with a $(s,0)$ type solution), it would have at least two vertices in the local set, by property $3$ of local sets. Thus, $v$ has two standard legs. If one of them is short, then $v$ is a minor core again. Otherwise, as one long leg has a type $(s,0)$ solution, the other long leg has a type $(s,2)$ solution, and the local set has at least two vertices, contradicting the assumption.
\end{proof}
\begin{claim}\label{clm1}
Consider a tree, a core $v$, and a set $L \subseteq V$ that contains a local set for $v$. If $x \neq y$ are vertices of the subtree consisting of $v$ and its g-legs such that $x,y \notin L$ and $d(x,v)<d(y,v)$, then any vertex of $L$, except possibly for the vertices on the g-leg of $v$ that contains $y$, separates them.
\end{claim}
\begin{proof}
Consider a vertex $z \in L$ that is not on any g-leg of $v$ that contains at least one of $x$ and $y$ (note that $y\neq v$, as $d(y,v)>0$, so $y$ is on a g-leg of $v$). The vertex $z$ separates $x$ and $y$ as
their paths to $z$ traverse $v$ (a path can start at $v$ if $x=v$, or the paths can end at $v$ if $z=v$), while $x$ and $y$ have different distances to $v$. If $x$ is on a g-leg that is not the g-leg of $y$, consider a vertex $z'$ on this g-leg. Since $d(x,z') \leq d(x,v)+d(v,z')<d(y,v)+d(v,z')$ while $d(y,z')=d(y,v)+d(v,z')$, $z'$ also separates $x$ and $y$.
\end{proof}
\begin{lemma}\label{subt}
Consider a tree that has at least two regular cores, and a set $L \subseteq V$. If for every regular core $v$, the subset of $L$ that is restricted to the vertices of the g-legs of $v$ is a local set, then for every pair of vertices $x,y \notin L$ in the subtree consisting of $v$ and its g-legs, there are two separations between $x$ and $y$.
\end{lemma}
\begin{proof}
By Lemma \ref{lemma1}, a tree with a minor core also has at least two main
cores. Thus, the tree has at least two main cores in any case. If
$d(x,v)=d(y,v)$, then there are two separations between $x$ and
$y$ in the subset of $L$ restricted to the g-legs of $v$, as it is
a local set. Otherwise, the tree has at least one main
core $u$ other than $v$, such that the vertices of $L$ on the
g-legs of $u$ form a local set for $u$, and therefore there are at
least two such vertices, by Lemma \ref{twotwo}. By Claim
\ref{clm1}, $x$ and $y$ are separated by the vertices of $L$ that
are in the local set which is the subset of $L$ restricted to the
g-legs of $u$.
\end{proof}
\begin{lemma}\label{toocor}
Consider a tree with at least two regular cores, and a set $L
\subseteq V$. If for every regular core $v$, the subset of $L$
that is restricted to the vertices of the g-legs of $v$ is a local
set, then $L$ is a landmark set.
\end{lemma}
\begin{proof}
Consider two vertices $x\neq y$, such that $x,y\notin L$. If $x$
and $y$ are in the subtree consisting of a regular core and its
g-legs, then there are two separations between them due to Lemma
\ref{subt}. Next, assume that $x$ is a vertex of a subtree
consisting of a main core $v$ and its g-legs (that is, $x=v$ or
$x$ is on a g-leg of $v$), while $y$ is not in this subtree. Let
$v'$ be the neighbor of $v$ on the tree path from $v$ to $y$. As
the subtree of $v'$ is not a g-leg of $v$, it has at least one
regular core and therefore it has a main core, by Lemma \ref{lemma1}. Let $z$ denote such
a core, let $z_1,z_2 \in L$ be vertices of the g-legs of $z$, and
let $v_1,v_2 \in L$ be vertices of the g-legs of $v$ (all of which
must exist by Lemma \ref{twotwo}). Consider the case $d(y,v)\leq
d(x,v)$. As $d(y,z_i)\leq d(y,v')+d(v',z_i)=d(y,v)-1+d(v',z_i)$,
while $d(x,z_i)=d(x,v)+1+d(v',z_i)\geq d(y,v)+1+d(v',z_i)$, for $i \in \{1,2\}$,
$z_1$ and $z_2$ separate $x$ and $y$. In the case $d(y,v) > d(x,v)$, we find
$d(y,v_i)=d(y,v)+d(v,v_i)$, for $i \in \{1,2\}$, while $d(x,v_i) \leq d(x,v)+d(v,v_i) <
d(y,v)+d(v,v_i)$, so $v_1$ and $v_2$ separate $x$ and $y$. If $x$
is a minor core or on a g-leg of a minor core $a$, root the tree
at $a$ (we let $a=x$ if $x$ is a minor core). There are at least
two subtrees of $a$ with main cores (by Lemma \ref{lemma1}) and therefore, with at least
two vertices of $L$ (in each). Since we assume that $y$ is not $a$
or on a g-leg of $a$, it is in one of these subtrees. Let $a'$
be the neighbor of $a$ that is the root of this subtree. If
$d(y,a)\leq d(x,a)$, then for every vertex $u$ in the subtree of
$a'$, $d(x,u)=d(x,a)+1+d(a',u) > d(y,a)+d(a',u)$ and $d(y,u)\leq
d(y,a')+d(a',u)=d(y,a)-1+d(a',u)$, thus every such vertex $u$
separates $x$ and $y$, and there are two separations for this pair
of vertices. Otherwise, $d(y,a) > d(x,a)$, and for every vertex
$w$ in the subtree of a different neighbor of $a$ that has a main
core (and thus at least two vertices of $L$) in its subtree,
$d(x,w)=d(x,a)+d(a,w)$ and $d(y,w)=d(y,a)+d(a,w)$, giving two
separations for $x$ and $y$.
Finally, assume that none of $x$ and $y$ is a regular core or on a g-leg of a regular core (as all cases where one of the two vertices satisfies this condition were considered). Root the tree at $x$. There are at least two subtrees (since a leaf must be a part of a g-leg), and each one must have a regular core (otherwise $x$ is a part of a g-leg). For any vertex $b$ in a subtree that does not contain $y$, $d(y,b)=d(y,x)+d(x,b)>d(x,b)$, so $b$ separates $x$ and $y$; such a subtree has a main core, and hence at least two vertices of $L$ (by Lemma \ref{twotwo}), giving two separations between $x$ and $y$ again.
\end{proof}
\begin{lemma}\label{oneforeach}
Consider a tree with exactly one core $v$ (that has at least three g-legs), and a set $L \subseteq V$ that contains a local set for $v$. In all the following cases $L$ is a landmark set of the tree.
\begin{enumerate}
\item The set $L$ contains at least one vertex of each g-leg of $v$.
\item The core $v$ has at least four g-legs.
\item The core $v$ has at least two g-legs that are modified legs.
\item The set $L$ contains $v$.
\end{enumerate}
\end{lemma}
\begin{proof}
In each one of the cases we consider two vertices $x,y \notin L$.
If $d(x,v)=d(y,v)$, then $x$ and $y$ have the same position in their g-legs and there are two separations between $x$ and $y$ since $L$ is a local set, by Lemma \ref{lemma3}. Otherwise, assume without loss of generality that $d(x,v)<d(y,v)$ (and thus $y\neq v$).
In the first two cases, $v$ has at least two g-legs other than the g-leg of $y$, each having at least one vertex of $L$. In the first case this holds as there are at least two additional g-legs, and every g-leg has a vertex of $L$. In the second case, $L$ contains a local set, so every g-leg, except for at most one, has a vertex of $L$; as there are at least four g-legs in total, there are at least two g-legs other than the g-leg of $y$ and the g-leg with no vertices of $L$ (if such a standard leg exists). Thus, there are at least two separations between $x$ and $y$, as by Claim \ref{clm1} any vertex of $L$, possibly except for vertices on the g-leg of $y$, separates $x$ and $y$.
In the third case, if every g-leg has a vertex of $L$, then the property follows from the first case. Otherwise, there is a standard leg without any vertex of $L$, and therefore every g-leg, except for this leg, has at least two vertices of $L$, by property $3$ of local sets. Since at least two g-legs of $v$ are modified legs, there is a modified leg that is not the g-leg of $y$, and its vertices that belong to $L$ separate $x$ and $y$.
In the fourth case, it is sufficient to consider a tree and a set $L$ that do not satisfy any of the conditions of the first three cases. Thus, $v$ has three g-legs. As $d(x,v)<d(y,v)$, $v$ separates $x$ and $y$. At least one of the g-legs that are not the g-leg of $y$ has at least one vertex in $L$, and this vertex separates $x$ and $y$ as well.
\end{proof}
\begin{corollary}\label{localsetsaregreat}
Consider a tree that has at least one regular core, and a set $L
\subseteq V$. If for every regular core $v$, the subset of $L$
that is restricted to the vertices of the g-legs of $v$ is a local
set, then $L$ is a landmark set.
\end{corollary}
\begin{proof}
If the tree has at least two regular cores, this claim was proved in Lemma \ref{toocor}. Assume that the tree has one regular core $u$. If $u$ has at least four g-legs, or it has three g-legs, out of which at least two g-legs are modified legs, then the claim was proved in Lemma \ref{oneforeach} (the second and third parts). Next, assume that $u$ has three g-legs, out of which at most one is modified. If $u$ has three g-legs, such that all of them are standard, and at least one leg is short, then $u$ is a small core (and not regular). If $u$ has one modified leg, and two standard legs, out of which at least one leg is short, then $u$ is a small core (in this case $u$ is not regular either, the tree has two small cores and no regular cores). Thus, we find that $u$ has two long legs, and the third g-leg is either modified, or it is standard and long.
If every leg of $u$ contains a vertex of $L$, then the claim was proved in Lemma \ref{oneforeach} (the first part). Otherwise, $u$ has a standard leg with no vertices of $L$. By the properties of local sets, there are two g-legs with at least two vertices of $L$ on each g-leg. In this case, for any pair $x,y\notin L$, if $d(x,u)=d(y,u)$, then there are two separations between $x$ and $y$ as $L$ contains a local set for the g-legs of $u$, and if $d(x,u)<d(y,u)$, then the two vertices of $L$ that are on a g-leg that does not contain $y$ separate $x$ and $y$.
\end{proof}
\begin{lemma}\label{threev}
Consider a tree with no regular cores and a single small core $v$, such that $v$ has three standard legs. If $L \subseteq V$ contains a local set for $v$ and $|L|\geq 3$, then $L$ is a landmark set. If $L \subseteq V$ contains a local set for $v$ and $|L| \leq 2$, then $L$ is a landmark set if and only if $L$ consists of the two vertices of two short legs of $v$.
\end{lemma}
\begin{proof}
Assume that $|L|\geq 3$. By Lemma \ref{oneforeach}, if $v \in L$, or if every leg has a vertex of $L$, we are done.
Otherwise, consider a pair $x,y \notin L$.
If $d(x,v)=d(y,v)$, then by Lemma \ref{lemma3} there are two separations between $x$ and $y$ as $L$ contains a local set for the g-legs of $v$.
If $d(x,v)<d(y,v)$, and the two legs that are not the leg of $y$ have at least two vertices of $L$ in total, we are done too (by Claim \ref{clm1}).
As $|L|\geq 3$, the remaining case is that the leg of $y$ has at least two vertices in $L$, while one of the other legs of $v$ has one vertex of $L$, and it separates $x$ and $y$.
At most one vertex of the leg of $y$ can have equal distances to $x$ and $y$, and therefore, any other vertex of the leg of $y$ which is in $L$ separates them as well, and we assumed there is at least one such vertex for this case.
Next, assume that $|L|=2$. By its properties, a local set cannot contain fewer than two vertices; since at most one standard leg can have a type $(s,0)$ solution, the two vertices of $L$ are on two different legs, one standard leg has no vertices of $L$, and $v \notin L$. Every vertex of $L$ must separate every pair $x,y \notin L$, as $|L|=2$.
For every leg $\ell$ with a vertex of $L$, if $\ell^i \in L$ for some $i \geq 1$, then $\ell$ has exactly $i$ vertices, as otherwise $\ell^i$ does not separate $\ell^{i-1}$ and $\ell^{i+1}$ if $i\geq 2$, and it does not separate $v$ and $\ell^2$ if $i=1$. As there is a leg with a type $(s,0)$ solution, the solution of $\ell$ is of type $(s,2)$ or $(s,3)$, and since it has one vertex of $L$, the solution must be of type $(s,3)$. Thus, $i=1$, and $\ell$ is short. On the other hand, if $L$ consists of two vertices of short legs, the remaining vertices are $v$ and the vertices of one leg $\tilde{\ell}$, each having a different distance to the two vertices of $L$ (this distance is $1$ for $v$, and $i+1$ for $\tilde{\ell}^i$).
\end{proof}
Consider a tree with two small cores $v$ and $u$, and no regular cores. The tree consists of a path between $u$ and $v$, and each of them has two standard legs, a short leg and another standard leg which can be short or long. For each small core, the path to the other small core and its legs can be seen as its modified leg.
\begin{lemma}
Consider a tree with no regular cores and two small cores $v$ and $u$. A set $L \subseteq V$ is a landmark set if and only if it contains a local set for the three g-legs of $v$ and a local set for the three g-legs of $u$.
A minimal landmark set, with respect to set inclusion, will contain at most one vertex on the path between $u$ and $v$ (excluding $u$ and $v$), and at most two vertices of each long leg of $u$ and $v$.
\end{lemma}
\begin{proof}
Since the tree can be seen as a core and its three g-legs in two ways, $L$ must contain a local set for the g-legs of $u$ and for the g-legs of $v$.
Assume now that $L$ contains local sets for the two sets of g-legs. If for at least one small core, each of its g-legs has a vertex of $L$, we are done by the first part of Lemma \ref{oneforeach}. Moreover, if $u\in L$ or $v \in L$, we are done by the fourth part of Lemma \ref{oneforeach}.
Since a modified leg has at least one vertex of a local set (by property $2$ of local sets) we find that each of $u$ and $v$ has a standard leg without any vertex of $L$, and every modified leg has at least two vertices of $L$.
If $L$ contains a vertex $z$ on the path between $u$ and $v$ (excluding $u$ and $v$), for the sake of the proof we see $z$ as having two g-legs (one containing $u$ as a small core, and the other one containing $v$ as a small core). As each of $u$ and $v$ has a standard leg with at least one vertex in $L$, each g-leg of $z$ has at least one vertex of $L$. Consider two vertices $x,y \notin L$. As $z \in L$, each of $x$ and $y$ is on a g-leg of $z$. If $x$ and $y$ are on the same g-leg, assume (without loss of generality) that this is the g-leg containing $u$. As the solution of one of the standard legs of $u$ is of type $(s,0)$, its other leg has either a type $(s,2)$ solution or a type $(s,3)$ solution. If $d(x,z)=d(y,z)$, then $d(x,u)=d(y,u)$ (as the paths of $x$ and $y$ to $z$ traverse $u$), $x$ and $y$ must be the vertices in position $1$ on the standard legs of $u$, and in this case the solution type of the standard leg of $u$ with at least one vertex of $L$ cannot be $(s,3)$ (as $x,y \notin L$), so it is $(s,2)$. The two vertices of $L$ of the long standard leg of $u$ are closer to the vertex of position $1$ of that leg than to the vertex of position $1$ of the short leg of $u$, and thus they separate $x$ and $y$. If $d(x,z) \neq d(y,z)$, then one of $x$ and $y$ is closer to $z$ than the other vertex, and it is also closer to any vertex of $L$ on the other g-leg of $z$. This results in at least two separations between $x$ and $y$. If $x$ is on the g-leg of $z$ containing $u$ while $y$ is on the g-leg of $z$ containing $v$, let $u' \in L$ be a vertex on a standard leg of $u$ and let $v' \in L$ be a vertex on a standard leg of $v$ (such vertices exist, as noted above). We consider the two cases again. Assume that $d(x,z) \leq d(y,z)$. We get $d(x,u')\leq d(x,u)+d(u,u')$ while $d(y,u')=d(y,u)+d(u,u')$, and $d(x,u)<d(y,u)$ (as $d(x,z)\leq d(y,z)$ and the path from $y$ to $u$ traverses $z$), showing that $u'$ separates $x$ and $y$. If $d(x,z) < d(y,z)$, $z$ also separates $x$ and $y$, and if $d(x,z) = d(y,z)$, then $d(y,v')\leq d(y,v)+d(v,v')$ while $d(x,v')=d(x,v)+d(v,v')$, and $d(y,v)<d(y,z)+d(z,v)=d(x,z)+d(z,v)=d(x,v)$, showing that $v'$ also separates $x$ and $y$.
We are left with the case that each modified leg has at least two vertices, and these vertices are not on the path between $u$ and $v$ (and they are not $u$ or $v$), and as one standard leg of each small core has no vertices of $L$, each small core has one long leg with two vertices of $L$. In this case consider two vertices $x,y \notin L$. Each core has two g-legs with at least two vertices of $L$ on each. If $d(x,u)=d(y,u)$, then there are two separations between $x$ and $y$ as $L$ contains a local set for $u$ (by Lemma \ref{lemma3}). Otherwise, if $d(x,u)<d(y,u)$, there is a g-leg of $u$ that is not the g-leg of $y$ and has two vertices of $L$, and these vertices separate $x$ and $y$.
Next, consider a minimal landmark set $L$. If $L$ has at least two vertices on the path between $u$ and $v$ excluding the endpoints, then the modified legs of $u$ and $v$ (both containing this path) have at least three vertices of $L$ each (as each of $u$ and $v$ has at least one vertex of $L$ on a standard leg). Removing one vertex of the path between $u$ and $v$ does not change the type of solutions of the modified legs (a solution of type $(m,2)$ remains of type $(m,2)$ and a solution of type $(m,3)$ remains of type $(m,3)$). Next, assume that a long leg of one small core has at least three vertices of $L$. Removing the vertex of $L$ of maximum distance to the small core on this leg does not change the type of solution of this leg (the solution was of type $(s,2)$ and remains of this type), nor the types of solutions of the modified legs. In both cases the resulting set still contains local sets for the g-legs of $u$ and of $v$, and hence it is a landmark set by the first part, contradicting minimality.
\end{proof}
We say that a local set is thrifty if solutions of types $(s,2)$, $(m,2)$ and $(m,3)$ consist of exactly two vertices.
\begin{lemma}
\begin{enumerate}
\item Consider a landmark set $L$, such that the subset of vertices of $L$ on the g-legs of a regular core $v$ is not a thrifty local set. Then, $L$ is not minimal with respect to set inclusion.
\item Consider a landmark set $L'$ for a tree with a single core $u$ that is small, such that the subset of vertices of $L'$ on the standard legs of $u$ is not a thrifty local set. Then, $L'$ is not minimal with respect to set inclusion.
\end{enumerate}
\end{lemma}
\begin{proof}
We first prove the second part. Let $S'=L'\setminus \{u\}$. If $S'$ is not thrifty, we find $|S'|\geq 4$, as there is a leg $\ell'$ with a type $(s,2)$ solution consisting of at least three vertices, and at least one of the other two legs has at least one vertex in $S'$. Removing an arbitrary vertex of $\ell'$ from $S'$ to obtain $S''$ still results in a local set with at least three vertices, and by Lemma \ref{threev}, $S''$ is a landmark set for the tree, showing that $L'$ is not minimal.
For the first part, let $S$ be the subset of $L$ restricted to the g-legs of $v$. By Lemma \ref{mustbelocal}, $S$ is a local set. Since $S$ is not thrifty, there is a g-leg $\ell$ that contains at least three vertices. If $\ell$ has a type $(s,2)$ solution, remove an arbitrary vertex of $\ell$ from $L$ to obtain $\tilde{L}$. If $\ell$ has a type $(m,2)$ or type $(m,3)$ solution, remove a vertex of $\ell$ from $L$ to obtain $\tilde{L}$, where the removed vertex is such that the solution remains of the same type (if the small core has position $i$ on $\ell$, then for a type $(m,2)$ solution, remove a vertex such that at least two vertices of positions $i+2$ or larger on $\ell$ are not removed; for a type $(m,3)$ solution, remove a vertex whose position on $\ell$ is not $i+1$).
In all cases, the modification does not change the local sets of other regular cores, and the subset of $\tilde{L}$ restricted to the g-legs of $v$ remains a local set, thus by Lemma \ref{localsetsaregreat}, $\tilde{L}$ is a landmark set for the tree, showing that $L$ is not minimal.
\end{proof}
The case of a tree with two cores that are small was not considered in the lemma as in this case there can be a minimal landmark set $L$ that is not thrifty.
\section{The algorithm} The algorithm consists of two parts. In the first part, we detect the cores, and partition them into regular cores and small cores. Every regular core will have a list of its neighbors whose subtrees are g-legs. Finding such a list can be done by running DFS on the tree. In the second part (which we describe in more detail below), if the tree has at least one regular core, the algorithm simply finds a thrifty local set for each regular core. If the tree does not have any regular cores, then a landmark set is computed by considering all possible solution types. The resulting running times are linear.
In the case of a tree with at least one regular core $u$, the local sets are computed independently, and moreover, the dependence between the g-legs is only in the sense that for a given g-leg of $u$, only the types of solutions of the other g-legs are relevant, and not the specific solutions of the other g-legs of $u$. Thus, for each g-leg, we search for a solution of minimum cost for a given solution type. The case of a single small core is similar, as the identity of the vertex in a type $(s,1)$ solution or the vertices in a type $(s,2)$ solution does not affect the validity of the local set as a landmark set (this property only depends on the number of vertices). In the case of two small cores, there are several similar properties. If there is a vertex on the path between the two cores (excluding the cores) in a landmark set, its exact identity is not important since replacing it with another vertex of this path keeps the types of solutions of the modified legs as they were. Similarly, for a long leg of one of the small cores, replacing one vertex whose position is at least $2$ with another such vertex does not change the solution type (neither for the long leg nor for the modified leg of the other small core that contains this long leg).
\paragraph{Finding a minimum cost local set for a regular core.}
Consider a regular core $v$. At most three solutions (which are thrifty local sets for the g-legs of $v$) will be considered, and a solution of minimum cost will be given as the output. The solution kinds are based on the definitions of local sets and thrifty local sets.
First, the different kinds of solutions for each g-leg are computed. For every long leg, compute the minimum cost solutions of types $(s,1)$, $(s,2)$, and $(s,3)$ (by finding the minimum cost vertex whose position is not $1$, and the two minimum cost vertices, using time linear in the length of the leg). For every short leg, a unique solution is computed, which has type $(s,3)$.
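As an illustration, this per-leg computation may be sketched in Python (assuming a hypothetical convention in which \texttt{costs[i]} is the cost of the vertex at position $i+1$, position $1$ being the neighbor of the core; a single scan gives linear time, and sorting is used only for brevity):
\begin{verbatim}
def standard_leg_solutions(costs):
    # Minimum cost solutions of types (s,1), (s,2), (s,3) for one
    # standard leg; costs[i] is the cost of the vertex at position i+1.
    sols = {'s3': (costs[0], [1])}           # the vertex at position 1
    if len(costs) >= 2:                      # long leg
        i = min(range(1, len(costs)), key=lambda j: costs[j])
        sols['s1'] = (costs[i], [i + 1])     # one vertex, position >= 2
        a, b = sorted(range(len(costs)), key=lambda j: costs[j])[:2]
        sols['s2'] = (costs[a] + costs[b], sorted([a + 1, b + 1]))
    return sols

print(standard_leg_solutions([5, 1, 4, 2]))
# {'s3': (5, [1]), 's1': (1, [2]), 's2': (3, [2, 4])}
\end{verbatim}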
For every modified leg $\ell'$ compute the minimum cost solutions of types $(m,1)$, $(m,2)$, and $(m,3)$ (if the small core has position $i$ on $\ell'$, find the minimum cost vertex $b$ out of the two vertices at position $i+1$ on $\ell'$, the two minimum cost vertices of positions $i+2$ or greater on $\ell'$, and a vertex of minimum cost excluding $b$ (to be combined with $b$ in an output set), using time linear in the number of vertices of the leg).
The first solution is computed independently for each g-leg. For every standard leg, select a solution of minimum cost, out of the solutions computed for it. In the second solution, there will be a short leg whose solution is of type $(s,0)$ (if there is no such leg of $v$, then no such solution is computed). For each modified leg, select a minimum cost solution out of its already calculated solutions of types $(m,2)$ and $(m,3)$. For each long leg, select a minimum cost solution out of its already calculated solutions of types $(s,2)$ and $(s,3)$. Select a short leg whose $(s,3)$ type solution has maximum cost and change its solution into type $(s,0)$. This completes the description of the second solution.
In the third solution, there will be a long leg of $v$ whose solution is of type $(s,0)$ (if there is no such leg, then no such solution is computed). For each modified leg, select a minimum cost solution out of its already calculated solutions of types $(m,2)$ and $(m,3)$. For each short leg, the solution is of type $(s,3)$. Find a long leg whose $(s,2)$ solution has the maximum cost. Define the solution of this leg to be of type $(s,0)$, and any other long leg will have a type $(s,2)$ solution.
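The combination of the per-leg solutions into the three candidates can be sketched as follows (a hypothetical interface: each leg is given by the dictionary of the minimum costs of its available solution types, as computed above; only the total cost is tracked, and the selected vertices can be recovered in the same manner):
\begin{verbatim}
def best_local_set_cost(std_legs, mod_legs):
    # std_legs: list of ('short' or 'long', {type: cost}) pairs;
    # mod_legs: list of {type: cost} dicts with keys 'm1', 'm2', 'm3'.
    candidates = []
    # First solution: every g-leg takes its cheapest available type.
    candidates.append(sum(min(s.values()) for _, s in std_legs)
                      + sum(min(s.values()) for s in mod_legs))
    mod23 = sum(min(s['m2'], s['m3']) for s in mod_legs)
    shorts = [s for k, s in std_legs if k == 'short']
    longs = [s for k, s in std_legs if k == 'long']
    # Second solution: the short leg with the most expensive (s,3)
    # solution gets type (s,0); long legs take min((s,2), (s,3)).
    if shorts:
        candidates.append(mod23
                          + sum(s['s3'] for s in shorts)
                          - max(s['s3'] for s in shorts)
                          + sum(min(s['s2'], s['s3']) for s in longs))
    # Third solution: the long leg with the most expensive (s,2)
    # solution gets type (s,0); the other long legs take (s,2),
    # and short legs take (s,3).
    if longs:
        candidates.append(mod23
                          + sum(s['s3'] for s in shorts)
                          + sum(s['s2'] for s in longs)
                          - max(s['s2'] for s in longs))
    return min(candidates)
\end{verbatim}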
\paragraph{Finding a minimum cost landmark set for a tree with no regular cores and one small core.} Let $v$ denote the small core of the tree.
In this case we will consider solutions of several kinds, and as not every local set with two vertices is a valid landmark set, we will consider local sets of two vertices separately. As in the case of a regular core, the first step is to compute the minimum cost solutions of the three types $(s,1)$, $(s,2)$, and $(s,3)$ for each leg.
The first solution is computed as for regular cores, that is, a minimum cost solution out of the three types is selected for each leg. Next, for each of the three legs, local sets where this leg has a type $(s,0)$ solution are considered. Consider a leg $\ell$ whose solution will be of type $(s,0)$. If $\ell$ is long, a type $(s,2)$ solution is selected for any long leg except for $\ell$, and a type $(s,3)$ solution is selected for any short leg. This is a landmark set as it either contains at least three vertices, or it consists of the two vertices of two short legs, by Lemma \ref{threev}. If $\ell$ is short, then there are (at most) four additional solutions to be considered for the case where $\ell$ has a type $(s,0)$ solution, where any short leg except for $\ell$ has a type $(s,3)$ solution, and any long leg has either a type $(s,2)$ solution or a type $(s,3)$ solution. The only kind of solution with two vertices of the legs is the one where two short legs have type $(s,3)$ solutions, resulting in two selected vertices. In this case, $v$ is added to the solution if there is a long leg with a type $(s,3)$ solution. We found at most four solutions for each leg, giving a constant number of solutions, and the output is a minimum cost solution out of these solutions.
\paragraph{Finding a minimum cost landmark set for a tree with no regular cores and two small cores.}
Recall that in this case the output may contain a local set that is not thrifty. Let the two cores be denoted by $u$ and $v$. Let $a_{uv}$ denote a vertex of minimum cost on the path between $u$ and $v$ excluding the endpoints. Let $u_1$, $u_2$, $v_1$, $v_2$ denote the four neighbors of $u$ and $v$ on their standard legs (the vertices of position $1$). Let $b_1$ and $b_2$ be two vertices of minimum cost on the long leg of $v$ that are not neighbors of $v$, and let $b'_1$ and $b'_2$ be two vertices of minimum cost on the long leg of $u$ that are not neighbors of $u$ (it is possible that some of the vertices $b_1$, $b_2$, $b'_1$, and $b'_2$ do not exist if at least one of $u$ and $v$ does not have a long leg, or it has a long leg with two vertices). Consider all subsets of $\{a_{uv},u,v,u_1,u_2,v_1,v_2,b_1,b_2,b'_1,b'_2\}$.
For each subset, test whether it is a local set for $u$ and a local set for $v$ (or alternatively, test whether it is a landmark set, which can be done in linear time by computing the distances from each of the eleven vertices to all vertices), and let the sets satisfying this property be called valid solutions. As the set $\{u_1,u_2,v_1,v_2\}$ contains a local set for $u$ and a local set for $v$, at least one valid solution is found. Return a subset of minimum cost out of the valid solutions.
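This enumeration is small enough to write directly. A Python sketch, reusing the brute-force checker from the sketch given with the AP model (the cost and adjacency conventions are again hypothetical):
\begin{verbatim}
from itertools import combinations

def min_cost_two_small_cores(adj, cost, candidates):
    # Try all subsets of the (at most eleven) candidate vertices and
    # return a minimum cost subset that is a landmark set; reuses
    # is_landmark_set sketched earlier.
    best, best_cost = None, float('inf')
    for r in range(len(candidates) + 1):
        for S in combinations(candidates, r):
            c = sum(cost[v] for v in S)
            if c < best_cost and is_landmark_set(adj, set(S)):
                best, best_cost = set(S), c
    return best, best_cost
\end{verbatim}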
\section{Introduction}
In the last decade or so neural networks, originally introduced in the 1940's and 50's \cite{hebb1949organization,rosenblatt1958perceptron}, have become indispensable tools for machine learning tasks ranging from computer vision \cite{krizhevsky2012imagenet} to natural language processing \cite{brown2020language} and reinforcement learning \cite{silver2017mastering}. Their empirical success has raised many new mathematical questions in approximation theory \cite{devore2020neural, yarotsky2017error,yarotsky2018optimal}, probability (see \S \ref{S:RMT} for some references), optimization/learning theory \cite{bartlett2020benign, belkin2019reconciling,jacot2018neural,zhang2021understanding} and so on. The present article concerns a fundamental probabilistic question about arguably the simplest networks, the so-called \textit{fully connected} neural networks, defined as follows:
\begin{definition}[Fully Connected Network]\label{D:FC}
Fix a positive integer $L$ as well as $L+2$ positive integers $n_0,\ldots, n_{L+1}$ and a function $\sigma:\mathbb R\rightarrow \mathbb R$. A fully connected depth $L$ neural network with input dimension $n_0$, output dimension $n_{L+1}$, hidden layer widths $n_1,\ldots, n_L$, and non-linearity $\sigma$ is any function $x_\alpha\in \mathbb R^{n_0}\mapsto z_\alpha^{(L+1)}\in \mathbb R^{n_{L+1}}$ of the following form
\[
z_\alpha^{(\ell)} = \begin{cases}
W^{(1)}x_\alpha+b^{(1)},&\quad \ell=1\\
W^{(\ell)}\sigma(z_\alpha^{(\ell-1)})+b^{(\ell)},&\quad \ell=2,\ldots, L+1
\end{cases},
\]
where $W^{(\ell)}\in \mathbb R^{n_{\ell}\times n_{\ell-1}}$ are matrices, $b^{(\ell)}\in \mathbb R^{n_\ell}$ are vectors, and $\sigma$ applied to a vector is shorthand for $\sigma$ applied to each component.
\end{definition}
The parameters $L,n_0,\ldots,n_{L+1}$ are called the \textit{network architecture}, and $z_\alpha^{(\ell)}\in \mathbb R^{n_\ell}$ is called the \textit{vector of pre-activations at layer $\ell$} corresponding to input $x_\alpha.$ A fully connected network with a fixed architecture and given non-linearity $\sigma$ is therefore a finite (but typically high) dimensional family of functions, parameterized by the network weights (entries of the weight matrices $W^{(\ell)}$) and biases (components of bias vectors $b^{(\ell)}$).
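For concreteness, the recursion in Definition \ref{D:FC} amounts to a few lines of code. The following NumPy sketch (with an illustrative interface, not part of the mathematical development) evaluates $z_\alpha^{(L+1)}$ on an input $x_\alpha$:
\begin{verbatim}
import numpy as np

def forward(x, weights, biases, sigma=np.tanh):
    # Pre-activations z^{(L+1)}: weights[l] has shape (n_{l+1}, n_l)
    # and biases[l] has shape (n_{l+1},), for l = 0, ..., L.
    z = weights[0] @ x + biases[0]           # layer 1: affine in x
    for W, b in zip(weights[1:], biases[1:]):
        z = W @ sigma(z) + b                 # layers 2, ..., L+1
    return z
\end{verbatim}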
This article considers the mapping $x_\alpha\mapsto z_\alpha^{(L+1)}$ when the network's weights and biases are chosen independently at random and the hidden layer widths $n_1,\ldots, n_L$ are sent to infinity while the input dimension $n_0,$ output dimension $n_{L+1}$, and network depth $L$ are fixed. In this \textit{infinite width limit}, akin to the large matrix limit in random matrix theory (see \S \ref{S:why}), neural networks with random weights and biases converge to Gaussian processes (see \S \ref{S:disc} for a review of prior work). Unlike prior work, our main result, Theorem \ref{T:NNGP}, shows that this holds for general non-linearities $\sigma$ and distributions of network weights (cf. \S \ref{S:main-result}).
Moreover, in addition to establishing convergence of wide neural networks to a Gaussian process under weak hypotheses, the present article gives a mathematical take, aimed at probabilists, on some of the ideas developed in the recent monograph \cite{roberts2021principles}. This book, written in the language and style of theoretical physics by Roberts and Yaida, is based on research done jointly with the author. It represents a far-reaching development of the breakthrough work of Yaida \cite{yaida2020non}, which was the first to systematically explain how to compute \textit{finite width corrections} to the infinite width Gaussian process limit of random neural networks for arbitrary depth, width, and non-linearity. Previously, such finite width (and large depth) corrections were only possible for some special observables in linear and ReLU networks \cite{ hanin2018neural,hanin2019finite,hanin2019products,hanin2021non, noci2021precise, zavatone2021exact}. The present article deals only with the asymptotic analysis of random neural networks as the width tends to infinity, leaving to future work a probabilistic elaboration of some aspects of the approach to finite width corrections from \cite{roberts2021principles}.
\subsection{Roadmap} The rest of this article is organized as follows. First, in \S \ref{S:why} we briefly motivate the study of neural networks with random weights. Then, in \S \ref{S:main-result} we formulate our main result, Theorem \ref{T:NNGP}. Before giving its proof in \S \ref{S:proof}, we first indicate in \S \ref{S:disc} the general idea of the proof and its relation to prior work.
\subsection{Why Random Neural Networks?}\label{S:why}
\subsubsection{Practical Motivations} It may seem at first glance that studying neural networks with random weights and biases is of no practical interest. After all, a neural network is only useful after it has been ``trained,'' i.e. one has found a setting of its parameters so that the resulting network function (at least approximately) interpolates a given training dataset of input-output pairs $(x,f(x))$ for an otherwise unknown function $f:\mathbb R^{n_0}\rightarrow \mathbb R^{n_{L+1}}$.
However, the vast majority of neural network training algorithms used in practice are variants of gradient descent starting from \textit{a random initialization} of the weight matrices $W^{(\ell)}$ and bias vectors $b^{(\ell)}$. Studying networks with random weights and biases therefore provides an understanding of the initial conditions for neural network optimization.
Beyond illuminating the properties of networks at the start of training, the analysis of random neural networks can reveal a great deal about networks after training as well. Indeed, on a heuristic level, just as the behavior of the level spacings of the eigenvalues of large random matrices is a surprisingly good match for emission spectra of heavy atoms \cite{wigner1958distribution}, it is not unreasonable to believe that certain coarse properties of the incredibly complex networks used in practice will be similar to those of networks with random weights and biases. More rigorously, neural networks used in practice often have many more tunable parameters (weights and biases) than the number of datapoints from the training dataset. Thus, at least in certain regimes, neural network training provably proceeds by an approximate linearization around initialization, since no one parameter needs to move much to fit the data. This so-called NTK analysis \cite{du2018gradient, fan2020spectra,huang2020dynamics,jacot2018neural,liu2020linearity} shows, with several important caveats related to network size and initialization scheme, that in some cases the statistical properties of neural networks at the start of training are the key determinants of their behavior throughout training.
\subsubsection{Motivation from Random Matrix Theory}\label{S:RMT} In addition to being of practical importance, random neural networks are also fascinating mathematical objects, giving rise to new problems in approximation theory \cite{daubechies2021nonlinear,devore2020neural,hanin2019universal,yarotsky2017error, yarotsky2018optimal}, random geometry \cite{hanin2019complexity,hanin2019deep}, and random matrix theory (RMT). Perhaps the most direct, though by no means only, connection to RMT questions is to set the network biases $b^{(\ell)}$ to zero and consider the very special case when $\sigma(t)=t$ is the identity (in the machine learning literature these are called deep linear networks). The network function
\begin{equation}\label{E:deep-linear}
z_\alpha^{(L+1)} = W^{(L+1)}\cdots W^{(1)}x_\alpha
\end{equation}
is then a linear statistic for a product of $L+1$ independent random matrices. Such matrix models have been extensively studied, primarily in two regimes. The first is the multiplicative ergodic theorem regime \cite{crisanti2012products,furstenberg1963noncommuting,furstenberg1960products,ruelle1979ergodic}, in which all the layer widths $n_0,\ldots, n_{L+1}$ are typically set to a fixed value $n$ and the network depth $L$ tends to infinity. The second regime, where $L$ is fixed and the layer widths $n_\ell$ (i.e. matrix dimensions) tend to infinity, is the purview of free-probability \cite{nica2006lectures,voiculescu1986addition}.
In the presence of a non-linearity $\sigma$, random neural networks provide non-linear generalizations of the usual RMT questions. For instance, the questions taken up in this article are analogs of the joint normality of linear statistics of random matrix products in the free probability regime. Further, random neural networks give additional motivation for studying matrix products appearing in \eqref{E:deep-linear} when the matrix dimensions $n_\ell$ and the number of terms $L$ are simultaneously large. This double scaling limit reveals new phenomena \cite{ahn2019fluctuations,akemann2012universal,akemann2014universal, akemann2019integrable,gorin2018gaussian,hanin2019products,hanin2021non} but is so far poorly understood relative to the ergodic or free regimes.
Finally, beyond studying linear networks, random matrix theory questions naturally appear in neural network theory via non-linear analogs of the Marchenko-Pastur distribution for empirical covariance matrices of $z_\alpha^{(L+1)}$ when $\alpha\in A$ ranges over a random dataset of inputs \cite{adlam2019random,hastie2019surprises, peche2019note, pennington2019nonlinear} as well as through the spectrum of the input-output Jacobian \cite{hanin2019products, pennington2018emergence} and the NTK \cite{adlam2020neural,fan2020spectra}.
\subsection{Main Result}\label{S:main-result}
Our main result shows that under rather general conditions, when the weights $W^{(\ell)}$ and biases $b^{(\ell)}$ of a fully connected network are chosen at random, the resulting field $x_\alpha\mapsto z_\alpha^{(L+1)}$ converges to a centered Gaussian field with iid components when the input dimension $n_0$ and output dimension $n_{L+1}$ are held fixed but the hidden layer widths $n_1,\ldots, n_L$ tend to infinity. To give the precise statement in Theorem \ref{T:NNGP} below, fix a fully connected neural network with depth $L\geq 1$, input dimension $n_0$, output dimension $n_{L+1}$, hidden layer widths $n_1,\ldots, n_{L}\geq 1$, and non-linearity $\sigma:\mathbb R\rightarrow \mathbb R$. We assume that $\sigma$ is absolutely continuous and that its almost-everywhere defined derivative (and hence $\sigma$ itself) is polynomially bounded:
\begin{equation}\label{E:sigma-prop}
\exists k>0\text{ s.t. }\quad \norm{\frac{\sigma'(x)}{1+\abs{x}^k}}_{L^\infty(\mathbb R)}< \infty.
\end{equation}
All non-linearities used in practice satisfy these rather mild criteria. Further, let us write $W_{ij}^{(\ell)}$ for the entries of the weight matrices $W^{(\ell)}$ and $b_i^{(\ell)}$ for the components of the bias vectors $b^{(\ell)}$. For $\ell \geq 2$ the Definition \ref{D:FC} of fully connected networks means that the formula for the components of the pre-activations $z_\alpha^{(\ell)}$ at layer $\ell$ in terms of those for $z_\alpha^{(\ell-1)}$ reads
\begin{equation}\label{E:z-def}
\z{i}{\alpha}{\ell}:= b_i^{(\ell)}+\sum_{j=1}^{n_{\ell-1}}W_{ij}^{(\ell)}\sigma(\z{j}{\alpha}{\ell-1}),\qquad i=1,\ldots, n_{\ell},
\end{equation}
where we've denoted by $\z{i}{\alpha}{\ell}$ the $i^{th}$ component of the $n_\ell$-dimensional vector of pre-activations $z_\alpha^{(\ell)}$ in layer $\ell$ corresponding to a network input $x_\alpha\in \mathbb R^{n_0}$. We make the following assumption on the network weights:
\begin{equation}\label{E:W-def}
W_{ij}^{(\ell)}:=\lr{\frac{C_W}{n_{\ell-1}}}^{1/2}\widehat{W}_{ij}^{(\ell)},\qquad \widehat{W}_{ij}^{(\ell)}\sim \mu\quad \text{iid},
\end{equation}
where $\mu$ is a fixed probability distribution on $\mathbb R$ such that
\begin{equation}\label{E:mu-W-def}
\mu\text{ has mean }0,\text{ variance }1\text{, and finite higher moments.}
\end{equation}
We further assume the network biases are iid Gaussian\footnote{As explained in \S \ref{S:disc} the universality results in this article are simply not true if the biases are drawn iid from a fixed non-Gaussian distribution.} and independent of the weights:
\begin{equation}\label{E:b-def}
b_i^{(\ell)} \sim \mathcal N(0,C_b)\quad \text{iid}.
\end{equation}
In \eqref{E:W-def} and \eqref{E:b-def}, $C_W>0$ and $C_b\geq 0$ are fixed constants. These constants do not play an important role for the analysis in this article but will be crucial for follow-up work. With the network weights and biases chosen at random the vectors $z_\alpha^{(\ell)}$ are also random. Our main result is that, in the infinite width limit, they have independent Gaussian components.
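In code, sampling from \eqref{E:W-def} and \eqref{E:b-def} takes the following form (a sketch continuing the NumPy convention above; \texttt{mu} stands for any sampler producing iid mean-zero, unit-variance entries with finite higher moments):
\begin{verbatim}
import numpy as np

def sample_parameters(widths, C_W, C_b, rng, mu=None):
    # widths = [n_0, ..., n_{L+1}]; weights are sqrt(C_W/n_{l-1})
    # times iid mean-0 variance-1 draws from mu, and biases are
    # iid N(0, C_b), independent of the weights.
    if mu is None:
        mu = rng.standard_normal
    weights, biases = [], []
    for n_in, n_out in zip(widths[:-1], widths[1:]):
        weights.append(np.sqrt(C_W / n_in) * mu((n_out, n_in)))
        biases.append(np.sqrt(C_b) * rng.standard_normal(n_out))
    return weights, biases

# Example of a non-Gaussian weight distribution: Rademacher entries.
rng = np.random.default_rng(0)
rademacher = lambda size: 2.0 * rng.integers(0, 2, size=size) - 1.0
Ws, bs = sample_parameters([3, 500, 500, 2], 1.0, 0.1, rng, rademacher)
\end{verbatim}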
\begin{theorem}\label{T:NNGP}
Fix $n_0,n_{L+1}$ and a compact set $T\subseteq \mathbb R^{n_0}$. As the hidden layer widths $n_1,\ldots, n_L$ tend to infinity, the sequence of stochastic processes
\[x_\alpha\in \mathbb R^{n_0}\quad \mapsto \quad z_{\alpha}^{(L+1)}\in \mathbb R^{n_{L+1}}\]
converges weakly in $C^0(T,\mathbb R^{n_{L+1}})$ to a centered Gaussian process taking values in $\mathbb R^{n_{L+1}}$ with iid coordinates. The coordinate-wise covariance function
\[
K_{\alpha\beta}^{(L+1)}:=\lim_{n_1,\ldots, n_L\rightarrow\infty}\Cov\lr{\z{i}{\alpha}{L+1},\, \z{i}{\beta}{L+1}}
\]
for this limiting process satisfies the layerwise recursion
\begin{equation}\label{E:K-rec}
K_{\alpha\beta}^{(\ell+1)}=C_b+C_W\mathbb E\left[\sigma(z_\alpha)\sigma(z_\beta)\right],\qquad \lr{\begin{array}{c}
z_\alpha \\
z_\beta
\end{array}}\sim \mathcal N\lr{0,\twomat{K_{\alpha\alpha}^{(\ell)}}{K_{\alpha\beta}^{(\ell)}}{K_{\alpha\beta}^{(\ell)}}{K_{\beta\beta}^{(\ell)}}}
\end{equation}
for $\ell \geq 2$, with initial condition
\begin{equation}\label{E:K-initial}
K_{\alpha\beta}^{(2)}=C_b+C_W\E{\sigma\lr{z_{1;\alpha}^{(1)}}\sigma\lr{z_{1;\beta}^{(1)}}},
\end{equation}
where the distribution of $(z_{1;\alpha}^{(1)}, z_{1;\beta}^{(1)})$ is determined via \eqref{E:z-def} by the distribution of weights and biases in the first layer and hence is not universal.
\end{theorem}
\noindent We prove Theorem \ref{T:NNGP} in \S \ref{S:proof}. First, we explain the main idea and review prior work.
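Although the Gaussian expectation in \eqref{E:K-rec} admits closed forms only for special non-linearities (for instance ReLU), it is straightforward to evaluate numerically. A Monte Carlo sketch of one step of the recursion (illustrative only, not part of the proof):
\begin{verbatim}
import numpy as np

def K_next(K, sigma, C_b, C_W, n_samples=10**6, rng=None):
    # One step of the layerwise recursion for a pair of inputs: K is
    # the 2x2 matrix [[K_aa, K_ab], [K_ab, K_bb]] at layer l, and the
    # return value is the corresponding matrix at layer l + 1.
    rng = np.random.default_rng(0) if rng is None else rng
    z = rng.multivariate_normal(np.zeros(2), K, size=n_samples)
    s = sigma(z)                              # sigma entrywise
    return C_b + C_W * (s.T @ s) / n_samples  # 2x2 moment matrix

# Iterate from a first-layer covariance K^{(2)} up the layers.
K = np.array([[1.0, 0.5], [0.5, 1.0]])
for _ in range(3):
    K = K_next(K, np.tanh, C_b=0.1, C_W=1.5)
\end{verbatim}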
\subsection{Theorem \ref{T:NNGP}: Discussion, Main Idea, and Relation to Prior Work}\label{S:disc}
At a high level, the proof of Theorem \ref{T:NNGP} (specifically the convergence of finite-dimensional distributions) proceeds as follows:
\begin{enumerate}
\item Conditional on the mapping $x_\alpha\mapsto z_\alpha^{(L)}$, the components of the neural network output $x_\alpha\mapsto z_\alpha^{(L+1)}$ are independent sums of $n_L$ independent random fields (see \eqref{E:z-def}), and hence, when $n_L$ is large, are each approximately Gaussian by the CLT.
\item The conditional covariance in the CLT from step $1$ is random at finite widths (it depends on $z_\alpha^{(L)}$). However, it has the special form of an average over $j=1,\ldots, n_L$ of the same function applied to each component $z_{j;\alpha}^{(L)}$ of the vector $z_\alpha^{(L)}$ of pre-activations at the last hidden layer. We call such objects \textit{collective observables} (see \S \ref{S:collective-proof} and \eqref{E:sigma-intro}).
\item While $z_{j;\alpha}^{(\ell)}$ are not independent at finite width when $\ell \geq 2$, they are sufficiently weakly correlated that a LLN still applies to any collective observable in the infinite width limit (see \S \ref{S:collective-proof}).
\item The LLN from step 3 allows us to replace the random conditional covariance matrix from steps 1 and 2 by its expectation, asymptotically as $n_1,\ldots, n_L$ tend to infinity.
\end{enumerate}
We turn to giving a few more details on steps 1-4 and reviewing along the way the relation of the present article to prior work. The study of the infinite width limit for random neural networks dates back at least to Neal \cite{neal1996priors}, who considered networks with one hidden layer:
\[
z_{i;\alpha}^{(2)} = b_i^{(2)} +\sum_{j=1}^{n_1}W_{ij}^{(2)}\sigma\lr{z_{j;\alpha}^{(1)}},\qquad z_{j;\alpha}^{(1)}=b_j^{(1)}+\sum_{k=1}^{n_0}W_{jk}^{(1)}x_{k;\alpha},
\]
where $i = 1,\ldots, n_2.$ In the shallow $L=1$ setting of Neal, if in addition $n_2=1$, then, neglecting the bias $b_1^{(2)}$ for the moment, the scalar field $z_{1;\alpha}^{(2)}$ is a sum of iid random fields with finite moments, and hence the asymptotic normality of its finite-dimensional distributions follows immediately from the multidimensional CLT. Modulo tightness, this explains why $z_{1;\alpha}^{(2)}$ ought to converge to a Gaussian field. Even this simple case, however, holds several useful lessons:
\begin{itemize}
\item If the distribution of the bias $b_1^{(2)}$ is fixed, independent of $n_1$, and non-Gaussian, then the distribution of $z_{1;\alpha}^{(2)}$ will not be Gaussian, even in the limit when $n_1\rightarrow \infty$.
\item If the first layer biases $b_j^{(1)}$ are drawn iid from a fixed distribution $\mu_b$ and $\sigma$ is non-linear, then higher moments of $\mu_b$ will contribute to the variance of each neuron post-activation $\sigma(z_{j;\alpha}^{(1)})$, causing the covariance of the Gaussian field at infinite width to be non-universal.
\item Unlike in deeper layers, as long as $n_0$ is fixed, the distribution of each neuron pre-activation $z_{j;\alpha}^{(1)}$ in the first layer will not be Gaussian, unless the weights and biases in layer $1$ are themselves Gaussian. This explains why, in the initial condition \eqref{E:K-initial}, the distribution in the first layer is non-Gaussian.
\end{itemize}
In light of the first two points, what should one assume about the bias distribution? There are, it seems, two options. The first is to assume that the variance of the biases tends to zero as $n_1\rightarrow \infty$, putting them on par with the weights. The second, which we adopt in this article, is to declare all biases to be Gaussian.
The first trick in proving Theorem \ref{T:NNGP} for general depth and width appears already when $L=1$ but the output dimension $n_2$ is at least two.\footnote{Neal \cite{neal1996priors} states erroneously on page 38 of his thesis that $z_{i;\alpha}^{(2)}$ and $z_{j;\alpha}^{(2)}$ will be independent because the weights going into them are independent. This is not true at finite width but becomes true in the infinite width limit.} In this case, even for a single network input $x_\alpha$, at finite values of the network width $n_1$ different components of the random $n_2$-dimensional vector $z_\alpha^{(2)}$ are not independent, due to their shared dependence on the vector $z_\alpha^{(1)}$. The key observation, which to the author's knowledge was first presented in \cite{lee2017deep}, is to note that the components of $z_\alpha^{(2)}$ are independent \textit{conditional on the first layer} (i.e. on $z_\alpha^{(1)}$) and are approximately Gaussian when $n_1$ is large by the CLT. The conditional variance, which captures the main dependence on $z_\alpha^{(1)}$, has the following form:
\begin{equation}\label{E:z-intro}
\Sigma_{\alpha\alpha}^{(2)} := \Var{z_{i;\alpha}^{(2)}~\big|~z_\alpha^{(1)}} = C_b + \frac{C_W}{n_1}\sum_{j=1}^{n_1} \sigma\lr{z_{j;\alpha}^{(1)}}^2.
\end{equation}
This is an example of what we'll call a \textit{collective observable}, an average over all neurons in a layer of the same function applied to the pre-activations at each neuron (see \S \ref{S:collective-proof} for the precise definition). In the shallow $L=1$ setting, $\Sigma_{\alpha\alpha}^{(2)}$ is a sum of $n_1$ iid random variables with finite moments. Hence, by the LLN, it converges almost surely to its mean as $n_1\rightarrow \infty$. This causes the components of $z_\alpha^{(2)}$ to become independent in the infinite width limit, since the source of their shared randomness, $\Sigma_{\alpha\alpha}^{(2)}$, can be replaced asymptotically by its expectation.
The proof for general $L$ follows a similar pattern. Exactly as before, for any $0\leq \ell\leq L$, the components of the pre-activations at layer $\ell+1$ are still conditionally independent, given the pre-activations at layer $\ell$. When the width $n_\ell$ is large the conditional distribution of each component over any finite collection of inputs is therefore approximately Gaussian by the CLT. Moreover, the conditional covariance across network inputs has the form:
\begin{equation}\label{E:sigma-intro}
\Sigma_{\alpha\beta}^{(\ell+1)} := \Cov\lr{z_{i;\alpha}^{(\ell+1)},\, z_{i;\beta}^{(\ell+1)}~\big|~z_\alpha^{(\ell)},\, z_\beta^{(\ell)}} = C_b + \frac{C_W}{n_{\ell}}\sum_{j=1}^{n_{\ell}} \sigma\lr{z_{j;\alpha}^{(\ell)}}\sigma\lr{z_{j;\beta}^{(\ell)}}.
\end{equation}
The summands on the right hand side are no longer independent at finite width if $\ell \geq 2$. However, $\Sigma_{\alpha\beta}^{(\ell+1)}$ are still collective observables, and the crucial point is to check that their dependence is sufficiently weak that we may still apply the LLN. Verifying this is the heart of the proof of Theorem \ref{T:NNGP} and is carried out in \S \ref{S:collective-proof}.
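The resulting concentration of $\Sigma_{\alpha\beta}^{(\ell+1)}$ is easy to observe numerically. A small simulation sketch, reusing \texttt{forward} and \texttt{sample\_parameters} from the sketches above, samples $\Sigma_{\alpha\beta}^{(L+1)}$ over independent draws of the network; its empirical fluctuations should shrink as the hidden widths grow:
\begin{verbatim}
import numpy as np

def sample_Sigma(widths, x_a, x_b, sigma, C_W, C_b, n_trials, rng):
    # Draws of the conditional covariance Sigma^{(L+1)}_{ab}, a
    # collective observable over the last hidden layer.
    vals = []
    for _ in range(n_trials):
        Ws, bs = sample_parameters(widths, C_W, C_b, rng)
        z_a = forward(x_a, Ws[:-1], bs[:-1], sigma)  # z^{(L)} at x_a
        z_b = forward(x_b, Ws[:-1], bs[:-1], sigma)  # z^{(L)} at x_b
        vals.append(C_b + C_W * np.mean(sigma(z_a) * sigma(z_b)))
    return np.array(vals)

rng = np.random.default_rng(1)
x_a, x_b = np.ones(3), -np.ones(3)
for n in (10, 100, 1000):
    v = sample_Sigma([3, n, n, 1], x_a, x_b, np.tanh, 1.0, 0.1,
                     200, rng)
    print(n, v.std())   # fluctuations decay roughly like n^{-1/2}
\end{verbatim}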
Let us mention that, in addition to the approach outlined above, other methods for showing that wide neural networks are asymptotically Gaussian processes are possible. In the prior article \cite{matthews2018gaussian}, for instance, the idea is to use that the entries of $z_\alpha^{(\ell)}$ are exchangeable and argue using an exchangeable CLT. This leads to some technical complications which, at least in the way the argument is carried out in \cite{matthews2018gaussian}, result in unnatural restrictions on the class of non-linearities and weight distributions considered there. Let us also mention that in the article \cite{lee2017deep}, the non-independence at finite width of the components of $z_{\alpha}^{(\ell)}$ for large $\ell$ was circumvented by considering only the sequential limit in which $n_\ell\rightarrow \infty$ in order of increasing $\ell$. The effect is that for every $\ell$ the conditional covariance $\Sigma_{\alpha\beta}^{(\ell)}$ has already converged to its mean before $n_{\ell+1}$ is taken large. However, this way of taking the infinite width limit seems to the author somewhat unnatural and is in any case not conducive to studying finite width corrections as in \cite{roberts2021principles,yaida2020non}, which we plan to take up in future work.
We conclude this section by pointing the reader to several other related strands of work. The first are articles such as \cite{daniely2016toward}, which quantify the magnitude of the difference
\[
\frac{1}{n_\ell}\sum_{i=1}^{n_\ell}z_{i;\alpha}^{(\ell)}z_{i;\beta}^{(\ell)} - \lim_{n_1,\ldots, n_{\ell}\rightarrow \infty}\E{\frac{1}{n_\ell}\sum_{i=1}^{n_\ell}z_{i;\alpha}^{(\ell)}z_{i;\beta}^{(\ell)}}
\]
between the empirical overlaps $n_{\ell}^{-1}\inprod{z_\alpha^{(\ell)}}{z_\beta^{(\ell)}}$ of pre-activations and the corresponding infinite width limit uniformly over network inputs $x_\alpha,x_\beta$ in a compact subset of $\mathbb R^{n_0}$. In a similar vein are articles such as \cite{eldan2021non}, which give quantitative estimates at finite width for the distance from $x_\alpha\mapsto z_\alpha^{(\ell)}$ to a nearby Gaussian process.
The second is the series of articles starting with the work of Yang \cite{yang2019scaling,yang2019tensori,yang2020tensorii,yang2020tensoriii}, which develops the study not only of initialization but also certain aspects of inference with infinitely wide networks using what Yang terms tensor programs. As part of that series, the article \cite{yang2019tensori} establishes that in the infinite width limit many different architectures become Gaussian processes. However, the arguments in those articles are significantly more technical than the ones presented here since they are focused on building the foundation for the tensor program framework. At any rate, to the best of the author's knowledge, no prior article addresses universality of the Gaussian process limit with respect to the weight distribution in deep networks (for shallow networks with $L=1$ this was considered by Neal in \cite{neal1996priors}). Finally, that random neural networks converge to Gaussian processes in the infinite width limit under various restrictions but for architectures other than fully connected is taken up in \cite{garriga2018deep,novak2018bayesian,yang2019tensori}.
\section{Proof of Theorem \ref{T:NNGP}}\label{S:proof}
Let us recall the notation. Namely, we fix a network depth $L\geq 1$, an input dimension $n_0\geq 1,$ an output dimension $n_{L+1}\geq 1$, hidden layer widths $n_1,\ldots, n_L\geq 1$ and a non-linearity $\sigma$ satisfying \eqref{E:sigma-prop}. We further assume that the network weights and biases are independent and random as in \eqref{E:W-def} and \eqref{E:b-def}. To prove Theorem \ref{T:NNGP} we must show that the random fields $x_\alpha\mapsto z_\alpha^{(L+1)}$ converge weakly in distribution to a Gaussian process in the limit where $n_1,\dots, n_L$ tend to infinity. We start with the convergence of finite-dimensional distributions. Let us therefore fix a collection
\[
x_A = \set{x_\alpha,\quad \alpha\in A}
\]
of $\abs{A}$ distinct network inputs in $\mathbb R^{n_0}$ and introduce for each $\ell=0,\ldots, L+1$, every $i=1,\ldots, n_\ell$, and all $\alpha\in A$ the vectorized notation
\[
z_{i;A}^{(\ell)} := \lr{z_{i;\alpha}^{(\ell)},\, \alpha\in A}\in \mathbb R^{\abs{A}},\qquad z_{A}^{(\ell)}:= \lr{z_{i;A}^{(\ell)},\, i=1,\ldots, n_\ell}\in \mathbb R^{n_\ell \times \abs{A}}.
\]
The following result states that the distribution of the random variable $z_A^{(L+1)}$ with values in $\mathbb R^{n_{L+1}\times \abs{A}}$ converges to that of the claimed Gaussian field.
\begin{proposition}[Convergence of Finite-Dimensional Distributions]\label{P:fdd}
Fix $L\geq 1$ and $n_0,n_{L+1}.$ The distribution of $z_A^{(L+1)}$ converges weakly as $n_1,\ldots, n_L\rightarrow \infty$ to that of a centered Gaussian in $\mathbb R^{n_{L+1}\times \abs{A}}$ with iid rows for which the covariance
\[
K_{\alpha\beta}^{(L+1)}=\lim_{n_1,\ldots, n_L\rightarrow\infty} \Cov\lr{z_{i;\alpha}^{(L+1)},z_{i;\beta}^{(L+1)}} ,\qquad \alpha,\beta\in A
\]
between the entries in each row satisfies the recursion \eqref{E:K-rec} with initial condition \eqref{E:K-initial}.
\end{proposition}
Once we have proved Proposition \ref{P:fdd} in \S \ref{S:fdd-proof}, it remains to show tightness. For this, we fix a compact subset $T\subseteq \mathbb R^{n_0}$. The tightness of $x_\alpha\mapsto z_\alpha^{(L+1)}$ in $C^0(T,\mathbb R^{n_{L+1}})$ follows immediately from the Arzel\`a-Ascoli Theorem and the following result, which we prove in \S \ref{S:tightness-proof}.
\begin{proposition}
[High Probability Equicontinuity and Equiboundedness of $z_\alpha^{(L+1)}$]\label{P:tightness}
For every $L\geq 1,\,\epsilon>0$ there exists $C=C(\epsilon,\sigma,T,L,C_b,C_W)>0$ so that
\begin{equation}\label{E:aa-hyp}
\sup_{x_\alpha,x_\beta\in T}\frac{ \norm{z_\alpha^{(L+1)}-z_\beta^{(L+1)}}_2}{\norm{x_\alpha-x_\beta}_2}\leq C \qquad \text{and}\qquad \sup_{x_\alpha\in T}\norm{z_\alpha^{(L+1)}}\leq C
\end{equation}
with probability at least $1-\epsilon$.
\end{proposition}
\subsection{Finite-Dimensional Distributions: Proof of Proposition \ref{P:fdd}}\label{S:fdd-proof}
We will prove Proposition \ref{P:fdd} in two steps. First, we prove a special case in which we keep the weights in layer $1$ completely general as in \eqref{E:W-def} but take weights in layers $\ell\geq 2$ to be independent Gaussians:
\[
W_{ij}^{(\ell)}\sim \mathcal N\lr{0,C_Wn_{\ell-1}^{-1}},\qquad \text{iid}.
\]
We continue to assume (as in the statement of Theorem \ref{T:NNGP}) that all biases are Gaussian:
\[
b_i^{(\ell)}\sim \mathcal N(0,C_b),\qquad \text{iid}.
\]
The argument in this case is the technical heart of this paper and is presented in \S \ref{S:Gaussian-proof}--\S\ref{S:collective-proof}. Ultimately, it relies on the analysis of collective observables, which we isolate in \S \ref{S:collective-proof}. A simple Lindeberg swapping argument and induction on the layer, detailed in \S \ref{S:gen-weights}, allow us to extend Proposition \ref{P:fdd} from the Gaussian case to general weights in layers $\ell\geq 2$.
\subsubsection{Proof of Proposition \ref{P:fdd} with Gaussian Weights in Layers $\ell\geq 2$}\label{S:Gaussian-proof} Fix
\[
\Xi = \lr{\xi_i,\,i=1,\ldots, n_{L+1}}\in\mathbb R^{n_{L+1}\times \abs{A}},\qquad \xi_i=\lr{\xi_{i;\alpha},\, \alpha\in A}\in \mathbb R^{\abs{A}}
\]
and consider the characteristic function
\[
\chi_A(\Xi) = \E{\exp\left[-i\inprod{z_A^{(L+1)}}{\Xi}\right]}=\E{\exp\left[-i\sum_{\alpha\in A}\sum_{i=1}^{n_{L+1}}z_{i;\alpha}^{(L+1)}\xi_{i;\alpha}\right]}
\]
of the random variable $z_A^{(L+1)}\in \mathbb R^{n_{L+1}\times \abs{A}}$. By L\'evy's continuity theorem, it is sufficient to show that
\begin{equation}\label{E:levy-goal}
\lim_{n_1,\ldots,n_L\rightarrow \infty}\chi_A(\Xi) = \exp\left[-\frac{1}{2}\sum_{i=1}^{n_{L+1}} \inprod{K_A^{(L+1)}\xi_i}{\xi_i}\right],
\end{equation}
where
\[
K_A^{(L+1)} = \lr{K_{\alpha\beta}^{(L+1)}}_{\alpha,\beta\in A}
\]
is the matrix defined by the recursion \eqref{E:K-rec} with initial condition \eqref{E:K-initial}. Writing
\begin{equation}\label{E:F-def}
\mathcal F_\ell : = \text{filtration defined by }\set{W^{(\ell')}, b^{(\ell')},\, \, \ell'=1,\ldots, \ell},
\end{equation}
we may use the tower property to write
\begin{align}
\label{E:cond-chi}\chi_A(\Xi) &= \E{\E{\exp\left[-i\inprod{z_A^{(L+1)}}{\Xi}\right]~\big|~\mathcal F_{L}}}.
\end{align}
Note that conditional on $\mathcal F_L$, the random vectors $z_{i;A}^{(L+1)}\in \mathbb R^{\abs{A}}$ in layer $L+1$ for each $i=1,\ldots, n_{L+1}$ are iid Gaussians, since we've assumed for now that weights in layers $\ell\geq 2$ are Gaussian. Specifically,
\[
z_{i;A}^{(L+1)} \stackrel{d}{=} \lr{\Sigma_A^{(L+1)}}^{1/2}G_i,\qquad G_i\sim \mathcal N\lr{0, \mathrm{I}_{\abs{A}}} \text{ iid}\qquad 1\leq i \leq n_{L+1},
\]
where for any $\alpha,\beta\in A$ the conditional covariance is
\begin{equation}\label{E:sigma-def}
\lr{\Sigma_A^{(L+1)}}_{\alpha\beta} = \Cov\lr{z_{i;\alpha}^{(L+1)},\, z_{i;\beta}^{(L+1)}~\big|~\mathcal F_{L}}=C_b + \frac{C_W}{n_L}\sum_{j=1}^{n_L}\sigma\lr{z_{j;\alpha}^{(L)}}\sigma\lr{z_{j;\beta}^{(L)}}.
\end{equation}
Using \eqref{E:cond-chi} and the explicit form of the characteristic function of a Gaussian reveals
\begin{align}
\label{E:chi-form}\chi_A(\Xi) &= \E{\exp\left[-\frac{1}{2}\sum_{i=1}^{n_{L+1}} \inprod{\Sigma_A^{(L+1)}\xi_i}{\xi_i} \right] }.
\end{align}
The crucial observation is that each entry of the conditional covariance matrix $\Sigma_A^{(L+1)}$ is an average over $j=1,\ldots, n_L$ of the same fixed function applied to the vector $z_{j;A}^{(L)}$. While $z_{j;A}^{(L)}$ are not independent at finite values of $n_1,\ldots, n_{L-1}$ for $L>1$, they are sufficiently weakly correlated that a weak law of large numbers still holds:
\begin{lemma}\label{L:collective-sigma}
Fix $n_0,n_{L+1}$. There exists a $\abs{A}\times \abs{A}$ PSD matrix
\[
K_A^{(L+1)} = \lr{K_{\alpha\beta}^{(L+1)}}_{\alpha,\beta\in A}
\]
such that for all $\alpha,\beta\in A$
\[
\lim_{n_1,\ldots, n_L\rightarrow \infty}\E{\lr{\Sigma_A^{(L+1)}}_{\alpha\beta}}= K_{\alpha\beta}^{(L+1)}\qquad \text{and}\qquad \lim_{n_1,\ldots, n_L\rightarrow \infty}\Var{\lr{\Sigma_A^{(L+1)}}_{\alpha\beta}}= 0.
\]
\end{lemma}
\begin{proof}
Lemma \ref{L:collective-sigma} is a special case of Lemma \ref{L:collective-properties} (see \S \ref{S:collective-proof}).
\end{proof}
\vspace{.3cm}
Lemma \ref{L:collective-sigma} implies that $\Sigma_A^{(L+1)}$ converges in distribution to $K_A^{(L+1)}$. In view of \eqref{E:chi-form} and the definition of weak convergence this immediately implies \eqref{E:levy-goal}. It therefore remains to check that $K_A^{(L+1)}$ satisfies the desired recursion. For this, note that at any values of $n_1,\ldots, n_L$ we find
\begin{align*}
\Cov\lr{z_{i;\alpha}^{(L+1)},\, z_{i;\beta}^{(L+1)}} &= \E{\lr{\Sigma_A^{(L+1)}}_{\alpha\beta}}
= C_b + C_W\E{\sigma\lr{z_{1;\alpha}^{(L)}}\sigma\lr{z_{1;\beta}^{(L)}}}.
\end{align*}
When $L=1$, we therefore see that
\[
\Cov\lr{z_{i;\alpha}^{(2)}, z_{i;\beta}^{(2)}} = C_b +C_W\E{\sigma(z_{i;\alpha}^{(1)})\sigma(z_{i;\beta}^{(1)})},
\]
where the law of $(z_{i;\alpha}^{(1)},z_{i;\beta}^{(1)})$ is determined by the distribution $\mu_W$ of weights in layer $1$ and does not depend on $n_1$. This confirms the initial condition \eqref{E:K-initial}. Otherwise, if $L>1$, the convergence of finite-dimensional distributions that we've already established yields
\begin{align*}
K_{\alpha\beta}^{(L+1)} =& \lim_{n_1,\ldots, n_L\rightarrow \infty} \Cov\lr{z_{i;\alpha}^{(L+1)},\, z_{i;\beta}^{(L+1)}} =\lim_{n_1,\ldots, n_{L-1}\rightarrow \infty} \lr{C_b + C_W\E{\sigma\lr{z_{1;\alpha}^{(L)}}\sigma\lr{z_{1;\beta}^{(L)}}}}.
\end{align*}
Since $\sigma$ is continuous we may invoke the continuous mapping theorem to conclude that
\begin{align*}
K_{\alpha\beta}^{(L+1)} &=C_b + C_W\mathbb E_{(z_\alpha,z_\beta)\sim G(0,K^{(L)})}\left[\sigma(z_\alpha)\sigma(z_\beta)\right],
\end{align*}
which confirms the recursion \eqref{E:K-rec}. This completes the proof that the finite-dimensional distributions of $z_\alpha^{(L+1)}$ converge to those of the desired Gaussian process, modulo two issues. First, we must prove Lemma \ref{L:collective-sigma}. This is done in \S \ref{S:collective-proof} by proving a more general result, Lemma \ref{L:collective-properties}. Second, we must remove the assumption that the weights in layers $\ell\geq 2$ are Gaussian. This is done in \S \ref{S:gen-weights}. \hfill $\square$
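For readers who want to compute the limiting covariance numerically, the sketch below iterates the recursion \eqref{E:K-rec}, with the Gaussian expectation evaluated by Monte Carlo. For simplicity it assumes Gaussian weights in layer $1$ as well, so that $K^{(1)}_{\alpha\beta}= C_b + C_W\inprod{x_\alpha}{x_\beta}/n_0$ exactly; with a general weight distribution $\mu_W$ in layer $1$, the initial condition \eqref{E:K-initial} would instead be computed from the (possibly non-Gaussian) law of $z^{(1)}$. The choice $\sigma=\tanh$ is illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
C_b, C_W = 0.1, 1.0

def K_initial(x):
    # Covariance of z^{(1)} for Gaussian layer-1 weights:
    # K^{(1)}_{ab} = C_b + C_W <x_a, x_b> / n0.
    return C_b + C_W * (x @ x.T) / x.shape[1]

def K_step(K, n_mc=200_000):
    # One application of the recursion (E:K-rec):
    # K'_{ab} = C_b + C_W E[sigma(z_a) sigma(z_b)],  (z_a, z_b) ~ G(0, K).
    L = np.linalg.cholesky(K + 1e-9 * np.eye(K.shape[0]))
    z = rng.standard_normal((n_mc, K.shape[0])) @ L.T
    s = np.tanh(z)
    return C_b + C_W * (s.T @ s) / n_mc

x = rng.standard_normal((2, 4))   # |A| = 2 network inputs in R^{n0}, n0 = 4
K = K_initial(x)                  # K^{(1)}
for _ in range(3):                # K^{(2)}, K^{(3)}, K^{(4)}: depth L = 3
    K = K_step(K)
print(K)                          # approximate NNGP covariance K^{(L+1)}
\end{verbatim}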
\subsubsection{Collective Observables with Gaussian Weights: Generalizing Lemma \ref{L:collective-sigma}}\label{S:collective-proof}
This section contains the key technical argument in our proof of Proposition \ref{P:fdd}. To state the main result, define a \textit{collective observable at layer $\ell$} to be any random variable of the form
\[
\mathcal O_{n_\ell,f;A}^{(\ell)}:= \frac{1}{n_\ell} \sum_{i=1}^{n_\ell} f(z_{i;A}^{(\ell)}),
\]
where $f:\mathbb R^{\abs{A}}\rightarrow \mathbb R$ is measurable and polynomially bounded:
\[
\exists C>0,\, k\geq 1\text{ s.t. }\forall z\in \mathbb R^{\abs{A}}\qquad \abs{f(z)}\leq C\lr{1+\norm{z}_2^k}.
\]
We continue to assume, as in \S \ref{S:Gaussian-proof}, that the weights (and biases) in layers $\ell\geq 2$ are Gaussian and will remove this assumption in \S \ref{S:gen-weights}. Several key properties of collective observables are summarized in the following Lemma:
\begin{lemma}[Key Properties of Collective Observables]\label{L:collective-properties}
Let $\mathcal O_{n_\ell,f;A}^{(\ell)}$ be a collective observable at layer $\ell$. There exists a deterministic scalar $\overline{O}_{f;A}^{(\ell)}$ such that
\begin{equation}\label{E:collective-mean}
\lim_{n_1,\ldots, n_{\ell-1}\rightarrow \infty}\E{\mathcal O_{n_\ell, f;A}^{(\ell)}}=\overline{O}_{f;A}^{(\ell)}.
\end{equation}
Moreover,
\begin{equation}\label{E:self-avg}
\lim_{n_1,\ldots, n_\ell\rightarrow \infty}\Var{\mathcal O_{n_\ell,f;A}^{(\ell)}}=0.
\end{equation}
Hence, we have the following convergence in probability
\[
\mathcal O_{n_\ell,f;A}^{(\ell)} \quad \stackrel{p}{\longrightarrow}\quad \overline{O}_{f;A}^{(\ell)}\qquad \text{as } n_1,\ldots, n_\ell\rightarrow \infty.
\]
\end{lemma}
\begin{proof}
The proof is by induction on $\ell$, starting with $\ell=1$. In this case, $z_{i;A}^{(1)}$ are independent for different $i$. Moreover, for each $i,\alpha$ we have
\[
z_{i;\alpha}^{(1)}= b_i^{(1)} + \sum_{j=1}^{n_0}W_{ij}^{(1)} x_{j;\alpha} =b_i^{(1)} + \lr{\frac{C_W}{n_0}}^{1/2}\sum_{j=1}^{n_0}\widehat{W}_{ij}^{(1)} x_{j;\alpha}.
\]
Hence, $z_{i;\alpha}^{(1)}$ have finite moments since $b_i^{(1)}$ are iid Gaussian and $\widehat{W}_{ij}^{(1)}$ are mean $0$ with finite higher moments. In particular, since $f$ is polynomially bounded, we find for every $n_1$ that
\[
\E{\mathcal O_{n_1,f;A}^{(1)}} = \E{f(z_{1;A}^{(1)})},
\]
which is finite and independent of $n_1$, confirming \eqref{E:collective-mean}. Further, $\mathcal O_{n_1,f;A}^{(1)}$ is the average of $n_1$ iid random variables with all moments finite. Hence, \eqref{E:self-avg} follows by the weak law of large numbers, completing the proof of the base case.
Let us now assume that we have shown \eqref{E:collective-mean} and \eqref{E:self-avg} for all $\ell=1,\ldots, L$. We begin by checking that \eqref{E:collective-mean} holds at layer $L+1$. We have
\begin{equation}\label{E:mean-red}
\E{\mathcal O_{n_{L+1}, f;A}^{(L+1)}} = \E{f(z_{1;A}^{(L+1)})}.
\end{equation}
Since the weights and biases in layer $L+1$ are Gaussian and independent of $\mathcal F_L$, we find
\begin{equation}\label{E:z-Gaussian}
z_{1;A}^{(L+1)} \stackrel{d}{=} \lr{\Sigma_A^{(L+1)}}^{1/2} G ,
\end{equation}
where $\Sigma_A^{(L+1)}$ is the conditional covariance defined in \eqref{E:sigma-def} and $G$ is an $\abs{A}$-dimensional standard Gaussian. The key point is that $\Sigma_A^{(L+1)}$ is a collective observable at layer $L$. Hence, by the inductive hypothesis, there exists a PSD matrix $\overline{\Sigma}_A^{(L+1)}$ such that $\Sigma_A^{(L+1)}$ converges in probability to $\overline{\Sigma}_A^{(L+1)}$ as $n_1,\ldots, n_L\rightarrow \infty$. To establish \eqref{E:collective-mean} it therefore suffices in view of \eqref{E:mean-red} to check that
\begin{equation}\label{E:mean-goal}
\lim_{n_1,\ldots, n_{L}\rightarrow \infty} \E{f\lr{\lr{\Sigma_A^{(L+1)}}^{1/2} G}} = \E{f\lr{\lr{\overline{\Sigma}_A^{(L+1)}}^{1/2} G}},
\end{equation}
where the right hand side is finite since $f$ is polynomially bounded and all polynomial moments of $G$ are finite. To establish \eqref{E:mean-goal}, let us invoke the Skorohod representation theorem to find a common probability space on which there are versions of $\Sigma_A^{(L+1)}$ -- which by an abuse of notation we will still denote by $\Sigma_A^{(L+1)}$ -- that converge to $\overline{\Sigma}_A^{(L+1)}$ almost surely. Next, note that since $f$ is polynomially bounded we may repeatedly apply $ab\leq \frac{1}{2}(a^2+b^2)$ to find
\begin{equation}\label{E:poly-f}
\abs{f((\Sigma_A^{(L+1)})^{1/2} G)}\leq p\lr{(\Sigma_A^{(L+1)})^{1/2}} + q(G),
\end{equation}
where $p$ is a polynomial in the entries of $(\Sigma_A^{(L+1)})^{1/2}$, $q$ a polynomial in the entries of $G$, and the polynomials $p,q$ don't depend on $n_1,\ldots, n_{L}$. The continuous mapping theorem shows that
\[
\lim_{n_1,\ldots, n_L\rightarrow \infty}\E{p\lr{(\Sigma_A^{(L+1)})^{1/2}}} = \E{p\lr{(\overline{\Sigma}_A^{(L+1)})^{1/2}}}.
\]
Thus, since all moments of a Gaussian are finite, \eqref{E:mean-goal} follows from the generalized dominated convergence theorem. It remains to argue that \eqref{E:self-avg} holds at layer $L+1$. To do this, we write
\begin{align}
\label{E:var-red}\Var{\mathcal O_{n_{L+1},f;A}^{(L+1)}} = \frac{1}{n_{L+1}} \Var{f(z_{1;A}^{(L+1)})} + \lr{1-\frac{1}{n_{L+1}}}\Cov\lr{f(z_{1;A}^{(L+1)}),\, f(z_{2;A}^{(L+1)})}.
\end{align}
Observe that
\[
\Var{f(z_{1;A}^{(L+1)})} \leq \E{f(z_{1;A}^{(L+1)})^2} = \E{\frac{1}{n_{L+1}}\sum_{i=1}^{n_{L+1}}\lr{f(z_{i;A}^{(L+1)})}^2}<\infty,
\]
since we already showed that \eqref{E:collective-mean} holds at layer $L+1$. Next, recall that, conditional on $\mathcal F_L$, neurons in layer $L+1$ are independent. The law of total variance and Cauchy-Schwarz yield
\begin{align}
\notag \abs{\Cov\lr{f(z_{1;A}^{(L+1)}),\, f(z_{2;A}^{(L+1)})}}&=\abs{\Cov\lr{\E{f(z_{1;A}^{(L+1)})~|~\mathcal F_{L}},\, \E{f(z_{2;A}^{(L+1)})~|~\mathcal F_L}}}\\
\label{E:var-reduce}&\leq \Var{\E{f(z_{1;A}^{(L+1)})~|~\mathcal F_{L}}}.
\end{align}
Using \eqref{E:z-Gaussian} and the polynomial estimates \eqref{E:poly-f} on $f$, we conclude that the conditional expectation on the previous line is some polynomially bounded function of the components of $(\Sigma_{A}^{(L+1)})^{1/2}$. Hence, we may apply dominated convergence as above to find
\[
\lim_{n_1,\ldots,n_L\rightarrow \infty} \Var{\E{f(z_{1;A}^{(L+1)})~|~\mathcal F_{L}}} = \Var{\E{f\lr{\overline{\Sigma}_A^{1/2}G}}} = 0,\qquad G\sim \mathcal N(0,\mathrm{I}_{\abs{A}}),
\]
since $\E{f\lr{\overline{\Sigma}_A^{1/2}G}}$ is a deterministic constant. This proves \eqref{E:self-avg} for observables at layer $L+1$ and completes the proof of Lemma \ref{L:collective-properties}.
\end{proof}
\subsubsection{Proof of Proposition \ref{P:fdd} for General Weights}\label{S:gen-weights} In this section, we complete the proof of Proposition \ref{P:fdd} by removing the assumption from \S \ref{S:Gaussian-proof} that weights in layers $\ell\geq 2$ are Gaussian. To do this, let us introduce some notation. Let us write
\[
x_\alpha\mapsto z_\alpha^{(\ell)}
\]
for the pre-activations at layers $\ell=0,\ldots, L+1$ of a random fully connected network in which, as in the general case of Theorem \ref{T:NNGP}, all weights and biases are independent, biases are Gaussian as in \eqref{E:b-def} and weights in all layers are drawn from a general centered distribution with the correct variance and finite higher moments as in \eqref{E:W-def} and \eqref{E:mu-W-def}. Next, let us write
\[
x_\alpha\mapsto \twiddle{z}_\alpha^{(\ell)},
\]
for the vector of pre-activations obtained by replacing, in each layer $\ell=2,\ldots, L+1$, all weights $W_{ij}^{(\ell)}$ by iid centered Gaussians with variance $C_W/n_{\ell-1}$. We already saw that the distribution of $\twiddle{z}_A^{{(L+1)}}$ converges weakly to that of the desired Gaussian in the infinite width limit. Our goal is thus to show that, as $n_1,\ldots, n_L$ tend to infinity, $z_A^{{(L+1)}}$ and $\twiddle{z}_A^{{(L+1)}}$ converge weakly to the same distribution. We will proceed by induction on $L$. When $L=0$ the claim is trivial since, by construction, the weight and bias distributions in layer $1$ are identical in both $z_\alpha^{(1)}$ and $\twiddle{z}_\alpha^{(1)}$ (recall that when we proved Proposition \ref{P:fdd} in \S \ref{S:Gaussian-proof} we had Gaussian weights only starting from layer $2$ and general weights in layer $1$.)
Suppose therefore that we have shown the claim for $\ell=0,\ldots, L$. By the Portmanteau theorem and the density of smooth compactly supported functions in the space of continuous compactly supported functions equipped with the supremum norm, it suffices to show that for any smooth function $g:\mathbb R^{n_{L+1}\times\abs{A}}\rightarrow \mathbb R$ with compact support we have
\begin{equation}\label{E:g-goal}
\lim_{n_1,\ldots, n_L\rightarrow \infty}\lr{ \E{g(z_A^{(L+1)})} - \E{g(\twiddle{z}_A^{(L+1)})} }= 0.
\end{equation}
To check \eqref{E:g-goal}, let us define an intermediate object:
\[
z_\alpha^{(L+1),{\tiny{\bullet}}} = b^{(L+1)}+W^{(L+1),{\tiny{\bullet}}}\sigma\lr{z_\alpha^{(L)}},
\]
where the entries $W_{ij}^{(L+1),{\tiny{\bullet}}}$ of $W^{(L+1),{\tiny{\bullet}}}$ are iid Gaussian with mean $0$ and variance $C_W/n_L$. That is, we take the vector $\sigma(z_\alpha^{(L)})$ of post-activations from layer $L$ obtained by using general weights in layers $1,\ldots, L$ and use Gaussian weights only in layer $L+1$. Our first step in checking \eqref{E:g-goal} is to show that this relation holds when $z_A^{(L+1)}$ is replaced by $z_A^{(L+1),{\tiny{\bullet}}}$.
\begin{lemma}\label{L:Lindeberg}
Let $x_A=\set{x_\alpha,\,\alpha\in A}$ be a finite subset of $\mathbb R^{n_0}$ consisting of $\abs{A}$ distinct elements. Fix in addition $n_{L+1}\geq 1$ and a smooth compactly supported function $g:\mathbb R^{n_{L+1}\times\abs{A}}\rightarrow \mathbb R$. There exists $C>0$ and a collective observable $\mathcal O_{n_L,f;A}^{(L)}$ at layer $L$ so that
\[
\abs{\E{g\lr{z_A^{(L+1)}}}-\E{g\lr{z_A^{(L+1),{\tiny{\bullet}}}}}}\leq C n_{L+1}^3 n_{L}^{-1/2} \E{\mathcal O_{n_L,f;A}^{(L)}}.
\]
\end{lemma}
\begin{proof}
This is a standard Lindeberg swapping argument. Namely, for each $\alpha\in A$ and $k=0,\ldots, n_L$ define
\[
z_\alpha^{(L+1),k}:=b^{(L+1)}+W^{(L+1),k}\sigma\lr{z_\alpha^{(L)}},
\]
where the first $k$ entries of each row of $W^{(L+1),k}$ are iid Gaussian with mean $0$ and variance $C_W/n_L$, while the remaining entries are $(C_W/n_L)^{1/2}$ times iid draws $\widehat{W}_{ij}^{(L+1)}$ from the general distribution $\mu$ of network weights, as in \eqref{E:W-def} and \eqref{E:mu-W-def}. With this notation, we have
\[
z_\alpha^{(L+1)}=z_\alpha^{(L+1),0},\qquad z_\alpha^{(L+1),{\tiny{\bullet}}} = z_\alpha^{(L+1),n_L}.
\]
Thus,
\[
\E{g\lr{z_A^{(L+1)}}}-\E{g\lr{z_A^{(L+1),{\tiny{\bullet}}}}} = \sum_{k=1}^{n_L} \E{g\lr{z_A^{(L+1),k-1}}}-\E{g\lr{z_A^{(L+1),k}}}.
\]
For any $Z\in \mathbb R^{n_{L+1}\times \abs{A}}$ and
\[
\delta Z =\lr{\delta z_{i;\alpha},\, i = 1,\ldots, n_{L+1},\, \alpha\in A}\in \mathbb R^{n_{L+1}\times \abs{A}}
\]
consider the third order Taylor expansion of $g$ around $Z$:
\begin{align*}
g(Z+\delta Z) &= g(Z) + \sum_{\substack{\alpha\in A\\ i=1,\ldots, n_{L+1}}} g_{i;\alpha} \delta z_{i;\alpha} + \sum_{\substack{\alpha_1,\alpha_2\in A\\ i_1,i_2=1,\ldots, n_{L+1}}} g_{i_1,i_2;\alpha_1,\alpha_2} \delta z_{i_1;\alpha_1}\delta z_{i_2;\alpha_2}\\
&+ O\lr{ \sum_{\substack{\alpha_1,\alpha_2,\alpha_3\in A\\ i_1,i_2,i_3=1,\ldots, n_{L+1}}} \abs{\delta z_{i_1;\alpha_1}\delta z_{i_2;\alpha_2}\delta z_{i_3;\alpha_3}}}.
\end{align*}
Let us write
\[
z_{i;\alpha}^{(L+1),k-1} = z_{i;\alpha}^{(L+1),k} + n_L^{-1/2} Y_{i,k;\alpha},\qquad Y_{i,k;\alpha} = C_W^{1/2} \lr{\widehat{W}_{ik}^{(L+1)}-\twiddle{W}_{ik}^{(L+1)}}\sigma(z_{k;\alpha}^{(L)}),
\]
where $\twiddle{W}_{ik}^{(L+1)}\sim \mathcal N(0,1)$ are iid. Then, Taylor expanding $g$ to third order around $z_A^{(L+1),k}$ and using that the first two moments of $(C_W n_L^{-1})^{1/2}\widehat{W}_{ij}^{(L+1)}$ match those of $\mathcal N(0,C_Wn_L^{-1})$, we find that
\[
\E{g\lr{z_A^{(L+1),k-1}}}-\E{g\lr{z_A^{(L+1),k}}} = O\lr{n_L^{-3/2} n_{L+1}^3 \E{p\lr{\abs{\sigma(z_{k;\alpha}^{(L)})},\,\alpha\in A}} },
\]
where $p$ is a degree $3$ polynomial in the $\abs{A}$-dimensional vector of absolute values $|\sigma(z_{k;\alpha}^{(L)})|,\,\alpha\in A.$ Summing this over $k$ completes the proof.
\end{proof}
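The mechanism behind Lemma \ref{L:Lindeberg} can also be seen in a toy numerical experiment: freeze the post-activations, and compare $\E{g(z)}$ for a single output neuron when the weights are drawn from a centered, variance-one but skewed distribution versus from a Gaussian. The gap shrinks as the width grows. The sketch below is an illustration under our own choices of $g$ and weight law, not the proof itself; Monte Carlo noise limits the resolution at larger widths.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
g = lambda x: np.exp(-x ** 2)   # smooth bounded stand-in for the test function

def gap(n, trials=100_000):
    s = np.tanh(rng.standard_normal(n))              # frozen post-activations
    W_gen = rng.exponential(1.0, (trials, n)) - 1.0  # centered, variance 1, skewed
    W_gau = rng.standard_normal((trials, n))         # Gaussian comparison weights
    z_gen = W_gen @ s / np.sqrt(n)
    z_gau = W_gau @ s / np.sqrt(n)
    return abs(g(z_gen).mean() - g(z_gau).mean())

for n in (4, 16, 64):
    print(f"n_L={n:3d}  |E g - E g~| = {gap(n):.4f}")  # decays with n_L
\end{verbatim}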
\noindent To make use of Lemma \ref{L:Lindeberg} let us consider any collective observable $\mathcal O_{n_L,f;A}^{(L)}$ at layer $L$. Recall that by \eqref{E:mean-red} and \eqref{E:var-red} both the mean and variance of $\mathcal O_{n_L,f;A}^{(L)}$ depend only on the distributions of finitely many components of the vector $z_A^{(L)}$. By the inductive hypothesis we therefore find
\begin{equation}\label{E:mean-equiv}
\lim_{n_1,\ldots, n_L\rightarrow \infty} \E{\mathcal O_{n_L,f;A}^{(L)}} = \lim_{n_1,\ldots, n_L\rightarrow \infty} \E{\twiddle{\mathcal O}_{n_L,f;A}^{(L)}},
\end{equation}
where the right hand side means that we consider the same collective observable but for $\twiddle{z}_A^{(L)}$ instead of $z_A^{(L)}$, which exists by Lemma \ref{L:collective-properties}. Similarly, again using Lemma \ref{L:collective-properties}, we have
\begin{equation}\label{E:var-equiv}
\lim_{n_1,\ldots, n_L\rightarrow \infty} \Var{\mathcal O_{n_L,f;A}^{(L)}} = 0.
\end{equation}
Therefore, we conclude that
\begin{equation}\label{E:dist-conv}
\mathcal O_{n_L,f;A}^{(L)}- \twiddle{\mathcal O}_{n_L,f;A}^{(L)} \quad \stackrel{d}{\longrightarrow}\quad 0,\qquad \text{as }n_1,\ldots, n_L\rightarrow \infty.
\end{equation}
Note that by \eqref{E:mean-equiv} the mean $\E{\mathcal O_{n_L,f;A}^{(L)}}$ of any collective observable is bounded independently of $n_1,\ldots, n_L$ since we saw in Lemma \ref{L:collective-properties} that the limit exists and is bounded when using Gaussian weights. Since $n_{L+1}$ is fixed and finite, the error term $n_L^{-1/2}n_{L+1}^3 \E{\mathcal O_{n_L,f;A}^{(L)}}$ in Lemma \ref{L:Lindeberg} therefore tends to zero as $n_1,\ldots, n_L\rightarrow \infty$, and \eqref{E:g-goal} is reduced to showing that
\begin{equation}\label{E:g-goal-2}
\lim_{n_1,\ldots, n_L\rightarrow \infty}\lr{ \E{g(z_A^{(L+1),{\tiny{\bullet}}})} - \E{g(\twiddle{z}_A^{(L+1)})} }= 0.
\end{equation}
This follows from \eqref{E:var-equiv} and the inductive hypothesis. Indeed, by construction, conditional on the filtration $\mathcal F_L$ defined by weights and biases in layers up to $L$ (see \eqref{E:F-def}), the $\abs{A}$-dimensional vectors $z_{i;A}^{(L+1),{\tiny{\bullet}}}$ are iid Gaussians:
\[
z_{i;A}^{(L+1),{\tiny{\bullet}}} \stackrel{d}{=} \lr{\Sigma_A^{(L+1)}}^{1/2}G_i,\qquad G_i\sim \mathcal N(0, \mathrm{I}_{\abs{A}})\quad \text{iid},
\]
where $\Sigma_A^{(L+1)}$ is the conditional covariance matrix from \eqref{E:sigma-def}. The key point, as in the proof with all Gaussian weights, is that each entry of the matrix $\Sigma_A^{(L+1)}$ is a collective observable at layer $L$. Moreover, since the weights and biases in the final layer are Gaussian for $z_A^{(L+1),{\tiny{\bullet}}}$ the conditional distribution of $g(z_A^{(L+1),{\tiny{\bullet}}})$ given $\mathcal F_L$ is completely determined by $\Sigma_A^{(L+1)}$. In particular, since $g$ is bounded and continuous, we find that
\[
\E{g(z_A^{(L+1),{\tiny{\bullet}}})} - \E{g(\twiddle{z}_A^{(L+1)})} = \E{h\lr{\Sigma_A^{(L+1)}}} -\E{h\lr{\twiddle{\Sigma}_A^{(L+1)}}},
\]
where $h:\mathbb R^{\abs{A}\times \abs{A}}\rightarrow \mathbb R$ is a bounded continuous function and $\twiddle{\Sigma}_A^{(L+1)}$ is the conditional covariance matrix at layer $L+1$ for $\twiddle{z}_A^{(L+1)}$. Combining this with the convergence in distribution from \eqref{E:dist-conv} shows that \eqref{E:g-goal-2} holds and completes the proof of Proposition \ref{P:fdd} for general weight distributions. \hfill $\square$
\subsection{Tightness: Proof of Proposition \ref{P:tightness}}\label{S:tightness-proof}
In this section, we provide a proof of Proposition \ref{P:tightness}. In the course of showing tightness, we will need several elementary Lemmas, which we record in the \S \ref{S:prep-lem}. We then use them in \S \ref{S:tightness-proof-2} to complete the proof of Proposition \ref{P:tightness}.
\subsubsection{Preparatory Lemmas}\label{S:prep-lem}
For the first Lemma, let us agree to write $\mathcal C(A)$ for the cone over a subset $A$ of a Euclidean space and $B_1(\mathbb R^n)$ for the unit ball in $\mathbb R^n$.
\begin{lemma}\label{L:lip-image}
Fix integers $n_0,n_1\geq 1$ and a real number $\lambda \geq 1$. Suppose that $T$ is a compact subset of $\mathbb R^{n_0}$ and $f:\mathbb R^{n_0}\rightarrow \mathbb R^{n_1}$ is $\lambda$-Lipschitz with respect to the $\ell_2$-norm on both $\mathbb R^{n_0}$ and $\mathbb R^{n_1}$. Define the Minkowski sum
\[
\widehat{T}=f(T)+\mathcal C\lr{f(T)-f(T)}\cap B_1(\mathbb R^{n_1}).
\]
There exist a constant $C>0$, a compact subset $T'$ of $\mathbb R^{3n_0+1}$, and a $C\lambda$-Lipschitz map $g:\mathbb R^{3n_0+1}\rightarrow \mathbb R^{n_1}$ (all depending only on $T$ and $\lambda$), so that $\widehat{T} = g(T').$
\end{lemma}
\begin{proof}
By definition,
\begin{equation}\label{E:g-def}
\widehat{T} = g(T\times T\times T\times [0,1]),\qquad g(x,y,z,r) = f(x)+r(f(y)-f(z)).
\end{equation}
In particular, for some constant $C$ depending on $T, \lambda$, we have
\begin{align*}
\norm{g(x,y,z,r)-g(x',y',z',r')}_2&\leq \norm{f(x)-f(x')}_2 + \norm{f(y)-f(y')}_2+\norm{f(z)-f(z')}_2\\
&+ \abs{r-r'}\norm{f(y)-f(z)}_2\\
&\leq \lambda\lr{\norm{x-x'}_2 + \norm{y-y'}_2 + \norm{z-z'}_2 + \mathrm{Diam}(T) \abs{r-r'}}\\
&\leq C \lambda \norm{(x-x',y-y',z-z',r-r')}_2.
\end{align*}
Hence, $\widehat{T}$ is the image under a Lipschitz map with a Lipschitz constant depending only on $T,\lambda$ of a compact set in $\mathbb R^{3n_0+1}$.
\end{proof}
\noindent The second Lemma we need is an elementary inequality.
\begin{lemma}\label{L:split-abc}
Let $a,b,c\geq 0$ be real numbers and $k\geq 1$ be an integer. We have
\[
(a+b+c)^k \leq 2^{2k-1} \lr{1 + a^{2k}}+\frac{1}{4}\left[(2+b)^{4k}+\lr{1+c}^{4k}\right].
\]
\end{lemma}
\begin{proof}
For any $a,b\geq 0$ we have
\begin{align}
\notag (a+b)^k &=\sum_{j=0}^{k}\binom{k}{j}a^jb^{k-j}\leq \sum_{j=0}^{k}\binom{k}{j}\lr{1+a}^kb^{k-j} = (1+a)^k(1+b)^k\\
\label{E:inq-1}&\leq \frac{1}{2}\lr{(1+a)^{2k} + (1+b)^{2k}}
\end{align}
Further, breaking into cases depending on whether $0\leq a\leq 1$ or $1\leq a$ we find that
\begin{align}
\label{E:inq-2}
(a+b)^k \leq 2^{2k-1} \lr{1 + a^{2k}}+\frac{1}{2}(1+b)^{2k}.
\end{align}
Combining \eqref{E:inq-1} with \eqref{E:inq-2} we see, as desired, that for any $a,b,c\geq 0$
\[
(a+b+c)^k \leq 2^{2k-1} \lr{1 + a^{2k}}+\frac{1}{4}\left[(2+b)^{4k}+\lr{1+c}^{4k}\right].
\]
\end{proof}
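Since Lemma \ref{L:split-abc} is used repeatedly below, a quick numerical sanity check on random nonnegative inputs may be reassuring (an illustration only):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
for _ in range(10_000):
    a, b, c = rng.exponential(2.0, size=3)
    k = int(rng.integers(1, 6))
    lhs = (a + b + c) ** k
    rhs = 2 ** (2 * k - 1) * (1 + a ** (2 * k)) \
        + 0.25 * ((2 + b) ** (4 * k) + (1 + c) ** (4 * k))
    assert lhs <= rhs
print("inequality verified on 10,000 random samples")
\end{verbatim}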
\noindent The next Lemma is also an elementary estimate.
\begin{lemma}\label{L:CS-power}
Fix an integer $k\geq 1$, and suppose $X_1,\ldots, X_k$ are non-negative random variables. There exists a positive integer $q=q(k) $ such that
\[
\E{\prod_{i=1}^k X_i} \leq \lr{\prod_{i=1}^k \E{X_i^q}}^{1/q}.
\]
\end{lemma}
\begin{proof}
The proof is by induction on $k$. For the base cases, when $k=1$ we may take $q=1$, and when $k=2$ we may take $q=2$ by Cauchy-Schwarz. Now suppose we have proved the claim for all $k=1,2,\ldots, K$ for some $K\geq 2$. Note that $1\leq \lceil{(K+1)/2}\rceil\leq K$. So we may use Cauchy-Schwarz and the inductive hypothesis to obtain
\begin{align*}
\E{\prod_{i=1}^{K+1} X_i} &= \E{\prod_{i=1}^{\lceil \frac{K+1}{2}\rceil } X_i \prod_{i=\lceil \frac{K+1}{2}\rceil+1}^{K+1} X_i}\\
&\leq \E{\prod_{i=1}^{\lceil \frac{K+1}{2}\rceil } X_i^2}^{1/2} \E{\prod_{i=\lceil \frac{K+1}{2}\rceil+1}^{K+1} X_i^2}^{1/2}\\
&\leq \lr{\prod_{i=1}^{K+1} \E{X_i^{2q}}}^{1/2q},
\end{align*}
where $q=\max\set{q\lr{\lceil \frac{1}{2}(K+1)\rceil}, q\lr{K+1-\lceil \frac{1}{2}(K+1)\rceil}}$.
\end{proof}
\noindent The next Lemma is an elementary result about the moments of marginals of iid random vectors.
\begin{lemma}\label{L:marginals}
Fix an even integer $p\geq 2$, and suppose $\mu$ is a probability measure on $\mathbb R$ with mean $0$ and finite higher moments. Assume also that $w=\lr{w_1,\ldots, w_n}$ is a vector with iid components, each with distribution $\mu$. Then, there exists a constant $C$ depending only on $p$ and the first $p$ moments of $\mu$ such that for all $n\geq 1$
\[
\sup_{\norm{u}=1} \E{\abs{w\cdot u}^{p}}\leq C.
\]
\end{lemma}
\begin{proof}
We will use the following result of \L ata\l a \cite[Thm. 2, Cor. 2, Rmk. 2]{latala1997estimation}. Suppose $X_i$ are independent random variables and $p$ is a positive even integer. Then
\begin{equation}\label{E:latala}
\E{\abs{\sum_{i=1}^n X_i}^p} \simeq \inf\set{t>0~\bigg|~\sum_{i=1}^n \log \E{\abs{1+\frac{X_i}{t}}^p}\leq p},
\end{equation}
where $\simeq$ means bounded above and below up to universal multiplicative constants. Let us fix a unit vector $u=\lr{u_1,\ldots, u_n}\in S^{n-1}$ and apply this to $X_i = u_i w_i$. Since $w_i$ have mean $0$ and $p$ is even we find
\begin{align*}
\sum_{i=1}^n \log \E{\abs{1+\frac{X_i}{t}}^p} &\leq \sum_{i=1}^n \log\lr{1+ \sum_{k=2}^p \binom{p}{k} \frac{\abs{u_i}^k\E{\abs{w_1}^k}}{t^k}}
\end{align*}
Note that for each $k=2,\ldots, p$ we have
\[
\E{\abs{w_1}^k} \leq \E{(1+\abs{w_1})^p}\qquad \text{and}\qquad \abs{u_i}^k \leq u_i^2.
\]
Hence, using that $\log(1+x)\leq x$ we find
\begin{align*}
\sum_{i=1}^n \log \E{\abs{1+\frac{X_i}{t}}^p} &\leq \sum_{i=1}^n \log\lr{1+ u_i^2 \E{(1+\abs{w_1})^p}\sum_{k=2}^p \binom{p}{k} \frac{1}{t^k}}\\
&= \sum_{i=1}^n \log\lr{1+ u_i^2 \E{(1+\abs{w_1})^p}\left[\lr{1+\frac{1}{t}}^p - 1 - \frac{p}{t}\right]}\\
&\leq\E{(1+\abs{w_1})^p}\left[\lr{1+\frac{1}{t}}^p - 1 - \frac{p}{t}\right].
\end{align*}
Note that for $t>2$, there is a universal constant $C>0$ so that
\[
\abs{\lr{1+\frac{1}{t}}^p - 1 - \frac{p}{t}} \leq \frac{Cp^2}{t^2}.
\]
Thus, there exists a constant $C'>0$ so that
\[
t> C'\sqrt{p\E{(1+\abs{w_1})^p}}\quad \Rightarrow \quad \sum_{i=1}^n \log \E{\abs{1+\frac{X_i}{t}}^p} \leq p.
\]
Combining this with \eqref{E:latala} completes the proof.
\end{proof}
\noindent The final Lemma we need is an integrability statement for the supremum of certain non-Gaussian fields over low-dimensional sets.
\begin{lemma}\label{L:moment-chaining}
Fix a positive integer $n_0$, an even integer $k\geq 2$, a compact set $T_0\subseteq \mathbb R^{n_0}$, a constant $\lambda >0$, and a probability measure $\mu$ on $\mathbb R$ with mean $0$, variance $1,$ and finite higher moments. There exists a constant $C=C(T_0,\lambda, n_0, k,\mu)$ with the following property. Fix any integer $n_1\geq 1$ and a $\lambda$-Lipschitz map $f:\mathbb R^{n_0}\rightarrow \mathbb R^{n_1}$. Define $T_1:=f(T_0)$, and let $w=\lr{w_1,\ldots, w_{n_1}}$ be a vector with iid components $w_i$, each drawn from $\mu$. Then, for any fixed $y_0\in T_1$
\begin{equation}\label{E:sup-moments}
\E{\sup_{y\in T_1} (w\cdot (y-y_0))^k}\leq C.
\end{equation}
\end{lemma}
\begin{proof}
The proof is a standard chaining argument. For each $y\in T_1$ and integer $q\geq 0$, write $\Pi_q(y)$ for the closest point to $y$ in a $2^{-q}$-net in $T_1$, and assume without loss of generality that the diameter of $T_1$ is bounded above by $1$ and that $\Pi_0(y)=y_0$ for all $y\in T_1$. We have, using the usual chaining trick, that
\begin{align}
\label{E:chaining-ub}\E{\lr{\sup_{y\in T_1} w\cdot (y-y_0)}^k}&\leq \sum_{q_1,\ldots, q_k=0}^\infty \E{\prod_{i=1}^k \sup_{y_i\in T_1} \abs{w\cdot (\Pi_{q_i}(y_i)-\Pi_{q_i-1}(y_i))}}.
\end{align}
By Lemma \ref{L:CS-power}, there exists $q$ depending only on $k$ so that for any $q_1,\ldots, q_k$ we have
\begin{equation}\label{E:prod-est}
\E{\prod_{i=1}^k \sup_{y_i\in T_1} \abs{w\cdot (\Pi_{q_i}(y_i)-\Pi_{q_i-1}(y_i))}}\leq \prod_{i=1}^k \lr{\E{ \sup_{y\in T_1} \abs{w\cdot (\Pi_{q_i}(y)-\Pi_{q_i-1}(y))}^{q}}}^{1/q}.
\end{equation}
We seek to bound each expectation on the right hand side in \eqref{E:prod-est}. To do this, write
\[
\E{ \sup_{y\in T_1} \abs{w\cdot (\Pi_{q_i}(y)-\Pi_{q_i-1}(y))}^{q}} = \int_0^\infty \mathbb P\lr{\sup_{y\in T_1} \abs{w\cdot (\Pi_{q_i}(y)-\Pi_{q_i-1}(y))}^{q}> t}dt.
\]
Note that the supremum is only over a finite set of cardinality at most
\[
\abs{\Pi_{q_i}}\abs{\Pi_{q_i-1}}\leq 2^{cn_0q_i}
\]
for some $c>0$ depending only on $T_0,\lambda$. This is because, by assumption, $T_1$ is the image of $T_0$ under a $\lambda$-Lipschitz map and Lipschitz maps preserve covering numbers. Thus, by a union bound,
\[
\mathbb P\lr{\sup_{y\in T_1} \abs{w\cdot (\Pi_{q_i}(y)-\Pi_{q_i-1}(y))}^{q}> t}\leq 2^{cn_0q_i} \sup_{y\in T_1}\mathbb P\lr{ \abs{w\cdot (\Pi_{q_i}(y)-\Pi_{q_i-1}(y))}^{q}> t}
\]
But for any $y\in T_1$ and any $s>0$ and $p\geq 1$, Markov's inequality gives
\begin{align*}
\Prob{\abs{w\cdot (\Pi_{q_i}(y)-\Pi_{q_i-1}(y))}^{q} > s} &\leq \sup_{\norm{u}=1} \E{\abs{w\cdot u}^p} \lr{\frac{\norm{\Pi_{q_i}(y)-\Pi_{q_i-1}(y)}^p}{s^{p/q}}}\\
&\leq 2^{-p(q_i-2)} s^{-p/q}\sup_{\norm{u}=1}\E{\abs{w\cdot u}^p},
\end{align*}
since $\norm{\Pi_{q_i}(y)-\Pi_{q_i-1}(y)}\leq 2^{-q_i}+2^{-(q_i-1)}\leq 2^{-q_i+2}$.
Putting this all together we find for any $p\geq 2\max\set{q+2,cn_0}$ that
\[
\E{ \sup_{y\in T_1} \abs{w\cdot (\Pi_{q_i}(y)-\Pi_{q_i-1}(y))}^{q}} \leq 2^{-cn_0q_i} \sup_{\norm{u}=1}\E{\abs{w\cdot u}^p}.
\]
Thus, substituting this into \eqref{E:chaining-ub} yields
\begin{align*}
\E{\lr{\sup_{y\in T_1} w\cdot (y-y_0)}^k}&\leq \sup_{\norm{u}=1}\E{\abs{w\cdot u}^p} \sum_{q_1,\ldots, q_k=0}^\infty 2^{-cn_0\sum_{i=1}^k q_i}\leq 2^k \sup_{\norm{u}=1}\E{\abs{w\cdot u}^p}.
\end{align*}
Appealing to Lemma \ref{L:marginals} completes the proof of Lemma \ref{L:moment-chaining}.
\end{proof}
\subsubsection{Proof of Proposition \ref{P:tightness} Using Lemmas from \S \ref{S:prep-lem}}\label{S:tightness-proof-2}
Let us first establish the equi-Lipschitz condition, which we recall states that for each $\epsilon\in (0,1)$ and each compact set $T\subseteq \mathbb R^{n_0}$ there exist $C>0$ so that with probability at least $1-\epsilon$ we have
\begin{equation}\label{E:equi-lip}
\sup_{x_\alpha,x_\beta\in T}\frac{ \norm{z_\alpha^{(L+1)}-z_\beta^{(L+1)}}_2}{\norm{x_\alpha-x_\beta}_2}\leq C.
\end{equation}
For \eqref{E:equi-lip} to hold,
we need a result about the Lipschitz constant of each layer. To ease the notation define a normalized single layer with random weights $W$ and random biases $b$ via the map $\psi:\mathbb R^{n_1}\rightarrow\mathbb R^{n_2}$:
\begin{equation}\label{E:psi-def}
\psi(x;W,b) = \frac{1}{\sqrt{n_2}}\sigma\lr{Wx+b},
\end{equation}
where $b\sim \mathcal N(0,C_b\mathrm{I}_{n_2})$ and $W=\lr{w_{ij}}\in \mathbb R^{n_2\times n_1}$ with $w_{ij}$ drawn iid from a distribution with mean $0$, variance $1$, and finite higher moments. We choose the variance of $w_{ij}$ to be $1$ instead of $C_W/n_1$ since we will later think of $x$ as the normalized vector $(C_W/n_\ell)^{1/2}\sigma(z_\alpha^{(\ell)})$ of post-activations in a given layer.
\begin{lemma}\label{L:layer-lip}
Fix an integer $n_0\geq 1$, a compact set $T_0\subseteq \mathbb R^{n_0}$, and a constant $\lambda>0$. For every $\epsilon \in (0,1)$ there exists a constant $C=C(\epsilon, n_0, T_0, \sigma, \lambda)$ with the following property. Fix any integers $n_1,n_2\geq 1$, and define $\psi:\mathbb R^{n_1}\rightarrow \mathbb R^{n_2}$ as in \eqref{E:psi-def}. Suppose that $T_1\subseteq \mathbb R^{n_1}$ is the image of $T_0$ under a $\lambda$-Lipschitz map from $\mathbb R^{n_0}$ to $\mathbb R^{n_1}$. Then,
\[
\mathbb P\lr{\sup_{x_\alpha,x_\beta\in T_1}\frac{\norm{\psi(x_\alpha)-\psi(x_\beta)}_2}{\norm{x_\alpha-x_\beta}_2} \leq C}\geq 1-\epsilon.
\]
\end{lemma}
\begin{proof}
Fix $x_\alpha\neq x_\beta\in T_1$ and define
\[
\xi_{\alpha\beta}=\frac{x_\alpha-x_\beta}{\norm{x_\alpha-x_\beta}_2}.
\]
Write $W_i$ for the $i$-th row of $W$ and $b_i$ for the $i$-th component of $b$. Since $ab\leq \frac{1}{2}(a^2+b^2)$ and $\sigma$ is absolutely continuous, we have
\begin{align}
\notag \norm{\psi(x_\alpha)-\psi(x_\beta)}_2^2&\quad= \frac{1}{n_2}\sum_{i=1}^{n_2} \lr{\sigma\lr{W_i\cdot x_\alpha + b_i}-\sigma\lr{W_i\cdot x_\beta + b_i}}^2\\
\notag &\quad=\frac{1}{n_2}\sum_{i=1}^{n_2} \lr{ W_i\cdot \xi_{\alpha\beta} \int_{0}^{\norm{x_\alpha-x_\beta}_2} \sigma'\lr{W_i\cdot \lr{x_\beta +t\xi_{\alpha\beta}} + b_i}dt}^2\\
\notag &\quad \leq \norm{x_\alpha-x_\beta}_2^2 \frac{1}{n_2}\sum_{i=1}^{n_2} \sup_{y\in \widehat{T}}\lr{\sigma'\lr{W_i\cdot y + b_i}}^2 \sup_{\xi\in \twiddle{T}} \lr{W_i\cdot \xi}^2\\
\label{E:diff-est-1}&\quad \leq \norm{x_\alpha-x_\beta}_2^2 \frac{1}{n_2}\sum_{i=1}^{n_2}\left[ \sup_{y\in \widehat{T}}\lr{\sigma'\lr{W_i\cdot y + b_i}}^4+ \sup_{\xi\in \twiddle{T}} \lr{W_i\cdot \xi}^4\right],
\end{align}
where we've set
\[
\twiddle{T} = \mathcal C(T_1-T_1)\cap S^{n_1-1}\qquad \text{and}\qquad
\widehat{T} := T_1+\mathcal C(T_1-T_1)\cap B_1(\mathbb R^{n_1}),
\]
and have denoted by $\mathcal C(A)$ the cone over a set $A$ and by $B_1(\mathbb R^{n_1})$ the unit ball in $\mathbb R^{n_1}$. The estimate \eqref{E:diff-est-1} yields
\begin{align*}
&\Prob{\sup_{x_\alpha,x_\beta\in T}\frac{\norm{\psi(x_\alpha)-\psi(x_\beta)}_2^2}{\norm{x_\alpha-x_\beta}_2^2} > C}\\
&\qquad\leq \Prob{\frac{1}{n_2}\sum_{i=1}^{n_2}\left[ \sup_{y\in \widehat{T}}\lr{\sigma'\lr{W_i\cdot y + b_i}}^4+ \sup_{\xi\in \twiddle{T}} \lr{W_i\cdot \xi}^4\right] > C}
\end{align*}
Since $\sigma'$ is polynomially bounded by assumption \eqref{E:sigma-prop}, we find by Markov's inequality that there exists an even integer $k\geq 2$ so that for any $C>1$
\begin{align}
\Prob{\sup_{x_\alpha,x_\beta\in T}\frac{\norm{\psi(x_\alpha)-\psi(x_\beta)}^2}{\norm{x_\alpha-x_\beta}^2} > C}
\label{E:lip-est-1}&\leq \frac{1+\E{\sup_{y\in \widehat{T}} \abs{W_1\cdot y + b_1}^k + \sup_{\xi\in \twiddle{T}} \abs{W_1\cdot \xi}^4}}{C-1}.
\end{align}
Our goal is now to show that the numerator in \eqref{E:lip-est-1} is bounded above by a constant that depends only on $T_0, n_0, \lambda$. For this, let us fix any $y_0\in \widehat{T}$ and apply Lemma \ref{L:split-abc} as follows:
\begin{align*}
\abs{W_1\cdot y + b_1}^k& = \abs{W_1\cdot (y-y_0) + W_1\cdot y_0 + b_1}^k\\
&\leq 2^{2k-1}\lr{1+\abs{W_1\cdot (y-y_0)}^{2k}}+\frac{1}{4}\left[(2+\abs{W_1\cdot y_0})^{4k}+\lr{1+\abs{b_1}}^{4k}\right].
\end{align*}
Substituting this and the analogous estimate for $\abs{W_1\cdot \xi}^4$ into \eqref{E:lip-est-1}, we see that since all moments of the entries of the weights and biases exist, there exists a constant $C'>0$ depending on $\lambda, T_0,k$ so that
\begin{align}
\notag&\Prob{\sup_{x_\alpha,x_\beta\in T}\frac{\norm{\psi(x_\alpha)-\psi(x_\beta)}^2}{\norm{x_\alpha-x_\beta}^2} > C}\\
\label{E:lip-est-2}&\qquad\leq \frac{C'+\E{\sup_{y\in \widehat{T}} \abs{W_1\cdot (y-y_0)}^{2k}+ \sup_{\xi\in \twiddle{T}} \abs{W_1\cdot (\xi-\xi_0)}^4}}{C-1},
\end{align}
where $\xi_0\in \twiddle{T}$ is any fixed point. Note that by Lemma \ref{L:lip-image}, the sets $\widehat{T},\twiddle{T}$ are both contained in the image of a compact subset $T'\subseteq \mathbb R^{3n_0+1}$ under a Lipschitz map, with Lipschitz constant depending only on $\lambda, T$. Thus, an application of Lemma \ref{L:moment-chaining} shows that there exists a constant $C''$ depending only on $\lambda,T,k$ so that
\[
\E{\sup_{y\in \widehat{T}} \abs{W_1\cdot (y-y_0)}^{2k}}+\E{\sup_{\xi\in \twiddle{T}} \abs{W_1\cdot(\xi-\xi_0)}^4}\leq C''.
\]
Substituting this into \eqref{E:lip-est-2} and taking $C$ sufficiently large completes the proof of Lemma \ref{L:layer-lip}.
\end{proof}
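As an empirical illustration of Lemma \ref{L:layer-lip} (again, not part of the proof), the following sketch measures the Lipschitz ratio of the normalized layer \eqref{E:psi-def} over a low-dimensional compact set embedded in a high-dimensional input space; the ratio stays $O(1)$ as the widths grow. The constants and $\sigma=\tanh$ are our own illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
C_b = 0.1

def lip_ratio(n, n_pairs=2000):
    W = rng.standard_normal((n, n))               # variance-1 entries
    b = rng.standard_normal(n) * np.sqrt(C_b)
    psi = lambda x: np.tanh(W @ x + b) / np.sqrt(n)
    # T1 = unit ball of R^3, isometrically embedded into R^n
    E = np.zeros((n, 3)); E[:3, :3] = np.eye(3)
    best = 0.0
    for _ in range(n_pairs):
        u = rng.standard_normal(3); u *= rng.uniform() / np.linalg.norm(u)
        v = rng.standard_normal(3); v *= rng.uniform() / np.linalg.norm(v)
        x, y = E @ u, E @ v
        d = np.linalg.norm(x - y)
        if d > 1e-8:
            best = max(best, np.linalg.norm(psi(x) - psi(y)) / d)
    return best

for n in (32, 128, 512):
    print(f"n1=n2={n:4d}  empirical Lipschitz ratio = {lip_ratio(n):.3f}")
\end{verbatim}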
Lemma \ref{L:layer-lip} directly yields the equi-Lipschitz estimate \eqref{E:equi-lip}. Indeed, let us fix $\epsilon \in (0,1)$ and a compact set $T\subseteq\mathbb R^{n_0}$. Let us define
\[
h_\alpha^{(\ell)}:=\begin{cases}
\sqrt{\frac{C_W}{n_0}} x_\alpha,&\qquad \ell=0\\
\sqrt{\frac{C_W}{n_\ell}} \sigma(z_\alpha^{(\ell)}),&\quad \ell=1,\ldots, L\\
\frac{1}{\sqrt{n_{L+1}}} z_{\alpha}^{(L+1)},&\qquad \ell = L+1
\end{cases}.
\]
For $\ell=1,\ldots, L+1$ we have
\[
h_\alpha^{(\ell)} =\begin{cases} \sqrt{\frac{C_W}{n_\ell}}\sigma\lr{\widehat{W}^{(\ell)}h_\alpha^{(\ell-1)}+b^{(\ell)}},&\qquad \ell=1,\ldots, L\\
\frac{1}{\sqrt{n_{L+1}}}\lr{\widehat{W}^{(L+1)}h_\alpha^{(L)}+b^{(L+1)}},&\qquad \ell = L+1
\end{cases}
\]
where the rescaled weight matrices $\widehat{W}^{(\ell)}$, defined in \eqref{E:W-def}, have iid mean $0$, variance $1$ entries with finite higher moments. To each of the transformations $h_\alpha^{(\ell)}\mapsto h_\alpha^{(\ell+1)}$ we may now apply Lemma \ref{L:layer-lip}. Specifically, applying Lemma \ref{L:layer-lip} to the first layer shows that there exists $C^{(1)}>0$ so that
\[
\sup_{x_\alpha,x_\beta\in T}\frac{\norm{h_\alpha^{(1)}-h_\beta^{(1)}}}{\norm{x_\alpha-x_\beta}}\leq C^{(1)}
\]
with probability at least $1-\epsilon/(L+1)$. Thus, the image
\[
T^{(1)}:=h^{(1)}(T)
\]
of $T$ under the normalized first layer map is the image under a $C^{(1)}$-Lipschitz map of the compact set $T\subseteq \mathbb R^{n_0}$. This allows us to apply Lemma \ref{L:layer-lip} again, but this time to the second layer, to conclude that there exists $C^{(2)}>0$ so that with probability at least $1-2\epsilon/(L+1)$ the normalized second layer map satisfies
\[
\sup_{x_\alpha,x_\beta\in T}\frac{\norm{h_\alpha^{(2)}-h_\beta^{(2)}}}{\norm{x_\alpha-x_\beta}}\leq\sup_{x_\alpha,x_\beta\in T}\frac{\norm{h_\alpha^{(2)}-h_\beta^{(2)}}}{\norm{h_\alpha^{(1)}-h_\beta^{(1)}}} \sup_{x_\alpha,x_\beta\in T}\frac{\norm{h_\alpha^{(1)}-h_\beta^{(1)}}}{\norm{x_\alpha-x_\beta}} \leq C^{(1)}C^{(2)}.
\]
Proceeding in this way, with probability at least $1-\epsilon$ we find that
\[
\sup_{x_\alpha,x_\beta\in T}\frac{\norm{z_\alpha^{(L+1)}-z_\beta^{(L+1)}}}{\norm{x_\alpha-x_\beta}}=n_{L+1}^{1/2}\sup_{x_\alpha,x_\beta\in T}\frac{\norm{h_\alpha^{(L+1)}-h_\beta^{(L+1)}}}{\norm{x_\alpha-x_\beta}}\leq n_{L+1}^{1/2} C^{(1)}\cdots C^{(L+1)}.
\]
Since $n_{L+1}$ is fixed and finite, this confirms \eqref{E:equi-lip}. It remains to check the uniform boundedness condition in \eqref{E:aa-hyp}. For this, note that for any fixed $x_\beta\in T$, Lemma \ref{L:collective-properties} yields
\begin{align*}
\sup_{n_1,\ldots, n_L\geq 1}\E{\frac{1}{n_{L+1}}\norm{z_{\beta}^{(L+1)}}^2}<\infty.
\end{align*}
Thus, by Markov's inequality, $\norm{z_\beta^{(L+1)}}$ is bounded above with high probability. Combined with the equi-Lipschitz condition \eqref{E:equi-lip}, which we just saw holds with high probability on $T$, we conclude that for each $\epsilon>0$ there exists $C>0$ so that
\[
\mathbb P\lr{\sup_{x_\alpha\in T}\norm{z_\alpha^{(L+1)}}\leq C}\geq 1-\epsilon,
\]
as desired. \hfill $\square$
\section{Introduction}\label{sec:intro}
All animals behave in 3D, and analyzing 3D posture and movement is crucial for a variety of applications, including the study of biomechanics, motor control, and behavior~\cite{marshall2022leaving}.
However, annotations for supervised training of 3D pose estimators are expensive and time-consuming to obtain, especially for studying diverse animal species and varying experimental contexts.
Self-supervised keypoint discovery has demonstrated tremendous potential in discovering 2D keypoints from video~\cite{JakabNeurips18,jakab20self-supervised,sun2022self}, without the need for manual data annotation.
These models have not been well-explored in 3D, which is more challenging compared to 2D due to depth ambiguities, a larger search space, and the need to incorporate geometric constraints.
Here, our goal is to enable 3D keypoint discovery of humans and animals from synchronized multi-view videos, without 2D or 3D supervision.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/BKinD3D_intro.pdf}
\caption{\textbf{Self-supervised 3D keypoint discovery}. Previous work studying self-supervised keypoints either requires 2D supervision for 3D pose estimation or focuses on 2D keypoint discovery. Currently, self-supervised 3D keypoint discovery is not well-explored. We propose methods for discovering 3D keypoints directly from multi-view videos of different organisms, such as humans and rats, without 2D or 3D supervision. The 3D keypoint discovery examples show results from our method. }
\vspace{-0.1in}
\label{fig:intro}
\end{figure}
\textbf{Self-Supervised 3D Keypoint Discovery.}
Previous works for self-supervised 3D keypoints typically start from a pre-trained 2D pose estimator~\cite{usman2022metapose,kocabas2019self}, and thus do not perform \textit{keypoint discovery} (Figure~\ref{fig:intro}).
These models are suitable for studying human poses because 2D human pose estimators are widely available and the pose and body structure of humans is well-defined.
However, for many scientific applications~\cite{pereira2020quantifying,marshall2022leaving,sun2022self}, it is important to track diverse organisms in different experimental contexts. These situations require time-consuming 2D or 3D annotations for training pose estimation models.
The goal of our work is to enable 3D keypoint discovery from multi-view videos directly, without any 2D or 3D supervision, in order to accelerate the analysis of 3D poses from diverse animals in novel settings.
To the best of our knowledge, self-supervised 3D keypoint discovery has not been well-explored for real-world multi-view videos.
\textbf{Behavioral Videos.} We study 3D keypoint discovery in the setting of behavioral videos with stationary cameras and backgrounds.
We chose this for several reasons.
First, this setting is common in many real-world behavior analysis datasets~\cite{segalin2020mouse,eyjolfsdottir2014detecting,burgos2012social,marstaller2019deepbees,pereira2020quantifying,jhuang2010automated,sun2021multi}, where there has been an emerging trend to expand the study of behavior from 2D to 3D~\cite{marshall2022leaving}.
Thus, 3D keypoint discovery would directly benefit many scientific studies in this space using approaches such as biomechanics, motor control, and behavior~\cite{marshall2022leaving}.
Second, studying behavioral videos in 3D enables us to leverage recent work in 2D keypoint discovery for behavioral videos~\cite{sun2022self}.
Finally, this setting enables us to tackle the 3D keypoint discovery challenge in a modular way.
For example, in behavior analysis experiments, many tools are already available for camera calibration~\cite{karashchuk2021anipose}, and we can assume that camera parameters are known.
\textbf{Our Approach.} The key to our approach, which we call \textbf{B}ehavioral \textbf{K}eypo\textbf{in}t
\textbf{D}iscovery in \textbf{3D} (BKinD-3D), is to encode self-supervised learning signals from videos across multiple views into a single 3D geometric bottleneck.
We leverage the spatiotemporal difference reconstruction loss from~\cite{sun2022self} and use multi-view reconstruction to train an encoder-decoder architecture.
Our method does not use any bounding boxes or keypoint annotations as supervision.
Critically, we impose links between our discovered keypoints to discover connectivity across points.
In other words, keypoints on the same parts of the body are connected, so that we are able to enforce joint length constraints in 3D.
To show that our model is applicable across multiple settings, we demonstrate our approach on multi-view videos from different organisms.
\begin{table}
\begin{center}
\scalebox{0.8}{
\begin{tabular}{lcccccc}
\toprule[0.2em]
Method & 3D sup. & 2D sup. & camera params & data type \\
\toprule[0.2em]
Isakov et al.~\cite{iskakov2019learnable} & \multirow{2}{*}{\checkmark} & \multirow{2}{*}{\checkmark} & intrinsics & \multirow{2}{*}{real} \\
DANNCE~\cite{dunn2021geometric} & & & extrinsics & \\
\hline
Rhodin et al.~\cite{rhodin2018learning} & \checkmark & optional & intrinsics & real\\
\hline
Anipose~\cite{karashchuk2021anipose} & \multirow{2}{*}{$\times$} & \multirow{2}{*}{\checkmark} & intrinsics & \multirow{2}{*}{real}\\
DeepFly3D~\cite{gunel_deepfly3d_2019} & & & extrinsics & \\
\hline
EpipolarPose~\cite{kocabas2019self} & \multirow{2}{*}{$\times$} & \multirow{2}{*}{\checkmark} & \multirow{2}{*}{optional} & \multirow{2}{*}{real}\\
CanonPose~\cite{wandt2021canonpose} & & & & \\
\hline
MetaPose~\cite{usman2022metapose} & $\times$ & \checkmark & $\times$ & real\\
\hline
\multirow{2}{*}{Keypoint3D~\cite{chen2021unsupervised}} & \multirow{2}{*}{$\times$} & \multirow{2}{*}{$\times$} & intrinsics & \multirow{2}{*}{simulation}\\
& & & extrinsics & \\
\hline
\multirow{2}{*}{Ours (3D discovery)} & \multirow{2}{*}{$\times$} & \multirow{2}{*}{$\times$} & intrinsics & \multirow{2}{*}{real} \\
& & & extrinsics & \\
\bottomrule[0.1em]
\end{tabular}}
\caption{\textbf{Comparison of our work with representative related work for 3D pose using multi-view training}. Previous works require either 3D or 2D supervision, or simulated environments to train jointly with reinforcement learning. Our method addresses a gap in discovering 3D keypoints from real videos without 2D or 3D supervision.}
\vspace{-0.2in}
\label{tab:related_work}
\end{center}
\end{table}
Our main contributions are:
\begin{itemize}
\item We introduce self-supervised 3D keypoint discovery, which discovers 3D pose from real-world multi-view behavioral videos of different organisms, without any 2D or 3D supervision.
\item We propose a novel method (BKinD-3D) for end-to-end 3D discovery from video using multi-view spatiotemporal difference reconstruction and 3D joint length constraints.
\item We demonstrate quantitatively that our work significantly closes the gap between supervised 3D methods and 3D keypoint discovery across different organisms (humans and rats).
\end{itemize}
We plan to release our code.
\section{Related Work}
\textbf{3D Pose Estimation.} There has been a large body of work studying 3D human pose estimation from images or videos, as reviewed in~\cite{sarafianos20163d,wang2021deep}, with recent works also focusing on 3D animal poses~\cite{dunn2021geometric,marshall2022leaving,gosztolai2021liftpose3d,karashchuk2021anipose,gunel_deepfly3d_2019}.
Most of these methods are fully supervised from visual data~\cite{iskakov2019learnable,sun2018integral,chen2020cross}, with some models performing lifting from 2D poses~\cite{martinez2017simple,chen20173d,pavllo20193d,rayat2018exploiting}.
We focus our discussion on multi-view 3D pose estimation methods, but all of these models require either 3D or 2D supervision during training.
This 2D supervision is typically in the form of pre-trained 2D detectors~\cite{kocabas2019self}, or ground truth 2D poses~\cite{usman2022metapose}.
In comparison, our method uses multi-view videos to discover 3D keypoints without 2D or 3D supervision.
Methods more closely related to our work are those that also leverage multi-view structure to estimate 3D pose (Table~\ref{tab:related_work}).
\cite{iskakov2019learnable} proposed a supervised method that uses learnable triangulation to aggregate 2D information across views to 3D.
Here we study similar approaches for representing 3D information, but using self-supervision instead of supervised 3D.
Other methods in this space propose training approaches such as enforcing consistency of predicted poses across views~\cite{rhodin2018learning}, regressing to 3D poses estimated from the epipolar geometry of multi-view 2D poses~\cite{kocabas2019self}, constraining 3D poses to project to realistic 2D poses~\cite{chen2019unsupervised}, or estimating camera parameters using detected and ground truth 2D poses~\cite{usman2022metapose}.
While we also leverage multi-view information, our goal is different from the work above, in that our approach aims to discover 3D poses without 2D or 3D supervision given camera parameters.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/BKinD3D_method.pdf}
\caption{\textbf{BKinD-3D: 3D keypoint discovery using 3D volume bottleneck}. We start from input multi-view videos with known camera parameters, then unproject feature maps from geometric encoders into 3D volumes for timestamps $t$ and $t+k$. We next aggregate 3D points from volumes into a single edge map at each timestamp, and use edges as input to the decoder alongside appearance features at time $t$. The model is trained using multi-view spatiotemporal difference reconstruction. Best viewed in color.
}
\label{fig:method}
\end{figure*}
\textbf{Self-supervised Keypoint Discovery.}
2D keypoint discovery has been studied from images~\cite{JakabNeurips18,ZhangKptDisc18,he2022autolink} and videos~\cite{jakab20self-supervised,sun2022self}.
Our approach focuses on behavioral videos, similar to~\cite{sun2022self}, but we aim to use multi-view information to discover 3D keypoints, instead of 2D.
Many approaches use an encoder-decoder setup to disentangle appearance and geometry information~\cite{ZhangKptDisc18,JakabNeurips18,Lorenz19,sun2022self}.
Our setup also consists of encoders and decoders, but our encoder maps information across views to aggregate 2D information into a 3D geometry bottleneck.
The discovery model most similar to our approach is Keypoint3D~\cite{chen2021unsupervised}, which discovers 3D keypoints for control from virtual agents, using a combination of image reconstruction and reinforcement learning.
However, this setup is designed for simulated data and does not translate well to real videos, since updating the keypoints through a reinforcement learning policy requires videos generated through the simulated environment.
Keypoint discovery models typically represent discovered parts as 2D Gaussian heatmaps~\cite{JakabNeurips18,sun2022self} or 2D edges~\cite{he2022autolink}.
While we also use an edge-based representation, our edges are in 3D, which enables our training objective to enforce joint length consistency.
\textbf{Behavioral Video Analysis.}
Pose estimation is a common intermediate step in automated behavior quantification; behavioral videos are commonly captured with a stationary camera and background, with moving agents.
To date, supervised 2D pose estimators are most often used for analyzing behavior videos~\cite{kabra2013jaaba,hong2015automated,eyjolfsdottir2016learning,Mathisetal2018,egnor2016computational,segalin2020mouse}.
However, 2D pose estimation is inadequate for many applications: it cannot reliably capture joint angles for kinematics, fails to generalize across views, is sensitive to occlusion, and cannot incorporate body plan constraints such as skeleton length or the range of motion of joints.
Thus, there has recently been an accelerating trend to study behavior in 3D~\cite{karashchuk2021anipose,marshall2022leaving,dunn2021geometric,gosztolai2021liftpose3d}.
These models typically require more expensive 3D training annotations compared to 2D poses.
While 2D self-supervision has been studied for behavioral videos~\cite{sun2022self}, 3D keypoint discovery in real-world behavioral videos has not been well explored.
\section{Method}\label{sec:method}
Our goal is to discover 3D keypoints from multi-view behavioral videos without 2D or 3D supervision (Figure~\ref{fig:method}).
Our approach is inspired by BKinD~\cite{sun2022self}, which uses spatiotemporal difference reconstruction to discover 2D keypoints in behavioral videos.
In these videos, the camera and background are stationary, and the spatiotemporal difference is a strong signal for agent movement.
We develop several approaches for 3D keypoint discovery, but focus on our volumetric model (Figure~\ref{fig:method}) in this section, as this model generally performed the best in our evaluations.
More details on other approaches are in Section~\ref{sec:model_comparisons} and supplemental materials.
In our volumetric model (Figure~\ref{fig:method}), BKinD-3D, we use multi-view spatiotemporal reconstruction to train an encoder-decoder architecture with 2D information aggregated to a 3D volumetric heatmap. Projections from the 3D heatmap in the form of agent skeletons are then used to reconstruct movement in each view.
\subsection{3D Keypoint Discovery}
Given behavioral videos captured from $M$ synchronized camera views, with known camera projection matrix $P^{(i)}$ for each camera $i \in
\{1...M \}$, we aim to discover a set of $J$ 3D keypoints $U_t \in \mathbb{R}^{J \times 3}$ on a single behaving agent, at each timestamp $t$. We assume access to camera projection matrices so that our model discovers 3D keypoints in the global coordinate frame.
During training, our model uses two timestamps in the video $t$ and $t+k$ to compute the spatiotemporal difference in each view as the reconstruction target. In other words, for each camera view $i$, our training starts with a frame $I_t^{(i)}$ and a future frame $I_{t+k}^{(i)}$. During inference, only a single timestamp is required:
once the model is trained, the model only needs $I_t^{(i)}$ for each camera view $i$.
In our model setup, the appearance encoder $\Phi$, geometry decoder $\Psi$, and reconstruction decoder $\psi$ are shared across views and timestamps (in previous work~\cite{sun2022self}, these networks are shared across timestamps, but only a single view is addressed). The appearance encoder $\Phi$ is used to generate appearance features, which are decoded into 2D heatmaps by the geometry decoder $\Psi$. These 2D heatmaps are then aggregated across views to form a 3D volumetric bottleneck (Section~\ref{sec:view_agg}), which is processed by a volume-to-volume network $\rho$. We compute the 3D keypoints using spatial softmax on the 3D volume. Then, we project these keypoints to 2D, compute edges between points, and feed these edges to the reconstruction decoder $\psi$ (Section~\ref{sec:projection_recon}) for training. The reconstruction decoder $\psi$ is only used during training, and not required for inference.
\subsubsection{Feature Encoding}
To start, we first compute appearance features from frame pairs $I_t^{(i)}$ and $I_{t+k}^{(i)}$ using the appearance encoder $\Phi$: $\Phi(I_t^{(i)})$ and $\Phi(I_{t+k}^{(i)})$. These appearance features are then fed into the geometry decoder $\Psi$ to generate 2D heatmaps $\Psi(\Phi(I_t^{(i)})) = H_t^{(i)}$ and $H_{t+k}^{(i)}$.
Each 2D heatmap has $C$ channels, where $H_{t,c}^{(i)}$ represents channel $c$ of $H_t^{(i)}$.
\subsubsection{View Aggregation using Volumetric Model}\label{sec:view_agg}
To aggregate information across views, we unproject our 2D heatmaps to a 3D volumetric bottleneck.
We perform view aggregation separately across timestamps $t$ and $t+k$.
We aggregate 2D heatmaps into a 3D volume similar to~\cite{iskakov2019learnable}, which was previously used for supervised 3D human pose estimation.
One important difference is that in the supervised setting, an $L \times L \times L$ sized volume is drawn around the human pelvis, with $L$ being around twice the size of a person.
As we perform keypoint discovery, we do not have information on the location or size of the agent.
Instead, we initialize our volume with $L$ representing the maximum size of the space/room for the behaving agent.
This process aggregates 2D heatmaps $H_{t,c}^{(i)}$ for cameras $i \in \{1...M\}$ and channels $c \in \{1...C\}$ to 3D keypoints $U_t$, for timestamp $t$.
Our volume is first discretized into voxels $V_{coords} \in \mathbb{R}^{B \times B \times B \times 3}$, where $B$ represents the number of distinct coordinates in each dimension. Each voxel corresponds to a global 3D coordinate.
These 3D coordinates are projected to a 2D plane using the projection matrices in each camera view $i$: $V_{proj}^{(i)} = P^{(i)} V_{coords}$.
A volume $V_{c}^{(i)}$ is then created and filled for each camera view $i$ and each channel $c$ using bilinear sampling~\cite{jaderberg2015spatial} from the corresponding 2D heatmap: $V_{c}^{(i)} = H_{t,c}^{(i)}\{V_{proj}^{(i)}\}$, where $\{\cdot\}$ denotes bilinear sampling.
We then aggregate these $V_{c}^{(i)}$ across views for each channel $c$ using a softmax approach~\cite{iskakov2019learnable}:
$$V_c^{agg} = \sum_{i} \frac{\exp(V_{c}^{(i)})}{\sum_j \exp(V_{c}^{(j)})} \odot V_{c}^{(i)}.$$
$V^{agg}$ is then mapped to 3D heatmaps corresponding to each joint using a volumetric convolutional network~\cite{moon2018v2v} $\rho$: $V^{agg*} = \rho(V^{agg})$. We compute the 3D spatial softmax over the volume, for each channel $j$ of $V_j^{agg*}$, $j \in \{1...J\}$, to obtain the 3D keypoint locations $U_{t}$ for timestamp $t$, as in \cite{iskakov2019learnable}.
In many supervised works, the keypoint locations $U_{t}$ are optimized to match to ground truth 3D poses; however, we aim to discover 3D keypoints, and train our network by using $U_{t}$ to decode spatiotemporal difference across views.
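For concreteness, a minimal PyTorch sketch of this unprojection and softmax aggregation (the names, shapes, and numerical defaults here are our own illustration, not the released implementation):
\begin{verbatim}
import torch
import torch.nn.functional as F

def aggregate_views(heatmaps, proj_mats, volume_coords):
    # heatmaps:      (M, C, H, W) per-view 2D heatmaps H_t^(i)
    # proj_mats:     (M, 3, 4)    camera projection matrices P^(i)
    # volume_coords: (B, B, B, 3) global 3D coordinate of each voxel
    M, C, H, W = heatmaps.shape
    B = volume_coords.shape[0]
    coords = volume_coords.reshape(-1, 3)
    coords_h = torch.cat([coords, torch.ones(len(coords), 1)], dim=1)
    volumes = []
    for i in range(M):
        uvw = coords_h @ proj_mats[i].T            # project voxels to view i
        uv = uvw[:, :2] / uvw[:, 2:].clamp(min=1e-6)
        # normalize pixel coordinates to [-1, 1] for grid_sample
        gx = 2 * uv[:, 0] / (W - 1) - 1
        gy = 2 * uv[:, 1] / (H - 1) - 1
        grid = torch.stack([gx, gy], dim=-1).view(1, 1, -1, 2)
        samp = F.grid_sample(heatmaps[i:i + 1], grid, align_corners=True)
        volumes.append(samp.view(C, B, B, B))      # bilinear fill of V_c^(i)
    V = torch.stack(volumes)                       # (M, C, B, B, B)
    w = torch.softmax(V, dim=0)                    # softmax over views
    return (w * V).sum(dim=0)                      # V^agg, (C, B, B, B)
\end{verbatim}
A 3D spatial softmax over each channel of $\rho(V^{agg})$ then yields the keypoints $U_t$ in global coordinates.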
\subsubsection{Projection and Reconstruction}\label{sec:projection_recon}
In this step, we project the discovered 3D keypoints to a 2D representation in each view using camera parameters. For training, 2D representations in timestamps $t$ and $t+k$ are used as input to the reconstruction decoder $\psi$.
We train the 3D keypoints $U_t$ at each timestamp $t$ using multi-view spatiotemporal difference reconstruction.
The target spatiotemporal difference is computed using the 2D image pair $I_t^{(i)}$ and $I_{t+k}^{(i)}$ at each view $i$.
First, we project the 3D keypoints using camera projection matrices into 2D keypoints $u_t^{(i)} = P^{(i)} U_t$.
We create an edge representation for each view for each timestamp, which enables us to discover connections between points and enforce joint length constraints in 3D.
For each keypoint pair $u_{t,m}^{(i)}$ and $u_{t,n}^{(i)}$, we draw a differentiable edge map as a Gaussian along the line connecting them, similar to~\cite{he2022autolink}:
$$E_{t, (m,n)}^{(i)}(\mathbf{p}) = \exp(-d_{m,n}^{(i)}(\mathbf{p})^2/\sigma^2),$$
where $\sigma$ controls the line thickness and $d_{m,n}^{(i)}(\mathbf{p})$ is the distance between pixel $\mathbf{p}$ and the line connecting $u_{t,m}^{(i)}$ and $u_{t,n}^{(i)}$. We then aggregate the edge heatmaps at each timestamp using a set of learned weights $w_{m,n}$ for each edge, where $w_{m,n}$ is shared across all timestamps and all views. An edge is active and connects two points if $w_{m,n} > 0$; otherwise the points are not connected.
Finally, we aggregate all the edge heatmaps using the max across all edge pairs~\cite{he2022autolink}: $$E_t^{(i)}(\mathbf{p}) = \max_{m,n} w_{m,n} E_{t, (m,n)}^{(i)}(\mathbf{p}).$$
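A minimal PyTorch sketch of this construction (the segment-distance computation and all names are our own illustrative choices):
\begin{verbatim}
import torch

def edge_map(kpts_2d, edge_w, H, W, sigma=1.0):
    # kpts_2d: (J, 2) projected 2D keypoints u_t^(i)
    # edge_w:  (J, J) learned edge weights w_{m,n}
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing="ij")
    p = torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # all pixel locations
    maps = []
    J = kpts_2d.shape[0]
    for m in range(J):
        for n in range(m + 1, J):
            a, b = kpts_2d[m], kpts_2d[n]
            ab = b - a
            # project each pixel onto the segment a-b and take its distance
            t = ((p - a) @ ab / (ab @ ab).clamp(min=1e-6)).clamp(0, 1)
            closest = a + t.unsqueeze(-1) * ab
            d2 = ((p - closest) ** 2).sum(-1)
            maps.append(edge_w[m, n] * torch.exp(-d2 / sigma ** 2))
    # max over all pairs gives the aggregated edge map E_t^(i)
    return torch.stack(maps).max(dim=0).values.reshape(H, W)
\end{verbatim}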
In our framework, for each view $i$, the decoder $\psi$ uses the edge maps $E_t^{(i)}$ and $E_{t+k}^{(i)}$ as well as the appearance feature $\Phi(I_t^{(i)})$ for reconstructing the spatiotemporal difference across each view.
The ground truth spatiotemporal difference is computed from the original images $S(I_t^{(i)}, I_{t+k}^{(i)})$.
The reconstruction from the model is $\hat{S} = \psi(E_t^{(i)}, E_{t+k}^{(i)}, \Phi(I_t^{(i)}))$, through the 3D volumetric bottleneck in order to discover informative 3D keypoints for reconstructing agent movement.
\subsection{Learning Formulation}
The entire training pipeline (Figure~\ref{fig:method}) is differentiable, and we train the model end-to-end. We note that our model is only given multi-view video and corresponding camera parameters, without any keypoint or bounding box supervision.
\vspace{-0.1in}
\subsubsection{Multi-View Reconstruction Loss}
Our multi-view spatiotemporal difference reconstruction is based on the single-view spatiotemporal difference studied for 2D keypoint discovery~\cite{sun2022self}. We compute the Structural Similarity Index Measure (SSIM)~\cite{Wang04imagequality} as a reconstruction target in each view.
SSIM has been used to measure perceived differences between images based on luminance, contrast, and structure features.
Here, we use SSIM as a reconstruction target and we compute a similarity map using local SSIM on corresponding patches between $I_t^{(i)}$ and $I_{t+k}^{(i)}$. This similarity map is negated to obtain the dissimilarity map used as the target: $S(I_t^{(i)}, I_{t+k}^{(i)})$.
We use perceptual loss~\cite{Johnson2016Perceptual} in each view between the target $S$ and the reconstruction $\hat{S}$. This loss computes the L2 distance between features of the target and reconstruction computed from the VGG network $\phi$~\cite{VGG14}:
\begin{align}
\mathcal{L}_{recon}^{(i)} = \left\Vert \phi(S(I_t^{(i)},I_{t+k}^{(i)})) - \phi(\hat{S}(I_t^{(i)},I_{t+k}^{(i)})) \right\Vert_2.
\end{align}
The error is computed by comparing features from intermediate convolutional blocks of the network. Our final perceptual loss is summed over each view $\mathcal{L}_{recon} = \sum_i \mathcal{L}_{recon}^{(i)}$.
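For concreteness, a PyTorch sketch of the target and loss (the SSIM window size, constants, and the choice of a single intermediate VGG block are simplifying assumptions on our part):
\begin{verbatim}
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

def ssim_dissimilarity(a, b, win=11):
    # local SSIM on corresponding patches of images in [0, 1],
    # negated so that regions of change become the target S
    pad = win // 2
    mu_a = F.avg_pool2d(a, win, 1, pad)
    mu_b = F.avg_pool2d(b, win, 1, pad)
    va = F.avg_pool2d(a * a, win, 1, pad) - mu_a ** 2
    vb = F.avg_pool2d(b * b, win, 1, pad) - mu_b ** 2
    cov = F.avg_pool2d(a * b, win, 1, pad) - mu_a * mu_b
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))
    return 1 - ssim

vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()

def perceptual_loss(target, recon):
    # L2 distance between VGG features of S and S-hat
    with torch.no_grad():
        ft = vgg(target)
    return (vgg(recon) - ft).pow(2).mean()
\end{verbatim}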
\vspace{-0.1in}
\subsubsection{Learned Length Constraint}
Since many animals have a rigid skeletal structure, we encourage the lengths of active edges ($w_{m,n} > 0$ for point pairs $m$ and $n$) to be consistent across samples.
We do not assume that these lengths and connections are known, as in previous work~\cite{usman2022metapose}; rather, they are learned during training.
We do this by maintaining a running average of the length of all active edges $l_{avg(m,n)}$, and minimizing the difference between the average length and each sample $l_{m,n}$:
\begin{align}
\mathcal{L}_{length} = \sum_{m}\sum_{n} \mathbbm{1}_{w_{m,n} > 0} \left\Vert l_{avg(m,n)} - l_{m,n} \right\Vert_2.
\end{align}
During training, we update $l_{avg(m,n)}$ using an exponential running average and learn the edge weights $w_{m,n}$ for every pair. Both of these parameters are shared across all viewpoints and timestamps. Notably, the length constraint is only applied to active edges, since there are many point pairs without rigid connections (e.g. elbow to feet), and we want to enforce this constraint only for rigid connections (e.g. elbow to wrist).
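A minimal PyTorch sketch of this constraint (the momentum value and the buffer handling are illustrative assumptions):
\begin{verbatim}
import torch

def length_loss(kpts_3d, edge_w, avg_len, momentum=0.9):
    # kpts_3d: (J, 3) discovered 3D keypoints U_t
    # edge_w:  (J, J) learned edge weights w_{m,n}
    # avg_len: (J, J) running-average lengths l_avg (persistent buffer)
    diff = kpts_3d.unsqueeze(0) - kpts_3d.unsqueeze(1)
    lengths = diff.norm(dim=-1)               # l_{m,n} for every pair
    active = (edge_w > 0).float()             # constrain active edges only
    with torch.no_grad():                     # exponential running average
        avg_len.mul_(momentum).add_((1 - momentum) * lengths)
    return (active * (avg_len - lengths).abs()).sum()
\end{verbatim}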
\subsubsection{Separation Loss}
To encourage unique keypoints to be discovered, we apply separation loss to our 3D keypoints, which has been previously studied in 2D~\cite{ZhangKptDisc18,sun2022self}. On a set of 3D keypoints $U_{it}$, where $i$ is the index of a keypoint and $t$ is the time, the separation loss is:
\begin{align}
\mathcal{L}_{s} = \sum_{i \neq j} \exp{\left( \frac{-(U_{it} - U_{jt})^2}{2\sigma_s^2} \right)},
\end{align}
where $\sigma_s$ is a hyperparameter that controls the strength of separation.
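As a PyTorch sketch (the default value of $\sigma_s$ is a placeholder):
\begin{verbatim}
import torch

def separation_loss(kpts_3d, sigma_s=0.1):
    # kpts_3d: (J, 3) discovered 3D keypoints at one timestamp
    diff = kpts_3d.unsqueeze(0) - kpts_3d.unsqueeze(1)
    sq = (diff ** 2).sum(-1)                    # squared pair distances
    mask = 1.0 - torch.eye(kpts_3d.shape[0])    # drop the i == j terms
    return (mask * torch.exp(-sq / (2 * sigma_s ** 2))).sum()
\end{verbatim}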
\subsubsection{Training Objective}\label{sec:final_objective}
Our full training objective is the sum of the multi-view spatiotemporal reconstruction loss $\mathcal{L}_{recon}$, learned length constraints $\mathcal{L}_{length}$, and separation loss $\mathcal{L}_s$:
\begin{align}
\mathcal{L} = \mathcal{L}_{recon} + \mathbbm{1}_{epoch>e} (\omega_r \mathcal{L}_{length} + \omega_s\mathcal{L}_s).
\label{eq:full_objective}
\end{align}
Our model is trained using curriculum learning~\cite{Bengio2009}. We only apply $\mathcal{L}_{length}$ and $\mathcal{L}_s$ when the keypoints are more consistent, after $e$ epochs of training using reconstruction loss.
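A compact sketch of this gating (the values of $e$, $\omega_r$ and $\omega_s$ below are placeholders, not the trained configuration):
\begin{verbatim}
def total_loss(l_recon, l_length, l_sep, epoch, e=2, w_r=1.0, w_s=0.01):
    # reconstruction only for the first e epochs, then add the
    # length and separation terms of the full objective
    gate = 1.0 if epoch > e else 0.0
    return l_recon + gate * (w_r * l_length + w_s * l_sep)
\end{verbatim}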
\section{Experiments}
We demonstrate BKinD-3D using real-world behavioral videos, using a human dataset and a recently released large-scale rat dataset (Section~\ref{sec:exp_setup}).
We evaluate our discovered keypoints using a standard linear regression protocol based on previous works for 2D keypoint discovery~\cite{JakabNeurips18,sun2022self} (also described in Section~\ref{sec:training_procedure}).
Here, we present results on pose regression (Section~\ref{sec:results}) as well as ablation studies (Section~\ref{sec:ablation}), with additional results in supplementary materials.
\subsection{Experimental Setup}\label{sec:exp_setup}
\subsubsection{Datasets}
We demonstrate our method by evaluating it on two representative datasets: Human 3.6M and Rat7M. The datasets have different environments and focus on subjects of different sizes, with humans being about 1700mm tall and rats about 250mm long.
\textbf{Human 3.6M}. We evaluate our method on Human3.6M to compare to recent works in self-supervised 3D from 2D~\cite{usman2022metapose}.
Human 3.6M~\cite{ionescu2013human3} is a large-scale motion capture dataset with videos from 4 viewpoints.
We follow the standard evaluation protocol~\cite{iskakov2019learnable,kocabas2019self} to use subjects 1, 5, 6, 7, and 8 for training and 9 and 11 for testing.
Our test set matches the set specified in~\cite{usman2022metapose} using every 16th frame (8516 test frame sets).
Notably, unlike baselines such as~\cite{iskakov2019learnable}, our method does not require any pre-processing with 2D bounding box annotations but rather is directly applied to the full image frame.
\textbf{Rat7M}. We also evaluate our method on Rat7M~\cite{dunn2021geometric}, a 3D pose dataset of rats moving in a behavioral arena.
This dataset most closely matches the expected use case for our method, which is a dataset of non-human animal behavior in a static environment.
Rat7M consists of videos from 6 viewpoints captured at 1328$\times$1048 resolution and 120Hz, along with ground truth annotations obtained from marker-based tracking. We train on subjects 1, 2, 3, 4, and test on subject 5, as in \cite{dunn2021geometric}. We train and evaluate on every 240th frame of each video (3083 train, 1934 test frame sets).
\subsubsection{Model Comparisons}\label{sec:model_comparisons}
We compare our method with three main categories of baselines: supervised 3D pose estimation methods (e.g.,~\cite{iskakov2019learnable}), 3D pose estimation methods from 2D supervision (e.g.,~\cite{usman2022metapose}), and a 3D keypoint discovery method developed for control in simulation~\cite{chen2021unsupervised}. A more detailed comparison of methods in this space is in Table~\ref{tab:related_work}.
For baselines with model variations, we use evaluation results from the version that is the closest to our model (multi-view inference, and camera parameters during inference). We note that all previous methods require additional 3D or 2D supervision, or jointly training a reinforcement learning policy in simulation~\cite{chen2021unsupervised}, which we do not require for 3D keypoint discovery in real videos.
Another notable difference is that previous methods typically pre-process video frames using detected or ground truth 2D bounding boxes~\cite{iskakov2019learnable}, while our method does not require this pre-processing step.
Since 3D keypoint discovery has not been thoroughly explored, we additionally study methods in this area using multi-view 2D discovery and triangulation (Triang.+Reproj.), and multi-view 2D discovery with depth map estimates (Depth Map), in addition to our volumetric approach (Section~\ref{sec:method}, BKinD-3D).
For multi-view 2D discovery and triangulation, we use BKinD~\cite{sun2022self} to discover 2D keypoints in each view, and perform triangulation using camera parameters to obtain 3D keypoints. We then project the 3D keypoints for multi-view reconstruction. We add an additional loss on the reprojection error to learn keypoints consistent across multiple views.
For the depth map approach, in each camera view, we estimate 2D heatmaps corresponding to each keypoint alongside a view-specific depthmap estimate. The final 3D keypoints are then computed from a confidence-weighted average of each view's estimated 3D keypoint coordinates (from the per-view 2D heatmaps and depth estimates).
More details on each method are in the supplementary materials.
\vspace{-0.1in}
\subsubsection{Training and Evaluation Procedure}\label{sec:training_procedure}
We train our volumetric approach using the full objective (Eq~\ref{eq:full_objective}). We scale images to $256\times256$ for training, with a frame gap of 20 frames (0.4 s) for Human3.6M and 80 (0.66 s) for Rat7M. We use a maximum volume size of 7500mm for Human3.6M and 1000mm for Rat7M. The results are computed for all 3D keypoint discovery methods with 15 keypoints unless otherwise specified. We train using videos from the train split with camera parameters provided by each dataset.
\begin{table}
\begin{center}
\scalebox{0.95}{
\begin{tabular}{lccc}
\toprule[0.2em]
Method & Supervision & PMPJPE & MPJPE \\
\toprule[0.2em]
\multicolumn{4}{c}{\textbf{\textit{Supervised 3D}}} \\
Rhodin et al.~\cite{rhodin2018learning} & 3D/2D & 52 & 67\\
Isakov et al.~\cite{iskakov2019learnable} & 3D/2D & - & 21\\
\bottomrule[0.1em]
\multicolumn{4}{c}{\textbf{\textit{Supervised 2D + self-supervised 3D}}} \\
Anipose~\cite{karashchuk2021anipose} & 2D & 75 & -\\
CanonPose~\cite{wandt2021canonpose} & 2D & 53 & 74\\
EpipolarPose~\cite{kocabas2019self} & 2D & 67 & 77\\
Iqbal et al.~\cite{iqbal2020weakly} & 2D & 55 & 69\\
MetaPose~\cite{usman2022metapose} & 2D & 74 & - \\
\bottomrule[0.1em]
\multicolumn{4}{c}{\textbf{\textit{3D Discovery + Regression}}} \\
Keypoint3D~\cite{chen2021unsupervised} & $\times$ & 168 & 368\\
Triang+reproj & $\times$ & 134 & 241\\
Depth Map & $\times$ &122 & 161 \\
BKinD-3D (ours) & $\times$ & 105 & 125\\
\bottomrule[0.1em]
\end{tabular}}
\caption{\textbf{Comparing performance with related work on Human3.6M}. We note that previous approaches typically require additional 2D or 3D supervision, whereas our model discovers 3D keypoints directly from multi-view video. The 3D keypoint discovery models are evaluated using a linear regression protocol (Section~\ref{sec:training_procedure}).}
\label{tab:human36m}
\end{center}\vspace{-0.4cm}
\end{table}
We evaluate our 3D keypoint discovery through keypoint regression based on similar methods from 2D, using a linear regressor without a bias term~\cite{sun2022self,JakabNeurips18,ZhangKptDisc18}.
For this regression step, we extract our discovered 3D keypoints from a frozen network, and learn a linear regressor to map our discovered keypoints to the provided 3D keypoints in each of the training sets.
We then perform evaluation on regressed keypoints on the test set.
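This protocol amounts to ordinary least squares without an intercept; a NumPy sketch (names are ours):
\begin{verbatim}
import numpy as np

def regress_keypoints(disc_train, gt_train, disc_test):
    # disc_*: (N, J*3) flattened discovered keypoints (frozen network)
    # gt_*:   (N, J'*3) flattened ground-truth 3D poses
    W, *_ = np.linalg.lstsq(disc_train, gt_train, rcond=None)
    return disc_test @ W          # regressed test poses, no bias term
\end{verbatim}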
For metrics, we compute Mean Per Joint Position Error (MPJPE) in line with previous works in 3D pose estimation~\cite{iskakov2019learnable,iqbal2020weakly}, which is the L2 distance between the regressed and ground truth 3D poses, accounting for the mean shift between the regressed and ground truth points.
To compare to methods that require additional alignment before MPJPE computation (e.g.~\cite{usman2022metapose}, which does not use camera parameters during inference), we also compute Procrustes-aligned MPJPE (PMPJPE)~\cite{usman2022metapose,kocabas2019self,iqbal2020weakly}.
PMPJPE applies the optimal rigid alignment to the predicted and ground truth 3D poses before metric computation.
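A NumPy sketch of the two metrics for a single pose (we include the optimal scale in the Procrustes fit, a common convention; this is our illustration, not the exact evaluation code):
\begin{verbatim}
import numpy as np

def mpjpe(pred, gt):
    # pred, gt: (J, 3); remove the mean shift before the error
    pred = pred - pred.mean(0) + gt.mean(0)
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pmpjpe(pred, gt):
    # optimal rigid (Procrustes) alignment before the error
    X, Y = pred - pred.mean(0), gt - gt.mean(0)
    U, S, Vt = np.linalg.svd(X.T @ Y)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))     # avoid reflections
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (X ** 2).sum()  # optimal scale
    return np.linalg.norm(s * X @ R - Y, axis=-1).mean()
\end{verbatim}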
\subsection{Results}\label{sec:results}
We evaluate our discovered keypoints quantitatively using keypoint regression on Human3.6M (Table~\ref{tab:human36m}) and Rat7M (Table~\ref{tab:rat7m}).
Over both datasets with diverse organisms, our approach generally outperforms all other fully self-supervised 3D keypoint discovery approaches.
Additionally, among all the approaches we developed for 3D keypoint discovery, BKinD-3D using the volumetric bottleneck performs the best overall.
Results demonstrate that BKinD-3D is directly applicable to discover 3D keypoints on novel model organisms, potentially very different in appearance or size, without additional supervision.
Notably, on Human3.6M, Keypoint3D~\cite{chen2021unsupervised}, developed for control in simulated environments, does not work well in our setting with real videos, and qualitative results demonstrate that this method was not able to discover keypoints that tracked the agent (supplementary materials).
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/30kpts.pdf}
\caption{\textbf{Qualitative results for 3D keypoint discovery on Human3.6M}. Representative samples of 3D keypoints discovered from BKinD-3D without regression or alignment for 15 and 30 total discovered keypoints. We visualize all keypoints that are connected using the learned edge weights, and the projected 3D keypoints in the leftmost column are from the keypoint model with 30 discovered keypoints.
}
\label{fig:qualitative}
\end{figure*}
\textbf{Qualitative results.} We find that the discovered points and skeletons are reasonable and look similar to the ground truth annotations for Human3.6M (Figure~\ref{fig:qualitative}) and Rat7M (Figure~\ref{fig:qualitative_rat}). Furthermore, we find that a volumetric model with 30 keypoints learns a more detailed human skeleton representation than a model with 15 keypoints. For example, the model with 30 keypoints is able to track both legs, while the 15 keypoint model only tracks one leg; however, both models miss the knees. Importantly, our model discovers the skeleton in global coordinates and is able to track the agent as it moves around the space.
\begin{table}
\begin{center}
\scalebox{0.95}{
\begin{tabular}{lccc}
\toprule[0.2em]
Method & Supervision & PMPJPE & MPJPE \\
\toprule[0.2em]
\multicolumn{4}{c}{\textbf{\textit{Supervised 3D}}} \\
DANNCE~\cite{dunn2021geometric} & 3D & 11 & -\\
\multicolumn{4}{c}{\textbf{\textit{3D Discovery + Regression}}} \\
Triang+reproj & $\times$ & 21 & 108\\
Depth Map & $\times$ & 27 & 56\\
BKinD-3D (ours) & $\times$ & 24 & 76 \\
\bottomrule[0.1em]
\end{tabular}}
\caption{\textbf{Comparison with 3D keypoint discovery methods on Rat7M}. Results from the top three 3D keypoint discovery methods on Rat7M. The 3D keypoint discovery models are evaluated using a linear regression protocol (Section~\ref{sec:training_procedure}).}
\label{tab:rat7m}
\end{center}\vspace{-0.4cm}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/rat.pdf}
\caption{\textbf{Qualitative results for 3D keypoint discovery on Rat7M}. Representative samples of 3D keypoints discovered from BKinD-3D without regression or alignment. We visualize all connected keypoints using the learned edge weights and visualize the first 4 cameras (out of 6 cameras) in Rat7M for projected 3D keypoints.
}
\label{fig:qualitative_rat}
\end{figure*}
\begin{table}
\begin{center}
\scalebox{0.95}{
\begin{tabular}{lccc}
\toprule[0.2em]
Method & PMPJPE & MPJPE\\
\toprule[0.2em]
BKinD-3D (8 kpts) & 120 & 149\\
BKinD-3D (15 kpts) & 105 & 125 \\
BKinD-3D (30 kpts) & 109 & 130\\
\hline
BKinD-3D (point) & 110 & 137 \\
BKinD-3D (edge, without length) & 108 & 129 \\
BKinD-3D (edge, full objective) & 105 & 125\\
\bottomrule[0.1em]
\end{tabular}}
\caption{\textbf{Ablation results on Human3.6M}. We perform an ablation study of our volumetric bottleneck method comparing different numbers of keypoints as well as variations to the edge bottleneck with length constraints.
}
\label{tab:human36m_ablation}
\end{center}\vspace{-0.4cm}
\end{table}
While a gap in quantitative metrics remains between supervised methods and self-supervised 3D keypoint discovery, our approach substantially narrows the gap to supervised methods compared to previous work, without requiring time-consuming 2D or 3D annotations. Qualitative results demonstrate that our approach is able to discover structure across diverse model organisms, providing a method for accelerating the study of organism movements in 3D.
\subsection{Ablation}\label{sec:ablation}
To study the effect of keypoints and edges within our model, we perform an ablation study of our model trained on Human3.6M (Table~\ref{tab:human36m_ablation}). We focus on BKinD-3D as it is the best performing approach on Human3.6M.
Results show that 15 keypoints performed the best quantitatively, but 30 keypoints is comparable and qualitatively provides a more informed skeleton (Figure~\ref{fig:qualitative}). This may be due to the linear regressor used for evaluation overfitting on the training data with more keypoints.
We additionally find that adding edge information has a quantitative improvement on performance and provides more qualitative information on connectivity between joints (Figures~\ref{fig:qualitative}, ~\ref{fig:qualitative_rat}). In our 3D setting, we found that the point bottleneck (studied in previous works in 2D~\cite{sun2022self,JakabNeurips18}) did not work as well as the edge bottleneck (studied in previous works in 2D~\cite{he2022autolink}). By studying edge bottlenecks in 3D and expanding beyond 2D, our approach is able to enforce joint length constraints through the discovered
edge connectivity.
\section{Discussion}
We present a method for 3D keypoint discovery directly from multi-view video, without any requirement for 2D or 3D supervision.
Our method discovers 3D keypoint locations as well as joint connectivity in behaving organisms using a volumetric heatmap with multi-view spatiotemporal difference reconstruction.
Results demonstrate that our work has closed the gap significantly to supervised methods for studying 3D pose, and is applicable to different organisms.
\textbf{Limitations and Future Directions}.
Currently, our approach uses multi-view videos with camera parameters for training and focuses on behavioral videos with stationary cameras and backgrounds.
Future directions to jointly estimate camera parameters, camera movement, and pose from visual data can improve the applicability of 3D keypoint discovery.
We were also limited by the small amount of publicly available multi-view datasets of non-human animals.
More open datasets in this space would encourage the development of pose estimation models with broader impacts beyond humans.
While challenges exist, we highlight the potential for 3D keypoint discovery in studying the 3D movement of diverse organisms without supervision.
\textbf{Broader Impacts}. 3D keypoint discovery has the potential to accelerate the study of agent movements and behavior in 3D~\cite{marshall2022leaving}, since these methods do not require time-consuming manual annotations for training.
This advance enables scientists to study behavior in novel organisms and experimental setups, for which annotations and pre-trained models are not available.
However, risks are inherent in applications of behavior analysis, especially regarding human behavior, and thus important considerations must be taken to respect privacy and human rights.
In research, responsible use of these models involves being informed and following policies, which often includes obtaining institutional review board (IRB) approval, as well as obtaining written informed consent from human participants in studies.
Overall, we hope to inspire more efforts in self-supervised 3D keypoint discovery in order to understand the capabilities and limitations of vision models as well as enable new applications, such as studying natural behaviors of organisms from diverse taxa in biology.
\section{Acknowledgements}
This work is generously supported by the Amazon AI4Science Fellowship (to JJS), NIH NINDS (R01NS102333 to JCT), and the Air Force Office of Scientific Research (AFOSR FA9550-19-1-0386 to BWB).
\bibliographystyle{ieee_fullname}
\section*{Supplementary Material}
We present additional experimental results (Section~\ref{sec:addtional_results}), method description for the approaches we studied for 3D keypoint discovery in addition to the volumetric method (Section~\ref{sec:addtional_approaches}), additional implementation details (Section~\ref{sec:additional_implementation}), and qualitative results (Section~\ref{sec:qualitative_results}).
\section{Additional Experimental Results}~\label{sec:addtional_results}
We perform additional experiments of BKinD-3D on Human3.6M and Rat7M using the evaluation procedure based on 3D keypoint regression, similar to the main paper.
On Human3.6M (Table~\ref{tab:supp_h36m}), we vary the number of cameras from 4 to 2, and compute the mean performance over all camera pairs. For the 4 camera experiment, we used all 4 cameras for training and inference, while for the 2 camera experiment, we used the same selections of 2 cameras for training and inference.
The mean performance with 2 cameras is slightly lower than using all cameras.
Notably, on the best performing camera pair, we observe that the performance is similar to using all 4 cameras. This result is promising for 3D keypoint discovery in settings that might limit the number of cameras, such as due to cost of additional cameras, maintenance effort, or difficulty of hardware setups.
Additionally, we did not observe a significant difference in performance with a bigger volumetric representation (Table~\ref{tab:supp_h36m}). This volume feature corresponds to $C$, which is the number of channels of the volumetric representation before input to the volume-to-volume network $\rho$.
We additionally visualize the error distribution across joint types from our 3D volumetric model (Figure~\ref{fig:joints}). We observe that generally joints on the limbs (e.g. wrist, ankle) have higher errors than joints closer to the center of the body (e.g. thorax, neck), for both MPJPE and PMPJPE.
This could be due to the wider range of motion of these limbs compared to the center in Human3.6M.
There is not a significant difference in error across joints on the left side or right side.
Since we currently perform inference per frame, future work incorporating temporal constraints, or extending our method to identify meshes without supervision, could reduce errors on the limbs.
\begin{table}[b]
\begin{center}
\scalebox{1.0}{
\begin{tabular}{lccc}
\toprule[0.2em]
Method & PMPJPE & MPJPE\\
\toprule[0.2em]
BKinD-3D (2 cams) mean & 117 & 155 \\
\quad\quad\quad\quad\quad (2 cams) best & 108 & 133 \\
\quad\quad\quad\quad\quad (2 cams) worst & 125 & 167 \\
BKinD-3D (4 cams) & 105 & 125\\
\hline
BKinD-3D (32 volume features) & 105 & 125 \\
BKinD-3D (64 volume features) & 107 & 125 \\
\bottomrule[0.1em]
\end{tabular}}
\caption{\textbf{Additional results on Human3.6M}.
We vary the number of cameras used for training and inference, as well as the size of the volumetric representation. Since there are multiple choices of 2 camera configurations, we chose the mean, best, and worst performance metrics. }
\label{tab:supp_h36m}
\end{center}
\end{table}
On Rat7M (Table~\ref{tab:supp_rat7m}), we compare model performance when discovering 15 keypoints and 30 keypoints. We observe a small improvement in PMPJPE and MPJPE with an increased number of discovered keypoints, and also observe that the discovered keypoints cover a greater portion of the rat body in qualitative results (Figure~\ref{fig:qualitative_supp_rat7m}).
This is similar to our observations on varying keypoints on Human 3.6M (Figure 3 in the main paper). It is possible that further increasing the number of keypoints could lead to a better body representation. Future work that explores using more efficient models with a much higher number of learned keypoints could further improve performance.
\begin{figure*}
\centering
\includegraphics[width=0.48\linewidth]{figures/mpjpe.png}
\includegraphics[width=0.48\linewidth]{figures/pmpjpe.png}
\caption{\textbf{Per joint errors on Human3.6M}. Errors of each joint in mm using BKinD-3D, corresponding to the skeleton definition from the Human3.6M dataset. The dotted red line corresponds to the mean across joints (the MPJPE and PMPJPE respectively).
}
\label{fig:joints}
\end{figure*}
\begin{table}[b]
\begin{center}
\scalebox{1.0}{
\begin{tabular}{lccc}
\toprule[0.2em]
Method & PMPJPE & MPJPE\\
\toprule[0.2em]
BKinD-3D (15 kpt) & 24 & 76\\
BKinD-3D (30 kpt) & 23 & 70 \\
\bottomrule[0.1em]
\end{tabular}}
\caption{\textbf{Additional results on Rat7M}.
We vary the number of discovered keypoints. }
\label{tab:supp_rat7m}
\end{center}
\end{table}
\section{Additional 3D Keypoint Discovery Approaches}~\label{sec:addtional_approaches}
\subsection{Triangulation and reprojection}
One of the simplest approaches to extending current 2D keypoint discovery methods~\cite{sun2022self} to three dimensions is to triangulate the discovered 2D keypoints to obtain 3D keypoints, then reproject the points back to 2D. This model can be trained using the same loss (spatiotemporal difference reconstruction) using the discovered 2D keypoints and the projected 2D keypoints in each view. We implement this approach, along with an additional loss for minimizing reprojection error to encourage detecting consistent keypoints across views.
We use an encoder-decoder architecture, with a shared appearance encoder $\Phi$, geometry decoder $\Psi$, and reconstruction decoder $\psi$. For each camera view $i$ and time $t$, a frame $I_{t}^{(i)}$ is processed to obtain a heatmap $H_{t}^{(i)} = \Psi(\Phi(I_{t}^{(i)}))$. We apply a spatial softmax to obtain 2D keypoints $y_{t}^{(i)}$ for each view. The 2D keypoints across all views are triangulated to produce 3D keypoints. The triangulation is done by applying singular value decomposition (SVD) to find a solution to the following problem:
\begin{align*}
\text{argmin}_{\tilde{U}_{t}} \sum_{i=1}^{M} ||y_{t}^{(i)} - P^{(i)} \tilde{U}_{t}||_2
\end{align*}
where $\tilde{U}_{t}$ represents the 3D keypoints in homogeneous coordinates and $P^{(i)}$ the projection matrix for camera view $i$.
The 3D keypoints are projected back into 2D for each view forming $y*_{t}^{(i)}$.
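A standard way to realize this is direct linear transformation (DLT) triangulation; a NumPy sketch for a single keypoint (our own illustration):
\begin{verbatim}
import numpy as np

def triangulate_dlt(points_2d, proj_mats):
    # points_2d: (M, 2) a matching 2D keypoint in each of M views
    # proj_mats: (M, 3, 4) projection matrices P^(i)
    rows = []
    for (u, v), P in zip(points_2d, proj_mats):
        rows.append(u * P[2] - P[0])   # two linear constraints per view
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)                 # (2M, 4)
    _, _, Vt = np.linalg.svd(A)        # least-squares null-space solution
    X = Vt[-1]                         # homogeneous 3D point
    return X[:3] / X[3]
\end{verbatim}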
To train the network, we minimize a sum of the following losses:
\begin{itemize}
\item $\mathcal{L}_{recon}^{(i)}$: the multi-view reconstruction loss (described in Section 3.2.1 of the main paper) using the detected 2D keypoints $y_{t}^{(i)}$ and $y_{t+k}^{(i)}$
\item $\mathcal{L}_{projrecon}^{(i)}$: the same multi-view reconstruction loss as above, but applied to the projected 2D keypoints $y*_{t}^{(i)}$ and $y*_{t+k}^{(i)}$
\item $\mathcal{L}_{reproj}^{(i)} = ||y*_{t}^{(i)} - y_{t}^{(i)}||_2$: the reprojection error
\item $\mathcal{L}_{s}$: the separation loss (described in Section 3.2.3 of the main paper)
\end{itemize}
The final loss is
$$\mathcal{L} = \sum_i \mathcal{L}_{recon}^{(i)} + \mathbbm{1}_{epoch>e}( \omega_s \mathcal{L}_s + \omega_{p} \sum_i \mathcal{L}_{projrecon}^{(i)} + \omega_{r} \sum_i \mathcal{L}_{reproj}^{(i)})$$
Our model is trained using curriculum learning~\cite{Bengio2009}. We only apply the losses based on projected 2D points after $e$ epochs, once the model has learned consistent keypoints in each view. We train our model for 5 epochs and apply these losses after $e=2$ epochs.
\subsection{Depth Approach}
Based on the success of 2D unsupervised behavioral video keypoint discovery~\cite{sun2022self} and 3D keypoint discovery for robotic control~\cite{chen2021unsupervised}, we experiment with a framework that encodes appearance as well as 2D and depth representations (Figure~\ref{fig:depth_method}). Given multiple camera views with known extrinsic and intrinsic parameters, our framework learns 2D keypoints and depth maps to estimate 3D keypoints.
For each camera $i$, there is an appearance encoder $\Phi^{(i)}$, a pose decoder $\Psi^{(i)}$, and a depth decoder $D^{(i)}$. A frame $I_t^{(i)}$ and future frame $I_{t+k}^{(i)}$ are fed into the appearance encoder and subsequently the pose decoder. The pose decoder outputs $J$ heatmaps corresponding to the $J$ keypoints: $\Psi^{(i)}(\Phi^{(i)}(\cdot))$. A spatial softmax operation is applied to the output of the pose decoder, representing confidence or a probability distribution for the location of each keypoint. We interpret each of the heatmaps as a 2D Gaussian. The depth decoder outputs one depth map $D(\Phi(\cdot))$, representing a dense prediction of distance from the camera plane for the scene.
The appearance features $\Phi^{(i)}(I_t^{(i)})$ are fused with the 2D geometry features for both $I_t$ and $I_{t+k}$. These are fed into the reconstruction decoder $\psi$ to reconstruct the 2D spatiotemporal difference between $I_t$ and $I_{t+k}$. The spatiotemporal difference encourages the network to focus on meaningful regions of movement and be invariant to the background and other irrelevant features. This framework is repeated across camera views.
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{figures/BKinD3D_method_depth.pdf}
\caption{\textbf{3D keypoint discovery using depth maps}. The model is trained using multi-view spatiotemporal difference reconstruction to learn 2D heatmaps and depth representations at each view. Then the 3D information from each view is aggregated using a confidence-weighted average to produce the final 3D pose.
}
\label{fig:depth_method}
\end{figure*}
The reconstruction objective uses spatiotemporal difference reconstruction similar to our volumetric approach. To make the model more robust to rotation, we rotate the geometry bottleneck $h_g$ for image $I$ to create pseudo labels $h_g^{R^{\circ}}$ for the rotated input images $I^{R^\circ}$, where $R \in \{90^{\circ}, 180^{\circ}, 270^{\circ}\}$. We apply a mean squared error between the geometry bottlenecks $\hat{h}_g$ predicted from the rotated images and the generated pseudo labels $h_g^{R^{\circ}}$:
\begin{equation}
\mathcal{L}_{rot} = \text{MSE} (h_g^{R^{\circ}}, \hat{h}_g(I^{R^\circ}) )
\end{equation}
The rotational loss can lead to a degenerate solution, with the keypoints converging to the center of the image. As such, we employ a separation loss as was done in our volumetric method.
For camera $i$ and a 3D point $(x,y,z)$ in the world coordinate system, we can use the projection matrix $P^{(i)}$ to project the 3D point to camera $i$'s normalized coordinate system $(u,v,d)$. Let the $\Omega^{(i)}$ operator denote the transformation to the camera plane and $\Omega^{*(i)}$ denote the inverse transformation. These transformations are differentiable and can be expressed analytically~\cite{chen2021unsupervised}.\\
After outputting the 2D keypoint heatmap $\Psi(\Phi(\cdot))$ and the depth map $D(\Phi(\cdot))$ for an input frame, we integrate over the probability distributions on the $\mathbb{R}^{S\times S}$ heatmaps and the depth maps to get the expected value for each coordinate $j$ and camera $i$:
\begin{equation}
\mathbb{E}[u_j^{(i)}] = \frac{1}{S}\sum_{u,v} u\cdot H_j^{(i)}(u,v)
\end{equation}
\begin{equation}
\mathbb{E}[v_j^{(i)}] = \frac{1}{S}\sum_{u,v} v\cdot H_j^{(i)}(u,v)
\end{equation}
\begin{equation}
\mathbb{E}[d_j^{(i)}] = \sum^S_{u=1}\sum^S_{v=1} D_j^{(i)}(u,v)\cdot H_j^{(i)}(u,v)
\end{equation}
The keypoints are unprojected into the world coordinate system via the inverse transformation $\Omega^{*(i)}(u,v,d)$. To penalize disagreement between predictions from different views, we use a multi-view consistency loss via mean squared error.
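A PyTorch sketch of this expectation step for a single joint and view (names are ours):
\begin{verbatim}
import torch

def expected_uvd(heatmap, depth_map):
    # heatmap:   (S, S) spatial-softmax output for one joint (sums to 1)
    # depth_map: (S, S) dense depth prediction for the view
    S = heatmap.shape[0]
    vs, us = torch.meshgrid(torch.arange(S, dtype=torch.float32),
                            torch.arange(S, dtype=torch.float32),
                            indexing="ij")
    u = (us * heatmap).sum() / S        # E[u], normalized by S as above
    v = (vs * heatmap).sum() / S        # E[v]
    d = (depth_map * heatmap).sum()     # E[d]
    return u, v, d
\end{verbatim}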
\begin{table}
\begin{center}
\begin{tabular}{lccc}
\toprule
Type & Input dimension & Output dimension & Output size \\
\midrule
Upsampling & - & - & 16x16 \\
Conv\_block & 2048 + \# keypoints $\times$ 2 & 1024 & 16x16 \\
Upsampling & - & - & 32x32 \\
Conv\_block & 1024 + \# keypoints $\times$ 2 & 512 & 32x32 \\
Upsampling & - & - & 64x64 \\
Conv\_block & 512 + \# keypoints $\times$ 2 & 256 & 64x64 \\
Upsampling & - & - & 128x128 \\
Conv\_block & 256 + \# keypoints $\times$ 2 & 128 & 128x128 \\
Upsampling & - & - & 256x256 \\
Conv\_block & 128 + \# keypoints $\times$ 2 & 64 & 256x256 \\
Convolution & 64 & 3 & 256x256 \\
\bottomrule\\[-1em]
\end{tabular}
\caption{\textbf{Reconstruction decoder architecture.} ``Conv\_block" refers to a combination of 3$\times$3 convolution, batch normalization, and ReLU activation. This architecture setup is also used for reconstruction decoding in~\cite{sun2022self,ryou2021weakly}.}
\label{tab:decoder_details}
\end{center}
\end{table}
\section{Additional Implementation Details}~\label{sec:additional_implementation}
\textbf{Architecture Details}. Our model architecture is based on ones studied before for 2D keypoint discovery~\cite{sun2022self,ryou2021weakly}.
Our encoder $\Phi$ is a ResNet-50~\cite{He2016DeepRL}, which outputs our appearance features. Our pose decoder $\Psi$ uses GlobalNet~\cite{CPN17}, which outputs our 2D heatmaps. Our volume-to-volume network $\rho$ is based on V2V~\cite{moon2018v2v}. Finally, our reconstruction decoder $\psi$ is a series of convolution blocks, with architecture details in Table~\ref{tab:decoder_details}. Our code is included in the supplementary materials and we plan to open-source it.
\textbf{Hyperparameters}. The hyperparameters for the volumetric 3D keypoint discovery model are in Table~\ref{tab:hyperparameter_1}. All keypoint discovery models are trained until convergence, with 5 epochs for Human3.6M and 8 epochs for Rat7M. We include additional details on each dataset:
\begin{table}[!t]
\centering
\small
\scalebox{1.0}{
\begin{tabular}{c | c | c| c | c| c |c|c}
\hline
Dataset & \# Keypoints & Batch size & Volume dimension & Volume size & Resolution & Frame Gap & Learning Rate \\
\hline
Human3.6M & 15 & 1 & 7500 & 64 & 256 & 20 & 0.001\\
Rat7M & 15 & 1 & 1000 & 64 & 256 & 80 & 0.001\\
\hline
\end{tabular}
}
\caption{{\bf Hyperparameters for 3D Keypoint Discovery.} } \label{tab:hyperparameter_1}
\end{table}
\textbf{Human3.6M}.
The Human 3.6M dataset~\cite{ionescu2013human3} contains 3.6 million frames of 3D human poses with corresponding video captured from 4 different camera views, recorded from a set of different scenarios (discussion, sitting, eating, ...). Each scenario consists of videos from all 4 views with the same background, across a set of human participants. The person in the video is approximately 1700mm tall while the room is approximately 4000mm in dimension. The dataset is captured at 50Hz.
This dataset is licensed for academic use, and more details on the dataset and license are provided by the Human 3.6M authors within~\cite{ionescu2013human3}.
\textbf{Rat7M}. The Rat7M dataset~\cite{dunn2021geometric} consists of 3D pose and videos from a behavioral experiment with a set of rats, recorded across 6 views.
This is currently one of the largest datasets with animal 3D poses.
The dataset consists of 5 rats, with videos from some of the rats across multiple days.
The rats are approximately 250mm long with the cage being around 1000mm in dimension.
The video is captured at 120Hz.
Some of the ground truth poses in Rat7M contain NaNs; during processing, similar to~\cite{dunn2021geometric}, we remove frames with NaNs from evaluation.
Our training procedure is not affected since we do not use any 3D poses during training.
This dataset is open-sourced for research.
\section{Qualitative Results}~\label{sec:qualitative_results}
We present additional qualitative results from BKinD-3D in Figures~\ref{fig:qualitative_supp_h36m} and \ref{fig:qualitative_supp_rat7m}.
For Human 3.6M (Figure~\ref{fig:qualitative_supp_h36m}), qualitative results demonstrate that the volumetric method discovers 3D keypoints and connections that qualitatively match the ground truth, even with self-occlusion or unusual poses, such as when the subject is laying or sitting down.
The 30 keypoint model generally tracks the legs, shoulders, hips, arms, and head of the subject. The 15 keypoint model tracks the shoulders, arms, and head of the subject but fails to discover the legs and hips. This may be because we use spatiotemporal difference reconstruction, and there is more movement in the discovered parts.
We observe that most discovered edges correspond to limbs, although there are extra discovered edges within the body, for example, the shoulders-to-feet connection in the 15 keypoint model. This edge likely allows the volumetric bottleneck to model the human shape with the limited keypoints available.
In addition to extra edges that may be discovered by our model, we may also miss parts, such as the knees of the subject, and occasionally the wrist keypoints (e.g. the left wrist for both the 15 and 30 keypoint models in the last row).
Despite this, we note that the discovered skeleton is reasonable across a wide range of poses.
In contrast to the volumetric bottleneck, the method Keypoint3D~\cite{chen2021unsupervised} does not work well on our real videos. In~\cite{chen2021unsupervised}, Keypoint3D jointly trains image reconstruction with a reinforcement learning (RL) policy loss in simulated environments. We find that training in real videos using only image reconstruction leads to poor performance: the discovered keypoints do not track any semantically meaningful parts (Figure~\ref{fig:keypoint3d}).
For Rat7M (Figure~\ref{fig:qualitative_supp_rat7m}), we also find that the volumetric bottleneck discovers interpretable keypoints that qualitatively match the ground truth. The head and front legs in particular are well tracked in both the 15 and 30 keypoint models, across a variety of rat poses, from having all 4 feet on the ground to crouching to standing up. However, the back legs are only partially discovered in the 30 keypoint model. Furthermore, the discovered rat skeleton has many more edges compared to the ground truth.
This highlights a limitation in our model, as the rat's skin and fat hide its underlying skeleton, making it difficult to discover the skeleton from video data alone.
Future work could explore applying self-supervised learning constrained by body priors, such as animal X-rays, in order to discover a more precise skeleton.
Overall, qualitative results from the volumetric method demonstrate the potential of 3D keypoint discovery for discovering the pose and structure of different agents without supervision, across organisms that are significantly different in appearance and scale.
\begin{figure*}[b]
\centering
\includegraphics[width=\linewidth]{figures/keypoint3d.pdf}
\caption{\textbf{Qualitative results for Keypoint3D on Human3.6M}. Representative samples of 3D keypoints discovered using Keypoint3D method~\cite{chen2021unsupervised} on real videos.
}
\label{fig:keypoint3d}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{figures/30kpts_2.pdf}
\caption{\textbf{Qualitative results for 3D keypoint discovery on Human3.6M}. Representative samples from BKinD-3D without regression or alignment for 15 and 30 total discovered keypoints. We visualize all keypoints that are connected using the learned edge weights, and the projected 3D keypoints in the leftmost column are from the keypoint model with 30 discovered keypoints.
}
\label{fig:qualitative_supp_h36m}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{figures/rat_2.pdf}
\caption{\textbf{Qualitative results for 3D keypoint discovery on Rat7M}. Representative samples of 3D keypoints discovered from BKinD-3D without regression or alignment for 15 and 30 total discovered keypoints. We visualize all connected keypoints using the learned edge weights and visualize the first 4 cameras (out of 6 cameras) in Rat7M for projected 3D keypoints from the 30 keypoint model.
}
\label{fig:qualitative_supp_rat7m}
\end{figure*}
\section{Introduction}
\subsection{Background}
For a polynomial $f(X) \in \K[X]$ and $n \geq 0$, we write $f^{(n)}(X)$ for the $n$th iterate of $f$, that is, $f^{(0)}(X) = X$ and
\[
f^{(n)}(X) = \underbrace{f \circ f \circ \ldots \circ f}_{n\:\text{times}}(X), \qquad n \ge 1.
\]
The orbit of $\alpha \in \K$ is the set $\{\alpha, f(\alpha), f^{(2)}(\alpha), \ldots\}$.
In case the set is finite we say that $\alpha$ is \textit{preperiodic\/} and we use
$\PrePer(f)$ to denote the set of preperiodic points $\alpha \in \K$.
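For example, for $f(X) = X^2 - 1$ and $\alpha = 0$ we have $f(0) = -1$ and $f(-1) = 0$, so the orbit of $0$ is the finite set $\{0, -1\}$ and hence $0 \in \PrePer(f)$.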
A famous theorem of Northcott~\cite{North}
says that for any number field $\K$ and any polynomial
$f(X) \in \K[X]$ of degree at least $2$, the set $\PrePer(f)$ is finite.
Namely there are only finitely many $\alpha \in\K$ such that
\begin{equation}
\label{eq:Northcott}
f^{(m)}(\alpha) = f^{(n)}(\alpha)
\end{equation}
for two distinct iterations of $f$ (that is, for $m \neq n$).
Coupled with modern counting
results on the number of algebraic numbers of bounded height and degree, see~\cite{Bar1,Bar2,Wid1,Wid2},
one can obtain various effective and rather explicit versions of this result.
Several generalisations of the finiteness result of Northcott~\cite{North}
have recently been considered in~\cite{BOSS,OSSZ,OstShp},
where
\Cref{eq:Northcott} has been replaced by
various restrictions of multiplicative type on the ratios $f^{(m)}(\alpha) /f^{(n)}(\alpha)$ or even
the ratios of the powers $f^{(m)}(\alpha)^r /f^{(n)}(\alpha)^s$.
For example, it is shown in~\cite[Theorems~1.3 and~1.4]{BOSS} that if $f(X) \in \K[X]$ of degree $d \ge 2$
is not of the form $f(X)= a X(X-b)^{d-1}$ with $a \in \K^*$ and $b \in \K$, then for any
finitely generated multiplicative subgroup $\Gamma \subseteq \K^*$, there are only finitely many
$\alpha \in \K$ for which
\begin{equation}
\label{eq:Northcott-Gamma}
f^{(n)}(\alpha) /f^{(m)}(\alpha)\in \Gamma
\end{equation}
for some integers $m > n \ge 0$.
\subsection{New results}
Unfortunately the method of~\cite{BOSS} relies on the results of Faltings~\cite{Falt1,Falt2}
and thus is not effective.
We consider the more general equation
\begin{equation}
\label{eq:mnuIntegers}
f^{(n)}(\alpha) = \sint f^{(m)}(\alpha)
\end{equation}
for some integers $m > n \geq 0$ and an $\cS$-integer $\sint \in \o_\cS$
(see \Cref{eq:SIntDefn} for a definition).
Note that \Cref{eq:Northcott-Gamma} is a special case of \Cref{eq:mnuIntegers}.
However, \Cref{eq:mnuIntegers} is no longer symmetric in $m$ and $n$.
In \Cref{thm:MultDepNorthcottNF} (see also \Cref{thm:LowerBoundm}) we show that in the case where $f(X) \in \o[X]$, where $\o$ is the ring of integers of $\K$,
and $\alpha \in \o$,
one can obtain an effective bound on the size of $\alpha$.
In fact, we trace the explicit dependence on $\cS$.
This is an effective version of~\cite[Theorem~1.4]{BOSS}.
Furthermore, we also provide an effective variant of~\cite[Theorem~1.7]{BOSS}
which states that, under mild additional constraints, there are only finitely many $\alpha \in \K$ that
satisfy the following relation of multiplicative dependence modulo $\cS$-units among values in an orbit
\[
f^{(n+k)}(\alpha)^r \cdot f^{(k)}(\alpha)^s \in \o_\cS^*
\]
for some $n,k \geq 1$ and $(r,s) \neq (0,0)$.
That is, we give an \textit{effective upper bound} on the height
of $\alpha \in \o$
that satisfy
\begin{equation}
\label{eq:MultDepRelation}
\left(f^{(m)}(\alpha)\right)^r = u \left(f^{(n)}(\alpha)\right)^s
\end{equation}
for some integers $m > n \geq 1$,
$(r,s) \neq (0,0)$ and
an $\cS$-unit $u \in \o_\cS^*$, see \Cref{eq:S-unit} for a
definition.
As in~\cite{BOSS}, the key to proving \Cref{thm:MultDepNorthcottNF}
is an upper bound on the \textit{$\cS$-height of polynomial values\/}, see \Cref{eq:S-height} for a
precise definition, which we believe
is of independent interest and may find other applications.
Recall that in~\cite{BOSS} this upper bound is provided by~\cite[Theorem~11(c)]{HsiaSilverman}
which is unfortunately not effective.
Here we modify the argument of~\cite{BOSS}
to use an effective variant of~\cite[Theorem~11(c)]{HsiaSilverman}
which we provide by
extending~\cite[Theorem~2.2]{BEG} to number fields.
We note that obtaining such extensions
in terms of the norm in $\o$ can be done by following the arguments in~\cite{BEG}.
However, such a generalisation is not sufficient for our purpose, so we add some additional
ideas and ingredients in order to obtain a bound in terms of the $\cS$-height (see \Cref{eq:S-height}).
\subsection{General notation and conventions}
\label {sec:note}
We set the following notation which we use for the rest of this paper.
We refer to~\cite{BomGub} for a background on valuations, height and other notions
we introduce below.
Throughout this paper, we assume that $\K$ is a number field of degree $d$, with class number $\fh$, regulator $R$ and ring of integers $\o$.
We use $\MK$ to denote the set of places of $\K$ and write
\[
\MK = \MK^{\infty} \cup \MK^0,
\]
where $\MK^{\infty}$ and $\MK^0$ are the set of archimedean (infinite) and non-archimedean (finite) places of $\K$ respectively.
We always assume that $\cS$ is a finite set of places containing $\MK^{\infty}$
and use $\cS_0 = \cS \cap \MK^0$ to denote the set of finite places in $\cS$. We also define
\[
s = \# \cS \mand t = \# \cS_0.
\]
As usual, $\o_\cS^*$ denotes the group of $\cS$-units, that is
\begin{equation}
\label{eq:S-unit}
\o_\cS^*= \{u \in \K^*:~\abs{u}_v = 1 \ \forall v \in \MK \setminus \cS \}.
\end{equation}
In particular, $\o^* = \o_{\MK^{\infty}}^*$ is the group of units which, by the Dirichlet Unit Theorem, is
a finitely generated group of rank $\# \MK^{\infty} -1$.
Similarly, $\o_\cS$ denotes the ring of $\cS$-integers, that is
\begin{equation}
\label{eq:SIntDefn}
\o_\cS= \{a \in \K:~\abs{a}_v \leq 1 \ \forall v \in \MK \setminus \cS \}.
\end{equation}
We use $\Nm(\fa)$ for the norm of the ideal $\fa$, we also write
$\Nm(\alpha)$ to mean $\Nm([\alpha])$, where $[\alpha]$ is the principal ideal in $\o$ generated by $\alpha \in \o$.
In particular, $\Nm(\alpha) >0$ for $\alpha \ne 0$.
For $x \geq 0$
it is convenient to introduce the functions
\[
\log^+ x = \max\{\log x, 0\} \mand \log^*x = \max\{\log x, 1\},
\]
with $\log^+ 0 = 0$, $\log^* 0 = 1$.
We are now able to define the \textit{logarithmic height\/} of $\alpha \in \K$ as
\[
h(\alpha) = \sum_{v \in \MK} \frac{\ell_v}{d} \log^+\abs{\alpha}_v,
\]
where
\begin{itemize}
\item $\abs{\alpha}_v$ is the absolute value extending
the valuation on $\Q$.
That is, for a finite place $v$ corresponding to a prime ideal $\mathbf{p} \mid p$
\[
\abs{\alpha}_v = p^{-\ord_\mathbf{p}\, \alpha/e_v},
\]
where $\ord_\mathbf{p}\, \alpha$ is the $\mathbf{p}$-adic order of $\alpha$.
\item $\ell_v$ denotes the local degree of the valuation $v$, that is
\[
\ell_v = [\K_v : \Q_v],
\]
where $\K_v$ and $\Q_v$ are the completions at $v$.
\end{itemize}
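For example, for $\K = \Q$ (so that $d = 1$ and $\ell_v = 1$ for all $v$) and $\alpha = 2/3$, the only place with $\abs{\alpha}_v > 1$ is the $3$-adic one, where $\abs{2/3}_3 = 3$, and hence $h(2/3) = \log 3$; more generally, $h(p/q) = \log \max\{|p|, |q|\}$ for coprime integers $p$ and $q$.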
Finally, for a set $\cT \subseteq \MK$ we use
\begin{equation}
\label{eq:S-height}
h_\cT(\alpha) = \sum_{v \in \cT} \frac{\ell_v}{d} \log^+\abs{\alpha}_v
\end{equation}
to denote the \textit{$\cT$-height} of $\alpha \in \K$.
We also recall the identity
\begin{equation}
\label{eq:sum l/d}
\sum_{v_i \in \MK^\infty} \frac{\ell_{v_i}}{d} = 1,
\end{equation}
which is a special case of~\cite[Corollary~1.3.2]{BomGub} applied to the archimedean valuation of $\Q$.
Let $\allprimes_\K$ denote the set of all prime ideals of $\o$.
For $\alpha \in \K^*$ define
\[
\suppset(\alpha) = \{\mathbf{p} \in \allprimes_\K \mid \ord_\mathbf{p}(\alpha) > 0\}
\]
and
\[
\largestsupp(\alpha) = \max_{\mathbf{p} \in \suppset(\alpha)} \Nm(\mathbf{p})
\]
with the convention that $\largestsupp(\alpha) = 1$ if $\suppset(\alpha) = \varnothing$.
We use $A$ with or without
subscripts or arguments for fully \textit{explicit} constants, while $c$ and $C$
are used for not explicit but \textit{effective\/}
constants depending on their arguments.
\section{Main Results}
\subsection{Height of \texorpdfstring{$\cS$}{S}-parts of polynomials in number fields}
We start with an
effective version
of~\cite[Theorem~1.4]{BOSS}
where we also make the dependence on $\cS$ completely explicit.
This result is proven by
extending~\cite[Theorem~2.2]{BEG} to number fields. We note that it is
also indicated in~\cite{BEG} that such an extension to number fields should be possible.
However, if one follows closely the argument of the proof of~\cite[Theorem~2.2]{BEG},
one obtains such an extension in terms of the norm, while for our purpose we need
it in terms of the height, which requires bringing in additional tools.
First we need to define some notation stemming from the use
of~\cite{GyoryYu}.
Suppose we are working with a finite set of
places $\cS$ and the prime ideals
$\{\mathbf{p}_1, \cdots, \mathbf{p}_t\}$
correspond to the places in $\cS_0$.
Then define
\begin{equation}
\label{eq:PQTdefn}
P = \max\limits_{i \in [1,t]} \Nm(\mathbf{p_i}),\quad Q = \Nm(\mathbf{p_1\cdots p_t}),
\quad \sumloglogprimes = \sum_{i=1}^t \log^* \log \Nm(\mathbf{p_i})
\end{equation}
with the convention that for $\cS = \MK^{\infty}$ we set $P = Q = 1$, $\sumloglogprimes = 0$.
Also define the functions
\begin{equation}
\label{eq:A1defn}
A_1(u,v) = v^{2v + 3.5}2^{7v}\log(2v)u^{2v}
\end{equation}
and
\begin{equation}
\label{eq:A2defn}
A_2(u,v) = (2048u)^v v^{3.5}
\end{equation}
which stem from~\cite{GyoryYu}, which underlies our argument.
We recall the definition of $h_\cS$ in Section~\ref{sec:note} and also that $d = [\K:\Q]$.
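To gauge the size of the quantities $A_1$ and $A_2$, one may evaluate them numerically; a minimal Python sketch, for illustration only:
\begin{verbatim}
import math

def A1(u, v):
    # A_1(u, v) = v^(2v + 3.5) * 2^(7v) * log(2v) * u^(2v)
    return v**(2*v + 3.5) * 2**(7*v) * math.log(2*v) * u**(2*v)

def A2(u, v):
    # A_2(u, v) = (2048 u)^v * v^3.5
    return (2048*u)**v * v**3.5

for v in (1, 2, 5):
    print(v, A1(2, v), A2(2, v))
\end{verbatim}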
\begin{theorem}
\label{thm:SPartBoundNF}
Let $f(X) \in \o[X]$ be a polynomial with at least 3 distinct roots.
Let $\L$ be a splitting field of $f$ over $\K$, let $\dD = [\L : \K]$ and let $\fh_\L$ denote the class number of $\L$.
Let $\cS$ be a finite set of $s$ places of $\K$ containing all infinite places
and let $t = \#\cS_0$.
Then for all $\alpha \in \o$, $f(\alpha) \neq 0$
we have
\begin{equation}
\label{eq:SPartBoundGeneric}
h_\cS(f(\alpha)^{-1})
< (1-\eta_1(\K,f,\cS))
(h(f(\alpha)) + 1),
\end{equation}
where
\begin{align*}
\eta_1(\K,f,\cS)^{-1} & = c_1(\K, f) A_1(d \dD, s\dD) \max\{1,t\}\\
& \qquad \qquad \times P^{\dD} (\log^* P+ \sumloglogprimes)\prod_{i=1}^t \log (\Nm(\mathbf{p_i}))^{\dD},
\end{align*}
and, for $t > 0$, we have
\begin{equation}
\label{eq:SPartBound2}
h_\cS(f(\alpha)^{-1})
< (1-\eta_2(\K,f,\cS))
(h(f(\alpha)) + 1),
\end{equation}
where
\[
\eta_2(\K,f,\cS)^{-1} = c_1(\K, f) A_2(d \dD \fh_\L, t\dD) t P^{\dD} \prod_{i=1}^t \log (\Nm(\mathbf{p_i}))^{\dD},
\]
and where $c_1(\K,f) > 0$ is effectively computable.
\end{theorem}
Note that \Cref{eq:SPartBound2}
omits the $v^v$ term
in~\Cref{eq:A1defn}.
This is necessary for the proofs of
\Cref{thm:LowerBoundm,thm:WeakZsigmondy}.
We also note that, using
the recent improvement~\cite[Corollary 4]{Gyory}
in place of~\Cref{lem:BoundhDecomposableForm},
we can replace the main dependence on $P$
by a dependence on the third largest value of
$\Nm(\mathbf{p_i})$, $i = 1, \ldots, t$.
\subsection{Effective bounds on points with multiplicatively dependent orbits}
We recall that $d = [\K : \Q]$.
\begin{theorem}
\label{thm:MultDepNorthcottNF}
Let $f(X)\in\o[X]$ be a polynomial with at least 3 distinct roots
and for which $0$ is not periodic.
Let $\cS$ be a finite set of places of $\K$ containing all infinite places
and let $t = \# \cS_0$.
Then for any $\alpha \in \o$ such that \Cref{eq:mnuIntegers} holds
for some non-negative integers $m > n$ and $\sint \in\o_{\cS}$ we have
\begin{equation}
\label{eq:MultDepNorthcottBound1}
\begin{split}
h(\alpha) <\ & c_2(\K, f) \eta_1(\K, f, \cS)^{-1},
\end{split}
\end{equation}
and, for $t > 0$,
\begin{equation}
\label{eq:MultDepNorthcottBound2}
\begin{split}
h(\alpha) <\ & c_2(\K, f) \eta_2(\K,f,\cS)^{-1},
\end{split}
\end{equation}
where $\eta_1(\K,f,\cS)$, $\eta_2(\K,f,\cS)$
are as in \Cref{thm:SPartBoundNF}
and
$c_2(\K, f)$ is an effectively computable constant.
\end{theorem}
With \Cref{thm:MultDepNorthcottNF} we can also prove the following effective variant
of \cite[Theorem~1.7]{BOSS}.
\begin{theorem}
\label{thm:MultDepGeneralThm}
Let $f(X) \in \o[X]$ be a polynomial of degree at least 3 without multiple roots and for which 0 is not periodic.
Let $\cS$ be a finite set of places of $\K$ containing all infinite places.
Then for any tuple $(m,n,\alpha,r,s)$ for which \Cref{eq:MultDepRelation} holds with $n \geq 1$
we have
\[
h(\alpha) < c_3(\K, f, \cS)
\]
for some effectively computable $c_3(\K,f,\cS)$.
\end{theorem}
Note that we have assumed $m,n \neq 0$, otherwise there are trivially
infinitely many solutions of the form
$\left(f^{(m)}(u)\right)^0 = u^{-1} \left(f^{(0)}(u)\right)$.
\Cref{thm:MultDepGeneralThm} almost directly follows from the proof of \cite[Theorem~1.7]{BOSS}
but instead using \Cref{thm:MultDepNorthcottNF} in place of \cite[Theorem~1.3]{BOSS}.
\subsection{Applications to the existence of large prime ideals in factorisations}
For $\alpha \in \K$, define the function
\[
\loght(\alpha) = \log^* h(\alpha).
\]
We obtain an effective lower bound on the largest norm of a prime ideal
appearing with a higher order in $f^{(m)}(\alpha)$ than in $f^{(n)}(\alpha)$.
\begin{theorem}
\label{thm:LowerBoundm}
Let $f(X)\in\o[X]$ be a polynomial with at least 3 distinct roots
and for which 0 is not periodic.
Let $\alpha \in \o$, $m, n \in \Z$, $m > n \geq 0$ such that $f^{(m)}(\alpha)$, $f^{(n)}(\alpha) \neq 0$.
Then
\[
\largestsupp\left(\frac{f^{(m)}(\alpha)}{f^{(n)}(\alpha)}\right)
> c_4(\K,f)
\frac{\loght\left(f^{(m)}(\alpha)\right) \log^* \loght\left(f^{(m)}(\alpha)\right)}{\log^* \log^* \loght\left(f^{(m)}(\alpha)\right)},
\]
where $c_4(\K, f) > 0$ is an effectively computable constant.
\end{theorem}
Using standard properties of height, such as \Cref{eq:hfnalphaIneqs} below, we see that
\Cref{thm:LowerBoundm} implies that
if, in addition, $\alpha$ is not preperiodic,
then
\[
\largestsupp\left(\frac{f^{(m)}(\alpha)}{f^{(n)}(\alpha)}\right)
> c_5(\K,f)
\frac{m \log^* m}{\log^* \log^* m},
\]
where $c_5(\K,f) > 0$ is an effectively computable constant.
Finally, we obtain a result on the existence of primitive divisors
within small sets of iterates.
\begin{theorem}
\label{thm:WeakZsigmondy}
Let $f(X) \in \o[X]$ be a polynomial with at least 3 distinct roots
and for which 0 is not periodic.
Then there exists an effectively computable constant $c_6(\K, f) > 0$ such that, letting
\[
k(m, \alpha) = \floor{c_6(\K, f) \log \lambda(f^{(m)}(\alpha))},
\]
for every $m \in \Z$, $m > 0$, and every $\alpha \in \o$,
$f^{(m)}(\alpha)$ not a unit,
there exists a prime ideal $\mathbf{p}$ that divides
$f^{(m)}(\alpha)$ but does not divide any element in the set
\[
\{ f^{\left(\max(0,m-k(m,\alpha))\right)}(\alpha), f^{\left(\max(0,m-k(m,\alpha))+1\right)}(\alpha), \cdots, f^{(m-1)}(\alpha)\}.
\]
\end{theorem}
If, in addition, $\alpha$ is not preperiodic, then,
using \Cref{eq:hfnalphaIneqs}, \Cref{thm:WeakZsigmondy} also holds for
\[
k(m, \alpha) = \floor{c_7(\K, f) \log m}
\]
for some effectively computable $c_7(\K, f) > 0$.
\section{Proof of \texorpdfstring{\Cref{thm:SPartBoundNF}}{Theorem~\ref{thm:SPartBoundNF}}}
\subsection{Preliminaries}
As in the proof of~\cite[Theorem~2.10]{BEG}, the main tool
is~\cite[Theorem~3]{GyoryYu}.
We state the special case for 2 variables where it is
easy to state a sufficient condition for $F$ to be \textit{triangularly connected}.
We maintain the dependence on $\cS$; however, we omit the
explicit dependence on $\K$ and $F$.
Let $R$ be the regulator of $\K$.
In \Cref{lem:BoundhDecomposableForm} below, which is a simplified version of~\cite[Theorem~3]{GyoryYu},
we have made use of the inequality
(see~\cite{BugeaudGyory})
\[
R_\cS \leq \fh R \prod_{i=1}^{t} \log \Nm(\mathbf{p_i}),
\]
where $\fh$ is the class number of $\K$ and $R_\cS$ is the $\cS$-regulator of $\K$
(see~\cite{BugeaudGyory} for a definition; it is the natural generalisation of the regulator to $\cS$-units).
In particular, we absorb $\fh, R$ into the constant $C_1(\K,F)$.
We recall that a binary form (that is, a homogeneous polynomial)
$F \in \K[X,Y]$ is called \textit{decomposable over $\K$\/}, if
$F$ factors into linear factors over $\K$.
\begin{lemma}
\label{lem:BoundhDecomposableForm}
Let $F \in \K[X,Y]$ be a decomposable form over $\K$ which has at least 3
pairwise non-proportional linear factors.
Let $\cS$ be a finite set of $s$ places of $\K$ containing all infinite places
and let $t = \# \cS_0$.
Let $P, Q, \sumloglogprimes$ be as defined in
\Cref{eq:PQTdefn}. Let $\beta \in \K \setminus \{0\}$.
Then all solutions $(x_1, x_2) \in \o_\cS^2$ of
\[
F(x_1, x_2) = \beta
\]
satisfy
\begin{equation}
\label{eq:DecomposableBound1}
\begin{split}
h(x_1), h(x_2) &<
C_1(\K,F)A_1(d,s) \left(\log^* Q + h(\beta) \right) \\
&\qquad\qquad\qquad \times
P (1+ \sumloglogprimes/\log^* P)\prod_{i=1}^t \log \Nm(\mathbf{p_i}) ,
\end{split}
\end{equation}
and, for $t > 0$,
\begin{equation}
\label{eq:DecomposableBound2}
\begin{split}
h(x_1), h(x_2) &<
C_2(\K,F)A_2(d\fh,t) \left(\log^* Q + h(\beta) \right) \\
&\qquad\qquad\qquad \times
(P/\log^* P)\prod_{i=1}^t \log \Nm(\mathbf{p_i}),
\end{split}
\end{equation}
where $A_1$ is defined as in \Cref{eq:A1defn}, $A_2$ is defined as in \Cref{eq:A2defn},
$d = [\K : \Q]$, $\fh$ is the class number of $\K$ and $C_1(\K,F)$, $C_2(\K,F)$
are effectively computable constants.
\end{lemma}
We refer to~\cite{GyoryYu} for a fully explicit statement.
For the case $t > 0$
see also the recent improvement~\cite[Corollary 4]{Gyory}.
To adapt the proof of~\cite[Theorem~2.10]{BEG} to number fields,
we need a well known fact on the approximation of archimedean valuations by units.
To obtain explicit bounds, we first need~\cite[Lemma~1]{BugeaudGyory}
in the case where $\cS = \MK^{\infty}$
(see also~\cite[Lemma~2]{GyoryYu} for an alternative bound when the unit rank is at least 2).
\begin{lemma}
\label{lem:UnitSystemBounds}
Let $\K$ be a number field with unit rank at least 1.
Then there exists a fundamental system of units
$\varepsilon_1, \ldots, \varepsilon_{r}$
such that
\[
\max\limits_{1 \leq i \leq r} h(\varepsilon_i) \leq A_3(\K) R,\]
where
\[
A_3(\K) = \frac{(r!)^2}{2^{r-1}d^{r}}\left(\frac{\delta_\K}{d}\right)^{1-r},
\]
where $d = [\K:\Q]$, $r$ is the unit rank of $\K$,
and $\delta_\K$ is any positive constant such that every non-zero algebraic number $\alpha \in \K$
which is not a root of unity
satisfies $h(\alpha) \geq \delta_\K/d$.
\end{lemma}
A result of Voutier~\cite{Voutier} states that we can take
\[
\delta_\K = \begin{cases}
\displaystyle{\frac{\log 2}{d}} &\text{if $d = 1, 2$}, \\
\displaystyle{ \frac{1}{4}\left(\frac{\log \log d}{\log d}\right)^3 } &\text{if $d \geq 3$},
\end{cases}
\]
in \Cref{lem:UnitSystemBounds}.
In the following result, little effort has been made to optimise the
right hand side as it suffices for our results that it is effectively computable
in terms of $\K$. In fact, it is essentially established in the proof of~\cite[Lemma~2]{BugeaudGyory}
(see also~\cite[Lemma~3]{GyoryYu}); however, for the sake of completeness, we give a short proof.
\begin{lemma}
\label{lem:UnitExistsHtNm}
For every $\alpha \in \o \setminus \{0\}$ and for every integer $n \geq 1$
there exists an $\varepsilon \in \o^*$ such that
\begin{equation}
\label{eq:UnitHtNm}
\abs*{\log \abs{\varepsilon^n \alpha}_{v} -
\frac{1}{d} \log(\Nm(\alpha))} \leq \frac{1}{2}A_3(\K)nd^2R
\end{equation}
for all $v\in \MK^{\infty}$ with $A_3(\K)$ as in \Cref{lem:UnitSystemBounds}
and $d = [\K : \Q]$.
\end{lemma}
\begin{proof}
For this proof let $\MK^\infty = \{v_1, \ldots, v_{r+1}\}$.
Since the case $r = 0$ is trivial, we henceforth assume that $r \geq 1$.
Let $\varepsilon_1, \ldots, \varepsilon_{r}$ be a fundamental system of units satisfying
the inequalities of \Cref{lem:UnitSystemBounds}.
We note that by the Dirichlet Unit Theorem (see, for example,~\cite[Theorem~I.7.3]{Neukirch}),
the columns of the $(r + 1) \times r$ matrix $M$ with
\[
M_{i,j} = \ell_{v_i} \log \abs{\varepsilon_j}_{v_i}
\]
(where as before $\ell_v = [\K_v : \Q_v]$ for $v\in \MK$)
form a basis for the hyperplane in $\R^{r+1}$
of vectors whose coordinates sum to 0.
Let $\mathbf{v}$ be the
column vector of dimension $r+1$, where
\[
(\mathbf{v})_i = \ell_{v_i} \log\(\Nm(\alpha)^{-1/d}\abs{\alpha}_{v_i}\), \qquad i =1, \ldots, r+1.
\]
Then there exists a unique vector $\mathbf{x} = (x_1, \ldots, x_{r})^T$ such that
\[
M \mathbf{x} = \mathbf{v}.
\]
For each $i=1, \ldots, r$, we write
\[
x_i = ny_i + z_i,\quad y_i \in \Z, \ z_i \in \R, \ z_i \in \left(-\frac{n}{2}, \frac{n}{2}\right].
\]
Let $\varepsilon = \varepsilon_1^{-y_1}\cdots\varepsilon_{r}^{-y_{r}}$.
Then, for all $v_i$,
\[
\log \abs{\varepsilon_1^{z_1} \cdots \varepsilon_{r}^{z_{r}}}_{v_i}
=
\log \abs{\varepsilon^n\alpha}_{v_i} -
\frac{1}{d}\log(\Nm(\alpha)).
\]
For each $j=1, \ldots, r$, by \Cref{lem:UnitSystemBounds}, we have
\[
\abs{z_j \log \abs{\varepsilon_j}_{v_i}}
\leq \frac{n}{2} d h(\varepsilon_j)
\leq \frac{n}{2} d A_3(\K) R
\]
and summing over $j=1, \ldots, r$ (noting that $r \leq d - 1 < d$) yields the desired statement.
\end{proof}
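The proof is constructive: it solves the linear system $M\mathbf{x} = \mathbf{v}$ and rounds each coordinate to the nearest multiple of $n$. A minimal numerical sketch of this rounding step, with synthetic data standing in for an actual unit lattice:
\begin{verbatim}
import numpy as np

# Synthetic stand-in for M: columns lie in the sum-zero hyperplane.
M = np.array([[ 1.0,  0.3],
              [-0.4,  1.1],
              [-0.6, -1.4]])
v = M @ np.array([2.7, -5.2])   # a vector in the column span of M
n = 3

x, *_ = np.linalg.lstsq(M, v, rcond=None)  # unique: M has full column rank
y = np.round(x / n)                        # integer parts y_i
z = x - n * y                              # remainders with |z_i| <= n/2
assert np.all(np.abs(z) <= n / 2 + 1e-12)
\end{verbatim}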
We now have all the tools we need for the proof of
\Cref{thm:SPartBoundNF}.
\subsection{Concluding the proof}
We will first prove \Cref{eq:SPartBoundGeneric} holds
assuming that $f$ splits in $\K$.
Let $F(X,Y)$ be the homogenisation of $f$, that is,
\[
F(X,Y) = Y^\degf f(X/Y),
\]
where $\degf = \deg f$.
Then $F$ is a decomposable form over $\K$ with $F(x,1) = f(x)$.
Suppose $x \in \o$ and $f(x) \neq 0$.
Let
$b = F(x,1) = f(x)$.
We can write $[b]$ uniquely in the form
\begin{equation}
\label{eq:bIdealDecomp}
[b] = \mathbf{p_1}^{b_1}\ldots\mathbf{p_t}^{b_t}\mathbf{a},
\end{equation}
where $\mathbf{a}$ is an ideal coprime to $\mathbf{p_1},\ldots,\mathbf{p_t}$
and $b_i = \ord_{\mathbf{p_i}}\, b$, $i=1, \ldots, t$.
Decompose each $b_i$ (uniquely) as
\[
b_i = \degf\fh q_i+ r_i,
\]
where $\fh$ is the class number of $\K$
and $q_i, r_i \in \Z_{\geq 0}$, $r_i < \degf\fh$.
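This is ordinary division with remainder with modulus $\degf\fh$; as a quick sketch:
\begin{verbatim}
# b_i = (degf * h) * q_i + r_i with 0 <= r_i < degf * h
def decompose(b_i, degf, h):
    return divmod(b_i, degf * h)

print(decompose(17, 3, 2))   # 17 = 6*2 + 5  ->  (2, 5)
\end{verbatim}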
For each $i \in [1, t]$, define $p_i \in \o$
to be any generator of $\mathbf{p_i}^{\fh}$
(which is a principal ideal).
Now let
\begin{equation}
\label{eq:Defc}
c = F\left(\frac{x}{p_1^{q_1}\ldots p_t^{q_t}}, \frac{1}{p_1^{q_1}\ldots p_t^{q_t}}\right)
= \frac{b}{p_1^{\degf q_1} \ldots p_t^{\degf q_t}},
\end{equation}
so that
\begin{equation}
\label{eq:cideal}
[c] = \mathbf{p_1}^{r_1}\ldots\mathbf{p_t}^{r_t}\mathbf{a}.
\end{equation}
We now apply \Cref{lem:UnitExistsHtNm} to $c$ and
let $\varepsilon \in \o^*$ be any unit
satisfying \Cref{eq:UnitHtNm} where
$\alpha$ is replaced by $c$
and $n $ by $\degf$.
Multiplying the arguments of $F$ by $\varepsilon$ we get
\[
F\left(\frac{\varepsilon x}{p_1^{q_1}\ldots p_t^{q_t}},
\frac{\varepsilon}{p_1^{q_1}\ldots p_t^{q_t}}\right)
= \varepsilon^\degf c
\]
and applying
\Cref{lem:BoundhDecomposableForm}
we obtain the inequality
\begin{equation}
\begin{split}
\label{eq:hLHSRHSineq}
h(\varepsilon/p_1^{q_1}\ldots p_t^{q_t}) &<
C_1(\K,f)A_1(d,s) \left(\log^* Q + h(\varepsilon^\degf c) \right) \\
& \qquad \qquad
\times P (1+ \sumloglogprimes/\log^* P)\prod_{i=1}^t \log \Nm(\mathbf{p_i}).
\end{split}
\end{equation}
We separately lower bound $h(\varepsilon/p_1^{q_1}\ldots p_t^{q_t})$
and upper bound $h(\varepsilon^\degf c)$.
\subsubsection*{--- Lower bound on $h(\varepsilon/p_1^{q_1}\ldots p_t^{q_t})$:}\quad
Since $\varepsilon/p_1^{q_1}\ldots p_t^{q_t}$ is an $\cS$-integer
\begin{equation}
\begin{split}
\label{eq:hvarepsilon}
h(\varepsilon/p_1^{q_1}\ldots p_t^{q_t}) =
&\sum_{\substack{v_i \in \MK^0 \cap \cS}}
\frac{\ell_{v_i}}{d} \log^+\abs{\varepsilon/p_1^{q_1}\ldots p_t^{q_t}}_{v_i} \\
&+ \sum_{v_i \in \MK^\infty} \frac{\ell_{v_i}}{d} \log^+\abs{\varepsilon/p_1^{q_1}\ldots p_t^{q_t}}_{v_i}.
\end{split}
\end{equation}
From~\Cref{eq:Defc},
we have that for all $v_i$
\[
\degf\log \abs{\varepsilon/p_1^{q_1}\ldots p_t^{q_t}}_{v_i}
- \log \left(\abs{\varepsilon^\degf c}_{v_i}\right)
= \log \left(\abs{b}_{v_i}^{-1}\right).
\]
If $v_i \in \MK^0 \cap \cS$, by direct calculation we get
\begin{equation}
\label{eq:viFiniteLowerBound}
\log^+ \abs{\varepsilon/p_1^{q_1}\ldots p_t^{q_t}}_{v_i}
> \frac{1}{\degf}\log^+ (\abs{b}_{v_i}^{-1})
- \frac{\fh}{e_{v_i}} \log \underprime_i
\end{equation}
(where $\underprime_i$ is the prime in $\Z$ that $\mathbf{p_i}$ lies over
and $e_{v_i}$ is the ramification index of $v_i$, $i =1, \ldots, t$).
If $v_i \in \MK^{\infty}$, using the bound of \Cref{lem:UnitExistsHtNm} and dividing by $\degf$ we get
\[
\log \abs{\varepsilon/p_1^{q_1}\ldots p_t^{q_t}}_{v_i} \geq
\frac{1}{\degf}\left(\log \left(\abs{b}_{v_i}^{-1}\right) + \frac{1}{d} \log(\Nm(c))\right) - \frac{1}{2} A_3(\K) d^2R.
\]
Hence
\begin{equation}
\begin{split}
\label{eq:viInfiniteLowerBound}
\log^+ \abs{\varepsilon/p_1^{q_1}\ldots p_t^{q_t}}_{v_i}
&\geq
\frac{1}{\degf}\log^+ \left(\abs{b}_{v_i}^{-1}\right) - \frac{1}{2} A_3(\K) d^2R.
\end{split}
\end{equation}
Substituting \Cref{eq:viFiniteLowerBound} and \Cref{eq:viInfiniteLowerBound}
into \Cref{eq:hvarepsilon} and using the trivial bound $ e_{v_i} \ge 1$
and~\Cref{eq:sum l/d} we obtain
\begin{equation}
\label{eq:LHSLowerBoundOnh}
h(\varepsilon/p_1^{q_1}\ldots p_t^{q_t}) \geq
\frac{1}{\degf} h_\cS(b^{-1})
- \fh \log Q - \frac{1}{2} A_3(\K) d^2R,
\end{equation}
where again we let
\[
Q = \Nm(\mathbf{p_1}\cdots \mathbf{p_t}) \ge \underprime_1 \ldots \underprime_t.
\]
\subsubsection*{--- Upper bound on $h(\varepsilon^\degf c)$:} \quad
Since $\varepsilon^\degf c \in \o$ we have
\[
h(\varepsilon^\degf c) = \sum_{v_i \in \MK^\infty} \frac{\ell_{v_i}}{d} \log^+ \abs{\varepsilon^\degf c}_{v_i}.
\]
From \Cref{eq:sum l/d} and \Cref{eq:UnitHtNm}
we obtain
\[
h(\varepsilon^\degf c) \leq \frac{1}{d} \log (\Nm(c)) + \frac{1}{2} A_3(\K) \degf d^2 R.
\]
Since $r_i < \degf\fh$, from \Cref{eq:cideal} we get that
\begin{equation}
\label{eq:Proofhenc1dlog}
h(\varepsilon^\degf c) \leq \frac{1}{d} \log (\Nm(\mathbf{a}))
+ \frac{\degf\fh}{d} \log Q
+ \frac{1}{2} A_3(\K) \degf d^2 R,
\end{equation}
where again we let $Q = \Nm(\mathbf{p_1}\cdots \mathbf{p_t})$.
By definition $\ord_{\mathbf{p_i}}\, \mathbf{a} = 0$
for all finite valuations in $\cS$.
Substituting \Cref{eq:bIdealDecomp} into \Cref{eq:Proofhenc1dlog}
we obtain the upper bound
\begin{equation}
\begin{split}
\label{eq:RHSUpperBoundOnh}
h(\varepsilon^\degf c) &\leq h_{\MK \setminus \cS}(b^{-1})
+ \frac{\degf\fh}{d} \log Q
+ \frac{1}{2} A_3(\K) \degf d^2 R.
\end{split}
\end{equation}
\subsubsection*{--- Combining the bounds:} \quad
Substituting \Cref{eq:LHSLowerBoundOnh} and \Cref{eq:RHSUpperBoundOnh}
into \Cref{eq:hLHSRHSineq} we obtain
\begin{align*}
h_\cS(b^{-1}) < C_3(\K, f)A_1(d,s) &
\left(
\log^* Q +
h_{\MK \setminus \cS}(b^{-1}) \right) \\
& \times
P (1+ \sumloglogprimes/\log^* P)\prod_{i=1}^t \log \Nm(\mathbf{p_i}),
\end{align*}
where $C_3(\K, f)$ is an effectively computable constant.
Noting that $\log Q \leq t \log P$
we can simplify to get
\[
h_\cS(b^{-1})< C_3(\K,f) A_4(d,\cS)
\left(1
+
h_{\MK \setminus \cS}(b^{-1})
\right),
\]
where
\[
A_4(d,\cS) = A_1(d,s) \max\{1,t\} P (\log^* P+ \sumloglogprimes)\prod_{i=1}^t \log \Nm(\mathbf{p_i}).
\]
Using
\[
h(b) = h(b^{-1}) = h_{\MK \setminus \cS}(b^{-1}) + h_\cS(b^{-1})
\]
we now arrive at
\begin{equation}
\label{eq:hSbFinalBound}
h_\cS(b^{-1})
<
\frac{ C_3(\K,f) A_4(d,\cS)}
{1+ C_3(\K,f) A_4(d,\cS)}
\left(1
+
h(b)
\right),
\end{equation}
concluding the proof of \Cref{eq:SPartBoundGeneric}
in the case where $f$ splits in $\K$.
\subsubsection*{--- Proving \Cref{eq:SPartBoundGeneric}:} \quad
Now, suppose that $f$ does not split in $\K$.
Let $\L$ be the splitting field of $f$ over $\K$
and let $\cT$ be the set of places in $\L$ lying over $\cS$.
Then \Cref{eq:hSbFinalBound} holds in $\L$
where we replace $\cS$ by $\cT$.
For ease of notation, we introduce the subscript $\L$
when talking about constants defined in terms of $\L$ (some of them also depend on $\cS$).
In particular, $d_\L = [\L : \Q]$, $s_\L = \# \cT$ and so on.
Let $\dD = [\L : \K]$.
We note that
\begin{equation}
\begin{split}
\label{eq:NewParam-L}
& \qquad d_\L = \dD d, \quad t_{\L} \leq \dD t, \quad s_{\L} \leq \dD s, \\
& \quad P_\L \leq P^{\dD}, \qquad
\sumloglogprimes_\L \leq \dD \sumloglogprimes + C_4(\K,f), \\
& \prod_{\mathbf{q} \in \cT_0} \log \Nm(\mathbf{q}) <
C_5(\K, f) \prod_{\mathbf{p} \in \cS_0} (\log \Nm(\mathbf{p}))^{\dD},
\end{split}
\end{equation}
where $C_4(\K,f), C_5(\K,f)$ are effective constants that depend on $\dD$ and the number of prime ideals of $\K$ with norm less than $e^e$.
We also note that heights are independent of extension
in the sense that if $b \in \K$, then
\[
h(b) = h_\L(b),
\mand
h_\cS(b^{-1}) = h_\cT(b^{-1}).
\]
Using
\Cref{eq:hSbFinalBound} with $\K$ replaced by $\L$ and other parameters replaced
by the upper bounds in~\Cref{eq:NewParam-L} we conclude the proof of \Cref{eq:SPartBoundGeneric}.
\subsubsection*{--- Proving \Cref{eq:SPartBound2}:} \quad
This is the same proof as above, except using
\Cref{eq:DecomposableBound2} in place of \Cref{eq:DecomposableBound1} in the derivation of
\Cref{eq:hLHSRHSineq}.
\section{Proofs of \texorpdfstring{\Cref{thm:MultDepNorthcottNF,thm:MultDepGeneralThm}}{Theorems~\ref{thm:MultDepNorthcottNF} and~\ref{thm:MultDepGeneralThm}}}
\subsection{Dynamical canonical height function}
We introduce the dynamical canonical height function
which is useful in the proofs of
\Cref{thm:MultDepNorthcottNF,thm:MultDepGeneralThm}.
The following result is standard and proofs of its statements can be found in~\cite[Section~3.4]{Silv};
see also~\cite[Remark~B.2.7]{HindrySilverman} and~\cite[Proposition~3.2]{Zan}
regarding the effectiveness of the result.
\begin{lemma}
\label{lem:canonicalheights}
For a fixed $f \in \K(X)$ with $\degf = \deg f \geq 2$
there exists a function $\cht_f: \K \to [0, \infty)$
such that:
\begin{enumerate}[(a)]
\item There is an effectively computable constant $C_6(\K,f) $ such that \[
\abs{\cht_f(\alpha) - h(\alpha)} < C_6(\K,f),
\]
for all $\alpha \in \K$.
\item For all $\alpha \in \K$ we have
\[
\cht_f(f(\alpha)) = \degf\cht_f(\alpha).
\]
\item For all $\alpha \in \K$ we have
\[
\cht_f(\alpha) = 0 \iff \alpha \in \PrePer(f).
\]
\end{enumerate}
\end{lemma}
As a consequence, there exists an effectively computable constant $C_7(\K,f)$
such that for all $\ell \in \Z$, $\ell \geq 0$ and $\alpha \in \K$, $h(\alpha) > C_7(\K,f)$,
\begin{equation}
\label{eq:hfnalphaIneqs}
\degf^\ell C_7(\K,f) h(\alpha) > h(f^{(\ell )}(\alpha)) > \degf^\ell C_7(\K,f)^{-1}h(\alpha).
\end{equation}
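To illustrate \Cref{eq:hfnalphaIneqs} in the case $\K = \Q$, where $h(a/b) = \log \max\{\abs{a}, \abs{b}\}$ for a fraction $a/b$ in lowest terms, the normalised heights $h(f^{(\ell)}(\alpha))/\degf^{\ell}$ stabilise rapidly; a minimal Python sketch:
\begin{verbatim}
from fractions import Fraction
from math import log

def h(x):   # logarithmic height on Q
    x = Fraction(x)
    return log(max(abs(x.numerator), abs(x.denominator)))

f = lambda x: x**3 + x + 1   # a cubic in Z[X]
x, degf = Fraction(2), 3
for l in range(1, 5):
    x = f(x)
    print(l, h(x) / degf**l)  # ratios stabilise (at the canonical height)
\end{verbatim}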
\subsection{Proof of \texorpdfstring{\Cref{thm:MultDepNorthcottNF}}{Theorem~\ref{thm:MultDepNorthcottNF}}}
We first prove~\Cref{eq:MultDepNorthcottBound1}.
Suppose $\alpha$ satisfies \Cref{eq:mnuIntegers}.
Let $\degf = \deg f$.
We assume that
\begin{equation}
\label{eq:halpha large}
h(\alpha) >
\max\left\{
\frac{\degf+1}{\degf-1} C_6(\K,f),
\
2\eta_1(\K,f,\cS)^{-1}
\right\}
\end{equation}
with $C_6(\K,f)$ as in \Cref{lem:canonicalheights}~(a) and
$\eta_1(\K,f,\cS)$ as in \Cref{thm:SPartBoundNF}.
The first term in the maximum on the right hand side of~\Cref{eq:halpha large}, along with \Cref{lem:canonicalheights}~(a) and~(b), ensures that
\begin{equation}
\label{eq:hfalphabigger}
\begin{split}
h(f(\alpha))& > \cht_f(f(\alpha)) - C_6(\K,f) = \degf \cht_f(\alpha) - C_6(\K,f) \\
& > \degf h(\alpha) - (\degf+1) C_6(\K,f) > h(\alpha).
\end{split}
\end{equation}
The second term in the maximum in~\Cref{eq:halpha large} and \Cref{thm:SPartBoundNF}, along with~\Cref{eq:hfalphabigger}, implies that
\begin{equation}
\label{eq:hMkSlowerbound}
\begin{split}
h_{\MK \setminus \cS}(f^{(l)}(\alpha)^{-1})
&> \eta_1(\K,f,\cS) h(f^{(l)}(\alpha)) - 1 \\
&> \frac{\eta_1(\K,f,\cS)}{2} h(f^{(l)}(\alpha)),
\end{split}
\end{equation}
for any iterate $f^{(l)}(\alpha)$ with $l \geq 1$.
For any $\alpha \in \o$, we write $\yideal_\cS(\alpha)$ to mean
the $\cS$-free part of $[\alpha]$, that is, the ideal
\[
\yideal_\cS(\alpha) = \frac{[\alpha]}{\prod_{\fp \in \cS_0} \fp^{\ord_{\fp}\, \alpha}}.
\]
We now write $[f^{(m)}(\alpha)] = \fa \cdot \fb$ where
\[\fa = \yideal_\cS(f^{(m)}(\alpha)) \mand
\fb = \prod_{\fp \in \cS_0} \fp^{\ord_{\fp}\, f^{(m)}(\alpha)}.
\]
Observe that \Cref{eq:mnuIntegers} implies that $\fa \mid f^{(n)}(\alpha)$.
Setting
\[
k = m-n > 0,
\]
we write
\[
f^{(m)}(\alpha) = f^{(k)}\left(f^{(n)}(\alpha)\right).
\]
Since $f \in \o[X]$ and $\fa \mid f^{(n)}(\alpha)$, we have
$f^{(k)}\left(f^{(n)}(\alpha)\right) \equiv f^{(k)}(0) \pmod{\fa}$,
which, combined with $\fa \mid f^{(m)}(\alpha)$,
implies that $\fa \mid f^{(k)}(0)$.
Since $0$ is not a periodic point, we have $f^{(k)}(0) \ne 0$ and
combining the above observation
with the notation in \Cref{lem:canonicalheights}
we obtain
\begin{equation}
\begin{split}
\label{eq:hMkShfk0}
h_{\MK \setminus \cS} (f^{(m)}(\alpha)^{-1})
&\leq h_{\MK \setminus \cS} (f^{(k)}(0)^{-1})
\\
&\leq h(f^{(k)}(0)^{-1})
= h(f^{(k)}(0))
\\
&< \degf^k \cht_f(0) + C_6(\K,f).
\end{split}
\end{equation}
On the other hand, \Cref{eq:hMkSlowerbound} along with \Cref{lem:canonicalheights}~(a)
implies that
\begin{equation}
\label{eq:hMkSfmaGThfma}
h_{\MK \setminus \cS}(f^{(m)}(\alpha)^{-1})
> \frac{\eta_1(\K,f,\cS)}{2} (\degf^m \cht_f(\alpha) - C_6(\K,f)).
\end{equation}
Comparing \Cref{eq:hMkShfk0} and \Cref{eq:hMkSfmaGThfma},
since $k \leq m$,
we obtain
\[
\cht_f(\alpha) < 2 \eta_1(\K,f,\cS)^{-1} \cht_f(0)
+ \frac{2\eta_1(\K,f,\cS)^{-1} + 1}{\degf^m}C_6(\K,f).
\]
As $\cht_f(0) < C_6(\K, f)$
we obtain the upper bound
\[
h(\alpha) < \left(2\eta_1(\K,f,\cS)^{-1} + 1\right)\left(1 + \frac{1}{\degf^m}\right)C_6(\K,f),
\]
as required.
The proof for~\Cref{eq:MultDepNorthcottBound2} is the same, except with $\eta_2$ instead of $\eta_1$ throughout.
\subsection{Proof of \texorpdfstring{\Cref{thm:MultDepGeneralThm}}{Theorem~\ref{thm:MultDepGeneralThm}}}
First we establish the result when one of $r$ or $s$ is equal to 0.
We obtain an explicit dependence on $\cS$ which may be interesting in its own right.
In particular, this gives a somewhat explicit version of~\cite[Proposition~1.5(a)]{Kriegeretal}.
More generally, an explicit version of \Cref{lem:falphainSunits} for $f(z) \in \K(z)$ can be derived from the
proof of~\cite[Proposition~1.5(a)]{Kriegeretal}.
The key ingredient of the proof is Siegel's Theorem for curves of genus 0
which can be made fully explicit using Baker's method
(see, for example, the end of~\cite[Theorem~4.3]{BakerBook}).
\begin{lemma}
\label{lem:falphainSunits}
Let $f(X) \in \o[X]$ be a polynomial with at least 3 distinct roots.
Suppose that $\alpha \in \o$ satisfies
\[
f(\alpha) \in \o_\cS^*.
\]
Then
\[
h(\alpha) < \frac{\eta_1(\K,f,\cS)^{-1}}{\degf}
+ \left(1 + \frac{1}{\degf}\right) C_6(\K, f),
\]
with $\eta_1(\K, f, \cS)$ as in \Cref{thm:SPartBoundNF}, $\degf = \deg f$
and $C_6(\K,f)$ as in \Cref{lem:canonicalheights}.
\end{lemma}
\begin{proof}
If $f(\alpha) \in \o_\cS^*$, then $h_\cS(f(\alpha)^{-1}) = h(f(\alpha)^{-1}) = h(f(\alpha))$.
Substituting into \Cref{thm:SPartBoundNF} we obtain
\begin{equation}
\label{eq:falphainSunitsProof1}
h(f(\alpha)) < \eta_1(\K,f,\cS)^{-1}.
\end{equation}
By \Cref{lem:canonicalheights} we have the inequality
\begin{equation}
\begin{split}
\label{eq:falphachtbound}
h(f(\alpha)) &> \cht_f(f(\alpha)) - C_6(\K,f) = \degf\cht_f(\alpha) - C_6(\K,f) \\
&> \degf h(\alpha) - (\degf+1) C_6(\K,f).
\end{split}
\end{equation}
The result now follows from substituting \Cref{eq:falphachtbound} into \Cref{eq:falphainSunitsProof1}.
\end{proof}
We now prove \Cref{thm:MultDepGeneralThm}.
\Cref{thm:MultDepGeneralThm} essentially follows from the proof of~\cite[Theorem~1.7]{BOSS}
(in the case where $\alpha \in R_{\cS_{f,\Gamma}}$)
upon replacing the use of~\cite[Theorem~1.2]{BOSS} with \Cref{lem:falphainSunits}
and the use of~\cite[Theorem~1.3]{BOSS} with \Cref{thm:MultDepNorthcottNF}.
We use the same cases as in the proof of~\cite[Theorem~1.7]{BOSS} and just indicate the changes necessary.
As in~\cite{BOSS}, we can effectively bound the height of elements of $\PrePer(f)$ (see \Cref{lem:canonicalheights}~(a) and~(c)),
hence we assume $\alpha \notin \PrePer(f)$ from now on.
First, if $r = 0$ or $s = 0$, we then have that $f^{(n)}(\alpha) \in \o_\cS^*$ for some $n \geq 1$.
\Cref{lem:falphainSunits} bounds the height of $f^{(n-1)}(\alpha)$.
From this, \Cref{lem:canonicalheights} provides an effective upper bound on $h(\alpha)$ as required.
By replacing $(r,s)$ by $(-r,-s)$ we may assume that $r > 0$.
If, in addition, $s < 0$, then, as in~\cite{BOSS}, we can conclude that $f^{(m)}(\alpha) \in \o_\cS^*$ and bound $h(\alpha)$ as above.
If either $s \geq 2$ or $r \geq 2$, then the argument in~\cite{BOSS}
applies directly
(noting that as $\deg f \geq 3$, we can always apply one of~\cite[Theorem~2.1]{BEGBerczes} or~\cite[Theorem~2.2]{BEGBerczes} to obtain effective results).
Finally, the case $r = s = 1$ is just a consequence of \Cref{thm:MultDepNorthcottNF}, which concludes the proof.
\section{Proof of \texorpdfstring{\Cref{thm:LowerBoundm}}{Theorem~\ref{thm:LowerBoundm}}}
\subsection{The case where \texorpdfstring{$m$}{m} is much larger than \texorpdfstring{$n$}{n}}
\begin{lemma}
\label{lem:MultDepPrimeSupportDependsOnmn}
Let $f(X)\in\o[X]$ be a polynomial with at least 3 distinct roots.
Let $\alpha \in \o$, $m, n \in \Z$, $m > n \geq 0$ such that $f^{(m)}(\alpha), f^{(n)}(\alpha) \neq 0$.
Let
\[
L =
\log^* \left(\frac{h\left(f^{(m)}(\alpha)\right)}{h\left(f^{(n)}(\alpha)\right) + 1}\right).
\]
Then
\[
\largestsupp\left(\frac{f^{(m)}(\alpha)}{f^{(n)}(\alpha)}\right)
>
C_8(\K, f)
\frac{L \log^* L}{\log^* \log^* L},
\]
where $C_8(\K, f) > 0$ is an effectively computable constant.
\end{lemma}
\begin{proof}
For any $X > 0$, let $\cS^X = \MK^{\infty} \cup \MK^{\leq X}$,
where
\[
\MK^{\leq X} = \{\abs{\cdot}_{v_\mathbf{p}} \mid \Nm(\mathbf{p}) \leq X\}.
\]
If $X > 1$, then, since at most $d$ prime ideals lie over each prime $p \in \Z$,
using an explicit bound on the prime counting function~\cite[Theorem~4.6]{Apostol}
we derive
\begin{equation}
\label{eq:sXtX}
\begin{split}
& t_X = \# \(\cS^X \cap \MK^0\) \leq 6dX/\log X, \\
& s_X = \# \cS^X \leq d + 6dX/\log X.
\end{split}
\end{equation}
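The factor $6X/\log X$ above comes from the explicit Chebyshev-type bound $\pi(X) \leq 6X/\log X$ of~\cite[Theorem~4.6]{Apostol}; for $\K = \Q$ (so $d = 1$) it is easily checked numerically:
\begin{verbatim}
from math import log
from sympy import primepi

for X in (10, 100, 1000, 10**4, 10**5):
    assert primepi(X) <= 6 * X / log(X)
    print(X, int(primepi(X)), round(6 * X / log(X)))
\end{verbatim}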
Let
\[
X = \largestsupp\left(f^{(m)}(\alpha) / f^{(n)}(\alpha)\right).
\]
Then $\left(f^{(n)}(\alpha) / f^{(m)}(\alpha)\right) \in \o_{\cS^X}$.
Therefore,
\begin{equation}
\label{eq:PropProofhMSUpperBound}
\begin{split}
h_{\MK \setminus \cS^X}(f^{(m)}(\alpha)^{-1})
&\leq
h_{\MK \setminus \cS^X}(f^{(n)}(\alpha)^{-1}) \\
&\leq
h(f^{(n)}(\alpha)^{-1}) = h(f^{(n)}(\alpha)).
\end{split}
\end{equation}
Suppose that $X > 1$, hence $t_X > 0$.
Then \Cref{thm:SPartBoundNF}
implies that
\begin{equation}
\label{eq:PropProofhMSLowerBound}
h_{\MK \setminus \cS^X}(f^{(m)}(\alpha)^{-1})
> \eta_2(\K,f,\cS^X) \cdot h(f^{(m)}(\alpha)) - 1.
\end{equation}
Combining \Cref{eq:PropProofhMSUpperBound} and \Cref{eq:PropProofhMSLowerBound}
we obtain
\begin{equation}
\label{eq:HRatioBound1}
\frac{h\left(f^{(m)}(\alpha)\right)}{h\left(f^{(n)}(\alpha)\right) + 1}
<
\eta_2(\K,f,\cS^X)^{-1}.
\end{equation}
We note that
\begin{equation}
\label{eq:boundsWithXlogX}
\begin{split}
& A_2(d \dD \fh_\L, t_X\dD) \leq C_9(\K, f)^{X/\log X}
\end{split}
\end{equation}
for some effectively computable constant $C_9(\K, f)$.
Substituting \Cref{eq:sXtX} and \Cref{eq:boundsWithXlogX}
into \Cref{eq:HRatioBound1}
we obtain
\[
\frac{h\left(f^{(m)}(\alpha)\right)}{h\left(f^{(n)}(\alpha)\right) + 1}
< \left(C_{10}(\K, f) \log^* X\right)^{C_{10}(\K, f) X/\log^* X},
\]
where $C_{10}(\K, f) > 0$ is effectively computable.
Taking logs, we obtain
\[
\log^*\left(\frac{h\left(f^{(m)}(\alpha)\right)}{h\left(f^{(n)}(\alpha)\right) + 1}\right)
< C_{11}(\K, f) X \frac{\log^* \log^* X}{\log^* X},
\]
where $C_{11}(\K, f) > 0$ is effectively computable.
The desired result follows after some simple calculation.
In the case where $X = 1$, the same procedure but using
\Cref{eq:SPartBoundGeneric} instead of \Cref{eq:SPartBound2}
shows that
\[
\frac{h\left(f^{(m)}(\alpha)\right)}{h\left(f^{(n)}(\alpha)\right) + 1}
<
C_{12}(\K, f),
\]
where $C_{12}(\K,f)$ is an effectively computable constant, as required.
\end{proof}
\subsection{The case where \texorpdfstring{$m$}{m} and \texorpdfstring{$n$}{n} are of comparable sizes}
\begin{lemma}
\label{lem:MultDepPrimeSupport}
Let $f(X)\in\o[X]$ be a polynomial with at least 3 distinct roots
and for which $0$ is not periodic.
Let $\alpha \in \o$, $m, n \in \Z$, $m > n \geq 0$ such that $f^{(m)}(\alpha)$, $f^{(n)}(\alpha) \neq 0$.
Then
\[
\largestsupp\left(\frac{f^{(m)}(\alpha)}{f^{(n)}(\alpha)}\right)
>
C_{13}(\K, f)
\frac{\loght\left(f^{(n)}(\alpha)\right) \log^* \loght\left(f^{(n)}(\alpha)\right)}{\log^* \log^* \loght\left(f^{(n)}(\alpha)\right)},
\]
where $C_{13}(\K,f) > 0$ is an effectively computable constant.
\end{lemma}
\begin{proof}
Define $\cS^X$ and $X$ as in the proof of \Cref{lem:MultDepPrimeSupportDependsOnmn}.
Then $\left(f^{(n)}(\alpha) / f^{(m)}(\alpha)\right) \in \o_{\cS^X}$.
If $X = 1$, hence $\cS^X = \MK^{\infty}$,
applying~\Cref{thm:MultDepNorthcottNF} with $\cS = \MK^{\infty}$
and $\alpha = f^{(n)}(\alpha)$
yields an effective upper bound on $h(f^{(n)}(\alpha))$
in terms of $\K$ and $f$, as required.
Otherwise, $\# (\cS^X \cap \MK^{0}) > 0$.
Hence, applying~\Cref{thm:MultDepNorthcottNF}
with $\cS = \cS^X$ and $\alpha = f^{(n)}(\alpha)$, we obtain
\begin{equation}
\label{eq:hBoundWithSx}
\begin{split}
\frac{h(f^{(n)}(\alpha))}{c_2(\K,f)} <\ &
\eta_2(\K,f,\cS^X)^{-1}.
\end{split}
\end{equation}
We can now proceed as in the proof of \Cref{lem:MultDepPrimeSupportDependsOnmn},
except with \Cref{eq:hBoundWithSx} in place of \Cref{eq:HRatioBound1},
to obtain the desired result.
\end{proof}
\subsection{Concluding the proof}
We now prove \Cref{thm:LowerBoundm}.
If $h(f^{(n)}(\alpha)) > C_7(\K,f)$,
with $C_7(\K, f)$ as in \Cref{eq:hfnalphaIneqs},
then
a combination of \Cref{lem:MultDepPrimeSupportDependsOnmn},
used for
\[
m-n \geq \frac{ \lambda(f^{(n)}(\alpha))}{\log \degf},
\]
and
of \Cref{lem:MultDepPrimeSupport} otherwise
implies the result.
Otherwise, $h(f^{(n)}(\alpha)) \leq C_7(\K,f)$.
The result now follows from \Cref{lem:MultDepPrimeSupportDependsOnmn}.
\section{Proof of \texorpdfstring{\Cref{thm:WeakZsigmondy}}{Theorem~\ref{thm:WeakZsigmondy}}}
If $\mathbf{p} \mid f^{(m)}(\alpha)$ and $\mathbf{p} \mid f^{(n)}(\alpha)$
with $m > n$,
then, writing
\[
f^{(m)}(\alpha) = f^{(m-n)}\left(f^{(n)}(\alpha)\right),
\]
we see that $\mathbf{p} \mid f^{(m-n)}(0)$.
Fix a $k \in \Z$, $k > 0$ and
let $\cS_k$ be the finite set of places containing $\MK^\infty$
and all finite places corresponding to a prime dividing a value in the set
\[
\{f^{(1)}(0), f^{(2)}(0), \cdots, f^{(k)}(0)\}
\]
($\cS_k$ is finite since $0$ is not periodic).
With the above observation, to prove the desired statement for
\[
k(m, \alpha) = k,
\]
it suffices to show that
\[
h_{\cS_k}(f^{(m)}(\alpha)^{-1}) < h(f^{(m)}(\alpha)).
\]
The case where $\cS_k$ contains no finite places is trivial.
Hence, we assume that $\cS_k$ contains at least one finite place.
Then, by \Cref{thm:SPartBoundNF}, we have that
\begin{equation}
\label{eq:LowerBoundhdiff}
h(f^{(m)}(\alpha)) - h_{\cS_k}(f^{(m)}(\alpha)^{-1}) > \eta_2(\K, f, \cS_k)h(f^{(m)}(\alpha)) - 1.
\end{equation}
We note the following inequalities for $\cS_k$, which are consequences of \Cref{lem:canonicalheights} and simple calculation
(for the last inequality note that
$\sum_{i=1}^t \log (\Nm(\mathbf{p_i})) < C_{14}(\K, f) \degf^k$):
\[t < C_{14}(\K, f) \degf^k, \qquad P < e^{ C_{14}(\K, f) \degf^k}, \qquad
\prod_{i=1}^t \log (\Nm(\mathbf{p_i})) < e^{C_{14}(\K, f) \degf^k}
\]
for an effectively computable $C_{14}(\K, f)$. Hence,
\begin{equation}
\label{eq:Proofeta2cSkBound}
\eta_2(\K, f, \cS_k)^{-1} < e^{C_{15}(\K, f) \degf^k}.
\end{equation}
Substituting \Cref{eq:Proofeta2cSkBound}
into \Cref{eq:LowerBoundhdiff}, the required statement holds for any $k$ such that
\[
\log h(f^{(m)}(\alpha)) > C_{15}(\K, f) \degf^k.
\]
If $h(f^{(m)}(\alpha))$ is sufficiently large, then \Cref{thm:WeakZsigmondy} follows immediately.
Otherwise, $h(f^{(m)}(\alpha))$ is bounded and we may pick $c_6(\K, f)$
small enough such that $k(m, \alpha) = 0$.
\section*{Acknowledgement}
The authors are grateful to Attila B\' erczes for supplying a proof of a version of
\Cref{thm:SPartBoundNF} in terms of the norm of the $\cS$-part of
$f(\alpha)$ and Alina Ostafe for her encouragement and comments on an initial draft of the paper.
This work was supported, in part, by the Australian Research Council Grant~DP180100201.
\section{Introduction}
Estimation under unknown inputs (whose models or statistical properties are
not assumed to be available), also called unknown input decoupled
estimation, has received much attention in the past. In the existing
literature, many uncertain phenomena in control systems have been modeled as
unknown inputs, including system faults/attacks \cite{Johansen2014}-\cite{Yang2019},
abrupt/impulsive disturbances or parameters \cite{Ohlsson2012},
\cite{Chai2017}, arbitrary vehicle tires/ground interactions \cite{Johansen2007}, etc. A seminal work on unknown input decoupled estimation is
due to Hautus \cite{Hautus1983} where it has been shown that the strong
detectability criterion, including a rank matching condition and the system
being minimum phase requirement, is necessary and sufficient for the
existence of a stable observer for estimating the state/unknown input for
deterministic systems\footnote{The strong$^{\ast }$ detectability concept was also introduced in \cite{Hautus1983}. The two criteria, as discussed in \cite{Hautus1983}, are
equivalent for discrete-time systems, but differ for continuous-time systems.}.
Works on the filtering case, e.g., \cite{Darouach2003}-\cite{Su2015}, have
similar rank matching and minimum phase requirements as in \cite{Hautus1983}. Extensions to cases with rank-deficient shaping matrices have
been discussed in \cite{Hsieh2009}-\cite{Frazzoli2016}. It has also been
shown in the above works that for unbiased and minimum variance estimation
of the state/unknown input, the initial guess of the state must be unbiased.
Very recently, connections between the above-mentioned results and Kalman
filtering (KF) of systems within which the unknown input is taken to be a
white noise of unbounded variance, have been established in \cite{Bitmead2019}. There are also some works dedicated to alleviating the strong
detectability conditions and the unbiased initialization requirement (see
\cite{Kong2019Auto}-\cite{Kong2020Auto} and the references therein).
However, most existing filtering works mentioned above assume that the
process and measurement noise covariances (denoted as $Q$ and $R$,
respectively) are perfectly known for the optimal filter design. This raises
the question of whether and under what conditions one can identify $Q/R$
from real data. We believe that addressing the identifiability issue of
noise covariances under arbitrary unknown inputs is important because in
practice the noise covariances are not known \textit{a priori} and have to
be identified from real closed-loop data where there might be unknown system
uncertainties such as faults, etc. Another relevant application is path
planning of sensing robots for tracking targets whose motions might be
subject to abrupt disturbances (in the form of unknown inputs), as
considered in our recent work \cite{Kong2021}.
To the best of our knowledge, \cite{Yu2016}-\cite{Pan2013} are the only
existing works on identification of stochastic systems under unknown inputs.
However, in the former two works, the unknown inputs are assumed to be a
wide-sense stationary process with rational power spectral density or
deterministic but unknown signals, respectively. Here, we do not make such
assumptions. Also, we are mainly interested to investigate the
identifiability of the original noise covariances for linear time-invariant
(LTI) stochastic systems with unknown inputs. This is in contrast to the
work in \cite{Yu2016} where the measurement noise covariance of the
considered system is assumed to be known, and the input autocorrelations are
identified from the output data and then used for input realization and
filter design. Our work is also different from subspace identification where
the stochastic parameters of the system are estimated, which can be used to
calculate the optimal estimator gain \cite{Gevers2006}. It should be
remarked that apart from filter design, knowledge of noise covariances can
also be used for other purposes such as performance monitoring \cite{Moni2011}.
We note that noise covariance estimation is a topic of lasting interest for
the systems and control community, and the literature is fairly mature.
Existing noise covariance estimation methods can be classified as Bayesian,
maximum likelihood, covariance matching, and correlation techniques, etc.,
(see \cite{Hu2011}-\cite{Arnold2020} and the references therein).
Especially, the correlation methods can be classified into two groups where
the state/measurement prediction error (or measurement difference), as a
stochastic time series, is computed either explicitly via a stable filter
(for example, the autocovariance least-squares (ALS) framework in \cit
{Rawlings2006}-\cite{Rawlings2009}) or implicitly by manipulating the
measurements (see \cite{Xia2014} for the case using one step measurement,
and \cite{Dunik2018}-\cite{Moghe2019} for the case using multi-step
measurements, respectively, in computing the measurement differences).
Still, most above-mentioned noise covariance estimation methods have not
considered the case with unknown inputs. This observation motivates us to
study the identifiability of $Q/R$ for systems under unknown inputs.
Especially, we adopt the correlation-based methodology, and mainly discuss
the implicit correlation-based frameworks, in particular, the measurement
difference approach using single-step measurement.
Moreover, given that this paper focuses on the identifiability of $Q$/$R$
via the measurement difference approach using single-step measurement, some
of the assumptions (e.g., the output matrix $C$ is assumed to be of full
column rank) seem to be stringent. Nevertheless, we believe the
consideration of the case using single-step measurement serves as the first
crucial step to fully understand the identifiability of $Q$/$R$ under the
presence of unknown inputs. A thorough investigation of the more general
case using multi-step measurements is the subject of our current and future
work.
Finally, we remark that the considered problem is inherently a theoretical
one, although we are motivated by its potential applications in practice.
However, we believe that addressing the considered question specifically for
LTI systems is the first step towards a more thorough understanding on the
topic.
The remainder of the paper is structured as follows. In Section II, we
recall preliminaries on estimation of systems with unknown inputs. Section
III contains our major results for the single-step measurement case. Section
IV illustrates the theoretical results with some numerical examples. Section
V concludes the paper.
\textbf{Notation:} $A^{\mathrm{T}}$ denotes the transpose of matrix $A$.
$\mathbf{R}^{n}$ stands for the $n$-dimensional Euclidean space. $I_{n}$
stands for the identity matrix of dimension $n$. $\mathbf{0}$ stands for the
zero matrices with compatible dimensions. $\mathbb{C}$ and $\left\vert
z\right\vert$ denote the field of complex numbers, and the absolute value
of a given complex number $z$, respectively.
\section{\label{lyap}Preliminaries and Problem Statement}
We consider the discrete-time LTI model of the plant:
\begin{subequations}
\label{plant}
\begin{align}
x_{k+1} &= Ax_{k}+Bd_{k}+Gw_{k}, \\
y_{k} &= Cx_{k}+Dd_{k}+v_{k},
\end{align}
\end{subequations}
where $x_{k}\in \mathbf{R}^{n},$ $d_{k}\in \mathbf{R}^{q}$, and
$y_{k}\in \mathbf{R}^{p}$ are the state, the unknown input, and the
output, respectively; $w_{k}\in \mathbf{R}^{g}$ and $v_{k}\in
\mathbf{R}^{p}$ represent zero-mean mutually uncorrelated process
and measurement noises with covariances $Q\in \mathbf{R}^{g\times
g}$ and $R\in \mathbf{R}^{p\times p}$, respectively; $A,B,G,C,$ and
$D$ are real and known matrices with appropriate dimensions; the
pair $(A,C)$ is assumed to be detectable.
Without loss of generality, we assume $n\geq g$ and $G\in \mathbf{R}^{n\times g}$ to be of full column rank (when this is not the case, one can
remodel the system to obtain a full column rank shaping matrix $\overline{G}$).
For system (\ref{plant}), a major question of interest is the existence
condition of an observer/filter that can estimate the state/unknown input
with asymptotically stable error, using only the output. To address these
questions, concepts such as strong detectability and strong estimator have
been rigorously discussed in \cite{Hautus1983} for deterministic systems\footnote{Extensions of strong detectability to linear stochastic systems have
been discussed in \cite{Frazzoli2016}.}. As remarked in \cite{Hautus1983},
the term \textquotedblleft strong\textquotedblright\ is to emphasize that
state estimation has to be obtained without knowing the unknown input. For
later use, we include the strong detectability conditions in the sequel.
Note, however, that the measurement-difference approach does not require
strong detectability since we do not need to design a filter to explicitly
estimate the state/unknown input. Instead, we manipulate the system outputs
to implicitly estimate the state and construct a stationary time series. The
required conditions associated with the measurement-difference approach are
different from the strong detectability conditions and presented in
Proposition \ref{case_abc} and Theorem \ref{decouple_sufficent}.
\begin{lemma}
\label{condition}(\cite{Hautus1983}) The following statements hold
true: (i) the system (\ref{plant}) has a strong estimator if and
only if it is strongly detectable; (ii) system (\ref{plant}) is
strongly detectable if and only if
\begin{equation}
rank\left( \left[
\begin{array}{cc}
CB & D \\
D & 0
\end{array}
\right] \right) =rank(D)+rank\left( \left[
\begin{array}{c}
B \\
D
\end{array}
\right] \right) , \label{rankmatching}
\end{equation}
and all its invariant zeros are stable, i.e.,
\begin{equation}
rank\left( \underset{\mathcal{M}\left( z\right) }{\underbrace{\left[
\begin{array}{cc}
zI_{n}-A & -B \\
C & D
\end{array}
\right] }}\right) =n+rank\left( \left[
\begin{array}{c}
B \\
D
\end{array}
\right] \right) , \label{miniphase}
\end{equation}
for all $z\in \mathbb{C}\ $and $\left\vert z\right\vert \geq 1$.
\end{lemma}
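For a concrete system, both conditions of Lemma \ref{condition} can be verified numerically. The sketch below is illustrative only: it assumes the Rosenbrock pencil is square (i.e., $p=q$) and regular, in which case the invariant zeros are its finite generalized eigenvalues.
\begin{verbatim}
import numpy as np
from scipy.linalg import eig

def strong_detectability_checks(A, B, C, D):
    n, q = A.shape[0], B.shape[1]
    p = C.shape[0]
    # Rank matching condition (rankmatching):
    lhs = np.block([[C @ B, D], [D, np.zeros((p, q))]])
    ok_rank = np.linalg.matrix_rank(lhs) == \
        np.linalg.matrix_rank(D) + np.linalg.matrix_rank(np.vstack([B, D]))
    # Minimum phase condition (miniphase), square regular pencil case:
    P = np.block([[A, B], [C, D]])
    E = np.block([[np.eye(n), np.zeros((n, q))],
                  [np.zeros((p, n)), np.zeros((p, q))]])
    w = eig(P, E, right=False)     # generalized eigenvalues
    finite = w[np.isfinite(w)]     # discard infinite eigenvalues
    ok_zeros = bool(np.all(np.abs(finite) < 1))
    return ok_rank, ok_zeros
\end{verbatim}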
Conditions (\ref{rankmatching})-(\ref{miniphase}) are the so-called rank
matching and minimum phase requirements, respectively. Note that Lemma \ref{condition} holds for both the deterministic and stochastic cases (hence we
use \textquotedblleft estimator\textquotedblright\ instead of KF/Luenberger
observer; for more detailed discussions on the design and stability of KF
under unknown inputs, we refer the reader to \cite{Darouach2003}-\cite{Frazzoli2016} and the references therein). For system (\ref{plant}), the
noise covariances $Q$ and $R$ are usually not available, and have to be
identified from data. However, all existing filtering methods for systems
with unknown inputs in the literature adopt the assumption of knowing $Q$
and $R$ exactly, which is not practical. The identifiability questions of $Q$
and/or $R$ considered in this paper are formally stated as follows:
\begin{problem}
\label{problem1}Given system (\ref{plant}) with unknown inputs, and known
$A,B,G,C,$ and $D$, we aim to investigate the following questions: using the
measurement difference approach, whether and under what conditions one can
(i) uniquely jointly identify $Q$ and $R$; (ii) uniquely identify $Q$ or $R$,
assuming the other covariance to be known.
\end{problem}
\section{\label{internal}Identifiability of Q/R Using the Single-step
Measurement Difference Approach}
This section contains the first major results of the paper. We show that, in
theory, the single-step measurement difference approach does not have a
unique solution for jointly estimating $Q$ and $R$ of system (\ref{plant}).
Estimating $Q$ or $R$, assuming the other to be known, will also be
considered. For deriving the results in this section, we will assume that $C$
is of full column rank. We remark that although the assumption on $C$ is
restrictive, the discussions in the sequel bring some insights into the
identifiability study of $Q$/$R$, i.e., even with the above stringent
assumption, it will be shown that only under restrictive conditions, $Q$/$R$
can be uniquely identified.
\subsection{Conditions for obtaining an unknown input decoupled stationary
time series}
When $C$ is of full column rank, from (\ref{plant}), it can be obtained that
\begin{equation}
\begin{aligned}
y_{k+1} &= Cx_{k+1}+Dd_{k+1}+v_{k+1} \\
&= CAx_{k}+CBd_{k}+CGw_{k}+Dd_{k+1}+v_{k+1},
\end{aligned}
\label{output}
\end{equation}
and
\begin{equation}
x_{k}=My_{k}-MDd_{k}-Mv_{k}, \label{state}
\end{equation}
where
\begin{equation}
M=(C^{\mathrm{T}}C)^{-1}C^{\mathrm{T}}. \label{M}
\end{equation}
By substituting (\ref{state}) into (\ref{output}), one has that
\begin{equation*}
\begin{aligned}
\overline{z}_{k}=y_{k+1}-CAMy_{k} &= (CB-CAMD)d_{k} \\
&\quad +Dd_{k+1}+CGw_{k}+v_{k+1}-CAMv_{k}.
\end{aligned}
\end{equation*}
Given that we do not assume to have any knowledge of the unknown input, it
is not possible for us to conduct any analysis of the statistical properties
of $\overline{z}_{k}$. Hence, a necessary and sufficient condition to
decouple the influence of the unknown input on $\overline{z}_{k}$ is the
existence of a nonzero matrix $K\in \mathbf{R}^{r\times p}$ such that
\begin{equation}
\begin{aligned}
z_{k}=K\overline{z}_{k} &= K(CB-CAMD)d_{k}+KDd_{k+1} \\
&\quad +KCGw_{k}+Kv_{k+1}-KCAMv_{k},
\end{aligned}
\label{measurement_diff}
\end{equation}
with
\begin{equation}
K\underset{H\in \mathbf{R}^{p\times 2q}}{\underbrace{\left[
\begin{array}{ll}
C(B-AMD) & D
\end{array}
\right] }}=0. \label{necessary_con1}
\end{equation}
\begin{remark}
\label{rem1} For the single-step measurement difference approach, later we
will establish (i) necessary conditions under which $Q$ and $R$ can be
uniquely jointly identified (see Proposition \ref{ALS_fullrank}); (ii)
necessary and sufficient conditions under which $Q$ can be uniquely
identified, when $R$ is known (see Proposition \ref{only_Q} and Corollary
\ref{corco_estimateQ}); (iii) necessary conditions under which $R$ can be
uniquely identified, when $Q$ is known (see Proposition \ref{corco_estimateR}).
Moreover, it will be shown that for achieving the results mentioned
above, the measurement difference approach requires some decoupling
conditions for constructing a stationary time series (see Proposition \ref{case_abc}). The latter conditions are proved to be sufficient (see Theorem
\ref{decouple_sufficent}) for the strong detectability requirement in \cite{Hautus1983}. Also, if the existence conditions on $K$ are satisfied, then
one can use standard techniques to calculate $K$ \cite[Chap.~6]{Laub2005}.
\end{remark}
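Concretely, the rows of a valid $K$ can be taken as a basis of the left null space of $H$ in (\ref{necessary_con1}), computed, e.g., from a singular value decomposition; a minimal sketch with randomly generated data:
\begin{verbatim}
import numpy as np

def decoupling_gain(H, tol=1e-10):
    # Rows of K span the left null space of H, so that K H = 0.
    U, s, Vt = np.linalg.svd(H.T)   # null space of H^T
    rank = int(np.sum(s > tol))
    return Vt[rank:]

rng = np.random.default_rng(0)
H = rng.standard_normal((5, 2))     # p = 5, 2q = 2, full column rank
K = decoupling_gain(H)              # K has p - 2q = 3 rows
assert np.allclose(K @ H, 0)
\end{verbatim}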
There are a few potential scenarios when (\ref{necessary_con1}) holds:
\begin{equation}
\begin{array}{l}
\text{(a) } C(B-AMD)\neq 0\text{, } D\neq 0\text{,} \\
\text{(b) } C(B-AMD)=CB\neq 0\text{, } D=0\text{,} \\
\text{(c) } C(B-AMD)=0\text{, } D\neq 0\text{,} \\
\text{(d) } C(B-AMD)=CB=0\text{, } D=0\text{.}
\end{array}
\label{four_cases}
\end{equation}
Note that since $C$ is assumed to be of full column rank, case (d) in (\ref{four_cases}) cannot happen. This is because when $C$ is of full column rank,
\begin{equation}
\begin{array}{l}
C(B-AMD)=CB=0\text{, } D=0 \\
\Rightarrow CB=0,\ D=0 \\
\Rightarrow B=0,\ D=0,
\end{array}
\label{unknown_vanish}
\end{equation}
i.e., the unknown input $d_{k}$ vanishes in system (\ref{plant}). Note that
here in this work we focus on the case with unknown inputs, i.e., the
situation of $B=0$ and $D=0$ is not applicable. For cases (a)-(c), we have
the following immediate results.
\begin{proposition}
\label{case_abc}Given system (\ref{plant}) with $C$ being of full column
rank, then the following statements hold true:
(i) for case (a) in (\ref{four_cases}), there exists a matrix $K$ such that
the equality in (\ref{necessary_con1}) holds if and only if
\begin{equation}
rank(H)=2q, \label{nece1}
\end{equation}
where $H$ is defined in (\ref{necessary_con1}); for condition (\ref{nece1})
to hold, it is necessary that $rank(B-AMD)=q$, $n\geq q$, $rank(D)=q$, and
$p\geq 2q$;
(ii) for case (b) in (\ref{four_cases}), there exists a matrix $K$ such that
the equality in (\ref{necessary_con1}) holds if and only if $rank(B)=q$;
(iii) for case (c) in (\ref{four_cases}), there exists a matrix $K$ such
that the equality in (\ref{necessary_con1}) holds if and only if $B-AMD=0$ and
$rank(D)=q$.
\end{proposition}
\begin{proof}
(i) For case (a), from the solution properties of matrix equations \cite[Chap.~6]{Laub2005}, there exists $K$ such that the equality in (\ref{necessary_con1}) holds if and only if $rank(H)=2q$. The rest of the proof
for part (i) follows naturally from condition (\ref{nece1}). Parts (ii) and
(iii) follow similarly.
\end{proof}
Note that case (c) is unrealistic as it requires $B-AMD=0$. However, we
include the discussion on it just for completeness. One would wonder how
stringent the decoupling condition in (\ref{necessary_con1}) and possible
cases (a)-(c) in (\ref{four_cases}) are, compared to the strong
detectability conditions in Lemma \ref{condition}. This question is answered
in the following theorem.
\begin{theorem}
\label{decouple_sufficent}For cases (a)-(c), $C$ being of full column rank
and the decoupling condition (\ref{necessary_con1}) are sufficient for the
strong detectability conditions in Lemma \ref{condition}.
\end{theorem}
\begin{proof}
We prove the claim for cases (a)-(c), respectively.
For case (a), we note from Proposition \ref{case_abc} that $D$ has to be of
full column rank. This further implies that the rank matching condition (\ref{rankmatching}) holds. Note that
\begin{eqnarray*}
\begin{bmatrix}
I_{n} & AM \\
0 & I_{p}
\end{bmatrix}
\mathcal{M}\left( z\right) &=&
\begin{bmatrix}
zI_{n}-A+AMC & AMD-B \\
C & D
\end{bmatrix}
\\
&=&\underset{\overline{\mathcal{M}}\left( z\right) }{\underbrace{
\begin{bmatrix}
zI_{n} & AMD-B \\
C & D
\end{bmatrix}
}}
\end{eqnarray*}
where $\mathcal{M}\left( z\right) $ is defined in (\ref{miniphase}), and the
second equality uses $MC=I_{n}$, which holds since $C$ has full column rank and $M=(C^{\mathrm{T}}C)^{-1}C^{\mathrm{T}}$.
When $D$ is of full column rank, there always exists a matrix $X\in \mathbf{R}^{n\times p}$ such that $XD=AMD-B.$ Denote
\begin{equation*}
X\left( z\right) =\left[
\begin{array}{cc}
\frac{1}{z}I_{n} & -\frac{1}{z}X \\
0 & I_{p}
\end{array}
\right] ,
\end{equation*}
which is nonsingular for all $z\in \mathbb{C}\ $and $\left\vert
z\right\vert \geq 1$. Multiplying $X\left( z\right) $ on the left hand side
of $\overline{\mathcal{M}}\left( z\right) $ gives us
\begin{equation*}
\begin{array}{l}
rank(\mathcal{M}\left( z\right) )=rank(\overline{\mathcal{M}}\left( z\right)
)=rank(X\left(z\right) \overline{\mathcal{M}}\left( z\right) ) \\
\Rightarrow rank(\overline{\mathcal{M}}\left( z\right) )=rank\left( \left[
\begin{array}{cc}
I_{n}-\frac{1}{z}XC & 0 \\
C & D
\end{array}
\right] \right) =n+q
\end{array}
\end{equation*}
for all $z\in \mathbb{C}\ $and $\left\vert z\right\vert \geq 1$. In other
words, the minimum phase condition in (\ref{miniphase}) holds. The proof for
case (a) is completed.
For case (b), given that $C$ and $B$ are of full column rank, and $D=0$, it
can be easily checked that conditions (\ref{rankmatching})-(\ref{miniphase})
hold.
For case (c), given that both $C$ and $D$ are of full column rank, and $B=AMD$, it can be checked that condition (\ref{rankmatching}) holds. The matrix
$\overline{\mathcal{M}}\left( z\right)$ that appeared in case (a) satisfies
\begin{align*}
\overline{\mathcal{M}}\left( z\right) =
\begin{bmatrix}
zI_{n} & AMD-B \\
C & D
\end{bmatrix}
=
\begin{bmatrix}
zI_{n} & 0 \\
C & D
\end{bmatrix}.
\end{align*}
Thus $rank(\mathcal{M}\left( z\right))=rank(\overline{\mathcal{M}}\left(
z\right))=n+q,$ for all $z\in \mathbb{C}\ $and $z\neq 0$ and hence condition
(\ref{miniphase}) holds. This completes the proof.
\end{proof}
Theorem \ref{decouple_sufficent} reveals that the measurement difference
approach requires more stringent conditions than the strong detectability
conditions. As will be discussed in the sequel, even with the above
stringent requirement, $Q$/$R$ can only be uniquely identified under
restrictive conditions.
\subsection{Joint identifiability analysis of $Q$ and $R$}
We next discuss the joint identifiability of $Q\ $and $R$. As such, assume
that one of the cases (a)-(c) happens so that the decoupling condition in
(\ref{necessary_con1}) holds. From (\ref{measurement_diff}), one has
\begin{equation}
z_{k}=KCGw_{k}+Kv_{k+1}-KCAMv_{k}, \label{final_time}
\end{equation}
which is a zero-mean stationary time series. We also have
\begin{subequations}
\begin{align}
&S_0\eq E(z_{k}(z_{k})^{\mathrm{T}})=KCGQ(KCG)^{\mathrm{T}}+KRK^{\mathrm{T}}
\notag \label{autocov1} \\
&\hspace{30mm}+KCAMR(KCAM)^{\mathrm{T}}, \\
&S_1\eq E(z_{k+1}(z_{k})^{\mathrm{T}})=-KCAMRK^{\mathrm{T}} ,
\label{autocov2} \\
&\hspace{8mm}E(z_{k+j}(z_{k})^{\mathrm{T}})=0,~j\geq 2,
\end{align}
where $E(\cdot)$ denotes the mathematical expectation. The above equations
give
\end{subequations}
\begin{equation}
\begin{array}{l}
S=
\begin{bmatrix}
S_{0} \\
S_{1}
\end{bmatrix}
=E\left( \left[
\begin{array}{c}
z_{k}(z_{k})^{\mathrm{T}} \\
z_{k+1}(z_{k})^{\mathrm{T}}
\end{array}
\right] \right)  \\
=\left[
\begin{array}{c}
KCG \\
0
\end{array}
\right] Q(KCG)^{\mathrm{T}}+\left[
\begin{array}{c}
K \\
-KCAM
\end{array}
\right] RK^{\mathrm{T}} \\
\quad +\left[
\begin{array}{c}
KCAM \\
0
\end{array}
\right] R(KCAM)^{\mathrm{T}}.
\end{array}
\label{autocovariance}
\end{equation}
Denote the vectorization operator of a matrix $A=[a_{1},a_{2},\cdots ,a_{n}]$
by $\mathrm{vec}(A)=[a_{1}^{\mathrm{T}},a_{2}^{\mathrm{T}},\cdots ,a_{n}^{\mathrm{T}}]^{\mathrm{T}}$ and the Kronecker product of $A$ and $B$ by $A\otimes B$,
respectively. By applying the identity $\mathrm{vec}~\!(ABC)=(C^{\mathrm{T}}\otimes A)\mathrm{vec}~\!(B)$, which involves the vectorization operator and
the Kronecker product, to (\ref{autocovariance}), we have the following
system of linear equations
\begin{align}  \label{LSQ1}
\mathcal{A}~\!\mathrm{vec}([Q,R])=\mathrm{vec}(S),
\end{align}
where
\begin{equation}
\mathcal{A}=\left[
\begin{array}{cc}
\overline{K} & 0 \\
0 & \overline{K}
\end{array}
\right] \left[
\begin{array}{cc}
\mathcal{A}_{1} & \mathcal{A}_{2} \\
0 & -I_{p}\otimes CAM
\end{array}
\right] ,  \label{cap_A}
\end{equation}
in which
\begin{equation*}
\begin{array}{l}
\overline{K}=K\otimes K,\text{ }\mathcal{A}_{1}=CG\otimes CG\in \mathbf{R}^{p^{2}\times g^{2}}, \\
\mathcal{A}_{2}=(I_{p}\otimes I_{p})+(CAM\otimes CAM)\in \mathbf{R}^{p^{2}\times p^{2}}.
\end{array}
\end{equation*}
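For concreteness, the block structure of $\mathcal{A}$ is straightforward to
assemble numerically. The following Python/NumPy sketch (the function name
and calling convention are ours, not part of the paper; $M$ denotes the same
gain matrix used throughout) builds $\mathcal{A}$ directly from (\ref{cap_A}):
\begin{verbatim}
import numpy as np

def build_A(K, C, A, M, G):
    # Kbar = K (x) K, A1 = CG (x) CG, and
    # A2 = I_{p^2} + CAM (x) CAM, as defined above.
    p = C.shape[0]
    Kbar = np.kron(K, K)
    A1 = np.kron(C @ G, C @ G)
    CAM = C @ A @ M
    A2 = np.eye(p * p) + np.kron(CAM, CAM)
    top = np.hstack([Kbar @ A1, Kbar @ A2])
    bot = np.hstack([np.zeros((Kbar.shape[0], A1.shape[1])),
                     -Kbar @ np.kron(np.eye(p), CAM)])
    return np.vstack([top, bot])
\end{verbatim}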
Given that the process is ergodic, a valid procedure for approximating the
expectation from the data is to use the time average. Specifically, given
all the collected data $z_{0:N}$, one has
\begin{equation}
\widetilde{S}=[\widetilde{S}_{0},\widetilde{S}_{1}],  \label{Eab}
\end{equation}
where
\begin{equation}
\widetilde{S}_{0}=\frac{1}{N+1}\sum\limits_{k=0}^{N}z_{k}z_{k}^{\mathrm{T}},\text{ }\widetilde{S}_{1}=\frac{1}{N}\sum\limits_{k=0}^{N-1}z_{k+1}(z_{k})^{\mathrm{T}}.  \label{appro}
\end{equation}
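A direct NumPy rendering of these time averages (with the samples
$z_{0},\dots ,z_{N}$ stored as rows of an array; the function name is ours)
could read:
\begin{verbatim}
import numpy as np

def sample_autocovariances(z):
    # z has shape (N+1, r); rows are z_0, ..., z_N.
    S0 = z.T @ z / z.shape[0]                  # (1/(N+1)) sum z_k z_k^T
    S1 = z[1:].T @ z[:-1] / (z.shape[0] - 1)   # (1/N) sum z_{k+1} z_k^T
    return S0, S1
\end{verbatim}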
Denote
\begin{equation*}
e=\mathrm{vec}(\widetilde{S}).
\end{equation*}
We then have the following standard least-squares problem for identifying
$Q$ and $R$:
\begin{equation}
\Xi ^{\ast }=\arg \min\limits_{\Xi }\left\Vert \mathcal{A}\Xi -e\right\Vert
^{2},  \label{LSQ}
\end{equation}
where $\Xi =[\mathrm{vec}(\widehat{Q})^{\mathrm{T}},\mathrm{vec}(\widehat{R})^{\mathrm{T}}]^{\mathrm{T}}$ and $e$ is defined above. The joint identifiability of $Q$
and $R$ is determined by the full column rankness of $\mathcal{A}$.
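In code, both the rank test and the unconstrained solution of (\ref{LSQ})
are short; the sketch below (our convention: the right-hand side stacks
$\mathrm{vec}(\widetilde{S}_{0})$ over $\mathrm{vec}(\widetilde{S}_{1})$,
consistent with the block rows of $\mathcal{A}$) illustrates this:
\begin{verbatim}
import numpy as np

def solve_joint_ls(A_mat, S0_t, S1_t):
    # e = [vec(S0_t); vec(S1_t)], column-major as in the paper.
    e = np.concatenate([S0_t.flatten('F'), S1_t.flatten('F')])
    xi, *_ = np.linalg.lstsq(A_mat, e, rcond=None)
    unique = np.linalg.matrix_rank(A_mat) == A_mat.shape[1]
    return xi, unique   # xi = [vec(Q_hat); vec(R_hat)]
\end{verbatim}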
It should be noted that in the least-squares problems listed in the remainder
of the paper, some permutation matrices can be introduced to identify the
unique elements of $Q$ and $R$, and additional constraints need to be
enforced on the $Q$ and $R$ estimates (see, e.g., \cite{Rawlings2006}-\cite{Rawlings2009}, \cite{Dunik2018}), given that they are both symmetric and
positive semidefinite matrices. The constrained least-squares problems
can then be transformed to semidefinite programs \cite[Chap. 3.4]{Arnold2020} and
solved efficiently using existing software packages such as CVX \cite{Boyd2013}. For simplicity, we have not formally included these constraints
in the least-squares problem formulations, because this does not impact the
discussions on the solution uniqueness of these least-squares problems. It
should also be noted that in the simulation examples shown in Section V,
such symmetric and positive semidefinite constraints have been enforced.
Moreover, in the least-squares problems listed in the remainder of the paper,
including (\ref{LSQ}) and (\ref{estimating_Q}), we consider the most general
scenario and do not assume any knowledge of the structure of $Q$ and
$R$ except that they are symmetric and positive semidefinite
matrices, which we intend to identify. In practice, if one has some
knowledge of their structure, for example, if $Q$ and/or $R$ are
partially known or diagonal, the least-squares problems
listed in this paper can be readily modified to incorporate such knowledge.
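As an illustration of the constrained variant, the sketch below uses CVXPY
(the Python analogue of the CVX package cited above; this tooling choice is
our assumption) to impose symmetry and positive semidefiniteness on the
estimates:
\begin{verbatim}
import cvxpy as cp

def solve_constrained_ls(A_mat, e, g, p):
    # PSD=True makes Q and R symmetric positive semidefinite.
    Q = cp.Variable((g, g), PSD=True)
    R = cp.Variable((p, p), PSD=True)
    xi = cp.hstack([cp.vec(Q), cp.vec(R)])   # column-major vec
    prob = cp.Problem(cp.Minimize(cp.sum_squares(A_mat @ xi - e)))
    prob.solve()                             # solved as an SDP
    return Q.value, R.value
\end{verbatim}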
For $\mathcal{A}$ in (\ref{LSQ}), we have the following results.
\begin{proposition}
\label{ALS_fullrank}Given system (\ref{plant}) with $C$ being of full column
rank, the following statements hold true:
(i) $\mathcal{A}$ is of full column rank only if
\begin{equation}
rank\left( \underset{K_{M}}{\underbrace{\left[
\begin{array}{c}
K \\
KCAM
\end{array}
\right] }}\right) =p,\text{ }rank\left( KCG\right) =g;
\label{nece_condition_uni}
\end{equation}
(ii) when $G=I_{n}$ (i.e., $g=n$) and $p=n$, $\mathcal{A}$ is of full column
rank if and only if
\begin{equation*}
rank\left( K\right) =p\text{, }rank\left( CAM\right) =p\text{;}
\end{equation*}
(iii) when $G=I_{n}$ (i.e., $g=n$) and $p=n$, for $\mathcal{A}$ to be of
full column rank, the unknown input $d_{k}$ has to vanish from system
(\ref{plant}), i.e., $B=0$, $D=0$.
\end{proposition}
\begin{proof}
(i) We prove the result by contradiction. Firstly, assume that the matrix
$K_{M}$ in (\ref{nece_condition_uni}) loses rank, i.e., there exists a
nonzero vector $h$ such that $K_{M}h=0$. Set $R=hh^{\mathrm{T}}$ so that
\begin{equation*}
\begin{array}{l}
KCAMhh^{\mathrm{T}}K^{\mathrm{T}}=0,\ Khh^{\mathrm{T}}K^{\mathrm{T}}=0, \\
KCAMhh^{\mathrm{T}}(KCAM)^{\mathrm{T}}=0.
\end{array}
\end{equation*}
Further selecting $Q=0$, one has that $\mathcal{A}\mathrm{vec}~\!([0,hh^{\mathrm{T}}])=0$. This means that $\mathcal{A}$ is not of full
column rank. Similarly, now assume that $rank\left( KCG\right) <g$, so that
there exists a nonzero vector $e$ such that $KCGe=0$. If we set $Q=ee^{\mathrm{T}}$, $R=0$, then $\mathcal{A}\mathrm{vec}~\!([ee^{\mathrm{T}},0])=0$.
Hence, $\mathcal{A}$ is not of full column rank.
(ii) When $G=I_{n}$ (i.e., $g=n$) and $p=n$, $\mathcal{A}$ is of full
column rank if and only if
\begin{equation*}
\begin{array}{l}
rank\left( \left[
\begin{array}{cc}
\overline{K} & 0 \\
0 & \overline{K}
\end{array}
\right] \right) =2p^{2}\text{ and } \\
rank\left( \left[
\begin{array}{cc}
\mathcal{A}_{1} & \mathcal{A}_{2} \\
0 & -I_{p}\otimes CAM
\end{array}
\right] \right) =2p^{2} \\
\Leftrightarrow rank\left( K\right) =p\text{, }rank\left( CAM\right) =p,
\end{array}
\end{equation*}
given the fact that both $\mathcal{A}_{1}$ and $I_{p}\otimes CAM$ become
square matrices when $g=p=n$.
(iii) From part (ii) of the current proposition, when $G=I_{n}$ and $p=n$,
$\mathcal{A}$ is of full column rank only if $K$ is of full column rank.
Based on the decoupling condition (\ref{necessary_con1}), this further
implies that case (d) in (\ref{four_cases}) happens. From the arguments
listed in (\ref{unknown_vanish}), one has that the unknown input $d_{k}$
vanishes in system (\ref{plant}). This completes the proof.
\end{proof}
Note that for the necessary conditions in (\ref{nece_condition_uni}) to
hold, one must have $r\geq \max \{\left\lceil \frac{p}{2}\right\rceil
,g\}$, where $\left\lceil a\right\rceil $ denotes the least integer not
less than the real number $a$.
Also, from (\ref{cap_A}), one can see that for $\mathcal{A}$ to be of full
column rank, it must hold that $2r^{2}\geq p^{2}+g^{2}$. Hence, it is
necessary that
\begin{equation}
r\geq \max \left\{ \left\lceil \frac{p}{2}\right\rceil ,\text{ }g,\text{ }\left\lceil \sqrt{\frac{p^{2}+g^{2}}{2}}\right\rceil \right\} .
\label{r_lowerbound}
\end{equation}
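The bound (\ref{r_lowerbound}) is elementary to evaluate; a minimal helper
(ours) is:
\begin{verbatim}
import math

def r_lower_bound(p, g):
    # Necessary lower bound on the row dimension r of K.
    return max(math.ceil(p / 2), g,
               math.ceil(math.sqrt((p**2 + g**2) / 2)))
\end{verbatim}
For instance, $p=2$, $g=1$ gives $r\geq 2$, while $p=3$, $g=2$ gives $r\geq 3$.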
In practice, the structure of $G$ represents how the process noise affects
the system dynamics. When no such knowledge is available, $G$ is usually
chosen to be the identity matrix. From part (i) of the above proposition, it
can be seen that for the general case where $G$ is known, we have only
established necessary conditions for $\mathcal{A}$ to have full column rank.
For the special case when $G=I_{n}$ and $p=n$, although necessary and
sufficient conditions are obtained in part (ii), part (iii) further reveals
that for $\mathcal{A}$ to have full column rank, the unknown input has to be
absent from the system model, i.e., this case is not applicable. The above
findings motivate us to take a step back and consider part (ii) of Problem
\ref{problem1}.
\subsection{Identifiability analysis of $Q$ when $R$ is known}
In this subsection, we investigate one case of part (ii) of Problem
\ref{problem1}, i.e., analyze the identifiability of $Q$ when $R$ is available.
When $R$ is known, the equation \eqref{autocov1} reduces to
\begin{equation}
\mathcal{A}_{Q}\mathrm{vec}(Q)=\mathrm{vec}(S_{0})-\overline{K}\mathcal{A}_{2}\mathrm{vec}(R),  \label{estimating_Q}
\end{equation}
where
\begin{equation*}
\mathcal{A}_{Q}=\overline{K}\mathcal{A}_{1},
\end{equation*}
and $\overline{K},\mathcal{A}_{1},\mathcal{A}_{2}$ are defined in
(\ref{cap_A}). Define
\begin{equation}
e_{Q}=\mathrm{vec}(\widetilde{S}_{0})-\overline{K}\mathcal{A}_{2}\mathrm{vec}(R).  \label{b_k_bar}
\end{equation}
By following a procedure similar to that of the previous subsection, we have
the following standard least-squares problem formulation for identifying $Q$:
\begin{equation}
\Xi _{Q}^{\ast }=\arg \min\limits_{\Xi _{Q}}\left\Vert \mathcal{A}_{Q}\Xi
_{Q}-e_{Q}\right\Vert ^{2},  \label{ls_onlyQ}
\end{equation}
where $\Xi _{Q}={\mathrm{vec}(\widehat{Q})}$ and $e_{Q}$ is defined in
(\ref{b_k_bar}). Thus the identifiability of $Q$ when $R$ is known is equivalent
to the matrix $\mathcal{A}_{Q}$ being of full column rank.
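Since $\mathcal{A}_{Q}=\overline{K}\mathcal{A}_{1}=(KCG)\otimes (KCG)$ by the
mixed-product property of the Kronecker product, checking its column rank
reduces to a rank test on $KCG$ itself, as the proposition below makes
precise. A one-line numerical check (ours) is:
\begin{verbatim}
import numpy as np

def Q_identifiable_given_R(K, C, G):
    # A_Q has full column rank iff KCG has full column rank.
    return np.linalg.matrix_rank(K @ C @ G) == G.shape[1]
\end{verbatim}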
\begin{proposition}
\label{only_Q}Given system (\ref{plant}) with $C$ being of full column rank,
the following statements hold true:
(i) $\mathcal{A}_{Q}$ in (\ref{estimating_Q}) is of full column rank only
if $r\geq g$;
(ii) for case (a) in (\ref{four_cases}), $\mathcal{A}_{Q}$ in
(\ref{estimating_Q}) is of full column rank if and only if
\begin{equation}
rank(H)+g=rank\left( \left[
\begin{array}{ll}
H & CG
\end{array}
\right] \right) ,  \label{case_a_onlyQ}
\end{equation}
where $H$ is defined in (\ref{necessary_con1});
(iii) for case (b) in (\ref{four_cases}), $\mathcal{A}_{Q}$ in
(\ref{estimating_Q}) is of full column rank if and only if
\begin{equation}
rank\left( B\right) +g=rank\left( \left[
\begin{array}{ll}
B & G
\end{array}
\right] \right) ;  \label{case_b_onlyQ}
\end{equation}
(iv) for case (c) in (\ref{four_cases}), $\mathcal{A}_{Q}$ in
(\ref{estimating_Q}) is of full column rank if and only if $B-AMD=0$ and
\begin{equation}
rank\left( D\right) +g=rank\left( \left[
\begin{array}{ll}
D & CG
\end{array}
\right] \right) .  \label{case_c_onlyQ}
\end{equation}
\end{proposition}
\begin{proof}
(i) Note that $\mathcal{A}_{Q}=KCG\otimes KCG\in \mathbf{R}^{r^{2}\times
g^{2}}$. Hence, $\mathcal{A}_{Q}$ is of full column rank if and only if
$KCG\in \mathbf{R}^{r\times g}$ is of full column rank, for which it is
necessary that $r\geq g$.
(ii) For case (a), the conclusion is implied by the identity
\begin{align*}
\begin{bmatrix}
I & 0 \\
-K & I
\end{bmatrix}
\begin{bmatrix}
H & CG \\
0 & KCG
\end{bmatrix}
=
\begin{bmatrix}
H & CG \\
-KH & 0
\end{bmatrix}
=
\begin{bmatrix}
H & CG \\
0 & 0
\end{bmatrix}
\end{align*}
together with the requirement on the full column rank of $KCG$ and the
decoupling condition $KH=0$ in (\ref{necessary_con1}).
(iii) This part is straightforward by using the full column rank of $C$.
(iv) This part follows by similar arguments with part (ii).
The proof is completed.
\end{proof}
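Condition (\ref{case_a_onlyQ}) is likewise easy to test numerically; the
following sketch (ours) checks it for given $H$, $C$ and $G$:
\begin{verbatim}
import numpy as np

def check_case_a_condition(H, C, G):
    # Tests rank([H, CG]) == rank(H) + g, cf. (case_a_onlyQ).
    g = G.shape[1]
    full = np.linalg.matrix_rank(np.hstack([H, C @ G]))
    return full == np.linalg.matrix_rank(H) + g
\end{verbatim}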
We also have the following corollary when $G=I_{n}$.
\begin{corollary}
\label{corco_estimateQ} Given system (\ref{plant}) with $C$ being of full
column rank, and $G=I_{n}$, the following statements hold true:
(i) $\mathcal{A}_{Q}$ in (\ref{estimating_Q}) is of full column rank only
if $r\geq n$;
(ii) for case (a) in (\ref{four_cases}), $\mathcal{A}_{Q}$ in
(\ref{estimating_Q}) is of full column rank if and only if
\begin{equation}
rank\left( \left[
\begin{array}{ll}
C & H
\end{array}
\right] \right) =rank(H)+n,  \label{G_identity}
\end{equation}
where $H$ is defined in (\ref{necessary_con1});
(iii) for case (b) in (\ref{four_cases}), $\mathcal{A}_{Q}$ in
(\ref{estimating_Q}) is not of full column rank;
(iv) for case (c) in (\ref{four_cases}), $\mathcal{A}_{Q}$ in
(\ref{estimating_Q}) is of full column rank if and only if $B-AMD=0$ and
\begin{equation*}
rank\left( D\right) +n=rank\left( \left[
\begin{array}{ll}
D & C
\end{array}
\right] \right) .
\end{equation*}
\end{corollary}
\subsection{Identifiability analysis of $R$ when $Q$ is known}
Now, we consider the other case of part (ii) of Problem \ref{problem1},
i.e., analyze the identifiability of $R$ when $Q$ is available. When $Q$ is
known, the system of equations \eqref{autocov1}-\eqref{autocov2} becomes
\begin{equation}
\mathcal{A}_{R}\mathrm{vec}(R)=\mathrm{vec}(S)-\left[
\begin{array}{c}
\overline{K}\mathcal{A}_{1}\mathrm{vec}(Q) \\
0
\end{array}
\right] ,  \label{estimating_R}
\end{equation}
where
\begin{equation*}
\mathcal{A}_{R}=\left[
\begin{array}{c}
\overline{K}\mathcal{A}_{2} \\
-\overline{K}(I_{p}\otimes CAM)
\end{array}
\right] ,
\end{equation*}
with $\overline{K},\mathcal{A}_{1}$ and $\mathcal{A}_{2}$ being defined in
(\ref{cap_A}). Similarly to the previous subsection, we have the following
results.
\begin{proposition}
\label{corco_estimateR}{Given system (\ref{plant}) with $C$ being of full
column rank, $\mathcal{A}_{R}$ is of full column rank only if $rank\left(
K_{M}\right) =p$, where $K_{M}$ is defined in (\ref{nece_condition_uni}). }
\end{proposition}
\begin{proof}
The proof follows a procedure similar to that of Proposition
\ref{ALS_fullrank} and is omitted.
\end{proof}
For the cases when the solutions to the above systems of linear equations are
not unique (e.g., when part (iii) of Corollary \ref{corco_estimateQ} applies,
or when the conditions in (\ref{case_a_onlyQ})-(\ref{case_c_onlyQ}) do not
hold), a natural idea is to use regularization to introduce further
constraints that uniquely determine the solution \cite{Chen2013}. However, a
key question to be answered is whether some desirable properties can be
guaranteed for the resulting covariance estimates. A full investigation of
the above questions is the subject of our current and future work.
\section{\label{exten_delay} Numerical Examples}
We next use some numerical examples to illustrate the theoretical results.
Firstly, consider the plant model (\ref{plant}) with
\begin{equation*}
\begin{array}{l}
A=\left[
\begin{array}{ll}
1 & 1 \\
0 & 1
\end{array}
\right] ,\text{ }B=\left[
\begin{array}{ll}
0 & 1 \\
1 & 0
\end{array}
\right] ,\text{ }G=\left[
\begin{array}{l}
1 \\
1
\end{array}
\right] , \\
C=\left[
\begin{array}{ll}
1 & 0 \\
1 & 1
\end{array}
\right] ,\text{ }D=\left[
\begin{array}{ll}
1 & 0 \\
2 & 0
\end{array}
\right] .
\end{array}
\end{equation*}
It can be verified that
\begin{equation*}
H=\left[
\begin{array}{llll}
-2 & 1 & 1 & 0 \\
-2 & 1 & 2 & 0
\end{array}
\right] ,
\end{equation*}
so that the above model fits case (a) in (\ref{four_cases}). Also, from
(\ref{r_lowerbound}), it is necessary that $r\geq 2$. Select
\begin{equation*}
K=\left[
\begin{array}{ll}
t_{11} & t_{12} \\
t_{21} & t_{22}
\end{array}
\right] .
\end{equation*}
From the decoupling condition in (\ref{necessary_con1}), we then have
\begin{equation*}
\begin{array}{l}
\left[
\begin{array}{llll}
-2(t_{11}+t_{12}) & t_{11}+t_{12} & t_{11}+2t_{12} & 0 \\
-2(t_{21}+t_{22}) & t_{21}+t_{22} & t_{21}+2t_{22} & 0
\end{array}
\right] =0 \\
\Leftrightarrow t_{11}=t_{12}=t_{21}=t_{22}=0.
\end{array}
\end{equation*}
Note that increasing the row dimension of $K$ still leads to the same
conclusion, i.e., $K$ is a zero matrix. In this case, we have $\mathcal{A}=0$
in (\ref{LSQ}), $\mathcal{A}_{Q}=0$ in (\ref{estimating_Q}), and
$\mathcal{A}_{R}=0$ in (\ref{estimating_R}), i.e., the noise covariances
$Q$/$R$ are not identifiable at all.
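The conclusion $K=0$ can be double-checked by computing the left null space
of $H$: any admissible row $k$ of $K$ satisfies $kH=0$, i.e.,
$H^{\mathrm{T}}k^{\mathrm{T}}=0$. A short verification (ours, using the $H$
displayed above):
\begin{verbatim}
import numpy as np
from scipy.linalg import null_space

H = np.array([[-2., 1., 1., 0.],
              [-2., 1., 2., 0.]])
basis = null_space(H.T)     # basis of {k : k H = 0}
print(basis.shape[1])       # 0: only K = 0 satisfies KH = 0
\end{verbatim}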
Secondly, consider the plant model (\ref{plant}) with
\begin{equation*}
\begin{array}{l}
A=\left[
\begin{array}{ll}
1 & 1 \\
0 & 1
\end{array}
\right] ,\text{ }B=\left[
\begin{array}{l}
1 \\
0
\end{array}
\right] ,\text{ }G=\left[
\begin{array}{ll}
1 & 0 \\
1 & -1
\end{array}
\right] , \\
C=\left[
\begin{array}{cc}
1 & -2 \\
1 & 1 \\
-2 & 1
\end{array}
\right] ,\text{ }D=\left[
\begin{array}{l}
1 \\
2 \\
1
\end{array}
\right] .
\end{array}
\end{equation*}
It can be verified that
\begin{equation*}
H=\left[
\begin{array}{cc}
1 & 1 \\
0 & 2 \\
-1 & 1
\end{array}
\right] ,
\end{equation*}
so that the above model fits case (a) in (\ref{four_cases}). Also, from
(\ref{r_lowerbound}), it is necessary that $r\geq 3$. Select
\begin{equation}
K=\left[
\begin{array}{lll}
t_{11} & t_{12} & t_{13} \\
t_{21} & t_{22} & t_{23} \\
t_{31} & t_{32} & t_{33}
\end{array}
\right] .  \label{K123}
\end{equation}
From the decoupling condition in (\ref{necessary_con1}), we then have
\begin{equation*}
\begin{array}{l}
\left[
\begin{array}{ll}
t_{11}-t_{13} & t_{11}+2t_{12}+t_{13} \\
t_{21}-t_{23} & t_{21}+2t_{22}+t_{23} \\
t_{31}-t_{33} & t_{31}+2t_{32}+t_{33}
\end{array}
\right] =0 \\
\Leftrightarrow
\begin{array}{l}
t_{11}=t_{13},\text{ }t_{12}=-t_{11}; \\
t_{21}=t_{23},\text{ }t_{22}=-t_{21}; \\
t_{31}=t_{33},\text{ }t_{32}=-t_{31}.
\end{array}
\end{array}
\end{equation*}
If we set
\begin{equation*}
K=\left[
\begin{array}{lll}
1 & -1 & 1 \\
2 & -2 & 2 \\
3 & -3 & 3
\end{array}
\right] ,
\end{equation*}
it can be verified that neither of the two necessary conditions in
(\ref{nece_condition_uni}) is satisfied. In particular, $rank(K_{M})=2$ and
$rank\left( KCG\right) =1$. Hence, it is not possible for $\mathcal{A}\in
\mathbf{R}^{18\times 13}$ in (\ref{LSQ}) to have full column rank. In fact,
it can be checked that $rank\left( \mathcal{A}\right) =2$, i.e., $Q$ and $R$
are not uniquely jointly identifiable. Moreover, it can be calculated that
for $\mathcal{A}_{Q}\in \mathbf{R}^{9\times 4}$ in (\ref{estimating_Q}), we
have $rank\left( \mathcal{A}_{Q}\right) =1$. This reinforces the results of
Proposition \ref{only_Q}, since one can easily see that $rank(H)+2=4\neq
rank\left( \left[
\begin{array}{ll}
H & CG
\end{array}
\right] \right) =3$, i.e., the condition in (\ref{case_a_onlyQ}) does not
hold. In other words, assuming $R$ to be known, $Q$ is not uniquely
identifiable. Similarly, for $\mathcal{A}_{R}\in \mathbf{R}^{18\times 9}$ in
(\ref{estimating_R}), since $rank(K_{M})=2$, from Proposition
\ref{corco_estimateR} one has that $\mathcal{A}_{R}$ cannot have full column
rank. To double-check, it can be verified that $rank\left( \mathcal{A}_{R}\right) =2$, i.e., $R$ is not uniquely identifiable when $Q$ is assumed
to be known. Note that increasing the row dimension of $K$ still leads to
the same conclusions as above for this example.
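Note that the decoupling condition forces every admissible row of $K$ to be
proportional to $(1,-1,1)$, so $rank(K)=1$ for any such choice, and
$K_{M}=[K^{\mathrm{T}},(KCAM)^{\mathrm{T}}]^{\mathrm{T}}$ can have rank at
most $2$. A quick check (ours) for the $K$ above:
\begin{verbatim}
import numpy as np

K = np.array([[1., -1., 1.],
              [2., -2., 2.],
              [3., -3., 3.]])
print(np.linalg.matrix_rank(K))   # 1, so rank(K_M) <= 2 < p = 3
\end{verbatim}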
Thirdly, consider the plant model (\ref{plant}) with
\begin{equation*}
\begin{array}{l}
A=\left[
\begin{array}{ll}
1 & 1 \\
0 & 1
\end{array}
\right] ,\text{ }B=\left[
\begin{array}{l}
1 \\
0
\end{array}
\right] ,\text{ }G=\left[
\begin{array}{l}
1 \\
1
\end{array}
\right] , \\
C=\left[
\begin{array}{cc}
1 & -2 \\
1 & 1 \\
-2 & 1
\end{array}
\right] ,\text{ }D=0.
\end{array}
\end{equation*}
It can be verified that
\begin{equation*}
H=\left[
\begin{array}{cc}
1 & 0 \\
1 & 0 \\
-2 & 0
\end{array}
\right] ,
\end{equation*}
so that the above model fits case (b) in (\ref{four_cases}). From
(\ref{r_lowerbound}), it is necessary that $r\geq 3$. Denote $K$ as in
(\ref{K123}). From the decoupling condition in (\ref{necessary_con1}), we
then have
\begin{equation*}
\left[
\begin{array}{ll}
t_{11}+t_{12}-2t_{13} & 0 \\
t_{21}+t_{22}-2t_{23} & 0 \\
t_{31}+t_{32}-2t_{33} & 0
\end{array}
\right] =0.
\end{equation*}
Select
\begin{equation*}
K=\left[
\begin{array}{lll}
1 & -1 & 0 \\
1 & 3 & 2 \\
2 & 4 & 3
\end{array}
\right] .
\end{equation*}
One then has that $rank(K_{M})=2$ and $rank\left( KCG\right) =1$, i.e., it is
not possible for $\mathcal{A}\in \mathbf{R}^{18\times 10}$ in (\ref{LSQ}) to
have full column rank. In fact, it can be checked that $rank\left( \mathcal{A}\right) =5$, i.e., $Q$ and $R$ are not uniquely jointly identifiable. Also,
it can be obtained that for $\mathcal{A}_{Q}\in \mathbf{R}^{9\times 1}$ in
(\ref{estimating_Q}), we have $rank\left( \mathcal{A}_{Q}\right) =1$. This
reinforces the results of Proposition \ref{only_Q}, since it can be confirmed
that $rank\left( B\right) +1=2=rank\left( \left[
\begin{array}{ll}
B & G
\end{array}
\right] \right) $, i.e., the condition in (\ref{case_b_onlyQ}) holds. In
other words, assuming $R$ to be known, $Q$ is uniquely identifiable.
Similarly, for $\mathcal{A}_{R}\in \mathbf{R}^{18\times 9}$ in
(\ref{estimating_R}), since $rank(K_{M})=2$, from Proposition
\ref{corco_estimateR} one can conclude that $\mathcal{A}_{R}$ cannot have
full column rank. To double-check, it can be verified that $rank\left(
\mathcal{A}_{R}\right) =4$, i.e., $R$ is not uniquely identifiable when $Q$
is assumed to be known. Note that increasing the row dimension of $K$ still
leads to the same conclusions as above for this example.
Next, for the third example, assume that the true covariances are $Q=1$ and
$R=0.1I_{3}$. With the above system information, we follow the
procedure in Section III.C and estimate $Q$, assuming $R$ to be
known. We run the simulation for 500 scenarios. For each scenario,
we use in total 1000 data points to estimate $S_{0}$ as in
(\ref{appro}), and the estimate for $Q$ is obtained by solving the
optimization problem (\ref{ls_onlyQ}). The results are shown in
Figure \ref{Q_es} for the above-mentioned 500 different scenarios.
It can be seen from Figure \ref{Q_es} that the estimates for $Q$ are
well dispersed around its true value. We finally remark that for
solving (\ref{ls_onlyQ}), an additional positive semidefinite
constraint has been enforced on the $Q$ estimates (i.e., for this
example, $\widehat{Q}$ is a nonnegative scalar). The optimization
problem is transformed to a standard semidefinite program and solved
by CVX \cite{Boyd2013}.
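A compact sketch of one such scenario is given below; it simulates $z_{k}$
directly from (\ref{final_time}) and solves the (unconstrained) problem
(\ref{ls_onlyQ}), clipping the scalar estimate at zero as a simple stand-in
for the semidefinite constraint. The function signature and the use of
Python/NumPy (rather than MATLAB/CVX) are our assumptions:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def estimate_Q_once(KCG, K, KCAM, Q_true, R_known, N=1000):
    g, p = KCG.shape[1], K.shape[1]
    w = rng.multivariate_normal(np.zeros(g),
                                np.atleast_2d(Q_true), size=N + 1)
    v = rng.multivariate_normal(np.zeros(p), R_known, size=N + 2)
    z = w @ KCG.T + v[1:] @ K.T - v[:-1] @ KCAM.T   # (final_time)
    S0 = z.T @ z / (N + 1)                          # (appro)
    A_Q = np.kron(KCG, KCG)
    KbarA2 = np.kron(K, K) + np.kron(KCAM, KCAM)
    e_Q = S0.flatten('F') - KbarA2 @ R_known.flatten('F')
    q, *_ = np.linalg.lstsq(A_Q, e_Q, rcond=None)
    return max(float(q[0]), 0.0)   # scalar Q for this example
\end{verbatim}
Here $KCG$, $K$ and $KCAM$ denote the precomputed matrices of the third
example; repeating the call over 500 scenarios reproduces the flavor of
Figure \ref{Q_es}.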
\vspace{0.5cm}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.335\textwidth,bb=108 221 485
505]{figure.eps} \caption{Estimates for $Q$ under 500 different
scenarios} \label{Q_es}
\end{figure}
\section{\label{conclusion}Conclusions}
The past few decades have witnessed much progress in optimal filtering for
systems with arbitrary unknown inputs and stochastic noises. However,
existing works assume perfect knowledge of the noise covariances in the
filter design, which is impractical. In this paper, for stochastic systems
under unknown inputs, we have investigated the identifiability question of
the process and measurement noise covariances (i.e., $Q$ and $R$) using the
correlation-based measurement difference approach.
More specifically, we have focused on the single-step measurement case, and
established (i) necessary conditions under which $Q$ and $R$ can be uniquely
jointly identified (see Proposition \ref{ALS_fullrank}); (ii) necessary and
sufficient conditions under which $Q$ can be uniquely identified, when $R$
is known (see Proposition \ref{only_Q} and Corollary \ref{corco_estimateQ});
(iii) necessary conditions under which $R$ can be uniquely identified, when
$Q$ is known (see Proposition \ref{corco_estimateR}). Moreover, it has been
shown that for achieving the results mentioned above, the measurement
difference approach requires some decoupling conditions for constructing a
stationary time series (see Proposition \ref{case_abc}). The latter
conditions are proved to be sufficient (see Theorem \ref{decouple_sufficent})
for the strong detectability requirement in \cite{Hautus1983}.
The above findings reveal that $Q$/$R$ can potentially be uniquely
identified only under restrictive conditions. This not only helps us to have
a better understanding of the applicability of existing filtering frameworks
under unknown inputs (since almost all of them require perfect knowledge of
the noise covariances) but also calls for further investigation of
alternative and more viable noise covariance estimation methods under
unknown inputs.
\section{Acknowledgment}
The authors would like to thank the reviewers and Editors for their
constructive suggestions which have helped to improve the quality and
presentation of this paper significantly. He Kong's work was supported by
the Science, Technology, and Innovation Commission of Shenzhen Municipality
[Grant No. ZDSYS20200811143601004]. Tianshi Chen's work was supported by the
Thousand Youth Talents Plan funded by the central government of China, the
Shenzhen Science and Technology Innovation Council under contract No.
Ji-20170189, the President's grant under contract No. PF. 01.000249 and the
Start-up grant under contract No. 2014.0003.23 funded by the Chinese
University of Hong Kong, Shenzhen.
\section{Introduction}
The starvation probability of a buffer is an important performance measure
for protocol design of telecommunication networks, as well as in storage
systems and ecological systems (e.g. dams).
Starvation is said to occur when the buffer is empty.
Various applications use buffering in order to control the rate
at which packets are served at the destination. As long as there
are packets in the buffer, packets arrive at the destination
regularly, i.e. they are spaced by the service time of the
buffer. Once the buffer empties, packets may arrive at the destination
separated by larger times, as the spacing between packets now also depends
on the inter-arrival times at the queue. Starvation is
particularly undesirable in real-time voice as well as
in video streaming applications.
The time till starvation of a queue is related to the busy period which
has been well studied under the assumption of a stationary arrival process
(see \cite{Baccelli, Ledermann} and their references).
In contrast to this assumption, we consider a finite number of arrivals
as we are interested in statistics of starvation when a file of fixed size
is transferred.
The main goal of this paper is to find the \emph{distribution of the
number of starvations}
within a file of $N$ packets. We first model the buffer as an M/M/1 queue, and
then extend the model to incorporate bursty packet arrivals modeled by an \emph{interrupted
Poisson process (IPP)}. In this system, a fixed number of packets are \emph{prefetched}
(the ``prefetching threshold'') before the service begins or resumes after a starvation event.
In this paper, we propose two approaches (that give the same result) to compute the starvation
probabilities and the distribution of the number of starvations for a single file.
The first approach gives an explicit result based on the
\emph{Ballot theorem} \cite{Takacs}. The second approach
provides a recursive computation algorithm. Both are carried out for
an M/M/1 queue at the \emph{packet level}.
Using the Ballot theorem, we can compute the exact distribution
of the number of starvations explicitly and in a simple way. When the file size approaches infinity,
we present the asymptotic starvation probability using a Gaussian (interchangeably, Normal)
approximation as well as an approximation of the Riemann integral.
In the special case where the playout buffer is modeled as an M/D/1 queue, the
probability of meeting a certain number of starvations can be obtained on the basis of a discrete
version of the Ballot theorem.
Whereas the Ballot Theorem provides an explicit solution, we
propose an alternative approach which constitutes a
recursive algorithm for computing starvation probabilities.
Although the recursive approach does not generate an explicit solution,
it does perform better than the Ballot Theorem in terms of complexity under certain
circumstances. In addition, the recursive approach enables us to compute
the probability of starvations with ON/OFF bursty packet arrivals.
We further propose a fluid analysis of starvation behavior on the file level.
This approach, instead of looking into the stochastic packet arrivals and departures,
predicts the starvation where the servers manage a large quantity of file transfers.
Given the traffic intensity and the distribution of file
size, we are able to compute the starvation probability as a function of the
prefetching threshold. The fluid analysis, though simple, offers an
important insight on how to control the probability of starvation for many
files, instead of for one particular file.
The probabilities of starvation developed in this work have various applications
in different fields. A prominent example is the media streaming service.
This application exhibits a dilemma between the prefetching process
and starvation. A longer prefetching process causes a larger start-up delay, while
a shorter one might result in starvations. The user-perceived media quality (or,
equivalently, QoE) is impaired by either a large start-up delay or undesirable starvations.
This problem becomes increasingly important in an era when
web video accounts for more than 37\% of total traffic during peak hours in the USA \cite{webvideo}.
In contrast to the rapid growth of traffic load, the bandwidth provision usually lags behind.
In this context, media providers and network operators face a crucial challenge in
maintaining a satisfactory QoE of the streaming service.
With the results developed in this work, we are able to answer the fundamental question: \emph{
How many packets should the media player prefetch to optimize the users' quality of experience?}
We propose a set of QoE metrics for both finite and infinite file sizes. The optimal QoE
is achieved by configuring the start-up threshold in packets. Recently, similar QoE issues have been
studied in the important works \cite{TMM10:Luan,TMM08:Liang,JSAC11:ParandehGheibi}.
Liang et al. \cite{TMM08:Liang} study the bounds of the start-up delay, given \emph{deterministic}
playout and arrival curves. The authors of \cite{JSAC11:ParandehGheibi} present a minimum
prefetching threshold for an M/D/1 queue, rather than an exact solution. They further extend
their method to consider an arrival process described by a two-state Markov chain. Luan et al.
\cite{TMM10:Luan} adopt a diffusion approximation to investigate the time-dependent starvation behavior.
Their technique is inadequate for providing insights on starvation in a media file with a small number of units
(packets or chunks). Compared to the state of the art, our approaches target the exact solution, and can analyze
the starvation of small files. This is particularly important in the evaluation of adaptive streaming,
where the entire media file is subdivided into many chunks encoded at multiple playback rates. Starvation
is more likely to happen when the packets from one or several high-definition video chunks are being played.
The rest of this paper is organized as follows. Section \ref{sec:related}
reviews the related work. We propose a Ballot theorem approach in
Section \ref{sec:Ballot}. Section \ref{sec:recursive} presents the recursive
approach for an M/M/1 queue and extends it to the ON/OFF bursty arrival
process. Section \ref{sec:fluid} performs a fluid analysis for a large number of files.
Section \ref{sec:QoE} presents the QoE metrics and their optimization.
Our theoretical results are verified in Section \ref{sec:simulation}.
Section \ref{sec:conclusion} concludes this paper and discusses future work.
\section{Related Work}
\label{sec:related}
The analysis of starvation is close to that
of the busy period in transient queues.
In \cite{Baccelli,Ledermann},
the authors solve the distribution of the buffer size
as a function of time for the M/M/1 queue. The exact
result is expressed as an infinite sum of modified
Bessel functions.
The starvation analysis of this work differs from
transient queueing analysis in two aspects.
First, the former aims to find the probability generating function of starvation events
rather than the queue size. Second, the former does not assume
a stationary arrival process.
The Ballot theorem and recursive equations have been used to analyze
the packet loss probability in a finite buffer when a
forward error-correction technique is deployed. Citon et al. \cite{TIT93:Citon}
propose a recursive approach that enables them to compute
the packet loss probability in a block of consecutive packet arrivals
into an M/M/1/K queue.
Based on their recursive approach, Altman and Jean-Marie \cite{JSAC98:Eitan} obtain
the expressions for the multidimensional generating function of
the packet loss probability. The distribution of message delay
is given in an extended work \cite{Infocom95:Eitan}.
Dubea and Altman \cite{Dubea} analyze the packet loss probability
with the consideration of random losses in incoming and outgoing links.
In \cite{Gurewitz}, Gurewitz et al. introduce the powerful Ballot theorem to find
this probability within a block of packet arrivals into an M/M/1/K queue.
They consider two cases, in which
the block size is smaller or greater than the buffer limit. Another example
of applying the Ballot theorem to evaluate a networking system is found in \cite{Humblet}.
Humblet et al. present a method based on the Ballot theorem
to study the performance of the nD/D/1 queue with periodic arrivals and deterministic
service times. In \cite{Infocom01:Sohraby}, He and Sohraby use the Ballot theorem
to find the stationary probability distribution in a general class of discrete-time
systems with batch arrivals and departures. Privalov and Sohraby \cite{PIT07:Sohraby}
study the underflow behavior of CBR traffic in a time-slotted queueing system. However,
they do not provide insights into the probability of a given number of starvations.
Among the applications related to our work, Stockhammer et al. \cite{TMM02:Stockhammer} specify
the minimum start-up delay and the minimum buffer size for a given video stream and
a deterministic variable-bit-rate (VBR) wireless channel.
Recently, \cite{TMM08:Liang} presents a deterministic bound, and
\cite{JSAC11:ParandehGheibi} provides a stochastic bound,
on the start-up delay needed to avoid starvation. The authors of \cite{TMM10:Luan} model the playout buffer
as a G/G/1 queue. By using a diffusion approximation,
they obtain the closed-form starvation probability for asymptotically large file sizes.
Xu et al. \cite{yuedong2}
study scheduling algorithms for multicast streaming in a multicarrier wireless downlink.
In the application field, our paper differs from state-of-the-art works in the following ways:
i) we present new theories that yield an \emph{exact} probability of starvation
and the probability generating function of starvation events; ii) we study
the asymptotic behavior with error analysis; iii) we perform a macroscopic starvation analysis
using a fluid model; iv) we configure optimal prefetching thresholds to
optimize the QoE metrics.
\section{Starvation Analysis Using Ballot Theorem}
\label{sec:Ballot}
In this section, we study
the starvation behavior of an M/M/1 queue with a finite number of
arrivals. The analytical method is based
on the powerful Ballot theorem.
\subsection{System Description}
We consider a single media file with finite size $N$.
The media content is pre-stored on the media server. When a
user makes a request, the server segments this media into
packets and transfers them to the user using TCP or UDP.
When packets traverse wired or wireless links,
their arrivals at the media player of a user are not deterministic
due to the dynamics of the available bandwidth.
The Poisson assumption is not the most realistic way to describe packet
arrivals, but it reveals the essential features of the system and
is a first step toward more general arrival processes.
After the streaming
packets are received, they are first stored in the playout buffer.
The interval between the services of two consecutive packets is assumed to be
exponentially distributed, so that we can model
the receiver buffer as an M/M/1 queue.
The maximum buffer size is assumed to be large enough
that the whole file can be stored. This simplification is justified
by the fact that the storage space is usually very large on the receiver
side (e.g. several GB).
The user-perceived media quality has two measures, called \emph{start-up delay}
and \emph{starvation}. As explained earlier, the media player
aims to avoid starvation by prefetching packets.
However, this action might incur a long waiting time.
In what follows, we reveal the relationship between the start-up delay
and the starvation behavior, taking the file size into consideration.
\subsection{A Packet Level Model}
We present a packet level model to investigate the starvation behavior.
Denote by $\lambda$ the Poisson
arrival rate of the packets, and by $\mu$ the Poisson service rate. Define $\rho:=\lambda/\mu$ as the traffic intensity.
In a non-empty M/M/1 queue with everlasting arrivals, an event (either an arrival
or a departure) occurs at rate $\lambda + \mu$. This event is an arrival
with probability $p$, or an end of service
with probability $q$, where
\begin{eqnarray}
p = \frac{\lambda}{\lambda + \mu} = \frac{\rho}{1+\rho}; \;\;\;\;\; q = \frac{\mu}{\lambda + \mu} = \frac{1}{1+\rho}. \nonumber
\end{eqnarray}
The buffer is initially empty. Let $T_1$ be the start-up delay, in which
$x_1$ packets are accumulated in the buffer.
Once the service begins, the probability of starvation is given
by Theorem \ref{theorem:nobuffering}.
\begin{theorem}
\label{theorem:nobuffering}
For the initial queue length $x_1$ and the total size $N$ of a file, the probability of
starvation is given by:
\begin{eqnarray}
P_{s} = \sum_{k=x_1}^{N-1} \frac{x_1}{2k-x_1}\binom{2k-x_1}{k-x_1}p^{k-x_1}(1-p)^k.
\label{eq:ballot}
\end{eqnarray}
\end{theorem}
\noindent \textbf{Proof:}
Before proving this theorem, we first recall the classical Ballot theorem.
\noindent \textbf{Ballot Theorem:} {\em In a ballot, candidate A scores $N_A$
votes and candidate B scores $N_B$ votes, where $N_A > N_B$. Assuming
that while counting, all orderings (i.e. all sequences of A's and B's)
are equally likely, the probability that A is ahead of B throughout
the counting is $\frac{N_A - N_B}{N_A + N_B}$.}
We define $E_k$ to be an event that the buffer becomes empty
for the first time when the service of packet $k$ is finished.
It is obvious that all the events $E_k, k=1,\cdots,N,$ are mutually exclusive.
Then, the event of starvation is the union $\cup_{k=x_1}^{N-1} E_k$.
This union of events excludes $E_N$ because the empty buffer seen by packet $N$
is not a starvation. When the buffer is empty at the end
of the service of the $k^{th}$ packet, the number of arrivals is $k - x_1$
after the prefetching process. The probability of having $k-x_1$ arrivals
and $k$ departures is computed from a binomial distribution, $\binom{2k-x_1}{k-x_1}p^{k-x_1}(1-p)^k$.
We next find the necessary and sufficient condition of the event $E_k$.
Consider a backward time axis that starts from the instant when the buffer becomes empty for the
first time. Along this axis, the number of departed packets must always exceed the number of arrived packets;
that is, among the last $m$ events (for any $m\leq 2k-x_1$),
the number of packets that have been played is always greater than the number of arrivals.
Otherwise, an empty buffer would already have occurred before the $k^{th}$ packet is served.
As a result, the Ballot Theorem can be applied.
According to the Ballot theorem, the probability of event $E_k$ is computed by
$\frac{x_1}{2k-x_1}\binom{2k-x_1}{k-x_1}p^{k-x_1}q^k$. Therefore, the probability
of starvation, $P_{s}$, is the probability of the union $\cup_{k=x_1}^{N-1} E_k$, given by
eq.\eqref{eq:ballot}.
\hspace*{\fill} \rule{1.8mm}{2.5mm}
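The sum in eq.\eqref{eq:ballot} contains binomial coefficients that overflow floating-point arithmetic for realistic file sizes, so it is convenient to evaluate each term in log-space. The following Python sketch is our own illustration of Theorem \ref{theorem:nobuffering}; the function name and the log-space evaluation are not part of the paper:
\begin{verbatim}
# Our own sketch; not from the paper.
from math import lgamma, log, exp

def starvation_probability(N, x1, rho):
    """Ballot-theorem starvation probability P_s of Theorem 1."""
    p = rho / (1.0 + rho)
    q = 1.0 - p
    Ps = 0.0
    for k in range(x1, N):
        # log of x1/(2k-x1) * C(2k-x1, k-x1) * p^(k-x1) * q^k
        log_binom = (lgamma(2*k - x1 + 1) - lgamma(k - x1 + 1)
                     - lgamma(k + 1))
        Ps += exp(log(x1) - log(2*k - x1) + log_binom
                  + (k - x1)*log(p) + k*log(q))
    return Ps

# e.g. starvation_probability(N=500, x1=20, rho=0.95)
\end{verbatim}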
\begin{figure*}[!htb]
\includegraphics[width=5in]{samplepath.eps}
\centering
\caption{A path with $j$ starvations}
\label{fig:samplepath}
\vspace{-0.3cm}
\end{figure*}
The starvation event may happen more than once during the file transfer. We are particularly
interested in the probability distribution of the number of starvations, given a finite file size $N$. The maximum
number of starvations is $J=\lfloor\frac{N}{x_1}\rfloor$ where $\lfloor\cdot\rfloor$
is the floor of a real number. We define a \emph{path} as a complete sequence of
packet arrivals and departures. The probability of a path depends
on the number of starvations. We illustrate a typical path with $j$ starvations in Figure \ref{fig:samplepath}.
To carry out the analysis, we start from the event that the first starvation takes place. Denote
by $k_l$ the $l^{th}$ departure of a packet that sees an empty queue. We notice that the path
can be decomposed into three types of mutually exclusive events as follows:
\begin{itemize}
\item Event $\mathcal{E}(k_1)$: the buffer becoming empty for the first time in the entire path.
\item Event $\mathcal{S}_l(k_l,k_{l+1})$: the empty buffer after the service of packet $k_{l+1}$ given
that the previous empty buffer happens at the departure of packet $k_{l}$.
\item Event $\mathcal{U}_j(k_j)$: the last empty buffer observed after the departure of packet $k_j$.
\end{itemize}
Obviously, a path with $j$ starvations is composed of a succession of events
$$\mathcal{E}(k_1), \mathcal{S}_1(k_1, k_2), \mathcal{S}_2(k_2, k_3), \cdots, $$
$$\mathcal{S}_{j-2}(k_{j-2}, k_{j-1}), \mathcal{S}_{j-1}(k_{j-1}, k_{j}), \mathcal{U}_j(k_j).$$
\noindent Let $P_{\mathcal{E}(k_1)}$, $P_{\mathcal{S}_l(k_l, k_{l+1})}$ and $P_{\mathcal{U}_j(k_j)}$ be
the probabilities of events $\mathcal{E}(k_1)$, $\mathcal{S}_l(k_l, k_{l+1})$ and $\mathcal{U}_j(k_j)$
respectively. The main difficulty in analyzing the probability mass function is that the media player
pauses until $x_1$ packets are accumulated whenever a starvation occurs.
In what follows, we analyze the probabilities of these events step by step.
The event $\mathcal{E}(k_1)$ can happen
after the departure of packet $k_1 \in [x_1, N-1]$. According to the proof of Theorem \ref{theorem:nobuffering},
the probability distribution of event $\mathcal{E}(k_1)$ can be expressed as
\begin{eqnarray}
P_{\mathcal{E}(k_1)} := \left\{\begin{matrix}
0 &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\textrm{ if } k_1 < x_1 \textrm{ or } k_1 = N ;\\
\frac{x_1}{2k_1-x_1}\binom{2k_1-x_1}{k_1-x_1}p^{k_1-x_1}q^{k_1} &&\textrm{ otherwise }.
\end{matrix}\right.
\label{eq:probeventE}
\end{eqnarray}
The first starvation cannot happen at the departure of the first $(x_1-1)$ packets, and cannot happen
after all $N$ packets have been served. We next solve the probability distribution of the event
$\mathcal{U}_j(k_j)$. Suppose that there are $j$ starvations after the service of packet $k_j$.
The extreme case is that these $j$ starvations take place consecutively. Thus,
$k_j$ should be greater than $jx_1-1$; otherwise there cannot be $j$ starvations. If $k_j$
is no less than $N-x_1$, the media player resumes playback only after all the remaining $N-k_j$ packets are stored
in the buffer, so no starvation can appear afterwards. In the remaining cases,
the event $\mathcal{U}_j(k_j)$ is equivalent to the event that no starvation happens after the
service of packet $k_j$. We can take the complement of starvation
probability as the probability of no starvation. Hence, the probability distribution of event
$\mathcal{U}_j(k_j)$ is given by
\begin{eqnarray}
P_{\mathcal{U}_j(k_j)} := \left\{\begin{matrix}
\!\!\!0, \;\;\;\;\;\;\textrm{ if } k_j < jx_1 \textrm{ or } k_j=N;\\
1, \;\;\;\;\textrm{ if } N-x_1 \leq k_j < N ;\\
1 - \sum^{N-k_j-1}_{m=x_1}\frac{x_1}{2m-x_1}\binom{2m-x_1}{m}p^{m-x_1}q^m, \\
\textrm{ otherwise }.
\end{matrix}\right.
\label{eq:probeventU}
\end{eqnarray}
\noindent Denote by $P_s(j)$ the probability of having $j$ starvations. The probability
$P_s(0)$ can be obtained from Theorem \ref{theorem:nobuffering} directly. For the case
with one starvation, $P_s(1)$ is solved by
\begin{eqnarray}
P_s(1) = \sum^{N}_{i=1} P_{\mathcal{E}(i)} P_{\mathcal{U}_1(i)} = \mathbf{P}_{\mathcal{E}} \cdot \mathbf{P}_{\mathcal{U}_1}^{T}
\label{eq:onestarvation}
\end{eqnarray}
\noindent where $\mathbf{P}_{\mathcal{E}}$ is the row vector of $P_{\mathcal{E}(i)}$,
and $\mathbf{P}_{\mathcal{U}_1}$ is the row vector of $P_{\mathcal{U}_1(i)}$, for $i=1,2,\cdots, N$.
To compute the probability of having more than one starvation, we need to find the probability of
event $\mathcal{S}_l(k_l,k_{l+1})$ beforehand.
Solving $P_{\mathcal{S}_l(k_l, k_{l+1})}$
is non-trivial because the probability of this event depends on the remaining file size
and the number of starvations. After packet $k_l$ is served, the $l^{th}$ starvation is observed.
It is clear that $k_l$ should not be less than $lx_1$ in order to have $l$ starvations.
Given that the buffer is empty after serving packet $k_l$,
the $(l+1)^{th}$ starvation cannot happen at $k_{l+1} \in [k_l+1, k_l+x_1-1]$. Since there are $j$ starvations
in total, the $(l+1)^{th}$ starvation must satisfy $k_{l+1} < N-(j-l-1)x_1$.
We next compute the remaining case that the $l^{th}$ and the $(l+1)^{th}$ starvations happen
after packets $k_l$ and $k_{l+1}$ are served. Then, there are $(k_{l+1}-k_l)$ departures, and
$(k_{l+1}-k_l-x_1)$ arrivals after the prefetching process. According to the Ballot theorem, the conditional probability
that no starvation occurs between the departure of packet $(k_l{+}1)$ and that of packet $k_{l+1}$
is $\frac{x_1}{2k_{l+1}-2k_l-x_1}$. Therefore, we can express $P_{\mathcal{S}_l(k_l, k_{l+1})}$ as
\begin{eqnarray}
\left\{\begin{matrix}
\frac{x_1}{2k_{l+1}-2k_l-x_1}\binom{2k_{l+1}-2k_l-x_1}{k_{l+1}-k_l-x_1} p^{k_{l+1}-k_l-x_1}q^{k_{l+1}-k_l}, \\
\;\;\textrm{ if } k_l \geq lx_1, k_l+x_1 \leq k_{l+1} < N-(j-l-1)x_1 ;\\
0, \;\;\;\;\;\;\;\;\textrm{ otherwise }.
\end{matrix}\right.
\label{eq:probeventS}
\end{eqnarray}
\noindent We denote by $\mathbf{P}_{\mathcal{S}_l}$ the matrix of $P_{\mathcal{S}_l(k_l, k_{l+1})}$ for $k_l, k_{l+1} \in [1,N]$.
Here, $\mathbf{P}_{\mathcal{S}_l}$ is an upper triangular matrix in which all the elements of the first $(lx_1-1)$ rows and the last $x_1$ rows are 0.
The probability of having $j (j\geq 2)$ starvations is given by
{\small
\begin{eqnarray}
P_s(j) \!\!\!&=&\!\!\! \sum^{N}_{k_1=1} \sum_{k_2=1}^{N}\cdots \sum_{k_{j-1}=1}^{N}\sum_{k_j=1}^{N} P_{\mathcal{E}(k_1)}\cdot P_{\mathcal{S}_1(k_1, k_{2})} \cdots \nonumber\\
\!\!\!&&\!\!\! P_{\mathcal{S}_{j-1}(k_{j-1}, k_{j})}\cdot P_{\mathcal{U}_j(k_j)}
= \mathbf{P}_{\mathcal{E}}\Big(\prod_{l=1}^{j-1} \mathbf{P}_{\mathcal{S}_l}\Big) \mathbf{P}_{\mathcal{U}_j}^{T}.
\label{eq:jstarvation}
\end{eqnarray}
}
\noindent Since the starvation event takes non-negative integer values, we can write the probability generating function $G(z)$ by
\begin{eqnarray}
G(z) = E(z^j) = \sum_{j=0}^{J} P_s(j) z^j = \sum_{j=0}^{J}\mathbf{P}_{\mathcal{E}}\Big(\prod_{l=1}^{j-1} \mathbf{P}_{\mathcal{S}_l}\Big) \mathbf{P}_{\mathcal{U}_j}^{T} \cdot z^j.
\label{eq:generatingfunc}
\end{eqnarray}
\noindent In $\mathbf{P}_{\mathcal{E}}, \mathbf{P}_{\mathcal{S}_l}$ and $\mathbf{P}_{\mathcal{U}_j}$,
the binomial distributions can be approximated by the corresponding Normal distributions
with negligible error (see Appendix).
The Gaussian approximation significantly reduces
the computational complexity of the binomial distributions. The approximated probability of
no starvation, computed as the complement of eq.\eqref{eq:ballot}, clearly has complexity $O(N)$.
The probability of having only one starvation is a product of two vectors, which
also has complexity $O(N)$. If there are exactly two starvations, we need to compute
the product of two vectors and one matrix, which has complexity order $O(N^2)$.
When $j\geq 3$, the computation of
$P_s(j)$ involves the product of two matrices. In general, multiplying two matrices
has complexity $O(N^{3})$, so that the direct computation of eq.\eqref{eq:generatingfunc}
is extremely difficult for large $N$. Recall that $\mathbf{P}_{\mathcal{S}_l}$ satisfies the following properties:
i) it is an upper triangular matrix, ii) its first $lx_1-1$ rows are 0, iii) its last $x_1$ rows are 0, and iv)
$\mathbf{P}_{\mathcal{S}_l}(k_l, k_{l+1}) = \mathbf{P}_{\mathcal{S}_l}(k_l+1, k_{l+1}+1)$
whenever these entries are nonzero. These properties allow us to compute the product of the upper triangular
matrices with much less effort. Due to properties i) and iv), the product of two upper triangular matrices
has complexity order $O(N^2)$. A detailed analysis is provided in the Appendix.
When there are $j (j\geq 3)$ starvations, the number of matrix multiplications
is $j-2$, resulting in a complexity order $O(N^{2(j-2)})$ for multiplying all the matrices.
To obtain $P_s(j)$, we still need to compute the product of the
vector $P_{\mathcal{E}(k_1)}$ and the matrix. To sum up,
the total complexity is $O(N^{2j-2})$ for $j\geq 2$.
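To make the construction concrete, the following sketch (our own) assembles the vector and matrix entries of eqs.\eqref{eq:probeventE}, \eqref{eq:probeventU} and \eqref{eq:probeventS} as dense \texttt{numpy} arrays and evaluates eq.\eqref{eq:jstarvation} directly; it does not exploit the structural properties i)--iv) above, so each matrix product costs $O(N^3)$:
\begin{verbatim}
# Our own sketch; not from the paper. Assumes j >= 1.
import numpy as np
from math import lgamma, log, exp

def ballot_term(x1, a, p, q):
    # x1/(2a-x1) * C(2a-x1, a-x1) * p^(a-x1) * q^a, in log-space;
    # note C(2a-x1, a-x1) equals C(2a-x1, a) by symmetry
    if a < x1:
        return 0.0
    lb = lgamma(2*a - x1 + 1) - lgamma(a - x1 + 1) - lgamma(a + 1)
    return exp(log(x1) - log(2*a - x1) + lb
               + (a - x1)*log(p) + a*log(q))

def prob_j_starvations(N, x1, rho, j):
    p = rho / (1 + rho); q = 1 - p
    # row vector P_E(k), k = 1..N
    PE = np.array([ballot_term(x1, k, p, q) if k < N else 0.0
                   for k in range(1, N + 1)])
    def PU(jj):                       # row vector P_{U_jj}(k), k = 1..N
        u = np.zeros(N)
        for k in range(1, N + 1):
            if k < jj*x1 or k == N:
                continue              # entry stays 0
            elif k >= N - x1:
                u[k - 1] = 1.0
            else:
                u[k - 1] = 1.0 - sum(ballot_term(x1, m, p, q)
                                     for m in range(x1, N - k))
        return u
    if j == 1:
        return float(PE @ PU(1))
    def PS(l):                        # matrix P_{S_l}(k_l, k_{l+1})
        S = np.zeros((N, N))
        for kl in range(max(l*x1, 1), N + 1):
            for kn in range(kl + x1, N - (j - l - 1)*x1):
                S[kl - 1, kn - 1] = ballot_term(x1, kn - kl, p, q)
        return S
    v = PE
    for l in range(1, j):             # P_E * P_S1 * ... * P_S(j-1)
        v = v @ PS(l)
    return float(v @ PU(j))
\end{verbatim}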
\noindent \textbf{Asymptotic Property:}
We want to know whether the starvation event yields simple implications
as the file size $N$ approaches $\infty$. The
asymptotic behavior of the starvation probability is given by
\begin{eqnarray}
\lim_{N\rightarrow \infty}P_{s} := \left\{\begin{matrix}
1 \;\;\; &&\textrm{ if } \rho < 1 ;\\
\exp\big(\frac{x_1(1-2p)}{2pq}\big) \;\;\; &&\textrm{ otherwise }.
\end{matrix}\right.
\label{eq:probstarv_asymp}
\end{eqnarray}
\noindent The detailed analysis can be found in the Appendix.
The asymptotic analysis reveals that when $\rho < 1$, the probability of starvation
is independent of the start-up threshold. In this situation, it is necessary to know
how frequently the starvation event happens. Here, we compute
the average time interval between two starvations. Let $T_s$ be the
duration of the starvation interval. Its expectation $E[T_s]$ is the
expected time to prefetch $x_1$ packets plus the
expected busy period of an M/M/1 queue with $x_1$ customers in the beginning \cite{JAP96:Liu}, i.e.
\begin{eqnarray}
E[T_s] = \frac{x_1}{\lambda(1-\rho)}.
\end{eqnarray}
\subsection{Extension to Discrete-time Systems}
In general, the playback rate of video streaming has a much smaller
variance than the arrival rate. Hence, the playback of streaming
packets is sometimes regarded as a time-slotted process (e.g. \cite{TMM08:Liang,JSAC11:ParandehGheibi})
where only one packet is served at the beginning of a time slot.
We consider a playout buffer modeled as an M/D/1 queue.
Denote by $d$ the duration of a slot.
In this subsection, we introduce a discrete Ballot theorem named
Tak\'{a}cs Ballot Theorem.
\begin{theorem} (Tak\'{a}cs Ballot Theorem \cite{Takacs})
\label{theorem:takacs}
If $X(1), X(2), \cdots$, $X(l)$ are cyclically interchangeable r.v.s taking on nonnegative integer
values summing to $k$, then
\begin{eqnarray}
\mathbb{P}\Big\{\sum_{s=1}^{t}X(s) < t, \forall \; t\in [1, l]\Big\} = \frac{[l-k]^+}{l}.
\end{eqnarray}
\end{theorem}
The Tak\'{a}cs Ballot Theorem gives the probability that the number of departures
is larger than that of arrivals in all $l$ slots.
If the arrival process $\{X(s)\}$ is Poisson, the $X(s)$ are i.i.d. across slots, and thus cyclically interchangeable.
Suppose that the starvation event happens after the media player has served $l$ packets ($l>x_1$).
The total number of arrivals is then $l{-}x_1$.
We create a backward time axis on which the starvation event happens at slot 1. Along this axis, the number
of departures is always greater than that of arrivals; otherwise, the starvation event
would have already taken place. Hence, according to the Tak\'{a}cs Ballot theorem, the probability of the departures
leading the arrivals is
\begin{eqnarray}
\mathbb{P}\big\{\sum_{s=1}^{t}X(s) < t, \; \forall t\in [1, l]\big\} = \frac{x_1}{l}.
\end{eqnarray}
Therefore, the probability that the first starvation takes place after the service of
the $l^{th}$ packet (i.e. the starvation event happens at slot $(l{+}1)$) is
\begin{eqnarray}
P_s(l) = \frac{x_1}{l} \cdot \mathbb{P}\{\sum_{s=1}^{l} X(s)= l{-}x_1\} .
\label{eq:takacs_starvprob}
\end{eqnarray}
For the Poisson process $\{X(s)\}$, the distribution of the total packet arrival is given by
\begin{eqnarray}
\mathbb{P}\{\sum_{s=1}^{l} X(s)= l{-}x_1\} = \frac{(\lambda ld)^{l{-}x_1} }{(l{-}x_1)!}\exp(-\lambda ld).
\label{eq:takacs_poisson}
\end{eqnarray}
Given the file size $N$ and the prefetching threshold $x_1$, starvation might happen upon the departure of packets $x_1$ through $N-1$.
Then the starvation probability is obtained by
\begin{eqnarray}
P_{s} = \sum_{l=x_1}^{N-1} \frac{x_1}{l}\frac{(\lambda ld)^{l{-}x_1} }{(l{-}x_1)!}\exp(-\lambda ld).
\label{eq:takacs_ballot}
\end{eqnarray}
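Eq.\eqref{eq:takacs_ballot} can be evaluated directly; the sketch below is our own, with the Poisson terms computed in log-space for numerical stability:
\begin{verbatim}
# Our own sketch; not from the paper.
from math import lgamma, log, exp

def starvation_probability_slotted(N, x1, lam, d):
    # P_s = sum_{l=x1}^{N-1} (x1/l) * Poisson(l - x1; lam*l*d)
    Ps = 0.0
    for l in range(x1, N):
        mean = lam * l * d
        log_pois = (l - x1)*log(mean) - lgamma(l - x1 + 1) - mean
        Ps += (x1 / l) * exp(log_pois)
    return Ps
\end{verbatim}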
We next show how the p.g.f. of starvation events can be derived using the Tak\'{a}cs Ballot theorem.
The path with $j$ starvations is the same as that in Fig.\ref{fig:samplepath}. With certain abuse of notations,
we reuse $\mathcal{E}(k_1)$, $\mathcal{U}_j(k_j)$ and $\mathcal{S}_l(k_l,k_{l+1})$ to denote
the first, the last and the other starvation events.
According to eq.\eqref{eq:takacs_poisson}, we have
\begin{eqnarray}
P_{\mathcal{E}(k_1)} :=
\left\{\begin{matrix}
0 &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\textrm{ if } k_1 < x_1 \textrm{ or } k_1 = N ;\\
\frac{x_1}{k_1}\frac{(\lambda k_1d)^{k_1{-}x_1} }{(k_1{-}x_1)!}\exp(-\lambda k_1d) &&\textrm{ otherwise }.
\end{matrix}\right.
\label{eq:probeventE_cbr}
\end{eqnarray}
Since there exist $j$ starvations in total, the last starvation event cannot happen at the departure of a packet with index less than $jx_1$. Given that the last starvation happens as soon as the $k_j^{th}$ packet is served, the probability of no
starvation afterwards can also be solved using eq.\eqref{eq:takacs_poisson}.
\begin{eqnarray}
P_{\mathcal{U}_j(k_j)} := \left\{\begin{matrix}
\!\!\!0, \;\;\;\;\;\;\textrm{ if } k_j < jx_1 \textrm{ or } k_j=N;\\
1, \;\;\;\;\textrm{ if } N-x_1 \leq k_j < N ;\\
1{-} \sum^{N{-}k_j{-}1}_{s=x_1}\frac{x_1}{s}\frac{(\lambda sd)^{s{-}x_1} }{(s{-}x_1)!}\exp({-}\lambda sd), \\
\textrm{ otherwise }.
\end{matrix}\right. \nonumber
\label{eq:probeventU_cbr}
\end{eqnarray}
When the $l^{th}$ and the $(l{+}1)^{th}$ starvation events appear at the departures of
packets $k_l$ and $k_{l{+}1}$, the probability $P_{\mathcal{S}_l(k_l, k_{l+1})}$ is given by
\begin{eqnarray}
\left\{\begin{matrix}
\frac{x_1}{k_{l{+}1}-k_{l}}\frac{(\lambda (k_{l{+}1}-k_{l})d)^{(k_{l{+}1}-k_{l}{-}x_1)}}{(k_{l{+}1}-k_{l}{-}x_1)!}
\exp({-}\lambda (k_{l{+}1}{-}k_{l})d)\\
\;\;\textrm{ if }k_l \geq lx_1, k_l+x_1 \leq k_{l+1} < N-(j-l-1)x_1 ;\\
0, \;\;\;\;\;\;\;\;\textrm{ otherwise }.
\end{matrix}\right.
\label{eq:probeventS_cbr}
\end{eqnarray}
Then, the p.g.f. of starvation events can be solved using eq.\eqref{eq:generatingfunc} in the same way.
\section{Starvation Analysis Via a Recursive Approach}
\label{sec:recursive}
In this section, we present a recursive approach to compute
the starvation probability based on \cite{TIT93:Citon}.
Compared with the one using Ballot theorem,
the recursive approach has less computational complexity, though
without an explicit expression.
\subsection{Probability of Starvation}
The probability of starvation and the p.g.f. can be analyzed all at once.
However, we compute them separately because the analysis of
the starvation probability provides an easier route to understanding
this approach.
We denote by $P_i(n)$ the probability of starvation with a file of $n$ packets,
given that there are $i$ packets in the system just before the arrival epoch
of the first packet of this file. In the original system, our purpose
is to obtain the starvation probability of a file with the size $N$ when
$x_1$ packets are prefetched before the service begins. This corresponds to
$P_i(n)$ with $n=N-x_1$ and $i=x_1-1$. \emph{Here, the expression $i=x_1-1$
means that the service starts when the packet $x_1$ sees $x_1-1$ packets accumulated
in the buffer.} To compute $P_i(n)$, we will introduce recursive equations.
We define a quantity $Q_i(k)$, $i=0,1,\cdots, n$, $0\leq k\leq i$, which is the probability
that $k$ packets out of $i$ leave the system during an inter-arrival period.
This probability is equivalent to the probability of
$k$ Poisson arrivals with rate $\mu$ during an exponentially distributed period
with mean $1/\lambda$. According to \cite{Book:Papoulis}, we obtain
\begin{eqnarray}
Q_i(k) &=& \rho \big(\frac{1}{1+\rho}\big)^{k+1} = pq^k , \;\; 0\leq k \leq i-1, \\
Q_i(i) &=& \big(\frac{1}{1+\rho}\big)^{i} = q^i.
\end{eqnarray}
To carry out the recursive calculation, we start from the case $n=1$.
\begin{eqnarray}
P_i(1) = 0, \;\;\; \forall i \geq 1.
\end{eqnarray}
When the file size is 1 and the only packet observes a non-empty queue, the probability
of starvation is obviously 0. If $i$ is 0, starvation happens for sure, thus yielding
\begin{eqnarray}
P_0(n) = 1, \;\;\; \forall n.
\end{eqnarray}
\noindent For $n\geq 2$, we have the following recursive equations:
\begin{eqnarray}
P_i(n) = \sum_{k=0}^{i+1}Q_{i+1}(k) P_{i+1-k}(n-1), \;\;\;0\leq i \leq N-1.
\label{eq:recursive1}
\end{eqnarray}
\noindent We explain \eqref{eq:recursive1} as follows. When the first packet
of the file arrives and sees $i$ packets in the system, starvation does not happen immediately.
However, starvation might happen during the service of the remaining $n-1$ packets.
Upon the arrival of the next packet, $k$ packets out of $i+1$ leave the system
with probability $Q_{i+1}(k)$. We next add constraints to the recursive equation \eqref{eq:recursive1}
for a file of size $N$. Since the total number of packets is $N$, the
starvation probability must satisfy $P_i(n) = 0$ for $i+n > N$.
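The recursion \eqref{eq:recursive1}, together with the boundary conditions above, can be evaluated by dynamic programming over a table indexed by the file size $n$ and the initial queue length $i$; the following sketch (the function name and the table layout are ours) returns $P_{x_1-1}(N-x_1)$:
\begin{verbatim}
# Our own sketch; not from the paper.
import numpy as np

def starvation_probability_recursive(N, x1, rho):
    p = rho / (1 + rho); q = 1 - p
    nmax = N - x1
    P = np.zeros((nmax + 1, N))   # P[n, i] = P_i(n), i = 0..N-1
    P[1:, 0] = 1.0                # P_0(n) = 1: empty queue => starvation
    for n in range(2, nmax + 1):
        for i in range(1, N - n + 1):   # P_i(n) = 0 for i + n > N
            # Q_{i+1}(k) = p q^k for k <= i, and q^{i+1} for k = i+1
            total = q**(i + 1) * P[n - 1, 0]
            for k in range(i + 1):
                total += p * q**k * P[n - 1, i + 1 - k]
            P[n, i] = total
    return P[nmax, x1 - 1]        # i = x1 - 1, n = N - x1
\end{verbatim}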
\subsection{P.G.F. of Starvations}
To compute the p.g.f. of starvations, we use the same recursive approach, despite
the more complicated structure. With certain reuse of notation, we
denote by $P_i(j,n)$ the probability of $j$ starvations for a file of size $n$, given
that the first packet of the file sees $i$ packets in the system upon its arrival.
Our final purpose is to compute the probability of starvation for a file of size $N$.
It can be obtained from $P_i(j,n)$ with $i=x_1-1$ and $n=N-x_1$.
In order to compute $P_i(j,n)$ recursively, we provide the initial conditions first:
\begin{eqnarray}
P_{i}(j,1) = \left\{\begin{matrix}
0 \; &&\forall i=1,2,\cdots, N-1, \textrm{ and } j\geq 1;\\
1 \; &&\forall i=1,2,\cdots, N-1, \textrm{ and } j = 0,
\end{matrix}\right.
\label{eq:recursive2}
\end{eqnarray}
\noindent and
\begin{eqnarray}
P_{0}(j,1) = \left\{\begin{matrix}
0 \; && j = 0 \textrm{ or } j\geq 2;\\
1 \; && j =1.
\end{matrix}\right.
\label{eq:recursive3}
\end{eqnarray}
\noindent The equation \eqref{eq:recursive2} means that
the probability of no starvation is 1 conditioned on $i\geq 1$ and $n=1$.
Thus, the probability of having one or more starvations is obviously 0 if
the only packet sees a nonempty system. The equation \eqref{eq:recursive3}
reflects that the starvation happens for sure when the only packet
observes an empty queue. However, there can only be one starvation event
because $n=1$. Another practical constraint is
\begin{eqnarray}
P_{i}(j,n) = 0, \;\;\; \textrm{ if } i+n \geq N
\label{eq:recursive4}
\end{eqnarray}
\noindent because of the finite file size $N$.
To compute $P_{i}(j,n)$, we need to know what will happen if
the buffer is empty, i.e. $i=0$. One intuitive observation is
\begin{eqnarray}
P_{0}(0,n) = 0, \;\;\; \forall \;1\leq n\leq N-b;
\label{eq:recursive5}
\end{eqnarray}
because an empty queue means at least one starvation event.
For a more general probability $P_{0}(j,n)$, we begin with the case
$j=1$. If only one starvation event exists, we have
\begin{eqnarray}
P_{0}(1,n) = 1, \;\;\; \forall \;1\leq n\leq b,
\label{eq:recursive5_1}
\end{eqnarray}
\noindent where $b:=x_1-1$ denotes the prefetching threshold.
If $n>b$, $b$ packets will be prefetched. Thus, the remaining file
size is $n-b$. We see $b$ packets in the system upon the arrival
of the first packet in the remaining file.
Given that the only starvation
event has already taken place, there will be no future starvations. Therefore,
the following equality holds,
\begin{eqnarray}
P_{0}(1,n) = P_{b}(0,n-b), \;\;\; \forall \;b<n\leq N-b.
\label{eq:recursive6}
\end{eqnarray}
\noindent Using a similar method, we can solve $P_{0}(j,n)$ for $j>1$.
However, the property of $P_{0}(j,n)$ with $j>1$ is quite different:
\begin{eqnarray}
P_{0}(j,n) = 0, \;\;\; \forall \; j>1 \textrm{ and } 1\leq n\leq b.
\label{eq:recursive7}
\end{eqnarray}
This means that the probability of having more than one starvation is 0 if
the file size is no larger than $b$. If $n$ is greater than $b$,
then $b$ packets are prefetched, leaving $n-b$ packets in the remaining file.
The remaining $n-b$ packets encounter $j-1$ starvations, given that the
first packet sees $b$ packets in the system upon arrival, i.e.
\begin{eqnarray}
P_{0}(j,n) = P_{b}(j-1,n-b), \;\;\; \forall \; j>1 \textrm{ and } n > b.
\label{eq:recursive8}
\end{eqnarray}
So far, we have computed a critical quantity $P_0(j,n)$, the probability of meeting
an empty buffer. Next, we construct recursive equations to compute $P_i(j,n)$
as follows:
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\!\!&&P_i(j,n) = \sum_{k=0}^{i+1}Q_{i+1}(k) P_{i+1-k}(j, n-1), \nonumber\\
\!\!\!\!\!\!\!\!&&=\sum_{k=0}^{i} pq^k P_{i+1-k}(j, n-1) + q^{i+1}P_{0}(j, n-1),
\label{eq:recursive9}
\end{eqnarray}
\noindent for $0\leq i \leq N-1$.
The eq.\eqref{eq:recursive9} contains two parts. The former
reflects the cases in which the next arrival sees a \emph{non-empty} queue.
The latter characterizes the transition of the system to a prefetching
process.
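For illustration, the following sketch (our own) assembles eqs.\eqref{eq:recursive2}--\eqref{eq:recursive9} into a single dynamic program over $(j,n,i)$; it assumes $x_1\geq 2$, so that $b\geq 1$:
\begin{verbatim}
# Our own sketch; not from the paper. Assumes x1 >= 2.
import numpy as np

def starvation_pmf(N, x1, rho, jmax):
    # returns P_i(j,n) at i = x1 - 1, n = N - x1, for j = 0..jmax
    p = rho / (1 + rho); q = 1 - p
    b = x1 - 1                 # packets prefetched after a starvation
    nmax = N - x1
    P = np.zeros((jmax + 1, nmax + 1, N))    # P[j, n, i]
    for j in range(jmax + 1):
        for n in range(1, nmax + 1):
            # i = 0: the empty queue triggers prefetching of b packets
            if n <= b:
                P[j, n, 0] = 1.0 if j == 1 else 0.0
            elif j >= 1:
                P[j, n, 0] = P[j - 1, n - b, b]
            if n == 1:
                if j == 0:     # P_i(0,1) = 1 for i >= 1
                    P[j, 1, 1:N-1] = 1.0
                continue
            for i in range(1, N - n):        # P_i(j,n) = 0, i + n >= N
                total = q**(i + 1) * P[j, n - 1, 0]
                for k in range(i + 1):
                    total += p * q**k * P[j, n - 1, i + 1 - k]
                P[j, n, i] = total
    return P[:, nmax, x1 - 1]

# e.g. starvation_pmf(N=200, x1=20, rho=0.95, jmax=3)
\end{verbatim}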
We are interested in how efficient the recursive method is. Hence, we present
the roadmap to compute $P_i(j,n)$ and its complexity:
\begin{itemize}
\item \textbf{Step 1:} Solving $P_i(0,2)$, for $i=1$ to $N-2$;
\item \textbf{Step 2:} Solving $P_i(0,n)$, for $i=1$ to $N-2$, and $n=3$ to $N-x_1+1$ based on \emph{Step 1};
\item \textbf{Step 3:} Incrementing $j$ by 1 and computing $P_i(j,n)$ based on \emph{Steps 1} and \emph{2}.
\end{itemize}
The complexity analysis is carried out from this roadmap. In \textbf{Step 1}, the computation
of $P_i(0,2)$ incurs up to $N$ summations for each $i$, resulting in at most $N^2$ sums in total.
\textbf{Step 2} computes $P_i(0,n)$ repeatedly for each $n$, and \textbf{Step 3} repeats
\textbf{Steps 1\&2} for each $j$. Therefore, the total complexity has order $O\big((j+1) N^3\big)$.
\noindent \textbf{Remark 1:} The complexity orders of the Ballot approach with Gaussian approximation
and the recursive approach are $O\big(N^{2j-2}\big)$ for $j\geq 2$ and $O\big((j+1) N^3\big)$ respectively.
When $j\geq 3$, the recursive approach may have lower computational complexity than the Ballot approach.
\subsection{ON/OFF Bursty Traffic}
In this subsection, we model the arrival process as an \emph{interrupted Poisson process (IPP)},
which is commonly used to characterize bursty and correlated arrivals. The source
may stay for relatively long durations in the ON and OFF states. The ON/OFF arrival model
also has direct applications. For example, the YouTube servers use a simple ON/OFF rate control
algorithm to transfer streaming packets to the users \cite{CCR11:Alcock}. Our objective is to understand
the interaction between the parameters of the arrival process and the probability of starvation.
\begin{figure}[!htb]
\centering
\includegraphics[width=2.5in]{onoffmode.eps}
\caption{Two-state Markov process to model bursty traffic}
\label{fig:onoff}
\end{figure}
We illustrate the bursty traffic model in figure \ref{fig:onoff} with the state transition
rates $\alpha$ and $\beta$.
We denote by $Q_i^{\mathrm{ON}}(k)$, $0\leq i\leq N-1$, $0\leq k\leq i$, the probability that $k$ packets
out of $i$ leave the system upon an arrival in the ON state (i.e. no arrivals occur during the OFF period).
According to \cite{TIT93:Citon}, the following proposition holds.
\begin{Proposition}\cite{TIT93:Citon}
\label{Proposition:no1}
The probability $Q_i^{\mathrm{ON}}(k)$ is expressed as
\begin{eqnarray}
Q_i^{\mathrm{ON}}(k) &=& c_1\big(\frac{1}{a_1}\big)^k + c_2\big(\frac{1}{a_2}\big)^k, \;\;\; 0\leq k \leq i-1, \nonumber\\
Q_i^{\mathrm{ON}}(i) &=& c_1\frac{(1/a_1)^i}{1-1/a_1} + c_2\frac{(1/a_2)^i}{1-1/a_2},
\end{eqnarray}
\noindent where $a_1$, $a_2$, $c_1$ and $c_2$ are solved by
\begin{eqnarray}
\Delta &=& (\lambda + \alpha + \beta)^2 - 4\lambda\beta, \nonumber\\
a_{1,2} &=& 1+ \frac{\lambda+\alpha+\beta}{2\mu} \pm \frac{\sqrt{\Delta}}{2\mu}, \nonumber\\
c_1 &=& \frac{\lambda(\beta+\mu)-\lambda\mu a_1}{a_1(a_2-a_1)}, \;\;c_2 = \frac{\lambda(\beta+\mu)-\lambda\mu a_2}{a_2(a_1-a_2)}.\nonumber
\end{eqnarray}
\end{Proposition}
We next show how the starvation probability $P_i(j,n)$ is obtained. The starvation
event can happen in both the ON and OFF states.
However, a starvation in the OFF state is equivalent to the event that
the first new arrival of the ON state sees an empty queue. Therefore, we can use
\eqref{eq:recursive9} to compute the p.g.f. of starvations with bursty arrivals,
simply replacing $Q_i(k)$ by $Q_i^{\mathrm{ON}}(k)$.
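Proposition \ref{Proposition:no1} translates directly into code; the sketch below (our own) returns the vector $Q_i^{\mathrm{ON}}(k)$, $k=0,\ldots,i$, and relies on $a_1\neq a_2$, which holds whenever $\Delta>0$:
\begin{verbatim}
# Our own sketch; not from the paper.
from math import sqrt

def q_on(i, lam, mu, alpha, beta):
    Delta = (lam + alpha + beta)**2 - 4*lam*beta
    a1 = 1 + (lam + alpha + beta)/(2*mu) + sqrt(Delta)/(2*mu)
    a2 = 1 + (lam + alpha + beta)/(2*mu) - sqrt(Delta)/(2*mu)
    c1 = (lam*(beta + mu) - lam*mu*a1) / (a1*(a2 - a1))
    c2 = (lam*(beta + mu) - lam*mu*a2) / (a2*(a1 - a2))
    Q = [c1*(1/a1)**k + c2*(1/a2)**k for k in range(i)]
    # k = i: all i packets leave before the next ON-state arrival
    Q.append(c1*(1/a1)**i/(1 - 1/a1) + c2*(1/a2)**i/(1 - 1/a2))
    return Q   # Q[k] = Q_i^{ON}(k), k = 0..i
\end{verbatim}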
\section{Fluid Model Analysis of Starvation Probability}
\label{sec:fluid}
So far we have studied the starvation behavior of a single file, which concerns
either the media servers or the users. In fact, the media servers are more
interested in QoE evaluation scaled to the large quantity of files they supply.
They cannot afford the effort of configuring a different start-up delay for each file.
In this section, we present a fluid analysis of starvation probability, given the
distribution of file size.
In the fluid model, the arrival and departure rates are deterministic.
Let $\lambda$ be the number of packet
arrivals \emph{per second}, and $\mu$ be the number of departures \emph{per second}.
Here, $\mu$ depends on the encoding rate that the media files use.
We focus on the setting $\mu \geq \lambda$ because no starvation will happen
with $\mu < \lambda$ in the fluid model. Let $x_1$ be the start-up threshold.
The start-up delay $T_1$ is simply computed by $x_1/\lambda$.
Once the media packets are played, the queue length decreases at a rate $\mu - \lambda$.
The time needed to empty the queue is thus $\frac{x_1}{\mu-\lambda}$. Let $N_p$
be the total number of packets that are served until a starvation happens,
\begin{eqnarray}
N_p = x_1\big( 1+ \frac{\lambda}{\mu-\lambda}\big ) = \frac{x_1 \mu}{\mu - \lambda}.
\end{eqnarray}
\noindent If the size of a file is less than $N_p$, there will be no starvation event.
The distribution of media file size depends on the types of contents. A measurement study
in \cite{IWQoS08:Cheng} reveals that the music, entertainment, comedy and sports videos
have different distributions of file size. In this section, we compare
the starvation probability of several commonly used distributions, given the start-up threshold.
Note that these distributions possess the same mean file size. We further assume
that the users are homogeneous so that $\lambda$ and $\mu$ are the same for
different types of file size distributions.
i) \emph{Exponential distribution:} Suppose that the file size $N$
follows an exponential distribution with parameter $\theta$. The probability
of starvation, $P_s^{(1)}$, is obtained by
\begin{eqnarray}
P_s^{(1)} = \textrm{Prob }(N > N_p) = \exp(- \frac{\theta x_1 \mu}{\mu - \lambda}).
\label{eq:prob_starv_exp}
\end{eqnarray}
ii) \emph{Pareto distribution:} It is frequently adopted to model the file size
distribution of Internet traffic using TCP protocol. Let $N_m$ be the minimum
possible value of the file size, and $\upsilon$ be the exponent in the
Pareto distribution. The probability of starvation is computed by
\begin{eqnarray}
P_s^{(2)} = \textrm{Prob }(N > N_p) = \left\{\begin{matrix}
\big(\frac{N_m(\mu-\lambda)}{\mu x_1}\big)^{\upsilon} \!\!\! && \forall N_m \leq \frac{x_1 \mu}{\mu - \lambda};\\
1 \!\!\!&& \textrm{ otherwise },
\end{matrix}\right.
\label{eq:prob_starv_pareto}
\end{eqnarray}
\noindent where the expectation of the Pareto distribution
is equal to that of the exponential distribution, i.e. $\frac{\upsilon N_m}{\upsilon-1} = \frac{1}{\theta}$.
iii) \emph{Log-Normal distribution:} We suppose that the file size
follows a log-normal distribution $\ln \mathcal{N}(\varrho,\sigma)$,
where $\varrho$ and $\sigma$ are the mean and the standard deviation
of the underlying normal distribution. Given that $N_p$ packets can be served without an interruption,
the starvation probability $P_{s}^{(3)}$ is computed by
\begin{eqnarray}
P_s^{(3)} = \textrm{Prob }(N > N_p) = \frac{1}{2} - \frac{1}{2}\mathrm{erf}\big[\frac{\ln \frac{x_1 \mu}{\mu - \lambda} -\varrho}{\sqrt{2}\sigma}\big],
\label{eq:prob_starv_lognormal}
\end{eqnarray}
\noindent where its expectation $\exp(\varrho+\frac{\sigma^2}{2})$ equals $\frac{1}{\theta}$.
Equations \eqref{eq:prob_starv_exp}, \eqref{eq:prob_starv_pareto} and
\eqref{eq:prob_starv_lognormal} show that the probability of starvation can be
controlled by setting $x_1$,
provided that the distribution of file size and the arrival and departure rates
are known in advance\footnote{Because the starvation probabilities $P_s^{(1)}$, $P_s^{(2)}$ and $P_s^{(3)}$ take complicated forms, we
will compare their dependency on $x_1$ numerically in section \ref{sec:simulation}.
Both the Pareto and Log-normal distributions have two parameters. In the comparison, we fix
one of them, and solve for the other according to the property of identical expectations.}.
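The three fluid-model probabilities admit a compact joint implementation; the sketch below (our own) evaluates eqs.\eqref{eq:prob_starv_exp}--\eqref{eq:prob_starv_lognormal}, and the example values in the comment follow the parameter matching used later in section \ref{sec:simulation}:
\begin{verbatim}
# Our own sketch; not from the paper.
from math import exp, log, erf, sqrt

def fluid_starvation(x1, lam, mu, theta, Nm, upsilon, varrho, sigma):
    Np = x1 * mu / (mu - lam)      # packets served before starvation
    Ps_exp = exp(-theta * Np)
    Ps_pareto = (Nm / Np)**upsilon if Nm <= Np else 1.0
    Ps_lognorm = 0.5 - 0.5*erf((log(Np) - varrho)/(sqrt(2)*sigma))
    return Ps_exp, Ps_pareto, Ps_lognorm

# e.g. fluid_starvation(x1=50, lam=0.95, mu=1.0, theta=1/2000.0,
#                       Nm=300, upsilon=1.1765, varrho=5.0, sigma=2.2807)
\end{verbatim}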
\section{Application to Streaming Service}
\label{sec:QoE}
This section presents several scenarios in streaming service in which
our analysis can be utilized to optimize the quality of
experience. Here, we focus on the M/M/1 system.
The cost of a user reflects the tradeoff between the start-up delay and the
starvation behaviors (either the starvation probability or the
continuous playback interval). We first let the starvation probability
be one of the QoE metrics.
Let $g(\cdot)$ be a strictly increasing, convex function of the expected start-up delay $E[T_1]$.
We denote by $C_1(x_1)$ the cost of a user watching the media stream,
\begin{eqnarray}
C_1(x_1) = P_s + \gamma g(E(T_1)),
\label{eq:QoE_metric1}
\end{eqnarray}
\noindent where $\gamma$ is a positive constant. A large $\gamma$ indicates that
the users are more sensitive to the start-up delay, while a smaller $\gamma$ means
a higher sensitivity to starvation. Our goal is to find the optimal start-up threshold
$x_1^*$ to minimize $C_1(x_1)$.
The choice of $C_1(x_1)$ should satisfy three basic principles. First, it is convex in $x_1$ so that only one
optimal threshold $x_1^*$ exists. Second, $C_1(x_1)$ is bounded even if $\rho$ is close to 1. Otherwise,
the configuration of $x_1$ is extremely sensitive to $\rho$. Third, though $x_1^*$ is not required to be
a decreasing function of the arrival rate $\lambda$, it cannot grow unbounded when $\lambda$ is large enough.
In what follows, we simply let
$g(E(T_1)):= (E(T_1))^2 = \big(\frac{x_1}{\lambda}\big)^2$.
We apply our models to optimize QoE in three scenarios: i) finite media streaming, ii) everlasting
media streaming and iii) file level. The scenarios i) and ii) are designed for a single stream, while
iii) is designed for a large number of streams. When the streaming file has a finite size, congested
bottlenecks such as a 3G base station or a WiFi access point can configure or suggest a start-up threshold before
the media stream is played. If the streaming file is large enough (e.g. a realtime sport channel), a user
can measure the arrival/service processes, and then configure the rebuffering delay locally. In the third scenario,
the media server can set one common start-up threshold for all the streams that it distributes.
To avoid malfunctions in realistic scenarios, a user can configure lower and upper bounds
for the start-up delay. Once the upper bound is reached, the media player starts to play regardless
of the prefetching threshold.
\subsection{Finite Media Size}
We hereby consider the adaptive buffering technique for a stream of finite size.
Combining eq.\eqref{eq:ballot} and eq.\eqref{eq:QoE_metric1} yields
{\small
\begin{eqnarray}
C_1(x_1) = \sum_{k=x_1}^{N-1} \frac{x_1}{2k-x_1}\binom{2k-x_1}{k-x_1}p^{k-x_1}(1-p)^k + \gamma (\frac{x_1}{\lambda})^2.
\label{eq:totalcost_finite}
\end{eqnarray}}
The starvation probability decreases and the start-up delay increases strictly as $x_1$ grows.
In the QoE optimization of finite media size, there does not exist a simple expression for the optimal
threshold $x_1^*$. To find $x_1^*$ numerically, we need to compare the costs of all possible thresholds.
The complexity order is low if the binomial distribution in eq.\eqref{eq:ballot}
is replaced by the Gaussian distribution. If a user can tolerate up to one starvation,
$P_s$ is replaced by the probability of more than one starvation, $1-(P_s(0)+P_s(1))$, where $P_s(1)$ is computed according to eq.\eqref{eq:onestarvation}.
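Such a direct search is easy to implement; the following sketch (our own, reimplementing eq.\eqref{eq:ballot} in log-space) scans all thresholds and returns the minimizer of eq.\eqref{eq:totalcost_finite}:
\begin{verbatim}
# Our own sketch; not from the paper.
from math import lgamma, log, exp

def ballot_ps(N, x1, rho):
    p = rho / (1 + rho); q = 1 - p
    return sum(exp(log(x1) - log(2*k - x1)
                   + lgamma(2*k - x1 + 1) - lgamma(k - x1 + 1)
                   - lgamma(k + 1) + (k - x1)*log(p) + k*log(q))
               for k in range(x1, N))

def optimal_threshold(N, lam, mu, gamma):
    rho = lam / mu
    costs = {x1: ballot_ps(N, x1, rho) + gamma*(x1/lam)**2
             for x1 in range(1, N)}
    return min(costs, key=costs.get)

# e.g. optimal_threshold(N=1000, lam=16, mu=25, gamma=1e-3)
\end{verbatim}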
\subsection{Infinite Media Size}
We revisit the user perceived streaming quality
in two scenarios: 1) $\rho \geq 1$ and 2) $\rho < 1$.
\noindent \textbf{Case 1: $\rho \geq 1$.} The starvation probability converges
to a fixed value when the file size approaches infinity.
We adopt the same QoE metric as that of the finite media size.
Note that $P_s$ can be directly replaced by its asymptotic value in eq.\eqref{eq:probstarv_asymp}.
Substituting $P_s$ into $C_{1}(x_1)$, we have the following cost function
\begin{eqnarray}
C_{1}(x_1) = \exp\big(\frac{x_1(1-2p)}{2pq}\big) + \gamma (\frac{x_1}{\lambda})^2.\nonumber
\end{eqnarray}
\noindent Letting the derivative $\frac{dC_{1}}{dx_1}$ be 0, we obtain
\begin{eqnarray}
x_1\cdot\exp\big(\frac{x_1(2p-1)}{2pq}\big) = \frac{(2p-1)\lambda^2}{4\gamma pq}. \nonumber
\end{eqnarray}
\noindent The optimal threshold $x_1^*$ is solved by
\begin{eqnarray}
x_1^* = LambertW\big((\frac{(2p-1)\lambda}{2 pq})^2\cdot \frac{1}{2\gamma}\big)\cdot\frac{2pq}{2p-1},\nonumber
\end{eqnarray}
\noindent where $LambertW(\cdot)$ is the Lambert W-function.
\noindent \textbf{Case 2: $\rho < 1$.} When $\rho < 1$, $P_s$
equals 1 for an infinite media size. If we adopted the QoE metric $C_{1}$ directly,
the optimal start-up delay would always be 0. This calls for a new QoE
metric for the case $\rho < 1$. Since starvation
happens many times, the continuous playback interval can serve as a measure of users' satisfaction.
We denote by $C_2(x_1)$ the cost function for an infinite media size with $\rho < 1$,
\begin{eqnarray}
C_2(x_1) := \exp(-\frac{\delta x_1}{\lambda(1-\rho)}) + \gamma (\frac{x_1}{\lambda})^2,\nonumber
\end{eqnarray}
\noindent where $\delta$ is a user-defined weighting factor on the expected playback duration
($\delta:=1$ in our numerical examples).
We differentiate $C_2(x_1)$ with respect to $x_1$ and set the derivative to 0; the optimal start-up threshold is then
\begin{eqnarray}
x_1^* = LambertW\big( \frac{\delta^2}{2\gamma(1-\rho)^2}\big)\cdot\frac{\lambda(1-\rho)}{\delta}.\nonumber
\end{eqnarray}
\subsection{Optimal QoE in the File Level}
Unlike the above QoE optimizations, the threshold $x_1$ for many files is configured
by the media server, instead of the users. The objective is still to
balance the tradeoff between the start-up delay and the starvation probability.
Here, only the exponentially distributed file size is considered.
We choose the cost function $C_{1}(x_1)$ that yields $C_{1}(x_1) = \exp(- \frac{\theta x_1 \mu}{\mu - \lambda})
+ \gamma \big(\frac{x_1}{\lambda}\big)^2$. The optimal threshold $x_1^*$ can be easily found as
\begin{eqnarray}
x_1^* = LambertW\big( (\frac{\theta\mu\lambda}{\mu-\lambda})^2\cdot\frac{1}{2\gamma}\big)\cdot\frac{\mu-\lambda}{\mu\theta}.\nonumber
\end{eqnarray}
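All three closed-form thresholds above can be evaluated with the principal real branch of the Lambert W-function, available as \texttt{scipy.special.lambertw}; the sketch below is our own, and its outputs should be rounded to integer packet counts:
\begin{verbatim}
# Our own sketch; not from the paper.
from scipy.special import lambertw

def x1_infinite_rho_gt_1(lam, mu, gamma):
    rho = lam / mu; p = rho/(1 + rho); q = 1 - p
    w = lambertw(((2*p - 1)*lam/(2*p*q))**2 / (2*gamma)).real
    return w * 2*p*q/(2*p - 1)

def x1_infinite_rho_lt_1(lam, mu, gamma, delta=1.0):
    rho = lam / mu
    w = lambertw(delta**2 / (2*gamma*(1 - rho)**2)).real
    return w * lam*(1 - rho)/delta

def x1_file_level(lam, mu, gamma, theta):
    w = lambertw((theta*mu*lam/(mu - lam))**2 / (2*gamma)).real
    return w * (mu - lam)/(mu*theta)
\end{verbatim}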
\section{Numerical Examples}
\label{sec:simulation}
\subsection{Starvation of M/M/1 Queue}
This set of experiments compares the analytical probability of starvations with
event-driven simulations in MATLAB. The M/M/1 queue is simulated up to 5000 times
with arrivals from files of different sizes. We deliberately consider four combinations
of parameters: $\rho = 0.95$ or $1.1$, and $x_1 = 20$ or $40$ pkts.
The departure rate $\mu$ is normalized to 1 if not mentioned explicitly. The chosen
start-up thresholds correspond to a start-up delay of roughly a couple of seconds for audio or video
streaming services (e.g. 200$\sim$400kbps average playback rate given
the packet size of 1460 bytes in TCP).
The file size in the experiments ranges between 40 and 1000 in terms of packets.
Figure \ref{fig:prob_rho095b20} displays the probabilities of 0, 1, and 2 starvations
with parameters $\rho = 0.95$ and $x_1 = 20$. When the file size grows, the probability
of no starvation decreases. We observe that
the probabilities of 1 and 2 starvations increase first, and then decline after
reaching their maximum values. The reason is that the traffic intensity
$\rho$ is less than 1. Figure \ref{fig:prob_rho095b20} also shows that our analytical
results match the simulation well. Figure \ref{fig:prob_rho095b40} exhibits similar
results when the start-up threshold is 40 pkts. The comparison between figures
\ref{fig:prob_rho095b20} and \ref{fig:prob_rho095b40} shows that
a larger $x_1$ is very effective in reducing the starvation probability.
Figure \ref{fig:prob_rho110nostarv} plots the probability of no starvation
with the traffic intensity $\rho=1.1$. The probability of no
starvation is improved by more than 10\% (e.g. $N\geq 300$)
when $x_1$ increases from 20 to 40. Figure \ref{fig:prob_rho110nostarv}
also validates the asymptotic probability of no starvation
obtained from the Gaussian and Riemann integral approximations.
Figure \ref{fig:prob_rho110onestarv} plots the probability of one
starvation with the same parameters. Recall that the probability
of one starvation decreases to 0 as $N$ increases in the case $\rho=0.95$.
In contrast, figure \ref{fig:prob_rho110onestarv} exhibits a different trend
as the file size increases: this probability saturates instead of
decreasing to 0. When $\rho$ is greater than 1, the probability of
having a particular number of starvations approaches a constant.
In both figures \ref{fig:prob_rho110nostarv} and \ref{fig:prob_rho110onestarv},
simulation results validate the correctness of our analysis. Hence, in the following experiments,
we only illustrate the analytical results.
\begin{figure}[!htb]
\centering
\includegraphics[width=2.7in, height = 2.0in]{test_rho95b20_valid.eps}
\caption{Probability of 0, 1, and 2 starvations with $\rho = 0.95$ and $x_1 = 20$}
\label{fig:prob_rho095b20}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=2.7in, height = 2.0in]{test_rho95b40_valid.eps}
\caption{Probability of 0, 1, and 2 starvations with $\rho = 0.95$ and $x_1 = 40$}
\label{fig:prob_rho095b40}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=2.7in, height = 2.0in]{test_rho110nostarv_valid.eps}
\caption{Probability of no starvation with $\rho = 1.1$: $x_1 = 20$ and $x_1 = 40$}
\label{fig:prob_rho110nostarv}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=2.7in, height = 2.0in]{test_rho110onestarv_valid.eps}
\caption{Probability of one starvation with $\rho = 1.1$: $x_1 = 20$ and $x_1 = 40$}
\label{fig:prob_rho110onestarv}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=2.7in, height = 2.0in]{test_onoff_valid.eps}
\caption{ON/OFF traffic: probability of 0, 1, and 2 starvations with $\rho=1.5$ and $x_1=40$}
\label{fig:prob_onoff}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=2.7in, height = 2.0in]{test_onoff_valid2.eps}
\caption{ON/OFF traffic: prob. of 0, 1, and 2 starvations for $x_1=20$: $\rho=2.5$ and $3.0$}
\label{fig:prob_onoff2}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=2.7in, height = 2.0in]{test_onoff_rho2_thvsstarv.eps}
\caption{ON/OFF traffic: probability of no starvation with $\rho=2$ versus the threshold $x_1$}
\label{fig:onoff_thresholdvsnostarv}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=2.7in, height = 2.0in]{test_onoff_rho250_abvstarv_final.eps}
\caption{ON/OFF traffic: probability of no starvation with $\rho = 2.5$ and $N=800$ versus the state transition rates}
\label{fig:onoff_abvsnostarv}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=2.7in, height = 2.0in]{fluid.eps}
\caption{Fluid analysis: prob. of starvation versus the threshold $x_1$}
\label{fig:fluid}
\end{figure}
\subsection{Starvation of Bursty Traffic}
We consider the ON/OFF bursty arrival of packets into an M/M/1 queue.
For the ease of comparison, we let the transition rates $\alpha$
and $\beta$ be both 0.2. The file size ranges from 40 to 500 pkts.
In figure \ref{fig:prob_onoff}, we plot
the probabilities of having no more than two starvations with $\rho = 1.5$
and $x_1=40$.
As the file size increases, the probability of no starvation decreases.
The probabilities of 1 and 2 starvations increase first, and then decrease to 0.
This means that starvation is certain when the file size approaches infinity.
In figure \ref{fig:prob_onoff2}, we plot the starvation probabilities for $\rho = 2.5$ and $3.0$
where the start-up threshold is set to 20. In contrast to figure \ref{fig:prob_onoff},
the probability of no starvation converges to a positive constant as $N$ is large enough.
Figure \ref{fig:onoff_thresholdvsnostarv} illustrates the impact of $x_1$ on the probability
of no starvation. In this set of experiments, $\rho$ is set to 2. The start-up threshold
$x_1$ increases from 20 to 60 pkts, and the file size increases from 400 to 800 pkts.
It is clearly shown that a slight increase in $x_1$ can greatly improve the starvation probability.
In figure \ref{fig:onoff_abvsnostarv}, we plot the probability of no starvation with
$\rho = 2.5$ and $N=800$ pkts. The transition rates $\alpha$ and $\beta$ increase from
0.05 to 0.25. It can be seen that the probability of no starvation increases monotonically
with the symmetric transition rates $\alpha$ and $\beta$.
\subsection{Starvation in the File Level}
This set of numerical experiments shows the relationship between the starvation probability
and the distribution of file size. The traffic intensity $\rho$ is set to 0.95.
Let $\theta$ be 1/2000 in the exponential distribution.
Then, the average file size is 2000 pkts. For the Pareto distribution, we set the minimum
file size to be 300 pkts so that the exponent $\upsilon$ is 1.1765. The parameters
$\varrho$ and $\sigma$ of the Log-normal distribution are set to 5.0 and 2.2807.
We plot the CDF curves of the file size and the starvation probabilities in figure \ref{fig:fluid}.
The left-side subfigure illustrates the distribution of file size with the parameters configured above.
The Pareto and Log-normal distributions exhibit the heavy-tail property.
In the right-side subfigure, we plot the starvation probability of the different file size distributions
when $x_1$ increases from 10 to 150. The starvation probability of the Pareto distribution is
very high for small $x_1$ because the files have a minimum size (i.e. $N_p < N_m$). The Log-normal distribution
exhibits a small starvation probability. In addition, increasing the threshold
$x_1$ does not have a significant impact on the starvation probability when $x_1$ is greater
than 90. Therefore, as the take-home message of the fluid analysis, the configuration of $x_1$ relies
to a great extent on the distribution of file size. To obtain a better QoE, the media servers
can set different $x_1$ for different classes of media files.
\subsection{Optimizing Quality of Experience}
\noindent \textbf{QoE optimization of finite media size:}
We illustrate the total QoE cost (including the starvation cost and the start-up delay cost)
in figure \ref{fig:finite_qoe1} with $\lambda = 16,20,24$
and $\mu=25$. The file size is set to $N = 1000$ and the weight $\gamma$ is $10^{-3}$.
We find that the total QoE looks neither ``concave'' nor ``convex'' with regard to the start-up threshold.
For example, when $x_1$ is less than 300 with $\lambda = 16$, the increase in
the start-up delay cost cannot be compensated by the reduction of the starvation probability.
We further plot the optimal start-up threshold, obtained from the minimum of eq.\eqref{eq:totalcost_finite}, in figure
\ref{fig:finite_qoe2}. When $\gamma = 10^{-4}$ and $\gamma = 10^{-3}$,
the optimal start-up threshold $x_1^*$ decreases when $\lambda$ increases.
We also observe that for each $\lambda$, the $x_1^*$ of the case $\gamma = 10^{-4}$ is higher than that of $\gamma = 10^{-3}$
because the former user is more sensitive to the starvation. In the extreme scenario
$\gamma = 0$, the streaming user would download the whole media file before watching it.
In the case $\gamma = 5\times10^{-3}$,
$x_1^*$ is always 1 if the arrival rate $\lambda$ is less than 20. The reason is
that the total cost is always greater than 1 in those situations. The start-up threshold
$x_1^* = 1$ will induce numerous consecutive starvations, which definitely degrades the
streaming QoE. To mitigate this malfunction, we can introduce a minimum playback delay
that works independently of the QoE optimization.
\begin{figure}[!htb]
\centering
\includegraphics[width=2.7in, height = 2.0in]{total_cost_finitesize.eps}
\caption{Finite media size: total cost with $\mu = 25$ and $\gamma = 0.001$}
\label{fig:finite_qoe1}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=2.7in, height = 2.0in]{opt_th_finitesize.eps}
\caption{Finite media size: optimal thresholds with $\gamma = 10^{-4}, 10^{-3}$, and $5\times10^{-3}$}
\label{fig:finite_qoe2}
\end{figure}
\noindent \textbf{QoE optimization of infinite media size:}
We plot the optimal prefetching thresholds $x_1$ for the case $\rho>1$ in figure \ref{fig:inf_qoe_threshold1}
and the case $\rho<1$ in figure \ref{fig:inf_qoe_threshold3}. As $\lambda$ increases,
the optimal prefetching threshold $x_1^*$ decreases. Unlike figure \ref{fig:finite_qoe2},
there is no abrupt change in $x_1^*$. This is because the cost function $C_1(x_1)$
is a convex function of $x_1$ for both $\rho>1$ and $\rho<1$.
Furthermore, $x_1^*$ decreases as $\gamma$ increases (i.e. as the user puts more weight on
the prefetching delay).
\begin{figure}[!htb]
\centering
\includegraphics[width=2.7in, height = 2.0in]{test_inf_qoe_optx1.eps}
\caption{Optimal threshold $x_1^*$ of infinite file size: $\rho > 1$}
\label{fig:inf_qoe_threshold1}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=2.7in, height = 2.0in]{test_inf_qoe_optx1_rholess1.eps}
\caption{Optimal threshold $x_1^*$ of infinite file size: $\rho < 1$}
\label{fig:inf_qoe_threshold3}
\end{figure}
\noindent \textbf{QoE optimization at the file level:}
We investigate the cost minimization problem at the media server side numerically.
Let $\mu:=25$ which means that 25 packets are served per second. Given the packet size
of 1460 bytes, this service rate is equivalent to 292Kbps (without considering protocol overheads).
Let the mean file size $1/\theta$ be 1000 and 2000 packets respectively (equivalent
to the playback time of 40 and 80 seconds). The sensitivity $\gamma$ is set to 0.01 or
0.005. Figure \ref{fig:fluid_qoe_threshold} illustrates the choice of the optimal start-up thresholds
when $\lambda$ increases from 20 to 25 (i.e. $\rho\leq 1$). We evaluate four combinations of $\theta$ and $\gamma$ numerically.
Our observations are summarized as follows.
First, for the same file size distribution, a smaller $\gamma$ causes a higher optimal start-up threshold.
Second, $x_1^*$ is not a strictly decreasing function of $\lambda$. When $\lambda$ is small (e.g. 20pkts/s),
a large start-up threshold does not help much in reducing the starvation probability, but makes
users impatient while waiting for the prefetching to end. If $\lambda$ increases, the adverse impact of setting
a larger $x_1$ on the start-up delay can be compensated by the gain in the reduction of the starvation probability.
Third, with the same sensitivity $\gamma$, the optimal $x_1^*$ of a long video stream can be smaller
than that of a short one in some situations. This is caused by the fact that the large threshold
might not significantly improve the starvation probability for a file of large size. We further
show the starvation probability in figure \ref{fig:fluid_qoe_starvprob}. A larger mean file size or a
smaller $\gamma$ results in a larger probability of starvation. Unlike the start-up threshold $x_1^*$,
the starvation probability is shown to be strictly decreasing as $\lambda$ increases. Once the starvation
event happens, the playout buffer enters the rebuffering phase.
\begin{figure}[!htb]
\centering
\includegraphics[width=2.7in, height = 2.0in]{test_fluid_qoe_optx1.eps}
\caption{Optimal threshold $x_1^*$ for QoE enhancement at the file level: $\mu=25$}
\label{fig:fluid_qoe_threshold}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=2.7in, height = 2.0in]{test_fluid_qoe_starv.eps}
\caption{Starvation probability at the file level for the optimal start-up threshold $x_1^*$: $\mu=25$}
\label{fig:fluid_qoe_starvprob}
\end{figure}
\section{Conclusion, Discussion and Future Work}
\label{sec:conclusion}
We have conducted an \emph{exact} analysis of the starvation
behavior in Markovian queues with a finite number of packet arrivals.
We perform a packet level analysis and a fluid level analysis. The packet level study
is carried out via two approaches, the Ballot theorem and the recursive equations.
Both of them have pros and cons: the former provides an explicit
expression but with a high complexity order in general, while the latter is
more computationally efficient but yields no explicit result.
In order to analyze the behavior from a media service provider's point of view,
we perform a fluid level analysis that computes the probability of starvation among
many files. We further apply the theoretical results to perform QoE optimization
for media streaming services. Our work can be extended to study the QoE metrics in
a more general network with multiple bottlenecks between the server and the user. In
this situation, the arrival process can be modeled as a phase-type renewal process.
In terms of future work, we aim at extending the analytical methods to perform QoE optimization
in adaptive streaming services. Another important extension is the starvation analysis in a wireless
environment where the wireless link is shared by multiple connections. In such a case, the arrival rate
to a user is time varying due to the arrivals and departures of other calls.
\noindent \textbf{Acknowledgements:} The work of the authors from INRIA and from
Univ of Avignon was supported by a contract with Orange Lab, Issy Les Moulineaux.
\section{Introduction}
The era of precision cosmology from galaxy surveys is upon us. Galaxy survey data sets have achieved comparable constraining power on a subset of cosmological parameters to measurements of the Cosmic Microwave Background (CMB) \citep{Alam2017,Y13x2}, but unlike the CMB, these constraints rely on the measurement and modeling of non-linear structure. In a very real sense these analyses are already systematics limited, disregarding significant portions of their data in order to mitigate modeling uncertainties. For example, \citet{Y13x2} limited itself to scales for which baryonic feedback and non-linear effects from galaxy biasing could be ignored. \citet{Alam2017}, presenting the final analysis of the BOSS galaxy redshift survey, restricted their redshift space distortion measurements to $s>20 ~h^{-1}\textrm{Mpc}$ and $k<0.15 ~h~ \textrm{Mpc}^{-1}$ in configuration and Fourier space respectively to avoid uncertainties in modeling the galaxy velocity field.
Analytic models of these effects for simply selected samples are improving, but even the best models only claim to be accurate to the percent level at $k\sim 0.3~ h~ \textrm{Mpc}^{-1}$ for matter and halo power spectra before taking into account effects due to hydrodynamics, feedback and redshift space distortions \citep{Cataneo2017, Perko2016}. Non-linear effects are much more difficult to avoid in the halo mass function (HMF), and analytic predictions such as those in \citet{PressSchechter74} and \citet{ShethTormen99} are only accurate at the $\sim 10\%$ level \citep{Tinker2008}. Depending on the observable, this level of precision is either already a dominant source of error, or will be in the very near future \citep[see e.g.][]{Tinker2012}. While $1\%$ precision in observables is often quoted as a necessary goal, the required precision on predictions for observables is often not this stringent. For instance, \citet{McClintock2018} determines that the precision required for the halo mass function in order for it to contribute no more than $10\%$ of the total uncertainty in cluster mass calibration for upcoming surveys is $3\%$ at its most demanding.
While analytic methods struggle with non-linear structure formation, a clear alternative exists in numerical simulations. In the case of gravity, where we have a well-understood standard theory described by General Relativity, the effectiveness of simulations is limited only by the coarseness of the discretization allowed by currently available computers. Different algorithms for solving for non-linear structure growth in dark-matter-only simulations have been shown to produce predictions for the matter power spectrum that are converged at better than the $1\%$ level to $k\sim 1 ~h~ \textrm{Mpc}^{-1}$ \citep{Heitmann2009, Schneider2016}. It should be noted that these studies are of relative convergence, whereas absolute convergence to the true physical solution remains an open question that likely depends on a better understanding of baryonic physics, neutrinos, and the nature of dark matter itself. Because of the relative successes of the aforementioned simulations, almost all cosmological analyses involving galaxy surveys now use them in some form \citep{y1sim2params,Kitaura2016,Joudaki2018}.
While great strides have been made in improving their computational efficiency, $N$-body\ simulations are still relatively expensive. For example the $\textsc{ds14\_a}$ simulation \citep{Skillman2014}, one of the largest simulations run to date with a simulated volume of $(8~h^{-1}\textrm{Gpc})^3$ and $1.07\times10^{12}$ particles, took approximately 34 hours on 12,288 nodes, approximately $2/3$ of the \textsc{Titan} supercomputer. While this simulation approaches the volume of many ongoing and upcoming galaxy surveys, it does not resolve even all of the host halos of galaxies in a survey like DES.
Cosmological parameter constraints typically rely on sampling schemes such as Markov Chain Monte Carlo (MCMC) in order to explore parameter space. Modern analyses, with cosmological and nuisance parameters numbering in the tens, must sample on the order of millions of different cosmologies in order to reach convergence. Running an $N$-body\ simulation at each of these steps is not a prospect that will be achievable in the near future, even when considering smaller simulations than $\textsc{ds14\_a}$, such as those presented in this work. Thus, there is a need for methodologies which can use relatively few simulations to make robust predictions for the full cosmological parameter space being constrained. Much of the work in this area has been driven by the need for accurate predictions of the matter power spectrum for weak lensing analyses. For example, the \textsc{Halofit} methodology \citep{Smith02,Takahashi2012} fit an analytic expression to a set of $N$-body\ simulations in various cosmologies to obtain predictions for the matter power spectrum accurate to $5\%$ for $k<1~h~ \textrm{Mpc}^{-1}$ and $10\%$ for $1~h~ \textrm{Mpc}^{-1}<k<10~h~ \textrm{Mpc}^{-1}$.
Investigations into more advanced methodologies are ongoing, typically combining algorithms for optimally sampling a chosen cosmological parameter space and a method for interpolating between the observables at the sampled cosmologies. This approach, dubbed cosmic emulation, was first demonstrated for the matter power spectrum in \cite{Heitmann2009}. They showed convergence of their simulation results with respect to a number of choices made in solving the $N$-body\ problem, including mass resolution, force softening and simulation volume. This work has since been extended to the Friends-of-Friends halo mass function \citep{Heitmann2016}, galaxy correlation function and galaxy-shear cross correlation function \citep{Wibking2017}, among other observables. Studies of the convergence of these statistics are not as complete as those for the matter power spectrum. Work towards validating the convergence of these statistics is vital to ensuring the accuracy of predictions built from simulations.
This type of validation is the primary concern of this work. The simulations presented here form the basis for the first set of emulators that is being built as a part of the \aemulus\ project, a collaboration focused on the emulation of galaxy survey observables. The goal of the validation presented here is to provide robust convergence estimates for the statistics in question so that they may be properly accounted for in emulators built from these simulations. Emulators for the halo mass function and redshift space galaxy clustering using these simulations are presented in \citet{McClintock2018} and \citet{Zhai2017} respectively. Additionally, we hope to provide convergence guidelines for future work that simulates the statistics presented here.
In \autoref{sec:params} we present our cosmological parameter space and the Latin Hypercube algorithm used to sample from it. In \autoref{sec:simulations} we discuss our simulation framework. In \autoref{sec:nbodyconv} we show that the observables we emulate are converged with respect to the choices made in our $N$-body\ solver. In \autoref{sec:halofinding} we discuss issues related to halo finding and halo definitions, and in \autoref{sec:otheremu} we compare our simulations to existing emulators. In \autoref{sec:datarelease}, we discuss our plans to release these simulations to the public, and in \autoref{sec:summary} we conclude.
\section{Cosmological Parameter Space}
\label{sec:params}
The goal of the parameter selection algorithm is to optimally span a high-dimensional space with a limited number of points. Our criterion for optimization is to maximize the accuracy of any scheme to interpolate statistics between the points, which requires the points to be as close to uniformly spaced as possible, while covering as much of the space as possible. We follow the technique outlined in \cite{Heitmann2009}, with minor modifications. The process begins with a Latin Hypercube (LH) containing $M=40$ samples of our $N=7$-dimensional space. In an LH design, each of the $N$ dimensions is divided into $M$ bins. In each dimension, each of the bins is selected once with no repeats, thus guaranteeing the full range of each parameter within the space is represented sparsely.
A random LH design is not optimally spaced, however. Points can be clumped together, as shown in a two-dimensional projection of our 7-dimensional space in the left-hand side of \autoref{fig:hypercube}. To quantify the spacing of a given LH, for every point in the space we calculate the distance to the closest point in each two-dimensional projection of the space. The quantity of interest is the sum of all minimum distances for all points in all projections. The space is optimal when this quantity is maximized, thus removing any clumping between points and pushing the points to a uniform distribution. To accomplish this, we use an iterative procedure that takes two points from the sample and swaps values in one dimension. If this swapping increases the quantity of interest, the swap is accepted. If it does not, the swap is rejected. This procedure is iterated until convergence. The result of this procedure is shown in the middle panel of \autoref{fig:hypercube}.
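For concreteness, a minimal \textsc{python} sketch of this swap-and-accept optimization is given below; it illustrates the procedure described above rather than reproducing the code actually used, and the iteration count and random seed are arbitrary.
\begin{verbatim}
import numpy as np
from itertools import combinations

def min_dist_score(lh):
    """Sum over all 2D projections of each point's distance to its
    nearest neighbor in that projection (larger is better)."""
    score = 0.0
    for i, j in combinations(range(lh.shape[1]), 2):
        proj = lh[:, [i, j]]
        d = np.linalg.norm(proj[:, None] - proj[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        score += d.min(axis=1).sum()
    return score

def optimize_lh(M=40, N=7, n_iter=20000, seed=0):
    """Random Latin Hypercube improved by value swaps within a column."""
    rng = np.random.default_rng(seed)
    # each column is a permutation of the M bin centers, scaled to [0, 1]
    lh = np.array([rng.permutation(M) for _ in range(N)]).T / (M - 1.0)
    score = min_dist_score(lh)
    for _ in range(n_iter):
        a, b = rng.choice(M, size=2, replace=False)
        col = rng.integers(N)
        lh[[a, b], col] = lh[[b, a], col]      # propose a swap
        new = min_dist_score(lh)               # full recompute, for clarity
        if new > score:
            score = new                        # accept: spacing improved
        else:
            lh[[a, b], col] = lh[[b, a], col]  # reject: undo the swap
    return lh
\end{verbatim}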
An LH design, by construction, creates a distribution of points in an $N$-dimensional cube. However, we do have prior knowledge on the distribution of cosmological parameters, and we want the distribution of our points to follow the degeneracies between parameters given current constraints. We use the combination of CMB, baryon acoustic oscillations (BAO), and supernovae (SN). Specifically, we use the CosmoMC chains produced in the cosmology analysis of the BOSS DR11 BAO analysis (\citealt{Anderson2014}). Separate chains were run for nine-year WMAP results \citep{WMAP9} and for Planck 2013 results \citep{Planck2013}. Given the differences in these CMB results, as well as our desire for our simulations to span a larger volume of parameter space than current constraints, we combine the chains from WMAP and Planck. The eigenvalues and eigenvectors of the combined chains are used to set the dimensions of the LH design. The generic LH design has 7 dimensions, with points ranging from [0,1]. Each of these dimensions is an eigenvector of the cosmological parameter space, and the range [0,1] maps onto $[-4,4]\times \sigma_i$, where $\sigma_i$ is the eigenvalue of vector $i$. The right-hand panel in \autoref{fig:hypercube} shows the generic LH design projected into cosmological parameter space. In this example, we plot $\Omega_m$ vs. $100\Omega_b$. In this projection, the data points may appear somewhat clumped, but recall that this is an angled projection of the LH. For reference, the $1\sigma$ and $2\sigma$ contours from the CMB+BAO+SN analyses are presented for both WMAP and Planck.
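As a concrete illustration, the mapping from the unit LH into this eigenspace can be sketched as follows; \texttt{chain\_samples} is a placeholder for the combined WMAP+Planck chain points, the design is assumed to be centered on the chain mean, and $\sigma_i$ is taken here as the square root of the eigenvalue.
\begin{verbatim}
import numpy as np

def rotate_lh_to_cosmology(lh_unit, chain_samples, n_sigma=4.0):
    """Map a unit LH into the eigenspace of the combined chains.
    lh_unit: (M, 7) design with entries in [0, 1].
    chain_samples: (n_samples, 7) chain points (placeholder)."""
    mean = chain_samples.mean(axis=0)
    cov = np.cov(chain_samples, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)  # columns are eigenvectors
    sigma = np.sqrt(evals)              # spread along each eigenvector
    # [0, 1] -> [-n_sigma, +n_sigma] * sigma_i along eigenvector i
    coords = (2.0 * lh_unit - 1.0) * n_sigma * sigma
    return mean + coords @ evecs.T
\end{verbatim}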
\begin{figure*}
\includegraphics[width=\linewidth]{fig/phase_slice_z0_b35.pdf}
\caption{A $50~h^{-1}\textrm{Mpc}$ thick slice through \textsc{B25} with density deposition performed as described in \citet{Kaehler2012}.}
\label{fig:slice}
\end{figure*}
\begin{figure*}
\includegraphics[width=\linewidth]{fig/hypercube.pdf}
\caption{{\it Left Panel:} A two-dimensional projection of a random 7-dimensional Latin Hypercube (LH), with 40 points in total. {\it Middle Panel:} The same LH, now optimized for more uniform spacing between points. {\it Right Panel:} The same LH as shown in the middle panel, but now rotated into the eigenspace defined by CMB data. Contours are the WMAP9 and Planck13 joint constraints with BAO and supernovae.
}
\label{fig:hypercube}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[]{fig/allcontours.pdf}
\caption{The contours show the $3\sigma$ CMB+BAO+SNIa constraints in our parameter space. The 40 training cosmologies and seven test cosmologies are shown in black and red respectively.}
\label{fig:allcontours}
\end{figure*}
\begin{table*}[t!]
\centering
\begin{tabular}{l|c|c|c|c|c|c|c}
Name & $\Omega_{b}\/h^{2}$ & $\Omega_{c}\/h^{2}$ & $\textrm{w}_{0}$ & $n_{s}$ & $\log(10^{10}\textrm{A}_{s})$ & $H_{0}$ & $N_{\text{eff}}$\\
\hline
B00 & 0.0227 & 0.1141 & -0.817 & 0.9756 & 3.093 & 63.37 & 2.919 \\
B01 & 0.0225 & 0.1173 & -1.134 & 0.9765 & 3.150 & 73.10 & 3.174 \\
B02 & 0.0230 & 0.1087 & -0.685 & 0.9974 & 3.094 & 63.71 & 3.259 \\
B03 & 0.0227 & 0.1123 & -0.744 & 0.9481 & 3.001 & 64.04 & 3.556 \\
B04 & 0.0221 & 0.1063 & -0.767 & 0.9651 & 3.119 & 65.05 & 2.664 \\
B05 & 0.0207 & 0.1295 & -1.326 & 0.9278 & 3.024 & 72.75 & 2.961 \\
B06 & 0.0229 & 0.1115 & -0.710 & 0.9706 & 3.016 & 62.70 & 2.706 \\
B07 & 0.0228 & 0.1196 & -0.867 & 0.9663 & 3.162 & 64.37 & 3.939 \\
B08 & 0.0207 & 0.1238 & -1.164 & 0.9491 & 3.147 & 69.40 & 3.599 \\
B09 & 0.0213 & 0.1158 & -0.831 & 0.9475 & 3.072 & 62.36 & 3.896 \\
B10 & 0.0219 & 0.1290 & -1.241 & 0.9610 & 3.050 & 72.09 & 4.236 \\
B11 & 0.0226 & 0.1090 & -0.861 & 0.9960 & 3.158 & 67.73 & 2.834 \\
B12 & 0.0225 & 0.1168 & -0.879 & 0.9540 & 3.048 & 65.38 & 2.876 \\
B13 & 0.0219 & 0.1172 & -1.120 & 0.9788 & 3.068 & 71.08 & 3.004 \\
B14 & 0.0226 & 0.1271 & -1.117 & 0.9724 & 3.094 & 68.73 & 2.749 \\
B15 & 0.0215 & 0.1285 & -1.303 & 0.9336 & 3.094 & 74.10 & 3.726 \\
B16 & 0.0218 & 0.1207 & -1.131 & 0.9662 & 3.014 & 70.07 & 3.769 \\
B17 & 0.0223 & 0.1194 & -1.248 & 0.9520 & 3.035 & 74.44 & 3.216 \\
B18 & 0.0229 & 0.1157 & -1.032 & 0.9533 & 3.020 & 70.75 & 4.279 \\
B19 & 0.0224 & 0.1133 & -1.092 & 0.9673 & 3.096 & 72.43 & 3.684 \\
B20 & 0.0223 & 0.1225 & -0.990 & 0.9529 & 3.120 & 67.06 & 3.386 \\
B21 & 0.0236 & 0.1172 & -0.866 & 0.9758 & 3.132 & 66.39 & 3.854 \\
B22 & 0.0215 & 0.1210 & -1.032 & 0.9586 & 3.072 & 68.06 & 2.621 \\
B23 & 0.0227 & 0.1012 & -0.566 & 0.9746 & 3.019 & 62.03 & 3.471 \\
B24 & 0.0225 & 0.1103 & -0.761 & 0.9589 & 3.144 & 63.03 & 4.151 \\
B25 & 0.0209 & 0.1171 & -0.948 & 0.9345 & 3.037 & 65.71 & 3.089 \\
B26 & 0.0224 & 0.1192 & -1.125 & 0.9443 & 3.128 & 71.76 & 2.791 \\
B27 & 0.0214 & 0.1134 & -0.965 & 0.9664 & 3.015 & 67.39 & 4.024 \\
B28 & 0.0217 & 0.1318 & -1.400 & 0.9586 & 3.147 & 74.77 & 3.811 \\
B29 & 0.0223 & 0.1289 & -1.236 & 0.9401 & 3.159 & 71.41 & 3.429 \\
B30 & 0.0219 & 0.1239 & -1.224 & 0.9552 & 3.118 & 73.43 & 4.066 \\
B31 & 0.0212 & 0.1276 & -1.382 & 0.9561 & 3.076 & 73.76 & 3.344 \\
B32 & 0.0225 & 0.1128 & -0.926 & 0.9495 & 3.043 & 68.40 & 3.981 \\
B33 & 0.0234 & 0.1150 & -0.875 & 0.9892 & 3.149 & 66.05 & 3.641 \\
B34 & 0.0228 & 0.1222 & -1.032 & 0.9500 & 3.107 & 69.07 & 3.131 \\
B35 & 0.0234 & 0.1076 & -0.613 & 0.9956 & 3.140 & 61.69 & 3.046 \\
B36 & 0.0220 & 0.1213 & -1.108 & 0.9674 & 3.179 & 70.41 & 3.301 \\
B37 & 0.0229 & 0.1097 & -0.849 & 0.9776 & 3.072 & 66.73 & 3.514 \\
B38 & 0.0237 & 0.1150 & -0.955 & 0.9766 & 3.054 & 69.75 & 4.109 \\
B39 & 0.0217 & 0.1201 & -0.941 & 0.9602 & 3.093 & 64.70 & 4.194 \\
\end{tabular}
\caption{The cosmologies used in training our emulators, deemed training cosmologies in this paper. Each has one realization with volume $(1050 \,\ h^{-1}\textrm{Mpc})^3$ and $N_{\text{part}}=1400^3$. Each uses the fiducial settings detailed in \autoref{sec:simulations}. In particular they have mass resolutions of $3.51\times 10^{10} \left(\frac{\Omega_{m}}{0.3}\right) ~h^{-1}\textrm{M}_{\odot}$ and force resolutions of $20\,\ h^{-1}\textrm{kpc}$.}
\label{table:trainingsims}
\end{table*}
\begin{table*}[t!]
\centering
\begin{tabular}{l|c|c|c|c|c|c|c}
Name & $\Omega_{b}\/h^{2}$ & $\Omega_{c}\/h^{2}$ & $\textrm{w}_{0}$ & $n_{s}$ & $\log(10^{10}\textrm{A}_{s})$ & $H_{0}$ & $N_{\text{eff}}$\\
\hline
T00 & 0.0233 & 0.1078 & -0.727 & 0.9805 & 3.039 & 63.23 & 2.950 \\
T01 & 0.0228 & 0.1128 & -0.862 & 0.9715 & 3.064 & 65.73 & 3.200 \\
T02 & 0.0223 & 0.1178 & -0.997 & 0.9625 & 3.089 & 68.23 & 3.450 \\
T03 & 0.0218 & 0.1228 & -1.132 & 0.9535 & 3.114 & 70.73 & 3.700 \\
T04 & 0.0213 & 0.1278 & -1.267 & 0.9445 & 3.139 & 73.23 & 3.950 \\
T05 & 0.0218 & 0.1153 & -1.089 & 0.9514 & 3.119 & 69.73 & 3.700 \\
T06 & 0.0228 & 0.1203 & -0.904 & 0.9736 & 3.059 & 66.73 & 3.200
\end{tabular}
\caption{The cosmologies used in the test simulations. Each has five realizations, each with volume $(1050\,\ h^{-1}\textrm{Mpc})^3$ and $N_{\text{part}}=1400^3$ using the fiducial settings detailed in \autoref{sec:simulations}. In particular, they have mass resolutions of $3.51\times 10^{10} \left(\frac{\Omega_{m}}{0.3}\right) ~h^{-1}\textrm{M}_{\odot}$ and force resolutions of $20\,\ h^{-1}\textrm{kpc}$.}
\label{table:testsims}
\end{table*}
\section{$N$-body\ Simulations}
\label{sec:simulations}
There are three sets of simulations discussed in this work, all run using the \textsc{L-Gadget2} $N$-body\ solver, a version of \textsc{Gadget2} \citep{Springel2005} modified for memory efficiency when running dark-matter-only (DMO) simulations. The first of these sets, which we dub ``training simulations'', is a set of 40 $(1.05~h^{-1} \/\textrm{Gpc})^{3}$ boxes with $1400^3$ particles, resulting in a mass resolution of $3.51\times 10^{10} \left(\frac{\Omega_{m}}{0.3}\right) ~h^{-1}\textrm{M}_{\odot}$. The cosmologies of these simulations, listed in \autoref{table:trainingsims}, are drawn from the LH discussed in \autoref{sec:params}. These are run with a Plummer equivalent force softening of $20~h^{-1}\/\textrm{kpc}$, and maximum time step of $\textsc{max}(\Delta \ln a)=0.025$. We use 2nd order Lagrangian perturbation theory (2LPT) initial conditions generated at $a=0.02$ using \textsc{2LPTIC} \citep{Crocce2006} with input power spectra as computed by CAMB \citep{CAMB}, taking $\Omega_{\nu}=0$.
Each of these 40 simulations is initialized with a different random seed. This is different from the approach taken in some recent simulation suites designed for emulators \citep[e.g.][]{Garrison2017}, but \citet{McClintock2018} and \citet{Zhai2017} show that this enables our emulators to perform better than the sample variance of our individual simulations, whereas simulations using the same initial seed are guaranteed to perform only as well as the sample variance of the chosen individual simulation volume. We save 10 snapshots at redshifts of $z=\{3.0, 2.0, 1.0, 0.85, 0.7, 0.55, 0.4, 0.25, 0.1, 0.0\}$.
In order to test the accuracy of our emulators we have also run a set of seven test cosmologies using the same settings as our training simulations. For each test cosmology we have run 5 simulations, each with different initial conditions, totaling 35 simulations. We will refer to these as ``test simulations'' throughout.
Additionally, we have run a set of simulations varying a number of choices with respect to the \textsc{L-Gadget2} $N$-body\ solver, which we will refer to as ``convergence test simulations''. A few of these simulations were run using a number of cosmologies, including the Chinchilla cosmology \citep{Lehmann2017} with $(\Omega_m,h,N_{\text{eff}},n_s,\sigma_8,w) = (0.286\/,0.7\/,3.04,\/0.96,\/0.82,\/-1)$, which is not used for any of the test or training boxes, but is well within our cosmological parameter space. See \autoref{table:convsims} for a summary of the simulations that we have used for these tests.
The names of these simulations all begin with \textsc{CT} to denote that they were run for convergence testing. The first number following \textsc{CT} enumerates the $N$-body\ solver parameter set that was used to run the simulation. Various sets of these simulations were run with the same random seed for their initial conditions. The sets with the same seed are demarcated with the same last number in their name, e.g. \textsc{CT00} and \textsc{CT60}. When necessary, we distinguish between the different cosmologies used by including them in the simulation name. For example, \textsc{CT00-T00} refers to the simulation run with our fiducial $N$-body\ solver parameters using the first random seed for its initial conditions in the \textsc{T00} cosmology as listed in \autoref{table:testsims}.
We have chosen to use volumes of $(400~h^{-1}\textrm{Mpc})^3$ for the \textsc{CT} simulations rather than the $(1.05~h^{-1}\textrm{Gpc})^3$ used for the training and test simulations, as changing some of the settings for convergence tests significantly increases the runtime of the simulations. Using a smaller volume allows these simulations to complete in a modest amount of time. We have mitigated the smaller volumes by running 4 pairs of boxes, \textsc{CT00,...,CT03} and \textsc{CT60,...,CT63}, in 3 different cosmologies, \textsc{Chinchilla}, \textsc{T00} and \textsc{T04}, for our particle loading test, as it is for this test that we find our largest deviations from convergence and we wish to constrain these more precisely with better statistics.
We employ the \textsc{rockstar} spherical overdensity halo finder \citep{Behroozi2013} for all of our simulations. \textsc{rockstar} employs a 6D phase space friends-of-friends (FoF) algorithm in order to identify density peaks and their surrounding overdensities. We have chosen to use $\mathrm{M}_\text{200b}$ strict spherical overdensity (SO) masses as our fiducial mass definition, where strict refers to the inclusion of unbound particles in the mass estimates of all halos. A discussion of this choice is presented in \autoref{sec:halofinding}. Other than enabling strict SO masses, we have used the default \textsc{rockstar} settings, choosing the \textsc{rockstar} softening length to be the same as that used in the $N$-body\ solver and leaving on the particle downsampling that \textsc{rockstar} performs in its initial construction of friends-of-friends groups, as we find that this does not affect any of the conclusions presented in this work. Additionally, all results presented here use only host halos, i.e. halos which are not found to lie within a halo with a higher maximum circular velocity.
\section{$N$-Body Convergence Tests}
\label{sec:nbodyconv}
The sampling of the parameter space, the effective volume of the training set, and the details of the emulators are all important aspects in determining the final precision and accuracy of our predictions. Equally important is that the observables that are used to train the emulators are converged with respect to all possible choices made when running the simulations. Otherwise, there is a risk of biasing the predictions in ways that are difficult to identify post hoc. For instance, comparison with other predictions is a useful sanity check so long as they agree to within their purported precision: it is unlikely that both sets of simulations have the same systematic biases, so their agreement indicates that both predictions are likely converged. However, in the case that such comparisons disagree, it is impossible to determine why unless detailed convergence tests are conducted.
It should be noted again that all of the tests we perform are of relative convergence and not absolute convergence. The reasons for this are twofold. First, we are knowingly leaving out physics that we believe to be important at some level, such as the effects of baryonic feedback on the matter distribution. Additionally, we do not have an analytic solution towards which we are measuring convergence even for the physics that we have implemented, particularly in the non-linear regime. There is a growing literature on the possible lack of absolute convergence in $N$-body\ simulations in this regime \citep{vandenbosch2018a,vandenbosch2018b}, but these issues typically arise when considering dark matter substructure within host halos. Constraining the statistics of substructure is not the goal of the present work, and so we conduct no tests of convergence of such statistics here.
\begin{table*}[htbp!]
\centering
\begin{tabular}{l|c|c|c|c|c|c|c|c|c}
Name & Cosmology & $N_{realizations}$ & $L_{box}$ $[h^{-1}\textrm{Mpc}]$ & $m_{\text{part}}$ $[h^{-1}\textrm{M}_{\odot}]$& $\epsilon$ $[h^{-1} \textrm{kpc}]$& $\Delta \ln a_{max}$ & $a_{\text{start}}$ & $\alpha$& $\eta$\\
\hline
\textsc{CT0} & Chinchilla, T00, T04 & $3\times4$ & $400$ & $3.30\times 10^{10}\left(\frac{\Omega_{m}}{0.286}\right)$ & 20 & 0.0250 & 0.02 & 0.002& 0.0250\\
\textsc{CT1} & Chinchilla & 1 & $400$ & $3.30\times 10^{10}$ & 20 & 0.0250 & 0.01 & 0.002& 0.0250\\
\textsc{CT2} & Chinchilla & 1 & $400$ & $3.30\times 10^{10}$ & 10 & 0.0250 & 0.02 & 0.002& 0.0250\\
\textsc{CT3} & Chinchilla & 1 & $400$ & $3.30\times 10^{10}$ & 20 & 0.0250 & 0.02 & 0.001& 0.0250\\
\textsc{CT4} & Chinchilla & 1 & $400$ & $3.30\times 10^{10}$ & 20 & 0.0125 & 0.02 & 0.002& 0.0250\\
\textsc{CT5} & Chinchilla & 1 & $400$ & $3.30\times 10^{10}$ & 20 & 0.0250 & 0.02 & 0.002& 0.0125\\
\textsc{CT6} & Chinchilla, T00, T04 & $3\times4$ & $400$ & $4.12\times 10^{9}\left(\frac{\Omega_{m}}{0.286}\right)$ & 20 & 0.0250 & 0.02 & 0.002& 0.0250 \\
\textsc{CT7} & T00,...,T06 & 7 & $3000$ & $2.49\times 10^{12} \left(\frac{\Omega_{m}}{0.286}\right)$ & 20 & 0.0250 & 0.02 & 0.002& 0.0250\\
\end{tabular}
\caption{Summary of the boxes run for convergence tests. Columns are simulation name, cosmologies, number of different initial condition realizations, box side length, particle mass, Plummer equivalent force softening, maximum time step, starting scale factor, force error tolerance, and time integration error tolerance.}
\label{table:convsims}
\end{table*}
\subsection{Measurements}
\label{subsec:measurements}
Below we describe our measurements of the following observables:
\begin{enumerate}[label=\alph*)]
\item Matter power spectrum, $P(k)$,
\item 3-dimensional matter correlation function, $\xi_{mm}(r)$,
\item Spherical overdensity halo mass function, $N(M_\text{200b})$
\item 3-dimensional halo--halo correlation function, $\xi_{hh}(r)$,
\item Projected galaxy--galaxy correlation function, $w_{p}(r_{p})$, and
\item Monopole and quadrupole moments of the redshift space galaxy--galaxy correlation function, $\xi_0(s)$, $\xi_{2}(s)$.
\end{enumerate}
We briefly detail how we measure each of these statistics and describe our convergence tests in the following subsections.
\subsubsection{Matter Power Spectrum}
The first statistic we will be interested in is the matter power spectrum, $P(k)$, which is given by
\begin{align}
\langle \delta(\mathbf{k}) \delta(\mathbf{k^{\prime}})^{*} \rangle = \delta_{\mathbf{k},\mathbf{k^{\prime}}} P(k)
\end{align}
where $\delta(\mathbf{k})$ is the Fourier transform of the matter overdensity field $\delta(\mathbf{x}) = \frac{\rho(\mathbf{x}) - \bar{\rho}}{\bar{\rho}}$:
\begin{align}
\delta(\mathbf{x}) = \frac{1}{(2\pi)^{3}}\int d\mathbf{k}~ e^{-i\mathbf{k\cdot x}}~\delta(\mathbf{k})
\end{align}
and the angle brackets denote an ensemble average over independent volumes, $V$. The power spectrum fully describes the statistics of any Gaussian random field, and as such is a useful statistic for describing cosmological density fields, which are still Gaussian at most scales until late times.
Because our simulations have periodic boundary conditions, we can estimate the power spectrum using a Fast Fourier Transform (FFT). First, we deposit the density field onto a mesh of dimensions $N_{mesh}^3$ where $N_{mesh}=\frac{2 L_{box} k_{max}}{\pi}$ using a cloud-in-cell deposition, such that wavenumbers $k\le k_{max}$ are sampled at or above their Nyquist rate. We take $k_{max}=5 ~h~ \textrm{Mpc}^{-1}$. We then compensate for the mass-deposition window function and average the resulting $3D$ power spectrum in bins of $k$, with $dk = \frac{2\pi}{L_{box}}$. All of this is performed using the \textsc{python} package \textsc{nbodykit} \citep{nbodykit}. We do not perform any shot-noise subtraction, since for the scales we are using, the standard $\frac{V}{N_{\text{part}}}$ correction is small, and it is not clear that the correction should necessarily take this form. Unless otherwise noted, this is the only statistic for which we do not include error estimates. At the scales of interest, the errors on $P(k)$ are very small, and estimating them via jackknife as we have done for our other measurements is non-trivial due to our use of an FFT to measure $P(k)$.
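A minimal sketch of this measurement with \textsc{nbodykit} is given below; \texttt{pos} is a placeholder for the particle positions, and the exact options used for our measurements may differ.
\begin{verbatim}
import numpy as np
from nbodykit.lab import ArrayCatalog, FFTPower

Lbox, kmax = 1050.0, 5.0              # h^-1 Mpc and h Mpc^-1
Nmesh = int(2 * Lbox * kmax / np.pi)  # sample k <= kmax at the Nyquist rate

# pos: (N_part, 3) array of positions in h^-1 Mpc (placeholder)
cat = ArrayCatalog({'Position': pos})
mesh = cat.to_mesh(Nmesh=Nmesh, BoxSize=Lbox, resampler='cic',
                   compensated=True)  # CIC deposit + window compensation
kf = 2 * np.pi / Lbox                 # fundamental mode sets the bin width
result = FFTPower(mesh, mode='1d', dk=kf, kmin=kf)
# no shot-noise subtraction, as described above
k, Pk = result.power['k'], result.power['power'].real
\end{verbatim}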
\subsubsection{$3D$ Matter Correlation Function}
Since $\delta(\mathbf{x})$ is assumed to be a stationary random field, its correlation function is given by the Fourier transform of its power spectrum,
\begin{align}
\xi(r) &= \langle \delta(\mathbf{x}) \delta(\mathbf{x} + \mathbf{r}) \rangle \\
&= \frac{1}{(2\pi)^{3}} \int d\mathbf{k} ~P(k)~e^{-i\mathbf{k}\cdot \mathbf{r}}
\end{align}
We estimate the $3D$ matter correlation function from our boxes by jackknifing the Landy-Szalay estimator \citep{Landy93}
\begin{align}
\label{eq:LS}
\hat{\xi}(r) &= \frac{DD - 2DR + RR}{RR}
\end{align}
where $DD$, $DR$ and $RR$ are particle-particle, particle-random, and random-random pair counts normalized by the total number of possible pairs in a given radial bin. We use 27 jackknife regions and down-sample the particle distribution by a factor of 100 which we have checked does not affect our results.
Despite the simple relation between the matter power spectrum and $3D$ correlation function, we check the convergence of both since measurement errors take different forms in the two statistics; e.g.\ in configuration space, correlation functions are formally only affected by shot-noise at $r=0$, whereas for power spectra the correction affects all wavenumbers. Additionally, emulators built from these boxes may choose to use one or the other quantity, and determining the scales or wavenumbers where one of these statistics is converged using the other is non-trivial. All pair counting was done using \textsc{Corrfunc} \citep{corrfunc}.
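For illustration, the estimator can be assembled from \textsc{Corrfunc} pair counts roughly as follows; this is a sketch, and the pair-count normalizations should be checked against \textsc{Corrfunc}'s counting conventions.
\begin{verbatim}
import numpy as np
from Corrfunc.theory.DD import DD

def landy_szalay(data, rand, rbins, Lbox, nthreads=4):
    """Landy-Szalay estimator from periodic-box pair counts.
    data, rand: (N, 3) positions; rand drawn uniformly in the box."""
    Nd, Nr = len(data), len(rand)
    dd = DD(1, nthreads, rbins, *data.T, boxsize=Lbox)['npairs']
    dr = DD(0, nthreads, rbins, *data.T, X2=rand[:, 0],
            Y2=rand[:, 1], Z2=rand[:, 2], boxsize=Lbox)['npairs']
    rr = DD(1, nthreads, rbins, *rand.T, boxsize=Lbox)['npairs']
    # normalize each count by its total number of possible pairs
    DDn = dd / (Nd * (Nd - 1.0))
    DRn = dr / (Nd * float(Nr))
    RRn = rr / (Nr * (Nr - 1.0))
    return (DDn - 2.0 * DRn + RRn) / RRn
\end{verbatim}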
\subsubsection{Spherical Overdensity Halo Mass Function}
In modern theories of $\Lambda$CDM galaxy formation, all galaxies are assumed to form within dark matter halos. As such, making converged predictions for the abundance of dark matter halos is of great importance for accurately predicting galaxy statistics. In particular, \citet{McClintock2018} uses the simulations presented here to build an emulator for the abundance of SO dark matter halos using $\Delta=\text{200b}$ and so we focus our convergence tests on the statistic used in that work, namely the total number of halos per bin in $\log_{10}(M_\text{200b})$, $N(M_\text{200b})$. Additionally, we are interested in only so-called host halos and not subhalos. This is because the galaxy models we employ (in e.g. \citealt{Zhai2017}) are based on the Halo Occupation Distribution (HOD) formalism, which has no need for subhalo information, and because the first applications of our halo mass function emulator will be cosmology constraints using cluster number counts. We estimate $N(M_\text{200b})$ and its errors in our simulations using a jackknife estimator with 27 jackknife regions.
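Schematically, the estimate takes the following form, assuming 27 cubic subvolumes; this is a sketch of a standard leave-one-out jackknife, and normalization conventions for the leave-one-out samples may differ from those used in practice.
\begin{verbatim}
import numpy as np

def jackknife_mass_function(halo_pos, halo_mass, mbins, Lbox, nside=3):
    """Leave-one-out jackknife of N(M) over nside^3 spatial subvolumes."""
    cell = (halo_pos // (Lbox / nside)).astype(int) % nside
    region = cell[:, 0] * nside**2 + cell[:, 1] * nside + cell[:, 2]
    nreg = nside**3
    counts = np.array([np.histogram(halo_mass[region == r], bins=mbins)[0]
                       for r in range(nreg)])
    total = counts.sum(axis=0)
    loo = total[None, :] - counts  # leave-one-out samples
    mean = loo.mean(axis=0)
    # standard jackknife variance: (n-1)/n * sum of squared deviations
    err = np.sqrt((nreg - 1.0) / nreg * ((loo - mean)**2).sum(axis=0))
    return total, err
\end{verbatim}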
\subsubsection{3D Halo Correlation Function}
The other diagnostic we will use to assess the convergence of our halo populations is the $3D$ halo correlation function, $\xi_{hh}(r)$. Because the clustering of halos is biased with respect to the matter distribution due to their preferential formation in overdense regions, convergence of $\xi_{hh}$ at a particular scale, $r$, does not directly follow from convergence of $\xi_{mm}$ at the same scale, and thus it is important to test for convergence of these separately.
For a discrete field, the two-point correlation function measures the excess probability, relative to a Poisson distribution, of finding two halos at the volume elements $dV_{1}$ and $dV_{2}$ separated by a distance $r$ \citep{Peebles1980}:
\begin{equation}
dP_{12} = \bar{n}^2[1+\xi(r)]dV_{1}dV_{2},
\end{equation}
where $\bar{n}$ is the mean number density of the sample. To estimate $\xi_{hh}$ we again jackknife the Landy-Szalay estimator given by \autoref{eq:LS}, using 27 subvolumes. The measurements of $\xi_{hh}(r)$ presented here are for halos with $M_\text{200b} > 10^{12}~h^{-1}\textrm{M}_{\odot}$, except where otherwise noted.
\subsubsection{Galaxy Correlation Functions}
In order to calculate galaxy clustering, we employ 10 HOD models resembling the BOSS massive galaxy sample and populate halos to a galaxy number density of $4.2\times10^{-4}\/(h^{-1}\textrm{Mpc})^{-3}$ at $z=0.55$.
The typical mass scale $M_{\rm{min}}$ at which half of the halos host a central galaxy is in the range $12.9 < \log{M_{\rm{min}}[h^{-1}{\rm M}_{\odot}]} < 13.5$, and the scatter of halo mass at fixed galaxy luminosity $\sigma_{\log{M}}$ ranges from 0.05 to 0.5. These models have satellite fractions ranging from $10\%$ to $13\%$ and galaxy biases from 2.0 to 2.13. Satellites are assumed to follow an NFW profile \citep{NFW} and their velocity dispersion is assumed to be independent of position within the halo. More details of this HOD model can be found in \citet{Zhai2017}.
The correlation function of the resulting galaxy catalogs is described by the projected correlation function $w_{p}$ and the redshift space multipoles $\xi_{l}$. The former is used to mitigate redshift space distortions, probing the real space clustering signal, while the latter are calculated via Legendre decomposition at given order $l$. In the calculation of $w_{p}$, the integral along the line of sight is truncated at 80 $h^{-1}\textrm{Mpc}$, which is large enough to include most of the correlated pairs and produce a stable result. For $\xi_{l}$, we perform the decomposition up to $l=2$ to obtain the redshift space monopole and quadrupole. Clustering measurements are presented at scales from 0.1 to 50 $h^{-1}\textrm{Mpc}$ and averaged over 10 random realizations of each of the 10 HOD models.
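For orientation, the occupation step of such a model can be sketched as below; the functional form shown is the common five-parameter HOD of Zheng et al. (2005), used here as a stand-in that may differ in detail from the exact parameterization of \citet{Zhai2017}.
\begin{verbatim}
import numpy as np
from scipy.special import erf

def mean_occupation(logM, logMmin, sigma_logM, logM1, alpha, logMcut):
    """Mean central and satellite occupation (Zheng et al. 2005 form)."""
    ncen = 0.5 * (1.0 + erf((logM - logMmin) / sigma_logM))
    nsat = ncen * (np.clip(10**logM - 10**logMcut, 0, None)
                   / 10**logM1)**alpha
    return ncen, nsat

def populate(logM, rng, **hod_params):
    """Draw galaxy counts per halo: Bernoulli centrals,
    Poisson satellites."""
    ncen, nsat = mean_occupation(logM, **hod_params)
    has_central = rng.random(len(logM)) < ncen
    n_satellites = rng.poisson(nsat)
    return has_central, n_satellites
\end{verbatim}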
\subsection{Convergence Tests}
Having described the statistics for which we will check convergence, we now report on the tests that we have performed as well as their results.
\subsubsection{Initial Conditions}
Since our $N$-body\ simulations do not start at $a=0$ we must justify our choice of initial conditions. Two decisions must be made: 1) which analytic prescription to use to generate the initial density and velocity fields and 2) what epoch to generate the initial conditions. For the first, we use 2LPT. For the second, care must be taken to ensure that the analytic treatment used to generate the initial conditions remains accurate for all modes in the simulation until the scale factor at which we start the $N$-body\ solver. To ensure this, we have chosen a starting time of $a=0.02$ ($z=49$). In this section we show that our observables are robust to this choice by comparing measurements made in a fiducial simulation with the same specifications used in our emulator suite, \textsc{CT00-Chinchilla}, to a simulation that has been run with a starting scale factor of $a=0.01$ ($z=99$), \textsc{CT10}. It should be noted again that, unless otherwise stated, all convergence tests presented here vary one parameter at a time from the fiducial parameters used for our training and test simulations.
For this test and all following tests, we will only report on deviations from $1\%$ convergence which exceed $1\sigma$ in significance. For this test, a few statistics deviate from convergence by more than this, as can be seen in \autoref{fig:starttime}. For $M_\text{200b}<10^{13}~h^{-1}\textrm{M}_{\odot}$, the halo mass function deviates from convergence. This mass would fall in the lowest mass bin used in \cite{McClintock2018}, and is still within the range of halo masses used in HOD models in \cite{Zhai2017}. We also find deviations from convergence approaching $1\sigma$ in $w_{p}$ for $r\lesssim 300~h^{-1}\textrm{kpc}$, $P(k)$ for $k\sim 4~h~ \textrm{Mpc}^{-1}$ at $z=0$, and $\xi_{hh}(r)$ for $r \le 1~ h^{-1}\textrm{Mpc}$. The largest scale data point of $\xi_{mm}(r)$ also deviates from $1\%$ convergence for $z=1$, but this is likely a statistical fluctuation given the convergence of $P(k)$ at large scales.
A likely explanation for the observed deviations from convergence is inaccuracies in 2LPT at these scales in describing non-linear evolution between $0.01<a<0.02$. If this is the case then these effects may also vary with cosmology such that cosmologies with less (more) structure growth at early times will deviate from convergence by less (more), and cosmologies with less (more) late time structure growth will have less (more) redshift evolution of this effect.
\begin{figure*}[htbp!]
\begin{tabular}{ccc}
\centering
\includegraphics[width=0.3\linewidth]{fig/start_time_pk_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/start_time_xi_mm_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/start_time_mass_fcn_comp.pdf} \\
\includegraphics[width=0.3\linewidth]{fig/start_time_xi_hh_comp.pdf}&
\includegraphics[width=0.3\linewidth]{fig/start_time_wp_comp.pdf} & \includegraphics[width=0.3\linewidth]{fig/start_time_rsd_comp.pdf}
\\
\end{tabular}
\caption{Comparison of a number of observables from \textsc{CT00-Chinchilla}, a simulation with our fiducial starting scale factor, $a=0.02$, and \textsc{CT10}, a simulation with a starting scale factor of $a=0.01$. The gray band in all figures denotes $1\%$ accuracy. \textbf{(a)} Matter power spectrum. \textbf{(b)} Matter correlation function. \textbf{(c)} Halo mass function, where the hatched region corresponds to halos with fewer than 200 particles. \textbf{(d)} Halo--halo correlation function for $M_\text{200b}>10^{12}~h^{-1}\textrm{M}_{\odot}$. \textbf{(e)} Galaxy projected correlation function averaged over 10 realizations of 10 different HODs. \textbf{(f)} Redshift space monopole and quadrupole for the same HODs.}
\label{fig:starttime}
\end{figure*}
\subsubsection{Force Softening}
When solving the $N$-body problem as an approximation to collisionless dynamics, one must employ a so-called force softening in order to mitigate the effects of unphysical two body interactions. In \textsc{l-gadget2} this is done by representing the single particle density distribution as a Dirac delta function convolved with a spline kernel \citep{Monaghan1985} $\delta(\textbf{x}) = W(|\textbf{x}|, 2.8\epsilon)$, where $W(r)$ is given by a cubic spline.
Using this kernel, the potential of a point mass at $r=0$ for non-periodic boundary conditions is given by $-Gm/\epsilon$. It is this $\epsilon$ that we refer to as the force softening length. Typically, smaller $\epsilon$ yields equations which are closer to those that govern the true universe, but decreasing $\epsilon$ by too much at fixed mass resolution will lead to the undesirable two-body interactions mentioned above. There is an extensive literature on convergence of various quantities with respect to force softening length \citep[e.g.][]{Power2003}, but for completeness we investigate this convergence in the context of the exact statistics that we plan to measure and emulate with this simulation suite.
For our fiducial simulations, we have set $\epsilon=20~h^{-1}\textrm{kpc}$, and for this convergence test we have run an additional simulation, \textsc{CT20}, with $\epsilon=10~h^{-1}\textrm{kpc}$. The results of the comparison between our fiducial simulation, \textsc{CT00-Chinchilla}, and \textsc{CT20} can be found in \autoref{fig:forcesoftening}. The only statistic which deviates from convergence is $\xi_{mm}(r)$ for $r<200~ h^{-1}\textrm{kpc}$. $P(k)$ is converged to the $1\%$ level for the scales measured here, but by $k\sim 3 ~h~ \textrm{Mpc}^{-1}$ is showing systematic deviations from perfect convergence. This is consistent with the findings of \citet{Heitmann2010}, although they only consider $\epsilon>25~h^{-1}\textrm{kpc}$. The deviations seen in $\xi_{mm}$ and $P(k)$ do not appear to have a significant effect on other statistics.
\begin{figure*}[htbp!]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.3\linewidth]{fig/force_softening_pk_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/force_softening_xi_mm_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/force_softening_mass_fcn_comp.pdf} \\
\includegraphics[width=0.3\linewidth]{fig/force_softening_xi_hh_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/force_softening_wp_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/force_softening_rsd_comp.pdf} \\
\end{tabular}
\caption{Convergence tests with respect to force softening. Observables measured from a simulation with our fiducial parameters, \textsc{CT00-Chinchilla} are compared to \textsc{CT20}, a simulation with half the force softening: $\epsilon=10h^{-1}\textrm{kpc}$. Subfigures are the same as in \autoref{fig:starttime}.}
\label{fig:forcesoftening}
\end{figure*}
\subsubsection{Absolute Force Error Tolerance}
Another parameter which governs the accuracy of the gravitational force calculations is how deeply to walk the octtree used to partition space when summing the small-scale contributions to the gravitational force on each particle. This is typically referred to as the cell opening criterion, since it is used to determine whether or not a cell in the tree should be ``opened'' and traversed. We use the standard \textsc{l-gadget2} relative opening criterion which opens a cell containing mass $M$, extension $l$ at a distance from the point under consideration of $r$ if
\begin{align}
\frac{GM}{r^2}\left(\frac{l}{r}\right)^{2} > \alpha |\textbf{a}_{\rm{old}}|
\end{align}
where $|\textbf{a}_{\rm{old}}|$ is the magnitude of the acceleration of the particle under consideration in the last time step and $\alpha$ is a free parameter allowing tuning of the accuracy. In general, decreasing $\alpha$ leads to smaller errors in force computation, but greater run time as more nodes in the tree must be opened per time step. Our fiducial runs use $\alpha=0.002$.
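Written out, the criterion amounts to a one-line test per tree node (a schematic rendering, not the \textsc{l-gadget2} source):
\begin{verbatim}
def open_cell(G, M, l, r, a_old_mag, alpha=0.002):
    """Open (traverse) a tree node if its estimated force contribution,
    GM/r^2 * (l/r)^2, exceeds alpha times the particle's acceleration
    from the previous step; otherwise use the node's multipole."""
    return G * M / r**2 * (l / r)**2 > alpha * a_old_mag
\end{verbatim}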
In order to test that our results are converged with respect to this choice, we have run an additional simulation, \textsc{CT30}, with $\alpha=0.001$. We find no significant deviations from convergence as can be seen in \autoref{fig:forceerrortol}.
\begin{figure*}[htbp!]
\begin{tabular}{ccc}
\centering
\includegraphics[width=0.3\linewidth]{fig/force_error_tolerance_pk_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/force_error_tolerance_xi_mm_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/force_error_tolerance_mass_fcn_comp.pdf}\\
\includegraphics[width=0.3\linewidth]{fig/force_error_tolerance_xi_hh_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/force_error_tolerance_wp_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/force_error_tolerance_rsd_comp.pdf}\\
\end{tabular}
\caption{Convergence tests with respect to force error tolerance. Observables measured from a simulation with our fiducial parameters, \textsc{CT00-Chinchilla}, are compared to \textsc{CT30}, a simulation with half the force error tolerance: $\alpha=0.001$. Subfigures are the same as in \autoref{fig:starttime}.}
\label{fig:forceerrortol}
\end{figure*}
\subsubsection{Time Stepping}
Another significant choice that must be made in the \textsc{l-gadget2} algorithm is the maximum allowed time step. The time step for the leapfrog integrator that \textsc{l-gadget2} uses is determined by $\Delta \textrm{ln}(a) = \textrm{min}\left [ \Delta \textrm{ln}(a)_{\textrm{max}}, \sqrt{2\eta \epsilon /|\mathbf{a}|}\right ]$, where $\eta$ is the free parameter determining integration error tolerance and $|\mathbf{a}|$ is the magnitude of the particle's acceleration. Typically $\Delta\textrm{ln}(a)_{\textrm{max}}$ sets the time step at early times when densities are low, and the $\sqrt{2\eta \epsilon /|\mathbf{a}|}$ criterion sets the time step in collapsed regions at late times and thus is important in dictating the convergence of halo density profiles \citep{Power2003}.
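In code, the step-size rule reads as follows; this is a sketch with our fiducial parameter values, and \texttt{accel\_mag} is the particle's acceleration magnitude in units consistent with the softening length $\epsilon$.
\begin{verbatim}
import numpy as np

def leapfrog_step(accel_mag, eta=0.025, eps=0.020, dlna_max=0.025):
    """Step size: the global cap dominates at early times; the
    acceleration-based criterion dominates in collapsed regions
    at late times (eps = 20 h^-1 kpc expressed in h^-1 Mpc)."""
    return np.minimum(dlna_max, np.sqrt(2.0 * eta * eps / accel_mag))
\end{verbatim}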
We have run additional simulations in order to check convergence with respect to time-stepping criteria. In \textsc{CT40}, $\Delta\textrm{ln}(a)_{\textrm{max}}=0.0125$ and in \textsc{CT50}, $\eta=0.0125$, half of their respective values for our fiducial simulation, \textsc{CT00}. Comparisons of the same measurements detailed in \autoref{subsec:measurements} between \textsc{CT00-Chinchilla} and \textsc{CT40} are shown in \autoref{fig:timestepping}. No significant deviations are found. The same comparisons were made between \textsc{CT00-Chinchilla} and \textsc{CT50} and were found to be nearly identical, and so we have not included them for conciseness.
\begin{figure*}[htbp!]
\begin{tabular}{ccc}
\centering
\includegraphics[width=0.3\linewidth]{fig/time_stepping_pk_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/time_stepping_xi_mm_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/time_stepping_mass_fcn_comp.pdf} \\
\includegraphics[width=0.3\linewidth]{fig/time_stepping_xi_hh_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/time_stepping_wp_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/time_stepping_rsd_comp.pdf} \\
\end{tabular}
\caption{Convergence tests with respect to maximum time step. Observables measured from a simulation with our fiducial parameters, \textsc{CT00-Chinchilla} are compared to \textsc{CT40}, a simulation with half the maximum time step: $\Delta \ln(a)_{\mathrm{max}}=0.0125$. Subfigures are the same as in \autoref{fig:starttime}.}
\label{fig:timestepping}
\end{figure*}
\subsubsection{Particle Loading}
\label{subsubsec:particleloading}
The $N$-body\ algorithm solves for the evolution of a discretization of the phase-space distribution function of dark matter. Since this phase-space distribution is fundamentally continuous, at least on macroscopic scales, an important parameter governing the accuracy of the algorithm is the number of particles used to sample this distribution function. For the following tests we have run a set of simulations, \textsc{CT60,...,CT63} in three different cosmologies, \textsc{Chinchilla}, \textsc{T00}, and \textsc{T04}, where we have doubled the number of points with which we sample each spatial dimension, increasing the mass resolution by a factor of 8 from our fiducial settings. Results for the comparison of these simulations with our fiducial set in the Chinchilla cosmology can be found in \autoref{fig:massresolution}.
The mass function is converged to within $1\%$ for halos that are resolved with 500 particles or more. For masses below this, we observe varying degrees of deviation from convergence which depend to good approximation on just the number of particles with which the halo is resolved. This can be seen in \autoref{fig:massresolutionfit}, which demonstrates that bins in particle number show similar behavior for all redshifts except for very low particle numbers at high redshift. Only one cosmology is plotted, but a similar trend holds in the other two cosmologies, despite the three cosmologies spanning a large range in $\sigma_{8}$ and $\Omega_{m}$. We have fit the following function to the average of these residuals over redshift in order to characterize and correct for them in other works:
\begin{align}
\label{eq:massresfit}
\frac{N(M^{fid}_\text{200b}) - N(M^{m_{\mathrm{part}}/8}_\text{200b})}{N(M^{m_{\mathrm{part}}/8}_\text{200b})} &= -\exp\left[\frac{-(\log_{10}N_{\text{part}}-\log_{10}N_{0})}{\sigma_{\log_{10}N}}\right]
\end{align}
where $N_{\mathrm{part}}$ is the number of particles corresponding to $M^{fid}_\text{200b}$, the halo mass measured in our fiducial simulations. We find $\log_{10}N_{0}=0.25\pm0.13$ and $\sigma_{\log_{10}N}=0.557\pm 0.046$.
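A sketch of how this fitted correction can be applied to a fiducial-resolution mass function is given below; the actual application in \citet{McClintock2018} may differ in detail.
\begin{verbatim}
import numpy as np

LOG10_N0, SIGMA_LOG10_N = 0.25, 0.557  # best-fit values quoted above

def hmf_resolution_bias(n_part):
    """Fractional deficit of the fiducial-resolution mass function
    relative to the 8x higher mass resolution runs (fit above)."""
    return -np.exp(-(np.log10(n_part) - LOG10_N0) / SIGMA_LOG10_N)

def correct_mass_function(N_fid, mass_200b, m_part):
    """Invert the fitted bias: N_fid = N_true * (1 + bias)."""
    return N_fid / (1.0 + hmf_resolution_bias(mass_200b / m_part))
\end{verbatim}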
To higher order, the deviations from convergence appear to be dependent on the local logarithmic slope of the mass function, $\Gamma = \frac{d \log_{10} N^{fid}}{d\log_{10}M_\text{200b}}$, with the worst deviations occurring at low particle number and very steep slopes. This can be seen in \autoref{fig:logslope}. Here, we have measured the deviations of our fiducial simulations from convergence as a function of particle number and $\Gamma$, where $\Gamma$ is determined by fitting a quartic spline to $N(M_\text{200b})$ in the \textsc{CT0} simulations at all redshifts and taking its logarithmic derivative. We have also interpolated these measurements in order to make the trends more obvious. Above about 1000 particles, the deviations from convergence of the mass function are less than $1\%$ for all slopes. Below this particle number, there is a trend in error with $\Gamma$, leading to the larger errors seen at high redshift in \autoref{fig:massresolutionfit}. For these reasons, we caution against using the correction as determined above for halos with particle numbers less than 1000 when $\Gamma<-2$.
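The slope measurement can be reproduced along the following lines (a sketch using \textsc{scipy}; smoothing choices are our own):
\begin{verbatim}
import numpy as np
from scipy.interpolate import UnivariateSpline

def log_slope(log10_M, N_of_M):
    """Gamma = dlog10 N / dlog10 M from a quartic spline fit to
    the binned mass function, evaluated at the bin centers."""
    spline = UnivariateSpline(log10_M, np.log10(N_of_M), k=4)
    return spline.derivative()(log10_M)
\end{verbatim}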
The deficit of halos that we find in our fiducial simulations compared to the \textsc{CT6} simulations cannot be explained by increased Poisson random noise in the mass estimates, as this would lead to an over-abundance of halos at a given mass due to the negative slope of the mass function in a manner analogous to Eddington bias. Instead, the observed deficit suggests that a bias is being introduced in the density field, which is clear from the deviations observed in $P(k)$, such that low mass halos are less likely to form in lower resolution simulations.
These errors also propagate into other observables involving halo mass. For instance, $\xi_{hh}$ deviates from convergence by $7.5\%$ when using all halos with $M_\text{200b}>10^{12} ~h^{-1}\textrm{M}_{\odot}$, but quickly converges as a function of mass, as can be seen by the fact that halos with $M_\text{200b}>10^{12.5} ~h^{-1}\textrm{M}_{\odot}$ only deviate by $3\%$ from the simulations with higher particle loading. Mass cuts above this have noisy $\xi_{hh}$ measurements and so we cannot make precise statements about their convergence. The galaxy correlation functions are less sensitive to mass resolution at the low mass end because our HODs are tuned to match the BOSS massive galaxy sample. This can be seen in \autoref{fig:massresolution}e and \autoref{fig:massresolution}f, where $w_{p}$ is converged at the $1-2\%$ level, with the redshift space measurements performing only slightly worse. $\xi_{mm}$ is converged, while $P(k)$ deviates from convergence for $z=0$ above $k\sim 1.5~h~ \textrm{Mpc}^{-1}$, with a maximal deviation of about $2\%$ at $k\sim 3~h~ \textrm{Mpc}^{-1}$. The deviations from convergence for $P(k)$ are consistent with those found in \citet{Schneider2016}, who find $\sim 1\%$ deviations from convergence for $P(k)$ at $k\sim 1~h~ \textrm{Mpc}^{-1}$ for an $L_\text{box}=512~h^{-1}\textrm{Mpc}$, $N_\text{part}=512^3$ simulation.
\begin{figure*}[htbp!]
\begin{tabular}{ccc}
\includegraphics[width=0.3\linewidth]{fig/mass_resolution_pk_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/mass_resolution_xi_mm_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/mass_resolution_mass_fcn_comp.pdf} \\
\includegraphics[width=0.3\linewidth]{fig/mass_resolution_xi_hh_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/mass_resolution_wp_comp.pdf} &
\includegraphics[width=0.3\linewidth]{fig/mass_resolution_rsd_comp.pdf} \\
\end{tabular}
\caption{Convergence tests with respect to mass resolution. Subfigures are the same as in \autoref{fig:starttime}.}
\label{fig:massresolution}
\end{figure*}
\begin{figure}[htbp!]
\includegraphics[width=\columnwidth]{fig/mass_resolution_mass_fcn_error_fit.pdf} \\
\caption{Deviations of the mass functions measured in simulations using our fiducial parameters from simulations with higher mass resolution as a function of redshift. The line is a fit to all of these points in addition to the points for the other two cosmologies that are not shown in this figure.}
\label{fig:massresolutionfit}
\end{figure}
\begin{figure}[htbp!]
\includegraphics[width=\columnwidth]{fig/mass_function_error_v_npart_gamma.pdf} \\
\caption{Deviations of the mass functions measured in simulations using our fiducial parameters from simulations with higher mass resolution as a function of particle number and logarithmic slope of the mass function. A clear trend can be seen with logarithmic slope of the mass function for $N_{\mathrm{part}}<1000$. Black lines show the logarithmic slopes of mass functions measured in the \textsc{CT00-T00} simulation for different redshifts. The solid black region is beyond where we have any data and so we exclude it from this plot.}
\label{fig:logslope}
\end{figure}
\subsubsection{Finite Box Effects}
It is currently beyond the realm of possibility to simulate the entire observable universe at high enough resolution to be useful. Instead, the common practice in cosmological simulations is to assume periodic boundary conditions with a fundamental mode which is much larger than the scales of interest for the problem at hand. One effect of doing this is that modes with wavelengths larger than the fundamental mode of the box are not included in the growth of structure. Because gravitational collapse is a non-linear process, the growth of small-scale structure couples to large-scale growth, and thus the missing large-scale variance can cause inaccuracies and alter sample variance at smaller scales. Additionally, because our simulations are periodic, only discrete modes, $\vec{k}=\frac{2\pi(i,j,k)}{L_{box}}$ where $i,j,k\in \mathbb{Z}$, are included in the initial conditions. In order to test the effects of these approximations we have run a set of much larger, lower resolution simulations, \textsc{CT70,...,CT76}, at the same cosmologies as our test simulations, with one $(3\,h^{-1}\textrm{Gpc})^3$ box for each cosmology to compare with. The results of the comparison for the \textsc{T04} cosmology are shown in \autoref{fig:finitebox}; the other cosmologies show nearly identical results.
Because the \textsc{CT7} simulations have worse mass resolution than our test simulations, the analysis in \autoref{subsubsec:particleloading} indicates that there should be residual effects in this comparison due to mass resolution. In order to mitigate the differences arising from mass resolution, we have applied the correction in \autoref{eq:massresfit} to both sets of measurements. We find convergence to within sample variance of the test boxes for all masses at both $z=0$ and $z=1$, although this is significantly larger than the percent level at $z=1$ for all masses shown here. We also compared $\xi_{mm}(r)$ and found no deviations from convergence for $r<100 ~h^{-1}\textrm{Mpc}$.
\begin{figure}[htbp!]
\includegraphics[width=\columnwidth]{fig/finite_box_effects_mass_fcn.pdf} \\
\caption{Comparison of the average mass function over the five realizations of \textsc{T04} with that measured in \textsc{CT7-T04}. The top panel shows the mass functions, where the points are measured from \textsc{T04} and the lines are measurements from \textsc{CT7-T04}. The bottom panel shows the fractional difference of the mass functions between these simulations. These measurements are consistent with no finite box effects.}
\label{fig:finitebox}
\end{figure}
\section{Halo Finding}
\label{sec:halofinding}
In this section we discuss the sensitivity of our results to choices made with regard to halo finding. We have defined dark matter halos as spherical structures with overdensities of 200 times the background density. This choice is relatively arbitrary, having its basis in simple spherical collapse models which have been shown to be imprecise by modern cosmological standards. As such, we discuss the possible impacts that this definition might have on cosmological results obtained from emulators built upon these simulations. Note that we consider the choice of halo finder and the settings used in that halo finder to be part of the mass definition, and as such we do not consider the effects of different halo finders separately.
In the case of using galaxy clustering to constrain cosmology, there is a large literature on how choice of halo definition can impact and possibly bias inferred cosmology. Much of this literature has focused on the effect of secondary parameters on the clustering signals of halos at fixed mass \citep[e.g.][]{Wechsler2006,Gao2005,Mao2018,Chue2018}. This effect propagates differently into galaxy clustering depending on which proxy is then used to assign galaxies to halos \citep{Reddick2013, Lehmann2017}, and can lead to biases in inferred HOD parameters when neglected \citep{Zentner2014}. Whether these effects lead to biases in inferred cosmology when using HODs, and whether these biases can be mitigated through extensions to the HOD model are still open questions.
In the case of the halo mass function, the situation is equally complicated. We do not directly measure the halo mass function, unlike the galaxy correlation function, but rather some distribution of observables such as cluster richness or X-ray temperature. In order to constrain cosmology, a mass--observable relation (MOR) must be obtained, and the calibration of this mass--observable relation must also assume a halo mass definition. It is imperative that the definition used when constraining the MOR and the definition used for the halo mass function be the same in order to obtain unbiased cosmological constraints. If the scatter in the MOR is smaller for a particular mass definition, that definition will yield tighter constraints, but a study of the halo mass definition that minimizes scatter in the MOR is beyond the scope of this paper. A more practical reason for our choice of $\Delta=200\text{b}$ is that it makes the typical radii of cluster mass halos, $\sim0.5-2 ~h^{-1}\textrm{Mpc}$, significantly larger than the force softening lengths used in our simulations.
\section{Comparison with Other Emulators}
\label{sec:otheremu}
Having internally validated our simulations, we now compare our measurements to those obtained in other works. Unfortunately, the most precise determination of the matter power spectrum available to date, the Mira--Titan universe emulator \citep{Heitmann2016, Lawrence2017}, does not cover the same parameter space as our simulations. In particular, they do not include $N_{\text{eff}}$ in their parameter space, and varying this can lead to deviations in $P(k)$ on the order of $\sim 10\%$, much greater than the precision at which such a comparison would be relevant. Instead, we have compared our simulations to predictions from the widely used \textsc{Halofit} algorithm \citep{Smith2003,Takahashi2012} which does span our parameter space. Our simulations are not large enough in volume for precision emulation of the matter power spectrum, but nevertheless we can compare our measured matter power spectra to the \textsc{Halofit} predictions for our cosmologies as both an external validation of our simulations and as a further consistency check for the \textsc{Halofit} algorithm.
The results of this comparison can be found in \autoref{fig:halofit_comp}. Error bars in this figure correspond to the variance of the deviations of our 40 training simulations from \textsc{Halofit}. We find better than $1\%$ agreement in the mean deviation until $k\sim 0.3~h~ \textrm{Mpc}^{-1}$, but observe maximum errors close to $5\%$, consistent with the \textsc{Halofit} internal error estimation in \citet{Takahashi2012}. For wavenumbers larger than $k=1~h~ \textrm{Mpc}^{-1}$ we find large deviations of up to $12\%$, which are likely due to a combination of inaccuracies in \textsc{Halofit} and resolution effects in our simulations. The maximum errors that we observe for $0.1<k<1~h~ \textrm{Mpc}^{-1}$ are slightly smaller than those reported in \citet{heitmann2014}, but this may be attributable to the differences in the parameter spaces spanned by the two sets of simulations. In future work, we will construct our own emulator for the matter power spectrum in order to facilitate a direct comparison with the Mira--Titan emulator.
\begin{figure}[htbp!]
\includegraphics[width=\columnwidth]{fig/halofit_pk_comp.pdf}
\caption{Comparison of the matter power spectra measured in the 40 simulation boxes to the \citet{Takahashi2012} \textsc{Halofit} prediction. The vertical black line marks the wavenumber above which we expect mass-resolution effects to become important at the $>1\%$ level. Agreement is within the reported \textsc{Halofit} accuracy.}
\label{fig:halofit_comp}
\end{figure}
\section{Data Release}
\label{sec:datarelease}
With the posting of this article, we are making the simulations described here available on request. This includes the initial conditions, the particle snapshots and halo catalogs at all 10 redshifts described in \autoref{sec:simulations}, and any measurements used in this paper or in \citet{McClintock2018} and \citet{Zhai2017}.
We will make the aforementioned data products freely downloadable at \url{https://AemulusProject.github.io} at the time this study and its companion papers are published.
\section{Discussion and Conclusions}
\label{sec:summary}
We have presented a new suite of $N$-body simulations for emulating cosmological observables. The cosmologies of these simulations were sampled from the $w$CDM $4\sigma$ allowed CMB+BAO+SN parameter space using an orthogonal Latin Hypercube. We investigated the convergence of the following observables with respect to choices made in the \textsc{L-Gadget2} $N$-body\ solver:
\begin{enumerate}[label=\alph*)]
\item Matter power spectrum, $P(k)$,
\item 3-dimensional matter correlation function, $\xi_{mm}(r)$,
\item Spherical overdensity halo mass function, $N(M_\text{200b})$,
\item 3-dimensional halo--halo correlation function, $\xi_{hh}(r)$,
\item Projected galaxy--galaxy correlation function, $w_{p}(r_{p})$, and
\item Monopole and quadrupole moments of the redshift space galaxy--galaxy correlation function, $\xi_0(s)$, $\xi_{2}(s)$.
\end{enumerate}
We conclude that our observables are converged with respect to choices made in time stepping and force resolution. Choices with respect to initial conditions lead to minor deviations from $1\%$ convergence for halos resolved with fewer than 200 particles. Our choice of force softening leads to deviations from $1\%$ convergence for scales $r<200~h^{-1}\textrm{kpc}$.
Particle loading is by far the parameter that our observables are most sensitive to. For halos with greater than 500 particles we also find convergence at better than the $1\%$ level, but for masses smaller than this, deviations from convergence due to insufficient particle loading increase rapidly. Halos with more than 200 particles, like those used in \citet{McClintock2018}, are still converged to better than $2.5\%$. We have shown that this deviation is largely a function of particle number alone, and have fit this dependence and applied it to build the emulator in \citet{McClintock2018}. Additional tests in that study using even higher resolution simulations provide more evidence that this correction is satisfactory for our needs.
We have shown that our halo mass function predictions are not affected by finite box effects to the precision allowed by sample variance in our test boxes. At $z=0$, sample variance is smaller than $1\%$ for $M_{200\text{b}}\lesssim 4\times 10^{14}~h^{-1}\textrm{M}_{\odot}$. The study of this effect in the current work is limited by our inability to use the same initial conditions for the different box sizes necessary for this test, as was done for the rest of the internal convergence tests detailed in this work. As such, these tests are limited by the sample variance in our test boxes, which is greater than percent level for $M \ge 4\times 10^{14}~h^{-1}\textrm{M}_{\odot}$ and $z>0$. Future efforts are required to ensure that observables are indeed converged with respect to simulation size to the level needed for upcoming surveys.
The matter power spectra in our simulations are consistent with those predicted by the \textsc{Halofit} methodology to within their reported errors, but we are unable to compare to the Mira--Titan emulator as our simulations use a different cosmological parameter space. Tests of this nature are of the utmost importance, and continued work in making them is vital to ensure that emulators of this kind are put to their full use in upcoming analyses.
The work presented here is just the beginning of our effort to contribute high precision and accuracy simulations and emulators to the community. Future work will extend our simulation suite significantly, especially with higher resolution simulations that are suited for use with more complete galaxy formation models. Additionally, we plan to expand our parameter space by including more physics such as neutrino masses, and by expanding the limits of the parameters sampled to include more volume away from CMB+BAO+SN constraints. This is important, as upcoming analyses will attempt to diagnose tension between different data sets in addition to combining constraints from many different experiments.
The sharing of resources between simulators and the exchange of expertise between simulators, theorists, and observers will be vital in attaining the best possible outcomes for the next generation of surveys. Only a concerted effort from many groups in the domain of cosmic emulation over the next decade will help ensure that Stage-IV cosmological surveys are not limited by modeling systematics.
\acknowledgments
This work received support from the U.S. Department of Energy under contract number DE-AC02-76SF00515. JLT and RHW acknowledge support of NSF grant AST-1211889. TM and ER are supported by DOE grant DE-SC0015975. ER acknowledges additional support by the Sloan Foundation, grant FG-2016-6443. YYM is supported by the Samuel P.\ Langley PITT PACC Postdoctoral Fellowship.
This research made use of computational resources at SLAC National Accelerator Laboratory, and the authors thank the SLAC computational team for support. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
\software{Python,
Matplotlib \citep{matplotlib},
NumPy \citep{numpy},
SciPy \citep{scipy},
nbodykit \citep{nbodykit},
Corrfunc \citep{corrfunc},
CCL \citep{ccl},
CAMB \citep{CAMB},
CLASS \citep{class}
}
\bibliographystyle{yahapj}
\section{Introduction}
\label{s1}
The type of data that needs to be analysed has changed over the past few years. The evolution of technology allows recording large sets of data; it is therefore necessary to develop efficient and more precise methods to analyse such big data. The large size of datasets often leads to the need to aggregate units and build macrodata. As a consequence, data analysts are confronted with data where the observations are not single values or categories but finite sets of values/categories, intervals or distributions. It is therefore necessary to develop new methods that allow analysing these more complex datasets. Symbolic Data Analysis (SDA) \citep{bodi00,bidi07,noirbri11,br14} is a recent statistical framework that studies and develops methods for such symbolic variables. As in classical statistics, these new variables can be classified as quantitative or qualitative. Depending on the type of realization, quantitative variables may be single-valued - when each unit is allowed to take just one single value; multi-valued - when each unit is allowed to take a finite set of values; interval-valued - when an interval of real values is recorded for each unit; or modal-valued - when each unit is described by a probability/frequency/weight distribution. Histogram-valued variables, which we consider in this work, are a particular type of modal-valued variables \citep{bodi00,bidi07,noirbri11,br14}; we note that interval-valued variables are a special case of these, so that the models and methods developed here also apply to interval data.
\begin{example} \label{ex1}
Consider data about the Air Time and Arrival Delay of all flights of five airlines departing from a given airport. Here, the entities of interest are not the individual flights but the airlines, for each of which we aggregate information. The values of variables Air Time and Arrival Delay may be aggregated in the form of interval or histogram-valued variables; in the first case each airline is represented by an interval, defined by the minimum and maximum observed values; for histogram-valued variables, each airline is represented by the empirical distributions of the records associated with each variable.
Table \ref{table_AirTimeArrive } presents the obtained symbolic data, where Arrival Delay is an interval-valued variable and Air Time is a histogram-valued variable.
\end{example}
Linear models are frequently used in multivariate data analysis, as in linear regression and linear discriminant analysis. However, when the observations are not single values but intervals or distributions, the classical definition of linear combination has to be adapted. According to the definition proposed by \citet{dibr15}, the observations of the variables that are distributions or intervals of values are rewritten as quantile functions, the inverse of cumulative distribution functions \citep{irve15}, thereby taking into account the distribution within the intervals or subintervals that compose the histograms.
\begin{center}
\begin{table}
\caption{Arrival Delay and Air Time interval and histogram-valued variables, respectively.}
{\scriptsize
\begin{tabular}{|c|c|c|}
\hline
Airline & \multirow{2}{*}{Arrival Delay} & \multirow{2}{*}{Air Time } \\
(IATA code) && \\
\hline
9E & $[-68,744]$ & $\{[21,56.5[,0.3;[56.5,106[,0.4;[106,196[,0.2;[196,272[,0.1\}$\\
EV & $[-62,577]$ & $\{[20,49[,0.2;[49,76[,0.2;[76,97[,0.2;[97,124[,0.2;[124,286],0.2\}$\\
MQ & $[-53,1127]$ & $\{[33,68[,0.2;[68,77[,0.2;[77,105[,0.3;[105,236],0.3\}$\\
OO & $[-26,157]$ & $\{[50,68[,0.4;[68,70[,0.2;[70,177[,0.4\}$\\
YV & $[-46,381]$ & $\{[32,47[,0.2;[47,51[,0.2;[51,77[,0.2;[77,85[,0.2;[85,122],0.2\}$\\
\hline
\end{tabular}}
\label{table_AirTimeArrive }
\end{table}
\end{center}
In the case of histogram-valued variables, the Uniform distribution is usually assumed; for interval-valued variables, different distributions may be assumed \citep{dibr17}; to date, Uniform and Triangular distributions have been considered in linear regression models for this type of variable \citep{dibr17, tesemalaquias17}. The criterion optimized to define linear models, for both variable types, is based on the Mallows distance. Using the linear combination proposed in \citet{dibr15}, we define, in this work, a linear discriminant function which provides, for each unit, a score that is a quantile function. Each unit is then classified based on the distance between its score and the barycentric score of each a priori class.
For interval-valued variables, parametric classification rules, based on Normal or Skew-Normal distributions, were proposed in \citet{sibri15}.
Non-parametric methodologies for discriminant analysis of interval data may be found in, e.g. \cite{Ishibuchi90}, \cite{Nivlet01}, \cite{Rossi02}, \cite{DSBr06}, \cite{Carrizosa07}, \cite{Angulo07}, \cite{Lauro08}, \cite{Utkin11}.
The remainder of the paper is organized as follows. Section 2 introduces histogram and interval-valued variables and their representations, the distance used to evaluate the similarity between histograms/intervals, and the definition of linear combination for these types of variables. Section 3 presents the discriminant function and the optimization problem that allows obtaining the model parameters. Section 4 reports a simulation study and discusses its results. In Section 5, an application to flights data is presented. Finally, Section 6 concludes the paper, pointing out directions for future research.
\section{Concepts about histogram-valued variables} \label{s2}
In this section we introduce definitions and recall results needed to support the symbolic linear discriminant analysis to be proposed, which allows for the classification of a set of units into two classes.
\subsection{Histogram-valued variables and their representations}\label{s2.1}
Histogram-valued variables \citep{bidi07, noirbri11}, are formally defined as follows.
\begin{definition}\label{def2.1}
$Y$ is a histogram-valued variable when to each unit $i \in \{1,\ldots,n\}$ corresponds a histogram $Y(i)$ defined by a finite number of contiguous and non-overlapping intervals, each of which is associated with a (non-negative) weight. Then, $Y(i)$ can be represented by a histogram (\cite{bidi03}):
\begin{equation}\label{eqHinterval}
H_{Y(i)}=\left\{\left[\underline{I}_{Y(i)_1},\overline{I}_{Y(i)_1}\right[,p_{i1}; \left[\underline{I}_{Y(i)_2},\overline{I}_{Y(i)_2}\right[,p_{i2};\ldots;
\left[\underline{I}_{Y(i){m_{i}}},\overline{I}_{Y(i){m_{i}}}\right],p_{im_{i}}\right\}
\end{equation}
where $p_{i\ell}$ is the probability or frequency associated with the subinterval $\left[\underline{I}_{Y(i)_{\ell}},\overline{I}_{Y(i)_{\ell}}\right[,$ $\ell \in \left\{1,2,\ldots,m_{i}\right\};$ $m_{i}$ is the number of subintervals for unit $i$; $\displaystyle\sum\limits_{\ell=1}^{m_{i}} p_{i\ell}=1;$ $\underline{I}_{Y(i)_\ell} \leq \overline{I}_{Y(i)_{\ell}}$ for $\ell \in \left\{1,2,\ldots,m_{i}\right\},$ and $\overline{I}_{Y(i)_{\ell-1}}\leq \underline{I}_{Y(i)_{\ell}},$ $\ell \in \left\{2,\ldots,m_{i}\right\}.$
\end{definition}
Each subinterval $I_{Y(i)_\ell}$ may be represented by its lower and upper bounds $\underline{I}_{Y(i)_\ell}$ and $\overline{I}_{Y(i)_\ell}$, or by its centre $c_{Y(i)_\ell}=\frac{\overline{I}_{Y(i)_\ell}+\underline{I}_{Y(i)_\ell}}{2}$ and half-range $r_{Y(i)_\ell}=\frac{\overline{I}_{Y(i)_\ell}-\underline{I}_{Y(i)_\ell}}{2}$.
$Y(i)$ may, alternatively, be represented by the inverse cumulative distribution function, the quantile function $\Psi_{Y(i)}^{-1}$, under specific assumptions \citep{irve06, dibr15}.
Henceforth, in all representations, it is assumed that within each subinterval $\left[\underline{I}_{Y(i)_{\ell}},\overline{I}_{Y(i)_{\ell}}\right[$ the values for the variable $Y,$ for unit $i$, are uniformly distributed.
In this case, the quantile function is piecewise linear and is given by
\begin{equation}\label{eqHFQ}
\Psi_{Y(i)}^{-1}(t)=\left\{{\renewcommand{\arraystretch}{1.25}\begin{array}{lll}
c_{Y(i)_1}+\left(\frac{2t}{w_{i1}}-1\right) r_{Y(i)_1} & \textit{ if } & 0 \leq t < w_{i1} \\
c_{Y(i)_2} +\left(\frac{2(t-w_{i1})}{w_{i2}-w_{i1}}-1\right) r_{Y(i)_2} & \textit{ if } & w_{i1} \leq t < w_{i2} \\
\vdots & & \\
c_{Y(i)_{m_i}} +\left(\frac{2(t-w_{i(m_i-1)})}{1-w_{i(m_i-1)}}-1\right) r_{Y(i)_{m_{i}}} & \textit{ if } & w_{i(m_i-1)} \leq t \leq
1
\end{array}}
\right.
\end{equation}
\noindent where $w_{i\ell}=\displaystyle \sum_{h=1}^{\ell} p_{ih}$, $\ell \in\{1,\ldots,m_{i}\}$, and $m_{i}$ is the number of subintervals in $Y(i).$
In the previous definition:
\begin{itemize}
\item The subintervals of histograms $H_{Y(i)}$ should be ordered and disjoint; if not, they must be rewritten in that form (see \cite{tesedias14}, Appendix A).
\item For different units, the number of subintervals, or pieces in the quantile functions, may be different. If necessary, this function may be rewritten with the same number of pieces and the same domain for each piece for all units (see \cite{tesedias14}, Section 2.2.3).
\end{itemize}
When $m_i=1$ for each unit $i,$ $Y(i)$ is the interval $\left[\underline{I}_{Y(i)},\overline{I}_{Y(i)}\right[$ with $p_{i}=1;$ the histogram-valued variable is then reduced to an interval-valued variable. The corresponding quantile function is:
$\Psi_{Y(i)}^{-1}(t)=
c_{Y(i)}+ \left(2t-1\right)r_{Y(i)}$, $0 \leq t \leq 1$, where $c_{Y(i)}= \frac{(\underline{I}_{Y(i)}+\overline{I}_{Y(i)})}{2}$ and $r_{Y(i)}= \frac{(\overline{I}_{Y(i)} -\underline{I}_{Y(i)})}{2}$.
\begin{example} \label{ex2}
Consider the distribution of variable Air Time for the airline 9E in Example \ref{ex1}: $H_{9E}=\left\{[21,56.5[,0.3;[56.5,106[,0.4;[106,196[,0.2;[196,272[,0.1\right\}.$
This histogram may also be represented by the quantile function (see \textit{Figure \ref{figHist1}}):
\small{
$$
\Psi_{9E}^{-1}(t)=\left\{\begin{array}{lll}
38.75+\left(\frac{2t}{0.3}-1\right)\times 17.75 & \quad if \quad & 0 \leq t <0.3 \\
81.25 +\left(\frac{2(t-0.3)}{0.4}-1\right) \times 24.75 & \quad if \quad & 0.3 \leq t < 0.7 \\
151 +\left(\frac{2(t-0.7)}{0.2}-1\right) \times 45 & \quad if \quad & 0.7 \leq t < 0.9 \\
234 +\left(\frac{2(t-0.9)}{0.1}-1\right) \times 38 & \quad if \quad & 0.9 \leq t \leq 1
\end{array}
\right.
$$}
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{Hist1.jpg}~
\includegraphics[width=0.5\textwidth]{FQ1.jpg}
\caption{Histogram and respective quantile function of $H_{9E}$ in Example \ref{ex1}.}\label{figHist1}
\end{figure}
For the interval-valued variable Arrival Delay for airline 9E, $I_{9E}=[-68,744],$ the quantile function is the linear function
$\Psi_{9E}^{-1}(t)= 338+(2t-1)406, \, \, t\in[0,1]$,
represented in Figure \ref{figInt1}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{Int1.jpg}~
\includegraphics[width=0.5\textwidth]{FQ_Int.jpg}
\caption{Interval and quantile function of $I_{9E}$ in Example \ref{ex1}.}\label{figInt1}
\end{figure}
\end{example}
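As an illustration of this representation, the following minimal Python sketch (given for illustration only, not part of the methodology of the cited works) evaluates the piecewise-linear quantile function of a histogram from its subinterval centres, half-ranges and weights, and recovers the subinterval bounds of $H_{9E}$:

\begin{verbatim}
import numpy as np

def quantile_function(c, r, w, t):
    # Piecewise-linear quantile function of a histogram with subinterval
    # centres c, half-ranges r and weights w, evaluated at points t.
    c, r = np.asarray(c, float), np.asarray(r, float)
    cw = np.concatenate(([0.0], np.cumsum(w)))   # w_0 = 0, ..., w_m = 1
    t = np.atleast_1d(np.asarray(t, float))
    # piece containing each t (t = 1 is assigned to the last piece)
    l = np.clip(np.searchsorted(cw, t, side='right') - 1, 0, len(c) - 1)
    frac = (t - cw[l]) / (cw[l + 1] - cw[l])
    return c[l] + (2.0*frac - 1.0) * r[l]

# Air Time histogram of airline 9E (Example 2)
bounds = [21, 56.5, 106, 196, 272]
w = [0.3, 0.4, 0.2, 0.1]
c = [(a + b)/2 for a, b in zip(bounds[:-1], bounds[1:])]
r = [(b - a)/2 for a, b in zip(bounds[:-1], bounds[1:])]
print(quantile_function(c, r, w, [0.0, 0.3, 0.7, 0.9, 1.0]))
# -> [ 21.   56.5 106.  196.  272. ]  (the bounds, up to floating point)
\end{verbatim}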
Since empirical quantile functions are the inverse of cumulative distribution functions, which under the uniformity hypothesis are piecewise linear functions in the case of the histograms and continuous linear functions in the case of the intervals, with domain $\left[0,1\right],$ we shall use the usual arithmetic operations with functions. However, when we use quantile functions as the representation of histograms some issues may arise:
\begin{enumerate}
\item To operate with the quantile functions it is convenient to define all functions involved with the same number of pieces, and the domain of each piece must be the same for all functions. All histograms must hence have the same number of subintervals and the weight associated with each corresponding subinterval must be equal in all units \citep{irve06};
\item The quantile functions are always non-decreasing functions because all linear pieces represent the subintervals $ [\underline{I}_{Y(i)_\ell} , \overline{I}_{Y(i)_\ell}];$
\item When we multiply a quantile function by a negative real number, we obtain a function that is not non-decreasing, so it cannot represent a histogram/interval;
\item The quantile function corresponding to the symmetric of the histogram $H_X$ is the quantile function $-\Psi_{X}^{-1}(1-t)$ with $t \in [0,1]$ and not the function obtained by multiplying $\Psi_{X}^{-1}(t)$ by $-1.$ As it is required for quantile functions, $-\Psi_{X}^{-1}(1-t)$ is a non-decreasing function;
\item $\Psi_{X}^{-1}(t)-\Psi_{X}^{-1}(1-t)$ is not a null function, as might be expected, but a quantile function with null (symbolic) mean \citep{bidi03};
\item The functions $-\Psi_{X}^{-1}(1-t)$ and $\Psi_{X}^{-1}(t)$ are linearly independent, provided that $-\Psi_{X}^{-1}(1-t) \neq \Psi_{X}^{-1}(t)$; only when the histogram $H_{X}$ is symmetric with respect to the $y$-axis do we have $-\Psi_{X}^{-1}(1-t) = \Psi_{X}^{-1}(t).$
\end{enumerate}
For more details about the behavior of quantile functions in the representation of histograms, see \citet{tesedias14, dibr15}.
Descriptive statistics for symbolic variables have been proposed by several authors, see \cite{bertrand2000} for interval-valued variables and \cite{bidi03,bidi07}, for histogram-valued variables. The definition of symbolic mean, which is needed in the sequel, is as follows.
\begin{definition}\label{defmean}
Let $Y$ be a histogram-valued variable and $H_{Y(i)}$ the observed histogram for each unit $i \in \left\{1,...,n\right\}$, composed of $m_{i}$ subintervals with weights $p_{i\ell},$ $\ell \in\left\{1,\ldots,m_{i}\right\}$.
The symbolic mean of \textit{histogram-valued variable} $Y$ is given by \citep{bidi03}
$\overline{Y} = \displaystyle \frac{1}{n} \sum_{i=1}^{n} \sum_{\ell=1}^{m_{i}} c_{Y(i)\ell} \,\, p_{i\ell}.$
For an interval-valued variable $Y$, the symbolic mean is the arithmetic mean of the centres of the intervals (\cite{bertrand2000}):
$\overline{Y} = \displaystyle \frac{1}{n}\sum_{i=1}^{n} c_{Y(i)}.$
\end{definition}
\subsection{Mallows distance}\label{s2.2}
In the recent literature, the Mallows distance has been considered an adequate measure to evaluate the similarity between distributions. This distance has been successfully used in cluster analysis for histogram data \citep{irve06}, in forecasting histogram time series \citep{arma09}, and in linear regression with histogram/interval-valued variables \citep{arma09,irve15, dibr15,dibr17}.
To compare the distributions taken by the histogram-valued variables using the Mallows distance, they should be represented by their corresponding quantile functions. This distance is defined as follows:
\begin{definition}\label{defDM}
Given two quantile functions $\Psi_{X}^{-1}(t)$ and $\Psi_{Y}^{-1}(t)$ that represent the distributions of the histogram-valued variables $X$ and $Y,$ the Mallows distance \citep{mall72} is defined as:
\begin{equation}\label{eqDMdef}
D_M(\Psi_{X}^{-1}(t),\Psi_{Y}^{-1}(t))=\sqrt{\int_{0}^{1}(\Psi_{X}^{-1}(t)-\Psi_{Y}^{-1}(t))^2dt}
\end{equation}
Assuming the Uniform distribution within the subintervals and that the quantile functions $\Psi_{X}^{-1}(t)$ and $\Psi_{Y}^{-1}(t)$ are both written with $m$ pieces and the same set of cumulative weights, the squared Mallows distance may be rewritten as \citep{irve06}:
\begin{equation}\label{eqDM}
D^{2}_M(\Psi_{X}^{-1}(t),\Psi_{Y}^{-1}(t))=\displaystyle\sum_{\ell=1}^{m}p_{\ell}\left[(c_{X_\ell}-c_{Y_{\ell}})^2+\frac{1}{3}(r_{X_\ell}-r_{Y_{\ell}})^2\right]
\end{equation}
\noindent where $c_{X_\ell}, c_{Y_\ell}$ are the centres and $r_{X_\ell}, r_{Y_\ell}$ are the half-ranges of subinterval $\ell$ of variables $X$ and $Y,$ respectively, with $\ell \in \left\{1,2,\ldots,m \right\}.$
\end{definition}
We notice that the weight of the difference between the centres is larger than the weight of the difference between the half-ranges.
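Expression (\ref{eqDM}) is immediate to implement once both histograms are written with the same cumulative weights; a minimal Python sketch, for illustration:

\begin{verbatim}
import numpy as np

def mallows_sq(c_x, r_x, c_y, r_y, w):
    # Squared Mallows distance between two histograms written with the
    # same weights w, given their subinterval centres and half-ranges.
    c_x, r_x = np.asarray(c_x, float), np.asarray(r_x, float)
    c_y, r_y = np.asarray(c_y, float), np.asarray(r_y, float)
    return float(np.sum(np.asarray(w, float) *
                        ((c_x - c_y)**2 + (r_x - r_y)**2 / 3.0)))

# two intervals (one-piece histograms): [1,3] and [2,6]
print(mallows_sq([2], [1], [4], [2], [1.0]))   # -> 4.333... = 4 + 1/3
\end{verbatim}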
Given a set of $n$ units, we may then compute the \textit{Mallows barycentric histogram} or simply \textit{barycentric histogram}, $Y_{b},$ represented by the quantile function $\Psi_{Y_{b}}^{-1}(t)$, as the solution of the minimization problem \citep{irve06}:
\begin{equation}
{\renewcommand{\arraystretch}{1.5}
\begin{array}{ll}
\min & {\displaystyle \sum_{i=1}^{n}D_{M}^2(\Psi_{Y(i)}^{-1}(t),\Psi_{Y_{b}}^{-1}(t))}.\\
\end{array}}\label{eq1.Barycentric}
\end{equation}
According to (\ref{eqDM}) we may rewrite the previous problem in the form
\begin{equation}
\begin{array}{ll}
\min & {\displaystyle f(c_{b1},r_{b1},\ldots,c_{bm},r_{bm})=\displaystyle \sum_{i=1}^{n}\sum_{\ell=1}^{m}p_{\ell}\left[\left(c_{Y(i)_{\ell}}-c_{Y_{b\ell}}\right)^2+\frac{1}{3}\left(r_{Y(i)_{\ell}}-r_{Y_{b\ell}}\right)^2\right]}.\\
\end{array}
\label{eq2.Barycentric}
\end{equation}
The optimal solution is obtained by solving a least squares problem; it is a histogram where the centre and half-range of each subinterval $\ell$ are the classical means of, respectively, the centres and half-ranges of subinterval $\ell$ over all units $i$, and it corresponds to the quantile function:
\begin{equation}\label{eq.BarycentricQF}
\Psi_{Y_{b}}^{-1}(t)=\left\{{\renewcommand{\arraystretch}{1.25}\begin{array}{lll}
c_{b1}+\left(\frac{2t}{w_{1}}-1\right) r_{b1} & \textit{ if } & 0 \leq t < w_{1} \\
c_{b2} +\left(\frac{2(t-w_{1})}{w_{2}-w_{1}}-1\right) r_{b2} & \textit{ if } & w_{1} \leq t < w_{2} \\
\vdots & & \\
c_{bm} +\left(\frac{2(t-w_{m-1})}{1-w_{m-1}}-1\right) r_{bm} & \textit{ if } & w_{m-1} \leq t \leq
1
\end{array}}
\right..
\end{equation}
\begin{equation*}
\text{with } c_{b\ell}=\frac{1}{n}\displaystyle \sum_{i=1}^{n} c_{Y(i)_{\ell}} \qquad \text{and} \qquad r_{b\ell}=\frac{1}{n}\displaystyle \sum_{i=1}^{n} r_{Y(i)_{\ell}}.
\end{equation*}
\begin{proposition}\label{prBarycentricMeanFQ}
\citep{irve10} The quantile function $\Psi_{Y_b}^{-1}(t),$ that represents the barycentric histogram of $n$ histograms, is the mean of the $n$ quantile functions that represent each observation of the histogram-valued variable $Y,$ i.e.
$\Psi_{Y_b}^{-1}(t)=\overline{\Psi_{Y}^{-1}}(t).$
\end{proposition}
The mean value of the barycentric histogram, $\overline{Y}_{b},$ is the symbolic mean of the histogram-valued variable $Y$ \citep{tesedias14}:
$\displaystyle \overline{Y}=\int_{0}^{1} \overline{\Psi_{Y}^{-1}}(t) dt=\int_{0}^{1} \Psi_{Y_b}^{-1}(t) dt.$
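Once all histograms are written with a common set of cumulative weights, the barycenter therefore reduces to subinterval-wise means, and the symbolic mean to the mean of the weighted centres; a minimal Python sketch, for illustration:

\begin{verbatim}
import numpy as np

def barycenter(C, R):
    # Barycentric histogram of n histograms written with common weights:
    # subinterval-wise means of centres C and half-ranges R (n x m arrays).
    return C.mean(axis=0), R.mean(axis=0)

def symbolic_mean(C, w):
    # Symbolic mean: mean of the weighted subinterval centres; it equals
    # the integral of the barycentric quantile function.
    return float(np.mean(C @ np.asarray(w, float)))
\end{verbatim}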
The barycentric histogram corresponds to the ``center of gravity'' of the set of histograms. This may be observed in the following example.
\begin{example} \label{ex3}
Consider the distributions of variable Air Time for the airlines in Example \ref{ex1}; the barycentric histogram may be represented by the quantile function
\small{
$$
\overline{\Psi_{AirTime}^{-1}}(t)=\left\{\begin{array}{lll}
43.26+\left(\frac{2t}{0.2}-1\right)\times 12.06 & \quad if \quad & 0 \leq t <0.2 \\
61.56 +\left(\frac{2(t-0.2)}{0.2}-1\right) \times 6.24 & \quad if \quad & 0.2 \leq t < 0.4\\
77 +\left(\frac{2(t-0.4)}{0.2}-1\right) \times 9.2 & \quad if \quad & 0.4 \leq t < 0.6 \\
95.88 +\left(\frac{2(t-0.6)}{0.2}-1\right) \times 9.68 & \quad if \quad & 0.6 \leq t < 0.8 \\
162.08 +\left(\frac{2(t-0.8)}{0.2}-1\right) \times 56.52 & \quad if \quad & 0.8 \leq t \leq 1
\end{array}
\right.
$$}
The quantile functions that represent the distributions for the five airlines, and the respective barycentric histogram, are represented in Figure \ref{figBarHist}. This barycenter provides more information than the symbolic mean (see Definition \ref{defmean}), which is equal to $87.96.$
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{BaricentroHist.jpg}
\caption{Observed and barycentric quantile functions of variable Air Time, in Example \ref{ex1}.}\label{figBarHist}
\end{figure}
\end{example}
The concept of barycentric histogram allows for the definition of a measure of inertia based on the \textit{Mallows distance} \cite{irve06}.
The total inertia $(TI),$ with respect to the barycentric histogram $Y_{b},$ of a set of $n$ histogram observations $Y(i), i=1,\ldots, n$, is given by
$TI= \displaystyle \sum_{i=1}^{n} D^{2}_{M}(\Psi_{Y(i)}^{-1}(t),\Psi_{Y_b}^{-1}(t)).$
\cite{irve06} proved that the \textit{Mallows distance} allows for the Huygens decomposition of inertia for clustered histogram-valued data:
\begin{equation}
\begin{array}{lll}
TI & = & BI+WI \\
& = & \displaystyle \sum_{k=1}^{s}n_k D^{2}_{M}(\Psi_{Y_{b_k}}^{-1}(t),\Psi_{Y_b}^{-1}(t))+ \displaystyle \sum_{k=1}^{s} \sum_{i=1}^{n_k} D^{2}_{M}(\Psi_{Y(i)}^{-1}(t),\Psi_{Y_{b_k}}^{-1}(t))
\end{array}\label{eq.HuygensTheorem}
\end{equation}
\noindent where $Y_{b_{k}}$ is the barycenter of group $k$ and $n_{k} = |G_{k}|,$ $k\in \left\{1,\ldots,s\right\}.$
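The decomposition (\ref{eq.HuygensTheorem}) is easily checked numerically; in the following Python sketch on synthetic centres and half-ranges (the identity is algebraic, so the check does not require the synthetic values to form valid histograms), the total inertia equals the sum of the between-group and within-group terms:

\begin{verbatim}
import numpy as np

n, m, s = 40, 5, 2                      # units, subintervals, groups
w = np.full(m, 1.0/m)                   # common subinterval weights
rng = np.random.default_rng(0)
C = rng.normal(10, 3, (n, m)).cumsum(1) # synthetic centres
R = rng.uniform(0.5, 2.0, (n, m))       # synthetic half-ranges
g = np.arange(n) % s                    # group labels

def d2(cx, rx, cy, ry):                 # squared Mallows distance
    return np.sum(w * ((cx - cy)**2 + (rx - ry)**2 / 3.0), axis=-1)

cb, rb = C.mean(0), R.mean(0)           # overall barycenter
TI = d2(C, R, cb, rb).sum()
BI = sum((g == k).sum() * d2(C[g == k].mean(0), R[g == k].mean(0), cb, rb)
         for k in range(s))
WI = sum(d2(C[g == k], R[g == k], C[g == k].mean(0), R[g == k].mean(0)).sum()
         for k in range(s))
print(np.isclose(TI, BI + WI))          # -> True
\end{verbatim}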
\subsection{Linear combination of histograms}\label{s2.3}
The linear combination of histogram-valued variables is not a simple adaptation of the classical definition, as it is not possible to apply the classical linear combination to the quantile functions that represent the distributions, as in:
$\Psi_{\widehat{Y}(i)}^{-1}(t)=a_{1}\Psi_{X_{1}(i)}^{-1}(t)+a_{2}\Psi_{X_{2}(i)}^{-1}(t)+\ldots+a_{p}\Psi_{X_{p}(i)}^{-1}(t).$
The problem comes from the fact that when we multiply a quantile function by a negative number we do not obtain a non-decreasing function. If non-negativity constraints are imposed on the parameters $a_j, $ $j \in \{1,2, \ldots, p\}$ a quantile function is always obtained; however, this solution forces a direct linear relation between $\Psi_{\widehat{Y}(i)}^{-1}(t)$ and $\Psi_{X_{j}(i)}^{-1}(t)$, which is not acceptable.
In order to define a linear combination that solves the problem of the semi-linearity of the space of quantile functions and allows for a direct or an inverse linear relation between the histogram-valued variables involved, a method was proposed in \cite{dibr15,dibr17}. The proposed definition includes two terms for each explicative variable - one for the quantile function that represents each histogram/interval $X_{j}(i)$ and the other for the quantile function that represents the respective symmetric histogram/interval. This solution, however, increases the number of parameters to estimate; one should therefore bear in mind that the number of units $n$ must be large enough.
\begin{definition}\label{defDSD} \citep{dibr15}
Consider the histogram-valued variables $X_{1}; X_{2}; \ldots; X_{p}$.
Let $\Psi_{X_1(i)}^{-1}(t),$ $\Psi_{X_2(i)}^{-1}(t),\ldots,$ $\Psi_{X_p(i)}^{-1}(t)$, with $t \in [0,1]$, be the quantile functions that represent the distributions the variables take for each unit $i$, and $-\Psi_{X_1(i)}^{-1}(1-t),-\Psi_{X_2(i)}^{-1}(1-t),\ldots,$ $-\Psi_{X_p(i)}^{-1}(1-t)$ the quantile functions that represent the respective symmetric distributions.
The linear combination of the histogram-valued variables $X_{1}, X_{2}, \ldots, X_{p}$ is a new histogram-valued variable $Y$, whose quantile functions $\Psi_{Y(i)}^{-1}$ may be expressed as
\begin{equation}\label{eqPredictFQint1}
\Psi_{Y(i)}^{-1}(t)=\sum_{j=1}^{p}a_{j}\Psi_{X_{j}(i)}^{-1}(t)-\sum_{j=1}^{p}b_{j}\Psi_{X_{j}(i)}^{-1}(1-t)
\end{equation}
\noindent with $t \in \left[0,1\right];$ $a_{j},b_{j} \geq 0,$ $j \in \left\{1,2,\ldots,p \right\}.$
\end{definition}
In the particular case of interval-valued variables and assuming uniformity within the observed intervals, the corresponding quantile functions are $\Psi^{-1}_{X_{j}(i)}(t)=c_{X_{j}(i)}+(2t-1)r_{X_{j}(i)}$ and $-\Psi^{-1}_{X_{j}(i)}(1-t)=-c_{X_{j}(i)}+(2t-1)r_{X_{j}(i)}, i=1,\ldots, n.$
In this case, the linear combination may be written as \citep{dibr17}:
\begin{equation}\label{eqPredictFQint2}
\Psi_{Y(i)}^{-1}(t)=\sum_{j=1}^{p}\left(a_{j} -b_{j}\right)c_{X_{j}(i)}+\sum_{j=1}^{p}\left(a_{j}+b_{j}\right) r_{X_{j}(i)} \left(2t-1\right)
\end{equation}
\noindent with $t \in \left[0,1\right];$ $a_{j},b_{j} \geq 0,$ $j \in \left\{1,2,\ldots,p \right\}.$
Interval-valued variables where all observations are degenerate intervals, i.e., intervals with null range, are classical variables. In this case, (\ref{eqPredictFQint2}) is the classical definition of linear combination. This is in accordance with the purpose of SDA: the statistical concepts and methods defined for symbolic variables should generalize the classical ones.
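In centre/half-range form, and for histograms written with equal subinterval weights (so that reversing the order of the subintervals realizes the symmetric distribution $-\Psi^{-1}(1-t)$), Definition \ref{defDSD} can be implemented directly; a minimal Python sketch, for illustration:

\begin{verbatim}
import numpy as np

def linear_combination(C, R, a, b):
    # Centres and half-ranges of the linear combination of the p
    # histograms observed for one unit; C, R are (p x m) arrays and
    # a, b are non-negative weight vectors of length p. Equal
    # subinterval weights are assumed, so C[:, ::-1] and R[:, ::-1]
    # give the centres/half-ranges of -Psi^{-1}(1-t) up to sign.
    a = np.asarray(a, float)[:, None]
    b = np.asarray(b, float)[:, None]
    c_out = (a * C - b * C[:, ::-1]).sum(axis=0)
    r_out = (a * R + b * R[:, ::-1]).sum(axis=0)
    return c_out, r_out
\end{verbatim}

Since $a_j, b_j \geq 0$, the result is a non-negative combination of non-decreasing functions, hence itself non-decreasing, i.e., a valid quantile function.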
\section{Linear Discriminant function}\label{s3}
The process that allows obtaining a linear discriminant function for histogram-valued variables is similar to the classical method.
\subsection{Discriminant function for classical variables}\label{s3.1}
In the classical setup, i.e., with real-valued variables, the linear discriminant function defines a score for each unit, $S$, as the linear combination of the $p$ descriptive variables, $\xi=X' \gamma$, where $X$ is the vector of the $p$ centered variables and $\gamma$ the $p\times 1$ vector of weights. The sum of squares of the resulting discriminant scores is given by $\xi'\xi=\gamma' T \gamma,$ where $T=XX'$ is the matrix of the total Sums of Squares and Cross-Products (SSCP) of the matrix $X$. Note that $\gamma' T\gamma$ is the sum of the squared Euclidean distances $d$ between $S(i)=\displaystyle\sum_{j=1}^{p}\gamma_{j}X_j(i)$ and $\overline{S}=\displaystyle \frac{1}{n}\displaystyle\sum_{i=1}^{n}S(i)$, i.e. $
\displaystyle \gamma'T\gamma=\sum_{i=1}^{n}d^2(S(i),\overline{S}).
$
Given the decomposition of $T$ as the sum of the matrix of the sum of squares and cross-products between-groups, $B$ and the matrix of the sum of squares and
cross-products within-groups, $W,$ $T = B +W$, we may write $\gamma' T \gamma=\gamma' B \gamma+\gamma' W \gamma.$
The vector $\gamma$ that defines the discriminant function is then estimated such that the ratio between the variability between groups and the variability within groups is maximum, i.e., maximizing $ \displaystyle \lambda=\frac{\gamma' B\gamma}{\gamma' W\gamma}$. This is an easy and classical optimization problem.
\subsection{Discriminant function for histogram-valued variables}\label{s3.2}
We now define the discriminant function that allows obtaining a discriminant score for $p$ histogram-valued variables, as well as the ratio to be optimized to estimate the weight vector of the discriminant function.
\begin{definition} \label{deffuncaodiscr}
Consider $p$ histogram-valued variables represented for each unit $i$ by the respective quantile function $\Psi^{-1}_{X_{j}(i)}(t)$, as in (\ref{eqHFQ}). The score for unit $i$ is a quantile function, $\Psi_{S(i)}^{-1}(t),$ obtained by the linear combination as in (\ref{eqPredictFQint1}):
\begin{equation}\label{eqScore1}
\Psi_{S(i)}^{-1}(t)=\sum_{j=1}^{p}a_{j}\Psi_{X_{j}(i)}^{-1}(t)-
\sum_{j=1}^{p}b_{j}\Psi_{X_{j}(i)}^{-1}(1-t)
\end{equation}
\noindent with $t \in \left[0,1\right];$ $a_{j},b_{j} \geq 0,$ $j \in \left\{1,2,\ldots,p \right\}.$
\end{definition}
Similarly to the classical case, the sum of the squared Mallows distances between the score for unit $i$, $\Psi_{S(i)}^{-1}(t),$ and the mean of all scores, the quantile function $\overline{\Psi_{S}^{-1}}(t),$ may be written as $\gamma' T\gamma$, where $\gamma'=[a_1 \; b_1 \; \ldots \; a_p \; b_p]$ is the weight vector and $T$ the matrix of the total Sums of Squares and Cross-Products (SSCP) for the $p$ histogram-valued variables.
For the next results we shall use the following notation:
\begin{itemize}
\item $\Psi_{S(i)}^{-1}(t)$: quantile function representing the score obtained applying (\ref{eqScore1}), in Definition \ref{deffuncaodiscr}, to the quantile functions $\Psi^{-1}_{X_{j}(i)}(t),$ defined in (\ref{eqHFQ}). For each subinterval $\ell$ this function is defined by
\begin{equation}\label{eqScore}
\begin{array}{l}
\displaystyle \sum_{j=1}^{p} \left(a_j c_{X_j(i)_\ell}-b_j c_{X_j(i)_{(m-\ell+1)}}\right)+ \\
\displaystyle + \left(\frac{2(t-w_{\ell-1})}{w_{\ell}-w_{\ell-1}}-1\right)\sum_{j=1}^{p} \left(a_j r_{X_j(i)_\ell}+b_j r_{X_j(i)_{(m-\ell+1)}}\right).
\end{array}
\end{equation}
\item $\overline{\Psi_{S}^{-1}}(t)$ - $\mathit{barycentric \, score}$: quantile function that represents the barycentric histogram of the $n$ scores; it is the mean of the quantile functions that represent the individual scores. For subinterval $\ell,$ $\overline{\Psi_{S}^{-1}}(t)$ is defined by
\begin{equation}\label{eqScoreMedia}
\begin{array}{l}
\displaystyle \sum_{j=1}^{p} \left(a_j \overline{c}_{X_{j_\ell}}-b_j \overline{c}_{X_{j_{(m-\ell+1)}}}\right)+\\
\displaystyle
+
\left(\frac{2(t-w_{\ell-1})}{w_{\ell}-w_{\ell-1}}-1\right)\sum_{j=1}^{p} \left(a_j \overline{r}_{X_{j_\ell}}+b_j \overline{r}_{X_{j_{(m-\ell+1)}}}\right)
\end{array}
\end{equation}
\noindent where $\overline{c}_{X_{j_\ell}}$ and $\overline{r}_{X_{j_\ell}}$ are, respectively, the means of the centres and half ranges of the subinterval $\ell$ for variable $j.$
\item $\overline{\Psi_{S_k}^{-1}}(t)$ - $\mathit{barycentric \, group \, score}$: quantile function that represents the barycentric histogram of the scores in group $k$ $(k=1,\ldots, s)$, i.e., the mean of the quantile functions that represent the scores in group $k.$ For each subinterval $\ell$, $\overline{\Psi_{S_k}^{-1}}(t)$ is defined by
\begin{equation}\label{eqScoreMediaGrupo}
\begin{array}{l}
\displaystyle \sum_{j=1}^{p} \left(a_j \overline{c}_{X_{jk\ell}}-b_j \overline{c}_{X_{jk{(m-\ell+1)}}}\right)+ \\
\displaystyle
+ \left(\frac{2(t-w_{\ell-1})}{w_{\ell}-w_{\ell-1}}-1\right)\sum_{j=1}^{p} \left(a_j \overline{r}_{X_{jk\ell}}+b_j \overline{r}_{X_{jk{(m-\ell+1)}}}\right)
\end{array}
\end{equation}
\noindent where $\overline{c}_{X_{jk\ell}}$ and $\overline{r}_{X_{jk\ell}}$ are the means for the observations in group $k,$ of the centers and half ranges of subinterval $\ell$ for variable $j.$
\end{itemize}
\begin{proposition} \label{propT}
Let $\Psi_{S(i)}^{-1}(t)$ be the score (quantile function) obtained from Definition (\ref{deffuncaodiscr}) considering $p$ histogram-valued variables $\Psi^{-1}_{X_{j}(i)}(t),$ $j\in\left\{1,\ldots,p \right\},$
and $\overline{\Psi_{S}^{-1}}(t),$ the barycentric score. The sum of the squared Mallows distance between $\Psi_{S(i)}^{-1}(t)$ and $\overline{\Psi_{S}^{-1}}(t)$ may be written as
$\displaystyle \sum_{i=1}^{n}D_M^2(\Psi_{S(i)}^{-1}(t),\overline{\Psi_{S}^{-1}}(t))=\gamma' T \gamma,$
where $T$ is the matrix of the total Sums of Squares and Cross-Products (SSCP) of the $p$ histogram-valued variables and $\gamma$ is the $2p\times 1$ vector of weights.
\end{proposition}
\begin{proof}
Consider the quantile functions $\Psi_{S(i)}^{-1}(t)$ and $\overline{\Psi_{S}^{-1}}(t)$ defined as in (\ref{eqScore}) and (\ref{eqScoreMedia}), respectively. Applying (\ref{eqDM}) in Definition \ref{defDM} for the Mallows distance, after some algebra it is possible to write $\displaystyle \sum_{i=1}^{n}D_M^2(\Psi_{S(i)}^{-1}(t),\overline{\Psi_{S}^{-1}}(t))$ in matrix form as $\gamma' T \gamma$ where,
\begin{itemize}
\item $\gamma$ is the $2p\times 1$ vector of non-negative weights, i.e. $\gamma'=[a_1 \;\; b_1 \;\; \ldots \;\; a_p \;\; b_p],$ with $a_j,b_j \geqslant 0,$ $\forall j \in \left\{1, \ldots, p \right\}.$
\item $T$ is the matrix of the total SSCP obtained by the product $A \times A',$ where $A'$ is a $2mn\times 2p$ matrix defined as:
$$A'=\left[\begin{array}{cccc}
\mathbf{A_{11}} & \mathbf{A_{12} }& \ldots & \mathbf{A_{1p}} \\
\mathbf{A_{21}} & \mathbf{A_{22}} & \ldots & \mathbf{A_{2p} }\\
\cdots & \cdots & \ldots & \cdots \\
\mathbf{A_{m1}} & \mathbf{A_{m2}} & \ldots & \mathbf{A_{mp}
}\end{array}
\right]$$
where $\mathbf{A_{\ell j}}$, for $\ell \in \left\{1,2,\ldots,m \right\}$ and $j \in \left\{1,2,\ldots,p \right\}$, are $2n \times 2$ matrices. The elements in each matrix $\mathbf{A_{\ell j}}=[a_{hq}],$ with $h\in \left\{1,2,\ldots,2n \right\}$ and $q \in \left\{1,2\right\}$ are defined as:
$$
a_{hq}=\left\{
\begin{array}{lll}
\sqrt{p_\ell}\left(c_{X_{j}(h)_{\ell}} - \overline{c}_{X_{j_\ell}} \right) & if & 1\leqslant h \leqslant n \textit{ and } q=1 \\
- \sqrt{p_\ell}\left(c_{X_{j}(h)_{m-\ell+1}}- \overline{c}_{X_{j_{(m-\ell+1)}} } \right) & if & n+1\leqslant h \leqslant 2n \textit{ and } q=1 \\
\sqrt{\frac{p_\ell}{3}}\left(r_{X_{j}(h)_{\ell}}- \overline{r}_{X_{j_\ell}} \right) & if & 1\leqslant h \leqslant n \textit{ and } q=2 \\
- \sqrt{\frac{p_\ell}{3}}\left(r_{X_{j}(h)_{m-\ell+1}} - \overline{r}_{X_{j_{(m-\ell+1)}} } \right) & if & n+1\leqslant h \leqslant 2n \textit{ and } q=2 \\
\end{array}
\right.
$$
$T= A \times A'$ is a symmetric matrix of order $2p$, its elements $t_{hq},$ $h,q\in \left\{1,2,\ldots,2p \right\}$ are defined as:
{\scriptsize
$$
t_{hq}=\left\{\begin{array}{lll}
\displaystyle\sum\limits_{i=1}^n \displaystyle\sum\limits_{\ell=1}^m p_{\ell}\left(\tilde{c}_{X_{\frac{h+1}{2}}(i)_{\ell}}
\tilde{c}_{X_{\frac{q+1}{2}}(i)_{\ell}}+\frac{1}{3}\tilde{r}_{X_{\frac{h+1}{2}}(i)_{\ell}}\tilde{r}_{X_{\frac{q+1}{2}}(i)_{\ell}} \right) & if & \textit{ h,q are odd } \\
\displaystyle\sum\limits_{i=1}^n \displaystyle\sum\limits_{\ell=1}^m p_{\ell}\left(\tilde{c}_{X_{\frac{h}{2}}(i)_{(m-\ell+1)}}
\tilde{c}_{X_{\frac{q}{2}}(i)_{(m-\ell+1)}}+\frac{1}{3}\tilde{r}_{X_{\frac{h}{2}}(i)_{(m-\ell+1)}}\tilde{r}_{X_{\frac{q}{2}}(i)_{(m-\ell+1)}} \right) & if & \textit{ h,q are even} \\
\displaystyle\sum\limits_{i=1}^n \displaystyle\sum\limits_{\ell=1}^m p_{\ell}\left(-\tilde{c}_{X_{\frac{h}{2}}(i)_{\ell}}
\tilde{c}_{X_{\frac{q+1}{2}}(i)_{(m-\ell+1)}}+\frac{1}{3}\tilde{r}_{X_{\frac{h}{2}}(i)_{\ell}}\tilde{r}_{X_{\frac{q+1}{2}}(i)_{(m-\ell+1)}} \right) & if & \textit{h is even, q is odd } \\
\end{array} \right.
$$
}
\normalsize
with $\tilde{c}_{X_{\frac{\delta+1}{2}}(i) \ell}=c_{X_{\frac{\delta+1}{2}}(i) \ell}-
\overline{c}_{X_{\frac{\delta+1}{2}} \ell}$ and $\tilde{r}_{X_{\frac{\delta+1}{2}}(i) \ell}=r_{X_{\frac{\delta+1}{2}}(i) \ell}-
\overline{r}_{X_{\frac{\delta+1}{2}}\ell},$ $\delta \in \{h,q\}.$
\end{itemize}
\end{proof}
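The construction in this proof can be implemented by accumulating, subinterval by subinterval, the centred coefficients of $\gamma$ in the scores' centres and half-ranges; a Python sketch, for illustration, under the same assumptions (all histograms written with the same cumulative weights):

\begin{verbatim}
import numpy as np

def total_sscp(C, R, w):
    # Total SSCP matrix T for n units on p histogram-valued variables;
    # C, R are (n x p x m) arrays of centres/half-ranges, w the common
    # weights, so that gamma' T gamma equals the sum over units of the
    # squared Mallows distances between the scores and the barycentric
    # score, with gamma' = [a_1 b_1 ... a_p b_p].
    n, p, m = C.shape
    T = np.zeros((2*p, 2*p))
    for l in range(m):
        lr = m - 1 - l                  # subinterval m - l + 1, 0-based
        U = np.empty((n, 2*p)); V = np.empty((n, 2*p))
        U[:, 0::2], U[:, 1::2] = C[:, :, l], -C[:, :, lr]  # centre coefs
        V[:, 0::2], V[:, 1::2] = R[:, :, l],  R[:, :, lr]  # half-range coefs
        U -= U.mean(axis=0); V -= V.mean(axis=0)           # centre the data
        T += w[l] * (U.T @ U + V.T @ V / 3.0)
    return T
\end{verbatim}

The matrices $B$ and $W$ introduced below are obtained analogously, centring the group means on the overall means and the individual observations on the respective group means.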
According to the Huygens theorem (\ref{eq.HuygensTheorem}), the following decomposition holds:
\begin{equation}\label{eqHuygnesDecomp}
\begin{array}{l}
\sum\limits_{i=1}^{n} D^{2}_{M}(\Psi_{S(i)}^{-1}(t),\overline{\Psi_{S}^{-1}}(t))=\\
\quad =\sum\limits_{k=1}^{s}|G_{k}| D^{2}_{M}(\overline{\Psi_{S}^{-1}}(t),\overline{\Psi_{S_{k}}^{-1}}(t))+\sum\limits_{k=1}^{s}\sum\limits_{i \in G_{k}} D^{2}_{M}(\Psi_{S(i)}^{-1}(t),\overline{\Psi_{S_{k}}^{-1}}(t))
\end{array}
\end{equation}
\noindent with $|G_{k}|$ the cardinal of group $k, $ and the quantile functions $\Psi_{S(i)}^{-1}(t)$ - $\mathit{score}$, $\overline{\Psi_{S}^{-1}}(t)$ - $\mathit{barycentric \, score} $ and $\overline{\Psi_{S_k}^{-1}}(t)$ - $\mathit{barycentric \, group \, score}$, as defined in (\ref{eqScore}), (\ref{eqScoreMedia}) and (\ref{eqScoreMediaGrupo}), respectively.
This result shows that, under our framework, the SSCP may also be decomposed
in the sum of the squares and crossproducts between groups and the sum of the
squares and crossproducts within groups. We may then write
$\gamma' T \gamma= \gamma' B \gamma+\gamma' W \gamma,$
where $B=[b_{hq}]$ and $W=[w_{hq}]$ are symmetric matrices of order $2p$.
The elements $b_{hq}$ of the matrix $B,$ with $h,q \in \{1,\ldots,2p \}$ are defined as:
{\scriptsize
$$
b_{hq}=\left\{\begin{array}{lll}
\displaystyle\sum\limits_{k=1}^s n_k \displaystyle\sum\limits_{\ell=1}^m p_{\ell}\left(\tilde{\overline{c}}_{X_{\frac{h+1}{2}k\ell}}
\tilde{\overline{c}}_{X_{\frac{q+1}{2} k\ell}}+\frac{1}{3}\tilde{\overline{r}}_{X_{\frac{h+1}{2}k\ell}}\tilde{\overline{r}}_{X_{\frac{q+1}{2}k\ell}} \right) & if & \textit{ h,q are odd } \\
\displaystyle\sum\limits_{k=1}^s n_k \displaystyle\sum\limits_{\ell=1}^m p_{\ell}\left(\tilde{\overline{c}}_{X_{\frac{h}{2}k(m-\ell+1)}}
\tilde{\overline{c}}_{X_{\frac{q}{2}k(m-\ell+1)}}+\frac{1}{3}\tilde{\overline{r}}_{X_{\frac{h}{2}k(m-\ell+1)}}\tilde{\overline{r}}_{X_{\frac{q}{2}k(m-\ell+1)}} \right) & if & \textit{ h,q are even} \\
\displaystyle\sum\limits_{k=1}^s n_k \displaystyle\sum\limits_{\ell=1}^m p_{\ell}\left(-\tilde{\overline{c}}_{X_{\frac{h}{2}k\ell}}
\tilde{\overline{c}}_{X_{\frac{q+1}{2}k(m-\ell+1)}}+\frac{1}{3}\tilde{\overline{r}}_{X_{\frac{h}{2}k\ell}}\tilde{\overline{r}}_{X_{\frac{q+1}{2}k(m-\ell+1)}} \right) & if & \textit{h is even, q is odd } \\
\end{array} \right.
$$
}
\normalsize
\noindent with $\tilde{\overline{c}}_{X_{\frac{\delta+1}{2}k\ell}}=\overline{c}_{X_{\frac{\delta+1}{2}\ell}}-\overline{c}_{X_{\frac{\delta+1}{2}k\ell}}$ and $\tilde{\overline{r}}_{X_{\frac{\delta+1}{2}k\ell}}=\overline{r}_{X_{\frac{\delta+1}{2}\ell}}-\overline{r}_{X_{\frac{\delta+1}{2}k\ell}},$ $\delta \in \left\{ h,q \right\}.$
The elements $w_{hq}$ of matrix $W$, $h,q \in \{1,\ldots,2p\}$ are defined as:
{\scriptsize
$$
w_{hq}=\left\{\begin{array}{lll}
\displaystyle\sum\limits_{k=1}^s \displaystyle\sum\limits_{i=1}^{n_k}\displaystyle\sum\limits_{\ell=1}^m p_{\ell}\left(\tilde{\overline{c}}_{X_{\frac{h+1}{2}k}(i)_\ell}
\tilde{\overline{c}}_{X_{\frac{q+1}{2}k}(i)_\ell}+\frac{1}{3}\tilde{\overline{r}}_{X_{\frac{h+1}{2}k}(i)_\ell}\tilde{\overline{r}}_{X_{\frac{q+1}{2}k}(i)_\ell} \right) & if & \textit{ h,q are odd } \\
\displaystyle\sum\limits_{k=1}^s \displaystyle\sum\limits_{i=1}^{n_k}\displaystyle\sum\limits_{\ell=1}^m p_{\ell}\left(\tilde{\overline{c}}_{X_{\frac{h}{2}k}(i)_{(m-\ell+1)}}
\tilde{\overline{c}}_{X_{\frac{q}{2}k}(i)_{(m-\ell+1)}}+\frac{1}{3}\tilde{\overline{r}}_{X_{\frac{h}{2}k}(i)_{(m-\ell+1)}}\tilde{\overline{r}}_{X_{\frac{q}{2}k}(i)_{(m-\ell+1)}} \right) & if & \textit{ h,q are even} \\
\displaystyle\sum\limits_{k=1}^s \displaystyle\sum\limits_{i=1}^{n_k}\displaystyle\sum\limits_{\ell=1}^m p_{\ell}\left(-\tilde{\overline{c}}_{X_{\frac{h}{2}k}(i)_\ell}
\tilde{\overline{c}}_{X_{\frac{q+1}{2}k}(i)_{(m-\ell+1)}}+\frac{1}{3}\tilde{\overline{r}}_{X_{\frac{h}{2}k}(i)_\ell}\tilde{\overline{r}}_{X_{\frac{q+1}{2}k}(i)_{(m-\ell+1)}} \right) & if & \textit{h is even, q is odd } \\
\end{array} \right.
$$
}
\normalsize
\noindent where $\tilde{\overline{c}}_{X_{\frac{\delta+1}{2}k}(i)_\ell}$ and $\tilde{\overline{r}}_{X_{\frac{\delta+1}{2}k}(i)_\ell}$ now denote the deviations of the centres and half-ranges of unit $i$ from the corresponding means of its group $k.$
As in the classical case, the parameters of the model, i.e., the components of vector $\gamma$, are estimated such that the ratio of the variability between groups to the variability within groups is maximized. The optimization problem is now written as
\begin{equation}\label{CFQP}
\mbox{Maximize }\lambda=\frac{\gamma' B\gamma}{\gamma' W\gamma} \,\, \mbox{ subject to } \, \gamma \geqslant 0.
\end{equation}
Contrary to the classical situation, this is a hard optimization problem, as it is now necessary to solve a constrained fractional quadratic problem. Methods to solve this problem are presented in the next subsection.
\subsection{Optimization of constrained fractional quadratic problem }\label{s3.3}
Problem (\ref{CFQP}) is non-convex, and finding global optima in this class of problems requires a computational effort that increases exponentially with the problem size, here the order of the matrices $B$ and $W$.

Exact methods for global optimization, such as Branch and Bound, are heavy in terms of memory and computational time. Attempts to solve the instances of problem (\ref{CFQP}) arising from our data with well-established software such as Baron \citep{Baron} failed even for small problem sizes. Typically, good feasible solutions were found in the first iterations, but the algorithm was unable to establish the global optimality of the best solution found, even when the maximum number of iterations was largely increased. The incumbent solution obtained by the software, $\tilde{\gamma}$, allowed us to define a lower bound for the optimal value:
\begin{equation}
\tilde{\lambda}=\frac{\tilde{\gamma}' B\tilde{\gamma}}{\tilde{\gamma}' W\tilde{\gamma}} \le \lambda^*= \max_{{\gamma} \geqslant 0} \frac{\gamma' B\gamma}{\gamma' W\gamma}.
\end{equation}
To improve this lower bound, or to prove optimality, it is necessary to introduce an upper bound:
$\tilde{\lambda} \le \lambda^* \le \bar{\lambda}.$
If, for a small $\epsilon$, we have $ \bar{\lambda}-\tilde{\lambda} \le \epsilon,$ then we accept $\tilde{\gamma}$ as an $\epsilon-$optimal solution. In a numerical method we do not expect to obtain a zero-gap solution ($\epsilon=0$), so we defined $\epsilon=10^{-4}$ as the intended accuracy for our numerical results.
Tight bounds for general constrained fractional quadratic problems based on copositive relaxation were proposed in \cite{Amaral2014}.
Let $X\bullet Y= \operatorname{trace}(XY)$ be the Frobenius inner product of two matrices $X$ and $Y$ in the set $\mathcal{M}_n$ of symmetric matrices of order $n$. The cone of completely positive matrices is given by
$\mathcal{C}_n^*=\left\{ X \in \mathcal{M}_n : X=YY' , Y \mbox{ an $n \times k$ matrix with } Y\ge O \right\}.$ Since
\begin{equation}
\lambda^*= \max_{{\gamma} \geqslant 0} \frac{\gamma' B\gamma}{\gamma' W\gamma}
= \max_{{\gamma} \geqslant 0\; \gamma' W\gamma=1} \gamma' B\gamma,
\end{equation}
taking $\Gamma=\gamma \gamma' \in \mathcal{C}_n^*$ with rank$(\Gamma)=1$, and considering that $\gamma' W\gamma={W}\bullet \Gamma$,
following \cite{Amaral2014} we obtain the following completely positive reformulation of (\ref{CFQP}):
\begin{equation}\label{cop}
\max \left\{ {B}\bullet\Gamma : {W}\bullet \Gamma=1, \Gamma \in \mathcal{C}^*_{n}\right\}\, .\end{equation}
Checking the condition $\Gamma \in \mathcal{C}^*_{n}$ is (co-)NP-hard \cite{MuKa87,Dick11b}. However, the cone of symmetric, entrywise non-negative and positive semi-definite matrices, known as the cone of doubly non-negative matrices,
$\mathcal{D}_n=\left\{ X \in \mathcal{M}_n : y^TXy \ge 0, \forall y \in \mathbb{R}^n\; , X\ge O \right\},$
provides an approximation of $\mathcal{C}_n^*$, since $\mathcal{C}_n^*\subseteq \mathcal{D}_n$. It is then possible to exploit this relaxation to obtain an upper bound for
(\ref{cop}), by solving
\begin{equation}\label{dnn}
\bar{\lambda}=\max \left\{ {B}\bullet\Gamma : {W}\bullet \Gamma=1, \Gamma \in \mathcal{D}_{n}\right\}\, .\end{equation}
This upper bound was enough, in all instances, to close the gap for $\epsilon=10^{-4}$ and to prove the $\epsilon-$optimality of the incumbent solution provided by Baron. This allowed us to use that solution with confidence to estimate the model parameters.
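For illustration, the bound (\ref{dnn}) can be computed with any semidefinite programming solver; the sketch below uses the \textit{cvxpy} modelling package (only one possible implementation; the numerical results reported here were obtained with Baron together with the bound above):

\begin{verbatim}
import cvxpy as cp

def dnn_upper_bound(B, W):
    # Upper bound on max{gamma' B gamma / gamma' W gamma : gamma >= 0}
    # via the doubly non-negative relaxation of the completely positive
    # reformulation; B and W are symmetric numpy arrays of equal order.
    n = B.shape[0]
    Gamma = cp.Variable((n, n), PSD=True)   # symmetric, PSD
    prob = cp.Problem(cp.Maximize(cp.trace(B @ Gamma)),
                      [Gamma >= 0,          # entrywise non-negative
                       cp.trace(W @ Gamma) == 1])
    prob.solve(solver=cp.SCS)
    return prob.value
\end{verbatim}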
\subsection{Classification in two \textit{a priori} groups}\label{s3.4}
From the discriminant function in Definition \ref{deffuncaodiscr} and using the Mallows distance, it is possible to classify a unit in one of two groups, $G_{1}$ and $G_{2}$.
Let $\overline{\Psi^{-1}_{S_{G_1}}}(t)$ and $\overline{\Psi^{-1}_{S_{G_2}}}(t)$ be the quantile functions that represent the barycentric score of each group, and let $\Psi^{-1}_{S(i)}(t)$ be the quantile function that represents the score of unit $i.$
Unit $i$ is assigned to Group $G_1$ if
$D^{2}_M\left(\Psi_{S(i)}^{-1}(t),\overline{\Psi^{-1}_{S_{G_1}}}(t)\right)<D^{2}_M\left(\Psi_{S(i)}^{-1}(t),\overline{\Psi^{-1}_{S_{G_2}}}(t)\right),$
otherwise it is assigned to Group $G_2$; i.e.,
it is assigned to the group for which the Mallows distance between its score and the corresponding barycentric score is minimum.
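A minimal Python sketch of this rule, for illustration (the centres and half-ranges of the scores are assumed to be written with the same cumulative weights $w$):

\begin{verbatim}
import numpy as np

def assign_group(c_s, r_s, c_g1, r_g1, c_g2, r_g2, w):
    # Assign a unit with score (c_s, r_s) to the group whose barycentric
    # score is closest in squared Mallows distance.
    w = np.asarray(w, float)
    def d2(cx, rx, cy, ry):
        cx, rx = np.asarray(cx, float), np.asarray(rx, float)
        cy, ry = np.asarray(cy, float), np.asarray(ry, float)
        return np.sum(w * ((cx - cy)**2 + (rx - ry)**2 / 3.0))
    return 1 if d2(c_s, r_s, c_g1, r_g1) < d2(c_s, r_s, c_g2, r_g2) else 2
\end{verbatim}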
\section{Experiments with simulated data} \label{s4}
In this section, we evaluate the performance of the proposed discriminant method, for histogram and interval-valued variables, under different conditions.
\subsection{Description of the simulation study} \label{s4.1}
Symbolic data tables that illustrate different situations were created; a full factorial design was employed, similar for histogram and interval-valued variables, with the following factors:
\begin{itemize}
\item Three variables: $X_1, X_2, X_3$ and two groups: $G_1,G_2$
\item Four learning sets and two test sets:
\begin{description}
\item[Learning sets:] $|G_1|=10,$ $|G_2|=40$; $|G_1|=|G_2|= 25$ ;
$|G_1|=50,$ $|G_2|=200$; $|G_1|=|G_2|=125$.
\item[Test Sets:] $|G_1|=200,$ $|G_2|=800$; $|G_1|=|G_2|=500$.
\end{description}
\item Different levels of similarity between the groups
\begin{itemize}
\item For histogram-valued variables - four levels defined by the mean and standard deviation of the distributions:
\begin{description}
\item[ Case HA:] Similar mean and similar standard deviation;
\item[ Case HB:] Similar mean and different standard deviation;
\item[ Case HC:] Different mean and similar standard deviation;
\item[ Case HD:] Different mean and different standard deviation.
\end{description}
For each case above, four different distributions are considered: Uniform, Normal, Log-normal, and a mixture of distributions.
\end{itemize}
\begin{itemize}
\item For interval-valued variables - six levels defined by the centers and half-ranges of the intervals:
\begin{description}
\item[ Case IA:] Low variation in half range and similar center;
\item[ Case IB:] Large variation in half range and similar center;
\item[ Case IC:] Low variation in centre and similar half range;
\item[ Case ID:] Large variation in centre and similar half range;
\item[ Case IE:] Low variation in half range and center;
\item[ Case IF:] Large variation in half range and center.
\end{description}
\end{itemize}
\end{itemize}
To simulate symbolic data tables for the conditions described above, it is necessary to generate the observations of the variables $X_j$ in the two groups. For the case of histogram-valued variables, different distributions are considered, whereas for the case of the interval-valued variables several types of half ranges and centers are considered.
\textit{Histogram-valued variables}
\begin{enumerate}
\item For each variable, the values of the mean and the standard deviation were fixed. For the three variables in this study, we selected: $\mu_{X_1}=20, \sigma_{X_1}=8;$ $\mu_{X_2}=10, \sigma_{X_2}=6;$ $\mu_{X_3}=5, \sigma_{X_3}=4.$
\item For each variable $j$ and for each group $k$, two vectors of length $n$ are generated, one with values of the means, $M_{X_{jk}}=[m_{jk}(i)]$ and another with values of the standard deviation $S_{X_{jk}}=[s_{jk}(i)].$ The $n$ values of each vector $M_{X_{jk}}$ and $S_{X_{jk}},$ are randomly generated, as follows:
\begin{itemize}
\item $m_{jk}(i)\sim \mathcal{U}(c_1(1+a),c_2(1+a)),$ with $c_1=0.6 \times \mu_{X_j},$ $c_2=1.4 \times \mu_{X_j},$ $a=0$ in Group $G_1$ and $a>0$ in Group $G_2.$ ($a=0.1$ - cases \textbf{HA, HB}; $a=0.5$ - cases \textbf{HC, HD}).
\item $s_{jk}(i)\sim \mathcal{U}(h_1(1+b),h_2(1+b)),$ with $h_1=0.6 \times \sigma_{X_j},$ $h_2=1.4 \times \sigma_{X_j},$ $b=0$ in Group $G_1$ and $b>0$ in Group $G_2.$ ($b=0.1$ - cases \textbf{HA, HC}; $b=0.5$ - cases \textbf{HB, HD}).
\end{itemize}
\item From each pair of values $m_{jk}(i)$ and $s_{jk}(i),$ $i\in \left\{1,\ldots,n\right\},$ we randomly generate 5000 real values, $x_{jki}(w),$ $w \in \left\{1,\ldots,5000\right\},$ that allow creating the histogram corresponding to unit $i$ and variable $X_j$ of group $k$ under distribution $D.$
According to the distribution $D,$ the real values are generated as follows:
\begin{itemize}
\item D=Uniform distribution: $x_{jki}(w) \sim \mathcal{U}(a_{jk}(i),b_{jk}(i))$ with $a_{jk}(i)=m_{jk}(i)-\sqrt{3}s_{jk}(i)$ and $b_{jk}(i)=m_{jk}(i)+\sqrt{3}s_{jk}(i)$
\item D=Normal distribution: $x_{jki}(w) \sim \mathcal{N}(m_{jk}(i),s_{jk}(i))$
\item D=Log-Normal distribution: $x_{jki}(w)\sim \mathcal{LN}(\tilde{m}_{jk}(i),\tilde{s}_{jk}(i))$ with $\tilde{m}_{jk}(i)=\frac{1}{2} \ln \left(\frac{(m_{jk}(i))^4}{(s_{jk}(i))^2+(m_{jk}(i))^2}\right)$ and $\tilde{s}_{jk}(i)= \ln \left(\frac{(s_{jk}(i))^2+(m_{jk}(i))^2}{(m_{jk}(i))^2}\right)$
\end{itemize}
\item The 5000 real values $x_{jki}(w)$ generated for each unit $i$ are organized in histograms, which represent their empirical distributions. For all units, all subintervals of each histogram have the same weight, $0.20$ (a sketch of this generation procedure is given after this list).
\end{enumerate}
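A minimal Python sketch of the generation procedure above for the Normal case, for illustration (the remaining distributions are handled analogously):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def generate_group(n, mu, sigma, a, b, m=5, n_micro=5000):
    # n histogram observations of one variable in one group (Normal
    # case); a = b = 0 for group G1, a and/or b > 0 perturb group G2.
    means = rng.uniform(0.6*mu*(1 + a), 1.4*mu*(1 + a), n)
    sds = rng.uniform(0.6*sigma*(1 + b), 1.4*sigma*(1 + b), n)
    hists = []
    for mi, si in zip(means, sds):
        x = rng.normal(mi, si, n_micro)
        # equal-weight (0.20) subintervals: bounds at empirical quantiles
        hists.append(np.quantile(x, np.linspace(0, 1, m + 1)))
    return hists

# variable X1 (mu = 20, sigma = 8), case HD: a = 0.5 and b = 0.5 in G2
g1 = generate_group(25, 20, 8, a=0.0, b=0.0)
g2 = generate_group(25, 20, 8, a=0.5, b=0.5)
\end{verbatim}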
\textit{Interval-valued variables}
For each variable, the values of the center and the half range were fixed, as follows: $c_{X_1}=20, r_{X_1}=8;$ $c_{X_2}=10, r_{X_2}=6;$ $c_{X_3}=5, r_{X_3}=4.$
\begin{enumerate}
\item For each variable $j$, in each group $k$, two vectors of length $n$ are randomly generated, one with values of the centers, $C_{X_{jk}}=[c_{jk}(i)]$ and another with values of the half ranges $R_{X_{jk}}=[r_{jk}(i)],$ as follows:
\begin{itemize}
\item $c_{jk}(i)\sim \mathcal{U}(c_1(1+a),c_2(1+a)),$ with $c_1=0.6 \times c_{X_j},$ $c_2= 1.4 \times c_{X_j},$ $a=0$ in Group $G_1$ and $a>0$ in Group $G_2$ ($a=0.05$ - cases \textbf{IA, IB}; $a=0.2$ - cases \textbf{IC, IE} and $a=0.6$ - cases \textbf{ID, IF}).
\item $r_{jk}(i)\sim \mathcal{U}(h_1(1+b),h_2(1+b)),$ with $h_1=0.6 \times r_{X_j},$ $h_2=1.4 \times r_{X_j},$ $b=0$ in Group $G_1$ and $b>0$ in Group $G_2$ ($b=0.05$ - cases \textbf{IC, ID}; $b=0.2$ - cases \textbf{IA, IE} and $b=0.6$ - cases \textbf{IB, IF}).
\end{itemize}
\item From each pair of values $c_{jk}(i)$ and $r_{jk}(i),$ $i\in \left\{1,\ldots,n\right\},$ the interval associated with each unit $i$ is generated: $[c_{jk}(i)-r_{jk}(i);c_{jk}(i)+r_{jk}(i)[.$
\end{enumerate}
For each case, 100 data tables were generated for each learning set; one data table for each test set was created under the same conditions.
For each replicate of each of the four learning sets, the classification rule based on the proposed discriminant method was applied and the proportion of well assigned units calculated. The mean values and corresponding standard deviations of the obtained results are provided as Supplementary Material (Tables \ref{table1} and \ref{table3}).
The discriminant functions with the parameters obtained (see Definition \ref{deffuncaodiscr}) for each replicate of the learning sets with the same/different number of units, were then applied to the test set in which the groups have the same/different size. The proposed classification rule (see Subsection \ref{s3.4}) was applied and the proportion of well assigned cases in each group calculated; the obtained values are presented in Supplementary Material (Tables \ref{table2}, \ref{table4}).
\subsection{Discussion of results} \label{s4.2}
\textit{Histogram-valued variables}
Tables \ref{table1} and \ref{table2} (in Supplementary Material), gather the mean and standard deviation of the proportion of well classified units in test and learning sets, for each of the four cases, under different conditions; Figures \ref{fig1} and \ref{fig2} represent those values for the test sets.
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{Hist50_200.jpg}
\caption{Mean and standard deviation of the proportion of well classified units for test set with $|G1|=200$ and $|G2|= 800$.}\label{fig1}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{Hist125_125.jpg}
\caption{Mean and standard deviation of the proportion of well classified units for test set with $\protect |G1|=|G2|= 500$.} \label{fig2}
\end{figure}
We observe similar behaviors for the different distributions, in both learning and test sets. In general, increasing the difference between the means and/or standard deviations of the two groups provides a better discrimination. The mean proportion of well classified units is generally slightly higher in the situations where the classes are balanced.
In almost every situation, the mean hit rate is influenced in the same way by the mean and standard deviation; in the case of the Log-normal distribution, when the mean of the two groups is quite different, the increase of the perturbation in the standard deviation seems to have little influence.
In the learning sets, the mean of the proportion of well classified units is slightly higher for smaller samples. This behavior is observed mainly for the Uniform and Normal distributions.
\vspace{0.5 cm}
\textit{Interval-valued variables}
Tables \ref{table3} and \ref{table4} (in Supplementary Material) gather the mean and standard deviation of the proportion of well classified units in the learning and test sets, for each of the cases, under different conditions. Figure \ref{fig3} illustrates the behavior in the test sets, for all situations investigated.
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{Int_todos.jpg}
\caption{Mean and standard deviation of the proportion of well classified units for test sets defined for interval-valued variables.}\label{fig3}
\end{figure}
The behavior observed is similar for all considered cases. In general, no differences are observed when the distinction between groups is induced by the centers or half-ranges. When the groups are balanced, the proportion of well classified units is slightly higher. As in the case of histograms, in the learning sets, the mean hit rate is slightly higher for smaller samples.
Observing Figure \ref{fig3}, we notice that when the learning set is smaller, the standard deviations in the corresponding test sets are larger. This behavior is observed mainly in smaller samples of cases IA, IC and IE.
In conclusion, the classification method based on the proposed discriminant function behaves as expected in all considered situations, for both interval and histogram-valued variables, providing good results (in terms of hit rate) in a wide variety of situations.
\section{Application - Flights that Departed NYC in 2013} \label{s5}
This study concerns all outgoing flights from the three New York City airports (LGA, JFK, and EWR) in 2013. The microdata is available in the R package \textit{nycflights13}, and was originally obtained from the website of the Bureau of Transportation Statistics \footnote{\url{https://www.transtats.bts.gov/DL\_SelectFields.asp},
accessed 2020-03-10.}.
We considered the Flights Data that include information about date of departure, departure and arrival delays, airlines, and air time. The original data contain information concerning flights of the $15$ airlines departing from NYC in 2013, comprising a total of $327346$ records.
\begin{table}[h!]
\caption{Original ``microdata'' (partial view).}
\begin{center}
\renewcommand{\arraystretch}{1.1}
\renewcommand{\tabcolsep}{0.2cm}
{\scriptsize
\begin{tabular}{c|c|cccc}
\hline
Airline & Date & DDELAY & ADELAY & AIRTIME & \ldots \\
\hline
\multirow {4} {*} {AA} & 2013,1,1 & $-4$ & $-2$ & 377 & \ldots \\
& 2013,1,1 & $-1$ & $11$ & 358 &\ldots\\
& $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
& 2013,1,31 & $-7$ & $-30$ & $359$ & \ldots \\
\hline
\multirow {4} {*} {9E} & 2013,1,1 & $16$ & $4$ & $171$& \ldots\\
& 2013,1,4 & $2$ & $10$ & $116$ & \ldots\\
& $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
& 2013,1, 31 & $13$ & $-12$ & $118$ & \ldots\\
\hline
& $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$\\
\end{tabular}}\label{tableOriginal}
\end{center}
\end{table}
The goal is to discriminate between Regional and Main carriers, from the flights' departure and arrival delays (DDELAY and ADELAY, respectively; negative times represent early departures or arrivals) and the air time (AIRTIME), all recorded in minutes. Part of the original data is illustrated in Table \ref{tableOriginal}. However, the units under study are not the flights but the airlines. Since the amount of information associated with each airline is substantial, we build a symbolic data table where each unit is an airline/month. We consider only the units where variability was observed, thereby excluding three airlines as well as some months for other specific airlines (where values were constant for some variable(s)), leading to a final symbolic data array with 141 units. The included airlines are (IATA codes): 9E; AA; B6; DL; EV; FL; MQ; UA; US; VX; WN; YV, four of which are Regional Carriers (9E; EV; MQ; YV); the remaining eight are Main Carriers. Although the original number of observations of each variable is not the same for all airline/month units, we considered, for all units, histograms where the subintervals have the same weight, 0.20. Part of the symbolic data array is represented in Table \ref{tablesymbolic}.
\begin{table}[h!]
\caption{Symbolic histogram-valued data array (partial view).}
\begin{center}
\renewcommand{\arraystretch}{1.1}
\renewcommand{\tabcolsep}{0.2cm}
{\scriptsize
\begin{tabular}{c|c}
\hline
Airline / Month & AIRTIME \\
\hline
AA/Jan & $\{[30,139[,0.2;[139,158[,0.2;[158,205[,0.2;[205,255[,0.2; [255,408],0.2 \}$ \\
$\vdots$ & $\vdots$ \\
AA/Dec & $\{[30,138[,0.2;[138,159[,0.2;[159,203[,0.2;[203,297.8[,0.2; [297.8,426],0.2 \}$ \\
\hline
9E/Jan & $\{[24,43[,0.2;[43,58[,0.2;[58,86[,0.2;[86,121.4[,0.2; [121.4,264],0.2 \}$ \\
$\vdots$ & $\vdots$ \\
9E/Dec & $\{[26,54[,0.2;[54,76[,0.2;[76,104[,0.2;[104,134[,0.2; [134,261],0.2 \}$ \\
\hline
$\vdots$ & $\vdots$
\end{tabular}}\label{tablesymbolic}
\end{center}
\end{table}
Alternatively, we may consider interval-valued variables, and register for each unit (airline/month) the range of values recorded for the corresponding flights. The resulting data array is represented in Table \ref{table_interval}.
\begin{table}[h!]
\caption{Symbolic interval-valued data array (partial view).}
\begin{center}
\renewcommand{\arraystretch}{1.1}
\renewcommand{\tabcolsep}{0.2cm}
{\scriptsize
\begin{tabular}{c|c|c|c}
\hline
Airline / Month & DDELAY& ADELAY & AIRTIME \\
\hline
AA/Jan & $[-16,337]$ & $[-54,368]$ & $[30,408]$ \\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
AA/Dec & $[-16,896]$& $[-51,878]$ & $[30,426] $ \\
\hline
9E/Jan & $[-18,360]$& $[-59,370]$ & $[24,264] $ \\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
9E/Dec & $[-19,360]$& $[-50,386]$ & $[26,261]$ \\
\hline
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$
\end{tabular}}\label{table_interval}
\end{center}
\end{table}
For histogram-valued variables, the methodology described in Sections \ref{s3.2} and \ref{s3.3} leads to the following discriminant function, from which the discriminant score of each unit is obtained:
\[ \Psi^{-1}_{S(i)}(t) =-0.7844\Psi^{-1}_{AIRTIME(i)}(1-t).\]
Considering interval-valued variables, the discriminant scores are obtained from
\[\Psi^{-1}_{S(i)}(t) =0.0322\Psi^{-1}_{DDELAY(i)}(t)+0.9597\Psi^{-1}_{AIRTIME(i)}(t).\]
As described in Section \ref{s3.4}, each unit is assigned to the group for which the Mallows distance between its score and the corresponding barycentric score is minimum. Figure \ref{figApril} illustrates the barycentric scores of the two groups and the scores of the units of all airlines in April, for the histogram and the interval data analysis. When histogram-valued variables are used, FL and US are misclassified in Group 2, whereas with interval-valued variables only FL is misclassified. The proportion of well classified units was the same with and without Leave-One-Out; the results are summarized in Table \ref{tableresults}. We may observe that the hit rate is higher when interval-valued variables are considered, although in this case less information about the variables is used. This may be explained by the different behavior of the parameters used in the determination of the discriminant scores.
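As an illustration of this rule, the short Python sketch below approximates the squared Mallows distance between two quantile functions sampled on a uniform grid of $t$ values, and assigns a unit to the group with the nearest barycentric score. The grid representation and all names are illustrative assumptions.
\begin{verbatim}
import numpy as np

t = np.linspace(0.0, 1.0, 1001)   # common grid for quantile functions

def mallows2(q1, q2):
    """Squared Mallows (L2 Wasserstein) distance between two quantile
    functions given on the uniform grid t (Riemann approximation)."""
    return np.mean((q1 - q2) ** 2)

def classify(score, bary1, bary2):
    """Assign the unit to group 1 or 2 (nearest barycentric score)."""
    return 1 if mallows2(score, bary1) <= mallows2(score, bary2) else 2
\end{verbatim}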
\begin{figure}[h!]
\centering
\begin{tabular}[b]{c}
\includegraphics[width=.43\linewidth]{VoosHistAbril.jpg} \\
\small (a)
\end{tabular} \qquad
\begin{tabular}[b]{c}
\includegraphics[width=.43\linewidth]{VoosIntAbril.jpg} \\
\small (b)
\end{tabular}
\caption{Barycentric scores of the two groups and scores of the units for all airlines in April. (a) Histogram data (b) Interval data.}\label{figApril}
\end{figure}
\begin{table}
\caption{Classification results considering histogram and interval-valued variables with and without Leave-One-Out.}\label{tableresults}
\begin{center}
{\scriptsize
\begin{tabular}{p{0.2\textwidth} | p{0.3\textwidth} p{0.4\textwidth}}
\hline
Type of variable & \multicolumn{2}{c}{Classification with and without LOO} \\
\hline
\multirow{3}{*}{Histogram-variables} & $\%$ Well classified & $83\%$ \\
\cline{2-3}
& Units wrongly classified in G1 & None \\
& Units wrongly classified in G2 & 24 units: airlines FL and US, all months \\
\hline
\multirow{4}{*}{Interval-variables} & $\%$ Well classified & $90\%$ \\
\cline{2-3}
& Units wrongly classified in G1 & None \\
& Units wrongly classified in G2 & 14 units: all months of airline FL \\
& & and months May and Sept of US\\
\hline
\end{tabular}}
\end{center}
\end{table}
\section{Conclusion} \label{s6}
The discriminant method for histogram-valued variables proposed in this paper allows defining a score for each unit, in the form of a quantile function of the same type as the variables' records, i.e., a histogram when we have histogram-valued variables and an interval when we have interval-valued variables. The score is obtained by an appropriate linear combination of the variables, where the model parameters are obtained by the optimization of a constrained fractional quadratic problem. This is a non-trivial problem, since it is non-convex, and it is solved using Branch and Bound and Conic Optimization. The obtained scores allow for a classification in two a priori groups, based on the Mallows distance between the score of each unit and the barycentric score of each class. The methodology is defined for histogram-valued variables, but also applies to interval-valued variables, as an interval is a special case of a histogram; for degenerate intervals, i.e., real values, we obtain the classic method.
The proposed method performs well in a diversity of situations. It is potentially useful in a wide variety of areas where variability inherent to the data is relevant for the classification task.
Future research perspectives include developing the method to allow for the classification in more than two groups.
\section*{Acknowledgments}
This work is financed by National Funds through the Portuguese funding agency, FCT - Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia, within projects UIDB/50014/2020 and UIDB/00297/2020.
\section{Introduction}
There is compelling observational evidence for the existence of dark matter.
Although knowledge of its underlying nature remains elusive, a variety of theories provide candidate particles~\cite{Bertone:2004pz}.
Among those are Supersymmetry~\cite{Martin:1997ns} and
Universal Extra Dimensions~\cite{Appelquist:2000nn}, both of which predict new physics at the electro-weak scale and, in most scenarios, introduce a light, stable (or long-lived) particle that exhibits the properties of a Weakly Interacting Massive Particle (WIMP)~\cite{Steigman:1984ac}.
WIMPs are an ideal dark matter candidate, predicted to have masses ranging from a few tens of GeV to several TeV.
High energy neutrinos are expected to be produced as a result of the self-annihilation or decay of WIMPs.
These neutrinos are detectable by high energy neutrino telescopes, making them
powerful tools in the search for WIMPs and the investigation of their properties.
In particular, they can be used to probe the self-annihilation cross section of dark matter candidates
by looking for anomalous neutrino signals from the Galactic halo.
Additionally, WIMPs could also be gravitationally captured by massive bodies like the Sun. If the annihilation rate of these captured WIMPs is regulated by the capture rate, then neutrino telescopes can be used to probe the WIMP-nucleon scattering cross section~\cite{Abbasi:2009uz}.
Recent observations of a GeV positron
excess by PAMELA~\cite{Adriani:2008zr}, an anomalous electron peak by ATIC~\cite{ATIC:2008zzr}, and
electron spectra from H.E.S.S.~\cite{Aharonian:2009ah} and Fermi~\cite{Abdo:2009zk}, demonstrate the importance of a multi-messenger approach
to astrophysics and validate the interest in a neutrino channel.
The observed lepton signals are inconsistent with each other and with standard electron--positron production models~\cite{Moskalenko:1997gh}; although they could potentially originate from nearby astrophysical sources (e.g.\ pulsars~\cite{Yuksel:2008rf}), they could also be an indication of dark matter.
If interpreted as the latter, it would suggest the existence
of a leptophilic dark matter particle in the TeV mass range~\cite{Meade:2009iu,Cirelli:2008pk}. Such a model would also result in significant high energy neutrino fluxes, through the decay of muons and $\tau$-leptons.
A significant fraction of neutrinos could also be produced directly as part of the annihilation~\cite{Lindner:2010rr}, producing a line feature in the resulting neutrino spectrum.
Such a mono-energetic neutrino flux is of specific interest since it can be used to set a model
independent limit on the total dark matter self-annihilation cross section~\cite{Beacom:2006tt} for the
region of parameter space where gamma-ray signals would dominate.
In this paper we discuss a search for neutrino signals produced by annihilating or decaying dark matter in the Galactic halo.
The search is used to test the self-annihilation cross section by constraining the product of cross section and velocity averaged over the dark matter velocity distribution, $\langle \sigma_{A} v \rangle$, and to probe the lifetime, $\tau$.
The search focuses on the outer Milky Way halo, where the dark matter density distributions are relatively well modelled. We do not include the Galactic Center region and thus remove any strong dependence on the choice of the halo profile.
We quantify the residual weak dependence and present constraints on the dark matter self-annihilation
cross section and lifetime in a model-independent way for a set of selected benchmark annihilation and decay channels, respectively.
The paper is organized as follows: in the next section we describe the detector used for the data taken during 2007--2008, which form the basis of our analysis.
Section III discusses how we obtain an expected neutrino flux at Earth using different dark matter distributions and annihilation channels. In section IV we describe our data selection criteria and analysis strategy, which is followed by a discussion of the associated systematic uncertainties in section V. Section VI presents the result of the search, and section VII puts it in context with other experiments. Section VIII concludes by summarizing the results and giving an outlook for related searches.
\section{The IceCube Neutrino Observatory and Event Selection}
The IceCube Neutrino Observatory, located at the geographic South Pole, consists of
the IceCube neutrino telescope and the IceTop air shower array~\cite{Achterberg:2006md}.
IceTop covers a surface area of one square kilometer above the IceCube ``in-ice'' detector, and is designed to measure cosmic ray air showers with energies between approximately 300~TeV and 1~EeV.
The in-ice detector instruments a volume of one
cubic kilometer of Antarctic ice~\cite{IceCube:icepaper} with 5160~digital optical modules (DOMs)~\cite{Abbasi:2008ym}
deployed at depths between 1450~m and 2450~m (see Fig.~\ref{fig_icecube_schematic}).
The DOMs are distributed over 86 electrical cable bundles (strings) that handle power
transmission and communication with electronics located on the surface.
Each DOM consists of a 25~cm Hamamatsu R7081-02 photomultiplier tube~\cite{Abbasi:2010vc} connected to a waveform recording data
acquisition circuit.
It is capable of resolving pulses with nanosecond precision and has an effective dynamic range of 200~photoelectrons per 15~ns.
\begin{figure}[htb]
\includegraphics[width=3.0in]{fig1}
\caption{(Color online) Schematic view of the IceCube Neutrino Observatory including the
low energy extension DeepCore.
Shown in red is the partially instrumented detector, active in the 2007--2008 season, which was the only portion used for this analysis.
\label{fig_icecube_schematic}}
\end{figure}
IceCube detects all flavors of active neutrinos through Cherenkov light emission from secondary particles created when a neutrino interacts in the ice.
Muon neutrinos are of particular interest
since their extended track-like topology makes them relatively simple to identify.
Furthermore, the elongated tracks of the muons
permit a relatively accurate reconstruction of the neutrino direction, with a precision of a few degrees at the detection threshold of 100~GeV.
Neutrinos with energies down to about 10~GeV can be identified in a densely
instrumented sub-detector, DeepCore~\cite{Wiebusch:2009jf}, which has been operating since early 2010 (see Fig.~\ref{fig_icecube_schematic}). In this analysis, we use data taken with an intermediate construction stage of the in-ice detector, comprising 22 strings.
The primary background in the search for neutrinos originates from cosmic ray air showers.
When high energy cosmic rays enter the Earth's upper atmosphere they produce extended air showers, a part of which includes high energy pions and kaons.
The decay of these mesons results in a continuous stream of
neutrinos and muons. These are known as atmospheric muons and neutrinos, and their
flux is regulated by the path length and time the parent particles had in the
atmosphere to lose energy or decay.
The resulting neutrino spectrum obeys a power law with a spectral index of $\gamma \approx 3.7$~\cite{Honda:1995hz,Agrawal:1995gk}.
High energy muons are capable of travelling long distances through matter before they eventually decay, resulting in a down-going muon flux at the IceCube detector.
In contrast, neutrinos below 100~TeV can traverse the Earth without significant absorption losses.
To distinguish muons produced in charged current interactions of muon neutrinos from those produced in the atmosphere, we select only tracks
that enter the detector from below the horizon.
Given the 22-string detector configuration (see Fig.~\ref{fig_icecube_schematic}) for the analysis presented here,
the total trigger rate was approximately $550$~Hz, dominated by down-going atmospheric muons. A pre-selection at the South Pole for up-going reconstructed tracks reduces the data rate to 20~Hz, which is sent by satellite to be processed offline.
\section{Halo Profiles and Signal Expectations}
Recent advances in N-body simulations~\cite{Diemand:2007qr} and gravitational lensing observations~\cite{Menard:2009yb} have provided reliable predictions of the dark matter density distribution in
the Milky Way~(MW). While the outer regions of the dark matter
halo of the Milky Way (several kpc away from the Galactic Center (GC)) are
relatively well modelled, the structure of its central region is
still a matter of debate since it can neither be resolved in simulations,
nor directly measured. Not surprisingly, halo models generally show very
similar behavior at large distances from the Galactic Center, but differ
significantly in their predictions near it.
This effect is shown in Fig.~\ref{fig_halo_profiles}, where the dark matter density, $\rho(r)$, predictions from several spherically symmetric halo profiles obtained from N-body simulations are compared.
We show four different distribution functions which are used in our analysis.
Since we only use neutrinos from the northern sky, the effective dark matter densities which dominate the analysis are those at distances of roughly 4~kpc to 20~kpc from the Galactic Center. In this range the various halo profiles are relatively consistent in their description of the dark matter density. This agreement allows us to constrain the dark matter
self-annihilation cross section with minimal halo profile dependence.
We use the Einasto~\cite{Einasto_profile,Einasto:by_hand} and
Navarro-Frenk-White (NFW)~\cite{Navarro:1995iw} profiles as benchmark models, while the Moore~\cite{Moore:1999gc}, and
Kravtsov~\cite{Kravtsov:1997dp} profiles are applied as extreme cases to estimate the
impact of halo model choice on the result.
The Einasto profile is given by:
\begin{equation}
\rho(r) = \rho_{-2} \times e^{(-\frac{2}{\alpha})\left[\left(\frac{r}{r_{-2}}\right)^{\alpha}-1\right]}
\end{equation}
with $\alpha=0.16$~\cite{Navarro:2003ew}, $r_{-2}=20$~kpc, and $\rho_{-2}$ normalized to a dark matter density $0.3 \frac{\rm GeV}{\rm
cm^3}$ at the solar system's orbit in the Galaxy ($R_{\rm sc}$ = 8.5~kpc).
The remaining three profiles can
be described by the following function:
\begin{equation}
\rho(r) = \frac{\rho_{0}}{(\frac{r}{r_s})^{\gamma}\left[1+(\frac{r}{r_s})^{\alpha}\right]^{(\beta-\gamma)/\alpha}},
\label{eq:dm_dist}
\end{equation}
where the variables $(\alpha, \beta, \gamma, r_{s})$~\cite{Yuksel:2007ac} take different numerical values (listed in Table~\ref{table:Halo_Parameters}) for the three models. The normalizations
are chosen such that the mass contained within the orbit of the Solar System in the Galaxy
provides the appropriate dark matter contribution to the local rotational curves, and yields a local
dark matter density $\rho_{\rm sc}=\rho_{\rm NFW}(R_{\rm sc}) = 0.3 \frac{\rm GeV}{\rm
cm^3}$ for the NFW profile.
\begin{table}
\caption{Summary of the parameters of Eq.~\ref{eq:dm_dist} used in this analysis.\label{table:Halo_Parameters}}
\begin{ruledtabular}
\begin{tabular}{|l|r|r|r|r|r|}
Halo Model & $\alpha$ & $\beta$ & $\gamma$ & $r_{s}$/kpc & $\rho(R_{\rm sc})$/$\frac{\rm GeV}{\rm cm^3}$ \\
\hline
NFW & 1 & 3 & 1 & 20 & 0.3 \\
Moore & 1.5 & 3 & 1.5 & 28 & 0.27 \\
Kravtsov & 2 & 3 & 0.4 & 10 & 0.37 \\
\end{tabular}
\end{ruledtabular}
\end{table}
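For concreteness, the sketch below evaluates these profiles in Python/NumPy, normalizing each to the local density of Table~\ref{table:Halo_Parameters} at $R_{\rm sc}=8.5$~kpc. It is an illustration of the stated formulas, not the analysis code.
\begin{verbatim}
import numpy as np

R_SC = 8.5   # kpc, galactocentric radius of the solar system

def einasto(r, alpha=0.16, r2=20.0, rho_sc=0.3):
    """Einasto profile, normalized so that rho(R_SC) = rho_sc."""
    shape = lambda x: np.exp(-(2.0 / alpha) * ((x / r2) ** alpha - 1.0))
    return rho_sc * shape(r) / shape(R_SC)

def generic(r, alpha, beta, gamma, rs, rho_sc):
    """Generalized profile above, normalized so rho(R_SC) = rho_sc."""
    shape = lambda x: ((x / rs) ** gamma *
                       (1.0 + (x / rs) ** alpha) ** ((beta - gamma) / alpha))
    return rho_sc * shape(R_SC) / shape(r)

nfw      = lambda r: generic(r, 1.0, 3.0, 1.0, 20.0, 0.30)
moore    = lambda r: generic(r, 1.5, 3.0, 1.5, 28.0, 0.27)
kravtsov = lambda r: generic(r, 2.0, 3.0, 0.4, 10.0, 0.37)
\end{verbatim}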
\begin{figure}[htb]
\includegraphics[angle=-90,width=3.0in]{fig2}
\caption{Comparison of the dark matter density distribution, $\rho(r)$, as a
function of distance from the Galactic Center as
described by the Einasto, NFW, Kravtsov, and Moore halo profiles. The shaded area indicates the region where the presented analysis is sensitive.
\label{fig_halo_profiles}}
\end{figure}
The expected neutrino flux, $\phi_{\nu}$, from dark matter self-annihilations is proportional
to the square of the dark matter density integrated along the line of sight
$J(\psi)$:
\begin{equation}
J(\psi) = \int_{0}^{l_{\rm max}}
\frac{\rho^2(\sqrt{R_{\rm sc}^2 - 2 l R_{\rm sc} \cos\psi + l^2})}{R_{\rm sc} \rho^2_{\rm sc}} dl,
\label{Jpsi_integral}
\end{equation}
where $\psi$ is the angular distance from the Galactic Center and $l_{\rm max}$ is the upper limit of the integral, defined as
\begin{equation}
l_{\rm max} = \sqrt{(R^2_{\rm MW}-\sin^2\psi R^2_{\rm sc})}+R_{\rm sc}\cos\psi.
\end{equation}
We adopt a halo size of $R_{\rm MW}=40$~kpc. Contributions to the expected neutrino flux from beyond this range are
small, and are discussed as part of our systematic studies on the result in section VI.
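A numerical evaluation of Eq.~\ref{Jpsi_integral} may be sketched as follows, reusing the profile functions of the previous sketch; SciPy's adaptive quadrature and all names are assumptions of the illustration.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

RHO_SC = 0.3    # GeV/cm^3, local dark matter density
R_MW   = 40.0   # kpc, adopted halo size

def J(psi, rho=einasto):
    """Line-of-sight integral at angle psi (radians) from the GC."""
    l_max = (np.sqrt(R_MW**2 - (np.sin(psi) * R_SC)**2)
             + R_SC * np.cos(psi))
    def integrand(l):
        r = np.sqrt(R_SC**2 - 2.0 * l * R_SC * np.cos(psi) + l**2)
        return rho(r)**2 / (R_SC * RHO_SC**2)
    return quad(integrand, 0.0, l_max)[0]

# e.g., J(np.pi / 2) for a direction 90 degrees from the GC
\end{verbatim}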
\begin{figure}
\includegraphics[angle=-90,width=2.8in]{fig3}
\caption{Differential muon neutrino energy spectrum per annihilation, taking neutrino oscillations into account.
In this example we assume a WIMP mass of $300$~GeV and 100\% branching fraction into the corresponding annihilation channel.
\label{fig_Halo_Multi_dNdE_log}}
\end{figure}
The annihilation products are highly model dependent
and we thus study extremes of the possible annihilation channels assuming a branching ratio of 100\% for each of them in turn.
We consider soft neutrino spectra produced from the annihilation into quarks
(${\rm b}\bar{\rm b}$), and hard spectra as produced by annihilation into
${\rm W}^+{\rm W}^-$ and $\mu^{+}\mu^{-}$.
In addition, we consider a neutrino line spectrum ($\chi\chi \rightarrow \nu \nu$).
Neutrinos will have undergone extensive mixing through vacuum oscillations over the distances travelled across the Galaxy.
We determine
neutrino flavor oscillations in the long baseline limit~\cite{PhysRevD.67.073024,Murase:2007yt},
adopting values of $\sin^{2}2\Theta_{12} = 0.86$, $\Theta_{23}$ maximal $(\Theta_{23} \simeq \pi/4)$, and $\Theta_{13} \simeq 0$.
The neutrino fluxes at Earth are then given by:
\begin{equation}
\phi_{\nu_{e}} \simeq \phi_{\nu_{e}}^{0} - \frac{1}{4} s_2
\end{equation}
and
\begin{equation}
\phi_{\nu_{\tau}} \simeq \phi_{\nu_{\mu}} \simeq \frac{1}{2}(\phi_{\nu_{\mu}}^{0} + \phi_{\nu_\tau}^{0}) + \frac{1}{8} s_2,
\end{equation}
where $\phi_{\nu_{i}}^{0}$ is the flux at injection and $s_2$ is defined as
$ \sin^2 2\Theta_{12} (2\phi_{\nu_{e}}^{0} - \phi_{\nu_{\mu}}^{0} - \phi_{\nu_{\tau}}^{0} )$.
Note that the expected fluxes of muon and tau neutrinos are equal.
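These flavor-averaging relations reduce to a few lines of code; in the sketch below only the value of $\sin^{2}2\Theta_{12}$ is taken from the text, everything else being an illustrative choice.
\begin{verbatim}
def oscillate(phi_e0, phi_mu0, phi_tau0, s2th12=0.86):
    """Fluxes at Earth from the injection fluxes (long-baseline limit)."""
    s2 = s2th12 * (2.0 * phi_e0 - phi_mu0 - phi_tau0)
    phi_e  = phi_e0 - 0.25 * s2
    phi_mu = 0.5 * (phi_mu0 + phi_tau0) + 0.125 * s2
    return phi_e, phi_mu, phi_mu   # nu_mu and nu_tau fluxes are equal
\end{verbatim}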
The neutrino energy spectra
were produced using DarkSUSY~\cite{Gondolo:2004sc}, an advanced numerical software package for supersymmetric dark matter calculations,
and are shown in Fig.~\ref{fig_Halo_Multi_dNdE_log}.
The differential neutrino flux from the annihilations of neutralinos of mass
$m_{\chi}$ in the Galactic halo is given by~\cite{Yuksel:2007ac}:
\begin{equation}
\frac{d\phi_{\nu}}{dE} = \frac{\langle \sigma_A v \rangle}{2} J(\psi) \frac{R_{\rm sc}
\rho_{\rm sc}^2}{4\pi m^2_{\chi}} \frac{dN_{\nu}}{dE},
\end{equation}
where $\frac{dN_{\nu}}{dE}$ is the differential neutrino multiplicity per annihilation.
Similar to the annihilation cross section, one can search for signals from
decaying dark matter~\cite{PalomaresRuiz:2010pn} and constrain
the lifetime, $\tau$.
For decaying dark matter, the expected neutrino flux is proportional to the dark matter density along the
line of sight, given by:
\begin{equation}
J_{\rm d}(\psi) = \int_{0}^{l_{\rm max}}
\frac{\rho (\sqrt{R_{\rm sc}^2 - 2 l R_{\rm sc} \cos\psi + l^2})}{R_{\rm sc} \rho_{\rm sc}} dl.
\label{eq_line_of_sight}
\end{equation}
The expected neutrino flux from the dark matter decay is then:
\begin{equation}
\frac{d\phi_{\nu}}{dE} = \frac{1}{\tau} J_{\rm d}(\psi) \frac{R_{\rm sc}\rho_{\rm
sc}}{4\pi m_{\chi}} \frac{dN_{\nu}}{dE}.
\label{dphidE_decay}
\end{equation}
We use identical halo model parameters in both the dark matter annihilation and decay analyses.
We assume a smooth halo
profile and discuss the effect of substructure separately.
\section{Data Selection}
The search for a clustering of neutrinos to indicate an astrophysical neutrino source is one of the benchmark analyses performed by the IceCube collaboration.
Such a ``point source'' search relies on muon neutrinos since the elongated tracks of the muons permit an accurate reconstruction of the neutrino direction. The 22-string detector configuration has
produced a well understood neutrino candidate sample~\cite{Abbasi:2009iv}, extracted
using likelihood-based track
reconstructions and selecting tracks from $-5^\circ$ to $85^\circ$ in declination.
The shape of the likelihood function around the best-fit value is used to estimate the angular uncertainty of the reconstructed track~\cite{Neunhoffer:2004ha}, while the number of optical modules in the event which record minimally scattered Cherenkov photons gives an additional handle on the quality of the reconstruction.
Such ``direct'' photons are isolated via a time difference selection window between the expected arrival time of an unscattered
photon, given the reconstructed track, and the registered DOM hit time.
Near the horizon, the background from poorly reconstructed atmospheric muons is further reduced by
an additional cut on the likelihood ratio of the best-fit track to the best-fit track constrained to be down-going.
These applied selection criteria remove the largest fraction of mis-reconstructed
down-going events, maintaining a neutrino candidate sample with about $90\%$ purity~\cite{Abbasi:2009iv}.
The final northern sky dataset consists of $5114$~neutrino candidate events acquired in $275.7$~days of livetime.
Figure~\ref{fig_nu_energy} shows the neutrino energy distribution of
the final selection based on simulations of atmospheric neutrinos.
\begin{figure}[htb]
\includegraphics[width=3.0in]{fig4}
\caption{Muon neutrino energy distribution from atmospheric neutrino simulations at final selection level. \label{fig_nu_energy}}
\end{figure}
\begin{figure}
\includegraphics[width=3.0in,height=2.5in]{fig5}
\caption{The relative expected neutrino flux from dark matter self-annihilation in the northern celestial hemisphere of the
Milky Way Galaxy halo is shown. The largest flux is
expected at a right ascension (RA) closest to the
Galactic Center ($\Delta {\rm RA} =0$).
Dashed lines indicate circles around the Galactic Center with a half-opening angle, $\psi$, that increases in $10^\circ$ steps. The solid
lines show the definition of on-- and off--source regions in the northern
hemisphere. The on--source region is centered around $\Delta {\rm RA} =0$, while
the off--source region is shifted by $180^{\circ}$ in RA.
\label{fig_OnOffSourceRegion}
}
\end{figure}
Assuming a given annihilation channel and dark matter halo profile, one can determine the expected neutrino flux (proportional to the dark matter annihilation cross section) for any given location on the sky.
The flux is peaked in the direction of the Galactic Center, which is a prominent
target for searches. However, the Galactic Center is located in the southern hemisphere at $266^\circ$~right ascension (RA) and
$-29^\circ$~declination (DEC), and therefore outside the field of view in the used dataset.
In the northern hemisphere, regardless of the choice of halo model, dark matter annihilations
would produce a large--scale neutrino anisotropy.
The search for such an anisotropy affords distinct advantages
for discovery. An observation of a flux from the Galactic Center
would be more difficult to
distinguish from other astrophysical sources
or cosmic ray interaction with the interstellar medium.
However, the Galactic Center is an excellent target to constrain the dark
matter self-annihilation cross section for a given halo model and
is the subject of a separate analysis.
To test for an excess flux of neutrinos, we define two regions on the northern sky.
The first region will serve as our signal region (on--source) and is defined by a half-opening angle, $r_{\psi}$, around the Galactic Center.
An equally sized region, offset by $180^\circ$ in RA, serves as the off--source region (see Fig.~\ref{fig_OnOffSourceRegion}).
This choice is motivated by the robustness and simplicity of the ensuing analysis and
minimizes systematic uncertainties due to azimuth angle dependent reconstruction
efficiencies. For spherical halo profiles, the expected flux is a function of the angular distance from the Galactic Center, $\psi$, and we count the total number of
neutrino candidate events in each region. This makes the analysis maximally independent of
halo profiles and provides sensitivity to both hard and soft neutrino spectra.
The difference in the expected number of neutrino events between the on--source and off--source region is given by:
\begin{equation}
\Delta N =
\left(N_{{\rm on}}^{\rm bkg}+N_{{\rm on}}^{\rm sig}\right) -
\left(N_{\rm off}^{\rm bkg}+N_{{\rm off}}^{\rm sig}\right)
\label{eq1},
\end{equation}
where bkg/sig stand for background and signal, respectively.
Background events are expected to be equally distributed in the on-- and off--source regions, simplifying the prediction to
$\Delta N^{\rm sig} = N_{{\rm on}}^{\rm sig} - N_{{\rm off}}^{\rm sig}$.
The signal expectation in both regions, and hence $\Delta N^{\rm sig}$, is directly proportional to the dark matter self-annihilation cross-section $\langle \sigma_A v\rangle $.
To optimize the size of the on-- and off--source regions, we choose an example cross section $\langle \sigma_{A} v\rangle _0$ and predict the expected number of signal events $S=\Delta N^{\rm sig}$ from simulations for different choices of $r_{\psi}$~\cite{Rott:2009hr}.
For $r_{\psi} = 80^{\circ}$, the ratio of ${\rm S}/\sqrt{\rm B}$, where $B$ is the expected number of background events, is close to maximal for all considered halo profiles, while the on-- and off--source regions remain well separated and do not overlap.
\section{Systematic Uncertainties}
We first discuss the systematic uncertainty
associated with the background estimation.
By design, the
background can be determined from the data by comparing events in the on-- and off--source regions, eliminating most detector related effects.
Thus, only pre-existing anisotropy in the data must be considered.
The two dominant effects giving rise to this are: (1) An anisotropy in the cosmic ray
flux producing the atmospheric muon neutrino flux;
(2) Variations in exposure for different RA.
A large--scale anisotropy in the cosmic ray flux has been observed both on the
northern hemisphere by the TIBET air shower array~\cite{Amenomori:2006bx}, and
the southern hemisphere by an IceCube
measurement of the down-going muon flux~\cite{Abbasi:2010mf}.
The northern hemisphere anisotropy for cosmic ray energies around 50~TeV is relevant here, since cosmic ray showers in this energy range contribute most of the up-going atmospheric muon neutrino flux that constitutes the background of this analysis.
The overall scale of the measured cosmic ray anisotropy is about $0.2\%$, with peak values at ${\rm RA\approx}60^{\circ}$ and a minimum at ${\rm RA\approx}180^{\circ}$. This is not aligned
with an expected signal anisotropy from the Milky Way dark matter halo.
To provide a conservative systematic uncertainty estimate, we assume the
worst case of an aligned anisotropy, which peaks in one
region and is minimal in the other. In such a scenario
a difference of three events between on-- and off--source
regions would be observed, corresponding to a $0.2\%$ systematic uncertainty on the number of
background events.
The muon track reconstruction efficiency varies as a function of the zenith and azimuth angles~\cite{Achterberg:2007bi,Abbasi:2009iv}.
Although the azimuth dependence is relatively uniform for the axially symmetric full IceCube detector, it
is particularly pronounced in the partially instrumented 22-string detector configuration used for this analysis.
As the Earth rotates, each detector alignment in RA gets equal exposure within one sidereal day.
A small fraction of detector operations is dedicated towards scheduled detector maintenance, which is performed at times when communication with the South Pole can be established. The use of geosynchronous satellites introduces a bias in sidereal time, which means that fewer physics data runs are available for particular alignments of the detector in RA.
Selecting symmetric on-- and off--source
regions shifted by $180^{\circ}$ in RA
reduces this effect significantly, such that
the track reconstruction efficiency is almost identical to the case where the detector is rotated
by $180^{\circ}$.
The total expected variation in the number of events
due to this effect is approximately $0.1\%$ (see
Fig.~\ref{fig:exposure}).
It is possible, in principle, to correct for both the cosmic
ray anisotropy and detector uptime effects.
Because of their negligibly small impact compared to the background statistical uncertainty, such a correction has not been applied.
The contributions from the cosmic ray anisotropy ($0.2\%$) and the uneven exposure ($0.1\%$) are uncorrelated.
We use $0.3\%$ as a conservative estimate on the total systematic uncertainty
on the number of background events in the on--source region (see Table~\ref{sys_summary}).
\begin{figure}
\includegraphics[angle=-90,width=3.0in]{fig6}
\caption{
The relative exposure variation as a function of RA and
rotated by $180^{\circ}$ is shown. The absolute variation defines the signal
acceptance uncertainty due to exposure, while the difference between the normal and rotated
exposure defines the corresponding systematic uncertainty on the background estimate.
\label{fig:exposure}}
\end{figure}
\begin{table}
\caption{Summary of systematic uncertainties on the background estimate.\label{sys_summary}}
\begin{ruledtabular}
\begin{tabular}{|l|r|}
Effect & Sys. Uncertainty \\
\hline
Cosmic ray anisotropy & $0.2\%$ \\
Exposure & $0.1\%$ \\
\hline
Total Background & $0.3\%$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
The signal acceptance uncertainty is dominated by uncertainties in the ice properties and limitations in the detector simulation, which are uncorrelated with a number of theoretical uncertainties such as muon propagation, neutrino cross section, and bedrock uncertainty, each of which has been studied in previous analyses~\cite{Abbasi:2009iv}.
In addition, we consider the uncertainty due to Monte Carlo simulation statistics and detector exposure.
The individual track pointing uncertainty (point spread function), on the order of one degree,
is negligible in this analysis, which targets a large--scale anisotropy.
Our dominant systematic uncertainty, the limited knowledge of ice properties
as a function of depth and limitations in the detector simulation, is expected to produce an
observed discrepancy between data and simulation for events near the horizon~\cite{Abbasi:2009iv}.
For nearly horizontal tracks the disagreement is maximal, with $30\%$ more
events observed in data compared to simulation predictions. Since we use the data itself to predict the number of background events in the on--source region, this discrepancy does
not affect the background estimate.
However, the signal acceptance can only be obtained from simulations. Hence, we must take this discrepancy into consideration for the signal acceptance uncertainty.
The higher than expected observed data rate, when compared to simulation expectations, may indicate
a contribution from mis-reconstructed down-going events, or a
higher signal acceptance than expected. Both would cause
the constraints presented later to be more conservative.
The estimate for this systematic uncertainty in signal acceptance is 25-30\%.
The track reconstruction efficiency coupled with detector uptime (see
Fig.~\ref{fig:exposure}) results in a systematic
uncertainty on the signal acceptance of $1\%$. This uncertainty, combined with the
theoretical uncertainties, results in a negligible
contribution
compared to the uncertainties in the optical properties of the ice.
We therefore assume a 30\% systematic signal acceptance uncertainty,
primarily associated with that from the ice properties and limitations in the detector simulation.
An additional systematic uncertainty to consider in signal acceptance is related to the photon detection efficiency of the DOMs, measured to be 8\% in the laboratory~\cite{Abbasi:2010vc}.
The effect of this uncertainty on the passing rate of reconstructed tracks is found to range from about $1\%$ for energetic events ($\ge 1$~TeV) to as much as $20\%$ for lower energy events ($\le 200$~GeV), as expected from annihilations of WIMPs with mass 200~GeV. We calculate this uncertainty for each of the considered WIMP masses and annihilation channels, and then add it in quadrature to the ice properties uncertainty discussed above.
To derive the total uncertainty on the signal acceptance, we have added the systematic signal acceptance uncertainty in quadrature to the statistical uncertainty (Monte Carlo statistics).
The Monte Carlo statistics uncertainty ranges from 3--6\% (hard channels) and 4--16\% (soft channels) for dark matter masses in the TeV range, and increases to 50\% (hard channels) and 90\% (soft channels) at $m_{\chi}=200$~GeV.
\section{Results}
\begin{figure}
\includegraphics[width=3.0in]{fig7}
\caption{
The location of the neutrino candidate
events in DEC versus RA for the on-- (right) and off--source (left) region.
\label{fig_on_off_source}}
\end{figure}
Except for examination of the data for quality assurance,
the optimization of the size of the on--source region was performed
entirely with simulated events, ensuring a blind analysis.
In the final dataset we observed 1389~events in the off--source
region and 1367~events in the on--source region,
consistent with the null hypothesis.
Figure~\ref{fig_on_off_source} shows the distribution of these neutrino candidate
events in declination and right ascension.
To study the possibility of an anisotropy in an adjacent bin, we shift the on-- and off--source regions in $60^{\circ}$ steps.
For each of the step bins, the ratio of ${\rm N}_{\rm on}/{\rm N}_{\rm off}$ is
consistent with one (see Fig.~\ref{fig_rotate60}).
We compute constraints on the neutrino flux from dark matter annihilation in the Galactic halo.
Given a specific $\langle \sigma_{A} v\rangle _{0}$ in signal simulations, the number of
expected events for an arbitrary cross section $\langle \sigma_{A} v\rangle$ is
\begin{equation}
\Delta N^{\rm sig}(\langle \sigma_{A} v\rangle ) = \frac{\langle \sigma_{A}
v\rangle }{\langle \sigma_{A} v\rangle _{0}} (\Delta N^{\rm sig}(\langle \sigma_{A} v\rangle _0)).
\end{equation}
The cross section limit at 90\%~C.L. is
\begin{equation}
\langle \sigma_{A} v\rangle _{90} = \Delta {\rm N}_{90} \times \frac{\langle \sigma_{A} v\rangle_{0}}{\Delta {\rm N}^{\rm sig}(\langle \sigma_{A} v\rangle_{0} )},
\end{equation}
where $\Delta {\rm N}_{90}$ is the limit at 90\%~C.L. for the number of signal events.
To determine $\Delta {\rm N}_{90}$, we construct a Neyman confidence belt.
The one-sided 90\%~C.L. acceptance intervals are determined by a simple Monte Carlo, in which
the numbers of events in the on-- and off--source regions
are assumed to be Poisson distributed over repeated measurements, with an average contribution of $N_{\rm bkg} = N_{\rm off} = 1389 \pm 4\,({\rm sys}) \pm 37\,({\rm stat})$. The 90\%~C.L. event upper limit $\Delta {\rm N}_{90}$ is calculated for various WIMP masses and annihilation channels using the appropriate signal expectation. Statistical and systematic uncertainties in the signal expectations are represented by log-normal distributions. For a 30\% signal acceptance uncertainty, for example, the upper limit was found to be $\Delta {\rm N}_{90}=49$ for $\Delta {\rm N} = -22$~events.
For small signal acceptance uncertainties, where the log-normal distribution can be approximated by a Gaussian, results are consistent with the confidence interval constructed using the method by Lundberg et al.~\cite{Lundberg2010683,Conrad:2002kn}.
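The construction of $\Delta {\rm N}_{90}$ can be illustrated with a toy Monte Carlo. The sketch below simplifies by treating the hypothesized value as the on--off signal difference added to the on--source region only; it is an illustration, not the analysis code.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def p_value(dn_sig, dn_obs=-22, n_bkg=1389, sig_unc=0.30, n_mc=100000):
    """P(Delta N <= dn_obs) for a hypothesized signal dn_sig, with a
    log-normal signal-acceptance uncertainty of relative width sig_unc."""
    s = dn_sig * rng.lognormal(mean=0.0, sigma=sig_unc, size=n_mc)
    dn = rng.poisson(n_bkg + s) - rng.poisson(n_bkg, size=n_mc)
    return np.mean(dn <= dn_obs)

# Delta N_90 = largest dn_sig with p_value(dn_sig) >= 0.10; a coarse
# scan over dn_sig should locate it near the quoted value of 49.
\end{verbatim}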
Our limit calculation for the on--source region also resembles the commonly used procedure of Li and Ma for computing the significance of an on--source observation~\cite{Li:1983fv}.
The significance $\xi$ is defined as
\begin{equation}
\xi = \frac{N_{\rm on}- \eta N_{\rm off}}{\eta\sqrt{N_{\rm on}+N_{\rm off}}} \approx \frac{\Delta N}{\sqrt{2\times N_{\rm off}}}.
\end{equation}
Here $\eta$ is the ratio in exposure, or ratio of the size of the two regions. For our case of an equally sized on-- and off--source region, $\eta=1$.
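For reference, this significance is a one-line computation; with the observed counts reported above ($N_{\rm on}=1367$, $N_{\rm off}=1389$) it gives $\xi\approx-0.4$. The sketch is purely illustrative.
\begin{verbatim}
import numpy as np

def significance(n_on, n_off, eta=1.0):
    """Eta is the ratio of on- to off-source exposure (here 1)."""
    return (n_on - eta * n_off) / (eta * np.sqrt(n_on + n_off))

# significance(1367, 1389)  ->  approximately -0.42
\end{verbatim}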
Figure~\ref{fig_exclusion_limit} shows the obtained exclusion limit
compared to the ``natural scale'', for which dark matter candidates are consistent with being a thermal relic~\cite{Steigman:1979kw,Jungman:1995df}. Larger cross sections are possible if, for example, dark matter is produced non-thermally or acquires mass only in the late universe~\cite{Kaplinghat:2000vt}.
\begin{figure}
\includegraphics[width=3.0in]{fig8}
\caption{Relative difference in number of events in the on/off--source region as a function of offset
from the nominal position. The regions are shifted by $60^{\circ}$ steps to
be centered at $\Delta {\rm RA} + \delta$.
Error bars represent the statistical uncertainty in the bin.
Adjacent bins are correlated, as regions partially overlap.
Note the
first bin corresponds to the result obtained by this analysis. Bins 4-6 are closely related to bins 1-3, as $N_{\rm on}$ and $N_{\rm off}$ are swapped in them.
\label{fig_rotate60}}
\end{figure}
\begin{figure}
\includegraphics[angle=-90,width=3.0in]{fig9}
\caption{(Color online) 90\% C.L. upper limit on the dark matter self annihilation cross section for
five different annihilation channels. Also shown are the natural scale (red dotted line), for which the WIMP is a thermal relic~\cite{Steigman:1979kw,Jungman:1995df}, and unitarity bound (blue line)~\cite{Griest:1989wd,Hui:2001wy}.
For the limit curves, the central line is for the Einasto and NFW
profiles, while the shaded width identifies the extrema results from the Moore and Kravtsov
profiles. We consider only smooth halo profiles. The limits for $\tau\tau$ and $\mu\mu$ overlap, due to their very similar high energy neutrino spectra.
\label{fig_exclusion_limit}}
\end{figure}
Applying the same procedure as that above for the annihilation cross section, we compute a
$90\%$~C.L. lower limit on the WIMP lifetime, $\tau$, as function of
the WIMP mass, as shown in Fig.~\ref{fig_limit_decay}.
We assume a line spectrum, $\chi \rightarrow \nu\nu$ and apply Eq.~\ref{dphidE_decay}
for the expected neutrino flux. If dark matter is a thermal relic and unstable, the only requirement in order for it to be present today is that it has a lifetime much longer than
the age of the Universe $T_{\rm U} \simeq 4 \times 10^{17}$~s.
\begin{figure}
\includegraphics[angle=-90,width=3.0in]{fig10}
\caption{
Lower limit on WIMP lifetime $\tau$ assuming $\chi
\rightarrow \nu \bar{\nu}$ at $90\%$~C.L.
\label{fig_limit_decay}}
\end{figure}
Our limit calculation assumes smooth, spherically symmetric halo models.
However, N-body simulations indicate that dark matter in the halo should have some
substructure~\cite{Moore:1999nt,Diemand:2006ik}. While this will have negligible effects on the expected neutrino flux from dark matter decay, the presence of substructure will enhance the self-annihilation rate since it is proportional to the square of the dark matter density.
To quantify the average expected enhancement in the annihilation rate compared to a smooth dark matter distribution,
one can define a boost factor
as a function of the distance from the
Galactic Center~\cite{Kistler:2009xf,Kamionkowski:2010mi}:
\begin{equation}
B(r)=\frac{\int \rho^2 dV}{\int(\bar{\rho})^2 dV},
\end{equation}
where we defined $\bar{\rho}$ as the mean density of the smooth halo component.
To determine the impact of a boosted neutrino flux on the expected neutrino
signal in the on-- and off--source regions
we use the signal enhancement resulting from substructure in the halo following the simplest model of reference~\cite{Kamionkowski:2010mi}, as shown in Fig.~\ref{fig_sys_boost}.
We investigate the scaling of the limit due to a boost factor and adopted
size of the Galactic dark matter halo, $R_{\rm MW}$, which sets the upper integration limit
in the dark matter density line of sight integral given by Eq.~\ref{Jpsi_integral}.
The ratio between the limit for the default value (smooth halo, and $R_{\rm MW}=40$~kpc) and the modified halo model is shown in Fig.~\ref{fig_sys_boost_limits}. An increase in the halo size $R_{\rm MW}$ from 40~kpc to 100~kpc has no impact. Boosting the flux due to substructure results in a better limit; assuming no substructure therefore yields a more conservative result.
\begin{figure}
\includegraphics[width=3.0in]{fig11}
\caption{
Boost factor as function of the distance from the Galactic Center for the simplest model of~\cite{Kamionkowski:2010mi} and a dark matter density using the NFW halo profile.
\label{fig_sys_boost}}
\end{figure}
\begin{figure}
\includegraphics[angle=-90,width=2.6in]{fig12}
\caption{
The ratio between the limits obtained with our default and modified halo models is shown.
The scaling due to a boost factor and the adopted size of the Galactic dark matter halo $R_{\rm MW}$ are given separately.
\label{fig_sys_boost_limits}}
\end{figure}
Another possible contribution to the neutrino flux from dark matter self-annihilations originates outside our Galaxy. This extra-galactic flux~\cite{Beacom:2006tt} is expected to be isotropic and hence contributes equally to the number of events observed in the on-- and off--source regions, making a flux limit based on their difference more conservative. Note also that the extragalactic contribution is much smaller than the flux from within our Galaxy~\cite{Yuksel:2007ac}.
\section{Comparison to phenomenological models}
Lepton signals, such as those observed in the ATIC peak~\cite{ATIC:2008zzr}, the
PAMELA GeV positron excess~\cite{Adriani:2008zr}, and
electron spectra from H.E.S.S.~\cite{Aharonian:2009ah}, and
Fermi~\cite{Abdo:2009zk} deviate from predictions for the primary electron
and cosmic ray secondary positron spectrum~\cite{Moskalenko:1997gh}. Such an
excess, if interpreted as originating from dark matter self-annihilations, would
be indicative of leptophilic dark matter
candidates~\cite{Meade:2009iu,Cirelli:2008pk}.
Alternatively, such an excess could also be explained through nearby astrophysical sources such as pulsars~\cite{Yuksel:2008rf}.
\begin{figure*}
\includegraphics[width=3.0in]{fig13a}
\includegraphics[width=3.0in]{fig13b}
\caption{(Color online) 90\% C.L. upper limit on the dark matter self annihilation cross section assuming the Einasto profile and annihilation into $\mu\mu$ (left panel) and $\tau\tau$ (right panel). Limits are compared to a preferred phenomenological model to explain the PAMELA excess (green)
together with Fermi electrons (brown). The natural scale (red dotted line), for which the WIMP is a thermal relic, and unitarity bound~\cite{Griest:1989wd,Hui:2001wy} (blue line) are shown.
\label{fig_tautau_fermi}}
\end{figure*}
Since electrons lose significant energy during propagation,
signals must originate within a distance of about one kpc from the Sun.
While electron signals could only probe the local dark matter density,
the presented large--scale anisotropy search probes a wider range of the
Milky Way halo.
Figure~\ref{fig_tautau_fermi} compares the
IceCube exclusion limit with phenomenological interpretations of
anomalous electron measurements for two example annihilation
channels ($\mu\mu$,$\tau\tau$) and our chosen benchmark profile of Einasto.
Even the small dataset used here allows this analysis to constrain models motivated
by the anomalous lepton signals.
\section{Summary and Outlook}
The IceCube candidate neutrino sample, collected during 2007--2008 in the 22-string
configuration, has been used to search for a neutrino anisotropy as expected from
dark matter self annihilation in the Milky Way halo. Such an anisotropy was
not observed and we have determined limits on the
dark matter self-annihilation cross section $\langle \sigma_{A} v \rangle$
at 90\%~C.L. for WIMPs in the mass range from 200~GeV to 10~TeV.
The IceCube detector sensitivity can be significantly
improved by investigating the Galactic Center as a potential source.
Such a search could be performed with the IceCube detector at a later construction stage and would rely on selecting neutrinos interacting inside the detector volume. In the case of a non-observation, it would significantly improve the constraints on the dark matter self-annihilation cross section for a given choice of halo model.
A large--scale anisotropy study as performed here, however, might provide a more distinct
discovery signal. In the case of the Galactic Center, a dark matter signal
would be more difficult to distinguish from other astrophysical neutrino sources, such as point sources (source
contamination) or cosmic ray interaction with the interstellar medium.
\begin{acknowledgments}
We acknowledge the support from the following agencies:
U.S. National Science Foundation-Office of Polar Programs,
U.S. National Science Foundation-Physics Division,
University of Wisconsin Alumni Research Foundation,
the Grid Laboratory Of Wisconsin (GLOW) grid infrastructure at the University of Wisconsin - Madison, the Open Science Grid (OSG) grid infrastructure;
U.S. Department of Energy, and National Energy Research Scientific Computing Center,
the Louisiana Optical Network Initiative (LONI) grid computing resources;
National Science and Engineering Research Council of Canada;
Swedish Research Council,
Swedish Polar Research Secretariat,
Swedish National Infrastructure for Computing (SNIC),
and Knut and Alice Wallenberg Foundation, Sweden;
German Ministry for Education and Research (BMBF),
Deutsche Forschungsgemeinschaft (DFG),
Research Department of Plasmas with Complex Interactions (Bochum), Germany;
Fund for Scientific Research (FNRS-FWO),
FWO Odysseus programme,
Flanders Institute to encourage scientific and technological research in industry (IWT),
Belgian Federal Science Policy Office (Belspo);
University of Oxford, United Kingdom;
Marsden Fund, New Zealand;
Japan Society for Promotion of Science (JSPS);
the Swiss National Science Foundation (SNSF), Switzerland;
A.~Gro{\ss} acknowledges support by the EU Marie Curie OIF Program;
J.~P.~Rodrigues acknowledges support by the Capes Foundation, Ministry of Education of Brazil.
\end{acknowledgments}
\section{Introduction}
Mixtures of protein filaments and molecular motors form an established class of active media, in which spontaneous internal processes drive the system from thermodynamic equilibrium~\cite{Ramaswamy2010}. Protein filaments and molecular motors represent the dynamic intracellular scaffolding known as the cytoskeleton that performs a range of tasks crucial to organism viability~\cite{HowardBook,BoalBook,BrayBook,AlbertsBook}. That similar phenomena to those observed {\em in vivo} can be reproduced in systems lacking genetic control~\cite{Heald1996,Daga2006,Nurse2006} suggests that some form of self-organisation has been exploited by
natural selection to robustly produce beneficial phenotypes. Identifying and elucidating the principles of self-organisation relevant to these active gels will therefore increase our understanding of the processes that sustain life, and leave us better equipped to counteract defects when they arise.
Mesoscopic theoretical models are uniquely placed to investigate
such phenomena, as they permit hypothesis testing unconstrained by experimental limitations, and full, non-invasive data extraction. A range of theories based on a continuum description of the
local director field of filament orientation, which assume that variations
are slow on the length scale of single filaments (the so-called
``hydrodynamic" limit),
have now been devised~\cite{Marchetti2013,Aranson2003,Aranson2005,Liverpool2006,Cates2008,Giomi2010,Tjhung2012,Sankararaman2009}, including those based on nematodynamic~\cite{Voituriez2005,Voituriez2006,Basu2008,Elgeti2011} and Smoluchowski~\cite{Liverpool2003,Ahmadi2005,Zeibert2005,Ruehle2008} approaches, predicting a range of self-organised pattern formation as control parameters are varied. For example, asters, nematic phases, density instabilities, and vortices have been predicted and qualitatively observed.
A recognised deficiency of such ``hydrodynamic'' models is their dependence on phenomenological parameters that cannot be easily related to molecular mechanisms. Microscopic models can bridge these length scales, but most devised to date neglect steric hindrance between filaments, from which nematic elasticity derives and without which many of the predicted states cannot be realised~\cite{Surrey2001,Ziebert2008,Pinot2009,Loughlin2010,Wang2011,Kohler2011,Saintillan2012}. The reason for this omission may be specific to the application considered, but may also be simply pragmatic, as incorporating excluded-volume interactions in numerical simulation is notoriously expensive. This cost, coupled with the high aspect ratio of the filaments, makes it difficult to achieve linear system sizes much larger than the filament length at reasonable densities, so that numerical simulation of active gels is a formidable challenge. Analytical coarse graining is therefore desirable, but has so far only been performed for rigid, adamant motors which do not induce relative filament rotation~\cite{Liverpool2005} (here adamant refers to a motor's insensitivity to loading, which has been argued to make spontaneous flow impossible~\cite{Wang2011}).
The potential benefits to be gained from microscopic modelling motivate its continued pursuit, even if results are for now limited to relatively small systems. For strictly two dimensional (2D) systems, anomalous diffusion and large-wavelength density fluctuations were observed~\cite{Head2011a} as in models of active media~\cite{Tu1998,Chate2008,Golestanian2009,Ramaswamy2003}, but structural self-organisation was inhibited by the steric hindrance, resulting in disordered structures unlike {\em in vitro} experiments~\cite{Surrey2001}. 3D systems confined between parallel plates reduce steric hindrance by allowing a degree of filament overlap without excessively increasing the numerical burden. With additional lateral confinement in a ring-like corral (representing either the cell membrane or the effect of other filaments around), spindle-like configurations and rotating vortices were observed in delineated regions of the parameter space spanned by motor speed and density~\cite{Head2011b}.
Here we consider quasi-2D active gels confined between parallel planes with periodic boundaries in the lateral directions, in order to describe active systems in thin films without lateral confinement.
Our aim is to elucidate and quantify structure and dynamics on molecular
and supra-molecular length scales, and how they result from the
various microscopic parameters.
We systematically vary the end-detachment rate to control the dwell time of motors at filament ends, which is sometimes incorporated into models lacking excluded volume~\cite{Ziebert2008,Pinot2009} where it has been argued to be necessary to reproduce vortices~\cite{Surrey2001}. Both the mean filament speed and the exponents describing anomalous diffusion are sensitive to end-detachment as detailed in Sec.~\ref{s:resDynamics}. This is argued to be due to motor motility being limited by loading, and the load in turn dominated by static motors dwelling at filament ends. For high motor densities, many-filament clusters form that can be classified into asters, layers and bundles as described in Sec.~\ref{s:resStatics}. The layered state, strikingly reminiscent of microtubule structures that self-organise from {\em Xenopus} cytosol~\cite{Mitchison2013}, is only clearly defined when end-detachment is enhanced, confirming the importance of end-dwelling in guiding the motor-driven self-organisation.
It is also dynamically stable in the presence of thermal noise, similar to
active smectics and other striped non-equilibrium steady
states~\cite{Adhyapak2013}.
The observed trends are reproduced in a
simple, effective one-filament model that supports this
interpretation. Finally, in Sec.~\ref{s:discussion} we discuss how close we are to achieving our goal of reaching experimental length and time scales
{\em in silico}, and suggest possible means to close the gap.
\section{Model}
\label{s:methods}
The model is referred to as microscopic because the shortest length represented is no larger than the dimensions of individual motors or filaments. The model is explained below, first in terms of its components; then the method used to integrate the system in the specified geometry is detailed.
\subsection{Motors and filaments}
Filaments are modelled as linear arrays of $M=30$ monomers with centers spaced by a distance~$b$ as shown in Fig.~\ref{f:schematic}(a). The filaments are
polar and have $[-]$- and $[+]$-ends that define the direction of motor motion.
The unit vector from $[-]$ to $[+]$ is denoted~$\hat{\bf p}$. Steric hindrance between filaments is incorporated as repulsive forces acting between non-bonded monomers, here taken to be a Lennard-Jones potential parameterised by an energy $\varepsilon=5k_{\rm B}T$ and a length scale $\sigma=b$, with a cut-off at the potential minimum $r=2^{1/6}\sigma$~\cite{FrenkelSmit,AllenTildesly}. The filament length is $L=Mb$, and mapping this to the protein fiber in question allows $b$ to be estimated, {\em e.g.}, $b\approx100\,$nm for a 3$\mu$m protein filament.
Bipolar motor clusters (hereafter simply called motors) are only explicitly represented when attached to filaments. Soluble motors are instead implicitly incorporated into the fixed attachment rate $k_{\rm A}$ (this simplification, which can be relaxed~\cite{Surrey2001}, corresponds to an infinite reservoir of soluble motors). Motors only attach to pairs of monomers of different filaments with centers within a specified distance, here taken to be the same as the interaction range $2^{1/6}b$. Once attached, the motor is represented as a two-headed spring, with each head located at the center of the attached monomer. The spring constant is $k_{\rm B}T/b^{2}$ and the natural spring length is $2^{1/6}b$ so that they attach in an (almost) unstressed state.
Motor heads move by one or more monomers at a time in the direction of the filament's $[+]$-end, as shown in Fig.~\ref{f:schematic}(b). Since the distance of order $b$ per step will typically be much larger than the step size of real motor proteins~\cite{HowardBook}, this should be regarded as the integration of a series of smaller movements. Motor loading exponentially retards motion according to the change $\Delta E$ in motor elastic energy that would be induced by the move,
\begin{equation}
\left\{
\begin{array}{l@{\quad:\quad}c}
k_{\rm M}e^{-\Delta E/k_{\rm B}T} & \Delta E\geq0\:, \\
k_{\rm M} & \Delta E<0\:.
\end{array}
\right.
\label{e:move}
\end{equation}
The form of Eq.~(\ref{e:move}) suppresses moves that would increase the motor spring energy too much, acting as a stall force. $k_{\rm M}$ corresponds to the unloaded motor rate, which has been tabulated for real proteins~\cite{HowardBook}.
Moves of more than one monomer are allowed but are exponentially rare due to their typically high~$\Delta E>0$. Each motor head detaches at a rate $k_{\rm D}$, in which event the entire motor is removed from the system.
Motor heads residing at a filament's $[+]$-end detach at a rate $k_{\rm E}$ which may differ from $k_{\rm D}$; see Fig.~\ref{f:schematic}(c).
Finally, motors do not move if by doing so they would exceed
a maximum head-to-head separation of~$5b$; however, if overstretching
(head-to-head separation larger than $5b$) is induced by the relative motion
of the filaments, then overstretched motors are removed from the filaments.
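As an aside for readers implementing such dynamics, the stepping rule of Eq.~(\ref{e:move}) reduces to a few lines of code. The following Python sketch is illustrative only (the function names and the fixed-time-step acceptance test are assumptions, not taken from the production code):
\begin{verbatim}
import math
import random

def step_rate(dE, kM, kBT=1.0):
    # Rate of Eq. (e:move): unloaded rate kM, exponentially
    # suppressed when the candidate move would raise the motor
    # spring energy by dE > 0 (acting as a stall force).
    return kM * math.exp(-dE / kBT) if dE >= 0 else kM

def attempt_move(dE, kM, dt, kBT=1.0):
    # Accept the move with probability rate*dt, valid for
    # rate*dt << 1 (kinetic Monte Carlo style update).
    return random.random() < step_rate(dE, kM, kBT) * dt
\end{verbatim}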
\subsection{Iteration}
The filament positions and orientations are updated as per the Brownian dynamics of rigid rods~\cite{DoiEdwards}. For each time step~${\rm d}t$, all forces (motor-mediated plus excluded volume) acting on each filament are summed to give the total force ${\bf F}$ and torque ${\bf W}$. These are then converted to a change in the filament center-of-mass vector ${\bf x}^{\rm COM}$ as
\begin{eqnarray}
\delta {\bf x}^{\rm COM}
&=&
\frac{1}{\gamma^{\parallel}} \left[\xi_{1} \sqrt{2\gamma^{\parallel}kT{\rm d}t} \:\:+ {\bf F}\cdot\hat{\bf p}\:\:\,{\rm d}t\right] \hat{\bf p}
\nonumber\\
&+&
\frac{1}{\gamma^{\perp}}\left[\xi_{2} \sqrt{2\gamma^{\perp}kT{\rm d}t}+ {\bf F}\cdot\hat{\bf n}_{1}\,{\rm d}t\right] \hat{\bf n}_{1}
\nonumber\\
&+&
\frac{1}{\gamma^{\perp}}\left[\xi_{3} \sqrt{2\gamma^{\perp}kT{\rm d}t}+ {\bf F}\cdot\hat{\bf n}_{2}\,{\rm d}t\right] \hat{\bf n}_{2}\:,
\label{e:modelTrans}
\end{eqnarray}
where the $\xi_{i}$ are uncorrelated random variables drawn from a unit Gaussian distribution, and the unit vectors $\hat{\bf n}_{1}$ and $\hat{\bf n}_{2}$ are chosen at each time step such that $(\hat{\bf p},\hat{\bf n}_{1},\hat{\bf n}_{2})$ form an orthonormal basis. The damping coefficients are related to the drag coefficient $\gamma$ of an individual monomer by $\gamma^{\parallel}=M\gamma$, $\gamma^{\perp}=2\gamma^{\parallel}$. The filament is then rotated about its new centre-of-mass to give a new orientation unit vector $\hat{\bf p}^{\rm new}=(\hat{\bf p}+\delta{\bf p})/|\hat{\bf p}+\delta{\bf p}|$, where
\begin{eqnarray}
\delta{\bf p}
=
\frac{1}{\gamma_{M}}
\Big{[}
{\bf W}\times\hat{\bf p}\,{\rm d}t
&+&
\xi_{4}\sqrt{2kT\gamma_{M}{\rm d}t}\,\hat{\bf n}_{1}
\nonumber\\
&+&
\xi_{5}\sqrt{2kT\gamma_{M}{\rm d}t}\,\hat{\bf n}_{2}
\Big{]}
\label{e:modelRot}
\end{eqnarray}
where $\gamma_{M}=\frac{1}{12}M(M^{2}-1)b^{2}\cdot2\gamma$ plays the role of the moment of inertia in this overdamped system. The bead positions are then updated according to the new ${\bf x}^{\rm COM}$ and~$\hat{\bf p}$. The use of rigid rods deviates from previous work where the filaments were flexible, which required a smaller ${\rm d}t$ for numerical stability~\cite{Head2011a,Head2011b}.
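For concreteness, a single update following Eqs.~(\ref{e:modelTrans}) and~(\ref{e:modelRot}) can be sketched in Python as follows (an illustrative reimplementation, not the production code; the choice of perpendicular frame is arbitrary since the noise is isotropic in the plane normal to $\hat{\bf p}$):
\begin{verbatim}
import numpy as np

def bd_step(x, p, F, W, g_par, g_perp, g_M, kT, dt, rng):
    # One Brownian-dynamics step for a rigid rod: x is the
    # centre of mass, p the unit orientation, F and W the
    # total force and torque on the filament.
    a = np.array([1.0, 0.0, 0.0])
    if abs(a @ p) > 0.9:
        a = np.array([0.0, 1.0, 0.0])
    n1 = np.cross(p, a)
    n1 /= np.linalg.norm(n1)
    n2 = np.cross(p, n1)
    xi = rng.standard_normal(5)
    dx = ((xi[0]*np.sqrt(2*g_par*kT*dt) + (F @ p)*dt)/g_par)*p \
       + ((xi[1]*np.sqrt(2*g_perp*kT*dt) + (F @ n1)*dt)/g_perp)*n1 \
       + ((xi[2]*np.sqrt(2*g_perp*kT*dt) + (F @ n2)*dt)/g_perp)*n2
    dp = (np.cross(W, p)*dt
          + xi[3]*np.sqrt(2*kT*g_M*dt)*n1
          + xi[4]*np.sqrt(2*kT*g_M*dt)*n2) / g_M
    p_new = (p + dp) / np.linalg.norm(p + dp)
    return x + dx, p_new
\end{verbatim}
Here \texttt{rng} is a Gaussian random number generator, {\em e.g.}, \texttt{numpy.random.default\_rng()}.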
\subsection{Geometry and numerical procedure}
The system has dimensions $(X,Y,Z)$ with $X=Y=125b\approx4L$ and $Z=5b=L/6$ as shown in Fig.~\ref{f:schematic}(d). The system is periodic in the $x$ and $y$-directions, but there are repulsive walls along the planes $z=0$ and $z=Z$ with the same potential and parameters as the excluded-volume interactions.
As $Z\ll L$, these walls restrict filament orientations to lie approximately in the $x$-$y$ plane while still permitting overlap. The density of the system is given in terms of the volume fraction $\phi=N v_{\rm f}/XYZ$ for $N$ filaments of volume $v_{\rm f}$ each, where $v_{\rm f}$ is the volume of a cylinder of diameter $2^{1/6}\sigma$ with hemispherical end-caps.
Convergence with time was checked by ensuring a sample of measured quantities (nematic order parameter, motor density and mean squared displacements) were independent of time. Densities above $\phi\approx0.2$, or motor speeds below $k_{\rm M}\approx k_{\rm D}$, did not reach stationarity within the attainable simulation times of around $10^{2}k_{\rm D}^{-1}$ and were avoided. Motor speeds above $k_{\rm M}\approx10^{3}k_{\rm D}$ placed a finite fraction of motors close to their maximum extension, resulting in a significant rate of motor breakage through overextension under relative filament motion. These speeds were also avoided to reduce the number of mechanisms under consideration.
\begin{figure}[htpb]
\centerline{\includegraphics[width=8.5cm]{schematic.pdf}}
\caption{(a)~Filaments are linear monomer arrays with centers $b$ apart, with a polarity vector $\hat{\bf p}$ directed from $[-]$ to $[+]$. Each monomer has an excluded-volume interaction of range $2^{1/6}b$ to non-bonded monomers.
(b)~Each motor head moves at a rate $k_{\rm M}e^{-\Delta E/k_{\rm B}T}$ if the corresponding increase in spring energy $\Delta E\geq 0$; for $\Delta E<0$, the rate is simply~$k_{\rm M}$.
(c)~Motors attach at a rate $k_{\rm A}$ when the monomers are within a prescribed distance. Each head detaches at a rate $k_{\rm D}$, leading to removal of the motor. If the head is at the $[+]$-end, this rate becomes~$k_{\rm E}$. (d)~The system is narrowly confined in the $z$-direction, with periodic boundaries for $x$ and~$y$.
}
\label{f:schematic}
\end{figure}
\section{Results}
\label{s:results}
Snapshots representative of the parameter space sampled are presented in Fig.~\ref{f:snapshots}. Movies are provided in the supplementary information~\cite{SuppInf}. Filament configurations can be broadly identified as belonging to one of two groups:
(i) weakly bound states of small, transient clusters, or
(ii) strongly bound states with spatially-extended structure formation. The former class displays a range of exotic dynamics and is the subject of Sec.~\ref{s:resDynamics}. The motor-driven structure formation for strongly bound states is detailed in Sec.~\ref{s:resStatics}, and is supported by analysis of a
simple, effective one-filament model that highlights the
controlling role of $k_{\rm E}$ in selecting between aster and layer states.
All results are presented in dimensionless form by scaling lengths by the filament length $L=Mb$, and times or rates by either the detachment rate $k_{\rm D}$ or the time $\tau_{\rm L}=M/k_{\rm M}$ for an unloaded motor to traverse a filament. The relationship to the equivalent experimental scales is discussed in Sec.~\ref{s:discussion}.
\begin{figure}[htbp]
\centerline{\includegraphics[width=8.5cm]{snapshotsCombined.pdf}}
\caption{Snapshots representative of regions of parameter space for weakly-bound states with $k_{\rm A}=20k_{\rm D}$ in (a)--(c), and strongly bound states with $k_{\rm A}=40k_{\rm D}$ in (d)--(f). The other parameters are
(a)~$k_{\rm E}/k_{\rm D}=10$, $k_{\rm M}=10^{2}k_{\rm D}$ and $\phi=0.1$,
(b)~$k_{\rm E}/k_{\rm D}=1$, $k_{\rm M}=10^{2}k_{\rm D}$ and $\phi=0.15$,
(c)~$k_{\rm E}/k_{\rm D}=5$, $k_{\rm M}=10^{2}k_{\rm D}$ and $\phi=0.15$.
(d)~$k_{\rm E}/k_{\rm D}=1$, $k_{\rm M}=10^{2}k_{\rm D}$ and $\phi=0.15$,
(e)~$k_{\rm E}/k_{\rm D}=5$, $k_{\rm M}=10^{2}k_{\rm D}$ and $\phi=0.15$ and
(f)~$k_{\rm E}/k_{\rm D}=1$, $k_{\rm M}=k_{\rm D}$ and $\phi=0.2$.
Light (dark) shades correspond to filament $[+]$ ($[-]$)-ends. Motors are not shown for reasons of clarity, but are provided in the matching figures in the supplementary information, along with movies for the same parameters~\cite{SuppInf}.
}
\label{f:snapshots}
\end{figure}
\subsection{Dynamics of weakly-bound states}
\label{s:resDynamics}
A basic dynamic quantity is the mean filament translational speed
$v^{\rm RMS}\equiv\sqrt{\langle v^{2}\rangle}$ averaged over particle
trajectories in steady state~\cite{Saintillan2012}. However, instantaneous velocities are not
well defined for overdamped dynamics with thermal noise, as employed here
[see Eqs.~(\ref{e:modelTrans}) and (\ref{e:modelRot})]. It is therefore
necessary to estimate the velocity over a finite time interval~$t>0$,
but this raises further difficulties since filament motion is not ballistic
in the regimes of interest, {\em i.e.} the displacement vector
$\Delta {\bf x}(t)\equiv{\bf x}(t_{0}+t) - {\bf x}(t_{0})$ of a filament
center ${\bf x}$ is not linear in~$t$, making it difficult to define a unique velocity. Instead,
we first consider a nominal speed defined over a fixed time interval,
$v^{\rm RMS}\equiv\Delta r(t^{\rm RMS})/t^{\rm RMS}$ with
$\Delta r\equiv|\Delta {\bf x}|$ and $t^{\rm RMS}=(4k_{\rm D})^{-1}$,
as a measure of net motility, and consider trends with respect to variations
in $k_{\rm M}$ and $k_{\rm E}$. Varying $t^{\rm RMS}$ alters the values
of $v^{\rm RMS}$ but not these trends. The full spectrum of displacements
with varying lag times is then considered in more detail.
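A minimal Python sketch of this estimator, assuming a single trajectory sampled at uniform intervals \texttt{dt} and averaging over all time origins, reads:
\begin{verbatim}
import numpy as np

def v_rms(traj, dt, t_rms):
    # traj: (T, 3) array of one filament centre sampled
    # every dt. Nominal speed: RMS displacement over the
    # lag t_rms, divided by t_rms.
    lag = max(1, int(round(t_rms / dt)))
    disp = traj[lag:] - traj[:-lag]
    return np.sqrt((disp**2).sum(axis=1).mean()) / t_rms
\end{verbatim}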
Fig.~\ref{f:velMags} shows $v^{\rm RMS}$ versus $k_{\rm M}$ for a range of $k_{\rm E}$ from $k_{\rm E}=0.2k_{\rm D}$ to $k_{\rm E}=10k_{\rm D}$.
For $k_{\rm E}<k_{\rm D}$, the system forms a strongly-bound aster state similar to Fig.~\ref{f:snapshots}(d), with correspondingly low values of~$v^{\rm RMS}$. Such states are the focus of Sec.~\ref{s:resStatics} and will not be pursued further here. For $k_{\rm E}\geq k_{\rm D}$, states more closely resemble Figs.~\ref{f:snapshots}(a-c) and
$v^{\rm RMS}$ monotonically increases
with $k_{\rm M}$ but at a slower rate than the naive expectation
$v^{\rm RMS}\propto k_{\rm M}$, which would arise from a filament
being pulled with constant motor stepping rate $k_{\rm M}$ across other filaments.
Sub-linear scaling of speed with activity (controlled via ATP concentration)
has also been inferred from experiments~\cite{Sanchez2012,Thampi2013}.
Possible origins of this sub-linear behavior are that, for larger
$k_{\rm M}$, motors more often reach their stall force, or experience more
frequent force-induced detachment from the filament. Furthermore,
the observation from Fig.~\ref{f:velMags} that $v^{\rm RMS}$ increases with~$k_{\rm E}$ suggests end-dwelling motors act to suppress filament motion. To test this hypothesis, let $t^{[+]}_{\rm occ}$ denote the mean dwell time of motor heads at $[+]$-ends, and $t_{\rm occ}$ the occupancy time at any other point along the filament ({\em i.e.}, before the head detaches or moves). All of the $v^{\rm RMS}$ can be collapsed onto a single-valued function of $t^{[+]}_{\rm occ}/t_{\rm occ}$ after rescaling both axes by powers of $k_{\rm E}/k_{\rm D}$. As demonstrated in Fig.~\ref{f:velMags} (inset), good collapse arises when employing the scaling variables $\tilde{t}=(t^{[+]}_{\rm occ}/t_{\rm occ})(k_{\rm D}/k_{\rm E})$ and $\tilde{v}=(k_{\rm D}/k_{\rm E})^{3/4}(v^{\rm RMS}/Lk_{\rm D})$, {\em i.e.}, $\tilde{v}=g(\tilde{t})$ with scaling function $g$.
That $v^{\rm RMS}$ is a function of $k_{\rm E}/k_{\rm D}$ and the relative dwell time at $[+]$-ends confirms that system activity is strongly influenced by end-dwelling. The origins of the scaling exponents for $\tilde{t}$ and $\tilde{v}$ are not yet evident.
Extending this analysis to self-diffusion reinforces the important role of end-dwelling. Active media often exhibit super-diffusion with mean-squared displacements $\Delta r^{2}$ that vary super-linearly with time, $\Delta r^{2}\propto t^{a}$ with $1<a\leq2$, as observed in intracellular transport~\cite{Bursac2005,Zhou2009,Bruno2009}, {\em in vitro} experiments~\cite{Kohler2011,Kohler2012,Sanchez2012} and models of self-propelled particles~\cite{Tu1998,Chate2008,Golestanian2009}. Conversely, $0<a<1$ is referred to as sub-diffusion. Both forms of anomalous diffusion have been measured in our model, as shown in Fig.~\ref{f:msd_kA10}, which gives $\Delta r^{2}(t)$ for weakly bound systems with $k_{\rm A}=10k_{\rm D}$. Sub-diffusion with $a\approx0.8$ is observed over short times $t$ when the motors are acting as passive crosslinkers, generating viscoelasticity of the aggregate structures that retards filament motion~\cite{Mason2000}. For larger~$t$, when motor motion becomes relevant, a crossover to super-diffusion with $a\approx1.6$ is clearly seen. This super-diffusive regime becomes more dominant with a higher density of motors, as shown in Fig.~\ref{f:msd_vv} for the higher $k_{\rm A}=20k_{\rm D}$. Further increasing $k_{\rm A}$ generates strongly-bound structures such as Figs.~\ref{f:snapshots}(d)--(f), which remain sub-diffusive for the largest simulation times achieved.
Independent evaluation of $a>1$ is possible from the velocity autocorrelation function $R(t)\equiv\langle {\bf v}(0)\cdot{\bf v}(t)\rangle$, which in steady state obeys~\cite{Taylor1922,Majda1999}
\begin{equation}
\langle\Delta r^{2}(t)\rangle
=
2
\int_{0}^{t}{\rm d}s\,(t-s)R(s)\:,
\label{e:vv}
\end{equation}
from which it immediately follows that $1<a\leq2$ corresponds to $R(t)\sim t^{a-2}$. $R(t)$ is plotted in Fig.~\ref{f:msd_vv} (inset) and is consistent with this prediction.
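Indeed, inserting $R(s)=s^{a-2}$ into Eq.~(\ref{e:vv}) gives $\langle\Delta r^{2}(t)\rangle=2t^{a}/[a(a-1)]$, {\em i.e.}, $\Delta r^{2}\propto t^{a}$. This closed form can be verified numerically in a few lines of Python (a sketch assuming SciPy is available):
\begin{verbatim}
from scipy.integrate import quad

def msd_from_vacf(t, a):
    # Eq. (e:vv) with R(s) = s**(a - 2); the exact result
    # is 2 * t**a / (a * (a - 1)) for 1 < a <= 2.
    val, _ = quad(lambda s: (t - s) * s**(a - 2), 0.0, t)
    return 2.0 * val

a = 1.6
for t in (1.0, 2.0, 4.0):
    print(t, msd_from_vacf(t, a), 2*t**a/(a*(a - 1)))
\end{verbatim}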
The exponent~$a$, as determined by fitting $\Delta r^{2}(t)$ around the point where $\Delta r^{2}=L^{2}$, for a range of $k_{\rm E}$ and $k_{\rm M}$ is shown in Fig.~\ref{f:msd_exps}, and is seen to cover a similar range to that measured for intra-cellular traffic~\cite{Bursac2005,Bruno2009}. The variation with $k_{\rm M}$ is non-monotonic; however, $a$ monotonically increases with the end-detachment rate $k_{\rm E}$, and for high $k_{\rm E}$ approaches $a=2$ as observed in reconstituted active gels~\cite{Kohler2011,Kohler2012,Sanchez2012}. This observation suggests end-dwelling is again playing a key role, and plotting the exponent against the same scaling variable $\tilde{t}$ as above collapses the data as shown in the figure inset. Although here the collapse is only partial, the significant clustering compared to the unscaled data demonstrates the importance of end-dwelling.
The variation of the effective MSD exponent~$a$ with $k_{\rm E}$ and $\phi$ is presented in Fig.~\ref{f:stateMSDlowkA}, where we also plot the state of these same data points using the procedure to be described in Sec.~\ref{s:resStatics}. High filament density and low $k_{\rm E}$ give rise to persistent, localised clusters such as those evident in Fig.~\ref{f:snapshots}(b) and~(c), which are termed bundles. Such states, although super-diffusive with~$a>1$, have a much lower exponent than the nematic states that arise for high $k_{\rm E}$ or low~$\phi$, which are referred to as weak binding in the figure, and resemble Fig.~\ref{f:snapshots}(a).
Asters predominantly form for $k_{\rm E}<k_{\rm D}$ for this $k_{\rm A}$
and $k_{\rm M}$, with correspondingly sub-diffusive dynamics ($a<1$) as
seen in the figure.
Spatial correlations in velocity reveal instantaneous modes of relative
filament motion, and have been used to quantify the effect of mutations on
cytoplasmic streaming {\em in vivo}~\cite{Ganguly2012}, and of ATP concentration on active flow {\em in vitro}~\cite{Sanchez2012} and in ``hydrodynamic" models~\cite{Thampi2013}. The two-point correlation function $C_{vv}(r)=\langle{\bf v}(0)\cdot{\bf v}({\bf r})\rangle$ provides information purely as a function of the distance $r=|{\bf x}^{\beta}-{\bf x}^{\alpha}|$ separating filament centres ${\bf x}^{\alpha}$ and ${\bf x}^{\beta}$. Additional insight can be gained by projecting the separation vector parallel and perpendicular to the filament polarity $\hat{\bf p}^{\alpha}$, {\em i.e.},
\begin{equation}
C^{\parallel}_{vv}(r)
=
\frac
{\sum_{\alpha,\beta}\,{\bf v}^{\alpha}\cdot{\bf v}^{\beta}\,\delta(r-|{\bf x}^{\alpha}-{\bf x}^{\beta}|)\cos^{2}\theta}
{\sum_{\alpha,\beta}\delta(r-|{\bf x}^{\alpha}-{\bf x}^{\beta}|)\cos^{2}\theta},
\label{e:Cvv_par}
\end{equation}
where $\cos\theta=\hat{\bf p}^{\alpha}\cdot({\bf x}^{\beta}-{\bf x}^{\alpha})/r$. The corresponding expression for $C^{\perp}_{vv}(r)$ is given by replacing $\cos^{2}\theta$ by $\sin^{2}\theta$. (Using $\hat{\bf p}^{\beta}$ to calculate $\theta$ gives the same result due to the symmetry of Eq.~(\ref{e:Cvv_par})). The variation of $C_{vv}(r)$, $C^{\parallel}_{vv}(r)$ and $C^{\perp}_{vv}(r)$ with $k_{\rm M}$ is plotted in Fig.~\ref{f:spatVel}, and exhibits qualitatively different behavior for the two projections:~$C^{\perp}_{vv}$ is always positive, while $C^{\parallel}_{vv}$ exhibits a negative region for fast motors. This trend remains true for all $1\leq k_{\rm E}/k_{\rm D}\leq10$ considered, with a broader anti-correlated region for increasing~$k_{\rm E}$ when filament motion is less inhibited. Throughout this range the filament polarity vectors are aligned in parallel, as evident in the corresponding polarity correlation functions described in Sec.~\ref{s:resStatics}. Inspection of Eq.~(\ref{e:Cvv_par}) then reveals that $C^{\parallel}_{vv}<0$ corresponds to contrary motion of overlapping filaments. Cytoplasmic streaming in {\em Drosophila} egg cells exhibited anti-correlations over lengths of approximately $18\mu$m, comparable to the microtubule length~\cite{Ganguly2012}, and therefore on longer lengths than observed here. In addition, little or no variation in correlation length with motor speed was observed either in the {\em Drosophila} system, or in reconstituted {\em in vitro} networks and a ``hydrodynamic" model~\cite{Sanchez2012,Thampi2013}, unlike the variation apparent in Fig.~\ref{f:spatVel}. The cause of this deviation is not clear, but may simply be due to the smaller systems studied here not permitting active swirls to fully develop.
It may also be due to the lack of hydrodynamic interactions in our model,
which has been shown to give long-range velocity correlations in a
microscopic model~\cite{Saintillan2012}.
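For completeness, the estimator of Eq.~(\ref{e:Cvv_par}) amounts to a simple histogram accumulation over filament pairs. The Python sketch below is illustrative and omits periodic-image corrections; the perpendicular version follows by replacing $\cos^{2}\theta$ with $\sin^{2}\theta$:
\begin{verbatim}
import numpy as np

def projected_vv(x, v, p, r_max, nbins):
    # x, v, p: (N, 3) arrays of filament centres,
    # velocities and polarity vectors.
    edges = np.linspace(0.0, r_max, nbins + 1)
    num = np.zeros(nbins)
    den = np.zeros(nbins)
    N = len(x)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            d = x[j] - x[i]
            r = np.linalg.norm(d)
            if r == 0.0 or r >= r_max:
                continue
            c2 = (p[i] @ d / r)**2   # cos^2(theta)
            b = np.searchsorted(edges, r, side='right') - 1
            num[b] += (v[i] @ v[j]) * c2
            den[b] += c2
    return np.where(den > 0, num/np.maximum(den, 1e-300), 0.0)
\end{verbatim}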
\begin{figure}
\centerline{\includegraphics[width=8.5cm]{velMags.pdf}}
\caption{Filament speed $v^{\rm RMS}$ versus unloaded motor speed $k_{\rm M}$
in a double-logarithmic representation, for (from bottom to top) $k_{\rm E}/k_{\rm D}$=0.2, 0.5, 1, 2, 5 and 10 respectively. $k_{\rm A}=20k_{\rm D}$, $\phi=0.15$ and the thick dashed line has a slope of 1. The inset shows the scaled velocity $\tilde{v}=(k_{\rm D}/k_{\rm E})^{3/4}(v^{\rm RMS}/Lk_{\rm D})$ against the scaled dwell time $\tilde{t}=(t_{\rm occ}^{[+]}/t_{\rm occ})(k_{\rm D}/k_{\rm E})$ for the $k_{\rm E}\geq k_{\rm D}$ data points only.
}
\label{f:velMags}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=8.5cm]{msd_kA10.pdf}}
\caption{
Mean squared displacements $\Delta r^{2}(t)$ versus lag time $t$ plotted such that normal
diffusion $\Delta r^{2}\propto t$ corresponds to a horizontal line.
$k_{\rm A}=10k_{\rm D}$, $k_{\rm M}=k_{\rm D}$, $\phi=0.15$, and the $k_{\rm E}$ are given
in the legend. The thin diagonal line corresponds to displacements equal to the filament length, {\em i.e.} $\Delta r^{2}=L^{2}$. The thick dashed lines, which have slopes $-0.2$ and $0.6$ on these axes, correspond to sub-diffusion $\Delta r^{2}\propto t^{0.8}$ and super-diffusion $\Delta r^{2}\propto t^{1.6}$ respectively. The leftmost vertical line corresponds to $t=t^{\rm RMS}$, {\em i.e.} the time interval used to calculate the $v^{\rm RMS}$ in Fig.~\ref{f:velMags}. The middle and rightmost vertical lines correspond to $t=k_{\rm M}^{-1}$ and $t=Mk_{\rm M}^{-1}\equiv \tau_{L}$ respectively.
}
\label{f:msd_kA10}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=8.5cm]{msd_vv.pdf}}
\caption{
Mean squared displacements $\Delta r^{2}(t)$ versus lag time $t$ plotted in the same manner
as in Fig.~\ref{f:msd_kA10}. $k_{\rm A}=20k_{\rm D}$, $k_{\rm M}=10^{2}k_{\rm D}$, and
$\phi$ and $k_{\rm E}$ are given in the legend. The thin diagonal line corresponds to displacements equal to the filament length, $\Delta r^{2}=L^{2}$. The thin vertical line corresponds to $t=t^{\rm RMS}$.
(Inset)~The velocity autocorrelation function $R(t)=\langle{\bf v}(0)\cdot{\bf v}(t)\rangle$ for the same runs. For both plots, the thick dashed lines have the given slopes.
}
\label{f:msd_vv}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=8.5cm]{msd_exps.pdf}}
\caption{Effective MSD exponent $a$ for the mean-squared displacements (filled symbols) versus $k_{\rm M}$, for $\phi=0.15$, $k_{\rm A}=20k_{\rm D}$ and the $k_{\rm E}$ given in the legend. The open symbols show $a$ as measured from the decay of velocity autocorrelations $R(t)\sim t^{a-2}$ for $k_{\rm E}=k_{\rm D}$ (other $k_{\rm E}$ not shown for clarity but give similar agreement). (Inset)~Data plotted against the same $\tilde{t}$ as in Fig.~\ref{f:velMags}, demonstrating partial collapse.}
\label{f:msd_exps}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=6.5cm]{stateMSDlowkA.pdf}}
\caption{
(a)~Effective MSD exponent and (b) state as a function of $\phi$ and $k_{\rm E}/k_{\rm D}$ for $k_{\rm A}=20k_{\rm D}$ and $k_{\rm M}=10^{2}k_{\rm D}$. Symbols denote actual data points and contours are linearly interpolated. The calibration bar for (a) denotes the value of the MSD exponent. The state was determined using the procedure described in Sec.~\ref{s:resStatics}.
}
\label{f:stateMSDlowkA}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=8.5cm]{spatVel.pdf}}
\caption{Spatial velocity correlations without projection $C_{vv}(r)$, and projected parallel and perpendicular to the filament polarity vector, $C_{vv}^{\parallel}(r)$ and $C_{vv}^{\perp}(r)$ respectively, for the $k_{\rm M}$ given in the legend in the lower panel, $k_{\rm A}=20k_{\rm D}$, $k_{\rm E}=5k_{\rm D}$ and $\phi=0.15$.
}
\label{f:spatVel}
\end{figure}
\subsection{Structure formation for strong binding}
\label{s:resStatics}
Increasing the motor density, {\em e.g.}, by raising the attachment rate~$k_{\rm A}$, produces extended clusters consisting of many filaments. Three distinct configurations were observed in this strongly-bound regime for the parameter space sampled, namely asters, layers and bundles as demonstrated in Fig.~\ref{f:snapshots}(d), (e) and~(f) respectively. Signatures of the structural organisation are apparent in the spatial correlations in filament polarity~$\hat{\bf p}$, quantified by projecting relative displacements parallel and perpendicular to the filament axis analogously to the velocity correlations~(\ref{e:Cvv_par}). Plots of both
$C_{pp}^{\parallel}(r)$ and $C_{pp}^{\perp}(r)$ are given in
Fig.~\ref{f:polarCorrns} for examples of each of the three states mentioned, and also for a weakly bound state by way of comparison. Projecting the correlations in this manner, rather than using a single
averaged quantity~\cite{Saintillan2012,Sanchez2012,Thampi2013}, provides
additional information which can be used to extract the structure formation.
The polarity correlation data can be used to define criteria to
determine the system state as follows: (i)~If $C_{pp}^{\perp}(r)$ remains above some threshold value $C^{\rm str}\approx1$ up to some given length $\ell^{\rm str}<L$, the state is regarded as strongly bound. (ii)~If a strongly bound state exhibits positive $C_{pp}^{\parallel}(r)$ and $C_{pp}^{\perp}(r)$ up to $r=L$, it is regarded as an aster or a layer; if not, it is a bundle. (iii)~Layers are differentiated from asters in that $C_{pp}^{\perp}$ remains non-negative up to the system size. Although clearly there is some arbitrariness in the choice of thresholds $C^{\rm str}$ and~$\ell^{\rm str}$, this only affects marginal cases near state boundaries. State diagrams for $k_{\rm A}=40k_{\rm D}$ are given in Fig.~\ref{f:state} for $k_{\rm E}=k_{\rm D}$ and $k_{\rm E}=5k_{\rm D}$.
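These criteria translate directly into a decision procedure; the following Python sketch is illustrative (the correlations are assumed to be tabulated on a common set of radii \texttt{r}):
\begin{verbatim}
import numpy as np

def classify_state(r, Cpp_par, Cpp_perp, L, L_sys,
                   C_str=0.9, ell_str=None):
    r = np.asarray(r)
    Cpp_par = np.asarray(Cpp_par)
    Cpp_perp = np.asarray(Cpp_perp)
    if ell_str is None:
        ell_str = L / 6.0
    # (i) strong binding: perpendicular correlation above
    # threshold out to ell_str
    if not np.all(Cpp_perp[r <= ell_str] >= C_str):
        return "weakly bound"
    # (ii) aster/layer vs bundle: both projections positive
    # up to r = L
    if not (np.all(Cpp_par[r <= L] > 0)
            and np.all(Cpp_perp[r <= L] > 0)):
        return "bundle"
    # (iii) layers: perpendicular correlation non-negative
    # out to the system size
    return ("layer" if np.all(Cpp_perp[r <= L_sys] >= 0)
            else "aster")
\end{verbatim}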
It is clear from Fig.~\ref{f:state} that reducing the dwell-time by increasing $k_{\rm E}$ favors layers over asters. To elucidate this crossover, we constructed and solved a one-filament model consisting of a set of rate equations for the occupancy of motor heads along a filament, given known rates of motor attachment, detachment and movement. Since the actual attachment and movement rates depend on the current configuration, they are not known {\em a priori}, so to close the equations we assumed a constant attachment rate $k_{\rm A}^{*}$ and a constant movement rate $k_{\rm M}^{*}$. Details are given in the Appendix. Inspection of the
steady-state solution reveals regimes for fast ($k_{\rm M}^{*}\gg Mk_{\rm D}$) and slow ($k_{\rm M}^{*}\ll Mk_{\rm D}$) motors, and also for end-dominated binding $2k^{*}_{\rm M}\gg k_{\rm E}M$ when most motors occupy $[+]$-ends. This latter regime corresponds to $t^{[+]}_{\rm occ}/t_{\rm occ}\gg M/2$. If we now assume that fast motors with end-dominated binding generate asters, fast motors without end-dominated binding generate layers ({\em i.e.}, $t^{[+]}_{\rm occ}/t_{\rm occ}\ll M/2$), and slow motors generate bundles, then the state diagram in Fig.~\ref{f:thy_state}(a) is predicted. Comparison to the numerical data in Fig.~\ref{f:state} reveals qualitative agreement, confirming that the dominant factors determining pattern formation have been correctly identified.
For $k_{\rm E}\ll k_{\rm D}$, lateral binding with fast motors is no longer possible, but end binding with slow motors can arise as shown in Fig.~\ref{f:thy_state}(b). This suggests the layers regime is replaced by an extended aster regime, consistent with the results of Fig.~\ref{f:stateMSDlowkA}.
For comparison to other active and passive systems, two further quantities often employed to characterize structural arrangements in disordered
or weakly ordered systems are now described. As shown in Fig.~\ref{f:Sq},
the static structure factor~$S(q)$, calculated from the
correlations of filament centres, increases with decreasing wave vector $q$
for a broad range of~$q$. The variation is approximately a power law, $S(q)\propto q^{-\beta}$, with an exponent in the range $1\leq\beta<1.5$. Rod-like objects generate scattering curves with $\beta=1$~\cite{Roberts2012}; however, our
structure factors $S(q)$ are calculated from the centres of mass of each
filament and not the constituent monomers. Thus the power-law decay of $S(q)$
does not reflect the structure of a single filament, but rather arrays of laterally-aligned filaments as shown in the figure inset. Fluctuations in this array map to undulations in the line of centers, akin to a polymer in which each monomer corresponds to a filament's centre of mass, and indeed values of $\beta>1$ are expected for flexible polymers on lengths greater than their Kuhn length~\cite{EgalhaafWorkshop}.
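For reference, the centre-of-mass structure factor can be estimated as sketched below (illustrative Python; a production version would restrict ${\bf q}$ to the reciprocal lattice of the periodic box rather than averaging over random directions):
\begin{verbatim}
import numpy as np

def structure_factor(x, q_vals, n_dirs=20, seed=0):
    # S(q) = <|rho_q|^2>/N from filament centres x (N, 3),
    # averaged over n_dirs random directions per |q|.
    N = len(x)
    rng = np.random.default_rng(seed)
    S = []
    for q in q_vals:
        acc = 0.0
        for _ in range(n_dirs):
            n = rng.standard_normal(3)
            n /= np.linalg.norm(n)
            rho = np.exp(1j * q * (x @ n)).sum()
            acc += abs(rho)**2 / N
        S.append(acc / n_dirs)
    return np.array(S)
\end{verbatim}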
The weak and strong binding regimes are not distinct, and there is a continuous crossover between the two. This crossover regime contains a scale-invariant distribution of cluster sizes $P(n_{\rm c})$, where two filaments are regarded as belonging to the same cluster if they are connected by at least one motor. As shown in Fig.~\ref{f:clusterDist}, $P(n_{\rm c})$~is unimodal at small $n_{\rm c}$ for weakly-bound states, becomes a power law with an exponent $-2$ within the crossover, and bimodal for strongly-bound states. The exponent $-2$ is consistent with values observed for self-propelled particles in 2D~\cite{Yang2010,Chate2008}, but differs from the $-1$ observed in strictly 2D simulations of a similar model to here~\cite{Head2011b}.
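Identifying clusters from the motor connectivity is a standard union-find computation; a minimal illustrative Python sketch is:
\begin{verbatim}
from collections import Counter

def cluster_sizes(n_filaments, motor_links):
    # motor_links: list of (i, j) pairs of filament indices
    # connected by at least one motor.
    parent = list(range(n_filaments))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in motor_links:
        parent[find(i)] = find(j)

    counts = Counter(find(i) for i in range(n_filaments))
    return sorted(counts.values())  # cluster sizes n_c
\end{verbatim}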
\begin{figure}
\centerline{\includegraphics[width=8.5cm]{polarCorrns.pdf}}
\caption{The polarity correlation function projected parallel $C^{\parallel}_{pp}(r)$ (top) and perpendicular $C^{\perp}_{pp}(r)$ (bottom) to the filament axis.
Symbols refer to the same parameters as in Fig.~\ref{f:snapshots}:
Circles to Fig.~\ref{f:snapshots}(a) (weakly bound),
squares to Fig.~\ref{f:snapshots}(d) (aster),
diamonds to Fig.~\ref{f:snapshots}(e) (layer) and
triangles to Fig.~\ref{f:snapshots}(f) (bundle).
}
\label{f:polarCorrns}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=6cm]{stateDiagram.pdf}\hspace{1cm}}
\caption{States for filament density $\phi$ and motor speed $k_{\rm M}$ for (a)~$k_{\rm E}=k_{\rm D}$ and (b)~$k_{\rm E}=5k_{\rm D}$. $k_{\rm A}=40k_{\rm D}$ in both cases. Symbols refer to state: Circle (aster), diamond (layers), downward triangle (bundle) and upward triangle (weakly bound). The threshold parameters were $C^{\rm str}=0.9$ and~$\ell^{\rm str}=L/6$. Boundaries are drawn at midpoints between symbols.}
\label{f:state}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{theory.pdf}
\end{center}
\caption{(a) Schematic diagram denoting regimes predicted by the analytical model, here shown for $k_{\rm E}\gg k_{\rm D}$ (for $k_{\rm E}\approx k_{\rm D}$ the middle layers region vanishes). As described in the Appendix, the states for strong binding are predicted based on motor speed and the location of motor binding (end-dominated or laterally spread out). The boundary between weak and strong regimes is estimated by comparing the energies of thermal fluctuations and motor elasticity. To map to density, it has been assumed that $k_{\rm A}^{*}\propto k_{\rm A}\phi^{2}$.
(b) The same for $k_{\rm E}\ll k_{\rm D}$. Note that if $k_{\rm E}\leq k_{\rm D}/M$, the bundle region vanishes.
}
\label{f:thy_state}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=8.5cm]{SqWithInset.pdf}}
\caption{Static structure factor $S(q)$ for the same data (with the same symbols) as Fig.~\ref{f:polarCorrns}. The thick dashed lines have the given slope.
The schematic diagram in the inset explains why this $S(q)$, calculated from filament centers (black circles), can produce a spectrum similar to that of polymers.}
\label{f:Sq}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=8.5cm]{clusterDist.pdf}}
\caption{Probability density function $P(n_{c})$ of cluster sizes for $k_{\rm A}/k_{\rm D}=20$, $k_{\rm M}=10^{2}k_{\rm D}$, $k_{\rm E}=5k_{\rm D}$ and the filament densities $\phi$ given in the legend. The thick dashed line has a slope of~-2.}
\label{f:clusterDist}
\end{figure}
\section{Discussion}
\label{s:discussion}
The use of microscopic modelling has highlighted the importance of a rarely considered microscopic parameter, namely the detachment rate from filament $[+]$-ends, in determining the motor-driven dynamics of weakly bound states, and the selection between asters and layers in the strongly bound regime. This parameter is not immediately accessible to ``hydrodynamic" theories. Furthermore, it
cannot be easily varied experimentally, as it is an intrinsic property of
motor proteins and filaments together, and is not amenable to
continuous control (although see below).
Microscopic modelling thus complements both ``hydrodynamic" theory and experiments by providing important
insight that is difficult to gain by other means.
That the differential end-detachment rate $k_{\rm E}/k_{\rm D}$ can influence structure and dynamics implies that it may also modify the function of protein filament assemblies, and thus have been under the influence of natural selection, {\em i.e.}, a motor's $k_{\rm E}/k_{\rm D}$ may have evolved to increase the organism's fitness. If this speculation is true, it would suggest motor mutants exist with differing $k_{\rm E}/k_{\rm D}$, and creating such mutants in {\em in vitro} assays would help elucidate the role of dwell times in cellular function.
It is also possible that other proteins binding to filament ends will affect the end-detachment rate.
Even if direct control over $k_{\rm E}/k_{\rm D}$ is not currently feasible, it should still be possible to test many of the predictions of our model using quasi-two-dimensional chambers, such as those that have been employed to study mixtures of microtubules and motors~\cite{Surrey2001,Sanchez2012}.
This geometry permits direct visualization of fluorescently-tagged filaments {\em via} light microscopy, allowing the quantities presented in Section~\ref{s:results} ({\em e.g.} the mean squared displacements in Figs.~\ref{f:msd_kA10}-\ref{f:msd_exps} and the polarity correlations in Fig.~\ref{f:polarCorrns}) to be extracted and compared to our predictions.
In addition, our predictions for scattering experiments are given in Fig.~\ref{f:Sq}.
However, the experimental controls corresponding to two of our key microscopic parameters, namely the ATP concentration (which modulates $k_{\rm M}$) and the filament density, have not yet been systematically varied.
Surrey {\em et al.}~\cite{Surrey2001} only varied the motor concentration, related to our $k_{\rm A}$ (and indeed they found asters for high concentrations in agreement with our model), whereas Sanchez {\em et al.}~\cite{Sanchez2012} varied the ATP concentration but also added a depletion agent absent in our model.
We see no reason why these experiments could not be modified to directly test our predictions.
Hydrodynamic quantities defined on scales much longer than the filament length $L$ will require accelerated simulations before predictions can be made, as we now discuss.
In terms of the time for an unloaded motor to traverse a filament~$\tau_{L}=M/k_{\rm M}$, the total simulation times achieved varied from approximately $3\tau_{L}$ (for $k_{\rm M}=k_{\rm D}$) to $3\times10^{2}\tau_{L}$ (for $k_{\rm M}=10^{2}k_{\rm D}$). For actin-myosin systems $\tau_{L}\approx 0.1$s (based on $\approx1\mu$m filaments and motor speeds of $\approx 10\mu$m s$^{-1}$~\cite{HowardBook}), for which the maximum simulation time corresponds to minutes, shorter than typical experiments by 1--2 orders of magnitude. For kinesin-microtubule systems, $\tau_{L}\approx10$s ($\approx 10\mu$m filaments and motor speeds of $1\mu$m s$^{-1}$~\cite{HowardBook}), and here the simulation times approach hours, representative of experiments.
For length scales, however, the simulations fall short of the lengths orders of magnitude larger than $L$ required when coarse-graining ``hydrodynamic"
equations~\cite{Liverpool2005}; all results presented here were for $X=Y\approx4L$. Experimental length scales are also typically much larger, except for cell-scale confinement where this model can already achieve comparable dimensions~\cite{Head2011b,Silva2011}. Roughly 90\% of our simulation time was spent performing the
excluded-volume calculations (including Verlet list construction by cell sorting~\cite{AllenTildesly}), typically on 8-core shared memory architectures. This bottleneck can be reduced by extending the model to multi-node distributed architectures, or converting to run on many-core GPU devices. One order of magnitude improvement will allow box dimensions $X$, $Y\approx10L$ to be reached (with the same thickness $Z=L/6$), which would permit both length and time scales representative of {\em in vitro} microtubule-kinesin experiments to be replicated {\em in silico}.
In summary, simulations of a microscopic model of filament-motor
mixtures qualitatively reproduce essential aspects of active gel properties.
Improvements in simulation approaches will soon allow simulations on
experimentally relevant time and length scales.
\begin{acknowledgments}
DAH was funded by a BHRC Senior Translational Research Fellowship, University of Leeds.
\end{acknowledgments}
\section{Introduction}
Researchers in the emerging field of synthetic biology \cite{Heinemann2006} have demonstrated the successful construction of devices based on populations of engineered microbes \cite{Basu2005,Sole2012,Tamsir2011}. Recent work has focussed attention on the combination of single-cell intracellular devices \cite{Gardner2000, Auslander2012} with intercellular engineering, in order to build increasingly complex systems.
To date, most work on engineered cell-cell communication has focussed on quorum-sensing (QS) \cite{Atkinson2009}, which may be thought of as a communication protocol to facilitate inter-bacterial communication via the generation and receiving of small signal molecules. However, recent studies on DNA messaging \cite{Endy2012} highlight the importance and utility of transferring whole sets of DNA molecules from one cell (the so-called donor) to another (the recipient). Bacterial {\it conjugation} is a cell-to-cell communication mechanism \cite{Fernando2010, Fernando2012} that enables such transfers to occur.
In this paper we present a simulation platform that realistically simulates (in a modular fashion) both intracellular genetic networks and intercellular communication via conjugation. To our knowledge, this is the first such platform to offer both of these facilities. We first review previous work on cell simulation, before presenting the details of our model. We validate it against previous experimental work, and then discuss possible applications of our method.
\section{Previous work}
The rapid development of bacterial-based devices is accompanied by a need for computational simulations and mathematical modelling to facilitate the characterisation and design of such systems. A number of platforms and methods are available for this purpose. Agent-based models (AbMs) are widely used \cite{BSim2012}, and were first applied to microbial growth in {\it BacSim} \cite{Kreft1998}. Continuous models have also been proposed \cite{Melke2010}, and recent developments make use of hardware optimisation, by using GPUs (Graphics Processing Units) in order to scale up the number of cells simulated \cite{Haseloff2012}.
Because of the complexity of the system under study, several computational platforms focus on either specific cellular behaviours (e.g., bacterial chemotaxis \cite{Emonet2005}, morphogenesis of dense tissue-like systems \cite{Izaguirre2004}), or on specific organisms (e.g., {\it Myxococcus xanthus} \cite{Holmes2010}). Platforms that incorporate cell-cell communication generally focus their attention on quorum-sensing. Simulations of conjugation do exist, but these consider cells as abstracted {\it circular objects} \cite{Krone2007,Seoane2011b}. We demonstrate in this paper how a consideration of the {\it shape} of cells is an essential feature for understanding the conjugation behaviour of the population. We now describe our model for bacterial growth, in which conjugation is handled explicitly.
\section{Methods}
\label{Methods}
We use an {\it individual-based modelling} approach \cite{Lardon2011}
to the study of conjugation dynamics. This models each cell as an individual, mobile entity, subject to physical forces arising from contact with other cells and the environment (e.g., surfaces). Each cell has a number of different {\it attributes}, listed in Table~\ref{tab:variables2}, which correspond to various physiological states and characteristics.
\begin{table*}[htbp]
\centering
\caption{Cell attributes.}
{\begin{tabular}{l l l}
\hline
Attribute & type & Definition \\
\hline
\texttt{shape} & {\it pymunk}.Shape & Shape of the cell\\
\texttt{program} & [m$_{0}$ $\ldots$ m$_{i}$] & List of the {\it i} regulatory network molecules (m)\\
\texttt{elongation} & [int,int] & Elongation values (one per cell pole) \\
\texttt{position} & [x,y] & Coordinates of centre point, {\it x} and {\it y}\\
\texttt{speed} & float & Velocity\\
\texttt{conjugating} & Boolean & Conjugation state\\
\texttt{plasmid} & Boolean & Program state (present/not present)\\
\texttt{role} & int & Donor (0), recipient (1) or transconjugant (2)\\
\texttt{partner} & int & Role of plasmid transfer cell \\
\hline
\label{tab:variables2}
\end{tabular}}{}
\end{table*}
Bacteria are modelled as rod-shaped cells with a constant radius (parameter \texttt{width} in Table \ref{tab:variables}). Elongation processes occur along the longitudinal axis, which has a minimum dimension of \texttt{length}, and division takes place whenever the cell measures 2*\texttt{length}. The division of a cell into two new daughter cells is also controlled by \texttt{max\_overlap}, which monitors the physical {\it pressure} affecting each cell; if the pressure exceeds this parameter value, the cell delays its growth and division. Thus, a cell under pressure grows more slowly than one without. In Figure \ref{fig:intro}A we see a snapshot of a population with different cell lengths, due to this pressure-dependent behaviour. The global parameter \texttt{growth\_speed} (Table~\ref{tab:variables}) also helps us simulate cell flexibility in a realistic fashion. This parameter defines a ``cut off" value for the number of iterations in which the physics engine must resolve {\it all} the current forces and collisions. Thus, smaller values will cause the solver to be effectively ``overloaded", and some collisions may, as a result, be partially undetected. This means that cells behave as flexible shapes, which makes the simulation more realistic. In Figure \ref{fig:intro}C we show how changes in \texttt{growth\_speed} affect the simulation, from larger (left) to smaller (right) values.
\begin{figure*}[!tpb
\centerline{\includegraphics[width=60mm]{intro2.png}}
\caption{Cell behaviours at low scale. (A): Different cell length due to asynchronous growth (pressure dependent). Two cells marked with red arrows. (B): A donor cell (red) starts the conjugation process with a recipient (grey) which turns into transconjugant (yellow). The pilus (green) is an elastic spring that links the two cells until the process is finished. (C): Different overlapping levels within the cells of a population.}
\label{fig:intro}
\end{figure*}
Horizontal genetic transfer (or conjugation) is modelled using an {\it elastic spring} to connect donor and recipient cells. Parameter \texttt{c\_time} defines the duration of that linkage, which determines the time in which the DNA is transferred. The springs are constantly monitored to ensure that they physically connect both cells during conjugation. Importantly, during conjugation, the resolution of collisions involving relevant cells considers the forces produced by the spring connection, in order to calculate the final movement of the bacteria. By coupling cells in this way, we obtain realistic population-level physical patterns that emerge as a result of large numbers of conjugation events. Figure \ref{fig:intro}B shows this process, with a donor cell (red) and a recipient cell (grey) which becomes a transconjugant (yellow). A transconjugant cell is one that was initially a recipient, but which has been conjugated during its lifetime. Thus, it already has the DNA information transferred by the donor.
This agent-based algorithm has an iteration-driven structure: after initialisation of the main global parameters, it repeatedly performs the following steps for each cell:
\begin{enumerate}
\item Update springs (position and timing).
\item Perform cell division (if cell is ready).
\item Elongate cell (every \texttt{growth\_speed} steps).
\item Handle conjugation.
\item Update physical position.
\end{enumerate}
Conjugation decisions (step 4) made by cells are driven by three sequential steps:
\begin{enumerate}
\item The cell {\it decides}, following a probability distribution, whether or not to conjugate (one trial per iteration).
\item If conjugating, randomly select a mate from surrounding bacteria (if present).
\item If a valid mate is found, effect the conjugation transfer.
\end{enumerate}
The discrete probability distribution used for the conjugation process is $C(N, p, \texttt{c\_time})$, where $N$ is the number of trials in a cell lifetime (\texttt{width} * \texttt{length}), $p$ is the success probability in each trial (with $p \in [0\ldots1]$) and \texttt{c\_time} is the time interval during which $p = 0$ (i.e., while the cell is already conjugating). As stated in \cite{Seoane2011a}, $p$ can vary, depending on whether the cell is a donor (\texttt{p\_d}), a transconjugant that received the DNA message from a donor (\texttt{p\_t1}), or a transconjugant that received the DNA from another transconjugant (\texttt{p\_t2}).
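The per-iteration decision logic can be summarised in the following Python sketch (the attribute names mirror Table~\ref{tab:variables2}, but the function itself is an illustrative simplification of the DiSCUS code, here restricting mates to plasmid-free neighbours):
\begin{verbatim}
import random

def conjugation_trial(cell, neighbours, p_d, p_t1, p_t2):
    # role: 0 donor, 1 recipient, 2 transconjugant;
    # partner: role of the cell that supplied the plasmid.
    if cell["conjugating"] or not cell["plasmid"]:
        return None          # p = 0 while conjugating
    if cell["role"] == 0:
        p = p_d              # donor
    elif cell["partner"] == 0:
        p = p_t1             # transconjugant via a donor
    else:
        p = p_t2             # transconjugant via a transconjugant
    if random.random() >= p:
        return None
    mates = [n for n in neighbours
             if not n["plasmid"] and not n["conjugating"]]
    return random.choice(mates) if mates else None
\end{verbatim}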
\begin{sidewaystable*}
\centering
\caption{Global simulation parameters.}
{\begin{tabular}{ l l | l l}
\hline
Parameter & Definition & Parameter & Definition \\
\hline
\texttt{screenview} & Size of the simulated {\it world} & \texttt{network\_steps} & Number of steps of the ODEs per \texttt{Gt}\\
\texttt{max\_overlap} & Pressure tolerance of cells & \texttt{number\_donors} & Initial number of donor cells\\
\texttt{width} & Width of each cell (lattice squares) & \texttt{number\_recipients} & Initial number of recipient cells \\
\texttt{length} & Length of each cell (lattice squares) & \texttt{spring\_rest\_length} & Natural spring expansion/contraction\\
\texttt{growth\_speed} & Iterations between elongation processes & \texttt{spring\_stiffness} & The tensile modulus of the spring\\
\texttt{Gt} & Doubling time of the simulated cells (iterations) & \texttt{spring\_damping} & The amount of viscous damping to apply\\
\texttt{real\_Gt} & Real doubling time of the studied cells (minutes) & \texttt{cell\_infancy} & Time lag (percentage)\\
\texttt{p\_d} & Probability of conjugation event (donors) & \texttt{pymunk\_steps} & Update the space for the given time step\\
\texttt{p\_t1} & Probability of conjugation event (transconj.1) & \texttt{pymunk\_clock\_ticks} & Frame frequency (FPS, frames per second)\\
\texttt{p\_t2} & Probability of conjugation event (transconj.2) & \texttt{bac\_mass}& Mass of the cell (for calculating the moment)\\
\texttt{c\_time} & Duration of the conjugation process & \texttt{bac\_friction} & Friction coefficient (Coulomb friction model)\\
\hline
\label{tab:variables}
\end{tabular}}{}
\end{sidewaystable*}
Intracellular circuitry is modelled separately, and then {\it introduced} into each cell by storing the state of the circuit in an attribute of the cell (\texttt{program}). Thus, there are effectively as many copies of the circuit as cells in the simulation. This circuit simulation is implemented in a modular fashion, so that the internal cellular ``program" may be easily replaced with any other. In this paper we demonstrate the principle using a two-component genetic oscillator as the DNA message that is exchanged through conjugation. The ordinary differential equations (ODEs) for this circuit are:
\begin{equation}
\frac{dx}{dt} = \Delta \left ( \beta \frac{1 + \alpha x^2}{1 + x^2 + \sigma y^2 } - x\right ) \label{eq1}
\end{equation}
\begin{equation}
\frac{dy}{dt} = \Delta \gamma \frac{1 + \alpha x^2}{1 + x^2} - y \label{eq2}
\end{equation}
\noindent which are detailed in \cite{PoyatosPLoS}, together with the meaning and values of the parameters (we use the same values in the code provided). We have also recently used our software platform to investigate the spatial behaviour of a {\it reconfigurable} genetic logic circuit \cite{Reconfigurable}, which demonstrates how it may easily be modified to accommodate different sets of equations.
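A minimal Python sketch of the per-cell integration of Eqs.~(\ref{eq1}) and~(\ref{eq2}) is given below (forward Euler for illustration; parameter values should be taken from \cite{PoyatosPLoS}):
\begin{verbatim}
import numpy as np

def oscillator_rhs(x, y, Delta, alpha, beta, gamma, sigma):
    # Right-hand sides of Eqs. (1)-(2), as written above.
    dx = Delta * (beta * (1 + alpha*x**2)
                  / (1 + x**2 + sigma*y**2) - x)
    dy = Delta * gamma * (1 + alpha*x**2) / (1 + x**2) - y
    return dx, dy

def integrate(x0, y0, dt, n_steps, **pars):
    # In DiSCUS, a fixed number of such steps runs per
    # iteration (see network_steps below).
    x, y = float(x0), float(y0)
    out = np.empty((n_steps, 2))
    for i in range(n_steps):
        dx, dy = oscillator_rhs(x, y, **pars)
        x += dx * dt
        y += dy * dt
        out[i] = x, y
    return out
\end{verbatim}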
The actions controlling the growth rates of cells occur on a longer time scale than the integration steps that govern molecular reactions (such as Eqs.~\ref{eq1} and \ref{eq2}). In order to ensure synchronisation, the parameter \texttt{network\_steps} defines the number of integration steps of the ODEs that run per \texttt{Gt}. Thus, \texttt{network\_steps}/\texttt{Gt} integration steps will update the attribute \texttt{program} of each cell every iteration.
Other important physical parameters listed in Table~\ref{tab:variables} are \texttt{spring\_rest\_length}, \texttt{spring\_stiffness} and \texttt{spring\_damping}; these are three parameters to model the material and behaviour of the bacterial pilus (i.e. the spring) during conjugation. Parameter \texttt{cell\_infancy} is a delay period, during which a cell is considered to be too young to conjugate (as observed experimentally \cite{Seoane2011a}). Parameters \texttt{pymunk\_steps} and \texttt{pymunk\_clock\_ticks} are used by the physics engine to update the world, and may be adjusted by the user in order to alter the performance of the simulation (machine dependent). Parameters \texttt{bac\_mass} and \texttt{bac\_friction} play a role in collision handling.
Our platform is written in {\it Python}, and makes use of the physics engine {\it pymunk} (www.pymunk.org) as a wrapper for the 2D physics library Chipmunk, which is written in C (www.chipmunk-physics.net/). As cells are represented as semi-rigid bodies in a 2D lattice, pymunk handles the physical environment on our behalf. For monitoring purposes, parameters \texttt{Gt} and \texttt{real\_Gt} allow us to establish the relation between iterations and clock minutes: $minute = \texttt{Gt}/\texttt{real\_Gt}$ (units: iterations). The platform, which we call DiSCUS (Discrete Simulation of Conjugation Using Springs), is available at the project repository at http://code.google.com/p/discus/.
\begin{figure*}[!tpb
\centerline{\includegraphics[width=150mm]{validation2.png}}
\caption{Validation of cell movement and conjugation dynamics using real data. (A): Figure extracted from \cite{Seoane2011a} where a colony of {\it Pseudomonas putida} is divided into dark red donor cells (DsRed), yellow recipient cells (YFP) and transconjugants, expressing both yellow and green light (YFP and GFP). The upper row shows the transconjugant signal, and the bottom row shows the whole community. (B and C): Simulation results. Two simulations of similar colonies are recorded over exactly the same time intervals (min). The colours of the cells match the colours observed in (A). Graphs (D), (F) and (H) are extracted from \cite{Volfson2008}, and show experimental results of {\it Escherichia coli} growth regarding density, velocity and ordering (respectively). Graphs (E), (G) and (I) correspond to simulations in similar conditions to \cite{Volfson2008}, for the same parameters (density, velocity and ordering respectively). Tests 1, 2 and 3 in graphs correspond to different spatial distributions of cells inside the microfluidic channel (details in text).}
\label{fig:validation}
\end{figure*}
\section{Results}
\label{Results}
We now describe the results of experiments to validate our conjugation model, using four sets of simulations. As we aim to understand the behaviour of cells in small-scale two-dimensional populations (as occur in microfluidic environments), we avoid the sorts of extreme overlapping situations shown in Figure \ref{fig:intro}C(right). We first validate individual conjugation dynamics; then we validate the biomechanical properties of the simulation; the third set of experiments concerns the transfer of the two-component oscillator, and the final set of experiments studies the effects of mixing on conjugation dynamics. Each set of experiments is described in detail below.
\subsection{Conjugation dynamics}
\begin{figure*}[!tpb
\centerline{\includegraphics[width=158mm]{oscillator2.png}}
\caption{Horizontal transfer of a two-component genetic oscillator. A: Transcriptional-level design. The activator (green) acts on itself and on the repressor (red) by inducing the transcription of both. The repressor acts on the activator by repressing its transcription. As a result, molecule {\it x} (as well as {\it y}) oscillates in time. B: Cells growing in a cross-shaped channel. At 250 minutes we clearly see the position of both donor (left hand, with the oscillator inside) and recipient (right hand, empty) strains. The intensity of green colour denotes the amount of molecule {\it x} inside cells at that specific time. Through conjugation, the oscillator is {\it copied} to the initial recipient strain (t = 1000 min). C: Using the same experiment as in B, measurement over time of the maximum level of molecule {\it x} in a single cell. Black arrow highlights the point in time when conjugation starts (t $\simeq$ 550 min).}
\label{fig:oscillator}
\end{figure*}
The objective of the first set of experiments is to validate the software in terms of {\it conjugation dynamics}. For that purpose, we first focus on conjugation, using images of a {\it Pseudomonas putida} population (Figure \ref{fig:validation}A) extracted with permission from \cite{Seoane2011a}. These show donor cells (dark red) growing in contact with recipients (yellow). The DNA information they share after conjugation makes the transconjugant cells display GFP (green fluorescent protein). We adjusted the parameters of our simulations until the behaviour matched the images of real cells (two simulations shown: Figures \ref{fig:validation}B and \ref{fig:validation}C), in terms of both time-series behaviour and the type of physical pattern displayed. It is important to note that the differential probabilities of conjugation of donors and transconjugants (higher in the latter) cause directional spreading of the DNA information. After the first transconjugant appears (160 minutes), the newly-formed transconjugants appear, most probably, in the immediate neighbourhood. The final parameter values used to reproduce this experiment are: \texttt{width}=5, \texttt{length}=15, \texttt{growth\_speed}=30, \texttt{p\_d}=0.001, \texttt{p\_t1}=0.02, \texttt{p\_t2}=0.05 and \texttt{c\_time}=450 (the rest of the parameters are as defined in the DiSCUS distribution). Movie {\it DemoConjugation1} (found in the project repository) shows a simulation of a similar experiment where the transconjugants do not act as new donors.
\subsection{Biomechanical properties}
The second set of validation experiments focuses on {\it biomechanical movement}. We use data from \cite{Volfson2008}, which describe an {\it Escherichia coli} colony growing in a microfluidic channel (30 $\times$ 50 $\times$ 1 $\mu$m$^3$) (Figures \ref{fig:validation}D, \ref{fig:validation}F and \ref{fig:validation}H). Using the same parameter setup (\texttt{width}=5, \texttt{length}=24, \texttt{growth\_speed}=30) we highlight how different initial positioning of cells inside the channel can affect the final result ({\it test1}, with more cells observed in the centre than at the edges; {\it test2} with all cells initially in the centre; {\it test3} with all cells homogeneously spread along the channel). Density graphs (Figures \ref{fig:validation}D and \ref{fig:validation}E) show the increasing curve as the channel becomes more populated (results vary depending on which area is considered for monitoring). Velocity gradients (Figures \ref{fig:validation}F and \ref{fig:validation}G) depict the differential velocity across the longitudinal axis of the channel with respect to the centre (we see negative values when the cells in the centre move faster than the rest). The difference in the {\it y} axis is due to our considering different spatial intervals in the velocity gradient calculation. Ordering graphs (Figures \ref{fig:validation}H and \ref{fig:validation}I) are based on calculating the cosine of a cell's angle with respect to the longitudinal axis of the channel (e.g. angle 0, cos(0)=1, completely aligned). As time increases, we see that the cells tend to align themselves.
\subsection{Internal cell ``program"}
Figure \ref{fig:oscillator} shows the results of the next set of experiments, aimed at studying the horizontal transfer of a {\it two-component oscillator}. The transcriptional-level design of the circuit is shown in Figure \ref{fig:oscillator}(A), which causes the molecular concentration of a repressor (y) and an activator (x) to oscillate over time. Each molecule is produced by a gene when its upstream promoter is activated. The activator can induce its own production at the same time as inducing the production of the repressor, which in turn inhibits the production of the activator. In Figure \ref{fig:oscillator}B we place (250 minutes) an initial donor colony (with the two-component oscillator inside) on the left-hand side of a cross-shaped channel, while a recipient colony is placed on the right.
As the equations for the oscillator have no stochasticity, every cell of the donor strain shows exactly the same state of the circuit as every other cell. At the beginning (250 minutes), these cells show a green colour (corresponding to the molecular concentration of x) which is switched off during the time intervals in which the repressor is {\it on} (see time profile of Figure \ref{fig:oscillator}C). When conjugation starts (at around t $\simeq$ 550), the newly formed transconjugant cells are given the circuit but, importantly, they do not share the state of the circuit of the cells from which they receive the message. During the DNA transfer, it is only the plasmid (circuit {\it carrier}) that is copied into the recipient; therefore, both molecular concentrations are null, and the circuit begins its functioning from the initial stage. That is why we clearly observe different green intensities within the community. This asynchronous behaviour happens only in the transconjugants, while the circuits inside the donors always run synchronously (due to deterministic equations).
A time profile of the previous experiment is shown in Figure \ref{fig:oscillator}C, where the maximum level of activator (x) concentration in a single cell (compared with the whole population) is recorded over time. Before conjugation starts, all cells in the consortium display perfect synchrony. After conjugation (shown with an arrow on the graph) there is always a cell with the maximum level of activator, which demonstrates high asynchrony. All parameter values regarding cell dimensions or conjugation probabilities are the same as in Figures \ref{fig:validation}B and \ref{fig:validation}C. Parameters relevant to the oscillator are: \texttt{network\_steps}=18 and \texttt{Gt}=450. Movie {\it DemoConjugation2} (found in the project repository) shows this experiment and Movie {\it DemoDynamics1} shows donor cells growing with a stochastic version of the oscillator.
\subsection{Effects of mixing}
\begin{figure*}[!tpb]
\centerline{\includegraphics[width=150mm]{mixing2.png}}
\caption{Dynamical mixing of bacterial strains under different conditions. (A): Three strains (purple, yellow and blue) growing in a longitudinal pipe. Detail (on the right) shows the vector field corresponding to the red square (on the left) where the arrows display both directionality and velocity of every cell. (B): Similar to (A), but with four {\it columns} placed in the middle of the pipe. (C): Similar to (A), but with {\it zig-zag} borders along the pipe. In (A), (B) and (C), the speed in vector fields is measured in arbitrary units (a.u.). (D): Six strains (three per trap) grow in two square traps on one side of the main longitudinal channel. The flow in the channels follows the direction of the arrows. (E): Similar to (D) but with a much stronger flow in channels, causing turbulence in traps (long circled arrow). (F): Vector field of experiment (E).}
\label{fig:mixing}
\end{figure*}
\begin{figure*}[!tpb]
\centerline{\includegraphics[width=60mm]{frequency2.png}}
\caption{Effects of manual mixing on conjugation frequency. (A): Recipient-trapping behaviour of a population with donors (red), transconjugants (green) and recipients (yellow). Two snapshots depict clearly-observed clusters. (B): Population after random mixing, where the clusters are automatically dissolved. (C): Graph showing conjugation frequencies (Y = T/(R + T)) of 560-minute experiments (ratio D/R = 50\%). Blue bars represent Y on an untouched population, while red bars represent Y when the population is mixed at 420 minutes. The two sets of bars correspond to experiments with different cell dimensions (1x3, left, and 1x2, right). Error bars show variation across 15 experiments of each class.}
\label{fig:frequency}
\end{figure*}
Conjugation behaviour within a population may be altered in different ways to achieve different behaviours, depending on the desired application. For example, in the previous experiments described in this paper, transconjugants are unable to act as recipients (simulating a {\it radical} entry exclusion \cite{Mapi2008}). That is to say, they will not receive more plasmids (genetic circuits) from either donors or transconjugants. Furthermore, we may also engineer the transconjugants to stop acting as {\it new donors} \cite{Fernando2012}, so that only the original donors have the ability to transfer the DNA message. {\it Mixing} of the cell population becomes essential in this last scenario, in order to ensure maximal contact between donors and recipients. In Figure \ref{fig:oscillator}B we see how, at the end of the experiment (1000 minutes), donors and the transconjugants cover different areas of the channel (left and right respectively), without being mixed.
We now study the autonomous mixing behaviour of cells under different environmental conditions, with the final set of experiments (Figure \ref{fig:mixing}). Firstly we investigate how morphological changes in a longitudinal microfluidic channel can affect the patterns formed by the consortium and its mixing. Figure \ref{fig:mixing}A shows three bacterial strains (each shown in a different colour) growing in a channel from different starting points. As we can see, their mixing is highly improbable. The main reason for this is the velocity and directionality of the cells. As the cells are washed out at the edges of the channel, all of them {\it travel} (they only have passive movement while being pushed) at variable speed from centre to left or from centre to right. This causes the cells to have the same direction (see vector field), which in turn makes mixing more difficult. In Figures \ref{fig:mixing}B and \ref{fig:mixing}C we show the result of altering the morphology of the channels, by adding columns and zig-zag walls, respectively. As a result, the cells show different directionality (see vector fields) and the strains have a higher probability of being mixed. In both experiments (unlike \ref{fig:mixing}(A)), the three strains are in contact at some point. If the experimental application relied on the conjugation of purple and yellow cell pairs, for example, we can see that these physical changes in the channel would be essential.
Another way to intensify the mixing of strains in a microfluidic trap is to change the main channel {\it flow strength}, with the objective of creating turbulence inside the trap. Figures \ref{fig:mixing}D and \ref{fig:mixing}E show a three-strain population growing in a trap (identical initial positioning of cells in both) with the only difference being that the strong flow in the main channel (white arrows) of \ref{fig:mixing}E creates turbulence (in the direction of the circled arrow). Two different colonies are simulated in each experiment, inside both symmetrical and independent traps. We see how turbulence helps the cells to get mixed, thanks to the constant change of direction they display (see vector field in Figure \ref{fig:mixing}F). Furthermore, we can avoid {\it missing} one strain, as happens with the yellow cells in \ref{fig:mixing}D (see Movie {\it DemoDynamics2}, found in the project repository, where another run of this experiment is shown).
Investigations of how manual mixing can affect conjugation frequencies are described in \cite{Fernando2012}, using an {\it Escherichia coli} population. We now reproduce those results using our software, and give valuable insight into the reasons for that behaviour: the {\it isolation} of the recipients. For that purpose (Figure \ref{fig:frequency}) we grow a population of donors (D, red) and recipients (R, yellow) in which the ratio D/R is 50\% and the transconjugants (T, green) are unable to act as new donors. The frequency of conjugation, Y, is measured as Y = T/(R + T). The graph in Figure \ref{fig:frequency}C shows the frequency after 560 minutes of {\it untouched} populations (not mixed, blue bars) and populations that have been {\it manually mixed} at 420 minutes (red bars). The difference that the mixing produces is based on the isolation of the recipients in untouched populations. Figure \ref{fig:frequency}A shows two different occasions in which clusters of recipients are formed, where the transconjugants do not allow donors to reach new possible mates. After the population is completely ``shuffled" (\ref{fig:frequency}B), the clusters are dissolved, and new pairs of donor-recipient can arise in the new topology.
An interesting result from Figure \ref{fig:frequency}C is the fact that the smaller the cell, the higher the conjugation frequencies we observe. This may be due to the fact that smaller cells are able to slip through physical gaps, and the biomechanical ordering of the population becomes more ``fuzzy". This underlines the importance of considering the physical shape of cells, since circle-shaped cells would not give valid results.
\section{Discussion}
The conjugation model presented here is the first agent-based model to explicitly simulate the conjugation process with growing rod-shaped cells. Full validation against real data is performed, which shows the capacity of the software to reproduce observed behaviour. In addition, the mixing study offers valuable insights into the design of multi-strain populations. The software also allows for genetic {\it programs} to be {\it installed} inside cells; the potential for horizontal gene transfer to recreate distributed information processing within a microbial consortium is of significant interest in synthetic biology, and the software presented will aid the design and testing of systems before their {\it wet-lab} implementation.
\section{Introduction}
The study of large random matrices in the past thirty years has successfully described measures which can be written as the exponential of
single trace invariants perturbing a Gaussian. In addition to the standard Feynman diagrammatic expansion \cite{graph-combinatorics-difrancesco},
powerful methods exist to solve such models \cite{mm-review-difrancesco}, including orthogonal polynomials (which rely on eigenvalue decomposition)
and more recently the topological expansion developed by Eynard \cite{topological-expansion}. The latter starts with the Schwinger-Dyson equations (also
known as loop equations) written in terms of the resolvent $\tr \frac{1}{z-M M^\dagger}$ where $M$ is the random $N\times N$ matrix, and
provides an intrinsic way to solve them at all orders in the $1/N$ expansion.
Random tensors are a generalization of random matrices to rank $D$ objects (having $D$ indices of size $N$ each). The study of their
large size statistics became possible thanks to the $1/N$ expansion discovered in \cite{largeN}. This expansion relies on the
construction of multi-unitary invariants, i.e. tensor contractions which are invariant under the external tensor product
of $D$ copies of ${U}(N)$, each of them acting independently on each tensor index. In contrast with random matrices, there are many
invariant monomials at a fixed degree. These monomials are indexed by colored graphs \cite{universality, uncoloring} (these colors correspond
to the position of the index in a tensor contraction: in $T_{a_1 \dotsb a_D}$, $a_1$ has position, hence color, 1, up to $a_D$ which
has color $D$). The $1/N$ expansion generalizes to any measure on a single random tensor that can be written as the exponential of
such invariants \cite{virasoro}.
In the diagrammatic approach, the graphs contributing at leading order, known as \emph{melonic graphs} (equivalent to planar graphs of matrix
models), have been identified in \cite{Bonzom:2011zz} enabling one to solve these models exactly at leading order \cite{uncoloring}.
At the core of these solutions, a \emph{universality} theorem, first derived in \cite{universality}, states that all models are
\emph{Gaussian} at large $N$ (but with a covariance which crucially depends on the joint distribution). In particular all invariant
monomials corresponding to melonic graphs at fixed degree have the same expectation value. However, this certainly does not hold
at sub-leading orders in the $1/N$ expansion, and different melonic graphs have different sub-leading corrections in $1/N$.
A method which would be fruitful to adapt to tensor models is the one developed by Eynard \cite{topological-expansion}. It first
requires to introduce an equivalent of the resolvent. Writing the matrix resolvent
like $\tr \frac1{z-M M^\dagger} = \sum_{n\geq 0} z^{-n-1} \tr (M M^\dagger)^n$, it is tempting to generalize it by replacing
$\tr (M M^\dagger)^n$ with the sum over all invariants of degree $n$. As $z$ counts the degree of each invariant, this object
does not distinguish different invariants having the same degree. Although sufficient for
the study of the leading order, this object should be refined in order to explore the finer structure of sub-leading orders,
for which a generating function which probes the structure of each invariant beyond their degree seems better adapted.
This Letter is a modest contribution in this direction. The dominant melonic invariants are in one-to-one correspondence
with $D$-ary trees with lines colored $1$ up to $D$ such that no two lines connecting a vertex to its descendants
have the same color. At leading order only the number of vertices matters, but at subsequent orders
one must distinguish between different colored trees having the same number of vertices. As a first step, we study the generating
function of these colored trees and find an explicit formula counting how many such trees with $p_i$ lines of color $i$ one can build.
Enumerations of colored trees exist in the literature (e.g. \cite{edge-colored}), but we could not find the
counting which is relevant for our purpose.
In Section \ref{sec:stating} we introduce the problem and state our main results. The generating function is presented in Section \ref{sec:generating}
and the proof of our results, based on a linear recursive sequence related to the generating function, is contained in Section \ref{sec:sequence}.
\section{Stating the problem} \label{sec:stating}
We consider a family of rooted trees defined by the following properties
\begin{itemize}
\item each vertex has at most $D$ descendants, $D\geq 2$,
\item each line receives a color $i=1,\dotsc,D$ such that no two lines connecting a vertex to its descendants have the same color.
\end{itemize}
By adding leaves (univalent vertices) appropriately, any such tree becomes a rooted $D$-ary tree with colored lines.
Let $C_{p_1\dotsb p_D}$ be the number of such trees with exactly $p_i$ lines of color $i$ for all $i=1,\dotsc,D$. The purpose
of this note is to derive an explicit formula for $C_{p_1\dotsb p_D}$.
Our strategy is based on the generating function
\begin{equation} \label{generating function}
F(g_1,\dotsc,g_D) = \sum_{p_1,\dotsc,p_D=0}^\infty C_{p_1\dotsb p_D}\ \prod_{i=1}^D g_i^{p_i}\;,
\end{equation}
and the sequence $(F_n(g_i))_{n\in\mathbbm{N}}$ defined by
\begin{equation} \label{recursive sequence}
F_0 =1,\qquad F_n(g_1,\dotsc,g_D) = \sum_{p_1,\dotsc,p_D} C^n_{p_1 \dotsb p_D}\,\prod_{i=1}^D g_i^{p_i} \quad \text{with} \quad C^n_{p_1\dotsb p_D} = \frac{n}{\sum_{i=1}^D p_i +n}\,\prod_{j=1}^D \binom{\sum_{i=1}^D p_i +n}{p_j}
\;.
\end{equation}
We will prove that $(F_n)$ is a linear recursive sequence whose characteristic equation is an algebraic equation satisfied by $F$.
Examining the roots of this algebraic equation, we will obtain the following proposition.
\begin{proposition} \label{propF_n}
The sequence $(F_n(g_i))_{n\in\mathbbm{N}}$ is a geometric sequence with common ratio $F$,
\begin{equation}
F_n(g_1,\dotsc,g_D) = \Bigl(F(g_1,\dotsc,g_D)\Bigr)^n\;.
\end{equation}
This implies that $F(g_1,\dotsc,g_D) = F_1(g_1,\dotsc,g_D)$, hence the following corollary.
\end{proposition}
\begin{corollary}
The number $C_{p_1\dotsb p_D}$ of rooted line-colored trees with maximal degree $D$ and exactly $p_i$ lines of color $i=1,\dotsc,D$ is
\begin{equation} \label{prop}
C_{p_1\dotsb p_D} =C^1_{p_1\dotsb p_D}= \frac{1}{\sum_{i=1}^D p_i +1}\,\prod_{j=1}^D \binom{\sum_{i=1}^D p_i +1}{p_j}\;.
\end{equation}
\end{corollary}
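As an illustration (ours, not part of the original derivation), the closed formula is straightforward to evaluate for any $D$; the following Python sketch computes $C_{p_1\dotsb p_D}$ directly:
\begin{verbatim}
from math import comb

def count_trees(ps):
    """C_{p_1...p_D}: rooted line-colored trees with p_i lines of color i.
    The product is divisible by P + 1 since the result counts trees,
    so integer division is exact."""
    P = sum(ps)
    out = 1
    for p in ps:
        out *= comb(P + 1, p)
    return out // (P + 1)

print(count_trees((2, 1)))     # 6, the Narayana number N(4, 3)
print(count_trees((1, 1, 1)))  # 16 trees at D = 3
\end{verbatim}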
\section{The generating function} \label{sec:generating}
The generating function \eqref{generating function} satisfies an algebraic equation which is obtained by simply observing that the root of a tree can have $k\leq D$ descendants connected by lines of colors $i_1,\dotsc,i_k$ all different. Therefore
\begin{equation} \label{tree recursion}
\begin{aligned}
F(g_1,\dotsc,g_D) &= 1 + \bigl(\sum_{1\le i_1\le D } g_{i_1} \bigr) F + \bigl(\sum_{1\le i_1<i_2 \le D} g_{i_1} g_{i_2}\bigr) F^2 + \dotsb
+ \bigl( g_1 \dotsb g_D \bigr) F^D\;,\\
&= \sum_{k=0}^D \biggl( \sum_{1\leq i_1<\dotsb <i_k\leq D}\, \prod_{l=1}^k g_{i_l} \biggr)\ [F(g_1,\dotsc,g_D)]^k \;,
\end{aligned}
\end{equation}
and $F$ is the root $x_{(0)}(g_1,\dots g_D)$ of this polynomial equation such that
\bea
\lim_{g_1,\dots g_D \to 0} x_{(0)} (g_1,\dots g_D)=1 \; .
\end{eqnarray}
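As a consistency check (ours, not part of the original argument), Eq. \eqref{tree recursion} can be solved order by order as a truncated formal power series and the coefficients compared with the closed form of the Corollary. A minimal Python sketch at $D=2$, where the series are stored as dictionaries mapping exponent pairs to coefficients:
\begin{verbatim}
from math import comb
from collections import defaultdict

DEG = 7  # keep only terms of total degree <= DEG

def mul(a, b):
    """Product of truncated bivariate series stored as {(p1, p2): coeff}."""
    c = defaultdict(int)
    for (i, j), x in a.items():
        for (k, l), y in b.items():
            if i + j + k + l <= DEG:
                c[(i + k, j + l)] += x * y
    return dict(c)

def add(*terms):
    c = defaultdict(int)
    for t in terms:
        for key, v in t.items():
            c[key] += v
    return dict(c)

one, g1, g2 = {(0, 0): 1}, {(1, 0): 1}, {(0, 1): 1}
F = dict(one)
for _ in range(DEG + 1):   # each pass fixes one more total degree
    F = add(one, mul(add(g1, g2), F), mul(mul(g1, g2), mul(F, F)))

# closed form of the Corollary (an exact integer, so // is exact)
C = lambda p1, p2: comb(p1+p2+1, p1) * comb(p1+p2+1, p2) // (p1+p2+1)
assert all(F.get((p1, p2), 0) == C(p1, p2)
           for p1 in range(DEG + 1) for p2 in range(DEG + 1 - p1))
\end{verbatim}
The fixed-point iteration converges degree by degree because the right-hand side of Eq. \eqref{tree recursion} only mixes lower total degrees into each coefficient.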
The following lemma shows how to distinguish the desired root $x_{(0)}(g_1,\dots g_D)$ from the other roots of the above polynomial.
\begin{lemma} \label{lem:x_0}
For all $R>1$, there exists $\epsilon_R>0$ such that for all $|g_1|, \dotsc, |g_D|< \epsilon_R$, the polynomial
\begin{equation} \label{eq:charD}
Q(X) = - X + \sum_{k=0}^D \Bigl( \sum_{1\leq i_1<\dotsb <i_k\leq D}\, \prod_{l=1}^k g_{i_l} \Bigr) X^k
\end{equation}
has exactly one root $x_{(0)}$ with $|x_{(0)}|<R$, all other roots $x_{(i)}$ for $i\neq 0$ having norms $|x_{(i)}|\ge R$. In particular $x_{(0)}=F$ as it is the only root satisfying $ \lim_{g_i\to 0} x_{(0)} =1$.
\end{lemma}
{\bf Proof.} To establish this lemma we use Rouch\'e's theorem whose statement is now recalled. Let $f, g$ be two holomorphic functions and $S$ a closed contour which does not contain zeros of $f$ and $g$. If for all $z\in S$
\begin{equation}
|f(z) - g(z)|< |g(z)|\;,
\end{equation}
then the number of zeros of $f$ and the number of zeros of $g$ encircled by $S$ (counted with multiplicities) are the same.
We now exploit this theorem to get bounds on the absolute value of the roots of the polynomial \eqref{eq:charD}. Take
\begin{equation}
f(z) = Q(z)\;,\qquad \text{and} \qquad g(z) = -z \; .
\end{equation}
Let $R>1$, and $S = \{ z\in\mathbbm{C}, |z|=R\}$ be the circle of radius $R$. Note that on $S$, $|g(z)|=R$. Set $\epsilon_R>0$ with $\epsilon_R<\frac{R^{1/D}-1}{R}$ so that on $S$,
\begin{equation}
|f(z) - g(z)| \leq \sum_{k=0}^D \Bigl( \sum_{1\le i_1<i_2\dots <i_k\le D} |g_{i_1}| \dotsm |g_{i_k}| \Bigr) |z|^k
< \sum_{k=0}^D \binom{D}{k} (\epsilon_R\, R)^k
= (1+\epsilon_R R)^D
< R = |g(z)| \; .
\end{equation}
As $g(z)$ has a unique root $z=0$ inside the circle of radius $R$, we conclude that $f(z)$ also has exactly one root,
which we denote $x_{(0)}$, with $ |x_{(0)}| <R $. As this is the only root which can go to one when $g_i \to 0$, it is identified with the generating function $F$.
\qed
\begin{remark}
At $D=2$, $Q(X)= 1 +(g_1+g_2-1)X + g_1 g_2 X^2$ and $Q(X)=0$ is easily solved,
\begin{equation}
x_{(0)} = \frac{1-g_1-g_2-\sqrt{(1-g_1-g_2)^2-4g_1 g_2}}{2\,g_1 g_2}\qquad \text{and} \qquad x_{(1)} = \frac{1-g_1-g_2+\sqrt{(1-g_1-g_2)^2-4g_1 g_2}}{2\,g_1 g_2}\;.
\end{equation}
\end{remark}
\section{The linear recursive sequence} \label{sec:sequence}
First we show that the sums defining each $F_n(g_i)$ in \eqref{recursive sequence} converge absolutely when $|g_i|< \frac{(D-1)^{D-1}}{D^D}$ for all $i=1,\dotsc,D$. Let $\epsilon>0$ such that $\epsilon<\frac{(D-1)^{D-1}}{D^D}$, then for all complex $g_1,\dotsc,g_D$ with norm $|g_i|<\epsilon$
\begin{equation}
|F_n(g_1,\dotsc,g_D)| \leq \sum_{p=0}^{\infty} \epsilon^p \frac{n}{p+n} \sum_{\substack{ \{p_i\}\\ \sum_i p_i = p}}
\binom{p+n}{p_1} \dotsm \binom{p+n}{p_D}
\end{equation}
The sums over $p_i$ are computed by equating the coefficients of $x^p$ in $(1+x)^{Dp+Dn}$ and $(1+x)^{p+n} \dotsm (1+x)^{p+n}$, hence
\bea\label{eq:Dcatalan}
|F_n(g_1,\dotsc,g_D)| \leq \sum_{p=0}^{\infty} \frac{n}{p+n} \binom{Dp+Dn}{p} \; \epsilon^p \; .
\end{eqnarray}
One finds using the Stirling formula that the radius of convergence of the above series is $\frac{(D-1)^{D-1}}{D^D}> \epsilon$, hence $F_{n}(g_1,\dotsc,g_D)$ converges absolutely.
\begin{lemma}\label{lem:recD}
The sequence $(F_n)$ respects the recursion
\begin{equation}
\forall n\ge 0 \qquad
F_{n+1} = F_n + \sum_{k=1}^D \biggl( \sum_{1\leq i_1<\dotsb <i_k\leq D} \prod_{l=1}^k g_{i_l} \biggr)\ F_{n+k} \;.
\end{equation}
\end{lemma}
{\bf Proof:} The recursion translates into
\bea\label{eq:recD}
\forall p_i \ge 1\qquad
C^{n+1}_{p_1,\dots p_D} = C^n_{p_1,\dots p_D} + \sum_{k=1}^{D} \sum_{1\le i_1<i_2\dots <i_k\le D}
C^{n+k}_{p_1,\dots p_{i_1}-1,\dots p_{i_k}-1,\dots p_D} \; .
\end{eqnarray}
The boundary cases, when some $p_i=0$, just reproduce the recursion at level $D-1$. Let us denote $P=\sum_{i=1}^D p_i$. The right hand side of \eqref{eq:recD} is
\begin{equation}
\frac{(P+n)^{-1} \bigl[(P +n)! \bigr]^{D} }{ \prod_{i=1}^D p_i!(P -p_i +n +1)!}
\Bigl[ n \prod_{i=1}^D (P -p_i +n +1) + \sum_{k=1}^D (n+k) \sum_{ 1\leq i_1<\dotsb <i_k\leq D } \prod_{l=1}^k p_{i_l} \prod_{j\neq i_1,\dotsc,i_k} (P -p_j +n +1) \Bigr] \; .
\end{equation}
We write $n \prod_{i=1}^D (P -p_i +n +1) = (n+1)\prod_{i=1}^D (P -p_i +n +1) - \prod_{i=1}^D (P -p_i +n +1)$ so as to re-arrange the square bracket above as
\bea\label{eq:lung}
&& n \prod_i (P -p_i +n +1) + \sum_{k=1}^D (n+k) \sum_{ 1\leq i_1<\dotsb <i_k\leq D } \prod_{l=1}^k p_{i_l} \prod_{j\neq i_1,\dotsc,i_k} (P -p_j +n +1) \crcr
&&
= (n+1) \prod_{i=1}^D ( P +n +1) - \prod_{i=1}^D (P -p_i +n +1) + \sum_{k=1}^D (k-1)
\sum_{ 1\leq i_1<\dotsb <i_k\leq D } \prod_{l=1}^k p_{i_l} \prod_{j\neq i_1,\dotsc, i_k} (P -p_j +n +1) \crcr
&& = (n+1) (P +n) (P +n +1)^{D-1} + (n+1) (P +n +1)^{D-1} - \prod_{i=1}^D (P -p_i +n +1) \crcr
&& + \sum_{ k=1 }^D (k-1)
\sum_{ 1\leq i_1<\dotsb <i_k\leq D} \prod_{l=1}^k p_{i_l} \prod_{j\neq i_1,\dots i_k}
(P-p_j +n +1) \; .
\end{eqnarray}
The first term of the last equality is exactly what is needed to form $C^{n+1}_{p_1\dotsb p_D}$. Therefore we focus now on the sum of the three other contributions,
\bea
&& (n+1) (P+n+1)^{D-1} - \prod_{i=1}^D (P+n+1-p_i) + \sum_{k=2}^D (k-1)
\sum_{ 1\leq i_1<\dotsb <i_k\leq D} \prod_{l=1}^k p_{i_l} \prod_{j\neq i_1,\dots i_k} (P +n +1 -p_j ) \crcr
&& = (n+1) (P+n+1)^{D-1} - (P+n+1)^D + (P+n+1)^{D-1} P + \sum_{k=2}^D(-)^{k+1} \sum_{ 1\leq i_1<\dotsb <i_k\leq D} \Bigl[\prod_{l=1}^k p_{i_l}\Bigr] (P+n+1)^{D-k} \crcr
&& + \sum_{ k=2 }^D (k-1) \sum_{ 1\leq i_1<\dotsb <i_k\leq D} \prod_{l=1}^k p_{i_l}
\sum_{m=0}^{D-k} ( P+n+1 )^{D-k-m} (-)^m \sum_{\substack{1\leq j_1<\dotsb <j_m \leq D\\j_t\neq i_l}} \prod_{t=1}^m p_{j_t} \; .
\end{eqnarray}
The first three terms cancel. For the remaining terms
\begin{multline}\label{eq:sum-sum}
\sum_{k=2}^D(-)^{k+1} \sum_{ 1\leq i_1<\dotsb <i_k\leq D} \Bigl[\prod_{l=1}^k p_{i_l}\Bigr] (P+n+1)^{D-k} \\
+ \sum_{ k=2 }^D (k-1) \sum_{ 1\leq i_1<\dotsb <i_k\leq D} \prod_{l=1}^k p_{i_l}
\sum_{m=0}^{D-k} ( P+n+1 )^{D-k-m} (-)^m \sum_{\substack{1\leq j_1<\dotsb <j_m \leq D\\j_t\neq i_l}} \prod_{t=1}^m p_{j_t}\;,
\end{multline}
we take into account that $k+m$ ordered integers can be partitioned in $\binom{k+m}{k}$ ways into two subsets
of $k$ and $m$ ordered integers. Thus the second sum can be rewritten as a sum over $q=k+m$
\bea
\sum_{q=2}^D\ \sum_{ 1\leq i_1<\dotsb <i_q\leq D } p_{i_1} \dotsm p_{i_q} (-)^q (P+n+1)^{D-q}
\sum_{k=2}^q (-)^k (k-1) \binom{q}{k} \; .
\end{eqnarray}
But
\begin{equation}
\sum_{k=2}^q (-)^k (k-1) \binom{q}{k} = 1 + \sum_{k=0}^q (-)^k (k-1) \binom{q}{k} = 1 - (1-1)^q - q(1-1)^{q-1} =1 \;,
\end{equation}
hence the whole quantity displayed in \eqref{eq:sum-sum} is zero and the lemma \ref{lem:recD} follows.
\qed
\medskip
Therefore, $F_{n+D}$ is obtained recursively from the set $(F_{p})_{p<n+D}$. The characteristic polynomial of this recursion is exactly $Q(X)$ (Equation \eqref{eq:charD}). It means that the solution $x_{(0)}$ is one of the common ratios of $(F_n)$. We denote the others $x_{(j)}$, and assuming they are all different,
\begin{equation} \label{formal F_n}
F_n = a\, x^n_{(0)} + \sum_j b_j\, x^n_{(j)} \;,
\end{equation}
for some functions $a(g_1,\dotsc,g_D), b_j(g_1,\dotsc,g_D)$ which can in principle be determined by $D$ initial conditions. However, we cannot use the initial conditions (remember we want to prove that $F=F_1$) so we have to proceed differently. Each common ratio in the sum \eqref{formal F_n} is controlled thanks to Lemma \ref{lem:x_0}, as $x_{(0)}$ is bounded from above and each $x_{(j)}$ is bounded from below. Now we need to control the sequence $(F_n)$ independently of its common ratios. This is done through the following lemma.
\begin{lemma} \label{lem:boundD}
For all $K>1$, there exists $\epsilon_K >0$ such that for all $g_1,\dotsc, g_D \in \mathbb{C}$ with $|g_1|,\dotsc,|g_D| < \epsilon_K $, $F_n(g_1,\dots g_D)$ is bounded geometrically by $K$,
\begin{equation}
\forall n \ge 0\qquad |F_n(g_1,\dots g_D)| \leq K^n \;.
\end{equation}
\end{lemma}
{\bf Proof.} For $n=0$, this is trivial as $F_0=1$. Now let $n\geq1$ and $K>1$. It is enough to choose $\epsilon_K$ such that
\bea
\epsilon_K < \frac{(D-1)^{D-1}}{D^D} \; , \qquad \epsilon_K < \frac{1}{2De}
\qquad \text{and} \qquad e^{2D\epsilon_K}+ \frac{1}{\sqrt{2\pi} } \frac{2De\epsilon_K}{1-2De\epsilon_K} <K \; .
\end{eqnarray}
With $\epsilon_K < \frac{(D-1)^{D-1}}{D^D}$, one can use Equation \eqref{eq:Dcatalan} which implies
\bea
|F_n| \leq \sum_{p=0}^{\infty} \frac{n}{n+p} \frac{(Dn+Dp)^p}{p!} \; \epsilon_K^p
\leq \sum_{p=0}^{\infty} \frac{ (n+p)^{p} }{p!} (D\epsilon_K)^p \; . \crcr
\end{eqnarray}
We use the fact that $(n+p)^{p} \leq (2n)^{p}$ when $p\leq n$ and $(n+p)^{p} \leq (2p)^{p}$ when $p\geq n$ to obtain the bound
\bea
&& |F_n| \leq \sum_{p=0}^{n} n^p \frac{(2D\epsilon_K)^p}{p!} + \sum_{p=n}^{\infty} \frac{p^{p}}{p!}(2D\epsilon_K)^p
\leq e^{2Dn\epsilon_K} + \sum_{p=n}^{\infty} \frac{p^{p}}{p!} (2D\epsilon_K)^p \;.
\end{eqnarray}
Now we use $p! \geq \sqrt{2\pi p} \; e^{p\ln p -p}, \forall p\geq1$ and as $\epsilon_K < \frac{1}{2De}$ we get
\begin{equation}
\begin{aligned}
|F_n| &\leq e^{2Dn\epsilon_K} + \frac{1}{\sqrt{2\pi} } \sum_{p=1}^{\infty} \frac{1}{\sqrt{p}} \; (2De\epsilon_K)^p
\leq e^{2Dn\epsilon_K} + \frac{1}{\sqrt{2\pi} } \sum_{p=1}^{\infty} (2De\epsilon_K)^p \\
&\leq e^{2Dn\epsilon_K} + \frac{1}{\sqrt{2\pi} } \frac{2De\epsilon_K}{1-2De\epsilon_K} \leq
\Bigl( e^{2D\epsilon_K}+ \frac{1}{\sqrt{2\pi} } \frac{2De\epsilon_K}{1-2De\epsilon_K} \Bigr)^n \; .
\end{aligned}
\end{equation}
\qed
We are now in position to prove Proposition \ref{propF_n}, by combining Lemmas \ref{lem:x_0} and \ref{lem:boundD}. Choose $R>K>1$ and consider $|g_i|<\inf (\epsilon_R,\epsilon_K)$ with $\epsilon_R, \epsilon_K$ as in the lemmas. First we consider the case where all $x_{(j)}$ have different norms and denote $x_{(\rm max)}\neq 0$ the one with the largest norm. In particular $|x_{(\rm max)}| \geq R>1$. At large $n$, the norm of $F_n$ is dominated by $|b_{({\rm max})}|\,|x_{({\rm max})}|^n$. Hence there exists a constant $A>0$ and an integer $N$ such that for all $n\geq N$,
\begin{equation}
|F_n(g_1,\dotsc,g_D)|\geq A\,|b_{({\rm max})}|\,|x_{\rm max}|^n \geq A\,|b_{({\rm max})}|\,R^n\;.
\end{equation}
However $|F_n|\leq K^n$ with $R>K$. Therefore we conclude $b_{({\rm max})}=0$. We can repeat this reasoning with the root $x_{(j)}$ that has the second largest norm, and so on until we get $F_n = a x_{(0)}^n$. The initial condition $F_0=1$ for all $g_i$ finally leads to $F_n = x_{(0)}^n$.
The case where some of the roots have the same norm is quite similar. The idea is to extract sub-sequences $(F_{r(n)})$ for which $F_{r(n)}$ behaves at large $n$ like a coefficient times some combination of the roots $x_{(j)}$, $j\neq 0$, where this combination is greater than $R^n$ when $|x_{(j)}|\geq R$.
\begin{remark}
At $D=2$, the number of line-colored trees with $p_1$ lines of color 1 and $p_2$ lines of color 2 is $C_{p_1 p_2} = \frac{1}{p_1+p_2+1} \binom{p_1+p_2+1}{p_1} \binom{p_1+p_2+1}{p_2}$. These numbers are known as the Narayana numbers $N(p_1+p_2+1,p_1+1)$.
\end{remark}
\begin{remark}
By summing the numbers $C_{p_1 \dotsb p_D}$ over all possible numbers of lines of each color at a fixed total number of lines $P -1$,
\begin{equation}
\sum_{\substack{ \{p_i\} \\ \sum_i p_i =P-1}} C_{p_1\dotsb p_D} = \frac{1}{P}\,\binom{D(P-1) +D}{P-1} = \frac{1}{DP+1}\,\binom{DP+1}{P}\;,
\end{equation}
we obtain the total number of $D$-ary trees on $P$ vertices, also known as the $D$-Catalan numbers (p.~200 in \cite{graham-knuth-patashnik}, Proposition 6.2.2 in \cite{stanley}, and more details in \cite{Bonzom:2011zz}).
\end{remark}
\begin{remark}
Proposition \ref{propF_n} implies that $F_n F_m = F_{n+m}$, corresponding to interesting combinatorial identities,
\begin{equation}
\sum_{\{k_i=0,\dotsc,p_i\}} C^{n}_{k_1\dotsb k_D}\,C^m_{p_1-k_1 \dotsb p_D-k_D} = C^{n+m}_{p_1\dotsb p_D}\;.
\end{equation}
For example, when $D=2$ one gets
\begin{multline}
\sum_{k_1,k_2=0}^{p_1,p_2} \frac{n\ m}{(k_1+k_2+n)\,(p_1-k_1+p_2-k_2+m)} \binom{k_1+k_2+n}{k_1} \binom{k_1+k_2+n}{k_2} \\
\times \binom{p_1-k_1+p_2-k_2+m}{p_1-k_1} \binom{p_1-k_1+p_2-k_2+m}{p_2-k_2}
= \frac{n+m}{p_1+p_2+n+m} \binom{p_1+p_2+m+n}{p_1} \binom{p_1+p_2+m+n}{p_2}\;.
\end{multline}
\end{remark}
\section*{Acknowledgements}
Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation.
\section{\label{sec:intro}Introduction}
Since the first detection of \gw[s] in 2015~\cite{GW150914}, the LIGO detectors have been upgraded multiple times, and the network of detectors now includes Advanced LIGO\xspace \cite{aLIGO}, Advanced Virgo\xspace \cite{aVirgo} and Kagra \cite{Kagra}. The third observing run of the LIGO-Virgo-Kagra network \cite{LVK} has been completed and a whole suite of new \cbc signals has been observed \cite{GWTC2, GWTC2.1, GWTC3, 3OGC, 4OGC, IAS3}. Along with improvements in the detectors, search algorithms used to search for \cbc signals have also been improved by using information from the full network and introducing new methods for removing noise \cite{ExtPyCBC, gstlalO2, mbta}.
A set of modelled \cite{findchirp, ihope, pycbc, ExtPyCBC, gstlalearly, gstlalmid, gstlalO2, mbta, mbtaO3, spiir, IAS1, IAS2, IAS3} and unmodelled \cite{cwb} search algorithms have been used to observe \cbc[s] in the past. In this paper we will focus on modelled searches. Modelled searches construct a large bank of simulated signals (templates) with a variety of masses and spins in order to cover the targeted parameter space. These templates are then each used to perform a matched filter over the data for each detector. The matched filter \snr is then compared across the detector network in order to check for consistency across detectors.
The matched filter is the optimal solution when searching for known signals assuming that only Gaussian noise is present~\cite{findchirp}. However, in the presence of non-Gaussian noise transients, large \snr[s] can be produced where no signal is present~\cite{ihope}. Gravitational-wave detectors contain many such transients \cite{noise, o3detchar}, commonly referred to as ``glitches". Glitches can produce huge values of \snr while having little resemblance to the \cbc signals being searched for. It is therefore necessary to employ signal-consistency tests to remove as many of these glitches as possible in order to reduce the rate of false alarms. In this paper we will explore how to effectively develop and tune such a signal-consistency test to separate the signal and noise populations. We will apply this methodology to the PyCBC search \cite{ExtPyCBC} as a demonstration of its use. However, we strongly emphasize that this method will be applicable to any modelled search.
In the PyCBC search, multiple signal-consistency tests are employed. Firstly, the matched filter \snr is modified using two $\chi^2$\xspace tests that compare the morphology of the signal in the data with that of the template \cite{BAchisq, SGchisq}, penalising any signals found to be inconsistent. Peaks in the re-weighted \snr time-series are then compared across the detector network, checking for consistency in the template parameters, as well as the relative time of arrival, amplitude and phase of the signals \cite{phase}. Each potential candidate event is then given a detection statistic in order to rank the likelihood that it is a real signal.
After these tests are performed, the remaining signals are shifted in time relative to each other to empirically measure a non-astrophysical background. The detection statistic values of the background signals can then be compared to the observed coincident signals in order to produce a false alarm rate for each observed signal \cite{pycbc}.
The existing signal consistency tests remove a large number of glitches from the single detector data, reducing the rate of background coincidences, and therefore reducing the false alarm rate of observed signals. However, a large number of glitches continue to be detected, particularly when searching with high mass templates \cite{pycbcimbh}, $M_{\mathrm{total}} \gtrsim 100 M_\odot$, where the signal may only be in the detector frequency band for a fraction of a second.
In this work we propose a method of creating and tuning new signal-consistency tests in order to separate signal and noise populations. We show that we can use machine-learning techniques such as \sgd to optimise these tests efficiently, and thus improve the sensitivity of our searches to \gw signals.
The use of machine-learning in \gw searches is an area where much work is being done. Several works have explored the use of neural networks to replace the matched filter statistic \cite{George:2016hay, George:2017pmj, Gabbard:2017lja, Gebhard:2019ldz, Yan:2021wml}, using convolution neural networks to predict the probability of a signal being present.
One advantage of convolution neural networks compared to a matched filter is the computational cost involved. This is particularly important for multi-messenger astronomy, where prompt detection of \gw signals could enable follow-up with electromagnetic observations. It has been shown that convolution neural networks could be an effective method for enabling such observations \cite{Wei:2020sfz, Wei:2020xrl, Wei:2020ztw, Schafer:2020kor, Krastev:2019koe, Krastev:2019koe}.
These works have shown that machine-learning can, in some cases, compete with the sensitivity of a matched filter search when applied to a single detector. However, such methods have not yet been demonstrated to be effective in large-scale searches for \cbc[s] covering a wide range of parameters. Current methods also do not produce additional information, such as the amplitude and phase of the signal, used in the matched filter search to test triggers across detectors; they therefore lose sensitivity when compared to a matched filter search using a network of detectors \cite{Schafer:2021cml}.
In this work we choose to introduce a machine learning model within the current matched filter framework in order to utilise the statistical tests already available to us. We implement a new $\chi^2$\xspace test using \sgd to train a set of tunable parameters within the model, optimising the test using noise triggers from a previous search along with a set of simulated signals. By implementing the model as a $\chi^2$\xspace test it should remain rigorous in the case of unseen data, such as a new population of glitches.
We start by describing the general use of $\chi^2$\xspace tests within \gw searches, and the tests currently used within the PyCBC search in Sec. \ref{sec:background}. In Sec. \ref{sec:chisq} we describe our proposed framework for training new $\chi^2$\xspace tests using machine-learning methods. We then utilise this framework in Sec. \ref{sec:imbh} to train a new $\chi^2$\xspace test for use in a search for \imbh[s]. In Sec. \ref{sec:results} we present the effect of this trained model on the \imbh search showing that it improves the sensitivity of the search, particularly at high masses, where the effect of non-Gaussian noise is most prominent.
\section{\label{sec:background}A Review of $\chi^2$\xspace Tests in GW Searches}
We will begin by reviewing existing $\chi^2$\xspace signal-consistency tests used in \gw searches.
In order to search for signals in strain data, a matched filter is used. Assuming the strain data takes the form of $s(t) = n(t) + h(t)$, where $n(t)$ is stationary Gaussian noise and $h(t)$ is a known signal, matched filtering is the optimal method for detecting the signal $h(t)$. The calculated \snr is analytically maximised over the amplitude and phase of the signal. The \snr is calculated as:
\begin{equation}
\rho^2 = \frac{|(s|h)|^2}{(h|h)},
\end{equation}
where the inner product is:
\begin{equation}
(a|b) = 4\int^{f_{\mathrm{high}}}_{f_{\mathrm{low}}}
\frac{\Tilde{a}(f) \Tilde{b}^{\ast}(f)}{S_n(f)} df
\end{equation}
and $S_n(f)$ is the one-sided \psd of the noise. However, in the case of non-Gaussian noise, large peaks in the \snr time-series can be produced. Short bursts of non-Gaussian noise are often referred to as ``glitches"; these can produce extremely large values of \snr with little similarity to the signal being searched for. In order to remove such triggers, signal-consistency tests may be introduced to test the morphology of the trigger compared to that of the search template.
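For concreteness, a minimal NumPy sketch of this frequency-domain matched filter is given below. It is illustrative only (not the PyCBC implementation) and assumes real time series \texttt{s} and \texttt{h} sampled at rate \texttt{fs}, a \psd sampled at the \texttt{rfft} frequencies, and a low-frequency cutoff imposed by setting the \psd to infinity below $f_{\mathrm{low}}$:
\begin{verbatim}
import numpy as np

def snr_timeseries(s, h, psd, fs):
    """|rho(t)| for real s, h; psd has length len(s)//2 + 1."""
    n = len(s)
    dt, df = 1.0 / fs, fs / n
    sf = np.fft.rfft(s) * dt              # approximate continuous FTs
    hf = np.fft.rfft(h) * dt
    zf = np.zeros(n, dtype=complex)       # one-sided integrand -> complex z(t)
    zf[: n // 2 + 1] = sf * np.conj(hf) / psd
    z = 4.0 * df * n * np.fft.ifft(zf)    # complex filter output at all times
    sigma2 = 4.0 * df * np.sum(np.abs(hf) ** 2 / psd)   # (h|h)
    return np.abs(z) / np.sqrt(sigma2)
\end{verbatim}
Taking the magnitude of the complex output implements the analytic maximisation over phase.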
A $\chi^2$\xspace test is one such test, constructed by performing a matched filter with additional templates, $\hat{h}_{\bot}$, that are orthogonal to the search template such that $(h|\hat{h}_{\bot}) = 0$. Given well-behaved noise, and a signal which is an exact match to the search template, the squared matched filter \snr of the orthogonal template will follow a $\chi^2$\xspace distribution with 2 degrees of freedom~\cite{findchirp}. However, when there is non-Gaussian noise present, such as a glitch, the \snr will deviate from this distribution, taking a larger value. By examining triggers on the \snr-$\chi^2$\xspace plane the signal and noise populations may then be separated.
After choosing a suitable template, $\hat{h}$, to be used for the $\chi^2$\xspace test one first normalises it so that $(\hat{h}|\hat{h}) = 1$. The part of the signal orthogonal to the search template is then selected:
\begin{equation}\label{eq:ortho}
\hat{h}_{\bot} = \frac{\hat{h} - (\hat{h}|h)h}{\sqrt{1 - (\hat{h}|h)^2}}.
\end{equation}
$N$ such templates are created in this way and their \snr[s] are combined to produce the $\chi^2$\xspace statistic:
\begin{equation}
\chi^2_{r} = \frac{1}{2N}\sum^N_i\rho^2_i.
\end{equation}
In the case that the templates are orthogonal to one another this will produce a reduced $\chi^2$\xspace distribution with $2N$ degrees of freedom. However, in general, orthogonality between the $\chi^2$\xspace templates is not always enforced, in which case the statistic will follow a generalised $\chi^2$\xspace distribution with increased variance.
The $\chi^2$\xspace test also assumes that the signal in the data matches the search template. However, due to the discrete placement of templates within the parameter space of the search there will be some mismatch between these two signals. This mismatch means that the $\chi^2$\xspace template will no longer be orthogonal to the signal and some of the signal's \snr will be included in the $\chi^2$\xspace statistic, increasing the mean of the distribution, creating a non-central $\chi^2$\xspace distribution. A similar effect will be caused if the \psd is miscalculated, or if the noise is non-stationary.
In general, any $N$ templates can be chosen to construct a $\chi^2$\xspace test. However, this test will be most effective when templates are chosen that have some overlap with known non-Gaussian noise in the data, in particular, aiming to target noise which produces high \snr triggers for the targeted parameter space.
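A sketch of this generic construction, under the same assumptions as the matched-filter example above (frequency-domain arrays, a \psd sampled at the same frequencies), might look as follows; it is an illustration of the equations above, not the search code:
\begin{verbatim}
import numpy as np

def inner(af, bf, psd, df):
    """Noise-weighted inner product (a|b), real part."""
    return 4.0 * df * np.real(np.sum(af * np.conj(bf) / psd))

def generic_chisq(sf, hf, test_hfs, psd, df):
    """Reduced chi^2 from N test templates projected orthogonal to hf."""
    hf = hf / np.sqrt(inner(hf, hf, psd, df))   # normalise search template
    chi2 = 0.0
    for tf_ in test_hfs:
        tf_ = tf_ / np.sqrt(inner(tf_, tf_, psd, df))
        o = inner(tf_, hf, psd, df)
        t_perp = (tf_ - o * hf) / np.sqrt(1.0 - o ** 2)  # orthogonal part
        # |(s|t_perp)|: phase-maximised SNR of the orthogonal template
        rho = 4.0 * df * np.abs(np.sum(sf * np.conj(t_perp) / psd))
        chi2 += rho ** 2
    return chi2 / (2.0 * len(test_hfs))
\end{verbatim}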
To separate the signal and noise populations a re-weighted \snr is then calculated that penalises triggers where the $\chi^2_{r}$ is larger than expected. This re-weighting takes the general form
\begin{equation}\label{eq:general_reweight}
\hat{\rho} = f(\rho, \chi_r^2).
\end{equation}
This re-weighted \snr is then used to rank potential candidate events.
\subsection{\label{sec:existing} Existing $\chi^2$\xspace Tests in the PyCBC search}
There are currently two $\chi^2$\xspace signal-consistency tests employed within the PyCBC search. The first of these is the Allen $\chi^2$\xspace test \cite{BAchisq}. This test divides the template into $p$ independent frequency bins, splitting the template such that each bin will contribute equally to the \snr. The \snr contribution for each of these sub-templates is calculated and compared to the expected value, calculating the $\chi^2$\xspace statistic as
\begin{equation}
\chi_r^2 = \frac{p}{2p - 2} \sum^{p}_{i=1} \left( \frac{\rho}{p} - \rho_{bin,i} \right)^2.
\end{equation}
This will take large values when a glitch is present in the data that does not share the same morphology as the search template. Specifically, this test checks the distribution of power along the track of the \cbc signal. Although this test does not follow the exact form described in the previous section, it follows the same principle, detecting excess power along the track of the signal. The re-weighted \snr \cite{chicurrent} is then calculated as
\begin{equation}\label{eq:BAreweight}
\Tilde{\rho} =
\begin{cases}
\rho, & \text{if $\chi_r^2 \leq 1$, } \\
\rho \left[ \left(1 + (\chi_r^2)^3\right) /2 \right]^{-\frac{1}{6}}, & \text{if $\chi_r^2 > 1$.}
\end{cases}
\end{equation}
By ranking the candidates based on this re-weighted \snr, a large number of noise triggers can be down-weighted. This test is particularly powerful for lower mass systems where the signals span a wide range of frequencies within the band of the detectors, allowing for a larger number of frequency bins to be used effectively. The number of frequency bins to be used is varied as a function of the search templates parameters \cite{chibins}. The number of frequency bins to use and the form of the \snr re-weighting are currently tuned empirically and have evolved over time \cite{chiold, chicurrent, chibins}.
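The sketch below illustrates one possible implementation of this power-binning test (our illustration, not the PyCBC code). It interprets the per-bin contributions as complex filter outputs and chooses bin edges so that each bin carries an equal share of the expected $\rho^2$:
\begin{verbatim}
import numpy as np

def allen_chisq(sf, hf, psd, df, p):
    """Power chi^2 with p bins of equal expected SNR contribution."""
    w = 4.0 * df * np.abs(hf) ** 2 / psd       # SNR^2 density; sums to (h|h)
    cw = np.cumsum(w) / np.sum(w)
    edges = np.searchsorted(cw, np.linspace(0.0, 1.0, p + 1))
    edges[-1] = len(w)
    # complex SNR density, normalised so the bin sums add up to rho
    z = 4.0 * df * sf * np.conj(hf) / psd / np.sqrt(np.sum(w))
    z_bins = np.array([np.sum(z[a:b])
                       for a, b in zip(edges[:-1], edges[1:])])
    rho = np.sum(z_bins)                       # total complex SNR
    return p / (2.0 * p - 2.0) * np.sum(np.abs(rho / p - z_bins) ** 2)
\end{verbatim}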
The second $\chi^2$\xspace test is the sine-Gaussian $\chi^2$\xspace test \cite{SGchisq}. This works by performing a matched filter with $n$ sine-Gaussian signals, each being placed at frequencies higher than those expected from the search template. As these sine-Gaussian signals do not overlap with the search template, we can construct a $\chi^2$\xspace test as the sum of their \snr[s]
\begin{equation}
\chi^2_{r,sg} = \frac{1}{2n}\sum^n_i\rho^2_{sg,i}.
\end{equation}
This statistic tests if excess power is present above the final frequency of the search template. When excess power is present large values of $\chi^2_{r,sg}$ will be produced and the \snr is re-weighted again
\begin{equation}\label{eq:SGreweight}
\Tilde{\rho}_{sg} =
\begin{cases}
\Tilde{\rho}, & \text{if $\chi^2_{r,sg} \leq 4$}, \\
\Tilde{\rho} \left(\chi^2_{r,sg} / 4 \right)^{-\frac{1}{2}}, & \text{if $\chi^2_{r,sg} > 4$}.
\end{cases}
\end{equation}
The addition of this second test further reduces the rate of noise triggers due to glitches. This test has a significant impact for higher mass templates where there is a population of short duration glitches known as ``blips" \cite{blips}. A subset of these blips have power extending to high frequencies allowing this test to remove them successfully \cite{SGchisq}. However a large number of these glitches are not removed by this test \cite{pycbcimbh}.
Both of these tests have been tuned empirically by hand, choosing the number of frequency bins to be used and the placement of the sine-Gaussian signals, as well as the exact form of the re-weighting of the \snr. In the next section we propose an approach that allows us to create and tune new $\chi^2$\xspace tests using a data-driven approach.
\section{\label{sec:chisq}Auto Tuning of a $\chi^2$\xspace signal-consistency test}
We propose a framework in which we create new $\chi^2$\xspace tests and empirically tune them based on a set of training data. To achieve this we take a set of noise triggers from a previous search along with a set of simulated signals and use \sgd to tune the parameters of our model.
In order to optimise the parameters of our chosen model we first must define a loss function, this is the quantity that we aim to minimise during the training process. The loss function is a function of the triggers \snr, $\rho$, and its \snr re-weighted by the new $\chi^2$\xspace test described in Sec. \ref{sec:imbh}, $\hat{\rho}$. For this work we choose to define a separate loss function for noise triggers and simulated signals. The loss functions used in the case of noise triggers is,
\begin{equation}
L_{n}(\hat{\rho}, \rho) =
\begin{cases}
\hat{\rho} - 4, & \text{if $\hat{\rho} > 4$} \\
0, & \text{if $\hat{\rho} \leq 4$}
\end{cases}
\end{equation}
which penalises any cases where the re-weighted \snr is greater than 4. Below this threshold the PyCBC search currently discards all triggers so it is unnecessary to reduce it any further.
The loss function used for simulated signals is:
\begin{equation}
L_{inj}(\hat{\rho}, \rho) = \rho - \hat{\rho}
\end{equation}
This penalises the case where the $\chi_r^2$ value is large and the \snr is reduced. This contribution to the loss will also allow us to train a function that re-weights the \snr as in Eq. \ref{eq:general_reweight} in order to create a greater separation between the signal and noise populations in the \snr-$\chi^2$\xspace plane.
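In \tensorflow these two loss terms can be written directly; the sketch below assumes \texttt{rho} and \texttt{rho\_hat} are batch tensors of matched-filter and re-weighted \snr[s]:
\begin{verbatim}
import tensorflow as tf

def noise_loss(rho_hat):
    # penalise re-weighted SNR only above the search threshold of 4
    return tf.reduce_mean(tf.maximum(rho_hat - 4.0, 0.0))

def injection_loss(rho, rho_hat):
    # penalise any SNR lost by down-weighting simulated signals
    return tf.reduce_mean(rho - rho_hat)
\end{verbatim}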
To update the parameters of the model we must then calculate the gradients with respect to the loss function. This is done using backpropagation after calculating the loss function using a set of training data. The $\chi^2$\xspace model and matched filter are implemented in \tensorflow \cite{tensorflow2015-whitepaper, tensorflow_developers_2021_5177374}, allowing the gradients to be tracked through the calculation and the parameters updated.
Stochastic gradient descent has been used widely in the field of deep learning to optimise extremely complex models \cite{sgd, DeepLearning}; this framework therefore allows us to produce highly complex transformations while being able to tune them effectively to the detector data at a reasonable computational cost.
In the next section we will describe one such model and the data used to train the model.
\section{\label{sec:imbh}A $\chi^2$\xspace test for intermediate mass black hole searches}
To demonstrate the training scheme described in the previous section we will attempt to train a $\chi^2$\xspace test that improves the separation between glitches and signals when searching for \imbh[s], which we consider as signals with $M_{\rm tot}> 100 M_{\odot}$. In this mass range the Allen $\chi^2$\xspace test described in Sec. \ref{sec:existing} has a limited effect due to the systems merging at low frequencies and covering a relatively small frequency range within the detector bandwidth. The sine-Gaussian $\chi^2$\xspace test is successful in removing a population of glitches that affect templates within this mass range; however, many glitches remain that do not have significant high frequency power~\cite{pycbcimbh}.
In this section we define a transformation within the framework outlined above, using data from a previous \imbh search to train the $\chi^2$\xspace test and improve the sensitivity of the search.
\subsection{\label{sec:model}Creating $\chi^2$\xspace templates}
Existing $\chi^2$\xspace tests effectively test the distribution of power of a candidate event along the track of the signal in the time-frequency plane and test for excess power at high-frequencies. We aim to test for excess power in a frequency band similar to those of the search templates, aiming to cover areas of the time-frequency plane not currently covered by existing tests.
To achieve this we transform the search template itself, shifting the template in time and frequency. The optimal values for these time and frequency shifts will depend on the template being used and the noise present in the data. Time shifted templates have previously been used in the PyGRB search \cite{Harry:2010fr, Williamson:2014wma} to create a $\chi^2$\xspace signal-consistency test using fixed values for the time shifts. We propose a model that allows different time and frequency shifts for each template, tuning these values based on the current data.
The time and frequency shifts are selected using a dense neural network. The template is first turned into an input by sampling it between 12 Hz and 256 Hz with a sample width of 0.1 Hz, we then take the absolute value of the template and normalise it so that the mean of the input is one. This input is then passed to a dense neural network with two output values between -1 and 1. The first of these values is used as the time shift after being scaled by the maximum allowed time shift $\Delta t_{\mathrm{max}}$, similarly, the second value is used as the frequency shift after being scaled by the maximum allowed frequency shift $\Delta f_{\mathrm{max}}$.
The dense neural network consists of 11 dense layers, using the Rectified Linear Unit (\textsc{ReLu}) activation functions for hidden layers and the hyperbolic tangent function for the output layer. In order to produce multiple time/frequency shift pairs we train multiple networks with this configuration. However, to speed up the training of this model the first 6 dense layers are shared between each network. The sizes of the dense layers are listed in Table. \ref{tab:architecture}.
\begin{table}[t]
\centering
\begin{tabular}{l c | r}
Layer & & Output Size \\
\hline
Input & & 2440 \\
Dense + \textsc{ReLu} & * & 128 \\
Dense + \textsc{ReLu} & * & 128 \\
Dense + \textsc{ReLu} & * & 64 \\
Dense + \textsc{ReLu} & * & 64 \\
Dense + \textsc{ReLu} & * & 32 \\
Dense + \textsc{ReLu} & * & 32 \\
Dense + \textsc{ReLu} & & 16 \\
Dense + \textsc{ReLu} & & 16 \\
Dense + \textsc{ReLu} & & 8 \\
Dense + \textsc{ReLu} & & 8 \\
Dense + tanh & & 2 \\
\end{tabular}
\caption{This table details the architecture of the neural network used to calculate time and frequency shifts. The input is the absolute value of the search template sampled at 0.1 Hz between 12 Hz and 256 Hz. Dense layers transform their input using a matrix multiplication followed by the addition of a bias vector, the output is then passed to the activation function listed. The hyperbolic tangent activation function of the final layer ensures the output is in the range $[-1, 1]$. The two outputs are multiplied by $\Delta t_{\mathrm{max}}$ and $\Delta f_{\mathrm{max}}$ respectively to generate the time and frequency shifts. 4 such networks are generated, the layers marked with an asterisk (*) share their weights between the 4 networks.}
\label{tab:architecture}
\end{table}
After calculating these shifts they are applied to the template before using Eq. \ref{eq:ortho} to generate our $\chi^2$\xspace templates.
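A Keras sketch of this architecture is shown below. The maximum shifts $\Delta t_{\mathrm{max}}$ and $\Delta f_{\mathrm{max}}$ enter only as output scales; the numerical values used here are placeholders, not the tuned values of the search:
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def build_shift_network(n_heads=4, dt_max=0.5, df_max=8.0):
    # dt_max / df_max are placeholder maxima, assumed for illustration
    inp = tf.keras.Input(shape=(2440,))        # |h(f)|, 12-256 Hz at 0.1 Hz
    x = inp
    for units in (128, 128, 64, 64, 32, 32):   # shared trunk (starred layers)
        x = layers.Dense(units, activation="relu")(x)
    scale = tf.constant([dt_max, df_max], dtype=tf.float32)
    outs = []
    for _ in range(n_heads):                   # independent per-shift heads
        h = x
        for units in (16, 16, 8, 8):
            h = layers.Dense(units, activation="relu")(h)
        o = layers.Dense(2, activation="tanh")(h)   # outputs in [-1, 1]
        outs.append(layers.Lambda(lambda t: t * scale)(o))
    return tf.keras.Model(inp, outs)
\end{verbatim}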
In addition to this model we must define a function to re-weight the \snr with the $\chi^2_r$ value. We create a parameterised model that can reproduce the re-weighting in Eq. \ref{eq:BAreweight} and \ref{eq:SGreweight}.
\begin{equation}\label{eq:reweight}
\hat{\rho} =
\begin{cases}
\rho, & \text{if $\chi_r^2 \leq \sigma$}, \\
\rho \left(\left(\delta + (\chi^2_{r} / \sigma)^{\beta} \right) / \left(\delta + 1 \right)\right)^{-\frac{1}{\alpha}}, & \text{if $\chi_r^2 > \sigma$}.
\end{cases}
\end{equation}
Here $\sigma$, $\alpha$, $\beta$ and $\delta$ are parameters that can also be tuned to increase the effectiveness of the test. This re-weighting leaves any signals with $\chi_r^2$ less than the threshold, $\sigma$, unchanged. At large $\chi_r^2$ values $\alpha$ and $\beta$ determine how quickly the re-weighted \snr decreases with the $\chi_r^2$ value, while $\beta$ and $\delta$ affect the transition between these two regimes.
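Written as a differentiable \tensorflow function, with $\sigma$, $\alpha$, $\beta$ and $\delta$ supplied as trainable variables, the re-weighting might look like the following sketch:
\begin{verbatim}
import tensorflow as tf

def reweight_snr(rho, chi2r, sigma, alpha, beta, delta):
    """Leave rho unchanged below the threshold sigma, down-weight above."""
    penal = ((delta + (chi2r / sigma) ** beta)
             / (delta + 1.0)) ** (-1.0 / alpha)
    return tf.where(chi2r <= sigma, rho, rho * penal)
\end{verbatim}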
\subsection{Data}
In order to most effectively train the model we target glitches that are missed by previous signal-consistency tests. We achieve this by performing a search using the setup described in \cite{pycbcimbh} covering $\sim 45$ days of data from the first half of the third observing run. The data used in this search is available from GWOSC \cite{LOSC, GWOSC}. From this search we then select a set of noise triggers with $\Tilde{\rho}_{sg} > 6$ and $6 \leq \rho \leq 64$. These noise triggers may have been down-weighted by existing signal-consistency tests, but have not been down-weighted enough to remove them from the analysis. In order to avoid including real signals we remove any triggers that are coincident across multiple detectors. This will also remove a number of noise triggers that have formed coincident triggers; however, enough triggers remain to create a substantial training set. These triggers are then clustered over a window of 1 second and the triggers with the largest $\Tilde{\rho}_{sg}$ in that window are kept. We record the times of the triggers and the parameters of the template that produced them.
We also select a set of simulated signals to include during training. From the list of simulated signals that were recovered by the search, with false-alarm rates smaller than 1 per year, we select a set using the same constraints as the noise triggers. For these triggers we record the parameters of the simulated signal, as well as the template that recovered the signal. By using the template that recovered the signal in the search we are including the effect of template mismatch within the training scheme; this allows the \snr re-weighting step in Eq. \ref{eq:reweight} to be tuned to account for this contribution. The simulated signals used in this analysis include effects from precession and higher-order modes that are not present in the template bank; by including these effects in the analysis we can train the model to avoid identifying these effects as noise, maintaining our sensitivity to these signals.
For each sample, the strain is loaded at 2048 Hz; for samples containing simulated signals, the signal is then added. The strain data is high-pass filtered at 12 Hz and PSDs are calculated using 512 seconds of data around the time of the sample, following the same procedure as the PyCBC search. 64 seconds of data around the trigger is then selected, ensuring the trigger is not within the first 32 seconds or the last 16 seconds to allow time for the inspiral and ringdown of the search templates.
The search templates are generated using the \textsc{SEOBNRv4\_ROM} \cite{SEOBNRv4_1, SEOBNRv4_2} waveform model, and are generated at the same length and sample frequency as the strain data. The simulated signals are generated using the \textsc{NRSur7DQ4} \cite{NRSur7DQ4}, \textsc{SEOBNRv4} \cite{SEOBNRv4_1, SEOBNRv4_2} and \textsc{SEOBNRv4HM} \cite{SEOBNRv4HM_1, SEOBNRv4HM_2} waveform models.
In order to ensure the noise and signal samples have similar importance during training we select an equal number of each. Additionally, to ensure that the parameters in Eq. \ref{eq:reweight} are trained to separate the noise and signal populations across a range of \snr[s], we bin our samples by \snr and select an equal number of noise and signal samples for each bin. The boundaries of these \snr bins are 6, 8, 10, 14, 18, 24 and 64. In each of these \snr bins we draw 1200 noise samples and 1200 signal samples from those remaining after filtering has been applied.
We set aside 10\% of all samples to be used as a validation set to monitor the performance of the model on unseen data; all other samples are used to train the model. With six \snr bins and 1200 noise plus 1200 signal samples per bin, this gives 14400 samples in total: 12960 for training and 1440 for validation.
\subsection{Training}
The training of the model is performed in batches, each batch contains 32 samples from the training set described in the previous section. For each sample the peak \snr within 0.1 seconds of the trigger time is calculated. The search template is then generated and passed to the transformation described in Sec. \ref{sec:model}. The resulting templates are then normalised and the orthogonal templates calculated using Eq. \ref{eq:ortho}. These templates are then used to calculate $\chi_r^2$ at the time of the peak \snr and the re-weighted \snr is calculated. This value can then be passed to the loss function in order to calculate the loss values for the batch.
Once the losses have been calculated, backpropagation is used to obtain the gradients of the loss function with respect to the trainable weights of the network described in Sec. \ref{sec:model}, as well as the trainable parameters in Eq. \ref{eq:reweight}. Stochastic gradient descent is then used to apply small changes to the variables based on the calculated gradients. In order to speed up training and improve performance for sparse gradients we use Nesterov momentum \cite{Nesterov1983AMF} when calculating the parameter updates.
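In one common formulation of Nesterov momentum, with momentum coefficient $\mu$ and learning rate $\eta$, the update for parameters $\theta$ reads
\begin{equation*}
v_{t+1} = \mu v_t - \eta \nabla_\theta L(\theta_t + \mu v_t), \qquad \theta_{t+1} = \theta_t + v_{t+1},
\end{equation*}
i.e. the gradient is evaluated at the look-ahead point $\theta_t + \mu v_t$ rather than at $\theta_t$.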
Before training the full model we perform a pre-training phase where only the parameters of the \snr re-weighting in Eq. \ref{eq:reweight} are trained. This step is faster than training the full model, and performing it first makes the training of the $\chi^2$\xspace model more effective early in the training. After this step we proceed with the main training stage, training the parameters of the model described in Sec. \ref{sec:model} and the \snr re-weighting in Eq. \ref{eq:reweight} at the same time.
The main training step is repeated until all samples in the training set have been analysed 25 times, taking a total of $\sim 24$ hours using 8 CPU cores. During training the learning rate determines how quickly parameters are changed in response to the calculated gradients. In order to improve convergence late in the training stage we employ learning rate decay. After each full cycle of the training set the learning rate is multiplied by a factor of 0.9, allowing the model to make smaller adjustments late in training.
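For concreteness, with an initial learning rate $\eta_0$ the rate after $n$ full cycles of the training set is $\eta_n = 0.9^{\,n}\,\eta_0$, so after the 25 cycles used here the final rate is $0.9^{25}\,\eta_0 \approx 0.07\,\eta_0$.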
\subsection{Trained Model}
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth]{loss.pdf}
\caption{The average loss calculated using the unseen test data as it changes with the number of training batches used. The average losses contributed by noise samples and signal samples are also plotted. This shows a large improvement in the loss contributed by noise samples, while the loss from signals remains reasonably steady.}
\label{fig:loss}
\end{figure}
As shown in Fig. \ref{fig:loss}, the loss calculated using the test set decreases as training continues. This change is mainly driven by the model down-weighting noise triggers, while the contribution from signals changes less over the course of the training. The effect of this training can also be seen in Fig. \ref{fig:metric}: as training continues, a larger fraction of the \snr is removed as noise triggers are targeted more effectively by the model, while signals are left relatively unaffected.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth]{metric.pdf}
\caption{The average fraction of the \snr removed calculated using the unseen test data as it changes with the number of training batches used, split by noise and signal samples.}
\label{fig:metric}
\end{figure}
In Fig. \ref{fig:snr_chi} we can see that the noise and signal populations are well separated in the \snr-$\chi^2$\xspace plane, particularly at high \snr[s]. The parameters in Eq. \ref{eq:reweight} have been trained such that the majority of signal samples are below the threshold, $\sigma$, while noise samples above the threshold are heavily down-weighted.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth]{snr_chi.pdf}
\caption{The test samples plotted in the \snr-$\chi^2$\xspace plane after training is complete. Lines of constant re-weighted \snr are plotted. We can see that at high \snr values there is good separation between the noise samples (black dots) and signal samples (blue triangles), allowing the model to down-weight noise triggers heavily.}
\label{fig:snr_chi}
\end{figure}
The trained parameters are available as a supplementary data file at: \url{https://icg-gravwaves.github.io/chisqnet/}
\section{\label{sec:results}Effect on an Intermediate Mass Black Hole Search}
In this section we show the effect of introducing this model to a search for \imbh[s]. For this test we carry out a search following the configuration set out in \cite{pycbcimbh}, covering $\sim 8$ days of data from the first half of the third observing run. We run the search twice, with the only change being the introduction of the model trained in the previous section. To ensure that the model generalises to new data we run this search on a stretch of data completely separate from that used during training.
By introducing this model, noise triggers can be effectively down-weighted. Fig. \ref{fig:cumnum} shows the change in the number of triggers found when using the new ranking statistic. This reduction in triggers will reduce the number of coincident noise triggers in the foreground and the empirically measured background. It is this decrease in the rate of background triggers that produces an increase in the significance of remaining foreground triggers, thereby improving the sensitivity of the search.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth]{hist.pdf}
\caption{The cumulative number of single-detector triggers with ranking statistics below a given value. We can see a reduction in the number of single-detector triggers for all three detectors when changing from the previous ranking statistic (dashed line) to the new ranking statistic (solid line) including the new tuned model.}
\label{fig:cumnum}
\end{figure}
We evaluate the sensitivity of the search using a number of simulated signals added to the data and analysed in the same way as the main search. These simulated signals follow the same distribution as those in \cite{pycbcimbh}. The \textsc{SEOBNRv4} and \textsc{SEOBNRv4HM} waveform models are used to generate aligned spin signals, with total masses in the range $[100, 600] M_\odot$ and mass ratios in the range $[1, 10]$. Precessing signals are generated using the \textsc{NRSur7DQ4} waveform model with total masses in the range $[100, 600] M_\odot$, mass ratios in the range $[1, 4]$ and component spins isotropically distributed. For all simulated signals the distance is drawn uniformly in the range $[0.5, 11]$ Gpc, with isotropic sky positions and binary orientations. The sensitive volume of the search is then calculated by applying a threshold to the calculated false-alarm rate of 1 per year and measuring the detection efficiency in a number of distance bins. The detection efficiencies are then multiplied by the volume enclosed in the distance bins and the volumes summed to find the total sensitive volume of the search.
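The following is a minimal sketch of this volume estimate; the bin edges and per-bin efficiencies in the comments are hypothetical placeholders, as the actual binning is not reproduced here.
\begin{lstlisting}[frame=none, numbers=none, language=Java, breaklines=true, basicstyle=\footnotesize\ttfamily]
// Sketch of the sensitive-volume estimate: sum over distance bins of
// (detection efficiency in the bin) x (volume enclosed by the bin).
public final class SensitiveVolume {
    // Volume of the spherical shell between radii r1 and r2 (e.g. in Gpc^3).
    static double shell(double r1, double r2) {
        return 4.0 / 3.0 * Math.PI * (Math.pow(r2, 3) - Math.pow(r1, 3));
    }

    // edges: distance bin edges (length eff.length + 1); eff[i]: fraction of
    // injections in bin i recovered below the false-alarm-rate threshold.
    static double sensitiveVolume(double[] edges, double[] eff) {
        double v = 0.0;
        for (int i = 0; i < eff.length; i++) {
            v += eff[i] * shell(edges[i], edges[i + 1]);
        }
        return v;
    }
}
\end{lstlisting}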
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth]{compare_vt.pdf}
\caption{The ratio of the sensitive volume-time for the search including the trained model to the search without, calculated using simulated signals added to the data with a detection threshold on false-alarm rate of 1 per year.}
\label{fig:vt}
\end{figure}
In Fig. \ref{fig:vt} we see that the sensitivity of the search has been increased by the addition of the new $\chi^2$\xspace test, with an increase in sensitivity of $\sim 4\%$ for signals with total masses in the range $[100, 300] M_\odot$, increasing to $\sim 11\%$ for signals with total masses in the range $[300, 600] M_\odot$. This is due to the higher rate of glitches matching high-mass templates; any decrease in the glitch population therefore has a larger effect for these templates.
\section{\label{sec:discussion}Conclusion and Outlook}
In this work we have demonstrated a new framework to automatically train complex new $\chi^2$\xspace signal-consistency tests within modelled searches for \gw signals. We have applied this to the example of a search for \imbh signals, where glitches have a strong effect on the sensitivity of the search. Our framework is able to train a new $\chi^2$\xspace model, which provides an improved separation of the signal and noise populations, allowing the noise triggers to be down-weighted. Using this new $\chi^2$\xspace test improves the sensitivity of the search to real signals by $\sim 4\%$ for signals with total masses in the range $[100, 300] M_\odot$ and $\sim 11\%$ for signals with total masses in the range $[300, 600] M_\odot$.
The introduction of new $\chi^2$\xspace tests is difficult: they usually require empirical tuning by hand to be effective, and often require re-tuning for different target parameter spaces or noise populations. As signal-consistency tests become more complex this can become infeasible. However, by utilising machine-learning techniques we have shown that we can tune these tests automatically, removing the burden of improving and optimally tuning them. The method we demonstrate here could be applied to any of the commonly used matched-filter search pipelines targeting compact binary mergers.
The population of glitches within the interferometer data continues to be one of the largest challenges facing \gw searches. By continuing to develop signal-consistency tests that specifically target such noise we can continue to improve the sensitivity of searches and increase the chance of observing new events in areas of the parameter space most affected by glitches.
\begin{acknowledgments}
CM was supported by the Science and Technology Facilities Council through the DISCnet Centre for Doctoral Training.
IH was supported by STFC grants ST/T000333/1 and ST/V005715/1.
This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. The construction and operation of KAGRA are funded by Ministry of Education, Culture, Sports, Science and Technology (MEXT), and Japan Society for the Promotion of Science (JSPS), National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea, Academia Sinica (AS) and the Ministry of Science and Technology (MoST) in Taiwan.
For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising.
\end{acknowledgments}
\section{Introduction}\label{sec:Introduction}
The ripening of technologies like the Internet of Things (IoT) and Internet of Services (IoS) has brought the focus of studies onto the emergent 4th Industrial Revolution, i.e. the Industry 4.0 and Industrial Internet concepts~\citep{CHEN2017588,Kagermann2013,klaus_dieter_thoben_2017_1002731}. These studies agree that those technologies are provoking changes over the whole value chain of industries, from raw material acquisition, logistics and goods production to delivery and even post-sales services. This wide use and diversity of resources brings challenges when it is necessary to integrate industrial resources as required by the \emph{smart factories} of the Industry 4.0 concept. Interoperability, which in this context means that the Cyber-Physical System (CPS) and all sorts of resources can communicate with each other, is a key factor~\citep{Hermann2016,Lu2017a}.
However, the incipient convergence of technology, with no established standard, usually requires the study of specific integration interfaces and high development effort. To face this challenge, we propose the use of the Apache Camel framework, a message routing and mediation engine. Camel allows defining communication routes among data producers and consumers by using a Domain-Specific Language, independently of the protocol or networking technology on each side~\citep{Ibsen:2010:CA:1965487}. Camel already has a wide range of components and allows designing new ones, facilitating the integration of current and new technologies used by industrial resources.
In many studies~\citep{Leitao2018,Roloff2016,COOK2009277}, a Multi-Agent System (MAS) plays the role of the CPS. In fact, it can partially meet some of the \emph{smart factory}'s requirements~\citep{Hermann2016,LI2017608,MONOSTORI2016621,Zhong2017}, like decentralization and virtualization, leaving to devices the solution of issues like robustness and real-time capability. However, MAS applications are becoming very complex and computationally heavy, especially due to the indiscriminate \emph{agentification} of entities, i.e., the approach that models almost any entity of a system as an agent. On the other hand, recent MAS research~\citep{Hubner2010,Omicini2008,Ricci2006,RoloffSHSPH14} analyses dynamic and complex scenarios in terms of dimensions, interpreting that some elements are not necessarily agents. The Agents and Artifacts (A\&A) approach proposes: (i) an agent dimension for proactive entities which encapsulate the autonomous execution of some activities; and (ii) an environment dimension which includes \emph{artifacts}, i.e., simpler entities that can be manipulated and shared by agents.
In this sense, the question this paper aims to answer is: how to integrate all sorts of resources, like machines, sensors and software, with a MAS in a scalable manner? This paper presents a new component developed to allow the integration of MAS artifacts with many communication protocols thanks to the Apache Camel framework. We think that cognitive agents can play the intelligent part of the system, and they can be enhanced with several Artificial Intelligence technologies. The use of artifacts, besides being a proper method to model non-autonomous entities, can make the whole system computationally lighter.
This paper is structured as follows: in order to present the background technologies we have used to build a communication component, the Apache Camel framework is briefly presented in Section~\ref{sec:Camel}, and the \textsf{CArtAgO} framework, employed to design artifacts, is introduced in Section~\ref{sec:CArtAgO}. Then, in Section~\ref{sec:IndustrialArtifacts}, we show the evolution of factory automation, culminating in our proposal and what we refer to as \emph{Industrial Artifacts}. In Section~\ref{sec:Component}, we show how the \emph{CamelArtifact} component was modelled and how to use it in an application. Following this, in Section~\ref{sec:Results}, we discuss two illustrative experiments. Finally, related work and conclusions complete this paper.
\section{Apache Camel}\label{sec:Camel}
The Apache Camel framework is a lightweight Java-based message routing and mediation engine~\citep{Ibsen:2010:CA:1965487}. Camel can achieve high-performance processing since it handles multiple messages concurrently, and it provides functions for interception, routing, exception handling and testing that allow the creation of complex routes. This framework uses structured messages and queues as defined in the Enterprise Integration Patterns (EIP)~\citep{Hohpe:2003:EIP:940308}, preserving loose coupling among the resources. The complexity of the protocol of each supported technology is embedded in a component, which works as a bridge to Camel routes. There are more than two hundred components available on the Camel website and many others in community repositories.
A whole route has a data producer, a consumer endpoint, a producer endpoint and, finally, a data consumer. Routes afford the use of multiple endpoints, which means multiple and heterogeneous producers and consumers communicating in the same logical channel through which messages go. An endpoint is a connector that encapsulates a specific protocol. Messages are entities that carry data consisting of a body (the payload), headers and, optionally, attachments. \newline
\begin{lstlisting}[caption={Lines 1-3: route from artifact to an external MQTT server to publish in a topic. Lines 5-9: route from a MQTT server to an artifact.}, captionpos=b, label=lst:CamelRoutes, frame=none, framexleftmargin=0pt, numbers=left, numbersep=5pt, numberstyle=\tiny, language=Java, showspaces=false, showtabs=false, breaklines=true, showstringspaces=false, breakatwhitespace=true, basicstyle=\footnotesize\ttfamily,
morekeywords={route, from, to, setHeader}]
from("artifact:cartago")
.transform().mvel("(request.body[0] * 1.8 + 32).toString()")
.to("mqtt:foo?host=tcp://broker(...)");
from("mqtt:foo?host=tcp://broker(...)")
.setHeader("ArtifactName",constant("s1"))
.setHeader("OperationName",constant("temp"))
.transform().mvel("[ (request.body[0].toString() - 32) / 1.8 ]")
.to("artifact:cartago");
\end{lstlisting}
In order to write route definitions, there are three available Domain-Specific Languages (DSLs): Java, Scala and an XML-based language. Using these languages, Camel allows wrapping in the route the necessary transformations to integrate a set of data consumers and producers. Camel works as a middleware that can be incorporated in an application to concentrate integration matters. In this fashion, programming complexity may be reduced since there is a separation between MAS programming and integration programming.
For instance, a route may be used to convert temperature units when an endpoint that uses Celsius needs to send some data to another endpoint that expects Fahrenheit. The code in Listing~\ref{lst:CamelRoutes} illustrates such route definitions in Java. In the first route, the content is processed, some math is applied, and the result is sent to a MQTT\footnote{Message Queuing Telemetry Transport by IBM\texttrademark.} endpoint. The second route shows the way back, adding to the header of the message the tags required by the destination endpoint\footnote{broker(...) refers to the broker's address. Some math is omitted.}.
\section{\textsf{CArtAgO} Artifacts}\label{sec:CArtAgO}
In a MAS the agents are situated entities: they perceive and act on an environment. Non-agent elements of a MAS are usually considered part of the environment~\citep{Weyns:2007:EFC:1176841.1176951}, which may have tools and other objects that can be used and shared by agents. The \textsf{CArtAgO} framework calls these resources \emph{artifacts}~\citep{Ricci2006}. Essentially, what distinguishes agents from artifacts is autonomy: agents are considered the active part of the system. Artifacts, on the other hand, are not autonomous; they have functions, they provide operations and they behave predictably~\citep{Omicini2008}.
Artifacts are commonly utilized (i) to simulate the real world, (ii) as virtual representations touchable by the agents, (iii) for coordination purposes, and (iv) as interfaces to the external world, wrapping some technology. Many of these uses are related to shareable knowledge about the environment, which raises synchronization issues. Regarding wrapping functions, the so-called \emph{resource artifacts} are responsible for this: they mediate access to such functions or effectively embody a resource of a MAS.
\begin{figure}[ht]
\centerline
{
\includegraphics[width=0.32\textwidth]{ArtifactStructure.pdf}
}
\caption{Artifact's structure.}
\label{fig:AABAsicLevel}
\end{figure}
\textsf{CArtAgO} is a Java-based framework that brings many functions to promote knowledge synchronization between agents and the environment. The API provides basic classes to define artifacts, the interface to interact with agents and a run-time infrastructure to support the dynamic management of working environments~\citep{Ricci2006}. Besides these facilities, the framework processes transactions atomically to ensure data integrity and provides synchronization functions for multiple agents and multiple infrastructure servers.
Artifacts are used by an agent through their interface, which provides operations to achieve the services they offer. Typically, artifact implementations are computationally lighter than agents since artifacts are passive entities. The operations performed by artifacts commonly require little attention from the agent, since they are passive and characterized by routine behaviour.
Artifacts are located in logical areas called \emph{workspaces}. An agent focusing on an artifact reads its \emph{observable properties}, perceives its events and may trigger its interfaced operations. Another feature artifacts may provide is a manual with a machine-readable description of their functions, which is useful especially in open systems. Finally, artifacts provide linking interfaces, allowing artifacts to be connected and linked operations to be used (Figure~\ref{fig:AABAsicLevel}). With this function an artifact may invoke an operation of another, for instance, to communicate with a resource through another artifact.
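As a concrete illustration of these concepts, a minimal artifact can be written in \textsf{CArtAgO} as follows. This is a sketch in the style of the standard counter example; the class and property names are ours.
\begin{lstlisting}[frame=none, numbers=none, language=Java, breaklines=true, basicstyle=\footnotesize\ttfamily]
import cartago.*;

// Minimal CArtAgO artifact: one observable property and one operation.
public class Counter extends Artifact {
    void init() {
        defineObsProperty("count", 0); // visible to focusing agents
    }

    @OPERATION
    void inc() {
        ObsProperty p = getObsProperty("count");
        p.updateValue(p.intValue() + 1); // observers perceive the change
        signal("tick");                  // and an event is generated
    }
}
\end{lstlisting}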
\section{Industrial Artifacts}\label{sec:IndustrialArtifacts}
A traditional automated factory may be seen as four levels of a pyramid, as shown in Figure~\ref{fig:pyramid}. The strategic decisions are placed at the Enterprise Resource Planning level. According to priorities and other aspects, manufacturing is scheduled and monitored at the Process Execution level. The next level has the responsibility of controlling the widespread devices, often in real-time. These devices, like sensors and actuators, are situated at the bottom level of the pyramid. The pyramid is becoming more populated with all sorts of nodes, including peripheral ones (e.g. logistic monitoring and control). The automation is usually still partial, as we can see by the human presence, at all levels, filling gaps in the processes.
\begin{figure}[ht]
\centerline
{
\includegraphics[width=0.32\textwidth]{AutomationPyramid.pdf}
}
\caption{Traditional Automation Pyramid}
\label{fig:pyramid}
\end{figure}
Many of those devices communicate via OPC (OLE for Process Control), which is a widely accepted communication standard in factory shop-floor automation~\citep{Heory2014}. This standard is used by devices and supervisory software located at the Device and Control levels of the pyramid. The higher levels of the pyramid commonly use other applications with restricted or no integration with the lower levels. In this scenario, as illustrated in Figure~\ref{fig:Comparison}a, interoperability among different levels and different technologies is commonly solved by ad hoc solutions. The figure illustrates some integration between an industrial device (e.g. an OPC controller) and a messaging device (e.g. an IoT sensor) by specific APIs (Application Program Interfaces).
Factory automation is evolving, especially towards the Cyber-Physical Systems (CPS) concept, which integrates virtual and physical processes~\citep{4519604,LI2017608}. CPS provides a standardized abstraction and architecture integration in a broad sense~\citep{MONOSTORI2016621}. This concept is central to the so-called \emph{smart factory} of Industry 4.0~\citep{Kagermann2013}. In many studies, a MAS plays the central part of the CPS, virtualizing entities and allowing decentralized control and interoperability. However, some studies~\citep{Marik:2005:RAA:1082473.1082812,Yokogawa2000MachineryCS} opted to represent almost any factory entity as an agent (Figure~\ref{fig:Comparison}b), which increases complexity and makes the synchronization of environment information more difficult.
Virtual representations of the plant, including software entities as well as the physical world, may be achieved with the A\&A approach using \emph{artifacts} and \emph{workspaces}. All sorts of resources that accept commands through operations and generate events can be modelled as artifacts. Artifacts allow representing heterogeneous entities in a common format and make these virtual representations interoperable. In fact, without a mediation tool like Camel, the usual solution to integrate artifacts with each technology is via APIs, which increases development effort. Using our proposition (Figure~\ref{fig:Comparison}c), interoperability is facilitated by Camel, which makes available many components for different protocols. The integration provided by Camel is also facilitated through DSLs: the application just needs to specify the routes via URI parameters and message headers. In most cases this approach provides the needed functionality with less programming effort and a faster learning curve.
\begin{figure}
\centerline
{
\includegraphics[width=0.425\textwidth]{WholeSystem.pdf}
}
\caption{A factory automation integrating two devices. a) ad hoc solution. b) \emph{agentification} approach. c) CamelArtifact and endpoints.}
\label{fig:Comparison}
\end{figure}
\section{The Artifact Component}\label{sec:Component}
In order to address interoperability, we propose joining the Camel framework and \emph{artifacts}. Artifacts' operations are responsive execution processes, lighter than agents' actions, which are usually conscious behaviour. Different device protocols and networking technologies can be integrated into a MAS in a similar fashion, using Camel as the routing and mediation engine. In this sense, the developed component, called \emph{CamelArtifact}, is responsible for linking artifacts and external resources. Each \emph{CamelArtifact} may contain route definitions using specific endpoints for each resource. The endpoints encapsulate the communication protocol complexity.
To use this component, an instance of \emph{CamelArtifact} should be set to listen to its routes. Any message coming from, or going to, the routes is kept in queues. Messages that are arriving or being sent can be transformed; the transformation usually makes the two sides of the route compatible, depending on the application. An artifact may have route definitions of its own, or it may receive and send messages through another \emph{CamelArtifact}, which may forward messages, as sketched below. The use of the forwarding function may save computer memory; on the other hand, each artifact instantiated as a \emph{CamelArtifact} has its own thread, taking advantage of computer parallelism.
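A hypothetical route for such a forwarding \emph{CamelArtifact} could dispatch on the artifact name carried in the message header; the topic and endpoint URIs below are placeholders, and this is a sketch of one possible topology rather than the component's fixed behaviour.
\begin{lstlisting}[frame=none, numbers=none, language=Java, breaklines=true, basicstyle=\footnotesize\ttfamily]
from("mqtt:plant?host=tcp://broker(...)")
    .choice()
        .when(header("ArtifactName").isNotNull())
            .to("artifact:cartago")   // the router delivers or forwards internally
        .otherwise()
            .to("log:unroutable");    // inspect messages with no addressee
\end{lstlisting}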
\subsection{The Component Architecture}
The structure of the created component was based on the available \emph{Camel Component Archetype}, which provides a useful component template. The default configuration of an Apache Camel component is mainly a \emph{DefaultComponent} class that creates its endpoints, normally a consumer and a producer. In a MAS application, an artifact essentially uses the original \textsf{CArtAgO} Artifact class. In our component, a new class, called \emph{CamelArtifact}, extends Artifact from the \textsf{CArtAgO} framework and imports the Camel API to implement Apache Camel routes. The data used by the artifacts were modelled as \textit{OpRequest}, meaning Operation Request, which contains the name of an artifact, an operation to be performed and its parameters.
The producer and consumer sides work in a very similar manner, with polling processes supported by queues for incoming and outgoing messages. To send messages, the artifact places them in the outgoing queue, which is repeatedly checked by the endpoint's polling consumer, which sends them through the route. For incoming messages, the component implements an ad hoc process using \textsf{CArtAgO}'s IBlockingCmd class to check for new messages, delivering them to the artifact.
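A minimal sketch of this queue-based polling pattern, simplified with respect to the actual component, is shown below.
\begin{lstlisting}[frame=none, numbers=none, language=Java, breaklines=true, basicstyle=\footnotesize\ttfamily]
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Simplified sketch of the outgoing side: the artifact enqueues messages,
// the endpoint's polling consumer periodically drains the queue.
class OutgoingMailbox {
    private final Queue<Object> outgoing = new ConcurrentLinkedQueue<>();

    void send(Object msg) {      // called by the artifact
        outgoing.offer(msg);
    }

    Object poll() {              // called repeatedly by the endpoint
        return outgoing.poll();  // null when there is nothing to send
    }
}
\end{lstlisting}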
The message structure used by Camel provides in the header a map of Java objects, in the body any Java object, and optional attachments. In the header of \emph{CamelArtifact} messages, tags for the artifact name and the operation are expected, the latter referring to a method to be performed. The body may contain a list of parameters the referred method needs. The tagged operation will be invoked as a \textsf{CArtAgO} internal operation, or it may be forwarded when addressed to another artifact.
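For illustration, the expected message shape can be sketched as the following plain data class; the field names mirror the description above, while the exact signatures in the component may differ.
\begin{lstlisting}[frame=none, numbers=none, language=Java, breaklines=true, basicstyle=\footnotesize\ttfamily]
import java.util.List;

// Sketch of the Operation Request payload: headers carry the artifact
// and operation names, the body carries the list of parameters.
class OpRequest {
    final String artifactName;   // header "ArtifactName"
    final String operationName;  // header "OperationName"
    final List<Object> params;   // message body

    OpRequest(String artifactName, String operationName, List<Object> params) {
        this.artifactName = artifactName;
        this.operationName = operationName;
        this.params = params;
    }
}
\end{lstlisting}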
\section{Illustrative use and Results}\label{sec:Results}
To test the designed Camel component, two MAS applications were developed. The first was centred on a scalability experiment of the component, making use of multiple consumers, the forwarding function and \emph{loopback} communications. The second experiment tried the component in a scenario needing interoperability among different technologies and protocols in the context of Industry 4.0.
\subsection{Terminal and Router scalability experiment}
In the first application, two scenarios are used to illustrate the \emph{ArtifactComponent}. From the communication point of view, the scenarios have artifacts as terminals, which are end-points of the communication, and an artifact as a router, which is a middleware forwarding messages to end-points. Scenario 1 varies the number of terminals, instantiating multiple \emph{CamelArtifacts}. Scenario 2 varies the number of common artifacts, all of them linked to a \emph{CamelArtifact} acting as a message router. The router, in this context, contains the routes of the other artifacts, using the Apache Camel MQTT supported endpoint.
All the \emph{CamelArtifacts} were set to publish and subscribe to their own topics, being able to receive back their own sent messages. When playing the router function, an extra route was created for the linked artifacts. In scenario 1, the number of Camel terminals varied from 10 to 500 artifacts; in scenario 2, the number of common artifacts varied from 10 to 200.
The MAS was designed with the Jason framework~\citep{Boissier:2013:MOP:2459520.2459792}. It has only one agent, responsible for creating the artifacts, linking and managing them. We have used the MQTT QoS 2 setting, which is the most reliable message delivery method provided by this protocol. The resources exchanged messages in both directions every 6 seconds. For our virtual Ubuntu Linux server with 2 cores and 2 GB RAM, this is a stress situation that allows checking the message processing limits.
In scenario 1, we notice that the system uses more RAM since it creates several Camel instances: the application with 10 \emph{CamelArtifacts} used 127 MB, growing by an average of 1.3 MB for each instance added. In scenario 2, the variation is not conclusive; the main changes were caused by other Java Virtual Machine processes~(Figure~\ref{fig:Results}a). In addition, in scenario 1 the system needed more time to load: for 10 artifacts it needed 21 seconds, increasing by an average of 1.5 seconds for each \emph{CamelArtifact} added. Scenario 2 showed no significant increase~(Figure~\ref{fig:Results}b). In contrast, the messages-per-second rate of scenario 1 reached better results, growing from 2 to 16.6 when the system was tested with 200 \emph{CamelArtifacts}, while scenario 2 grew only to 4.1 messages/s~(Figure~\ref{fig:Results}c).
\begin{center}
\begin{figure}[h]
\begin{tikzpicture}
\begin{groupplot}[footnotesize,
group style={group size=2 by 2,
horizontal sep=2.0cm,
vertical sep=1.8cm},
height=5.0cm,width=3.5cm,
]
\nextgroupplot[
axis x line=bottom,
axis y line=left,
ymax = 400,
xmax = 200,
xlabel = Number of artifacts,
ylabel = Memory usage (MB),
title = {a)},
legend style={at={(0.5,0.3)},anchor=south west}
]
\addplot[thick, red, dash dot]
table[y=Scenario1,x=nArtifacts,col sep=comma]{Experiment1Memory.csv};
\addplot[thick, blue, dash pattern={on 1pt off 2pt on 1pt off 3pt}]
table[y=Scenario2,x=nArtifacts,col sep=comma]{Experiment1Memory.csv};
\addlegendentry{Sc1}
\addlegendentry{Sc2}
\nextgroupplot[
axis x line=bottom,
axis y line=left,
ymax = 300,
xmax = 200,
xlabel = Number of artifacts,
ylabel = Loading time (s),
title = {b)},
legend style={at={(0.5,0.2)},anchor=south west}
]
\addplot[thick, red, dash dot]
table[y=Scenario1,x=nArtifacts,col sep=comma]{Experiment1LoadingTime.csv};
\addplot[thick, blue, dash pattern={on 1pt off 2pt on 1pt off 3pt}]
table[y=Scenario2,x=nArtifacts,col sep=comma]{Experiment1LoadingTime.csv};
\addlegendentry{Sc1}
\addlegendentry{Sc2}
\nextgroupplot[
axis x line=bottom,
axis y line=left,
ymax = 20,
xmax = 200,
xlabel = Number of artifacts,
ylabel = Messages / s,
title = {c)},
legend style={at={(0.5,0.3)},anchor=south west}
]
\addplot[thick, red, dash dot]
table[y=Scenario1,x=nArtifacts,col sep=comma]{Experiment1nMsgsPerSec.csv};
\addplot[thick, blue, dash pattern={on 1pt off 2pt on 1pt off 3pt}]
table[y=Scenario2,x=nArtifacts,col sep=comma]{Experiment1nMsgsPerSec.csv};
\addlegendentry{Sc1}
\addlegendentry{Sc2}
\end{groupplot}
\end{tikzpicture}
\caption{Scenarios 1 and 2 (Sc1 and Sc2) comparison as increase the number of artifacts. a) Memory use b) Time to load c) Message/s rate }
\label{fig:Results}
\end{figure}
\end{center}
\subsection{Industry 4.0 context experiment}
The \emph{CamelArtifact} was tried in an Industry 4.0 context scenario (Figure~\ref{fig:IndustrialEssay}). To assess the solution, the application uses three resources with different features: one is an OPC\footnote{OPC: integration standard in industrial automation.} server, which may reflect a current industrial resource (e.g. a Programmable Logic Controller - PLC); another is a compatible MQTT client, which may be an IoT device (e.g. a sensor); finally, the last resource is a generic TCP/IP entity, which may be a piece of software communicating by socket (e.g. a robot or an Enterprise Resource Planning - ERP - system). In this test, all three resources were modelled as \emph{CamelArtifacts}.
To check the OPC-DA route, a numerical variable was created in the OPC server and an observable property in the artifact. The OPC-DA component used is provided by an independent developer. The routes synchronized the value of the variable with the observable property. The MQTT client, using the supported component, had its routes to send and receive messages from a MQTT broker. Finally, a robot firmware was deployed in a virtual machine to simulate a cargo-moving robot; a generic TCP/IP route, using the Netty4 supported endpoint, was set to send and receive messages in a proprietary protocol.
\begin{figure}[ht]
\centerline
{
\includegraphics[width=0.32\textwidth]{camelArtifactIndustrialEssay.pdf}
}
\caption{MAS communicating with different resources.}
\label{fig:IndustrialEssay}
\end{figure}
The MAS was designed with two Jason agents, one of them responsible for creating the artifacts and for sharing with the other agent a counter, the relevant information coming from the OPC server. The interoperability was verified when the agent used the counter from the OPC server to act over the MQTT sensor and the TCP/IP robot according to the variation of this information.
\section{Related work}\label{sec:Related}
Among related works, we consider only MAS-based studies. \cite{Mzahm2013} coined the term Agent of Things (AoT), which is IoT with reasoning capabilities. Their approach suggests the \emph{agentification} of things to reach the required intelligence, with its benefits and costs. However, it does not take advantage of all the features of a multi-dimensional MAS, e.g. virtual and shareable entities accessible to autonomous actors. The \emph{agentification} tendency can be seen in other attempts to use MAS in industry~\citep{Marik:2005:RAA:1082473.1082812,Yokogawa2000MachineryCS}, as in the research of \cite{Leitao2018}. The drawbacks of \emph{agentification} are the increase of complexity and the reduction of scalability.
\cite{Maturana1996} proposed a mediation and coordination tool for MAS, using mediator agents as manufacturing coordinators. Our proposal does not aim to put an autonomous entity in the middle; rather, it gives connectivity power to an A\&A MAS using a mainstream technology such as Camel. Following a similar idea, \cite{Olaru2013} developed an agent-based \emph{middleware} which creates a sub-layer of the application layer that allows agents to mediate communications. Later, other research tried to address mainly the coordination and organisation challenges~\citep{BARBOSA201599,KOTAK200395} in manufacturing scenarios. It is important to notice that the mediation functions of these works are limited compared to Camel's features. These approaches also lack environment support such as \textsf{CArtAgO} provides.
\cite{TICHY2012846} used the Agent Development Environment (ADE) designed by Rockwell Automation, Inc. Besides connectivity with common shop-floor devices (e.g. PLCs), this framework also supports the development of agents. They presented a conception allowing low- and high-level interaction, the latter performed by agents. The approach is an important effort by an industry supplier towards the requirements of the \emph{Smart Factory}, and it partly uses well-matured technology, which is crucial for industrial stakeholders~\citep{Leitao2018}. The limitation we have seen regards especially the connectivity with all sorts of entities (e.g. IoT sensors and mobile devices, ERP and other software, etc.). Alternatively, the use of mature technologies can be achieved using proper Camel components to connect to industrial devices (e.g. using the Camel OPC-DA component).
\cite{Cranefield2013} developed a Camel component for \emph{Jason} agents. In this case, the environment description does not follow A\&A concepts; it is part of the agents' knowledge. In their work, the agents are empowered with all the Camel features we gave to \emph{CamelArtifacts}. Integration made only through the agent dimension brings two advantages: (i) the programmer does not need to learn about artifacts; (ii) some elements are better modelled as agents, such as agents of another MAS. In contrast, it has two drawbacks: (i) agents demand more computation than artifacts doing the same task; (ii) it brings synchronization challenges, since agents spend time sharing information about the environment.
As far as we know, the only study that made use of the A\&A concept to build a MAS for industrial application in the Industry 4.0 context was proposed by \cite{RoloffSHSPH14}. The limitation of this study refers to the integration through a single API for OPC communication; integration with other technologies requires the use of other APIs, which brings more programming effort when compared with our solution using Camel. Our proposal fills a gap between the mature Apache Camel framework, a comprehensive mediation tool, and artifacts, a first-class designing entity.
\section{Conclusion}
In this paper, we discussed the interoperability challenge of the \emph{smart factory}. We showed that the Agents and Artifacts (A\&A) method is useful for modelling the factory, since this approach simplifies the design of non-autonomous entities and may give more scalability to the whole system. We presented the Camel framework as a mediation tool for integration with dozens of technologies used in industry. Our component, besides the Camel facilities, also has functions allowing different topologies to deal with message rate requirements as well as resource limitations.
As future work, we intend to change from a polling strategy to an event-driven strategy on the consumer and producer sides. We intend to work on the camel-agent~\citep{Cranefield2015} component, trying to give both the camel-artifact and camel-agent components similar, easy-to-use configuration interfaces. Finally, we think that Apache Camel with both components may be used as a Jason infrastructure, being the mediation tool among distributed agents and artifacts.
\section{Introduction}\label{sec:Introduction}
The ripening of technologies like the Internet of Things (IoT) and Internet of Services (IoS) brought focus of studies on the emergent 4th Industrial Revolution, i.e. Industry 4.0 and Industrial Internet concepts~\citep{CHEN2017588,Kagermann2013,klaus_dieter_thoben_2017_1002731}. These research agree that those technologies are provoking changes on the overall value chain of industries, from raw materials acquisition, logistic, and goods production to the delivery and even post sales services. This wide use and diversity of resources brings challenges when it is necessary to integrate industrial resources as required by the \emph{smart factories} of Industry 4.0 concept. The interoperability, which in this context means that the Cyber-Physical System (CPS) and all sorts of resources can communicate with each other, is a key factor~\citep{Hermann2016,Lu2017a}.
However, the incipient convergence of technology with no standard, usually requires the study of specific integration interfaces and high efforts on development. To face this challenge, we propose the use of Apache Camel framework, a message routing and mediation engine. Camel allows defining communication routes among data producers and consumers by using a Domain-Specific Language, independent of the protocol or networking technology on each side~\citep{Ibsen:2010:CA:1965487}. Camel already has a wide range of components and allows to design new ones facilitating the integration of current and new technologies used by industrial resources.
In many studies~\citep{Leitao2018,Roloff2016,COOK2009277}, Multi-Agent System (MAS) is being the CPS. In fact, it can partially reach some of \emph{smart factory}'s requirements~\citep{Hermann2016,LI2017608,MONOSTORI2016621,Zhong2017}, like decentralization and virtualization, leaving to devices the solution of issues like robustness and real-time capability. However, MAS applications are becoming very complex and computationally heavy specially due indiscriminately \emph{agentification} of entities, i.e., the approach that model almost any entity of a system as an agent. In the other hand, in recent MAS research~\citep{Hubner2010,Omicini2008,Ricci2006,RoloffSHSPH14}, dynamic and complex scenarios are being analysed in dimensions, it interprets that some elements are not necessarily agents. Agents and Artifacts approach (A\&A) is proposing: (i) agent's dimension for proactive entities which encapsulate autonomous execution in some activities (ii) environment dimension which includes \emph{artifacts}, i.e., simpler entities that can be manipulated and shared by agents.
In this sense, the question this paper is employed to answer is: how to integrate all sorts of resources like machines, sensors and software with a MAS in a scalable manner? This paper presents a new component developed to allow the integration of artifacts from MAS to many communication protocols thanks to Apache Camel framework. We think that cognitive agents can play the intelligent part of the system. They can be enhanced with several Artificial Intelligence technologies. The use of artifacts besides a proper method to model non autonomous entities can make the whole system computationally lighter.
This paper is structured as follows: in order to show the background technologies we have used to build a communication component, the framework Apache Camel is briefly presented in Section~\ref{sec:Camel}, and \textsf{CArtAgO} framework, employed to design artifacts, is introduced in Section~\ref{sec:CArtAgO}. Then, in Section~\ref{sec:IndustrialArtifacts}, we show the evolution of factory automation culminating in our proposal and what we refer as \emph{Industrial Artifacts}. In Section~\ref{sec:Component}, we show how the \emph{CamelArtifact} component was modelled and how to use it in an application. Following this, in Section~\ref{sec:Results}, we discuss two illustrative experiments. Finally, related work and conclusions complete this paper.
\section{Apache Camel}\label{sec:Camel}
The Apache Camel framework is a lightweight Java-based message routing and mediation engine~\citep{Ibsen:2010:CA:1965487}. Camel can achieve high performance processes since it handles multiple messages concurrently and provides functions like for interception, routing, exception handling and testing that allows creation of complex routes. This framework uses structured messages and queues as defined on Enterprise Integration Patterns (EIP)~\citep{Hohpe:2003:EIP:940308}, preserving loose coupling among the resources. The complexity of the protocol of each supported technology is embedded in a component, which works as a bridge to Camel routes. There are more than two hundred components available on Camel website and many others on community's repositories.
A whole route has a data producer, a consumer endpoint, a producer endpoint and, finally, a data consumer. Routes afford the use of multiple endpoints, which means multiple and heterogeneous producers and consumers communicating in the same logical channel where messages go through. An endpoint is a connector that encapsulates a specific protocol. Messages are entities that carry data which has a body, which is the payload, headers and optionally attachments. \newline
\begin{lstlisting}[caption={Lines 1-3: route from artifact to an external MQTT server to publish in a topic. Lines 5-9: route from a MQTT server to an artifact.}, captionpos=b, label=lst:CamelRoutes, frame=none, framexleftmargin=0pt, numbers=left, numbersep=5pt, numberstyle=\tiny, language=Java, showspaces=false, showtabs=false, breaklines=true, showstringspaces=false, breakatwhitespace=true, basicstyle=\footnotesize\ttfamily,
morekeywords={route, from, to, setHeader}]
from("artifact:cartago")
.transform().mvel("(request.body[0] * 1.8 + 32).toString()")
.to("mqtt:foo?host=tcp://broker(...)");
from("mqtt:foo?host=tcp://broker(...)")
.setHeader("ArtifactName",constant("s1"))
.setHeader("OperationName",constant("temp"))
.transform().mvel("[ (request.body[0].toString() - 32) / 1.8 ]")
.to("artifact:cartago");
\end{lstlisting}
In order to write route definitions, there are three available Domain-Specific Languages (DSLs): Java, Scala and XML based language. Using these languages, Camel allows to wrap in the route the necessary transformations to integrate a set of data consumers and producers. Camel works as a middleware that can be incorporated in an application for concentrating integration matters. In this fashion, programming complexity may be reduced since there is a separation between MAS and integrating programming.
For instance, a route may be used to convert temperature unities when an endpoint that uses celsius needs to send some data to another endpoint that is expecting it in fahrenheit. The code in Listing~\ref{lst:CamelRoutes} illustrates its route definitions in Java. In the first, the route is processing the content, applying some math and sending to a MQTT\footnote{Message Queuing Telemetry Transport by IBM\texttrademark.} endpoint. The next route is displaying the way back, adding on the header of the message necessary tags to destination endpoint\footnote{broker(...) refers to broker's address. Some Math is omitted.}.
\section{\textsf{CArtAgO} Artifacts}\label{sec:CArtAgO}
In MAS the agents are situated entities. They are perceiving and acting on an environment. Non-agent elements of a MAS are usually considered as part of the environment~\citep{Weyns:2007:EFC:1176841.1176951}, which may have tools and other objects that can be used and shared by agents. The framework \textsf{CArtAgO} calls these resources \emph{artifacts}~\citep{Ricci2006}. Essentially what differs Agents and Artifacts is autonomy, agents are considered the active part of the system. The Artifacts, on the other hand, are not autonomous, they have functions, they provide operations and behave predictably~\citep{Omicini2008}.
Artifacts commonly are utilized to: (i) simulate the real world, (ii) as virtual representations touchable by the agents, (iii) for coordination purposes, and (iv) as interfaces to the external world wrapping some technology. Many of these uses are related to shareable knowledge about the environment, which refers to synchronization issues. About wrapping functions, the called \emph{resource artifacts} are responsible for this. They mediate access to such functions or effectively embody a resource of a MAS.
\begin{figure}[ht]
\centerline
{
\includegraphics[width=0.32\textwidth]{ArtifactStructure.pdf}
}
\caption{Artifact's structure.}
\label{fig:AABAsicLevel}
\end{figure}
\textsf{CArtAgO} is a Java-based framework that brings many functions to promote knowledge synchronization among agents and environment. The API provides basic classes to define artifacts, the interface to interact with agents and a run-time infrastructure to support the dynamic management of working environments~\citep{Ricci2006}. Besides these facilities, the framework processes transactions atomically to ensure data integrity and provides synchronization functions for multiple agents and multiple infrastructure servers.
Artifacts are used by the agent through its interface which provides operations to achieve the services it offers. Typically, artifacts' implementation are computationally lighter than agents since they are passive entities. The operations performed by artifacts commonly require little attention from the agent since they are passive and characterized to be routine behaviour.
Artifacts are located in logical areas called \emph{workspaces}. The agent that is focusing on an artifact reads its \emph{observable properties}, perceives events and may trigger its interfaced operations. Another feature that artifacts may provide is a manual with machine-readable description of their functions which is useful especially in open systems. Finally, artifacts provide linking interfaces allowing to connect artifacts and use linked operations (Figure~\ref{fig:AABAsicLevel}). With this function an artifact may invoke an operation of another, for instance, to communicate with a resource through another artifact.
\section{Industrial Artifacts}\label{sec:IndustrialArtifacts}
A traditional automated factory may be seen in four levels of a pyramid as showed in Figure~\ref{fig:pyramid}. The strategic decisions are placed on an Enterprise Resource Planning level. According to priorities and other aspects, the manufacturing is scheduled and monitored in the Process Execution level. The next level has the responsibility to control the wide spread devices, often in real-time. These devices, like sensors and actuators are situated in the bottom level of the pyramid. The presented pyramid is being more populated for all sorts of nodes including peripheral nodes (i.e logistic monitoring and control). The automation is usually still partial as we can see by human presence, in all levels, filling gaps of the processes.
\begin{figure}[ht]
\centerline
{
\includegraphics[width=0.32\textwidth]{AutomationPyramid.pdf}
}
\caption{Traditional Automation Pyramid}
\label{fig:pyramid}
\end{figure}
Many of those devices communicate via OPC (OLE for Process Control) which is widely accepted communication standard on factory shop floor automation\citep{Heory2014}. This standard is used by devices and supervisory software located on Device and Control levels of the pyramid. The higher levels of the pyramid commonly use other applications with restricted or no integration with lower levels. In this scenario, as illustrated in Figure~\ref{fig:Comparison}a, interoperability among different levels and different technologies is commonly solved by ad hoc solutions. The figure illustrates some integration between an industrial device (i.e. an OPC controller) and a messaging device (i.e. IoT sensor) by specific APIs (Application Program Interfaces).
Factory automation is evolving, especially towards Cyber-Physical Systems (CPS) concept, which is integrating virtual and physical processes~\citep{4519604,LI2017608}. CPS has a standardized abstraction and architecture integration in a broad sense~\citep{MONOSTORI2016621}. This concept is central to the so called \emph{smart factory} of the Industry 4.0~\citep{Kagermann2013}. MAS is playing the central part of the CPS virtualizing entities allowing decentralized control and interoperability in many researches. However, studies~\citep{Marik:2005:RAA:1082473.1082812,Yokogawa2000MachineryCS} opted to represent almost any factory entity as agents (Figure~\ref{fig:Comparison}b), which increases complexity and makes synchronization of the environment information more difficult.
Virtual representations of the plant, including software entities as well as the physical world, may be reached at A\&A approach using \emph{artifacts} and \emph{workspaces}. All sorts of resources that accept command by operations and generate events can be modelled as artifacts. Artifacts allow to represent heterogeneous entities in a common format and make these virtual representations interoperable. In fact, without a mediation tool like Camel the usual solution to integrate artifacts and each technology is by APIs, what increases development efforts. Using our proposition (Figure~\ref{fig:Comparison}c) interoperability is facilitated using Camel that makes available many components for different protocols. The integration provided by Camel is also facilitated through DLSs, the application just need to specify the routes via URIs parameters and message headers. In most cases this approach provides the needed functionality with fewer programming efforts and faster learning curve.
\begin{figure}
\centerline
{
\includegraphics[width=0.425\textwidth]{WholeSystem.pdf}
}
\caption{A factory automation integrating two devices. a) ad hoc solution. b) \emph{agentification} approach. d) CamelArtifact and endpoints.}
\label{fig:Comparison}
\end{figure}
\section{The Artifact Component}\label{sec:Component}
In order to address interoperability, we propose joining Camel framework and \emph{artifacts}. Artifact's operations are responsive execution processes lighter than agent's actions which are usually consciously behaviour. Different device protocols and networking technologies can be integrated to a MAS similarly, using Camel as routing and mediation engine. In this sense, the developed component, called \emph{CamelArtifact}, is responsible for link artifacts and external resources. Each \emph{CamelArtifact} may contain routes definitions using specific endpoints for each resource. The endpoints are encapsulating the communication protocol complexity.
To use this component, an instance of \emph{CamelArtifact} should be set to listen its routes. Any message coming from, or going to the routes, will be kept in queues. Messages that are arriving or being sent can be transformed. The transformation is usually to make compatible both sides of the route depending on the application. The artifact may have route definitions on itself or it may receive and send messages through other \emph{CamelArtifact}, which may forward messages. The use of forwarding function may save computer memory, on the other hand, each artifact as a \emph{CamelArtifact} has his own thread taking computer parallelism advantages.
\subsection{The Component Architecture}
The structure of the created component was based on the available \emph{Camel Component Archetype}, which provides a useful component template. The default configuration of an Apache Camel component is mainly a \emph{DefaultComponent} class that creates its endpoints, normally a consumer and a producer. In a MAS application, an artifact essentially uses the original \textsf{CArtAgO} Artifact class. In our component, a new class, called \emph{CamelArtifact}, extends Artifact from the \textsf{CArtAgO} framework and imports the Camel API to implement Apache Camel routes. The data used by the artifacts are modelled as \textit{OpRequest} (Operation Request) objects, which contain the name of an artifact, an operation to be performed and its parameters.
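The sketch below gives a structural view of these classes; it follows the names used in the text, but the fields, their types and the class layout are simplified assumptions about the actual implementation.
\begin{verbatim}
import cartago.Artifact;
import java.util.List;

// Operation Request: which artifact, which operation, which parameters.
class OpRequest {
    String artifactName;   // target artifact
    String operationName;  // operation (method) to be performed
    List<Object> params;   // parameters the operation needs
}

// The component's core class: a CArtAgO artifact that also hosts
// Camel route definitions (CamelContext and routes omitted here).
public class CamelArtifact extends Artifact {
    // Incoming OpRequests are dispatched as internal operations,
    // or forwarded when addressed to another artifact.
}
\end{verbatim}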
The producer and consumer sides work in a very similar manner, with polling processes supported by queues for incoming and outgoing messages. To send messages with the polling consumer, the artifact places messages in the outgoing queue, which is repeatedly checked by the endpoint's consumer, and the messages are then sent through the route. For incoming messages, the component implements an ad hoc process using \textsf{CArtAgO}'s IBlockingCmd class to check for new messages and deliver them to the artifact.
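The sketch below illustrates the queue-based polling idea on the outgoing side; the class and method names are ours, for illustration only, and \texttt{OpRequest} is the payload type sketched above.
\begin{verbatim}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Outgoing side of a CamelArtifact as described above: the artifact
// enqueues messages; the endpoint repeatedly polls and sends them.
class OutgoingBuffer {
    private final BlockingQueue<OpRequest> outgoing =
            new LinkedBlockingQueue<>();

    // Called by the artifact to schedule a message for sending.
    void enqueue(OpRequest request) { outgoing.offer(request); }

    // Repeatedly called by the endpoint consumer; null when empty.
    OpRequest poll() { return outgoing.poll(); }
}
\end{verbatim}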
The message structure used by Camel provides a map of Java objects in the header, any Java object in the body, and optional attachments. In the header of \emph{CamelArtifact} messages, tags are expected for the artifact name and the operation, the latter referring to a method to be performed. The body may contain the list of parameters the referred method needs. The tagged operation is invoked as a \textsf{CArtAgO} internal operation, or it may be forwarded when addressed to another artifact.
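For instance, a piece of application code could address an artifact operation through Camel's \texttt{ProducerTemplate} as sketched below; the tag names follow the text, while the endpoint URI, artifact name and operation are illustrative.
\begin{verbatim}
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import org.apache.camel.ProducerTemplate;

class OperationClient {
    // Ask artifact "counter" to perform operation "inc" with one
    // parameter; the endpoint URI is an illustrative placeholder.
    void send(ProducerTemplate template) {
        Map<String, Object> headers = new HashMap<>();
        headers.put("artifact", "counter");  // artifact name tag
        headers.put("operation", "inc");     // operation (method) tag
        template.sendBodyAndHeaders("artifact:default",
                Arrays.asList(1), headers);  // body: parameter list
    }
}
\end{verbatim}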
\section{Illustrative use and Results}\label{sec:Results}
To test the designed Camel component, two MAS applications were developed. The first was centred on a scalability experiment of the component, making use of multiple consumers, the forwarding function and \emph{loopback} communications. The second experiment tried the component in a scenario needing interoperability among different technologies and protocols in the context of Industry 4.0.
\subsection{Terminal and Router scalability experiment}
In the first application, two scenarios are used to illustrate the \emph{ArtifactComponent}. From a communication point of view, the scenarios have artifacts acting as terminals, which are the end-points of the communication, and an artifact acting as a router, which is a middleware that forwards messages to the end-points. Scenario 1 varies the number of terminals by instantiating multiple \emph{CamelArtifacts}. Scenario 2 varies the number of common artifacts, all of them linked to a \emph{CamelArtifact} acting as a message router. The router, in this context, contains the routes of the other artifacts, using the supported Apache Camel MQTT endpoint.
All the \emph{CamelArtifacts} were set to publish and subscribe to their own topics, being able to receive back their own messages. When playing the router function, an extra route was created for the linked artifacts. In scenario 1 the number of Camel terminals was varied from 10 to 500 artifacts; in scenario 2 the number of common artifacts was varied from 10 to 200.
The MAS was designed with the Jason framework~\citep{Boissier:2013:MOP:2459520.2459792}. It has only one agent, responsible for creating the artifacts, linking them and managing them. We used the MQTT QoS 2 setting, the strongest delivery guarantee provided by this protocol. The resources exchanged messages in both directions every 6 seconds. For our virtual Ubuntu Linux server with 2 cores and 2 GB RAM, this is a stressing situation that allows us to check the message processing limits.
In scenario 1, we noticed that the system uses more RAM since it creates several Camel instances. The application with 10 \emph{CamelArtifacts} used 127 MB, growing by an average of 1.3 MB for each instance added. In scenario 2, the variance is not conclusive; the main changes were caused by other Java Virtual Machine processes~(Figure~\ref{fig:Results}a). In addition, in scenario 1 the system needed more time to load: for 10 artifacts it needed 21 seconds, increasing by an average of 1.5 seconds for each \emph{CamelArtifact} added. Scenario 2 had no significant increase~(Figure~\ref{fig:Results}b). In contrast, the messages-per-second rate of scenario 1 reached better results, growing from 2 to 16.6 when the system was tested with 200 \emph{CamelArtifacts}, while scenario 2 grew only to 4.1 messages/s~(Figure~\ref{fig:Results}c).
\begin{center}
\begin{figure}[h]
\begin{tikzpicture}
\begin{groupplot}[footnotesize,
group style={group size=2 by 2,
horizontal sep=2.0cm,
vertical sep=1.8cm},
height=5.0cm,width=3.5cm,
]
\nextgroupplot[
axis x line=bottom,
axis y line=left,
ymax = 400,
xmax = 200,
xlabel = Number of artifacts,
ylabel = Memory usage (MB),
title = {a)},
legend style={at={(0.5,0.3)},anchor=south west}
]
\addplot[thick, red, dash dot]
table[y=Scenario1,x=nArtifacts,col sep=comma]{Experiment1Memory.csv};
\addplot[thick, blue, dash pattern={on 1pt off 2pt on 1pt off 3pt}]
table[y=Scenario2,x=nArtifacts,col sep=comma]{Experiment1Memory.csv};
\addlegendentry{Sc1}
\addlegendentry{Sc2}
\nextgroupplot[
axis x line=bottom,
axis y line=left,
ymax = 300,
xmax = 200,
xlabel = Number of artifacts,
ylabel = Loading time (s),
title = {b)},
legend style={at={(0.5,0.2)},anchor=south west}
]
\addplot[thick, red, dash dot]
table[y=Scenario1,x=nArtifacts,col sep=comma]{Experiment1LoadingTime.csv};
\addplot[thick, blue, dash pattern={on 1pt off 2pt on 1pt off 3pt}]
table[y=Scenario2,x=nArtifacts,col sep=comma]{Experiment1LoadingTime.csv};
\addlegendentry{Sc1}
\addlegendentry{Sc2}
\nextgroupplot[
axis x line=bottom,
axis y line=left,
ymax = 20,
xmax = 200,
xlabel = Number of artifacts,
ylabel = Messages / s,
title = {c)},
legend style={at={(0.5,0.3)},anchor=south west}
]
\addplot[thick, red, dash dot]
table[y=Scenario1,x=nArtifacts,col sep=comma]{Experiment1nMsgsPerSec.csv};
\addplot[thick, blue, dash pattern={on 1pt off 2pt on 1pt off 3pt}]
table[y=Scenario2,x=nArtifacts,col sep=comma]{Experiment1nMsgsPerSec.csv};
\addlegendentry{Sc1}
\addlegendentry{Sc2}
\end{groupplot}
\end{tikzpicture}
\caption{Scenarios 1 and 2 (Sc1 and Sc2) comparison as the number of artifacts increases. a) Memory use. b) Loading time. c) Message rate (messages/s).}
\label{fig:Results}
\end{figure}
\end{center}
\subsection{Industry 4.0 context experiment}
The \emph{CamelArtifact} was tried in an Industry 4.0 scenario (Figure~\ref{fig:IndustrialEssay}). To assess the solution, the application uses three resources with different features, one being an OPC\footnote{OPC: integration standard in industrial automation.} server, which may reflect a current industrial resource (e.g. a Programmable Logical Controller - PLC). Another resource is a compatible MQTT client, which may be an IoT device (e.g. a sensor). Finally, the last resource is a generic TCP/IP entity, which may be a piece of software communicating by sockets (e.g. a robot or an Enterprise Resource Planning - ERP system). In this test, all three resources were modelled as \emph{CamelArtifacts}.
To check the OPC-DA route, a numerical variable was created in the OPC server along with an observable property in the artifact. The OPC-DA component used is provided by an independent developer. The routes synchronized the value of the variable with the observable property. The MQTT client, using the supported component, had its routes to send and receive messages from an MQTT broker. Finally, a robot firmware was deployed in a virtual machine to simulate a cargo-moving robot. A generic TCP/IP route, using the supported Netty4 endpoint, was set to send and receive messages in a proprietary protocol.
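The sketch below summarizes how the three routes could be declared in one place. It is hedged: the OPC-DA URI format in particular is an assumption (the component is third-party), and all hosts, options and artifact names are placeholders.
\begin{verbatim}
import org.apache.camel.builder.RouteBuilder;

public class IndustrialRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // OPC-DA: keep the observable property in sync with the
        // server variable (URI format assumed for illustration).
        from("opc-da:plc?host=localhost")
            .to("artifact:counterArtifact");

        // MQTT: messages from the IoT sensor via a broker.
        from("mqtt:sensor?host=tcp://broker:1883"
                + "&subscribeTopicName=plant/sensor")
            .to("artifact:sensorArtifact");

        // Generic TCP/IP (Netty4): the robot's proprietary protocol.
        from("netty4:tcp://robot:5000?textline=true")
            .to("artifact:robotArtifact");
    }
}
\end{verbatim}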
\begin{figure}[ht]
\centerline
{
\includegraphics[width=0.32\textwidth]{camelArtifactIndustrialEssay.pdf}
}
\caption{MAS communicating with different resources.}
\label{fig:IndustrialEssay}
\end{figure}
The MAS was designed with two Jason agents, one of them responsible for creating the artifacts and for sharing with the other agent the relevant information coming from the OPC server, a counter. Interoperability was exercised when the agent used the counter from the OPC server to act on the MQTT sensor and the TCP/IP robot according to the variation of this information.
\section{Related work}\label{sec:Related}
Among related works we consider only MAS-based studies. \cite{Mzahm2013} coined the term Agent of Things (AoT), which is IoT with reasoning capabilities. Their approach suggests the \emph{agentification} of things to reach the required intelligence, benefits and costs. However, it does not exploit all the advantages of a multi-dimensional MAS, e.g. virtual and shareable entities accessible to autonomous actors. The \emph{agentification} tendency can be seen in other attempts to use MAS in the industry~\citep{Marik:2005:RAA:1082473.1082812,Yokogawa2000MachineryCS}, as in the research of \cite{Leitao2018}. The drawbacks of \emph{agentification} are the increase of complexity and the reduction of scalability.
\cite{Maturana1996} proposed a mediation and coordination tool for MAS, using mediator agents as manufacturing coordinators. Our proposal does not aim to put an autonomous entity in the middle; instead, it gives connectivity power to an A\&A MAS using a mainstream technology such as Camel. Following a similar idea, \cite{Olaru2013} developed an agent-based \emph{middleware} that creates a sub-layer of the application layer allowing agents to mediate communications. Later, other research addressed mainly the coordination and organisation challenges~\citep{BARBOSA201599,KOTAK200395} in manufacturing scenarios. It is important to notice that the mediation functions of these works are limited compared to Camel's features. These approaches also lack the environment support that \textsf{CArtAgO} provides.
\cite{TICHY2012846} used the Agent Development Environment (ADE) designed by Rockwell Automation, Inc. Besides the connectivity with common shop-floor devices (e.g. PLCs), this framework also supports the development of agents. They presented a conception allowing low- and high-level interaction, the latter performed by agents. The approach is an important effort by an industry supplier towards the requirements of the \emph{Smart Factory}. It also partly uses well-matured technology, which is crucial for industrial stakeholders~\citep{Leitao2018}. The limitation we see regards especially connectivity with all sorts of entities (e.g. IoT sensors and mobile devices, ERP and other software, etc.). Alternatively, the use of mature technologies can be achieved using proper Camel components to connect to industrial devices (e.g. using the Camel OPC-DA component).
\cite{Cranefield2013} developed a Camel component for \emph{Jason} agents. In this case the environment description does not follow A\&A concepts; instead, it is part of the agents' knowledge. In their work, the agents are empowered by all the Camel features we provide for \emph{CamelArtifacts}. Integration through the agent dimension alone brings two advantages: (i) the programmer does not need to learn about artifacts, and (ii) some elements are better modelled as agents, such as agents of another MAS. In contrast, it has two drawbacks: (i) agents have a higher computational demand than artifacts doing the same task, and (ii) it brings synchronization challenges, since agents spend time sharing information about the environment.
As far as we know, the only study that made use of the A\&A concept to build a MAS for industrial application in the Industry 4.0 context was proposed by \cite{RoloffSHSPH14}. The limitation of this study is that the integration relies on a single API for OPC communication; integration with other technologies needs further APIs, which requires more programming effort compared with our solution using Camel. Our proposal fills a gap between the mature Apache Camel framework, a comprehensive mediation tool, and artifacts, a first-class design entity.
\section{Conclusion}
In this paper, we discussed the interoperability challenge of the \emph{smart factory}. We showed that the Agents and Artifacts (A\&A) method is useful for modelling the factory, since this approach simplifies the design of non-autonomous entities and may give more scalability to the whole system. We presented the Camel framework as a mediation tool for integration with dozens of technologies used in industry. Our component, besides the Camel facilities, also has functions that allow different topologies to deal with message-rate requirements as well as resource limitations.
As future work we intend to change from the polling strategy to an event-driven strategy on the consumer and producer sides. We intend to work on the camel-agent~\citep{Cranefield2015} component, trying to give both the camel-artifact and camel-agent components similar and easy-to-use configuration interfaces. Finally, we think that Apache Camel with both components may be used as a Jason infrastructure, being the mediation tool among distributed agents and artifacts.
\section{Introduction}
This paper studies the existence of nicely structured objects in
(randomly) colored
random graphs. Our basic interest will be in what we call
\emph{zebraic} paths and
cycles.
We assume that the edges of a graph $G$ have been colored
black or white. A path
or cycle
will be called \emph{zebraic} if the edges alternate in color along
the path. We view
this as
a variation on the usual theme of \emph{rainbow} paths and
cycles that have been
well-studied. Rainbow Hamilton cycles in edge colored complete graphs were first studied in Erd\H{o}s, Ne\v{s}et\v{r}il and R\"odl \cite{ENR}. Colorings were constrained by the number of times, $k$, that an individual color could be used. Such a coloring is called $k$-bounded. They showed that allowing $k$ to be any constant, there was always a rainbow Hamilton cycle. Hahn and Thomassen \cite{HT}
were next to consider this problem and they showed that $k$ could grow as fast as $n^{1/3}$ and conjectured that the growth rate of $k$ could in fact be linear. In an unpublished work R\"{o}dl and Winkler \cite{W} in 1984 improved this to $n^{1/2}$. Frieze and Reed \cite{FR} improved this to $k=\Omega(n/\log n)$ and finally Albert, Frieze and Reed \cite{AFR} improved the bound on $k$ to $\Omega(n)$. In another line of research, Cooper and Frieze \cite{CF} discussed the existence of rainbow Hamilton cycles in the random graph $G^{(q})_{n,p}$ where each edge is independently and randomly given one of $q$ colors. They showed that if $p\geq \frac{21\log n}{n}$ and $q\geq 21n$ then with high probability (w.h.p.), i.e. probability $1-o_n(1)$, there is a rainbow colored Hamilton cycle. Frieze and Loh \cite{FL} improved this to $p\geq \frac{(1+o(1))\log n}{n}$ and $q\geq n+o(n)$. Ferber and Krivelevich \cite{FK15} improved it further to $p=\frac{\log n+\log\log n+\omega(n)}{n}$ and $q\geq n+o(n)$. Bal and Frieze \cite{BF15} considered the case $q=n$ and showed that $p\geq \frac{K\log n}{n}$ suffices for large enough $k$. Ferber and Krivelevich \cite{FK15} proved that if $p\gg \frac{\log n}{n}$ and $q=Cn$ colors are used, then $G_{n,p}$ contains w.h.p. $(1-o(1))np/2$ edge-disjoint rainbow Hamilton cycles, for $C=C(\varepsilon)>0$ large enough.
In this paper we study the existence of other colorings of paths and cycles.
Our first result does not at first sight fit into this framework.
Let $n$ be even
and let
$M_0$ be an arbitrary perfect matching of the complete graph $K_n$.
Now consider the
random graph
process $\{G_m\}=\{([n],E_m)\}$ where $E_m=\set{e_1,e_2,\ldots,e_m}$ is
obtained from $E_{m-1}$ by
adding a random
edge $e_m\notin E_{m-1}$, for
$m=0,1,\ldots, N=\binom{n}{2}$.
Let
$$\tau_1=\min\set{m:\delta(G_m)\geq 1}\,,$$
where $\delta$ denotes minimum
degree. Then let
$$\tau_H=\min\set{m:G_m\text{ contains a Hamilton cycle }
H\supseteq M_0}.$$
\begin{theorem}\label{th1}
$\tau_1=\tau_H$ w.h.p.
\end{theorem}
In actual fact there are two slightly different versions. One
where we insist that $M_0\cap E_m=\emptyset$ and one where
$E_m$ is chosen completely independently of $M_0$. The theorem holds in both
cases.
\medskip
We note that Robinson and Wormald \cite{RW} considered a
similar problem with respect to random regular graphs.
They showed that one can choose $o(n^{1/2})$ edges at random,
orient them and then w.h.p. there will be a Hamilton cycle
containing these edges and following the orientations.
Theorem \ref{th1}
has an easy corollary that fits our initial description.
Let $\{G^{(r)}_m\}$ be an
$r$-colored version of the graph process. This means that $G^{(r)}_m$ is
obtained
from $G^{(r)}_{m-1}$
by adding a random edge and then giving it a random color from $[r]$.
Let $E_{m,i}$ denote the edges of color $i$ for $i=1,2,\ldots,r$.
When $r=2$ denote the
colors by $black$ and $white$ and let $E_{m,b}=E_{m,1},E_{m,w}=E_{m,2}$.
Then let $G^{(b)}_m$ be the
subgraph of
$G^{(2)}_m$ induced by the
black edges and let $G^{(w)}_m$ be the subgraph induced by the white edges. Let
$$\tau_{1,1}=\min\set{m:\delta(G^{(b)}_m),\delta(G^{(w)}_m)\geq 1}\,,$$
and let
$$\tau_{ZH}=\min\set{m:G_m\text{ contains a zebraic Hamilton
cycle}}.$$
\begin{corollary}\label{cor1}
$\tau_{1,1}=\tau_{ZH}$ w.h.p.
\end{corollary}
Our next result is a zebraic analogue of \emph{rainbow connection}.
For
a connected graph $G$, its rainbow connection
$rc(G)$ is the minimum number $r$ of colors needed for the
following to
hold: The edges of $G$ can be $r$-colored
so that every pair of vertices is connected by a rainbow path,
i.e. a path in which no color is repeated.
Recently,
there has been interest in estimating this parameter
for various classes of graph, including random graphs (see, e.g., \cite{FT12, HR12, KKS15}). By analogy,
we say that a two-coloring of a connected graph provides a \emph{zebraic connection} if
there is a zebraic path joining every pair of vertices.
\begin{theorem}\label{th4}
At time $\tau_1$, a random black-white coloring of $G_{\tau_1}$ provides a
zebraic connection, w.h.p.
\end{theorem}
We consider now how we can extend our results to more
than two colors. Suppose we have $r$ colors $[r]$ and that $r\mid n$.
We would like to consider the existence of Hamilton cycles where
the $i$th edge has color $(i\mod r)+1$. Call such a
cycle \emph{$r$-zebraic}. Our result for this case is not as tight as
for the case of two colors. We are not able to prove a hitting time version.
We will instead satisfy ourselves with a result for $G_{n,p}^{(r)}$.
Let
$$p_r=\frac{r}{\alpha_r}\frac{\log n}{n}$$
where
$$\alpha_r=
\rdup{\frac{r}{2}}.
$$
\begin{theorem}\label{th5}
Let $\varepsilon>0$ be an arbitrary positive constant.
$$\lim_{n\to\infty}\Pr(G_{n,p}^{(r)}\text{ contains an $r$-zebraic Hamilton
cycle})=\begin{cases}
0&p\leq (1-\varepsilon)p_r\\1&p\geq (1+\varepsilon)p_r
\end{cases}.
$$
\end{theorem}
The proofs of Theorems \ref{th1}--\ref{th5} will be given in Sections \ref{th1proof}--\ref{th5proof}.
Here and in the rest of the paper all logarithms will have base $e$ unless
explicitly stated otherwise.
\section{Notation}
For a graph $G=(V,E)$ and $S,T\subseteq V$ we let $e_G(S)$ denote the number of edges contained in $S$, $e_G(S,T)$ denote the number of edges with one end in $S$ and the other in $T$ and $N_G(S)$ denote the set of neighbors of $S$ that are not in $S$.
We will use certain values throughout our proofs. We list most of them here for easy reference. The reader is encouraged to skip reading this section and to just refer back as necessary.
\begin{align*}
&t_0=\frac{n}{2}(\log n -2\log\log n)\text{ and }
t_1=\frac{n}{2}(\log n +2\log\log n)\\
&t_2=\frac{t_0}{10}\text{ and }t_3=\frac{t_0}{5}\text{ and }t_4=\frac{9t_0}{10}.\\
&\zeta_i=t_i-t_{i-1}\text{ for }i=3,4.\\
&p_i=\frac{t_i}{\binom{n}{2}},\,i=0,1,2.\\
&n_0=\frac{n}{\log^2n}\text{ and }n_0'=\frac{n_0}{\log^4n}\text{ and }
n_1=\frac{n}{10\log n}.\\
&n_b=\frac{n\log\log\log n}{\log\log n}\text{ and }n_c =\frac{200n}{\log n}.\\
&L_0=\frac{\log n}{100}\text{ and }L_1=\frac{\log n}{\log\log n}.\\
&\ell_0=\frac{\log n}{200}\text{ and }\ell_1=\frac{2\log n}{3\log\log n}\text{ and }
\nu_L=\ell_0^{\ell_1}=n^{2/3+o(1)}.
\end{align*}
The following graphs and sets of vertices are used.
\begin{align*}
&\Psi_0=G_{t_2}.\\
&V_0=\set{v\in [n]:d_{\Psi_0}(v)\leq L_0}.\\
&\Psi_1=\Psi_0\cup\set{e\in E_{t_1}\setminus E_{t_2}:e\cap V_0\neq\emptyset}.\\
&V_\lambda=\set{v\in [n]:\;v\text{ is large}}.\\
&V_\sigma=[n]\setminus V_{\lambda}.\\
&E_B=\set{e\in E_{t_4}\setminus E_{t_3}: e\cap V_0=\emptyset}.\\
&V_\tau=\set{v\in [n]\setminus V_0:\deg_{E_B}(v)\leq L_0}.
\end{align*}
The definition of ``large'' depends on which theorem we are proving.
\section{Probabilistic Inequalities}
We will need standard estimates on the tails of various random variables.
{\bf Chernoff Bounds:} Let $B(n,p)$ denote the binomial random variable where $n$ is the number of trials and $p$ is the probability of success.
\begin{align}
&\Pr(|B(n,p)-np|\geq \varepsilon np)\leq 2e^{-\varepsilon^2np/3}\qquad\text{for }0\leq\varepsilon\leq 1.\label{chern1}\\
&\Pr(B(n,p)\geq anp)\leq \bfrac{e}{a}^{anp}\qquad\text{for }a>0.\label{chern2}
\end{align}
For proofs, see the appendix of Alon and Spencer \cite{AS}.
{\bf McDiarmid's Inequality:} Let $Z=Z(Y_1,Y_2,\ldots,Y_n)$ be a random variable, where $Y_1,Y_2,\ldots,Y_n$ are independent random variables. Suppose that
$$|Z(Y_1,\ldots,Y_{i-1},Y_i,Y_{i+1},\ldots,Y_n)-Z(Y_1,\ldots,Y_{i-1},\widehat{Y}_i,Y_{i+1},\ldots,Y_n)|\leq c_i$$
for all $Y_1,Y_2,\ldots,Y_n,\widehat{Y}_i$ and $1\leq i\leq n$. Then
\beq{mcd}
\Pr(|Z-\expect(Z)|\geq t)\leq \exp\set{-\frac{t^2}{c_1^2+c_2^2+\cdots+c_n^2}}.
\end{equation}
For a proof see for example \cite{AS}, \cite{Book}, \cite{FK15}, or \cite{JLR}.
\section{Proof of Theorem \ref{th1}}\label{th1proof}
\subsection{Outline of proof}
It is well known (see for example \cite{Book}, \cite{FK15}, \cite{JLR}) that
w.h.p.
we have $t_0\leq \tau_1\leq t_1$.
Our strategy for proving Theorem \ref{th1} is broadly in line with the 3-phase algorithm described in \cite{CF1}.
\begin{description}
\item[(a)]
We will take the first $t_2$ edges plus all the edges incident to vertices that have a low degree in $G_{t_2}$. We argue that w.h.p. this contains a perfect matching $M_1$ that is disjoint from $M_0$. The union of $M_0,M_1$ will then have $O(\log n)$ components w.h.p.
\item[(b)] $M_0\cup M_1$ induces a 2-factor made up of alternating cycles. We then use about $t_3$ edges to make the minimum cycle length $\Omega(n/\log n)$.
\item[(c)] We then create a Hamilton cycle containing $M_0$, where we use the final $\approx t_2$ edges to close cycles, applying a second moment calculation.
\end{description}
We are working in a different model to that in \cite{CF1} and there are many more conditioning problems to be overcome. For example, in \cite{CF1}, it is very easy to show that the random digraph $D_{3-in,3-out}$ contains a set of $O(\log n)$ vertex disjoint cycles that contain all vertices. Here we have to build a perfect matching $M_1$ from scratch and avoid several conditioning problems. The same is true for (b) and (c). The broad strategy is the same, but the details are quite different.
\subsection{Phase 1: Building $M_1$}
We begin with $\Psi_0=G_{t_2}$. Then let $V_0$ denote the set of vertices that have degree at most $L_0$
in $\Psi_0$. Now create $\Psi_1=([n],E_1)$ by adding those edges in $E_{t_1}\setminus E_{t_2}$ that are incident with $V_0$. We argue that w.h.p. $\Psi_1$ is a random graph with minimum degree one in which almost all vertices have degree $\Omega(\log n)$. Furthermore, we will show that w.h.p. $\Psi_1$ is an expander, and then it will not be difficult to show that it contains the required perfect matching $M_1$.
Let a vertex be \emph{large} if its degree in $G_{t_1}$ is at
least $L_0$
and \emph{small} otherwise. Let $V_\lambda$ denote the set of
large vertices and
let $V_\sigma$ denote the set of small vertices.
The calculations for the next lemma will simplify if we observe
the following: Suppose that $m=Np$. It is known that for any monotone
increasing property of graphs
\beq{mon}
\Pr(G_m\in \mathcal{P})\leq 3\Pr(G_{n,p}\in \mathcal{P}).
\end{equation}
In general we have for not necessarily monotone properties:
\beq{notmon}
\Pr(G_m\in \mathcal{P})\leq 3m^{1/2}\Pr(G_{n,p}\in \mathcal{P}).
\end{equation}
For proofs of \eqref{mon}, \eqref{notmon} see Bollob\'as \cite{Book}
or Frieze and Karo\'nski \cite{FK15} or Janson, {\L}uczak and Ruci\'nski \cite{JLR}.
The properties in the next lemma will be used to show that w.h.p. $\Psi_1$ is an expander. For technical reasons, we require the failure probabilities to be $O(n^{-0.51})$. Precisely, this is still $o(1)$ even after inflating by $n^{1/2+o(1)}$ and this will mean that the lower bound proved in \eqref{(a)} is large enough so that for any relevant event $A$ we can use a crude estimate $\Pr(A\mid B)\leq \Pr(A)/\Pr(B)$ to handle conditioning on the event $B$ described in \eqref{(a)}.
\begin{lemma}\label{lem0}
The following hold with probability $1-O(n^{-0.51})$:
\begin{description}
\item[(a)] $|V_0|\leq n^{11/12}$.
\item[(b)] If $x,y\in V_\sigma$ then the distance between them in $G_{t_1}$
is at least 10.
\item[(c)] If $S\subseteq [n]$ and $|S|\leq n_0$ then
$e_{G_{t_1}}(S)\leq 10|S|$.
\item[(d)] If $S\subseteq [n]$ and $|S|=s\in [n_0',n_1]$
then $|N_{\Psi_0}(S)|\geq s\log n/25$.
\item[(e)] No cycle of length 4 in $G_{t_1}$ contains a small vertex.
\item[(f)] The maximum degree in $G_{t_1}$ is less than $10\log n$.
\end{description}
\end{lemma}
{\noindent \bf Proof\hspace{2em}}
(a)
Suppose that the sequence $x_1,x_2,\ldots,x_{2t_2}$ is chosen randomly from
$[n]^{2t_2}$ and we let $\Gamma_{t_2}$ denote the multigraph with edge-set
$(x_{2i-1},x_{2i}),\,i=1,2,\ldots,t_2$. After we remove repeated edges
and loops we can couple what remains
with a subgraph $H$ of $G_{t_2}$. Let $Z_1$ denote the
number of loops and let $Z_2$ denote the number of repeated edges in $\Gamma_{t_2}$.
Let $V_0'$ denote the set of vertices of degree at most $L_0$
in $\Gamma_{t_2}$.
Then $|V_0|\leq Z_1+2Z_2+|V_0'|$. This is because if $v\in V_0
\setminus V_0'$ then it must lie in a loop or a multiple edge.
Now $Z_1$ is distributed as $\ensuremath{\operatorname{Bin}}(t_2,1/n)$ and then the Chernoff bound \eqref{chern2} implies that
\beq{Z1}
\Pr(Z_1\geq \log^2n)\leq e^{-\log^2n}.
\end{equation}
We are doing more than usual here, because we need probability $o(n^{-0.51})$, rather than just probability $o(1)$.
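To see how \eqref{chern2} yields \eqref{Z1}, note that here $np=t_2/n\sim\frac{\log n}{20}$, so choosing $a$ with $anp=\log^2n$ gives $a\sim 20\log n$ and
$$\Pr(Z_1\geq \log^2n)\leq \bfrac{e}{a}^{anp}=\exp\set{-(\log a-1)\log^2n}\leq e^{-\log^2n},$$
once $n$ is large enough that $\log a\geq 2$.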
Now $Z_2$ is dominated by $\ensuremath{\operatorname{Bin}}(t_2,t_2/N)$ and then the Chernoff bound \eqref{chern2} implies that
\beq{Z2}
\Pr(Z_2\geq \log^3n)\leq e^{-\log^3n}.
\end{equation}
Now,
\begin{align*}
\Pr\brac{v\in V_0'} & \leq \sum_{k=0}^{L_0}
\binom{2t_2 }{ k}n^{-k} \brac{1-\frac1n}^{2t_2-k} \\
& \leq \ 2\binom{2t_2 }{ L_0}n^{-L_0} e^{-(2t_2-L_0)/n} \\
& \leq \ 2\left(\frac{2et_2}{nL_0}\right)^{L_0} n^{-1/10 + o(1)} \\
& \leq n^{-1/11}.
\end{align*}
It follows, that $\operatorname{\bf E}(|V_0'|)\leq n^{10/11}$.
We now use inequality \eqref{mcd} to finish the proof. Indeed, changing
one of the $x_i$'s can change $|V_0'|$ by at most one. Hence, for any $u>0$,
$$\Pr(|V_0'|\geq \expect(|V_0'|)+u)\leq \exp\set{-\frac{u^2}{2t_2}}.$$
Putting $u=n^{4/7}$ into the above and using \eqref{Z1}, \eqref{Z2} finishes the
proof of (a).
(b)
We do not have room to apply \eqref{notmon} here. We need the inequality
\beq{binom}
\frac{\binom{N-a}{t-b}}{\binom{N}{t}}\leq \bfrac{t}{N}^b\bfrac{N-t}{N-b}^{a-b}
\end{equation}
for $b\le a\le t\le N$. Verification of \eqref{binom} is straightforward and can be found for example in Chapter 21.1 of \cite{FK15}. We will now and again use the notation $A\leq_b B$ in place
of $A=O(B)$ when it suits our aesthetic taste.
\begin{align*}
\Pr(\exists\ x,y) & \leq \sum_{k=2}^{11}
\binom{n }{ k} k! \sum_{\ell_1,\ell_2=0}^{L_0}\binom{n-k}{\ell_1}\binom{n-k}{\ell_2}
\frac{\binom{N-(2n-k+1)}{t_1-k+1-\ell_1-\ell_2}}{\binom{N}{t_1}}\\
& \leq_b \sum_{k=2}^{11}n^k \sum_{\ell_1,\ell_2=0}^{L_0}\bfrac{ne}{\ell_1}^{\ell_1}
\bfrac{ne}{\ell_2}^{\ell_2}\bfrac{t_1}{N}^{\ell_1+\ell_2+k-1}
\bfrac{N-t_1}{N-(\ell_1+\ell_2+k-1)}^{2n-(\ell_1+\ell_2+2k-2)} \\
& \leq_b n\sum_{k=2}^{11}\log^{k-1}n \sum_{\ell_1,\ell_2=0}^{L_0}
\bfrac{3\log n}{\ell_1}^{\ell_1}\bfrac{3\log n}{\ell_2}^{\ell_2}n^{-2+o(1)}\\
&=o(n^{-0.51}).
\end{align*}
(c) We can use \eqref{mon} here with $p_1=t_1/N$, where $N=\binom{n}{2}$. If $s = |S|$, then in $G_{n,p_1}$,
$$\Pr(e_{G_{t_1}}(S) > 10|S|) \leq \binom{\binom{s}{2}}{10s}p_1^{10s}
\leq \brac{\frac{s^2e}{20s}\cdot \frac{\log n+2\log\log n}{n-1}}^{10s}\leq\bfrac{s\log n}{n}^{10s} .$$
So,
$$\Pr(\exists\ S)\leq \sum_{s=10}^{n_0}\binom{n}{s}\bfrac{s\log n}{n}^{10s}
\leq \sum_{s=10}^{n_0}\bfrac{ne}{s}^s\bfrac{s\log n}{n}^{10s}
= \sum_{s=10}^{n_0}\brac{e\bfrac {s}{n}^9\log^{10}n}^s=o(n^{-0.51}).$$
(d) We can use \eqref{mon} here with
$p_2=t_2/N$.
For $v\in V$, $\Pr(v\in N(S))=1-(1-p_2)^s \ge\frac{sp_2}{2}$
for $s\leq n_1$. So $|N(S)|$
stochastically dominates
$\ensuremath{\operatorname{Bin}}(n-s, \frac{sp_2}{2})$.
Now $(n-s)\frac{sp_2}{2}\sim \frac{s\log n}{20}$ and so using the Chernoff bound \eqref{chern1} with $\varepsilon\sim1/5$,
$$\Pr(|N_{\Psi_0}(S)|< s\log n/25) \leq e^{-s\log n/1001}.$$
So,
$$\Pr(\exists\ S)\leq \sum_{s=n_0'}^{n_1}\binom{n}{s}e^{-s\log n/1001}
\leq \sum_{s=n_0'}^{n_1}\brac{\frac{ne}{s}\cdot n^{-1/1001}}^s=o(n^{-0.51}).$$
(e) The expected number of such cycles is bounded by
\begin{align*}
\binom{n}{4}\frac{3!}{2}\sum_{k=0}^{L_0}4\binom{n-4}{k}
\frac{\binom{N-n-3}{t_1-4-k}}{\binom{N}{t_1}}
&\leq n^4\sum_{k=0}^{L_0}\bfrac{ne}{k}^k
\bfrac{t_1}{N}^{k+4}\bfrac{N-t_1}{N-k-4}^{n-k-1}\\
&\leq_b \log^4n \sum_{k=0}^{L_0}\bfrac{e^{1+o(1)}\log n}{k}^k n^{-1+o(1)}\\
&=o(n^{-0.51}).
\end{align*}
(f) We apply \eqref{mon} with $p_1=t_1/N$ and find that the probability of having a vertex of degree exceeding
$10\log n$ is at most
$$3n\binom{n-1}{10\log n}\bfrac{\log n+2\log\log n}{n-1}^{10\log n}
\leq 3n\bfrac{e^{1+o(1)}}{10}^{10\log n}=o(n^{-0.51}).$$
\hspace*{\fill}\mbox{$\Box$}\\ \medskip
We will sometimes use (f) without comment in what follows.
Lemma \ref{lem0} implies the following:
\begin{lemma}
With probability $1-o(n^{-0.51})$,
\beq{eq1}
S\subseteq [n]\text{ and }|S|\leq n/2000\text{ implies }
|N_{\Psi_1}(S)|\geq |S|\text{ in }
\Psi_1,
\end{equation}
\end{lemma}
{\noindent \bf Proof\hspace{2em}}
Assume that the conditions described in Lemma \ref{lem0} hold.
Let $N(S)=N_{\Psi_1}(S)$.
We first argue that if $S\subseteq V_\lambda$ and $|S|\leq n/2000$ then
\beq{largeS}
|N(S)|\geq 4|S|.
\end{equation}
{From} the lemma, we only have to concern ourselves with $|S|\leq n_0'$
or $|S|\in [n_1,n/2000]$.
If $|S|\leq n_0'$ and $T=N(S)$ then in $\Psi_1$ we have, using Lemma \ref{lem0}(f),
\beq{cof}
e(S\cup T)\geq \frac{|S|\log n}{200}\text{ and }|S\cup T|\leq |S|\brac{1+10\log n}\leq n_0.
\end{equation}
It is important to note that to obtain \eqref{cof}
we use the fact that vertices in $V_0\setminus V_\sigma$
are given all their edges in $\Psi_1$.
Equation \eqref{cof} and Lemma \ref{lem0}(c)
imply that $\frac{|S|\log n}{200}\leq 10|S\cup T|$ and so \eqref{largeS}
holds with room to spare.
If $|S|\in [n_1,n/2000]$ then we choose $S'\subseteq S$ where $|S'|=n_1$ and
use
$$|N(S)|\geq |N(S')|-|S|\geq \frac{\log n}{25}\cdot \frac{200|S|}{\log n}-|S|.$$
This yields \eqref{largeS}, again with room to spare.
Now let $S_0=S\cap V_\sigma$ and $S_1=S\setminus S_0$. Then we have
\beq{N(S)}
|N(S)|\geq |N(S_0)|+|N(S_1)|-|N(S_0)\cap S_1|-|N(S_1)\cap S_0|-|N(S_0)\cap N(S_1)|.
\end{equation}
But $|N(S_0)|\geq |S_0|$. This follows from (i) $\Psi_1$ has no isolated vertices,
and (ii) Lemma \ref{lem0}(b) means that $S_0$ is an independent set and no two
vertices in $S_0$ have a common neighbor. Equation \eqref{largeS} implies that
$|N(S_1)|\geq 4|S_1|$. We next observe that trivially, $|N(S_0)\cap S_1|\leq |S_1|$.
Then we have
$|N(S_1)\cap S_0|\leq |S_1|$, for otherwise some vertex in $S_1$ has two neighbors
in $S_0$, contradicting Lemma \ref{lem0}(b).
Finally, we also have $|N(S_0)\cap N(S_1)|\leq |S_1|$.
If for a vertex in $S_1$ there are
two distinct paths of length two to $S_0$ then we violate one of
the conditions -- Lemma \ref{lem0}(b) or (e).
So, from \eqref{N(S)} we have
$$|N(S)|\geq |S_0|+4|S_1|-|S_1|-|S_1|-|S_1|= |S|.$$
\hspace*{\fill}\mbox{$\Box$}\\ \medskip
Next let $G=(V,E)$ be a graph with an even number of vertices
that does not contain a perfect matching. Let
$v$ be a vertex not covered by some maximum matching, and let
$$A_G(v)=\set{w:\exists \text{ a maximum matching of $G$ that covers neither $v$ nor $w$}}.$$
\begin{lemma}\label{lem1}
If $A=A_G(v)$ for some $v,G$, then $|N_G(A)|<|A|$.
\end{lemma}
{\noindent \bf Proof\hspace{2em}}
Let $v\in V$, let $M$ be a maximum matching that isolates
$v$, and let $S_0\neq \emptyset$ be the set of vertices, other than
$v$, that are isolated by $M$. Let $S_1$
be the set of vertices reachable from $S_0$ by a non-empty even length
alternating path with respect to $M$. Note that $S_1\cap S_0=\emptyset$.
Let $x\in N_G(S_1)$ and let
$y\in S_1$ be a neighbor of $x$. Then
$x$ is covered by $M$, as otherwise we can get a larger matching
by using an alternating path from $v$ to $y$, and then the
edge $\set{y,x}$.
Let $y_1$ satisfy $\set{x,y_1}\in M$. We show that $y_1\in S_1$ and this
implies that $|N_G(S_1)|\leq |S_1|$ as $M$ defines a mapping $x\to y_1$
of $N_G(S_1)$ into $S_1$. Let $P$ be
an even length alternating path from $S_0$ terminating at
$y$. If $P$ contains $\set{x,y_1}$ we can truncate it to terminate
with $\set{x,y_1}$, otherwise we can extend it using edges $\set{y,x}$
and $\set{x,y_1}$.
Finally, observe that $A(v)=S_0\cup S_1$ and $N_G(S_0)\subseteq S_1\cup N_G(S_1)$.
\hspace*{\fill}\mbox{$\Box$}\\ \medskip
Now consider the edge set
$$E_A=E_{t_3}\setminus
E(\Psi_1)=\{f_1,f_2,\ldots,f_\rho\},$$
where with probability $1-o(n^{-0.51})$ we have
$$\zeta_3\geq \rho\geq \zeta_3-10n^{11/12}\log n\sim\frac{n\log n}{20}.$$
\begin{lemma}\label{lem4}
Given $\rho$, $E_A$ is a random $\rho$-subset of
$\binom{W}{2}$, where $W=[n]\setminus V_0$.
\end{lemma}
{\noindent \bf Proof\hspace{2em}}
This follows from the fact that if we remove any
$f_i$ and replace it with
any other edge from $\binom{[n]\setminus V_0}{2}$
then $V_0$ is unaffected.
\hspace*{\fill}\mbox{$\Box$}\\ \medskip
Now consider the sequence of graphs
$H_0=\Psi_1,H_1,\ldots,H_\rho$ where $H_i$ is
obtained from $H_{i-1}$ by adding the edge $f_i$.
We claim that if $\mu_i$ denotes the
size of a largest matching in $H_i$, then
\beq{match}
\Pr(\mu_i= \mu_{i-1}+1\mid \mu_{i-1}<n/2,\,f_1,
\ldots,f_{i-1},\,(\Psi_1\text{ satisfies \eqref{eq1}}))\geq 10^{-7}.
\end{equation}
To see this, let $M_{i-1}$ be a matching of size
$\mu_{i-1}$ in $H_{i-1}$ and suppose that $v$ is a
vertex not covered by
$M_{i-1}$. It follows from \eqref{eq1} and Lemma \ref{lem1}
that if $A_{H_{i-1}}(v)=\set{g_1,g_2,\ldots g_r}$ then
$r\geq n/2000$.
Now consider the pairs
$\{g_j,x\},\,j=1,\ldots,r,\,x\in A_{H_{i-1}}(g_j)$.
There are at least $\binom{n/2000}{2}$ such pairs and if
$f_i$ lies in this collection, then $\mu_i=\mu_{i-1}+1$.
Equation \eqref{match} follows from this and the fact
that $E_A$ is a random
set. In fact, given the condition in Lemma \ref{lem0}(a)
and a maximum degree of at most $10\log n$ in $G_{t_1}$,
the probability in question is at least
$$\frac{\binom{n/2000}{2}-10n^{11/12}\log n-\rho}{\binom{n}{2}}> 10^{-7}.$$
It follows from \eqref{match} that
\beq{eq2}
\Pr(H_\mu\text{ has no perfect matching})\leq o(n^{-0.51})+\Pr\brac{\ensuremath{\operatorname{Bin}}\brac{\rho,10^{-7}}\leq n/2}= o(n^{-0.51}).
\end{equation}
So with probability $1-o(n^{-0.51})$, $\Psi_2=H_\rho$ has a perfect matching $M_1$.
\begin{remark}
$M_1$ is uniformly random, independently of $M_0$, and so the inclusion-exclusion formula gives
\beq{inc}
\Pr(M_0\cap M_1=\emptyset)= \sum_{i=0}^{n/2}(-1)^i \binom{n/2}{i} \frac{(n-2i)!}{(n/2-i)!2^{n/2-i}} \frac{2^{n/2}(n/2)!}{n!}.
\end{equation}
Here we use the fact that there are $(2m)!/(m!2^m)$ perfect matchings in $K_{2m}$.
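For example, $K_4$ has $4!/(2!\,2^2)=3$ perfect matchings, corresponding to the three ways of pairing up four vertices.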
Now if $u_i$ denotes the $i$-th summand in \eqref{inc} then we have $u_0=1$ and
$$\frac{|u_{i+1}|}{|u_i|}=\frac{1}{2(i+1)}\brac{1+\frac{1}{n-2i-1}}.$$
So if $m=\Theta(\log n)$ say, then by the Bonferroni inequalities,
\begin{align*}
\Pr(M_0\cap M_1=\emptyset)&\geq \sum_{i=0}^{2m+1}u_i\\
&=\sum_{i=0}^{2m+1}(-1)^i\frac{1}{2^ii!}+O\bfrac{\log n}{n}\\
&=e^{-1/2}+o(1).
\end{align*}
\medskip
It follows that $M_1$ exists with probability $1-o(n^{-0.51})$, even if we insist
that it be disjoint from $M_0$. Indeed, conditioning on
$M_0\cap M_1=\emptyset$ can only increase the probability
of some ``unlikely'' event by a factor of at most $e^{1/2}+o(1)$.
\end{remark}
We will need the following properties of the 2-factor
$\Pi_0=M_0\cup M_1$.
\begin{lemma}\label{lem2}
The following hold with probability $1-o(n^{-0.51})$:
\begin{description}
\item[(a)] $M_0\cup M_1$ has at most $10\log_2n$ components.
\item[(b)] There are at most $n_b$ vertices in
components
of size at most $n_c$.
\end{description}
\end{lemma}
{\noindent \bf Proof\hspace{2em}}
(a) Following the argument in \cite{FLu} we note that if $C$ is the cycle
of $M_0\cup M_1$ that contains vertex 1 then
\beq{eq12}
\Pr(|C|=2k)< \prod_{i=1}^{k-1}\bfrac{n-2i}{n-2i+1}\frac{1}{n-2k+1}<\frac{1}{n-2k+1}.
\end{equation}
Indeed, consider $M_0$-edge $\set{1=i_1,i_2}\in C$ containing vertex 1. Let $\set{i_2,i_3}\in C$ be the $M_1$-edge containing $i_2$. Then $\Pr(i_3\neq 1)=\frac{n-2}{n-1}$. Assume $i_3\neq 1$ and let $\set{i_3,i_4\neq 1}\in C$ be the $M_0$ edge containing $i_3$. Let $\set{i_4,i_5}\in C$ be the $M_1$-edge containing $i_4$. Then $\Pr(i_5\neq 1)=\frac{n-4}{n-3}$ and so on.
Having chosen $C$, the remaining cycles come from the union of two (random)
matchings on the complete graph $K_{n-|C|}$. It follows from this, by summing \eqref{eq12} over $k\leq n/4$ that
$$\Pr(|C|<n/2)\leq \sum_{k=1}^{n/4}\frac{1}{n-2k+1}\leq \frac{n}{4}\times \frac2{n}=\frac12.$$
Hence,
$$\Pr(\neg(a))\leq \Pr(Bin(10\log_2n,1/2)\leq \log_2n)=
2^{-10\log_2n}\sum_{i=0}^{\log_2n}\binom{10\log_2n}{i}\leq 2^{-5\log_2n}=o(n^{-0.51}).$$
(b) It follows from \eqref{eq12} that
$$\Pr(|C|\leq n_c)\leq \frac{201}{\log n}.$$
If we generate cycle sizes as in (a) then up until there are fewer than $n_b/2$
vertices left, $\log\nu\sim \log n$ where $\nu$ is the number of vertices that
need to be partitioned into cycles. It follows that the probability we generate
more than $k=\frac{\log\log\log n\times \log n}{1000\log\log n}$ cycles of size
at most $n_c$ up to this time is bounded by
$$o(n^{-0.51})+\Pr\brac{Bin\brac{10\log_2n,\frac{201}{\log n}}\geq
k}\leq o(n^{-0.51})+ \bfrac{3000e}{k}^k=o(n^{-0.51}).
$$
Thus with probability $1-o(n^{-0.51})$, we have at most
$$\frac{n_b}{2}+kn_c\leq n_b$$
vertices on cycles of length at most $n_c$.
\hspace*{\fill}\mbox{$\Box$}\\ \medskip
\subsection{Phase 2: Increasing minimum cycle length}\label{ics}
In this section, we will use the edges in
$$E_B=\set{e\in E_{t_4}\setminus E_{t_3}:e\cap V_0=\emptyset}$$
to create a 2-factor that contains $M_0$ and in which each cycle has length at least $n_c$. Note that
$$E_B\cap E(\Psi_1)=\emptyset.$$
We eliminate the small cycles (of length less than $n_c$) one by one (more or less). Let $C$ be a small cycle. We remove an edge $\set{u_0,v_0}\notin M_0$ of $C$. We then try to join $u_0,v_0$ by a sufficiently long $M_0$ alternating path $P$ that begins and ends with edges not in $M_0$. This is done in such a way that the resulting 2-factor contains $M_0$ but has at least one less small cycle. The search for $P$ is done in a breadth first manner from both ends, creating $n^{2/3+o(1)}$ paths that begin at $v_0$ and another $n^{2/3+o(1)}$ paths that end at $u_0$. We then argue that with sufficient probability, we can find a pair of paths that can be joined by an edge from $E_B$ to create the required alternating path.
We proceed to a detailed description. Let
$$V_\tau=\set{v\in [n]\setminus V_0:\deg_{E_B}(v)\leq L_0},$$
where for a set of edges $X$ and a vertex $x$,
$\deg_X(x)$ is the number of edges in $X$ that are
incident with $x$.
\begin{lemma}\label{lem3}
The following hold with probability $1-o(n^{-0.51})$:
\begin{description}
\item[(a)] $|V_\tau|\leq n^{2/5}$.
\item[(b)] No vertex has 10 or more $G_{t_1}$ neighbors in $V_\tau$.
\item[(c)]
If $C$ is a cycle with $|C|\leq n_c$ then $|C\cap V_\tau|\leq |C|/200$ in $G_{t_1}$.
\end{description}
\end{lemma}
{\noindent \bf Proof\hspace{2em}}
(a) We follow a similar argument to that in Lemma \ref{lem0}(a). We condition on $|V_0|\leq n^{11/12}$ and maximum degree $10\log n$ in $G_{t_0}$ and generate a random sequence from $[n-n^{11/12}]^{7t_0/5-20n^{11/12}\log n}$. The argument is now almost identical to that in Lemma \ref{lem0}(a).
(b) This time we can condition on $\nu=n-|V_0|$ and $\mu=|\set{e\in E_{t_4}\setminus E_{t_3} :e\cap V_0\neq \emptyset}|\leq n^{11/12}\times 10\log n$. We write $$\Pr(v\text{ violates (b)})\leq
\sum_{S\in\binom{[n-1]}{10}}\Pr({\cal A}(v,S))\Pr({\cal B}(v,S)\mid {\cal A}(v,S))
$$
where
\begin{align*}
{\cal A}(v,S)&=\set{N(v)\supseteq S,
\text{ in }G_{t_1}},\\
{\cal B}(v,S)&=\set{w\text{ has at most $L_0$ $E_B$-neighbors in }[n]\setminus
(S\cup\set{v}),\forall w\in S}.
\end{align*}
Applying \eqref{mon}
we see that $\Pr({\cal A}(v,S))\leq 3\binom{n}{10}p_1^{10}$
and then using \eqref{mon} with
\beq{p}
p=\frac{t_4-t_3-\mu}{\binom{\nu}{2}}\sim \frac{7\log n}{10n}
\end{equation}
we see that
$$\Pr({\cal B}(v,S)\mid {\cal A}(v,S))\leq 3\brac{\sum_{k=0}^{L_0}
\binom{\nu-11}{k}p^k(1-p)^{\nu-11-k}}^{10}$$
and so
\begin{align*}
\Pr(v\text{ violates (b)})
& \leq_b \binom{n}{10}p_1^{10}\brac{\sum_{k=0}^{L_0}
\binom{\nu-11}{k}p^k(1-p)^{\nu-11-k}}^{10}\\
& \leq (e^{o(1)}\log n\cdot n^{1/10-7/10+o(1)})^{10}\\
&=o(n^{-5}).
\end{align*}
Now use the Markov inequality.
(c) Let $Z$ denote the number of cycles violating the required property. Using \eqref{mon} and $\nu$ as in (b) and $p$ as in \eqref{p}, we have
\begin{align*}
\operatorname{\bf E}(Z) & \leq_b
\sum_{k=3}^{n_c} \binom{n }{ k}k!p_1^k
\binom{k}{\rdup{\frac{k}{200}}}
\brac{\sum_{\ell=0}^{L_0}
\binom{\nu-k}{\ell}p^\ell(1-p)^{\nu-\ell}}^{\rdup{k/200}}\\
& \leq \sum_{k=3}^{n_c} (2n)^k \bfrac{\log n+2\log\log n}{n-1}^k n^{-\rdup{k/200}/2}\\
& = o(n^{-0.51}).
\end{align*}
\hspace*{\fill}\mbox{$\Box$}\\ \medskip
Now consider the distribution of the edges in $E_B$.
\begin{lemma}\label{lem5}
Let $V_1=[n]\setminus V_0$
and $A\subseteq\binom{V_1}{2}$ with $|A|=a=O(\log n)$.
Let $X$ be a subset of $E_B$ that is disjoint from $A$.
Suppose that $|X|=O(n^{11/12}\log n)$. Then
\begin{align}
&\Pr(E_B\supseteq A\mid |E_B|=\mu=\alpha n\log n,|V_1|=\nu\geq
n-n^{11/12}, E_B\supseteq X)\nonumber\\
&=\frac{\binom{\binom{\nu}{2}-a-|X|}{\mu-a-|X|}}{\binom{\binom{\nu}{2}-|X|}{\mu-|X|}}
\label{eq5}\\
&=(1+o(n^{-1/13}))\bfrac{2\alpha\log n}{n}^a.\label{eq6}
\end{align}
\end{lemma}
{\noindent \bf Proof\hspace{2em}}
Equation \eqref{eq5} follows from an argument similar to that given for Lemma \ref{lem4}. For equation \eqref{eq6}, we write
\begin{multline*}
\frac{\binom{\binom{\nu}{2}-a-|X|}{\mu-a-|X|}}{\binom{\binom{\nu}{2}-|X|}{\mu-|X|}}= \bfrac{\mu-|X|}{\binom{\nu}{2}-|X|}^a\brac{1+O\bfrac{a^2}{\mu-|X|}}=\\ \bfrac{\mu}{\binom{\nu}{2}}^a\brac{1+O\bfrac{a^2}{\mu-|X|}+O\bfrac{a|X|}{\mu}}.
\end{multline*}
This follows from the fact that in general, if $s^2=o(N)$ then
$$\frac{\binom{N-s}{M-s}}{\binom{N}{M}}=\bfrac{M}{N}^s\brac{1+O\bfrac{s^2}{M}}.$$
\hspace*{\fill}\mbox{$\Box$}\\ \medskip
By construction, we can apply this lemma to the graph
induced by $E_B$ with
$$\alpha=\frac{7+o(1)}{20}.$$
Let a cycle $C$ of $\Pi_0$ be \emph{small} if its length $|C| < n_c$ and \emph{large} otherwise. Define a near 2-factor to be a graph that is obtained from a 2-factor by removing one edge. A near 2-factor $\Gamma$ consists of a path $P(\Gamma)$ and a collection of vertex disjoint cycles. A 2-factor or a near 2-factor is {\em proper} if it contains $M_0$. We abbreviate proper near 2-factor to PN2F.
\medskip
We will describe a process of eliminating small cycles. In this process we create intermediate proper 2-factors. Let $\Gamma_0$ be a 2-factor and suppose that it contains a small cycle $C$. To begin the elimination of $C$ we choose an arbitrary edge $\set{u_0,v_0}$ in $C\setminus M_0$, where $u_0,v_0\notin V_\tau$. This is always possible, see Lemma \ref{lem3}(c). We delete it, obtaining a PN2F $\Gamma_1$. Here, $P(\Gamma_1) \in {\mathcal P}(v_0,u_0)$, the set of $M_0$-alternating paths in $G$ from $v_0$ to $u_0$. Here an $M_0$-alternating path must begin and end with an edge of $M_0$. The initial goal will be to create a large set of PN2Fs such that each $\Gamma$ in this set has path $P(\Gamma)$ of length at least $n_c$ and the small cycles of $\Gamma$ are a strict subset of the small cycles of $\Gamma_0$. Then we will show that with probability $1-o(n^{-0.51})$, the endpoints of one of the paths in some such $\Gamma$ can be joined by an edge to create a proper 2-factor with at least one fewer small cycle than $\Gamma_0$.
\medskip
This process can be divided into two stages. In a generic step of Stage 1, we take a PN2F $\Gamma$ as above with $P(\Gamma) \in {\mathcal P}(u_0, v)$ and construct a new PN2F with the same starting point $u_0$ for its path. We do this by considering edges from $E_B$ incident to $v$. Suppose $\set{v,w} \in E_B$ and that the non-$M_0$ edge in $\Gamma$ containing vertex $w$ is $\set{w,x}$. Then $\Gamma' = \Gamma \cup \set{v,w} \setminus \set{w,x}$ is a PN2F with $P(\Gamma') \in {\mathcal P}(u_0, x)$. We say that $\set{v,w}$ is \emph{acceptable} if $x,w\notin W$ ($W$ defined immediately below) and $P(\Gamma')$ has length at least $n_c$ and any new cycle created (in $\Gamma'$ but not $\Gamma$) has at least $n_c$ edges.
\medskip
There is an unlikely technicality to be faced. If $\Gamma$ has no non-$M_0$ edge $(x,w)$, then $w= u_0$ and this is accepted if $P(\Gamma')$ has at least $n_c$ edges. This would prematurely end an iteration. The probability that we close a cycle at such a step is $O(1/n)$ and so we can safely ignore this possibility.
\medskip
In addition we define a set $W$ of \emph{used} vertices, where
$$W=V_\sigma\cup V_\tau\text{ at the beginning of Phase 2}$$ and whenever we look at edges $\set{v,w},\set{w,x}$ (that is, consider using that edge to create a new $\Gamma'$), we add $v,w,x$ to $W$. Additionally, we maintain $|W|=O(n^{11/12})$, or fail if we cannot.
\medskip
We will build a tree $T$ of PN2Fs, breadth-first, where each non-leaf vertex $\Gamma$ yields PN2F children $\Gamma'$ as above. We stop building $T$ when we have $\nu_L=n^{2/3+o(1)}$ leaves. This will end Stage 1 for the current cycle $C$ being removed.
\medskip
We will restrict the set of PN2F's which could be children of $\Gamma$ in $T$ as follows: We restrict our attention to $w\notin W$ with $\set{v,w}\in E_B$ and $\set{v,w}$ acceptable as defined above. Also, we only construct children from the first $\ell_0=L_0/2$ acceptable $\set{v,w}$'s at a vertex $v$. Furthermore we only build the tree down to $\ell_1=\frac{2\log n}{3\log\log n}$ levels. We denote the nodes in the $i$th level of the tree by $S_i$. Thus $S_0=\set{\Gamma_1}$ and $S_{i+1}$ consists of the PN2F's that are obtained from $S_i$ using acceptable edges. In this way we define a tree of PN2F's with root $\Gamma_1$ that has branching factor at most $\ell_0$. Thus,
\beq{eq8}
|S_{\ell_1}|\leq \nu_L=\ell_0^{\ell_1}.
\end{equation}
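As a check on the order of $\nu_L$, note that
$$\ell_0^{\ell_1}=\exp\set{\ell_1\log \ell_0}=\exp\set{\frac{2\log n}{3\log\log n}\brac{\log\log n-\log 200}}=n^{2/3+o(1)},$$
in agreement with the value recorded in the list of parameters.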
\medskip
On the other hand, if we let $\mathcal{E}_0$ denote the intersection of the high probability events of Lemmas \ref{lem0}, \ref{lem2} and \ref{lem3}, then:
\begin{lemma}\label{lem6}
Conditional on the event $\mathcal{E}_0$,
$$|S_{\ell_1}| =\nu_L$$
with probability $1-o(n^{-3})$.
\end{lemma}
{\noindent \bf Proof\hspace{2em}}
If $P(\Gamma)$ has endpoints $u_0,v$ and $e=\set{v,w}\in E_B$ and $e$ is unacceptable then
(i) $w$ lies on $P(\Gamma)$ and is too close to an endpoint, or (ii) $x\in W$ or $w\in W$, or (iii) $w$ lies on a small cycle. Ab initio, there are at least $L_0$ choices for $w$ and we must bound the number of unacceptable choices.
The probability that at least $L_0/10$ vertices are unacceptable due to (iii) is by Lemmas \ref{lem2} and \ref{lem5} at most
\begin{multline}\label{eq7}
(1+o(1))\binom{n_b}{L_0/10}\bfrac{7\log n}{(10+o(1))n}^{L_0/10}\leq
\bfrac{10en_b\log n}{L_0n}^{L_0/10}\\
\leq \bfrac{1000e\log\log\log n}{\log\log n}^{L_0/10}=O(n^{-K})
\end{multline}
for any constant $K>0$.
A similar argument deals with conditions (i) and (ii).
\medskip
Thus, with (conditional) probability $1-o(n^{-4})$, there are at least
$$\brac{\frac{\log n}{100}-\frac{3\log n}{1000}}|S_t|\geq \frac{\log n}{200} |S_t|$$
acceptable edges, for all $t$. So, with (conditional) probability $1-o(n^{-3})$ we have
\beq{eq9}
|S_{\ell_1}| = \nu_L
\end{equation}
as desired.
\hspace*{\fill}\mbox{$\Box$}\\ \medskip
Having built $T$, if we have not already made a cycle, we have a tree of PN2Fs whose last level, $\ell_1$, has leaves $\Gamma_i, \ i=1,\ldots,\nu_L$, each with a path $P(\Gamma_i)$ of length at least $n_c$. (Recall the definition of an acceptable edge.) Now, perform a second stage which will be like executing $\nu_L$-many \emph{Stage 1}'s {\em in parallel} by constructing trees $T_i, \ i=1,\ldots,\nu_L$, where the root of $T_i$ is $\Gamma_i$. Suppose for each $i$, $P(\Gamma_i) \in {\mathcal P}(u_0,v_i)$; we fix the vertex $v_i$ and build paths by first looking at neighbors of $u_0$, for all $i$ (so in tree $T_i$, every $\Gamma$ will have path $P(\Gamma) \in {\mathcal P}(u,v_i)$ for some $u$).
\medskip
Construct these $\nu_L$ trees in Stage 2 by only enforcing the condition that $w \notin W$. This change will allow the PN2Fs to have small paths and cycles. We will not impose a bound on the branching factor either. As a result of this and the fact that each tree $T_i$ begins by considering edges from $E_B$ incident to $u_0$, the sets of endpoints of paths (other than the $v_i$'s) of PN2Fs at the same level are the same in each of the trees $T_i,i=1,2,\ldots,\nu_L$. That is, if $\Gamma_i'$ is a node at level $\ell$ of tree $T_i$ and $\Gamma_j'$ is a node at level $\ell$ of tree $T_j$, then $P(\Gamma_i') \in {\mathcal P}(w, v_i)$ and $P(\Gamma_j') \in {\mathcal P}(w, v_j)$ for some common $w$. This can be proved by induction, see \cite{CF}. Indeed, let $L_{i,\ell}$ denote the set of end vertices, other than $v_i$, of the paths associated with the nodes at depth $\ell$ of the tree $T_i$, $i=1,2,\ldots,\nu_L$, $\ell = 0,1,\ldots,\ell_1$. Thus $L_{i,0}=\{ u_0\}$ for all $i$. We can see inductively that $L_{i,\ell}=L_{j,\ell}$ for all $i,j,\ell$. In fact, if $v\in L_{i,\ell}=L_{j,\ell}$ then the acceptability of $\set{v,w}\in E_B$ for some $i$ means that $w\notin W$ (at the start of the construction of level $\ell+1$), and hence if $\set{w,x}$ is the non-$M_0$ edge for this $i$ then $x\notin W$ and it is the non-$M_0$ edge for all $j$. In that case $\set{v,w}$ is acceptable for all $i$ and we have $L_{i,\ell +1}=L_{1,\ell +1}$.
\medskip
The trees $T_i, i=1,\ldots,\nu_L$, will be successfully constructed with probability $1-o(1/n^3)$ and with a similar probability the number of nodes in each tree is at most $(10\log n)^{\ell_1}=n^{2/3+o(1)}$. Here we use the fact that the maximum degree in $G_{t_1}$ is at most $10\log n$ with this probability. However, some of the trees may not follow all of the conditions listed initially, and so we will ``prune'' the trees by disallowing any node $\Gamma$ that was constructed in violation of any of those conditions. Call tree $T_i$ GOOD if it still has at least $L_0$ leaves remaining after pruning and BAD otherwise. Notice that
$$\Pr(\exists\ i:T_i \text{ is BAD}\mid\mathcal{E}_0) = o\bfrac{\nu_L}{n^3}=o(n^{-2}).$$
Here the $o(1/n^3)$ factor is the one promised in Lemma \ref{lem6}.
Finally, consider the probability that there is no $E_B$ edge from any of the $n^{2/3+o(1)}$ endpoints found in Stage 1 to any of the $n^{2/3-o(1)}$ endpoints found in Stage 2. At this point we will have only exposed the edges of $\Pi_0$ incident with these endpoints. So if for some $k\leq \nu_L$ we examine the (at least)
$\log n/200$ edges incident to $v_1,v_2,\ldots,v_k$ but not $W$ then the probability we fail to close a cycle and produce a proper 2-factor is at most
$$\brac{1-\frac{1}{n^{1/3+o(1)}}}^{k\log n/200}.$$
Thus taking $k=n^{1/3+o(1)}$ suffices to make the failure probability $o(n^{-2})$. Also, this final part of the construction only contributes $n^{1/3+o(1)}$ to $W$.
Therefore, the probability that we fail to eliminate a particular small cycle $C$ is $o(n^{-2})$ and then given
$\mathcal{E}_0$, the probability that Phase 2 fails is $o(\log n/n^2)=o(1)$.
\medskip
We should check now that w.h.p. $|W|=O(n^{11/12})$ throughout Phase 2. It starts out with at most $n^{11/12}+n^{2/5}$ vertices (see Lemmas \ref{lem0}(a) and \ref{lem3}(a)) and we add $O(n^{2/3+o(1)}\times \log n)$ vertices altogether in this phase. So we conclude:
\begin{lemma}\label{lem7}
The probability that Phase 2 fails to produce a proper 2-factor with
minimum cycle length at least $n_c$ is $o(n^{-0.51})$.
\end{lemma}
\hspace*{\fill}\mbox{$\Box$}\\ \medskip
\subsection{Phase 3: Creating a Hamilton cycle}\label{CHC}
By the end of Phase 2, we will with probability $1-o(n^{-0.51})$ have found a proper 2-factor
with all cycles of length at least $n_c$. Call this subgraph $\Pi^*$.
\medskip
In this section, we will use the edges in
$$E_C=\set{e\in E_{t_0}\setminus (E_{t_4}\cup E(\Psi_1)):
e\cap V_0=\emptyset}$$
to turn $\Pi^*$ into a Hamilton cycle that contains $M_0$, w.h.p.
It is basically a second moment calculation with a twist to
keep the variance under control. We note that Lemma \ref{lem5} continues to hold
if we replace $E_B$ by $E_C$.
\medskip
Arbitrarily assign an orientation to each cycle. Let $C_1,...,C_k$
be the cycles of $\Pi^*$ (note that if $k=1$ we are done)
and let $c_i = |C_i\setminus W|/2$. Then
$c_i \geq \frac{n_c}{2}-O(n^{11/12}) \geq \frac{99 n}{\log n}$ for all $i$.
Let $a= \frac{n}{\log n}$ and
$m_i = 2\lfloor \frac{c_i}{a} \rfloor+1$ for all $i$ and $m = \sum_{i=1}^k m_i$.
{From} each $C_i$, we will consider choosing
$m_i$ vertices $v \in C_i\setminus W$
that are heads of non-$M_0$ arcs after the
arbitrary orientation of all cycles, deleting these $m$ arcs and
replacing them with $m$ others to create a proper Hamilton cycle.
\medskip
Given such a deletion of edges, re-label the broken arcs as
$(v_j ,u_j), j \in [m]$ as follows: in cycle $C_i$ identify the lowest numbered
vertex $x_i\in [n]$ which loses a cycle edge directed out of it. Put $v_1=x_1$ and
then go round $C_1$ defining $v_2,v_3,\ldots v_{m_1} $ in order. Then let
$v_{m_1+1}=x_2$ and so on.
We thus have $m$ path sections $P_j\in {\cal P}(u_{\phi(j)},v_j) $ in $\Pi^{*}$
for some permutation $\phi$.
It is our intention to rejoin these path sections of $\Pi^{*}$ to make a Hamilton cycle using $E_C$, if we can. Suppose we can. This defines a permutation $\rho$ on $[m]$ where $\rho (i) = j$ if $P_i$ is joined to $P_j$ by
$(v_i,u_{\phi (j)})$, where $\rho\in H_m$, the set of cyclic permutations on $[m]$. We will use the second moment method to show that a suitable $\rho$ exists w.h.p. A technical problem forces a restriction on our choices
for $\rho$. This will produce a variance reduction in a second moment calculation.
Given $\rho$ define $\lambda=\phi\rho$. In our analysis we will restrict our
attention to $\rho\in R_{\phi} =
\{ \rho \in H_m : \phi \rho \in H_m \}$. If $\rho\in R_{\phi}$ then we have not only
constructed a Hamilton
cycle in $\Pi^{*}\cup E_C$, but also in the \emph{auxiliary digraph} $\Lambda$,
whose edges are $(i, \lambda(i))$.
\medskip
The following lemma is from \cite{CF1}. The content is in the lower bound. It shows that there are still many choices for $\rho$ and it is needed to show that the expected number of possible re-arrangements of path sections grows with $n$.
\begin{lemma}\label{lem8}
$(m-2)! \leq |R_{\phi}| \leq (m-1)!$
\end{lemma}
\medskip
Let $H$ be the graph induced by the union of $\Pi^*$ and $E_C$.
\begin{lemma}
$H$ contains a Hamilton cycle w.h.p.
\end{lemma}
\begin{proof}
Let $X$ be the number of Hamilton cycles in $G$ that can be obtained by
removing the edges described above and
rearranging the path segments generated by $\phi$ according to
those in $\rho \in R_{\phi}$
and connecting the path segments using edges in $H$.
\medskip
We will use the inequality $\Pr(X>0) \geq \frac{\mathbb{E}(X)^2}{\mathbb{E}(X^2)}$ to show that such a Hamilton cycle exists with the required probability.
\medskip
The definition of $m_i$ gives us $\frac{2n}{a}-k \leq m \leq \frac{2n}{a}+k$
and so $1.99\log n \leq m \leq 2.01\log n$.
Additionally we will use $k \leq \frac{n}{n_c}=\frac{\log n}{200}$, $m_i \geq 199$ and
$\frac{c_i}{m_i} \geq \frac{a}{2.01}$ for all $i$.
\medskip
{From} Lemmas \ref{lem5} and \ref{lem8}, with $\alpha=1/(10+o(1))$
\begin{align}
\mathbb{E}(X) & \geq (1-o(1))\left( \frac{2\alpha\log n}{n}\right)^m (m-2)!
\prod_{i=1}^k \binom{c_i}{m_i} \label{eq10a}\\
& \geq \frac{1-o(1)}{m^{3/2}} \bfrac{2m\alpha\log n}{en}^m
\prod_{i=1}^k \brac{ \bfrac{c_ie^{1-1/10m_i}}{m_i^{1+(1/2m_i)}}^{m_i}
\bfrac{1-2m_i^2/c_i}{\sqrt{2\pi}}}\label{eq10aa}
\end{align}
where to go from \eqref{eq10a} to \eqref{eq10aa} we have used the approximation $(m-2)!\geq m^{-3/2}(m/e)^m$ and
$$\binom{c_i}{m_i}\geq \frac{c_i^{m_i}(1-2m_i^2/c_i)}{m_i!}\text{ and }
m_i!\leq \sqrt{2\pi m_i}\bfrac{m_i}{e}^{m_i}e^{1/10m_i}.$$
{\bf Explanation of \eqref{eq10a}:} We choose the arcs to delete in
$\prod_{i=1}^k \binom{c_i}{m_i}$
ways and put them together as explained prior to Lemma \ref{lem8}
in at least $(m-2)!$
ways. The probability that the required edges exist in $E_C$ is
$(1+o(1))\bfrac{2\alpha\log n}{n}^m$,
from Lemma \ref{lem5}.
\medskip
Continuing, we have
\begin{align}
\mathbb{E}(X)& \geq \frac{(1-o(1))(2\pi)^{-m/398}e^{-k/10}}{m^{3/2}}
\left( \frac{2m\alpha\log n}{en} \right)^m
\prod_{i=1}^k \left( \frac{c_ie}{(1.02)m_i} \right)^{m_i} \nonumber \\
& \geq \frac{(1-o(1))(2\pi)^{-m/398}}{n^{1/2000}m^{3/2}}
\left( \frac{2m\alpha\log n}{en} \right)^m \bfrac{ea}{2.01\times 1.02}^m \nonumber \\
& \geq \frac{(1-o(1))(2\pi)^{-m/398}}{n^{1/2000}m^{3/2}}
\left( \frac{\log n}{6} \right)^m \nonumber\\
& \to\infty
\end{align}
\medskip
Let $M ,M^{\prime}$ be two sets of selected edges which have been deleted
in $\Pi^{*}$ and whose path sections have been
re-arranged into Hamilton cycles according to $\rho , \rho^{\prime}$ respectively.
Let $N,N'$ be the corresponding sets of edges which have been added to make
the Hamilton cycles. Let $\Omega$ denote the set of choices for $M$ (and $M'$).
Let $s=|M\cap M'|$ and $t=|N\cap N'|$. Now $t\leq s$ since if $(v,u)\in
N\cap N'$ then there must be a unique $(\tilde{v},u)\in M\cap M'$ which
is the unique $\Pi^{*}$-edge into $u$. It is shown in \cite{CF1} that $t=s$
implies $t=s=m$ and $(M,\rho )=(M',\rho ')$. (This removes a large term from the second moment
calculation). Indeed, suppose then that $t=s$ and $(v_i,u_i)\in M\cap M'$.
Now the edge $(v_i,u_{\lambda (i)})\in N$ and since $t=s$ this edge must
also be in $N'$. But this implies that $(v_{\lambda (i)},u_{\lambda (i)})\in M'$
and hence in $M\cap M'$. Repeating the argument we see that
$(v_{\lambda ^k(i)},u_{\lambda ^k(i)})\in M\cap M'$ for all
$k\geq 0$. But $\lambda$ is cyclic and so our claim follows.
If $\langle s,t \rangle$ denotes the case where $s= |M\cap M'|$ and
$t=|N\cap N'|$, then
\begin{align*}
\mathbb{E}(X^2) & \leq \mathbb{E}(X) + (1+o(1))\sum_{M\in \Omega} \bfrac{2\alpha\log n}{n}^m
\sum_{\substack{M'\in \Omega\\ N'\cap N = \emptyset}} \left(\frac{2\alpha\log n}{n}\right)^m \\
& + (1+o(1)) \sum_{M\in \Omega} \left(\frac{2\alpha\log n}{n}\right)^m \sum_{s=2}^m
\sum_{t=1}^{s-1}
\sum_{\substack{M'\in\Omega\\ \langle s,t \rangle}} \left(\frac{2\alpha\log n}{n}\right)^{m-t} \\
& = \mathbb{E}(X) + E_1 + E_2 \ \text{say.}
\end{align*}
Note that $E_1 \leq (1+o(1))\mathbb{E}(X)^2$.
\medskip
Now, with $\sigma_i$ denoting the number of common $M\cap M'$ edges selected from $C_i$,
\[
E_2 \leq \mathbb{E}(X)^2 \sum_{s=2}^m \sum_{t=1}^{s-1} \binom{s}{t}
\bigg[ \sum_{\sigma_1+...+\sigma_k=s} \
\prod_{i=1}^k \frac{\binom{m_i}{\sigma_i}
\binom{c_i-m_i}{m_i - \sigma_i}}{\binom{c_i}{m_i}} \bigg] \frac{(m-t-1)!}{(m-2)!}
\left(\frac{n}{2\alpha\log n}\right)^t.
\]
{\bf Some explanation:} There are $\binom{s}{t}$ choices for $N\cap N'$, given $s$ and $t$. Given $\sigma_i$ there are $\binom{m_i}{\sigma_i}$ ways to choose $M\cap M'$ and $\binom{c_i-m_i}{m_i - \sigma_i}$ ways to choose the rest of $M'\cap C_i$. After deleting $M'$ and adding $N\cap N'$ there are at most $(m-t-1)!$ ways of putting the segments together to make a Hamilton cycle.
We see that
\[
\frac{\binom{c_i-m_i}{m_i - \sigma_i}}{\binom{c_i}{m_i}} \leq \frac{\binom{c_i}{m_i - \sigma_i}}{\binom{c_i}{m_i}}=\frac{m_i(m_i-1)\cdots(m_i-\sigma_i+1)}{(c_i-m_i+1)\cdots (c_i-m_i+\sigma_i)}
\leq (1+o(1))\left(\frac{2.01}{a}\right)^{\sigma_i}
\text{exp}\left\{-\frac{\sigma_i(\sigma_i-1)}{2m_i}\right\}.
\]
Also,
$$\sum_{i=1}^k \frac{\sigma_i^2}{2m_i} \geq \frac{s^2}{2m} \ \text{for} \
\sigma_1+...+\sigma_k = s$$
and
$$\sum_{i=1}^k \frac{\sigma_i}{2m_i} \leq \frac{k}{2} \quad\text{and}\quad
\sum_{\sigma_1+...+\sigma_k=s} \ \prod_{i=1}^k \binom{m_i}{\sigma_i} = \binom{m}{s},$$
the latter being Vandermonde's convolution.
Using these approximations, we have
$$\sum_{\sigma_1+...+\sigma_k=s} \
\prod_{i=1}^k \frac{\binom{m_i}{\sigma_i}
\binom{c_i-m_i}{m_i - \sigma_i}}{\binom{c_i}{m_i}}\leq
(1+o(1))e^{k/2}\exp\left\{-\frac{s^2}{2m}\right\} \left( \frac{2.01}{a} \right) ^s
\binom{m}{s}.$$
So we can write
$$\frac{E_2}{\mathbb{E}(X)^2} \leq
(1+o(1))e^{k/2}\sum_{s=2}^m \sum_{t=1}^{s-1}\binom{s}{t}
\exp\left\{-\frac{s^2}{2m}\right\}\left( \frac{2.01}{a} \right) ^s
\binom{m}{s}\frac{(m-t-1)!}{(m-2)!}
\left( \frac{n}{2\alpha\log n}\right) ^t.$$
We approximate
$$\binom{m}{s}\frac{(m-t-1)!}{(m-2)!}\leq
C_1\frac{m^s}{s!}\bfrac{m-t-1}{e}^{m-t-1}\bfrac{e}{m-2}^{m-2}\leq C_2\frac{m^s}{s!}\frac{e^{t}}{m^{t-1}},$$
for some constants $C_1,C_2>0$.
Substituting this in, we obtain,
\begin{align*}
\frac{E_2}{\mathbb{E}(X)^2}
&\leq_b n^{1/400} m\sum_{s=2}^m \left( \frac{2.01}{a}\right)^s \frac{m^s}{s!} \
\text{exp}\left\{ - \frac{s^2}{2m}\right\} \sum_{t=1}^{s-1} \binom{s}{t}
\left(\frac{en}{2\alpha m\log n}\right)^t \\
& \leq (1+o(1)) \left(\frac{m^2}{5en^{.99}}\right) \sum_{s=2}^m \left( \frac{(2.01)en
\exp\{-s/2m\} }{2\alpha a\log n}\right)^s \frac{1}{s!} \\
& \leq n^{-9/10}.
\end{align*}
To see this, notice that
$$\sum_{t=1}^{s-1} \binom{s}{t}\left(\frac{en}{2\alpha m\log n}\right)^t\leq m\bfrac{en}{2\alpha m\log n}^{s-1}$$
and
\[
\sum_{s=2}^m \left( \frac{(2.01)en \exp\{-s/2m\} }{2\alpha a\log n}\right)^s
\frac{1}{s!}
\leq \sum_{s=2}^m \frac{30^s}{s!} \leq e^{30}.
\]
The first inequality here holds since $a\log n=n$, $\exp\{-s/2m\}\leq 1$ and
$\frac{(2.01)e}{2\alpha}=(10+o(1))\times\frac{(2.01)e}{2}\approx 27.3<30$.
Combining things, we get
\begin{align*}
\mathbb{E}(X^2) & \leq \mathbb{E}(X) + \mathbb{E}(X)^2(1+o(1)) + \mathbb{E}(X)^2 n^{-.9} \ \text{ so} \\
\frac{(\mathbb{E} X)^2}{\mathbb{E}(X^2)} & \geq \frac{1}{\frac{1}{\mathbb{E} X} +
1+o(1) + n^{-.9}} \\
& \longrightarrow 1
\end{align*}
as $n \rightarrow \infty$, as desired.
\end{proof}
\subsection{Proof of Corollary \ref{cor1}}
We begin the proof by replacing the sequence $E_0,E_1,\ldots,E_m,\ldots$ by $E_0',E_1',\ldots,E_m',\ldots,$ where the edges of $E_m'=\set{e_1',e_2',\ldots,e_m'}$ are randomly chosen \emph{with replacement}. This means that $e_m$ is allowed to be a member of $E_{m-1}'$. We let $G_m'$ be the graph $([n],E_m')$.
If an edge appears a second time, it will be randomly
re-colored. We let $R$ denote the set of edges that
get repeated. Note that if $\tau_{1,1}=\mu$ and $e_{\mu}=(v,w)\in R$ then $v$ or $w$ is isolated in $G_{\mu-1}^{(b)}$ or $G_{\mu-1}^{(w)}$.
\beq{(b)}
\Pr(e_{\tau_{1,1}}\in R)\leq 4\Pr(\exists e=(v,w)\in R:v\text{ has black degree }1)=o(1).
\end{equation}
{\bf Explanation:}
The factor 4 comes from $v$ or $w$ having black or white degree one.
Suppose first that $e_\mu=(v,w)$ and that $v$ has black degree zero in
$G_{\mu-1}$ and $w$ also has zero black degree in $G_{\mu-1}$. An argument similar to that for Lemma \ref{lem0}(b) shows that w.h.p. there is no white edge joining $v$ and $w$ and so $e_\mu\notin R$.
Now suppose that $e_\mu=(v,w)$ and that $v$ has black degree zero in $G_{\mu-1}$ and $w$ has positive black degree in $G_{\mu-1}$. An argument similar to that given for Lemma \ref{lem0}(f) shows that w.h.p. the maximum white degree in $G_m'$ is $O(\log n)$. There are $n-1$ choices for $w$,
of which $O(\log n)$ put $e_\mu$ into $R$. So $e_\mu$ has an
$O(\log n/n)$ chance of being in $R$. This verifies \eqref{(b)}.
\medskip
At time $m=\tau_{1,1}$ the graphs $G^{(b)'}_m,G^{(w)'}_m$
will w.h.p. contain perfect matchings,
see \cite{ER-M}. That paper does not allow repeated edges, but
removing them enables one to use the result claimed.
We choose random perfect matchings $M_0,M_1$
from $G^{(b)'}_{\tau_{1,1}}$, $G^{(w)'}_{\tau_{1,1}}$.
\medskip
We couple the sequence $G_1,G_2,\ldots,$ with the sequence
$G_1',G_2',\ldots,$ by ignoring repeated edges in the
latter. Thus $G_1',G_2',\ldots,G_m'$ is coupled with a sequence
$G_1,G_2,\ldots,G_{m'}$ where $m'\leq m$.
It follows
from \eqref{(b)} that w.h.p. the coupled processes stop with the same edge.
Furthermore, they stop with two independent matchings $M_0,M_1$. We can then begin analysing Phase 2 and Phase 3 within this context.
\medskip
We will prove that
\beq{(a)}
\Pr(M_1\cap R=\emptyset)\geq n^{-1/2-o(1)}.
\end{equation}
Corollary \ref{cor1} follows from this. Indeed, it follows from \eqref{(a)} and the fact that Phases 1 and 2 succeed with probability $1-O(n^{-0.51})$ that they succeed w.h.p. conditional on $M_1\cap R=\emptyset$.
Phase 3 succeeds w.h.p. even if we avoid using edges in $R$. We have already carried out calculations with an arbitrary set of $O(n^{11/12})$ edges that must be avoided. The size of $R$ is dominated by a binomial $Bin(O(n\log n),O(n^{-1}\log n))$ and so $|R|=O(\log^2n)$ w.h.p. So avoiding $R$ does not change any calculation in any significant way. In other words, we can w.h.p. find a zebraic Hamilton cycle in $G_m'$.
\medskip
Finally note that the Hamilton cycle we obtain is zebraic.
{\bf Proof of \eqref{(a)}:} $R$ is a random set and it is independent of $M_1$.
Let $t_B$ be the number of black edges, then
$$\Pr(M_1\cap R=\emptyset\mid t_B)\geq \brac{1-\frac{n/2}{N-|R|}}^{t_B}\geq \exp\set{-t_B\brac{\frac{1}{n}+O\bfrac{\log^2n}{n^2}}}.$$
To remove the conditioning, we take expectations; then, by Jensen's inequality
applied to the convex function $x\mapsto e^{-cx}$,
$$\expect\brac{\exp\set{-t_B\brac{\frac{1}{n}+O\bfrac{\log^2n}{n^2}}}}\geq\exp\set{- \expect(t_B)\brac{\frac{1}{n}+O\bfrac{\log^2n}{n^2}}}\geq n^{-1/2-o(1)}$$
since $\expect(t_B)\sim \frac12n\log n$. This proves \eqref{(a)}.
\section{Proof of Theorem \ref{th4}}\label{th4proof}
For a vertex $v\in [n]$ we let its \emph{black} degree $d_b(v)$ be the number of black
edges incident with $v$ in $G_{t_0}$. We define its \emph{white} degree $d_w(v)$ analogously. Let a vertex be \emph{large} if $d_b(v),d_w(v)\geq L_0$ and \emph{small} otherwise.
We first show how to construct zebraic paths between a pair $x,y$ of large vertices. We can in fact construct such paths even if we prescribe the colors of the edges incident with $x$ and $y$. We do breadth first searches from each vertex, alternately using black and white edges, constructing search trees $T_x,T_y$. We build trees with $n^{2/3+o(1)}$ leaves and then argue that we can connect the leaves with a correctly colored edge. We then find paths between small vertices and other vertices by piggybacking on the large-to-large paths.
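As an illustration only, the alternating-color search can be sketched in a few lines of Python; the adjacency representation and the function name are our own choices, and we omit the truncation to the first $\ell_0$ new neighbors per level that the proof uses.
\begin{verbatim}
def zebraic_bfs_tree(adj, root, first_color, target_leaves):
    """Grow a BFS tree from root, alternating edge colors by level.

    adj[v] is a list of (neighbor, color) pairs, color in {'b','w'}.
    Returns a level with >= target_leaves vertices, or an empty
    set if the search dies out first.
    """
    color, level, seen = first_color, {root}, {root}
    while len(level) < target_leaves:
        nxt = set()
        for v in level:
            for w, c in adj[v]:
                if c == color and w not in seen:
                    seen.add(w)
                    nxt.add(w)
        if not nxt:
            return set()
        level = nxt
        color = 'w' if color == 'b' else 'b'  # alternate the color
    return level

# e.g. leaves_x = zebraic_bfs_tree(adj, x, 'b', round(n ** (2/3)))
\end{verbatim}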
We will need the following structural properties:
\begin{lemma}\label{lem12}
The following hold w.h.p.:
\begin{description}
\item[(a)] No set $S$ of at most 10 vertices that is connected
in $G_{t_1}$ contains three small vertices.
\item[(b)] Let $a$ be a positive integer, independent of $n$. No set of vertices $S$, with $|S|=s\leq aL_1$, contains
more than $s+a$ edges in $G_{t_1}$.
\item[(c)] There are at most $n^{2/3}$ small vertices.
\item[(d)] There are at most $\log^3n$ isolated vertices in $G_{t_0}$.
\end{description}
\end{lemma}
{\noindent \bf Proof\hspace{2em}}
(a)
We say that a vertex is a \emph{low color vertex} if it is incident in $G_{t_1}$ to at most $L_\varepsilon=(1+\varepsilon)L_0$ edges
of one of the colors, where $\varepsilon$ is some sufficiently small positive constant.
Furthermore, it follows from \eqref{mon} that
\begin{align}
&\Pr(\exists\ \text{a connected $S$ in $G_{n,t_1}$ with three low color vertices}) \nonumber\\
&\leq \sum_{k=3}^{10}\binom{n}{k}k^{k-2}\frac{\binom{N-k+1}{t_1-k+1}}{\binom{N}{t_1}} \binom{k}{3}\Pr(\text{vertices 1,2,3 are low color})\label{po1}\\
&\leq_b \sum_{k=3}^{10}\binom{n}{k}k^{k-2}\frac{\binom{N-k+1}{t_1-k+1}}{\binom{N}{t_1}}\binom{k}{3}
\brac{2\sum_{\ell=0}^{L_\varepsilon}\binom{n-k}{\ell}\bfrac{p_1}{2}^\ell\brac{1-\frac{p_1}{2}}^{n-k-\ell}}^3\label{po2}\\
&\leq_b \sum_{k=3}^{10}n^k\bfrac{t_1}{N}^{k-1}(n^{-.45})^3\nonumber\\
&\leq_b \sum_{k=3}^{10}n^k \bfrac{\log n}{n}^{k-1}(n^{-.45})^3\nonumber\\
&=o(1).\nonumber
\end{align}
{\bf Explanation of \eqref{po1},\eqref{po2}:} Having chosen our tree, $\frac{\binom{N-k+1}{t_1-k+1}}{\binom{N}{t_1}}$ is the probability that this tree exists in $G_{t_1}$. Condition on this and choose three vertices. The final $(\cdots)^3$ in \eqref{po2} bounds the probability of the event that 1,2,3 are low color vertices in $G_{n,p_1}$. This event is monotone decreasing, given the conditioning, and so we can use \eqref{mon} to replace $G_{n,t_1}$ by $G_{n,p_1}$ here.
\medskip
Now a simple first moment calculation shows that w.h.p. each vertex in
$[n]$ is incident with $o(\log n)$ edges of $E_{t_1}\setminus
E_{t_0}$. Hence, for (a) to fail, there would have to be a
relevant set $S$ with three vertices, each incident in $G_{t_1}$ with at most $(1+o(1))L_0$ edges of
one of the colors, contradicting the above.
(b)
We will prove something slightly stronger. Suppose that $p=\frac{K\log n}{n}$
where $K>0$ is arbitrary. We will show this result for $G_{n,p}$.
The result for this lemma follows from $K=1+o(1)$ and \eqref{mon}.
We get
\begin{align*}
\Pr(\exists\ S)&\leq_b \sum_{s\geq 4}^{aL_1}\binom{n}{s}\binom{\binom{s}{2}}{s+a+1}
p^{s+a+1}\\
&\leq_b \sum_{s\geq 4}^{aL_1}\brac{\frac{ne}{s}\cdot\frac{sep}{2}}^s(sep)^{a+1}\\
&\leq_b (Ke^2\log n)^{aL_1}\bfrac{\log^2n}{n}^{a+1}\\
&\le n^{o(1)}\left(\frac{\log^{3+L_1}n}{n}\right)^a\,\frac{\log^2n}{n} \\
&=o(1).
\end{align*}
(c) Using \eqref{mon} we see that if $Z$ denotes the number of small vertices then
$$\expect(Z)\leq_b n\sum_{k=0}^{L_0}(p_0/2)^k (1-p_0/2)^{n-1-k}
\leq n^{1/2+o(1)}.$$
We now use the Markov inequality.
(d) Using \eqref{mon} we see that the expected number of isolated vertices in
$G_{t_0}$ is $O(\log^2n)$. We now use the Markov inequality.
\hspace*{\fill}\mbox{$\Box$}\\ \medskip
Now fix a pair of large vertices $x<y$. We will define sets
$\Sigma{b}_i(z),\Sigma{w}_i(z),
i=0,1,\ldots,\ell_1$, $z=x,y$.
Assume w.l.o.g. that $\ell_1$ is even.
We
let $\Sigma{b}_0(x)=\Sigma{w}_0(x)=\set{x}$ and then $\Sigma{b}_1(x)$ (resp. $\Sigma{w}_1(x)$)
is the set consisting of the first $\ell_0$ black
(resp. white) neighbors of $x$. We will use the notation
$\Sigma{c}_{\leq i}(x)=\bigcup_{j=1}^i\Sigma{c}_j(x)$ for $c=b,w$.
We now iteratively define for $i=0,1,\ldots,(\ell_1-2)/2$.
\begin{align*}
\hS{b}_{2i+1}(x)&=\set{v\notin \Sigma{b}_{\leq 2i}(x): v\neq y
\text{ is joined by a black $G_{t_0}$-edge to a vertex in
}\Sigma{b}_{2i}(x)}.\\
\Sigma{b}_{2i+1}(x)&=\text{the first $\ell_0$ members of
$\hS{b}_{2i+1}(x)$}.\\
\hS{b}_{2i+2}(x)&=\set{v\notin \Sigma{b}_{\leq 2i+1}(x): v\neq y
\text{ is joined by a white $G_{t_0}$-edge to a vertex in
}\Sigma{b}_{2i+1}(x)}.\\
\Sigma{b}_{2i+2}(x)&=\text{the first $\ell_0$ members of
$\hS{b}_{2i+2}(x)$}.
\end{align*}
We then define, for $i=0,1,\ldots,(\ell_1-2)/2$.
\begin{align*}
\hS{w}_{2i+1}(x)&=\set{v\notin (\Sigma{b}_{\leq \ell_1}(x)\cup \Sigma{w}_{\leq 2i}(x)): v\neq y
\text{ is joined by a white $G_{t_0}$-edge to a vertex in
}\Sigma{w}_{2i}(x)}\\
\Sigma{w}_{2i+1}(x)&=\text{the first $\ell_0$ members of
$\hS{w}_{2i+1}(x)$}.\\
\hS{w}_{2i+2}(x)&=\set{v\notin (\Sigma{b}_{\leq \ell_1}(x)\cup \Sigma{w}_{\leq 2i+1}(x)): v\neq y
\text{ is joined by a black $G_{t_0}$-edge to a vertex in
}\Sigma{w}_{2i+1}(x)}\\
\Sigma{w}_{2i+2}(x)&=\text{the first $\ell_0$ members of
$\hS{w}_{2i+2}(x)$}.
\end{align*}
\begin{lemma}\label{lem13}
If $1\leq i\leq \ell_1$, then in $G_{t_0}$, for $c=b,w$,
$$\Pr(|\hS{c}_{i+1}(x)|\leq \ell_0|\Sigma{c}_i(x)|\mid |\Sigma{c}_j(x)|=\ell_0^{j},\,0\leq j
\leq i)=O(n^{-K})\text{ for any constant }K>0.$$
\end{lemma}
{\noindent \bf Proof\hspace{2em}}
This follows easily from \eqref{notmon} and the Chernoff bounds.
Each random variable $|\hS{c}_{i+1}(x)|$ is binomially distributed with parameters $n-o(n)$ and $1-(1-p_0/2)^{\ell_0^i}$. The mean is therefore asymptotically $\frac12\ell_0^i\log n=\Omega(\log^2n)$ and we are asking for the probability that it is much less than half its mean.
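(In the form we use: if $B\sim\operatorname{Bin}(N,q)$ then $\Pr(B\leq Nq/2)\leq e^{-Nq/8}$, so a mean of order $\log^2n$ makes the failure probability $e^{-\Omega(\log^2n)}=n^{-\Omega(\log n)}$, smaller than any fixed power of $n$.)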
\hspace*{\fill}\mbox{$\Box$}\\ \medskip
It follows from this lemma, that w.h.p., we may define
$\Sigma{b}_0(x),\Sigma{b}_1(x),\ldots,\Sigma{b}_{\ell_1}(x)$ where
$|\Sigma{b}_i(x)|=\ell_0^i$ such that for each $j$ and $z\in \Sigma{b}_j(x)$
there is a zebraic path from $x$ to $z$
that starts with a black edge. For $\Sigma{w}_{\ell_1}(x)$ we can
say the same except that the zebraic
path begins with a white edge.
Having defined the $\Sigma{c}_i(x)$ etc., we define sets
$\Sigma{c}_i(y),i=1,2 \ldots,\ell_1,\,c=b,w$.
We let $\Sigma{b}_0(y)=\Sigma{w}_0(y)=\set{y}$ and then $\Sigma{b}_1(y)$ (resp. $\Sigma{w}_1(y)$)
is the set consisting of the first $\ell_0$ black
(resp. white) neighbors of $y$ that are not in $\Sigma{b}_{\leq
\ell_1}(x)\cup \Sigma{w}_{\leq \ell_1}(x)$. We note that for $c=b,w$ we
have $|\Sigma{c}_1(y)|\geq L_0-18>\ell_0$. This follows from Lemma
\ref{lem12}(b). Indeed, suppose that $y$ has ten neighbors $T$ in
$\Sigma{w}_{\leq\ell_1}(x)$. Let $S$ be the set of vertices in the paths
from $T$ to $x$ in $\Sigma{w}_{\leq\ell_1}(x)$. If $|S|=s$ then
$S\cup\set{y}$ contains at least $s+9$ edges. This is because every
neighbour after the first adds an additional $k$ vertices and $k+1$ edges to
the subgraph of $G_{t_0}$ spanned by $S\cup\set{y}$,
for some $k\leq \ell_1$. Now $s+1\leq 10\ell_1+1\leq 7L_1$ and the
$s+9$ edges contradict the condition in the lemma, with $a=7$.
We make a slight change in the definitions of the $\hS{c}_i(y)$ in that we keep these sets disjoint from the $\Sigma{c'}_i(x)$.
Thus we take for example
\begin{multline*}
\hS{w}_{2i+1}(y)=\\
\set{v\notin (\Sigma{w}_{\leq 2i}(y)\cup
\Sigma{b}_{\leq \ell_1}(x)\cup \Sigma{w}_{\leq \ell_1}(x)): v
\text{ is joined by a black $G_{t_0}$-edge to a vertex in }\Sigma{w}_{2i}(y)}.
\end{multline*}
Then we note that excluding $o(n)$ extra vertices has little effect on the
proof of Lemma \ref{lem13} which remains true with $x$
replaced by $y$. We can then define the $\Sigma{c}_{i}(y)$ by taking the first $\ell_0$ vertices.
Suppose now that we condition on the sets $\Sigma{c}_i(x),\Sigma{c}_i(y)$
for $c=b,w$ and $i=0,1,\ldots,\ell_1$. The edges between
the sets with $c=b$ and $i=\ell_1$ and those with $c=w$
and $i=\ell_1$ are unconditioned.
Let
$$\Lambda=\ell_0^{2\ell_1}=n^{4/3-o(1)}.$$
Then, for example, using \eqref{mon},
\beq{eq13}
\Pr(\not\exists\text{ a black $G_{t_0}$ edge joining }\Sigma{b}_{\ell_1}(x),\Sigma{b}_{\ell_1}(y))
\leq 3\brac{1-\frac{\log n}{(2+o(1))n}}^\Lambda\leq 3e^{-n^{1/3-o(1)}}=O(n^{-K})\text{ for any constant }K>0.
\end{equation}
Thus w.h.p. there is a zebraic path with both terminal edges black between every
pair of large vertices. A similar argument using $\Sigma{w}_{\ell_1}(x),\Sigma{w}_{\ell_1}(y)$
shows that w.h.p. there is a zebraic path with both terminal edges white between
every pair of large vertices.
If we want a zebraic path with a black edge incident with $x$ and a white edge
incident with $y$ then we argue that there is a white $G_{t_0}$ edge between
$\Sigma{b}_{\ell_1}(x)$ and $\Sigma{w}_{\ell_1-1}(y)$.
We now consider the small vertices. Let $V_\sigma$ be the set of small vertices that
have a large neighbor in $G_{\tau_1}$. The above analysis shows that there is a zebraic
path between $v\in V_\sigma$ and $w\in V_\sigma\cup V_\lambda$, where $V_\lambda$ is the set of large vertices.
Indeed if $v$ is joined by a black edge to a vertex $w\in V_\lambda$ then we can continue with a zebraic path that begins with a white edge and we can reach any large vertex and choose the
color of the terminating edge to be either black or white. This is useful when we need
to continue to another vertex in $V_\sigma$.
We now have to deal with small vertices
that have no large neighbors at time $\tau_1$. It follows from Lemma \ref{lem12}(a)
that such vertices have degree one or two in $G_{\tau_1}$ and that every vertex at distance two
from such a vertex is large.
\begin{lemma}\label{lem14}
W.h.p., any two vertices of degree at most two in $G_{t_0}$ are
at distance greater than 10 from each other in $G_{t_1}$.
\end{lemma}
{\noindent \bf Proof\hspace{2em}}
Simpler than Lemma \ref{lem0}(b). We use \eqref{notmon} and then
$$\Pr(\exists \text{ such a pair of vertices})\leq_b t_1^{1/2}\sum_{k=0}^9
n^k p_1^{k-1}\brac{(1-p_0)^{n-k-1}+(n-k)p_0(1-p_0)^{n-k-2}}^2=o(1).$$
\hspace*{\fill}\mbox{$\Box$}\\ \medskip
Let $Z_i$ be the number of vertices of degree $0\leq i\leq 2$ in $G_{t_0}$
that are adjacent in $G_{\tau_1}$ to vertices that are
themselves only incident to edges of one color.
First consider the case $i=1,2$. Here we let $Z_i'$ be the number of
vertices of degree $i$ in $G_{t_0}$
that are adjacent in $G_{t_0}$ to vertices that are
themselves only incident to edges of one color. Note that $Z_i\leq Z_i'$.
Then we have, with the aid of \eqref{binom},
\begin{align}
\expect(Z_1')&\leq n\binom{n-1}{1}\frac{\binom{N-n+1}{t_0-1}}{\binom{N}{t_0}}
\sum_{k=0}^{n-2}\binom{n-2}{k}
\frac{\binom{N-2n+3}{t_0-1-k}}{\binom{N-n+1}{t_0-1}}2^{-(k-1)}\label{eq11}\\
&\leq_b n^2\frac{t_0}{N}\bfrac{N-t_0}{N-1}^{n-2}\sum_{k=0}^{n-2}\binom{n-2}{k} 2^{-k}\bfrac{t_0-1}{N-n+1}^{k}
\bfrac{N-n-t_0+2}{N-n-k+1}^{n-2-k}\nonumber\\
&\leq_b n\log n \exp\set{-\frac{(n-2)(t_0-1)}{N-1}}\sum_{k=0}^{n-2}
\binom{n-2}{k}\bfrac{t_0-1}{2(N-n+1)}^k\bfrac{N-n-t_0+2}{N-n-k+1}^{n-2-k}\nonumber\\
&\leq n\log n \exp\set{-\frac{(n-2)(t_0-1)}{N-1}}\sum_{k=0}^{n-2}
\binom{n-2}{k}\bfrac{t_0}{2(N-n)}^k\bfrac{N-n-2t_0/3}{N-n}^{n-2-k}\nonumber\\
&\leq_b \log^3n \brac{\frac{t_0}{2(N-n)}+\frac{N-n-2t_0/3}{N-n}}^{n-2}\nonumber\\
&\leq \log^3n \bfrac{N-t_0/6}{N-n}^{n-2}\nonumber\\
&=o(1).\label{eq14}
\end{align}
{\bf Explanation for \eqref{eq11}:} We choose a vertex $v$ of degree one
and its neighbor $w$ in $n\binom{n-1}{1}$ ways.
The probability that $v$ has degree one is $\frac{\binom{N-n+1}{t_0-1}}{\binom{N}{t_0}}$.
We fix the
degree of $w$ to be $k+1$. This now has probability
$\frac{\binom{N-2n+3}{t_0-k-1}}{\binom{N-n+1}{t_0-1}}$. The
final factor $2^{-(k-1)}$ is the probability that $w$ only sees edges of one color.
\medskip
In order to deal with $Z_2'$, we next eliminate the possibility of a vertex of degree two in $G_{t_0}$
being in a triangle of $G_{t_1}$. First, using \eqref{mon}, the expected number of
vertices of degree two in $G_{t_0}$ is at most
$$3n\binom{n-1}{2}p_0^2\brac{1-p_0}^{n-3}=O(\log^4n).$$
So, w.h.p. there are fewer than $\log^5n$.
Using \eqref{notmon}, we see that
the expected number of triangles of $G_{t_0}$ containing a vertex of degree two is at most
$$O(t_0^{1/2})\times O(\log^4n)\times n^3p_0^3(1-p_0)^{n-3}=o(1).$$
So, w.h.p. there are no such triangles.
Then the probability that there is an edge of $G_{t_1}-G_{t_0}$ that
joins the two neighbors of a vertex of degree two in $G_{t_0}$ is at most
$$o(1)+\log^5n\times \frac{t_1-t_0}{N}=o(1).$$
Now we can proceed to estimate $\expect(Z_2')$, ignoring the possibility
of such a triangle. In which case,
\begin{align}
&\expect(Z_2')\nonumber\\
&\leq_b n\binom{n-1}{2}
\frac{\binom{N-n+1}{t_0-2}}{\binom{N}{t_0}}\sum_{k,l=0}^{n-3}
\binom{n-3}{k}\binom{n-3}{l}
\frac{\binom{N-3n+6}{t_0-2-k-l}}{\binom{N-n+1}{t_0-2}}2^{-k-l}\label{eq15}\\
&\leq n^3 \bfrac{t_0}{N}^2\bfrac{N-t_0}{N-2}^{n-3}\times \nonumber\\
&\gap{1}\sum_{k,l=0}^{n-3}
\binom{n-3}{k}\binom{n-3}{l}\bfrac{t_0-2}{N-n+1}^{k+l}\bfrac{N-n-t_0+3}
{N-n-k-l+1}^{2n-5-k-l}2^{-k-l}
\nonumber\\
&\leq_b \log^4n \brac{\sum_{k=0}^{n-3}\binom{n-3}{k}\bfrac{t_0}{2(N-n)}^{k}
\bfrac{N-t_0}{N-2n}^{n-3-k}}^2\nonumber\\
&\leq \log^4n\brac{1-\frac{t_0-2}{2(N-2n)}}^{2(n-3)}\nonumber\\
&=o(1).\label{eq16}
\end{align}
\medskip
Finally, consider $Z_0$. Condition on $G_{t_0}$ and assume that
Properties (c),(d) of Lemma \ref{lem12}
hold. The first edge incident with an isolated vertex of $G_{t_0}$
will have a random endpoint. It follows immediately
that
\beq{eq17}
\expect(Z_0)\leq o(1)+\log^3n\times n^{-1/3}=o(1).
\end{equation}
Here the $o(1)$ accounts for Properties (c),(d) of Lemma \ref{lem12} and
$\log^3n\times n^{-1/3}$ bounds the expected
number of ``first edges'' that choose small endpoints.
Equations \eqref{eq11}, \eqref{eq15} and \eqref{eq17} show that
$Z_0+Z_1+Z_2=0$ w.h.p. In which case it
will be possible to find zebraic paths starting from small vertices. Indeed, we now know that w.h.p. any small vertex $v$ will be adjacent to a vertex $w$ that is incident with edges of both colors and that any other neighbor of $w$ is large.
\section{Proof of Theorem \ref{th5}}\label{th5proof}
The case $r=2$ is implied by Corollary \ref{cor1} and
so we can assume that $r\geq3$.
\subsection{$p\leq (1-\varepsilon)p_r$}
For a vertex $v$, let
\begin{align*}
C_v&=\set{i:v\text{ is incident with an edge of color $i$}}.\\
I_v&=\set{i:\set{i,i+1}\subseteq C_v}.
\end{align*}
Let $v$ be \emph{bad} if $I_v=\emptyset$. The existence
of a bad vertex means that there are no $r$-zebraic Hamilton cycles. Let $Z_B$
denote the number of bad vertices. Now if $r$ is odd
and $C_v\subseteq \set{1,3,\ldots,2\rdown{r/2}-1}$
or $r$ is even and $C_v\subseteq \set{1,3,\ldots,r-1}$ then
$I_v=\emptyset$. Each of these sets contains $r-\alpha_r$ colors, so each of the other
$n-1$ vertices is, independently, either not joined to $v$ or joined to it by an edge
with color in the chosen set, with probability $1-\frac{\alpha_rp}{r}$. Hence,
$$\expect(Z_B)\geq n\brac{1-\frac{\alpha_rp}{r}}^{n-1}=n^{\varepsilon-o(1)}\to\infty.$$
A straightforward second moment calculation shows that $Z_B\neq 0$ w.h.p. and this proves the
first part of the theorem.
\subsection{$p\geq (1+3\varepsilon)p_r$}
Note the replacement of $\varepsilon$ by $3\varepsilon$ here, for convenience. Note also that $\varepsilon$ is assumed to be sufficiently small for some inequalities below to hold.
Write $1-p=(1-p_1)(1-p_2)^2$ where $p_1=(1+\varepsilon)p_r$ and $p_2\sim\varepsilon p_r$. Thus
$G_{n,p}$ is the union of $G_{n,p_1}$ and two independent copies of $G_{n,p_2}$.
If an edge appears more than once in $G_{n,p}$, then it retains the color of its first occurrence.
Now for a vertex $v$ let $d_i(v)$ denote the number of edges of color $i$ incident
with $v$ in $G_{n,p_1}$. Let
$$J_v=\set{i:d_i(v)\geq \eta_0\log n}$$
where $\eta_0=\varepsilon^2/r$.
Let $v$ be \emph{poor} if $|J_v|<\beta_r$ where
$\beta_r=\rdown{r/2}+1$. Observe that $\alpha_r+\beta_r=r+1$. Then let $Z_P$ denote the number
of poor vertices in $G_{n,p_1}$. A simple calculation shows that w.h.p. the minimum
degree in $G_{n,p_1}$ is at least $L_0$ and that the maximum degree is at most $6\log n$.
Then
\begin{align*}
\expect(Z_P)&\leq o(1)+n\sum_{k=L_0}^{n-1}\binom{n-1}{k}p_1^k(1-p_1)^{n-1-k}
\sum_{l=r-\beta_r+1}^r\binom{r}{l}\binom{k}{l\eta_0\log n}\brac{1-\frac{l}{r}}^{k-r\eta_0\log n}\\
&\leq o(1)+n
\sum_{k=0}^{n-1}\binom{n-1}{k}p_1^k(1-p_1)^{n-1-k}2^r\binom{6\log n}{r\eta_0\log n}\bfrac{\beta_r-1}{r}^{k}
\bfrac{r}{\beta_r-1}^{r\eta_0\log n}\\
&\leq o(1)+2^rn^{1+r\eta_0\log(6e/\eta_0)}(1-p_1)^{n-1}\brac{1+\frac{(\beta_r-1)p_1}{r(1-p_1)}}^{n-1}\\
&\leq o(1)+2^rn^{1+r\eta_0\log(6e/\eta_0)}\brac{1-\frac{\alpha_rp_1}{r}}^{n-1}\\
&=o(1).
\end{align*}
We can therefore assert that w.h.p. there are no poor vertices. This means that
$$K_v=\set{i:d_i(v),d_{i+1}(v)\geq \eta_0\log n}\neq \emptyset\text{ for all }v\in [n].$$
The proof now follows our general 3-phase procedure of (i) finding an $r$-zebraic
2-factor, (ii) removing small cycles so that we have a 2-factor in which
every cycle has length $\Omega(n/\log n)$ and then (iii) using a second
moment calculation to show that this 2-factor can be re-arranged into
an $r$-zebraic Hamilton cycle.
\subsubsection{Finding an $r$-zebraic 2-factor}
We
partition $[n]$ into $r$ sets $V_i=[(i-1)n/r+1,in/r]$ of size $n/r$.
Now for each $i$ and each vertex $v$ let
\begin{align*}
d_i^+(v)&=|\set{w\in V_{i+1}:(v,w)\text{ is an edge of }G_{n,p_1}
\text{ of color }i+1}|.\\
d_i^-(v)&=|\set{w\in V_{i-1}:(v,w)\text{ is an edge of }G_{n,p_1}
\text{ of color }i-1}|.
\end{align*}
(Here the color index $1-1=0$ is interpreted as $r$ and $r+1$ is interpreted as $1$, i.e.\ color indices are taken modulo $r$.)
We now let a vertex $v\in V_i$ be $i$-\emph{large} if $d_i^+(v),d_i^-(v)\geq \eta\log n$
where $\eta=\min\set{\eta_0,\eta_1,\eta_2}$ and $\eta_1$ is the solution to
$$\eta_1\log\bfrac{e(1+\varepsilon)}{r\eta_1\alpha_r}=\frac{1}{r\alpha_r}$$
and $\eta_2$ is the solution to
$$\eta_2\log\bfrac{3er(1+\varepsilon)}{\eta_2\alpha_r}=\frac{1}{3\alpha_r}.$$
Let $v$ be \emph{large} if it is $i$-large for all $i$.
Let $v$ be \emph{small} otherwise. (Note that $d_i^+(v),d_i^-(v)$ are defined for all $v$, not just for $v\in V_i$, $i\in[r]$).
Let $V_\lambda,V_\sigma$ denote the sets of large and small vertices respectively.
\begin{lemma}\label{lem15}
W.h.p., in $G_{n,p_1}$,
\begin{description}
\item[(a)] $|V_\sigma|\leq n^{1-\theta}$ where $\theta=\frac{\varepsilon}{2r\alpha_r}$.
\item[(b)] No connected subset of size at most $\log\log n$ contains more
than $\mu_0=r\alpha_r$ members of $V_\sigma$.
\item[(c)] If $S\subseteq [n]$ and $|S|\leq n_0=n/\log^2n$ then
$e(S)\leq 100|S|$.
\end{description}
\end{lemma}
{\noindent \bf Proof\hspace{2em}}\\
(a) If $v\in V_\sigma$ then there exists $i$ such that $d_i^+(v)\leq \eta\log n$
or $d_i^-(v)\leq \eta\log n$. So we have
\begin{align}
\expect(|V_\sigma|)&\leq 2rn\sum_{k=0}^{\eta\log n}\binom{n/r}{k}\bfrac{p_1}{r}^k
\brac{1-\frac{p_1}{r}}^{n/r-k}\label{MK}\\
&\leq3\bfrac{(1+\varepsilon)e}{r\eta\alpha_r}^{\eta\log n}n^{1-(1+\varepsilon+o(1))/r\alpha_r}\\
&\leq n^{1-2\theta+o(1)}.
\end{align}
Part (a) follows from the Markov inequality. Note that we can lose the factor 2 in \eqref{MK} since $d^+_i(v)=d^-_{i+2}(v)$.
(b)
The expected number of connected sets $S$ of size $2\log\log n$ containing
$\mu_0$ members of $V_\sigma$ can be bounded by
\beq{206}
\sum_{s=\mu_0}^{2\log\log n}\binom{n}{s}s^{s-2}p_1^{s-1}
\binom{s}{\mu_0}
\brac{r\sum_{k=0}^{\eta\log n}\binom{n/r-s}{k}\bfrac{p_1}{r}^k
\brac{1-\frac{p_1}{r}}^{n/r-s-k}}^{\mu_0}.
\end{equation}
{\bf Explanation:} We choose $s$ vertices for $S$ and a tree to connect
up the vertices of $S$. We then choose $\mu_0$ members $A\subseteq S$
to be in $V_\sigma$. We multiply by the probability that
for each vertex in $A$, there is at least one $j$ such that $v$ has few
neighbors in $V_{j}\setminus S$ connected to $v$ by edges of color $j$.
The sum in \eqref{206} can be bounded by
$$n\sum_{s=\mu_0}^{2\log\log n}(4e\log n)^s n^{-\mu_0(1+\varepsilon+o(1))/r\alpha_r}=o(1).$$
(c) This is proved in the same manner as Lemma \ref{lem0}(c).
\hspace*{\fill}\mbox{$\Box$}\\ \medskip
For $v\in V_\sigma$ we let
$\phi(v)=\min\set{i:v\text{ is $i$-large}}$
and then let $X_i=\set{v\in V_\sigma:\phi(v)=i}$ for $i\in[r]$.
Now let
$$W_i=(V_i\setminus V_\sigma)\cup\set{v\in V_\sigma:\phi(v)=i},\quad i=1,2,\ldots,r.$$
Suppose that $w_i=|W_i|-n/r$ for $i\in [r]$
and let $w_i^+=\max\set{0,w_i}$ for $i\in [r]$. We now remove
$w_i^+$ randomly chosen large vertices from each $W_i$ and then randomly assign
$w_i^-=-\min\set{0,w_i}$ of them to each $W_i,i\in [r]$.
Thus we obtain a partition
of $[n]$ into $r$ sets $Z_i,i=1,2,\ldots,r$,
of size $n/r$ for $i\in[r]$.
Let $H_i$ be the bipartite graph induced by
$W_i,W_{i+1}$ and the edges of color $i$ in $G_{n,p_1}$.
We now argue that
\begin{lemma}\label{lemH}
$H_i$ has minimum degree at least $\frac12\eta\log n$ w.h.p.
\end{lemma}
{\noindent \bf Proof\hspace{2em}}
It follows from Lemma \ref{lem15}(b) that
no vertex in $W_i\cap V_i$ loses more than $\mu_0$ neighbors
from the deletion of $V_\sigma$. Also, we move $v\in V_\sigma$ to
a $W_i$ where it has large degree in $V_{i-1}$ and $V_{i+1}$.
Its neighborhood may have been affected by the deletion of $V_\sigma$,
but only by at most $\mu_0$. Thus for every $i$ and $v\in X_i$,
$v$ has at least $\eta\log n-\mu_0$ neighbors in $W_{i-1}$ connected
to $v$ by an edge of color $i-1$. Similarly w.r.t. $i+1$.
Now consider the random re-shuffling to get sets of size $n/r$.
Fix a $v\in V_i$. Suppose that
it has $d=\Theta(\log n)$ neighbors in $W_{i+1}$ connected by
an edge of color $i+1$. Now randomly choose $w_{i+1}\leq |V_\sigma|$
to delete from $W_{i+1}$.
The number $\nu_v$ of neighbors of $v$ chosen
is dominated by $\ensuremath{\operatorname{Bin}}\brac{w_{i+1},\frac{d}{n/r-w_{i+1}}}$. This follows from the fact that if we choose these $w_{i+1}$ vertices one by one, then at each step, the chance that the chosen vertex is a neighbor of $v$ is bounded from above by $\frac{d}{n/r-w_{i+1}}$.
So, given the condition in Lemma \ref{lem15}(a) we have
$$\Pr(\nu_v\geq 2/\theta)\leq \binom{n^{1-\theta}}{2/\theta}\bfrac{dr}{n-o(n)}^{2/\theta}
\leq \bfrac{n^{1-\theta}edr\theta}{n}^{2/\theta}=o(n^{-1}).$$
\hspace*{\fill}\mbox{$\Box$}\\ \medskip
We can now verify the existence of perfect matchings w.h.p.
\begin{lemma}\label{lem16}
W.h.p., each $H_i$ contains a perfect matching $M_i,i=1,2,\ldots,r$.
\end{lemma}
{\noindent \bf Proof\hspace{2em}}
Fix $i$.
We use Hall's theorem and consider the existence of a set $S\subseteq W_i$
that has fewer than $|S|$ $H_i$-neighbors in $W_{i+1}$. Let $s=|S|$
and let $T=N_{H_i}(S)$ and $t=|T|<s$. We can
rule out $s\leq n_0=n/\log^2n$ through Lemma \ref{lem15}(c). This is because we have
$e(S\cup T)/|S\cup T|\geq \frac14\eta\log n$ in this case.
Let $n_\sigma=|V_\sigma|$ and now consider $n/\log^2n\leq s\leq n/2r$.
Given such a pair $S,T$ we deduce that there exist $S_1\subseteq S\subseteq V_i,
|S_1|\geq s-n_\sigma$ and $T_1\subseteq T\subseteq V_{i+1}$ and
$U_1\subseteq V_{i+1},|U_1|\leq n_\sigma$
such that there are at least $m_s=(s\eta/2-6n_\sigma)\log n$ edges between $S_1$ and
$T_1$ and no edges between $S_1$ and $V_{i+1}\setminus(T_1\cup U_1)$.
There is no loss of generality in increasing the size of $T$ to $s$.
We can then write
\begin{align*}
\Pr(\exists\ S,T)&\leq \sum_{s=n_0}^{n/2r}\binom{n/r-n_\sigma}{s}^2
\binom{s^2}{m_s}p_1^{m_s}(1-p_1)^{(s-n_\sigma)(n/r-s-n_\sigma)}\\
&\leq \sum_{s=n_0}^{n/2r}\bfrac{ne}{rs}^{2s}\bfrac{s^2p_1e}{m_s}^{m_s}
e^{-(s-n_\sigma)(n/r-s-n_\sigma)p_1}\\
&\leq \sum_{s=n_0}^{n/2r}\brac{\bfrac{s}{n}^{\eta\log n/3}\bfrac{3er(1+\varepsilon)}{\alpha_r\eta}^{\eta\log n/2}
n^{-(1-o(1))/2\alpha_r}}^s\\
&=o(1).
\end{align*}
For the case $s\geq n/2r$ we look for subsets of $V_{i+1}$ with too few neighbors.
\hspace*{\fill}\mbox{$\Box$}\\ \medskip
It follows from symmetry considerations that the $M_i$ are independent of each other.
Analogously to Lemma \ref{lem2}, we have
\begin{lemma}\label{lem17}
The following hold w.h.p.:
\begin{description}
\item[(a)] $\bigcup_{i=1}^rM_i$ has at most $10\log n$ components.
(Components are $r$-zebraic cycles of length divisible by $r$.)
\item[(b)] There are at most $n_b$ vertices on
components
of size at most $n_c$.
\end{description}
\end{lemma}
{\noindent \bf Proof\hspace{2em}}
The matchings induce a permutation $\pi$ on $W_1$. Suppose that $x\in W_1$.
We follow a path via a matching edge to $W_2$ and then by a matching edge to
$W_3$ and so on until we return to a vertex $\pi(x)\in W_1$. $\pi$ can be taken
to be a random permutation and then the lemma follows from Lemma \ref{lem2}.
\hspace*{\fill}\mbox{$\Box$}\\ \medskip
The remaining part of the proof is similar
to that described in Sections \ref{ics}, \ref{CHC}.
We use the edges of the first copy $G_{n,p_2}$
of color 1 to make all cycles have length $\Omega(n/\log n)$
and then we use the edges of the second copy
of $G_{n,p_2}$ of color 1 to create an $r$-zebraic
Hamilton cycle. The details are left to the reader.
\section{Introduction}
The ordinary gravity, described by the Einstein-Hilbert action (containing the
curvature scalar $R$), is not renormalizable. The higher derivative gravity,
with $R+R^2$, is renormalizable. But higher derivative theories are considered
as problematic, because according the the Ostrogradski
formalism\,\cite{Ostrogradski} they contain
negative energies which, according to the wide spread belief, automatically
imply instabilities at the classical and quantum level. It is often stated
that at the quantum level such a theory implies negative probabilities due to
ghosts, and is therefore not unitary. Whether a theory implies negative
probabilities and positive energies, or vice versa, positive probabilities
and negative energies, depends on choice of vacuum and corresponding
creation and annihilations operators\,\cite{Pauli}--\cite{Ozonder}.
A model for a higher derivative theory is the Pais-Uhlenbeck
oscillator\,\cite{PU}. It has been studied by many authors,
because understanding the issues
concerning its stability, could pave the way towards quantum gravity.
Smilga\,\cite{Smilga1}--\cite{SmilgaStable} has found that there are islands of
stability of the classical solutions of the interacting
Pais-Uhlenbeck (PU) oscillator. An example of an unconditionally stable
interacting system was also
found\,\cite{SmilgaStable}. This system, which is a non linear extension of the
PU oscillator, is a close relative of a
supersymmetric higher-derivative system\,\cite{Robert}. Recently,
Mostafazadeh\,\cite{Mostafazadeh} has found a Hamiltonian formulation
of the PU oscillator that yields a stable and unitary quantum system.
Other authors\,\cite{Bolonek}--\cite{Nucci} have also arrived at the positive
definite Hamiltonians for the PU oscillator. A procedure
with a PT symmetric Hamiltonian without ghosts and negative energies
in the spectrum has been considered in Refs.\,\cite{Bender}--\cite{Mannheim:2012ci}.
In this paper we show
that the descriptions of Refs.\,\cite{Mostafazadeh}--\cite{Nucci} hold for a free
PU oscillator only, but not for
a self-interacting one. In the latter case one has to describe the PU oscillator
by the second order Lagrangian and employ the Ostrogradski formalism.
Then, as it is well known, the PU oscillator can be
written as a system of two coupled oscillators, one with positive and the
other one with negative energy. Stability issues arise, because the
energy can flow from one to the other oscillator in such a way that
their kinetic energies escape into positive and negative infinity, respectively,
while the total energy of the system remains constant and finite.
In this paper we first show explicitly that, in general, a self-interacting
PU oscillator
cannot be described by a positive definite Hamiltonian. Then we consider
numerical solutions of the PU oscillator with the quartic self-interaction.
Unless the coupling constant or the initial velocity
is too high, the (classical) system is stable. So we indeed have islands
of stability as observed by Smilga\,\cite{Smilga1}--\cite{SmilgaStable},
and recently by\,\cite{Ilhan}. Then we consider two
modifications of the PU oscillator. (i) We replace the quartic
interaction term with a term that contains the fourth power of the sine.
We show numerically
and analytically that such a modification gives infinite continents of stability.
(ii) Instead of taking equal masses of the two oscillators, we consider
the case in which the masses are different. Such a modified system is in
fact just like the PU oscillator, only the coefficients in front
of the terms are changed, and a non-linear term is added. Again we obtain
stability for the vast range of parameters. Moreover, regardless of how
high the initial velocity is, the solution is stable. Such behavior
of the classical solutions implies that the quantum system is stable
as well\,\cite{Ilhan}.
\section{The Pais-Uhlenbeck oscillator as a system of two oscillators}
The Pais-Uhlenbeck oscillator is given by the 4th order differential equation
\begin{equation}
\left ( \frac{\mbox{\rm d}^2}{\mbox{\rm d} t^2} + \omega_1^2 \right )
\left (\frac{\mbox{\rm d}^2}{\mbox{\rm d} t^2} + \omega_2^2 \right ) x = 0,
\label{2.1}
\end{equation}
which gives
\begin{equation}
x^{(4)} + (\omega_1^2 + \omega_2^2) {\ddot x} + \omega_1^2 \omega_2^2 x = 0.
\label{2.2}
\end{equation}
As observed by Mostafazadeh\,\cite{Mostafazadeh}, this can be written as
the system of two oscillators\footnote{We use here a different notation for
coefficients.}
\begin{equation}
{\ddot x} + \mu_1 x - \rho_1 y = 0,
\label{2.3}
\end{equation}
\begin{equation}
{\ddot y} + \mu_2 y - \rho_2 x = 0,
\label{2.4}
\end{equation}
where $\mu_1,~\mu_2,~\rho_1,~\rho_2$ are real constants.
From Eq.\,(\ref{2.3}) we have $y=(1/\rho_1)({\ddot x} +\mu_1 x)$. Inserting this
into Eq.\,(\ref{2.4}), we obtain
\begin{equation}
x^{(4)} + (\mu_1 + \mu_2) {\ddot x} +(\mu_1 \mu_2 - \rho_1 \rho_2) x = 0.
\label{2.5}
\end{equation}
Comparison of the latter equation with (\ref{2.2}) gives the relations
\begin{equation}
\mu_1 +\mu_2 = \omega_1^2 +\omega_2^2
\label{2.6}
\end{equation}
\begin{equation}
\mu_1 \mu_2 - \rho_1 \rho_2 = \omega_1^2 \omega_2^2 .
\label{2.7}
\end{equation}
The solution is
\begin{equation}
\omega_{1,2}^2 = \mbox{$\frac{1}{2}$} (\mu_1 +\mu_2) \pm \mbox{$\frac{1}{2}$}
\sqrt{(\mu_1 +\mu_2)^2 - 4 (\mu_1 \mu_2 - \rho_1 \rho_2)} .
\label{2.8}
\end{equation}
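(Indeed, by (\ref{2.6}) and (\ref{2.7}), $\omega_1^2$ and $\omega_2^2$ are the two roots of the quadratic equation $z^2-(\mu_1+\mu_2)z+\mu_1\mu_2-\rho_1\rho_2=0$ in $z=\omega^2$.)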
Let us now find possible Lagrangians corresponding to the equations of
motion (\ref{2.3}),(\ref{2.4}).
{\bf Case I.}
Assuming the Lagrangian
\begin{equation}
L=\mbox{$\frac{1}{2}$}({\dot x}^2 + {\dot y}^2) - \mbox{$\frac{1}{2}$}
(\mu_1 x^2 + \mu_2 y^2 - 2 \rho_1 x y ),
\label{2.9}
\end{equation}
we obtain the equations of motion (\ref{2.3}),(\ref{2.4}), if
\begin{equation}
\rho_2 = \rho_1 .
\label{2.10}
\end{equation}
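(To check this, note that the Euler--Lagrange equations of (\ref{2.9}) are ${\ddot x}+\mu_1 x-\rho_1 y=0$ and ${\ddot y}+\mu_2 y-\rho_1 x=0$, which coincide with (\ref{2.3}),(\ref{2.4}) precisely when $\rho_2=\rho_1$.)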
Then from Eq.\,(\ref{2.8}) we have
\begin{equation}
\omega_{1,2}^2 = \mbox{$\frac{1}{2}$}(\mu_1 + \mu_2) \pm \mbox{$\frac{1}{2}$}
\sqrt{(\mu_1 -\mu_2)^2 + 4 \rho_1^2} .
\label{2.11}
\end{equation}
We see that for a large range of the coefficients $\mu_1$, $\mu_2$, $\rho_1$,
the squared frequencies $\omega_1^2$ and $\omega_2^2$ are positive. Then
$\omega_1$, $\omega_2$ are real, in which case we have oscillating motion.
The Hamiltonian is
\begin{equation}
H= \mbox{$\frac{1}{2}$}(p_x^2 + p_y^2) + \mbox{$\frac{1}{2}$}(\mu_1 x^2
+\mu_2 y^2 - 2 \rho_1 x y),
\label{2.12}
\end{equation}
where $p_x=\partial L/\partial {\dot x} = {\dot x}$, and $p_y =\partial L/\partial {\dot y} = {\dot y}$.
By performing a rotation in the $(x,y)$-space,
\begin{eqnarray}
&&~x'=x\, {\rm cos}\, \alpha + y\, {\rm sin}\, \alpha \nonumber\\
&& y' = -x\, {\rm sin}\, \alpha + y\, {\rm cos}\, \alpha
\label{2.13}
\end{eqnarray}
with the accompanying rotation of momenta,
\begin{eqnarray}
&&~p_{x'}=p_x\, {\rm cos}\, \alpha + p_y\, {\rm sin}\, \alpha \nonumber\\
&& p_{y'}= -p_x\, {\rm sin}\, \alpha + p_y\, {\rm cos}\, \alpha,
\label{2.14}
\end{eqnarray}
the Hamiltonian (\ref{2.12}) can be diagonalized. By comparing the new Hamiltonian
\begin{equation}
H= \mbox{$\frac{1}{2}$}(p_{x'}^2 + p_{y'}^2) + \mbox{$\frac{1}{2}$}(a {x'}^2
+b {y'}^2),
\label{2.15}
\end{equation}
with the old one (\ref{2.12}), we obtain the system of three equations for
the unknowns $a$, $b$, $\alpha$:
\begin{eqnarray}
&& a\, {\rm cos}^2 \,\alpha + b\, {\rm sin}^2 \, \alpha = \mu_1 \nonumber\\
&& a\, {\rm sin}^2 \,\alpha + b\, {\rm cos}^2 \, \alpha = \mu_2 \label{2.16}\\
&& (a-b)\, {\rm cos}\, \alpha \,{\rm sin}\, \alpha = \rho_1 \nonumber .
\end{eqnarray}
The solution is
\begin{equation}
a = \mbox{$\frac{1}{2}$}(\mu_1 + \mu_2) + \mbox{$\frac{1}{2}$}
\sqrt{(\mu_1 -\mu_2)^2 + 4 \rho_1^2} = \omega_1^2.
\label{2.17}
\end{equation}
\begin{equation}
b = \mbox{$\frac{1}{2}$}(\mu_1 + \mu_2) - \mbox{$\frac{1}{2}$}
\sqrt{(\mu_1 -\mu_2)^2 + 4 \rho_1^2} = \omega_2^2.
\label{2.18}
\end{equation}
\begin{equation}
{\rm cos}\, 2 \alpha= \frac{\mu_1-\mu_2}{\sqrt{(\mu_1 -\mu_2)^2 + 4 \rho_1^2}}.
\label{2.19}
\end{equation}
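(These expressions follow directly from (\ref{2.16}): adding the first two equations gives $a+b=\mu_1+\mu_2$, subtracting them gives $(a-b)\,{\rm cos}\,2\alpha=\mu_1-\mu_2$, and the third equation reads $\mbox{$\frac{1}{2}$}(a-b)\,{\rm sin}\,2\alpha=\rho_1$; squaring and adding the last two relations yields $a-b=\sqrt{(\mu_1-\mu_2)^2+4\rho_1^2}$.)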
The $a$, $b$ are just equal to the squared frequencies $\omega_1^2$, $\omega_2^2$
of the PU oscillator. This can be directly verified by inserting the expressions
(\ref{2.16}) into the equations of motion (\ref{2.5}) and using (\ref{2.10}).
In the new coordinates, the system is described by the Lagrangian\footnote{The
author\,\cite{Mostafazadeh} has also arrived at such a system of two uncoupled oscillators
straightforwardly from Eq.\,(\ref{2.2}), by using a different chain of
substitutions of variables. See also the procedure of Refs.\,\cite{Nucci}.}
\begin{equation}
L=\mbox{$\frac{1}{2}$}({\dot x}'^2 + {\dot y}'^2) - \mbox{$\frac{1}{2}$}
(\omega_1^2 {x'}^2 + \omega_2^2 {y'}^2),
\label{2.20}
\end{equation}
and the Hamiltonian
\begin{equation}
H=\mbox{$\frac{1}{2}$}({\dot x}'^{\,2} + {\dot y}'^{\,2}) + \mbox{$\frac{1}{2}$}
(\omega_1^2 {x'}^2 + \omega_2^2 {y'}^2).
\label{2.21}
\end{equation}
The energy of this system is positive. It is remarkable that when we diagonalize
the $L$ and $H$ for a system of two oscillators (\ref{2.3}) and (\ref{2.4}),
we obtain two different frequencies, $\omega_1$ and $\omega_2$, that correspond
to those occurring in the PU oscillator.
{\bf Case II.}
Alternatively, we may assume that the Lagrangian is
\begin{equation}
L=\mbox{$\frac{1}{2}$}({\dot x}^2 - {\dot y}^2) - \mbox{$\frac{1}{2}$}
(\mu_1 x^2 - \mu_2 y^2 - 2 \rho_1 x y ),
\label{2.22}
\end{equation}
This gives the equations of motion (\ref{2.3}) and (\ref{2.4}) if
\begin{equation}
\rho_2 = - \rho_1 \,.
\label{2.23}
\end{equation}
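(Indeed, since the ${\dot y}^2$ and $y^2$ terms enter (\ref{2.22}) with negative signs, the Euler--Lagrange equation for $y$ reads ${\ddot y}+\mu_2 y+\rho_1 x=0$, i.e.\ Eq.\,(\ref{2.4}) with $\rho_2=-\rho_1$.)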
Inserting this into Eq.\,(\ref{2.8}), we obtain
\begin{equation}
\omega_{1,2}^2 = \mbox{$\frac{1}{2}$}(\mu_1 + \mu_2) \mp \mbox{$\frac{1}{2}$}
\sqrt{(\mu_1 -\mu_2)^2 - 4 \rho_1^2} .
\label{2.24}
\end{equation}
The frequencies $\omega_1$, $\omega_2$ are real if
$(\mu_1 -\mu_2)^2 > 4 \rho_1^2$ and $\mu_1 + \mu_2 >
\sqrt{(\mu_1 -\mu_2)^2 - 4 \rho_1^2}$.
The Hamiltonian is
\begin{equation}
H= \mbox{$\frac{1}{2}$}(p_x^2 - p_y^2) + \mbox{$\frac{1}{2}$}(\mu_1 x^2
-\mu_2 y^2 - 2 \rho_1 x y).
\label{2.25}
\end{equation}
By performing the hyperbolic rotation in the $(x,y)$-space,
\begin{eqnarray}
&&x'=x\, {\rm cosh}\, \alpha + y\, {\rm sinh}\, \alpha \nonumber\\
&& y' =x\, {\rm sinh}\, \alpha + y\, {\rm cosh}\, \alpha
\label{2.26}
\end{eqnarray}
with the accompanying rotation of momenta,
\begin{eqnarray}
&&p_{x'}=p_x\, {\rm cosh}\, \alpha + p_y\, {\rm sinh}\, \alpha \nonumber\\
&& p_{y'}= p_x\, {\rm sinh}\, \alpha + p_y\, {\rm cosh}\, \alpha,
\label{2.27}
\end{eqnarray}
the Lagrangian (\ref{2.22}) and the Hamiltonian (\ref{2.25}) become
\begin{equation}
L=\mbox{$\frac{1}{2}$}({\dot x}'^2 - {\dot y}'^2) - \mbox{$\frac{1}{2}$}
(\omega_1^2 x'^2 - \omega_2^2 y'^2),
\label{2.28}
\end{equation}
\begin{equation}
H=\mbox{$\frac{1}{2}$}({\dot x}'^2 - {\dot y}'^2) + \mbox{$\frac{1}{2}$}
(\omega_1^2 x'^2 - \omega_2^2 y'^2).
\label{2.29}
\end{equation}
Again, the diagonalized Lagrangian and Hamiltonian contain the frequencies
$\omega_1$, $\omega_2$ of the PU oscillator.
Now we have the relations
\begin{eqnarray}
&& \omega_1^2\, {\rm cosh}^2 \,\alpha - \omega_2^2\, {\rm sinh}^2 \,
\alpha = \mu_1 \label{2.29a}\\
&& -\omega_1^2\, {\rm sinh}^2 \,\alpha + \omega_2^2\, {\rm cosh}^2 \, \alpha = \mu_2 \label{2.29b}\\
&& (\omega_1^2-\omega_2^2)\, {\rm cosh}\, \alpha \,{\rm sinh}\, \alpha
= -\rho_1 \label{2.29c} .
\end{eqnarray}
The energy of the system is either positive or negative, depending on which degree of
freedom is more excited.
Cases I and II show that the PU oscillator can be described as a system of two
oscillators whose Hamiltonian is either (\ref{2.21}) or (\ref{2.29}).
Case I gives a positive definite Hamiltonian, whereas Case II gives an indefinite
Hamiltonian.
\section{Self-interacting PU oscillator}
\subsection{Equations of motion and the Lagrangian}
We have seen that the PU oscillator can be described as a system of two
oscillators (\ref{2.3}) and (\ref{2.4}) that can be written in the explicit
uncoupled form
\begin{equation}
{\ddot x'} +\omega_1^2 x' = 0
\label{3.1a}
\end{equation}
\begin{equation}
{\ddot y'} +\omega_2^2 y' = 0
\label{3.2a}
\end{equation}
For real $\omega_1$, $\omega_2$, this is an oscillating system, regardless
of whether for the corresponding Lagrangian we take (\ref{2.20})
or (\ref{2.28}). Both Lagrangians are equally good for describing the
PU oscillator\,\cite{Bolonek,Bagarello}.
If we include an interaction between the $x'$ and $y'$, then energy
can be transferred between those two degrees of freedom. Then it does
matter which Lagrangian we take.
(i) Let us first consider the following Lagrangian that is an extension
of (\ref{2.20}) (Case I):
\begin{equation}
L=\mbox{$\frac{1}{2}$}({\dot x}'^2 + {\dot y}'^2) - \mbox{$\frac{1}{2}$}
(\omega_1^2 x'^2 + \omega_2^2 y'^2) - \frac{\lambda}{4} (x'+y')^4
\label{3.1}
\end{equation}
The corresponding equations of motion are
\begin{equation}
{\ddot x'} +\omega_1^2 x' + \lambda (x'+y')^3 = 0
\label{3.2}
\end{equation}
\begin{equation}
{\ddot y'} +\omega_2^2 y'+ \lambda (x'+y')^3 = 0
\label{3.3}
\end{equation}
Introducing the new coordinates
\begin{equation}
u=\frac{x'+y'}{\sqrt{2}}~,~~~~~~~~~~v=\frac{x'-y'}{\sqrt{2}} ,
\label{3.4}
\end{equation}
we have
\begin{equation}
L=\mbox{$\frac{1}{2}$}({\dot u}^2 + {\dot v}^2) - \mbox{$\frac{1}{4}$}
[(\omega_1^2 + \omega_2^2)(u^2+v^2) + 2 (\omega_1^2 - \omega_2^2)u v]
- \lambda u^4
\label{3.5}
\end{equation}
\begin{equation}
{\ddot u} + \mu_1 u -\rho_1 v + 4 \lambda u^3 = 0
\label{3.6}
\end{equation}
\begin{equation}
{\ddot v} + \mu_2 v -\rho_1 u = 0 ,
\label{3.7}
\end{equation}
where
\begin{equation}
\mu_1 = \mu_2 = \mbox{$\frac{1}{2}$}(\omega_1^2 + \omega_2^2) ~,~~~~~
-\rho_1 = \mbox{$\frac{1}{2}$}(\omega_1^2 - \omega_2^2).
\label{3.8}
\end{equation}
Eliminating $u$, we obtain the 4th order differential equation for $v$:
\begin{equation}
v^{(4)} + (\mu_1 + \mu_2) {\ddot v} + (\mu_1 \mu_2 - \rho_1^2) v +
\frac{4 \lambda}{\rho_1^2} ({\ddot v} + \mu_2 v)^3 =0 ,
\label{3.9}
\end{equation}
which is just that of the PU oscillator with an extra non linear term.
Similarly, by eliminating $v$, we obtain
\begin{equation}
u^{(4)} + (\mu_1 + \mu_2) {\ddot u} + (\mu_1 \mu_2 - \rho_1^2) u +
4 \mu_2 \lambda u^3 + 4 \lambda \frac{\mbox{\rm d}^2}{\mbox{\rm d} t^2} \left ( u^3 \right ) = 0,
\label{3.10}
\end{equation}
which is also the PU oscillator with a non-linear term.
(ii) Let us now consider the Lagrangian that is an extension of (\ref{2.28})
(Case II):
\begin{equation}
L=\mbox{$\frac{1}{2}$}({\dot x}'^2 - {\dot y}'^2) - \mbox{$\frac{1}{2}$}
(\omega_1^2 x'^2 - \omega_2^2 y'^2) - \frac{\lambda}{4} (x'+y')^4 .
\label{3.11}
\end{equation}
The corresponding equations of motion are now
\begin{equation}
{\ddot x'} +\omega_1^2 x' + \lambda (x'+y')^3 = 0
\label{3.12}
\end{equation}
\begin{equation}
{\ddot y'} +\omega_2^2 y'- \lambda (x'+y')^3 = 0
\label{3.13}
\end{equation}
Notice the minus sign in the second equation.
In the new variables $u$, $v$, defined in Eq.\,(\ref{3.4}), we have
\begin{equation}
L={\dot u} {\dot v} - \mbox{$\frac{1}{4}$}
[(\omega_1^2 - \omega_2^2)(u^2+v^2) + 2 (\omega_1^2 + \omega_2^2)u v]
- \lambda u^4
\label{3.14}
\end{equation}
\begin{equation}
{\ddot u} + \mu_1 u -\rho_1 v = 0
\label{3.15}
\end{equation}
\begin{equation}
{\ddot v} + \mu_2 v -\rho_1 u + 4 \lambda u^3= 0 ,
\label{3.16}
\end{equation}
where $\mu_1,~\mu_2$ and $\rho_1$ are given in Eqs.\,(\ref{3.8}).
By eliminating $v$, we obtain
\begin{equation}
u^{(4)} + (\mu_1 + \mu_2) {\ddot u} +(\mu_1 \mu_2 -\rho_1^2) u +
4 \rho_1 \lambda u^3 = 0.
\label{3.17}
\end{equation}
Using (\ref{3.8}), which gives $\mu_1+\mu_2=\omega_1^2+\omega_2^2$,
$\mu_1\mu_2-\rho_1^2=\omega_1^2\omega_2^2$ and
$4\rho_1\lambda=-2(\omega_1^2-\omega_2^2)\lambda$, and introducing
$\Lambda=2 (\omega_1^2-\omega_2^2) \lambda$,
the latter equation reads
\begin{equation}
u^{(4)} + (\omega_1^2 + \omega_2^2) {\ddot u} +\omega_1^2 \omega_2^2 u
-\Lambda u^3 = 0.
\label{3.18}
\end{equation}
Now we obtain the equation of motion for the PU oscillator with a self-interaction
term.
The second order Lagrangian that gives the fourth order equation of motion
(\ref{3.18}) is that of the PU oscillator with a quartic self-interaction term:
\begin{equation}
L=\mbox{$\frac{1}{2}$} \left [ {\ddot u}^2 -(\omega_1^2 + \omega_2^2)
{\dot u}^2 +\omega_1^2 \omega_2^2 u^2 \right ]
-\mbox{$\frac{1}{4}$} \Lambda u^4 .
\label{3.19}
\end{equation}
Notice that if $\omega_1=\omega_2$, then $\rho_1=0$, and $v$ in (\ref{3.15})
cannot be expressed in terms of $u$. Consequently, in the case $\omega_1=\omega_2$,
the system (\ref{3.15}),(\ref{3.16}) does not give the equation (\ref{3.17}) for
the PU oscillator.
The Ostrogradski second order formalism then leads us to
the phase space Lagrangian\,\cite{Bolonek:2006ir,Mannheim:2000ka,Mannheim:2004qz}
that is equivalent to (\ref{3.19}),
\begin{equation}
L= p_u {\dot u} + p_q {\dot q} - H,
\label{3.20}
\end{equation}
where
\begin{equation}
H = p_u q +\mbox{$\frac{1}{2}$} \left [ p_q^2 + (\omega_1^2 + \omega_2^2)q^2
- \omega_1^2 \omega_2^2 u^2 \right ] + \mbox{$\frac{1}{4}$} \Lambda u^4 .
\label{3.21}
\end{equation}
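(Explicitly, with $q={\dot u}$ the Ostrogradski momenta for (\ref{3.19}) are
\begin{equation*}
p_q=\frac{\partial L}{\partial {\ddot u}}={\ddot u}\,,\qquad
p_u=\frac{\partial L}{\partial {\dot u}}-\frac{\mbox{\rm d}}{\mbox{\rm d} t}\,
\frac{\partial L}{\partial {\ddot u}}=-(\omega_1^2+\omega_2^2){\dot u}-\dddot{u}\,,
\end{equation*}
and $H=p_u{\dot u}+p_q{\dot q}-L$ then reproduces (\ref{3.21}).)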
The latter Hamiltonian can be transformed into
\begin{equation}
H'= \mbox{$\frac{1}{2}$}(p_{x'}^2 - p_{y'}^2) + \mbox{$\frac{1}{2}$}
(\omega_1^2 x'^2 - \omega_2^2 y'^2) + \frac{\lambda}{4} (x'+y')^4 .
\label{3.24}
\end{equation}
The phase space Lagrangian
\begin{equation}
L' = p_{x'} {\dot x'} + p_{y'} {\dot y'} - H'
\label{3.25}
\end{equation}
is equivalent to (\ref{3.11}). This can be directly seen by using the equations
of motion $p_{x'}={\dot x'}$ and $p_{y'}={\dot y'}$, and eliminating
$p_{x'}$, $p_{y'}$ from (\ref{3.25}).
Therefore,
the correct procedure is to start from the self-interacting higher-derivative
Lagrangian (\ref{3.19}) and to employ the Ostrogradski formalism. The Hamiltonian
so obtained can be positive or negative. The procedures discussed
in Ref.\,\cite{Mostafazadeh}, and also in Refs.\,\cite{Bolonek}--\cite{Nucci},
have limited validity, because they do not consider an interaction
term. They are valid descriptions of the PU oscillator in the absence of an
interaction, but not if one switches on an interaction.
\subsection{Solutions}
The Lagrangian of the form (\ref{3.11}) is usually considered as unsuitable
for physics, because it implies an indefinite Hamiltonian, with
positive and negative energy states. The interaction term that mixes
the two types of states leads to instabilities. But as pointed out
in Refs.\,\cite{Smilga1}--\cite{SmilgaStable},\cite{Ilhan}, there exist islands of stability. We show this
explicitly by solving numerically the equations of motion
(\ref{3.12}), (\ref{3.13}). In Fig.\,1 there are examples of such
calculations, done with MATHEMATICA. In all examples we take $\omega_1^2=1$ and $\omega_2^2=1.5$.
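Such runs are easy to reproduce. As an illustration only, a minimal Python/SciPy sketch of the integration of (\ref{3.12}),(\ref{3.13}) is the following (the integrator, tolerances and final time are our own choices, not the settings used for the figures):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Eqs. (3.12),(3.13): positive- and negative-energy oscillators
# coupled through the quartic force term lam*(x+y)^3.
w1sq, w2sq, lam = 1.0, 1.5, 0.022   # parameters of the first example

def rhs(t, s):
    x, vx, y, vy = s
    f = lam * (x + y)**3
    return [vx, -w1sq*x - f, vy, -w2sq*y + f]

# initial data: x'(0)=0, dx'(0)=1, y'(0)=1, dy'(0)=0
sol = solve_ivp(rhs, (0.0, 1000.0), [0.0, 1.0, 1.0, 0.0],
                rtol=1e-9, atol=1e-9)
print("max |x'| over the run:", np.abs(sol.y[0]).max())
\end{verbatim}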
\setlength{\unitlength}{.8mm}
\begin{figure}[h!]
\hspace{3mm}
\begin{picture}(120,120)(25,0)
\put(25,61){\includegraphics[scale=0.4]{Example1c.pdf}}
\put(90,61){\includegraphics[scale=0.4]{Example1a.pdf}}
\put(150,61){\includegraphics[scale=0.4]{Example1b.pdf}}
\put(25,0){\includegraphics[scale=0.4]{Example2c.pdf}}
\put(90,0){\includegraphics[scale=0.4]{Example2a.pdf}}
\put(150,0){\includegraphics[scale=0.4]{Example2b.pdf}}
\put(73,83){$x'$}
\put(46,112){$y'$}
\put(139,83){$x'$}
\put(112,112){$y'$}
\put(74,24){$x'$}
\put(47,54){$y'$}
\put(140,24){$x'$}
\put(112,54){$y'$}
\put(145,92){$\frac{{\dot x}'^2}{2}$}
\put(201,59){$t$}
\put(145,30){$\frac{{\dot x}'^2}{2}$}
\put(201,-2){$t$}
\put(152,110){$^{\lambda=0.022}$}
\put(152,105){$^{x'(0)=0,~y'(0)=1}$}
\put(152,100){$^{{\dot x}'(0)=1,~{\dot y}'(0)=0}$}
\put(152,48){$^{\lambda=0.02299}$}
\put(152,43){$^{x'(0)=0,~y'(0)=1}$}
\put(152,38){$^{{\dot x}'(0)=0.9,~{\dot y}'(0)=0}$}
\end{picture}
\caption{\footnotesize Solutions of Eqs.\,(\ref{3.12}),(\ref{3.13}) for
different values of the coupling constant $\lambda$ and different
initial conditions.
Left and middle: the trajectories in the $(x',y')$ space.
Right: The kinetic energy ${\dot x}'^2/2$ as function of time. The oscillations
within the envelope are so fine that they fill the diagram.}
\end{figure}
We see that the system is stable for a sufficiently small coupling
constant $\lambda$ and initial velocities ${\dot x}'(0)$, ${\dot y}'(0)$.
If $\lambda$ is too high, the system is unstable (Fig.\,2, up).
\begin{figure}[h!]
\hspace{3mm}
\begin{picture}(120,125)(25,0)
\put(25,61){\includegraphics[scale=0.4]{Example3c.pdf}}
\put(90,61){\includegraphics[scale=0.4]{Example3a.pdf}}
\put(150,61){\includegraphics[scale=0.4]{Example3b.pdf}}
\put(25,0){\includegraphics[scale=0.4]{Example4a.pdf}}
\put(90,0){\includegraphics[scale=0.4]{Example4c.pdf}}
\put(150,0){\includegraphics[scale=0.4]{Example4b.pdf}}
\put(73,95){$x'$}
\put(47,122){$y'$}
\put(130,96){$x'$}
\put(112,122){$y'$}
\put(65,37){$x'$}
\put(45,56){$y'$}
\put(140,10){$x'$}
\put(126,51){$y'$}
\put(145,92){$\frac{{\dot x}'^2}{2}$}
\put(201,59){$t$}
\put(145,30){$\frac{{\dot x}'^2}{2}$}
\put(201,-2){$t$}
\put(147,110){$^{\lambda=0.03}$}
\put(147,105){$^{x'(0)=0,~y'(0)=1}$}
\put(147,100){$^{{\dot x}'(0)=1,~{\dot y}'(0)=0}$}
\put(147,48){$^{\lambda=0.022}$}
\put(147,43){$^{x'(0)=0,~y'(0)=1}$}
\put(147,38){$^{{\dot x}'(0)=1.5,~{\dot y}'(0)=0}$}
\end{picture}
\caption{\footnotesize Up: By increasing $\lambda$, the system becomes
unstable. The trajectory and the kinetic energy escape into infinity.
Down: Similarly, by increasing the initial velocity, the system also becomes
unstable. }
\end{figure}
Similarly, the system is unstable at too high initial velocities (Fig.\,2, down). Close
to the critical value of $\lambda$, the system seems to be stable for a
long time, but then it escapes into infinity (Fig.\,3).
A similar behaviour occurs close to the critical value of the initial
velocity.
\begin{figure}[h!]
\hspace{3mm}
\begin{picture}(120,60)(25,0)
\put(25,0){\includegraphics[scale=0.4]{Example5a.pdf}}
\put(90,0){\includegraphics[scale=0.4]{Example5c.pdf}}
\put(150,0){\includegraphics[scale=0.4]{Example5d.pdf}}
\put(74,23){$x$}
\put(51,53){$y$}
\put(85,30){$\frac{{\dot x}^2}{2}$}
\put(141,-2){$t$}
\put(152,30){$E_{\rm tot}$}
\put(201,-2){$t$}
\put(130,48){$^{\lambda=0.02299}$}
\put(130,43){$^{x'(0)=0,~y'(0)=1}$}
\put(130,38){$^{{\dot x}'(0)=0.999851,~{\dot y}'(0)=0}$}
\end{picture}
\caption{\footnotesize At certain values of $\lambda$ and of the initial conditions,
the system behaves stably for a long time, before it finally escapes to infinity.
The total energy $E_{\rm tot}$ remains constant within the numerical error.
}
\end{figure}
By slightly decreasing the coupling constant of Fig.\,3,
from $\lambda =0.02299$ to $\lambda =0.0229$,
the system appears to become stable. We checked its stability up to $t= 2664$;
we do not plot the solutions here, in order not to crowd the paper with
too many figures.
The interaction potential $\frac{\lambda}{4}(x'+y')^4$ grows without bound.
A more realistic potential should not grow to infinity; there should
be a cutoff. As a more realistic coupling term let us therefore consider
$\frac{\lambda}{4}\text{sin}^4\, (x'+y')$, which leads to the
Lagrangian (\ref{3.19}) in which $u^4$ is replaced by $\text{sin}^4 u$.
The equations of motion are then
\begin{equation}
{\ddot x'} +\omega_1^2 x' + \lambda \text{sin}^3\, (x'+y')
\text{cos} \,(x'+y')= 0,
\label{3.26}
\end{equation}
\begin{equation}
{\ddot y'} +\omega_2^2 y'- \lambda \text{sin}^3\, (x'+y')
\text{cos} \,(x'+y')= 0
\label{3.27}
\end{equation}
Such a system is stable for all values of $\lambda>0$ and all
initial velocities. We have checked this by performing many numerical runs.
In Fig.\,4 we give two examples of numerical solutions. Later we will also
demonstrate analytically why the solutions of the system
(\ref{3.26}), (\ref{3.27}) are stable.
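In the Python sketch given just before Fig.\,1, only the definition of the
coupling force changes for this model; e.g.
\begin{verbatim}
# Bounded coupling force for the sin^4 interaction
# (replaces the cubic force of the quartic model):
f = lam * np.sin(x + y)**3 * np.cos(x + y)
\end{verbatim}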
\setlength{\unitlength}{.8mm}
\begin{figure}[h!]
\hspace{3mm}
\begin{picture}(120,115)(25,-5)
\put(25,55){\includegraphics[scale=0.4]{Example7c.pdf}}
\put(90,55){\includegraphics[scale=0.4]{Example7a.pdf}}
\put(150,55){\includegraphics[scale=0.4]{Example7b.pdf}}
\put(25,-6){\includegraphics[scale=0.38]{Example8b.pdf}}
\put(90,0){\includegraphics[scale=0.4]{Example8a.pdf}}
\put(150,0){\includegraphics[scale=0.4]{Example8c.pdf}}
\put(76,81){$x'$}
\put(47,108){$y'$}
\put(141,79){$x'$}
\put(113,106){$y'$}
\put(73,22){$x'$}
\put(46,44){$y'$}
\put(141,20){$x'$}
\put(112,48){$y'$}
\put(147,89){$\frac{{\dot x}'^2}{2}$}
\put(201,53){$t$}
\put(147,33){$\frac{{\dot x}'^2}{2}$}
\put(201,-2){$t$}
\put(158,106){$^{\lambda=0.22}$}'
\put(157,101){$^{x'(0)=0,~y'(0)=1}$}
\put(157,96){$^{{\dot x}'(0)=1,~{\dot y}'(0)=0}$}
\put(157,45){$^{\lambda=1}$}
\put(157,40){$^{x'(0)=0.3,~y'(0)=1}$}
\put(157,35){$^{{\dot x}'(0)=1,~{\dot y}'(0)=-0.5}$}
\end{picture}
\caption{\footnotesize Solutions to the equations of motion (\ref{3.26}),
(\ref{3.27}), in which the quartic interaction $\frac{\lambda}{4} (x'+y')^4$
is replaced by $\frac{\lambda}{4}\text{sin}^4\, (x'+y')$. The system
is now stable for all positive values of $\lambda$.}
\end{figure}
Another possible generalization consists in replacing the Lagrangian (\ref{3.11}) with
\begin{equation}
L=\mbox{$\frac{1}{2}$}(m_1 {\dot x}'^2 - m_2 {\dot y}'^2) - \mbox{$\frac{1}{2}$}
(\omega_1^2 x'^2 - \omega_2^2 y'^2) - \frac{\lambda}{4} (x'+y')^4
\label{3.28}
\end{equation}
where $m_1$ and $m_2$ are now two different ``masses''. In terms of the
variables $u$, $v$, we have
\begin{equation}
L = \mbox{$\frac{1}{2}$} \left [ m ({\dot u}^2 + {\dot v}^2) + 2 M
{\dot u} {\dot v} + \rho_1 (u^2 + v^2) - 2 \mu_1 u v \right ]
- \lambda u^4 ,
\label{3.29}
\end{equation}
where
\begin{equation}
m= \mbox{$\frac{1}{2}$} (m_1 - m_2) , ~~~~
M = \mbox{$\frac{1}{2}$} (m_1 + m_2)
\label{3.30}
\end{equation}
The equations of motion are now
\begin{equation}
m {\ddot u} + M {\ddot v} - \rho_1 u + \mu_1 v + 4 \lambda u^3 = 0
\label{3.31}
\end{equation}
\begin{equation}
m {\ddot v} + M {\ddot u} - \rho_1 v + \mu_1 u =0 .
\label{3.32}
\end{equation}
The corresponding 4th order equation is
\begin{equation}
u^{(4)} M (M^2-m^2) + 2 {\ddot u} M (\mu_1 M + \rho_1 m)
+ u M (\mu_1^2 - \rho_1^2)
+ 4 M \rho_1\lambda u^3 - 4 M m \lambda \frac{\mbox{\rm d}^2}{\mbox{\rm d} t^2}
\left ( u^3 \right ) = 0
\label{3.33}
\end{equation}
This is a deformed version of the equation (\ref{3.17}) for the interacting
PU oscillator. By taking $m=0$, $M=1$, i.e. $m_1=m_2=1$, we recover the ordinary
PU oscillator of Eq.\,(\ref{3.17}).
Examples of numerical solutions to the equations of motion
\begin{equation}
m_1 {\ddot x}' + \omega_1^2 x' + \lambda (x'+y')^3 = 0,
\label{3.34}
\end{equation}
\begin{equation}
m_2 {\ddot y}' + \omega_2^2 y' - \lambda (x'+y')^3 = 0,
\label{3.35}
\end{equation}
derived from the Lagrangian (\ref{3.28}), are given in Fig.\,5.
Whilst in the case of equal masses, $m_1=m_2=1$, the system
is unstable at $\lambda=0.03$ and higher, we see that for different masses,
$m_1<m_2$, the system is stable regardless of the values of $\lambda>0$
and of the initial velocities. This has been confirmed in the many numerical
runs that we have performed; only a small sample is shown in Fig.\,5.
We previously observed that the system becomes stable for unequal masses
in Ref.\,\cite{PavsicUltrahyper}, where we studied an analogous system of two
oscillators, but with a different coupling term, namely
$\frac{\lambda}{4}(x^2-y^2)^2$, which is a special case of that
considered in Ref.\,\cite{Ilhan}. However, such a coupling term does not
correspond to a quartic self-interaction of the PU oscillator, because the
coupling term $\lambda u^4$ in (\ref{3.14}) is then replaced by $\lambda u^2 v^2$,
which does not lead to Eq.\,(\ref{3.18}), but to a more complicated equation
with non-linear terms.
\setlength{\unitlength}{.8mm}
\begin{figure}[h!]
\hspace{3mm}
\begin{picture}(120,115)(25,0)
\put(25,61){\includegraphics[scale=0.4]{Example10b.pdf}}
\put(90,61){\includegraphics[scale=0.4]{Example11b.pdf}}
\put(150,61){\includegraphics[scale=0.4]{Example12b.pdf}}
\put(25,0){\includegraphics[scale=0.4]{Example16b.pdf}}
\put(90,0){\includegraphics[scale=0.4]{Example13b.pdf}}
\put(150,0){\includegraphics[scale=0.4]{Example14b.pdf}}
\put(21,92){$\frac{{\dot x}'^2}{2}$}
\put(76,59){$t$}
\put(87,92){$\frac{{\dot x}'^2}{2}$}
\put(141,59){$t$}
\put(145,92){$\frac{{\dot x}'^2}{2}$}
\put(201,59){$t$}
\put(21,30){$\frac{{\dot x}'^2}{2}$}
\put(76,-2){$t$}
\put(85,30){$\frac{{\dot x}'^2}{2}$}
\put(142,-2){$t$}
\put(145,30){$\frac{{\dot x}'^2}{2}$}
\put(202,-2){$t$}
\put(25,110){$^{\lambda=5,~m_1=0.7,~m_2=1.3}$}
\put(25,105){$^{x'(0)=0.3,~y'(0)=1}$}
\put(25,100){$^{{\dot x}'(0)=4,~{\dot y}'(0)=-0.5}$}
\put(30,95){$^{\omega_1=1,~\omega_2=\sqrt{1.5}}$}
\put(91,110){$^{\lambda=500,~m_1=0.7,~m_2=1.3}$}
\put(91,105){$^{x'(0)=0.3,~y'(0)=1}$}
\put(91,100){$^{{\dot x}'(0)=4,~{\dot y}'(0)=-0.5}$}
\put(96,95){$^{\omega_1=1,~\omega_2=\sqrt{1.5}}$}
\put(152,110){$^{\lambda=5,~m_1=0.7,~m_2=1.3}$}
\put(152,105){$^{x'(0)=0.3,~y'(0)=1}$}
\put(152,100){$^{{\dot x}'(0)=40,~{\dot y}'(0)=55}$}
\put(157,95){$^{\omega_1=1,~\omega_2=\sqrt{1.5}}$}
\put(25,48){$^{\lambda=5,~m_1=0.99,~m_2=1.01}$}
\put(25,43){$^{x'(0)=0.3,~y'(0)=1}$}
\put(25,38){$^{{\dot x}'(0)=1,~{\dot y}'(0)=-0.5}$}
\put(30,33){$^{\omega_1=1,~\omega_2=\sqrt{1.5}}$}
\put(91,48){$^{\lambda=5,~m_1=0.7,~m_2=1.3}$}
\put(91,43){$^{x'(0)=0.3,~y'(0)=1}$}
\put(91,38){$^{{\dot x}'(0)=40,~{\dot y}'(0)=55}$}
\put(96,33){$^{\omega_1=\omega_2=0}$}
\put(152,48){$^{\lambda=5,~m_1=0.7,~m_2=1.3}$}
\put(152,43){$^{x'(0)=0.3,~y'(0)=1}$}
\put(152,38){$^{{\dot x}'(0)=4,~{\dot y}'(0)=-0.5}$}
\put(157,33){$^{\omega_1=\omega_2=0}$}
\end{picture}
\caption{\footnotesize Solutions of Eqs.\,(\ref{3.34}), (\ref{3.35}) for
different values of the coupling constant $\lambda$ and different
initial conditions. We show here the kinetic energy ${\dot x}'^2/2$ as
function of time.}
\end{figure}
To see why the system with different masses is stable, let us inspect the
equations of motion (\ref{3.34}), (\ref{3.35}). We already know that at small
values of $\lambda$ the system is stable. At large values of $\lambda$,
we can neglect the terms $\omega_1^2 x'$ and $\omega_2^2 y'$. The equations
of motion are then
\begin{equation}
{\ddot x}' + \frac{1}{m_1} \lambda (x'+y')^3 = 0,
\label{3.36}
\end{equation}
\begin{equation}
{\ddot y}' - \frac{1}{m_2} \lambda (x'+y')^3 = 0.
\label{3.37}
\end{equation}
Taking the sum and the difference of the latter equations, we obtain
\begin{equation}
{\ddot \xi} + \left (\frac{1}{m_1} - \frac{1}{m_2} \right ) \lambda \xi^3=0,
\label{3.38}
\end{equation}
\begin{equation}
{\ddot \eta} + \left (\frac{1}{m_1} + \frac{1}{m_2} \right ) \lambda \xi^3=0,
\label{3.39}
\end{equation}
where $\xi=x'+y'$ and $\eta=x'-y'$.
{\it In the case of unequal masses}, $m_1 < m_2$, $\lambda >0$,
Eq.\,(\ref{3.38}) describes the quartic oscillator with the potential
$\frac{\lambda}{4} \xi^4 (1/m_1-1/m_2)$, which has stable, oscillatory solutions. Then
Eq.\,(\ref{3.39}) also has stable, oscillatory solutions. Stability is
maintained in the presence of the terms $\omega_1^2 x'$ and $\omega_2^2 y'$.
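A quick way to see this is through the first integral of (\ref{3.38}): multiplying
(\ref{3.38}) by ${\dot \xi}$ and integrating over time gives
\[
\mbox{$\frac{1}{2}$}{\dot \xi}^2 + \frac{\lambda}{4}\left(\frac{1}{m_1}-\frac{1}{m_2}\right)\xi^4
= {\rm const} ,
\]
so for $m_1<m_2$, $\lambda>0$, both $\xi$ and ${\dot \xi}$ remain confined to a bounded
region; the forcing $\xi^3$ in (\ref{3.39}) is then bounded and oscillatory. This is only
a sketch of the stability mechanism, consistent with the numerical runs.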
{\it In the case of equal masses}, $m_1=m_2$, Eq.\,(\ref{3.38}) becomes
${\ddot \xi}=0$, with the solution $\xi=\xi_0 + c_1 t$. Then the general solution of
(\ref{3.39}) is a runaway function
\begin{equation}
\eta = - \frac{2}{m_1} \frac{\lambda}{20\, c_1^2} (\xi_0 + c_1 t)^5 + c_2 t .
\label{3.40}
\end{equation}
Indeed, $\frac{\mbox{\rm d}^2}{\mbox{\rm d} t^2} (\xi_0 + c_1 t)^5 = 20\, c_1^2 (\xi_0 + c_1 t)^3$,
so that (\ref{3.39}) is satisfied. In the presence of the terms $\omega_1^2 x'$ and
$\omega_2^2 y'$, the above runaway behavior is modulated by oscillations.
Solutions to the system described by the Lagrangian (\ref{3.28}) are
stable, if $m_1 < m_2$, $\lambda >0$. If $m_1=m_2$, then the solutions are
stable at sufficiently small $\lambda$, whereas at higher values of
$\lambda$ they are unstable (Fig.\,2).
If instead of $\frac{\lambda}{4}(x'+y')^4$ we take the interaction term
$\frac{\lambda}{4} \text{sin}^4\,(x'+y')$, we have stability even in the
case $m_1=m_2$. The equations of motion are then
\begin{equation}
{\ddot \xi}=0~,~~~~~~{\ddot \eta} + \frac{2}{m_1} \lambda
\,\text{sin}^3 \, \xi \, \text{cos}\, \xi =0,
\label{3.41}
\end{equation}
the general solution being
\begin{equation}
\xi=\xi_0 + c_1 t~,~~~{\dot \eta}= -\frac{2}{m_1} \frac{1}{4 c_1} \lambda
\,\text{sin}^4 \,(\xi_0 + c_1 t)~,
\label{3.42}
\end{equation}
\begin{equation}
\eta = -\frac{2}{m_1} \frac{1}{128 c_1^2} \lambda
\left [12 (\xi_0 + c_1 t) -8\, \text{sin}\,\left(2 (\xi_0 + c_1 t)\right)
+\text{sin}\, \left(4 (\xi_0 + c_1 t)\right) \right ] .
\label{3.43}
\end{equation}
This solution is stable in the sense that the velocity and the kinetic
energy remain finite. The coordinates $\xi$, $\eta$, or equivalently
$x'$, $y'$, drift to infinity, on average linearly in time.
The velocity thus oscillates around a constant value\footnote{
In the quantized theory, to such modulated uniform motion there corresponds
a modulated traveling wave, or a uniformly moving wave packet.}.
If we also include in the potential the terms $\frac{1}{2}\omega_1^2 x'^2$ and
$\frac{1}{2}\omega_2^2 y'^2$, then the coordinates do not escape to
infinity, but oscillate.
\section{Discussion}
It has been shown by some authors\,\cite{Mostafazadeh,Nucci}
(see also\,\cite{Bolonek}--\cite{Bagarello}) that the Pais-Uhlenbeck oscillator
can be described as a system of two degrees of freedom with a positive
definite Hamiltonian. We point out that this holds for the free PU
oscillator only, and that one cannot include a coupling term such that
the system would be equivalent to the PU oscillator with a quartic or similar
self-interaction term.
The interacting Pais-Uhlenbeck oscillator must be described, as usual,
by a second order Lagrangian. The Ostrogradski formalism then
leads to an indefinite Hamiltonian, with positive and negative energies.
An equivalent system is that of two oscillators described by the equations
of motion (\ref{3.12}),(\ref{3.13}), derived from the Lagrangian
(\ref{3.11}). We have studied numerical solutions to the latter system for
various coupling constants $\lambda$ and initial velocities. Solutions
are stable below a critical value of $\lambda$ and initial velocity.
We then considered two modifications of the Lagrangian that drastically
increase the range of stability.
Firstly, we replace the quartic interaction term $\frac{\lambda}{4}\,(x'+y')^4$,
which grows without bound, with the term $\frac{\lambda}{4}\,\text{sin}^4 (x'+y')$,
which is finite for all $x'$, $y'$. Then, instead of islands of stability,
we obtain a continent of stability that extends to infinity in the space of
the parameter $\lambda$ and the initial conditions. Fig.\,4 shows that now the
system is stable even at $\lambda = 5$, whereas with the quartic interaction
it was unstable already at $\lambda=0.03$. We have done many numerical
runs with higher values of $\lambda$, even with $\lambda=500$, and the
solutions were always stable. By inspecting the
equations of motion, we also found analytically that such an interacting
system is indeed stable for any positive $\lambda$ and for any initial
velocity.
Secondly, we replace the kinetic term $\frac{1}{2} ({\dot x}'^2 -{\dot y}'^2)$
with $\frac{1}{2} (m_1 {\dot x}'^2 -m_2 {\dot y}'^2)$, and consider the
case in which the ``masses" $m_1$ and $m_2$ are different. If
$m_1<m_2$, $\lambda >0$, and $\omega_1^2\le\omega_2^2$, the system is stable
for all finite positive values
of $\lambda$ and for all finite positive or negative initial velocities
${\dot x}'(0)$, ${\dot y}'(0)$. Analogously, the system is stable if
$m_1>m_2$, $\lambda <0$, and $\omega_1^2\ge\omega_2^2$.
Our findings invalidate the generally held belief that the Pais-Uhlenbeck
oscillator in the presence of an interaction is unstable, and therefore
problematic. There are vast regimes of stability that hold for all initial
velocities. This has consequences for the quantum PU oscillator. Namely,
stability of a classical system does not necessarily imply stability of
the corresponding quantum system, because the latter system can tunnel through
a potential barrier and then roll down the potential. But if a classical
system remains stable regardless of how high the initial velocity is, then
the quantum system is stable as well. We conclude that the Pais-Uhlenbeck
oscillator with a suitable self-interaction
is quite acceptable from the physical point of view. Since
the PU oscillator is a toy model for higher derivative gravity, we expect that
the negative energy problems of the latter theory could also be resolved
along lines similar to those investigated in this paper.
\vspace{4mm}
\centerline{\bf Acknowledgment}
This work has been supported by the Slovenian Research Agency.
\baselineskip .43cm
\section{Introduction}
\label{section:introduction}
Let $R_t^{(\mu)}$ be the Bessel process with index $\mu\neq 0$. The transition probability density (with respect to the Lebesgue measure) of the process is expressed in terms of the modified Bessel function in the following way
\formula[eq:transitiondensity:formula]
{
p^{(\mu)}(t,x,y) = \frac1t \left(\frac{y}{x}\right)^{\mu}y
\exp\left(-\frac{x^2+y^2}{2t}\right)I_{|\mu|}\left(\frac{xy}{t}\right)\/,\quad
x,y,t> 0\/.}
Our main goal is to describe the behaviour of the transition probability densities of the process $R_t^{(\mu)}$ killed when it leaves a half-line $(a,\infty)$, where $a>0$. Note that if the process starts from $x>a$, then the first hitting time $T_a^{(\mu)}$ of the level $a$ is finite a.s. when $\mu<0$, but it is infinite with positive probability when $\mu>0$. The density kernel of the killed semigroup is given by the Hunt formula
\formula[eq:hunt:general]{
p_a^{(\mu)}(t,x,y) &= p^{(\mu)}(t,x,y)-\textbf{E}_x^{(\mu)}[t>T^{(\mu)}_a; p^{(\mu)}(t-T^{(\mu)}_a, R^{(\mu)}_{T^{(\mu)}_a},y)]\/,
}
where $x,y>a$ and $t>0$. The main result of the paper is the following theorem.
\begin{theorem}
\label{thm:main}
Let $\mu\neq 0$ and $a>0$. For every $x,y> a$ and $t>0$ we have
\formula[eq:mainthm]{
p_a^{(\mu)}(t,x,y)\stackrel{\mu}{\approx} \left[1\wedge \frac{(x-a)(y-a)}{t}\right]\left(1\wedge \frac{xy}{t}\right)^{|\mu|-\frac{1}{2}} \left(\frac{y}{x}\right)^{\mu+\frac{1}{2}}\frac{1}{\sqrt{t}}\exp\left(-\frac{(x-y)^2}{2t}\right)\/.
}
\end{theorem}
Here $f(t,x,y)\stackrel{\mu}{\approx} g(t,x,y)$ means that there exist positive constants $c_1$ and $c_2$ depending only on the index $\mu$ such that $c_1\leq f/g\leq c_2$ for every $x,y>a$ and $t>0$. Since the constants are independent of $a>0$, one can pass to the limit with $a\to 0^+$ and obtain the well-known estimates of $p^{(\mu)}(t,x,y)$. Since the function $I_\mu(z)$ behaves like a power function at zero and an exponential term appears in its asymptotic expansion at infinity (see Preliminaries for the details), the behaviour of $p^{(\mu)}(t,x,y)$ depends on the ratio $xy/t$. Note that a similar situation takes place in the case of $p_a^{(\mu)}(t,x,y)$, which depends on $xy/t$ as well. This can especially be seen in the proof of Theorem \ref{thm:main}, where different methods and arguments are applied to obtain the estimates (\ref{eq:mainthm}) for $xy/t$ large and for $xy/t$ small. Finally, taking into account the behaviour of $p^{(\mu)}(t,x,y)$, one can rewrite the statement of Theorem \ref{thm:main} in the following way
\formula[eq:mainthm:rewrite]{
\frac{p_a^{(\mu)}(t,x,y)}{p^{(\mu)}(t,x,y)} \stackrel{\mu}{\approx} \left(1\wedge \frac{(x-a)(y-a)}{t}\right)\left(1\vee \frac{t}{xy}\right)\/,\quad x,y>a\/,\quad t>0\/,
}
where the expression on the right-hand side of (\ref{eq:mainthm:rewrite}) should be read as the description of the behaviour of $p_a^{(\mu)}(t,x,y)$ near the boundary $a$.
There are several ways to define the function $p_a^{(\mu)}(t,x,y)$, hence our result and its applications can be considered from different points of view. The most classical approach is to define the heat kernel $p_a^{(\mu)}(t,x,y)$ as the fundamental solution of the heat equation $\left(\partial_t-L^{(\mu)}\right)u=0$, where $L^{(\mu)}$ is the Bessel differential operator. In the most classical case, i.e. when the operator $L^{(\mu)}$ is replaced by the classical Laplacian, the problem of finding a description of the heat kernel has a very long history (see for example \cite{SC:2010} and the references within); it goes back to the 1980s and the works of E.B. Davies (see \cite{DaviesSimon:1984}, \cite{Davies:1987}, \cite{Davies:1990}, \cite{Davies:1991}). However, the known results for the Dirichlet Laplacian on subsets of $\R^n$ (see \cite{Zhang:2002}), or in general on Riemannian manifolds (see \cite{SC:2010} for the references), are only qualitatively sharp, i.e. the constants appearing in the exponential terms in the upper and lower estimates are different. Note that in our result these constants are the same and consequently the exponential behaviour of the density is captured very precisely. Such sharp estimates seem to be very rare.
Note also that the operator $L^{(\mu)}$ plays an important r\^o{}le in harmonic analysis. However, since the set $(a,\infty)$ is unbounded, our considerations correspond to the case when the spectrum is continuous. This operator on the set $(0,1)$ and the estimates of the corresponding Fourier-Bessel heat kernel were studied recently in \cite{NowakRoncal:2013a} and \cite{NowakRoncal:2013b}, but once again the results presented there are only qualitatively sharp, i.e. the estimates are not sharp whenever $|x-y|^2\gg t$. Another essential difference between the case of bounded sets and our case is that in the former one can limit the considerations to $t\leq 1$, by an application of intrinsic ultracontractivity. However, the most interesting part of Theorem \ref{thm:main} (with the most difficult proof) seems to be the case when $t$ is large.
The third and our principal motivation comes from the theory of stochastic processes and the interpretation of $p_a^{(\mu)}(t,x,y)$ as the transition density function of the killed semigroup related to the Bessel process $R_t^{(\mu)}$. From this point of view, the present work is a natural continuation of the research started in \cite{BR:2006} (see also \cite{BGS:2007}), where the integral representation of the density $q_x^{(\mu)}(t)$ of $T_a^{(\mu)}$ was provided, together with some description of its asymptotics. The sharp estimates of the density for the whole range of parameters, with an explicit description of the exponential behaviour, were given in \cite{BMR3:2013}. For an in-depth analysis of the asymptotic behaviour of $q_x^{(\mu)}(t)$ see \cite{HM:2013a}, \cite{HM:2012}, \cite{HM:2013b}.
The case $\mu=0$ is excluded from our considerations and will be addressed in a subsequent work. As is very common in this theory, this case requires different methods and should be considered separately. In particular, some logarithmic behaviour is expected whenever $xy<t$.
The paper is organized as follows. In Preliminaries we introduce basic notation and recall properties and known results related to modified Bessel functions as well as Bessel processes, which are used in the sequel. In particular, using the scaling property and the absolute continuity of Bessel processes, we reduce our considerations to the case $\mu>0$ and $a=1$. After that we turn to the proof of Theorem \ref{thm:main}, which is split into two main parts: in Section \ref{section:xyt:large} we provide the estimates for $xy/t$ large, and in Section \ref{section:xyt:small} we prove (\ref{eq:mainthm}) for $xy/t$ small. In both cases the result is given in a series of propositions.
\section{Preliminaries}
\label{section:preliminaries}
\subsection{Notation}
The constants appearing in theorems and propositions, which depend on the index $\mu$, are denoted by capital letters $C_1^{(\mu)}, C_2^{(\mu)},\ldots$. We denote by $c_1,c_2,\ldots$ the constants appearing in the proofs; to shorten the notation we omit the superscript $\,^{(\mu)}$, but we emphasize the dependence on other variables whenever it occurs.
\subsection{Modified Bessel function}
\textit{The modified Bessel function of the first kind} is defined as (see \cite{Erdelyi:1953:volII} 7.2.2 (12))
\formula{
I_\mu(z) = \sum_{k=0}^\infty \left(\frac{z}{2}\right)^{\mu+2k}\frac{1}{k!\Gamma(k+\mu+1)}\/,\quad z>0\/,\quad \mu>-1\/.
}
It is well known that for real $z>0$ the function $I_\mu(z)$ is positive and increasing. Moreover, by the differentiation formula (see \cite{Erdelyi:1953:volII} 7.11 (20))
\formula[eq:I:diff]{
\dfrac{d}{dz}\left(\frac{I_\mu(z)}{z^\mu}\right) = \frac{I_{\mu+1}(z)}{z^\mu}\/,\quad z>0\/
}
and positivity of the right-hand side of (\ref{eq:I:diff}) we obtain that $z \to z^{-\mu}I_\mu(z)$ is also increasing.
The asymptotic behavior of $I_\mu(z)$ at zero follows immediately from the series representation of $I_\mu(z)$
\formula[eq:I:asym:zero]{
I_\mu(z) &= \left(\frac{z}{2}\right)^\mu \frac{1}{\Gamma(\mu+1)}+O(z^{\mu+2})\/,\quad z\to 0^{+}\/,
}
whereas the behaviour at infinity is given by (see \cite{Erdelyi:1953:volII} 7.13.1 (5))
\formula[eq:I:asym:infty]{
I_\mu(z) &= \frac{e^z}{\sqrt{2\pi z}}\left(1+O(1/z)\right)\/,\quad z\to\infty\/.
}
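Both regimes are easy to check numerically. The following Python sketch (illustrative only, it plays no r\^o{}le in the proofs) compares $I_\mu$ with the two asymptotic forms using SciPy:
\begin{verbatim}
# Sanity check of the asymptotics of I_mu at zero and at infinity.
import numpy as np
from scipy.special import iv, gamma

mu = 0.75
z0, z1 = 1e-4, 50.0
print(iv(mu, z0) / ((z0/2)**mu / gamma(mu + 1)))        # close to 1
print(iv(mu, z1) / (np.exp(z1) / np.sqrt(2*np.pi*z1)))  # close to 1
\end{verbatim}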
Some parts of the proof strongly depend on estimates of the ratio of two modified Bessel functions with different arguments. Here we recall the result of Laforgia given in Theorem 2.1 in \cite{Laforgia:1991}. For every $\mu>-1/2$ we have
\formula[MBF:ineq:upper]{
\frac{I_\mu(y)}{I_\mu(x)}<\left(\frac{y}{x}\right)^{\mu}e^{y-x}\/,\quad y\geq x>0\/.
}
Moreover, whenever $\mu\geq1/2$, a lower bound of a similar type holds, i.e. we have
\formula[MBF:ineq:lower]{
\frac{I_\mu(y)}{I_\mu(x)}\geq \left(\frac{x}{y}\right)^{\mu}e^{y-x}\/,\quad y\geq x>0\/.
}
\subsection{Bessel processes}
In this section we present basic properties of Bessel processes. We follow the notation of \cite{MatsumotoYor:2005a} and \cite{MatsumotoYor:2005b}, to which we refer the reader for more details.
We write $\pr_x^{(\mu)}$ and $\ex^{(\mu)}_x$ for the probability law and the corresponding expected value of a Bessel process $R_t^{(\mu)}$ with an index $\mu\in\R$ on the canonical path space with starting point $R_0=x>0$. The filtration of the coordinate process is denoted by $\mathcal{F}_t^{(\mu)}=\sigma\{R_s^{(\mu)}:s\leq t\}$. The laws of Bessel processes with different indices are absolutely continuous and the corresponding Radon-Nikodym derivative is described by
\formula[ac:formula]{
\left.\frac{d\pr^{(\mu)}_x}{d\pr^{(\nu)}_x}\right|_{\mathcal{F}_t}=\left(\frac{w(t)}{x}\right)^{\mu-\nu}\exp\left(-\frac{\mu^{2}-\nu^2}{2}\int_{0}^{t}\frac{ds}{w^{2}(s)}\right)\/,
}
where $x>0$, $\mu,\nu\in\R$, and the above-given formula holds $\pr^{(\nu)}_x$-a.s. on $\{T_0^{(\nu)}>t\}$. Here $T_0^{(\mu)}$ denotes the first hitting time of $0$ by $R_t^{(\mu)}$. The behaviour of $R_t^{(\mu)}$ at zero depends on $\mu$. Since we are interested in a Bessel process on a half-line $(a,\infty)$, for a given strictly positive $a$, the boundary condition at zero is irrelevant from our point of view. However, for completeness of the exposition we impose the killing condition at zero for $-1<\mu<0$, i.e. in the situation when $0$ is non-singular. Then the density of the transition probability (with respect to the Lebesgue measure) is given by (\ref{eq:transitiondensity:formula}).
For $x> 0$ we define the first hitting time of a given level $a>0$ by
\formula{
T_a^{(\mu)} =\inf\{t>0: R_t^{(\mu)}=a\}\/.
}
Notice that for $\mu\leq 0$ we have $T_a^{(\mu)}<\infty$ a.s., while for $\mu>0$ the variable $T_a^{(\mu)}$ is infinite with positive probability.
We denote by $q_{x,a}^{(\mu)}(s)$ the density function of $T_a^{(\mu)}$. The sharp estimates of $q_{x,a}^{(\mu)}(s)$ were obtained in \cite{BMR3:2013}. We recall this result for $a=1$, which implies the result for every $a>0$, due to the scaling property of Bessel processes. More precisely, it was shown that for every $x>1$ and $t>0$ we have
\formula[hittingtime:estimates]{
q_{x,1}^{(\mu)}(t) &\stackrel{\mu}{\approx}
(x-1)\left(1\wedge\frac{1}{x^{2\mu}}\right)\frac{
{e^{-(x-1)^2/(2t)}}}{t^{3/2}} \frac{ x^{2|\mu|-1} }{t^{|\mu|-1/2}+
x^{|\mu|-1/2}}\/,\quad \mu\neq 0\/.
}
The above-given bounds imply the description of the survival probabilities (see Theorem 10 in \cite{BMR3:2013})
\formula[sp:estimate:mu]{
\textbf{P}_x^{(\mu)}(T^{(\mu)}_1>t) &\stackrel{\mu}{\approx} \frac{x-1}{\sqrt{x\wedge t}+x-1}\frac{1}{t^{\mu}+x^{2\mu}}\/, \quad x>1\/,\quad t>0\/.
}
The main object of our study is the density of the transition probabilities for the Bessel process starting from $x>a$ and killed at time $T_a^{(\mu)}$. Taking into account the Hunt formula (\ref{eq:hunt:general}) and the fact that continuity of the paths implies $R_{T_a^{(\mu)}}^{(\mu)}=a$ a.s., we can represent $p_a^{(\mu)}(t,x,y)$ in terms of $p^{(\mu)}(t,x,y)$ and $q_{x,a}^{(\mu)}(s)$ in the following way
\formula[eq:hunt:formula]{
p_a^{(\mu)}(t,x,y) & = p^{(\mu)}(t,x,y) - r_a^{(\mu)}(t,x,y)\\
\label{eq:hunt:formula2}
&= p^{(\mu)}(t,x,y)- \int_0^t p^{(\mu)}(t-s,a,y)q_{x,a}^{(\mu)}(s)ds\/.
}
The scaling property of a Bessel process together with (\ref{eq:hunt:formula2}) imply that
\formula[eq:pt1:scaling]{
p_a^{(\mu)}(t,x,y) = \frac{1}{a}p_1^{(\mu)}(t/a^2,x/a,y/a)\/,\quad x,y>a\/,\quad t>0\/.
}
Moreover, the absolute continuity property (\ref{ac:formula}) applied for $\mu>0$ and $\nu=-\mu$ gives
\formula{
p_1^{(-\mu)}(t,x,y) = \left(\frac{x}{y}\right)^{2\mu}p_1^{(\mu)}(t,x,y)\/,\quad x,y>1\/,\quad t>0\/.
}
These two properties show that it is enough to prove Theorem \ref{thm:main} only for $a=1$ and $\mu>0$. To shorten the notation we will write $q_x^{(\mu)}(s)=q_{x,1}^{(\mu)}(s)$. Since we consider the densities with respect to the Lebesgue measure (not with respect to the speed measure $m(dx)=2x^{2\mu+1}dx$) the symmetry property of $p_1^{(\mu)}(t,x,y)$ in this case reads as follows:
\formula[eq:pt1:symmetry]{
p_1^{(\mu)}(t,x,y) = \left(\frac{y}{x}\right)^{2\mu+1}p_1^{(\mu)}(t,y,x)\/,\quad x,y>1\/,\quad t>0\/.
}
Finally, for $\mu=1/2$ one can compute $p_1^{(\mu)}(t,x,y)$ explicitly from (\ref{eq:hunt:formula2}), using $I_{1/2}(z)=\sqrt{\frac{2}{{\pi z}}}\sinh(z)$ and the explicit form of $q_{x}^{(1/2)}(s)$, which is proportional to the density of a $1/2$-stable subordinator. More precisely, since
\formula[formula:pt:12:1]{
q_{x}^{(1/2)}(t)&=\frac{x-1}{x}\frac{1}{\sqrt{2\pi t^3}}\exp{\left(-\frac{(x-1)^2}{2t}\right)},\\
\label{formula:pt:12:2}
p^{(1/2)}(t,x,y)&=\frac{1}{\sqrt{2\pi t}}\frac{y}{x} \left(\exp{\left(-\frac{(x-y)^2}{2t}\right)}-\exp{\left(-\frac{(x+y)^2}{2t}\right)}\right) ,
}
we obtain
\formula{r_{1}^{(1/2)}(t,x,y)=&\int_{0}^{t}q_{x}^{(1/2)}(s)p^{(1/2)}(t-s,1,y)ds\\
=&\frac{x-1}{x}\frac{y}{2\pi}(H(t,(x-1)^2,(y-1)^2)-H(t,(x-1)^2,(y+1)^2)),}
where
\formula{
H(t,a,b)=\int_{0}^{t}\frac{1}{\sqrt{t-s}}\frac{1}{\sqrt{s^3}}\exp{\left(-\frac{a}{2s}\right)}\exp{\left(-\frac{b}{2(t-s)}\right)}ds\/,\quad a,b>0\/.
}
Making the substitution $w=1/s-1/t$ and using formula 3.471.15 in \cite{GradsteinRyzhik:2007}
we get
\formula[eq:H:final]{
\nonumber
H(t,a,b)&=\frac{1}{\sqrt{t}}\exp{\left(-\frac{a + b}{2t}\right)}\int_{0}^{\infty}w^{-1/2}\exp{\left(-\frac{a}{2}w-\frac{b}{2t^2w}\right)}dw\\
&=\sqrt{\frac{2\pi}{ta}}
\exp{\left(-\frac{(\sqrt{a}+\sqrt{b})^2}{2t}\right)}\/.
}
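As a quick numerical sanity check of (\ref{eq:H:final}) (again illustrative only), one can compare a direct quadrature of $H$ with the closed form:
\begin{verbatim}
# Numerical check of the closed form for H(t,a,b).
import numpy as np
from scipy.integrate import quad

def H_num(t, a, b):
    f = lambda s: (t-s)**-0.5 * s**-1.5 * np.exp(-a/(2*s) - b/(2*(t-s)))
    return quad(f, 0.0, t, points=[t/2])[0]

def H_closed(t, a, b):
    return np.sqrt(2*np.pi/(t*a)) * np.exp(-(np.sqrt(a)+np.sqrt(b))**2/(2*t))

print(H_num(2.0, 1.5, 0.7), H_closed(2.0, 1.5, 0.7))  # the values agree
\end{verbatim}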
Plugging (\ref{eq:H:final}) into the expression for $r_{1}^{(1/2)}(t,x,y)$, we obtain
\formula[eq:rt:12:formula]{
r_{1}^{(1/2)}(t,x,y)=\frac{1}{\sqrt{2\pi t}}\frac{y}{x}\left[\exp{\left(-\frac{(x+y-2)^2}{2t}\right)}-\exp{\left(-\frac{(x+y)^2}{2t}\right)}\right]
}
which together with (\ref{eq:hunt:formula2}) and (\ref{formula:pt:12:2}) give
\formula[eq:pt1:12:formula]
{ p_{1}^{(1/2)}(t,x,y)=\frac{1}{\sqrt{2\pi t}} \frac{y}{x}\left(\exp\left(-\frac{(x-y)^2}{2t}\right)-\exp\left(-\frac{(x+y-2)^2}{2t}\right)\right).
}
One can also obtain this formula using the relation between the $3$-dimensional Bessel process (i.e. the one with index $\mu=1/2$) and the $1$-dimensional Brownian motion killed when leaving the positive half-line. Note also that
\formula[eq:pt1:12:asympt]{
p_1^{(1/2)}(t,x,y) \approx \left(1\wedge\frac{(x-1)(y-1)}{t}\right) \frac{y}{x} \frac{1}{\sqrt{ t}} \exp\left(-\frac{(x-y)^2}{2t}\right)\/,
}
which is exactly (\ref{eq:mainthm}) for $\mu=1/2$.
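The comparability in (\ref{eq:pt1:12:asympt}) can be seen directly: dividing (\ref{eq:pt1:12:formula}) by the right-hand side of (\ref{eq:pt1:12:asympt}), the Gaussian factor cancels and only $z=(x-1)(y-1)/t$ remains. A short numerical scan (a sketch, not part of the argument) confirms that the ratio stays within fixed positive bounds:
\begin{verbatim}
# Ratio of the explicit mu = 1/2 density to the comparator,
# as a function of z = (x-1)(y-1)/t alone.
import numpy as np
z = np.geomspace(1e-8, 1e8, 1000)
ratio = -np.expm1(-2.0*z) / (np.sqrt(2.0*np.pi) * np.minimum(1.0, z))
print(ratio.min(), ratio.max())   # approximately 0.345 and 0.798
\end{verbatim}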
We end this section by providing a very useful relation between the densities $q_x^{(\mu)}(t)$ with different indices, which once again follows from the absolute continuity property.
\begin{lemma}
\label{lem:q:nu12mu}
For every $x>1$ and $t>0$ we have
\formula[eq:q:nu12mu]{
x^{\mu-1/2}q_x^{(\mu)}(t)\leq q_x^{(1/2)}(t)\leq x^{\nu-1/2} q_x^{(\nu)}(t)\/,
}
whenever $\nu\leq 1/2\leq \mu$.
\end{lemma}
\begin{proof}
The second inequality in (\ref{eq:q:nu12mu}) was given in Lemma 4 in \cite{BMR3:2013}. To deal with the first inequality in (\ref{eq:q:nu12mu}), we use (\ref{ac:formula}) to obtain for every $\delta>0$ and $0<\varepsilon \leq\delta^2/2\wedge 1$
\formula[eq:lem:proof01]{
\nonumber
x^{\mu-1/2}\ex_x^{(\mu)}[t-\varepsilon\leq T_1^{(\mu)}\leq t] &\leq \ex_x^{(1/2)}\left[t-\varepsilon\leq T_1^{(1/2)}\leq t;\left({R_t}\right)^{\mu-1/2}\right] \\
&\leq \left({1+\delta}\right)^{\mu-1/2}\ex_x^{(1/2)}[t-\varepsilon\leq T_1^{(1/2)}\leq t]+F_\varepsilon(x,t)\/,
}
where, by the strong Markov property,
\formula{
F_\varepsilon(x,t) &= \ex_x^{(1/2)}[t-\varepsilon\leq T_1^{(1/2)}\leq t,R_t\geq 1+\delta;\left({R_t}\right)^{\mu-1/2}]\\
&=\ex_x^{(1/2)}[t-\varepsilon\leq T_1^{(1/2)}\leq t; \ex_1^{(1/2)}[R_{t-T_1^{(1/2)}}\geq 1+\delta;\left({R_{t-T_1^{(1/2)}}}\right)^{\mu-1/2}]]\\
&=\int_{t-\varepsilon}^t q_x^{(1/2)}(u)\int_{1+\delta}^\infty y^{\mu-1/2}p^{(1/2)}(t-u,1,y)\,dydu\/.
}
By (\ref{formula:pt:12:2}), for every $r\in (0,\varepsilon)$ we have
\formula{
\int_{1+\delta}^\infty y^{\mu-1/2}p^{(1/2)}(r,1,y)\,dy &\leq \frac{1}{\sqrt{2\pi r}}\int_{1+\delta}^\infty \exp\left(-\frac{(y-1)^2}{2r}\right)y^{\mu+1/2}\,dy\\
&\leq \frac{1}{\sqrt{2\pi r}}\exp\left(-\frac{\delta^2}{4r}\right)\int_{1+\delta}^\infty \exp\left(-\frac{(y-1)^2}{4}\right)y^{\mu+1/2}\,dy\\
&\leq \frac{1}{\sqrt{2\pi \varepsilon}}\exp\left(-\frac{\delta^2}{4\varepsilon}\right)\int_{1+\delta}^\infty \exp\left(-\frac{(y-1)^2}{4}\right)y^{\mu+1/2}\,dy\/,
}
where the last inequality follows from $\varepsilon\leq \delta^2/2$. This implies that $F_\varepsilon(x,t)/\varepsilon$ vanishes as $\varepsilon$ goes to zero. Consequently, dividing both sides of (\ref{eq:lem:proof01}) by $\varepsilon$ and taking the limit as $\varepsilon \to 0$, we arrive at
\formula{
x^{\mu-1/2}q_x^{(\mu)}(t)\leq ({1+\delta})^{\mu-1/2}q_x^{(1/2)}(t)\/.
}
Since $\delta$ was arbitrary, the proof is complete.
\end{proof}
\section{Estimates for $xy/t$ large}
\label{section:xyt:large}
We begin this section with an application of the absolute continuity property of Bessel processes and formula (\ref{eq:pt1:12:formula}), which give upper bounds for $\mu\geq 1/2$ and lower bounds for $\nu\leq 1/2$. These bounds are sharp whenever $xy\geq t$.
\begin{proposition}
\label{prop:upperbounds:xytlarge}
Let $\mu\geq 1/2\geq \nu>0$. For every $x,y>1$ and $t>0$ we have
\formula[eq:up:low]{
\left(\frac{x}{y}\right)^{\mu-\frac{1}{2}}p_1^{(\mu)}(t,x,y)\leq p_1^{(1/2)}(t,x,y)\leq \left(\frac{x}{y}\right)^{\nu-\frac{1}{2}}p_1^{(\nu)}(t,x,y)\/.
}
\end{proposition}
\begin{proof}
From the absolute continuity property (\ref{ac:formula}) we get that for every $\mu\geq\nu>0$ and every Borel set $A\subset(1,\infty)$ we have
\formula{
\int_{A}p_{1}^{(\mu)}(t,x,y)dy &=\frac{1}{x^{\mu-\nu}}\ex_{x}^{(\nu)}\left[T_1^{(\nu)}>t,R_t \in A;
(R_t)^{\mu-\nu}\exp\left(-\frac{\mu^{2}-\nu^2}{2}\int_{0}^{t}\frac{ds}{R^{2}_s}\right)\right]\\
&\leq\frac{1}{x^{\mu -\nu}}\ex_{x}^{(\nu)}[T_1^{(\nu)}>t,R_t \in
A;(R_t)^{\mu-\nu}]= \int_A \left(\frac{y}{x}\right)^{\mu -\nu}p_1^{(\nu)}(t,x,y)\,dy\/.}
Hence
\formula[eq:munu:relation]{
p_{1}^{(\mu)}(t,x,y)\leq&\left(\frac{y}{x}\right)^{\mu-\nu}p_{1}^{(\nu)}(t,x,y)\/.
}
Taking $\mu\geq 1/2$ and $\nu=1/2$ gives the left-hand side of (\ref{eq:up:low}) and taking $\nu\leq 1/2$ and $\mu=1/2$ gives the right-hand side of (\ref{eq:up:low}).
\end{proof}
The absolute continuity can also be used to obtain the estimates for small times $t$ in a very similar way. Note that if $t<1$, then we always have $xy>t$. The proof of the main theorem will be given in subsequent propositions without the assumption that $t$ is bounded, but we present this simple proof to show that for $xy\geq t$ the estimates for small $t$ are just an immediate consequence of the absolute continuity of Bessel processes.
\begin{proposition}
Let $\mu >0$. For every $x,y> 1$ and $t\in(0,1]$ we have
\formula[eq:t:small]{
p_{1}^{(\mu)}(t,x,y)\stackrel{\mu}{\approx}& \left(1\wedge\frac{(x-1)(y-1)}{t}\right)\left(\frac{y}{x}\right)^{\mu+1/2} \frac{1}{\sqrt{t}}\exp\left(-\frac{(x-y)^2}{2t}\right) \/.
}
\end{proposition}
\begin{proof}
Let $\mu\geq \nu>0$. Taking Borel set $A\subset(1,\infty)$ and $t\leq 1$ we have
\formula{\int_{A}p_{1}^{(\mu)}(t,x,y)dy
=&\frac{1}{x^{\mu-\nu}}\ex_{x}^{(\nu)}\left[T_1^{(\nu)}>t;R_t \in A;
(R_t)^{\mu-\nu}\exp\left(-\frac{\mu^{2}-\nu^2}{2}\int_{0}^{t}\frac{ds}{R^{2}_s}\right)\right]\/.
}
Since $\inf\{R_s:s<t\}>1$ on $\{T_1^{(\nu)}>t\}$ we can write
\formula{
\int_{A}p_{1}^{(\mu)}(t,x,y)dy
\geq&\frac{1}{x^{\mu-\nu}}\ex_{x}^{(\nu)}\left[T_1^{(\nu)}>t;R_t \in A;
(R_t)^{\mu-\nu}\exp\left(-\frac{\mu^{2}-\nu^2}{2}t\right)\right]\\
\geq&\exp\left(-\frac{\mu^{2}-\nu^2}{2}\right)\int_{A}\left(\frac{y}{x}\right)^{\mu-\nu}p_{1}^{(\nu)}(t,x,y)dy\/.}
Hence we get
\formula{p_{1}^{(\mu)}(t,x,y)\geq&\exp\left(-\frac{\mu^{2}-\nu^2}{2}\right)\left(\frac{y}{x}\right)^{\mu-\nu}p_{1}^{(\nu)}(t,x,y)\/.
}
Now taking $\mu\geq 1/2$ and $\nu=1/2$, together with (\ref{eq:pt1:12:formula}) and the result of Proposition \ref{prop:upperbounds:xytlarge}, gives the proof of (\ref{eq:t:small}) for $\mu\geq 1/2$. An analogous argument applied for $\mu<1/2$ ends the proof.
\end{proof}
The next proposition, together with Proposition \ref{prop:upperbounds:xytlarge}, provides the estimates for $x,y$ bounded away from $1$. Notice that if $x,y>c>1$ and $xy>t$ then
\formula[eq:xy:away:bounds1]{
\frac{(x-1)(y-1)}{t}\geq \left(1-\frac{1}{c}\right)^2 \frac{xy}{t} \geq \left(1-\frac{1}{c}\right)^2\/,
}
and consequently the right-hand side of (\ref{eq:mainthm:rewrite}) is comparable with a constant, which means that $p_1^{(\mu)}(t,x,y)$ is comparable with $p^{(\mu)}(t,x,y)$.
\begin{proposition}
\label{prop:xy:away}
Let $\mu\geq 1/2\geq \nu>0$. Then there exist constants $C_1^{(\nu)},C_2^{(\mu)}>0$ and $C_3^{(\mu)}>1$ such that
\formula{
C_1^{(\nu)}\left(\frac{x}{y}\right)^{\nu+1/2}p_1^{(\nu)}(t,x,y)\leq \frac{1}{\sqrt{t}}\exp\left(-\frac{(x-y)^2}{2t}\right)\leq C_2^{(\mu)}\left(\frac{x}{y}\right)^{\mu+1/2}p_1^{(\mu)}(t,x,y)\/,
}
whenever $xy\geq t$, where the first inequality holds for $x,y>2$ and the second one is valid for $x,y>C_3^{(\mu)}$.
\end{proposition}
\begin{proof}
Taking $0<\nu\leq 1/2$ and using the description of the behaviour of $I_\nu(z)$ at infinity \eqref{eq:I:asym:infty}, together with the general estimate $p_1^{(\nu)}(t,x,y)\leq p^{(\nu)}(t,x,y)$ (which is an immediate consequence of the definition (\ref{eq:hunt:general})), we get
\formula{
p_1^{(\nu)}(t,x,y)&\leq p^{(\nu)}(t,x,y)
\stackrel{\nu}{\approx}\frac{1}{\sqrt{t}}\left(\frac{y}{x}\right)^{\nu+1/2}\exp\left(-\frac{(x-y)^2}{2t}\right)\/,\quad xy\geq t\/.
}
This ends the proof for small indices.
Now let $\mu\geq 1/2$. Since the modified Bessel function $I_\mu(z)$ is positive, continuous and behaves like $(2\pi z)^{-1/2}e^{z}$ at infinity (see (\ref{eq:I:asym:infty})), there exists a constant $c_1>1$ such that
\formula{
I_\mu\left(\frac{xy}{t}\right)\geq \frac{1}{c_1}\sqrt{\frac{t}{2\pi xy}}\exp\left(\frac{xy}{t}\right)\/,
}
whenever $xy\geq t$. One can show that it is enough to take $c_1 = (I_\mu(1)e^{-1}\sqrt{2\pi})^{-1}$.
Consequently, applying the above-given estimate to (\ref{eq:transitiondensity:formula}), we arrive at
\formula[eq:proof:1]{
\left(\frac{y}{x}\right)^{\mu-1/2}p^{(1/2)}(t,x,y) \geq p^{(\mu)}(t,x,y)\geq \frac{1}{c_1}\frac{1}{\sqrt{2\pi t}}\left(\frac{y}{x}\right)^{\mu+1/2}\exp\left(-\frac{(x-y)^2}{2t}\right)\/,\quad {xy}\geq{t}\/,
}
where the first inequality is just (\ref{eq:munu:relation}). Moreover, by (\ref{eq:q:nu12mu}), we have
\formula{
q_x^{(\mu)}(t)\leq \frac{q_x^{(1/2)}(t)}{x^{\mu-1/2}} = \frac{x-1}{x^{\mu+1/2}}\frac{1}{\sqrt{2\pi}t^{3/2}}\exp\left(-\frac{(x-1)^2}{2t}\right)\/,\quad t>0\/,\ x>1\/.
}
This, together with the left-hand side of (\ref{eq:proof:1}) and (\ref{eq:rt:12:formula}), implies
\formula{
{r_1^{(\mu)}(t,x,y)} &= \int_0^t q_x^{(\mu)}(s)p^{(\mu)}(t-s,1,y)\,ds
\leq \left(\frac{y}{x}\right)^{\mu-1/2} \int_0^t q_x^{(1/2)}(s)p^{(1/2)}(t-s,1,y)\,ds\\
&= \left(\frac{y}{x}\right)^{\mu-1/2} r_1^{(1/2)}(t,x,y)\\
& = \frac{1}{\sqrt{2\pi t}}\left(\frac{y}{x}\right)^{\mu+1/2}\left(\exp\left(-\frac{(x+y-2)^2}{2t}\right)-\exp\left(-\frac{(x+y)^2}{2t}\right)\right)\/.
}
Let $C_3^{(\mu)} = \left(1-\sqrt{\frac{2c_1}{2c_1+1}}\right)^{-1}$. Taking into account the right-hand side of (\ref{eq:proof:1}) and (\ref{eq:xy:away:bounds1}), we obtain for $x,y>C_3^{(\mu)}$ that
\formula{
\frac{r_1^{(\mu)}(t,x,y)}{p^{(\mu)}(t,x,y)}&\leq c_1\exp\left(\frac{(x-y)^2}{2t}\right)\left(\exp\left(-\frac{(x+y-2)^2}{2t}\right)-\exp\left(-\frac{(x+y)^2}{2t}\right)\right)\\
&=c_1\left(\exp\left(-\frac{2(x-1)(y-1)}{t}\right)-\exp\left(-\frac{2xy}{t}\right)\right)\\
&\leq c_1\left(\exp\left(-c_2\frac{2xy}{t}\right)-\exp\left(-\frac{2xy}{t}\right)\right)\/,
}
where
\formula{
c_2 = \left(1-\frac{1}{C_3^{(\mu)}}\right)^2 = \frac{2c_1}{2c_1+1}<1\/.
}
Taking into account the general estimate
\formula{
e^{-c_2z}-e^{-z}\leq \frac{1-c_2}{c_2}\/,\quad z>0\/,c_2<1
}
we arrive at
\formula{
\frac{r_1^{(\mu)}(t,x,y)}{p^{(\mu)}(t,x,y)}&\leq c_1\frac{1-c_2}{c_2} = \frac{1}{2}\/.
}
Consequently
\formula{
p_1^{(\mu)}(t,x,y)\geq \frac12 p^{(\mu)}(t,x,y)\geq \frac{1}{2c_1} \frac{1}{\sqrt{2\pi t}}\left(\frac{y}{x}\right)^{\mu+1/2}\exp\left(-\frac{(x-y)^2}{2t}\right)\/.
}
\end{proof}
Now we turn our attention to the case when $x$ and $y$ are bounded. The next proposition, however, is much more general.
\begin{proposition}
\label{prop:xminy}
For fixed $m>0$ and $\mu\geq 1/2\geq \nu>0$ there exist constants $C^{(\mu)}_4,C^{(\nu)}_4>0$ such that
\formula{
C^{(\mu)}_4\left(\frac{x}{y}\right)^{\mu+1/2}p_{1}^{(\mu)}(t,x,y)&\geq \left(1\wedge\frac{(x-1)(y-1)}{t}\right)\frac{1}{\sqrt{
t}}\exp\left(-\frac{(x-y)^{2}}{2t}\right)}
and
\formula{
\left(1\wedge\frac{(x-1)(y-1)}{t}\right)\frac{1}{\sqrt{
t}}\exp\left(-\frac{(x-y)^{2}}{2t}\right)\geq C^{(\nu)}_4\left(\frac{x}{y}\right)^{\nu+1/2}p_{1}^{(\nu)}(t,x,y)
}
whenever ${(x \wedge y)^{2}}\geq mt$.
\end{proposition}
\begin{proof}
Without loss of generality we can assume that $1<x<y$. We put $b=(x+1)/2$ and take $\mu\geq 1/2$. Using (\ref{ac:formula}) and the fact that $T_b^{(1/2)}\leq T_1^{(1/2)}$, we can write for every Borel set $A\subset (1,\infty)$ that
\formula{
\int_{A}p_{1}^{(\mu)}(t,x,y)dy&
\geq \ex_{x}^{(1/2)}\left[t<T_b^{(1/2)},R_t \in
A;\left(\frac{R_{t}}{x}\right)^{\mu-1/2}\exp\left(-\frac{\mu^2 -1/4}{2}\int_{0}^{t}\frac{ds}{R_s^{2}}\right)\right]
}
Since up to time $T_b^{(1/2)}$ we have
\formula{
\int_{0}^{t}\frac{ds}{R_s^{2}}\leq \frac{4t}{(x+1)^2}\leq \frac{4t}{x^2}\leq \frac{4}{m}\/,
}
we obtain
\formula{
\int_{A}p_{1}^{(\mu)}(t,x,y)dy\geq \exp\left(-\frac{4\mu^2-1}{2m}\right)\ex_{x}^{(1/2)}\left[t<T_b^{(1/2)},R_t
\in A;\left(\frac{R_t}{x}\right)^{\mu-1/2}\right]\/,
}
which gives
\formula[eq:proof:2]{
p_1^{(\mu)}(t,x,y)\geq \exp\left(-\frac{4\mu^2-1}{2m}\right)\left(\frac{y}{x}\right)^{\mu-1/2}p_b^{(1/2)}(t,x,y)\/.
}
On the other hand, the scaling property \eqref{eq:pt1:scaling} and formula \eqref{eq:pt1:12:asympt} give
\formula{
p_{b}^{(1/2)}(t,x,y) &=\frac{1}{b}p_{1}^{(1/2)}\left(\frac{t}{b^2};\frac{x}{b},\frac{y}{b}\right)\\
&\approx \frac{1}{\sqrt{t}}\frac{y}{x}\exp\left(-\frac{(x-y)^2}{2t}\right)\left(1\wedge \frac{(x-b)(y-b)}{t}\right)\\
&\approx \frac{1}{\sqrt{t}}\frac{y}{x}\exp\left(-\frac{(x-y)^2}{2t}\right)\left(1\wedge \frac{(x-1)(y-1)}{t}\right)\/,
}
where the last comparison follows from
\formula{
x-b = \frac{x-1}{2}\/,\quad \frac{y-1}{2}\leq y-b\leq y-1\/.
}
This ends the proof for $\mu\geq 1/2$.
For $1/2\geq \nu>0$ we similarly write
\formula{
\int_{A}p_{b}^{(1/2)}(t,x,y)dy&
\leq \ex_{x}^{(\nu)}\left[t<T_1^{(\nu)},R_t \in
A;\left(\frac{R_{t}}{x}\right)^{1/2-\nu}\exp\left(\frac{\nu^2-1/4}{2}\int_{0}^{t}\frac{ds}{R_s^{2}}\right)\right]
}
and we obtain
\formula{
p_{b}^{(1/2)}(t,x,y)\leq \exp\left(\frac{4\nu^2-1}{2m}\right)\left(\frac{y}{x}\right)^{1/2-\nu}p_1^{(\nu)}(t,x,y)\/.
}
This, together with the above-given estimates for $p_b^{(1/2)}(t,x,y)$, finishes the proof.
\end{proof}
Since for $x,y<C$ and $xy\geq t$, for some fixed $C>1$, we have
\formula{
\frac{(x\wedge y)^2}{t}\geq \frac{xy}{Ct}\geq \frac{1}{C}\/,
}
applying the results of Proposition \ref{prop:xminy} (with $m=C^{-1}$) and Proposition \ref{prop:upperbounds:xytlarge} gives
\begin{corollary}
\label{Cor:xy:bounded}
For every $C>1$ we have
\formula{
p_{1}^{(\mu)}(t,x,y)\stackrel{\mu, C}{\approx}& \left(1\wedge\frac{(x-1)(y-1)}{t}\right)\left(\frac{y}{x}\right)^{\mu+1/2} \frac{1}{\sqrt{t}}\exp\left(-\frac{(x-y)^2}{2t}\right)
}
whenever $x,y<C$ and $xy\geq t$.
\end{corollary}
Finally, we end this section with two propositions related to the case when one of the space variables is close to $1$ and the other is large. We deal with this case separately for $\mu<1/2$ and $\mu\geq 1/2$.
\begin{proposition}
For every $\nu\in (0,1/2)$ there exists a constant $C_5^{(\nu)}>0$ such that
\formula{
p_1^{(\nu)}(t,x,y)\leq C_5^{(\nu)} \frac{1}{\sqrt{t}}\left(\frac{y}{x}\right)^{\nu+1/2}\exp\left(-\frac{(x-y)^2}{2t}\right)\left(1\wedge \frac{(x-1)(y-1)}{t}\right)
}
for $1<x\leq 2\leq y$ and $xy\geq t$.
\end{proposition}
\begin{proof}
Since $t-s\leq t$ and $I_\nu(z)$ is increasing, for every $s\in(0,t)$ we have
\formula{
\frac{1}{t-s}\geq\frac{1}{\sqrt{t}}\frac{1}{\sqrt{t-s}}\/, \quad I_{\nu}\left(\frac{y}{t-s}\right)\geq I_{\nu}\left(\frac{y}{t}\right)\/.
}
Hence, using the right-hand side of (\ref{eq:q:nu12mu}), i.e.
\formula{
q_x^{(\nu)}(s)\geq \frac{x-1}{\sqrt{2\pi s^3}}\frac{1}{x^{\nu+1/2}}\exp{\left(-\frac{(x-1)^2}{2s}\right)}\/, \quad 0< \nu <1/2\/, \ s>0\/,
}
together with formula (\ref{eq:transitiondensity:formula}), we get
\formula{
r_1^{(\nu)}(t,x,y)&=\int_0^{t}q_x^{(\nu)}(s)\frac{y^{1+\nu}}{t-s}\exp{\left(-\frac{1+y^2}{2(t-s)}\right)}I_{\nu}\left(\frac{y}{t-s}\right)\,ds\\
&\geq\frac{x-1}{\sqrt{2\pi }}\left(\frac{y}{x}\right)^{\nu+1}\sqrt{\frac{x}{t}}I_{\nu}\left(\frac{y}{t}\right)H(t,(x-1)^2,1+y^2)\\
&={\frac{\sqrt{x}}{t}}\left(\frac{y}{x}\right)^{\nu+1}I_{\nu}\left(\frac{y}{t}\right)\exp{\left(-\frac{(x-1+\sqrt{y^2 +1})^2}{2t}\right)}
}
where the last equality follows from (\ref{eq:H:final}). Using \eqref{MBF:ineq:upper} we obtain
\formula{
p^{(\nu)}(t,x,y) &= \frac{y^{\nu+1}}{t}\exp\left(-\frac{x^2+y^2}{2t}\right)\,\frac{1}{x^{\nu}}I_\nu\left(\frac{xy}{t}\right)\\
&\leq \frac{y^{\nu+1}}{t}\exp\left(-\frac{(x-y)^2}{2t}\right)\exp\left(-\frac{y}{t}\right)I_\nu\left(\frac{y}{t}\right)\/,
}
which together with previously given estimates, \eqref{eq:hunt:formula} and finally \eqref{eq:I:asym:infty} give
\formula{
p_1^{(\nu)}(t,x,y) &\leq \frac{y^{\nu+1}}{t}\exp{\left(-\frac{(x-y)^2}{2t}\right)}\exp{\left(-\frac{y}{t}\right)}I_{\nu}\left(\frac{y}{t}\right)f_{y,t}(x)\\
&\leq c_1 \frac{y^{\nu+1/2}}{\sqrt{t}}\exp{\left(-\frac{(x-y)^2}{2t}\right)}f_{y,t}(x)\/,
}
where
\formula{
f_{y,t}(x) = 1-\frac{1}{x^{\nu+1/2}}\exp\left(-\frac{(x-1)(\sqrt{y^2+1}+y-1)}{t}\right)\/.
}
By an elementary computation,
\formula{
f_{y,t}'(x) &= \frac{1}{x^{\nu+3/2}}\exp\left(-\frac{(x-1)(\sqrt{y^2+1}+y-1)}{t}\right)\left(\frac{\sqrt{y^2+1}+y-1}{t}x+\nu+1/2\right)\\
&\leq \frac{1}{x^{\nu+3/2}}\left(\frac{2xy}{t}+1\right)\leq \frac{4xy}{t} \leq 16\frac{y-1}{t}\/.
}
Here we have used the following inequalities
\formula{
\sqrt{y^2 +1}+y-1<2y\/,\quad xy\geq t\/,\quad 1<x\leq 2\leq y\/.
}
Thus, since $f_{y,t}(1)=0$, by the mean value theorem there exists $d=d_{x,y,t}\in (1,x)$ such that
\formula{
f_{y,t}(x)&= (x-1)f_{y,t}'(d)\leq 16\frac{(x-1)(y-1)}{t}\/.
}
Consequently,
\formula{
p_1^{(\nu)}(t,x,y)&\leq c_1 2^{\nu+9/2} \left(1\wedge \frac{(x-1)(y-1)}{t}\right) \left(\frac{y}{x}\right)^{\nu+1/2}\frac{1}{\sqrt{t}}\exp\left(-\frac{(x-y)^2}{2t}\right)\/.
}
\end{proof}
\begin{proposition}
\label{prop:xyt:large:lower:3}
For every $\mu\geq 1/2$ and $c>1$ there exists a constant $C_6^{(\mu)}(c)>0$ such that for every $1<x\leq c$ and $y\geq 5c(\mu+1)$ we have
\formula{
p_1^{(\mu)}(t,x,y)\geq C_6^{(\mu)}(c)\frac{1}{\sqrt{t}}\left(\frac{y}{x}\right)^{\mu+1/2}\exp{\left(-\frac{(x-y)^2}{2t}\right)}\left(1\wedge \frac{(x-1)(y-1)}{t}\right)\/,
}
whenever $xy\geq t$.
\end{proposition}
\begin{proof}
Let us fix $\mu\geq 1/2$. For every $0<s<t$, using \eqref{MBF:ineq:upper}, we have
\formula{
I_{\mu}\left(\frac{y}{t-s}\right)<I_{\mu}\left(\frac{y}{t}\right)\left(\frac{t}{t-s}\right)^{\mu}\exp\left({\frac{y}{t-s}}\right)\exp\left({-\frac{y}{t}}\right)
}
and consequently
\formula{
\frac{p^{(\mu)}(t-s,1,y)}{p^{(\mu)}(t,1,y)}&<\left(\frac{t}{t-s}\right)^{\mu +1}\exp{\left(-\frac{(y-1)^2}{2}\left(\frac{1}{t-s}-\frac{1}{t}\right)\right)}
=\frac{g_y(t-s)}{g_y(t)}\/,
}
where
\formula{
g_y(w)=\left(\frac{1}{w}\right)^{\mu +1}\exp\left({-\frac{(y-1)^2}{2w}}\right)\/,\quad w>0\/.
}
Note that
\formula{
g_y'(w) = \left(\frac{1}{w}\right)^{\mu +2}\exp\left({-\frac{(y-1)^2}{2w}}\right)\left(\frac{(y-1)^2}{2w}-(\mu +1)\right)\/.
}
Since $x\leq c$, $y\geq 5c(\mu+1)>2$ and $xy\geq t$ we have $4(y-1)\geq 2y \geq 2t/c$. Moreover $y-1\geq 4c(\mu+1)$. Thus
\formula{
\frac{(y-1)^2}{2t}\geq \frac{4c(\mu+1)(y-1)}{2t}\geq \mu+1\/.
}
It means that under our assumptions on $x$, $y$ and $t$ the function $g_y(w)$ is increasing on $(0,t)$, and consequently $g_y(t-s)\leq g_y(t)$ for every $0<s<t$. Hence
\formula{
r_{1}^{(\mu)}(t,x,y) &=\int_{0}^{t}q_{x}^{(\mu)}(s)p^{(\mu)}(t-s,1,y)ds\leq p^{(\mu)}(t,1,y)\int_{0}^{t}q_{x}^{(\mu)}(s)ds\\
&\leq x^{-\mu}\exp{\left(\frac{x^2-1}{2t}\right)}\frac{I_{\mu}\left(y/t\right)}{I_{\mu}\left(xy/t\right)}p^{(\mu)}(t,x,y)\/,
}
where we used $\int_{0}^{t}q_{x}^{(\mu)}(s)ds\leq \pr_x^{(\mu)}(T_1^{(\mu)}<\infty)=x^{-2\mu}$.
The above-given ratio of modified Bessel functions can be estimated from above by using \eqref{MBF:ineq:lower} as follows
\formula{
I_{\mu}\left(\frac{y}{t}\right)\leq I_{\mu}\left(\frac{xy}{t}\right)\exp{\left(-\frac{(x-1)y}{t}\right)}x^{\mu}\/.
}
Consequently
\formula{r_{1}^{(\mu)}(t,x,y)\leq p^{(\mu)}(t,x,y)\exp{\left(-\frac{(x-1)(2y-x-1)}{2t}\right)}.}
Finally observe that $2y-x-1>y-1$ and we arrive at
\formula{
p_{1}^{(\mu)}(t,x,y)&\geq \left(1-\exp{\left(-\frac{(x-1)(y-1)}{2t}\right)}\right)p^{(\mu)}(t,x,y)\\
&\stackrel{\mu}{\approx}\left(1\wedge \frac{(x-1)(y-1)}{t}\right)\frac{1}{\sqrt{t}}\left(\frac{y}{x}\right)^{\mu+1/2}\exp{\left(-\frac{(x-y)^2}{2t}\right)}\/.
}
This ends the proof.
\end{proof}
The proof of (\ref{eq:mainthm}) in the case $xy\geq t$ can be deduced from the above-given propositions in the following way. Let $\mu\geq 1/2$ and assume, without any loss of generality, that $x\leq y$. The upper bounds for every $x,y>1$ are given in Proposition \ref{prop:upperbounds:xytlarge}. From Proposition \ref{prop:xy:away} we know that the lower bounds are valid for $x,y>C_3^{(\mu)}$. If $x\leq C_3^{(\mu)}$ and $y\geq 5C_3^{(\mu)}(\mu+1)$, then the lower bounds are given in Proposition \ref{prop:xyt:large:lower:3}. Finally, taking $C=5C_3^{(\mu)}(\mu+1)$ in Corollary \ref{Cor:xy:bounded}, we get the lower bounds in the remaining range of the parameters $x$ and $y$. The proof for $\nu\leq 1/2$ is obtained in the same way.
\section{Estimates for $xy/t$ small}
\label{section:xyt:small}
In this section we provide estimates of $p_1^{(\mu)}(t,x,y)$ whenever $xy<t$. Note also that (\ref{eq:mainthm}) can be written in the following shorter way
\formula{
p_1^{(\mu)}(t,x,y) \stackrel{\mu}{\approx}\frac{x-1}{x}\frac{y-1}{y}\left(\frac{y^2}{t}\right)^{\mu+1/2}\frac{1}{\sqrt{t}}\exp\left(-\frac{x^2+y^2}{2t}\right)\/,
}
whenever $xy<t$; the elementary algebra behind this reduction is recorded below. The main difficulty is to obtain the estimates when one of the space parameters is close to $1$ and the other is large, i.e. tends to infinity. In this case we have to take care of cancellations of the two quantities appearing in (\ref{eq:hunt:formula}), while not losing control of the exponential behaviour.
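Namely, for $xy<t$ one has $(x-1)(y-1)<xy<t$, so the first factor in (\ref{eq:mainthm}) equals $(x-1)(y-1)/t$, while a direct computation gives
\formula{
\frac{(x-1)(y-1)}{t}\left(\frac{xy}{t}\right)^{\mu-\frac{1}{2}}\left(\frac{y}{x}\right)^{\mu+\frac{1}{2}}\frac{1}{\sqrt{t}} = \frac{x-1}{x}\,\frac{y-1}{y}\left(\frac{y^2}{t}\right)^{\mu+\frac{1}{2}}\frac{1}{\sqrt{t}}\/;
}
moreover, $\exp\left(-\frac{(x-y)^2}{2t}\right) = e^{xy/t}\exp\left(-\frac{x^2+y^2}{2t}\right)$ with $1\leq e^{xy/t}\leq e$, so the two exponential factors are comparable. We begin with the upper bounds.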
\begin{proposition}
\label{prop:xyt:small:upper}
For every $\mu>0$ there exists a constant $C_7^{(\mu)}>0$ such that
\formula{
p_1^{(\mu)}(t,x,y)\leq C_7^{(\mu)} \frac{x-1}{x}\frac{y-1}{y}\left(\frac{y^2}{t}\right)^{\mu+1/2}\frac{1}{\sqrt{t}}\exp\left(-\frac{x^2+y^2}{2t}\right)\/,
}
whenever $xy\leq t$.
\end{proposition}
\begin{proof}
If $x,y>2$ the result follows immediately from the general estimate $p_1^{(\mu)}(t,x,y)\leq p^{(\mu)}(t,x,y)$ and (\ref{eq:I:asym:zero}) which gives
\formula[pt:estimate:xyt:small]{
p^{(\mu)}(t,x,y)\approx \left(\frac{y^2}{t}\right)^{\mu+1/2}\frac{1}{\sqrt{t}}\exp\left(-\frac{x^2+y^2}{2t}\right)\/,\quad \frac{xy}{t}\leq 1\/.
}
Note that for every $x,y>0$ and $t>0$ there exists $c_1>0$ such that
\formula[pt:estimate:upper]{
p^{(\mu)}(t,x,y)\leq c_1 \frac{y^{2\mu+1}}{t^{\mu+1}}\/.
}
If $xy<t$, then it immediately follows from (\ref{pt:estimate:xyt:small}) by estimating the exponential term by $1$. For $xy\geq t$ we use the asymptotic behaviour (\ref{eq:I:asym:infty}) to show that
\formula[pt:eestimate:xyt:large]{
p^{(\mu)}(t,x,y)\approx\frac{1}{\sqrt{t}}\left(\frac{y}{x}\right)^{\mu+1/2}\exp\left(-\frac{|x-y|^2}{2t}\right)\leq \frac{y^{2\mu+1}}{t^{\mu+1}}\left(\frac{t}{xy}\right)^{\mu+1/2}\leq\frac{y^{2\mu+1}}{t^{\mu+1}}\/.
}
In particular, for all $z,w>1$ and $1<y<2$ there exists $c_2>0$ such that
\formula[pt:estimate:rel]{
p^{(\mu)}(t/3,z,w)\leq c_2\left(\frac{w}{y}\right)^{2\mu+1}\frac{1}{t^{\mu+1}}\/.
}
The Chapman-Kolmogorov equation and estimating the middle term using (\ref{pt:estimate:rel}) give
\formula{
p_1^{(\mu)}(t,x,y) &= \int_1^\infty\int_1^\infty p_1^{(\mu)}(t/3,x,z)p_1^{(\mu)}(t/3,z,w)p_1^{(\mu)}(t/3,w,y)dzdw\\
&\leq \frac{c_3}{t^{\mu+1}} \int_1^\infty p_1^{(\mu)}(t/3,x,z)dz \int_1^\infty \left(\frac{w}{y}\right)^{2\mu+1} p_1^{(\mu)}(t/3,w,y)dw\\
&= \frac{c_3}{t^{\mu+1}} P^{(\mu)}_x(T^{(\mu)}_1>t/3)P^{(\mu)}_y(T^{(\mu)}_1>t/3)\/.
}
Here the last equality follows from the symmetry property (\ref{eq:pt1:symmetry}).
By (\ref{sp:estimate:mu}), whenever $xy<t$ and $1<x,y<2$ we have
\formula{
P^{(\mu)}_x(T^{(\mu)}_1>t/3) &= P^{(\mu)}_x(\infty>T^{(\mu)}_1>t/3)+P^{(\mu)}_x(T^{(\mu)}_1=\infty)\approx \frac{x-1}{t^{\mu}}+1-\frac{1}{x^{2\mu}}\approx x-1\/,
}
which ends the proof of the upper bound in this case.
Now assume that $y\geq 2$, $1<x\leq 2$ and $xy\leq t$. The other case $x\geq 2$, $1<y\leq 2$ follows from the symmetry condition mentioned above. Using the fact that $\int_0^\infty q_x^{(\mu)}(u)du=x^{-2\mu}$ and (\ref{eq:hunt:formula}), we can write
\formula{
p_1^{(\mu)}(t,x,y) &\leq p^{(\mu)}(t,x,y)-\int_0^{1/2}q_x^{(\mu)}(u)p^{(\mu)}(t-u,1,y)du\\
&= J_1(t,x,y)+J_2(t,x,y)+J_3(t,x,y)\/,
}
where
\formula{
J_1(t,x,y) &= p^{(\mu)}(t,x,y)-\frac{1}{x^{2\mu}}p^{(\mu)}(t,x,y)+\pr^{(\mu)}_x(\infty>T_1^{(\mu)}>1/2)p^{(\mu)}(t,x,y)\/,\\
J_2(t,x,y) &= \pr^{(\mu)}_x(T_1^{(\mu)}\leq 1/2)(p^{(\mu)}(t,x,y)-p^{(\mu)}(t,1,y))\/,\\
J_3(t,x,y) &= \int_0^{1/2} q_x^{(\mu)}(u)(p^{(\mu)}(t,1,y)-p^{(\mu)}(t-u,1,y))\,du\/.
}
It is obvious that for $1<x<2$ we have
\formula{
J_1(t,x,y)\leq c_4 (x-1)p^{(\mu)}(t,x,y)\/.
}
To deal with $J_2(t,x,y)$ note that the differentiation formula (\ref{eq:I:diff}), the asymptotic behavior (\ref{eq:I:asym:zero}) and positivity of $I_\mu(z)$ give
\formula{
\dfrac{d}{dx}\left[e^{-x^2/2t}\left(\frac{t}{xy}\right)^{\mu}I_\mu\left(\frac{xy}{t}\right)\right] &= -\frac{x}{t}e^{-x^2/2t}\left(\frac{t}{xy}\right)^{\mu}I_\mu\left(\frac{xy}{t}\right)+e^{-x^2/2t}\frac{y}{t}\left(\frac{t}{xy}\right)^{\mu}I_{\mu+1}\left(\frac{xy}{t}\right)\\
&\leq c_5 e^{-x^2/2t}\left(\frac{xy}{t}\right)^2\leq c_5\/,
}
whenever $xy<t$. Consequently, by the mean value theorem, we obtain
\begin{eqnarray*}
J_2(t,x,y)\leq (p^{(\mu)}(t,x,y)-p^{(\mu)}(t,1,y))\leq c_5 (x-1)\left(\frac{y^2}{t}\right)^{\mu+1/2}\frac{1}{\sqrt{t}}e^{-y^2/2t}\/.
\end{eqnarray*}
Finally, the bounds of $J_3(t,x,y)$ follow from the estimates for the derivative of $p^{(\mu)}(t,1,y)$ in $t$. Using once again (\ref{eq:I:diff}) and skipping the negative components we have
\begin{eqnarray*}
h(t,y) &\stackrel{def}{=}& \dfrac{d}{dt}\left(\frac{1}{t^{\mu+1}}e^{-\frac{1+y^2}{2t}}\left(\frac{t}{y}\right)^{\mu}I_\mu\left(\frac{y}{t}\right)\right)\\
&=& e^{-(1+y^2)/(2t)}\frac{I_\mu(y/t)}{ty^\mu}\left(-\frac{\mu+1}{t}+\frac{1+y^2}{2t^2}-\frac{y}{t}\frac{I_{\mu+1}(y/t)}{I_{\mu}(y/t)}\right)\\
&\leq& e^{-(1+y^2)/(2t)}\frac{I_\mu(y/t)}{ty^\mu}\frac{1+y^2}{2t^2}\leq c_6 e^{-(1+y^2)/(2t)} \frac{1}{t^{\mu+1}}\/,
\end{eqnarray*}
whenever $y<t$. Thus, there exists $c=c_{\mu,u,y}\in(t-u,t)$ such that
\begin{eqnarray*}
J_3(t,x,y) &=& \int_0^{1/2}q_x^{(\mu)}(u)u y^{2\mu+1}h(c_{\mu,u,y},y)du \leq c_6 y^{2\mu+1}\int_0^{1/2}q_x^{(\mu)}(u)u e^{-(1+y^2)/(2c)} \frac{1}{c^{\mu+1}}du\\
&\leq& c_6 e^{-(1+y^2)/(2t)}\frac{y^{2\mu+1}}{(t/2)^{\mu+1}}\int_0^{1/2}u q_{x}^{(\mu)}(u)du\/.
\end{eqnarray*}
Taking into account the upper bounds given in (\ref{hittingtime:estimates}) we get
\formula{
\int_0^{1/2}u q_{x}^{(\mu)}(u)du\leq c_7 \frac{x-1}{x^{\mu+1/2}}\int_0^{1/2}e^{-(x-1)^2/(2u)}\frac{du}{u^{1/2}}\leq c_8 (x-1)\/.
}
This ends the proof.
\end{proof}
The proof of the lower bounds is split into two parts. The next proposition corresponds to the case when $y>x>1$ and ${(y-1)^2}/{t}$ is large. Moreover, we enlarge the region and assume that $xy<mt$ for a given $m\geq 1$. This is forced by the lower bounds given in Proposition \ref{prop:xyt:large:lower:3}, where $xy/t$ is required to be sufficiently large, but also by the proof of Proposition \ref{prop:xyt:small:lower:2}.
\begin{proposition}
\label{prop:xyt:small:lower:1}
For every $\mu>0$ and $m\geq 1$, there exists a constant $C^{(\mu)}_8(m)>0$ such that
\formula{
\frac{p_1^{(\mu)}(t,x,y)}{p^{(\mu)}(t,x,y)}\geq C^{(\mu)}_8(m)\frac{x-1}{x}\/,\quad y>x>1
}
whenever $xy<m t$ and $\frac{(y-1)^2}{t}\geq 2(\mu+1)$.
\end{proposition}
\begin{proof}
Since
\formula{
\frac{p_1^{(\mu)}(t,x,y)}{p^{(\mu)}(t,x,y)} &= 1-\frac{p^{(\mu)}(t,1,y)}{p^{(\mu)}(t,x,y)}\frac{r_1^{(\mu)}(t,x,y)}{p^{(\mu)}(t,1,y)}\/,
}
using (\ref{MBF:ineq:upper}) for every $\mu>0$ and $(y-1)^2/t\geq 2(\mu+1)$, we have
\formula{
\frac{r_1^{(\mu)}(t,x,y)}{p^{(\mu)}(t,1,y)} &= \int_0^t q_x^{(\mu)}(s)\frac{p^{(\mu)}(t-s,1,y)}{p^{(\mu)}(t,1,y)}\,ds\\
& = \int_0^t q_x^{(\mu)}(s)\frac{t}{t-s} \exp\left(-\frac{1+y^2}{2t}\frac{s}{t-s}\right)\frac{I_\mu(y/(t-s))}{I_\mu(y/t)}\,ds\\
& \leq \int_0^t q_x^{(\mu)}(s)\left(\frac{t}{t-s}\right)^{\mu+1}\exp\left(-\frac{(y-1)^2}{2t}\frac{s}{t-s}\right)\,ds\/.
}
For every $s<t$ we can write
\formula[eq:fw:ratio]{
\left(\frac{t}{t-s}\right)^{\mu+1}\exp\left(-\frac{(y-1)^2}{2t}\frac{s}{t-s}\right) = \frac{f_y(t-s)}{f_y(t)}\/,
}
where $f_y(w) = w^{-\mu-1}e^{-(y-1)^2/2w}$. Then a simple calculation gives $f'_y(w) = w^{-\mu-2}e^{-(y-1)^2/2w}\left(\frac{(y-1)^2}{2w}-(\mu+1)\right)$ and consequently $f_y(w)$ is increasing on $\left(0,\frac{(y-1)^2}{2(\mu+1)}\right)$. This implies that the right-hand side of (\ref{eq:fw:ratio}) is smaller than $1$ whenever $\frac{(y-1)^2}{t}\geq 2(\mu+1)$ and
\formula{
\frac{p_1^{(\mu)}(t,x,y)}{p^{(\mu)}(t,x,y)}&\geq 1-x^\mu\exp\left(\frac{x^2-1}{t}\right)\frac{I_\mu({y}/{t})}{I_\mu({xy}/{t})} \int_0^t q_x^{(\mu)}(s)\,ds\/.
}
Since the function $z^{-\mu}I_{\mu}(z)$ is increasing on $(0,\infty)$, we have
\formula{
\frac{I_\mu\left({y}/{t}\right)}{I_\mu\left({xy}/{t}\right)}\leq \frac{1}{x^\mu}\/,\quad x,y>1\quad t\geq 0\/.
}
This, together with $\pr^{(\mu)}_x(T_1^{(\mu)}<\infty)=x^{-2\mu}$, gives
\formula[LB:basic]{
\frac{p_1^{(\mu)}(t,x,y)}{p^{(\mu)}(t,x,y)}&\geq 1-\frac{1}{x^{2\mu}}\exp\left(\frac{x^2-1}{t}\right)\/.
}
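Let us also record, for the reader's convenience, why $z^{-\mu}I_\mu(z)$ is increasing; this is a routine verification (not part of the original argument) based on the standard series representation of the modified Bessel function,
\formula{
z^{-\mu}I_\mu(z) = \sum_{k=0}^{\infty}\frac{z^{2k}}{2^{2k+\mu}\,k!\,\Gamma(k+\mu+1)}\/,
}
which is a power series in $z^2$ with strictly positive coefficients.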
Now we assume that $1<x<(2e^m)^{1/(2\mu)}$ and $t>\frac{2(2e^m)^{1/\mu}}{\mu}$. Then
\formula{
\frac{p_1^{(\mu)}(t,x,y)}{p^{(\mu)}(t,x,y)}\geq 1-\frac{1}{x^{2\mu}}\exp\left(\frac{\mu(x^2-1)}{2(2e^m)^{1/\mu}}\right)\/.
}
The mean value theorem ensures the existence of a constant $d\in (1,x)$ such that
\formula{
1-\frac{1}{x^{2\mu}}\exp\left(\frac{\mu(x^2-1)}{2(2e^m)^{1/\mu}}\right) &= \frac{2\mu(x-1)}{d^{2\mu+1}}\exp\left(\frac{\mu(d^2-1)}{2(2e^m)^{1/\mu}}\right)\left(1-\frac{d^2}{2(2e^m)^{1/\mu}}\right)\\
&\geq c_1(m)(x-1)
}
where the last inequality comes from the fact that $1<d<x<(2e^m)^{1/(2\mu)}$.
The next step is to take $x\geq (2e^m)^{1/(2\mu)}$ and $t>\frac{2(2e^m)^{1/\mu}}{\mu}$. Since $x^2<xy<mt$, using (\ref{LB:basic}) we get
\formula{
\frac{p_1^{(\mu)}(t,x,y)}{p^{(\mu)}(t,x,y)}\geq 1-\frac{1}{x^{2\mu}}e^m\geq 1-\frac{1}{2} \approx \frac{x-1}{x}\/.
}
Finally, we consider the case when $x>1$, $ xy/m<t\leq\frac{2(2e^m)^{1/\mu}}{\mu}=:t_0$ and $\frac{(y-1)^2}{t}\geq 2(\mu+1)$. Using the absolute continuity property (\ref{ac:formula}) and (\ref{eq:pt1:12:formula}), we can write
\formula{
p_1^{(\mu)}(t,x,y)&\geq (e^{-t_0(\mu^2/2-1/8)}\wedge 1)\left(\frac{y}{x}\right)^{\mu-1/2}p_1^{(1/2)}(t,x,y)\\
&\stackrel{\mu,m}{\approx} \left(1\wedge\frac{(x-1)(y-1)}{t}\right)\left(\frac{y}{x}\right)^{\mu+1/2}\frac{1}{\sqrt{t}}\exp\left(-\frac{x^2+y^2}{2t}\right)\\
&\geq \left(1\wedge\frac{(x-1)\sqrt{2(\mu+1)/m}}{t_0}\right)\left(\frac{y^2}{t}\right)^{\mu+1/2}\frac{1}{\sqrt{t}}\exp\left(-\frac{x^2+y^2}{2t}\right)\\
&\stackrel{\mu,m}{\approx}\frac{x-1}{x}p^{(\mu)}(t,x,y)\/.
}
This ends the proof.
\end{proof}
We end this section with the proof of the lower bounds in the case when ${((y\vee x)-1)^2}/{t}$ is small. Note that in the proof of the next proposition we use the lower bounds of $p_1^{(\mu)}(t,x,y)$ for $xy\geq t$, obtained previously in Section \ref{section:xyt:large}, as well as the result of Proposition \ref{prop:xyt:small:lower:1}. As before, due to the symmetry, it is enough to assume that $y>x>1$.
\begin{proposition}
\label{prop:xyt:small:lower:2}
For every $\mu>0$ there exists a constant $C_9^{(\mu)}>0$ such that
\formula{
\frac{p^{(\mu)}_1(t,x,y)}{p^{(\mu)}(t,x,y)}\geq C_9^{(\mu)} \frac{x-1}{x}\frac{y-1}{y}\/,\quad y>x>1\/,
}
whenever $xy<t$ and $\frac{(y-1)^2}{t}\leq 2(\mu+1)$.
\end{proposition}
\begin{proof}
Let $xy<t$ and $y>x>1$. At the beginning we additionally assume that $t\geq 4$. Note that there exists $c_1>0$ such that for every $s>1/2$ we have $e^{-s}\geq c_1 s^{\mu+1/2}e^{-2s}$. This, together with the lower bounds of $p_1^{(\mu)}(t,z,w)$ for $z,w\geq \sqrt{t}$ (then $zw\geq t$) obtained in Section \ref{section:xyt:large}, enables us to write
\formula{
p_1^{(\mu)}(t,z,w)&\geq c_2\left(1\wedge\frac{(z-1)(w-1)}{t}\right)\left(\frac{w}{z}\right)^{\mu+1/2}\frac{1}{\sqrt{t}}\exp\left(-\frac{|z-w|^2}{2t}\right)\\
&\geq \frac{c_2}{4}\left(\frac{w}{z}\right)^{\mu+1/2}\frac{1}{\sqrt{t}}\exp\left(-\frac{z^2}{2t}\right)\exp\left(-\frac{w^2}{2t}\right)\\
&\geq \frac{c_2 c_1^2}{4} \left(\frac{wz}{t}\right)^{\mu+1/2}\left(\frac{w^2}{t}\right)^{\mu+1/2}\frac{1}{\sqrt{t}}\exp\left(-\frac{z^2}{t}\right)\exp\left(-\frac{w^2}{t}\right)\\
&\geq c_3\left(\frac{w^2}{t}\right)^{\mu+1/2}\frac{1}{\sqrt{t}}\exp\left(-\frac{z^2}{t}\right)\exp\left(-\frac{w^2}{t}\right)\/.
}
Consequently, using the Chapman-Kolmogorov equation and (\ref{eq:pt1:symmetry}), we get
\formula{
p_1^{(\mu)}\lefteqn{(3t,x,y) = \int_1^\infty\int_1^\infty p_1^{(\mu)}(t,x,z)p_1^{(\mu)}(t,z,w)p_1^{(\mu)}(t,w,y)dzdw}\\
&\geq \int_{\sqrt{t}}^\infty\int_{\sqrt{t}}^\infty p_1^{(\mu)}(t,x,z)p_1^{(\mu)}(t,z,w)p_1^{(\mu)}(t,w,y)dzdw\\
&\geq c_3\left(\frac{y^2}{t}\right)^{\mu+1/2}\frac{1}{\sqrt{t}}\int_{\sqrt{t}}^\infty p_1^{(\mu)}(t,x,z)e^{-{z^2}/{t}}dz \int_{\sqrt{t}}^\infty \left(\frac{w}{y}\right)^{2\mu+1}p_1^{(\mu)}(t,w,y)e^{-{w^2}/{t}}dw\\
&= c_3\left(\frac{y^2}{t}\right)^{\mu+1/2}\frac{1}{\sqrt{t}}F^{(\mu)}_t(x)F^{(\mu)}_t(y)\/,
}
where
\formula{
F^{(\mu)}_t(x) &:= \int_{\sqrt{t}}^\infty p_1^{(\mu)}(t,x,z)e^{-{z^2}/{t}}dz\/.
}
Since for $t\geq 4$ and $\frac{(y-1)^2}{t}\leq 2(\mu+1)$ we have
\formula{
\frac{x^2}{t}\leq \frac{y^2}{t}\leq \left(2\vee 4\frac{(y-1)^2}{t}\right)\leq c_4
}
and consequently
\formula{
p^{(\mu)}(3t,x,y)\approx \left(\frac{y^2}{t}\right)^{\mu+1/2}\frac{1}{\sqrt{t}}\/,\quad xy<t\/,
}
it is enough to show that $F^{(\mu)}_t(x) \geq c_5 \frac{x-1}{x}$ for every $x>1$. However, for $z\geq b\sqrt{t}$, with $b=2\sqrt{2(\mu+1)}$, and $t\geq 4$ we have $\frac{(z-1)^2}{t}\geq \frac{1}{4}\frac{z^2}{t}\geq 2(\mu+1)$. We can use the lower bounds given in Proposition \ref{prop:xyt:small:lower:1} with $m=2b$ and obtain
\formula{
F^{(\mu)}_t(x) &\geq \int_{b\sqrt{t}}^{2bt/x} p_1^{(\mu)}(t,x,z)e^{-{z^2}/{t}}dz\\
&\geq c_6 \frac{x-1}{x\sqrt{t}}\int_{b\sqrt{t}}^{2bt/x} \left(\frac{z^2}{t}\right)^{\mu+1/2}e^{-{z^2}/{2t}}e^{-{z^2}/{t}}dz
\geq c_7\frac{x-1}{x\sqrt{t}}\int_{b\sqrt{t}}^{2bt/x} e^{-{2z^2}/{t}}dz\\
&= c_7\frac{x-1}{x}\int_{b}^{2b\sqrt{t}/x} e^{-{2u^2}}du\geq c_7\frac{x-1}{x}\int_{b}^{2b} e^{-{2u^2}}du\/.
}
Finally, for $t\leq 4$, the same computations as at the end of the proof of the previous proposition (but with $t_0=4$) give
\formula{
p_1^{(\mu)}(t,x,y)&\geq c_7(e^{-4(\mu^2/2-1/8)}\wedge 1)\left(1\wedge\frac{(x-1)(y-1)}{t}\right)\left(\frac{y^2}{t}\right)^{\mu+1/2}\frac{1}{\sqrt{t}}\exp\left(-\frac{x^2+y^2}{2t}\right)\\
&\stackrel{\mu}{\approx} \frac{x-1}{x}\frac{y-1}{y}p^{(\mu)}(t,x,y)\/,
}
where the last approximation follows from the fact that $(x-1)(y-1)< xy \leq t\leq 4$ which gives
\formula{
1\wedge \frac{(x-1)(y-1)}{t} = \frac{(x-1)(y-1)}{t}=\frac{(x-1)(y-1)}{xy}\frac{xy}{t}\approx\frac{(x-1)(y-1)}{xy}\/.
}
\end{proof}
\subsection*{Acknowledgments}
The authors are very grateful to Tomasz Byczkowski for critical remarks and comments
which enabled them to improve the presentation of the paper.
\section{Introduction}
There are two endeavors to which many physicists return over and
over again, and the solutions sought for the two problems often
intersect. The first is to write a theory which can be shown to
have a nontrivial limit as the renormalization cut-off goes to
infinity, i.e. a vanishing beta function of the renormalization
group at a nonzero value of the coupling constant. The $\phi^4$
theory in four dimensions, although nontrivial in perturbation
theory, was shown some time ago to go to a free theory as the
cut-off was lifted \cite{ba_ki_79,ba_ki_81,wi_73}. This example
shows that perturbative expansions are not decisive in obtaining a
nontrivial model. There is continuing research on this subject \cite{kl_06}.
The other ongoing endeavor is to build a model of nature using
only fermions, where all the observed bosons are constructed as
composites of these entities. In solid state physics the basic
fields, the electrons, come together to form bosons to explain
superconductivity \cite{ba_co_sc_57}. Heisenberg spent years
formulating a ``theory of everything'' for particle physics using
only fermions \cite{he_54}. The Nambu-Jona-Lasinio model
\cite{na_jo_61}, constructed in analogy with the BCS theory of
superconductivity \cite{ba_co_sc_57}, satisfies both ambitions: it
is written in terms of fermions only and is perturbatively
non-renormalizable. This model was also shown to go to a trivial
theory \cite{ko_ko_94,zi_89}.
There are new attempts to make sense of these theories, either as
effective models at low energies, which give valuable information
in QCD \cite{mu_87, mi_93}, for example in the study of hadron
mass generation through spontaneous symmetry breaking, or by
gauging the model and investigating whether the new coupling gives
rise to a nontrivial theory
\cite{ba_le_lo_86,le_lo_ba_86,re_00,re_hepth_99}. These examples
show that the search for nontrivial models using only fermions may
be an interesting endeavor.
Another attempt at writing a model using only fermions came with
the work of G\"{u}rsey \cite{gu_56}. Here a non-polynomial but
conformally invariant Lagrangian was written to describe
self-interacting fermions, with the intention of remedying some of
the problems of the Heisenberg model \cite{he_54}; conformal
invariance is what forced G\"{u}rsey to use a non-polynomial form.
Kortel found solutions to this conformally invariant theory
\cite{ko_56}, which were shown much later to be instantons and
merons \cite{ak_82}.
One of us, with collaborators, tried to make quantum sense of this
model a while ago
\cite{ak_ar_du_ho_ka_pa_82-34,ak_ar_du_ho_ka_pa_82-41,ak_ar_ho_pa_83},
finding that, even if these attempts were justified, the model
went to a trivial one as the cut-off was removed. Several
processes involving incoming and outgoing spinors were calculated
\cite{ar_ho_83}; they gave exactly the naive quark model results,
missing the logarithmic behavior predicted by QCD calculations.
We tried to give a new interpretation of our old work in
\cite{ho_lu_06}. There we saw that the polynomial form of the
model does not correspond exactly to the original G\"{u}rsey
model: the two versions obey different symmetries, as was shown
explicitly in reference \cite{ho_lu_06}. We went to higher orders
in the calculation, beyond one loop for scattering processes; by
using the Dyson-Schwinger and Bethe-Salpeter equations we could
calculate higher order processes. We saw that while non-trivial
scattering of the fundamental fields was not allowed, bound states
could scatter from each other with non-trivial amplitudes.
The essential point in our analysis was the fact that, being
proportional to ${{\epsilon}\over{p^2}}$, the composite scalar
field propagator cancelled many of the potential infinities that
arise while calculating loop integrals. As a result of this
cancellation, only composite fields participate in physical
processes such as scattering and particle production. The
scattering and production of elementary spinor fields were not
allowed. This phenomenon is an example of treating the bound
states, instead of the principal fields, as the physical entities.
A further point will be to couple an elementary vector field to
the model described in reference \cite{ho_lu_06}, in line with the
process studied for the Nambu-Jona-Lasinio model
\cite{ba_le_lo_86,le_lo_ba_86}. Coupling the same elementary field
to the model described in reference \cite{ho_ta_hepth_06} will be
similar, giving a model with two vector fields, one composite, the
other one elementary. Our final goal is to investigate whether we
get a non-trivial theory when we couple a Yang-Mills system with
color and flavor degrees of freedom, as is done in
\cite{re_00,re_hepth_99}. Here we study the abelian case as an
initial step.
In this note we summarize the changes in our results when this
elementary vector field is coupled to the model described in
reference \cite{ho_lu_06}. We outline the model, as given in
Refs. \cite{ak_ar_du_ho_ka_pa_82-34} and \cite{ho_lu_06}, in the
next section and give our new results in the subsequent sections.
The main conclusion is that our original model, in which only the
composites take part in physical processes like scattering or
particle production, is reduced to a gauge-Higgs-Yukawa model,
where both the composites and the fundamental spinor and vector
fields participate in all the processes.
\section{The Model}
Our initial model is given by the Lagrangian
\begin{equation}
L = {i\overline{\psi}} \partial \!\!\!/ \psi + g {\overline{\psi}}
\psi \phi +\xi ( g{\overline {\psi}} \psi -a\phi^{3} ).\label{cl}
\end{equation}
Here the only term with a kinetic part is the spinor term. $\xi$
is a Lagrange multiplier field, $\phi$ is a scalar field with no
kinetic part, and $g$ and $a$ are coupling constants. This
Lagrangian contains two constraint equations, obtained by writing
the Euler-Lagrange equations for the $\xi$ and $\phi$ fields.
Hence, it should be quantized using Dirac constraint analysis, as
performed in reference \cite{ho_lu_06}.
The Lagrangian given above is just an attempt at writing the
original G\"{u}rsey Lagrangian
\begin{equation}
L={i\overline{\psi}} \partial \!\!\!/ \psi + g' ({\overline{\psi}} \psi)^{4/3}
,\label{gl}
\end{equation}
in a polynomial form.
We see that the $\gamma^{5} $ invariance of the original
G\"{u}rsey Lagrangian is retained in the form written in equation
(\ref{cl}). This discrete symmetry prevents $\psi$ from acquiring
a finite mass in higher orders. We also see that the two models
given by the Lagrangians in equations (\ref{cl}) and (\ref{gl})
are not equivalent, since the former does not obey one extra
symmetry obeyed by the latter; this was carefully studied in
reference \cite{ho_lu_06}. We therefore take the first model as
one which only approximates the original G\"{u}rsey model, without
claiming equivalence, and study only that model in this work.
To quantize the latter system consistently we proceed via the path
integral method. This procedure was carried out in reference
\cite{ho_lu_06}. At the end of these calculations we found that
the constrained Lagrangian given in equation (\ref{cl}) can be
written as
\begin{equation}
L'' = {i\overline{\psi}}[\partial \!\!\!/ -ig
\Phi]\psi-{{a}\over{16}}(\Phi^{4}+2\Phi^{3}\Xi-2\Phi\Xi^{3}-
\Xi^{4})+{i\over{4}}c^*(\Phi^{2}+2\Phi\Xi+\Xi^{2}) c,
\end{equation}
where the effective lagrangian is expressed in terms of scalar
fields $\Phi$, and $\Xi$, ghost fields $c $, $c^*$ and spinor
fields only.
The fermion propagator is the usual Dirac propagator in lowest
order, as can be seen from the Lagrangian. After integrating over
the fermion fields in the path integral, we obtain the effective
action. The second derivative of the effective action with respect
to the $\Phi$ field gives us the induced inverse propagator for
the $\Phi$ field, with the infinite part given as
\begin{equation} \mbox{Inf} \left[ {{ig^2}\over{ (2\pi)^4}} \mbox{Tr} \int {{d^4
p}\over {p\!\!\!/(p\!\!\!/+q\!\!\!/)}}\right]=
{{g^2 q^2}\over {8\pi^{2} \epsilon}}.
\end{equation}
Here dimensional regularization is used for the momentum integral
and $\epsilon = 4-n$. We see that the $\Phi$ field propagates as
a massless field.
When we study the propagators for the other fields, we see that no
linear or quadratic term in $\Xi$ exists, so the one loop
contribution to the $\Xi$ propagator is absent. Similarly the
mixed derivatives of the effective action with respect to $\Xi$
and $\Phi$ are zero at one loop, so no mixing between these two
fields occurs. We can also set the propagators of the ghost fields
to zero, since they give no contribution in the one loop
approximation. The higher loop contributions are absent for
these fields.
In reference \cite{ho_lu_06} we also studied the contributions to
the fermion propagator at higher orders and we found, by studying
the Dyson-Schwinger equations for the two point function, that
there were no new contributions. We had at least one phase where
the mass of the spinor field was zero.
In reference \cite{ho_ta_hepth_06} we studied a similar model
where the composite vector field replaced the composite scalar
field, with similar results.
\section{New Results and Higher Orders}
Here we couple an elementary vector field to the model described
in reference \cite{ho_lu_06}, in a minimal way, with a new
coupling constant $e$, in accordance with the work in references
\cite{ba_le_lo_86,le_lo_ba_86,re_00,ko_ta_ya_93,ku_te_99}. The new
Lagrangian is given as
\begin{eqnarray} L' = {i\overline{\psi}}[ \partial \!\!\!/ -i g \Phi] \psi -
{{a }\over{4}} (\Phi^4+2\Phi^3 \Xi-2\Xi^3
\Phi-\Xi^4)\nonumber \\
+{{i}\over{4}} c^*(\Xi^2 + 2\Phi \Xi +\Phi^2) c-{{1}\over{4}}
F_{\mu \nu} F^{\mu \nu} -{\overline{\psi}}e A\!\!\!/\psi .
\end{eqnarray}
Here $A^{\mu}$ is the elementary vector field and $ F^{\mu \nu}$
is defined from $A^{\mu}$ in the usual way. We take the vector
field propagator in the Feynman gauge in our explicit
calculations. This Lagrangian reduces to the effective expression
given below, since the $\Xi$ and the ghost fields decouple.
\begin{eqnarray} L' = {i\overline{\psi}}[ \partial \!\!\!/ -i g \Phi] \psi -
{{a }\over{4}} \Phi^4 -{{1}\over{4}} F_{\mu \nu} F^{\mu \nu}
-{\overline{\psi}}e A\!\!\!/\psi . \end{eqnarray}
In this section we summarize the changes in our results for this
new model.
If our fermion field had a color index $i$, where $i=1,\ldots,N$,
we could perform a $1/N$ expansion to justify keeping only ladder
diagrams at higher orders in the scattering processes. Although in
our model the spinor has only one color, we still consider only
ladder diagrams, anticipating that one can construct a variation
of the model with $N$ colors.
\subsection{Renormalization Group Analysis}
In the models given in references \cite{ho_lu_06} and
\cite{ho_ta_hepth_06}, we had two coupling constants, $g$ and $a$
in reference \cite{ho_lu_06} and only one, which we rename as
$g'$, in reference \cite{ho_ta_hepth_06}. In the model described
in reference \cite{ho_ta_hepth_06}, there is no need for infinite
coupling constant renormalization, since the spinor box diagram is
finite when the incoming and outgoing particles are vectors
\cite{ka_ne_50,wa_50}. In the model described in reference
\cite{ho_lu_06}, the coupling constant $a$ needs renormalization.
In these models there is no need for infinite renormalization for
$g$ and $g'$, respectively, since the diagrams for the
$<{\overline{\psi}}\psi\phi>$ and $<{\overline{\psi}}\psi A_\mu>$
vertices are finite.
In the language of the renormalization group, the first order
equation for this vertex is given by
\begin{equation}\mu\frac{dg_0}{d\mu}=0, \end{equation}
since the diagram given in Figure 1.a is finite, due to the
presence of $\epsilon$ in the scalar propagator. Higher order
calculations using the Bethe-Salpeter equation verify that the
right-hand side of the equation does not change at higher orders.
This process was studied in reference \cite{ho_lu_06}.
We see that in the original model the only infinite
renormalization is needed for the four $\phi$ vertex; hence the
coupling constant for this process {\it runs}. The first
correction to the tree diagram is the box diagram, shown in Figure
1.b . This diagram has four spinor propagators and gives rise to a
${{1}\over {\epsilon}} $ type divergence. The renormalization
group equation written for this vertex is
\begin{eqnarray}
16\pi^2\mu\frac{da}{d\mu} &=& -dg_0^4\,.
\end{eqnarray}
Here the right-hand side of the equation is equal to a constant,
since $g_0$ does not run. Since we include the four $\phi$ term in
our original Lagrangian, we can renormalize the coupling constant
of this vertex to incorporate this divergence. There are no
higher infinities for this vertex. The two loop diagram contains,
as shown in Figure 1.c, a $\phi$ propagator which makes this
diagram finite. The three-loop diagram is made out of eight spinor
and two scalar lines, Figure 1.d. At worst we end up with a first
order infinity of the form ${{1}\over{\epsilon}}$ using the
dimensional regularization scheme. Higher order ladder diagrams
give at worst the same type of divergence. This divergence for the
four-scalar vertex can be renormalized by standard means.
\begin{figure}[h]
\begin{center}
$\begin{array}{c@{\hspace{1cm}}c@{\hspace{5mm}}c@{\hspace{5mm}}c}
\multicolumn{1}{l}{\mbox{\bf }}&
\multicolumn{1}{l}{\mbox{\bf }}&
\multicolumn{1}{l}{\mbox{\bf }}&
\multicolumn{1}{l}{\mbox{\bf }}\\
[-0.53cm]
\epsfxsize=16mm \epsffile{yukawa1loop} &
\epsfxsize=25mm \epsffile{g4} &
\epsfxsize=25mm \epsffile{4s2loopa} &
\epsfxsize=40mm \epsffile{4s3loopb} \\
[0.4cm]
\mbox{\bf (a)} &
\mbox{\bf (b)} &
\mbox{\bf (c)} &
\mbox{\bf (d)}
\end{array}$
\end{center}
\caption{The diagrams related to the initial model . Here dotted
lines represent the scalar, solid lines the spinor particles. }
\label{fig475}
\end{figure}
In the new model, where an elementary vector field is added to the
model described in reference \cite{ho_lu_06}, we add a new
coupling constant $e$ which describes the coupling of the vector
field to the spinors. Here all three coupling constants are
renormalized.
We can write the three first order renormalization group equations
for these three coupling constants, similarly to the analysis in
\cite{ha_ki_ku_na_94}.
\begin{eqnarray}
16\pi^2\mu\frac{de}{d\mu} &=& be^3, \\
16\pi^2\mu\frac{dg}{d\mu} &=& -cge^2,\\
16\pi^2\mu\frac{da}{d\mu} &=& -dg^4 ,
\end{eqnarray}
where $b$, $c$, $d$ are numerical constants. These values are
given as $b=2$, $c=4$, $d=4$. These processes are illustrated in
diagrams shown in Figure 2. below.
\begin{figure}[h]
\begin{center}
$\begin{array}{c@{\hspace{2cm}}c@{\hspace{2cm}}c}
\multicolumn{1}{l}{\mbox{\bf }}&
\multicolumn{1}{l}{\mbox{\bf }}&
\multicolumn{1}{l}{\mbox{\bf }}\\
[-0.53cm]
\epsfxsize=17mm \epsffile{e3} &
\epsfxsize=17mm \epsffile{ge2} &
\epsfxsize=25mm \epsffile{g4} \\
[0.4cm]
\mbox{\bf (a)} &
\mbox{\bf (b)} &
\mbox{\bf (c)}
\end{array}$
\end{center}
\caption{The three coupling constant corrections in one loop. Here
vector particles are represented by the wiggly lines additional to
the former ones shown in Fig. 1. } \label{fig456}
\end{figure}
Our equations differ from those in reference
\cite{ha_ki_ku_na_94}, since the interaction of the composite
scalar field with the spinors does not result in infinite terms
due to the presence of the factor $\epsilon$ in the scalar
propagator. Here $\epsilon$, the parameter in the dimensional
regularization scheme, is inversely proportional to $\ln
{{\Lambda}\over {\Lambda_0}} $ where $\Lambda$ is the cut-off
parameter. These equations have the immediate solutions
\begin{eqnarray}
e^2 &=& {{e^{2}_{0} }\over {A}}, \\
g &=&g_0 A^{c/2b}, \\
a &=& a_0 + \frac{dg_0^4}{2(2c+b)e_0^2} A^{\frac{2c}{b}+1} \label{ega1} \end{eqnarray}
where
\begin{eqnarray} A= 1-\frac{2b e^2_0}{16\pi^2}\ln{{\mu}\over{\mu_0}}. \end{eqnarray}
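As a consistency check (a routine verification added here, not part of the
original text), the first solution follows by direct integration:
substituting $u=e^2$ turns the first equation into
\begin{eqnarray}
16\pi^2\mu\frac{du}{d\mu} = 2bu^2 \quad\Longrightarrow\quad
\frac{1}{u_0}-\frac{1}{u}=\frac{2b}{16\pi^2}\ln{{\mu}\over{\mu_0}}
\quad\Longrightarrow\quad e^2=\frac{e_0^2}{A}\,,
\end{eqnarray}
and taking the ratio of the second and first equations in logarithmic form
gives ${{d\ln g}/{d\ln A}}={c/{2b}}$, hence $g=g_0 A^{c/2b}$. The third
solution then follows by inserting this $g$ into the last equation and
integrating in $A$.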
If we use diagrammatic analysis, we see that only the
spinor-vector field coupling gives an infinite contribution to the
first two equations. The third equation diverges because of the
contribution of the box diagram, which is infinite even in the
absence of the vector field. For this coupling constant, at one
loop level, there is no difference from its behavior in the
original model.
In the original model we need an infinite renormalization for only
one of the coupling constants, the one multiplying the four-scalar vertex.
Further renormalization may be necessary at each higher loop, like
any other renormalizable model. The difference between our
original model and other renormalizable models lies in the fact
that, although this model is a renormalizable one using naive
dimensional counting arguments, we have only one set of diagrams
which is divergent. We need to renormalize only one of the
coupling constants by an infinite amount. This set of diagrams,
corresponding to the scattering of two bound states to two bound
states, has the same type of divergence, i.e.
${{1}\over{\epsilon}}$ in the dimensional regularization scheme
for all odd numbers of loops. The contributions from even numbers
of loops are finite, hence require no infinite renormalization.
When the additional vector particle contributions are added, this
picture is modified. The process where two scalar particles go to
two scalar particles gets further infinite contributions from the
box-type diagrams with vector field insertions, where one part of
the diagram is connected to a non-adjacent part by a vector field,
as shown in Figure 3.a. All these diagrams go as
${{1}\over {\epsilon}}$, where $\epsilon$ is the parameter of the
dimensional regularization scheme. There are no higher divergences
for this process. Note that mixed scalar and vector insertions do
not give additional infinities, since the scalar propagator
reduces the degree of divergence. Also note that the diagram where
the internal photon connects adjacent sides, as shown in Figure
3.b, contributes to the coupling constant renormalization of one
of the vertices. Since this is not a new contribution, we will not
consider it separately.
\begin{figure}[h]
\begin{center}
$\begin{array}{c@{\hspace{2cm}}c}
\multicolumn{1}{l}{\mbox{\bf }}&
\multicolumn{1}{l}{\mbox{\bf }}\\
\epsfxsize=25mm \epsffile{g4e2} &
\epsfxsize=25mm \epsffile{g4capraze2} \\
[0.4cm]
\mbox{\bf (a)} &
\mbox{\bf (b)}
\end{array}$
\end{center}
\caption{(a) The vector particle correction to the fermion box
diagram, (b) the box diagram with one vertex correction. }
\label{fig2}
\end{figure}
\subsection{Propagators and Vertices}
There is no essential change in the spinor propagator. In reference
\cite{mi_93} Miransky explains how, for a coupling constant
$\alpha$ less than $\pi/3$, there is no mass generation in the
quenched approximation; here $\alpha= {{e^2}\over{4\pi}}$. J.C.R.
Bloch, in his Durham thesis \cite{bl_hepph_02}, explores the range
in which this result remains valid when the calculation is done
without this approximation. He states that the quenched and
rainbow approximations, used by Miransky and collaborators, have
unphysical features, namely they are not gauge invariant, so the
calculated value varies wildly depending on the particular gauge
used. Bloch himself uses the Ball-Chiu vertex \cite{ba_ch_80}
instead of the bare one; there the exact longitudinal part of the
full QED vertex is uniquely determined by the Ward-Takahashi
identity relating the vertex to the propagator. The transverse
part of the vertex, however, is still arbitrary. Bloch then
considers a special form of the Curtis-Pennington vertex
\cite{cu_pe_90}, in which the transverse part is constructed by
requiring the multiplicative renormalizability of the fermion
propagator with additional assumptions.
Bloch claims that for the different gauges used with this choice,
he gets rather close values for the critical coupling
\cite{ak_bl_gu_pe_re_94}. He also performs numerical calculations
where the approximations are kept to a minimum. The results are
given in the table on pg. 202 of hep-ph/0208074.
Based on the arguments in Bloch's thesis, and on the results of
his numerical calculations, we conclude that at least for $\alpha
< 0.5 $ we can safely claim that there will be no mass generation,
i.e. the assumed $\gamma_5$ symmetry will not be broken. Since we
do not study heavy ion processes, the numerical value we have for
$\alpha$ will be much smaller than this limit; hence our results
will be valid. Note that in QCD mass generation occurs at
relatively low energies, where the coupling constant has already
grown large.
Miransky \cite{mi_93} also explains how in the Landau gauge we can
take the coefficient of the momentum term as unity. Using these
arguments we can conclude that there are no additional
contributions to the spinor propagator used in reference
\cite{ho_lu_06}, at least in the Landau gauge.
The photon propagator will also be similar to the one given in
QED, with only additional ${{1}\over{\epsilon}}$ contributions
from the scalar particle insertions. The lowest order diagram for
this process is shown in Figure 4.a. The dominant contribution
will be from the vector insertions, which are studied in QED.
\begin{figure}[htb!]
\begin{center}
$\begin{array}{c@{\hspace{2cm}}c}
\multicolumn{1}{l}{\mbox{\bf }}&
\multicolumn{1}{l}{\mbox{\bf }}\\
[-0.53cm]
\epsfxsize=25mm \epsffile{d_f_1} &
\epsfxsize=25mm \epsffile{d_1} \\
\end{array}$
\end{center}
\caption{ (a) Scalar contribution to the vector propagator, (b)The
vector particle correction to the scalar propagator}
\end{figure}
The additional contribution to the scalar propagator can be
calculated using diagrammatic analysis. If we take only the
planar diagrams which connect two different spinor lines, as shown
in Figure 4.b, the scalar field contributions are only of order
${{1}\over {\epsilon}}$, the same as the one loop initial
contribution. Higher order divergences come from the vector field
insertions.
The higher order planar insertions will be the dominant ones if we
allow $N_f$ flavors for the fermions, where $N_f$ is large, and
perform a ${{1}\over{N_f}}$ expansion. We will assume that the
same approximation can be made in our case too. The diagrams
with $n-1$ nonadjacent planar vector field
contributions go as $({{-D}\over {\epsilon}})^n$, where $D=
{{-4e^2}\over {(4\pi)^2}}$ is a numerical constant. Naively the
planar vector field contributions can be summed up as a geometric
series \cite{ka_va_hepth_06}. The same result is true also for the
planar vertex corrections as in Figure 5.a.
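To display the summation explicitly (a sketch of the standard geometric
resummation, added for illustration), the formal sum of these contributions is
\begin{eqnarray}
\sum_{n=1}^{\infty}\left({{-D}\over{\epsilon}}\right)^{n} =
{{-D/\epsilon}\over{1+D/\epsilon}}={{-D}\over{\epsilon+D}}\,,
\end{eqnarray}
understood for $|D/\epsilon|<1$.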
The vector-spinor-antispinor vertex does not get infinite
contributions from our composite scalar particle. A typical
diagram is given in Figure 5.b. Here the infinities coming from
the integrations are cancelled by the $\epsilon$ factors in the
scalar propagators. That vertex, for the purely electromagnetic
case, Figure 5.c, has been extensively studied in the literature
\cite{ba_ch_80,cu_pe_90}.
\begin{figure}[htbp]
\begin{center}
$\begin{array}{c@{\hspace{2cm}}c@{\hspace{2cm}}c}
\multicolumn{1}{l}{\mbox{\bf }}&
\multicolumn{1}{l}{\mbox{\bf }}&
\multicolumn{1}{l}{\mbox{\bf }}\\
[-0.53cm]
\epsfxsize=20mm \epsffile{tree4.eps} &
\epsfxsize=20mm \epsffile{treg4.eps} &
\epsfxsize=20mm \epsffile{tre_.eps} \\
[0.4cm]
\mbox{\bf (a)} &
\mbox{\bf (b)} &
\mbox{\bf (c)}
\end{array}$
\end{center}
\caption{ (a) The vector particle correction in higher orders, (b)
The scalar particle insertions to the vector- spinor- antispinor
vertex, (c) The vector- spinor-antispinor vertex } \label{fig3}
\end{figure}
\subsection{Scattering and Production Processes}
The process where two composite scalars scatter from each other
was studied above. The scattering of two scalars into four, or
into any higher even number of scalars, is finite, as expected for
a renormalizable model. The process where two scalars create
an odd number of scalars is forbidden by the $\gamma^5$ invariance
of the theory, hence two scalar $\phi$ particles can only go to an
even number of scalar particles. This assertion is easily checked
by diagrammatic analysis.
We also note that in the original model the four spinor kernel was
of order $\epsilon $. The lowest order diagram, shown in Figure
6.a, vanishes as $\epsilon$ due to the presence of the scalar
propagator. In higher orders this expression can be written in the
quenched ladder approximation \cite{mi_93}, where the kernel is
seperated into a scalar propagator with two spinor legs joining
the proper kernel. If the proper kernel is of order $\epsilon$,
the loop involving two spinors and a scalar propagator can be at
most finite that makes the whole diagram in first order in
$\epsilon$. This fact also shows that there is no nontrivial
spinor-spinor scattering.
As a result of this analysis, in the ungauged version, we end up
with a model where there is no scattering of the fundamental
fields, i.e. the spinors, whereas the composite scalar fields can
take part in a scattering process. The coupling constant for the
scattering of the composite particles runs, whereas the coupling
constant for the spinor-scalar interaction does not run. The
processes giving this conclusion are carefully studied in
reference \cite{ho_lu_06}.
This result changes drastically when the gauged model is studied
instead of the original one. This process, which is prohibited in
the previous model, \cite{ho_lu_06}, now is possible due to the
presence of the vector field channel. In lowest order this process
goes through the tree diagram given in Figure 6.b.
The process is finite, though, since at the next higher order the
QED box diagram with two spinors and two vector particles, Figure
6.c, is ultraviolet finite by dimensional analysis; it is
calculated in reference \cite{po_ru_02}. Higher orders do not give
new types of ultraviolet divergences.
We also allow spinor production from the scattering of scalar
particles, since now we can use vector particles as
intermediaries, Figure 6.d.
\begin{figure}[htbp]
\begin{center}
$\begin{array}{c@{\hspace{1cm}}c@{\hspace{1cm}}c@{\hspace{1cm}}c}
\multicolumn{1}{l}{\mbox{\bf }}&
\multicolumn{1}{l}{\mbox{\bf }}&
\multicolumn{1}{l}{\mbox{\bf }}&
\multicolumn{1}{l}{\mbox{\bf }}\\
[-0.53cm]
\epsfxsize=26mm \epsffile{2fer1sca.eps} &
\epsfxsize=26mm \epsffile{4s1v.eps} &
\epsfxsize=17mm \epsffile{fourfermion3.eps} &
\epsfxsize=25mm \epsffile{box.eps} \\
[0.4cm]
\mbox{\bf (a)} &
\mbox{\bf (b)} &
\mbox{\bf (c)} &
\mbox{\bf (d)}
\end{array}$
\end{center}
\caption{(a) Two fermion scattering through the scalar particle
channel, (b) Two fermion scattering through the vector particle
channel, (c) Higher order diagram for two spinor scattering, (d)
Spinor production from scattering of scalars} \label{fig4}
\end{figure}
\section{Conclusion}
In this note we discussed the differences between the new
model, introduced in this paper, and the model studied in
reference \cite{ho_lu_06}. We found that many of the features
of the original model no longer hold. As far as
renormalizations are concerned, we have essentially QED, with
corrections coming from the scalar part mimicking the Yukawa
interactions with the $\Phi^4$ term added. We end up with the
gauge-Higgs-Yukawa system, although our starting point is gauging
a constrained model.
We also have scattering processes where two scalar particles go
to an even number of scalar particles, and where spinor particles
scatter from each other. In the one loop approximation all these
diagrams give finite results, as in the standard Yukawa coupling
model. We also have creation of spinor particles from the
interaction of scalars, as well as all the other processes of the
gauge-Higgs-Yukawa system.
If we consider the model described in reference
\cite{ho_ta_hepth_06}, we see that the same differences prevail.
The main results are the same. The only difference from the scalar
model is the finiteness of the spinor box diagram with incoming
and outgoing vector particles, \cite{ka_ne_50,wa_50}, both in the
new model and the one in reference \cite{ho_ta_hepth_06}.
\vspace{5mm}{\bf{Acknowledgement}}: The work of M.H. is also
supported by TUBA, the Academy of Sciences of Turkey. This work is
also supported by TUBITAK, the Scientific and Technological
Council of Turkey.
\subsection{Online Form Structure}
\label{sec:form structure}
Modern online form services usually allow users to create a form by piling up different types of blocks. There are eight common block types: \textit{Text Field, Choice, Time, Date, Likert, Rating, Upload}, and \textit{Description}. Each block type has a predefined structure (\textit{e.g.}\xspace, the options of a choice block) and corresponds to a specific layout shown in the user interface (\textit{e.g.}\xspace, bullet points or checkboxes of the options).
The order of the blocks in a form usually matters because they are designed to organize questions in an easy-to-understand way, and to collect data from various related aspects. For example, in \reffig{Fig.demo}, easier profile / fact questions are asked before the preference / opinion questions.
As shown at the top of \reffig{Fig.scope},
an online form can be viewed as an ordered tree. The root node $T$ represents the form title, and its children nodes $\operatorname{Ch}(T)=(\text{Desc}, B_1, ...,B_N)$ represent the form description and a series of blocks. The subtree structure of $B_i$ depends on its type. For \textit{Choice} and \textit{Rating} blocks, $\operatorname{Ch}(B_i)=(\text{Type}_i, \text{Title}_i, \text{Desc}_i, C_i^{(1)}, ..., C_{i}^{(n_i)})$ where $C_i^{(k)}$ are the options or scores; For \textit{Likert}~\citep{johns2010likert} blocks, $\operatorname{Ch}(B_i)=(\text{Type}_i, \text{Title}_i, \text{Desc}_i, R_i^{(1)}, ..., R_i^{(m_i)}, C_i^{(1)}, ..., C_i^{(n_i)})$ where $R_i^{(j)}$ are rows and $C_i^{(k)}$ are columns; For the remaining block types, $\operatorname{Ch}(B_i)=(\text{Type}_i, \text{Title}_i, \text{Desc}_i)$. All description parts ($\text{Desc}$) are optional.
\begin{figure}[ht]
\centering
\includegraphics[width=1\columnwidth]{images/type.pdf}
\caption{Distribution of Block Types in Online Forms.}
\label{Fig.type}
\end{figure}
\subsection{Online Form Dataset}
\label{sec:dataset}
Since there is no existing dataset for online forms, we construct our own OOF (Open Online Forms) dataset by crawling public online forms created on a popular online form website. We filter out low-quality forms and only consider English forms in this work. In total, 62K public forms are collected across different domains, \textit{e.g.}\xspace, education, finance, medical, community activities, \textit{etc.}\xspace
Due to the semi-structured nature of online forms, we further parse the crawled HTML pages into JSON format by extracting valid contents and associating each block with its type.
\reffig{Fig.type} shows the distribution of block types in our collected dataset.
More details of the dataset construction and its statistics can be found in Appendix~\ref{appendix:dataset}.
\subsection{Machine Learning Tasks}
\label{sec:creation aids}
\label{sec:problem}
As illustrated in Figure~\ref{Fig.demo}, when adding a new block, one needs to specify its type and title in the first step. Then, other required components -- such as a list of options for a \textit{Choice} block -- are added according to the block type. In this paper, we focus on the following three tasks which provide Form Creation Ideas to users in the first and later steps.
\noindent
\textbf{Question Recommendation}\quad
The Question Recommendation aims at providing users with a recommended question based on the selected block type and the previous context. Formally, the model needs to predict $\text{Title}_i$ based on $T$, $\text{Desc}$, $B_1, ..., B_{i-1}$ and $\text{Type}_i$. For example, in \reffig{Fig.demo}, it is desirable that the model could recommend ``Employee ID'' when the form designer creates a \textit{Text Field} block after the first block.
\noindent
\textbf{Block Type Suggestion}
Different from the scenario of Question Recommendation, sometimes form designers may first come up with a block title without clearly specifying its block type. The Block Type Suggestion helps users select a suitable type in this situation.
For example, for the last block of \reffig{Fig.demo}, the model will predict it as a \textit{Rating} block and suggest adding candidate rating scores if the form designer has not specified the block type himself / herself.
Formally, given $\text{Title}_i$ and the available context ($T, \text{Desc}, B_1, ..., B_{i-1}$), the model should predict $\text{Type}_i$ in this task.
\noindent
\textbf{Options Recommendation}
As \reffig{Fig.type} shows, \textit{Choice} blocks are frequently used in online forms. When creating a \textit{Choice} block, one should additionally provide a set of options, and the Options Recommendation helps in this case. Given the previous context ($T, \text{Desc}, B_1, ..., B_{i-1}$) and $\text{Title}_i$, the model predicts $C_i^{(1)},...,C_i^{(n_i)}$ if $\text{Type}_i=\textit{Choice}$. In this work, we expect the model to recommend a set of possible options at the same time, so the desired output of this task is $C_i^{(1)},...,C_i^{(n_i)}$ concatenated with a vertical bar. For example, in \reffig{Fig.demo}, the model may output ``Yes | No'' to recommend options for the third block.
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{images/scope.pdf}
\caption{The Overview of FormLM Methodology. \textbf{(A)} \textbf{Form Serialization} (\cref{sec:linearization}) serializes an online form by adding block type tokens and separate tokens to preserve the tree structure. \textbf{(B)} \textbf{Structural Attention} (\cref{sec:attention}) encodes the token type and block-level distance by adding structural biases to each attention layer. Different colors in the attention bias matrix denote different items in the lookup table
and the number inside each circle represents the block-level distance of a token pair. \textbf{(C)} \textbf{Continual Pre-training} (\cref{sec:pre-training}) requires the model to recover the input sequence corrupted by SpanMLM and BTP. We use the cross-entropy loss between the decoder's output and the uncorrupted sequence for model optimization.}
\label{Fig.scope}\label{fig:FormLM}
\end{figure*}
\subsection{Form Serialization}
\label{sec:linearization}
As discussed in \cref{sec:form structure}, an online form could be viewed as an ordered tree. In FormLM we serialize the tree into a token sequence which is compatible with the input format of common PLMs.
\reffig{fig:FormLM}(A) depicts the serialization process which utilizes special tokens and separators.
First, a special token is introduced for each block type to explicitly encode $\text{Type}_i$.
Second, the vertical bar ``|'' is used to concatenate a list of related items within a block -- options / scores $C_i^{(k)}$ of a \textit{Choice} / \textit{Rating} block, and rows $R_i^{(j)}$ or columns $C_i^{(k)}$ of a \textit{Likert} block.
Finally, multiple subcomponents of $B_i$ are concatenated using \texttt{<sep>}.
Note that there is no information loss in the serialization process, \textit{i.e.}\xspace, the hierarchical tree structure of an online form can be reconstructed from the flattened sequence.
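A minimal sketch of this serialization (our own illustration; the exact spelling and placement of the block-type special tokens are assumptions, while the vertical bar and \texttt{<sep>} follow the description above):
\begin{verbatim}
def serialize_form(form: dict) -> str:
    """Flatten a form tree into one sequence, preserving its structure."""
    parts = [form["title"]]
    if form.get("desc"):
        parts.append(form["desc"])
    for block in form["blocks"]:
        # block-type special token, followed by the block title
        sub = ["<{}> {}".format(block["type"].lower(), block["title"])]
        if block.get("desc"):
            sub.append(block["desc"])
        if block.get("rows"):                    # Likert rows R_i^(j)
            sub.append(" | ".join(block["rows"]))
        if block.get("options"):                 # options / scores C_i^(k)
            sub.append(" | ".join(block["options"]))
        parts.append(" <sep> ".join(sub))
    return " <sep> ".join(parts)

form = {"title": "Business Trip Registration", "blocks": [
    {"type": "TextField", "title": "Employee ID"},
    {"type": "Choice", "title": "Need a hotel?", "options": ["Yes", "No"]},
]}
print(serialize_form(form))
# Business Trip Registration <sep> <textfield> Employee ID
#   <sep> <choice> Need a hotel? <sep> Yes | No
\end{verbatim}
Since every component keeps its own delimiter, the original tree can be parsed back from this string, which is what ``no information loss'' means here.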
\subsection{Structural Attention}
\label{sec:attention}
Beyond adding structural information into the input sequence, in FormLM we further enhance its backbone PLM with specially designed \textit{Structural Attention} (StructAttn).
Our intuition is that the attention calculation among tokens should consider their different roles and locations in a form. \textit{E.g.}\xspace, tokens within a question title seldom correlate with the tokens of an option from another question; tokens in nearby blocks (or even the same block) are usually more strongly correlated with each other than those from distant blocks.
As illustrated in \reffig{Fig.scope}(B), StructAttn encodes the structural information of an online form by adding two bias terms based on the token type (\textit{i.e.}\xspace, the role that a token plays in the flattened sequence) and the block-level position. For each attention head, given the query matrix $\vb{Q}=[\vb{q_1}, \cdots, \vb{q_n}]^\top\in\mathbb{R}^{n\times d_k}$, the key matrix $\vb{K}=[\vb{k_1}, \cdots, \vb{k_m}]^\top\in\mathbb{R}^{m\times d_k}$, and the value matrix $\vb{V}=[\vb{v_1}, \cdots, \vb{v_m}]^\top\in\mathbb{R}^{m\times d_v}$, the original output is calculated by
\begin{equation}
\hat{\vb{A}} = \frac{\vb{Q} \vb{K}^{\top}}{\sqrt{d_{k}}},
\operatorname{Attn}(H)=\operatorname{softmax}(\hat{\vb{A}})\vb{V}
\end{equation}
In FormLM, we add two bias terms to $\hat{\vb{A}}$, and the attention head output of StructAttn is calculated by
\begin{equation}
\resizebox{\columnwidth}{!}{$%
\begin{gathered}
\vb{A}_{ij} = \hat{\vb{A}}_{ij} + L[\operatorname{type}(\vb{q_i}),\operatorname{type}(\vb{k_j})] + \mu e^{-\lambda \operatorname{d}(\vb{q_i},\vb{k_j})} \\
\operatorname{Attn}(H)=\operatorname{softmax}(\vb{A})\vb{V}
\end{gathered}$%
}
\label{eq:structAttn}
\end{equation}
In \refequ{eq:structAttn}, the token type bias is calculated based on a learnable lookup table $L[\cdot,\cdot]$ in each attention layer, and the lookup key $\operatorname{type}(\cdot)$ is the type of the corresponding token within the form structure. Specifically, in our work, $\operatorname{type}(\cdot)$ is chosen from 9 token types: \texttt{FormTitle}, \texttt{FormDesc}, \texttt{BlockTitle}, \texttt{BlockDesc}, \texttt{Option}, \texttt{LikertRow}, \texttt{LikertColumn}, \texttt{BlockType}, \texttt{SepToken}. If $\vb{Q}$ or $\vb{K}$ corresponds to the flattened sequence given by form serialization, $\operatorname{type}(\cdot)$ can be directly obtained from the original form tree; otherwise, in generation tasks, $\vb{Q}$ or $\vb{K}$ may correspond to the target, and we set $\operatorname{type}(\cdot)$ as the expected output token type, \textit{i.e.}\xspace, \texttt{BlockTitle} when generating the question and \texttt{Option} when generating the options.
Another bias term in \refequ{eq:structAttn} is calculated by an exponential decay function to model the relative block-level position, where $\operatorname{d}(\vb{q_i}, \vb{k_j})$ is the block-level distance between the corresponding tokens of $\vb{q_i}$ and $\vb{k_j}$ on the form tree. To make $\operatorname{d}(\vb{q_i}, \vb{k_j})$ well-defined for each token pair, we set $\text{Desc}$ as the 0-th block ($B_0$) and specify $\operatorname{d}(\vb{q_i}, \vb{k_j})$ as 0 if $\operatorname{type}(\vb{q_i})$ or $\operatorname{type}(\vb{k_j})$ is equal to \texttt{FormTitle}. Note that there are two parameters $\lambda,\mu$ in this term. We make them trainable and constrain their values to be positive to ensure tokens in neighboring blocks give more attention to each other.
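The following PyTorch-style sketch (our own reconstruction of \refequ{eq:structAttn}; the tensor layout and the exponential parametrization used to keep $\mu$ and $\lambda$ positive are assumptions) shows how the two bias terms can be added to the raw attention scores:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_TOKEN_TYPES = 9  # FormTitle, FormDesc, BlockTitle, ..., SepToken

class StructBias(nn.Module):
    """Token-type bias L[.,.] plus block-distance bias mu*exp(-lambda*d)."""
    def __init__(self):
        super().__init__()
        self.lookup = nn.Parameter(torch.zeros(NUM_TOKEN_TYPES, NUM_TOKEN_TYPES))
        self.log_mu = nn.Parameter(torch.zeros(()))      # mu = exp(log_mu) > 0
        self.log_lambda = nn.Parameter(torch.zeros(()))  # lambda > 0 likewise

    def forward(self, scores, q_types, k_types, block_dist):
        # scores: (n, m) raw Q K^T / sqrt(d_k); q_types: (n,); k_types: (m,)
        # block_dist: (n, m) block-level distances d(q_i, k_j)
        type_bias = self.lookup[q_types.unsqueeze(1), k_types.unsqueeze(0)]
        mu, lam = self.log_mu.exp(), self.log_lambda.exp()
        return scores + type_bias + mu * torch.exp(-lam * block_dist)

# toy usage: 4 query tokens attending over 5 key tokens
bias = StructBias()
scores = torch.randn(4, 5)
q_types = torch.randint(0, NUM_TOKEN_TYPES, (4,))
k_types = torch.randint(0, NUM_TOKEN_TYPES, (5,))
block_dist = torch.randint(0, 3, (4, 5)).float()
attn = F.softmax(bias(scores, q_types, k_types, block_dist), dim=-1)
\end{verbatim}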
We apply StructAttn to three parts of FormLM, self attentions of FormLM encoder, self attentions and cross attentions of FormLM decoder. $\vb{Q}, \vb{K}, \vb{V}$ of encoder self attentions and $\vb{K}, \vb{V}$ of decoder cross attentions correspond to the source sequence; while $\vb{Q}, \vb{K}, \vb{V}$ of decoder self attentions and $\vb{Q}$ of decoder cross attentions correspond to the target sequence.
In classification, both the source and the target are the flattened form; while in generation, the target is the recommended question or options.
In \cref{sec:ablation}, we will prove the effectiveness of StructAttn through ablation studies and comparing alternative design choices of StructAttn.
\subsection{Continual Pre-training}
\label{sec:pre-training}
Note that it is difficult to train a model for online forms from scratch due to the limited data. To effectively adapt FormLM to online forms, we conduct continual pre-training on the training set of our collected dataset (see \cref{sec:dataset}) with the following two structure-aware objectives.
\noindent
\textbf{Span Masked Language Model (SpanMLM)}\quad
We adapt the masked language model (MLM) to forms by randomly selecting and masking some nodes on the form tree within the masking budget. Compared to SpanBERT~\citep{joshi-etal-2020-spanbert} which improves the MLM objective by masking a sequence of complete words, we do the masking in a higher level of granularity based on the form structure. Our technique masks a block title, option, \textit{etc.}\xspace, instead of arbitrarily masking subword tokens. The latter was proven suboptimal in~\citet{joshi-etal-2020-spanbert, zhang-etal-2019-ernie}.
Specifically, we use a masking budget of 15\%, replacing 80\% of the masked tokens with \texttt{<MASK>}, 10\% with random tokens, and 10\% with the original tokens.
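A simplified sketch of the node-level masking (our own illustration; for brevity the 80/10/10 replacement is applied per node rather than per token, and the random-token vocabulary is a toy one):
\begin{verbatim}
import random

MASK_BUDGET, TOY_VOCAB = 0.15, ["form", "date", "name"]

def span_mlm_corrupt(nodes):
    """nodes: list of (text, n_tokens) pairs, one per form-tree node
    (a block title, an option, ...). Whole nodes are masked within budget."""
    total = sum(n for _, n in nodes)
    picked, used = set(), 0
    for i in random.sample(range(len(nodes)), len(nodes)):
        if used + nodes[i][1] <= MASK_BUDGET * total:
            picked.add(i)
            used += nodes[i][1]
    out = []
    for i, (text, n) in enumerate(nodes):
        if i not in picked:
            out.append(text)
            continue
        r = random.random()
        if r < 0.8:                                   # 80%: <MASK> tokens
            out.append(" ".join(["<MASK>"] * n))
        elif r < 0.9:                                 # 10%: random tokens
            out.append(" ".join(random.choices(TOY_VOCAB, k=n)))
        else:                                         # 10%: keep original
            out.append(text)
    return out
\end{verbatim}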
\noindent
\textbf{Block Title Permutation (BTP)}\quad
As discussed in \cref{sec:form structure}, each block can be viewed as a subtree.
We introduce the block title permutation objective by permuting the block titles in a form and requiring the model to recover the original sequence, with the intuition that the model needs to understand the semantic relationship between $B_i$ and $\operatorname{Ch}(B_i)$ to solve this challenge. We randomly shuffle all the block titles to construct the corrupted sequence.
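The corruption step itself is simple; a minimal sketch (our own, assuming the form has already been parsed into a list of blocks):
\begin{verbatim}
import random

def btp_corrupt(blocks):
    """Shuffle all block titles; the model must map each back to its block."""
    titles = [b["title"] for b in blocks]
    random.shuffle(titles)
    return [dict(b, title=t) for b, t in zip(blocks, titles)]
\end{verbatim}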
Following the pre-training process of BART, we unify these two objectives by optimizing a reconstruction loss, \textit{i.e.}\xspace, we input the sequence corrupted by SpanMLM and BTP and optimize the cross-entropy loss between the decoder's output and the original intact sequence.
\subsection{Evaluation Data and Metrics}
\label{sec:data_and_metric}
We evaluate FormLM and other models on the three tasks of Form Creation Ideas (\cref{sec:creation aids}) with our OOF dataset (\cref{sec:dataset}). The 62k public forms are split into 49,904 for training, 6,238 for validation, and 6,238 for testing. For each task, random sampling is further performed to construct an experiment dataset.
Specifically, for each task, we randomly select no more than 5 samples from a single form to avoid sample bias introduced by those lengthy forms.
For Question Recommendation and Block Type Suggestion, each sample corresponds to a block and its previous context (see \cref{sec:problem}). 239,544, 29,558 and 29,466 samples are selected for training, validation and testing, respectively. For Options Recommendation, each sample corresponds to a \textit{Choice} block with context. 124,994, 15,640 and 15,867 samples are selected for training, validation, and testing.
For Question and Options Recommendations, following the common practice in natural language generation research, we adopt ROUGE\footnote{We use the Hugging Face implementation to calculate the ROUGE score, \url{https://huggingface.co/metrics/rouge}.}~\citep{lin-2004-rouge} scores with the questions/options composed by humans as the ground truth. During Options Recommendation, because the model is expected to recommend a list of options at once, we concatenate options with a vertical bar (described in \cref{sec:linearization})
for the comparison of generated results and ground truths. Since it is difficult to have a thorough evaluation of the recommendation quality through automatic metrics alone, we further include a qualitative study in Appendix~\ref{appendix:study} and conduct human evaluations for these two generation tasks (details in Appendix~\ref{sec:human}). For Block Type Suggestion, both accuracy and Macro-F1 are reported to account for the class imbalance issue.
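As a self-contained illustration of how the bar-joined option lists are compared (the reported numbers use the Hugging Face implementation cited in the footnote; the \texttt{rouge-score} package below is used only for this sketch):
\begin{verbatim}
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2"], use_stemmer=True)
reference  = "Yes | No"           # human-composed options, bar-joined
prediction = "Yes | No | Maybe"   # model output in the same format
print(scorer.score(reference, prediction))  # precision/recall/F1 per metric
\end{verbatim}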
\subsection{Baselines}
\label{sec:baselines}
As there was no existing system or model specifically designed for forms, we compare FormLM with three general-purpose PLMs -- RoBERTa~\citep{liu2020roberta}, GPT-2~\citep{radford2019language} and BART~\citep{lewis-etal-2020-bart}, which represent widely-used encoder, decoder, and encoder-decoder based models, respectively. To construct inputs for these PLMs, we concatenate NL sentences in the available context (see \cref{sec:creation aids}).
MarkupLM~\citep{li-etal-2022-markuplm}, a recent model for web page modeling, is also chosen as a baseline since forms can be displayed as HTML pages on the Internet. To keep accordance with the original inputs of MarkupLM, we remove the tags without NL text (\textit{e.g.}\xspace, \texttt{<script>}, \texttt{<style>}) from the HTML files in the OOF dataset.
The number of parameters of each model can be found in Appendix~\ref{sec:config}.
\subsection{FormLM Implementation}
\label{sec:setups}
We implement FormLM using the Transformers library~\citep{wolf-etal-2020-transformers}. FormLM and $\text{FormLM}_{\text{BASE}}$ are based on the architecture and parameters of BART\footnote{\url{https://huggingface.co/facebook/bart-large}} and $\text{BART}_\text{BASE}$\footnote{\url{https://huggingface.co/facebook/bart-base}} respectively.
For continual pre-training, we train FormLM for 15k steps on 8 NVIDIA V100 GPUs with the total batch size of 32 using the training set of the OOF dataset. For all the three tasks of Forms Creation Ideas, we fine-tune FormLM and all baseline models for 5 epochs with the total batch size of 32 and the learning rate of 5e-5.
More pre-training and fine-tuning details are described in Appendix~\ref{appendix:implementation}.
In the rest of this paper, each experiment with randomness is run 3 times and reported with averaged evaluation metrics.
\subsection{Main Results}
\label{sec: main results}
For FormLM and the baseline models (see \cref{sec:baselines}), \reftab{table:mainResults} shows the results on the Form Creation Ideas tasks.
FormLM significantly outperforms the baselines on all tasks.
Compared to its backbone BART model (well-known for conditional generation tasks), FormLM further improves the ROUGE-1 scores by 4.71 and 1.12 on Question and Options Recommendations. Human evaluation results in Appendix~\ref{sec:human} also confirm the superiority of FormLM over other baseline models in these two generation tasks. Figure~\ref{Fig.case_study} shows questions recommended by BART and FormLM on an example form from the test set. FormLM's recommendations (\textit{e.g.}\xspace, ``Destination'', ``Departure Date'') are more specific and more relevant to the topic of this form, while BART's recommendations (\textit{e.g.}\xspace, ``Name'', ``Special Requests'') are rather general.
Also, after users create $B_1,B_2,B_3,B_4$ and select $B_5$ as a \textit{Date} type block, FormLM recommends ``Departure Date'' while BART recommends ``Name'', which is obviously not suitable for $B_5$.
On Block Type Suggestion, FormLM improves the Macro-F1 score by 10.6. The improvement of FormLM over BART ($\uparrow$ rows in Table~\ref{table:mainResults}) shows that our method is highly effective. We will further analyze this in \cref{sec:ablation}.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{images/next_q.pdf}
\caption{Sample Outputs by FormLM and BART for Question Recommendation. FormLM's recommended questions are more relevant to the topic and more suitable to the selected block type.}
\label{Fig.case_study}
\end{figure}
Note that MarkupLM is a very strong baseline for Block Type Suggestion. This model can partly capture the structural information by parsing the form as a DOM~\citep{wood1998document}
tree.
However, since MarkupLM is not specifically designed for online forms,
it is still 4.1 points worse in Macro-F1 than FormLM on this task.
\subsection{Analysis of FormLM Designs}
\label{sec:ablation}
\begin{table}[ht]
\centering
\small
\begin{tabular}{lccc}
\toprule
& \textbf{Question} & \textbf{Options} & \textbf{Type} \\
& R2 & R2 & F1 \\
\hline
\textbf{Full Model} & \textbf{19.70} & \textbf{34.65} & \textbf{83.9} \\
~$-$
Decoder StructAttn & 18.90 & 34.36 & 83.7 \\
~$-$
Encoder StructAttn & 19.58 & 34.41 & 77.9 \\
~$-$
Form Serialization & 17.43 & 33.83 & 75.5 \\
~$-$
Previous Context & 12.67 & 27.65 & 71.8 \\
\bottomrule
\end{tabular}
\caption{Ablation Studies on Form Serialization and Structural Attention. ``$-$'' means the corresponding component is sequentially removed from FormLM.
``$-$ Previous Context'' means that the closest block title is the only input.
}
\label{table:ablation1}
\end{table}
To further investigate the effectiveness of the design choices in FormLM, we conduct ablation studies and controlled experiments (which are fine-tuned under the same settings as described in \cref{sec:setups}) on the following aspects.
\noindent
\textbf{Form Serialization}\quad
For Form Creation Ideas, it is important to model the complete form context (defined in \cref{sec:problem}).
Row ``$-$ Previous Context'' of \reftab{table:ablation1} shows that there is a large performance drop on all the tasks if the block title is the only input.\footnote{For ablation studies in \reftab{table:ablation1}, the components are sequentially removed because StructAttn depends on the tree structure preserved in form serialization, and both techniques become meaningless if we do not model the form context.}
Therefore, we also study the effect of form serialization (see \cref{sec:linearization}), which flattens the form context while preserving its tree structure.
A naive way of serialization is directly concatenating all available text as NL inputs. Results in this setting (row ``$-$ Form Serialization'' of \reftab{table:ablation1}) are much worse than the results of FormLM with the form serialization technique. On Block Type Suggestion, the gap is as large as 8.4 on Macro-F1.
\begin{table}
\centering
\small
\begin{tabular}{lccc}
\toprule
& \textbf{Question} & \textbf{Options} & \textbf{Type} \\
& R2 & R2 & F1 \\
\midrule
w/o Type Info & 17.96 & 33.97 & 81.5 \\
w/~ ~Type Info & \textbf{19.70} & \textbf{34.65} & \textbf{83.9} \\
\bottomrule
\end{tabular}
\caption{\label{table:type-info}Performance of FormLM ``w/'' and ``w/o'' Incorporating the Block Type Information.}
\end{table}
\noindent
\textbf{Block Type Information}\quad
A unique characteristic of online forms is the existence of block type (see \cref{sec:form structure}).
To examine whether FormLM can leverage the important block type information, we run a controlled experiment where block type tokens are replaced with a placeholder token \texttt{<type>} during form serialization (while other tokens are untouched). As shown in \reftab{table:type-info}, removing block type tokens hurts the model performance on all three tasks, which suggests that FormLM can effectively exploit such information.
\noindent
\textbf{Structural Attention}\quad
FormLM enhances its backbone PLM with StructAttn (\cref{sec:attention}). As the row ``$-$ Encoder StructAttn'' of \reftab{table:ablation1} shows, when we ablate StructAttn from FormLM, the Macro-F1 score of Block Type Suggestion drops from 83.9 to 77.9 and the performance on the generation tasks also drops.
In FormLM, we apply StructAttn to both encoder and decoder parts. We compare it with the setting without modifying the decoder (row ``$-$ Decoder StructAttn'') and find applying StructAttn to both the encoder and decoder yields uniformly better results, which may be due to better alignment between the encoder and decoder.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{images/structAttn.pdf}
\caption{Results of FormLM Using Different Design Choices of StructAttn. (Averaged over 3 runs with std.)}
\label{Fig.structAttn}
\end{figure}
There are alternative design choices of StructAttn for us to experiment with. As \refequ{eq:structAttn} shows, there are two bias terms to model the token type and the block-level distance. We compare this design choice (``Hybrid'' in \reffig{Fig.structAttn}) with adding only the token type bias (``Type'') and only the distance bias (``Dist''). Note that ``Hybrid'' encodes the block-level distance through an exponential decay function; we also compare it with another intuitive design (``Hybrid*'') where we use a learnable bias to indicate whether two tokens are within the same block. Besides adding biases, another common practice for modifying attention is masking. We experiment with this design choice (``Mask'') by restricting attention to tokens in the same node or in its parent and grandparent nodes within the tree structure. The comparison results are shown in \reffig{Fig.structAttn}. ``Mask'' performs uniformly worse than adding biases. Among the remaining design choices, ``Hybrid'' shows slightly better performance on Options Recommendation and Block Type Suggestion.
\begin{table}
\small
\centering
\begin{tabular}{lccc}
\toprule
& \textbf{Question} & \textbf{Options} & \textbf{Type} \\
& R2 & R2 & F1 \\
\hline
w/o Pre-training & 18.82 & 33.78 & 82.2 \\
BTP & 19.35 & 34.18 & 83.3 \\
SpanMLM & 19.42 & 33.94 & 83.3 \\
SpanMLM + BTP & \textbf{19.70} & \textbf{34.65} & \textbf{83.9} \\
\bottomrule
\end{tabular}
\caption{Ablation Study of Different Continual Pre-training Objectives. (Averaged over 3 runs.)}
\label{table:ablation2}
\end{table}
\noindent
\textbf{Continual Pre-training Objectives}\quad
We design two objectives (\cref{sec:pre-training}), SpanMLM and BTP, to continually pre-train FormLM on OOF dataset for better domain adaptation. \reftab{table:ablation2} shows the ablation results of different objectives. We find FormLM trained with both SpanMLM and BTP performs the best. This suggests SpanMLM which focuses more on the recovery of a single node on the tree and BTP which focuses more on the relationship between different nodes can complement each other.
\section{Details of Open Online Forms Dataset}
\label{appendix:dataset}
\begin{figure}[ht]
\centering
\includegraphics[width=1\columnwidth]{images/wc.png}
\caption{Frequent Words Among Titles of Forms in OOF Dataset.}
\label{Fig.wc}
\end{figure}
OOF (Open Online Forms) dataset consists of 62K public forms collected on the Web, covering a wide range of domains and purposes. Figure~\ref{Fig.wc} shows some frequent words among titles of the collected data.
\subsection{Dataset Preprocessing}
We crawled 232,758 forms created by a popular online form service on the Internet and filtered the crawled data using the following constraints: (1) have at least one question block; (2) have no duplicate question blocks; (3) detected as ``en''\footnote{\url{https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes}} by Language Detection API of Azure Cognitive Service for Language\footnote{\url{https://docs.microsoft.com/en-us/azure/cognitive-services/language-service/language-detection/overview}}. Finally, 62,380 forms meet all constraints. We randomly split them into 49,904 for training, 6,238 for validation and 6,238 for testing.
As introduced in \cref{sec:dataset}, we parsed the crawled HTML pages into JSON format according to the online form structure. Specifically, each JSON file contains keys of ``title'', ``description'' and ``body'' which correspond to form title ($T$), form description ($Desc$), and an array of blocks ($\{B_1,\cdots,B_n\}$). Each block contains keys of ``title'', ``description'' and ``type''. \textit{Choice} type blocks and \textit{Rating} type blocks further contain the key of ``options''; \textit{Likert} type blocks further contain keys of ``rows'' and ``columns''. For \textit{Description} blocks, we only keep the plain NL text and remove possible information of other modalities (\textit{i.e.}\xspace, image, video) because only around 0.1\% of \textit{Description} blocks contain video and 2.0\% contain images. When parsing the HTML pages into JSON format, we also remove non-ASCII characters within the form.
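To make this schema concrete, the following minimal sketch (our illustration, not the authors' code) flattens such a parsed form into a single text sequence; the separator and type markers here are illustrative assumptions rather than the tokens actually used by FormLM.
\begin{verbatim}
def serialize_form(form):
    # Flatten title, description and blocks into one sequence.
    parts = [form.get("title", ""), form.get("description", "")]
    for block in form["body"]:
        piece = "<" + block["type"] + "> " + block["title"]
        if "options" in block:
            piece += " | " + " ; ".join(block["options"])
        parts.append(piece)
    return " </s> ".join(p for p in parts if p)

form = {"title": "Travel Survey", "description": "Tell us your plans",
        "body": [{"type": "Choice", "title": "Destination",
                  "options": ["Rome", "Paris"]}]}
print(serialize_form(form))
\end{verbatim}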
\subsection{Form Length Distribution}
We define the length of an online form as the number of blocks within it. Around 80\% of collected forms have a form length no greater than 20. The detailed distribution of form length is shown in \reffig{Fig.length}. As we have discussed in \cref{sec:data_and_metric}, we further perform random sampling to construct our experiment dataset to avoid sample biases introduced by those lengthy forms.
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{images/length.png}
\caption{Form Length Distribution of Forms in OOF Dataset.}
\label{Fig.length}
\end{figure}
\section{Model Configurations}
\label{sec:config}
We compare FormLM with four baseline models, RoBERTa, GPT-2, MarkupLM, and BART. FormLM adds a small number of additional parameters to its backbone model (278K for FormLM and 208K for $\text{FormLM}_\text{BASE}$) to encode structural information in attention layers (\cref{sec:attention}). \reftab{table:config} shows model configurations of FormLM and baselines in our experiments.
\begin{table}[h]
\centering
\small
\begin{tabular}{lll}
\toprule
\textbf{Model} & \textbf{\#Params} & \textbf{\#Layers} \\
\midrule
RoBERTa & 124M & 12 \\
GPT-2 & 124M & 12 \\
MarkupLM & 135M & 12 \\
$\text{BART}_\text{BASE}$ & 139M & 6+6 \\
BART & 406M & 12+12 \\
$\text{FormLM}_\text{BASE}$ & 139M & 6+6 \\
FormLM & 406M & 12+12 \\
\bottomrule
\end{tabular}
\caption{Model Configurations of FormLM and Baselines.}
\label{table:config}
\end{table}
\section{More Implementation Details}
\label{appendix:implementation}
\noindent
\textbf{Continual Pre-training Details}\quad
We conduct continual pre-training on the training set of the OOF dataset using SpanMLM and BTP objectives (\cref{sec:pre-training}). We adopt a masking budget of 15\% in SpanMLM and do BTP on all training samples. We train FormLM for 15K steps on 8 NVIDIA V100 GPUs with 32G GPU memory. We set the total batch size as 32 and the max sequence length as 512. We use AdamW optimizer~\citep{loshchilov2018decoupled} with $\beta_1=0.9$, $\beta_2=0.999$ and the learning rate of 5e-5. It takes around 8 hours to complete the continual pre-training on our machine.
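As a minimal sketch of the optimizer configuration above (assuming a PyTorch model object; this mirrors only the reported hyperparameters, not the exact training script):
\begin{verbatim}
import torch

def make_optimizer(model):
    # AdamW with the reported settings: lr = 5e-5, betas = (0.9, 0.999).
    return torch.optim.AdamW(model.parameters(), lr=5e-5,
                             betas=(0.9, 0.999))
\end{verbatim}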
\noindent
\textbf{Fine-tuning Details}\quad
Among our downstream tasks, Next Question Recommendation and Options Recommendation are formulated as conditional generation tasks. We use the form serialization procedure (\cref{sec:linearization}) to convert the available context into model inputs. We fine-tune FormLM for 5 epochs with a total batch size of 32, a max source sequence length of 512, and a max target sequence length of 64. We load the best model, i.e., the one with the highest ROUGE-2 score on the validation set during training. During generation, we use beam search with a beam size of 5. Block Type Suggestion is formulated as a sequence classification task. We follow the original implementation of BART by feeding the same input into the encoder and decoder and passing the final hidden state of the last decoded token into a multi-class linear classifier. We fine-tune FormLM for 5 epochs with a total batch size of 32 and load the best model, i.e., the one with the highest Macro-F1 score on the validation set during fine-tuning.
\section{Qualitative Study}
\label{appendix:study}
\begin{figure*}[h]
\centering
\includegraphics[width=0.9\textwidth]{images/options_combine.pdf}
\caption{Sample Outputs by FormLM for Options Recommendation. The suggested options are highlighted in \color{myblue}{blue}.}
\label{Fig.options}
\end{figure*}
Online forms, as a special format of questionnaires, are mainly used to collect information, \textit{e.g.}\xspace, demographic information, needs, preferences, \textit{etc.}\xspace~\citep{krosnick2018questionnaire}. As shown in~\reffig{Fig.wc}, the online forms in the OOF dataset are more about objective topics like ``Application'' and ``Registration'' because these information collection scenarios prevail in daily usage. To collect information effectively, a good questionnaire should
include questions that are related to the topic and logically connected with each other. Also, close-ended questions (the majority of them are \textit{Choice} type questions) are expected to offer all possible answers for respondents to choose from, but not include off-topic options which may cause confusion~\citep{reja2003open}. These criteria of good questionnaires restrict the search space of online form composition, thus making the automatic recommendation of creation ideas conceptually possible.
In~\cref{sec: main results}, \reffig{Fig.case_study} shows some questions recommended by FormLM. FormLM is able to recommend questions like ``Destination'', ``Departure Date'', ``Type of Accommodation'' which are highly related to the topic of travelling and can help collect meaningful information for the travel agency. For Options Recommendation, FormLM can accurately identify polar questions and recommend ``Yes'', ``No'' as candidate options. Also, since FormLM is continually pre-trained on a large amount of online forms, it has no difficulty recommending options for those frequently asked questions, \textit{e.g.}\xspace, ``Gender'', ``Current Educational Qualifications'', \textit{etc.}\xspace. More interestingly, we notice that FormLM can provide accurate recommendation for questions which are related to their previous contexts. \reffig{Fig.options} gives two sample outputs by FormLM for Options Recommendation. In the left sample, FormLM gives concrete suggestions which are based on the form title; in the right sample, the recommended locations are all related to school, and they accord well with the domain of this form. We assume that such good performance can be attributed to the effective understanding of form structure and context.
\section{Human Evaluation}
\label{sec:human}
Apart from reporting automatic evaluation results using ROUGE scores, we further conduct human evaluations for Question Recommendation and Options Recommendation. We randomly choose 50 samples from the test sets of the two tasks and collect the recommended questions/options from 5 models (GPT-2, $\text{BART}_\text{BASE}$, BART, $\text{FormLM}_\text{BASE}$, FormLM). We use an HTML website (actually an online form service) to collect the manual labels. Human evaluation instructions are shown in \reffig{Fig.eval1} and \reffig{Fig.eval2}. Eight experts familiar with online form software products participate in the experiment. For each sample of a task, we construct a Likert question containing the 5 outputs (randomly shuffled and anonymized) of the models. For each sample, three experts compare the 5 outputs using a rating scale of 1 to 5 (the higher, the better) at the same time to achieve better comparison and annotation consistency across different outputs. So in total, we collect 150 expert ratings for each model on each task.
\begin{table}[th]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{llllllllll}
\toprule
Rating & 5 & 4 & 3 & 2 & 1 & Avg. & $\geq$4 & $\geq$3 & $\leq$2 \\
\midrule
GPT-2 & 16 & 22 & 23 & 20 & 69 & 2.31 & 38 & 61 & 89 \\
$\text{BART}_\text{BASE}$ & 28 & 21 & 12 & 23 & 66 & 2.48 & 49 & 61 & 89 \\
BART & 26 & 23 & 25 & 18 & 58 & 2.61 & 49 & 74 & 76 \\
$\text{FormLM}_\text{BASE}$ & 63 & 47 & 13 & 15 & 12 & 3.89 & 110 & 123 & 27 \\
FormLM & 72 & 41 & 16 & 9 & 12 & \textbf{4.01} & 113 & 129 & 21 \\
\bottomrule
\end{tabular}
}
\caption{Summary of Human Evaluation Ratings for Question Recommendation.}
\label{table:eval_q}
\end{table}
\begin{table}[th]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{llllllllll}
\toprule
Rating & 5 & 4 & 3 & 2 & 1 & Avg. & $\geq$4 & $\geq$3 & $\leq$2 \\
\midrule
GPT-2 & 16 & 10 & 6 & 9 & 109 & 1.77 & 26 & 32 & 118 \\
$\text{BART}_\text{BASE}$ & 63 & 28 & 17 & 14 & 28 & 3.56 & 91 & 108 & 42 \\
BART & 68 & 30 & 23 & 9 & 20 & 3.78 & 98 & 121 & 29 \\
$\text{FormLM}_\text{BASE}$ & 71 & 35 & 18 & 9 & 17 & 3.89 & 106 & 124 & 26 \\
FormLM & 89 & 29 & 14 & 7 & 11 & \textbf{4.19} & 118 & 132 & 18 \\
\bottomrule
\end{tabular}
}
\caption{Summary of Human Evaluation Ratings for Options Recommendation.}
\label{table:eval_o}
\end{table}
The evaluation results are shown in \reftab{table:eval_q} and \reftab{table:eval_o}. We can see FormLM and $\text{FormLM}_\text{BASE}$ outperform all baseline models on both Question and Options Recommendation when manually evaluated by the experts, which is in accordance with the automatic evaluation results.
We further conduct the Wilcoxon signed-rank test~\citep{woolson2007wilcoxon}, a non-parametric hypothesis test for matched-pair data, to check the statistical significance of the comparison between FormLM, $\text{FormLM}_\text{BASE}$ and their backbone models. At the 95\% confidence level, when comparing FormLM with BART and $\text{FormLM}_\text{BASE}$ with $\text{BART}_\text{BASE}$, both $p$-values from the Wilcoxon test are less than 0.005. These results show that our models perform better on these two generation tasks than their backbone PLMs, which are well-known for conditional generation.
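As an illustration of this test (with made-up ratings, not the study's data), the matched-pair comparison can be run with SciPy:
\begin{verbatim}
from scipy.stats import wilcoxon

# Illustrative paired expert ratings (NOT the study's data).
formlm_ratings = [5, 4, 4, 5, 3, 5, 4, 2, 5, 4]
bart_ratings   = [3, 3, 2, 4, 2, 3, 4, 1, 3, 2]
stat, p = wilcoxon(formlm_ratings, bart_ratings)
print(f"Wilcoxon statistic = {stat}, p-value = {p:.4f}")
\end{verbatim}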
\begin{figure*}[h]
\centering
\includegraphics[width=1\textwidth]{images/human_eval1.pdf}
\caption{Human Evaluation Instructions. (Page 1 / 2)}
\label{Fig.eval1}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=1\textwidth]{images/human_eval2.pdf}
\caption{Human Evaluation Instructions. (Page 2 / 2)}
\label{Fig.eval2}
\end{figure*}
\section{Introduction}
\input{1-Introduction}
\section{Preliminaries}
\input{2-Preliminaries}
\section{Form Creation Ideas}
\input{3-Problem}
\section{Methodology}
\input{4-Methodology}
\section{Experiments}
\input{5-Experiments}
\section{Related Work}
\input{6-RelatedWork}
\section{Conclusion}
\input{7-Conclusion}
\section*{Limitations}
\input{8-Limitations}
\section*{Ethics Statement}
\input{10-Ethic}
\section{Introduction}
Location-based service is meant to provide real-time information based on the current location of a user by combining multiple entities like global positioning systems, information and communication systems, and the Internet \cite{schiller2004location, junglas2008location}. User identity, location, and query information are sensitive and personal, and, hence, must be protected \cite{shin2012privacy, 8360466}, as they could potentially be misused \cite{8316781}. It is therefore in the interest of a Location-based Service (LBS) provider to protect the private information of its users in order to maintain its reputation, and, hence, its business.
Ma et al. proposed a privacy-preserving location-based service using Somewhat Homomorphic Encryption (SHE) \cite{ma-infocom19}. The user provides his/her
encrypted location information and the encrypted query to an Edge Node (EN).
The location coordinates are encrypted using an SHE, while the query is
encrypted using a traditional encryption scheme.
When the encrypted service request and the encrypted user location coordinates reach EN, it generates an encrypted virtual location using a standard K-anonymity technique, in turn
referring to the historical location information \cite{8274909}. The LBS server contains the location coordinates of many points of interest. This information is not encrypted. Depending on the user's query, it will select a subset
of these points. But this selection must be done using the encrypted
coordinates of the user virtual location computed by EN and the points of interest stored as plaintexts in LBS. The main
purpose is to securely choose, say, $k$, nearest points of interest around
the user's location \cite{8560131}. The metric used here is the Euclidean distance. So the
crux of the protocol of Ma et al. is an efficient privacy-preserving distance comparison
protocol that is executed between EN and LBS. The detailed steps are recalled
in Section~\ref{sec:recap}.
\noindent\textbf{Our Contribution.} We show, in Section \ref{sec:analysis}, that the privacy-preser\-ving
distance comparison protocol of Ma et al., that is eventually used in determining
the nearest points of interest, suffers from a correctness flaw. Namely, the
output of this comparison protocol is \textit{not} necessarily correct.
A major consequence of this flaw is that a straightforward approach to fix this
flaw would be to give out the LBS the (signed) differences of the distances.
We show, for the sake of completeness, that using these differences an LBS will be able to
recover the actual location coordinates in each and every user query. We also
consider another straightforward modification of the protocol whereby the
differences of distances are masked by an independently chosen random value but that
still allows for efficient comparison. We again show
that this approach too fails in preserving the privacy of the user locations.
Our work demonstrates that fixing the protocol of Ma et al. is non-trivial
without incurring a significant cost.
\section {Recap of the Protocol from Ma et al.}
\label{sec:recap}
In this section, we briefly recollect the steps of the protocol from \cite{ma-infocom19}.
There are four different entities in the protocol:
\begin{itemize}
\item User
\item Edge Node (EN)
\item Location-based Services (LBS) Server
\item Certificate Authority (CA)
\end{itemize}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{blocks.jpg}
\caption{Entities involved in the protocol from \cite{ma-infocom19}.}
\end{figure*}
\subsection{Initialization}
During the initialization step, the user registration process is executed. When a user requests a location service, CA sets up the required private and public keys for the user, EN, and LBS. The CA generates three pairs of public and private keys.
\begin{itemize}
\item $pk_P, sk_P$:
$pk_P$ is sent to the user to encrypt the user request. $sk_P$ is sent to the LBS to decrypt the user query.
\item $pk_\tau, sk_\tau$:
$pk_\tau$ is sent to the LBS to encrypt the query response using SHE \cite{DBLP:journals/iacr/FanV12}, and also to the EN, which uses it to calculate the encryption of $z$. $sk_\tau$ is sent to the user to decrypt the query response.
\item $pk_a$, $sk_a$:
$pk_a$ is sent to the LBS and $sk_a$ is sent to the user.
\end{itemize}
Note that in this protocol the FV SHE scheme is used to encrypt users' location information. Recall that
if $m$ and $m'$ are plaintexts and their corresponding ciphertexts are $c$ and $c'$, then
using an SHE scheme, the encryption of $m + m'$ or $m \cdot m'$ can be derived from $c$ and $c'$ without the need to decrypt the ciphertexts. If $\llbracket x \rrbracket$ denotes an SHE ciphertext of a plaintext $x$, then $\llbracket m + m' \rrbracket$ can be computed as $\llbracket m \rrbracket + \llbracket m' \rrbracket$.
Similarly, $\llbracket m \cdot m' \rrbracket$ can be computed as $\llbracket m \rrbracket \cdot \llbracket m' \rrbracket$.
\subsection{User Query: USER to EN}
To preserve the privacy of the user location, a K-anonymity based technique is used \cite{6831391}. When the user creates a query, the query is encrypted with $pk_P$ and the encrypted user query is sent to the nearest EN. The user sends the SHE encrypted location coordinates $\llbracket X_{a} \rrbracket$ and $\llbracket Y_{a} \rrbracket$ to EN. The EN is equipped with better storage and computing power in comparison to the user device \cite{8336969}. The EN only knows that the user is within its coverage area. The EN calculates the virtual address of the user by fetching the historical location information. If the current user is considered as the $\eta$th user, then EN fetches the (encrypted) location coordinates about the previous $t-1$ users:
\begin{equation} \label{eq:xva}
\llbracket X_{va} \rrbracket = \frac{1}{t} (\llbracket X_{a} \rrbracket + \llbracket X_{\eta-1} \rrbracket + \llbracket X_{\eta-2} \rrbracket + \dots + \llbracket X_{\eta-t+1} \rrbracket)
\end{equation}
\begin{equation} \label{eq:yva}
\llbracket Y_{va} \rrbracket = \frac{1}{t} (\llbracket Y_{a} \rrbracket + \llbracket Y_{\eta-1} \rrbracket + \llbracket Y_{\eta-2} \rrbracket + \dots + \llbracket Y_{\eta-t+1} \rrbracket)
\end{equation}
$(X_{va}, Y_{va})$ is the computed virtual location of the user. This is the classical \textit{moving average} technique used in statistics \cite{wiki:Moving_average}. $(\llbracket X_{va} \rrbracket, \llbracket Y_{va} \rrbracket)$ denotes its SHE ciphertext. The question that arises is how to handle the factor $\frac{1}{t}$ under encryption. One could possibly use the fixed-point encoding scheme from \cite{CostacheSVW16} for this purpose.
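The following plaintext sketch (an illustration only, not protocol code) shows the moving-average arithmetic; in the protocol, the same computation is carried out on the SHE ciphertexts, with the factor $\frac{1}{t}$ handled by a fixed-point encoding.
\begin{verbatim}
def virtual_location(current, history, t):
    # Average the current location with the previous t-1 locations.
    xs = [current[0]] + [p[0] for p in history[:t - 1]]
    ys = [current[1]] + [p[1] for p in history[:t - 1]]
    return sum(xs) / t, sum(ys) / t

print(virtual_location((3.0, 4.0), [(1.0, 2.0), (5.0, 6.0)], t=3))
\end{verbatim}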
\subsection{EN to LBS}
The EN relays the encrypted user query to LBS. After receiving the user query from EN, LBS decrypts it using $sk_P$ and obtains the
user query as plaintext. The LBS does not have any information about the virtual address $(X_{va}, Y_{va})$
as it is encrypted. It needs to further interact with EN to build the query response. The EN's territorial information is available
with LBS. It fetches the location coordinates of the services whose information it has, and sends them to EN after encrypting these location coordinates using the SHE scheme. If there are $n$
services supported by EN, then let the location coordinates for these $n$ services be $\{(x_{i}, y_{i}) : i = 1, \ldots, n\}$. The encryptions of these coordinates using key $pk_a$ are $\{(\llbracket x_{i} \rrbracket, \llbracket y_{i} \rrbracket) : i = 1, \ldots, n\}$.
\subsection{LBS to EN} \label{LBS-EN-1}
The LBS sends $\{(\llbracket x_{i} \rrbracket, \llbracket y_{i} \rrbracket) : i = 1, \ldots, n\}$ to EN. Once EN receives the information from LBS, it calculates $\{\llbracket d_{i} \rrbracket : i = 1, \dots, n \}$, which are the squared Euclidean distances of the
user virtual location to the $n$ services, as
\begin{equation} \label{eq:3}
\llbracket d_{i} \rrbracket = (\llbracket X_{va} \rrbracket - \llbracket x_{i} \rrbracket) \cdot (\llbracket X_{va} \rrbracket - \llbracket x_{i} \rrbracket) + (\llbracket Y_{va} \rrbracket - \llbracket y_{i}\rrbracket) \cdot (\llbracket Y_{va} \rrbracket - \llbracket y_{i} \rrbracket).
\end{equation}
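Stripped of the encryption, \eqref{eq:3} is just the squared Euclidean distance. The sketch below shows the plaintext analogue; in the protocol, each subtraction and multiplication is instead performed homomorphically on ciphertexts, so these values never appear in the clear.
\begin{verbatim}
def sq_distance(va, poi):
    # Squared Euclidean distance between virtual location and service.
    return (va[0] - poi[0]) ** 2 + (va[1] - poi[1]) ** 2

print(sq_distance((3.0, 4.0), (0.0, 0.0)))   # -> 25.0
\end{verbatim}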
In order to ensure the privacy of users' locations, $\llbracket X_{va} \rrbracket$ and $\llbracket
Y_{va} \rrbracket$ should not be sent to LBS, and so is the case with $\{\llbracket d_{i} \rrbracket : i = 1, \dots, n \}$. Next, the LBS must somehow securely sort the encrypted (squared Euclidean) distances $\{\llbracket d_{i} \rrbracket : i = 1, \dots, n \}$ to determine
the nearest distance(s). An obvious way of sorting is to compare every pair of (encrypted) distances, and this is what is done next.
From the list of $\{\llbracket d_{i} \rrbracket : i = 1,..., n \}$, pick any two elements, say, $\llbracket d_{a} \rrbracket$ and $ \llbracket d_{b} \rrbracket$.
Let $m$ be the maximum distance covered by EN. If the range is considered as a circular area, then $m$ is the diameter of the circle. In this case, $0 \leq d_{a}, d_{b} \leq m $. EN selects a number $l$ such that
\begin{equation} \label{eq:4}
2 ^ l \geq m.
\end{equation}
The EN computes
\begin{equation} \label{eq:5}
\llbracket z \rrbracket = \llbracket 2 ^ l + d_{a} - d_{b} \rrbracket = \llbracket 2 ^ l \rrbracket + \llbracket d_{a} \rrbracket + \llbracket -d_{b} \rrbracket .\\
\end{equation}
$\llbracket z \rrbracket$ is the SHE ciphertext of $z$, and $z$ is an $(l+1)$-bit integer whose Most Significant Bit (MSB), $z_l$, depends on the values of $d_{a}$ and $d_{b}$: if the MSB of $z$ is 0, then $d_{a} < d_{b}$; otherwise, $d_{a} \geq d_{b}$. In order to send the value $z$ to LBS in blinded form, EN creates a uniform random number $\rho$ of size $k + l + 1$ bits, where $k$ is the security parameter. The sum of $\llbracket z \rrbracket$ and $\llbracket \rho \rrbracket$ is computed and then sent to LBS:
\begin{equation} \label{eq:w}
\llbracket w \rrbracket = \llbracket z \rrbracket + \llbracket \rho \rrbracket .
\end{equation}
\subsection{EN to LBS}
Once LBS receives $\llbracket w \rrbracket$, it decrypts $\llbracket w \rrbracket$ and obtains $w$.
From $w$, it calculates $\bar w$ as follows:
\begin{equation} \label{eq:wbar}
\bar w = w \pmod{2 ^ l},
\end{equation}
and then computes its SHE ciphertext $\llbracket \bar w \rrbracket$.
\subsection{LBS to EN}
\label{subsec:wrho}
Let
\begin{equation}
\label{eq:rhobar}
\bar \rho = \rho \pmod{2 ^ l}.
\end{equation}
Note that
$\bar w$ is available with LBS, and $\bar \rho$ is available with EN. Let $(\bar w_{l-1}, \dots, \bar w_{0})$ and $(\bar \rho_{l-1}, \dots, \bar \rho_{0})$ be the bits of $\bar w$ and $\bar \rho$, respectively. The LBS encrypts each bit of $\bar w$ and sends these encryptions to EN.
It is proposed to compare $\bar w$ and $\bar \rho$ and determine the MSB of $z$, using which we can in turn compare $d_{a}$ and $d_{b}$. The DGK scheme
\cite{DamgardGK07,DamgardGK09} is used for this step.
The LBS server then runs the DGK key generation algorithm to generate a public and private key pair, and the public key is sent to EN.
The steps below are run multiple times so that eventually the LBS learns the sorted order of $(d_{i} : i = 1, \dots, n)$. After this, the LBS builds the response to the user query and then sends it to the user.
During this process, the LBS server chooses $\epsilon$ uniformly at random from $\{1, -1\}$.
It calculates, for $j = 0, \ldots, l-1$,
\begin{equation} \label{eq:8}
\llbracket c_{j} \rrbracket = \llbracket \bar w_{j} \rrbracket \; \cdot \; \llbracket - \bar \rho_{j} \rrbracket \;\cdot\; \llbracket \epsilon \rrbracket \; \cdot\; \left(\prod_{v=j+1}^{l-1} \llbracket \bar w_{v} \oplus \bar \rho_{v}\rrbracket\right)^3
\end{equation}
EN randomly selects $\xi_{j} \in Z_{n}$ $(j = 0, \dots, l-1)$, and then computes
\begin{equation} \label{eq:9}
\llbracket \bar c_{j} \rrbracket = \llbracket c_{j} \cdot \xi_{j} \rrbracket = \llbracket c_{j} \rrbracket ^ {\xi_{j}}.
\end{equation}
Finally, EN would have $(\llbracket \bar c_{l-1} \rrbracket, \ldots, \llbracket \bar c_{0} \rrbracket)$.
\subsection{EN to LBS}
\label{subsec:last}
EN sends $(\llbracket \bar c_{l-1} \rrbracket, \dots, \llbracket \bar c_{0} \rrbracket)$ to LBS. After LBS receives this information, it decrypts the values and checks for the presence of a 0. If a 0 is present, that, the authors claim, indicates $\bar w > \bar \rho$; otherwise, $\bar w \leq \bar \rho$.
The EN and LBS need to run these steps $n(n - 1)/2$ times, after which LBS obtains the sorted order of $(d_{i} : i = 1, \dots, n)$ without learning anything about the values themselves. After the service locations to be sent are securely determined, the user query response is created and sent to EN after encryption with key $pk_\tau$. Finally, EN relays this query response to the user, who decrypts it using its secret key $sk_\tau$.
\section{Correctness Flaw and Security Implications of the Protocol of Ma et al.}
\label{sec:analysis}
In this section, we point out a critical flaw in the protocol from \cite{ma-infocom19} recalled in the previous section.
This flaw corresponds to the steps of the protocol described in Sections \ref{subsec:wrho} and \ref{subsec:last}. Recall
that the idea behind these steps is to use $\bar \rho$ and $\bar w$ to determine $z_l$, the MSB of $z$
(see Equations \eqref{eq:w}, \eqref{eq:wbar} and \eqref{eq:rhobar}). Recall that this bit $z_l$
is used to compare distances $d_{a}$ and $d_{b}$. The following is an elementary fact from arithmetic:
\begin{fact}
$z_l$ is independent of $\bar w$ and $\bar \rho$.
\end{fact}
This is because $\bar w = (\bar z + \bar \rho) \bmod 2^{l}$, where $\bar z = z \bmod 2^{l}$ consists only of the $l$ least significant bits of $z$; thus $\bar w$ is determined by $\bar \rho$ and $\bar z$ alone, and the latter is completely independent of $z_l$.
\begin{corollary}
The comparison protocol from \cite{ma-infocom19} does not correctly determine the comparison between encrypted distances.
\end{corollary}
The following toy examples illustrate the above observation.
\noindent\textbf{Example 1.}
\begin{align*}
z &= 3 = (11)_{2}, & l &= 2,\\
\rho &= 31 = (11111)_{2}, & w &= z + \rho = 34 = (100010)_{2},\\
\bar \rho &= 31 \bmod 2^{2} = 3 = (11)_{2}, & \bar w &= 34 \bmod 2^{2} = 2 = (10)_{2}.
\end{align*}
\noindent\textbf{Example 2.}
\begin{align*}
z &= 7 = (111)_{2}, & l &= 2,\\
\rho &= 31 = (11111)_{2}, & w &= z + \rho = 38 = (100110)_{2},\\
\bar \rho &= 31 \bmod 2^{2} = 3 = (11)_{2}, & \bar w &= 38 \bmod 2^{2} = 2 = (10)_{2}.
\end{align*}
In both examples, the pair $(\bar w, \bar \rho)$ is the same, yet $z_l = 0$ in the first example and $z_l = 1$ in the second.
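This can be checked mechanically; the sketch below reproduces both examples and confirms that $(\bar w, \bar \rho)$ is identical in the two cases while $z_l$ differs.
\begin{verbatim}
l, rho = 2, 31
for z in (3, 7):
    w = z + rho
    wbar, rhobar = w % 2 ** l, rho % 2 ** l
    msb = (z >> l) & 1                 # the bit z_l
    print(f"z={z}: wbar={wbar}, rhobar={rhobar}, z_l={msb}")
# Both runs print wbar=2, rhobar=3, yet z_l is 0 for z=3 and 1 for z=7.
\end{verbatim}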
\subsection {Security Implications}
The privacy-preserving comparison protocol discussed above was proposed in \cite{ma-infocom19} in order to leak to LBS only
$z_l$, i.e., the result of comparison between any pair of distances. This was done because leaking the full value of $z$ would
enable an adversary to determine the original user locations. For completeness, we briefly recollect next the steps to recover
$(X_a,Y_a)$, the original user location coordinates, when LBS obtains $\llbracket z \rrbracket$. Note that since the secure comparison protocol is flawed, giving
out $z$ is a straightforward, but insecure, way of fixing the protocol that can still retain the efficiency of the original
protocol. Note that fully homomorphic sorting is currently impractical to deploy at a large scale \cite{HongKCLC21}.
Once the LBS receives $\llbracket z \rrbracket$, it can decrypt it to obtain $z$, and then subtract $2^l$ from $z$
to obtain the signed difference $d_i - d_j$, $1 \le i < j \le n$. This can be repeated for every pair of distances. We then end up
with $n(n-1)/2$ equations in the $n$ many $d_i$ $(1 \le i \le n)$. Hence, LBS will be able to solve for all the $d_i$ from this
overdetermined system of linear equations.
Once, say, $d_1$ is obtained, the LBS can try to solve for $(X_{va},Y_{va})$ from the following equation:
\[
(X_{va} - x_1)^2 + (Y_{va} - y_1)^2 = d_1.
\]
The above equation corresponds to a circle, and over the reals there can be infinitely many solutions. Note that LBS knows
$(x_1,y_1)$ as plaintext.
If $(X_{va},Y_{va})$ are encoded as (scaled) integers, then the equation will only have a ``couple'' of solutions on average \cite{KumaraswamyMV21}. But to
keep things simple, we can write similar equations for $d_2$, $d_3$, .... Since three circles are likely to intersect at
a single point, the LBS will very likely be able to recover the user virtual location $(X_{va},Y_{va})$. If there are more than
two points at which these three circles intersect, then we can continue this process until we narrow down to a single point.
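As an illustration of this step (with made-up coordinates), subtracting the circle equations pairwise eliminates the quadratic terms and leaves a linear system in $(X_{va}, Y_{va})$:
\begin{verbatim}
import numpy as np

pts = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]        # known (x_i, y_i)
loc = (3.0, 4.0)                                     # ground-truth (X, Y)
d = [(loc[0] - x) ** 2 + (loc[1] - y) ** 2 for x, y in pts]

(x1, y1), (x2, y2), (x3, y3) = pts
A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
              [2 * (x3 - x1), 2 * (y3 - y1)]])
b = np.array([d[0] - d[1] + x2**2 - x1**2 + y2**2 - y1**2,
              d[0] - d[2] + x3**2 - x1**2 + y3**2 - y1**2])
print(np.linalg.solve(A, b))                         # -> [3. 4.]
\end{verbatim}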
Once the LBS obtains $(X_{va},Y_{va})$, then it will try to recover the original user location coordinates $(X_{a},Y_{a})$.
It is not unreasonable to assume that the LBS would have tried to recover the user location coordinates from the very
beginning. In this case, the LBS would also know the historical location coordinates $(X_{\eta-1},Y_{\eta-1})$, $\ldots$ ,
$(X_{\eta-t+1},Y_{\eta-t+1})$ used in Equations \eqref{eq:xva} and \eqref{eq:yva}. Also, the value of $t$ is typically
known to LBS as part of the protocol, or else, it can be guessed as it is usually small. Then, from
Equations \eqref{eq:xva} and \eqref{eq:yva},
\[
X_a = t \cdot X_{va} - ( X_{\eta-1} + X_{\eta-2} + \ldots + X_{\eta-t+1}),
\]
\[
Y_a = t \cdot Y_{va} - ( Y_{\eta-1} + Y_{\eta-2} + \ldots + Y_{\eta-t+1}).
\]
Here, we are assuming that the virtual location information is computed only from actual location data.
But what if the initial parameters $(X_{\eta-1},Y_{\eta-1})$, $\ldots$, $(X_{\eta-t+1},Y_{\eta-t+1})$ were randomly chosen?
After $t$ instances of the protocol have been invoked, the initially chosen random values will no longer affect the computation of $(X_{va},Y_{va})$.
While the convergence and divergence of such moving averages is well studied in statistics, we do not know how to recover the individual data points, if that is possible at all. In this case, we can only recover the virtual location coordinates.
\subsection {Another Failed Attempt}
Next, we look at another straightforward method to fix the comparison protocol of \cite{ma-infocom19}.
The (signed) difference of distance is now masked by a random and independently chosen value $R$. Note that
$R$ could be a possibly large value chosen independently for every difference. We then have
\begin{equation} \label{eq:z-enc-rand}
\llbracket z \rrbracket = (\llbracket d_a \rrbracket - \llbracket d_b \rrbracket) \cdot \llbracket R \rrbracket.
\end{equation}
One would expect that the LBS upon decrypting $\llbracket z \rrbracket$ obtains
\begin{equation} \label{eq:z-rand}
z = (d_a - d_b ) \cdot R,
\end{equation}
and that this would only reveal the sign of $d_a - d_b$ and not the exact value, thereby, thwarting the attack mentioned previously.
We next show that the above method is insecure too. We use the technique from \cite{MurthyV19} to recover the difference
$d_a - d_b$ from $z$ alone with a good probability. The idea is to use the fact that, for every $d_a$ and $d_b$, $0 \le |d_a - d_b| \le m$. Therefore,
there will be a ``small'' factor of $z$ that is at most $m$, and it is hence feasible to recover this factor. This factor
is a possible candidate for $d_a - d_b$, and dividing $z$ by it yields the corresponding candidate for $R$. One could use the
brute-force technique to factorize, or, for larger values of $m$, the elliptic-curve method of factorization would be more efficient. Once a possible
value of $R$ is determined, then
\[
d_a - d_b = z/R.
\]
In case one ends up with many candidates for $R$, we need to brute force over these choices of $R$ and check the consistency of the resulting differences with the differences computed from other pairs. This way, inconsistent choices of $R$ are eliminated. In the worst case, we may end up with more than one possibility for the distances $d_i$, and, hence, as many possibilities for $(X_{va},Y_{va})$.
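A sketch of the candidate search (with illustrative numbers; the naive divisor scan stands in for the factorization methods mentioned above):
\begin{verbatim}
def difference_candidates(z, m):
    # Divisors of z up to m are candidates for |d_a - d_b|;
    # z // f is the matching candidate for R.
    return [(f, z // f) for f in range(1, m + 1) if z % f == 0]

z = 42 * 982451653        # d_a - d_b = 42, masked by a large (prime) R
print(difference_candidates(z, m=100))
\end{verbatim}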
Hence this fix too would \textit{not} lead to a secure protocol.
\section {Conclusion}
In this paper, we analyzed the correctness of the protocol of Ma et al. \cite{ma-infocom19}. We showed that their
efficient ``secure'' comparison technique does not give the correct output. We then showed that straightforward attempts to fix this
flaw would lead to security vulnerabilities, where the location-based service provider will be able to recover
information about users' locations. It seems that fixing the protocol of Ma et al. is non-trivial without
incurring significant cost in terms of computation time and communication bandwidth.
There have been several attempts to design a privacy-friendly comparison protocol that is significantly more efficient than
the homomorphic/MPC evaluation of the entire comparison circuit. Unfortunately, many of them have been shown to be insecure.
Hence, it is an interesting open problem to design a comparison protocol that is lightweight in terms of both time
and bandwidth.
\begin{acks}
This work was partially funded by the Infosys Foundation Career Development Chair Professorship grant for Srinivas Vivek.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
The implementation of high-speed communications is a challenging task. Commercially available transceivers for optical communications operate at throughputs of $800$\,Gbit/s and beyond~\cite{sun2020800g}. In order to maximize throughput and transmission reach, powerful forward error correction (FEC) is necessary. Modern FEC schemes require net coding gains of 11\,dB and more at residual bit error rates (BERs) of $10^{-15}$, for code rates larger than $0.8$~\cite{graell20forward}. For high-performance applications, soft-decision decoding (SDD) of low-density parity-check (LDPC) codes is now state-of-the-art in fiber-optic communication (see, e.g.,~\cite{graell20forward} for further references and~\cite{sun2020800g} for a recent commercial example). The adoption of SDD in fiber-optic communications represented a breakthrough with respect to the classical schemes based on algebraic codes (BCH and Reed-Solomon codes) and hard-decision decoding. However, the implementation of SDD schemes for popular codes still presents several challenges at very high data rates, in particular due to large internal decoder data flows~\cite{Smith2012}. Recently, optimized codes for SDD with reduced decoder dataflows were proposed~\cite{barakatain2018low}, but these schemes require an additional low-complexity outer code (the latter being subject of the investigations in this paper).
Some ubiquitous applications like data-center inter- and intraconnects require an extremely low transceiver complexity, which leads to heavy power consumption constraints on the transceiver circuits that often prohibit the use of SDD. The lower complexity of typical hard-decision decoding (HDD) circuits motivates their use for applications where complexity and throughput is a concern~\cite{Smith2012}. Powerful code constructions for HDD date back to the 1950s, when Elias introduced product codes~\cite{Elias54}. In the recent years, the introduction of new code constructions, such as staircase codes~\cite{Smith2012} and related schemes~\cite{Jian2013}, \cite{sukmadji2019zipper}, and the link between these constructions and codes-on-graphs, has led to a renewed interest in HDD for high-speed communications.
HDD unfortunately entails an unavoidable capacity loss stemming from the hard decision at the channel output, reducing the achievable coding gains by 1-2\,dB compared to SDD. Recent work has focused on improving the performance of modern codes for HDD by employing soft information from the channel, see, e.g.,~\cite{graell20forward,hager2018approaching,sheikh2019binary,sheikh2020novel} and references therein. Most of these schemes assume that the decoder has access to the full soft information (e.g., the channel output after transmission over a binary-input additive white Gaussian noise (AWGN) channel model) and internally use binary or ternary message passing~\cite{lechner2011analysis,yacoub2019protograph} and possibly error-and-erasure decoding~\cite{forney1966generalized,Wicker95,Blahut} of the component codes. However, in many high-speed optical communication systems, in particular those optimized for low cost and short reach, the use of a high-precision analog-to-digital converter (ADC) is prohibitive as the power consumption of an ADC scales approximately in proportion to its bit resolution~\cite{pillai2014end} and often simple 1-bit ADCs are used~\cite{ossieur2018asic}.
A promising approach for reducing the capacity loss while still keeping both the receiver and decoding complexity low is error-and-erasure decoding of linear codes using a 3-level (ternary) ADC at the channel output.
For instance, error-and-erasure decoding can be implemented by just two usual binary decodings and a little decision logic \cite{MoonBook}.
While error-and-erasure decoding for algebraic and product codes~\cite{wainberg1972error} is well understood, its application to modern codes for high-speed communications is largely unexplored.
The ternary output increases the capacity of the binary-output channel and can be used to improve decoding of, e.g., LDPC codes~\cite{RichardsonCapa,yacoub2019protograph}.
Recently, it was shown using both simulations and a stall pattern analysis that error-and-erasure decoding for product or staircase codes can improve their decoding performance~\cite{sumkmadji2020zipper,soma2021errors}. A rigorous analysis including miscorrections and allowing easy parameter optimization was however lacking.
In this paper, we investigate the potential of ternary message passing with ternary channel outputs for high-rate product and staircase codes with BCH component codes. Our investigation extends the density evolution analysis of~\cite{jianApproachingCapacity2017} to ternary channel outputs and ternary message passing for various decoding algorithms. This analysis fully takes into account possible miscorrections. One goal of the analysis is to find the quantizer levels that maximize the decoding performance. Interestingly, we find that the optimal quantizer, for which the noise threshold gets minimal, is significantly different from the one maximizing the capacity and that the gains of the noise threshold that can be obtained are less than the maximum achievable capacity gain.
\section{Background}\label{sec:background}
\subsection{Product \& Staircase Codes}
\subsubsection{Component Codes}
In this paper, \(\mathcal{C}\) denotes a linear \((n, k, t)\) component code of a product or staircase code that is decoded by a \(t\) error-correcting bounded distance decoder (BDD), as described in Sec. \ref{sec:Decoder}.
For product codes, \(\mathcal{C}\) is either a \((2^\nu - 1, k, t)\) binary cyclic BCH code, \(\mathcal{C}_{\textsub{BCH}}\), or its \((2^\nu - 1, k - 1, t)\) cyclic even-weight subcode,
\(
\mathcal{C}_{\textsub{BCH-Ev}} \coloneqq \{\vec{c} \in \mathcal{C}_{\textsub{BCH}} : \we(\vec{c}) \equiv 0 \pmod{2} \}
\).
Although the minimum distance of BCH codes is in general not known, a lower bound is given by the design distance, \(d_{\textsub{des}}(t)\), which is \(2 t + 1\) for a BCH code and \(2 t + 2\) for its even-weight subcode.
For staircase codes, we use shortened BCH codes or shortened even-weight subcodes, i.e., we take from an \((n, k, t)\) code only the codewords \(\vec{c}\) that begin with \(c_1 = 0\) and delete the first coordinate
\cite[Ch.~1. §9]{MacWilliamsSloane}. By doing so, we obtain an \((n - 1, k - 1, t)\) linear code.
The design distance \(d_{\textsub{des}}(t)\) is \(2 t + 1\) for a shortened BCH code and \(2 t + 2\) for a shortened even-weight subcode.
\subsubsection{Product Code}
A product code of an \((n, k, t)\) component code \(\mathcal{C}\) is a set of binary \(n {\times} n\) matrices whose rows and columns are codewords of \(\mathcal{C}\), resulting in a code of rate \(r = \left(\frac{k}{n}\right)^2\). To decode a product code, the rows and columns are alternately decoded by the component decoder \(\DF_{\textsub{C}}\). %
A product code can be interpreted as a generalized LDPC (GLDPC) code, hence, its performance under iterative decoding can be estimated through the average performance of a proper GLDPC ensemble.
This makes an analysis via density evolution (DE) possible as described in \cite{jianApproachingCapacity2017}.
The adequate GLDPC ensemble consists of the Tanner graphs with \(m\) constraint nodes (CNs) of degree \(n\) and \(N = \frac{n m}{2}\) variable nodes (VNs) of degree \(2\). In the following, this ensemble is denoted as the \((\mathcal{C}, m)\) GLDPC ensemble.
The CNs of the Tanner graphs are defined by \(\mathcal{C}\), i.e., the binary values of the VNs connected to a CN must form a valid codeword of \(\mathcal{C}\).
To construct a random graph of this ensemble, the outgoing edges of the VNs are connected to the sockets of the CNs via a random permutation~\cite{jianApproachingCapacity2017}.%
\subsubsection{Staircase Code}
A staircase code of an \((n, k, t)\) component code \(\mathcal{C}\) is a chain of \(L\) binary matrices of size \(\frac{n}{2} {\times} \frac{n}{2}\). Its rate is \(r = 2\frac{k}{n} - 1\) \cite{Smith2012}.
Similar to the product code, we consider the \((\mathcal{C}, m, L)\) spatially-coupled GLDPC (SC-GLDPC) ensemble for the analysis.
\begin{figure}
\centering
\includegraphics{img/SCGLDPC}
\caption{Random element of the \((\mathcal{C}, m, L)\) SC-GLDPC ensemble. \(\pi_i\) and \(\pi^\prime_i\) are random permutations of the edges. Image based on~\cite{jianApproachingCapacity2017}.}
\label{fig:SCGLDPCCode}
\end{figure}
Figure~\ref{fig:SCGLDPCCode} shows the construction of a random Tanner graph of this ensemble.
In the ensemble, the VNs are divided into \(L\) groups and the CNs into \(L + 1\) groups. Each group of VNs contains \(N = \frac{n m}{2}\) nodes of degree \(2\) and each group of CNs contains \(m\) nodes of degree \(n\) so that each group of VNs or CNs has \(2 N\) edges.
To construct a random Tanner graph, the \(2 N\) edges of each group are divided via a uniform random permutation \(\pi_i\) and \(\pi_i^\prime\), respectively, into two sets of \(N\) edges. The first set of edges of VN group \(i \in \{1, \dotsc, L\}\) is connected to a set of edges of CN group \(i\) and the second set is connected to a set of edges of CN group \(i + 1\).
The remaining edges of CN group \(1\) and \(L + 1\) are connected to VNs with the fixed value \(0\), which can be shortened.
\subsubsection{GLDPC Decoding}
The GLDPC codes of both ensembles are decoded via the same message passing algorithm, which we briefly explain here. The CNs and VNs of the Tanner graph are indexed. Let \(\sigma_j(k)\) be the index of the VN that is connected to socket \(k \in \{1, \dotsc, n\}\) of the \(j\)-th CN.
During message passing, the messages belonging to a set \(S\) are passed along the edges between VNs and CNs. For HDD, the messages are from \(S = \{0, 1\}\) and for the error-and-erasure decoding introduced below, \(S = \{0, \que, 1\}\).
Let \(\nu_{i,j}^{(\ell)} \in S\) be the message that is passed from the \(i\)-th VN to the \(j\)-th CN in the \(\ell\)-th iteration and let \(\tilde{\nu}_{i, j}^{(\ell)}\) be the message that is passed back from CN \(j\) to VN \(i\) in the \(\ell\)-th iteration.
To decode a received word \(\vec{r} =(r_1, r_2, \ldots)\), where \(r_i \in S\) is the received channel value of the \(i\)-th VN, the following steps are performed:
During initialization, the received channel value, \(r_i\), of each VN \(i\) is sent to its two connected CNs, \(j, j^\prime\), where we set \(\nu_{i, j}^{(1)} = \nu_{i, j^\prime}^{(1)} = r_i\).
Then, several decoding iterations are performed consisting of a CN update followed by a VN update.
In the \(\ell\)-th CN update, each CN \(j\) receives the incoming messages
\((\nu_{\sigma_j(1), j}^{(\ell)}, \dotsc, \nu_{\sigma_j(n), j}^{(\ell)})\). To calculate the message that is sent back to the VN \(i = \sigma_j(k)\) connected at the \(k\)-th position of CN \(j\), two different approaches are considered: intrinsic message passing (IMP)~\cite{Smith2012} and extrinsic message passing (EMP)~\cite{jianApproachingCapacity2017}.
For IMP, the incoming messages are combined to the word
\[
\vec{y}_{j, \textsub{IMP}}^{(\ell)}
\coloneqq
(\nu_{\sigma_j(1), j}^{(\ell)}, \dotsc, \nu_{\sigma_j(n), j}^{(\ell)})
\]
and are decoded by the component decoder \(\DF_{\textsub{C}}\). Then, the \(k\)-th symbol of the result is sent back to VN \(i\):
\(
\tilde{\nu}_{i, j}^{(\ell)}
=
\big[\DF_{\textsub{C}}(\vec{y}_{j, \textsub{IMP}}^{(\ell)})\big]_k
\).
For EMP, the \(k\)-th incoming message is replaced by the channel value \(r_i\) resulting in the extrinsic word
\[
\phantom{.}
\vec{y}_{j, \textsub{EMP}, k}^{(\ell)}
\coloneqq
(
\nu_{\sigma_j(1), j}^{(\ell)}, \dotsc ,
\nu_{\sigma_j(k-1), j}^{(\ell)}, r_{i},
\nu_{\sigma_j(k+1), j}^{(\ell)}, \dotsc
)
.
\]
Then, the word is decoded and the \(k\)-th symbol of the result is sent back to VN \(i\):
\(
\tilde{\nu}_{i, j}^{(\ell)}
=
\big[\DF_{\textsub{C}}(\vec{y}_{j, \textsub{EMP}, k}^{(\ell)})\big]_k
\).
In the VN update, each VN \(i\) receives two messages from its connected CNs \(j\), \(j^\prime\)
and forwards to each CN the message that it has received from the respective other CN:
\(\nu_{i, j^\prime}^{(\ell+1)} = \tilde{\nu}_{i, j}^{(\ell)}\),
\(\nu_{i, j}^{(\ell+1)} = \tilde{\nu}_{i, j^\prime}^{(\ell)}\).
At the end of the message passing, each VN has two incoming messages to determine the decoding result. To make a decision, one of the incoming messages is chosen randomly. If the message is erased, it is replaced by a random binary value.%
\footnote{Note that this decision rule is not optimal. In practical decoders, one would only choose randomly if both messages are erased. We use the proposed rule because it allows an easy calculation of the final bit error probability in the DE (see \eqref{eqn:Def_BER}).}
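For illustration, the following sketch performs one EMP CN update, with a 3-fold repetition code and its majority rule standing in for the BCH component code and its BDD; erasures are omitted for brevity.
\begin{verbatim}
def bdd_rep(y):                      # majority rule = BDD (t = 1) for
    c = 1 if sum(y) >= 2 else 0      # the 3-fold repetition code
    return [c] * len(y)

def cn_update_emp(incoming, channel, bdd):
    out = []
    for k in range(len(incoming)):
        y = list(incoming)
        y[k] = channel[k]            # replace k-th message by r_i
        w = bdd(y)
        out.append(w[k] if w is not None else y[k])
    return out

print(cn_update_emp([1, 0, 0], [1, 0, 0], bdd_rep))  # -> [0, 0, 0]
\end{verbatim}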
\subsubsection{Remarks}
In practice, a sliding window is used to decode a staircase code. In most cases, to the best of our knowledge, IMP is used due to the lower memory requirements. This window slides over the binary matrices and decoding is only performed for matrices in the window \cite{Smith2012}. We neglect windowed decoding in our analysis and our results can be seen as an upper bound on the performance under windowed decoding.
Note that EMP requires \(n\) component decodings per CN update whereas IMP requires only one.
However, for EMP decoding without erasures, there exists an algorithm that requires only one decoding \cite{jianApproachingCapacity2017}. Hence, the complexity does not increase by a factor of \(n\): CN updates without erasures can be carried out with this algorithm, and the number of erasures is typically very low after only a few iterations.
Further note that we restrict our analysis to the GLDPC and SC-GLDPC code ensembles. Product and staircase codes are not necessarily typical code realizations of these ensembles, hence the analysis may not apply directly. Numerical investigations, however, show good agreement between the ensemble analysis and the decoding performance of product and staircase codes \cite[Sec. 7.5.9]{graell20forward}, \cite{hager2018approaching}. The behavior of more deterministic code constructions has been analyzed in~\cite{hager2017density} and~\cite{zhang2017spatially} for the binary erasure channel (using two different approaches), but the authors acknowledge that their approach cannot be easily extended towards more general channels without ignoring miscorrections.
\subsection{Error-and-Erasure Decoding}
\subsubsection{Channel} \label{sec:channel}
For the following analysis, we assume that the GLDPC codewords \(\vec{x}\) are transmitted over a binary-input additive white Gaussian noise (BI-AWGN) channel which generates
\(
\tilde{r}_i \coloneqq (-1)^{x_i} + n_i
\),
where \(n_i\) is an AWGN sample with noise variance
\(
\sigma^2 = (2 E_{\textsub{s}} / N_0)^{-1}
\).
To reduce the capacity loss due to HDD, error-and-erasure decoding uses a ternary channel output and message alphabet \(\{0, \que, 1\}\). To determine the discrete channel outputs \(r_i\), the values \(\tilde{r}_i \in [-T, +T]\) are declared as erasure ``\(\mathord{?}\)''. Values outside this interval are mapped to \(0\) and \(1\) by the usual HDD rule, i.e. \(r_i = 1\) for \(\tilde{r}_i < -T\) and \(r_i = 0\) for \(\tilde{r}_i > +T\).
This channel is abstracted through the discrete, memoryless channel model shown in Fig.~\ref{fig:channel_model}.
\begin{figure}
\centering
\includegraphics{img/channel_model}
\caption{Discrete channel model}
\label{fig:channel_model}
\end{figure}
The channel transition probabilities are given by
\begin{equation}\label{eqn:Channel_Trans_Probs}
\begin{aligned}
\operatorname{\delta_{\textsub{c}}}
&=
\QFunc\left(\sqrt{2\frac{E_{\textsub{s}}}{N_0}} (T + 1)\right),\\
\operatorname{\epsilon_{\textsub{c}}}
&=
1 -
\QFunc\left(\sqrt{2\frac{E_{\textsub{s}}}{N_0}} (T - 1)\right) -
\QFunc\left(\sqrt{2\frac{E_{\textsub{s}}}{N_0}} (T + 1)\right),
\end{aligned}
\end{equation}
where \(\delta_{\textsub{c}}\) is the probability for an error and \(\epsilon_{\textsub{c}}\) for an erasure.
Since the channel is completely described through \(E_{\textsub{s}} / N_0\) and \(T\), it is denoted by \((E_{\textsub{s}} / N_0, T)\).
It is easy to see that for a fixed $T$, the capacity of this channel is
\begin{equation*}
\phantom{,}
C\left(\frac{E_{\textsub{s}}}{N_0}, T\right)
=
c_{\textsub{c}} \log_2\left( \frac{2 c_{\textsub{c}}}{1 - \epsilon_{\textsub{c}}} \right)
+ \delta_{\textsub{c}} \log_2 \left( \frac{2 \delta_{\textsub{c}}}{1 - \epsilon_{\textsub{c}}} \right)
,
\end{equation*}
where \(c_{\textsub{c}} \coloneqq 1 - \delta_{\textsub{c}} - \epsilon_{\textsub{c}}\) is the probability of correctly receiving a symbol.
Optimization of \(C(E_{\textsub{s}} / N_0, T)\) with respect to \(T\) results in a capacity gain for this channel compared to HDD (\(T=0\)).
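The transition probabilities \eqref{eqn:Channel_Trans_Probs} and \(C(E_{\textsub{s}}/N_0, T)\) are easy to evaluate numerically; the sketch below also locates the capacity-maximizing \(T\) at a given SNR by a crude grid search (an illustration only, not the threshold optimization carried out later).
\begin{verbatim}
import math

def q(x):                            # Gaussian tail function Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def channel(esn0_db, T):             # error/erasure probabilities
    s = math.sqrt(2.0 * 10.0 ** (esn0_db / 10.0))
    delta = q(s * (T + 1.0))
    eps = 1.0 - q(s * (T - 1.0)) - q(s * (T + 1.0))
    return delta, eps

def capacity(esn0_db, T):
    d, e = channel(esn0_db, T)
    c = 1.0 - d - e                  # prob. of correct reception
    def term(p):                     # p * log2(2p / (1 - e)), 0 if p = 0
        return 0.0 if p <= 0.0 else p * math.log2(2.0 * p / (1.0 - e))
    return term(c) + term(d)

# crude grid search for the capacity-maximizing threshold T at 0 dB
Ts = [i / 100.0 for i in range(100)]
best_T = max(Ts, key=lambda T: capacity(0.0, T))
print(best_T, capacity(0.0, best_T))
\end{verbatim}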
\subsubsection{Decoder} \label{sec:Decoder}
The decoder of the introduced component codes \(\mathcal{C}\) is a bounded distance decoder (BDD).
Let
\[
\mathcal{S}_t(\vec{c}) \coloneqq \{\vec{y} \in \{0, 1\}^n : \dH(\vec{y}, \vec{c}) \leq t\}
\]
be the Hamming sphere of radius \(t\) around a codeword \(\vec{c} \in \mathcal{C}\) that consists of all words \(\vec{y} \in \{0, 1\}^n\) whose Hamming distance from \(\vec{c}\) is less than or equal to \(t\).
For a given word \(\vec{y} \in \{0, 1\}^n\), a \(t\) error-correcting BDD selects the codeword \(\vec{c} \in \mathcal{C}\) for which \(\vec{y} \in \mathcal{S}_t(\vec{c})\) holds. Otherwise, a decoding failure is declared:
\[
\DF_{\textsub{BDD}}(\vec{y}) \coloneqq
\begin{cases}
\vec{c} & \text{if \(\exists \vec{c}\in\mathcal{C}\) such that \(\vec{y} \in \mathcal{S}_t(\vec{c})\)} \\
\text{fail} & \text{otherwise}.%
\end{cases}
\]
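A minimal sketch of \(\DF_{\textsub{BDD}}\) via exhaustive sphere membership (adequate only for toy codes; practical BCH decoders use algebraic syndrome decoding):
\begin{verbatim}
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def bdd(y, code, t):                 # bounded distance decoding
    for c in code:                   # sphere unique if t <= (dmin-1)/2
        if hamming(y, c) <= t:
            return c
    return None                      # decoding failure

code = [(0, 0, 0, 0, 0), (1, 1, 1, 1, 1)]   # (5,1) repetition, t = 2
print(bdd((1, 1, 0, 1, 0), code, 2))        # -> (1, 1, 1, 1, 1)
\end{verbatim}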
Since the channel output alphabet is \(\{0, \que, 1\}\), a BDD cannot be used directly. Hence, we use the following error-and-erasure decoder (EaED), which is a modification of \cite[Sec.~3.8.1]{MoonBook}. %
Let \(E(\vec{y}) = |\{i \in \{1, \dotsc, n\} : y_i = \mathord{?}\}|\) be the number of erasures of the word \(\vec{y}\) and let \(\dnE{\vec{y}}(\vec{a}, \vec{b})\) be the Hamming distance between the words \(\vec{a}\) and \(\vec{b}\) at the unerased coordinates of \(\vec{y}\).
The EaED performs the following steps to decode a word \(\vec{y} \in \{0, \que, 1\}^n\) to the result \(\vec{w}\) (a compact code sketch follows the list):
\begin{enumerate}
\item If \(E(\vec{y}) \geq d_{\textsub{des}}(t)\), \(\vec{w} = \vec{y}\). Otherwise, continue with \ref{item:GenerateY1}).
\item \label{item:GenerateY1}
Generate a random vector \(\vec{p} \in \{0, 1\}^{E(\vec{y})}\) and place the values of \(\vec{p}\) at the erased coordinates of \(\vec{y}\), yielding \(\vec{y}_1\).
\item \label{item:GenerateY2}
Generate the inverted vector of \(\vec{p}\), denoted by \(\xor{\vec{p}}\), by inverting every bit of \(\vec{p}\) and placing the values of \(\xor{\vec{p}}\) at the erased coordinates of \(\vec{y}\), yielding \(\vec{y}_2\).
\item Decode $\vec{y}_i$, $i\in\{1,2\}$, using the BDD:
\(\vec{w}_i = \DF_{\textsub{BDD}}(\vec{y}_i)\) %
\item \label{item:Decision_EaED}
Obtain the decoding result, \(\vec{w}\), as
\begin{LaTeXdescription}%
\item[Case 1:]
\(\vec{w}_1 = \vec{w}_2 = \text{fail}\): \(\vec{w} = \vec{y}\)
\item[Case 2:]
\(\vec{w}_i \in \mathcal{C} \text{ for exactly one \(\vec{w}_i\)}\): \(\vec{w} = \vec{w}_i\)
\item[Case 3:]
\(\vec{w}_1, \vec{w}_2 \in \mathcal{C}\):
Output the codeword \(\vec{w}_i\) for which \(\dnE{\vec{y}}(\vec{y}, \vec{w}_i)\) is smallest.
If both distances are equal, one codeword \(\vec{w}_i\) is chosen at random.
\end{LaTeXdescription}
\end{enumerate}
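A compact sketch of these steps, reusing the \texttt{bdd} function from the previous sketch (the random fill and its inverse correspond to steps \ref{item:GenerateY1}) and \ref{item:GenerateY2})); this illustrates the decoding logic only:
\begin{verbatim}
import random

def eaed(y, code, t, d_des, bdd):
    # y is over {0, 1, '?'}; two BDD calls on complementary fills.
    E = [i for i, s in enumerate(y) if s == '?']
    if len(E) >= d_des:
        return list(y)                      # step 1: give up
    p = [random.randint(0, 1) for _ in E]   # steps 2/3: p, inverse
    y1, y2 = list(y), list(y)
    for i, b in zip(E, p):
        y1[i], y2[i] = b, 1 - b
    w1, w2 = bdd(y1, code, t), bdd(y2, code, t)
    if w1 is None and w2 is None:
        return list(y)                      # case 1: both fail
    if w1 is None or w2 is None:            # case 2: one success
        return list(w1 if w1 is not None else w2)
    dist = lambda w: sum(w[i] != y[i]
                         for i in range(len(y)) if i not in E)
    if dist(w1) != dist(w2):                # case 3: closer codeword
        return list(w1) if dist(w1) < dist(w2) else list(w2)
    return list(random.choice([w1, w2]))

code = [(0, 0, 0, 0, 0), (1, 1, 1, 1, 1)]   # (5,1) repetition, t = 2
print(eaed([1, '?', 1, '?', 0], code, 2, 5, bdd))  # -> [1, 1, 1, 1, 1]
\end{verbatim}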
\begin{remark}
In practical decoders, \(\vec{p}\) is usually the all-zero vector. However, this is not suitable for our analysis, which is based on the all-zero codeword, because the decoder would then preferentially decode to the all-zero codeword, leading to an overly optimistic analysis result.
The use of random vectors in step 2), akin to the channel adapters of~\cite{hou2003capacity}, solves this issue, as we prove in Theorem~\ref{thm:Performance_CW_Independent}.
\end{remark}
The following theorem, based on \cite[Sec.~3.8.1]{MoonBook}, %
estimates the correction capability of the EaED:
\begin{theorem}
\label{thm:EaED_correcting_capability}
For the defined component codes, the EaED is guaranteed to correct a word with \(D\) errors and \(E\) erasures if
\begin{equation}\label{eqn:Condition_EaED_correcting_capability}
2 D + E < d_{\textsub{des}}(t).
\end{equation}
\end{theorem}
\begin{proof}
See Appendix~\ref{proof:EaED_correcting_capability}.
\end{proof}
In addition, we consider a simplification of the EaED. For this, %
we define the Hamming spheres in \(\{0, \que, 1\}^n\) as
\[
\mathcal{S}^3_t(\vec{c}) = \{\vec{y} \in \{0, \que, 1\}^n : 2 \dnE{\vec{y}}(\vec{y}, \vec{c}) + E(\vec{y}) < d_{\textsub{des}}(t)\}.
\]
The extended EaED (EaED+) is then given by
\[
\DF_{\textsub{EaED+}}(\vec{y}) \coloneqq
\begin{cases}
\vec{w} \coloneqq \DF_{\textsub{EaED}}(\vec{y}) &\text{if \(\vec{w} \in \mathcal{C}\) and \(\vec{y} \in \mathcal{S}^3_t(\vec{w})\)}\\
\vec{y} & \text{otherwise}.
\end{cases}
\]
Because of Theorem~\ref{thm:EaED_correcting_capability} and the linearity of \(\mathcal{C}\), the EaED decodes every \(\vec{y} \in \mathcal{S}_t^3(\vec{c})\) deterministically to the codeword \(\vec{c}\). Hence, the EaED+ decodes a word \(\vec{y}\) to a codeword \(\vec{c}\) if and only if \(\vec{y} \in \mathcal{S}_t^3(\vec{c})\). This leads to an alternative definition of the EaED+, which is used in the following analysis:
\[
\phantom{.}
\DF_{\textsub{EaED+}}(\vec{y})
=
\begin{cases}
\vec{c} & \text{if \(\exists\vec{c}\in\mathcal{C}\) such that \(\vec{y} \in \mathcal{S}_t^3(\vec{c})\)} \\
\vec{y} & \text{otherwise}.
\end{cases}
\]
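The EaED+ can then be sketched in Python as a thin wrapper around the \texttt{eaed} sketch above (same assumptions): run the EaED and accept its result only if \(\vec{y}\) lies in the sphere \(\mathcal{S}_t^3\) around it.
\begin{verbatim}
def eaed_plus(y, bdd, d_des):
    w = eaed(y, bdd, d_des)
    if w == y:                 # decoding failed (or trivially y itself)
        return y
    e = sum(1 for s in y if s == '?')                        # E(y)
    d = sum(1 for a, b in zip(y, w) if a != '?' and a != b)  # d_nE
    return w if 2 * d + e < d_des else y                     # S^3 check
\end{verbatim}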
\begin{remark}
In contrast to the EaED+, the EaED will also decode error patterns outside the Hamming spheres with a certain probability. This allows the correction of more errors, but there will also be more miscorrections for patterns with too many errors. We will later see decoding configurations in which each decoder outperforms the other.
\end{remark}
\section{Density Evolution}\label{sec:density_evolution}
In the following, we assume that the all-zero codeword is transmitted, which is justified by the following theorem:
\begin{theorem}
\label{thm:Performance_CW_Independent}
The performance of the GLDPC decoder is independent of the transmitted codeword for all introduced component decoders.
\end{theorem}
\begin{proof}
See Appendix~\ref{proof:Performance_CW_Independent}.
\end{proof}
To assess the decoding performance of a product or staircase code, we
analyze the average performance of the corresponding GLDPC ensemble by DE.
For the analysis, we assume that the codewords are transmitted over a channel \((E_{\textsub{s}} / N_0, T)\) and EMP is used. \(\vec{\chi}_{\textsub{c}} \coloneqq (\delta_{\textsub{c}}, \epsilon_{\textsub{c}})\) denotes the channel transition probabilities, which are calculated using \eqref{eqn:Channel_Trans_Probs}.
\subsection{GLDPC Ensemble}
As shown in \cite{jianApproachingCapacity2017}, the \((\mathcal{C}, m)\) GLDPC ensemble can be analyzed by DE if the limit \(m \to \infty\) is considered.%
\footnote{%
It is not immediately obvious that the proposed EMP allows DE. The explanation for this is given in Appendix~\ref{sec:DE_EMP}.}
Let \(\vec{\chi}_{\textsub{m}}^{(\ell)} \coloneqq (\delta_{\textsub{m}}^{(\ell)}, \epsilon_{\textsub{m}}^{(\ell)})\) be the error and erasure probability of the VN-to-CN messages \(\nu^{(\ell)}_{i, j}\) in the \(\ell\)-th iteration.
In the first iteration, we have \(\vec{\chi}_{\textsub{m}}^{(1)} = \vec{\chi}_{\textsub{c}}\) because the VN-to-CN messages are initialized with the received channel values.
To derive the DE recursion, we randomly select a VN \(i\), which is connected to a CN \(j\) at position \(k = \sigma_j^{-1}(i)\) and to a second CN \(j^\prime\). Now, we consider the message \(\tilde{\nu}^{(\ell)}_{i, j}\) that is passed from CN \(j\) to VN \(i\) in the \(\ell\)-th iteration.
To compute this message, CN \(j\) constructs
\(
\vec{e}
\coloneqq
\vec{y}_{j, \textsub{EMP}, k}^{(\ell)}
\).
By definition, \(e_k\) is replaced by \(r_i\), hence, the error and erasure probabilities of \(e_k\) are \(\vec{\chi}_{\textsub{c}}\). The other positions of \(\vec{e}\) are VN-to-CN messages, which are wrong or erased with the probabilities \(\vec{\chi}_{\textsub{m}}^{(\ell)}\).
We will call these positions ``\(\compl{k}\)'' with \(\compl{k} \subset \{1, \dotsc, n\}\) in the following.
After construction, \(\vec{e}\) is decoded to \(\vec{w} \coloneqq \DF_{\textsub{C}}(\vec{e})\), and the \(k\)-th symbol \(w_k\) is sent to VN \(i\) and forwarded to CN \(j^\prime\):
\(\nu^{(\ell+1)}_{i, j^\prime} = \tilde{\nu}^{(\ell)}_{i, j} = w_k\).
This leads to the DE recursion
\begin{equation}
\label{eqn:GLPDC_Rec}
\begin{aligned}
\phantom{.}
\vec{\chi}_{\textsub{m}}^{(\ell + 1)}
&=
\chi_{\textsub{rec}}(\vec{\chi}_{\textsub{m}}^{(\ell)})
\coloneqq
(
\operatorname{\delta}_{\textsub{rec}}(\vec{\chi}_{\textsub{m}}^{(\ell)}),
\operatorname{\epsilon}_{\textsub{rec}}(\vec{\chi}_{\textsub{m}}^{(\ell)})
)\\
&=
(\Prob(w_k = 1), \Prob(w_k = \mathord{?})),
\end{aligned}
\end{equation}
which is a system of two coupled recursive functions. %
Next, we decompose these probabilities. We define the event
\[
\Error(D^\prime, E^\prime)
\coloneqq
\{\text{%
\(\vec{e}\) has
\(D^\prime\) \(1\)s and
\(E^\prime\) \(\mathord{?}\)s
in \(\compl{k}\)%
}\}
\]
with the probability
\begin{multline*}
f(D^\prime, E^\prime, \vec{\chi}_{\textsub{m}}^{(\ell)}) \coloneqq \Prob\left(\Error(D^\prime, E^\prime)\right) =\\
\binom{n - 1}{D^\prime, E^\prime}
\left(\delta_{\textsub{m}}^{(\ell)}\right)^{D^\prime}
\left(\epsilon_{\textsub{m}}^{(\ell)}\right)^{E^\prime}
\left(1 - \delta_{\textsub{m}}^{(\ell)} - \epsilon_{\textsub{m}}^{(\ell)}\right)^{n - 1 - D^\prime - E^\prime}
,
\end{multline*}
where \(\binom{n - 1}{D^\prime, E^\prime} \coloneqq \frac{(n-1)!}{D^\prime!E^\prime!(n-1-D^\prime-E^\prime)!}\) is the multinomial coefficient counting the ways of distributing \(D^\prime\) \(1\)s and \(E^\prime\) \(\mathord{?}\)s in \(n-1\) positions.
Furthermore, we define the decoder transition probabilities
\begin{equation*}
\T{\alpha}{\beta}{D^\prime, E^\prime}
\coloneqq
\Prob\left(w_k = \beta \mid %
\text{%
\(e_k = \alpha\),
\(\Error(D^\prime, E^\prime)\)%
}\right)
\end{equation*}
which depend on the respective component decoder. We will determine these probabilities in Sec.~\ref{sec:calc_dec_trans_probs}.
Applying the law of total probability two times to \(\Prob(w_k = 1)\) results in
\begin{multline*}
\operatorname{\delta}_{\textsub{rec}}(\vec{\chi}_{\textsub{m}}^{(\ell)})
=
\Prob(w_k = 1) =
\sum_{D^\prime = 0}^{n-1}
\smashoperator[r]{\sum_{E^\prime = 0}^{n-1-D^\prime}}
f(D^\prime, E^\prime, \vec{\chi}_{\textsub{m}}^{(\ell)})\\
\Big(\delta_{\textsub{c}} \T{1}{1}{D^\prime, E^\prime}
+ \epsilon_{\textsub{c}} \T{\mathord{?}}{1}{D^\prime, E^\prime}
+ c_{\textsub{c}} \T{0}{1}{D^\prime, E^\prime} \Big)
,
\end{multline*}
where \(c_{\textsub{c}} \coloneqq 1 - \delta_{\textsub{c}} - \epsilon_{\textsub{c}}\).
A similar decomposition is possible for \(\Prob(w_k = \mathord{?})\) leading to %
\[
\operatorname{\epsilon}_{\textsub{rec}}(\vec{\chi}_{\textsub{m}}^{(\ell)})
=
\sum_{D^\prime = 0}^{n-1}
\smashoperator[r]{\sum_{E^\prime = 0}^{n-1-D^\prime}}
f(D^\prime, E^\prime, \vec{\chi}_{\textsub{m}}^{(\ell)})
\epsilon_{\textsub{c}} \T{\mathord{?}}{\mathord{?}}{D^\prime, E^\prime},
\]
where we used \(\T{0}{\mathord{?}}{D^\prime, E^\prime} = \T{1}{\mathord{?}}{D^\prime, E^\prime} = 0\) for all \(D^\prime\) and \(E^\prime\) because these transitions do not occur with the selected decoders.
\begin{remark}
With some adjustments, it is possible to analyze codes with different component codes. For instance, for a product code with different code types for rows and columns, two different DE recursions could be applied one after the other. However, note that the degree of the VNs in the ensemble should still be \(2\) to enable the simplified VN update.
\end{remark}
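For concreteness, the following Python sketch evaluates one step of the recursion \eqref{eqn:GLPDC_Rec}. The table \texttt{T} of decoder transition probabilities \(\T{\alpha}{\beta}{D^\prime, E^\prime}\) is assumed to be precomputed (see Sec.~\ref{sec:calc_dec_trans_probs}); all names are illustrative.
\begin{verbatim}
from math import comb

def multinom(n, d, e):
    # multinomial coefficient n! / (d! e! (n-d-e)!)
    return comb(n, d) * comb(n - d, e)

def de_step(chi_m, chi_c, n, T):
    # chi_m, chi_c: (error prob., erasure prob.) of messages/channel
    # T[(a, b, D, E)]: decoder transition probabilities (precomputed)
    dm, em = chi_m
    dc, ec = chi_c
    cc = 1 - dc - ec
    delta, eps = 0.0, 0.0
    for D in range(n):                 # D' = 0, ..., n-1
        for E in range(n - D):         # E' = 0, ..., n-1-D'
            f = (multinom(n - 1, D, E) * dm**D * em**E
                 * (1 - dm - em)**(n - 1 - D - E))
            delta += f * (dc * T[(1, 1, D, E)]
                          + ec * T[('?', 1, D, E)]
                          + cc * T[(0, 1, D, E)])
            eps += f * ec * T[('?', '?', D, E)]
    return delta, eps
\end{verbatim}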
\subsection{SC-GLDPC Ensemble}
To take the structure of the SC-GLDPC ensemble into account, error and erasure probabilities are defined for the messages of each VN or CN group corresponding to different edge types.
Let
\(\vec{\chi}_{\textsub{m},i}^{(\ell)} = (\delta_{\textsub{m}, i}^{(\ell)}, \epsilon_{\textsub{m}, i}^{(\ell)})\)
be the average error and erasure probability of the messages that are sent in the \(\ell\)-th iteration from the VNs of group \(i\) to the CNs. The values of the VNs in group \(0\) and \(L + 1\) are fixed and therefore known at the decoder. Hence, their messages are always correct:
\(\vec{\chi}_{\textsub{m}, 0}^{(\ell)} = \vec{\chi}_{\textsub{m}, L + 1}^{(\ell)} = (0, 0)\).
The average error and erasure probability \(\vec{\hat{\chi}}_{\textsub{m}, i}^{(\ell)}\) of the messages sent to the CNs of group \(i\) is
\(
\vec{\hat{\chi}}_{\textsub{m}, i}^{(\ell)}
=
\frac{1}{2}
(
\vec{\chi}_{\textsub{m}, i - 1}^{(\ell)} +
\vec{\chi}_{\textsub{m}, i}^{(\ell)}
)
\)
because half of the messages are from VN group \(i - 1\) and the other half are from VN group \(i\) (see Fig.~\ref{fig:SCGLDPCCode}).
At the CNs, the CN update is performed and the CNs of group \(i\) return messages with the probabilities
\(
\operatorname{\chi}_{\textsub{rec}}\big(
\vec{\hat{\chi}}_{\textsub{m}, i}^{(\ell)}
\big)
\)
to the VNs. \(\operatorname{\chi}_{\textsub{rec}}\) denotes the DE recursion of the GLDPC ensemble as defined in \eqref{eqn:GLPDC_Rec}.
Then, at VN group \(i\), the probabilities \(\vec{\chi}_{\textsub{m}, i}^{(\ell+1)}\) are derived by averaging over the probabilities of the messages which are sent to this group in the last iteration.
This leads to the recursion
\begin{align} \label{eqn:SC_GLPDC_Rec}
&\vec{\chi}_{\textsub{m}, i}^{(\ell+1)}
=
\frac{1}{2}
\big(
\operatorname{\chi}_{\textsub{rec}}\big(
\vec{\hat{\chi}}_{\textsub{m}, i}^{(\ell)}
\big) +
\operatorname{\chi}_{\textsub{rec}}\big(
\vec{\hat{\chi}}_{\textsub{m}, i+1}^{(\ell)}
\big)
\big)\\
&=
\frac{1}{2}
\left(
\operatorname{\chi}_{\textsub{rec}}\bigg(
\frac{\vec{\chi}_{\textsub{m}, i-1}^{(\ell)} + \vec{\chi}_{\textsub{m}, i}^{(\ell)}}{2}
\bigg) +
\operatorname{\chi}_{\textsub{rec}}\bigg(
\frac{\vec{\chi}_{\textsub{m}, i}^{(\ell)} + \vec{\chi}_{\textsub{m}, i+1}^{(\ell)}}{2}
\bigg)
\right)
\nonumber
,
\end{align}
for \(i = 1, \dotsc, L\).
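A minimal Python sketch of one step of recursion~\eqref{eqn:SC_GLPDC_Rec}, reusing a GLDPC recursion \texttt{chi\_rec} (e.g. a wrapper around the \texttt{de\_step} sketch above):
\begin{verbatim}
def sc_de_step(chi, chi_rec, L):
    # chi: list of (delta, eps) for groups 0..L+1; groups 0 and
    # L+1 are known at the decoder and stay (0, 0)
    avg = lambda a, b: ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    new = list(chi)
    for i in range(1, L + 1):
        lo = chi_rec(avg(chi[i - 1], chi[i]))  # CN group i
        hi = chi_rec(avg(chi[i], chi[i + 1]))  # CN group i+1
        new[i] = avg(lo, hi)
    return new
\end{verbatim}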
\subsection{Calculation of the Decoder Transition Probabilities}
\label{sec:calc_dec_trans_probs}
In this section, we calculate \(\T{\alpha}{\beta}{D^\prime, E^\prime}\) for both decoders.
For this, we only consider \(\Ts{\alpha}{\beta}\) with \(\alpha \neq \beta\).
The required transitions with \(\alpha = \beta\) are given by
\begin{align*}
\T{1}{1}{D^\prime, E^\prime} &=
1 - \T{1}{0}{D^\prime, E^\prime} - \T{1}{\mathord{?}}{D^\prime, E^\prime}, \\
\T{\mathord{?}}{\mathord{?}}{D^\prime, E^\prime} &=
1 - \T{\mathord{?}}{0}{D^\prime, E^\prime} - \T{\mathord{?}}{1}{D^\prime, E^\prime}.
\end{align*}
Since the transition \(\Trans{1}{\mathord{?}}\) does not happen, we only need to compute
\(\Ts{0}{1}\), \(\Ts{\mathord{?}}{1}\), \(\Ts{1}{0}\), and \(\Ts{\mathord{?}}{0}\).
Let \(\ind{\alpha=\mathord{?}}\) denote the indicator function, which returns $1$ if the condition \(\{\alpha=\mathord{?}\}\) is true and $0$ otherwise. For \(E \coloneqq E^\prime + \ind{\alpha=\mathord{?}} \geq d_{\textsub{des}}(t)\), both decoders return the input word unchanged, which results in \(\T{\alpha}{\beta}{D^\prime,E^\prime} = 0\) for \(\alpha \neq \beta\). Hence, in the following, only the cases with \(E < d_{\textsub{des}}(t)\) are considered.
\subsubsection{Weight Distributions}
For the following calculations, we require the weight distributions of the component code \(\mathcal{C}\). %
Let \(A(b_1)\) denote the number of codewords of weight \(b_1\) in \(\mathcal{C}\).
For \(t = 2, 3\), we calculate the weight distributions of the BCH codes by the MacWilliams identity
\cite[Theorem~3.6]{MoonBook} %
from the distributions of the corresponding dual codes, given in \cite[Sec.~6.1.3]{MoonBook}. %
For BCH codes with an unknown weight distribution, we use the asymptotically tight binomial approximation
\[
A(b_1) \approx
\begin{cases}
2^{-\nu t} \binom{n}{b_1} & \text{if \(2t + 1 \leq b_1 \leq n - (2t + 1)\)} \\
1 & \text{if \(b_1 = 0, b_1 = n\)} \\
0 & \text{otherwise},
\end{cases}
\]
where \(n = 2^{\nu} - 1\) \cite[Eq.~(17)]{jianApproachingCapacity2017}. For large \(n\), there exists a bound on the relative error of the approximation of order \(n^{-0.1}\) \cite{sidel1971weight}.
The weight distribution \(A_{\textsub{Ev}}(b_1)\) of the even-weight subcode of a BCH code with weight distribution \(A(b_1)\) is \(A_{\textsub{Ev}}(b_1) = A(b_1)\) if \(b_1\) is even and \(A_{\textsub{Ev}}(b_1) = 0\) otherwise.
The weight distribution \(A_{\textsub{Sh}}(b_1)\) of a shortened code based on a BCH code or an even-weight subcode with weight distribution \(A(b_1)\) and length \(n + 1\) is
\[
A_{\textsub{Sh}}(b_1) = \frac{n + 1 - b_1}{n + 1} A(b_1).
\]
This follows directly from Theorem~\ref{thm:Fixed_Weight_Dis} below because BCH codes and their even-weight subcodes are cyclic.
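For concreteness, these three distributions can be collected in a short Python sketch (the function names are illustrative; exact distributions, where known, would replace \texttt{A\_approx}):
\begin{verbatim}
from math import comb

def A_approx(nu, t):
    # binomial approximation for a (2^nu - 1)-BCH code
    n = 2**nu - 1
    def A(b1):
        if b1 in (0, n):
            return 1.0
        if 2 * t + 1 <= b1 <= n - (2 * t + 1):
            return comb(n, b1) / 2**(nu * t)
        return 0.0
    return A

def A_even(A):
    # even-weight subcode: keep only even weights
    return lambda b1: A(b1) if b1 % 2 == 0 else 0.0

def A_shortened(A, n1):
    # code of length n1 shortened by one position
    return lambda b1: (n1 - b1) / n1 * A(b1)
\end{verbatim}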
Besides the weight distribution, the biweight distribution \cite[Ch.~5. §6]{MacWilliamsSloane} %
is required.%
\footnote{In \cite{MacWilliamsSloane}, the biweight distribution is called ``biweight enumerator''.}
Its coefficients \(B(b_{11}, b_{10}, b_{01}, b_{00})\) count the number of ordered codeword pairs \((\vec{c}_1, \vec{c}_2) \in \mathcal{C}^2\) that have the configuration \((b_{11}, b_{10}, b_{01}, b_{00})\), which measures the overlapping symbols of \(\vec{c}_1\) and \(\vec{c}_2\): An ordered pair \((\vec{c}_1, \vec{c}_2)\) has the configuration \((b_{11}, b_{10}, b_{01}, b_{00})\) if
\(
b_{fg} =
|\{i \in \{1, \dotsc, n\} : c_{1, i} = f, c_{2, i} = g\}|
\)
holds for all \(f, g \in \{0, 1\}\). For instance, a pair has the configuration \((1, 0, n-1, 0)\) if at one position both \(\vec{c}_1\) and \(\vec{c}_2\) have a \(1\), while at the remaining positions \(\vec{c}_1\) has a \(0\) and \(\vec{c}_2\) a \(1\).
Obviously, we have \(b_{11} + b_{10} + b_{01} + b_{00} = n\) and \(B(b_{11}, 0, 0, b_{00}) = A(b_{11})\).
To the best of our knowledge, the biweight distribution of BCH codes is not known; however, for our use case, the approximation described in Appendix~\ref{sec:App_Biweight_Dis} yields good results.
In the following calculations, the symbol at position \(k \in \{1, \dotsc, n\}\) of a codeword is often fixed. In this case, \(A_k^\alpha(b_1)\) denotes the number of codewords \(\vec{c}_1 \in \mathcal{C}\) of weight \(b_1\) with \(c_{1, k} = \alpha\) and
\(B_k^{\alpha \beta}(b_{11}, b_{10}, b_{01}, b_{00})\) is the biweight distribution with \(c_{1, k} = \alpha\) and \(c_{2, k} = \beta\) (\(\alpha, \beta \in \{0, 1\}\)).
For cyclic codes (e.g. BCH codes or their even-weight subcodes), we have the following theorem.
\begin{theorem}
\label{thm:Fixed_Weight_Dis}
For a cyclic code of length \(n\), we have
\begin{align}
\label{eqn:Fixed_Weight_Dis}
A_k^{\alpha}(b_1)
&=
\frac{b_\alpha}{n} A(b_1), \\
\label{eqn:Fixed_Biweight_Dis}
B_k^{\alpha \beta}(b_{11}, b_{10}, b_{01}, b_{00})
&=
\frac{b_{\alpha \beta}}{n}
B(b_{11}, b_{10}, b_{01}, b_{00}).
\end{align}
\end{theorem}
\begin{proof}
See Appendix~\ref{proof:Fixed_Weight_Dis}.
\end{proof}
For shortened codes, which are not, in general, cyclic, we use \eqref{eqn:Fixed_Weight_Dis} and \eqref{eqn:Fixed_Biweight_Dis} as an approximation for \(A_k^{\alpha}\) and \(B_k^{\alpha \beta}\).
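As an illustration, the scaling of Theorem~\ref{thm:Fixed_Weight_Dis} for the weight distribution amounts to a one-line Python helper (a sketch; \(A\) is assumed to be given as a function, e.g. from the sketch above):
\begin{verbatim}
def A_k_alpha(A, n, alpha, b1):
    # A_k^alpha(b1) = (b_alpha / n) * A(b1) for a cyclic code
    if not 0 <= b1 <= n:
        return 0.0
    b_alpha = b1 if alpha == 1 else n - b1
    return b_alpha / n * A(b1)
\end{verbatim}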
\subsubsection{EaED+}\label{sec:Trans_Prob_EaEDPlus}
We now derive \(\T{\alpha}{\beta}{D^\prime, E^\prime}\) for the EaED+ %
based on \cite{jianApproachingCapacity2017} and \cite[Sec.~3.7.2]{MoonBook}. %
Consider a random experiment in which an error pattern \(\vec{e}\) is chosen from
\[
\Omega \coloneqq
\{
\vec{e} \in \{0, \que, 1\}^n : %
\text{%
\(e_k = \alpha\) and
\(\Error(D^\prime, E^\prime)\)%
}
\}
\]
uniformly at random.
Let \(\mathcal{M} \subset \Omega\) be the subset that contains only the error patterns \(\vec{e}\) whose decoding result \(\vec{w} \coloneqq \DF_{\textsub{EaED+}}(\vec{e})\) fulfills \(w_k = \beta\).
Then, the transition probability can be calculated through
\(
\T{\alpha}{\beta}{D^\prime, E^\prime} = |\mathcal{M}| / |\Omega|
\),
where \(|\Omega| = \binom{n - 1}{D^\prime, E^\prime}\).
Since \(\alpha \neq \beta\), \(\mathcal{M}\) contains exactly those error patterns of \(\Omega\) that lie in a sphere \(\mathcal{S}_t^3(\vec{c})\) of a codeword
\(
\vec{c} \in \mathcal{C}_k^\beta
\coloneqq \{\vec{c} \in \mathcal{C} : c_k = \beta\}
\).%
To count these error patterns, we consider a codeword \(\vec{c} \in \mathcal{C}_k^\beta\) and an error pattern \(\vec{e} \in \Omega\), as shown in Fig.~\ref{fig:EaED_Plus_Derivation}.
For both, the symbol at position \(k\) is fixed: \(c_k = \beta\) and \(e_k = \alpha\).
\begin{figure}
\centering
\includegraphics{img/EaED_Plus_Derivation}
\caption{Schematic illustration of the variables in the derivation of the transition probabilities of the EaED+: The symbols of \(\vec{e}\) at \(\compl{k}\) are divided into groups at the \(1\)s of \(\vec{c}\) (1-coordinates) and into groups at the \(0\)s (0-coordinates).}
\label{fig:EaED_Plus_Derivation}
\end{figure}
At the remaining positions \(\compl{k}\), \(\vec{e}\) has \(E^\prime\) erased positions. In addition, let \(\Delta^\prime\) of the unerased positions differ from \(\vec{c}\). We call these positions ``differences''.
For \(\vec{e} \in \mathcal{S}_t^3(\vec{c})\), \(\Delta^\prime\) must be in the range of
\[
0 \leq \Delta^\prime \leq \Delta^\prime_{\textsub{max}}
\coloneqq
\left\lfloor
\frac{d_{\textsub{des}}(t) - E^\prime - 1 - \ind{\alpha=\mathord{?}}}{2}
\right\rfloor
- \ind{\alpha \neq \mathord{?}}.
\]
Moreover, let \(\vec{e}\) have \(b_{1\mathord{?}}\) erasures and \(b_{10}\) differences at the \(1\)-coordinates of \(\vec{c}\) and the remaining \(E^\prime - b_{1\mathord{?}}\) erasures and \(\Delta^\prime - b_{10}\) differences at the \(0\)-coordinates.
Then, since \(\vec{e}\) must have \(D^\prime\) \(1\)s at \(\compl{k}\), the weight \(b_1\) of \(\vec{c}\) at \(\compl{k}\) must be
\[
b_1 = b_1(\Delta^\prime, b_{1\mathord{?}}, b_{10}) \coloneqq D^\prime - \Delta^\prime + b_{1\mathord{?}} + 2 b_{10}.
\]
There are \(A_k^\beta(b_1 + \ind{\beta=1})\) codewords in \(\mathcal{C}_k^\beta\) whose weight at \(\compl{k}\) is \(b_1\). For each such codeword, there are
\[
\Theta(\Delta^\prime, b_{1\mathord{?}}, b_{10}, b_1) \coloneqq
\binom{b_1}{b_{1\mathord{?}}, b_{10}}
\binom{n - 1 - b_1}{\Delta^\prime - b_{10}, E^\prime - b_{1\mathord{?}}}
\]
different error patterns \(\vec{e}\) whose erasures and differences at \(\compl{k}\) are distributed as defined above by \(\Delta^\prime\), \(b_{1\mathord{?}}\) and \(b_{10}\).
By summing over all possible combinations of \(\Delta^\prime\), \(b_{1\mathord{?}}\) and \(b_{10}\), we obtain
\begin{multline*}
|\mathcal{M}|
=
\sum_{\Delta^\prime = 0}^{\Delta^\prime_{\textsub{max}}}
\sum_{b_{10} = 0}^{\Delta^\prime}
\sum_{b_{1\mathord{?}} = 0}^{E^\prime} \Big(
A_k^\beta\big(b_1(\Delta^\prime, b_{1\mathord{?}}, b_{10}) + \ind{\beta=1}\big)\\
\Theta\big(\Delta^\prime, b_{1\mathord{?}}, b_{10}, b_1(\Delta^\prime, b_{1\mathord{?}}, b_{10})\big)
\Big)
,
\end{multline*}
where we use the convention that \(A_k^\beta(b_1) = 0\) if \(b_1 < 0\) or \(b_1 > n\) and \(\binom{n}{k_1, k_2} = 0\) if \(n < 0\), \(k_1 < 0\), \(k_2 < 0\) or \(k_1 + k_2 > n\).
Note that no error pattern is counted twice as all spheres \(\mathcal{S}_t^3(\vec{c})\) are disjoint, which is an implication of Theorem~\ref{thm:EaED_correcting_capability}.
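For concreteness, the counting above can be realized by a short triple loop. The following Python sketch is one possible realization; \texttt{A\_k(beta, w)} is assumed to return \(A_k^\beta(w)\) (e.g. via the \texttt{A\_k\_alpha} sketch after Theorem~\ref{thm:Fixed_Weight_Dis}), and \texttt{multinom} realizes the zero conventions stated above.
\begin{verbatim}
from math import comb

def multinom(n, k1, k2):
    # trinomial coefficient with the zero conventions of the text
    if n < 0 or k1 < 0 or k2 < 0 or k1 + k2 > n:
        return 0
    return comb(n, k1) * comb(n - k1, k2)

def trans_prob_eaed_plus(alpha, beta, Dp, Ep, n, d_des, A_k):
    # T_{alpha -> beta}(D', E') of the EaED+ for alpha != beta
    ind_q = 1 if alpha == '?' else 0
    d_max = (d_des - Ep - 1 - ind_q) // 2 - (1 - ind_q)
    M = 0.0
    for Delta in range(d_max + 1):
        for b10 in range(Delta + 1):
            for b1q in range(Ep + 1):
                b1 = Dp - Delta + b1q + 2 * b10
                theta = (multinom(b1, b1q, b10)
                         * multinom(n - 1 - b1, Delta - b10, Ep - b1q))
                if theta:
                    M += A_k(beta, b1 + (1 if beta == 1 else 0)) * theta
    return M / multinom(n - 1, Dp, Ep)
\end{verbatim}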
\subsubsection{EaED}
The derivation of \(\T{\alpha}{\beta}{D^\prime, E^\prime}\) of the EaED is based on the same principle as the derivation for the EaED+ above. It is described in Appendix~\ref{sec:Decoder_Trans_Probs_EaED}.
\subsection{Noise Threshold}
We use the DE recursion of \(\vec{\chi}_{\textsub{m}}^{(\ell)}\) to evaluate the performance of the code over the channel \((E_{\textsub{s}} / N_0, T)\).
\subsubsection{GLDPC Ensemble}
We first focus on the GLDPC ensemble.
First, the channel transition probabilities \(\vec{\chi}_{\textsub{c}}\) of \((E_{\textsub{s}} / N_0, T)\) are calculated via \eqref{eqn:Channel_Trans_Probs}.
Then, the recursion \(\chi_{\textsub{rec}}\) is applied \(\ell\) times to \(\vec{\chi}_{\textsub{m}}^{(1)} = \vec{\chi}_{\textsub{c}}\) resulting in
\(
\vec{\chi}_{\textsub{m}}^{(\ell + 1)}
= (\delta_{\textsub{m}}^{(\ell + 1)}, \epsilon_{\textsub{m}}^{(\ell + 1)})
\).
The bit error probability after \(\ell\) decoding iterations is given by
\begin{equation}\label{eqn:Def_BER}
\phantom{.}
\rho^{(\ell)}\left(E_{\textsub{s}} / N_0, T\right) \coloneqq
\delta_{\textsub{m}}^{(\ell + 1)} + \frac{1}{2} \epsilon_{\textsub{m}}^{(\ell + 1)}
.
\end{equation}
\(\rho^{(\ell)}\) is used to define the noise threshold
\begin{equation} \label{eqn:noise_threshold}
\left(\frac{E_{\textsub{s}}}{N_0}\right)^{\mathclap{\ast}}(T)
\coloneqq
\inf
\bigg\{
\frac{E_{\textsub{s}}}{N_0} \geq 0:
\lim_{\ell \to \infty}
\rho^{(\ell)}\left(\frac{E_{\textsub{s}}}{N_0}, T\right)
= 0
\bigg\}
\end{equation}
as a performance measure of the channel~\cite{RichardsonCapa}. %
For this definition, we assume that \(\rho^{(\ell)}\left(E_{\textsub{s}} / N_0, T\right)\) is a monotonically decreasing function in \(E_{\textsub{s}} / N_0\).
\subsubsection{SC-GLDPC Ensemble}
For the SC-GLDPC ensemble, only the first \(32\) groups of VNs are considered to keep the computational effort of the DE manageable.
Their error and erasure probabilities are initialized with
\(\vec{\chi}_{\textsub{m}, 0}^{(1)} = (0, 0)\) and \(\vec{\chi}_{\textsub{m}, i}^{(1)} = \vec{\chi}_{\textsub{c}}\) for \(i > 0\). Then, recursion~\eqref{eqn:SC_GLPDC_Rec} is applied \(\ell\) times to \(\vec{\chi}_{\textsub{m}, i}^{(1)}\), resulting in \(\vec{\chi}_{\textsub{m}, i}^{(\ell + 1)}\). To calculate \(\rho^{(\ell)}\), the error and erasure probabilities of VN groups \(i = 1\) to \(10\) are averaged to \(
\vec{\chi}_{\textsub{m}}^{(\ell + 1)}
= (\delta_{\textsub{m}}^{(\ell + 1)}, \epsilon_{\textsub{m}}^{(\ell + 1)})
\).
Then, \(\rho^{(\ell)}\) and the noise threshold are determined by \eqref{eqn:Def_BER} and \eqref{eqn:noise_threshold} with \(\vec{\chi}_{\textsub{m}}^{(\ell + 1)}\).
We limit the calculation of \(\rho^{(\ell)}\) to the first \(10\) groups to reduce the computational effort. If the bit error probability of the first \(10\) groups converges to \(0\), it can be assumed that the bit error probability of the following groups will also converge to \(0\). Furthermore, this limitation justifies considering only the first \(32\) groups in the DE: the following groups would have only a negligible effect on the performance of the first \(10\) groups because they are too far away.
\subsubsection{Numerical Estimation}
For the numerical estimation of the limit in \eqref{eqn:noise_threshold}, the recursion is applied until the change of \(\rho^{(\ell)}\) in one iteration is less than \num{e-12}. The infimum of the set in \eqref{eqn:noise_threshold} is calculated by a binary search, which searches for the minimal \(E_{\textsub{s}} / N_0\) with \(\lim_{\ell \to \infty} \rho^{(\ell)}(E_{\textsub{s}} / N_0, T) < \num{e-10}\).
We use \(\num{e-10}\) to avoid numerical instabilities, which occurred for lower error probabilities.
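A minimal Python sketch of this estimation, relying on the monotonicity assumption above; the function \texttt{rho\_limit(esn0, T)} (assumed given) iterates the DE recursion until the change of \(\rho^{(\ell)}\) drops below \num{e-12} and returns the limit, and the search interval in \si{\dB} is illustrative:
\begin{verbatim}
def noise_threshold(T, rho_limit, lo=0.0, hi=20.0, tol=1e-3):
    # binary search for the smallest Es/N0 (in dB) whose limiting
    # bit error probability is numerically zero (< 1e-10)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if rho_limit(mid, T) < 1e-10:
            hi = mid   # DE converges: threshold is at most mid
        else:
            lo = mid   # DE does not converge: threshold is above mid
    return hi
\end{verbatim}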
\section{Results}\label{sec:results}
\subsection{Theoretical Results}
We evaluated the noise threshold \((E_{\textsub{s}} / N_0)^{\ast}(T)\) numerically for different \(T\) using the DE analysis based on either the GLDPC or the SC-GLDPC ensemble. The result of a product code (GLDPC ensemble) of a \((511, 484, 3)\)-BCH code is shown in Fig.~\ref{plot:Noise_Thr_DE}.
\begin{figure}
\centering
\includegraphics{img/noiseThr_T_Product}
\caption{Noise thresholds calculated via DE for the \((511, 484, 3)\)-BCH product code. The dotted line marks the noise threshold of HDD.}
\label{plot:Noise_Thr_DE}
\end{figure}
The dotted line marks the performance of HDD (\(T=0\)) for the EaED and EaED+.
The threshold of the EaED has a minimum at \(T \neq 0\), i.e., the EaED performs better than HDD. To quantify the performance increase of the EaED compared to HDD, we define the optimal \(T\) by
\begin{align}
T_{\textsub{opt}} &\coloneqq \argmin_{T \geq 0}\left\{ (E_{\textsub{s}} / N_0)^{\ast}(T) \right\} \label{eqn:topt}\\
\intertext{and the decrease in \((E_{\textsub{s}} / N_0)^{\ast}\) at \(T_{\textsub{opt}}\) compared to HDD by the predicted gain}
\Delta(\lEsNO)^{\ast} &\coloneqq (E_{\textsub{s}} / N_0)^{\ast}(0) - (E_{\textsub{s}} / N_0)^{\ast}(T_{\textsub{opt}}).\label{eqn:gainTheo}
\end{align}
For this code, we get for the EaED performance:
\(T_{\textsub{opt}} = \num[round-mode = places, round-precision=3]{0.056935}\)
and \(\Delta(\lEsNO)^{\ast} = \SI[round-mode = places, round-precision=3]{0.094738}{\dB}\).
However, the EaED+ has its minimum noise threshold at \(T=0\): for this decoder, error-and-erasure decoding results in a worse performance, and erasures are not beneficial. %
One explanation for this behavior is as follows: The errors and erasures of a component code can be corrected by the EaED+ if and only if \(2 D + E < d_{\textsub{des}}(t)\) is fulfilled. For \(T > 0\), because of the AWGN, more correctly than incorrectly received bits are mapped to erasures.
Hence, on average, \(2 D + E\) for \(T > 0\) can exceed the value of \(2 D\) at \(T=0\), which results in a performance decrease. The EaED, on the other hand, can also correct some error patterns outside these spheres.
For larger values of \(T\), the noise threshold increases significantly for both decoders. The reason for the increase is that for large \(T\), many correctly received symbols are mapped to erasures, which results in a loss of information.
\subsubsection{Parameter Analysis BCH Code}
\begin{figure*}[tb]
\captionsetup[subfigure]{justification=justified,singlelinecheck=false}
\centering
\subfloat[Results of the product and staircase codes of a usual and shortened BCH code, respectively.\label{plot:parameter_not_even}]{%
\includegraphics{img/parameter}
}\\
\subfloat[Results of the product codes of a usual BCH code or even-weight subcode.\label{plot:parameter_even}]{%
\includegraphics{img/parameter_even.eps}
}
\caption{Results of the parameter analysis: The predicted noise threshold gain \(\Delta(\lEsNO)^{\ast}\) that the EaED achieves compared to HDD is plotted on the left and the corresponding \(T_{\textsub{opt}}\) on the right.
The component code is an \((n, k, t)\)-BCH code with \(n \in \{63, 127, 255, 511\}\) and \(t \in \{2, 3, 4\}\) or its even-weight subcode or shortened code.
The dashed curves are the results of the capacity analysis. They mark the maximal achievable predicted gain in theory and the corresponding \(T_{\textsub{opt}}\) (Sec.~\ref{sec:channel}).
}
\label{plot:parameter}
\end{figure*}
We now analyze the predicted gain \(\Delta(\lEsNO)^{\ast}\) for different component codes. We limit this analysis to the EaED as this decoder is the most relevant in practice.
Figure~\ref{plot:parameter}-\subref{plot:parameter_not_even} shows \(\Delta(\lEsNO)^{\ast}\) for several product and staircase codes (SC-GLDPC ensemble) of a BCH and shortened BCH code, respectively, plotted as a function of their rates.
The corresponding \(T_{\textsub{opt}}\) %
is shown on the right of Fig.~\ref{plot:parameter}-\subref{plot:parameter_not_even}.
The dashed curves in Fig.~\ref{plot:parameter} are the result of the capacity analysis
(Sec.~\ref{sec:channel}) and show the capacity gain, i.e. the maximal predicted gain that could be expected if error-and-erasure decoding is used instead of~HDD. %
The predicted gain increases with decreasing length \(n\) of the BCH code (decreasing rate in the diagram). For instance, the predicted gain increases from less than \SI{0.19}{\dB} for the product and staircase codes of BCH codes with \(n = 511\) to \SI{0.65}{\dB} for the staircase code of a \((63, 39, 4)\)-BCH code.
A possible reason is that, for fixed \(t\), the number of correctable erasures per bit decreases with \(n\) according to Theorem~\ref{thm:EaED_correcting_capability}.
Furthermore, the predicted gain increases with \(t\), and all staircase codes achieve a larger predicted gain than the product code of the same component code.
\subsubsection{Parameter Analysis Even-Weight Subcode}
Figure~\ref{plot:parameter}-\subref{plot:parameter_even} shows the predicted gain \(\Delta(\lEsNO)^{\ast}\) and \(T_{\textsub{opt}}\) of product codes that are constructed from an even-weight subcode (circles) compared to the results of Fig.~\ref{plot:parameter}-\subref{plot:parameter_not_even} (crosses).
For the sake of clarity, the results of the staircase codes are omitted as they are similar to the ones of the product codes.
The use of the even-weight subcode leads to an increase in the predicted gain, in particular for smaller values of \(t\).
This increase can be motivated using Theorem~\ref{thm:EaED_correcting_capability}: a word is corrected for \(2 D + E \leq 2 t\) if a BCH code is used and for \(2 D + E \leq 2 t + 1\) if its even-weight subcode is used.
Hence, using an even-weight subcode enables the correction of one extra erasure.
This explains why even-weight subcodes benefit more from error-and-erasure coding.
Furthermore, it explains the large increase for \(t=2\): Because of the small error-correcting capability, the extra erasure has a greater impact than for larger~\(t\).
\subsection{Simulation}
To check whether the theoretical results of the DE are consistent with the actual performance, we simulated product and staircase codes.
In this section, we define the noise threshold \((E_{\textsub{s}} / N_0)^{\ast\ast}\) as that \(E_{\textsub{s}} / N_0\) for which the output \(\mathrm{BER}\) is equal to \(\mathrm{BER}_{\textsub{target}} \coloneqq \num{e-4}\) after \num{20} decoding iterations.
The simulated gain and \(T_{\textsub{opt}}\) are defined in the same way as the predicted gain and \(T_{\textsub{opt}}\) in \eqref{eqn:gainTheo} and \eqref{eqn:topt}.
In the simulation, the points of the \(\mathrm{BER}\)-\(E_{\textsub{s}} / N_0\)-curve are estimated by a Monte Carlo method, along with a binary search to determine the intersection of the curve with \(\mathrm{BER}_{\textsub{target}}\) at \((E_{\textsub{s}} / N_0)^{\ast\ast}\).
During the binary search, the number of trials is dynamically adapted to ensure that despite the randomness of the simulations, the estimated \(\mathrm{BER}\) is greater or smaller than \(\mathrm{BER}_{\textsub{target}}\) with sufficiently large confidence.
Figure~\ref{plot:simulation} compares the simulation results of a product code of the \((511, 484, 3)\)-BCH code with the results of the DE analysis of Fig.~\ref{plot:Noise_Thr_DE}.
For each decoder, we did two simulations, one with EMP and one with IMP.
\begin{figure*}[tb]
\centering
\subfloat[EaED]{%
\includegraphics{img/noiseThr_T_Sim_Theo_Product_1}%
}%
\qquad
\subfloat[EaED+]{%
\includegraphics{img/noiseThr_T_Sim_Theo_Product_2}%
}%
\caption{
Simulation results of a product code of the \((511, 484, 3)\)-BCH code compared with the results of the DE analysis.
The error bars of the EMP curve are the remaining search interval after the termination of the binary search. The error bars of the IMP results were omitted because they are negligible.
}
\label{plot:simulation}
\end{figure*}
The plots show an approximately constant gap between the predicted thresholds and the simulated \((E_{\textsub{s}} / N_0)^{\ast\ast}\) with EMP decoding; the gap decreases slightly with \(T\) and lies in the range of \SIrange{0.050}{0.058}{\dB}.
The gap is due to finite length effects because the DE analysis considers GLDPC graphs of infinite size in contrast to the finite size of the simulated product code.
Since the gap is approximately constant over \(T\), the predicted gain and \(T_{\textsub{opt}}\) of the DE analysis match those observed in practice.
However, for both decoders, the curve ``Simulation IMP'' bears no resemblance to the DE prediction.
Hence, an estimation of the simulated gain of error-and-erasure coding with DE is not possible if IMP is used.
Nevertheless, the IMP performance of the EaED+ is quite surprising:
Although this decoder achieves no simulated gain using EMP decoding, it achieves a simulated gain of around \SI{0.106}{\dB} at \(T_{\textsub{opt}} = \num{0.04}\) using IMP decoding.
It outperforms IMP decoding of the EaED, which has only a negligible simulated gain.
Furthermore, we simulated the product code of the \((63, 45, 3)\)-BCH code that is decoded by the EaED (results not shown).
In this case, the DE analysis underestimates the simulated gain of the EMP simulation by \SI{21}{\percent}, while \(T_{\textsub{opt}}\) is calculated correctly.
The difference between predicted and simulated gain may result from finite length effects and the approximation of the biweight distribution.%
Figure \ref{plot:ber_curves_simulation} shows the simulated BER curves of a product code of the \((511, 484, 3)\)-BCH code that is decoded by both EaED and EaED+ with \(20\) decoding iterations using either EMP or IMP. For \(T\), we choose \(0\) or the \(T_{\textsub{opt}}\) of the respective decoder.
We observe that error-and-erasure decoding does not lead to early error floors and that the gains are consistent with the DE results.
\begin{figure}[tb]
\centering
\includegraphics{img/ber_curves_simulation}
\caption{Simulated BER curves for a product code of the \((511, 484, 3)\)-BCH code using EMP and IMP decoding, respectively.
\label{plot:ber_curves_simulation}}
\end{figure}
\section{Conclusions \& Outlook}\label{sec:conclusion}
We analyzed the error-and-erasure decoding of product and staircase codes based on BCH codes or their even-weight subcodes.
For the analysis, we formulated DE on the corresponding GLDPC or SC-GLDPC ensembles that are decoded with EMP.
We have shown that error-and-erasure decoding achieves a gain in \(E_{\textsub{s}} / N_0\) compared to HDD, whereby the predicted gain is larger for lower rate codes and if an even-weight subcode is used as a component code. Finally, we have verified the results by simulating a product code using both EMP decoding and the simpler IMP decoding, where we also observed gains when varying the component code decoders.
In practice, instead of using the even-weight subcodes as component codes, BCH codes are often extended by a parity check bit.
Since these codes also have an even design distance, we expect their predicted gains to be comparable to the results of the even-weight subcodes.
A detailed analysis of extended BCH codes is the subject of future work.
\appendices
\section{Proof of Theorem~\ref{thm:EaED_correcting_capability}}
\label{proof:EaED_correcting_capability}
\begin{proof}
Based on \cite[Sec.~3.8.1]{MoonBook}: For \(\vec{y}_1\), the EaED assigns to the erased coordinates of \(\vec{y}\) the random vector \(\vec{p}\) and for \(\vec{y}_2\), the inverted vector \(\xor{\vec{p}}\).
Because of this assignment, \(\vec{y}_1\) has \(D_1 \leq E\) errors in addition to the \(D\) errors of \(\vec{y}\).
At the erased coordinates, \(\vec{y}_2\) has errors where \(\vec{y}_1\) has no errors because, for \(\vec{y}_2\), the inverted vector \(\xor{\vec{p}}\) is inserted. Hence, \(\vec{y}_2\) has \(D_2 = E - D_1\) errors besides the \(D\) errors.
Therefore, \(D_i \leq E / 2\) holds for at least one \(\vec{y}_i\) with \(i \in \{1, 2\}\). The total number of errors of this \(\vec{y}_i\) fulfills
\[
\phantom{.}
D + D_i \leq D + \frac{E}{2}
\stackrel{\text{(a)}}{<}%
t + 1
\Rightarrow
D + D_i \leq t
,
\]
where (a) holds because of \eqref{eqn:Condition_EaED_correcting_capability} and \(d_{\textsub{des}}(t) \leq 2 t + 2\) for the defined component codes.
Hence, the BDD decodes at least one \(\vec{y}_i\) to the correct codeword.
It remains to prove that if both results \(\vec{w}_1\) and \(\vec{w}_2\) are codewords, the EaED selects the correct codeword. A wrong selection is only possible if one decoding result is not correct. Let \(\vec{w}_c\) be the correct and \(\vec{w}_e\) the erroneous result of \(\vec{w}_1\) and \(\vec{w}_2\). Suppose that \(\vec{w}_e\) is falsely selected. Then the following inequality contradicts \eqref{eqn:Condition_EaED_correcting_capability}, where \(\dnE{\vec{y}}\) and \(\dE{\vec{y}}\) denote the distances at the unerased and erased coordinates of \(\vec{y}\), respectively:
\begin{align*}
&d_{\textsub{des}}(t)
\leq
d_{\textsub{min}}
\leq
\dH(\vec{w}_e, \vec{w}_c) \\
&= \dnE{\vec{y}}(\vec{w}_e, \vec{w}_c) + \dE{\vec{y}}(\vec{w}_e, \vec{w}_c)
\smash{\stackrel{\text{(a)}}{\leq}}
\dnE{\vec{y}}(\vec{w}_e, \vec{w}_c) + E \\
&\stackrel{\text{(b)}}{\leq}
\dnE{\vec{y}}(\vec{w}_e, \vec{y}) + \dnE{\vec{y}}(\vec{y}, \vec{w}_c) + E
\stackrel{\text{(c)}}{\leq}
2 D + E\,,
\end{align*}
where (a) holds because the distance of two words of \(E\) coordinates is at most \(E\).
(b) is the triangle inequality and (c) uses that \(\dnE{\vec{y}}(\vec{y}, \vec{w}_c) = D\) because \(\vec{y}\) has \(D\) errors at the unerased coordinates. Moreover, according to the assumption, \(\dnE{\vec{y}}(\vec{w}_e, \vec{y}) \leq \dnE{\vec{y}}(\vec{y}, \vec{w}_c) = D\), as otherwise \(\vec{w}_e\) would not have been selected.
\end{proof}
\section{Proof of Theorem~\ref{thm:Performance_CW_Independent}}
\label{proof:Performance_CW_Independent}
\begin{proof}
Let \(\oplus\colon\,\{0, 1\}^n \times \{0, \que, 1\}^n \to \{0, \que, 1\}^n\) be an operator that computes, for each component,
\[
[\vec{a} \oplus \vec{b}]_i \coloneqq
\begin{cases}
a_i + b_i & \text{if \(b_i \neq \mathord{?}\)} \\
\mathord{?} & \text{otherwise}.
\end{cases}
\]
Then, it is easy to see that the BDD and the EaED+ fulfill the symmetry condition
\begin{equation}
\label{eqn:Symmetrie_Condition}
\phantom{,}
\DF(\vec{c} \oplus \vec{e}) = \vec{c} \oplus \DF(\vec{e})
\quad
\text{for all \(\vec{c} \in \mathcal{C}\) and \(\vec{e}\)},
\end{equation}
where, in the case of the BDD, we define \(\vec{c} \oplus \text{fail} := \text{fail}\) and \(\vec{e}\) is an error-and-erasure pattern (\(\vec{e} \in \{0, \que, 1\}^n\)).
For the EaED, we interpret words of \(\{0, \que, 1\}^n\) as random variables taking values in \(\{0, \que, 1\}^n\), so that \(\DF_{\textsub{EaED}}\) is a function that transforms random variables.
Then, the EaED fulfills the symmetry condition
\begin{equation}
\label{eqn:Symmetrie_Condition_EaED}
\phantom{,}
\DF_{\textsub{EaED}}(\vec{c} \oplus \vec{e}) \mathrel{\overset{d}{=}} \vec{c} \oplus \DF_{\textsub{EaED}}(\vec{e})
\quad
\text{for all \(\vec{c} \in \mathcal{C}\) and \(\vec{e}\)},
\end{equation}
where \(\vec{e}\) is an arbitrary random variable on \(\{0, \que, 1\}^n\) and ``\(\mathrel{\overset{d}{=}}\)'' means that the random variables are equal in distribution.
To prove this condition, we require an alternative description of the EaED. Let \(\Lambda(\vec{w}_1, \vec{w}_2, \vec{y})\) be the function that determines the decoding result of \(\vec{y}\) from \(\vec{w}_1\) and \(\vec{w}_2\) in decoding step~\ref{item:Decision_EaED} of the EaED (Sec.~\ref{sec:Decoder}).
For the sake of clarity, we decompose \(\vec{y}\) into an unerased and an erased component: \(\vec{y} = [\vec{y}_{\compl{E}}, \vec{\mathord{?}}]\). Using \(\Lambda\), we get
\begin{equation*}
\DF_{\textsub{EaED}}(\vec{y})
=
\Lambda(\DF_{\textsub{BDD}}([\vec{y}_{\compl{E}}, \vec{p}]), \DF_{\textsub{BDD}}([\vec{y}_{\compl{E}}, \xor{\vec{p}}]), \vec{y}),
\end{equation*}
where the erased coordinates of \(\vec{y}\) are replaced by \(\vec{p}\), which is a uniform random variable on \(\{0, 1\}^{E(\vec{y})}\).
Let \(\vec{c}\in\mathcal{C}\) be a codeword and \(\vec{e}\) be an arbitrary random variable on \(\{0, \que, 1\}^n\). We decompose \(\vec{c}\) and \(\vec{e}\) into the bits at the unerased and erased coordinates of \(\vec{e}\) giving
\(\vec{c} = [\vec{c}_{\compl{E}}, \vec{c}_{E}]\) and \(\vec{e} = [\vec{e}_{\compl{E}}, \vec{\mathord{?}}]\). By doing so, we get
\begin{align*}
\label{eqn:symmetry_cond_proof}
&\DF_{\textsub{EaED}}(\vec{c} \oplus \vec{e})
=
\DF_{\textsub{EaED}}([\vec{c}_{\compl{E}} + \vec{e}_{\compl{E}}, \vec{?}]) \nonumber \\
&=
\Lambda\left(
\DF_{\textsub{BDD}}([\vec{c}_{\compl{E}} + \vec{e}_{\compl{E}}, \vec{p}]),
\DF_{\textsub{BDD}}([\vec{c}_{\compl{E}} + \vec{e}_{\compl{E}}, \xor{\vec{p}}]),
\vec{c} \oplus \vec{e}\right) \nonumber \\
&\stackrel{\text{(a)}}{=}
\begin{aligned}[t]
\Lambda(
&\vec{c} \oplus \DF_{\textsub{BDD}}([\vec{e}_{\compl{E}}, \vec{c}_{E} + \vec{p}]),\\
&\vec{c} \oplus \DF_{\textsub{BDD}}([\vec{e}_{\compl{E}}, \vec{c}_{E} + \xor{\vec{p}}]),
\vec{c} \oplus \vec{e})
\end{aligned}\\
&=
\vec{c} \oplus
\Lambda\left(
\DF_{\textsub{BDD}}([\vec{e}_{\compl{E}}, \vec{c}_{E} + \vec{p}]),
\DF_{\textsub{BDD}}([\vec{e}_{\compl{E}}, \vec{c}_{E} + \xor{\vec{p}}]),
\vec{e}\right), \nonumber
\end{align*}
where (a) uses the symmetry condition of the BDD.
Finally, we substitute \(\vec{p}_2 \coloneqq \vec{c}_E + \vec{p}\) and \(\xor{\vec{p}}_2 = \vec{c}_E + \xor{\vec{p}}\) resulting in
\begin{align*}
\DF_{\textsub{EaED}}(\vec{c} \oplus \vec{e})
&=
\vec{c} \oplus
\Lambda\left(
\DF_{\textsub{BDD}}([\vec{e}_{\compl{E}}, \vec{p}_2]),
\DF_{\textsub{BDD}}([\vec{e}_{\compl{E}}, \xor{\vec{p}}_2]),
\vec{e}\right) \\
&\mathrel{\overset{d}{=}} \vec{c} \oplus \DF_{\textsub{EaED}}(\vec{e}),
\end{align*}
because \(\vec{p}_2\), just as \(\vec{p}\), is a uniform random variable on \(\{0, 1\}^{E(\vec{e})}\).
This proves the symmetry condition of the EaED.
Using the respective symmetry condition \eqref{eqn:Symmetrie_Condition} or \eqref{eqn:Symmetrie_Condition_EaED}, it can be shown, similar to \cite{RichardsonCapa}, that the expected number of errors and erasures of the whole GLDPC decoder is independent of the transmitted codeword.
\end{proof}
\section{Alternative description of EMP}
\label{sec:DE_EMP}
It is not immediately obvious that DE is allowed for the proposed message passing algorithm, as the channel input values are used in the CN update. Therefore, we present an alternative description of the same message passing algorithm in which the decoding of the component codes is moved from the CN to the VN update. This allows the insertion of the channel input value at the VN similar to the approach in \cite{jianApproachingCapacity2017}.
The message passing starts with the initialization \(\nu_{i, j}^{(1)} = \nu_{i, j^\prime}^{(1)} = r_i\) of the outgoing VN messages. During the CN update, each CN \(j\) combines the incoming messages into a vector. For each VN \(i\), connected at socket \(k=\sigma^{-1}(i) \in \{1, \dotsc, n\}\), it replaces the \(k\)-th symbol by a blank \(\square\) and returns the vector to the VN:
\[
\phantom{.}
\tilde{\nu}_{i, j}^{(\ell)}
\coloneqq
(
\nu_{\sigma_j(1), j}^{(\ell)}, \dotsc ,
\nu_{\sigma_j(k-1), j}^{(\ell)}, \square,
\nu_{\sigma_j(k+1), j}^{(\ell)}, \dotsc
, \nu_{\sigma_j(n), j}^{(\ell)}
).
\]
Due to the replacement of the \(k\)-th message, only extrinsic information is passed.
In the VN update, each VN \(i\) receives two messages from its connected CNs \(j\), \(j^\prime\). To calculate the outgoing message for CN \(j\), the VN takes the incoming message of the respective other CN \(j^\prime\) and replaces the blank by its own channel input value \(r_i\) resulting in
\[
\phantom{.}
\vec{y}_{j, \textsub{EMP}, k}^{(\ell)}
\coloneqq
(
\nu_{\sigma_j(1), j}^{(\ell)}, \dotsc ,
\nu_{\sigma_j(k-1), j}^{(\ell)}, r_{i},
\nu_{\sigma_j(k+1), j}^{(\ell)}, \dotsc
, \nu_{\sigma_j(n), j}^{(\ell)}
)
,
\]
which was generated in the original algorithm in the CN update. Then \(\vec{y}_{j, \textsub{EMP}, k}^{(\ell)} \) is decoded and the symbol at the position of the blank \(\square\) is sent to CN \(j^\prime\):
\(\nu_{i, j^\prime}^{(\ell+1)} = \big[\DF_{\textsub{C}}(\vec{y}_{j, \textsub{EMP}, k}^{(\ell)})\big]_k\).
It is easy to see that these VN-to-CN messages are identical to the ones of the original message passing algorithm introduced in Sec.~\ref{sec:background}. This proves that DE can be applied to the original message passing algorithm because only extrinsic information is passed in this scheme.
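A minimal Python sketch of this alternative VN update, where the blank \(\square\) is represented by \texttt{None} and \texttt{decode\_c} stands for \(\DF_{\textsub{C}}\) (both assumptions for illustration):
\begin{verbatim}
def vn_update(msg, r_i, decode_c):
    # msg: CN-to-VN message with one blank (None) at position k
    k = msg.index(None)
    y = list(msg)
    y[k] = r_i                # insert the channel input value
    return decode_c(y)[k]     # forward the symbol at the blank
\end{verbatim}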
\section{Biweight Distribution Approximation}
\label{sec:App_Biweight_Dis}
For a pair \((\vec{c}_1, \vec{c}_2)\) to have the configuration \(\vec{b}_\ast \coloneqq (b_{11}, b_{10}, b_{01}, b_{00})\), \(\vec{c}_1\) must have \(b_1 \coloneqq b_{11} + b_{10}\) \(1\)s and \(b_0 \coloneqq b_{01} + b_{00}\) \(0\)s, which is the case for \(A(b_1)\) codewords. Moreover, the weight of \(\vec{c}_2\) must be \(\we(\vec{c}_2) \coloneqq b_{11} + b_{01}\), and \(\dH(\vec{c}_1, \vec{c}_2) = b_{10} + b_{01}\) must hold.
For \(B\), we use the approximation
\begin{align*}
&B(b_{11}, b_{10}, b_{01}, b_{00}) \approx A(b_1)\\
&\cdot \begin{cases}
A(\we(\vec{c}_2)) &
\begin{aligned}[c]
&A(\we(\vec{c}_2)) = 0 \\
&\text{or \(\we(\vec{c}_2) \in \{0, n\}\)}
\end{aligned} \\
A(b_{11} + b_{01})
\binom{b_1}{b_{11}} \binom{b_0}{b_{01}}
/ \binom{n}{b_{11} + b_{01}} &
\we(\vec{c}_2) \leq \dH(\vec{c}_1, \vec{c}_2) \\
A(b_{10} + b_{01})
\binom{b_1}{b_{10}} \binom{b_0}{b_{01}}
/ \binom{n}{b_{10} + b_{01}} &
\we(\vec{c}_2) > \dH(\vec{c}_1, \vec{c}_2),
\end{cases}
\end{align*}
which will be motivated in the following.
In the first case, either no valid pair exists, or \(\vec{c}_2\) is the all-zero or the all-one codeword (if it exists). The all-zero or all-one codeword forms, together with each codeword of weight \(b_1\), a pair with configuration \(\vec{b}_\ast\). In these cases, no approximation is necessary, and we have \(A(b_1) A(\we(\vec{c}_2))\) valid pairs.
For the second case, we first consider a fixed codeword \(\vec{c}_1\) from the \(A(b_1)\) codewords of weight \(b_1\). Now, we approximate the number of codewords \(\vec{c}_2\) that form together with this \(\vec{c}_1\) a pair of \(\vec{b}_\ast\).
We know that \(A(\we(\vec{c}_2))\) codewords have the correct weight. Since we have no further information on the code, we assume that each of these codewords is independently and uniformly chosen at random from the set of the binary words of length \(n\) and weight \(\we(\vec{c}_2)\).
Then, the probability that one of these random words has \(b_{11}\) \(1\)s at the \(1\)-coordinates of \(\vec{c}_1\) and \(b_{01}\) \(1\)s at the \(0\)-coordinates is
\(
P = \binom{b_1}{b_{11}} \binom{b_0}{b_{01}} / \binom{n}{b_{11} + b_{01}}
\).
Hence, on average, \(A(\we(\vec{c}_2)) P\) codewords form together with \(\vec{c}_1\) a pair of the configuration \(\vec{b}_\ast\). Since there are \(A(b_1)\) possible codewords for \(\vec{c}_1\), we have \(A(b_1) A(\we(\vec{c}_2)) P\) pairs in total.
In the third case, we count the pairs that have the configuration
\[
\vec{\tilde{b}}_\ast \coloneqq
(\tilde{b}_{11}, \tilde{b}_{10}, \tilde{b}_{01}, \tilde{b}_{00}) =
(b_{10}, b_{11}, b_{01}, b_{00})
\]
using the second case.
Each pair \((\vec{\tilde{c}}_1, \vec{\tilde{c}}_2)\) of \(\vec{\tilde{b}}_\ast\) can be transformed into a pair \((\vec{c}_1, \vec{c}_2)\) of \(\vec{b}_\ast\) by the bijective transformation
\(
(\vec{c}_1, \vec{c}_2) =
(\vec{\tilde{c}}_1, \vec{\tilde{c}}_1 + \vec{\tilde{c}}_2)
\)
due to linearity.
Therefore, the biweight distribution of \(\vec{\tilde{b}}_\ast\) and \(\vec{b}_\ast\) are equal.
We observed that the third approximation yields better results than the second one when \(\we(\vec{c}_2)\) is not too small.
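For concreteness, the following Python sketch evaluates the three cases of this approximation (\(A\) is assumed to be given as a function, e.g. from the weight distribution sketches in Sec.~\ref{sec:density_evolution}):
\begin{verbatim}
from math import comb

def biweight_approx(A, n, b11, b10, b01, b00):
    # approximate B(b11, b10, b01, b00) from the weight distribution A
    b1, b0 = b11 + b10, b01 + b00
    w2 = b11 + b01            # weight of c_2
    d12 = b10 + b01           # Hamming distance d_H(c_1, c_2)
    if A(w2) == 0 or w2 in (0, n):            # first case
        return A(b1) * A(w2)
    if w2 <= d12:                             # second case
        return (A(b1) * A(w2) * comb(b1, b11) * comb(b0, b01)
                / comb(n, w2))
    # third case: count pairs of the swapped configuration
    return (A(b1) * A(d12) * comb(b1, b10) * comb(b0, b01)
            / comb(n, d12))
\end{verbatim}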
\section{Proof of Theorem~\ref{thm:Fixed_Weight_Dis}}
\label{proof:Fixed_Weight_Dis}
\begin{proof}
For the sake of clarity, we use the abbreviation \(\vec{b}_\ast \coloneqq (b_{11}, b_{10}, b_{01}, b_{00})\) in this proof. Let \(\mathcal{K}(\vec{b}_\ast)\) be the set of ordered codeword pairs
\((\vec{c}_1, \vec{c}_2) \in \mathcal{C}^2\) that have the configuration \(\vec{b}_\ast\). Its cardinality is the biweight distribution \(B(\vec{b}_\ast)\).
Let \(\mathcal{K}_k^{\alpha \beta}(\vec{b}_\ast) \subset \mathcal{K}(\vec{b}_\ast)\) be the subset whose pairs \((\vec{c}_1, \vec{c}_2)\) additionally fulfill \(c_{1, k} = \alpha\), \(c_{2, k} = \beta\) with \(k \in \{1, \dotsc, n\}\) and \(\alpha, \beta \in \{0, 1\}\).
Its cardinality is
\(
B_k^{\alpha \beta}(\vec{b}_\ast) = |\mathcal{K}_k^{\alpha \beta}(\vec{b}_\ast)|
\).
First, we show that for cyclic codes, \(B_k^{\alpha \beta}(\vec{b}_\ast)\)
is independent of \(k\). Let \(i, j \in \{1, \dotsc, n\}\), \(i \neq j\), be two different positions.
Consider the function
\[
s\colon \, \mathcal{K}_i^{\alpha \beta}(\vec{b}_\ast) \to \{0, 1\}^n \times \{0, 1\}^n
\]
that cyclically shifts each codeword of a pair \((\vec{c}_1, \vec{c}_2)\) so that the \(i\)-th position is shifted to the \(j\)-th position. Since the code is cyclic, the words of the shifted pair \((\vec{c}_1^s, \vec{c}_2^s) \coloneqq s\big((\vec{c}_1, \vec{c}_2)\big)\) are also codewords. In addition, the shift does not change the configuration \(\vec{b}_\ast\) of the pair, and we have \((\vec{c}_1^s, \vec{c}_2^s) \in \mathcal{K}(\vec{b}_\ast)\).
Furthermore, according to the definition of \(s\), \(c_{1, j}^s = c_{1, i} = \alpha\) and \(c_{2, j}^s = c_{2, i} = \beta\), which implies \((\vec{c}_1^s, \vec{c}_2^s) \in \mathcal{K}_j^{\alpha \beta}(\vec{b}_\ast)\). Hence, \(s\) is an injective function from \(\mathcal{K}_i^{\alpha \beta}(\vec{b}_\ast)\) to \(\mathcal{K}_j^{\alpha \beta}(\vec{b}_\ast)\). Since an injective function from \(\mathcal{K}_j^{\alpha \beta}(\vec{b}_\ast)\) to \(\mathcal{K}_i^{\alpha \beta}(\vec{b}_\ast)\) can be constructed in the same way, we obtain
\(
B_i^{\alpha \beta}(\vec{b}_\ast) = B_j^{\alpha \beta}(\vec{b}_\ast)
\),
which proves the independence of \(B_k^{\alpha \beta}(\vec{b}_\ast)\) from \(k\).
Next, we use this result to prove \eqref{eqn:Fixed_Biweight_Dis}. According to the definition of \(\vec{b}_\ast\), each pair
\((\vec{c}_1, \vec{c}_2) \in \mathcal{K}(\vec{b}_\ast)\) has \(b_{\alpha \beta}\) positions, where \(c_{1, i} = \alpha\) and \(c_{2, i} = \beta\) with \(i \in \{1, \dotsc, n\}\), and therefore, is contained in
\(b_{\alpha \beta}\) sets of \(\{\mathcal{K}_i^{\alpha \beta}(\vec{b}_\ast)\}_i\). Hence, the sum over the cardinalities \(B_i^{\alpha \beta}(\vec{b}_\ast)\) of \(\{\mathcal{K}_i^{\alpha \beta}(\vec{b}_\ast)\}_i\) is
\begin{align}
\sum_{i = 1}^{n} B_i^{\alpha \beta}(\vec{b}_\ast)
&= b_{\alpha \beta} B(\vec{b}_\ast).
\label{eqn:Sum1_Weight_Dis}
\intertext{
Since \(B_i^{\alpha \beta}(\vec{b}_\ast)\) is independent of \(i\), we also have
}
\sum_{i = 1}^{n} B_i^{\alpha \beta}(\vec{b}_\ast)
&= n B_k^{\alpha \beta}(\vec{b}_\ast)
\label{eqn:Sum2_Weight_Dis}
\end{align}
for any \(k \in \{1, \dotsc, n\}\).
Solving \eqref{eqn:Sum1_Weight_Dis} and \eqref{eqn:Sum2_Weight_Dis} for \(B_k^{\alpha \beta}(\vec{b}_\ast)\) proves \eqref{eqn:Fixed_Biweight_Dis} of the theorem.
Equation~\eqref{eqn:Fixed_Weight_Dis} follows directly from \eqref{eqn:Fixed_Biweight_Dis} and the identities
\(A(b_1) = B(b_1, 0, 0, n - b_1)\) and \(A_k^{\alpha}(b_1) = B_k^{\alpha\alpha}(b_1, 0, 0, n - b_1)\).
\end{proof}
\section{Decoder Transition Probabilities EaED}
\label{sec:Decoder_Trans_Probs_EaED}
This section presents the derivation of \(\T{\alpha}{\beta}{D^\prime, E^\prime}\) for the EaED.
In the following, we use \(A \sqcup B\) to denote the union of two disjoint sets \(A\), \(B\) and \(\xor{\beta}\) to denote the negation of a binary value \(\beta\).
For a set of codewords \(A \subset \mathcal{C}\), we define
\(\mathcal{S}_t(A) \coloneqq \bigcup_{\vec{c} \in A} \mathcal{S}_t(\vec{c})\).
\subsection{Random Experiment}
In decoding steps~\ref{item:GenerateY1} and \ref{item:GenerateY2} of the EaED described in Sec.~\ref{sec:Decoder}, the EaED generates from an error pattern \(\vec{e} \in \Omega\) an error pattern pair \((\vec{e}_1, \vec{e}_2)\).
To describe these pairs, let
\(
\Omega_{\textsub{p}}(\alpha_1, \alpha_2) \subset (\{0, 1\}^{n})^2
\)
(\(\alpha_1, \alpha_2 \in \{0, 1\}\)) be the set of ordered binary error pattern pairs \((\vec{e}_1, \vec{e}_2)\) for which the following conditions hold:
\begin{itemize}
\item The distance between \(\vec{e}_1\), \(\vec{e}_2\) at \(\compl{k}\) is
\(\operatorname{d}_{\compl{k}}(\vec{e}_1, \vec{e}_2) = E^\prime\).
\item There are \(D^\prime\) positions of \(\compl{k}\) at which \(\vec{e}_1\), \(\vec{e}_2\) have a \(1\).
\item
\(e_{1, k} = \alpha_1\) and
\(e_{2, k} = \alpha_2\).
\end{itemize}
Then, the EaED generates from \(\vec{e} \in \Omega\) a pair from the set
\[
\phantom{.}
\Omega_{\textsub{p}}
\coloneqq
\begin{cases}
\Omega_{\textsub{p}}(\alpha, \alpha) & \alpha \neq \mathord{?} \\
\Omega_{\textsub{p}}(1, 0) \sqcup \Omega_{\textsub{p}}(0, 1) & \alpha = \mathord{?}.
\end{cases}
\]
It is easy to see that each pair of \(\Omega_{\textsub{p}}\) occurs with the same probability. Hence, \(\T{\alpha}{\beta}{D^\prime, E^\prime}\) can be calculated through
\(
\T{\alpha}{\beta}{D^\prime, E^\prime} = |\mathcal{M}|/|\Omega_{\textsub{p}}|
\),
where \(\mathcal{M} \subset \Omega_{\textsub{p}}\) contains only the pairs \((\vec{e}_1, \vec{e}_2) \in \Omega_{\textsub{p}}\) whose decoding result \(\vec{w}\) fulfills \(w_k = \beta\).
We have
\(
|\Omega_{\textsub{p}}| = |\Omega| \, 2^E
\)
because the \(E \coloneqq E^\prime + \ind{\alpha = \mathord{?}}\) erased positions of \(\vec{e}\) can be filled with \(2^E\) different binary values to generate a pair.
\(|\mathcal{M}|\) will be calculated in the following sections.
\subsection{Decomposition of \(|\mathcal{M}|\)}
The analysis method in Sec.~\ref{sec:Calc_M3} below can only be applied to specific subsets. Therefore, we first decompose \(\mathcal{M}\) into such subsets.
For \(\alpha = \mathord{?}\), the \(k\)-th bits of the pairs of \(\mathcal{M}\) are not fixed. To avoid this, we define
\(
\mathcal{M}(\alpha_1, \alpha_2) \coloneqq
\mathcal{M} \cap \Omega_{\textsub{p}}(\alpha_1, \alpha_2)
\)
for \(\alpha_1, \alpha_2 \in \{0, 1\}\), whose pairs have fixed \(k\)-th bits and get
\begin{equation}\label{eqn:Decomp_M_into_M_alpha1_alpha2}
\mathcal{M} =
\begin{cases}
\mathcal{M}(\alpha, \alpha) & \alpha \neq \mathord{?} \\
\mathcal{M}(1, 0) \sqcup \mathcal{M}(0, 1) & \alpha = \mathord{?}.
\end{cases}
\end{equation}
Below, we will calculate \(|\mathcal{M}(\alpha_1, \alpha_2)|\) for arbitrary \(\alpha_1, \alpha_2 \in \{0, 1\}\). Then, we can obtain \(|\mathcal{M}|\) by \eqref{eqn:Decomp_M_into_M_alpha1_alpha2}.
The following sets \(\mathcal{M}_i\) are all subsets of \(\Omega_{\textsub{p}}(\alpha_1, \alpha_2)\), so for the sake of clarity, we do not specify the domain in the set definitions.
For \(w_k = \beta\), the decoding must have been successful because a failed decoding would result in \(w_k = \alpha \neq \beta\).
That is why \(w_k = \beta\) is only possible for pairs of
\[
\mathcal{M}_1 \coloneqq
\{
\text{\(\vec{e}_1 \in \mathcal{S}_t(\mathcal{C}_k^\beta)\) or \(\vec{e}_2 \in \mathcal{S}_t(\mathcal{C}_k^\beta)\)}
\}.
\]
However, there are pairs \((\vec{e}_1, \vec{e}_2) \in \mathcal{M}_1\) where one \(\vec{e}_i\) is closer to a codeword \(\vec{c} \in \mathcal{C}_k^{\xor{\beta}}\) than the other is to a codeword \(\vec{c} \in \mathcal{C}_k^{\beta}\). The decoding result \(\vec{w}\) of these pairs fulfills \(w_k = \xor{\beta}\).
Removing these pairs from \(\mathcal{M}_1\) results in
\[
\mathcal{M}(\alpha_1, \alpha_2)
= \mathcal{M}_1 \setminus
\underbrace{\{
(\vec{e}_1, \vec{e}_2) \in \mathcal{M}_1,
w_k = \xor{\beta}
\}}_{\eqqcolon \mathcal{M}_2}.
\]
Since \(\mathcal{M}_2 \subset \mathcal{M}_1\), we have \(|\mathcal{M}(\alpha_1, \alpha_2)| = |\mathcal{M}_1| - |\mathcal{M}_2|\).
A further decomposition
\begin{align*}
\mathcal{M}_1 &=
\{\vec{e}_1 \in \mathcal{S}_t(\mathcal{C}_k^\beta)\} \cup \{\vec{e}_2 \in \mathcal{S}_t(\mathcal{C}_k^\beta)\}, \\
\mathcal{M}_2 &=
\{
\vec{e}_1 \in \mathcal{S}_t(\mathcal{C}_k^\beta),
w_k = \xor{\beta}
\}
\sqcup
\{
\vec{e}_2 \in \mathcal{S}_t(\mathcal{C}_k^\beta),
w_k = \xor{\beta}
\}
\end{align*}
yields
\begin{align*}
|\mathcal{M}_1| &= 2 \cdot
|\underbrace{\{
\vec{e}_1 \in \mathcal{S}_t(\mathcal{C}_k^\beta)
\}}_{\eqqcolon \mathcal{M}_3}|
-
|\underbrace{\{
\vec{e}_1, \vec{e}_2 \in \mathcal{S}_t(\mathcal{C}_k^\beta)
\}}_{\eqqcolon \mathcal{M}_4}|, \\
|\mathcal{M}_2| &= 2 \cdot
|\underbrace{\{
\vec{e}_1 \in \mathcal{S}_t(\mathcal{C}_k^\beta), w_k = \xor{\beta}
\}}_{\eqqcolon \mathcal{M}_5}|,
\end{align*}
where the factors of \(2\) appear because the same decoding method is used for \(\vec{e}_1\) and \(\vec{e}_2\).
\subsection{Derivation of \(|\mathcal{M}_3|\)}\label{sec:Calc_M3}
To calculate \(|\mathcal{M}_3|\), the following algorithm could be used: Iterate over all triples
\((\vec{c}, \vec{e}_1, \vec{e}_2)\) with \(\vec{c} \in \mathcal{C}_k^\beta\)
and count the cases in which \((\vec{e}_1, \vec{e}_2) \in \Omega_{\textsub{p}}(\alpha_1, \alpha_2)\)
and \(\vec{e}_1 \in \mathcal{S}_t(\vec{c})\) holds.%
\footnote{Since the spheres \(\mathcal{S}_t(\vec{c_i})\) are pairwise disjoint, no pair is counted twice.}
Since this algorithm is too complex, we use an approach similar to the one used in Sec.~\ref{sec:Trans_Prob_EaEDPlus} for the EaED+:
there, not all tuples \((\vec{c}, \vec{e})\) were counted individually; instead, all tuples with identical coefficients \(b_{1\mathord{?}}\), \(b_{10}\), \(b_{01} \coloneqq D^\prime - b_{10}\) and \(b_{0\mathord{?}} \coloneqq E^\prime - b_{1\mathord{?}}\) were treated at once.
Figure~\ref{fig:M3_configuration} shows the generalization of this concept for the triple \((\vec{c}, \vec{e}_1, \vec{e}_2)\).
\begin{figure}
\centering
\includegraphics{img/Configuration_M3}
\caption{Schematic illustration of the coefficients describing the configuration \(\vec{b}_\ast\) of the triple \((\vec{c}, \vec{e}_1, \vec{e}_2)\).
For example, \(b_{101}\) determines the number of positions of \(\compl{k}\) where \(\vec{c}\) has a \(1\), \(\vec{e}_1\) has a \(0\) and \(\vec{e}_2\) has a \(1\).}
\label{fig:M3_configuration}
\end{figure}
Because of \(\vec{c} \in \mathcal{C}_k^\beta\) and \((\vec{e}_1, \vec{e}_2) \in \Omega_{\textsub{p}}(\alpha_1, \alpha_2)\), the bits at \(k\) are fixed by \(\beta\), \(\alpha_1\) and \(\alpha_2\).
The overlaps of the bits at \(\compl{k}\) are described by the coefficients
\begin{align*}
b_{f} &\coloneqq |\{i \in \compl{k} : c_i = f\}|,\\
b_{fg} &\coloneqq |\{i \in \compl{k} : c_i = f, e_{1, i} = g\}|, \\
b_{fgh} &\coloneqq |\{i \in \compl{k} : c_i = f, e_{1, i} = g, e_{2, i} = h\}|,
\end{align*}
which are collectively called configuration \(\vec{b}_\ast\).
Let \(\mathcal{K}(\vec{b}_\ast)\) be the set of all triples whose bits at \(k\) are \(\beta\), \(\alpha_1\), \(\alpha_2\) and the positions at \(\compl{k}\) have the configuration \(\vec{b}_\ast\).
Then it is possible to determine if the triples of \(\mathcal{K}(\vec{b}_\ast)\) are counted in the algorithm above, although the exact positions of the symbols are not known:
\begin{subequations}
\label{eqn:Condition_M3_Config}
For \(\vec{e}_1 \in \mathcal{S}_t(\vec{c})\), the condition%
\begin{align}
\dH(\vec{c}, \vec{e}_1) &=
b_{10} + b_{01} + \ind{\alpha_1 \neq \beta} \leq t
\intertext{%
must hold and for \((\vec{e}_1, \vec{e}_2) \in \Omega_{\textsub{p}}(\alpha_1, \alpha_2)\), we get
}
\operatorname{d}_{\compl{k}}(\vec{e}_1, \vec{e}_2)
&=
\sum_{\mathclap{v \in \{0, 1\}}} (b_{v10} + b_{v01}) = E^\prime,\\
\sum_{v \in \{0, 1\}} b_{v11} &= D^\prime.
\end{align}
\end{subequations}
To calculate \(|\mathcal{M}_3|\), we iterate over all configurations fulfilling \eqref{eqn:Condition_M3_Config} and sum the numbers of triples that belong to them:
\begin{align*}
&|\mathcal{M}_3| = \\
&\sum\limits_{\substack{
\vec{b}_\ast
\text{ if \eqref{eqn:Condition_M3_Config}}
}}
\underbrace{
\vphantom{\prod_{v \in \{0, 1\}} \binom{b_v}{b_{v1}}}
A_k^{\beta}(b_1 + \ind{\beta = 1})
}_{\text{Ways for \(\vec{c}\)}}
\underbrace{
\prod_{v \in \{0, 1\}} \binom{b_v}{b_{v1}}
}_{\substack{%
\text{Ways for \(\vec{e}_1\) given \(\vec{c}\)}\\
}}
\underbrace{
\prod_{vv \in \{0, 1\}^2} \binom{b_{vv}}{b_{vv1}}
}_{\substack{%
\text{Ways for \(\vec{e}_2\) given \(\vec{c}\), \(\vec{e}_1\)}\\
}}
.
\end{align*}
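To make the counting concrete, the following Python sketch evaluates the sum above by brute force. It is a minimal illustration only: the eight-fold composition loop over \(\vec{b}_\ast\) is exponential, the weight enumerator \(A_k^\beta\) is passed in as a hypothetical callable \texttt{A}, and a practical implementation would restrict the loops using \eqref{eqn:Condition_M3_Config} instead of filtering. The same pattern, with four-index configurations, applies to \(|\mathcal{M}_4|\) and \(|\mathcal{M}_5|\) below.
\begin{verbatim}
from itertools import product
from math import comb, prod

def compositions(total, parts):
    """All tuples of `parts` non-negative integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def size_M3(n, t, Ep, Dp, alpha1, beta, A):
    """Brute-force |M_3|; A(w) plays the role of A_k^beta(w), the number
    of codewords in C_k^beta of weight w (assumed, user-supplied)."""
    total = 0
    keys = list(product((0, 1), repeat=3))   # (f, g, h) = (c_i, e1_i, e2_i)
    for counts in compositions(n - 1, 8):    # configurations b_* on comp(k)
        b = dict(zip(keys, counts))
        b10 = b[1, 0, 0] + b[1, 0, 1]        # positions with c = 1, e1 = 0
        b01 = b[0, 1, 0] + b[0, 1, 1]        # positions with c = 0, e1 = 1
        if b10 + b01 + (alpha1 != beta) > t:                    # e1 in S_t(c)
            continue
        if sum(b[v, 1, 0] + b[v, 0, 1] for v in (0, 1)) != Ep:  # distance E'
            continue
        if sum(b[v, 1, 1] for v in (0, 1)) != Dp:               # overlap D'
            continue
        b1 = sum(b[1, g, h] for g in (0, 1) for h in (0, 1))    # weight of c
        ways_e1 = prod(comb(sum(b[v, g, h] for g in (0, 1) for h in (0, 1)),
                            sum(b[v, 1, h] for h in (0, 1))) for v in (0, 1))
        ways_e2 = prod(comb(b[v, g, 0] + b[v, g, 1], b[v, g, 1])
                       for v, g in product((0, 1), repeat=2))
        total += A(b1 + (beta == 1)) * ways_e1 * ways_e2
    return total
\end{verbatim}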
\subsection{Derivation of \(|\mathcal{M}_4|\) and \(|\mathcal{M}_5|\)}
In the following, \(\beta_1\) and \(\beta_2\) denote binary values, where, in the derivation of \(|\mathcal{M}_4|\), we set \(\beta_1 = \beta_2 = \beta\) and in the derivation of \(|\mathcal{M}_5|\), \(\beta_1 = \beta\) and \(\beta_2 = \xor{\beta}\).
To calculate \(|\mathcal{M}_4|\) and \(|\mathcal{M}_5|\), the same approach as before is used for tuples \((\vec{c}_1, \vec{c}_2, \vec{e}_2, \vec{e}_1)\), where position \(k\) is fixed by
\(c_{1, k} = \beta_1\), \(c_{2, k} = \beta_2\), \(e_{2, k} = \alpha_2\) and \(e_{1, k} = \alpha_1\).
The overlaps at \(\compl{k}\) are described by configurations \(\vec{b}_\ast\), whose coefficients have up to \(4\) indices.
As before, the \(i\)-th index in a coefficient denotes the symbol of the \(i\)-th word of \((\vec{c}_1, \vec{c}_2, \vec{e}_2, \vec{e}_1)\).
For \(|\mathcal{M}_4|\), we count all tuples for which \((\vec{e}_1, \vec{e}_2) \in \Omega_{\textsub{p}}(\alpha_1, \alpha_2)\) and \(\vec{e}_1 \in \mathcal{S}_t(\vec{c}_1)\), \(\vec{e}_2 \in \mathcal{S}_t(\vec{c}_2)\) holds.
In the configuration domain, \((\vec{e}_1, \vec{e}_2) \in \Omega_{\textsub{p}}(\alpha_1, \alpha_2)\) transforms into
\begin{subequations}
\label{eqn:Condition_M4_M5}
\begin{align}
\operatorname{d}_{\compl{k}}(\vec{e}_1, \vec{e}_2)
&=
\sum_{\mathclap{vv \in \{0, 1\}^2}}
(b_{vv10} + b_{vv01})
= E^\prime,\\
\sum_{\mathclap{vv \in \{0, 1\}^2}} b_{vv11}
&= D^\prime
\end{align}
and \(\vec{e}_1 \in \mathcal{S}_t(\vec{c}_1)\), \(\vec{e}_2 \in \mathcal{S}_t(\vec{c}_2)\) transform into
\begin{align}
\dH(\vec{c}_1, \vec{e}_1) &=
\sum_{\mathclap{vv \in \{0, 1\}^2}}
(b_{1vv0} + b_{0vv1})
+ \ind{\alpha_1 \neq \beta_1}
\leq t, \\
\dH(\vec{c}_2, \vec{e}_2) &=
\sum_{\mathclap{v \in \{0, 1\}}}
(b_{v10} + b_{v01})
+ \ind{\alpha_2 \neq \beta_2}
\leq t,
\end{align}
\end{subequations}
where \(\beta_1 = \beta_2 = \beta\).
By summing the number of tuples of each configuration that fulfills \eqref{eqn:Condition_M4_M5}, we get
\begin{multline*}
\phantom{.}
|\mathcal{M}_4| =
\smash{
\smashoperator[l]{\sum_{\text{
\(\vec{b}_\ast\) if \eqref{eqn:Condition_M4_M5}
}}}
\Bigg(
}
B_k^{\beta \beta}(b_{11} + \ind{\beta = 1}, b_{10}, b_{01}, b_{00} +\ind{\beta = 0}) \\
\prod_{vv \in \{0, 1\}^2} \binom{b_{vv}}{b_{vv1}}
\prod_{vvv \in \{0, 1\}^3} \binom{b_{vvv}}{b_{vvv1}}
\Bigg).
\end{multline*}
For \(|\mathcal{M}_5|\), again, only tuples fulfilling \((\vec{e}_1, \vec{e}_2) \in \Omega_{\textsub{p}}(\alpha_1, \alpha_2)\) and \(\vec{e}_1 \in \mathcal{S}_t(\vec{c}_1)\), \(\vec{e}_2 \in \mathcal{S}_t(\vec{c}_2)\) are counted, which transforms into \eqref{eqn:Condition_M4_M5} (with \(\beta_1 = \beta\) and \(\beta_2 = \xor{\beta}\)) in the configuration domain.
In addition,
\begin{equation}
\label{eqn:Condition_M5}
\dnE{\vec{e}}(\vec{c}_2, \vec{e}) \leq \dnE{\vec{e}}(\vec{c}_1, \vec{e})
\end{equation}
must hold so that the decoder decodes to \(\vec{c}_2\) resulting in \(w_k = \xor{\beta}\).
In the configuration domain, these distances are obtained by
\begin{align*}
\phantom{.}
\dnE{\vec{e}}(\vec{c}_1, \vec{e}) &=
\sum_{\mathclap{v \in \{0, 1\}}} (b_{1v00} + b_{0v11})
+ \ind{\text{\(\alpha \neq \mathord{?}\) and \(\alpha \neq \beta\)}}, \\
\dnE{\vec{e}}(\vec{c}_2, \vec{e}) &=
\sum_{\mathclap{v \in \{0, 1\}}} (b_{v100} + b_{v011})
+ \ind{\text{\(\alpha \neq \mathord{?}\) and \(\alpha \neq \xor{\beta}\)}}
.
\end{align*}
Summing over the valid configurations yields
\begin{multline*}
|\mathcal{M}_5| =
\smashoperator[l]{\sum_{\text{
\(\vec{b}_\ast\) if \eqref{eqn:Condition_M4_M5}, \eqref{eqn:Condition_M5}
}}}\!
\Bigg(
B_k^{\beta \xor{\beta}}(b_{11}, b_{10} + \ind{\beta = 1}, b_{01} + \ind{\beta = 0}, b_{00}) \\
\prod_{vv \in \{0, 1\}^2} \binom{b_{vv}}{b_{vv1}}
\prod_{vvv \in \{0, 1\}^3} \binom{b_{vvv}}{b_{vvv1}}
\operatorname{Corr}(\vec{b}_\ast)
\Bigg).
\end{multline*}
In the case of \(\dnE{\vec{e}}(\vec{c}_2, \vec{e}) = \dnE{\vec{e}}(\vec{c}_1, \vec{e})\), the decoder chooses between \(\vec{c}_1\) and \(\vec{c}_2\) at random, and therefore, on average only half of the pairs have a result with \(w_k = \xor{\beta}\).
To take this into account, the correction term \(\operatorname{Corr}(\vec{b}_\ast)\) is \(\frac{1}{2}\) in this case and \(1\) otherwise.
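In code, the tie-breaking rule reduces to a one-line correction factor; a minimal sketch with hypothetical argument names, the two distances being those computed above in the configuration domain:
\begin{verbatim}
def corr(d1, d2):
    """Corr(b_*): if e is equidistant from c_1 and c_2, the decoder picks
    one at random, so on average only half of these pairs yield
    w_k = beta-bar; otherwise the pair counts fully."""
    return 0.5 if d1 == d2 else 1.0
\end{verbatim}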
Now, the transition probabilities can be calculated by putting all contributions together.
\section{Introduction}
One of the remaining puzzles in the silent flight of owls is the function of the serrated leading edge. This `comb-like' structure is more developed in nocturnal than diurnal owl species \citep{Weger2016}, suggesting that the leading-edge comb must have some benefit for hunting in the night.
Indeed it was suggested early on \citep{graham1934, lilley1998} that the serrations are one of the adaptations found in owls that underlie silent flights, where the owl needs to be as quiet as possible when hunting nocturnally.
Acoustic measurements by \citet{neuhaus1973} and \citet{geyer2020} support this suggestion, although the effect was marginal at low angles of attack, which is the situation relevant for the gliding phase that persists up to the final phase of the direct attack on the prey. Alternative suggestions for their function focused on a possible aerodynamic benefit of a serrated leading edge \citep{Hertel1963,Kroeger1972,klaen2010, winzen2014, wagner2017,Rao2017,ikeda2018,wei2020}, summarized in the most recent review given in 2020 by \citet{Jaworski2020}.
An early contribution interpreted the leading edge comb as a tripping device, which triggers the boundary layer to turbulent transition, keeping the flow over the aerofoil attached \citep{Hertel1963}.
This, however, would cause some extra turbulent noise, which is not observed \citep{geyer2020}. \citet{Kroeger1972} presented a comprehensive study of the flow around the leading edge of an owl wing.
Using wool tufts, these authors showed a spanwise flow behind the comb, which they interpreted as a way to prevent flow separation.
Acoustic measurements by these authors, however, showed no direct influence of the presence of the comb.
It was only at high angles of attack that a difference of about 3~dB was noticeable.
This result was later confirmed by \citet{geyer2020} using acoustic 2D sound maps.
These authors could show that the sources of higher noise levels for high angles of attack stem from the wing tip. \citet{Jaworski2020} speculated that the leading edge comb may play a role in reducing spanwise flow variations due to separation at high angles of attack, thereby reducing the strength of the tip vortex and the associated tip noise \citep{Jaworski2020}.
If so, it would, however, not be relevant for the gliding phase.
In a similar way, aerodynamic performance measurements on wings with serrated leading edges show benefits mostly with increasing angle of attack, again not very relevant for the gliding phase. \citet{Rao2017} showed that planar leading-edge serrations can passively control the laminar-to-turbulent transition over the upper wing surface. Each of the serrations generates a vortex pair, which stabilizes the flow similarly to vortex generators. \citet{wei2020} applied such serrations on the wing of a propeller to shift the location of laminar-to-turbulent transition on the suction side. \citet{ikeda2018} investigated different lengths of the serrations to find the optimum lift-to-drag ratio at angles of attack $< 15^\circ$.
A remaining contribution to noise reduction at gliding flight conditions may be the influence of the comb on leading-edge noise from incoming vortices and unsteady flow components present in the air environment.
To test this hypothesis, researchers investigated the noise emission of wings in an anechoic wind tunnel with unsteady inflow conditions generated by an upstream inserted turbulence grid \citep{geyer2017-LEC}.
The results showed that serrations can attenuate unsteady flow effects caused by oncoming vortices and turbulence. Similar results were found from LES simulations of serrations in turbulent inflow conditions \citep{chaitanya}. These findings agree with measurements on noise emission of stationary aerofoils where artificial serrations led to a lower noise radiation in unsteady flow \citep{geyer2017-LEC, Narayanan2015}.
Herein, we introduce a novel hypothesis which is related to the influence of serrations on swept wing aerodynamics. First, data of owls in gliding flight clearly demonstrate that the wing's leading edge is swept backward by about 10--20$^\circ$, see Figure \ref{fig: FeatherScan} (adapted from snapshots of the movie produced in \citet{Durston2019} for a gliding American barn owl). Second, the serrations in nature are curved in a complex 3D shape protruding out of the plane of the wing \citep{Bachmann2011}. All of this may influence the flow over the wing and, through the complex coupling between flow and sound generation, probably also the overall noise emission. For swept wings it is known that a backward sweep can introduce considerable cross-flow instabilities, which trigger transition \citep{serpieri, Radeztsky, Edward}, invoking the substantially drag-increasing turbulent boundary-layer state \citep{Wassermann2002}. To overcome this drag penalty, flow control methods such as suction \citep{Kloker2008} and plasma actuators \citep{Dorr2015} have been developed to attenuate the instabilities. The present work demonstrates that a similar effect may be achieved in a passive way by using a comb-like leading-edge structure with 3D curved finlets, inspired by the geometry of serrations on the owl wing. We show in the following that the serrations cause a change in flow direction near the wall (flow turning) at sweep angles observed in nature, thereby delaying transition and contributing to a more silent flight.
\section{Methods}
\subsection{Coordinate System of the wing}\label{sec: Method-Coordinates}
The world coordinate system of the flying body is typically defined in relation to the body axes and the direction of the flight path.
Herein, we define (in capital letters) another Cartesian coordinate system which is fixed with the wing and oriented with the leading edge, see Fig. 1. The positive X-axis points in chordwise direction, the positive Y-axis vertically upwards, and the positive Z-axis is aligned with the leading edge of the wing (Fig. \ref{fig: FeatherScan}). The same coordinate system was used to describe the morphology of the leading edge comb of the owl feather in nature and for the model data, see Table \ref{table: Design Spec}. A flat swept plate is often chosen as a research platform for swept-wing instabilities because it allows better control of the boundary conditions and access for the measurement methods \citep{Abegg}. The flat swept plate is therefore considered a generic testing model in the relevant community and is used herein for the same reasons of better access for CFD and experimental studies. Additional wing curvature effects on laminar-turbulent transition can be simulated by imposing either a negative or a positive pressure gradient on the potential flow outside \citep{Abegg}.
\subsection{Generation of the generic comb model}
As may be seen in Fig. 1b, the feather that forms the leading edge has an outer vane with separated, filamentous barb endings.
These barb endings are the serrations \citep{Bachmann2011}.
Many parallel serrations form a leading edge comb-like structure.
Each single serration has a complex shape with strong curvature in two major planes of the feather, the frontal Y-Z plane and the cross-sectional X-Y plane \citep{Bachmann2011}.
A generic model of the leading edge comb was built based on data available in \citep{Bachmann2011}.
The model consists of a series of barbs.
Each barb starts with the root and ends with the tip.
While the roots of the serrations are connected to each other, the tips are separated.
In the following we first describe the properties of the single barbs in more detail, before we explain how the barbs are aligned to form a leading-edge comb.
Table \ref{table: Design Spec} indicates the range of values for the key geometric parameters of measured barbs found from the barn owl in nature, comparing those with the selected parameter of our generic model, following the data provided in \citep{Bachmann2011}. The definition of the geometric parameters is illustrated in Fig. \ref{fig: figure2}.
The width is the extension of the major axis of the barb and the thickness is the extension of the minor axis of the barb.
The inclination angle is defined herein between the barb's base and the Z-direction in the X-Z plane (Fig. \ref{fig: figure2}c).
The tilt angle is the angle between the barb's tip and the base in the Y-Z plane (Fig. \ref{fig: figure2}b).
The height and the length of the barb is referred to as H and L as illustrated in Fig. \ref{fig: figure2}.
The software SolidWorks (Dassault Syst{\`e}mes, France) was used to design a synthetic barb in the form of a beam with elliptical cross-section (long axis: width, short axis: thickness) and a linear taper from root to tip (root width: 500~$\mu$m, thickness: plate thickness; tip width: 250~$\mu$m, thickness: 50~$\mu$m), see Tab. \ref{table: Design Spec}.
The length of the initially straight beam was 2250~$\mu$m.
The elliptical beam was first twisted by 30$^\circ$ (see stagger angle in Fig.~\ref{fig: figure3}b), then tilted in the X-Z plane and finally curve-bent in the X-Y plane to reach the desired angles of tilt and inclination given in Tab.~\ref{table: Design Spec}.
In a second step, the root of the beam was then smoothly integrated into the elliptical nose of the flat plate (aspect ratio of about three, thickness of the plate: thickness of the barb at the root) to form the serrated leading edge comb.
The comb was built as a row of successive barbs with the same spacing (wavelength $\lambda$ = 500~$\mu$m) and size.
The back, side and top views of the recreated leading edge comb is shown in Fig. \ref{fig: figure2}.
A final qualitative check was done with the geometry of a digitized piece of a 10$^{\text{th}}$ primary feather of an American barn owl (T. furcata pratincola).
The generic model resembled the natural geometry well in all major details of the barb's 3D shape, compare Fig. 1a,b and Fig. 2b,c.
In the following, we interpret the comb as a cascade of blades following the classical nomenclature used in the field of turbomachinery.
Each blade is represented by one barb and the cascade blade spacing is equal to the comb wavelength.
According to this, we can define the stagger angle as the angle between the chord line of the barb and the axis normal to the leading edge (LE) in the X-Z plane (Fig. \ref{fig: figure3}a) \citep{Dixon}.
Cross sectional views of individual barbs along the root, middle and tip locations are shown in Fig. \ref{fig: figure3}a.
The stagger angle is about 30$^\circ$ at the root of the barb and decreases to zero at the barbs' tip.
The chord also decreases along the barbs' height; hence, at constant spacing, the spacing-to-chord ratio increases from root towards tip, as shown in Fig. \ref{fig: figure3}b.
\begin{figure}[h]
\centering\includegraphics[scale = 0.4]{Figures/ExpArr/fig1.jpg}
\caption{Gliding owl and leading edge serrations. a) Top view of an owl in gliding flight, illustrating the backward sweep of the wing. The situation is shown in a body-fixed observer frame with wind coming from the left at velocity $U_\infty$. The wing portion at mid span has an effective positive sweep angle of $\beta$ $\approx$ 10\si{\degree}, increasing to $\beta$ $\approx$ 20\si{\degree} further towards 3/4 span. The picture of the owl is adapted from the video published in \citet{Durston2019} (DOI: 10.1242/jeb.185488) with permission from Journal of Experimental Biology. Inset b) close-up of the leading edge comb in back view with flow coming out of the paper plane; inset c) close-up side view of the serrations with flow coming from the left.} \label{fig: FeatherScan}
\end{figure}
\begin{table}[]
\begin{tabular}{l|l|l}
\textbf{Nomenclature} & \textbf{Barn owl data} & \textbf{Idealized model} \\ \hline
\\
Length ($\mu$m) & 1823 -- 2716 & 1840
\\
Wavelength ($\mu$m) & 490 -- 670 & 500
\\
Width ($\mu$m) @ tip & 157 -- 215 & 250
\\
Width ($\mu$m) @ root & 528 -- 652 & 500
\\
\\
Thickness ($\mu$m) @ tip & 46.9 -- 53.9 & 50
\\
Thickness ($\mu$m) @ root & 82 -- 87.2 & = plate thickness
\\
\\
Tilt Angle ($^\circ$) & 35.3 -- 36.7 & 37.5
\\
Average Inclination Angle ($^\circ$) & 50 & 55.8
\\
Angle LE / flight path ($^\circ$) & 106 -- 138 & 90 -- 110 \\
\end{tabular}
\caption{Dimensions and key geometric parameters of the idealised modeled barb element, based on measurements on barn owls presented by \citet{Bachmann2011}.}\label{table: Design Spec}
\end{table}
\begin{figure}[h]
\centering\includegraphics[scale = 0.3]{Figures/ExpArr/serrationviews.jpg}
\caption{Orientation of the reconstructed serrated leading edge. a) Back view of the comb, looking from the back over the feather onto the outstanding barbs of the right wing, compare also Fig. 1b. b) Side view of a single barb at enlarged scale. c) Top view of the comb in the feather plane, showing the inclination of serrations along the spanwise direction.}\label{fig: figure2}
\end{figure}
\begin{figure}[h]
\centering\includegraphics[scale = 0.35]{Figures/ExpArr/csection.jpg}
\caption{ Serration drawings and plots a) Single barb with three sections showing the cross section twist. b) Stagger angle ($\xi$), Normalised chord ($C/C_{Root}$) and spacing to chord ratio ($\lambda$/C) with normalised height of serration}\label{fig: figure3}
\end{figure}
\subsection{Numerical Flow Simulations}
American barn owls have an average wing chord length of $C_W = 0.178$~m \citep{Klan2009} and are supposed to fly with velocities of $U_\infty$= 2.5~m/s to 7~m/s \citep{Bachmann2012}, a number derived from data on European barn owls \citep{Mebs2000}.
At these velocities the Reynolds number $Re_{Wing}$, defined with the wing chord $C_W$, ranges between 30,000 and 100,000, if air temperatures are between 10$^\circ$C and 20$^\circ$C.
All the simulations and the flow visualisation in this work refer to an average flight speed of 5~m/s, which lies within the specified flight-velocity range. For the corresponding $Re_{Wing}$ of 60,000 the boundary layer is in the transitional regime to turbulence, where growing instabilities contribute substantially to noise production.
Therefore, any possible means to manipulate the flow at or near the leading edge to delay transition may have consequences on the overall flow and acoustic characteristics of the whole wing.
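As a quick plausibility check of this range, the following sketch evaluates $Re_{Wing} = U_\infty C_W / \nu$ over the quoted flight speeds; the kinematic viscosities of air are textbook values assumed here, not numbers taken from the cited references:
\begin{verbatim}
C_W = 0.178                          # average wing chord of the barn owl [m]
nu_air = {10: 1.42e-5, 20: 1.51e-5}  # assumed kinematic viscosity [m^2/s]

for U in (2.5, 5.0, 7.0):            # gliding-flight speeds [m/s]
    for T, nu in nu_air.items():
        print(f"U = {U} m/s, T = {T} C: Re_Wing = {U * C_W / nu:,.0f}")
\end{verbatim}
This yields values from roughly 29,000 to 88,000, consistent with the order of magnitude quoted above.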
For our studies, we consider the situation of the animal in gliding flight at constant speed within an otherwise quiescent environment. Therefore, we can choose steady in-flow conditions.
For the first 10 percent chord of the wing including the barbs on the leading edge, the flow is expected to remain laminar and stationary.
As the barbs are tiny filaments with a diameter of only a few tenths of a millimetre, the local Reynolds number (based on the chord of the barb) falls around 50, which is small enough that no vortex shedding will occur; see the work of \citet{Paul} on elliptic cylinders.
These conditions pave the way to use a steady-state flow solver in Computational Fluid Dynamics (CFD) to investigate the flow behind the serrations. Numerical simulations were carried out using ANSYS-Fluent 19.0.
The wing-fixed coordinate system as defined in §\ref{sec: Method-Coordinates} is used to analyze the data.
The computational domain extends six serration lengths upstream and downstream along the X-axis, measured from the leading edge of the flat plate where the serrations were attached.
Similarly, the domain length in wall-normal direction (Y-axis) extends five serration lengths in either direction and the spanwise direction (Z-axis) has a length which accommodates 11 serrations as shown in Fig.\ref{fig: ExpArr_CFD}a.
The domain is meshed with tetrahedral elements with inflation layers near the wall, furthermore, the mesh was refined near the serrations to capture the flow gradients accurately, resulting in a mesh size with about 19 million cells.
Computations were performed with a steady-state solver and the $k$--$\omega$ turbulence model for the Reynolds-averaged Navier--Stokes (RANS) equations.
At the inlet a constant free stream velocity ($U_\infty$) is assumed.
The direction of this velocity vector relative to the coordinate system of the wing and the leading edge indicates whether the flow is facing a swept wing or not.
Zero sweep means that the leading edge is aligned with the outboard directed spanwise axis of the flying body and the inflow velocity vector is parallel to the chord-wise axis of the wing ($\beta = 0$ relative to the X-axis in the X-Z plane) as shown in Fig. \ref{fig: ExpArr_CFD}b.
To simulate the sweep effect of the wing, the angle $\beta$ was varied from -10 degrees (forward swept wing) to +20 degrees (backward swept wing).
Constant pressure was assumed at the outlet and periodic boundary conditions were given at the lateral sides, which results in infinite repetitions of the serrations (neglecting end effects).
\begin{figure}[t]
\centering\includegraphics[scale = 0.13]{Figures/ExpArr/cfddomain1.jpg}
\caption{Sketches of the CFD domain and the flow configuration with respect to the comb. (a) Isometric view of the CFD domain with periodic conditions in Z-direction. The leading edge serrations attached to the flat plate are shown as a blue surface. (b) Enlarged view of the leading edge serrations in the X-Z plane, showing the direction of the inlet flow velocity vector ($U_\infty$) at sweep angle ($\beta$) with the X-axis. (Hidden lines of the serration indicate the periodic boundary condition.)}\label{fig: ExpArr_CFD}
\end{figure}
\subsection{Flow Visualization}
For the experimental flow studies, the model of the flat plate with the leading-edge comb was 3D printed with a 20:1 upscaling factor (Stratasys OBJET 30 PRO printer with a print accuracy of 30 microns, material Veroblack).
Fabrication of the serrations at original size was discarded after tests of different micro-manufacturing methods showed that reproducing the shape of the barbs in good quality is extremely difficult with currently available technology.
With the given up-scaling factor, the method of dynamic similitude in fluid mechanics \citep{Durst} provides the corresponding boundary conditions for flow visualization studies in a wind- or water-tunnel, the latter being more suited herein.
Dye flow visualisations were carried out in the CHB Water tunnel facility at City, University of London.
The tunnel is a closed loop, open surface tunnel which operates horizontally with a 0.4~m wide, 0.5~m deep and 1.2~m long test section.
According to the laws of similitude, the freestream velocity of the water was set to 3.3 cm/s, corresponding to the situation of 5 m/s in air with the serration in original scale.
The leading edge of the up-scaled model was placed vertically in the tunnel 0.4~m downstream of the entrance of the test section, extending from the floor of the tunnel up to the free water-surface (Fig. \ref{fig: ExpArr}).
This situation reproduces the flow along the flat plate with zero sweep of the leading edge.
Fluorescent dye was injected through a tiny needle (1~mm inner diameter, 1.6~mm outer diameter) which was placed upstream of the model (Fig. \ref{fig: ExpArr}b).
Care was taken to match the dye exit velocity to that of the bulk fluid flow.
This is crucial to avoid instabilities of the fine dye streakline ultimately compromising the result \citep{Merzkirch1987}.
An ultra-violet (UV) lamp was placed underneath the perspex floor of the test section to enhance the contrast of the fluorescent dye against the background.
A NIKON D5100 DSLR camera was used to capture the resulting flow visualization (Fig. \ref{fig: ExpArr}a).
The camera was mounted on a tripod and was situated parallel to the surface of the model, to observe the evolution of the dye filament on the surface of the model.
Due to the low light level, a long exposure (20 seconds) image was taken with the lens aperture set to f/10.
Such a long-time exposure is allowed as the flow pattern remained stationary, indicating a steady flow situation.
The images were subsequently enhanced using `Adobe Photoshop' to provide better clarity.
\begin{figure}[t]
\centering\includegraphics[scale = 0.95]{Figures/ExpArr/ExpArr_OLE-01.png}
\caption{Sketches of the experimental set-up for the dye flow visualizations carried out in the CHB Water Tunnel at City, University of London. (a) plan view of the set-up in the horizontal cross-section. (b) Side view on the vertically mounted flat plate.}\label{fig: ExpArr}
\end{figure}
\section{Results}
In the following we present both experimental and simulation data on a new hypothesis on the function of the serrated comb of the leading edge of the owl wing.
The new hypothesis states that the 3D curvature of the serrations causes a change in the direction of the flow inboards towards the owl's body (called ``flow turning'' in the following), in this way counteracting the outboards directed cross-span flow induced by the backward sweep of the wing.
We first show the basic predictions of our model and the validation of these predictions by experiments in a water tank.
In a second part we examine the properties of the flow turning in more detail.\\
\subsection{Basic results of experiments and CFD simulations}
Figure \ref{fig: Comparison} shows the streamlines (Fig.~\ref{fig: FlowVis}: experiment, Fig.~\ref{fig: CFD Steamlines}: computed from the steady-state CFD simulation), traced from upstream of the serrations to downstream of them, first analyzed for the situation of zero sweep. The flow situation in the water tunnel with dye flow visualization shows a white-coloured thick streamline upstream of the serrations in a direction parallel to the X-axis. Once the water passes the serrations, a flow turning effect can be seen as the streamline points downwards at a certain angle in the negative Z-direction (inboards). Furthermore, the visualization shows that the flow remains laminar and steady. This justifies our decision to use a steady-state flow solver. The near-surface streamlines generated from the CFD results look very similar to the experimental result (Fig. \ref{fig: CFD Steamlines}). The different colours indicate different streamlines started at the same X,Z-location but at varying wall-normal distances `Y' from the flat plate. Near the wall (blue to green colours), the flow turning is maximum; it reduces with increasing distance from the plate and disappears completely at the serration tip (red colour). This indicates an induced cross flow near the wall. We interpret these data such that the 3D curved shape of the serrations causes this change in flow direction, because in a plate without serrations, or a plate with symmetric planar serrations, such a change in flow direction is not expected to occur. In Fig.~\ref{fig: CFDvsFlowVis} the envelope of the flow turning effect is given by the two extreme streamlines, the one with zero and the one with maximum turning, respectively, for both the CFD and the flow visualisation. Since the results from the flow visualisation and the CFD are in good agreement, further results from CFD simulations can be accepted with confidence.
Fig.~\ref{fig: figure7} shows the near-surface streamlines (along the first cell away from the wall of the numerical mesh) on the flat plate surface for various inlet flow angles in the X-Z plane. In Fig.~\ref{fig: figure7}b the inlet flow is aligned with the X-axis (zero sweep), and once the fluid passes through the serrations the flow is turned towards the inboard direction, as already explained above. The same trend of flow turning is observed for increasing backward sweep ($\beta$ = 10\si{\degree}, Fig.~\ref{fig: figure7}c, and $\beta$ = 20\si{\degree}, Fig.~\ref{fig: figure7}d). Altogether, these data show that the serrations work as a cascade of guide vanes or finlets which turn the flow in the boundary layer in the opposite direction of the normally observed cross-span flow, in a coherent manner along the span.
\begin{figure}
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/Results/Streamlines/FlowVis.jpg}
\caption{Long-time exposure image of the dye flow visualisation, illuminated under ultra violet light (image has been contrast-enhanced for better clarity).}\label{fig: FlowVis}
\end{subfigure} \hspace{3mm}
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/Results/Streamlines/Streamline_CFD_Cropped_2-01.jpg}
\caption{Top view on streamlines with different starting points along the wall-normal axis in color (green: near-wall to red: tip of the serrations, CFD simulation at $\beta$ = 0\si{\degree}).}
\label{fig: CFD Steamlines}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[scale = 1]{Figures/Results/Streamlines/CFD_FlowVis_Comparison.pdf}
\caption{Range of the most-extreme turning streamline relative to the streamline at the tip, from the CFD simulation and from the dye trace of the water tunnel experiment.}
\label{fig: CFDvsFlowVis}
\end{subfigure}
\caption{Comparison of flow visualisation and CFD results. }
\label{fig: Comparison}
\end{figure}
\begin{figure}[h]
\centering\includegraphics[scale = 0.12]{Figures/Results/Streamlines/flowdef1.jpg}
\caption{Surface streamlines from CFD simulations. (a) Negative sweep angle $\beta$ = -10\si{\degree}. (b) Zero sweep angle $\beta$ = 0\si{\degree}. Positive sweep angles (c) $\beta$ = +10\si{\degree} and (d) $\beta$ = +20\si{\degree}.}\label{fig: figure7}
\end{figure}
\subsection{Detailed examination of the flow turning}
Further information is gained from the flow turning angle just behind the serrations shown in Fig.\ref{fig: figure8} for various inlet flow angles.
As the chord and the stagger angle are largest at the root of the barbs (Fig. \ref{fig: figure3}b), it is obvious that the flow turning is more pronounced near their root, while it reduces when moving towards the tip.
We again exploit the similarity to stationary guide vanes and approximate the flow turning angle as half the difference between the inlet flow angle ($\beta$) and the stagger angle ($\xi$). The correlation of the turning angle with $(\beta-\xi)/2$ is based on the classical exit-flow-angle formula for cascade blades \citep{Dixon}.
For cases with inlet flow angle of $\beta$ = 0 and +10 degrees the correlation is reasonably good (Fig.\ref{fig: figure8}b and Fig.\ref{fig: figure8}c), even for larger $\beta$ = +20 degrees the trend is captured quite well (Fig.\ref{fig: figure8}d).
The observed correlation captures the overall trend based on considerations for classical 2D guide vanes, indicating that, even though the serrations have a 3D curved shape, the flow turning is mainly determined by the variation of the chord and the stagger angle along the barb height.
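For illustration, the correlation can be evaluated directly; in the following minimal sketch the linear stagger-angle profile (30\si{\degree} at the root decaying to 0\si{\degree} at the tip, cf. Fig.~\ref{fig: figure3}b) is an assumed idealisation rather than data extracted from the CFD:
\begin{verbatim}
import numpy as np

h = np.linspace(0.0, 1.0, 5)             # wall-normal position / barb height
xi = 30.0 * (1.0 - h)                    # assumed stagger-angle profile [deg]

for beta in (-10.0, 0.0, 10.0, 20.0):    # sweep angles considered in the CFD
    turning = (beta - xi) / 2.0          # cascade exit-flow correlation
    print(f"beta = {beta:+.0f} deg:", np.round(turning, 1))
\end{verbatim}
The magnitude of the predicted turning decreases from root to tip, reflecting the stagger-angle variation.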
Note that the flow turning induced at the plane of the serrations affects the direction of the streamlines far downstream along the chord, up to the downstream end of the simulation domain (Fig. \ref{fig: CFDvsFlowVis}); see also the flow visualisation experiment. The serrations therefore have a far-reaching effect on the boundary layer flow down the chord.
To show that, we compared simulations for the plain plate with those having attached the leading-edge comb under otherwise identical boundary conditions.
Normalised chordwise and spanwise velocity profiles at the outlet section at X/L=6 for a sweep angle of 10 degrees are shown in Fig.\ref{fig: figure9}a and Fig.\ref{fig: figure9}b.
With serrations, the chordwise velocity profile shows a larger deficit than without serrations (Fig. \ref{fig: figure9}a), which leads to an increase of the displacement ($\delta^*$) and momentum thickness ($\theta$) to twice the value without serrations (flat plate).
However, the shape factor ($H = \delta^*/\theta$) remains around 2.4, suggesting that the serrations are not acting as a flow tripping device (which would be indicated by a shape factor beyond 3.5).
The spanwise velocity profile for the plain plate (without serrations) resembles the one in chordwise direction (Fig. \ref{fig: figure9}b).
However, adding the leading-edge comb leads to a dramatic decrease of the spanwise flow inside the boundary layer region with further reach into the free-stream.
For a better illustration of the net effect induced by adding the leading-edge comb, we plot the difference of the spanwise velocity profile ($\Delta$W), defined as $W_{wi}-W_{wo}$, for all the cases considered here (wi: with serrations, wo: without serrations). This resultant velocity profile increases from zero to a maximum value within half the height of the barb and then decays monotonically to a minimal value at a height which is more than twice the height of the barb.
Hence, this profile strongly resembles that of a wall jet, which counteracts the sweep-induced spanwise flow over the plain plate (Fig. \ref{fig: figure9}c).
The peak values in $\Delta$W are reached at about half the serration height for all flow angles. Furthermore, the magnitude of the peaks increases with increasing sweep angle.
These results show also a significant flow turning effect for the negative sweep angle ($\beta = -10^\circ$), which was not clearly recognizable from the illustration of the surface streamlines (Fig. \ref{fig: figure7}a).
\begin{figure}[ht]
\centering\includegraphics[scale = 0.5]{Figures/Results/Streamlines/cascade.jpg}
\caption{Wall-normal variation of turning angle behind serrations at X/L=0 for different sweep angles from CFD results and analytical formula. (a) $\beta$ = -10\si{\degree}. (b) $\beta$ = 0\si{\degree}. (c) $\beta$ = 10\si{\degree}. (d) $\beta$ = 20\si{\degree}.}\label{fig: figure8}
\end{figure}
When all the $\Delta$W profiles are normalised with respect to their corresponding maximum and the coordinates are scaled with respect to the position of maximum velocity, the profiles nearly collapse (Fig.\ref{fig: figure9}d).
The data closely resemble the spanwise velocity profile used in the theoretical work of \citet{Ustinov2018}, which was effective in counteracting the crosswise instabilities in swept-wing flows.
\begin{figure}[h]
\centering\includegraphics[scale = 0.5]{Figures/Results/Streamlines/figure7.jpg}
\caption{Velocity profiles from CFD simulations at X/L = 6 downstream of the leading edge. (a) Chordwise velocity for $\beta$ = +10\si{\degree}. (b) Spanwise velocity for $\beta$ = +10\si{\degree}. Net-effect cross-flow profile (c) for all sweep angles. (d) Normalised cross-flow velocity profile in comparison with \citet{Ustinov2018}.}\label{fig: figure9}
\end{figure}
\section{Discussion and Conclusions}
We showed that serrations at the leading edge of an owl-inspired model induce an inboard directed flow that is opposite in direction to the cross-span flow induced by the backward sweep of the wing.
In the following we shall first discuss these data with respect to the existing literature, arguing about some methodological considerations and then speculating about its consequences for owl flight and flight in general.
\subsection{Comparison with other work}
To the best of our knowledge, no study has directly addressed how the sweep angle influences the flow in nature-inspired serrated wings. The work most important to our new data and hypothesis is that by \citet{Ustinov2018}.
The near overlap of the curves in Fig.\ref{fig: figure9}d shows that the serrations reproduce the effect envisioned by \citet{Ustinov2018}.
These authors discussed this effect as counteracting the crosswise flow in swept wings and thereby attenuating the crossflow instabilities, a negative feature of backward-swept-wing aerodynamics.
The work of these authors is based on a theoretical consideration of micro-perforation or winglets on the surface of a wing, which are arranged in a way that they produce a spanwise flow in the boundary layer opposite in direction to the cross-span flow induced by the sweep-effect.
With this configuration, \citet{Ustinov2018} observed a wall-jet-like flow profile in the spanwise direction that is similar in shape and relative magnitude to our net-effect result.
Therefore, the 3D curved serrations of the barn owl wing could be thought of as a leading-edge laminar flow control device which counteracts the cross flow instabilities in swept wing aerodynamics.
As we could show here, the serrations of the owl wing are not comparable to classical vortex generators, as had been speculated in previous work \citep{geyer2020, Hertel1963}.
These vortex generators are used traditionally to control the flow separation on the suction side of the airfoils \citep{linjohn}.
They produce strong streamwise vortices to mix the fluid flow via the lift-up effect, thus increasing streamwise momentum near the wall.
In comparison, our study found that the serrations studied herein behave similarly to 3D curved cascade blades which turn the flow to a certain degree depending on the spacing-to-chord ratio and the blade angle (stagger angle).
Hence, near the root of the serrations the spacing-to-chord ratio is low and the stagger angle is high, guiding the flow to turn at relatively high angles when compared with the tip. \citet{Kroeger1972} hinted at the cascading effect of the leading edge serrations. However, they stated that the serrations push the flow behind the leading edge towards the outboard region of the owl wing, which is opposite to our observation. Note that their statement resulted from tuft flow visualisation where the length of the tufts was greater than 4~mm. Therefore, the tuft motion is the result of an integration over the complete boundary-layer thickness and part of the external flow. Since the height of the serrations is less than 2~mm, they probably could not observe our effect because of this integration. In addition, any method of flow visualization or flow measurement must capture data very close to the wall, as provided herein. This is where we benefit from the testing of an enlarged model in a water tunnel, fulfilling the rules of fluid mechanical similitude.
A vague indication of flow turning may be found in the results of \citet{wei2020}, although it is not mentioned therein. It seems from Fig.~10b in \citet{wei2020} that the hook-like serrations changed the direction of the flow. However, since the graph is cut off downstream at about 0.5 serration lengths, it is difficult to infer a conclusive answer on any flow turning.
\subsection{Methodological considerations}
It is obvious from live recordings of the gliding flight of owls that the leading edge in the region of the serrations is swept backward \citep{Kroeger1972, Durston2019}, an aspect which has so far not found attention in the discussion of the function of the serrations. We observed a flow turning effect induced by the 3D curved serrations, which counteracts the crossflow induced in a backward-swept wing. In this respect it seems important that we have carefully rebuilt the natural shape of the serrations, characterized by twisting, tilting and taper, which Bachmann and Wagner \citep{Bachmann2011} called a first-order approach, rather than using the zero-order approach of simply-shaped, often symmetric serrations as is done in most studies \citep{geyer2020, Rao2017,ikeda2018, geyer2017-LEC}.
The focus of the study was to demonstrate the basics of the novel turning effect. A good correlation was found between the observed turning angle and the classical formula for cascade blades, approximated as half the difference between the inlet flow angle $\beta$ and the stagger angle $\xi$ \citep{Dixon}.
Not all parameters could be assessed in this first study. Further work might unravel the role of the wavelength, as it is obvious that a too large inter-spacing will destroy the homogeneity of the induced crossflow and a too small inter-spacing will cause unnecessary form drag.
More studies are also necessary to find out how the angle of attack and the Reynolds number influence the flow turning, and how far the laminar hypothesis is valid.
\subsection{Consequences for owl flight}
The consequence of the flow manipulation reported in \citet{Ustinov2018} for a swept wing is a delay of the transition to turbulence.
Because of the striking similarity of the effect of the manipulation on the boundary layer profile to the effect we observed, we conclude that the leading-edge comb acts to delay transition on the swept wing of the owl.
A delay of transition would correspond to a reduction in noise production as the portion on the wing surface where the flow is turbulent is reduced or even completely removed.
Owl flight is so silent that it is difficult to measure directly (in absolute terms) the noise these birds produce. Only in comparison with other, non-serrated wings, does the noise-reduction of owl flight become clear \citep{neuhaus1973, geyer2020}. Thus, the influence on the air flow as demonstrated here may be critical in nature, where a hunting owl has to remain silent until right before the strike.
Serrations that help to keep the flow laminar and prevent cross-flow instabilities under typical flight conditions with a backward swept wing may therefore provide a major advantage for the hunt.
\subsection{Conclusions}
To conclude, we have investigated the effect of a nature-inspired leading edge comb on the flow along a swept flat plate.
Special focus is laid on the influence of the leading-edge comb on the backward swept wing in gliding flight, which is known in classical wing aerodynamics to introduce considerable cross-span flow, which suffers instabilities and triggers early transition \citep{serpieri, Radeztsky, Edward}.
As evidenced in the CFD and the experiments, our model produces a flow turning which counteracts the cross-span flow. The magnitude of this effect is proportional to the stagger angle of the local cross-section of the barbs. If the sweep angle is increased, the flow turning becomes more pronounced, suggesting that the owl's leading-edge comb is tailored for attenuating the cross-flow instabilities. Ultimately, this amounts to laminar flow control with the benefit of a quiet flight.
\section*{Acknowledgements}
The position of Professor Christoph Bruecker is co-funded by BAE SYSTEMS and the Royal Academy of Engineering (Research Chair No. RCSRF1617$\backslash$4$\backslash$11), which is gratefully acknowledged. The position of MSc Muthukumar Muthuramalingam was funded by the Deutsche Forschungsgemeinschaft in the DFG project BR~1494/32-1, and MEng Edward Talboys was funded by the School of Mathematics, Computer Science and Engineering at City, University of London.
Hermann Wagner was supported by RWTH Aachen University.
We like to thank Matthias Weger, Adrian Klein and Horst Bleckmann for discussion on the owl's leading edge geometry and its relevance to silent owl flight.
\section*{Author contributions statement}
All authors conceived the experiment(s), M.M. and E.T. conducted the experiments, all authors analysed the results. Initial draft was prepared by M.M, E.T and C.B. The finalised version was prepared with the contribution from all authors.
\section*{Additional information}
\textbf{Accession codes} (where applicable);\\ \textbf{Financial Competing interests} The authors declare no competing interests. \\ \textbf{Non-Financial Competing interests} The authors declare no competing interests.
\bibliographystyle{unsrtnat}
\section{Introduction}
Many polymers (e.g.,~ polyethylene, polyamides, polyesters, etc.) are semi-crystalline; their solid-states contain both crystalline and amorphous domains. The characteristics of these domains are governed by phenomena that occur during polymer processing, dictating the properties of polymeric materials and their potential uses (e.g., see ref.~\citenum{ParkRutledge2018,ShenNatureNano2010}). As such, understanding and controlling polymer crystallization is of great scientific and technological importance, especially in light of the ubiquity of polymeric materials. However, scientific understanding of polymer crystallization remains incomplete and an active area of research as evidenced by a number of recent reviews.\cite{CuiEtAlChemRev2018,YuePolyCrystal2018,LotzMacro2017,SchickJPhysCond2017,ZhangCrystals2017} In the intervening decades since Keller's seminal work\cite{KellerDFS1958,KellerPhilMag1957} revealing that polyethylene chains adopt folded conformations upon crystallization, there has been a large number of experimental,\cite{LiuTianMacro2014,KanayaMacro2013,ZhaoMacro2013,GutierrezEtAlMacro2004,SekiMacro2002,BalzanoPRL2008,WangEtAlMacro2000,RyanFarad1999,SamonMacro1999,TerrillEtAlPolymer1998} theoretical\cite{SadlerNat1987,SadlerGilmerPRL1986,LauritzenJAppPhys1973,PointMacro1979,PointFaraday1979,FrankTosi1961,PriceJCP1961,Lauritzen1960} and computational\cite{ZhaiEtAlMacro2019,YamamotoMacro2019,HagitaEtAlJCP2019,LiEtAlJCP2019,MoyassariPolymer2019,MoyassariEtAlMacro2019,HuEtAlPolymer2019,ZhangLarsonJCP2019,AnwarGrahamJCP2019,HallPolymers2020,HallShapeJCP2019,HallHeatJCP2019,ZhangLarsonMacro2018,HagitaAIPAdvances2018,VerhoMacro2018,SliozbergMacro2018,NieEtAlMacro2018,KumarMacro2017,BourqueEtAlJPCB2017,BourqueMacro2016,LuoPolymer2017,LuoEtAlMacro2016,TangetalJCP2018,TangPRM2017,WelchJCP2017,WangEtAlCTC2015,LuoSommerPRL2014,AnwarEtAlJCP2014,YamamotoJCP2013,YiMacro2013,AnwarJCP2013,LuoSommerMacro2011,YamamotoJCP2010,YamamotoJCP2008,HuFrenkelMacro2004,HuEtAlMacro2002,MeyerJCP2001,MuthukumarWelchPolymer,DoyePolymer2000,DoyeFrenkelJCP19992,DoyeFrenkelJCP19991,LiuMuthukumar1998} studies probing polymer crystallization and related phenomenology.
Crystal nucleation corresponds to the very earliest stages of crystallization in which a nascent crystal (i.e., nucleus) emerges from a metastable melt or solution. Crystal nucleation is being increasingly studied using simulations as they provide direct access to molecular details at high spatiotemporal resolutions, something challenging to achieve experimentally. Previous \textit{in silico} work has probed polymer crystal nucleation in the context of isolated chains,\cite{HagitaAIPAdvances2018,MuthukumarWelchPolymer,LiuMuthukumar1998} solutions,\cite{LuoEtAlMacro2016,HuFrenkelMacro2004,HuEtAlMacro2002,LiuMuthukumar1998} and melts.\cite{MoyassariEtAlMacro2019,ZhangLarsonJCP2019,ZhangLarsonMacro2018,AnwarGrahamJCP2019,HallPolymers2020,HallShapeJCP2019,HallHeatJCP2019,WelchJCP2017,WangEtAlCTC2015,AnwarEtAlJCP2014,YiMacro2013,YamamotoJCP2010,MeyerJCP2001} Simulations have also been used to elucidate the effects of molecular weight distribution,\cite{ZhaiEtAlMacro2019,LiEtAlJCP2019,MoyassariPolymer2019,MoyassariEtAlMacro2019,NieEtAlMacro2018} chain topology (e.g.,~ linear vs. ring chains),\cite{HagitaEtAlJCP2019} chain branching,\cite{MoyassariEtAlMacro2019,HuEtAlPolymer2019,ZhangLarsonMacro2018} and cross-linking\cite{PaajanenPolymer2019} on polymer crystal nucleation. There has been much interest in the evolution of chain conformations,\cite{MoyassariPolymer2019,MoyassariEtAlMacro2019,HallHeatJCP2019,WangEtAlCTC2015,YamamotoJCP2010,YamamotoJCP2008,LiuMuthukumar1998} topological details related to chain folding,\cite{ZhaiEtAlMacro2019,YamamotoMacro2019,MoyassariEtAlMacro2019,MoyassariPolymer2019,HallPolymers2020,HallShapeJCP2019,HallHeatJCP2019,MorthomasEtAlPhysRevE2017,LuoEtAlMacro2016,YiMacro2013,MeyerJCP2001} and connecting entanglements/disentanglement to observed crystallization phenomenology.\cite{ZhaiEtAlNano,LuoPolymer2017,LuoEtAlMacro2016,LuoSommerPRL2014} For example, Luo and Sommer\cite{LuoSommerPRL2014} demonstrated that local entanglements play a key role in polymer crystallization memory effects, and revealed connections between local entanglement lengths, stem lengths and densities. Through simulations, it has also been demonstrated that quenching polymer melts below their nematic-isotropic transition temperatures can enhance crystallization via the formation of metastable nematic precursors.\cite{ZhangLarsonJCP2019} Other recent studies have probed flow-induced crystal nucleation in polymer melts\cite{AnwarGrahamJCP2019,YamamotoMacro2019,SliozbergMacro2018,NieEtAlMacro2018,AnwarEtAlJCP2014} and short-chain alkane systems.\cite{NicholsonRutledgeJCP2016}
Additional \textit{in silico} work\cite{LuoPolymer2017,YamamotoJCP2013} has explored crystal nucleation in supported and free-standing thin films, probing the effects of polymer-melt and polymer-substrate interfaces on nucleation phenomenology. While there has been much work on polymer crystal nucleation, the interfacial structuring of nuclei remains underexplored. Structural descriptions of polymer nucleus-melt interfaces are generally qualitative and focused on topological details. For example, previous simulation-based work\cite{SliozbergMacro2018} qualitatively revealed that chain ends are preferentially partitioned to crystal-melt interfaces, and that chain entanglements are generally relegated to amorphous domains during the crystallization of entangled polymer melts under uniaxial strain. A quantitative, spatially-resolved picture of the nucleus-melt interfacial region has yet to emerge.
To address this knowledge gap, we have conducted molecular dynamics (MD) simulations of crystal nucleation in entangled polyethylene melts, and thereby provide a quantitative understanding of variations in polymer properties in the vicinity of nuclei. This study explores how polymer properties vary from the center of a nucleus into the surrounding metastable melt (see the schematic in Fig.~\ref{fig:schematic}), revealing that there is a partial spatial decoupling of segment properties at the nucleus interface. In particular, nuclei arising during quiescent, non-flow crystallization apparently reside in nematic droplets, which has broad implications for crystallization phenomenology as well as the interpretation of both experimental and computational results.
\begin{figure}[!b]
\centering
\includegraphics[width=0.5\columnwidth]{Figure1.jpg}
\caption{A schematic of a polymer nucleus. The dark blue lines correspond to the ordered crystalline polymer chain segments that form the nucleus (i.e., stems). The red arrow indicates the direction of the stems (i.e., the nematic director of the nucleus). The pale blue lines indicate polymer chain segments that are amorphous and not part of the nucleus. A fold corresponds to an amorphous segment that connects two stems that are part of the same nucleus. The surrounding melt has not been shown for visual clarity. The work reported herein focuses on the evolution of average segment properties with distance from the center of mass of a nucleus as represented by the gray arrow.}
\label{fig:schematic}
\end{figure}
\section{Methods}
\subsection{Crystallization Simulations}
The MD simulation details for this study are the same as for our earlier work;\cite{HallShapeJCP2019} see ref.~\citenum{HallShapeJCP2019} for full details. Briefly, ten high-temperature, entangled \textit{n-}C\textsubscript{720}H\textsubscript{1442}~ melts were prepared, and then sequentially cooled to crystal-forming conditions (285 K), where they were simulated for $\sim$4 $\mu$s. The simulations probed polymer crystal nucleation under quiescent, non-flow conditions. Crystal nucleation took place on the microsecond timescale,
and two melts did not crystallize during the 4-$\mu$s simulation window (see Fig.~\ref{fig:potEner}). During each simulation, system configurations (snapshots) were saved every 20,000 iterations ($\sim$0.4 ns). The coarse-grain Shinoda-DeVane-Klein (SDK) model\cite{ShinodaMolSim2007} was used to represent \textit{n-}C\textsubscript{720}H\textsubscript{1442}. More specifically, \textendash(CH\textsubscript{2})\textsubscript{3}\textendash~and \textendash{CH\textsubscript{2}}CH\textsubscript{2}CH\textsubscript{3} segments along the polymer chain backbones were represented using the CM and CT coarse-grain beads.\cite{ShinodaMolSim2007} As such, the terms ``bead'' and ``segment'' are used interchangeably throughout this study. We have previously demonstrated that the SDK model is an appropriate coarse-grain model for simulating polyethylene systems and processes.\cite{HallModelJCP2019} The simulations were conducted using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS).\cite{Plimpton1995}
\begin{figure}[!ht]
\centering
\includegraphics[width=1.0\columnwidth]{Figure2.jpg}
\caption{Temporal evolution of the nucleation simulations and their final states. A) The average molecular potential energy ($U_{chain}$) curves for this study's ten nucleation simulations at 285 K. Crystallization is associated with a large decrease in $U_{chain}$. B) A zoomed-in view of the final configuration from the simulation corresponding to the dark blue curve labelled I in Panel A. Only the ordered polymer chain segments composing the largest nucleus are shown for visual clarity. The nucleus corresponds to approximately 100 carbon atoms. According to previous work,\cite{HallShapeJCP2019} the critical nucleus size for the conditions and model considered in this study is $\sim$600 carbon atoms. The blue box corresponds to the simulation cell. C) A zoomed-out view of the final configuration from the simulation corresponding to the black curve labelled II in Panel A. The simulation cell corresponds to the blue box; all simulations were performed under periodic boundary conditions for this study. All nuclei larger than the critical nucleus size are shown, and each nucleus is visualized using a different color. The methodological details of how nuclei were extracted from simulation snapshots are provided in the main text.}
\label{fig:potEner}
\end{figure}
\subsection{Nucleus Extraction}
Nuclei were extracted from each snapshot using the same approach as our previous work,\cite{HallShapeJCP2019,HallModelJCP2019,HallHeatJCP2019} an approach based on the $P_2$ order parameter. The $P_2$ order parameter captures the degree to which a polymer segment is locally aligned with its neighbouring segments. The $P_2$ order parameter and variations thereof have been used to study polymer crystallization by many other research groups (e.g.,~ see ref.~\citenum{MoyassariEtAlMacro2019,YamamotoMacro2019,HuEtAlPolymer2019,ZhangLarsonJCP2019,SliozbergMacro2018,NicholsonRutledgeJCP2016,BourqueMacro2016,YiMacro2013,YamamotoJCP2013,LiuMuthukumar1998}). For the purposes of this study, the backbone direction at a given coarse-grain segment (bead $i$) was estimated using the vector connecting its intramolecular nearest neighbours (i.e.,~ beads $i-1$ and $i+1$). The exception to this was the segments at the ends of the polymer chains, which only have one intramolecular nearest neighbour. In this case, the backbone direction was estimated using the vector connecting each terminal segment to its nearest intramolecular neighbour. In turn, the $P_2$ order parameter value for the i\textsuperscript{th} polymer segment was calculated according to:
\begin{equation}
P_2=\bigg\langle \frac{3\cos^{2}\theta_{ij}-1}{2} \bigg\rangle
\label{eq:p2}
\end{equation}
where $\theta_{ij}$ is the angle between the backbone direction at segment $i$ and the backbone direction at neighbouring segment $j$. The angular brackets in Eq.~\ref{eq:p2} indicate local averaging over all intramolecular and intermolecular neighbours within 0.635 nm of segment $i$. For reference, average $P_2$ values of 1.0, 0 and -0.5 indicate that a polymer segment is perfectly aligned, randomly oriented, and perpendicular, respectively, with respect to its neighbouring polymer chain segments. All beads with $P_2 \geq 0.85$ were labelled ordered (i.e.,~ crystalline).\cite{HallHeatJCP2019} The above protocol and cutoffs were established as part of our previous work\cite{HallHeatJCP2019} probing polyethylene crystallization with the SDK model. The polymer segments in each simulation snapshot were thus labelled ordered or disordered (i.e.,~ crystalline or non-crystalline). Cluster analysis was then performed on the ordered polymer segments in each snapshot to extract its nuclei; ordered segments within 0.635 nm of each other were taken to be part of the same cluster.\cite{HallHeatJCP2019}
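For illustration, this labelling pipeline can be sketched in a few lines of Python/NumPy. This is a schematic re-implementation rather than our production analysis code: the brute-force neighbour search ignores periodic boundaries and assumes every bead has at least one neighbour within the cutoff, and a cell list or tree would be used in practice.
\begin{verbatim}
import numpy as np

P2_CUTOFF = 0.635    # nm, neighbour cutoff used in this study
P2_THRESHOLD = 0.85  # beads with P2 >= 0.85 are labelled ordered

def backbone_directions(chain):
    # Unit backbone vector at each bead of one chain ((N, 3) array):
    # interior bead i uses the vector connecting beads i-1 and i+1;
    # terminal beads use the vector to their single neighbour.
    d = np.empty_like(chain)
    d[1:-1] = chain[2:] - chain[:-2]
    d[0] = chain[1] - chain[0]
    d[-1] = chain[-1] - chain[-2]
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def p2_values(positions, directions):
    # Local P2 of every bead; brute-force search, no PBC handling.
    p2 = np.zeros(len(positions))
    for i in range(len(positions)):
        r = np.linalg.norm(positions - positions[i], axis=1)
        near = (r > 0.0) & (r < P2_CUTOFF)
        cos2 = (directions[near] @ directions[i])**2
        p2[i] = np.mean(1.5 * cos2 - 0.5)
    return p2

# ordered = p2_values(pos, backbone_directions(pos)) >= P2_THRESHOLD
\end{verbatim}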
\subsection{Local Properties}
The local properties of the segments (beads) in each snapshot were also assessed in order to construct spatial property distributions of nuclei and their surroundings. The following four properties were measured in addition to the $P_2$ order parameter. \newline \newline
\textbf{Potential Energy (U):} The potential energy of each segment was estimated as the sum of the potential energy contributions arising from its bonded and non-bonded interactions as specified by the SDK model.\cite{ShinodaMolSim2007}
\newline \newline
\textbf{Density ($\rho$):} A Voronoi tessellation was performed on each snapshot according to the positions of its constituent polymer segments (beads). For reference, a Voronoi tessellation partitions the simulation cell into a set of sub-cells, specifically one sub-cell per segment, such that the sub-cell associated with a given polymer segment corresponds to the region of space for which that segment is the nearest segment. In turn, the local density associated with each segment was estimated by dividing the mass of the segment by its Voronoi sub-cell volume.
\newline \newline
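A sketch of this density estimate is shown below, assuming the open-source \texttt{freud} analysis library for the periodic Voronoi tessellation (named here for illustration; it is not necessarily the tool used in our pipeline):
\begin{verbatim}
import freud  # assumed available: periodic Voronoi tessellation

def local_densities(box_lengths, positions, bead_mass):
    # Per-bead density = bead mass / Voronoi sub-cell volume;
    # bead_mass may be a scalar or a per-bead array (CM vs. CT).
    box = freud.box.Box(*box_lengths)
    voro = freud.locality.Voronoi()
    voro.compute(system=(box, positions))
    return bead_mass / voro.volumes
\end{verbatim}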
\textbf{Nucleus-Segment Alignment ($S$):}
The nematic director of a nucleus corresponds to the overall direction of its constituent ordered polymer chain segments, and is schematically represented by the red arrow in Fig.~\ref{fig:schematic}. The nematic director of each nucleus was extracted in accordance with the procedure of Eppenga and Frenkel.\cite{EppangeMolPhys1984} The tensor order parameter $Q$ was constructed based on the backbone vectors of the segments comprising a nucleus, and then the nematic director of that nucleus was taken to be the eigenvector associated with the largest eigenvalue of $Q$ (see ref.~\citenum{EppangeMolPhys1984}).
$S$ quantifies the alignment of polymer segments with the nematic director of a particular nucleus, and was calculated according to:
\begin{equation}
S=\frac{3\cos^{2}\theta_{i}-1}{2}
\label{eq:S}
\end{equation}
where $\theta_{i}$ is the angle between the nematic director of the specified nucleus and the polymer backbone direction at segment $i$; the latter was estimated using the same procedure as for the $P_2$ calculations. As with the $P_2$ order parameter, average $S$ values of 1.0, 0 and -0.5 indicate perfect alignment, random orientations, and perpendicular configurations, respectively. However, $S$ probes nucleus-segment alignment whereas $P_2$ probes local segment-segment alignment.
\newline \newline
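A minimal sketch of the director extraction and the $S$ calculation (Python/NumPy; the head-tail symmetry of the backbone vectors is handled automatically by the outer-product construction of $Q$):
\begin{verbatim}
import numpy as np

def nematic_director(unit_vectors):
    # Tensor order parameter Q built from the (N, 3) backbone unit
    # vectors of one nucleus; the director is the eigenvector
    # associated with the largest eigenvalue of Q.
    u = unit_vectors
    Q = 1.5 * (u.T @ u) / len(u) - 0.5 * np.eye(3)
    vals, vecs = np.linalg.eigh(Q)
    return vecs[:, np.argmax(vals)]

def alignment_S(director, segment_direction):
    # S = (3 cos^2(theta_i) - 1) / 2 for one segment.
    c = float(np.dot(director, segment_direction))
    return 1.5 * c * c - 0.5
\end{verbatim}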
\textbf{Steinhardt-based Order ($q_{6}q_{6}^{*}$):}
The $q_{6}q_{6}^{*}$ metric was recently introduced by Zhang and Larson\cite{ZhangLarsonJCP2019,ZhangLarsonMacro2018} to probe polymer crystallization. It is based on the $q_{6m}$ metric as introduced by Steinhardt and coworkers,\cite{SteinhardtPRC1983} and the work of Auer and Frenkel.\cite{AuerFrenkelJCP2004} In this study, $q_{6m}$ was calculated for a given segment ($i$) according to:
\begin{equation}
q_{6m}(i)=\langle Y_{6m}(\theta(\vec{r}_{ij}),\phi(\vec{r}_{ij}))\rangle
\label{eq:q6}
\end{equation}
where $Y_{6m}$ corresponds to the $m$ component of the degree-6 spherical harmonic (i.e.,~ $l=6$), $\vec{r}_{ij}$ is the vector from segment $i$ to a neighbouring segment ($j$), and $\theta$ and $\phi$ indicate the spherical polar orientation of $\vec{r}_{ij}$ in a fixed frame of reference centered on segment $i$. The angular brackets indicate averaging over all neighbouring segments within 0.635 nm of segment $i$. In turn, the $q_{6}q_{6}^{*}$ value of a segment was determined according to:
\begin{equation}
q_{6}q_{6}^{*}=\sum_{j=1}^{N_{i}} \sum_{m=-6}^{6} q_{6m}(i) q_{6m}^{*}(j)
\label{eq:q6q6}
\end{equation}
where the first sum is over the $N_{i}$ neighbouring segments within 0.635 nm of segment $i$ (excluding segment $i$ itself), and $q_{6m}^{*}(j)$ is the complex conjugate of $q_{6m}(j)$. Given that $q_{6}q_{6}^{*}$ depends on the $q_{6m}$ values of segments within 0.635 nm~of segment $i$, and that the $q_{6m}$ values of these neighboring segments depend on beads within 0.635 nm of them, the $q_{6}q_{6}^{*}$ value of a segment depends indirectly on segments up to $\sim$1.27 nm away from that segment. In turn, $q_{6}q_{6}^{*}$
probes an approximately eightfold larger region of space than the $P_2$ order parameter. Note that a distance cutoff of 0.635 nm was used for Eqs.~\ref{eq:q6} and~\ref{eq:q6q6} instead of the 0.54-nm cutoff used in Zhang and Larson's work\cite{ZhangLarsonJCP2019,ZhangLarsonMacro2018} because this study uses a different molecular model. Zhang and Larson used atomistic models whereas this study uses the coarse-grain SDK model.\cite{ShinodaMolSim2007}
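The two-stage calculation can be sketched as follows (Python/SciPy; as above, the brute-force neighbour search and the absence of periodic-boundary handling are simplifications, and note that \texttt{scipy.special.sph\_harm} takes the azimuthal angle before the polar angle):
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm

CUTOFF = 0.635  # nm

def neighbours(i, positions):
    r = np.linalg.norm(positions - positions[i], axis=1)
    return np.where((r > 0.0) & (r < CUTOFF))[0]

def q6m(i, positions):
    # Complex q_6m vector of segment i (m = -6..6), averaged over
    # the neighbours of i within the cutoff.
    rij = positions[neighbours(i, positions)] - positions[i]
    r = np.linalg.norm(rij, axis=1)
    polar = np.arccos(rij[:, 2] / r)          # polar angle of r_ij
    azim = np.arctan2(rij[:, 1], rij[:, 0])   # azimuthal angle
    return np.array([np.mean(sph_harm(m, 6, azim, polar))
                     for m in range(-6, 7)])

def q6q6_star(i, positions, q6m_all):
    # Sum of q_6m(i) . conj(q_6m(j)) over the neighbours j of i,
    # with q6m_all precomputed for every segment.
    js = neighbours(i, positions)
    return float(np.real(np.sum(q6m_all[i] * np.conj(q6m_all[js]))))
\end{verbatim}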
\subsection{Spatial Property Distributions}
The property values and nuclei extracted from each snapshot were in turn used to construct spatial distributions capturing the evolution of segment properties in the vicinity of nuclei corresponding to 240-360 carbon atoms inclusive. This size range was chosen as it allowed for good sampling (i.e.,~ thousands of nuclei) while still being comparable to the critical nucleus size for the polyethylene melts considered in this study (i.e.,~ $\sim$600 carbon atoms based on our previous work\cite{HallShapeJCP2019}).
A principal axis system was determined for each extracted nucleus by calculating the three eigenvectors and eigenvalues of its radius of gyration tensor. The eigenvalues indicate the spatial extent of the particles along each eigenvector (i.e.,~ axis). As previously demonstrated,\cite{HallShapeJCP2019} a pre-critical polymer crystal nucleus is, on average, an anisotropic entity possessing a long axis (L), a median axis (M), and a short axis (S). The eigenvalues were used to assign these labels to the eigenvectors, and thus establish a local LMS frame of reference for each cluster. For reference, the center of mass (COM) of a nucleus corresponds to the origin of the local LMS frame of reference for that nucleus. In turn, property distributions were extracted for each nucleus using the LMS frame of reference, and these distributions were averaged together for nuclei corresponding to 240-360 carbon atoms inclusive to create average property distributions around pre-critical nuclei. The average distributions presented in this paper are based on only those snapshots where the largest nucleus in the corresponding system corresponded to 360 or fewer carbon atoms. This was done to ensure that the distributions reflect the initial nucleus-melt interface and not more complex, subsequent structures; multiple large-scale, post-critical clusters formed in some of the simulations (see Fig.~\ref{fig:potEner}C).
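A sketch of the LMS frame construction (Python/NumPy; equal bead masses are assumed for brevity):
\begin{verbatim}
import numpy as np

def lms_frame(positions):
    # Principal axes of one nucleus from its radius of gyration
    # tensor; returns the COM and the axes ordered long, median,
    # short (descending eigenvalues).
    com = positions.mean(axis=0)
    dr = positions - com
    gyr = dr.T @ dr / len(dr)
    vals, vecs = np.linalg.eigh(gyr)   # ascending eigenvalues
    order = np.argsort(vals)[::-1]
    return com, vecs[:, order]
\end{verbatim}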
In this study, we consider two classes of distributions, radial and cylindrical. Each radial distribution captures the evolution of a local property (e.g.,~ density) as a function of radial distance ($r$) from the nucleus COM (see~Fig.~\ref{fig:schematic}), and is thus a 1D projection of the full 3D LMS distribution. A cylindrical distribution captures the evolution of a local property as a function of: 1) distance along the long axis of a nucleus from its COM ($z$), and 2) radial distance from the nucleus long axis ($r'$). Such a cylindrical distribution corresponds to a 2D projection of the full 3D LMS distribution. Radial and cylindrical distributions enabled improved sampling compared to the full 3D distributions. Radial distributions are generally presented in this study in order to facilitate comparisons between different properties.
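For instance, a radial distribution can be accumulated per nucleus along the following lines (a sketch; in practice, per-bin counts are retained so that distributions from thousands of nuclei can be averaged with proper weights):
\begin{verbatim}
import numpy as np

def radial_profile(com, positions, values, r_max=8.0, nbins=80):
    # Average a per-segment property in spherical shells around the
    # nucleus COM; returns bin centres [nm] and binned averages.
    r = np.linalg.norm(positions - com, axis=1)
    edges = np.linspace(0.0, r_max, nbins + 1)
    idx = np.digitize(r, edges) - 1
    prof = np.full(nbins, np.nan)
    for k in range(nbins):
        sel = idx == k
        if sel.any():
            prof[k] = values[sel].mean()
    return 0.5 * (edges[1:] + edges[:-1]), prof
\end{verbatim}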
\subsection{Crystalline Reference Data}
In order to compare the properties of nuclei and their surroundings to those of crystalline polyethylene, a second set of simulations was performed on a perfect polyethylene crystal. The methodological details of these simulations (e.g.,~ thermostating and barostating) were the same as those of the crystallization simulations at 285 K unless otherwise noted below. A perfect crystal consisting of 450 polyethylene chains was constructed according to the crystal structure of long-chain normal hydrocarbons.\cite{Bunn1939} Each chain consisted of 240 coarse-grain beads (720 carbon atoms) and was connected to itself across the periodic boundaries of the simulation cell, effectively yielding an infinite polymer chain. The system thus lacked crystal defects arising from chain ends and folds. The perfect crystal was heated from 1 K to 250 K over 20,000,000 time steps ($\sim$407 ns). The temperature ramp was achieved by linearly increasing the set point of the simulation's Nos{\'e}-Hoover \cite{Nose1984,Hoover1985} chain thermostat\cite{MartynaJCP1992} using the internal functionality of the fix NPT command in LAMMPS. The crystal was then equilibrated at 285 K for 2,000,000 time steps ($\sim$20 ns).\footnote{This simulation used a time step that was half the length of the time step used in the other simulations.} A production simulation was then performed at 285 K for 1,000,000 time steps ($\sim$20 ns) to sample the properties of the equilibrated crystalline system. As with the nucleation simulations, snapshots were saved every 20,000 time steps ($\sim$0.4 ns). Local properties (e.g.,~ $P_{2}$) were calculated for all polymer segments in the system, and then averaged across all segments and snapshots to obtain average local property values for the perfect crystal.
\section{Results and Discussion}
Normalized radial property distributions reveal that there is a partial spatial decoupling of segment properties at the nucleus-melt interface (see~Fig.~\ref{fig:relInterface}). For example, on approaching the center of a nascent nucleus (i.e.,~ $r$ = 0 nm in Fig.~\ref{fig:relInterface}), surrounding polymer chains begin to display increasing alignment with the nematic director of the nucleus potentially up to $\sim$6 nm away from the nucleus, as can be seen from the $S$ curve in Fig.~\ref{fig:relInterface}A. In contrast, changes in local density, local potential energy, and $q_{6}q_{6}^{*}$ begin to manifest only at much shorter distances, in the vicinity of $\sim$2-3 nm. There is thus a spatial lag in some property transitions. As such, polymer segments approaching the crystal-melt interface experience increased order (reductions in their entropy) as evidenced by both $S$ and $P_2$ before achieving the lower enthalpy (potential energy) of the crystalline phase. This phenomenology is consistent with microscopic explanations of interfacial free energies in which the spatial separation of entropy losses and enthalpic payback makes it unfavorable to expand an interface.
\clearpage
\begin{figure}[!h]
\centering
\includegraphics[width=0.6\columnwidth]{Figure3.jpg}
\caption{Average variation in segment properties at increasing radial distances from nucleus COM for nuclei corresponding to 240-360 carbon atoms. A) Normalized property profiles. The profiles have been normalized so that values of 0.0 and 1.0 indicate property values corresponding to those of the surrounding metastable \textit{n-}C\textsubscript{720}H\textsubscript{1442}~ melt, and those of a perfect polyethylene crystal, respectively. The curves correspond to 5-parameter sigmoidal fits of the underlying data points ($R^2$ $>$ 0.99). The standard error range associated with each data point is much smaller than its symbol size except for the first points of $\rho$ and U at $r$ = 0.1 nm where the standard error ranges are comparable to the symbol sizes. B) The average probability of finding segments that are part of the nucleus at $r$=0 nm, $P(nucleus)$, as a function of increasing distance from that nucleus. Note that $P(nucleus)$ is shown using a log axis.}
\label{fig:relInterface}
\end{figure}
\clearpage
Interestingly, the $q_{6}q_{6}^{*}$ order parameter fails to capture this expected interfacial phenomenology. In fact, the $q_{6}q_{6}^{*}$ data are nearly coincident with the density data in Fig.~\ref{fig:relInterface}A. This likely stems from two factors. First, $q_{6}q_{6}^{*}$ is based on a sum~\textemdash~not an average~\textemdash~over neighbouring beads within a spatial cutoff (see Eq.~\ref{eq:q6q6}), so $q_{6}q_{6}^{*}$ is sensitive to density changes in addition to orientational ordering. Second and as previously mentioned, $q_{6}q_{6}^{*}$ depends on a much larger region of space than $P_2$, so it is likely not as sensitive to local ordering that may occur in interfacial regions. It is also important to note that all segments within 0.635 nm of a segment contribute equally to its $q_{6m}$ and $q_{6}q_{6}^{*}$ values while such segments do not contribute equally to a segment's potential energy; a segment that is farther away makes a smaller contribution in accordance with the spatial dependency of non-bonded interactions. Consequently, the $q_{6}q_{6}^{*}$ metric potentially possesses greater environmental sensitivity than potential energy, which explains why the $q_{6}q_{6}^{*}$ profile in Fig.~\ref{fig:relInterface} is displaced to the left of the potential energy profile.
While the centers of pre-critical nuclei are close to the crystalline state in terms of segment alignment (i.e.,~ $S$ and $P_2$ values), they still markedly differ from the crystalline state in terms of density, potential energy, and $q_{6}q_{6}^{*}$ (see Fig.~\ref{fig:relInterface}A). In fact, the normalized density and potential energy curves in Fig.~\ref{fig:relInterface}A exhibit values of approximately 0.6-0.7 at $r$ = 0 nm, indicating that these properties have only transitioned $\sim$60-70\% of the way to the crystalline state. Nevertheless, the pre-critical nuclei used to construct Fig.~\ref{fig:relInterface}A are still relatively large with respect to the critical nucleus size (i.e.,~ 240-360 carbon atoms vs. $\sim$600 carbon atoms\cite{HallShapeJCP2019}). For the conditions considered in this study, polymer nuclei do not exhibit a ``crystalline'' core surrounded by a crystal-melt interface as envisioned, for example, in classical nucleation theory. Rather, polymer nuclei exhibit properties intermediate between those of the melt and the crystal. Previous studies\cite{HallPolymers2020,HallShapeJCP2019} probing other facets of polymer nucleation similarly indicate that nuclei are not simply miniature versions of lamellar polymer crystals.
Another notable feature in Fig.~\ref{fig:relInterface}A is the length scales over which the profiles deviate from the properties of the metastable melt (i.e.,~ values of 0), which is up to $\sim$6 nm in the case of nucleus-segment alignment ($S$). Importantly, nucleus-segment alignment is not simply a manifestation of the spatial extent of the polymer segments that are part of a nucleus (see Fig.~\ref{fig:relInterface}B). In particular, the probability of encountering a polymer segment that is part of the nucleus has already dropped below 0.01 by $\sim$2.4 nm, and it is several orders of magnitude lower at distances greater than 4 nm (Fig.~\ref{fig:relInterface}B). Therefore, the nematic alignment of a nucleus does not propagate from the nucleus into the surrounding metastable melt only through the polymer segments comprising the nucleus and their immediate neighbours (i.e.,~ contact pairs). Rather, nucleus-segment alignment propagation at the nucleus-melt interface is related to a nucleus's folds as evidenced by Fig.~\ref{fig:nematic-fold}. This relationship between the spatial extent of folds and nucleus-segment alignment is physically reasonable since folds are bounded by polymer segments that are part of the nucleus, and so propagation of nucleus alignment along a fold constitutes an intramolecular process. Moreover, previous work\cite{WelchJCP2017} on polymer melts has connected polymer crystallization to the development of on-chain (intramolecular) order, though the microscopic nature of this connection was not elucidated. Long-range intramolecular processes are relevant to polymer nucleus-melt interfacial phenomena, and relatively large-scale MD simulations are likely required to probe polymer crystallization phenomena.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\columnwidth]{Figure4.jpg}
\caption{Nematic alignment-fold correlation. The spatial evolution of average nucleus-segment alignment is overlaid with that of the average probability of encountering a segment that is part of a nucleus's folds, $P(fold)$, for nuclei corresponding to 240-360 carbon atoms. For reference and in accordance with our previous work,\cite{HallShapeJCP2019,HallModelJCP2019,HallHeatJCP2019} a segment was considered to be part of a fold if it was part of a set of noncrystalline segments (i.e.,~ $P_{2} < 0.85$) that was bordered by crystalline segments that were part of the same nucleus. The axes and curves are color-coded, and the error bars indicate standard errors. The inset provides the correlation between $P(fold)$ and $S$ for r=1.1-5.9 nm, and the line is a linear fit of the data ($R^2 > 0.99$).}
\label{fig:nematic-fold}
\end{figure}
Given that nucleus-segment alignment extends far beyond the region of space where there is a high probability of encountering the constituent segments comprising the nucleus, the full spatial extent of polymer structuring at polyethylene growth fronts and their interfaces may have been historically underestimated in some \textit{in silico} studies. More specifically, the length scales in Fig.~\ref{fig:relInterface}A may appear comparable to: 1) the $\sim$4-nm thicknesses revealed in Yamamoto's work\cite{YamamotoJCP2013,YamamotoJCP2008} on the tapered growth fronts of growing polyethylene crystals (see Fig. 9 in ref.~\citenum{YamamotoJCP2013} and Fig. 4 in ref.~\citenum{YamamotoJCP2008}), and 2) variations in crystallinity profiles during the growth of \textit{n-}C\textsubscript{50}H\textsubscript{102} crystals on polyethylene and tetrahedral substrates.\cite{BourqueEtAlJPCB2017,BourqueMacro2016} However, these studies probed growth fronts in terms of stem lengths and crystalline beads whereas the profiles in Fig.~\ref{fig:relInterface}A capture the spatial variation of properties in the vicinity of a nucleus arising from both constituent and non-constituent polymer segments. Profiles in the aforementioned studies likely do not fully capture structuring in the vicinity of melt-crystal interfaces.
It is important to distinguish between long-range nucleus-segment alignment and local segment-segment alignment (i.e.,~ $S$ and $P_2$). In particular, the $P_2$ order parameter does not start to deviate from melt-like behaviour until one is less than $\sim$4 nm from a nucleus (Fig.~\ref{fig:relInterface}A). Therefore, polymer segments do not exhibit enhanced alignment with their neighbouring segments until they are within $\sim$4 nm of a nucleus (with respect to the center of mass of that nucleus). In contrast, polymer chain segments start to align with the nematic director (i.e.,~ chain direction) of a nucleus at a distance of $\sim$6 nm as demonstrated by the $S$ profile in Fig.~\ref{fig:relInterface}A. Long-range alignment and local packing are distinct, consistent with a nematic-like transition. These results thus indicate that a polymer nucleus can be considered to reside in a nematic-like droplet. In accord with this perspective, $S$-based estimates of the combined volume of a nucleus and its interfacial region are much larger than volumes obtained using other metrics, such as density and $P_2$, as can be seen in Fig.~\ref{fig:volume}. On approaching a nucleus, first long-range nematic alignment ($S$) increases, then local segment-segment alignment ($P_2$) increases, and then potential energy, density and $q_{6}q_{6}^{*}$ start to change (Fig.~\ref{fig:relInterface}A and Fig.~\ref{fig:volume}). As a result, polymer crystal nucleation involves a partial spatial decoupling of polymer properties for the quiescent, non-flow conditions considered in this study.
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\columnwidth]{Figure5.JPG}
\caption{Estimates of the average combined volume of a nucleus and its interface based on different properties. Each property's volume estimate was constructed using the distance at which the corresponding radial profile in Fig.~\ref{fig:relInterface} reached a normalized value of 0.05 (i.e.,~ 5\% of the transition from the melt to crystalline state) while invoking spherical geometry.}
\label{fig:volume}
\end{figure}
Our insights align well with results from previous computational studies\cite{ZhangLarsonJCP2019,TangetalJCP2018,TangPRM2017,AnwarEtAlJCP2014,AnwarJCP2013,LuoSommerMacro2011} on polymer and alkane crystallization. Zhang and Larson\cite{ZhangLarsonJCP2019} recently demonstrated that quenching polyethylene melts below their isotropic-nematic transition temperature ($T_{IN}$) results in rapid crystal nucleation on timescales $<$100 ns, which is much shorter than the microsecond induction times associated with this study's simulations (see Fig.~\ref{fig:potEner}).
To test whether our systems were susceptible to an isotropic-nematic transition prior to crystallization, average system-wide nematic order was calculated for the two systems that did not nucleate during their $\sim$4-$\mu$s simulations (i.e.,~ the systems lacking substantial potential energy dips in Fig.~\ref{fig:potEner}) using a procedure similar to that used to establish the nematic director of individual nuclei. More specifically, the tensor order parameter $Q_{system}$ was constructed for each snapshot based on the backbone vectors of all polymer segments in the system
in accordance with the work of Eppenga and Frenkel.\cite{EppangeMolPhys1984} The nematic order of the system was taken to be the largest eigenvalue ($\lambda$) of $Q_{system}$; $\lambda$ approaches 1 with increasing alignment and approaches zero for a bulk isotropic phase. This study's perfect crystal system yielded $\lambda$=0.976 $\pm$ 0.0002 (average $\pm$ standard deviation) while the average $\lambda$ value for the non-nucleating simulations was 0.014 $\pm$ 0.006. These results indicate that the supercooled \textit{n-}C\textsubscript{720}H\textsubscript{1442}~ melts considered in this study generally correspond to isotropic states prior to nucleation and not metastable nematic states. The non-zero $\lambda$ for the non-nucleating simulations is likely due to transient pre-critical nuclei, which do exhibit chain alignment (nematic order). Consequently, this study considers polymer crystal nucleation under different conditions from those of Zhang and Larson.\cite{ZhangLarsonJCP2019} Nevertheless, our results can be used to interpret findings from the aforementioned study. In particular, Zhang and Larson\cite{ZhangLarsonJCP2019} concluded that rapid nucleation at $T<T_{IN}$ results from the formation of metastable nematic precursors that temporally precede and enhance crystal nucleation. Note that their work\cite{ZhangLarsonJCP2019} probed nematic order in terms of local segment-segment alignment (i.e.,~ using a variant of the $P_2$ order parameter) rather than probing long-range nucleus-segment order. It was also found that the onset of local segment-segment nematic alignment and crystallinity become temporally coincident as $T_{IN}$ is approached. In essence, previous work\cite{ZhangLarsonJCP2019} demonstrates that the development of local segment-segment alignment and crystallinity can become, at least partially, temporally decoupled under certain circumstances, but remain temporally coincident at higher temperatures. Our results demonstrate that spatial decoupling is present at the nucleus-melt interface. In turn, given that non-flow nuclei reside in droplets of long-range nematic ordering and local segment-segment order, processes that induce nematic order should enhance crystallization, such as the specific conditions considered in ref.~\citenum{ZhangLarsonJCP2019}.
Based on fully atomistic simulations, Tang et al.\cite{TangPRM2017} found that hexagonal structuring precedes the formation of nuclei with orthorhombic structuring (i.e.,~ the preferred polyethylene crystal structure under ambient conditions) such that orthorhombic nuclei form inside droplets exhibiting nascent hexagonal ordering and through their coalescence. They also observed nascent hexagonal structuring temporally preceding changes in the Voronoi volume (local density) of carbon atoms. Moreover, subsequent work by Tang et al.\cite{TangetalJCP2018} on shear-induced crystallization suggests that chain alignment and densification precede crystallization under shear conditions. For reference, Tang et al. assigned particles as crystalline using a metric based on spherical harmonics and similar in spirit to $q_{6}q_{6}^{*}$, so their observations could correspond to a scenario where the $q_{6}q_{6}^{*}$ and $\rho$ curves in Fig.~\ref{fig:relInterface}A are spatially (and thus temporally) distinct as a result of flow. While this study's coarse-grain simulations cannot be used to probe the interplay between hexagonal and orthorhombic structuring, the temporally staged nature of local ordering and densification is consistent with the spatial decoupling in Fig.~\ref{fig:relInterface}A. Computational work by Anwar et al.\cite{AnwarJCP2013,AnwarEtAlJCP2014} revealed that crystal nucleation in \textit{n-}C\textsubscript{20}H\textsubscript{42} and \textit{n-}C\textsubscript{150}H\textsubscript{302} melts proceeds temporally via: 1) development of nematic alignment, 2) densification, and 3) establishment of local orientational order (i.e.,~ what they call crystallinity and estimated using a metric based on the Steinhardt $q_{6m}$ metric). Anwar et al. may have observed a more exaggerated difference between density and local Steinhardt-based order compared to the current study due to differences between: 1) the $\bar{q}_{6}$ metric used in their study and the $q_{6}q_{6}^{*}$ metric used in the current study, and 2) their decision to monitor the time evolution of segment properties by tracking only those segments comprising a nucleus at a particular point in time (i.e.,~ they did not track the evolution of the segments comprising a nucleus as that nucleus evolved). It is also worth noting that the exact nature of the temporal separations observed in their work depends on system conditions (e.g.,~ flow vs. quiescent). However, in general, the temporal progressions observed in the work of Anwar et al.\cite{AnwarJCP2013,AnwarEtAlJCP2014} are similar to the spatial progression observed as the center of a nucleus is approached (Fig.~\ref{fig:relInterface}A). Consequently, there is a correspondence between the temporal separation of changes in segment properties during polymer crystallization, and their spatial decoupling at the nucleus-melt interface.
Based on analysis of a single snapshot, Luo and Sommer\cite{LuoSommerMacro2011} found that there is a thin $\sim$0.8-nm region around the edge of a growing poly(vinyl alcohol) (PVA) lamellar crystal where PVA chain segments exhibit enhanced alignment while their local densities are still comparatively low. Their results thus support a spatial decoupling between local ordering and densification, consistent with the results in Fig.~\ref{fig:relInterface}A. Luo and Sommer's comparatively low estimate of the extent of spatial variations in segment properties could be due to: 1) their analysis being based on a single snapshot, which would be susceptible to noise and have difficulty resolving the long tails in Fig.~\ref{fig:relInterface} and~\ref{fig:nematic-fold}, and 2) differences in the chemical nature of their PVA system and this study's \textit{n-}C\textsubscript{720}H\textsubscript{1442}~ systems. Kumar et al.\cite{KumarMacro2017} probed the interfacial structure at the boundary between crystalline and amorphous lamellar domains in polyethylene. In the case of linear polyethylene, their interfacial width estimates based on density were smaller than those based on local $P_2$ values, which aligns with the density curve in Fig.~\ref{fig:relInterface}A being to the left of the $P_2$ curve. Though previous computational studies have not quantitatively probed the interfacial structural details of nascent polymer nuclei, the results of this study are consistent with insights from previous work.
The spatial decoupling of variations in polymer properties during crystal nucleation under quiescent, non-flow conditions (as considered in this study) also provides a lens for understanding previous work on flow-induced polymer crystallization.\cite{LiuTianMacro2014,ZhaoMacro2013,BalzanoPRL2008,GutierrezEtAlMacro2004,WangEtAlMacro2000,RyanFarad1999,SamonMacro1999,TerrillEtAlPolymer1998} For reference, flow-induced crystallization often, though not always, results in shish-kebab morphologies in which long fibrils of aligned, crystalline polymer segments are decorated with thinner crystalline lamellar lobes in a manner reminiscent of skewered food, hence their name. Previous experimental work\cite{SamonMacro1999} on the melt spinning of polyethylene and poly(vinylidene fluoride) revealed that 2D small-angle x-ray scattering (SAXS) patterns show structuring prior to the development of crystalline reflections in the 2D wide-angle x-ray scattering (WAXS) patterns. These results were interpreted as indicating that defective shish-like structures form first, yielding SAXS-level structuring without crystalline WAXS patterns. In turn, the shish structures mature and the kebabs develop, leading to crystalline WAXS reflections. These results for flow-induced crystallization can also be interpreted as consistent with the spatial decoupling revealed in this study. For example, the defective shish precursor structures could correspond to large nematic droplets, such as the region of enhanced $S$ order in Fig.~\ref{fig:relInterface}A. As shown in Fig.~\ref{fig:relInterface}A and Fig.~\ref{fig:volume}, these nematic regions exhibit a large spatial extent well before their interior nuclei are fully crystalline (e.g.,~ in terms of $q_{6}q_{6}^{*}$), potentially supporting a lag between the experimental detection of different types of ordering, especially if their spatial separation were enhanced.
Many other studies\cite{LiuTianMacro2014,ZhaoMacro2013,BalzanoPRL2008,WangEtAlMacro2000,RyanFarad1999,TerrillEtAlPolymer1998} on polymer crystallization under a variety of conditions have also noted the development of structure in SAXS patterns prior to the detection of crystallinity by WAXS. Such results have been interpreted as evidence for the existence of: non-crystalline pre-ordered precursors,\cite{LiuTianMacro2014} conformationally distinct regions in metastable melts,\cite{RyanFarad1999} metastable precursors lacking crystallinity,\cite{BalzanoPRL2008} and ``large scale ordering prior to crystal growth.''\cite{RyanFarad1999} Similarly, based on spatial variations in SAXS and WAXS signals, Guti{\'{e}}rrez et al.\cite{GutierrezEtAlMacro2004} concluded that flow-induced crystallization involves the formation of quasi-ordered bundles of parallel polymer chains that serve as precursors to polymer crystallization. Consistent with the aforementioned experimental insights, our results for quiescent, non-flow nucleation show that segment alignment develops first as a polymer segment approaches a nucleus (or is consumed by the growing nucleus), and prior to changes in other properties such as density and $q_{6}q_{6}^{*}$. In essence, the decoupling of SAXS and WAXS signals can be viewed as a temporal manifestation of the spatial decoupling of segment properties at the nucleus-melt interface with crystallization conditions dictating the extent to which properties, and hence signals, are decoupled. Furthermore, based on the temporal separation of SAXS and WAXS patterns, Terrill et al.\cite{TerrillEtAlPolymer1998} concluded, ``The transformation from the disordered phase [parent melt] to the better ordered partially crystalline phase [resulting semi-crystalline state] proceeds continuously passing through a sequence of slightly more ordered states rather than building up a crystalline state instantaneously.'' Consistent with such a perspective, order develops gradually across the nucleus-melt interface (see Fig.~\ref{fig:relInterface}). Kanaya et al.\cite{KanayaMacro2013} experimentally observed that shearing polymer samples above their nominal melting temperatures can induce micron-scale shish-kebab precursors that are only 0.15\% crystalline. Still other experimental work\cite{SekiMacro2002} probing shear-induced crystallization in blends of long and short chains found that shear enhanced the formation of crystallization precursors, and connected the morphology of the precursors (i.e.,~ point-like vs. thread-like) to different types of flow-induced orientational order (e.g.,~ segmental vs. large-scale, long-chain orientation) as arising from different system conditions. These shear results also point to a decoupling of chain ordering and crystallization, and highlight that flow-related processes greatly enhance the nanoscale spatial decoupling present under quiescent non-flow conditions (Fig.~\ref{fig:relInterface}). Other work\cite{WangEtAlMacro2000} studying polymer crystallization with x-ray and polarized light (PL) scattering found that large domains possessing local orientational order precede crystallization, and it was proposed that domains of varying structure form in a temporally sequential, spatially-nested manner (e.g.,~ the PL-detectable domains appear first, and then the SAXS-detectable domains form afterwards in the PL-detectable domains).
Such behavior is in accord with the spatial decoupling in the interfacial region of this study's non-flow nuclei in which segment alignment precedes other changes and property transitions are spatially nested (Fig.~\ref{fig:relInterface} and Fig.~\ref{fig:volume}). In summary, the results presented in this study for crystal nucleation under quiescent, non-flow conditions are consistent with those from experimental flow studies.\cite{LiuTianMacro2014,KanayaMacro2013,ZhaoMacro2013,BalzanoPRL2008,GutierrezEtAlMacro2004,SekiMacro2002,WangEtAlMacro2000,RyanFarad1999,SamonMacro1999,TerrillEtAlPolymer1998} In this paradigm, observed phenomenology during the early stages of flow-induced polymer crystallization can be related to characteristic structural distinctions present in nuclei formed under quiescent, non-flow conditions. While there are many differences between crystallization under non-flow and flow conditions (e.g.,~ final morphologies), this work highlights one way in which there is conceptual continuity between the two.
Given that nematic ordering and chain alignment enhance crystallization\cite{ZhangLarsonJCP2019,HuEtAlMacro2002} and that a nucleus resides in a nematic droplet, it is reasonable to expect that the formation of one nucleus could induce the formation of other nuclei. To explore this possibility, we calculated the average probability of finding ordered polymer segments ($P_{2} \geq 0.85$) as a function of distance from the center of each nucleus corresponding to 240-360 carbon atoms while excluding the constituent polymer segments of that nucleus. The resulting spatial distribution thus corresponds to the average probability of finding polymer segments that are part of other nuclei, $P(other)$, in the vicinity of a given nucleus. As can be seen in Fig.~\ref{fig:2dfig}A, there is on average a region around each nucleus where there is an enhanced probability of finding ordered segments that are not part of the nucleus. In fact, it is $\sim$1 k\textsubscript{b}T more favourable for an ordered polymer segment to be in this band than in the surrounding
\begin{figure}[!b]
\centering
\includegraphics[width=1.0\columnwidth]{Figure6.jpg}
\caption{The presence of other nuclei. A) The cylindrical distribution of the average probability of finding ordered polymer segments that are not part of a nucleus in the vicinity of that nucleus. These segments are thus part of other nuclei, hence $P(other)$. Here, $z$ corresponds to the direction of the long axis of the nucleus while $r'$ is radial distance from the long axis of the nucleus. Note ($r',z$)=(0 nm, 0 nm) corresponds to the nucleus COM. B) The average relative free energy of ordered polymer chain segments not part of a nucleus at increasing distances from the nucleus (nucleus COM, $r$ = 0 nm). The profile has been estimated using $G=-k_{b}T\ln P(other)$, and then subtracting off the free energy minimum. C) Average radial profiles of nucleus-segment alignment for: 1) ordered polymer segments that are part of the nucleus with its COM at $r$=0 nm (black data), and 2) ordered polymer segments that are part of other nuclei (green data). Points are only shown for distances where the radial averages were well sampled (i.e.,~ $n_{sample} \geq 30$). The error bars in Panels B and C correspond to standard errors. The distribution and profiles in Panels A-C are for a nucleus corresponding to 240-360 carbon atoms residing at $r$=0 nm and ($r',z$)=(0 nm, 0 nm).}
\label{fig:2dfig}
\end{figure}
\noindent metastable melt (see Fig.~\ref{fig:2dfig}B). The non-constituent ordered segments in the vicinity of a nucleus are less aligned with the nematic director of the nucleus than those segments comprising the nucleus as shown in Fig.~\ref{fig:2dfig}C. Importantly, this difference in alignment behavior is statistically significant (paired-data single-tail t-test on the region where the curves overlap in Fig.~\ref{fig:2dfig}C, level of significance: 0.05, calculated p-value: $4.33\times10^{-7}$). Therefore, the non-constituent ordered segments in the vicinity of a nucleus do differ from those comprising the nucleus. Moreover, the enhancement band is well within the nematic droplet of the nucleus (compare Fig.~\ref{fig:2dfig}A-B with the $\sim$6 nm extent of the $S$ profile in Fig.~\ref{fig:relInterface}A), providing strong evidence that the nematic droplet of one nucleus can serve as the birthplace of other nuclei. Note that just because one nucleus can enhance the formation of other nuclei, it does not mean that the resulting nuclei will overcome the free energy barrier to crystallization, and become distinct, post-critical entities. For example, a nucleus might induce the formation of surrounding nuclei, and these induced nuclei might subsequently coalesce with the original nucleus as it expands, contributing to polymer crystal growth. In this vein, the two nuclei in Fig.~\ref{fig:twoCluster} are spatially close as highlighted by the 6-nm black sphere centered on the yellow nucleus, and are experiencing each other's alignment field based on the length scales in Fig.~\ref{fig:relInterface}A. As the simulation progresses, the red nucleus grows laterally and consumes the region of space around the yellow nucleus, eventually resulting in the purple crystallite in Fig.~\ref{fig:potEner}. This coalescence is likely facilitated by the alignment between the nuclei; the angle between the nematic directors of the two nuclei is 17.6\textdegree, which yields $S$=0.86 when substituted into Eq.~\ref{eq:S}. Nevertheless, the formation of one nucleus could initiate the formation of subsequent distinct nuclei (i.e.,~ crystalline clusters) through a cascade-like process, consuming the metastable melt and transforming it to a semi-crystalline solid state. Consistent with such a perspective, previous experimental work on shear-mediated polymer crystallization\cite{SekiMacro2002} has proposed that thread-like morphologies form through the attachment of long polymer chains to a nascent point-like nucleus, and that the dangling segments of the chains then become oriented in front of and behind the point-like nucleus, leading to the enhanced formation of nuclei ahead of and behind the nucleus (along the shear direction) and ultimately thread formation. Moreover, subsequent \textit{in silico} work\cite{NieEtAlMacro2018} on shear-induced nucleation supports such a mechanism. A number of \textit{in silico} studies on polymer crystallization have also observed the formation of multiple, distinct crystallites in a single system (e.g.,~ see ref.~\citenum{ZhaiEtAlNano,ZhaiEtAlMacro2019,MoyassariEtAlMacro2019,TangetalJCP2018,TangPRM2017,MorthomasEtAlPhysRevE2017,LuoPolymer2017,LuoEtAlMacro2016,LuoSommerPRL2014,YamamotoJCP2013,YamamotoJCP2008}). Nucleus-induced nucleus formation is likely an important feature of both flow-induced and quiescent polymer crystallization, and relevant to the morphology of semi-crystalline polymer samples.
\begin{figure}[!t]
\centering
\includegraphics[width=0.6\columnwidth]{Figure7.jpg}
\caption{Two nuclei from a snapshot at $\sim$1.8 $\mu$s in the simulation corresponding to the black curve in Fig.~\ref{fig:potEner}A. The black sphere has a radius of 6 nm, and is centered on the COM of the yellow nucleus. The red and yellow nuclei correspond to 2,703 and 597 carbon atoms, respectively.}
\label{fig:twoCluster}
\end{figure}
\section{Conclusions}
Through a detailed analysis of the structure of pre-critical nuclei, this study has demonstrated that the central regions of nascent nuclei remain far from crystalline in terms of density, potential energy, and $q_{6}q_{6}^{*}$, while they may be viewed as approaching the crystalline state in terms of chain alignment metrics (e.g.,~ $P_2$ and $S$). Consistent with these differences, the region of space over which polymer segments transition from melt-like to solid-like is metric dependent; there is a partial spatial decoupling of polymer properties. In particular, nucleus-segment alignment can be observed at distances up to $\sim$6 nm from a nucleus whereas variations in density only occur when polymer segments are much closer to the nucleus (i.e., within $\sim$3 nm of its COM). These results highlight that nuclei and their interfaces can extend over a substantial region of space even if they are pre-critical as considered in this study. The broad spatial extent of nucleus-segment alignment compared to other metrics suggests that nuclei reside in nematic-like droplets even under quiescent non-flow conditions. Moreover, the extent of the nematic droplet is connected to the folds associated with a nucleus. Through the lenses of spatial decoupling and nematic droplets, we connected our findings to observations from previous experimental and computational studies on polymer crystallization under flow and non-flow conditions, thereby conceptually linking two polymer crystallization regimes that are often viewed as distinct. In particular, the temporal separation of SAXS and WAXS structuring during flow-induced crystallization can be viewed as a manifestation of flow selectively enhancing spatial property decoupling already present in non-flow nuclei. While this study has probed quiescent polyethylene crystal nucleation, it is reasonable to expect that spatial property decoupling at the nucleus-melt interface and associated phenomena are general features of the crystallization of semi-crystalline polymers.
This study also revealed that the nematic droplet around a nucleus corresponds to a region of space where there is enhanced formation of ordered polymer chain segments that are not part of the nucleus. It was demonstrated that these surrounding ordered segments are statistically distinct from those comprising the nucleus, highlighting the possibility of one nucleus inducing the formation of other nuclei. Such nucleus-induced nucleus formation is consistent with conclusions from previous experimental work, and is relevant to the morphological development of semi-crystalline polymeric materials. The interfacial details revealed in this study provide a new lens for understanding structural and temporal evolution during the earliest stages of polymer crystallization.
\clearpage
\begin{acknowledgement}
This work was supported by the U.S. Army Research Laboratory through contracts W911NF-18-9-0269 and W911NF-16-2-0189. This study used the HPC facilities at Temple University, which were partially funded by the National Science Foundation through a major research instrumentation grant (grant number: 1625061). M.L.K. acknowledges the support of H.R.H. Sheikh Saud through a Sheikh Saqr Research Fellowship.
\end{acknowledgement}
\section{Introduction}
Since the time of their formation, galaxies have undergone a variety of transformations, from major mergers to slow accretion of dark matter and intergalactic gas \citep{Press1974, White1978, LeFevre2000, Dekel2009}. As old objects evolving in these galaxies, globular clusters and dwarf galaxies can potentially probe such changes and record some of their signatures \citep{Krauss2003, Kravtsov2005}. It is thus important to understand how the evolution of galaxies has impacted present-day stellar populations. Ideally, theoretical studies on this topic should account for the coupling between the internal dynamics of small stellar systems (clusters and dwarfs), and the effect of their galactic environment in cosmological context. However, because of the wide range of scales and of physical processes involved, considering both aspects simultaneously can be challenging.
One possible approach consists in focussing on low-density objects like dwarf galaxies, where star-star interactions can be neglected. By adopting a collisionless treatment of gravitation in numerical simulations, one can efficiently explore a wide parameter space. Such a methodology is widely used in studies of satellite galaxies and tidal streams \citep[e.g.][]{Penarrubia2005, Bovy2014, Bonaca2014, Kuepper2015}.
But for denser systems like star clusters, two-body relaxation plays a paramount role in setting the flux of stars escaping the cluster, and thus in the evolution of the cluster \citep{Fukushige2000}. Therefore, gravitation must be treated in a collisional fashion, which entails a significant numerical cost and precludes doing so over galactic and cosmological scales \citep[see also][]{Dehnen2014}. As a consequence, the large-scale influence is not (yet) considered self-consistently in star cluster $N$-body simulations. Up to now, several paths have been followed to implement the tidal effects from the galaxy on a cluster:
\begin{enumerate}
\item the cluster remains within a single galaxy for its entire lifetime \citep[e.g.][]{Baumgardt2003, Kuepper2010a, Hurley2012, Madrid2012, Vesperini2014, Webb2014}. The galactic potential is usually static, and often axisymmetric.
\item the cluster is accreted from a satellite galaxy onto a massive galaxy. The accretion event is often modelled by replacing one galactic potential with another, both static \citep{Miholics2014, Bianchini2015}.
\item the galactic potential is evolved using a separate galaxy simulation to allow for time-evolving, non-analytical potentials and include effects like galaxy mergers \citep{Renaud2013a} or even complex tidal histories in cosmological context \citep{Rieder2013}.
\end{enumerate}
Although the last approach provides a relatively high level of realism, it does not allow us to distinguish the relative roles of the numerous physical processes involved in the evolution of the clusters. In this paper, we focus on a specific aspect of the galactic tides, namely the secular, adiabatic cosmological growth of galaxies, and neglect other effects, at the expense of realism. The full story of the co-evolution of star clusters and their hosts would be told by considering a combination of this particular effect and the other mechanisms driving galaxy evolution, like minor and major mergers, and the accretion of gas. In that respect, cosmological simulations and merger trees would serve as a basis to establish the relative weight and frequency of these events, and to allow us to build the true tidal history of clusters, in future studies.
Here, we follow the evolution of clusters in a time-dependent galactic potential, and compare to that in static potentials, to quantify the role of secular galaxy evolution on the present-day properties of star clusters. Our method is two-fold: (\emph{i}) an exploration of the parameter space performed with very fast codes, but at the price of some simplifications, and (\emph{ii}) several much slower but more accurate $N$-body simulations using the relevant sets of parameters identified in the first step.
\section{Methodology}
\subsection{Time-evolving potential}
\label{sec:potential}
For simplicity and to limit degeneracy in the results, we only consider the halo component of the galaxy and opt for a self-similar growth. The galaxy does not experience any merger event but slowly and smoothly grows (in mass and radius) with time. Furthermore, the halo remains spherically symmetric and we neglect dynamical friction. We choose to make such simplifications in order to focus on the role of a single physical process (namely the adiabatic cosmological growth).
We use the analytical description of a growing \citet*{Navarro1997} halo proposed by \citet{Buist2014}, who fitted the evolution of halo parameters using the Aquarius simulation \citep{Springel2008}. Namely, the mass-scale and scale-length of the halo evolve with redshift $z$ as
\begin{equation}
\left\{\begin{array}{lcl}
M_s(z) & = & M_{s,0} \exp{\left( -0.2 z \right)}\\
R_s(z) & = & R_{s,0} \exp{\left( -0.1 z \right)}.
\end{array} \right.
\label{eqn:scale}
\end{equation}
We have adopted the values of 0.1 for the mass growth parameter ($a_g$) and of 2.0 for the $\gamma$ parameter in \citet{Buist2014}, their equations 22 and 23. This provides an evolution comparable to that of the Milky Way-like halo labelled Aq-E-4 in \citet[their Fig. 4]{Buist2014}. The effect of changing these parameters is discussed in \sect{discussion}. The galactic potential, as a function of radius $r$ and redshift $z$ is then
\begin{equation}
\phi_\textrm{G}(r,z) = -\frac{GM_s(z)}{r}\ln{\left( 1 + \frac{r}{R_s(z)} \right)}
\label{eqn:potential}
\end{equation}
(where $G$ is the universal constant of gravitation), and corresponds to the density profile:
\begin{equation}
\rho_\textrm{G}(r,z) = \frac{M_s(z)}{4\pi r \left[r + R_s(z)\right]^2}.
\end{equation}
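As an illustration, Equations~(\ref{eqn:scale}) and (\ref{eqn:potential}) translate directly into a few lines of code. The following Python sketch is independent of the \texttt{NBODY6tt}\xspace implementation used for the actual runs, and adopts the present-day values quoted below:
\begin{verbatim}
import numpy as np

G = 4.301e-6              # kpc (km/s)^2 / Msun
MS0, RS0 = 1.5e11, 16.0   # present-day M_s [Msun] and R_s [kpc]

def halo_scales(z):
    # Mass-scale and scale-length of the halo at redshift z.
    return MS0 * np.exp(-0.2 * z), RS0 * np.exp(-0.1 * z)

def potential(r, z):
    # Galactic potential phi_G(r, z) in (km/s)^2, for r in kpc.
    ms, rs = halo_scales(z)
    return -G * ms / r * np.log(1.0 + r / rs)
\end{verbatim}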
To evaluate the parameters of the potential during the simulation of the cluster, we convert the time $t$ since the beginning of the simulation (i.e. the age of the cluster) into redshift using
\begin{equation}
1+z = \left(\frac{1-\Omega_m}{\Omega_m}\right)^{1/3} \left\{ \sinh{\left[ \left(t + t_0\right) \frac{3 H_0 \sqrt{1-\Omega_m}}{2} \right]} \right\}^{-2/3},
\label{eqn:redshift}
\end{equation}
(which follows from equation 13.20 of \citealt{Peebles1993}). $t_0$ represents the age of the Universe when the simulation is started. We adopt the values of the cosmological parameters $\Omega_m = 0.31$ and $H_0 = 68 \U{km\ s^{-1}}\U{Mpc}^{-1}$ \citep{Planck2014}, such that the age of the Universe at $z=0$ equals $13.7 \U{Gyr}$.
We start our study at redshift 5, which leads to $t_0 \approx 1169 \U{Myr}$, and we consider the evolution of clusters over $12.56 \U{Gyr}$ (corresponding to the time difference between $z=5$ and $z=0$). We choose the present-day ($z=0$) values of the mass-scale and scale-length to be $M_{s,0} = 1.5 \times 10^{11} \msun$ and $R_{s,0} = 16 \U{kpc}$. \fig{galaxy_scales} shows the evolution of the mass-scale ($M_s$) and scale-length ($R_s$), both normalised to their values at $z=0$.
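The conversion of Equation~(\ref{eqn:redshift}) then reads, with the values above (a sketch in the same conventions; as a consistency check, it returns $z \approx 5$ at $t=0$ and $z \approx 0$ at $t = 12.56 \U{Gyr}$):
\begin{verbatim}
import numpy as np

OMEGA_M = 0.31
H0 = 68.0 * 1.02271e-6   # 68 km/s/Mpc expressed in Myr^-1
T0 = 1169.0              # age of the Universe at z = 5 [Myr]

def redshift(t):
    # Redshift at time t [Myr] after the start of the simulation.
    x = np.sinh((t + T0) * 1.5 * H0 * np.sqrt(1.0 - OMEGA_M))
    return ((1.0 - OMEGA_M) / OMEGA_M)**(1.0 / 3.0) \
        * x**(-2.0 / 3.0) - 1.0
\end{verbatim}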
\subsection{Galaxies}
\label{sec:galaxies}
\begin{figure}
\includegraphics{galaxy_scales.eps}
\caption{Evolution of the scale parameters of the galaxy, normalised to their value at $z=0$ (i.e. for $t\approx 12.6 \U{Gyr}$). $t=0$ corresponds to the start of the simulations (i.e. $z=5$).}
\label{fig:galaxy_scales}
\end{figure}
Our study of the effects of time-varying potentials on the evolution of star clusters comprises three types of tidal histories:
\begin{itemize}
\item S5 (static, $z=5$): the cluster evolves in a static version of the potential of Equation~(\ref{eqn:potential}), with properties corresponding to $z=5$. Equation~(\ref{eqn:scale}) provides the corresponding galactic mass scale and scale radius: $\approx 5.5 \times 10^{10} \msun$ and $\approx 9.7 \U{kpc}$.
\item TD (time dependent): the initial setup is strictly the same as for S5, but the potential evolves with redshift from $z=5$ to $z=0$, as described in the previous Section.
\item S0 (static, $z=0$): the cluster reaches the exact same \emph{final} orbital position and velocity as in TD, but has evolved in a static potential with properties for $z=0$. In practice, we compute the final position and velocity of the cluster from TD, when $z=0$ has been reached. We then ``freeze'' the potential, and perform a backward integration of the cluster orbit. This gives us the initial position and velocity of the cluster for the S0 run.
\end{itemize}
Therefore, by setting one initial position and velocity for a cluster in the S5 galaxy, we uniquely define three orbits and the associated three tidal histories: S5, TD and S0.
For simplicity, we always initially set the cluster at apocenter, with a purely tangential velocity, in our S5 and TD cases. Orbit integration is performed using the \texttt{NBODY6tt}\xspace method \citep{Renaud2015b}, either to integrate the motion of the cluster (Section~\ref{sec:paramspace}), or in the full $N$-body context where the cluster is described star-by-star (Section~\ref{sec:nbody}).
\subsection{Star cluster fiducial initial conditions}
\label{sec:ic}
We considered several intrinsic and orbital initial conditions for the clusters. Unless otherwise mentioned, our clusters are modeled with 32768 stars distributed on a Plummer sphere with a virial radius of $3 \U{pc}$ (i.e. a half-mass radius of $\approx 2.3 \U{pc}$). The masses of the stars follow a \citet{Kroupa2001b} initial mass function, from 0.1 to $1 \msun$, leading to a total initial mass of $\approx 1.03 \times 10^4 \msun$. We do not account for stellar evolution. We explore other cases by varying these parameters in the next Sections.
\section{Parameter space exploration}
\label{sec:paramspace}
We aim to determine whether, under which conditions, and to what extent the evolution of a cluster in a cosmologically growing potential (TD) differs from that of the same cluster in a static $z=0$ version of this potential (S0). This difference depends on (\emph{i}) the strength of the tidal field and (\emph{ii}) how sensitive the cluster is to tides.
\subsection{Quantifying the tidal field}
We first seek an estimate of the galactic disruptive effect on star clusters. Such effect comprises contributions of gravitational (inertial) and orbital origin (non-inertial). For non-circular orbits, non-inertial effects, like the centrifugal force, do not yield an analytical expression. However, we can estimate their contributions by considering that the orbit is instantaneously circular, i.e. by neglecting the Coriolis and Euler effects, and by computing the centrifugal effect using the instantaneous orbital angular frequency. In this idealised framework, \citet{Renaud2011} provides the expression of the effective tidal tensor, which encompasses both the gravitational and centrifugal effects. Using the expression of the galactic potential (Equation~\ref{eqn:potential}), the main eigenvalue of the effective\footnote{The main eigenvalue of the purely gravitational tidal tensor (i.e. neglecting the centrifugal effect) is:
\begin{equation}
\lambda(r, z) = \frac{G M_s(z)}{r^3} \left\{ 2 \ln{\left(1+\frac{r}{R_s(z)}\right)} - \frac{3 r^2 + 2 r R_s(z)}{\left[r+R_s(z)\right]^2} \right\}.\nonumber
\end{equation}} tidal tensor reads
\begin{equation}
\lambda_\textrm{e}(r, z) = \frac{G M_s(z)}{r^3} \left\{ 3 \ln{\left(1+\frac{r}{R_s(z)}\right)} - \frac{4 r^2 + 3 r R_s(z)}{\left[r+R_s(z)\right]^2} \right\}.
\label{eqn:lambda}
\end{equation}
This quantity represents the galactic effect along the galaxy-cluster axis, and can be used to estimate the tidal radius\footnote{In the textbook context of circular orbits around point-masses, using the effective eigenvalue leads to the definition of the tidal radius of \citet{King1962} or \citet{Binney2008}, while the purely gravitational eigenvalue corresponds to the definition of \citet{Spitzer1987}.} \citep[see][for details]{Renaud2011}.
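For illustration, the effective eigenvalue of Equation~(\ref{eqn:lambda}) is straightforward to evaluate numerically. The following minimal Python sketch (not part of our pipeline) assumes masses in $\msun$, distances in kpc and velocities in $\U{km\ s^{-1}}$:
\begin{verbatim}
import numpy as np

G = 4.302e-6  # gravitational constant in kpc (km/s)^2 / Msun

def lambda_eff(r, m_s, r_s):
    # Main eigenvalue of the effective tidal tensor (equation above)
    # at galactocentric radius r, for an NFW halo (scale mass m_s,
    # scale radius r_s at the redshift of interest).
    return (G * m_s / r**3) * (3.0 * np.log(1.0 + r / r_s)
            - (4.0 * r**2 + 3.0 * r * r_s) / (r + r_s)**2)
\end{verbatim}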
\begin{figure*}
\includegraphics{orbit.eps}
\caption{Top: galactocentric radius for the three orbits (S5 in red, TD in black and S0 in blue). For the S5 and TD runs, the cluster is initially set $20 \U{kpc}$ from the galactic center, with a purely tangential velocity of $50 \U{km\ s^{-1}}$. The initial conditions for the S0 runs are constructed as explained in \sect{galaxies}. The dotted lines indicate the corresponding scale radius ($R_s$) of the galactic potential. Middle: galactic mass enclosed within the instantaneous orbital radius. Bottom: main eigenvalue of the effective tidal tensor, under the approximation that the orbit is instantaneously circular (see text, Equation~\ref{eqn:lambda}). The dashed lines indicate the time-average of the eigenvalue along the S5 and S0 orbits.}
\label{fig:orbit}
\end{figure*}
We show an example of evolution of the effective eigenvalue in \fig{orbit}, together with the galactocentric radius and the galactic mass enclosed in this radius, for the three orbits (TD, S5 and S0). As constructed, the TD case evolves from the S5 to the S0 setup. On average, despite a smaller orbital radius, the mass enclosed within the galactocentric radius in S0 is larger than that in S5, because the S0 galaxy has a higher total mass. We estimate the mean tidal strength over an orbit by computing the time-average $\left<\lambda_\textrm{e}\right>$ of the main effective eigenvalue over an orbital period (dashed lines on \fig{orbit}). This quantity is constant in static potentials. The larger enclosed mass and smaller galactocentric distance result in a stronger tidal effect along the S0 orbit than in the S5 case.
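The time-average can be sketched as follows (reusing \texttt{lambda\_eff} and the conventions of the previous sketch; the trapezoidal rule over a tabulated orbit is an illustrative choice, not necessarily the exact procedure used in the analysis):
\begin{verbatim}
def mean_lambda_eff(t, r, m_s, r_s):
    # Time-average of lambda_e over a tabulated orbit r(t)
    # spanning one orbital period.
    t = np.asarray(t)
    lam = lambda_eff(np.asarray(r), m_s, r_s)
    return np.trapz(lam, t) / (t[-1] - t[0])
\end{verbatim}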
The secular cosmological galactic growth can only affect the evolution of star clusters if the tidal effects in the S5 and S0 case are significantly different, or in other words, if the ratio
\begin{equation}
\frac{\left<\lambda_\textrm{e,S0}\right>}{\left<\lambda_\textrm{e,S5}\right>}
\end{equation}
is large with respect to unity. This ratio depends on the initial position and velocity of the cluster in a non-trivial way. By varying the initial galactocentric radius and orbital eccentricity of the cluster for the S5 orbit and integrating it numerically, we obtain the map of the ratio of tidal strengths shown in \fig{eigen}. (Recall from \sect{galaxies} that setting the initial position and velocity for S5 uniquely defines the initial position and velocity for S0.)
\begin{figure}
\includegraphics{eigen.eps}
\caption{Relative strength of the tidal field between the S0 and S5 cases, as a function of initial galactocentric radius ($r_\textrm{S5}(t=0)$) and orbital eccentricity in the S5 case (see text for details). The difference between tidal strengths is maximal for circular orbits in the outer region of the halo.}
\label{fig:eigen}
\end{figure}
The largest differences between the tidal fields of the redshift 5 and 0 galaxies are found in the outer regions of the galactic halo, and for circular orbits. Since secular galactic growth most affects the outer regions of galactic halos, the largest differences in tides are found along orbits that remain in such regions for the largest fraction of their period.
However, at large galactocentric distance, tidal forces are weak and only have a mild impact on the evolution of clusters. The differences found between the S5 and S0 cases might thus not translate into differences in the properties of the clusters.
To summarise, we have identified the orbits favouring the largest differences in tides between high and low redshift, but the resistance of clusters to tidal harassment must also be considered before concluding on the effect of secular galactic growth on star clusters.
\subsection{Cluster sensitivity to tidal harassment}
We have seen in the previous Section that the average strength of the tidal field experienced by a cluster could increase by a factor of a few because of the secular cosmological growth of the galaxy. We focus here on star cluster responses to such differences.
One of the most accurate ways to study star cluster evolution is through $N$-body simulations. These are however numerically very costly, and such an approach forbids the exploration of a wide parameter space. To first order, however, cluster evolution is governed by a handful of coupled differential equations \citep[see e.g.][]{Ambartsumian1938, Chandrasekhar1942, King1958, Henon1961, Heggie1975, Hut1992, Lee1987, Gieles2011b}. The code \texttt{EMACSS}\xspace \citep{Alexander2012,Gieles2014,Alexander2014} solves these equations numerically and provides an easy and very fast way to evaluate the properties of the cluster along its evolution.
We first set up our fiducial cluster (\sect{ic}) in \texttt{EMACSS}\xspace, using the tidal field strengths computed in the previous Section. To model the tidal effect, we determine the mass of the point-mass galaxy that would lead to the same average effective eigenvalue $\left<\lambda_\textrm{e}\right>$ as our NFW halos, at a given orbital radius. Doing this allows us to set the same tidal radii in \texttt{EMACSS}\xspace as the time-averaged ones measured in the previous Section. However, we neglect the differences between the shapes of the potentials \citep[see][for a discussion on that matter]{Tanikawa2010}. This assumption will be validated later.
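A minimal sketch of this matching (reusing $G$ from the sketches above), assuming the textbook result that the main effective eigenvalue of a point mass $M$ on a circular orbit of radius $r$ is $3GM/r^3$:
\begin{verbatim}
def equivalent_point_mass(mean_lambda, r):
    # Mass of the point-mass galaxy with the same effective eigenvalue
    # (3 G M / r^3 on a circular orbit) as the time-averaged NFW value.
    return mean_lambda * r**3 / (3.0 * G)
\end{verbatim}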
\begin{figure}
\includegraphics{finalmass.eps}
\caption{Ratio of the final cluster mass (computed with \texttt{EMACSS}\xspace, see text) when evolved in the S0 and S5 potentials, as a function of initial galactocentric radius and orbital eccentricity in the S5 case. The cluster initial parameters are those of the fiducial case. Note that \texttt{EMACSS}\xspace is not designed to treat orbits with a high Roche filling factor, and thus the cases within the central $5 \U{kpc}$ are not considered here.}
\label{fig:finalmass}
\end{figure}
\fig{finalmass} shows the ratio between the final cluster mass when embedded in the S0 and S5 tides. Despite the differences in tidal strengths found in \fig{eigen}, the differences in the final mass of the cluster remain below 10 percent in all cases. In other words, the final mass of our fiducial cluster is almost the same in S5 and S0 for all initial orbital radii and eccentricities considered.
\begin{figure}
\includegraphics{filling.eps}
\caption{Top: ratio of the final cluster mass when evolved in the S0 and S5 potentials, as a function of initial orbital radius (with an eccentricity of 0.3) and initial half-mass radius. The black symbols correspond to the $N$-body simulations described in \sect{nbody}. Bottom: same but for the final half-mass radius. The horizontal ridge visible in this plot is an effect of clusters reaching the core-collapse phase shortly before the end of the simulation ($\approx 12.6 \U{Gyr}$). Clusters with a long relaxation time (large half-mass radius, above the ridge) are still in the unbalanced pre-core-collapse phase, with a decreasing half-mass radius, when the \texttt{EMACSS}\xspace simulations are stopped \citep[see][for details]{Gieles2014}.}
\label{fig:filling}
\end{figure}
To extend this conclusion, we apply the same method to clusters with different sensitivities to tides by varying the initial half-mass radius and thus, indirectly, the Roche-filling factor (the ratio between the cluster half-mass radius and the tidal radius). We plot in \fig{filling} the ratios of the final masses (as in \fig{finalmass}) and final half-mass radii. We arbitrarily choose the tidal field strength of orbits with an eccentricity of 0.3, but reach the same conclusion for other values, as already suggested by the insensitivity to eccentricity shown in \fig{finalmass}. In all cases explored, the properties of the cluster (mass, size) along the S5 orbit lie within less than a few percent of those along the S0 orbit.
Because the TD case represents a slow and smooth transition between S5 and S0, the differences between these two cases can be seen as an upper limit on the expected differences between the time-dependent (TD) and static (S0) cases. Since this upper limit is very small, we expect clusters with a time-dependent tidal history to share very similar properties with those in static potentials. In other words, cluster evolution should be fairly independent of the evolution of its host galaxy (for the secular growth considered here).
\section{$N$-body simulations}
\label{sec:nbody}
When performing the exploration of the parameter space described in the previous Section, we made the assumption that the time-averaged tidal strength can be used to infer the galactic influence on star clusters. While this is perfectly valid for circular orbits, it is not for eccentric cases \citep{Baumgardt2003, Webb2014}. An analytical description of tidal effects for non-circular orbits is yet to be established. Furthermore, we have neglected the second-order effect of the detailed shape of the tidal field when assuming the tidal radius is a good representation of the tidal strength.
To test these assumptions and validate our conclusions, we select a few cases from our parameter space study, and perform full $N$-body simulations. We use the method and implementation \texttt{NBODY6tt}\xspace presented in \citet{Renaud2015b}, based on \texttt{NBODY6}\xspace \citep{Aarseth2003, Nitadori2012}. The method relies on a description of the galactic potential (through a numerical routine) as a function of position and time. Using this definition, the code integrates the motion of the cluster around the galaxy, and adds the tidal acceleration to the internal acceleration of all stars. We have defined the galactic potentials and their time-dependence exactly as described in \sect{potential} and performed several sets of S5, TD and S0 runs.
\subsection{Cluster structure}
\begin{figure}
\includegraphics{fiducial.eps}
\caption{Evolution of the mass (top) and the Lagrange radii (10, 50 and 90 percent of the mass) of our fiducial cluster along the orbits shown in \fig{orbit}, computed with \texttt{NBODY6tt}\xspace. Only stars with a negative energy (kinetic + potential from other cluster members) are considered here, as in \citet{Renaud2015b}.}
\label{fig:fiducial}
\end{figure}
\fig{fiducial} shows the evolution of the mass and some Lagrange radii of our fiducial cluster (\sect{ic}) set at an initial position of $20 \U{kpc}$ with an orbital eccentricity of 0.3 (as in \fig{orbit}). As expected from the parameter space exploration presented above (black dot in \fig{filling}), the relative difference in final mass (respectively half-mass radius) between the S5 and S0 runs is very small ($\approx 6.1$ percent, respectively $\sim 4$ percent). The excellent agreement between the \texttt{NBODY6tt}\xspace results and \texttt{EMACSS}\xspace validates our assumptions, at least in this region of the parameter space. We have also tested this agreement by considering other eccentricities (0, 0.5 and 0.7, not shown here) and reached the same conclusions. As foreseen in the previous Section, the difference between the S0 and TD runs is even smaller ($\approx 2.6$ percent for the mass and $\sim 3$ percent for the half-mass radius) than that between the S5 and S0 cases.
We also run other \texttt{NBODY6tt}\xspace models, corresponding to the black triangles on \fig{filling} and reach the same conclusions, both on the validity of our method, and on the physical results obtained.
\subsection{Tidal tails}
Tidal ejection of stars from clusters leads to the formation of tidal tails. Although we have found that the mass-loss rate of clusters appears to be independent of tidal history (in the context of secular growth), tidal tails could respond differently to a time-dependent and a static galactic potential.
\begin{figure}
\includegraphics{tails.eps}
\caption{Left: Position of the stars at redshift 0 for the TD (black) and S0 (blue) runs. The plus sign indicates the position of the center of the galactic halo. Right: Isodensity contours of the clusters and the tidal tails. The dashed line represents the orbit of the cluster. The two models overlap almost perfectly.}
\label{fig:tails}
\end{figure}
\begin{figure}
\includegraphics{tails2.eps}
\caption{Distribution of mean galactocentric distance of stars (top) and of the mass (bottom), as a function of azimuth in the galactic reference frame, for clusters having evolved $\approx 12.6 \U{Gyr}$ in the TD and S0 potentials. (The cluster are centered on the azimuth $0^{\circ}$.) The dashed line represents the orbit of the cluster.}
\label{fig:tails2}
\end{figure}
\fig{tails} shows the position of all the stars (cluster and tails), and \fig{tails2} displays the distribution of mean galactocentric distance and the mass, in galactic azimuth bins (of width $1^{\circ}$), for the clusters having experienced the TD and S0 tidal histories. Differences between the models appear at rather large distance from the cluster center. The differences in mass are of the order of a few solar masses, i.e. concern only a handful of stars.
Despite different tidal histories over the $12.6 \U{Gyr}$ of evolution we considered, the final position, density, length and distribution of substructures in the tidal tails of the TD and S0 models are almost indistinguishable. In other words, clusters and their tidal debris do not retain signatures of the tidal history they experienced.
\section{Discussion and conclusions}
\label{sec:discussion}
We study the evolution of star clusters embedded in cosmologically growing galaxies, and focus on the secular, adiabatic growth of dark matter halos, neglecting impulsive and transient events like galaxy mergers. Our main findings are:
\begin{itemize}
\item Although the typical tidal fields associated with high and low redshift halos can vary by a factor of several, these variations do not translate into major differences in the properties of star clusters embedded in those fields.
\item Because of these similarities, the details on the secular evolution of the halo (growth rate, growth epoch) have a negligible effect on the present-day properties of the clusters.
\item Present-day star clusters that co-evolved with their host galaxy yield the same properties as if they had evolved in a static halo.
\end{itemize}
Clusters orbiting in the innermost regions of galaxies are affected by the tidal effect of the baryonic components that we have neglected. Among other effects, the details of the formation of thin and thick discs could modify the role of disc shocking in the evolution of clusters \citep{Spitzer1958} and alter our conclusions. We still understand too little about galaxy and structure formation to reach a conclusion on this matter.
Furthermore, we only considered spherically symmetric halos, i.e. we neglected (time-dependent) anisotropy and substructures. Although we expect these aspects to rarely be of significant importance for star clusters, \citet{Bonaca2014} showed that they could alter the morphology of tidal debris.
Our conclusions depend on the growth history of the galaxy, mainly when and how fast the bulk of the growth takes place. Cosmological simulations indicate that most of the adiabatic growth happens at high redshift (recall \fig{galaxy_scales}, see \citealt{Buist2014}), which has a very mild effect on star clusters, as we have shown. A galaxy can also experience a significant growth in the form of a violent event. If so, the growth enters the impulsive regime and can be seen as a galaxy merger. The response of the clusters to such an event is non-negligible but complex, as shown in \citet{Renaud2013a}. The real evolution of a galaxy and of its clusters lies in between these two extremes. Our study demonstrates that the main impacts of galactic tides on star clusters in the cosmological context come from the impulsive events, and not from the adiabatic growth.
In the context of the Milky-Way, the absence of evidence for recent major mergers (in the last $\sim 6-9 \U{Gyr}$, \citealt{Deason2013}, and/or since the formation of the disc \citealt{Ruchti2014}) makes our results directly applicable to clusters formed \emph{in-situ}. Studies of such clusters can thus safely make the assumption that the galactic potential has been static over several Gyr, without biasing the conclusions on the mass- and size-functions of the clusters. Studies that consider globular cluster population evolution in a static Milky Way potential showed that it is not possible, within a Hubble time, to evolve an initial cluster mass function with a power-law shape and an index of -2 (as found for young massive cluster today, \citealt{Portegies2010}), into a mass function that is peaked at a universal mass of $\sim 2 \times 10^5 \msun$ with secular dynamical evolution \citep{Baumgardt1998, Vesperini2001, Gieles2008b}. Our results support this conclusion. Note however that this is not valid when considering the non-negligible fraction of clusters of external origin, which have been accreted from dwarf satellite galaxies onto the Milky Way \citep{Marin-Franch2009, Leaman2013}.
The diversity of scenarios of galaxy evolution seen in cosmological simulations implies that present-day star clusters have experienced a wide variety of tidal histories. By decomposing these scenarios into individual processes and events like major mergers \citep{Renaud2013a}, accretion of dwarf satellites \citep{Bianchini2015}, secular growth (this work) and others, and by understanding their relative roles in the evolution of clusters, we will soon be able to seek and identify specific imprints of galaxy evolution on the observed properties of star clusters.
\section*{Acknowledgements}
We thank Hans Buist for his help in choosing the halo parameters, and the reviewer for a prompt and constructive report. We acknowledge support from the European Research Council through grant ERC-StG-335936 (CLUSTERS). MG acknowledges financial support from the Royal Society in the form of a University Research Fellowship and an equipment grant that was used to purchase the GPU machines used for the $N$-body computations.
\bibliographystyle{mn2e}
\section{Introduction}
This paper, the second in a series, presents the isophotal analysis for the
optical images of the Carnegie-Irvine Galaxy Survey (CGS), a detailed study
of a statistically complete sample of nearby, bright galaxies in the southern
sky (Ho et al. 2011, hereinafter Paper~I). The immediate aim of this paper
is to reduce our extensive set of images to a uniform database of
one-dimensional (1-D) radial profiles of surface brightness and geometric
parameters, on which much of our subsequent scientific analysis will depend.
Although we intend to apply more sophisticated methods of analysis to the
images (Peng et al. 2010; S. Huang et al. 2011, in preparation), the 1-D analysis
already contains a wealth of useful information that can be exploited for
science. Moreover, 1-D analysis has the virtue of simplicity. It can be
efficiently applied to a large sample of objects, allowing a quick overview of
the global properties of the survey.
The brightness profiles of galaxies have long helped to guide our
understanding of their physical nature. Despite the visual complexity of
their images, the 1-D radial brightness profiles of galaxies in the nearby
universe actually show a surprising degree of order. De~Vaucouleurs (1948)
first noticed that the light distributions of elliptical galaxies generally
follow a $r^{1/4}$ profile, which has been interpreted as a signature of
dissipationless formation processes \citep{vanalbada82, katz91}.
Later studies, beginning with \citet{caon93}, increasingly recognized
that many ellipticals, in fact, do not strictly follow the $r^{1/4}$ law, but
instead are better described by the more general $r^{1/n}$ function of
\citet{sers68}, of which de~Vaucouleurs' law is a special case ($n=4$).
Indeed, the S\'{e}rsic function has since been generally adopted as the
standard formula for fitting the brightness profiles of ellipticals
\citep[e.g.,][]{grah96, truj01, korm09}.
Our modern view of bulges has also grown steadily more complex over time.
Once thought to be ``mini-ellipticals'' with $r^{1/4}$ profiles, bulges, too,
are now known to be better described by a S\'ersic $r^{1/n}$ function
\citep{ansa94, andr95, dejo96, cour96, maca03}. The S\'ersic indices of
bulges have a broad distribution of observed values, from $n < 1$ to $n > 4$
\citep[e.g.,][]{maca03, fidr08, gadotti08}, and it is argued that they reflect
different formation physics. Spheroids with $n$ {$\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$}\ 2 are regarded as
pseudobulges \citep{fidr08}, which formed through internal, secular
processes, while those with $n$ {$\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$}\ 2 are classical bulges, which, like the
ellipticals, were assembled more rapidly, most likely with the assistance of
mergers \citep{koke04}.
The brightness profiles of the disks of S0 and spiral galaxies have been
traditionally described by a single exponential function \citep{deva59,
free70}, which arises as a natural consequence of viscous transport in a disk
\citep{yoso89, zhwy00, fecl01, slyz02}, perhaps mediated by star formation and
feedback processes \citep{robe04, gove07}. In actuality, very few disks are so
simple. Many possess breaks and inflections in their outer radial profile
\citep{vand79, vase81, maga97, pohl00, degr01, erwi05, erbp05, potr06}. No
general consensus yet exists as to their cause, but they offer
\begin{figure*}[t]
\centerline{\psfig{file=fig1.eps,width=17.5cm,angle=0}}
\figcaption[fig1.ps]{
Illustration of how we determine the sky radius for the $B$-band image of
NGC~1400. Left: star-cleaned image, showing the full field-of-view of
8\farcm9$\times$8\farcm9. Right: radial profile of the isophotal
intensity, in units of ADU~pixel$^{-1}$. The vertical dash-dotted line marks
the radius where the isophotal intensities start to oscillate rather than
continue to decrease; this radius, designated by the keyword {\tt SKY\_RAD} in
the image header, denotes the region outside of which the sky dominates. The
corresponding isophotal ellipse is overplotted in the left-hand panel.
\label{figure:skyrad}}
\end{figure*}
\noindent
important clues
to a host of physical processes pertinent to galaxy formation
\citep{vanderkruitfreeman11}.
Apart from intensity profiles, isophotal analysis of galaxy images yields
other useful diagnostics. The radial variation of the ellipticity and
position angle, for example, provides an efficient means to identify bars and
to quantify their length and strength \citep[e.g.,][]{lain02, erwi05, mene07}.
Fourier decomposition of the isophotes provides yet another method to probe
non-axisymmetric perturbations in the light distribution.
The relative amplitude of the $m=2$ mode, in combination with its phase angle,
has been shown to be effective in isolating bars \citep{elmegreen85, buta86,
ohta90} and spirals \citep{elmegreen89, rixzar95, odewahn02}. Both of these
features are common constituents in disk galaxies, and both are thought to
play a dynamical role in facilitating angular momentum transport and driving
secular evolution. Likewise, a significant fraction of disk galaxies exhibits
global lopsidedness in their stellar light distribution \citep{rixzar95,
zari97, bour05, reic08}, whose main culprit remains in dispute
\citep{joco08}. As shown by \citet{rixzar95}, this type of non-axisymmetric
perturbation is again conveniently revealed through Fourier analysis of the
isophotes, in this case through the $m=1$ mode.
This paper is organized as follows. Section~2 gives a brief overview of the
CGS sample, the observations, and some basic characteristics of the images.
Section~3 describes our method of sky subtraction. The procedural details of
isophotal analysis are presented in Section~4, including our method for
extracting geometric parameters, surface brightness profiles, and Fourier
components. We generate composite light distributions (Section~5) to identify
statistical trends in disk profiles, and assemble integrated colors and
color gradients (Section~6). The products from the isophotal and Fourier
analysis are used to quantify the strengths of bars (Section~7), spiral arms
(Section~8), and lopsidedness (Section~9). Section~10 assesses the reliability
of our measurements using internal and external tests. Section~11 gives a
brief summary and an outline of future plans. The database of isophotal
parameters is described in the Appendix.
\section{Sample Properties}
The CGS covers a statistically complete sample of 605 bright, nearby galaxies
of all morphological types in the southern hemisphere, with $B$-band total
magnitude $B_{T} \leq 12.9$ and $\delta < 0^{\circ}$. These very general
selection criteria enable us to probe galaxies with a broad range of physical
properties and morphologies. The primary parent sample\footnote{As described
in Paper~I, we observed an additional 11 galaxies that do not formally meet
the selection criteria of CGS. We still analyze them here but will not use
their results to draw statistical inferences on the sample.} comprises 17\%
ellipticals, 18\% S0 and S0/a, 64\% spirals, and 1\% irregulars. The bulk of
the sample is relatively nearby (median $D_L$ = 24.9 Mpc), luminous (median
$M_{B_T} = -20.2$ mag), and well resolved. The typical seeing of CGS is
$\sim 1$$^{\prime\prime}$, and the sample has a median isophotal angular diameter of
$D_{25}$ = 3\farcm3 at a surface brightness level of $\mu_B=25$ mag
arcsec$^{-2}$.
Paper~I describes the observing strategy, data reductions, and photometric
calibration of the optical imaging component of the project. We only repeat
a few essential details here. The broadband \emph{BVRI} images have a
field-of-view of 8\farcm9$\times$8\farcm9 and a pixel scale of 0\farcs259,
which is well matched to the good seeing typically achieved with the
du~Pont 2.5-m telescope at Las Campanas Observatory. The median seeing
of the survey, as determined from over 6000 science images, is 1\farcs17,
1\farcs11, 1\farcs01, and 0\farcs96 in the $B$, $V$, $R$, and $I$ band,
respectively. A little more than half of the galaxies were observed under
photometric conditions, with median photometric errors of 0.08, 0.04, 0.03,
and
\begin{figure*}[t]
\centerline{\psfig{file=fig2.eps,width=17.5cm,angle=0}}
\figcaption[fig2.ps]{
Left: $B$-band image of NGC~1400, binned by $20\times 20$,
with the ellipse in Figure~\ref{figure:skyrad} overplotted. We show the full
field-of-view of 8\farcm9$\times$8\farcm9. The sky value and its uncertainty
are simply the mean and standard deviation of the pixel values outside of this
ellipse, after excluding the masked objects, which are shown as black regions.
Right: normalized histograms of the background pixels of the original
and the binned images; their peak positions are $0.3060\pm0.0102$ and
$0.3059\pm0.0011$ ADU~s$^{-1}$~pixel$^{-1}$, respectively.
(A color version of this figure is available in the online journal.)
\label{figure:skybin}}
\end{figure*}
\noindent
0.04 mag for the \emph{B, V, R} and $I$ filters, respectively. We devised
a calibration strategy to establish an approximate photometric zero point for
the non-photometric observations, for which the corresponding photometric
errors are 0.21, 0.11, 0.10, and 0.09 mag. After correcting for large-scale
gradients in the background, the flatness of the final images is about 0.6\%,
and the typical depth of the surface brightness, defined as $1\, \sigma$
above the background, has a median value of $\mu \approx 27.5, 26.9, 26.4,$
and 25.3 $\rm mag\ arcsec^{-2}$ in the $B, V, R,$ and $I$ bands,
respectively.
We derived a number of data products from the reduced, calibrated images.
These include red--green--blue color composites generated from the $B$, $V$,
and $I$ bands, images cleaned of foreground stars and background galaxies,
a stacked image from a weighted combination of the four filters optimized
to enhance regions of low surface brightness, structure maps designed to
accentuate high-spatial frequency features, and color index maps from
different combinations of the filters.
\section{Sky Determination}
Sky determination is a crucial, fundamental step in the data analysis. Many
of the basic galaxy parameters we are interested in measuring (magnitudes,
colors, characteristic size and brightness level, etc.) are predicated
on having the sky level properly subtracted. Importantly, under-subtraction
or over-subtraction of the sky value can introduce spurious curvature into
the brightness profile, especially in the faint, outer regions of the
galaxy \citep[e.g.,][]{erwi08}. \citet{maca03} studied the influence of the
sky value on the bulge and disk parameters for a sample of spirals, and
concluded that the disk, but to a lesser extent even the bulge, parameters
are sensitive to the sky value.
There are a variety of ways to measure the sky value of a CCD image. Science
data such as ours, however, wherein an extended object fills a substantial
portion of the chip, pose unique challenges. This is especially so because
the background of our images is not always entirely uniform (Paper~I). We
adopt a two-step approach. As in \citet{noovan07}, we generate the isophotal
intensity radial profile of the galaxy to the edge of the field to determine
the radius beyond which the sky background dominates the signal. As
illustrated in Figure~\ref{figure:skyrad}, the transition from the galaxy's
outer boundary to the sky-dominated region manifests itself as flattening of
the radial profile, beyond which it oscillates about a constant intensity
level. To be specific, we define the outer radius to be the first data point
where the measured isophotal intensity rises instead of continuing to decrease
monotonically. Typically the outer radius is large enough to avoid the spiral
arms or other features that may cause a real rise in the outer brightness
profile. In the standard procedure of \citet{noovan07}, the average value of
this isophotal intensity outside the outer radius and the associated standard
deviation gives estimates of the sky level and
its uncertainty. However, this technique is reliable only if the background
is uniform and well measured \citep{erwi08}. The field-of-view of our images
is typically only a factor of $\sim$2 larger than the galaxies, generally too
marginal to provide enough data points in the sky-dominated region to yield a
statistically robust measurement of the background and its error. The
situation is further exacerbated by the occasional presence of residual
large-scale non-uniformities in the background. In view of these
complications, we use Noordermeer \& van~der~Hulst's method only to determine
the radius of the sky-dominated region (Figure~\ref{figure:skyrad}), which we
record in the image header under the keyword {\tt SKY\_RAD}.
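In practice, this criterion amounts to locating the first non-monotonic point of the isophotal intensity profile. A minimal Python sketch (assuming the radii and mean isophotal intensities from the ellipse fits of Section~4 as inputs):
\begin{verbatim}
import numpy as np

def sky_radius(radii, intensities):
    # Semi-major axis of the first isophote whose mean intensity rises
    # instead of continuing to decrease (recorded as SKY_RAD).
    rising = np.where(np.diff(intensities) > 0)[0]
    return radii[rising[0] + 1] if rising.size else radii[-1]
\end{verbatim}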
To estimate the actual sky value and its associated error, we follow a method
similar to that used by \citet{erwi08}. We first smooth the original
$2042\times2042$ pixel image by binning it down to a $102\times102$ pixel
image. This highlights underlying large-scale, systematic fluctuations in the
background, which is the main factor that ultimately limits the accuracy with
which
\vskip 0.3cm
\psfig{file=fig3.eps,width=8.75cm,angle=0}
\figcaption[fig3.eps]{
Empirical relationship between sky$_{\rm fit}$/sky$_{\rm real}$ and
$R_{\rm fit} / R_{25}$, for each of the four filters. For $R_{\rm fit}
/ R_{\rm 25}$ {$\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$}\ 1.3, ${\rm sky}_{\rm fit} \approx {\rm sky}_{\rm real}$ as indicated
by the best-fitted red solid lines with slopes close to 0, but when
$R_{\rm fit} / R_{\rm 25}$ {$\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$}\ 1.3, ${\rm sky}_{\rm fit}$ is systematically larger
than ${\rm sky}_{\rm real}$. The green solid lines represent the best-fit linear
relations given by Equations (1)--(4).
(A color version of this figure is available in the online journal.)
\label{figure:sky_fit_empeqn}}
\vskip 0.3cm
\noindent
we can determine the sky level, and hence the final sensitivity of the
surface brightness of our images. The value of each binned pixel is the mean
value of all the pixels inside a $20\times20$ pixel box, after excluding field
stars and background galaxies identified in the image's object mask (Paper~I).
The background pixels, then, are defined to be all the pixels in the binned
image outside of the isophotal ellipse marked by {\tt SKY\_RAD}. The sky level
is simply the mean of the background pixel values, and its uncertainty is the
standard deviation of the pixel flux distribution; these values are stored in
the image header under the keywords {\tt SKY\_VAL} and {\tt SKY\_ERR}. (The
standard deviation of the mean, useful for other applications, is stored
separately under the keyword {\tt SKY\_SIG}.) We have opted to compute the
mean of the pixel flux distribution, rather than its median or mode, but in
practice this makes little difference (they generally agree to $\sim$0.1\%)
because the shape of the distribution is highly symmetrical. This reflects
the robustness of our estimate of the sky-dominated region and the
effectiveness of our object masks in rejecting faint halos around foreground
stars and background galaxies.
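The binning-and-averaging step may be sketched as follows (a simplified illustration: \texttt{mask} flags pixels belonging to stars or background galaxies, and \texttt{outside} is a boolean array on the binned grid, true outside the {\tt SKY\_RAD} ellipse; both are assumed precomputed):
\begin{verbatim}
import numpy as np

def sky_level(image, mask, outside, box=20):
    # Mean of the unmasked pixels in each box x box block, restricted
    # to blocks outside the SKY_RAD ellipse; the mean and standard
    # deviation of these binned values give SKY_VAL and SKY_ERR.
    ny, nx = image.shape
    vals = []
    for y in range(0, ny - box + 1, box):
        for x in range(0, nx - box + 1, box):
            good = ~mask[y:y+box, x:x+box]
            if good.any() and outside[y // box, x // box]:
                vals.append(image[y:y+box, x:x+box][good].mean())
    vals = np.asarray(vals)
    return vals.mean(), vals.std()
\end{verbatim}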
We have tested the effect of choosing different scales for the smoothing,
varying the binning box sizes from 5 to 50 pixels. While the average sky
value remains stable, the width of the pixel distribution decreases with
increasing smoothing length, leveling off to a near-constant value for
box sizes {$\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$} 20 pixels ($\sim$5$^{\prime\prime}$). We interpret this to represent the
typical scale of large-scale systematic fluctuations in the sky background.
We choose a box size of $20\times20$ pixel as a reasonable compromise in order
to retain a statistically significant number of data points to compute their
average and standard deviation.
Figure~\ref{figure:skybin} illustrates our method of sky estimation, using
a $B$-band image of NGC~1400. The image has been binned $20\times 20$, and
the foreground stars and background galaxies have been masked out. The sky
value is the mean of pixel values outside of the ellipse, after excluding the
masked objects, and the error is simply the standard deviation of the sky
pixels in the binned image. The right-hand panel shows normalized histograms
of the sky pixels of the original and binned image. Clearly they peak at
nearly identical locations (the peaks of the two histograms differ by 0.0001
${\rm ADU\ s^{-1}\ pixel^{-1}}$), but the distribution for the binned image is
narrower than that of the original image by a factor of 10.
The above-described strategy for sky determination can only be applied to
galaxies with angular diameters $D_{25}$ {$\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$}\ 5$^\prime$--6$^\prime$. For the
$\sim$15\% of the sample more extended than this, it is difficult or
impossible to determine the radius of the sky-dominated region and to obtain
robust statistics for the sky pixels, and we must resort to a more indirect
approach. The signal in the outer regions of the CCD frame consists of galaxy
light plus a constant sky background. Assuming that the galaxy component can
be modeled by a single S\'ersic function, we can fit the observed light profile
to solve for the underlying sky value. We perform the fitting on the 1-D
surface brightness profile (Section~4), after excluding the central regions of
the galaxy and other features such as the bulge, the bar, or strong spiral
arms, if present. Simple experimentation shows that the best-fit sky value
depends on the fitting radius relative to the size of the galaxy. Clearly, if
the fitting radius is large compared to the outer edge of the galaxy, the sky
will be well determined; however, if the fitting radius lies substantially
interior to the main body of the galaxy, the inferred sky value will depend
critically on how well the S\'ersic model represents the intrinsic light
profile of the galaxy.
We devise an empirical correction, as follows. We select several galaxies
that (1) have relatively simple structures, (2) are small compared
to the CCD's field-of-view, and (3) have well-determined sky values. Then,
we fit their surface brightness profiles with different fitting radii
($R_{\rm fit}$), to mimic the actual situation in galaxies that are too
angularly extended to have a reliable sky determination. The resulting
fitted sky value, ${\rm sky}_{\rm fit}$, is then compared with the independently
known, correct value ${\rm sky}_{\rm real}$. Figure~\ref{figure:sky_fit_empeqn}
shows ${\rm sky}_{\rm fit} / {\rm sky}_{\rm real}$ vs. $R_{\rm fit} / R_{25}$, where
$R_{25} = 0.5\,D_{25}$. We can see that ${\rm sky}_{\rm fit} / {\rm sky}_{\rm real}
\approx 1$ when $R_{\rm fit} / R_{25}$ {$\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$}\ 1.3. When $R_{\rm fit} / R_{25} \leq
1.3$, ${\rm sky}_{\rm fit}$ overestimates ${\rm sky}_{\rm real}$, but it does so
systematically, in such a way that we can apply an approximate empirical
correction to recover the true sky value. The best-fitting linear
relations (and their associated rms scatter) are as follows.
\begin{itemize}
\item {\it B}\ band
\begin{align}
\frac{{\rm sky}_{\rm fit}}{{\rm sky}_{\rm real}} &= 1.364 - 0.327 \times
\frac{R_{\rm fit}}{R_{25}}, &\sigma = 0.05,
\end{align}
\item {\it V}\ band
\begin{align}
\frac{{\rm sky}_{\rm fit}}{{\rm sky}_{\rm real}} &= 1.427 - 0.389 \times
\frac{R_{\rm fit}}{R_{25}}, &\sigma = 0.033,
\end{align}
\item {\it R}\ band
\begin{align}
\frac{{\rm sky}_{\rm fit}}{{\rm sky}_{\rm real}} &= 1.309 - 0.287 \times
\frac{R_{\rm fit}}{R_{25}}, &\sigma = 0.042,
\end{align}
\item {\it I}\ band
\begin{align}
\frac{{\rm sky}_{\rm fit}}{{\rm sky}_{\rm real}} &= 1.070 - 0.060 \times
\frac{R_{\rm fit}}{R_{25}}, &\sigma = 0.020.
\end{align}
\end{itemize}
\noindent
The best-fit sky value, ${\rm sky}_{\rm fit}$, and its associated statistical
error, $\sigma_{\rm fit}$, are recorded under the header keywords {\tt SKY\_VAL}
and {\tt SKY\_ERR} with the comment ``fitted sky value.'' If the above empirical
correction to ${\rm sky}_{\rm fit}$ is necessary, we fold the scatter of the
correction relation into the error budget.
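Applying the empirical correction amounts to inverting the linear relations above; a minimal Python sketch with the coefficients of Equations (1)--(4):
\begin{verbatim}
# (intercept, slope, rms scatter) of Equations (1)-(4)
SKY_CORR = {'B': (1.364, -0.327, 0.050),
            'V': (1.427, -0.389, 0.033),
            'R': (1.309, -0.287, 0.042),
            'I': (1.070, -0.060, 0.020)}

def corrected_sky(sky_fit, r_fit, r25, band):
    # Invert sky_fit / sky_real = a + b * (R_fit / R_25); the rms
    # scatter of the relation is folded into the error budget.
    a, b, rms = SKY_CORR[band]
    return sky_fit / (a + b * r_fit / r25), rms
\end{verbatim}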
\section{Isophotal Analysis}
\subsection{Geometric Parameters and Surface Brightness Profile}
The IRAF\footnote{IRAF is distributed by the National Optical Astronomy
Observatory, which is operated by the Association of Universities for Research
in Astronomy, Inc., under cooperative agreement with the National Science
Foundation.} task {\em ellipse} is commonly used to measure the surface
brightness profiles of galaxies \citep[e.g.,][]{silels94, miljog99,
lain02, jogee04, agu05, marjog07, noovan07, bara08}. Following the iterative
method of \citet{jedr87}, we fit the isophotes of the
galaxy with a set of ellipses. This is motivated by the fact that the
isophotes of most galaxies, especially early-type systems such as ellipticals
and lenticulars, are quite close to ellipses\footnote{An important exception
is edge-on galaxies, which are not well suited to ellipse fits. As the
bulge and disk components have different profiles and ellipticities, their
relative contributions change with radius and azimuth (i.e. along major or
minor axes). A dust lane, if present, also can strongly affect the averaged
isophotal intensity. For edge-on galaxies it is preferable to extract the
isophotal intensities along cuts in the major and minor axis directions
\citep[e.g.,][]{degr98, fry99, wu02}.}. In our implementation, the ellipses
are sampled along the semi-major axis of the galaxy in logarithmic intervals,
starting from $r \approx $ 0\farcs3 and increasing the radius of each
successive ellipse by a factor of 1.1. After reaching the outermost ellipse,
the fitting reverses direction and moves toward the galaxy center from
$r \approx $ 0\farcs3, with each subsequent radius decreasing by a factor of
1.1.
As in \citet{noovan07}, we determine the isophotal geometric parameters of
the galaxy in two steps. In the first step, we estimate the center of the
galaxy. Ellipses are fitted to the $I$-band image with the center, position
angle (PA), and ellipticity ($e$) set as free parameters. We use the $I$-band
image as the fiducial reference because of its relative insensitivity to
dust extinction and young stars, and because it generally has the best seeing.
The center of the galaxy is the average central position of the ellipses
inside $\sim$5$^{\prime\prime}$--7$^{\prime\prime}$. In images of regular galaxies the centers of the
best-fit ellipses often converge to a well-defined value, with rms
$\approx$ 0\farcs015. However, in galaxies with dusty nuclear regions,
the best-fitting central isophotes may give a poor measure of the true
center. In such cases, a better estimate of the true center comes from
isophotes at intermediate radii, $\sim$10$^{\prime\prime}$--30$^{\prime\prime}$, far enough to be
undisturbed by central dust but yet sufficiently close to the nucleus to give
a faithful measure of its position. The typical uncertainty of the
central positions estimated in this way is $\sim$0\farcs02.
Next, we fix the center just determined and run {\em ellipse} again,
while still setting $e$ and PA free. Our goal is to determine the
characteristic $e$ and PA of the galaxy based on its best-fit isophotes.
Typically we take the average value of these parameters in the outer regions
of the galaxy, where the intensity is about $1 \,\sigma$ above the sky, as
their characteristic values and use their standard deviations over that
region as the uncertainties. These parameters usually converge to a constant
value within that region, with variations of $\sim$0.04 for $e$ and
$\sim$2$^{\circ}$\ for PA. However, the intrinsic geometric parameters of some
galaxies can be distorted by mergers or interactions, causing $e$ and PA
to diverge at large radii. In these cases we simply estimate their values
manually from the visually best-fitting isophotes near the edge of the galaxy.
The $B$, $V$, $R$, and $I$ images have their center, $e$, and PA values
determined independently, and they are stored in their corresponding image
headers.
During this second step, we also record the deviations of the isophotes from
perfect ellipses, which, as described in \citet{jedr87}, are parameterized by
the third ($A_3, B_3$) and fourth ($A_4, B_4$) harmonics of the
intensity distribution. The $A_3$ and $B_3$ parameters give
``egg-shaped'' or ``heart-shaped'' isophotes \citep{cart78, jedr87}.
\citet{pele90} point out that $A_3$ and $B_3$ appear to be sensitive
diagnostics of dust features in elliptical galaxies. The most interesting
parameter among them is $B_4$: if it is positive, the underlying isophote is
disky with respect to a perfect ellipse; a negative $B_4$ corresponds to a
boxy isophote. Figure~1 of \citet{pele90} gives examples of the different
isophotal shapes for different values of $B_3, A_4$, and $B_4$.
We run {\em ellipse} for a third and final time to extract the average
intensity of the isophotes, fixing the geometric parameters to the values
determined above \citep[e.g.,][]{potr06, noovan07}. For this step, we do not
allow the geometric parameters to vary in order to reduce the influence of
bars and other non-axisymmetric features on the average intensity profile, as
well as to reach convergence in regions where the signal-to-noise ratio (S/N)
is marginal \citep{erwi08}. For consistency, we apply this isophote
measurement to all the galaxies in our sample. For the lopsided galaxies, we
also experimented with allowing the isophotal centers to be left as free
parameters. We find that the typical difference in the brightness profile,
compared with the fits based on fixed isophotal centers, is $\sim$ 0.2 $\rm
mag\ arcsec^{-2}$. Moreover, our tests show that the relative amplitudes of
the Fourier terms (Section 4.2) decrease significantly when the isophotal
centers are allowed to be free, due to the fact that the free-fitting ellipses
can better trace the distorted disk in the outer part of the galaxy, producing
very small fluctuations along each isophote. This will make the Fourier
analysis less effective for detecting and quantifying the properties of bars
or lopsided structures.
After subtracting the sky background from the image, the surface brightness
is calculated from
\begin{eqnarray}
\mu = -2.5 \log \left( \frac{I_{\rm iso}}{t_{\rm exp} \times A}
\right) + {\rm zpt},
\end{eqnarray}
\noindent
where $I_{\rm iso}$ is the isophotal intensity after subtracting the sky,
$t_{\rm exp}$ is the exposure time of the image in units of seconds, $A$ is
the pixel area in units of $\rm arcsec^2$, and ${\rm zpt}$ is the photometric
zero point of the image in units of magnitudes. We calculate the surface
brightness only from those isophotes whose intensities are larger than
$I_{\rm sky} + \sigma_{\rm sky}$. We propagate the errors on $I_{\rm iso}$
into errors on $\mu$ in magnitude units. The surface brightness profiles in
the $B, V,$ and $R$ bands are constrained to have the same geometric
parameters as determined in the $I$ band.
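In code, the conversion from isophotal intensity to surface brightness, with the $1\,\sigma$ sky threshold applied, may be sketched as follows (the input intensities here are the raw, non-sky-subtracted values):
\begin{verbatim}
import numpy as np

def surface_brightness(i_raw, i_sky, sigma_sky, t_exp, pix_area, zpt):
    # mu = -2.5 log10(I_iso / (t_exp * A)) + zpt, where I_iso is the
    # sky-subtracted intensity; computed only where the raw intensity
    # exceeds the sky level by one sigma.
    i_raw = np.asarray(i_raw, dtype=float)
    mu = np.full_like(i_raw, np.nan)
    ok = i_raw > i_sky + sigma_sky
    mu[ok] = -2.5 * np.log10((i_raw[ok] - i_sky)
                             / (t_exp * pix_area)) + zpt
    return mu
\end{verbatim}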
To construct 1-D color profiles, we blur all the images to a common seeing.
This is done by convolving the image having the better seeing with a
two-dimensional (2-D) Gaussian function whose full width at half maximum
(FWHM) is the quadrature difference between the two seeing values. The
measured isophotal ellipses of the unblurred $I$-band image are used to
directly calculate the isophotal intensity of the blurred images in all the
filters. Color profiles follow from straightforward differencing of one band
from another.
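The seeing-matching convolution may be sketched as follows (using \texttt{scipy}; the pixel scale of 0\farcs259 is that of our images):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def match_seeing(image, fwhm_image, fwhm_target, pix_scale=0.259):
    # Convolve the sharper image with a Gaussian whose FWHM (arcsec)
    # is the quadrature difference of the two seeing values.
    fwhm_kernel = np.sqrt(fwhm_target**2 - fwhm_image**2)
    sigma_pix = fwhm_kernel / (2.3548 * pix_scale)  # FWHM = 2.3548 sigma
    return gaussian_filter(image, sigma_pix)
\end{verbatim}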
\begin{figure*}[t]
\centerline{\psfig{file=fig4.eps,width=17.0cm,angle=0}}
\figcaption[fig4.eps]{
$B$-band composite profiles of our sample, divided by morphological
type. The elliptical galaxies are normalized at a reference radius of
$r_{\rm ref}$ = $R_{20}$, and the profiles are plotted vs.
$(r/r_{\rm ref})^{1/4}$. The thick black line corresponds to a de~Vaucouleurs
$r^{1/4}$ law. The profiles for the disk galaxies are scaled according to the
scale length $h$ of the disk outside $r_{\rm ref}$. Profiles of Type I, II,
and III are marked in red, purple, and blue, respectively. The thick black
line corresponds to a pure exponential function.
(A color version of this figure is available in the online journal.)
\label{figure:compprofb}}
\end{figure*}
\begin{figure*}[t]
\centerline{\psfig{file=fig5.eps,width=17.0cm,angle=0}}
\figcaption[fig5.eps]{
$I$-band composite profiles of our sample. See Figure~\ref{figure:compprofb}
for details.
(A color version of this figure is available in the online journal.)
\label{figure:compprofi}}
\end{figure*}
\subsection{Fourier Analysis}
We study the harmonic components of the intensity distribution of the
isophotes. In our work, we decompose the intensity distribution along each
ellipse into a Fourier series of the form
\begin{eqnarray}
I(\theta) = I_{0} + \sum_{j=1}^{\infty} I_{j} \cos j(\theta + \phi_{j}),
\end{eqnarray}
\noindent
where $I$ is the intensity (in units of ADU~s$^{-1}$~pixel$^{-1}$) on the
ellipse in the direction $\theta$, $I_{0}$ is the average intensity of the
ellipse, $I_{j}$ measures the strength of the $j$th mode in the series, and
$\phi_{j}$ is the corresponding phase angle of that mode. The angle $\theta$
is defined to be 0$^{\circ}$\ along the positive $y$-axis and increases
counterclockwise; $\phi_{j} = 0$$^{\circ}$\ along the positive $y$-axis and
increases clockwise. A high S/N is required to derive a statistically
significant measurement of the high-order Fourier terms \citep{noovan07};
thus, we only perform this decomposition inside the radius where the average
intensity of the isophote is $3 \,\sigma_{\rm sky}$ larger than the determined
sky value. The relative amplitude of the $m$ = 1 ($I_{1}/I_{0}$) and $m = 2$
($I_{2}/I_{0}$) mode will be especially useful in our analysis, as they
reflect the lopsidedness of the galaxy \citep{rixzar95} and the strength of
the bar or spiral arms \citep{buta86}, respectively.
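For isophotes sampled uniformly in $\theta$, the amplitudes and phase angles follow from simple projections; a minimal sketch:
\begin{verbatim}
import numpy as np

def fourier_modes(theta, intensity, m_max=4):
    # Decompose I(theta) = I0 + sum_j I_j cos[j (theta + phi_j)],
    # assuming theta is sampled uniformly along the ellipse.
    i0 = intensity.mean()
    modes = []
    for j in range(1, m_max + 1):
        a = 2.0 * np.mean(intensity * np.cos(j * theta))  # I_j cos(j phi_j)
        b = 2.0 * np.mean(intensity * np.sin(j * theta))  # -I_j sin(j phi_j)
        modes.append((np.hypot(a, b) / i0,     # relative amplitude I_j/I0
                      np.arctan2(-b, a) / j))  # phase angle phi_j
    return i0, modes
\end{verbatim}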
\citet{buta86} performed a Fourier analysis to study the azimuthal variations
of the light distribution of NGC~1433, which is also included in the CGS
sample. We found good agreement between Buta's values of the relative
amplitudes of the $m$ = 1 and 2 modes and those calculated by us. This helps
to confirm the robustness of our method.
\subsection{Database of 1-D Profiles}
The full database of isophotal parameters for the 605 galaxies in CGS
(including the 11 extras not formally part of the survey) is given in the
Appendix, as Figures 19.1--19.616, as well as on the project Web site
{\tt http://cgs.obs.carnegiescience.edu}.
\section{Composite Profiles}
Composite profiles can help to highlight characteristic statistical trends,
as well as to isolate interesting outliers, in a class of objects. We
normalize the surface brightness profiles to a common reference radius. For
the elliptical galaxies in our sample, we set $r_{\rm ref}$ to $R_{20}$,
the radius within which 20\% of the total flux is enclosed, and plot the profiles
as a function of $(r/r_{\rm ref})^{1/4}$. In this reference frame, a
classical de~Vaucouleurs $r^{1/4}$ profile traces a
\begin{figure*}[t]
\centerline{\psfig{file=table1.ps,width=16.5cm,angle=0}}
\end{figure*}
\noindent
straight line.
Figure~\ref{figure:compprofb} (top left panel) illustrates
the now-known fact that not all ellipticals obey the $r^{1/4}$ law, but rather
are better described by a more general S\'ersic function with $r^{1/n}$.
For the disk (S0 and spirals) galaxies, we set $r_{\rm ref}$ to be roughly the
boundary between the bulge and disk, and we plot the brightness profiles as a
function of $(r - r_{\rm ref})/h$, where $h$ is the scale length of the disk,
as determined by fitting an exponential function to the profile outside of
$r_{\rm ref}$. This choice of coordinates helps to reveal possible deviations
of the disks from a canonical exponential profile ($n = 1$), which appears as
a straight line. It is apparent that the light distributions of disks very
rarely follow a pure exponential function (Figure~\ref{figure:compprofb}),
especially in their outer regions. Most show a downward turn compared to a
single exponential, but a non-negligible number show an upward turn. This
phenomenon is well-known \citep[e.g.,][]{phil91, erwi05, erbp05, potr06}. We
code the three profile types with different colors, with Type~I profiles (no
break) in red, Type~II profiles (downward break) in purple, and Type~III
profiles (upturn) in blue. The scatter among the normalized profiles is
smaller in the redder bands, indicating that dust extinction and young stars
have a greater effect on the profile shapes in the blue.
Figure~\ref{figure:compprofi} shows the equivalent montage for the $I$ band.
Table~1 tabulates the frequency of the different
profiles for each bin of morphological type for the disk galaxies.
Occasionally, there are some galaxies with complicated surface brightness
profiles, which cannot be classified as any of the three standard types listed
above. The profiles for such objects are listed as ``Other'' in
Table~1\footnote{
Since the brightness profile in the outer regions of the galaxy depends
sensitively on the accuracy of the sky subtraction, we exclude objects
with unreliable sky determination. We also omit galaxies whose light
distribution is severely adversely affected by very bright foreground stars,
by excessively crowded field stars, or by an interacting neighbor. The
excluded objects are flagged in Table~2. Note that a star that is bright
and excluded in one filter may not be equally bright or rejected in another;
therefore, the number of objects in a morphological bin is not the same
among all the filters. We further omit the 11 extra galaxies that do not
formally meet the CGS selection criteria.}.
Type~II and Type~III profiles are common in our sample, and their fractions
depend on the galaxy morphology. Type~II profiles occur more frequently in
late-type disk galaxies ($\sim 70\%$ among Sc--Sd spirals), whereas Type~III
profiles are preferentially found in more bulge-dominated, earlier-type
systems, especially
among the S0--S0/a class ($\sim 50\%-60\%$). The fraction of galaxies with
Type~I profiles, on the other hand, seems to be roughly constant, at
$\sim 20\%$, across all morphological types; the fraction increases
systematically toward redder bandpasses, except for the Sdm--Sm galaxies,
where the fraction seems to be roughly constant, although the number of
objects is small. A detailed analysis of the different profile types and their
dependence on other physical parameters will be presented in a separate paper.
\section{Color Information}
Table~2 presents integrated colors and color gradients for
the sample, corrected for foreground Galactic extinction using values from
\citet{schl98}. We list $B-I$, $V-I$, and $R-I$, from which other color
combinations can be readily derived; we use total magnitudes within the last
reliable isophote (1 $\sigma$ above the sky), as given in Table~4 of Paper~I.
For each of
\begin{figure*}[t]
\centerline{\psfig{file=table2_short.ps,height=10.5in,angle=180}}
\end{figure*}
\begin{figure*}[t]
\centerline{\psfig{file=fig6.eps,width=17.0cm,angle=0}}
\figcaption[fig6.eps]{
Normalized histograms of the inner and outer $B-I$ color gradients,
divided by morphological type. The inner and outer color gradients are
represented in red and blue solid histograms, respectively. The
vertical dashed lines in each panel mark the adopted boundaries for negative
($\nabla(B-I) < -0.1$), flat ($-0.1 \leq \nabla(B-I) \leq 0.1$), and positive
($\nabla(B-I) > 0.1$) color gradients. A positive color gradient means that the
color becomes redder outward, while a negative value indicates that the color
gets bluer outward.
(A color version of this figure is available in the online journal.)
\label{figure:cghistbi}}
\end{figure*}
\begin{figure*}[t]
\centerline{\psfig{file=fig7.eps,width=17.0cm,angle=0}}
\figcaption[fig7.eps]{
Normalized histograms of the inner and outer $V-I$ color gradients.
See Figure~\ref{figure:cghistbi} for details.
(A color version of this figure is available in the online journal.)
\label{figure:cghistvi}}
\end{figure*}
\begin{figure*}[t]
\centerline{\psfig{file=fig8.eps,width=17.0cm,angle=0}}
\figcaption[fig8.eps]{
Normalized histograms of the inner and outer $R-I$ color gradients.
See Figure~\ref{figure:cghistbi} for details.
(A color version of this figure is available in the online journal.)
}
\label{figure:cghistri}
\end{figure*}
\clearpage
\begin{figure*}[t]
\centerline{\psfig{file=table3.ps,width=16.5cm,angle=0}}
\end{figure*}
\noindent
these color combinations, we also calculate two simple measures
of the color gradient, after resampling the color profiles with 300 equally
spaced data points in linear space, to avoid overweighting the densely
sampled points in the central region.
Similar to \citet{tayl05}, the color gradient is derived
both inside and outside the half-light radius, $R_{50}$, as determined in the
$I$ band. The inner region ranges from 3 times the seeing FWHM (to avoid
spurious effects from seeing mismatches) to $R_{50}$, while the outer region
extends between $R_{50}$ and $2.5\ R_{50}$. Tests show that most of the color
profiles of the CGS galaxies can reach further than $2.5\ R_{50}$. We confine
the color gradient measurements to specific radial ranges marked by the three anchor
points for each galaxy\footnote{In the future, we will also derive the color
gradient in physically interesting regions, such as spiral arms, bars, or the
break points of the surface brightness profiles.}.
The resulting color profile slope represents the change in color ($\Delta$ mag)
per dex in radius, where a positive slope indicates that the galaxy is getting redder
with increasing radius from its center.
We implement a Monte Carlo method to compute the
color gradients and their uncertainties. The essence of our approach is to
iteratively sample the two observables, the radius and the color at that
radius, while incorporating uncertainties from Poisson noise, sky subtraction,
radius measurement, and stochastic fluctuations in the color profile (due to,
for instance, substructure from dust lanes, star clusters, or spiral arms).
Except for Poisson noise, which is random, the other uncertainties generally
contribute to errors in a systematic way. Our Monte Carlo approach makes it
possible to derive an effective random uncertainty from a series of systematic
errors. We start with the premise that the effective radius carries a
Gaussian uncertainty with $\sigma \approx 0.1 R_{\rm 50}$, which is frequently true
when the data are not in the noise-dominated regime. The systematic
uncertainty on $R_{\rm 50}$ arises from the fact that there is a range of
plausible, acceptable models that scatter around the best-fitting solution.
We draw a radius from that distribution, and the color at that corresponding
radius. The color value is also sampled from a Gaussian distribution given by
the Poisson noise, centered on the original color at the sampling radius. The
net random uncertainty of the color includes uncertainty in the sky value.
The same sampling process is applied at $2.5 R_{\rm 50}$, now with an
effective scatter of $2.5 \, \sigma (R_{\rm 50})$, and in the central
region with a scatter of the seeing FWHM. Having sampled
the radii and colors around the three anchor points $R_0$ (3 times the
seeing FWHM), $R_{\rm 50}$, and
$2.5 R_{\rm 50}$, we calculate the color gradient following
\begin{eqnarray}
\nabla({\rm Color})_{\rm in/out} = \frac{({\rm Color})_{r_1} -
({\rm Color})_{r_0}}{\log{r_1} - \log{r_0}}.
\end{eqnarray}
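\noindent
As a quick numerical check of this definition, a color that reddens by 0.2 mag
between $R_{50}$ and $2.5\,R_{50}$ corresponds to a gradient of
$0.2/\log 2.5 \approx 0.50$ mag per dex in radius.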
\noindent
This procedure is repeated 10$^4$ times to generate a
distribution of color gradients, whose median and width we adopt as the
gradient and its uncertainty, respectively.
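The following minimal sketch (in Python; the function name, the linear
interpolation of the profile, and the purely Gaussian treatment of the
systematic terms are our simplifying assumptions, not the actual CGS
pipeline) illustrates the sampling scheme for one pair of anchor radii:
\begin{verbatim}
import numpy as np

def color_gradient_mc(r, color, color_err, r0, r1, sigma_r0, sigma_r1,
                      n_trials=10_000, rng=None):
    """Monte Carlo color gradient between anchor radii r0 and r1.

    r, color, color_err : tabulated color profile (r sorted, positive)
    sigma_r0, sigma_r1  : Gaussian scatter adopted for the two radii
    Returns the median and width of the sampled gradient distribution.
    """
    if rng is None:
        rng = np.random.default_rng()
    grads = np.empty(n_trials)
    for k in range(n_trials):
        # Perturb the anchor radii (systematics treated as Gaussian scatter).
        ra = min(max(rng.normal(r0, sigma_r0), r.min()), r.max())
        rb = min(max(rng.normal(r1, sigma_r1), r.min()), r.max())
        # Interpolate the color and its error at the sampled radii, then
        # perturb the colors by their (Poisson + sky) uncertainty.
        ca = rng.normal(np.interp(ra, r, color), np.interp(ra, r, color_err))
        cb = rng.normal(np.interp(rb, r, color), np.interp(rb, r, color_err))
        grads[k] = (cb - ca) / (np.log10(rb) - np.log10(ra))
    return np.median(grads), np.std(grads)
\end{verbatim}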
Figure~\ref{figure:cghistbi} shows the normalized histograms of the inner and
outer gradients of the $B-I$ color, sorted into bins of different
morphological types; the corresponding gradients for the $V-I$ and $R-I$
colors are shown in Figures~\ref{figure:cghistvi} and
\ref{figure:cghistri}, respectively. We define positive gradients
as those larger than 0.1, flat gradients as those between $-$0.1 and 0.1, and
negative ones as those smaller than $-$0.1. The vertical dashed lines in each
panel mark the boundaries between these three categories.
Table~3 summarizes the statistics for CGS\footnote{We
excluded the galaxies flagged in Table~2 as adversely affected by field stars.}.
The color
gradients depend strongly on galaxy morphology and color. Elliptical galaxies
generally have very little, if any, measurable color gradients. Interestingly,
their central regions show a slight tendency to exhibit {\it positive}\
gradients in all three colors, whereas beyond their effective radii the trend
reverses and there is a mild preference for negative gradients. In either
case, the distribution of gradients is narrowly peaked, with a dispersion
of $\sim 0.11$. S0 and S0/a galaxies largely follow the same pattern as the
ellipticals. By contrast, spirals of types Sa through Sd behave quite
differently. The inner regions of these galaxies show a wide dispersion
in gradients ($\sim 0.23$), and they are predominantly negative: the colors
get redder toward the center. The gradients in the outer regions, on the
other hand, are predominantly flat (peak near 0), and there is roughly
an equal number of positive and negative values, although the scatter is
large. Galaxies belonging to the latest types (Sdm and Sm)
display no preference for gradients of either sign, in either their interior
or their exterior regions. The above trends stand out most clearly in
$B-I$, the color combination with the greatest wavelength separation, and they
become less pronounced in $V-I$, and even more so in $R-I$, although they are
still noticeable.
\section{Bars}
Two fundamental quantities that characterize a bar are its length and strength.
There are many ways to estimate the characteristic size of a bar. Apart from
simple visual inspection \citep{korm79}, the most commonly used approaches
involve measurement of the maximum value of the bar ellipticity
\citep[e.g.,][]{lain02, mene07}, the radial variation of the position angle
\citep[e.g.,][]{erwi05, mene07},
the radial variation of the phase angle of the second Fourier mode
\citep{ague03}, and the bar-interbar contrast \citep{ague03}, as well as
direct decomposition of the image into different components \citep{prie97}.
The strength of the bar can be ascertained by quantifying the maximum
ellipticity in the bar region \citep{mart95, mafr97, mene07}, the torques generated by
the bar \citep{bubl01}, the amplitude of the even Fourier modes of the
isophotal intensity distribution \citep{ohta90, atmi02}, and direct
decomposition of the galaxy into its constituent light fractions
\citep{laurikainen05, gadotti08, peng10}. Here we present a preliminary
appraisal of the bar properties of the CGS sample based on information that
can be readily extracted from our 1-D isophotal data. We describe analyses
based on the geometric parameters and Fourier components.
\subsection{Geometric Analysis}
In the absence of confusion from dust, star-forming regions, and projection
effects, bars usually leave a distinctive imprint on the $e$ and PA profiles of
a galaxy. The bar is marked by a region wherein the ellipticity rises
steadily until it reaches a peak and drops, and, unless the bar semi-major
axis is closely aligned with the major axis of the
disk, the constant position angle in the bar region abruptly changes value
as it transitions into the disk region \citep{gadotti07}.
We begin with the $e$ and PA profiles of the $I$-band image, as extracted from
the second step of running the task {\em ellipse} (Section 4.1), during
which only the galaxy center was held fixed. The $I$ band is preferred over
the other bands at shorter wavelengths because it mitigates contamination by
dust and young stars. We consider the profiles from an inner radius
corresponding to 3 times the seeing disk to the radius where the isophotal
intensity is $1 \, \sigma$ above the sky background, beyond which we truncate
the surface brightness profile. Similar to \citet{mene07} and \citet{ague09},
bars are required to have a maximum projected ellipticity ($e_{\rm max}$)
greater than 0.2, and within the bar region the position angle should be
constant to within $\Delta\rm PA<20^\circ$. If none of the data points in the
$e$ profile exceeds 0.2, or if $\Delta e \leq 0.1$ throughout the entire $e$
profile, we classify the galaxy as unbarred. If $\Delta e > 0.1$ somewhere
along the $e$ profile but the associated $\Delta$PA $\leq 10^{\circ}$, it is
possible that a bar exists but happens to align fortuitously with the major
axis of the outer disk. We flag these cases as ``possibly'' barred and
carefully inspect the galaxy image visually to see if we can confirm their
reality. If the galaxy is barred, we set the inner boundary of the bar region
to be the first data point in the $e$ profile that exceeds 0.2. \citet{mene07}
find that near the end of the bar the ellipticity and position angle usually
begin to show large deviations, typically at the level of $\Delta e \geq 0.1$
and $\Delta$PA $\geq 10^{\circ}$. We adopt these criteria to define the
radius of the outer boundary of the bar.
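As a schematic illustration of these selection rules (ours, not the survey
code; PA wrap-around near $0^{\circ}/180^{\circ}$ and the restriction to the
valid radial range are ignored for brevity), the logic can be sketched in
Python as:
\begin{verbatim}
import numpy as np

def classify_bar(e, pa, e_min=0.2, dpa_bar=20.0, dpa_aligned=10.0):
    """Classify a galaxy as barred / possibly barred / unbarred from its
    I-band ellipticity (e) and position angle (pa, degrees) profiles."""
    e, pa = np.asarray(e, float), np.asarray(pa, float)
    de = e.max() - e.min()
    if e.max() <= e_min or de <= 0.1:
        return "unbarred"
    start = np.argmax(e > e_min)          # inner boundary of the bar
    peak = start + np.argmax(e[start:])   # radius of maximum ellipticity
    dpa = np.ptp(pa[start:peak + 1])      # PA spread over the bar region
    if dpa <= dpa_aligned:
        # A bar may be hiding along the disk major axis; needs inspection.
        return "possibly barred"
    return "barred" if dpa < dpa_bar else "unbarred"
\end{verbatim}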
The projected bar size is set to be the semi-major axis of the isophote where
$e$ peaks \citep{mene07}, with the associated error as the semi-major axis
range containing the tip of the $e$ profile (i.e. $e \geq e_{\rm max}-0.01$)
in the bar region. The reason we do not simply equate the bar size with the
outer boundary of the bar is that the change in $e$ and PA in the
transition zone between the bar and spiral arms can be influenced by the
latter. The outer boundary of the bar can be overestimated if the bar is
aligned with the spiral arms. This effect can be mitigated by using the
position where $e$ peaks as the bar size, since spiral features cannot
produce ellipticity values as high as those of a bar. Assuming that the
intrinsic shape of the galaxy disk is purely circular, we correct the
observed, projected bar length ($R^{o}_{\rm bar}$) to its intrinsic value
($R^{i}_{\rm bar}$) following
\begin{eqnarray}
R^{i}_{\rm bar} = R^{o}_{\rm bar}\sqrt{(\cos{\Delta{\rm PA)}}^2
+ \left(\frac{\sin{\Delta{\rm PA}}}{1 - e_{\rm gal}}\right)^2},
\end{eqnarray}
\bigskip
\noindent
where $\Delta{\rm PA} = {\rm PA}_{\rm gal} - {\rm PA}_{\rm bar}$, and
${\rm PA}_{\rm gal}$ and $e_{\rm gal}$ are the position angle and ellipticity
of the outer disk of the galaxy, respectively. Although a full 2-D analytical deprojection
of the bar is more accurate \citep[see Appendix A in][]{gadotti07}, in
practice the size estimates from the two methods agree very well. The typical
difference in bar radii measured by 1-D fitting compared to 2-D deprojection
is about $0\farcs3$. Since the difference is very small, we use the 1-D method
for simplicity. The projected $e_{\rm max}$ then represents
the strength of the bar \citep{mene07}, with the fitted error of $e_{\rm max}$
as its uncertainty. The position angle of the bar, ${\rm PA}_{\rm bar}$, is
given by the average PA over the bar region, with the rms as its error.
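A direct transcription of the deprojection formula above (a sketch under the
stated circular-disk assumption; the function name is ours):
\begin{verbatim}
import numpy as np

def deproject_bar_radius(r_bar_obs, pa_bar, pa_gal, e_gal):
    """Deproject an observed bar semi-major axis, assuming an
    intrinsically circular disk; position angles in degrees."""
    dpa = np.deg2rad(pa_gal - pa_bar)
    return r_bar_obs * np.sqrt(np.cos(dpa) ** 2
                               + (np.sin(dpa) / (1.0 - e_gal)) ** 2)
\end{verbatim}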
Figure~\ref{figure:barexp} illustrates how we identify and measure the
bar size and strength, as applied to the star-cleaned $I$-band image of the
SBb galaxy NGC~7513. The radial profiles of $e$ and PA clearly show the
hallmark features of a bar: a distinctly broad peak in $e$ above our minimum
threshold of 0.2, reaching $e_{\rm max}$ = 0.68, and an extended plateau of
near-constant PA $\approx$ 75$^{\circ}$\ ($\Delta {\rm PA} \leq 20$$^{\circ}$). The outer
disk of the galaxy has a clearly different $e$ (0.33) and PA (105$^{\circ}$). The
two vertical solid lines mark the inner and outer boundaries of the
bar-dominated region; the vertical dotted line gives the projected bar radius.
Although widely used in the literature, the measured $e_{\rm max}$ of the bar
is actually $\sim$ 20\% lower than that derived from 2-D image
decomposition when the bulge and disk components are also included
\citep{gadotti08}. Indeed, reliable bar parameters can only be determined
\begin{figure*}[t]
\centerline{\psfig{file=fig9.eps,width=17.5cm,angle=0}}
\figcaption[fig9.ps]{
Illustration of how we determine the size and strength of a bar.
Left: star-cleaned $I$-band image of NGC~7513; the size of the image
is $\sim$3$^\prime$$\times$3$^\prime$. Right: radial profiles of PA, $e$,
$I_2/I_0$, and $\phi_2$. The horizontal dashed lines in the PA and $e$ panels
denote the characteristic values of the galaxy. The solid vertical lines
mark the inner and outer boundaries of the bar-dominated region, and the
dotted vertical line represents the projected size of the bar determined
using the geometric method, where the bar is required to have $e_{\rm max} \geq
0.2$ and $\Delta {\rm PA} \leq 20$$^{\circ}$. In the $I_2/I_0$ and $\phi_2$ panels,
the solid vertical line marks the inner boundary of the bar. The vertical
dashed line represents both the outer boundary and size of the bar,
determined using the Fourier method, where the bar criteria are $I_2/I_0 \geq
0.2$ and $\Delta \phi_2 \leq 20$$^{\circ}$. The corresponding isophotal ellipses for
the two methods are overplotted on the left-hand image with the same type
of lines as in the right-hand panel.
\label{figure:barexp}}
\end{figure*}
\begin{figure*}[t]
\hbox{
\hskip -0.1in
\psfig{file=fig10_1.eps,width=9.8cm,angle=0}
\hskip -0.3in
\psfig{file=fig10_2.eps,width=9.8cm,angle=0}
}
\figcaption[fig10.ps]{
Left: correlation between $R_{\rm bar}^{\rm Ellipse}$, the
deprojected bar size measured from the geometric, ellipse method and
$R_{\rm bar}^{\rm Fourier}$, that measured from the Fourier decomposition.
The dashed line represents $y = x$. Right: correlation between
$\phi_2$ and PA of the bar. The dashed line represents $y = 180^{\circ} - x$.
\label{figure:barcmp}}
\end{figure*}
\noindent
by
2-D decomposition of the images with all of the other components included.
Such analysis is outside the scope of this work, but future papers will
present results from 2-D decompositions of the CGS images.
\subsection{Fourier Analysis}
In addition to the geometric method described above, we also derive bar
properties using the radial profiles of the relative amplitude of the $m$ = 2
Fourier mode ($I_2/I_0$) and its associated phase angle ($\phi_2$). As
before, we work with the $I$-band images. Bars are usually associated with
the first local maximum in the $I_2/I_0$ profile, where the bar/interbar
contrast is the largest, over an extended region where $\phi_2$ remains
approximately constant. Subsequent maxima in the $I_2/I_0$ profile, if
present, trace spiral arms or ring structures, but in these instances
$\phi_2$ varies with radius. Spiral arms always produce varying phase angles,
and thus the region where they dominate can be easily excluded from the bar
size measurement.
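For reference, the relative amplitude and phase of a single Fourier mode can
be obtained from the intensity sampled at equally spaced azimuthal angles
along an isophote; the following sketch (ours, for illustration only)
implements the standard expansion
$I(\theta) = I_0 + \sum_m I_m \cos[m(\theta-\phi_m)]$:
\begin{verbatim}
import numpy as np

def fourier_mode(intensity, m=2):
    """Relative amplitude I_m/I_0 and phase angle phi_m (degrees) of
    the m-th Fourier mode of an azimuthal intensity profile."""
    intensity = np.asarray(intensity, float)
    n = len(intensity)
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    i0 = intensity.mean()
    a = 2.0 / n * np.sum(intensity * np.cos(m * theta))
    b = 2.0 / n * np.sum(intensity * np.sin(m * theta))
    amp = np.hypot(a, b) / i0
    phi = (np.rad2deg(np.arctan2(b, a)) / m) % (360.0 / m)
    return amp, phi
\end{verbatim}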
Adopting a procedure similar to that used by \citet{ague09}, we define the bar
to be the region wherein the maximum relative $m = 2$ Fourier amplitude
$({I_2/I_0})_{\rm max} > 0.2$ and the phase angle remains constant to
$\Delta{\phi_2} < 20^{{\circ}}$. We set the inner boundary of the bar to be
the first data point outside of 3 times the seeing radius in which $I_2/I_0 >
0.2$. Past the peak, we
\begin{figure*}[t]
\centerline{\psfig{file=table4_short.ps,width=16.5cm,angle=0}}
\end{figure*}
\begin{figure*}[t]
\centerline{\psfig{file=fig11.eps,width=17.5cm,angle=0}}
\figcaption[fig11.ps]{
Illustration of how we measure the strength of the spiral arms using
the Fourier method. Left: star-cleaned $I$-band image of NGC~5247; the
size of the image is $\sim$7\farcm3$\times$7\farcm3. Right: radial
profiles of PA, $e$, $I_2/I_0$, and $\phi_2$. The solid vertical lines
mark the inner and outer boundaries of the disk-dominated region, which
is defined to be outside the central bulge or bar component, but inside the
radius where the isophotal intensity is 3 $\sigma$ above the sky background.
The corresponding isophotal ellipses are overplotted on the left-hand image.
The horizontal dashed lines in the PA and $e$ panels denote the characteristic
values of the galaxy.
\label{figure:armexamp}}
\end{figure*}
\vskip 0.3cm
\noindent
designate the radius where $I_2/I_0 =
({I_2/I_0})_{\rm max}-0.1$ as the outer boundary of the bar region. In the
event that there is a secondary maximum and the local minimum between the two
peaks exceeds $({I_2/I_0})_{\rm max} - 0.1$, we set the position of the local
minimum to be the outer boundary of the bar region.
We assign $({I_2/I_0})_{\rm max}$ to be the bar strength, with the uncertainty
set by the statistical error derived from the Fourier decomposition process.
As the Fourier method is minimally affected by spiral arms, we simply set the
bar size to be equal to the radius of its outer boundary; its associated
uncertainty is the semi-major axis range between the outer boundary and the
radius where $I_2/I_0 = ({I_2/I_0})_{\rm max} - 0.05$. The Fourier analysis
is performed on the isophotes extracted in the third step of the ellipse
fitting, where the geometric parameters were all fixed to those of the
outermost isophote (see Section 4.1). For simplicity, we just assume
that the disk is purely circular in its face-on view, so the semi-major
axes of the isophotal ellipses are the radii of the circles. The size of the
bar is denoted by the semi-major
\begin{figure*}[t]
\centerline{\psfig{file=fig12.eps,width=17.5cm,angle=0}}
\figcaption[fig12.ps]{
Illustration of how we measure the strength of the spiral arms using
the structure map. Left: star-cleaned composite color image of
NGC~5247; the size of the image is $\sim$7\farcm3$\times$7\farcm3.
Right: structure map of the star-cleaned $B$-band image. The two
overplotted ellipses mark the inner and outer boundaries where the spiral
arms populate. After rejecting the masked objects, we use the rms of the
pixels within these two ellipses to estimate the strength of the spiral arms.
(A color version of this figure is available in the online journal.)
\label{figure:struexamp}}
\end{figure*}
\vskip 0.3cm
\noindent
axis of the particular isophote that encloses
the bar region, which is actually the radius of the circle that passes right
through the end of the bar in a face-on view. Under this simplified
assumption, the bar size is already deprojected. The characteristic phase
angle of the bar is the average $\phi_2$ within the bar region, with
$\Delta \phi_2$ as the uncertainty. The two bottom-right panels of
Figure~\ref{figure:barexp} show our Fourier technique applied to NGC~7513.
Fourier analysis offers a very useful approach to studying bars. It not only
provides another independent method to identify and quantify bars, but also
recovers bars missed by the geometric method in galaxies with internal
structure too complex to yield an unambiguous bar signature in their $e$ and
PA profiles. In fact, the bar parameters derived from these two methods
usually agree quite well. Figure~\ref{figure:barcmp} (left panel)
compares the bar sizes measured from the ellipse method ($R_{\rm bar}^{\rm
Ellipse}$; deprojected) versus those measured from the Fourier method
($R_{\rm bar}^{\rm Fourier}$). Overall, there is a good correlation, but some
glaring outliers stand out. Although our analysis is done in the $I$ band,
dust extinction can still be significant in some galaxies. The dust extinction
features near the central region of a galaxy tend to trick the Fourier method
by producing a false peak in $I_2/I_0$ with roughly constant $\phi_2$; this
yields an unusually compact, incorrect bar size. The Fourier method can also
be unreliable for weak bars embedded in disks with strong spiral arms. Under
these circumstances, the relatively weak local peak of the bar in the
$I_2/I_0$ profile can be overshadowed by a stronger peak generated by the
spiral arms, leading to an overestimate of the bar size. The position angle
of the bar correlates strongly with $\phi_2$ (Figure~\ref{figure:barcmp},
right panel). Ideally, $\phi_2 = 180^{\circ} - {\rm PA}$. However, when the PA
of the bar approaches $0^{\circ}$ or $180^{\circ}$, the corresponding phase
angle can have values similar to the PA; this is responsible for the points
lying on the lower-left and upper-right portions of the figure. Dust
in the central regions of the galaxy further contributes to the scatter.
\subsection{Final Bar Classification}
Since neither of the methods discussed above is foolproof, we use both to
assign the final bar classification to the galaxies in CGS. If a galaxy is
not classified as barred in both the geometric and Fourier analysis, it is
labeled unbarred. If the galaxy is classified as barred by only one method, we
call it ``possibly'' barred. If both methods consider a galaxy barred, we
check whether the derived bar sizes are consistent between them. We only
classify it as definitely barred if the bar sizes from the two methods differ
by less than 10$^{\prime\prime}$; if they disagree by more than 10$^{\prime\prime}$, we call it
``possibly'' barred.
For the ``possibly'' barred galaxies, we visually examine their $I$-band images
and further scrutinize their geometric and Fourier profiles to check whether
we are being misled by internal structural complexities such as spiral arms or
dust features. If any of the measurements from either of the two methods is
suspect, we manually set the inner and outer boundaries of the bar region and
redo the measurements. Not all ambiguous cases can be resolved, and in our
final classification we continue to flag their bar status as uncertain. For
our final measurement of the bar size, we usually give higher priority to the
results derived from the geometric method, which generally gives more accurate
sizes than those derived from the Fourier method. The definition of the bar
size in the Fourier method is of limited utility because it is largely
arbitrary and more prone to being influenced by the presence of spiral arms.
Table~4 summarizes the final bar classification and some
basic bar parameters derived from our analysis. Among the 501 disk galaxies
($T \geq -3$) in the final catalog, 44 (9\%) are deemed definitely barred, 136
(27\%) possibly barred, and 321 (64\%) unbarred. As bar identification
is uncertain in highly inclined galaxies, we reexamine the statistics in a
subsample restricted to have ellipticities smaller than $e_{\rm gal} = 0.6$.
As expected, the bar fraction increases. Out of 387 disk galaxies, 173
($45\%$) are barred and 214 ($55\%$) are unbarred. A more detailed comparison
between our results and those in the literature will be deferred to a future
paper.
\vskip 1cm
\begin{figure*}[t]
\centerline{\psfig{file=table5_short.ps,width=16.5cm,angle=0}}
\end{figure*}
\section{Spiral Arms}
We provide two quantitative measurements (Table~5) that
can be used to assess the presence and strength of spiral arms in galaxies.
The analysis is applied uniformly to all non-elliptical galaxies in the
sample, including disk galaxies traditionally deemed to lack spiral arms, such
as S0s.
We perform a simple measurement of the average strength of $I_2/I_0$ in the
disk region outside the central bulge and the bar. Spiral arms are the main
contributor to any significant $m=2$ mode in this region\footnote{The $m=2$
mode is most sensitive to systems with grand design, two-arm spirals. However,
flocculent or multiple-arm spirals still exhibit significant amplitude in the
$m=2$ mode. We defer a full treatment of spiral arms, including exploration of
high-order modes, to a separate paper.}. If neither a featureless, classical
bulge nor a bar is present, the minimum inner boundary for our calculation is
set to 3 times the seeing radius. For barred galaxies, the inner boundary is
naturally set to the bar radius, which almost always lies exterior to the
bulge. For unbarred galaxies with classical bulges, we define the inner
boundary to be the radius where $e > 0.2$, beyond which the disk usually
dominates over the bulge. This criterion fails for face-on galaxies with very
weak spiral arms and classical bulges, because the disk becomes
indistinguishable from the bulge on the basis of its ellipticity alone.
Fortunately, under these circumstances both the bulge and the disk contribute
little to $I_2/I_0$ anyway, and it makes little difference whether the bulge
is excluded or not. The outer boundary is the radius where the isophotal
intensity reaches $3 \, \sigma$ above the background in the $I$-band image; we
apply the same boundary for the other filters. We then calculate a
characteristic value of $I_2/I_0$ by averaging its profile between the inner
and outer boundaries. We illustrate our methodology in
Figure~\ref{figure:armexamp}, applied to the Sbc galaxy NGC~5247.
Our second method makes use of the structure maps (Paper~I) to estimate the
strength of the spiral features (Figure~\ref{figure:struexamp}). After masking
out the field stars and background galaxies, we compute the standard deviation
of all the remaining pixels within the inner and outer boundaries of the
disk-dominated region, as determined above. The agreement between the two
different measurements is not good, as can be seen in
Figure~\ref{figure:armcmp}, where we plot $\left<I_2/I_0\right>$ against
$\sigma_{\rm s}$ for all the filters. The two parameters trace structures on
different scales. The structure map optimally filters spatial features on the
scale of the point-spread function. It effectively highlights features such as
dust lanes and thin arms, but it is not very sensitive to smooth and wide
spiral arms, which can be better probed via $\left<I_2/I_0\right>$.
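The $\sigma_{\rm s}$ statistic amounts to an rms over an elliptical annulus
of the structure map; a minimal sketch (ours; the actual measurement uses
the boundaries determined above and the object masks from the survey
pipeline):
\begin{verbatim}
import numpy as np

def arm_strength_sigma(struct_map, mask, xc, yc, r_in, r_out,
                       e=0.0, pa=0.0):
    """rms of unmasked structure-map pixels inside an elliptical
    annulus between r_in and r_out (pa in degrees)."""
    y, x = np.indices(struct_map.shape)
    dx, dy = x - xc, y - yc
    t = np.deg2rad(pa)
    # Rotate to the disk frame and stretch the minor axis by 1/(1-e).
    u = dx * np.cos(t) + dy * np.sin(t)
    v = (-dx * np.sin(t) + dy * np.cos(t)) / (1.0 - e)
    rad = np.hypot(u, v)
    sel = (rad >= r_in) & (rad <= r_out) & ~mask
    return struct_map[sel].std()
\end{verbatim}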
\section{Lopsidedness}
The relative amplitude of the $m$ = 1 Fourier mode is widely used to study the
lopsidedness of galactic stellar \citep{rixzar95, zari97, bour05, joco08,
reic08} and gaseous \citep[e.g.,][]{vaneymeren11} disks. This approach is
well-defined, quantitative, and relatively straightforward to implement, and
hence can be applied to study large samples of galaxies. A lopsided disk
stands out as a region of enhanced $I_1/I_0$ and roughly constant phase angle
$\phi_1$. A one-arm spiral also exhibits a large $I_1/I_0$, but $\phi_1$
increases monotonically as a function of radius.
Our method to measure lopsidedness differs somewhat from that used in previous
works. \citet{rixzar95} perform a bulge-to-disk decomposition of the surface
brightness profile to determine the scale length of the disk, and then compute
the average relative Fourier amplitude between 1.5 and 2.5 scale lengths. As
we do not yet have robust structural decompositions for the entire CGS sample
(this work is in progress), we resort to a simpler strategy, one based on the
expectation that the lopsided portion of the disk should be characterized by a
roughly flat $\phi_1$ radial profile. Through careful experimentation
\begin{figure*}[t]
\centerline{\psfig{file=fig13.eps,width=17.5cm,angle=0}}
\figcaption[fig13.ps]{
Comparison between the two measures of the spiral arms
($\left<I_2/I_0\right>$ and $\sigma_{\rm s}$) for \emph{BVRI} filters. The
correlation is very weak or absent, because the two measures essentially probe
structures on quite different scales. $\sigma_{\rm s}$ is mainly determined by
structures with typical scales of the point-spread function, while
$\left<I_2/I_0\right>$ is more sensitive to more extended, large-scale
features coherent over significant portions of the spiral arms.
\label{figure:armcmp}}
\end{figure*}
\noindent
with a
number of galaxies with prominent lopsided disks, we find that the lopsided
region can be effectively isolated by requiring that the phase angle be
constant to within $\Delta \phi_1 \leq 70^{\circ}$. We
apply this criterion to the phase angle profile of the $I$-band image to
define the inner and outer radii of the lopsided region, and then adopt these
values for the other filters. For each filter, the lopsidedness is the
average value of $I_1/I_0$ within that region, with the standard deviation as
its associated uncertainty. The characteristic value of $\phi_1$ and its error
are calculated similarly. Figure~\ref{figure:lopexamp} illustrates our
method on the Scd galaxy NGC~7070.
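Operationally, this amounts to locating the longest radial run over which
$\phi_1$ stays within a $70^{\circ}$ window and averaging $I_1/I_0$ there; a
sketch (ours, ignoring phase wrap-around):
\begin{verbatim}
import numpy as np

def lopsidedness(i1_i0, phi1, window=70.0):
    """Average I_1/I_0 over the longest radial segment in which the
    phase angle phi1 (degrees) varies by less than `window`."""
    i1_i0, phi1 = np.asarray(i1_i0, float), np.asarray(phi1, float)
    n = len(phi1)
    best_s, best_e, s = 0, 1, 0
    for e in range(1, n + 1):
        while np.ptp(phi1[s:e]) > window:
            s += 1
        if e - s > best_e - best_s:
            best_s, best_e = s, e
    seg = i1_i0[best_s:best_e]
    return seg.mean(), seg.std()
\end{verbatim}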
Lopsidedness measurements for the entire sample are given in
Table~6\footnote{We have flagged the galaxies whose
lopsidedness measurements may be unreliable because of contamination
by bright stars or excessive crowding by field stars.}.
Table~7 summarizes the
frequency of galaxies with significant lopsidedness, defined as
$\left<I_1/I_0\right>_I \ge 0.2$ \citep[e.g.,][]{zari97}, for the
subsample of 350 disk galaxies with $e_{\rm gal} \le 0.6$ for which robust
measurements could be made. Consistent with previous studies, the fraction
of galaxies with significant lopsidedness is high, and the frequency increases
substantially in late-type galaxies.
\section{Data Verification}
\subsection{Internal Comparison}
\begin{figure*}[t]
\centerline{\psfig{file=fig14.eps,width=17.5cm,angle=0}}
\figcaption[fig14.ps]{
Illustration of how we measure lopsidedness. Left:
star-cleaned $I$-band image of NGC~7070; the size of the image is
$\sim$3\farcm6$\times$3\farcm6. Right: radial profiles of PA, $e$,
$I_1/I_0$, and $\phi_1$. The horizontal dashed lines in the PA and $e$ panels
denote the characteristic values of the galaxy. The solid vertical lines, and
the corresponding isophotal ellipses overplotted in the left-hand image, mark
the inner and outer boundaries of the region used to compute the lopsidedness,
which we define to be that in which the radial variation of $\phi_1$ is
smaller than $70^\circ$. The lopsidedness is measured simply by averaging
$I_1/I_0$ within this region.
\label{figure:lopexamp}}
\end{figure*}
\vskip 0.3cm
\begin{figure*}[t]
\centerline{\psfig{file=table6_short.ps,width=16.5cm,angle=0}}
\end{figure*}
Several galaxies were observed more than once during different nights
throughout the survey. Although only the best images are included in the
final CGS catalog, the repeat observations, which were reduced and analyzed in
the same manner as the rest, afford an opportunity to assess the accuracy of
our calibration methods and the reliability of the parameter measurements.
Table~8 lists the galaxies that have pairs of repeat
observations useful for internal comparison. Note that for this exercise we
only select objects that have reasonably good data. Many of the duplicate
observations were taken precisely because the original observation was deemed
to be of exceptionally low quality, either because of weather conditions
(bad seeing, excessive cloud cover) or technical problems (poor telescope
focus, tracking errors). Because one of the observations in the comparison
pair is---by definition---suboptimal, the following assessment, in some sense,
gives an overly conservative estimate of the magnitude of internal errors.
Figure~\ref{figure:prof_diff} compares the surface brightness profiles for
\begin{figure*}[b]
\centerline{\psfig{file=table9.ps,width=17.5cm,angle=0}}
\end{figure*}
\vskip 0.3cm
\psfig{file=table7.ps,width=8.75cm,angle=0}
\noindent
galaxies observed on different nights. In most cases, they agree quite well.
The weighted average of the profile differences (dashed line, calculated by
averaging the profile differences weighted by the corresponding error at
each data point) resides well inside the formal $1 \, \sigma$ photometric
uncertainty \citep{noovan07}. This suggests that
our photometric errors are robust, for both the photometric and non-photometric
observations. A few objects (NGC 1374, 2196, 6810, 6861, 7590) show slightly
larger, but by no means alarming, discrepancies. Two types of differences in
profile {\it shape}\ can be seen. The innermost portions of the profiles
often show systematic deviations, sometimes as large as $\sim 0.5$ mag~$\rm
arcsec^{-2}$. This effect can be entirely attributed to mismatches in seeing,
but is well confined within $\sim$3 times the radius of the seeing disk. This
is the reason we restrict all of our scientific analysis to radii beyond this limit.
Additionally, many of the profiles show some level of systematic deviation at
large radii. This most likely arises from errors in sky subtraction. For most
objects, the deviations occur at the level of $\sim 0.2$ mag~$\rm arcsec^{-2}$,
but they lie well within the error bars of the individual isophotal
intensities, which again indicates that our error budget is realistic. The
most extreme deviations occur in galaxies that are too extended
\vskip 0.3cm
\psfig{file=table8.ps,width=8.75cm,angle=0}
\vskip 0.5cm
\noindent
for standard
sky subtraction, for which we had to resort to an indirect estimate based on
profile fitting (Section~3). In these cases, the deviations in the outer
profiles may be as large as $\sim 0.5$ mag~$\rm arcsec^{-2}$. NGC~2434 is
such an example. However, our formal error bars appear to be realistic even in
these extreme situations.
Figure~\ref{figure:interpara} compares nine measured parameters derived from
the set of repeat observations. Observation 1 denotes the measurement with
better quality that has been adopted in the final database of the survey,
and Observation 2 gives the comparison measurement. We can see that overall
the agreement is quite good. Notable exceptions can be identified with
galaxies that have especially unreliable sky values, such as NGC~2434, which
is the most deviant outlier in the $R_{80}$ plot (this quantity is
\vskip 1.5cm
\begin{figure*}[t]
\centerline{\psfig{file=fig15_1.eps,width=17.0cm,angle=0}}
\figcaption[fig15.eps]{
Internal comparison of the surface brightness profiles for the
objects for which we have repeat observations. The data points are calculated
by subtracting one profile from another. The horizontal dashed line in each
panel is the weighted average of the residual data, and the two horizontal
dotted lines mark the 1 $\sigma$ photometric uncertainty of the two
observations.
\label{figure:prof_diff}}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}[t]
\centerline{\psfig{file=fig15_2.eps,width=17.0cm,angle=0}}
\figcaption[fig15.eps]{
continued
\label{figure:prof_diff}}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}[t]
\centerline{\psfig{file=fig15_3.eps,width=17.0cm,angle=0}}
\figcaption[fig15.eps]{
continued
\label{figure:prof_diff}}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}[t]
\centerline{\psfig{file=fig15_4.eps,width=17.0cm,angle=0}}
\figcaption[fig15.eps]{
continued
\label{figure:prof_diff}}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}[t]
\vbox{
\psfig{file=fig15_5.eps,width=17.0cm,angle=0}
\psfig{file=fig15_6.eps,width=17.0cm,angle=0}
}
\figcaption[fig15.eps]{
continued
\label{figure:prof_diff}}
\end{figure*}
\begin{figure*}[t]
\centerline{\psfig{file=fig16.eps,width=17.5cm,angle=0}}
\figcaption[fig16.eps]{
Internal comparison of derived parameters for objects with repeat
observations. Observation 1 denotes values we adopt in the survey, while
Observation 2 gives the reference values for internal comparison.
The dashed line denotes $y = x$.
\label{figure:interpara}}
\end{figure*}
\vskip 0.3cm
\begin{figure*}[t]
\centerline{\psfig{file=fig17.eps,width=17.5cm,angle=0}}
\figcaption[fig17.eps]{
External comparison of derived parameters between CGS and HyperLeda.
The full sample available for comparison is shown as black squares, while the
red open points mark the subset with the smallest measurement uncertainty. For
$I_{\rm tot}$, $B_{\rm tot}$, $B-V$, and $D_{25}$, the most reliable points are
those that were observed under photometric conditions, that have well-measured
sky values, and that do not suffer from contamination by nearby bright field
stars. For $e$ the red points highlight objects with minimal bright star
contamination, and for the PA we additionally require that $e \geq 0.3$.
The dashed line denotes $y = x$.
(A color version of this figure is available in the online journal.)
\label{figure:cingshype}}
\end{figure*}
\vskip 0.3cm
\begin{figure*}[t]
\centerline{\psfig{file=fig18_1.ps,width=17.0cm,angle=0}}
\figcaption[fig18.ps]{
Comparison between the surface brightness profiles derived from CGS
and SDSS. The SDSS data were analyzed in the same way as the CGS data,
and their photometric system was transformed to ours as described in
Section~10.3. Within each panel, the upper subpanel shows the two profiles,
truncated at the radius where the isophotal intensity is $1 \, \sigma$ above
the local sky background; the bottom subpanel shows the difference between the
CGS and SDSS profiles.
(A color version of this figure is available in the online journal.)
\label{figure:cingssdss}}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}[t]
\centerline{\psfig{file=fig18_2.ps,width=17.0cm,angle=0}}
\figcaption[fig18.ps]{
continued
\label{figure:cingssdss}}
\end{figure*}
\vskip 0.3cm
\noindent
sensitive
to data at large radii). The average differences and standard deviations of
the parameters plotted in Figure~\ref{figure:interpara} are listed in
Table~9. The average differences are quite close to 0,
and the scatter is small.
\subsection{External Comparison with HyperLeda}
Several of the global parameters measured in our sample have independent data
listed in HyperLeda\footnote{{\tt http://leda.univ-lyon1.fr}} \citep{patu03},
which we can use to perform an external comparison of our errors.
Figure~\ref{figure:cingshype} compares the following six parameters
between CGS and HyperLeda: total $I$-band magnitude ($I_{\rm tot}$), total
$B$-band magnitude ($B_{\rm tot}$), integrated $B-V$ color, isophotal
diameter at 25 $B$ mag~$\rm arcsec^{-2}$ ($D_{25}$), $e$, and PA.
The overall agreement is quite good for the integrated magnitudes, $B-V$
color, and $D_{25}$ diameters, especially for the red open points, which
represent galaxies that were observed under photometric conditions, that have
no contamination from nearby bright field stars, and that have reliable sky
values. The PA comparison improves dramatically after isolating the subset
with $e \geq 0.3$. When the galaxy is round, the PA is hard to determine
because the semi-major axis for any given isophote is ill-defined. In
addition, extreme outliers lying on the lower-right and upper-left corners of
the distribution can be attributed to the 180$^{\circ}$\ ambiguity for PAs close
to 0$^{\circ}$\ or 180$^{\circ}$. The ellipticities show by far the worst agreement.
The large scatter may be due in part to the fact that the HyperLeda values
pertain to measurements made at $\mu_B = 25$ mag~$\rm arcsec^{-2}$, whereas
ours are made at a significantly lower surface brightness threshold, at $\mu_B
\approx 27$ mag~$\rm arcsec^{-2}$. A more serious problem may be related to
inherent biases and errors that are known to plague the axial ratios
(ellipticities) contained in the HyperLeda database (see the Appendix of Ho 2007).
\subsection{External Comparison with SDSS}
Roughly 9\% of the CGS galaxies overlap with the Sloan Digital Sky Survey
\citep[SDSS;][]{york00, stou02}, which provides well-calibrated, uniform optical
images with a photometric accuracy of $2\%-3\%$. The SDSS images are
available in the \emph{ugriz} filters, but the $u$ and $z$ images have very low
S/N, and we concentrate our attention on the $g, r,$ and $i$ bands. We
analyze the SDSS data following exactly the same procedures applied to the
CGS. After registering the $g$ and $r$ images to the $i$ image, we extract
isophotal intensity profiles as described in Section~4.1. As the SDSS images
were observed in a drift-scan mode, they have both a very large field-of-view
and an exceptionally uniform background \citep{potr06}. This allows us to
determine accurate sky values using the method of \citet{noovan07}, as
described in Section~3. To convert the SDSS \emph{gri} photometry into our
standard \emph{BVRI} system, we use the transformation equations given in
\citet{jest05}:
\begin{eqnarray}
B & = & g + 0.39(g - r) + 0.21 \\
V & = & g - 0.59(g - r) - 0.01 \\
R & = & V - 1.09(r - i) - 0.22 \\
I & = & R - 1.00(r - i) - 0.21.
\end{eqnarray}
\bigskip
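These transformations are trivially scripted; for convenience, a direct
transcription (the function name is ours):
\begin{verbatim}
def sdss_to_bvri(g, r, i):
    """Jester et al. (2005) transformations quoted above (magnitudes)."""
    B = g + 0.39 * (g - r) + 0.21
    V = g - 0.59 * (g - r) - 0.01
    R = V - 1.09 * (r - i) - 0.22
    I = R - 1.00 * (r - i) - 0.21
    return B, V, R, I
\end{verbatim}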
Figure~\ref{figure:cingssdss} compares the surface brightness profiles from
CGS with those derived from SDSS, for a subset of four relatively small
galaxies (NGC~936, 1084, 1087 and 1090) that have well-measured sky values.
The profiles are truncated at the radius where the isophotal intensity is
$1 \, \sigma$ above the local sky value. It is clear that most of the profiles
agree well with each other. The absolute value of the average profile
differences is 0.24, 0.14, 0.08, and 0.14 mag~$\rm arcsec^{-2}$ for $B, V,
R,$ and $I$ bands, respectively. This level of discrepancy is not unexpected,
given that all four of these galaxies were observed under non-photometric
conditions in CGS, not to mention the additional uncertainties introduced by
the photometric transformation from the SDSS to the CGS system. This
comparison confirms that the basic reduction and calibration of the CGS
data are sound.
Despite the short exposures of the SDSS images (54~s), they have superior
background uniformity and better sky determination than CGS. These advantages
translate to better sensitivity in terms of surface brightness, by $\sim 0.4$,
0.2, 0.6, and 0.9 mag~$\rm arcsec^{-2}$ in the $B$, $V$, $R$, and $I$ bands,
respectively. However, the longer integration times, better seeing, and finer
pixel scale of the CGS images imply that they have much higher S/N and
sensitivity to compact structures compared to SDSS, typically by a factor of
$\sim4-5$.
\section{Summary}
We present a comprehensive isophotal analysis of optical (\emph{BVRI}) images
for a statistically complete, magnitude-limited sample of 605 bright, southern
galaxies, observed as part of the Carnegie-Irvine Galaxy Survey (CGS). We
discuss our strategy for determining the sky level and its error. It is
challenging to achieve very accurate sky subtraction with our images because
the CGS galaxies are relatively large and the background suffers from
low-level non-uniformities due to residual flat-fielding errors. Nevertheless,
crosschecks with internal and external data indicate that our calibration
and sky subtraction strategies are robust, and that our quoted measurement
uncertainties are sound.
This paper focuses on the derivation of radial profiles of surface brightness,
color, and various geometric parameters that characterize the shape and
orientation of the isophotes. We construct composite brightness profiles as
a function of Hubble type to highlight statistical trends. Non-exponential
disks are seen in many S0 and spiral galaxies. We perform a Fourier analysis
of the isophotes to characterize their non-axisymmetric deviations from pure
ellipses. The relative amplitude of the $m = 1$ mode effectively identifies
lopsided structures in the light distribution, which we find to be common in
our sample, especially among late-type galaxies. Bars and spiral arms, by
contrast, are best revealed by the relative amplitude of the $m = 2$ Fourier
mode. We present a uniform set of quantitative measurements of bar size and
bar strength, spiral arm strength, and lopsidedness amplitudes.
Forthcoming papers will utilize the databases assembled here and in Paper~I
to explore a number of scientific issues, including the following.
\begin{enumerate}
\item{Statistics of bars, bar properties, and their possible connection to
spiral arms.}
\item{Incidence of lopsidedness and its dependence on global galaxy and
environmental parameters.}
\item{Disk profiles, truncations, and correlations with color gradients.}
\end{enumerate}
\acknowledgements
We thank the referee for a prompt and helpful review of this manuscript.
This work was supported by the Carnegie Institution for Science (L.C.H.),
the UC Irvine School of Physical Sciences (A.J.B.), the China Scholarship
Council (Z.-Y.L.), and the Plaskett Fellowship of the Herzberg Institute of
Astrophysics, National Research Council of Canada (C.Y.P.). Z.-Y.L. is
grateful to Professor X.-B. Wu of the Department of Astronomy in Peking University
for his support and helpful suggestions on this project. We made use of
HyperLeda and the NASA/IPAC Extragalactic Database (NED), which is operated by
the Jet Propulsion Laboratory, California Institute of Technology, under
contract with the National Aeronautics and Space Administration.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan
Foundation, the Participating Institutions, the National Science Foundation,
the U.S. Department of Energy, the National Aeronautics and Space
Administration, the Japanese Monbukagakusho, the Max Planck Society, and the
Higher Education Funding Council for England. The SDSS Web site is
{\tt http://www.sdss.org}.
\clearpage
\section{Introduction}
Any smooth projective variety $X$ comes equipped with the tangent bundle $T_X$, which is therefore often used in classification problems for algebraic varieties.
One of the most important results in this direction is Mori's solution of the Hartshorne-Frankel conjecture characterizing projective spaces as the only smooth projective varieties with ample tangent bundle \cite{Mori1}. As a generalization of Mori's result, F. Campana and T. Peternell conjectured that the only complex smooth Fano varieties with nef tangent bundle are rational homogeneous \cite{CP1}. We call it the {\it CP-conjecture} for short. The CP-conjecture has been verified up to dimension five and for certain special classes of varieties. We refer the reader to \cite{MOSWW} and \cite{Kane2}.
Based on the fact that $T_X$ is the normal bundle of the diagonal $\Delta_X$ in the product $X\times X$, B. Lehmann and C. Ottem recently studied how the geometry of $X$ is reflected in the positivity properties of $\Delta_X$ in their paper \cite{LO}. For example, if $T_X$ is nef, then so is the diagonal $\Delta_X$ as a cycle on $X\times X$. However, the converse is not true in general. For instance, any fake projective space, which is a smooth projective variety with the same Betti numbers as ${\mathbb P}^n$ but not isomorphic to ${\mathbb P}^n$, has nef diagonal. This means that the class of varieties with nef diagonal is strictly larger than that of varieties with nef tangent bundle. The nefness of the diagonal imposes restrictions on the structure of varieties. For instance, if $X$ is a projective variety with nef diagonal, then every (possibly higher-codimensional) pseudoeffective cycle is nef (see \cite[Theorem 1.4, Proposition~4.1]{LO}). It follows that every extremal contraction of $X$ is of fiber type (Proposition~\ref{prop:bir}). Moreover, if the diagonal $\Delta_X$ is nef and big as a cycle on $X\times X$, then the Picard number of $X$ is one (Proposition~\ref{prop:nefbig:Pic}).
One of the purposes of this paper is to study the nefness of the diagonal for complete intersections of hypersurfaces and smooth del Pezzo varieties:
\begin{theorem}[Theorem~\ref{them:ci}, Theorem~\ref{thm:dPnef}]\label{MT}
Let $X$ be a smooth complex projective variety. Assume that the diagonal $\Delta_X$ is nef as a cycle on $X \times X$. Then the following holds.
\begin{enumerate}
\item If $X$ is a complete intersection of hypersurfaces, then $X$ is a projective space, a quadric or an odd-dimensional complete intersection of two quadrics.
\item If $X$ is a del Pezzo variety, then $X$ is one of the following:
\begin{enumerate}
\item an odd-dimensional complete intersection of two quadrics,
\item the Grassmannian $G(2, {\mathbb C}^5)$,
\item a $3$-dimensional linear section of the Grassmannian $G(2, {\mathbb C}^5) \subset {\mathbb P}^9$ embedded via the Pl\"ucker embedding,
\item ${\mathbb P}^1\times {\mathbb P}^1 \times {\mathbb P}^1$,
\item ${\mathbb P}^2 \times {\mathbb P}^2$ or
\item ${\mathbb P}(T_{{\mathbb P}^2})$.
\end{enumerate}
\end{enumerate}
\end{theorem}
All varieties in the above theorem are known to have nef diagonal, except possibly odd-dimensional complete intersections of two quadrics (see Remark~\ref{rem:22} and Question~\ref{ques:22}). Moreover this theorem yields the following:
\begin{corollary}[Corollary~\ref{cor:ci:big}, Corollary~\ref{cor:dp:big}] Let $X$ be a smooth complex projective variety. Assume that the diagonal $\Delta_X$ is nef and big as a cycle on $X \times X$. Then the following holds.
\begin{enumerate}
\item $X$ is a complete intersection of hypersurfaces if and only if $X$ is a projective space or an odd-dimensional quadric.
\item $X$ is a del Pezzo variety if and only if $X$ is a $3$-dimensional linear section of the Grassmannian $G(2, {\mathbb C}^5) \subset {\mathbb P}^9$.
\end{enumerate}
In both cases, $X$ is a projective space or a fake projective space.
\end{corollary}
As a byproduct of Theorem~\ref{MT} in a different direction, we may conclude that the CP-conjecture holds for complete intersections of hypersurfaces (Corollary~\ref{cor:CPci}) and del Pezzo varieties (Corollary~\ref{cor:CP}). For complete intersections, it was first proved by R. Pandharipande \cite{Pan} (see also \cite{Fur}). On the other hand, it is easy to check that the CP-conjecture holds for almost all smooth del Pezzo varieties. However, to the best of our knowledge, there is no literature stating that the CP-conjecture holds for del Pezzo varieties of degree one and two.
We also study spherical varieties with nef diagonal. The following directly follows from \cite{Li}:
\begin{theorem}[Proposition~\ref{prop:Sph}]\label{them:spherical} For any smooth projective spherical variety $X$, the following are equivalent.
\begin{enumerate}
\item The diagonal $\Delta_X$ is nef.
\item For any $k$, the pseudoeffective cone of $k$-cycles coincides with the nef cone of $k$-cycles.
\end{enumerate}
\end{theorem}
As an example of spherical varieties, we prove the odd symplectic Grassmannian of lines has non-nef diagonal provided that it is not homogeneous (Proposition~\ref{prop:OSGL}). We also give explicit descriptions of the nef cone and the pseudoeffective cone of cycles for a five-dimensional odd symplectic Grassmannian in Proposition~\ref{prop:dp:cone}. The result gives an answer to a question on cones of cycles of horospherical varieties raised by Q. Li (Remark~\ref{rem:Li}).
\section{Preliminaries}
Throughout this paper, we work over the field of complex numbers. We employ the notation of \cite{Har}. For a smooth complex projective variety $X$, we denote by $b_i(X)$ the $i$-th Betti number of the complex manifold $X^{an}$ associated to $X$.
For a set of positive integers ${\bf d}=(d_1, d_2, \ldots, d_r)$, a {\it complete intersection of type ${\bf d}$} means a complete intersection of hypersurfaces of degrees $d_1, \ldots, d_r$.
\subsection{Cycles and cones}
For the intersection theory, we refer the reader to \cite{Ful}. Let $X$ be a smooth projective variety of dimension $n$. A {\it $k$-cycle} on $X$ is a finite formal linear combination $\sum a_i [Z_i]$ where $Z_i$ are $k$-dimensional closed subvarieties of $X$ and $a_i$ are real numbers. The cycle $\sum a_i [Z_i]$ is {\it effective} if $a_i\geq 0$ for any $i$. The cycle $\sum a_i [Z_i]$ is {\it nef} if it has non-negative intersection numbers with all $(n-k)$-dimensional closed subvarieties. We denote by $A_k(X)=A^{n-k}(X)$ (resp. $N_k(X)=N^{n-k}(X)$) the group of $k$-cycles with real coefficients modulo rational equivalence (resp. numerical equivalence) on $X$. We may take the intersection product of cycles on the Chow ring $A(X):=\bigoplus A_k(X)$ or $N(X):=\bigoplus N_k(X)$. Since numerical equivalence is coarser than rational equivalence, we have a surjective ring homomorphism $A(X)\to N(X)$.
By \cite[Example~19.1.4]{Ful}, $N_k(X)$ is a finite-dimensional ${\mathbb R}$-vector space. We may define the degree homomorphism $\operatorname{deg}: {A}_0(X)\to {\mathbb R}$ (see \cite[Definition~1.4]{Ful}). For a zero-cycle $\sum a_i [P_i]$ ($P_i \in X$), $\operatorname{deg} \sum a_i [P_i]=\sum a_i$. For cycles $Z_i \in A^{k_i}(X)$ $(\sum_{i=1}^r k_i=n)$, their intersection product $Z_1 \cdot Z_2 \cdots Z_r$ is a zero-cycle. Then the intersection number of the $Z_i$ is the degree $\operatorname{deg} (Z_1 \cdot Z_2 \cdots Z_r)$.
The {\it pseudoeffective cone} $\overline{{\rm Eff}}_k(X)=\overline{{\rm Eff}}^{n-k}(X)$ in $N_k(X)$ is the closure of the cone generated by effective $k$-cycles on $X$. A cycle in the pseudoeffective cone is called a {\it pseudoeffective cycle}. A {\it $k$-cycle} $Z$ on $X$ is {\it big} if $Z$ lies in the interior of $\overline{{\rm Eff}}_k(X)$. A {\it $k$-cycle} $Z$ on $X$ is {\it universally pseudoeffective} if $\pi^{\ast}Z$ is pseudoeffective for any morphism of projective varieties $f: Y \to X$ with $Y$ smooth (see \cite[Section~1.3]{FL2}). The {\it nef cone} ${\rm Nef}_k(X)={\rm Nef}^{n-k}(X)$ in $N_k(X)$ is the cone generated by nef $k$-cycles on $X$. Remark that the nef cone ${\rm Nef}_k(X)$ is the dual of the pseudoeffective cone $\overline{{\rm Eff}}^{k}(X)$ via the intersection pairing.
Let $E$ be a vector bundle of rank $r$ on $X$. For each $k=0, 1, \ldots, r$, the {\it $k$-th Chern class} $c_k(E) \in A^k(X)$ is defined by the relations
$$\sum_{k=0}^r(-1)^k\pi^{\ast}c_k(E)\xi^{r-k}=0~\text{and}~c_0(E)=1,
$$
where $\xi$ is the divisor associated to the tautological line bundle ${\mathcal{O}}_{{\mathbb P}(E)}(1)$ and $\pi: {\mathbb P}(E) \to X$ is the natural projection. The {\it Chern polynomial} of $E$ is defined by
$$
c_t(E):=c_0(E)+c_1(E)t+ \cdots+ c_r(E)t^r.
$$
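For instance (a standard example, included here for concreteness), the Euler sequence
$$0 \to {\mathcal{O}}_{{\mathbb P}^n} \to {\mathcal{O}}_{{\mathbb P}^n}(1)^{\oplus (n+1)} \to T_{{\mathbb P}^n} \to 0$$
together with the Whitney formula gives $c_t(T_{{\mathbb P}^n})=(1+ht)^{n+1}$, where $h$ is the hyperplane class; in particular $\operatorname{deg} c_n({\mathbb P}^n)=n+1$, the topological Euler characteristic of ${\mathbb P}^n$.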
\subsection{Properties of varieties with nef diagonal}\label{subsec:nef}
For a smooth projective variety $X$, the diagonal $\Delta_X \subset X \times X$ is said to be {\it nef} (resp. {\it big}) if $\Delta_X$ is nef (resp. {big}) as a cycle on $X \times X$.
\begin{proposition}[{\cite[Theorem~1.4, Proposition~4.1]{LO}}]\label{prop:pseffcone} Let $X$ be a smooth projective variety with nef diagonal. Then every pseudoeffective class on $X$ is nef.
\end{proposition}
\begin{proposition}[{\cite[Proposition~8.1.12]{Ful}}]\label{prop:diag^2} For a smooth projective variety $X$, let $\Delta_X \subset X \times X$ be the diagonal of $X$. Then
$$ \operatorname{deg} \Delta_X^2= \operatorname{deg} c_n(X).
$$
\end{proposition}
\begin{remark}\label{rem:euler}
Let $X$ be a smooth complex projective variety. By the Gauss-Bonnet theorem (see for instance \cite[P. 70, Chapter~I, Theorem~4.10.1]{Hir}), the degree of the top Chern class of $X$ is nothing but the topological Euler characteristic: $$\operatorname{deg} c_n(X)=\sum_{i} (-1)^{i}b_i(X).$$
\end{remark}
\begin{example}\label{ex:curve} Let $C$ be a smooth projective curve with nef diagonal. By Proposition~\ref{prop:diag^2}, $\operatorname{deg} c_1(C)=\operatorname{deg} \Delta_C^2 \geq 0$. This implies that the only smooth projective curves with nef diagonal are the projective line and elliptic curves.
\end{example}
\begin{proposition}[{\cite[Proposition~4.4]{LO}}]\label{prop:LO} Let $X$ be a smooth projective variety of dimension $n$ admitting a finite morphism $f :X \to {\mathbb P}^n$ of degree $d$. Suppose that $\operatorname{deg} c_n(X)>(n+1)d$. Then $\Delta_X$ is not nef.
\end{proposition}
\begin{proposition}\label{prop:bir} Let $X$ be a smooth projective variety with nef diagonal. Then any extremal contraction of $X$ is of fiber type.
\end{proposition}
\begin{proof} Let $f: X \to Y$ be an extremal contraction of birational type. By Proposition~\ref{prop:pseffcone}, a projective curve $C \subset X$ contracted by $f$ is a nef $1$-cycle. For an ample divisor $H$ on $Y$, $f^{\ast}H$ is nef and big. Thus there exist an ample ${\mathbb{Q}}$-divisor $A$ and an effective ${\mathbb{Q}}$-divisor $E$ such that $f^{\ast}H=A+E \in N^1(X)$. Then we have $(A+E).C\geq A.C >0$, which contradicts $f^{\ast}H.C=0$.
\end{proof}
\begin{proposition}[{\cite[Corollary~4.2.(1)]{LO}}]\label{prop:nefbig:Pic} Let $X$ be a smooth projective variety with nef and big diagonal. Then $N^1(X) \cong {\mathbb R}$.
\end{proposition}
\begin{definition} A {\it fake projective space} is a smooth projective variety with the same Betti numbers as ${\mathbb P}^n$ but not isomorphic to ${\mathbb P}^n$.
\end{definition}
\begin{proposition}[{\cite[Section~1]{LO}}]\label{prop:fps} A projective space and any fake projective space have nef and big diagonal.
\end{proposition}
\if0
\begin{lemma}[cf. {\cite[Lemma~4.3]{LO}}]\label{lem:fib} If a smooth projective variety $X$ admits a surjective morphism $f: X \to Y$ to a smooth projective variety $Y$ with negative top Chern class, then $\Delta_X$ is not nef.
\end{lemma}
\begin{proof} Let $n$ and $m$ be the dimension of $X$ and $Y$ respectively. For an ample divisor $H$ on $X \times X$, there exists a positive integer $a$ such that $$(f \times f)_{\ast}(\Delta_X\cdot H^{n-m})=a\Delta_Y$$ in $A_m(Y\times Y)$. By the projection formula, we obtain $$\operatorname{deg} \bigl\{\Delta_X\cdot H^{n-m}\cdot (f \times f)^{\ast}\Delta_Y\bigr\}= a \operatorname{deg} \Delta_Y^2=a\operatorname{deg} c_m(Y) <0.$$
\end{proof}
\fi
\section{Complete intersections}\label{sect:ci}
In this section, we will study the case of complete intersections. The following two results are well known:
\begin{proposition}[{\cite[Theorem~3]{Az}}]\label{prop:euler} Let $X$ be an $n$-dimensional smooth projective complete intersection of type $(d_1, d_2, \ldots, d_r)$. Then the degree of the top Chern class $c_n(X)$ is given by
$$\operatorname{deg} c_n(X)=d_1d_2\cdots d_r\Biggl\{\sum_{i=0}^n (-1)^{n-i} \binom{n+r+1}{i} h_{n-i}(d_1,\ldots,d_r)\Biggr\},
$$
where $\displaystyle{h_k(d_1,\ldots,d_r):=\sum_{\substack{i_1+i_2+ \ldots + i_r=k\\ i_j \geq 0}} d_1^{i_1}d_2^{i_2}\cdots d_r^{i_r}}$.
\end{proposition}
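For later use, let us record the special case $r=1$: since $h_k(d_1)=d_1^k$, the sum in Proposition~\ref{prop:euler} can be evaluated in closed form,
$$
\operatorname{deg} c_n(X)=d_1\sum_{i=0}^n (-1)^{n-i}\binom{n+2}{i}d_1^{n-i}=\frac{(1-d_1)^{n+2}-1+(n+2)d_1}{d_1}.
$$
As a quick sanity check, for a plane curve of degree $d$ this gives $\operatorname{deg} c_1(X)=3d-d^2=2-2g$ with $g=\frac{(d-1)(d-2)}{2}$, as expected. This closed form is the identity used in the proofs of Proposition~\ref{prop:hyp} and Corollary~\ref{cor:even} below.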
\begin{definition}\label{def:eq} For a set of non-negative integers $(d_1, \ldots, d_r, n)$, we put
$$\chi(d_1, d_2, \ldots, d_r; n)=d_1d_2\cdots d_r\Biggl\{\sum_{i=0}^n (-1)^{n-i} \binom{n+r+1}{i} h_{n-i}(d_1,\ldots,d_r)\Biggr\}.
$$
\end{definition}
\begin{proposition}[{\cite[P. 146]{Az}}]\label{cor:formula} Under the notation of Definition~\ref{def:eq}, we have the formula
$$
\chi(d_1, d_2, \ldots, d_r; n)=d_1\chi(d_2, \ldots, d_r; n)-(d_1-1)\chi(d_1, d_2, \ldots, d_r; n-1).
$$
\end{proposition}
\begin{proposition}\label{prop:hyp} Let $X$ be an $n$-dimensional smooth projective hypersurface of degree $d \geq 3$. If $n$ is even, then the degree of the top Chern class $\operatorname{deg} c_n(X)$ is positive. If $n$ is odd, then $\operatorname{deg} c_n(X)$ is negative, except in the case $(n, d)=(1,3)$, where it vanishes.
\end{proposition}
\begin{proof} By Proposition~\ref{prop:euler}, we have an equality
$$d \operatorname{deg} c_n(X)=(1-d)^{n+2}-1+d(n+2).
$$ Then our claim follows from this equality.
\end{proof}
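For example, for a quartic surface ($n=2$, $d=4$) the above equality gives
$$\operatorname{deg} c_2(X)=\frac{(-3)^4-1+4\cdot 4}{4}=\frac{96}{4}=24,$$
which is indeed the topological Euler number of a K3 surface, in accordance with the Gauss--Bonnet theorem.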
\begin{proposition}\label{prop:ci} Let $X$ be an $n$-dimensional smooth projective complete intersection of type $(d_1, d_2, \ldots, d_r)$. Assume that $2 \leq r$, $2\leq d_1 \leq d_2 \leq \ldots \leq d_r$ and $3 \leq d_r$. If $n$ is even, then the degree of the top Chern class $\operatorname{deg} c_n(X)$ is positive; if $n$ is odd, then $\operatorname{deg} c_n(X)$ is negative.
\end{proposition}
\begin{proof} Let us first consider the case where $n=1$. In this case, by Proposition~\ref{cor:formula}, we have an equality
\begin{eqnarray}\label{eq:chi}
\chi(d_1, d_2, \ldots, d_r; 1)&=&d_1\chi(d_2, \ldots, d_r; 1)-(d_1-1)\chi(d_1, d_2, \ldots, d_r; 0) \\
&=&d_1\chi(d_2, \ldots, d_r; 1)-(d_1-1)d_1d_2\cdots d_r. \nonumber
\end{eqnarray}
When $r=2$, we have $\chi(d_1,d_2;1)=d_1\chi(d_2;1)-(d_1-1)d_1d_2$. Since it follows from Proposition~\ref{prop:hyp} that $\chi(d_2;1)$ is nonpositive, $\chi(d_1,d_2;1)$ is negative. By induction on $r$, the equality (\ref{eq:chi}) tells us that $\chi(d_1, d_2, \ldots, d_r; 1)$ is negative. Hence our assertion holds for $n=1$ and $r \geq 2$.
To prove our statement, we use induction on $n+r$. Remark that we have already dealt with the case where $n \geq 2$ and $r=1$ in Proposition~\ref{prop:hyp}. Hence if $n+r=3$, then our assertion holds. Put $m:=n+r$ and suppose the result is true for $m-1\geq 3$. By Proposition~\ref{cor:formula}, we have an equation
$$\chi(d_1, d_2, \ldots, d_r; n)=d_1\chi(d_2, \ldots, d_r; n)-(d_1-1)\chi(d_1, d_2, \ldots, d_r; n-1).
$$
By the induction hypothesis, if $n$ is even, then we see that $\chi(d_2, \ldots, d_r; n)>0$ and $\chi(d_1, d_2, \ldots, d_r; n-1)<0$. Thus $\operatorname{deg} c_n(X)=\chi(d_1, d_2, \ldots, d_r; n)$ is positive. If $n$ is odd, the same argument implies the negativity of $\operatorname{deg} c_n(X)=\chi(d_1, d_2, \ldots, d_r; n)$.
\end{proof}
\begin{corollary}\label{cor:even} Let $X$ be an $n$-dimensional smooth projective complete intersection of type $(d_1, d_2, \ldots, d_r)$. Assume one of the following holds:
\begin{enumerate}
\item $r=1$, $d_1 \geq 3$ and $(n,d_1)\neq (2,3)$, or
\item $2 \leq r$, $2 \leq d_1 \leq d_2 \leq \ldots \leq d_r$ and $3 \leq d_r$.
\end{enumerate}
If $n$ is even, then $\operatorname{deg} c_n(X)>(n+1)d_1d_2\cdots d_r$.
\end{corollary}
\begin{proof} As the first case, assume that $r=1$, $d_1 \geq 3$ and $(n,d_1)\neq (2,3)$. By Proposition~\ref{prop:euler}, we have an equality
$$\operatorname{deg} c_n(X)=\frac{(1-d_1)^{n+2}-1}{d_1}+n+2.
$$
Then it is straightforward to show that $\operatorname{deg} c_n(X)>(n+1)d_1$. We also remark that $\operatorname{deg} c_n(X)=(n+1)d_1$ provided that $r=1$ and $(n,d_1)= (2,3)$.
As the second case, let us assume that $2 \leq r$, $2 \leq d_1 \leq d_2 \leq \ldots \leq d_r$ and $3 \leq d_r$. We proceed by induction on $r$. When $r=2$, the previous argument implies that $\chi(d_2;n) \geq (n+1)d_2$, and it follows from Proposition~\ref{prop:ci} that $\chi(d_1, d_2; n-1) <0$. Hence, by Proposition~\ref{cor:formula},
$$\chi(d_1, d_2; n)=d_1\chi(d_2; n)-(d_1-1)\chi(d_1, d_2; n-1)>(n+1)d_1d_2.
$$ We may therefore assume that our claim holds for $r-1$. Then we have $$\chi(d_2, \ldots, d_r; n)>(n+1)d_2\cdots d_r,$$ and it follows from Proposition~\ref{prop:ci} that $\chi(d_1, d_2, \ldots, d_r; n-1)<0$. Thus Proposition~\ref{cor:formula} yields
$$
\chi(d_1, d_2, \ldots, d_r; n)>(n+1)d_1\cdots d_r.
$$
\end{proof}
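To illustrate the second case in the smallest instance, take a surface of type $(2,3)$ in ${\mathbb P}^4$, which is a K3 surface: the recursion of Proposition~\ref{cor:formula} gives
$$
\chi(2,3;1)=2\chi(3;1)-\chi(2,3;0)=0-6=-6, \qquad \chi(2,3;2)=2\chi(3;2)-\chi(2,3;1)=2\cdot 9+6=24,
$$
so that $\operatorname{deg} c_2(X)=24>(n+1)d_1d_2=18$, in agreement with the corollary.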
\begin{proposition}\label{prop:quad} Let $X$ be an $n$-dimensional smooth projective complete intersection of $r$ quadrics. Assume that $r \geq 3$. If $n$ is odd, then $\operatorname{deg} c_n(X)$ is negative. If $n$ is even and $(n, r) \neq (2,3)$, then $\operatorname{deg} c_n(X)>2^r(n+1)$.
\end{proposition}
\begin{proof}
For positive integers $n,r$, let us set
$$
b(n,r):=\frac{(-1)^n \operatorname{deg} c_n(X)}{2^r}.
$$
By Proposition~\ref{prop:euler} and Proposition~\ref{cor:formula}, we have a relation
\begin{equation}\label{eq:b}
b(n,r)=b(n,r-1)+b(n-1,r).
\end{equation}
It also follows from Proposition~\ref{prop:euler} that
$$b(n,1)=\frac{(-1)^{n}(2n+3)+1}{4}~~\mbox{and}~~ b(1,r)=r-2.$$
By a straightforward computation, we have
\begin{equation}\label{eq:b2}
b(n,2)=
\begin{cases}
\frac{n}{2}+1 & \text{if $n$ is even} \\
0 & \text{if $n$ is odd}
\end{cases}
\end{equation}
To prove the proposition, it is enough to show the following claim:
\begin{enumerate}
\item[(i)] If $r \geq 3$, then $b(n,r)>0$.
\item[(ii)] If $r \geq 3$, $n$ is even and $(n,r)\neq (2,3)$, then $b(n,r)>n+1$.
\end{enumerate}
From now on, we assume that $r \geq 3$. By the above equation (\ref{eq:b2}), $b(n,2)$ is non-negative. Thus it follows from (\ref{eq:b}) that
$$b(n,3)=b(n,2)+b(n-1,3)\geq b(n-1,3) \geq \ldots \geq b(1,3)=1.$$
Since we also see $b(1,r)=r-2\geq 1$, the above equation (\ref{eq:b}) yields $\rm (i)$.
Assume that $n \geq 2$ and $r \geq 3$. By (\ref{eq:b}) and $\rm (i)$, we have
$$b(n,r)=b(n,r-1)+b(n-1,r) >b(n, r-1).$$
If $n=2$, then for any $r \geq 4$, we have
$$
b(2,r)>b(2,3)=3=n+1.
$$
If $n=4$, then for any $r \geq 3$, we have
$$
b(4,r)\geq b(4,3)=6>n+1.
$$
By induction on $n$, if $n$ is even with $n \geq 6$, then for any $r \geq 3$,
\begin{eqnarray}
b(n,r)&\geq& b(n,3)=b(n,2)+b(n-1,3) \nonumber \\
&=&\frac{n}{2}+1+b(n-1,2)+b(n-2,3) \nonumber \\
&>& \frac{n}{2}+1 +(n-1)=\frac{3}{2}n>n+1. \nonumber
\end{eqnarray}
\end{proof}
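For the reader's convenience, the first few values of $b(n,r)$, computed directly from the recursion (\ref{eq:b}) together with the initial values $b(n,1)$ and $b(1,r)$ above, are
$$
\begin{array}{c|cccc}
b(n,r) & r=1 & r=2 & r=3 & r=4 \\ \hline
n=1 & -1 & 0 & 1 & 2 \\
n=2 & 2 & 2 & 3 & 5 \\
n=3 & -2 & 0 & 3 & 8 \\
n=4 & 3 & 3 & 6 & 14
\end{array}
$$
These reproduce the values $b(2,3)=3$ and $b(4,3)=6$ used in the proof.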
\begin{theorem}\label{them:ci}
Let $X$ be a smooth projective complete intersection of hypersurfaces. Assume that the diagonal $\Delta_X$ is nef. Then $X$ is a projective space, a quadric or an odd-dimensional complete intersection of two quadrics.
\end{theorem}
\begin{proof} Let $X$ be an $n$-dimensional smooth projective complete intersection of type $(d_1, d_2, \ldots, d_r)$. Assume moreover $X$ is not a projective space, a quadric or an odd-dimensional complete intersection of two quadrics.
We may assume that $n \geq 2$ and $d_i>1$. By Lemma~\ref{lem:except} below, we may also assume that $X$ is not a cubic surface, a $2$-dimensional complete intersection of type $(2,2,2)$, or an even-dimensional complete intersection of type $(2,2)$.
Then, applying Proposition~\ref{prop:ci}, Corollary~\ref{cor:even} and Proposition~\ref{prop:quad}, we see that
$$\operatorname{deg} c_n(X)<0 \quad\mbox{or}\quad \operatorname{deg} c_n(X)>d_1d_2 \cdots d_r(n+1).$$
If $\operatorname{deg} c_n(X)<0$, then Proposition~\ref{prop:diag^2} tells us $\operatorname{deg} \Delta_X^2=\operatorname{deg} c_n(X)<0$. Hence $\Delta_X$ is not nef. Suppose that $\operatorname{deg} c_n(X)>d_1d_2 \cdots d_r(n+1)$. For $X \subset {\mathbb P}^{n+r}$, let us consider a general projection $\pi: X \to {\mathbb P}^n$ from a linear subspace of dimension $r-1$. Then $\pi$ is a finite morphism of degree $d_1d_2 \cdots d_r$. Applying Proposition~\ref{prop:LO}, we see that $\Delta_X$ is not nef.
\end{proof}
\begin{lemma}\label{lem:except} Let $X$ be one of the following:
\begin{enumerate}
\item a cubic surface,
\item a $2$-dimensional complete intersection of type $(2, 2, 2)$ or
\item an even-dimensional complete intersection of type $(2,2)$.
\end{enumerate}
Then $\Delta_X$ is not nef.
\end{lemma}
\begin{proof} Any cubic surface has a $(-1)$-curve. Then it follows from Proposition~\ref{prop:pseffcone} that the diagonal of a cubic surface is not nef. If $X$ is a $2$-dimensional complete intersection of type $(2,2, 2)$, then it is a K3 surface. Hence $\Delta_X$ is not nef by \cite[Theorem~6.6]{LO}.
Let $X$ be a $2n$-dimensional complete intersection of type $(2,2)$. By \cite[Theorem~3.8]{R}, we may take $n$-planes $\Lambda_1, \Lambda_2 \subset X$ such that $\dim \Lambda_1\cap \Lambda_2=1$. Applying \cite[Lemma~3.10]{R}, we have $\operatorname{deg} \Lambda_1\cdot \Lambda_2=-1$. Thus $\Delta_X$ is not nef.
\end{proof}
\begin{remark}\label{rem:22} Any $(2n+1)$-dimensional smooth projective complete intersection $X$ of two quadrics has Betti numbers
\begin{equation}
\nonumber b_{2k}(X)=1;~~
b_{2k+1}(X)=
\begin{cases}
0 & \text{if $k\neq n$,}\\
2n+2 & \text{if $k=n$}.
\end{cases}
\end{equation}
In particular, any effective cycle on $X$ is nef, $\operatorname{deg} c_{2n+1}(X)=0$ and $X$ is not a fake projective space. Thus we cannot conclude the nefness/non-nefness of the diagonal $\Delta_X$ from the criteria in Section~\ref{subsec:nef}.
\end{remark}
\begin{question}\label{ques:22} Does any odd-dimensional complete intersection $X$ of two quadrics have nef diagonal?
\end{question}
We do not know the answer to this question even for the $3$-dimensional case (see \cite[Section~8.1]{LO}).
\begin{corollary}\label{cor:ci:big} Let $X$ be a smooth projective complete intersection of hypersurfaces. Then the diagonal $\Delta_X$ is nef and big if and only if $X$ is a projective space or an odd-dimensional quadric.
\end{corollary}
\begin{proof} Assume that the diagonal is nef and big and $X$ is not a projective space. By Theorem~\ref{them:ci}, $X$ is a quadric or an odd-dimensional complete intersection of two quadrics. If $X$ is a quadric, then the dimension must be odd (see for instance \cite[Section~7.1]{LO}). In that case, $X$ is a fake projective space. If $X$ is an odd-dimensional complete intersection of two quadrics, then the degree of the top Chern class of $X$ is zero. Since $\operatorname{deg} \Delta_X^2=\operatorname{deg} c_{\dim X}(X)=0$, this yields that $\Delta_X$ lies in the boundary of the pseudoeffective cone $\overline{{\rm Eff}}_{\dim X}(X \times X)$. Thus $\Delta_X$ is not big. Hence we see that $X$ is a projective space or an odd-dimensional quadric provided that the diagonal is nef and big. Conversely, it follows from Proposition~\ref{prop:fps} that a projective space and an odd-dimensional quadric have nef and big diagonal.
\end{proof}
\begin{corollary}\label{cor:CPci} Let $X$ be a smooth projective complete intersection of hypersurfaces. Assume that the tangent bundle $T_X$ is nef. Then $X$ is a projective space or a quadric.
\end{corollary}
\begin{proof} Assume that $X$ is neither a projective space nor a quadric. By Theorem~\ref{them:ci}, $X$ is an odd-dimensional complete intersection of two quadrics. We claim that the tangent bundle is not nef in this case. Here we give a sketch of the proof based on the same idea as in \cite{Pan}.
Let $X$ be a $(2n+1)$-dimensional complete intersection of two quadrics. Assume that the tangent bundle is nef. Since $X$ is covered by lines, we may take the family of lines $M$ on $X$ and its universal family $U$. We denote by $p: U \to M$ the universal morphism and by $q: U \to X$ the evaluation morphism with a fiber $F$. By construction, $p$ is a smooth morphism whose fibers are ${\mathbb P}^1$. On the other hand, the nefness of $T_X$ implies the smoothness of $q$. Then it is straightforward to check that $F$ is a $2(n-1)$-dimensional complete intersection of two quadrics. Then we have $$p_M(t)p_{{\mathbb P}^1}(t)=p_X(t)p_F(t),$$ where $p_{Z}(t):=\sum_ib_i(Z)(-t)^i $ is the Poincar\'e polynomial of a variety $Z$. However this contradicts the fact that $p_{{\mathbb P}^1}(t)=1+t^2$, $p_X(t)=\sum_{i=0}^{2n+1}t^{2i}-2(n+1)t^{2n+1}$ and $p_F(t)=\sum_{i=0}^{2(n-1)}t^{2i}$.
\end{proof}
\section{Weighted hypersurfaces}
To study del Pezzo varieties of degree one and two in the next section, we compute the degree of the top Chern class of weighted hypersurfaces.
All results of this section are classically well known, but we include proofs for the reader's convenience.
For a vector of positive integers ${\bf a}=(a_0, a_1, \ldots, a_m)$, let us denote by ${\mathbb P}({\bf a})$ the weighted projective space of type ${\bf a}$.
Let $X \subset {\mathbb P}({\bf a})$ be a smooth weighted hypersurface of degree $d$. Assume that $X$ is contained in the smooth locus of ${\mathbb P}({\bf a})$ and $m \geq 4$. By \cite[Theorem~5.32]{KKA}, ${\mathcal{O}}_X(1)$ generates ${\rm Pic}(X)$: ${\rm Pic}(X)={\mathbb Z}[{\mathcal{O}}_X(1)]$. We denote by $h \in A^1(X)$ the class corresponding to ${\mathcal{O}}_X(1)$.
\begin{proposition}[{\cite[Theorem~12.1]{BC}}]\label{prop:genel-Euler} There is an exact sequence of sheaves on ${\mathbb P}({\bf a})$,
$$
0\to \Omega_{{\mathbb P}({\bf a})} \to \bigoplus_{i=0}^m{\mathcal{O}}_{{\mathbb P}({\bf a})}(-a_i) \to {\mathcal{O}}_{{\mathbb P}({\bf a})} \to 0.
$$This exact sequence is called {\it the generalized Euler sequence of ${\mathbb P}({\bf a})$}.
\end{proposition}
\begin{proposition}\label{prop:weighted} Let $X \subset {\mathbb P}({\bf a})$ be a smooth weighted hypersurface of degree $d$. Assume that $X$ is contained in the smooth locus of ${\mathbb P}({\bf a})$ and $m \geq 4$. Then the degree of the top Chern class is given by
$$\operatorname{deg} c_{m-1}(X)=\biggl\{ \sum_{i=0}^{m-1}e_{m-1-i}(a_0, \ldots,a_m)(-d)^i\biggr\} \operatorname{deg} h^{m-1}.
$$
\end{proposition}
\begin{proof}
From the generalized Euler sequence of ${\mathbb P}({\bf a})$,
$$
c_t(\Omega_{{\mathbb P}({\bf a})}|_X)=(1-a_0ht)(1-a_1ht)\cdots (1-a_mht).
$$
By the exact sequence
$$
0 \to {\mathcal{O}}_X(-d) \to \Omega_{{\mathbb P}({\bf a})}|_X \to \Omega_X \to 0,
$$
we obtain
\begin{eqnarray}
c_t(T_X)
&=&\dfrac{(1+a_0ht)(1+a_1ht)\cdots (1+a_mht)}{1+dht}\nonumber \\
&=&(1+a_0ht)(1+a_1ht)\cdots (1+a_mht) \sum_{j=0}^{m-1} (-dht)^j \nonumber \\
&=&\biggl\{ \sum_{i=0}^{m-1} e_i(a_0, \ldots, a_m)(ht)^i \biggr\} \biggl\{\sum_{j=0}^{m-1} (-dht)^j\biggr\}, \nonumber
\end{eqnarray}
where we have used $h^{m}=0$ in $A^{\ast}(X)$ (as $\dim X=m-1$) to truncate the geometric series at degree $m-1$, and where $e_i(x_0, \ldots, x_m)$ is the $i$-th elementary symmetric polynomial in the $(m+1)$ variables $x_0, \ldots, x_m$:
$$
e_i(x_0, \ldots, x_m)=\sum_{0 \leq j_1< j_2 \ldots<j_i\leq m} x_{j_1}x_{j_2}\cdots x_{j_i}.
$$
As a consequence, our assertion holds.
\end{proof}
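As a consistency check, for ${\bf a}=(1,1,\ldots,1)$, so that $X \subset {\mathbb P}^m$ is an ordinary hypersurface of degree $d$, we have $e_i(1,\ldots,1)=\binom{m+1}{i}$ and $\operatorname{deg} h^{m-1}=d$, and the formula reduces to
$$
\operatorname{deg} c_{m-1}(X)=d\sum_{i=0}^{m-1}\binom{m+1}{i+2}(-d)^i=\frac{(1-d)^{m+1}-1+(m+1)d}{d},
$$
which is the case $r=1$ of Proposition~\ref{prop:euler} with $n=m-1$.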
\section{Del Pezzo varieties}
A {\it smooth Fano variety} $X$ is a smooth projective variety with ample anticanonical divisor $-K_X$. For a smooth Fano variety $X$ of dimension $n$, we denote by $i_X$ the {\it Fano index} of $X$, the largest integer by which $-K_X$ is divisible in ${\rm Pic}(X)$. By virtue of \cite{KO}, if the Fano index $i_X$ is at least $n$, then $X$ is isomorphic to a projective space ${\mathbb P}^n$ or a quadric hypersurface $Q^n \subset {\mathbb P}^{n+1}$. Since ${\mathbb P}^n$ and $Q^n$ are homogeneous, in these cases the diagonal is nef.
Smooth Fano varieties of dimension $n\geq 3$ and index $n-1$ are called {\it smooth del Pezzo varieties}. In this section, we shall classify smooth del Pezzo varieties with nef diagonal. Let us recall the classification of smooth del Pezzo varieties due to T. Fujita and V. A. Iskovskikh:
\begin{theorem}[{\cite[Theorem~8.11]{Fuj}, \cite{Fuj1, Fuj2, Fuj3}}, \cite{Isk1, Isk2, Isk3}]\label{thm:delpezzo} Let $X$ be a smooth del Pezzo variety of dimension $n \geq 3$ and degree $d=H^n$, where $-K_X=(n-1)H \in {\rm Pic}(X)$. Then $X$ is one of the following.
\begin{enumerate}
\item If $d=1$, then $X$ is a weighted hypersurface of degree $6$ in the weighted projective space ${\mathbb P}(3, 2, 1, \ldots, 1)$.
\item If $d=2$, then $X$ is a weighted hypersurface of degree $4$ in the weighted projective space ${\mathbb P}(2, 1, \ldots, 1)$. In this case, $X$ is a double cover branched along a quartic in ${\mathbb P}^n$.
\item If $d=3$, then $X \subset {\mathbb P}^{n+1}$ is a cubic hypersurface.
\item If $d=4$, then $X \subset {\mathbb P}^{n+2}$ is a complete intersection of type $(2,2)$.
\item If $d=5$, then $X$ is a linear section of the Grassmannian $G(2, {\mathbb C}^5) \subset {\mathbb P}^9$ embedded via the Pl\"ucker embedding.
\item If $d=6$, then $X$ is either ${\mathbb P}^1\times {\mathbb P}^1 \times {\mathbb P}^1$, ${\mathbb P}^2 \times {\mathbb P}^2$ or ${\mathbb P}(T_{{\mathbb P}^2})$.
\item If $d=7$, then $X$ is the blow-up of ${\mathbb P}^3$ at a point.
\end{enumerate}
\end{theorem}
By using the above classification result, we shall prove the following.
\begin{theorem}\label{thm:dPnef} Let $X$ be a smooth del Pezzo variety of dimension $n \geq 3$. If $\Delta_X$ is nef, then $X$ is one of the following:
\begin{enumerate}
\item an odd-dimensional complete intersection of two quadrics,
\item the Grassmannian $G(2, {\mathbb C}^5)$,
\item a $3$-dimensional linear section of the Grassmannian $G(2, {\mathbb C}^5) \subset {\mathbb P}^9$ embedded via the Pl\"ucker embedding,
\item ${\mathbb P}^1\times {\mathbb P}^1 \times {\mathbb P}^1$,
\item ${\mathbb P}^2 \times {\mathbb P}^2$ or
\item ${\mathbb P}(T_{{\mathbb P}^2})$.
\end{enumerate}
In the cases of $(2), (4), (5), (6)$, $X$ is homogeneous. In the case of $(3)$, $X$ is a fake projective space.
\end{theorem}
\subsection{Cases of degree one and two}
\begin{proposition}\label{prop:deg1} Let $X$ be a smooth del Pezzo variety of degree one. Then $\Delta_X$ is not nef.
\end{proposition}
\begin{proof} By Theorem~\ref{thm:delpezzo}, $X$ is a weighted hypersurface of degree $6$ in the weighted projective space ${\mathbb P}:={\mathbb P}(3, 2, 1, \ldots, 1)$. Applying \cite[Proposition~7]{DD}, the singular locus ${\mathbb P}_{\rm sing}$ of ${\mathbb P}$ consists of two points:
$${\mathbb P}_{\rm sing}=\{(1:0:\ldots: 0), (0:1:0:\ldots: 0)\}. $$
Since $\dim X \geq 3$, we have $$\operatorname{codim}_X(X \cap {\mathbb P}_{\rm sing}) \geq 3.$$ Hence $X$ is in general position relative to ${\mathbb P}_{\rm sing}$ in the sense of A. Dimca \cite[Definition~1]{Dim}. Then \cite[Proposition~8]{Dim} tells us that the singular locus of $X$ coincides with $X \cap {\mathbb P}_{\rm sing}$. Since $X$ is smooth by definition, $X$ is contained in the smooth locus of ${\mathbb P}$.
Moreover $X$ is a double cover of the Veronese cone. This yields a finite morphism from $X$ to ${\mathbb P}^n$ of degree $2^n$. Applying Proposition~\ref{prop:weighted}, we obtain
$$\operatorname{deg} c_n(X) = \sum_{i=0}^ne_{n-i}(3,2,1^n)(-6)^i.
$$
Here $e_{n-i}(3,2,1^n)$ means $e_{n-i}(3,2,\underbrace{1, \ldots, 1}_{n{\rm -times}})$. In the following, we use similar notation.
Remark that
\begin{eqnarray}
&&e_k(3,2,1^n) \nonumber \\
&=&6e_{k-2}(1^n)+3e_{k-1}(1^n)+2e_{k-1}(1^n)+e_k(1^n)\nonumber \\
&=&6\binom{n}{k-2}+5\binom{n}{k-1}+\binom{n}{k}. \nonumber
\end{eqnarray}
Thus we have
\begin{eqnarray}
\operatorname{deg} c_n(X)&=&\sum_{i=0}^n \biggl\{ 6\binom{n}{i+2}+5\binom{n}{i+1}+\binom{n}{i} \biggr\}(-6)^i \nonumber \\
&=&\dfrac{1}{6}\sum_{i=0}^{n-2} \binom{n}{i+2}(-6)^{i+2}-\dfrac{5}{6}\sum_{i=0}^{n-1}\binom{n}{i+1}(-6)^{i+1}+\sum_{i=0}^{n} \binom{n}{i}(-6)^i \nonumber \\
&=&\dfrac{1}{6}\biggl\{ (-5)^n+6n-1\biggr\} -\dfrac{5}{6}\biggl\{ (-5)^n-1\biggr\} +(-5)^n \nonumber \\
&=&\dfrac{1}{3}\biggl\{ 3n+2+(-5)^n\biggr\}. \nonumber
\end{eqnarray}
If $n$ is odd, then $\operatorname{deg} c_n(X)$ is negative. Thus $\Delta_X$ is not nef. If $n$ is even, then it is straightforward to show the following:
$$\dfrac{1}{3}\biggl\{ 3n+2+(-5)^n\biggr\}>2^n(n+1)~\text{if}~n \geq 4.
$$
Hence it follows from Proposition~\ref{prop:LO} that $\Delta_X$ is not nef.
\end{proof}
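As a quick numerical check, in dimension $n=3$ the formula gives $\operatorname{deg} c_3(X)=\frac{1}{3}(9+2-125)=-38$, which is consistent with the classical fact that a del Pezzo threefold of degree one has $h^{1,2}=21$, hence topological Euler number $4-2\cdot 21=-38$.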
\begin{proposition}\label{prop:deg2} Let $X$ be a smooth del Pezzo variety of degree two. Then $\Delta_X$ is not nef.
\end{proposition}
\begin{proof} By Theorem~\ref{thm:delpezzo}, $X$ is a weighted hypersurface of degree $4$ in the weighted projective space ${\mathbb P}:={\mathbb P}(2, 1, \ldots, 1)$. By the same arguments as in Proposition~\ref{prop:deg1}, the singular locus ${\mathbb P}_{\rm sing}$ consists of one point $\{(1: 0: \ldots: 0)\}$ and $X$ is contained in the smooth locus of ${\mathbb P}$.
Moreover, $X$ is a double cover branched along a quartic in ${\mathbb P}^n$. Applying Proposition~\ref{prop:weighted}, we obtain
$$\operatorname{deg} c_n(X) = 2\sum_{i=0}^ne_{n-i}(2,1^{n+1})(-4)^i.
$$
Remark that
$$e_k(2,1^{n+1})=2e_{k-1}(1^{n+1})+e_{k}(1^{n+1})=2\binom{n+1}{k-1}+\binom{n+1}{k}.
$$
Thus we have
\begin{eqnarray}
\operatorname{deg} c_n(X)&=&2\sum_{i=0}^n \biggl\{ 2\binom{n+1}{i+2}+\binom{n+1}{i+1} \biggr\}(-4)^i \nonumber \\
&=&\dfrac{1}{4}\sum_{i=0}^{n-1} \binom{n+1}{i+2}(-4)^{i+2}-\dfrac{1}{2}\sum_{i=0}^{n}\binom{n+1}{i+1}(-4)^{i+1} \nonumber \\
&=&\dfrac{1}{4}\biggl\{ (-3)^{n+1}-(n+1)(-4)-1\biggr\} -\dfrac{1}{2}\biggl\{ (-3)^{n+1}-1\biggr\} \nonumber \\
&=&\dfrac{1}{4}\biggl\{ 4n+5-(-3)^{n+1}\biggr\}. \nonumber
\end{eqnarray}
If $n$ is odd, then $\operatorname{deg} c_n(X)$ is negative. Thus $\Delta_X$ is not nef. If $n$ is even, then it is straightforward to show the following:
$$\dfrac{1}{4}\biggl\{ 4n+5-(-3)^{n+1}\biggr\}>2(n+1)~\text{if}~n \geq 4.
$$
Hence it follows from Proposition~\ref{prop:LO} that $\Delta_X$ is not nef.
\end{proof}
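Again, in dimension $n=3$ the formula gives $\operatorname{deg} c_3(X)=\frac{1}{4}(17-81)=-16$, matching the Euler number $4-2\cdot 10=-16$ of a del Pezzo threefold of degree two, which has $h^{1,2}=10$.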
\subsection{Cases of degree five}\label{sec:deg5}
Let $X$ be a smooth del Pezzo variety of dimension $n \geq 3$ and degree $5$. By Theorem~\ref{thm:delpezzo}, $X$ is a linear section of the Grassmannian $G(2, {\mathbb C}^5) \subset {\mathbb P}^9$ embedded via the Pl\"ucker embedding. When $n=3$, $X$ is a fake projective space. When $n=6$, $X$ is the Grassmannian $G(2, {\mathbb C}^5)$. Thus, in these cases, $\Delta_X$ is nef. We study the case of dimension $4$.
Let $X$ be a del Pezzo $4$-fold of degree $5$. Then $X$ is a linear section of the Grassmannian $G(2, {\mathbb C}^5) \subset {\mathbb P}^9$.
Let $p_{ij}$ be the Pl\"ucker coordinates of the Grassmannian $G(2,{\mathbb C}^5) \subset {\mathbb P}({\wedge^k {\mathbb C}^5}^{\vee})$ $(0\leq i < j \leq 4)$. Since any del Pezzo $4$-fold of degree $5$ is isomorphic to each other, we may assume that $X \subset {\mathbb P}((\wedge^k {\mathbb C}^5)^{\vee})$ is defined by
\begin{eqnarray}
&&p_{01}p_{23}-p_{02}p_{13}+p_{03}p_{12}=0 \nonumber \\
&&p_{01}p_{24}-p_{02}p_{14}+p_{04}p_{12}=0 \nonumber \\
&&p_{01}p_{34}-p_{03}p_{14}+p_{04}p_{13}=0 \nonumber \\
&&p_{02}p_{34}-p_{03}p_{24}+p_{04}p_{23}=0 \nonumber \\
&&p_{12}p_{34}-p_{13}p_{24}+p_{14}p_{23}=0 \nonumber \\
&&p_{12}-p_{03}=0 \nonumber \\
&&p_{13}-p_{24}=0 \nonumber
\end{eqnarray}
Then $X$ contains the following planes:
\begin{itemize}
\item $\Pi:=\{p_{ij}=0\mid (i,j)\neq (0, 1), (0,2), (0,4)\},$
\item $\Lambda:=\{p_{ij}=0\mid (i,j)\neq (0, 1), (0,4), (1,4)\}.$
\end{itemize}
These are the Schubert varieties $\sigma_{3,1}$ and $\sigma_{2,2}$ on $G(2,{\mathbb C}^5)$ respectively \cite[Chapter~1, Section~5]{GH}. Then it is known that $\operatorname{deg} \Pi\cdot \Lambda=-1$ (see for instance \cite[Proof~of~Corollary~4.7]{PZ}). Thus we obtain the following:
\begin{proposition}\label{prop:deg5:4fold} Let $X$ be a smooth del Pezzo $4$-fold of degree $5$. Then $\Delta_X$ is not nef.
\end{proposition}
\begin{proof}[Proof of Theorem~\ref{thm:dPnef}] If $d=1$ or $2$, then $\Delta_X$ is not nef by Proposition~\ref{prop:deg1} and Proposition~\ref{prop:deg2}. If $d=3$ or $4$, then $X$ is a complete intersection. In these cases, it follows from Theorem~\ref{them:ci} that $\Delta_X$ is not nef provided that $X$ is not an odd-dimensional complete intersection of two quadrics. If $d=6$, then $X$ is homogeneous. Thus $\Delta_X$ is nef. If $d=7$, then $X$ admits a blow-up structure. Hence $\Delta_X$ is not nef by Proposition~\ref{prop:bir}.
The remaining case is the case of degree five. Let us assume that $d=5$.
As we saw in Section~\ref{sec:deg5} above, $\Delta_X$ is nef provided that $n=3$ or $6$. On the other hand, it follows from Proposition~\ref{prop:deg5:4fold} that $\Delta_X$ is not nef if $n=4$. When $n=5$, we will prove that $\Delta_X$ is not nef in Proposition~\ref{prop:dp:cone} below.
\end{proof}
We end this section with two corollaries.
\begin{corollary}\label{cor:dp:big} Let $X$ be a smooth complex del Pezzo variety. Then the diagonal $\Delta_X$ is nef and big if and only if $X$ is a $3$-dimensional linear section of the Grassmannian $G(2, {\mathbb C}^5) \subset {\mathbb P}^9$.
\end{corollary}
\begin{proof} Assume the diagonal $\Delta_X$ is nef and big and $X$ is not a $3$-dimensional linear section of the Grassmannian $G(2, {\mathbb C}^5) \subset {\mathbb P}^9$. By Theorem~\ref{thm:dPnef}, $X$ is an odd-dimensional complete intersection of two quadrics, the Grassmannian $G(2, {\mathbb C}^5)$ or a variety which satisfies $N^1(X) \not\cong {\mathbb R}$. In these cases, it follows from Corollary~\ref{cor:ci:big}, \cite[Section~7.4]{LO} and Proposition~\ref{prop:nefbig:Pic} that the diagonal is not nef and big. On the other hand, a $3$-dimensional linear section of the Grassmannian $G(2, {\mathbb C}^5) \subset {\mathbb P}^9$ is a fake projective space. Thus it has nef and big diagonal by Proposition~\ref{prop:fps}.
\end{proof}
\begin{corollary}\label{cor:CP} Let $X$ be a smooth del Pezzo variety with nef tangent bundle. Then $X$ is homogeneous.
\end{corollary}
\begin{proof} Assume that $X$ is not homogeneous. Then, by Theorem~\ref{thm:dPnef}, $X$ is isomorphic to an odd-dimensional complete intersection of two quadrics or a $3$-dimensional linear section of the Grassmannian $G(2, {\mathbb C}^5) \subset {\mathbb P}^9$. In these cases, it follows from Corollary~\ref{cor:CPci} and \cite[Theorem~5.1]{CP1} that the tangent bundle is not nef.
\end{proof}
\section{Spherical varieties}
In this section, we shall study the intersection theory of cycles on spherical varieties. For spherical varieties, we refer the reader to \cite{Per}.
\subsection{Spherical varieties}
\begin{definition}\label{def:spherical}
Let $G$ be a reductive linear algebraic group and $B$ a Borel subgroup of $G$. A smooth projective $G$-variety $X$ is {\it ($G$-)spherical} if it has a dense $B$-orbit. A $G$-spherical variety is {\it ($G$-)horospherical} if there is a point $x$ in the open $G$-orbit on $X$ such that the isotropy group $G_x$ contains the unipotent radical of $B$. A spherical $G$-variety $X$ is {\it ($G$-)toroidal} if every $B$-stable divisor that is not $G$-stable contains no $G$-orbit.
\end{definition}
\begin{remark} We have two remarks.
\begin{enumerate}
\item For a smooth projective $G$-variety $X$, $X$ is spherical if and only if $X$ has finitely many $B$-orbits (see \cite[Theorem~2.1.2]{Per}).
\item Any smooth projective toric variety is toroidal.
\end{enumerate}
\end{remark}
\begin{proposition}\label{prop:Sph} For a smooth projective spherical variety $X$, the following are equivalent to each other.
\begin{enumerate}
\item $\Delta_X$ is nef.
\item ${\operatorname{Nef}}_k(X)={\operatorname{Eff}}_k(X)$ for any $k$.
\item The closures of any $B$-orbit on $X$ have non-negative intersection against the closure of every $B$-orbit of the complementary dimension.
\end{enumerate}
\end{proposition}
\begin{proof} $\rm (1) \Rightarrow (2)$ By \cite[Theorem~1.1]{Li}, any spherical variety satisfies $${\operatorname{Nef}}_k(X)\subset {\operatorname{Eff}}_k(X)=\overline{{\operatorname{Eff}}}_k(X).$$ Thus Proposition~\ref{prop:pseffcone} yields ${\operatorname{Nef}}_k(X)={\operatorname{Eff}}_k(X)$ provided that $\Delta_X$ is nef.\\
$\rm (2) \Rightarrow (1)$ Assume that ${\operatorname{Nef}}_k(X)={\operatorname{Eff}}_k(X)$ for any $k$. By \cite[Corollary~3.5]{Li}, we have ${\operatorname{Nef}}_k(X \times X)={\operatorname{Eff}}_k(X\times X)$ for $0 \leq k \leq 2 \dim X$. In particular, $\Delta_X$ is nef. \\
$\rm (2) \Leftrightarrow (3)$ According to \cite[Theorem~1.3]{FMSS}, the effective cone of a spherical variety is generated by the closures of $B$-orbits. Hence our assertion holds.
\end{proof}
\begin{corollary}[{\cite[Theorem~1.2]{Li}}]\label{cor:Li} For a smooth projective toroidal variety $X$ of dimension $n$, the following are equivalent to each other.
\begin{enumerate}
\item $\Delta_X$ is nef.
\item ${\operatorname{Nef}}_k(X)={\operatorname{Eff}}_k(X)$ for any $k$.
\item ${\operatorname{Nef}}_k(X)={\operatorname{Eff}}_k(X)$ for some $1 \leq k \leq n-1$.
\item $X$ is rational homogeneous.
\end{enumerate}
\end{corollary}
\begin{proof} The equivalence of $\rm (3)$ and $\rm (4)$ is due to Q. Li {\cite[Theorem~1.2]{Li}}. The remaining part directly follows from Proposition~\ref{prop:Sph}.
\end{proof}
\begin{corollary}\label{cor:spherical:nefbig} For a smooth projective spherical variety $X$, assume that $\Delta_X$ is nef and big. Then $N_k(X) \cong {\mathbb R}$ for any $k$.
\end{corollary}
\begin{proof} Let us assume that $\Delta_X$ is nef and big. As in the proof of Proposition~\ref{prop:Sph}, we have ${\operatorname{Nef}}_{\dim X}(X \times X)={\operatorname{Eff}}_{\dim X}(X\times X)$. Then it follows from \cite[Example~4.5]{FL2} that $\Delta_X$ is universally pseudoeffective. Applying \cite[Corollary~4.2]{LO}, we see that $N_k(X) \cong {\mathbb R}$ for any $k$.
\end{proof}
\subsection{Odd symplectic Grassmannians}
In this section, we recall some known results on odd symplectic Grassmannians.
All of the material here is contained in \cite[Sections~3 and~4]{Mih}.
Let $E$ be a $(2n+1)$-dimensional complex vector space and $\omega \in \wedge^2 E^{\vee}$ a skew-symmetric bilinear form of rank $2n$. We call such an $\omega$ an {\it odd symplectic form on $E$}. In \cite{Proc}, R. A. Proctor introduces the {\it odd symplectic group}:
$$
\operatorname{Sp}({2n+1}):=\{g\in {\rm GL}(E) \mid \omega(gu,gv)=\omega(u,v) ~\text{for~all}~ u, v \in E\}.
$$
Let $\overline{J}$ be the matrix of the form
$$\left(
\begin{array}{cc}
O & J \\
-J & O
\end{array}
\right),
$$
where $J$ is the $n \times n$ anti-diagonal matrix with all anti-diagonal entries equal to $1$.
Taking a basis $\{\bm{e_0}, \bm{e_1}, \ldots, \bm{e_{2n}}\}$ of $E$ suitably, we may assume that the form $\omega$ is given by the $2n \times 2n$ matrix
$$
(\omega(\bm{e_i},\bm{e_j}))=
\left(
\begin{array}{cc}
0 & 0 \\
0 & \overline{J}
\end{array}
\right)
$$
The restriction of $\omega$ to the subspace $F=\langle \bm{e_1}, \ldots, \bm{e_{2n}} \rangle_{{\mathbb C}}$ is a usual symplectic form.
Then the odd symplectic group $\operatorname{Sp}({2n+1})$ is represented as
$$
\operatorname{Sp}({2n+1})=\biggl\{ \left(
\begin{array}{cc}
\lambda & \ell \\
0 & S
\end{array}
\right)
\mid \lambda \in {\mathbb C}^{\ast}, \ell \in {\mathbb C}^{2n}, S \in \operatorname{Sp}({2n})
\biggr\},
$$ where $\operatorname{Sp}({2n})$ is the usual symplectic group with respect to the form $\omega|_{F}$.
This description tells us that $\operatorname{Sp}(2n+1)$ admits a Levi decomposition:
$$
\operatorname{Sp}(2n+1)\cong ({\mathbb C}^{\ast} \times \operatorname{Sp}({2n})) \ltimes {\mathbb C}^{2n}.
$$
The subgroup $B$ of $\operatorname{Sp}({2n+1})$ of upper triangular matrices is a Borel subgroup.
The {\it odd symplectic Grassmannian} is defined as a subvariety of a Grassmannian $G(k, E)$ parametrizing isotropic subspaces with respect to $\omega$:
$$G_{\omega}(k, E):=\{[V] \in G(k, E)\mid \omega(V, V)=0\}.
$$
According to \cite[Proposition~4.3]{Mih}, the odd symplectic Grassmannian $G_{\omega}(k, E)$ has an action of the odd symplectic group $\operatorname{Sp}({2n+1})$ with two orbits
$$
X_0:=\{V \in G_{\omega}(k, E) \mid \bm{e_0} \in V\},
X_1:=\{V \in G_{\omega}(k, E) \mid \bm{e_0} \not\in V\}.
$$
Furthermore, the closed orbit $X_0$ is isomorphic to $G_{\omega}(k-1, F)$, and the open orbit $X_1$ is isomorphic to the total space of the dual of the tautological bundle of $G_{\omega}(k, F)$. From this description, we see that $G_{\omega}(k, E)$ is a horospherical variety.
By \cite[Proposition~4.1]{Mih}, $G_{\omega}(k, E)$ is a smooth projective subvariety of codimension $\frac{k(k-1)}{2}$ of the Grassmannian $G(k, E)$. More precisely, it is the zero locus of a general section of $\wedge^2 {\mathcal S}^{\vee}$, where ${\mathcal S}$ is the rank $k$ tautological subbundle on $G(k, E)$ (see the proof of \cite[Proposition~4.1]{Mih}).
In particular, $G_{\omega}(2, E)$ is a hyperplane section of $G(2, E) \subset {\mathbb P}(\wedge^2 E^{\vee})$ by $\{\omega=0\}$.
Since the odd symplectic form $\omega$ can be extended to a (usual) symplectic form $\tilde{\omega}$ on $\tilde{E}={\mathbb C}^{2n+2}$ (see \cite[Section~3.2]{Mih}), $G_{\omega}(k, E)$ is a subvariety of a (usual) symplectic Grassmannian
$$G_{\tilde{\omega}}(k, \tilde{E}):=\{[V] \in G(k, \tilde{E})\mid \tilde{\omega}(V, V)=0\}.$$
Moreover $G_{\omega}(k, E)$ identifies with the Schubert variety of $G_{\tilde{\omega}}(k, \tilde{E})$ that parametrizes $k$-dimensional isotropic subspaces contained in the hyperplane $E \subset \tilde{E}$. Then we may define {\it Schubert varieties} of the odd symplectic Grassmannian $G_{\omega}(k, E)$ to be those of $G_{\tilde{\omega}}(k, \tilde{E})$ contained in $G_{\omega}(k, E)$ (see \cite[Section~4.8]{Mih}). Then we have
\begin{proposition}[{\cite[Proposition~4.12]{Mih}}]\label{prop:Mih} For an odd symplectic Grassmannian $G_{\omega}(k, {\mathbb C}^{2n+1})$, Schubert varieties coincide with the closures of $B$-orbits.
\end{proposition}
\subsection{Cones of cycles of the odd symplectic Grassmannian of lines}
Combining Proposition~\ref{prop:Mih} and \cite[Theorem~1.3]{FMSS}, we obtain the following:
\begin{proposition}\label{prop:OSG:effective} For an odd symplectic Grassmannian $G_{\omega}(k, {\mathbb C}^{2n+1})$, the effective cone of cycles is generated by Schubert varieties.
\end{proposition}
From now on, we focus on the odd symplectic Grassmannian of lines $G_{\omega}(2, {\mathbb C}^{2n+1})$.
To describe Schubert varieties of odd symplectic Grassmannians, we employ the notation of C. Pech \cite[P. 191]{Pe}.
As in \cite[Section~1.2]{Pe}, we denote by $\tau_{\lambda}$ the cohomology class associated to the Schubert variety $X_{\lambda}(F_{\bullet}) \subset G_{\omega}(2, {\mathbb C}^{2n+1})$, where $F_{\bullet}$ is an isotropic complete flag of ${\mathbb C}^{2n+1}$ and $\lambda=(\lambda_1, \lambda_2)$ (see \cite[Section~1.1]{Pe} for details). The codimension of the cycle $\tau_{\lambda}$ is equal to $\lambda_1+\lambda_2$. Then, by \cite[Remark 1.1.1]{Pe}, $\lambda$ is either
\begin{itemize}
\item $(n-2)$-strict partitions $(2n-1 \geq \lambda_1 \geq \lambda_2 \geq 0)$, or
\item the partition $(2n-1, -1)$.
\end{itemize}
Assume that $n \geq 2$. We choose non-negative integers $a \geq b$ satisfying $a+b=2n-1$.
From Pech's Pieri formula \cite[Proposition~5]{Pe}, the intersection number of the cycles $\tau_{a,b}$ and $\tau_{2n-1, -1}$ is given by
$$
\operatorname{deg}(\tau_{a,b}\cdot \tau_{2n-1, -1}) =(-1)^{a-1}.
$$
This implies that $\tau_{2n-1, -1}$ is a non-nef effective cycle. Thus we have
\begin{proposition}\label{prop:OSGL} Let $X$ be the odd symplectic Grassmannian of lines $G_{\omega}(2, {\mathbb C}^{2n+1})$. Then $\Delta_X$ is not nef.
\end{proposition}
Furthermore, we may describe the cone of pseudoeffective/nef cycles of a del Pezzo $5$-fold of degree $5$ as follows. Let $X$ be a del Pezzo $5$-fold of degree $5$. By Theorem~\ref{thm:delpezzo}, $X$ is a hyperplane section of $G(2, {\mathbb C}^5)$. Thus it is an odd symplectic Grassmannian $G_{\omega}(2, {\mathbb C}^{5})$. According to Proposition~\ref{prop:OSG:effective} and the above argument, the cone of effective cycles on $X$ is generated by
\begin{itemize}
\item $\tau_{(0,0)}$: the whole space $X$
\item $\tau_{(1,0)}$: a hyperplane section of $X$
\item $\tau_{(2,0)}$: a cycle of codimension $2$
\item $\tau_{(3,-1)}$: a cycle of codimension $2$
\item $\tau_{(3,0)}$: a plane
\item $\tau_{(2,1)}$: a plane
\item $\tau_{(3,1)}$: a line
\item $\tau_{(3,2)}$: a point.
\end{itemize}
Thanks to \cite[Proposition~4~and~5]{Pe}, the intersection numbers are given by
$$
\operatorname{deg}(\tau_{(1,0)}\cdot \tau_{(3,1)})=1,
\operatorname{deg}(\tau_{(2,0)}\cdot \tau_{(3,0)})=0, \operatorname{deg}(\tau_{(2,0)}\cdot \tau_{(2,1)})=1,$$
$$\operatorname{deg}( \tau_{(3,-1)}\cdot \tau_{(3,0)})=1, \operatorname{deg}(\tau_{(3,-1)}\cdot \tau_{(2,1)})=-1.
$$
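Let us spell out how these numbers determine the cones below. Writing a codimension-two class as $\alpha=a\tau_{(2,0)}+b\tau_{(3,-1)}$ and pairing it against the generators $\tau_{(3,0)}$ and $\tau_{(2,1)}$ of the effective cone in codimension three, we obtain
$$
\operatorname{deg}(\alpha\cdot \tau_{(3,0)})=b, \qquad \operatorname{deg}(\alpha\cdot \tau_{(2,1)})=a-b,
$$
so $\alpha$ is nef if and only if $a \geq b \geq 0$; the extremal rays of the nef cone are thus spanned by $\tau_{(2,0)}$ and $\tau_{(2,0)}+\tau_{(3,-1)}$. The codimension-three case is computed symmetrically, pairing against $\tau_{(2,0)}$ and $\tau_{(3,-1)}$. Note also that $\operatorname{deg}(\tau_{(3,-1)}\cdot \tau_{(2,1)})=-1<0$ with both classes effective, so Proposition~\ref{prop:pseffcone} already rules out nefness of the diagonal.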
As a consequence, we obtain the following.
\begin{proposition}\label{prop:dp:cone} Let $X$ be a smooth del Pezzo $5$-fold of degree five. Then we have
\begin{align}
{\operatorname{Nef}}^2(X)&={\mathbb R}_{\geq 0} \tau_{(2,0)} \oplus {\mathbb R}_{\geq 0} [\tau_{(2,0)}+\tau_{(3,-1)}], \nonumber \\
\overline{{\operatorname{Eff}}}^2(X)&={\mathbb R}_{\geq 0} \tau_{(2,0)} \oplus {\mathbb R}_{\geq 0} \tau_{(3,-1)}, \nonumber \\
{\operatorname{Nef}}^3(X)&={\mathbb R}_{\geq 0} \tau_{(3,0)} \oplus {\mathbb R}_{\geq 0} [\tau_{(3,0)}+\tau_{(2,1)}], \nonumber \\
\overline{{\operatorname{Eff}}}^3(X)&={\mathbb R}_{\geq 0} \tau_{(3,0)} \oplus {\mathbb R}_{\geq 0} \tau_{(2,1)}. \nonumber
\end{align}
In particular, $\Delta_X$ is not nef.
\end{proposition}
\begin{remark}[An answer to a question by Q. Li]\label{rem:Li}
A del Pezzo $5$-fold $X$ of degree $5$ is a smooth projective horospherical variety satisfying ${\operatorname{Nef}}^1(X)=\overline{{\operatorname{Eff}}}^1(X)$ and ${\operatorname{Nef}}^2(X) \neq \overline{{\operatorname{Eff}}}^2(X)$. This gives an answer to a question raised by Q. Li \cite[pp. 971-972]{Li}.
\end{remark}
\bibliographystyle{plain}
\section{Introduction}
\label{sec:introduction}
\noindent A crucial step in connecting cosmological scenarios with large-scale-structure (LSS)
observations is the bias expansion for LSS tracers (see \cite{Desjacques:2016bnm} for a review).
Like all effective field theories, the bias expansion is firmly rooted in the idea of locality. The
simplest example is that of dark matter halos. Since their dynamics is governed by gravitational interactions only,
we know that at any given point in spacetime the halo overdensity $\delta_h$ will be a functional of all possible gravitational observables
that one can build from the Newtonian potential. Let us focus, for example, on the
total matter overdensity $\delta$. At leading order in perturbation theory, we can write the most general
functional dependence of the halo overdensity on $\delta$ as the integral
\begin{equation}
\label{eq:locality-A}
\delta_h(\eta,\vec{x})=\int\mathrm{d}\eta'\mathrm{d}^3y\,F_h(\eta,\eta',\abs{\vec{y}})\,\delta(\eta',\vec{x}+\vec{y})\,\,,
\end{equation}
where the kernel $F_h$ can only depend on $\abs{\vec{y}}$ because of statistical isotropy,
and cannot depend on $\vec{x}$ because of statistical homogeneity. Even
if we do not know the exact shape of the kernel, we know that it is supported only for $\abs{\vec{y}}\lesssim R(M_h)$,
where $R(M_h)$ is the Lagrangian radius of the halo. This
is because the matter within a given halo originates from a region of size $R(M_h)$ in Lagrangian space. We
can then expand $\delta(\eta',\vec{x}+\vec{y})$ in a Taylor series inside the integral, and obtain
\begin{equation}
\label{eq:locality-B}
\delta_h(\eta,\vec{x}) = \int\mathrm{d}\eta'\mathrm{d}^3y\,F_h(\eta,\eta',\abs{\vec{y}})\,\delta(\eta',\vec{x})
+ \frac{1}{6}\int\mathrm{d}\eta'\mathrm{d}^3y\,F_h(\eta,\eta',\abs{\vec{y}})\,\abs{\vec{y}}^2\,\nabla^2\delta(\eta',\vec{x}) + \dots\,\,.
\end{equation}
Finally, using the fact that, at linear order in perturbations, $\delta$ evolves in a scale-independent way,
we can easily carry out the time integration to arrive at the expansion
\begin{equation}
\label{eq:locality-C}
\delta_h(\eta,\vec{x}) = b_1(\eta)\delta(\eta,\vec{x})+b_{\nabla^2\delta}(\eta)\nabla^2\delta(\eta,\vec{x})+\dots\,\,,
\end{equation}
where the bias coefficients $b_1,b_{\nabla^2\delta},\dots$, are related to the moments of the kernel $F_h$ \cite{McDonald:2009dh}. Since
the nonlocality scale of the kernel is $R(M_h)$, we see that $b_{\nabla^{2n}\delta}\sim R^{2n}(M_h)$.
So far, so good: everything seems to be controlled by the single scale $R(M_h)$.
Moreover, this scale is typically of the same order of magnitude as the scale at which the matter density field becomes nonlinear,
so that it does not strongly restrict the validity of the perturbative bias expansion.
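To make the matching explicit: writing the scale-independent linear evolution as $\delta(\eta',\vec{x})=[D(\eta')/D(\eta)]\,\delta(\eta,\vec{x})$, with $D$ the linear growth factor, the time integrals in \eq{locality-B} can be carried out and the first two coefficients of \eq{locality-C} read
\begin{equation}
b_1(\eta)=\int\mathrm{d}\eta'\mathrm{d}^3y\,F_h(\eta,\eta',\abs{\vec{y}})\,\frac{D(\eta')}{D(\eta)}\,\,,\qquad
b_{\nabla^2\delta}(\eta)=\frac{1}{6}\int\mathrm{d}\eta'\mathrm{d}^3y\,\abs{\vec{y}}^2\,F_h(\eta,\eta',\abs{\vec{y}})\,\frac{D(\eta')}{D(\eta)}\,\,,
\end{equation}
from which the scaling $b_{\nabla^{2n}\delta}\sim R^{2n}(M_h)$ follows directly from the support of the kernel.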
When we move from halos to galaxies and consider their overdensity $\delta_g$,
we can still write it as in \eq{locality-C}. However, we now have to ask what the nonlocality scale of the higher-derivative terms is for galaxies. If
the properties of galaxies in a given sample are completely determined by those of their host halos, then this scale is still $R(M_h)$.
However, the real universe is not so simple. For example, baryons are also present. What is their impact on
\eq{locality-C}? Pressure forces contribute to the right-hand side of the Euler equation for baryonic matter
through a pressure gradient $\vec{\nabla}\delta p_{b} = c^2_{\rm s}\widebar{\rho}_{b}\vec{\nabla}\delta_{b}
\approx c^2_{\rm s}\widebar{\rho}_{b}\vec{\nabla}\delta$ (approximating $\delta_{b}\approx\delta$ on large scales),
and then give rise to a baryon-dark matter relative velocity $\vec{v}_{bc}$. Since
this relative velocity is a local observable, it can enter in the bias expansion.
At leading order in perturbations we can then add the term $\vec{\nabla}\cdot\vec{v}_{bc}\sim\nabla^2\delta$ to \eq{locality-C}.
Thus, we see that these baryonic effects are also captured by higher-derivative terms.
However, the length scale suppressing them is not $R(M_h)$, but the Jeans length $\lambda_{\rm J} = c_{\rm s}(G_{\rm N}\rho)^{-1/2}$,
which depends on the average density of gas in the halo and its speed of sound $c^2_{\rm s}\sim T/m$ (for gas particles of mass $m$).
Fortunately, this length scale is again quite small, and less than $R(M_h)$ except for very low-mass halos.
Another source of nonlocality is radiative transfer (RT). Indeed,
ionizing radiation can affect star-forming galaxies directly: for example, Refs.~\cite{Efstathiou:1992zz,Barkana:1999apa} showed that
it can reduce the cooling rate of the gas accreting onto the parent halo (effectively evaporating it from the halo),
slowing down the star-formation rate and leading to a suppression of the stellar-mass-to-halo-mass ratio in low-mass galaxies. In
this case the nonlocality scale is some ``effective'' mean free path (m.f.p.) $\lambda_{\rm eff}$ of ionizing radiation,
that in the following will always be understood as the comoving one (we use the term ``effective'' since we will see that what matters
for the bias expansion is some average of the mean free path over the photon energy).
Both these effects are expected to become relevant during reionization,
when the progenitors of galaxies observed at lower redshifts were actively forming \cite{Schmidt:2017lqe}.
It is during this epoch (between redshifts $z\approx 15$ and $z\approx 6$) that newly formed radiating objects like metal-poor stars, supernova explosions,
accreting black holes, X-ray binaries, mini-quasars, dwarf galaxies, etc. injected photons into the intergalactic medium,
and the universe reverted from neutral to ionized. The gas temperature jumped from a few $\rm K$ to several thousand kelvin,
leading to a Jeans length $\lambda_{\rm J}\approx\lmpch{0.08}$ around the end of reionization \cite{Schmidt:2017lqe},
while the m.f.p.~can be of order $\lmpch{50}$ at redshift $z\sim 5\DIV6$ \cite{Worseck:2014fya,Becker:2014oga}.
Once we have the expansion of \eq{locality-C}, and similar expansions at higher order in perturbation theory,
we can predict the statistics of galaxies in terms of those of the underlying matter field at a given order in perturbation theory.
In the best case, this prediction holds up to the nonlinear scale, $k_{\rm NL}\approx\kmpch{0.3}$ at $z=0$,
that controls the perturbation theory for the dark matter distribution. However,
it is clear that we must be able to truncate the series in \eq{locality-C} at some order if we want to be predictive. This is easy to see in Fourier space. Let
us consider for example the higher-derivative terms coming from RT effects.
In \eq{locality-C} we obtain a series of terms that scale as $(k^2\lambda_{\rm eff}^2)^n$ times a coefficient that,
naively, we expect to be of order $1$: then, the derivative expansion will break down at the very latest at $k\sim 1/\lambda_{\rm eff}$,
since at that scale infinitely many terms will become equally relevant.
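Schematically, anticipating the notation $f_{(2n)}$ of Section~\ref{sec:questions} for these dimensionless order-$1$ coefficients, the series of higher-derivative terms takes the form
\begin{equation}
\delta_g(\vec{k})\supset \sum_{n\geq 1} f_{(2n)}\,(-k^2\lambda^2_{\rm eff})^n\,\delta(\vec{k})\,\,,
\end{equation}
so that each successive order is suppressed only by a factor of $(k\lambda_{\rm eff})^2$.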
In this paper we will focus on these terms, answering the following questions. \emph{Is it possible, within the effective field theory framework,
to resum them in a way that allows us to predict the galaxy statistics also at momenta $k\gtrsim 1/\lambda_{\rm eff}$ (but still $k \ll k_{\rm NL}(z)$)? Can
we do this with only a finite number of new bias coefficients? Which assumptions are necessary to achieve this?}
These questions are important because $\lambda_{\rm eff}$ can be much larger than $\lambda_{\rm J}$: while
there is no problem in treating the higher-derivative terms from pressure forces perturbatively (unless their bias coefficients are unnaturally large),
doing so with those from RT can lead to a sizeable loss in predictivity.
How does this happen? When we marginalize over the free coefficients of the bias expansion, we need to take into account their prior.
The coefficients $b_{\nabla^{2n}\delta}$ of the higher-derivative terms are dimensionful:
however, once we identify the longest non-locality scale $\lambda$ (be it $R(M_h)$, or $\lambda_{\rm eff}$, etc.) that we think affects the formation of the tracers under
consideration, it is reasonable to assume that after we factor it out the dimensionless coefficients are of order $1$, as we discussed below \eq{locality-C}.\footnote{Therefore,
we would take the priors on the coefficients $b_{\nabla^{2n}\delta}$ to be $[{-{\cal O}}(1),{\cal O}(1)]\times\lambda^{2n}$, where $\lambda^{2n}$ is fixed.}
Consequently, if we want to keep a finite number of higher-derivative terms (and then a finite number of coefficients to marginalize over),
we are forced to stop at a $k_{\rm max}$ which is not larger than $\sim 1/\lambda$. The longer $\lambda$, then,
the smaller is the number of modes that we are using, and this would lead to a degradation of the constraints on cosmological parameters
(for more details we refer, for example, to the discussions in Sections 3 and 4 (especially Section 4.2) of \cite{Gleyzes:2016tdh}).
More precisely, these higher-derivative terms will strongly modify the shape of the galaxy two-point correlation function around BAO scales,
damping the amplitude of the BAO feature and possibly affecting its measured position
(see \eg~\cite{Pritchard:2006ng,Coles:2007be,Pontzen:2014ena,Gontcho:2014nsa} for a discussion).
Finally, they will also have a negative impact on the constraints on parameters like the neutrino
mass and the amplitude of equilateral primordial non-Gaussianity: indeed,
the scale-dependent effects induced by neutrinos and equilateral non-Gaussianity are controlled by the free-streaming scale $k_{\rm fs}$
and the equality scale $k_{\rm eq}$ respectively, and both these scales could be close to $1/\lambda_{\rm eff}$.
Before proceeding, we emphasize that these RT effects also affect line emission from diffuse gas,
like the Lyman-$\alpha$ forest and {$\rm 21cm$} intensity mapping (see \eg~\cite{Pontzen:2014ena,Gontcho:2014nsa}).
Actually, we expect that the response of galaxies to ionizing radiation is very suppressed if
they reside in halos with mass much larger than the Jeans mass at reionization \cite{Schmidt:2017lqe},
while the line emission depends strongly on the ionization state of the medium and then on the ambient radiation field.
This distinction is not important for this paper: since we follow an effective field theory approach, all our conclusions will
apply to any physical tracer of the matter field (the differences between various tracers being encoded in their respective bias coefficients).
Our paper is subdivided in four main sections. We formulate in more detail the above questions in Section \ref{sec:questions},
and we answer them in Sections \ref{sec:radiative_bias} and \ref{sec:full_RT}. We draw our conclusions in Section \ref{sec:conclusions}.
Technical details on Sections \ref{sec:radiative_bias} and \ref{sec:full_RT} are collected in Appendix \ref{app:radiative_transfer}.
\paragraph{Notation and conventions}
We largely follow the notation of \cite{Desjacques:2016bnm}. Some differences are that we denote conformal time by $\eta$ ($\tau$ is reserved for the optical depth),
and denote the leading stochastic term in the bias expansion for galaxies by $\epsilon_{g}$ (instead of simply $\epsilon$), since we will be dealing with multiple tracers.
For simplicity we omit the dependence of the fluid trajectory $\vec{x}_{\rm fl}$ on the Lagrangian coordinate.
The metric signature is $(-,+,+,+)$. We work in natural units $c=\hbar=\varepsilon_0=1$, where $\varepsilon_0$ is the permittivity of free space. Consequently,
we freely exchange energy with (angular) frequency and time with length in our discussion of radiative transfer.
\section{Setting up the analysis}
\label{sec:questions}
\noindent As one can easily imagine, we can capture the effect of RT on galaxy formation
by allowing their number density to be a functional of the incoming flux of ionizing radiation \cite{Schmidt:2017lqe}. In
principle, galaxy formation is sensitive to this flux in a region of finite size, corresponding to the extent of the gas cloud around the parent halo,
which is of order of the Jeans length of the gas. This nonlocality can then be dealt with via a derivative expansion like that of \eq{locality-A},
and at leading order it is enough to consider the flux evaluated along the trajectory $\vec{x}_{\rm fl}$ of a Lagrangian patch enclosing the tracer:
higher-order corrections involve derivatives of the incoming flux along this fluid trajectory.
The additional scales of nonlocality in the problem are the following:
\begin{description}[leftmargin=*]
\item[Mean free path of radiation] As we discussed in the introduction, the first such scale is the effective mean free path $\lambda_{\rm eff}$. Radiation
can travel long distances before reaching the galaxy
(while in contrast matter and biased tracers typically move $\lesssim \pi/k_{\rm NL}(z=0) \sim \lmpch{10}$ over the entire history of the universe). Therefore,
since the emissivity of the sources of photoionizing radiation is also biased with respect to matter,
we can conclude that the galaxies are now sensitive to the distribution of matter within their whole past light cone.
However, radiation is also being absorbed by the intergalactic medium, and then
sources farther than $\lambda_{\rm eff}$ from a given galaxy are not able to influence it.
This is why we can assume that the higher-derivative operators induced by RT effects are controlled by the scale $\lambda_{\rm eff}$,
\ie~$b_{\nabla^{2n}\delta}\sim f_{(2n)}\lambda^{2n}_{\rm eff}$ (see the estimate following this list). With
this assumption on the scaling of higher-derivative bias coefficients, Ref.~\cite{Schmidt:2017lqe} obtained a constraint on the bias coefficient of the operator $\nabla^2\delta$
for the galaxies of the BOSS DR12 galaxy sample \cite{Beutler:2016ixs,Beutler:2016arn}: $\abs{f_{(2)}}\lesssim\num{0.002}\,(\lambda_{\rm eff}/\lmpch{50})^{-2}$
(the smallness of $f_{(2)}$ is consistent with the fact that the galaxies of the BOSS sample reside in massive halos, $M_{h}\approx\solar{d13}$).
\item[Response history to incoming flux] It is possible that galaxy formation does not respond instantaneously to the incoming radiation,
but keeps some ``memory'' of it.
To see the consequences of this, imagine for a moment that all ionizing radiation is emitted
in an interval $\Delta\eta$ around some redshift $z_\ast$, with $\Delta\eta\ll1/{\cal H}(z_\ast)$.
If the response of galaxies to the incoming flux of ionizing radiation was instantaneous, the galaxy number density at any event along the fluid worldline
would only depend on the distribution of sources at the intersection of the past light cone of that event with the hypersurface $z=z_\ast$
(see \fig{geometry_2D} below). In general however, it also knows about sources at $z_\ast$ that are inside the past light cone,
closer to the spatial position of the galaxies under consideration by an amount controlled by the response time.
Clearly this effect goes in the opposite direction with respect to that of a finite mean free path of radiation, and must also be accounted for.
\end{description}
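To get a feel for the numbers involved, consider a momentum typical of current galaxy-clustering analyses (a rough estimate of ours, using the fiducial $\lambda_{\rm eff}$ quoted above):
\begin{equation}
k^2\lambda_{\rm eff}^2\,\Big|_{\,k\,=\,0.1\,h\,{\rm Mpc}^{-1},\;\lambda_{\rm eff}\,=\,\lmpch{50}} = (0.1\times 50)^2 = 25\,\,,
\end{equation}
so a naive expansion in $\lambda_{\rm eff}\vec{\nabla}$ is already badly behaved on quasi-linear scales. This is precisely what motivates the resummation carried out in Section \ref{sec:radiative_bias}.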
Before discussing how to account for both these scales in our effective field theory approach,
let us also briefly think about the alternative to this. Obviously,
one could attempt a direct modeling of what would be called ``UV physics'' in the language of particle physics.
This means both a model of the response of galaxies to the ionizing photons,
but also of the spatial distribution of the radiation sources within the light cone of the observed galaxies.
However, it is clear that we cannot directly test our models for the latter against observations,
since the past light cone of the observed galaxies lies \emph{within} (and not \emph{on})
our past light cone: while in principle we could imagine reconstructing the response of galaxies with observations at different redshifts
(since by statistical homogeneity the response cannot depend on the spatial position of the galaxies),
we cannot receive the light from all the sources inside it. This is easy to see in an FLRW universe
(see for example \figs{geometry_1D}{geometry_2D} in the next section)
or any conformally flat spacetime, but holds non-perturbatively.\footnote{In general,
consider an event $P$ and an event $Q$ in the causal past of $P$.
Then, any event $R$ in the causal past of $Q$ belongs to the causal past of $P$, since we can always find a timelike or lightlike curve connecting $P$ with $R$.
However, it is clear that there are many events in the causal past of $Q$ that are not on the light cone
of $P$, like for example any event that is reached by a past-directed timelike curve from $Q$.}
Instead of pursuing this route, we attempt to remain as general as possible and only
assume that the sources of ionizing radiation can be described by a bias expansion whose nonlocality scale
is much shorter than the m.f.p.~of ionizing radiation. Further, since in this paper we are only interested in treating the higher-derivative
terms controlled by $\lambda_{\rm eff}$, we will drop all higher-derivative terms that are suppressed by these additional short scales
and stop at first order in perturbations. The extension to higher orders in perturbations is well understood when spatial nonlocality
is expanded following \eq{locality-B} (see Sections 2 and 4 of \cite{Desjacques:2016bnm}). Moreover,
we expect RT effects to typically be of relatively small amplitude, so that a linear treatment of their contribution is appropriate.
\section{Radiative-transfer effects and the bias expansion}
\label{sec:radiative_bias}
\noindent We start by presenting the equations of radiative transfer, and discuss in more detail the properties of the emission and absorption coefficients
(Section \ref{ssec:emission_absorption}).
Then, in order to set the stage, in Section \ref{ssec:single_flash} we consider a ``flash'' of radiation emitted
around $\eta=\eta_\ast$, \ie~emitted within an interval $\Delta\eta$ much shorter than a Hubble time, and focus only
on inhomogeneities in the radiation field on this hypersurface (\ie~we neglect the effect of inhomogeneities along the photon geodesics leading to the galaxies).
We show that, in this idealized scenario, we can capture the RT effects \emph{without
expanding in derivatives} by adding one new function of $k^2$ in the bias expansion: this
function encodes the response of the galaxies to the incoming flux of ionizing radiation along the past fluid worldline.
We also show that, even if we do not know this response exactly, it is possible to predict the shape of this function
at every order in an expansion in powers of the radiation m.f.p.~divided by the Hubble radius. That is, as long as the m.f.p.~is significantly smaller than the horizon,
we can successfully resum the power series in $k^2 \lambda_{\rm eff}^2$ into a computable function.
Our ignorance of the small-scale physics is then parameterized by the overall coefficients of this expansion, which are generic functions of time.
This statement is not trivial. It is well known that, in the standard bias expansion, all the Green's functions that describe the nonlocality in time of galaxy formation can,
at each order in perturbation theory, be rewritten in terms of a finite number of time-dependent (but not scale-dependent) bias coefficients. At linear order, this
is shown explicitly in Section \ref{sec:introduction} (compare \eq{locality-B} with \eq{locality-C}). In our case,
instead, any time dependence along the past fluid trajectory turns into a dependence on the spatial distribution of sources at $\eta=\eta_\ast$
(since the integrated incoming flux receives contributions from everything along the past light cone of the galaxies),
and this leads to a modification of all the higher-derivative terms in the bias expansion.
However, as long as the m.f.p.~is much shorter than the Hubble radius, photons are actually arriving from a small comoving volume
of size $\sim\lambda_{\rm eff}^3$ around the past fluid trajectory. This is what ultimately allows a resummation
of all these higher-derivative terms into specific functions of $k^2$, each proportional to an increasing power of ${\cal H}\lambda_{\rm eff}$. Crucially,
this resummation allows us to describe galaxy clustering even for $k\gg 1/\lambda_{\rm eff}$. We will later see whether this also holds beyond
the instantaneous flash and homogeneous medium approximations.
We compute the impact of the resummed RT effects on the galaxy power spectrum and
possible degeneracies with the neutrino mass in Section \ref{ssec:galaxy_statistics}.
In Sections \ref{ssec:q_tracers} and \ref{ssec:power_spectra} we discuss briefly how to treat tracers
that are immune to the very nonlocal effects discussed above (we label these tracers by ``$q$'' in the following).
This provides an illustration for how multiple LSS tracers might help in understanding the RT contributions.
\subsection{Radiative-transfer equation}
\label{ssec:emission_absorption}
\noindent Let us first write down the equation of radiative transfer.
We define the phase-space density of emitted radiation as $\mathcal{N}(x^\mu,P^\nu)$, where $P^\mu$ is the photon four-momentum. Calling $U^\mu$
the four-velocity of the observer that follows the fluid worldline $x^\mu_{\rm fl} = (\eta,\vec{x}_{\rm fl}(\eta))$,
we can decompose $P^\mu$ as $E(U^\mu+l^\mu)$, where $E$ and $l^\mu$ are, respectively,
the photon energy and its direction as measured by the observer defined by $U^\mu$. Consequently,
we can define the specific intensity of emitted radiation for the observer $U^\mu$ as ${\cal I} = E^3 \mathcal{N}$.
Its evolution is dictated by the Boltzmann equation \mbox{(see \eg~\cite{Rybicki:2004hfl})}
\begin{equation}
\label{eq:boltzmann-1}
\frac{{\rm D}{\cal I}}{\mathrm{d}\lambda} = \frac{3{\cal I}}{E}\frac{{\rm D} E}{\mathrm{d}\lambda} + {\cal I}\sigma_{\rm ab} j^\mu_{\rm ab} P_\mu
- \frac{\rho_{\rm em}\varepsilon_{\rm em}{U^\mu_{\rm em}}P_\mu}{4\pi}\,\,,
\end{equation}
where $\lambda$ is an affine parameter along the photon geodesics, normalized such that $P^\mu = {\mathrm{d} x^\mu}/{\mathrm{d}\lambda}$,
and the term $3{\cal I}\times({\rm D}\log E/{\rm d}\lambda)$ encodes, \eg, the dilution due to the expansion of the universe. For simplicity of
notation we have suppressed the argument of $\cal I$: in general, like $\cal N$, it is also a function of the spacetime position $x^\mu$ and the photon four-momentum $P^\mu$.
Let us discuss in more detail the second and the third term on the right-hand side of \eq{boltzmann-1}, which are directly related to the so-called emission and absorption coefficients.
For simplicity, here we have considered a single family of emitters and absorbers, since the generalization to an arbitrary number is straightforward:
\begin{itemize}
\item the number current of absorbers (\eg~neutral hydrogen) is given by $j^\mu_{\rm ab} = n_{\rm ab} U^\mu_{\rm ab}$,
where $U^\mu_{\rm ab}$ is their four-velocity and $n_{\rm ab}$ their number density;
\item the absorption cross section is called $\sigma_{\rm ab}$.
In the case of absorption by neutral hydrogen with emission of an electron in the continuum (photoionization), this would be the bound-free cross section $\sigma_{\rm bf}$;
\item $U^\mu_{\rm em}$ is the four-velocity of the emitting medium and $\rho_{\rm em}$ its mass density.
For emitters of fixed mass $m_{\rm em}$ and number current $j^\mu_{\rm em}$ we have $\rho_{\rm em} U^\mu_{\rm em} = m_{\rm em} j^\mu_{\rm em}$;
\item the dimensionless function $\varepsilon_{\rm em}(x^\mu,P^\nu)$ is the emissivity,
defined as the energy emitted by the source medium per unit frequency per unit time per unit mass \cite{Rybicki:2004hfl}.
\end{itemize}
With this, the emission and absorption coefficients,
defined as the energy emitted per unit frequency per unit time per unit solid angle per unit volume, and as the cross-sectional area presented by absorbers per unit volume,
are $(\rho_{\rm em}\varepsilon_{\rm em})/(4\pi)$ and $\sigma_{\rm ab}n_{\rm ab}$, respectively.
What about the absorption cross section and the emissivity? There is no preferred direction towards which the constituents
of the source medium can align (at first order in perturbations): therefore the emissivity does not depend on the photon direction $l^\mu$. Moreover,
here we are also considering the total absorption cross section, so that $\sigma_{\rm ab}$ depends only on the photon energy.\footnote{Consider
for example the photoionization of hydrogen in the $1s$ state. If
the emitted electron is nonrelativistic, the angular dependence of the differential cross section is given by $\abs{\vers{k}_e\cdot\vers{\epsilon}}^2$,
where $\vec{k}_e$ is the momentum of the outgoing electron and $\vers{\epsilon}$ is the polarization of the incoming photon.
However, in this case we are not interested in
the direction of the outgoing electron, so we integrate over it (spin is conserved for nonrelativistic electrons, so tracing over it is trivial).
Further, we average over $\vers{\epsilon}$ since we consider unpolarized radiation.}
We emphasize that these assumptions are common to both analytic studies of reionization \cite{Mao:2014rja}
and radiative-transfer simulations (see \eg~the discussion in \cite{McQuinn:2018zwa}).
Finally, $\varepsilon_{\rm em}$ does not depend on the fluid trajectory $\vec{x}_{\rm fl}(\eta)$ but only on time, since
the emission of radiation is localized at the position of the sources. For some examples of specific source models we refer to \cite{Zhang:2006kr,DAloisio:2013mgn,Mao:2014rja}.
Before proceeding, we also emphasize that in \eq{boltzmann-1} we have neglected scattering.
We discuss this issue in more detail in Section \ref{sec:full_RT}. While \eq{boltzmann-1} holds in general spacetimes,
for the purposes of this paper it is sufficient to consider an FLRW universe, and neglect metric perturbations.
More precisely, lensing will be relevant only at second order in perturbations (since a nontrivial lensing effect only arises if there are anisotropies in the radiation field),
and gravitational redshift effects (Sachs-Wolfe and Integrated Sachs-Wolfe) can be neglected on sufficiently sub-horizon scales.
Let us now take advantage of the fact that, if we neglect higher-derivative terms suppressed by the halo Lagrangian radius,
all of emitters, absorbers, and receivers are comoving with the matter fluid, \ie~there is no velocity bias
\cite{Senatore:2014eva,Mirbabayi:2014zca,Desjacques:2016bnm}.
Then, we have that $j^\mu_{\rm ab} = n_{\rm ab} U^\mu$ and $U^\mu_{\rm em} = U^\mu$.
Using $U^\mu U_\mu = -1$ and $l^\mu U_\mu = 0$, this tells us that $j^\mu_{\rm ab} P_\mu = {-n_{\rm ab}}E$, and ${-P_\mu U_{\rm em}^\mu} = E$.
Therefore \eq{boltzmann-1} becomes
\begin{equation}
\label{eq:boltzmann-2}
E(U^\mu+l^\mu)\nabla_\mu{\cal I} + \frac{\partial{\cal I}}{\partial E}\frac{{\rm D}E}{{\rm d}\lambda}
= \frac{3{\cal I}}{E}\frac{{\rm D} E}{\mathrm{d}\lambda} - {\cal I}\sigma_{\rm ab} n_{\rm ab} E + \frac{\rho_{\rm em}\varepsilon_{\rm em}E}{4\pi}\,\,,
\end{equation}
where we have expanded ${\rm D}/{\rm d}\lambda$ using the fact that ${\cal I}$ does not depend explicitly on $l^\mu$,
given our assumptions on $\sigma_{\rm ab}$ and $\varepsilon_{\rm em}$.
Then, for an FLRW metric in comoving coordinates
we have $l^\mu=(0,-\vers{n}/a)$, where $\vers{n}$ is opposite to the photon direction:
$\vers{n}$ remains constant because there is no lensing, and ${\rm D}E/{\rm d}\lambda = -HE^2$. Moreover,
since we stop at first order in perturbations in the bias expansion, it is not necessary to consider displacements \cite{Senatore:2014eva,Mirbabayi:2014zca,Desjacques:2016bnm}:
that is, the fluid worldline is just given by $x^\mu_{\rm fl} = (\eta,\vec{x}_{\rm fl}(\eta)) = (\eta,\vec{x})$ and then $U^\mu=\delta^\mu_0/a$.
\eq{boltzmann-2} then becomes
\begin{equation}
\label{eq:boltzmann-3}
\frac{\partial{\cal I}}{\partial\eta} - \vers{n}\cdot\vec{\nabla}{\cal I} - {\cal H}E\frac{\partial{\cal I}}{\partial E} =
{-3}{\cal H}{\cal I} - {\cal I}\sigma_{\rm ab} n_{\rm ab}a + \frac{\rho_{\rm em}\varepsilon_{\rm em}a}{4\pi}\,\,,
\end{equation}
where now and in the following we write ${\cal I}$ in terms of the arguments ${\cal I}(\eta,\vec{x},E,\vers{n})$.\footnote{Given our assumptions of isotropic emissivity and no scattering the dependence on $\vers{n}$
comes only from the photon free-streaming after emission: we refer to \eq{boltzmann_solution} for details.}
It is straightforward to solve \eq{boltzmann-3} by an integral along the line of sight. This is done in Appendix \ref{app:solution_of_boltzmann_equation}.
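Schematically, the solution proceeds as follows (a sketch of ours; factors and integration limits are fixed carefully in the appendix). Defining the comoving energy $\epsilon\equiv aE$ and the rescaled intensity ${\cal J}(\eta,\vec{x},\epsilon,\vers{n})\equiv a^3(\eta)\,{\cal I}\big(\eta,\vec{x},\epsilon/a(\eta),\vers{n}\big)$, the dilution and redshift terms on the left-hand side of \eq{boltzmann-3} are absorbed, and we are left with a transport equation along straight lines,
\begin{equation}
\bigg(\frac{\partial}{\partial\eta}-\vers{n}\cdot\vec{\nabla}\bigg){\cal J} = {-{\cal J}}\,\sigma_{\rm ab}n_{\rm ab}a + \frac{a^4\rho_{\rm em}\varepsilon_{\rm em}}{4\pi}\,\,,
\end{equation}
which is solved by an integral along the characteristic $\vec{x}(\eta')=\vec{x}+\vers{n}(\eta-\eta')$,
\begin{equation}
{\cal J}(\eta,\vec{x},\epsilon,\vers{n}) = \int_{\eta_{\rm in}}^{\eta}\mathrm{d}\eta'\,e^{-\tau(\eta,\eta')}\,
\frac{(a^4\rho_{\rm em}\varepsilon_{\rm em})\big(\eta',\vec{x}+\vers{n}(\eta-\eta'),\epsilon/a(\eta')\big)}{4\pi}\,\,,
\end{equation}
where $\eta_{\rm in}$ is an initial time before which ${\cal I}$ vanishes (a label of ours), and $\tau(\eta,\eta')$ is the optical depth accumulated between $\eta'$ and $\eta$, as in \eq{optical_depth}. Dividing by $a^3(\eta)$ and restricting the emission to a single flash reproduces \eq{observed_intensity} below.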
Before discussing the most general solution (which will be done in Section \ref{sec:full_RT}), let us focus on the case
where the emissivity is non-vanishing only for a short interval of time around $\eta_\ast$,
and there are no inhomogeneities in the number density of absorbers ($n_{\rm ab}(\eta,\vec{x}) = \widebar{n}_{\rm ab}(\eta)$).
Clearly, the time evolution of the emissivity and the fluctuations in $n_{\rm ab}$ must also be considered.
However, these assumptions allow us to introduce in a simple way the response of galaxies to the ionizing radiation.
Moreover, as discussed above, we will show that
it is in this scenario that we are able to predict how the correction to the galaxy bias from radiative transfer depends on $k$.
\subsection{Instantaneous flash and homogeneous optical depth}
\label{ssec:single_flash}
\noindent The spacetime diagram that summarizes our setup is shown in \figs{geometry_1D}{geometry_2D}.
Radiation is emitted in a single flash around $\eta=\eta_\ast$, and can affect the galaxies over the past fluid worldline (blue line in \figs{geometry_1D}{geometry_2D}).
\begin{figure}
\centering
\includegraphics[width=0.85\columnwidth]{geometry_1D.pdf}
\caption{Spacetime diagram for a single emission time $\eta_\ast$. The observer is at $(\eta_0,\vec{0})$,
while the observed galaxies are at $(\eta_{g},\vec{x}_{g})$. The tracers that we assume to be locally biased
with respect to the radiation field are observed at $(\eta_{q},\vec{x}_{q})$.}
\label{fig:geometry_1D}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.85\columnwidth]{geometry_2D.pdf}
\caption{Same as \fig{geometry_1D}, but suppressing only one space dimension (with $\Delta \eta_{{g},\ast}$ defined as $\eta_{g}-\eta_\ast$).
We see that there is only one position on the surface $\eta=\eta_\ast$
from which we could in principle receive the radiation directly (without it being scattered towards us at $(\eta_{g},\vec{x}_{g})$):
however this point is hidden behind the observed galaxies.}
\label{fig:geometry_2D}
\end{figure}
At leading order in $\Delta\eta\ll1/{\cal H}(\eta_\ast)$, and in absence of scattering and of additional sources after $\eta_\ast$,
the radiation intensity received by the galaxies at some event $(\eta,\vec{x})$ on $x^\mu_{\rm fl}$ takes the form (see Appendix \ref{app:solution_single_flash} for a derivation)
\begin{equation}
\begin{split}
\label{eq:observed_intensity}
{\cal I}(\eta,\vec{x},E,\vers{n}) &= \bigg(\frac{1+z(\eta)}{1+z_\ast}\bigg)^3e^{-\tau(\eta,\,\vec{x},\,E,\,\vers{n})}\,
{\cal I}_\ast\big(\eta_\ast,\vec{x}+\vers{n}(\eta-\eta_\ast),E(\eta_\ast,\eta)\big) \\
&= \bigg(\frac{1+z(\eta)}{1+z_\ast}\bigg)^3e^{-\tau(\eta,\,\vec{x},\,E,\,\vers{n})}\,\frac{\Delta\eta\,a_\ast\,{\rho}_{\rm em}\big(\eta_\ast,\vec{x}+\vers{n}(\eta-\eta_\ast)\big)\,
\varepsilon_{\rm em}\big(\eta_\ast,E(\eta_\ast,\eta)\big)}{4\pi}\,\,,
\end{split}
\end{equation}
where the emitted intensity ${\cal I}_\ast$ is equal to $\Delta\eta(\rho_{\rm em}\varepsilon_{\rm em}a)|_{\eta=\eta_\ast}/(4\pi)$
(and is independent of $\vers{n}$ given that $\varepsilon_{\rm em}$ is isotropic),
and the redshift dependence of the photon energy is, for general $\eta'\leq\eta$, given by
\begin{equation}
\label{eq:energy_redshift_relation}
E(\eta',\eta)=E\frac{1+z(\eta')}{1+z(\eta)}\,\,,
\end{equation}
so that $E$ is the energy measured by an observer at $(\eta,\vec{x})$. Given
the number density of absorbers and the absorption cross section, the optical depth $\tau$ in \eq{observed_intensity} is equal to
\begin{equation}
\label{eq:optical_depth}
\tau(\eta,\vec{x},E,\vers{n}) = \int_{\eta_\ast}^{\eta}\mathrm{d}\eta'\,(\sigma_{\rm ab}n_{\rm ab} a)\big(\eta',\vec{x}+\vers{n}(\eta-\eta'),E(\eta',\eta)\big)\,\,.
\end{equation}
From now on we take $\sigma_{\rm ab}$ to be the bound-free cross section $\sigma_{\rm bf}$ for photoionization of the $1s$ state,
and $n_{\rm ab}=n_{\text{HI}}$ to be the corresponding number density of neutral hydrogen in the $1s$ state.
We will discuss what happens if we consider the full cross section and the more general scenario of multiple absorbers
(like higher levels of hydrogen or helium, for example) in Section \ref{ssec:caveats}.
Moreover, since in this section we assume the absorbers to be homogeneous, $n_{\text{HI}} = \widebar{n}_{\text{HI}}$, $\tau$ does not depend on $\vers{n}$.
Also, for simplicity of notation we will drop the bar {on $\widebar{n}_{\text{HI}}$.}
Let us then assume that there are some local properties of galaxies that depend on the received flux along their whole past history:
for example, these could be the heating and cooling rates of the gas accreting onto the parent halo,
which are in turn related to the star-formation rate.
Therefore, we expect this dependence to be inherited by the galaxy number density $n_{g}$ \cite{Schmidt:2017lqe}.
More precisely, we can parameterize its response to the intensity of ionizing radiation by means of a Green's function $G_{g}$ along the fluid worldline
(the subscript ``$g$'' is to differentiate it from the Green's function for the $q$ tracers, that is discussed in Section \ref{ssec:q_tracers}).
This function encodes the ``UV physics'' of galaxy formation that the bias expansion is oblivious to.
At zeroth order, the RT effects on the mean galaxy density can then be written as
\begin{equation}
\label{eq:definition_of_G_g-A-1}
\begin{split}
{\widebar{n}_{g}|_{\rm ion}(\eta) } = 4\pi
\int_{\eta_\ast}^\eta\mathrm{d}\eta'\int_0^{+\infty}\mathrm{d} E\,{G}^{(0)}_{g}(\eta,\eta',E)\,\widebar{{\cal I}}(\eta',E)\,\,,
\end{split}
\end{equation}
where $G_g^{(0)}$ is the zeroth-order Green's function and we have used the fact that $\widebar{{\cal I}}$ does not depend on $\vers{n}$.
Similarly, at first order in perturbations, we have
\begin{equation}
\label{eq:definition_of_G_g-A-2}
\begin{split}
&\delta{n}_{g}|_{\rm ion}(\eta,\vec{x}) = \int_{\eta_\ast}^\eta\mathrm{d}\eta'\int\mathrm{d}\vers{n}\int_0^{+\infty}\mathrm{d} E\,{G}^{(1)}_{g}(\eta,\eta',E,\vers{n})\,
\delta{{\cal I}}\big(\eta',\vec{x}_{\rm fl}(\eta'),E,\vers{n}\big) \\
&\hphantom{\delta{n}_{g}|_{\rm ion}(\eta,\vec{x}) } =
\int_{\eta_\ast}^\eta\mathrm{d}\eta'\int\mathrm{d}\vers{n}\int_0^{+\infty}\mathrm{d} E\,{G}^{(1)}_{g}(\eta,\eta',E,\vers{n})\,\delta{{\cal I}}(\eta',\vec{x},E,\vers{n})\,\,,
\end{split}
\end{equation}
where again we have used the fact that $\vec{x}_{\rm fl}$ is equal to $\vec{x}$ at the order we are working at.
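As a simple check of these definitions, note that an instantaneous response, $G^{(1)}_{g}(\eta,\eta',E,\vers{n}) = \hat{g}(\eta,E)\,\delta_{\rm D}(\eta-\eta')$ (our notation for this limiting case), collapses \eq{definition_of_G_g-A-2} to a local-in-time expression,
\begin{equation}
\delta{n}_{g}|_{\rm ion}(\eta,\vec{x}) = \int\mathrm{d}\vers{n}\int_0^{+\infty}\mathrm{d} E\,\hat{g}(\eta,E)\,\delta{\cal I}(\eta,\vec{x},E,\vers{n})\,\,,
\end{equation}
\ie~the galaxy abundance then depends only on the flux received at the same time $\eta$: it is the finite width of the Green's function that encodes the ``memory'' discussed in Section \ref{sec:questions}.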
Before proceeding, let us discuss \eqsII{definition_of_G_g-A-1}{definition_of_G_g-A-2} in more detail.
What we have done is to isolate the contribution of ionizing radiation to the evolution of the galaxy number density
(hence the subscript ``$\rm ion$'' on $\widebar{n}_g$ and $\delta{n}_{g}$).
At the order in perturbations we are working at, there is no loss of generality if we just sum this contribution to that arising from purely gravitational evolution,
whose nonlocality scale is the halo Lagrangian radius $R(M_{h})$. That is, we can write the total dimensionless perturbation $\delta_g$ to the
galaxy number density as the sum $\delta_g|_{\rm grav}+\delta_g|_{\rm ion}$,
where $\delta_g|_{\rm ion}$ is obtained from \eqsII{definition_of_G_g-A-1}{definition_of_G_g-A-2} by simply taking the ratio $\delta n_g|_{\rm ion}/\widebar{n}_g|_{\rm ion}$.
We also emphasize that we allowed for two different Green's functions, one for the response of the average galaxy number density and one for its perturbation.
Indeed, there is no physical reason why the response should be the same at all orders in perturbations \cite{Desjacques:2016bnm}.
Besides, as we discussed at the beginning of this section, the sources of ionizing radiation are also
expected to be a biased tracer of the underlying matter distribution. Therefore,
$\delta{\cal I}$ will have both a deterministic and a stochastic contribution.
If these two components have different emission spectra (which also effectively leads to them having different mean free paths),
it is possible for galaxies to have a different response to each of them: we will not consider this case in the following,
and just comment briefly in Section \ref{ssec:galaxy_statistics} on how this would modify the results we obtain here.
What, then, are the properties of these Green's functions, ${G_g^{(0)}}$ and ${G_g^{(1)}}$?
Let us first focus on their dependence on the photon four-momentum.
At this order in perturbations we do not expect the galaxy response to depend on the photon arrival direction $\vers{n}$:
from the point of view of the large-scale bias expansion, this would amount to the presence of a preferred direction in the rest frame of the observer comoving with the fluid,
which is forbidden by rotational invariance.\footnote{At higher orders in perturbations we can, for example, use vectors constructed from gradients
of the matter overdensity.} Then, in order to make progress, we assume that the time and energy dependencies can be factorized.
Moreover, we assume that the energy dependence of both Green's functions is proportional to the absorption cross section itself,
\ie~we write
\begin{equation}
\label{eq:definition_of_G_g-B}
G^{(i)}_g(\eta,\eta',E)={\cal G}\sigma_{\rm bf}(E)G^{(i)}_g(\eta,\eta')\quad\text{for $i=0,1$}\,\,,
\end{equation}
where ${\cal G}$ is a constant with dimensions of an inverse length squared. This assumption is justified
if we are thinking of the effect of photoevaporation of the gas accreting onto the halo
(in any case, our conclusions will not change even if we consider a more general dependence on $E$, as we will see in Section \ref{ssec:caveats}).
We now turn to the time dependence of the Green's functions, to have an idea of what are the different scales involved in the problem.
Clearly, in the case of an instantaneous response of galaxy formation to the incoming flux,
they would both be a function of $\eta$ times $\delta(\eta-\eta')$.
In general, there is a finite response time, \ie~we can imagine that
the Green's functions are peaked around $\eta'=\eta$, with some finite width for the peak.
The incoming ionizing radiation is (obviously) not interacting with the dark matter in the host halo,
but with the gas accreting onto it. The time scale of this interaction is set by atomic physics, and then we expect it to be much faster
than a Hubble time. Nevertheless, we also expect the star-formation rate to be tied to the accretion rate of baryonic matter onto the host halo.
This, in turn, is related to the total mass flow, which we know from gravity-only $N$-body simulations to evolve on Hubble time scales.
For this reason, in this section we are going to consider the case in which the galaxy response is varying only on cosmological time scales.
A more detailed discussion is presented in Section \ref{sec:conclusions}.
With this assumption, there is no loss of generality in redefining the two Green's functions
in such a way as to reabsorb the factor $(1+z(\eta'))^3/(1+z_\ast)^3$ in \eq{observed_intensity}.
More precisely, we write
\begin{equation}
\label{eq:reabsorb_redshift}
G^{(i)}_{g}(\eta,\eta')\to\bigg(\frac{1+z_\ast}{1+z(\eta')}\bigg)^3\,{G}^{(i)}_{g}(\eta,\eta')\quad\text{for $i=0,1$}\,\,.
\end{equation}
We will later use the fact that the responses vary on time scales of order Hubble.
We can now carry out the integral over $\mathrm{d}\eta'$ and $\mathrm{d}\vers{n}$ in \eq{definition_of_G_g-A-2}.
The full calculation is carried out in Appendix \ref{app:solution_single_flash}:
eventually, we see that $\delta n_g|_{\rm ion}$ is equal to
\begin{equation}
\label{eq:J_ion}
\begin{split}
&\delta n_g|_{\rm ion}(\eta,\vec{x})=\frac{\Delta\eta\,a_\ast\,{\cal G}}{4\pi}\int_{\abs{\vec{y}}\,\leq\,\eta\,-\,\eta_\ast}\mathrm{d}^3y\,
\frac{{G}^{(1)}_{g}(\eta,\eta_\ast+\abs{\vec{y}})}{\abs{\vec{y}}^2}\,\delta\rho_{\rm em}(\eta_\ast,\vec{x}+\vec{y})\,\times \\
&\hphantom{\delta n_g|_{\rm ion}(\eta,\vec{x})=\frac{\Delta\eta\,a_\ast\,{\cal G}}{4\pi} }
\int_0^{+\infty}\mathrm{d} E\,\sigma_{\rm bf}(E)\,\varepsilon_{\rm em}\big(\eta_\ast,E(\eta_\ast,\eta_\ast+\abs{\vec{y}})\big)\,{e^{-\hat{\tau}(\eta,\,\vec{y},\,E)}}\,\,,
\end{split}
\end{equation}
where $\hat{\tau}$ is given by
\begin{equation}
\label{eq:hat_tau}
\hat{\tau}(\eta,\vec{y},E) = \hat{\tau}(\eta,\abs{\vec{y}},E) = \abs{\vec{y}}\int_{0}^{1}\mathrm{d} u\,(n_{\text{HI}} a)(\eta_\ast+u\abs{\vec{y}})\,
\sigma_{\rm bf}\big(E(\eta_\ast+u\abs{\vec{y}},\eta_\ast+\abs{\vec{y}})\big)\,\,.
\end{equation}
The photoionization of $1s$ hydrogen requires energies higher than the Lyman limit $E_{\infty}$:
therefore the integral over $E$ in \eq{J_ion} starts from $E_{\infty}$.\footnote{Notice that
$E(\eta',\eta)\geq E$ for $\eta'<\eta$ (photons lose energy as they travel towards the galaxy): therefore this constraint is automatically satisfied in \eq{hat_tau}.}
At energies much higher than $E_\infty$, but small enough that the emitted electron is still nonrelativistic,
$\sigma_{\rm bf}(E)\sim E^{-7/2}$ (see \eg~\cite{Sakurai:1167961,Rybicki:2004hfl}). More precisely, we have
\begin{equation}
\label{eq:photoionization_cross_section_leading_scaling}
\sigma_{\rm bf}(E) = \frac{64\pi\alpha^3}{3E_\infty^2}\bigg(\frac{E_\infty}{E}\bigg)^{\frac{7}{2}}\equiv\sigma_\infty\bigg(\frac{E_\infty}{E}\bigg)^{\frac{7}{2}}\,\,,
\end{equation}
where $\alpha=e^2/4\pi$ is the fine-structure constant.
Corrections to this scaling can be expanded as a series in $E_\infty/E$, and are discussed in more detail in Section \ref{ssec:caveats}.
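For orientation, evaluating the normalization numerically (our estimate, using $\alpha\simeq 1/137$ and $E_\infty\simeq 13.6\,{\rm eV}$) gives
\begin{equation}
\sigma_\infty = \frac{64\pi\alpha^3}{3E_\infty^2}\simeq 5.5\times10^{-17}\,{\rm cm}^2\,\,,
\end{equation}
roughly an order of magnitude above the exact cross section at threshold, as expected given that \eq{photoionization_cross_section_leading_scaling} is the high-energy scaling extrapolated down to $E=E_\infty$.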
Let us then make three further assumptions. First, we consider a power-law
spectrum of emitted radiation, with a spectral index $s$ (which we take to be not too hard, \ie~$s < 1$): this is, for example, the parameterization
discussed in \cite{Zhang:2006kr,DAloisio:2013mgn}, while \cite{Mao:2014rja} studies the cases $s=0$ and $s=-2$.
Therefore, in \eq{J_ion} we can write $\varepsilon_{\rm em}(\eta_\ast,E)$ as $\varepsilon_{\infty}(\eta_\ast)(E/E_\infty)^{s}$.
Second, we further approximate $\hat{\tau}(\eta,\abs{\vec{y}},E)$ as
\begin{equation}
\label{eq:hat_tau_approx}
\begin{split}
\hat{\tau}(\eta,\abs{\vec{y}},E) &=
\abs{\vec{y}}\int_{0}^{1}\mathrm{d} u\,(n_{\text{HI}} a)(\eta_\ast+u\abs{\vec{y}})\,
\sigma_{\rm bf}\big(E(\eta_\ast+u\abs{\vec{y}},\eta_\ast+\abs{\vec{y}})\big) \\
&\approx\abs{\vec{y}}\sigma_{\rm bf}\big(E(\eta_\ast,\eta_\ast)\big)(n_{\text{HI}} a)(\eta_\ast)
=\abs{\vec{y}}\sigma_{\rm bf}(E)(n_{\text{HI}} a)(\eta_\ast) \equiv \frac{\abs{\vec{y}}}{\lambda_{\rm ion}(\eta_\ast,E)}\,\,.
\end{split}
\end{equation}
Clearly, the time evolution of the mean number density of neutral hydrogen must be taken into account,
since it enters in \eq{hat_tau_approx} and gives an additional dependence on $\abs{\vec{y}}$.
We know that the ionization fraction basically changes from $0$ to $1$ during reionization,
and it is possible that this change happened very quickly, leading to a strong dependence of $\hat{\tau}$ on $\abs{\vec{y}}$
(however, it is important to stress that we do not have yet a clear picture of the time evolution of the ionization fraction,
since the main constraints come from the measurement of CMB anisotropies, that are only sensitive to the optical depth to redshift $z\approx 1100$).
Therefore it is important to take this time dependence into account as well:
we will come back to this point in Section \ref{ssec:caveats}, together with the full dependence of the cross section on redshift in \eq{hat_tau_approx}.
For now, the approximation of \eq{hat_tau_approx} is sufficient. With these assumptions, then, \eq{J_ion} becomes
\begin{equation}
\label{eq:J_ion_approximated}
\begin{split}
&\delta n_g|_{\rm ion}(\eta,\vec{x})=\frac{\Delta\eta\,a_\ast
\,{\cal G}\,\sigma_\infty\,\varepsilon_\infty}{4\pi}\int_{\abs{\vec{y}}\,\leq\,\eta\,-\,\eta_\ast}\mathrm{d}^3y\,
\frac{{G}^{(1)}_{g}(\eta,\eta_\ast+\abs{\vec{y}})}{\abs{\vec{y}}^2}\,\bigg(\frac{1+z_\ast}{1+z(\eta_\ast+\abs{\vec{y}})}\bigg)^s\,\times \\
&\hphantom{\delta n_g|_{\rm ion}(\eta,\vec{x})=\frac{\Delta\eta\,a_\ast
\,{\cal G}\,\sigma_\infty\,\varepsilon_\infty}{4\pi} }
\,\delta\rho_{\rm em}(\eta_\ast,\vec{x}+\vec{y})
\int_{E_\infty}^{+\infty}\mathrm{d} E\,\bigg(\frac{E}{E_\infty}\bigg)^{s\,-\,\frac{7}{2}}\,{e^{-\frac{\abs{\vec{y}}}{\lambda_{\rm ion}(\eta_\ast,\,E)}}}\,\,.
\end{split}
\end{equation}
Third, in the following we are going to neglect the restriction of the integral to the interior of the past light cone of $(\eta,\vec{x})$,
since the mean free path is much shorter than $\eta-\eta_\ast$.
Indeed, if we imagine observing galaxies at $z=\num{1.5}$, which is a typical redshift for the upcoming large-scale galaxy redshift surveys,
we have that the ratio between $(\eta-\eta_\ast)$ and the m.f.p.~is of order $100$, even taking conservatively $\eta_\ast$ corresponding to the end of reionization ($z\approx 6$),
and using that the m.f.p.~is of order $\lmpch{50}$ around these redshifts \cite{Worseck:2014fya,Becker:2014oga}.
In any case, including the constraint $\abs{\vec{y}}\leq\eta-\eta_\ast$ is straightforward, but it would only complicate the calculations without adding relevant physics:
we briefly discuss it in Section \ref{ssec:caveats}. Moreover, similarly to what we did in \eq{reabsorb_redshift} and without loss of generality, we reabsorb
the factor $(1+z_\ast)^s/(1+z(\eta_\ast+\abs{\vec{y}}))^s$ coming from the redshift dependence of the emissivity into the Green's function.
The integral over energy in \eq{J_ion_approximated} can then be carried out analytically. Let us however \mbox{approximate it by}
\begin{equation}
\label{eq:integral_over_energy}
\int_{E_\infty}^{+\infty}\mathrm{d} E\,\bigg(\frac{E}{E_\infty}\bigg)^{s\,-\,\frac{7}{2}}\,{e^{-\frac{\abs{\vec{y}}}{\lambda_{\rm ion}(\eta_\ast,\,E)}}} \approx
\frac{2E_{\infty}}{5-2s}\,
e^{{-\frac{2s\,-\,5}{2(s\,-\,6)}}\abs{\vec{y}}\sigma_\infty(n_{\text{HI}} a)(\eta_\ast)}\,\,,
\end{equation}
from which we can define an ``effective'' m.f.p.~${\lambda}_{\rm eff}(\eta_\ast)$ such that the integral is proportional to
$\exp({-\abs{\vec{y}}}/{\lambda}_{\rm eff}(\eta_\ast))$. This approximation works well for $\abs{\vec{y}}\ll{\lambda}_{\rm eff}(\eta_\ast)$,
with corrections starting at order $\abs{\vec{y}}^2/{\lambda}^2_{\rm eff}(\eta_\ast)$ (we will discuss in Section \ref{ssec:caveats} how to go beyond this approximation).
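More explicitly (a short derivation of ours), write $u=E/E_\infty$ and $\kappa\equiv\abs{\vec{y}}\,\sigma_\infty(n_{\text{HI}} a)(\eta_\ast)$, so that $\abs{\vec{y}}/\lambda_{\rm ion}(\eta_\ast,E)=\kappa\,u^{-7/2}$. Expanding the left-hand side of \eq{integral_over_energy} to first order in $\kappa$,
\begin{equation}
\int_{1}^{+\infty}\mathrm{d} u\,u^{s\,-\,\frac{7}{2}}\,e^{-\kappa u^{-7/2}}
= \frac{2}{5-2s}-\frac{\kappa}{6-s}+{\cal O}(\kappa^2)
= \frac{2}{5-2s}\bigg[1-\frac{5-2s}{2(6-s)}\,\kappa\bigg]+{\cal O}(\kappa^2)\,\,,
\end{equation}
and re-exponentiating the term in square brackets gives \eq{integral_over_energy} with
\begin{equation}
\frac{1}{{\lambda}_{\rm eff}(\eta_\ast)} = \frac{5-2s}{2(6-s)}\,\sigma_\infty\,(n_{\text{HI}} a)(\eta_\ast)\,\,,
\end{equation}
which makes manifest that ${\lambda}_{\rm eff}$ is the comoving m.f.p.~at threshold up to an ${\cal O}(1)$, $s$-dependent factor.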
To summarize, we have reduced \eq{J_ion_approximated} to the form
\begin{equation}
\label{eq:J_ion_more_approximated}
\begin{split}
\delta n_g|_{\rm ion}(\eta,\vec{x})&=\underbrace{\frac{\Delta\eta\,a_\ast
\,{\cal G}\,\sigma_\infty\,\varepsilon_\infty\,E_\infty}{2\pi(5-2s)}}_{
\hphantom{{\cal C}_{\rm ion}\,}\equiv\,{\cal C}_{\rm ion}}\int\mathrm{d}^3y\,
\frac{{G}^{(1)}_{\rm g}(\eta,\eta_\ast+\abs{\vec{y}})}{\abs{\vec{y}}^2}\,\delta\rho_{\rm em}(\eta_\ast,\vec{x}+\vec{y})\,
{e^{-\frac{\abs{\vec{y}}}{{\lambda}_{\rm eff}(\eta_\ast)}}}\,\,.
\end{split}
\end{equation}
For simplicity of notation we will regroup the overall factors in a single dimensionless quantity ${\cal C}_{\rm ion}$ in the following.
Moreover, for the rest of this section $\lambda_{\rm eff}$ is always understood to be evaluated at $\eta_\ast$,
and $\cal H$ at $\eta$, unless stated otherwise.
We now use the fact that the responses vary on time scales of order Hubble by expanding $G^{(1)}_g$ in a Taylor series around $\eta_\ast$:\footnote{We could
have equivalently expanded the Green's function around $\eta$. We have chosen $\eta_\ast$ as the expansion point since this makes calculations easier.}
\begin{equation}
\label{eq:G_g_expansion}
G^{(1)}_{g}(\eta,\eta') = \sum_{n\,=\,0}^{+\infty}g_{n}(\eta){\cal H}^n(\eta)(\eta'-\eta_\ast)^n\,\,,
\end{equation}
where, without loss of generality, we have made the functions $g_{n}(\eta)$ dimensionless by factoring out ${\cal H}^n(\eta)$.
The response for ${\widebar{n}_{g}|_{\rm ion}(\eta)}$ can be expanded in a similar way, but this is unnecessary (as we will see in a moment).
Introducing the dimensionless perturbation $\delta_{\rm em}=\delta\rho_{\rm em}/\widebar{\rho}_{\rm em}$ of the density of emitters,
and noting that the Green's function in \eq{J_ion_more_approximated} is evaluated at $\eta'=\eta_\ast+\abs{\vec{y}}$
(so that each power of $\eta'-\eta_\ast$ in \eq{G_g_expansion} becomes a power of $\abs{\vec{y}}$), \eq{J_ion_more_approximated} becomes
\begin{equation}
\label{eq:J_ion_more_approximated-bis}
\begin{split}
\delta n_g|_{\rm ion}(\eta,\vec{x})
&={\cal C}_{\rm ion}\widebar{\rho}_{\rm em}(\eta_\ast)\sum_{n\,=\,0}^{+\infty}g_{n}(\eta){\cal H}^{n}(\eta)\int\mathrm{d}^3y\,
\abs{\vec{y}}^{n\,-\,2}\,\delta_{\rm em}(\eta_\ast,\vec{x}+\vec{y})\,{e^{-\frac{\abs{\vec{y}}}{{\lambda}_{\rm eff}(\eta_\ast)}}}\,\,.
\end{split}
\end{equation}
Finally, after dividing \eq{J_ion_more_approximated-bis}
by the average galaxy number density $\widebar{n}_g|_{\rm ion}$ we obtain the expression for the dimensionless fluctuation $\delta_g|_{\rm ion}$, \ie~
\begin{equation}
\label{eq:flux_perturbations}
\delta_g|_{\rm ion}(\eta,\vec{x})=\frac{{\cal C}_{\rm ion}\widebar{\rho}_{\rm em}(\eta_\ast)}{\widebar{n}_g|_{\rm ion}(\eta)}\sum_{n\,=\,0}^{+\infty}g_{n}(\eta){\cal H}^{n}\int\mathrm{d}^3y\,
\abs{\vec{y}}^{n\,-\,2}\,\delta_{\rm em}(\eta_\ast,\vec{x}+\vec{y})\,{e^{-\frac{\abs{\vec{y}}}{{\lambda}_{\rm eff}}}}\,\,.
\end{equation}
What does \eq{flux_perturbations} mean for the bias expansion of $\delta_{g}|_{\rm ion}$? We
see that \eq{flux_perturbations} is an example of a nonlocal contribution to the bias expansion:
only if its kernel is sufficiently localized is it possible to do a (spatial) derivative expansion and end up with local operators.
Here it is the m.f.p.~that controls the spatial extent of the kernel (galaxies are sensitive
only to ionizing photons coming from a comoving volume of size $\sim\lambda_{\rm eff}^3$ along the past fluid trajectory),
so the expansion will be in $\lambda_{\rm eff}\vec{\nabla}$.
Consequently, all the higher-derivative operators become important at momenta $k\sim1/\lambda_{\rm eff}$.
This is the conclusion obtained in \cite{Schmidt:2017lqe}. However, we also see that \eq{flux_perturbations} is actually a resummation of all these higher-derivative terms:
if we can treat the sum over $n$ perturbatively, we are able to predict the scale dependence of $\delta_g|_{\rm ion}$
also for $k\gtrsim 1/\lambda_{\rm eff}$ in terms of a finite number of functions of time.
This can be seen more easily if we work in Fourier space. \eq{flux_perturbations} is a convolution in real space, so $\delta_g|_{\rm ion}(\eta,\vec{k})$ is
\begin{equation}
\label{eq:delta_perturbations-A}
\delta_g|_{\rm ion}(\eta,\vec{k}) = f_{\rm ion}(\eta,k^2)\delta_{\rm em}(\eta_\ast,\vec{k})\,\,,
\end{equation}
where $f_{\rm ion}(\eta,k^2)$ (which can depend only on $k$ because the kernel in \eq{flux_perturbations} depends only on $\abs{\vec{y}}$,
and cannot contain odd terms in $k$ because of locality) is given by
\begin{equation}
\label{eq:delta_perturbations-B}
f_{\rm ion}(\eta,k^2) = \frac{{\cal C}_{\rm ion}\widebar{\rho}_{\rm em}(\eta_\ast)}{\widebar{n}_g|_{\rm ion}(\eta)}
\sum_{n\,=\,0}^{\infty}g_{n}(\eta){\cal H}^{n}\int\mathrm{d}^3y\,e^{{-}i\vec{k}\cdot\vec{y}}\,\abs{\vec{y}}^{n\,-\,2}\,
{e^{-\frac{\abs{\vec{y}}}{{\lambda}_{\rm eff}}}}\,\,.
\end{equation}
We can simplify this expression for $f_{\rm ion}$ if we define
\begin{equation}
\label{eq:redefine_f_ion-A}
\int\mathrm{d}^3y\,e^{{-}i\vec{k}\cdot\vec{y}}\,\abs{\vec{y}}^{n\,-\,2}\,
{e^{-\frac{\abs{\vec{y}}}{{\lambda}_{\rm eff}}}}\equiv\lambda_{\rm eff}^{n\,+\,1}\,(4\pi n!)\,{\cal F}^{(n)}_{\rm ion}(k^2\lambda^2_{\rm eff})\,\,,
\end{equation}
where ${\cal F}^{(n)}_{\rm ion}$ are dimensionless functions of $k^2\lambda^2_{\rm eff}$,
and the overall factor of $4\pi n!$ has been chosen so that they are all equal to $1$ for $k=0$.
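Indeed, at $k=0$ the defining integral reduces to (a one-line check)
\begin{equation}
\int\mathrm{d}^3y\,\abs{\vec{y}}^{n\,-\,2}\,e^{-\abs{\vec{y}}/{\lambda}_{\rm eff}} = 4\pi\int_0^{+\infty}\mathrm{d} y\,y^{n}\,e^{-y/{\lambda}_{\rm eff}} = 4\pi\,n!\,\lambda_{\rm eff}^{n\,+\,1}\,\,.
\end{equation}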
Further, since $\widebar{n}_g|_{\rm ion}(\eta)$ depends only on time, for each $n$ we can reabsorb the dimensionless factor
$(4\pi n!)\times (\lambda_{\rm eff}{\cal C}_{\rm ion}\widebar{\rho}(\eta_\ast))/(\widebar{n}_g|_{\rm ion}(\eta))$
into the corresponding coefficient $g_n$. With these redefinitions, $f_{\rm ion}$ takes the form
\begin{equation}
\label{eq:redefine_f_ion-B}
f_{\rm ion}(\eta,k^2)={\sum_{n\,=\,0}^{+\infty}g_{n}(\eta)({\cal H}\lambda_{\rm eff})^n {\cal F}^{(n)}_{\rm ion}(k^2\lambda^2_{\rm eff})}\,\,,
\end{equation}
and in the following we will often suppress the time dependencies of the coefficients $g_{n}$ for simplicity of notation.
For future reference, we also write down the explicit expression of the functions ${\cal F}^{(n)}_{\rm ion}$, \ie~
\begin{equation}
\label{eq:functions_f_n}
{\cal F}^{(n)}_{\rm ion}(x^2)=\frac{\sin\!\big(n\arctan(x)\big)}{nx(1+x^2)^{n/2}}\,\,,
\end{equation}
where the $n=0$ case is understood as the $n\to0$ limit, ${\cal F}^{(0)}_{\rm ion}(x^2)=\arctan(x)/x$ (which indeed satisfies ${\cal F}^{(0)}_{\rm ion}(0)=1$). A plot of the first two functions $\smash{{\cal F}^{(n)}_{\rm ion}}$ is shown in \fig{PS_corrections}.
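These closed forms are straightforward to verify numerically. The snippet below (ours; only standard \texttt{numpy}/\texttt{scipy} calls are used) compares \eq{functions_f_n} with a direct quadrature of the defining integral \eq{redefine_f_ion-A}, reduced to a radial integral, in units where $\lambda_{\rm eff}=1$:
\begin{verbatim}
# Check of F^(n)_ion: the angular integral in Eq. (redefine_f_ion-A) gives
#   F^(n)(x^2) = (1/(x n!)) int_0^inf dy y^(n-1) sin(x y) exp(-y),
# with x = k*lambda_eff, to be compared with Eq. (functions_f_n).
import numpy as np
from math import atan, sin, factorial
from scipy.integrate import quad

def F_closed(n, x):
    if n == 0:
        return atan(x) / x  # n -> 0 limit of the general formula
    return sin(n * atan(x)) / (n * x * (1.0 + x * x) ** (n / 2.0))

def F_quad(n, x):
    integrand = lambda y: y ** (n - 1) * np.sin(x * y) * np.exp(-y)
    # start slightly above zero so the n = 0 integrand (finite limit at
    # y -> 0) never hits a 0/0 form numerically
    val, _ = quad(integrand, 1e-12, np.inf, limit=200)
    return val / (x * factorial(n))

for n in (0, 1, 2):
    for x in (0.1, 1.0, 10.0):
        print(n, x, F_closed(n, x), F_quad(n, x))  # columns agree
\end{verbatim}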
\begin{figure}
\centering
\includegraphics[width=0.85\columnwidth]{functions_of_k.pdf}
\captionsetup{singlelinecheck=off}
\caption[.]{Plot of the first two functions ${\cal F}^{(n)}_{\rm ion}$ for different values of $\lambda_{\rm eff}$. At large $k\gg 1/\lambda_{\rm eff}$
they decay as $1/k^{n\,+\,1}$, so that their different scale dependence can in principle \mbox{be distinguished.}}
\label{fig:PS_corrections}
\end{figure}
As we discussed at the end of Section \ref{sec:questions}, we
assume that $\delta_{\rm em}$ can be expressed in terms of the matter overdensity via a bias expansion
with a nonlocality scale much shorter than the m.f.p.~of ionizing radiation. Therefore everything depends on whether or not we
are able to truncate the sums in \eq{redefine_f_ion-B} at a finite order: otherwise we would need to know an infinite number of functions of $\eta$
(\ie~the $g_{n}$) to compute $f_{\rm ion}$. Fortunately, this is possible, since the m.f.p.~of ionizing radiation is much shorter than ${\cal H}^{-1}$. Therefore
we can safely truncate the sums at some finite order in ${\cal H}{\lambda}_{\rm eff}$,
and correspondingly introduce only a finite number of new bias coefficients in the bias expansion for galaxies.\footnote{We
notice that in the limit of the effective mean free path going to zero there is no response of the galaxy number density to ionizing radiation if the
Green's function for $\delta n_g|_{\rm ion}$ is zero at $\eta=\eta_\ast$. This makes sense since in this limit
only radiation emitted from events along the fluid worldline could have reached the galaxies.}
We emphasize again how \eq{redefine_f_ion-B} has nothing to do with an expansion of \eq{delta_perturbations-B} in powers of $k^2{\lambda}^2_{\rm eff}$.
The point here is that the scale dependence at each order in ${\cal H}{\lambda}_{\rm eff}$ can be computed non-perturbatively
in terms of the single parameter ${\lambda}_{\rm eff}$. By solving the full
RT equation, and by simple locality arguments on the response of galaxies to the ionizing radiation,
we have obtained a resummation of the expansion in spatial derivatives. This gives a precise correction to the shape of the power spectrum of galaxies
with an amplitude set by a finite number of new bias coefficients, \ie~the $g_{n}$ appearing in \eq{redefine_f_ion-B}.
Before proceeding to the next section, where we are going to compute the power spectrum of galaxies,
we also recall that, given that we see a suppression of radiative-transfer effects in the clustering of high-mass galaxies \cite{Schmidt:2017lqe},
the Green's function for $\delta n_{\rm g}|_{\rm ion}$ must be suppressed by some factor $p(M_h,M_{\rm J})$
with respect to the one for the average galaxy density, with $p(M_h,M_{\rm J})\ll 1$ for $M_h\gg M_{\rm J}$.
This is clear since, as we can see from \eqsII{definition_of_G_g-A-1}{delta_perturbations-B}, the effect of radiative transfer on the fractional galaxy density perturbation
is controlled by the logarithmic response of the galaxy density to the radiation, that is the ratio of the two Green's functions in
\eqsII{definition_of_G_g-A-1}{definition_of_G_g-A-2}.
\subsection{Galaxy statistics}
\label{ssec:galaxy_statistics}
\noindent We can now compute the contribution to the galaxy power spectrum from \eq{delta_perturbations-A}.
This equation relates $\delta_g|_{\rm ion}$ at $\eta$ with $\delta_{\rm em}$ at $\eta_\ast$: that is, we only need to know the
statistics of the sources at $\eta_\ast$ to compute the contribution of RT effects to $P_{gg}$ at $\eta$.
At leading order in perturbations and derivatives we can write
\begin{equation}
\label{eq:delta_em_bias_expansion-A}
\delta_{\rm em}(\eta,\vec{k}) = b_{\rm em}(\eta)\delta(\eta,\vec{k}) + \epsilon_{\rm em}(\eta,\vec{k})\,\,,
\end{equation}
where $b_{\rm em}$ is the linear LIMD bias, and the operator $\epsilon_{\rm em}$ (not to be confused with the emissivity)
captures the effect of short-scale physics on the evolution of the sources at this order in perturbations (see \eg~Section 2 of \cite{Desjacques:2016bnm} for details).
The stochastic term $\epsilon_{\rm em}$ is uncorrelated with $\delta$ and has a $k$-independent correlation function (up to terms of order $k^2R^2(M_h)$), \ie\footnote{Notice
that, similarly to what happens to dark matter halos (or any other tracer),
stochasticity is never completely uncorrelated with large-scale density fluctuations:
gravitational evolution couples long- and short-wavelength modes, so that at higher orders in perturbations terms like $\epsilon_{{\rm em},\delta}\delta$
(where $\epsilon_{{\rm em},\delta}$ is a different operator than $\epsilon_{\rm em}$) must be included. See, \eg, \cite{Desjacques:2016bnm} for a detailed discussion.}
\begin{equation}
\label{eq:delta_em_bias_expansion-B}
\braket{\epsilon_{\rm em}(\eta,\vec{k})\epsilon_{\rm em}(\eta',\vec{k}')}'=P_{\epsilon_{\rm em}}^{\{0\}}(\eta,\eta')\,\,.
\end{equation}
Without loss of generality, we can write $\delta_{\rm em}(\eta_\ast,\vec{k})$ in terms of $\delta(\eta,\vec{k})$ by
reabsorbing the linear growth factor ${D_1(\eta_\ast)}/{D_1(\eta)}$ in the bias parameter $b_{\rm em}(\eta_\ast)$.
Consequently, through $\delta_{g}|_{\rm ion}(\eta,\vec{k})$ the bias expansion for galaxies will gain the two new terms
\begin{equation}
\label{eq:terms_in_bias_expansion}
\delta_{g}|_{\rm ion}(\eta,\vec{k}) = {f_{\rm ion}(\eta,k^2)}\big[b_{\rm em}(\eta_\ast)\delta(\eta,\vec{k}) + \epsilon_{\rm em}(\eta_\ast,\vec{k})\big]\,\,.
\end{equation}
We emphasize a minor point about this expression for $\delta_{g}|_{\rm ion}$,
that will nevertheless become relevant when we consider multiple emission times in Section \ref{ssec:generalization}.
Here $\delta_{g}|_{\rm ion}$ is a sum of a deterministic and a stochastic term, each multiplied by a function of $k^2$.
Even if the overall amplitude of these two functions is different, their scale dependence is the same (\ie~it is given by the single function $f_{\rm ion}$).
Moreover, we can also see what would happen if we considered two different Green's functions for the stochastic and deterministic contributions
to $\delta{\cal I}$ in \eq{definition_of_G_g-A-2}. The calculations leading to \eq{terms_in_bias_expansion} go through in
the same way: the only difference is that instead of a single function $f_{\rm ion}$ we now have two different functions
multiplying $\delta$ and $\epsilon_{\rm em}$.
These two functions both have an expansion like that of \eq{redefine_f_ion-B}, but each of them has its own set of coefficients $g_{n}$:
this captures the difference between the Green's functions for the stochastic and deterministic contributions.
The full galaxy fractional overdensity is simply given by the sum of $\delta_{g}|_{\rm grav}$ and $\delta_{g}|_{\rm ion}$ at this order in perturbations,
as we have seen at the beginning of Section \ref{ssec:single_flash}. That is, it takes the form
\begin{equation}
\label{eq:galaxy_overdensity}
\begin{split}
\delta_g(\eta,\vec{k}) &=\delta_{g}|_{\rm grav}(\eta,\vec{k})+\delta_{g}|_{\rm ion}(\eta,\vec{k}) \\
&= b_1(\eta)\delta(\eta,\vec{k})+\epsilon_g(\eta,\vec{k})
+ {f_{\rm ion}(\eta,k^2)}\big[b_{\rm em}(\eta_\ast)\delta(\eta,\vec{k}) + \epsilon_{\rm em}(\eta_\ast,\vec{k})\big]\,\,,
\end{split}
\end{equation}
where $\delta_g|_{\rm grav}$ contains the linear LIMD and stochastic terms ($b_1\delta$ and $\epsilon_g$, respectively).
Let us then compute the equal-time galaxy power spectrum. For this purpose, it is useful to factor out
$g_{0}$ from our expression for $f_{\rm ion}$, since in any case it is degenerate with the bias coefficient $b_{\rm em}$
and the amplitude of the stochastic term $\epsilon_{\rm em}$: that is, in \eq{galaxy_overdensity} we redefine
\begin{equation}
\label{eq:redefining_f_ion_to_simplify_P_gg-A}
f_{\rm ion}(\eta,k^2)\to{g_{0}(\eta)}\,f_{\rm ion}(\eta,k^2)\,\,,
\end{equation}
so that now we have
\begin{equation}
\label{eq:redefining_f_ion_to_simplify_P_gg-B}
f_{\rm ion}(\eta,k^2) = \mathcal{F}^{(0)}_{\rm ion}(k^2{\lambda}^2_{\rm eff}) + {\cal O}({\cal H}{\lambda}_{\rm eff})\,\,,
\end{equation}
\ie~$f_{\rm ion}(\eta,k^2)$ is independent of the galaxy response to ionizing radiation at leading order in ${\cal H}\lambda_{\rm eff}$.
If we further reabsorb $g_{0}$ into $b_{\rm em}$ and $\epsilon_{\rm em}$, the equal-time galaxy power spectrum $P_{gg}(\eta,k)$ takes the form
\begin{equation}
\label{eq:galaxy_PS}
\begin{split}
P_{gg}(\eta,k) &= \big[b_1(\eta)+{b}_{\rm em}(\eta_\ast)f_{\rm ion}(\eta,k^2)\big]^2P_{\rm L}(\eta,k) + P^{\{0\}}_{\epsilon_{g}}(\eta) \\
&\;\;\;\; + 2f_{\rm ion}(\eta,k^2)P^{\{0\}}_{\epsilon_{g}\epsilon_{\rm em}}(\eta,\eta_\ast) +
f^2_{\rm ion}(\eta,k^2)P^{\{0\}}_{\epsilon_{\rm em}}(\eta_\ast)\,\,,
\end{split}
\end{equation}
where $P_{\rm L}$ is the linear matter power spectrum and, following \cite{Desjacques:2016bnm},
we have defined the cross power spectrum between the two different stochastic terms $\epsilon_g$ and $\epsilon_{\rm em}$ as
\begin{equation}
\label{eq:stochastic_cross_PS}
\braket{\epsilon_{g}(\eta,\vec{k})\epsilon_{\rm em}(\eta',\vec{k}')}'\equiv P^{\{0\}}_{\epsilon_{g}\epsilon_{\rm em}}(\eta,\eta')\,\,.
\end{equation}
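To make the parameter content of \eq{galaxy_PS} concrete, a minimal numerical sketch (ours; all parameter values and the power-law spectrum are placeholders, and in practice $P_{\rm L}$ would come from a Boltzmann code) at leading order in ${\cal H}\lambda_{\rm eff}$ reads:
\begin{verbatim}
# P_gg of Eq. (galaxy_PS) at leading order in H*lambda_eff,
# where f_ion(k) = arctan(k lambda_eff)/(k lambda_eff).
import numpy as np

def f_ion(k, lam_eff):
    x = np.asarray(k, dtype=float) * lam_eff
    return np.where(x > 0.0, np.arctan(x) / np.where(x > 0.0, x, 1.0), 1.0)

def P_gg(k, P_lin, b1, b_em, Pe_g, Pe_cross, Pe_em, lam_eff):
    f = f_ion(k, lam_eff)
    return ((b1 + b_em * f) ** 2 * P_lin  # deterministic part
            + Pe_g                        # <eps_g eps_g>
            + 2.0 * f * Pe_cross          # <eps_g eps_em>
            + f ** 2 * Pe_em)             # <eps_em eps_em>

k = np.logspace(-3, 0, 200)               # h/Mpc
P_lin = 2.0e4 * (k / 0.02) ** (-1.5)      # placeholder, NOT a realistic P_L
model = P_gg(k, P_lin, b1=2.0, b_em=0.02, Pe_g=1.0e3,
             Pe_cross=0.0, Pe_em=5.0e2, lam_eff=50.0)  # lengths in Mpc/h
\end{verbatim}
At fixed $\lambda_{\rm eff}$, the amplitudes $b_1$, $b_{\rm em}$ and the stochastic spectra enter with distinct scale dependencies, as discussed below.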
We notice that the contributions involving $\epsilon_{\rm em}$ are basically modifying the form of the higher-derivative corrections to the stochastic term $\epsilon_g$.
These are present also if we consider gravitational interactions only: while in that case they become relevant at scales of order of the halo Lagrangian radius,
here they are controlled by the m.f.p.~of ionizing radiation.
From \eq{galaxy_PS} we see that, if we stop at zeroth order in the expansion of \eq{redefining_f_ion_to_simplify_P_gg-B}
(so that the function $f_{\rm ion}$ is uniquely determined in terms of the mean free path),
at fixed $\eta$ we can predict the galaxy power spectrum up to $4$ constants,
\ie~$b_1$, ${b}_{\rm em}$, and the amplitudes of the two stochastic terms $\epsilon_g$, $\epsilon_{\rm em}$.\footnote{Of course we can consider also the
effective mean free path as an additional free parameter, with a prior between $\sim\lmpch{30}$ and $\sim\lmpch{100}$,
to account for uncertainties in the evolution of the mean density of neutral hydrogen.}
We also see that there are no degeneracies between these four parameters.
Clearly we can discriminate between $b_1,{b}_{\rm em}$ and $\smash{P^{\{0\}}_{\epsilon_g}},\smash{P^{\{0\}}_{\epsilon_{\rm em}}}$ thanks to the scale dependence of
the matter power spectrum. Then, the scale dependence of $f_{\rm ion}$ allows us to distinguish $b_1$ from ${b}_{\rm em}$,
and $\smash{P^{\{0\}}_{\epsilon_g}}$ from $\smash{P^{\{0\}}_{\epsilon_{\rm em}}}$, respectively.
If we go beyond the zeroth order in ${\cal H}\lambda_{\rm eff}$, we need more free parameters to compute $f_{\rm ion}$.
Thanks to the different scale dependence of the functions $\smash{{\cal F}^{(n)}_{\rm ion}}$, however, it is in principle possible to discriminate between them.
\begin{figure}
\centering
\includegraphics[width=0.85\columnwidth]{functions_of_k-with_nu.pdf}
\captionsetup{singlelinecheck=off}
\caption[.]{Plot of $\smash{{\cal F}^{(0)}_{\rm ion}}$ for different values of $\lambda_{\rm eff}$ (blue curves).
It has been multiplied by an RT bias coefficient (more precisely, by the combination $2b_{\rm em}(\eta_\ast)/b_1(\eta)$ of \eq{relative_difference})
with an arbitrary value of $\num{0.04}$ (note that, depending on the parent halo mass and redshift, the RT bias coefficients could be small)
and its limit for $k=0$ has been subtracted (since it can always be reabsorbed in the linear LIMD bias).
The green dot-dashed curve shows the relative difference between the square of the transfer functions for the total matter overdensity (at $z=1$)
with $\sum m_\nu = \SI{0.1}{\rm eV}$ and $m_\nu=0$.}
\label{fig:neutrino_scale_dependence}
\end{figure}
We conclude this section with a brief discussion about the possible degeneracy between these RT effects and massive neutrinos.
If we neglect the fact that massive neutrinos in general lead to a scale-dependent bias
(see \cite{Ichiki:2011ue,Castorina:2013wga,LoVerde:2014pxa,Villaescusa-Navarro:2017mfx,Chiang:2017vuk,Chiang:2018laa}), the effect
on the galaxy power spectrum is captured by the modification to the transfer function $T(k)$ for the total matter overdensity. More precisely, the relative correction to the
deterministic part of the galaxy power spectrum is given by
\be
\frac{\Delta P_{gg}(\eta,k)}{P_{gg}(\eta,k)} = \frac{T^2(\eta,k, m_\nu\neq 0)}{T^2(\eta,k,m_\nu=0)}-1\,\,.
\ee
This is the green dot-dashed curve in \fig{neutrino_scale_dependence}.
The corresponding correction due to RT effects, at leading order in $b_{\rm em}$, is given by (see \eq{galaxy_PS})
\begin{equation}
\label{eq:relative_difference}
\frac{\Delta P_{gg}(\eta,k)}{P_{gg}(\eta,k)} = \frac{2b_{\rm em}(\eta_\ast)}{b_1(\eta)}f_{\rm ion}(\eta,k^2)\,\,,
\end{equation}
where $f_{\rm ion}$ is given by \eq{redefining_f_ion_to_simplify_P_gg-B} and we recall that $b_{\rm em}$ must indeed be regarded as a small number
since we reabsorbed the leading RT bias coefficient $g_{0}$ into it. Can this mimic the effect due to massive neutrinos?
First, we see that the amplitude of this scale-dependent correction at $k=0$ is always degenerate with the linear LIMD bias.
In other words, $b_1$ is fixed by the amplitude of $P_{gg}$ on large scales (obviously neglecting degeneracies with, \eg, $\sigma_8$, since
they are not relevant for the sake of this discussion). This corresponds to redefining $b_1$ in \eq{galaxy_PS} as
$b_1(\eta)\to b_1(\eta)-b_{\rm em}(\eta_\ast)f_{\rm ion}(\eta,0)$.
Correspondingly \eq{relative_difference} sees simply $f_{\rm ion}(\eta,k^2)$ replaced by $f_{\rm ion}(\eta,k^2)-f_{\rm ion}(\eta,0)$.
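At low momenta the subtracted function starts at order $k^2$: at leading order in ${\cal H}\lambda_{\rm eff}$ (a short expansion of ours, using ${\cal F}^{(0)}_{\rm ion}(x^2)=\arctan(x)/x$),
\begin{equation}
f_{\rm ion}(\eta,k^2)-f_{\rm ion}(\eta,0) = \frac{\arctan(k\lambda_{\rm eff})}{k\lambda_{\rm eff}}-1
= {-\frac{k^2\lambda_{\rm eff}^2}{3}}+\frac{k^4\lambda_{\rm eff}^4}{5}-\dots\,\,,
\end{equation}
so on very large scales the RT correction looks like an ordinary higher-derivative bias of size $\abs{b_{\nabla^2\delta}}\sim\abs{b_{\rm em}}\lambda^2_{\rm eff}/3$, consistent with the scaling quoted in Section \ref{sec:questions}; the difference from a neutrino-induced suppression only shows up once $k\sim1/\lambda_{\rm eff}$, where the full shape of ${\cal F}^{(0)}_{\rm ion}$ matters.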
This is plotted in \fig{neutrino_scale_dependence} at leading order in ${\cal H}\lambda_{\rm eff}$ (blue curves).
From the plot it is clear that these scale-dependent corrections are very similar in shape to those
from massive neutrinos, \ie~RT effects could give rise to a bias in the constraints on $\sum m_\nu$ if not accounted for.\footnote{In
principle we could think of getting around this degeneracy by isolating $f_{\rm ion}$ from the corrections to the stochasticity (see \eq{galaxy_PS}).
However, we expect that neutrinos affect also the scale dependence of the stochastic term on scales $k\sim k_{\rm fs}$.
Besides, as we are going to see in Section \ref{sec:full_RT}, when we consider the generic case of emission of radiation over Hubble time scales
and add the inhomogeneities in the optical depth, the function of $k^2$ multiplying the stochastic term will not be the same as the one \mbox{multiplying $\delta$.}}
\subsection{Locally-biased tracers}
\label{ssec:q_tracers}
\noindent We now move to the $q$ tracers. These tracers are assumed to be locally sensitive to the ionizing radiation emitted by the sources.
We capture their memory of the emitted radiation field by allowing their number density to depend on the integral over $\log E$ and $\vers{n}$
of the emission coefficient $(\rho_{\rm em}\varepsilon_{\rm em})/(4\pi)$, which is the total amount of ionizing photons emitted per unit time and unit volume. However,
this is not enough. While we expect this dependence to be local in space, we must in principle integrate the emission coefficient also along the past fluid worldline,
with a Green's function $G_{q}$ that describes how $n_{q}$ responds to the emissivity of the sources. As in the case of galaxies, we allow for two different Green's
functions at zeroth and first order in perturbations. In principle these Green's functions can also depend on the photon energy
(while they cannot depend on the photon direction $\vers{n}$ because of rotational invariance,
at the order in perturbations we are working at). Therefore, we write $\delta {n}_q|_{\rm ion}$ as
\begin{equation}
\label{eq:q_tracers-A-2}
\begin{split}
\delta n_{q}|_{\rm ion}(\eta,\vec{x}) &= \int_0^\eta\mathrm{d}\eta'\int_0^{+\infty}\frac{\mathrm{d} E}{E}\int\mathrm{d}\vers{n}\,G^{(1)}_{q}(\eta,\eta',E)\,
\frac{(\delta\rho_{\rm em}\varepsilon_{\rm em})(\eta',\vec{x},E)}{4\pi} \\
&= \int_0^\eta\mathrm{d}\eta'\,\delta{\rho}_{\rm em}(\eta',\vec{x})\int_0^{+\infty}\frac{\mathrm{d} E}{E}\,G^{(1)}_{q}(\eta,\eta',E)\,
\varepsilon_{\rm em}(\eta',E)\,\,,
\end{split}
\end{equation}
where we have used our assumption of an isotropic emissivity. A similar relation holds for $\widebar{n}_q|_{\rm ion}$.
In \eq{q_tracers-A-2} we have also used the fact that, at the order we are working at, everything is comoving. In principle
we should have written our integral over the emission coefficient as, schematically,
$n_{q}\supset{-\int\rho_{\rm em}\varepsilon_{\rm em}U^\mu_{\rm em} U_\mu}$, where ${-\rho_{\rm em}}U^\mu_{\rm em}U_\mu$ is the energy
density of the emitters as seen from the observer comoving with the fluid.
Calling $\vec{v}_{\rm rel}$ the relative velocity between the $q$ tracers and the sources
we have ${-U^\mu_{\rm em} U_\mu}\sim 1+\abs{\vec{v}_{\rm rel}}^2/{2}$, so that these effects are very suppressed in the nonrelativistic limit.
Moreover, in presence of gravitational interactions only, $\vec{v}_{\rm rel}$ is only sourced starting from first order in derivatives, due to the equivalence principle.
From \eq{q_tracers-A-2} we see that the integral in $\mathrm{d} E$ can be redefined as a new Green's function that depends on $(\eta,\eta')$ only.
In the case of instantaneous emission, then, the fractional overdensity $\delta_{q}|_{\rm ion}(\eta,\vec{x})$ is simply proportional
to $\delta_{\rm em}(\eta_\ast,\vec{x})$ through a $\eta$-dependent function
(even if this assumption is relaxed, using the bias expansion for $\delta_{\rm em}$ of \eq{delta_em_bias_expansion-A}
we can still simply absorb the integral over $\eta'$ in the time dependence of the bias coefficients and stochastic terms, similarly
to what happened when we moved from \eq{locality-B} to \eq{locality-C}). Therefore, the final expression for $\delta_{q}$ in Fourier space is simply
\begin{equation}
\label{eq:q_tracers-B}
\delta_{q}(\eta,\vec{k}) = b_{q}(\eta)\delta(\eta,\vec{k}) + \epsilon_{q}(\eta,\vec{k}) + b_{\epsilon_{\rm em}}(\eta_\ast)\epsilon_{\rm em}(\eta_\ast,\vec{k})\,\,,
\end{equation}
where we kept the dependence on both stochastic terms $\epsilon_{q}$ and $\epsilon_{\rm em}$ (since they are different fields),
while $b_{\rm em}\delta\subset\delta_{\rm em}$ was absorbed in the LIMD bias.
We see that we cannot reabsorb the factor $b_{\epsilon_{\rm em}}$ in the amplitude of the stochastic term $\epsilon_{\rm em}$
without altering our expression for the galaxy power spectrum of \eq{galaxy_PS}.
Before proceeding we notice that, as we discussed below \eq{definition_of_G_g-A-2},
also in this case we could allow for two different Green's functions in \eq{q_tracers-A-2}, one for
the response to the deterministic part of the emission coefficient and one for its stochastic part.
It is straightforward to see, however, that \eq{q_tracers-B} would not change.
\subsection{\texorpdfstring{$gq$}{gq} cross-correlation}
\label{ssec:power_spectra}
\noindent It is straightforward to compute $P_{gq}$ and $P_{qq}$ using \eqsIII{galaxy_overdensity}{redefining_f_ion_to_simplify_P_gg-A}{q_tracers-B}:
we find that $P_{gq}$ is equal to
\begin{equation}
\label{eq:P_gq}
\begin{split}
P_{gq}(\eta,k) &= b_{q}(\eta)\big[b_1(\eta)+{b}_{\rm em}(\eta_\ast)f_{\rm ion}(\eta,k^2)\big]P_{\rm L}(\eta,k) + P^{\{0\}}_{\epsilon_{g}\epsilon_{q}}(\eta)
+ b_{\epsilon_{\rm em}}(\eta_\ast)P^{\{0\}}_{\epsilon_{g}\epsilon_{\rm em}}(\eta,\eta_\ast) \\
&\;\;\;\; + f_{\rm ion}(\eta,k^2)P^{\{0\}}_{\epsilon_{q}\epsilon_{\rm em}}(\eta,\eta_\ast)
+ b_{\epsilon_{\rm em}}(\eta_\ast)f_{\rm ion}(\eta,k^2)P^{\{0\}}_{\epsilon_{\rm em}}(\eta_\ast)\,\,,
\end{split}
\end{equation}
while $P_{qq}$ is simply
\begin{equation}
\label{eq:P_qq}
P_{qq}(\eta,k) = b^2_{q}(\eta)P_{\rm L}(\eta,k) + P^{\{0\}}_{\epsilon_{q}}(\eta)
+ 2 b_{\epsilon_{\rm em}}(\eta_\ast)P^{\{0\}}_{\epsilon_{q}\epsilon_{\rm em}}(\eta,\eta_\ast)
+ b^2_{\epsilon_{\rm em}}(\eta_\ast)P^{\{0\}}_{\epsilon_{\rm em}}(\eta_\ast)\,\,.
\end{equation}
We notice that even if we stop at zeroth order in ${\cal H}\lambda_{\rm eff}$ we cannot constrain the parameter $b_{\epsilon_{\rm em}}$:
a change in $b_{\epsilon_{\rm em}}$ can always be compensated by a change in the amplitude of $\epsilon_q$
in both \eq{P_gq} and \eq{P_qq}. This can be easily seen also at the level of the fields from \eq{q_tracers-B}.
The importance of observations of the $q$ tracers lies in the (well-known: see \cite{Seljak:2008xr}) fact that,
if we combine $P_{gg}$, $P_{gq}$ and $P_{qq}$, it is possible to beat down cosmic variance
in the constraints on the different free bias parameters that $f_{\rm ion}$ depends on. In
this case we expect this multi-tracer technique to be useful also because
we are able to predict exactly the scale dependence of the different functions of $k$ that make up $f_{\rm ion}$, \ie~\eq{functions_f_n}:
we are not expanding it in a power series in $k^2\lambda^2_{\rm eff}$, like we do with higher-derivative terms
coming from gravitational dynamics (which are expanded in powers of $k^2 R^2(M_h)$).
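To make the cosmic-variance argument concrete, the following toy sketch (a minimal illustration of the general
multi-tracer idea of \cite{Seljak:2008xr}, written in Python with made-up parameter values, not a realistic estimator)
compares the scatter of an effective bias inferred from a single auto-spectrum with that of a bias ratio inferred
mode by mode from two tracers of the same realization:
\begin{verbatim}
# Toy illustration of the multi-tracer idea (made-up numbers, one k-bin):
# two tracers of the same density modes allow a per-mode measurement of
# their bias ratio, which is not limited by cosmic variance.
import numpy as np

rng = np.random.default_rng(0)
n_modes, trials = 200, 2000
P_L = 1.0                       # linear power in this bin (arbitrary units)
b_g, b_q, eps = 1.8, 1.2, 0.05  # effective biases, stochastic amplitude

err_auto, err_ratio = [], []
for _ in range(trials):
    delta = rng.normal(0.0, np.sqrt(P_L), n_modes)     # shared modes
    d_g = b_g * delta + rng.normal(0.0, eps, n_modes)  # tracer g
    d_q = b_q * delta + rng.normal(0.0, eps, n_modes)  # tracer q
    # single tracer: b_g from the auto-spectrum, cosmic-variance limited
    err_auto.append(np.sqrt(np.mean(d_g**2) / P_L) - b_g)
    # two tracers: bias ratio from the per-mode field ratio
    err_ratio.append(np.median(d_g / d_q) - b_g / b_q)

print("scatter of b_g from P_gg:     ", np.std(err_auto))
print("scatter of b_g/b_q from ratio:", np.std(err_ratio))
\end{verbatim}
The first scatter is set by the number of modes ($\sim b_g/\sqrt{2\,n_{\rm modes}}$), while the second is set
only by the stochastic amplitude; this is the sense in which combining tracers can constrain the parameters
entering $f_{\rm ion}$ beyond the naive mode-counting limit.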
Still, it is also important to keep in mind that the situation is very different from that of constraints on local primordial non-Gaussianity,
in the context of which the multi-tracer technique was originally proposed.
Indeed, there the equivalent of $f_{\rm ion}$ is a function that scales as $k^{-2}$ at small $k$, and is therefore
completely orthogonal to the higher-derivative corrections from gravitational evolution. A
precise analysis of the usefulness of the multi-tracer technique is clearly beyond the scope of this paper,
so we will not investigate this topic further here.
\subsection{More physics that can be captured in the bias expansion}
\label{ssec:caveats}
\noindent We can now check all the possible effects that we have neglected in the above discussion
and see if and how they can affect the scale dependence of $f_{\rm ion}(\eta,k^2)$.
\subsubsection*{Integration over the past light cone}
\noindent The first and simplest one is the restriction of the spatial integral to the past light cone, $\abs{\vec{y}}\leq\eta-\eta_\ast$, in \eq{J_ion_approximated}.
Including this restriction would modify the integral in \eq{redefine_f_ion-A} for every $n$.
More precisely, it is straightforward to check that, at a given $n$, that Fourier transform is now proportional to a
dimensionless function of $k(\eta-\eta_\ast)$, $k{\lambda}_{\rm eff}$ and $(\eta-\eta_\ast)/{\lambda}_{\rm eff}$,
with a proportionality factor that still scales as ${\lambda}_{\rm eff}^{n\,+\,1}$. Therefore,
the same reasoning applies: these functions can be computed
without the need of any perturbative expansion in these three dimensionless variables,
and increasing orders in $n$ are still suppressed by $({\cal H}{\lambda}_{\rm eff})^{n}$.
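As a check, these functions are straightforward to evaluate numerically. The sketch below (in Python, assuming for
illustration a kernel of the form $\abs{\vec{y}}^{n\,-\,2}e^{-\abs{\vec{y}}/{\lambda}_{\rm eff}}$,
cf.~\eq{sssec_generalization_source-A}) computes the normalized transform with and without the light-cone cutoff;
for $n=0$ and no cutoff it reproduces the closed form $\arctan(k{\lambda}_{\rm eff})/(k{\lambda}_{\rm eff})$:
\begin{verbatim}
# Numerical sketch of the dimensionless kernel functions with and without
# the past-light-cone cutoff |y| <= eta - eta_*.  The kernel
# |y|^(n-2) exp(-|y|/lambda_eff) is assumed for illustration; each
# function is normalized to 1 at k -> 0.
import numpy as np
from scipy.integrate import quad

def f_n(k, n, lam, y_max):
    """Angle-averaged 3D Fourier transform of y^(n-2) e^(-y/lam) on
    y <= y_max, normalized to its k -> 0 limit."""
    # the angular integral gives sin(ky)/(ky); np.sinc(x) = sin(pi x)/(pi x)
    num, _ = quad(lambda y: y**n * np.exp(-y/lam) * np.sinc(k*y/np.pi),
                  0.0, y_max, limit=400)
    den, _ = quad(lambda y: y**n * np.exp(-y/lam), 0.0, y_max, limit=400)
    return num / den

lam, cut = 1.0, 3.0   # lambda_eff and eta - eta_* in units of lambda_eff
for k in (0.1, 1.0, 10.0):
    no_cut = f_n(k, 0, lam, 60.0*lam)  # y_max >> lam: effectively no cutoff
    lc_cut = f_n(k, 0, lam, cut)       # light-cone restricted
    exact = np.arctan(k*lam) / (k*lam)
    print(f"k={k:5.1f}  no cut: {no_cut:.4f}  cut: {lc_cut:.4f}  "
          f"arctan(k*lam)/(k*lam): {exact:.4f}")
\end{verbatim}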
\subsubsection*{General expression for the optical depth}
\noindent Let us go back to \eqsII{hat_tau_approx}{integral_over_energy}: first, $\hat{\tau}(\eta,\abs{\vec{y}},E)$ takes the form
\begin{equation}
\label{eq:time_dependence-A}
\begin{split}
\hat{\tau}(\eta,\abs{\vec{y}},E) &= \abs{\vec{y}}\int_{0}^{1}\mathrm{d} u\,(n_{\text{HI}} a)(\eta_\ast+u\abs{\vec{y}})\,
\sigma_{\rm bf}\big(E(\eta_\ast+u\abs{\vec{y}},\eta_\ast+\abs{\vec{y}})\big) \\
&= \underbrace{\abs{\vec{y}}\sigma_\infty\int_0^1\mathrm{d} u\,(n_{\text{HI}} a)(\eta_\ast+u\abs{\vec{y}})
\bigg(\frac{1+z(\eta_\ast+\abs{\vec{y}})}{1+z(\eta_\ast+u\abs{\vec{y}})}\bigg)^{\frac{7}{2}}}_{
\hphantom{\hat{\tau}_{\rm eff}(\eta,\,\abs{\vec{y}})\,}\equiv\,\hat{\tau}_{\rm eff}(\eta,\,\abs{\vec{y}})}\bigg(\frac{E_\infty}{E}\bigg)^{\frac{7}{2}}\,\,.
\end{split}
\end{equation}
With this definition, we can look in more detail to the integral over energy in the expression for $\delta n_g|_{\rm ion}$.
As we discussed above \eq{integral_over_energy}, this integral can be carried out analytically (it can be written in terms of the incomplete Gamma function):
while we could have used this function directly in Section \ref{ssec:single_flash}, we have approximated its behavior as that of an exponential.
Now, using \eq{time_dependence-A} instead of \eq{hat_tau_approx}, we can write \eq{integral_over_energy} as
\begin{equation}
\label{eq:time_dependence-B}
\int_{E_\infty}^{+\infty}\mathrm{d} E\,\bigg(\frac{E}{E_\infty}\bigg)^{s\,-\,\frac{7}{2}}\,{e^{-\hat{\tau}(\eta,\,\abs{\vec{y}},\,E)}} \approx
\frac{2E_{\infty}}{5-2s}\,e^{{-\frac{2s\,-\,5}{2(s\,-\,6)}}\hat{\tau}_{\rm eff}(\eta,\,\abs{\vec{y}})}\,\,,
\end{equation}
which again is correct for small $\hat{\tau}_{\rm eff}$ up to ${\cal O}(\hat{\tau}^2_{\rm eff})$ (see discussion below and Appendix \ref{app:appendix_energy_integral}).
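One can verify the stated accuracy directly: substituting $x=E/E_\infty$ (so that
$\hat{\tau}=\hat{\tau}_{\rm eff}\,x^{-7/2}$) and expanding in $\hat{\tau}_{\rm eff}$, the left-hand side becomes
\begin{equation}
E_\infty\int_1^{+\infty}\mathrm{d} x\,x^{s\,-\,\frac{7}{2}}\,e^{-\hat{\tau}_{\rm eff}\,x^{-7/2}}
= \frac{2E_\infty}{5-2s} - \frac{E_\infty}{6-s}\,\hat{\tau}_{\rm eff} + {\cal O}\big(\hat{\tau}^2_{\rm eff}\big)\,\,,
\end{equation}
assuming $s<5/2$ for convergence, which matches the expansion of the right-hand side of \eq{time_dependence-B},
since ${-\frac{2}{5-2s}\times\frac{2s\,-\,5}{2(s\,-\,6)}}=-\frac{1}{6-s}$.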
Let us then first see how we can treat the corrections that come from $\hat{\tau}_{\rm eff}(\eta,\,\abs{\vec{y}})$ not being simply proportional to $\abs{\vec{y}}$.
These come from the dependence of the photon energy and the average hydrogen density on redshift, \ie~from the time dependence of the mean free path.
\subsubsection*{Time dependence of the mean free path}
\noindent It is relatively easy to deal with the time dependence of $n_{\text{HI}} a$ and $E$
(which is reflected by the fact that $\hat{\tau}_{\rm eff}(\eta,\abs{\vec{y}})$ is an integral over the variable $u$).
We can write the exponential term in \eq{time_dependence-B} as
\begin{equation}
\label{eq:time_dependence-C}
\begin{split}
e^{{-\frac{2s\,-\,5}{2(s\,-\,6)}}\hat{\tau}_{\rm eff}(\eta,\,\abs{\vec{y}})} = e^{-\frac{\abs{\vec{y}}}{{\lambda}_{\rm eff}}}
\bigg[1&+\frac{\abs{\vec{y}}^2}{{\lambda}_{\rm eff}}\,\mathcal{O}\bigg(\frac{\partial\log{\lambda}_{\rm eff}(\eta_\ast)}{\partial\eta_\ast}\bigg)
+\frac{\abs{\vec{y}}^2}{{\lambda}_{\rm eff}}\,\mathcal{O}\bigg(\frac{\partial\log(1+z_\ast)}{\partial\eta_\ast}\bigg)\bigg]\,\,.
\end{split}
\end{equation}
From this we see how these corrections give rise to the same type of terms that we already discussed
in Section \ref{ssec:single_flash} (see \eg~\eq{delta_perturbations-B}). Those involving the derivative of $1+z_\ast$ scale as
${\cal H}(\eta_\ast){\lambda}_{\rm eff} \sim {\cal H}{\lambda}_{\rm eff}$.
Therefore, it is clear that at any fixed order in the expansion of $f_{\rm ion}(\eta,k^2)$
in ${\cal H}{\lambda}_{\rm eff}$ we only need a finite number of these terms:
for example, the first of them modifies the scale dependence of the second term on the right-hand side of
\eq{redefine_f_ion-B}. However, it is important to emphasize that these new terms do not involve any new free functions of time,
so they affect $f_{\rm ion}(\eta,k^2)$ in a way which is completely under control.
The same reasoning can, in principle, be applied to those involving the derivative of ${\lambda}_{\rm eff}$ at $\eta_\ast$.
However, now we have
\begin{equation}
\label{eq:time_dependence-D}
\begin{split}
\text{new terms from \eq{time_dependence-C}}
&\sim\bigg({\cal H}(\eta_\ast){\lambda}_{\rm eff}\frac{\partial\log{\lambda}_{\rm eff}(\eta_\ast)}{\partial\log\eta_\ast}\bigg)^n
\sim\bigg({\cal H}{\lambda}_{\rm eff}\frac{\partial\log{\lambda}_{\rm eff}(\eta_\ast)}{\partial\log\eta_\ast}\bigg)^n\,\,.
\end{split}
\end{equation}
It is then clear that we can safely reabsorb these terms in the expansion of \eq{redefine_f_ion-B}
only if ${\partial\log{\lambda}_{\rm eff}(\eta_\ast)}/{\partial\log\eta_\ast}$ is not much larger than $1$.
This is not guaranteed to be true, since the mean free path can change rapidly during reionization. For example, if we assume
that the neutral fraction evolves in a step-like fashion with a width $\delta\eta$, for $\eta_\ast$ close to the time at which reionization is halfway through we have that
${\partial\log{\lambda}_{\rm eff}(\eta_\ast)}/{\partial\log\eta_\ast}\sim1/\big(\delta\eta\,{\cal H}(\eta_\ast)\big)$,
which could be large if the time for the transition is much shorter than a Hubble time.
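This estimate follows from a one-line argument: if the neutral fraction (and hence
${\lambda}_{\rm eff}\propto1/\widebar{n}_{\text{HI}}$) changes by ${\cal O}(1)$ over the interval $\delta\eta$, then
\begin{equation}
\frac{\partial\log{\lambda}_{\rm eff}(\eta_\ast)}{\partial\log\eta_\ast}
= \eta_\ast\,\frac{\partial\log{\lambda}_{\rm eff}(\eta_\ast)}{\partial\eta_\ast}
\sim \frac{\eta_\ast}{\delta\eta} \sim \frac{1}{\delta\eta\,{\cal H}(\eta_\ast)}\,\,,
\end{equation}
where we used ${\cal H}(\eta_\ast)\sim1/\eta_\ast$ during matter domination.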
For example, while Str{\" o}mgren spheres of ionized gas are forming,
the m.f.p.~is fixed by the mean size of the bubbles, which can be very short until the bubbles
coalesce. After this happens, and the universe reionizes, it is controlled by the mean density of the residual neutral hydrogen.
In this scenario the size of the time derivatives of the m.f.p.~is
controlled by how fast the coalescence happens with respect to a Hubble time.
Nevertheless we emphasize that, as before, the new terms of \eq{time_dependence-D} do not add new free functions
of time to the expansion of $f_{\rm ion}(\eta,k^2)$. Even if the time evolution of the mean density of neutral hydrogen is still uncertain, this is
different from our uncertainty on the microphysics of galaxy formation that is encoded by the Green's functions
(\ie~by the bias coefficients $g_{n}$). Moreover, we expect that modeling the time dependence of the average hydrogen density $\widebar{n}_{\rm HI}(\eta)$
is much less difficult than that of, say, the fluctuations $\delta{n}_{\rm HI}(\eta,\vec{x})$ on the past light cone of the galaxies.
In other words, we can capture these corrections non-perturbatively in ${\cal H}{\lambda}_{\rm eff}$ once we assume a time evolution of $\widebar{n}_{\text{HI}}$.
\subsubsection*{Corrections from the integral over energy}
\noindent We then have to consider our approximation of the integral over energy in \eq{time_dependence-B}.
We can account for the corrections to the right-hand side of that equation by using the full expression of the integral in terms of the incomplete Gamma function.
This is shown in Appendix \ref{app:appendix_energy_integral}.
The integral over $E$ shows the same qualitative behavior at large $\abs{\vec{y}}$, \ie~it goes to zero for $\abs{\vec{y}}\gg{\lambda}_{\rm eff}$.
Therefore, nothing stops us from doing the same expansion of the Green's function in powers of $\abs{\vec{y}}$ that led to our expression for
$f_{\rm ion}(\eta,k^2)$ as a power series in ${\cal H}{\lambda}_{\rm eff}$. The only difference will now be in the shape of the dimensionless
functions of $k^2\lambda_{\rm eff}^2$ of \eqsII{redefine_f_ion-A}{functions_f_n}. We show these functions in \fig{functions_of_k_energy_integral}
of Appendix \ref{app:appendix_energy_integral}. It is clear that their scale dependence is similar to those of \eq{functions_f_n}:
they both go to $1$ at small $k$ and vanish at large $k$, with the turnaround being at $k\sim1/\lambda_{\rm eff}$.
\subsubsection*{Energy dependence of $\sigma_{\rm bf}$, $\varepsilon_{\rm em}$ and the Green's functions, and multiple absorbers}
\noindent The observation of the previous subsection is also what allows us to
include the corrections to the bound-free cross section that we neglected in \eq{photoionization_cross_section_leading_scaling}.
They do not change the behavior of the integral over energy. The qualitative dependence of the cross section on energy is still the same,
\ie~it goes to zero for $E\gg E_\infty$, scaling asymptotically as $(E_\infty/E)^{7/2}$.
Therefore, if we neglect the time dependencies (that can be treated perturbatively anyway, as discussed above),
the final result is still given by $E_\infty$ times a function of $\abs{\vec{y}}\sigma_\infty(n_{\text{HI}} a)(\eta_\ast)$,
where this function still goes to zero at very large $\abs{\vec{y}}$. The same conclusions then apply.
We stress once again that in order to compute all the effects that we discussed here we require basically only the knowledge of the absorption cross section
(that can be straightforwardly computed from first principles once we know what the absorbers are).
Our uncertainty on the small-scale physics is only contained in the free functions $g_{n}$,
exactly in the same way as the gravity-only bias expansion.
Before proceeding to the next section, we briefly investigate what happens if we consider
different species of absorbers, and different assumptions for the energy dependence of the emissivity or of the galaxy response:
\begin{itemize}
\item including different absorbers with density $n_{\rm ab}$, each with their own
absorption cross section $\sigma_{\rm ab}$, clearly affects the m.f.p.,
since now the optical depth $\tau$ is given by a sum over $\sigma_{\rm ab}n_{\rm ab}$ of terms like the one in \eq{optical_depth}.
Therefore, as long as we know the energy dependence of the different $\sigma_{\rm ab}$,
we can deal with this in the same way as we did when we considered only absorption from neutral hydrogen;
\item things are a bit more complicated if we modify the energy dependence of the emissivity $\varepsilon_{\rm em}$ or the Green's functions in \eq{definition_of_G_g-B}.
Let us assume, for example, that galaxy formation is sensitive to radiation at frequencies lower than $E_{\infty}$.
Then, forgetting for a moment about redshift,\footnote{Depending on the difference in redshift between emission and absorption
it is possible that some photons reach the galaxy with energies lower than the ionization energy, even if they were emitted with energies higher than $E_{\infty}$.
The optical depth for these photons is then still controlled by the bound-free cross section for part of their path to the galaxy.}
the optical depth for photons of such frequencies is controlled by the bound-bound cross section (\ie, mainly by the Rayleigh cross section).
In this case, their effect on the bias expansion can still be treated in a similar way as we did above.
More precisely, we can split the integral over $E$ in \eqsII{definition_of_G_g-A-1}{definition_of_G_g-A-2}
in such a way that in each interval only one absorption line peaks. Then, for each of these sub-integrals it is possible to
repeat the procedure above, ending up with a sum of different functions $f_{\rm ion}$:
each of these functions will have its own specific m.f.p., which is controlled by the cross section at the corresponding absorption line;
\item more problematic is instead the case where galaxy formation is sensitive to photons to which the universe is basically transparent, like for example X-rays,
and the emissivity is significant at those frequencies (which is expected for active galactic nuclei and microquasars).
In this case the effective m.f.p.~is of order of the Hubble radius, and we expect that any
derivative expansion breaks down. A perturbative bias expansion would clearly be insufficient to describe
this situation, so we will not investigate it further in this paper.
\end{itemize}
\section{Full radiative transfer}
\label{sec:full_RT}
\noindent In this Section we see what happens if we drop basically all the assumptions on radiative transfer that we made in Section \ref{sec:radiative_bias}. We
start by considering a long epoch of emission of radiation on time scales of order a Hubble time and an inhomogeneous optical depth (Section \ref{ssec:generalization}).
We show that now we need to add three functions of $k^2$ in the bias expansion (instead of one)
in addition to the linear LIMD bias + stochastic term relation coming from gravitational interactions.
The jump from one to two functions is due to the fact that now we need to take into account the dependence of the m.f.p.~and of the coefficients $g_{n}$ on $\eta_\ast$
(the latter comes just from the fact that we are expanding the Green's function of galaxies around the emission time).
Combined with the fact that the source overdensity is not evolving via a single growth factor because of the presence of the stochastic term $\epsilon_{\rm em}$,
which evolves independently from the LIMD term $b_{\rm em}\delta$,
we end up with the relation $\delta_g|_{\rm ion} = f_{\rm em}(k^2)\delta + f_{\epsilon_{\rm em}}(k^2)\epsilon_{\rm em}$.
In the same way, adding inhomogeneities in the density of absorbers further increases the number of functions from two to three.
This is essentially due to the fact that they play the role of sinks of radiation, \ie~of sources with negative emissivity.
The most important point, however, is not the increase in the number of functions, but the fact
that now it is impossible to predict the shape of these higher-derivative corrections to the bias expansion
at all orders in ${\cal H}\lambda_{\rm eff}$, unlike what we found in Section \ref{ssec:single_flash}.
It is enough to consider the case of homogeneous optical depth to see this.
Indeed, now we need new response parameters for every significant fraction of a Hubble time during which the emissivity is not vanishing.
Finally, in Section \ref{ssec:scatterings} we briefly study the impact of scattering on our analysis.
\subsection{Adding multiple flashes and inhomogeneities in the optical depth}
\label{ssec:generalization}
\noindent It is not difficult to see what happens if we move from the case of a very fast flash of radiation around $\eta = \eta_\ast$
to that of radiation emitted during an interval that can be of order of a Hubble time.
In the case of a single emission, we have seen that at any given order $n$ in the ratio between the m.f.p.~and the Hubble radius
the scale dependence of $f_{\rm ion}$ can be predicted at the price of $n$
free functions of time. Now, even if we fix $n$, as we increase the number of emission times we need more and more free coefficients
$g_{n}$. In the limit of continuous emission that extends on cosmological time scales, we would then need an infinite number of bias parameters.
In a sense (to be made more precise below), we are effectively integrating over $\eta_\ast$.
The function $f_{\rm ion}$ depends on $\eta_\ast$ both through the time dependence of the m.f.p.~and, most importantly,
through the coefficients $g_{n}$: since we do not know how they depend on time, we cannot carry out this integral.
Let us discuss this in more detail. In the interest of clarity, we focus separately on inhomogeneities in the density of emitters and inhomogeneities in the optical depth.
\subsubsection*{Inhomogeneous emitting medium}
\noindent We first consider two short bursts of radiation at $\eta_{\ast,1}$ and $\eta_{\ast,2}$,
with $\eta_{\ast,1}<\eta_{\ast,2}$ and $\eta_{\ast,2}-\eta_{\ast,1}\gtrsim{\cal H}^{-1}$.
Following the same steps that led to \eq{J_ion_more_approximated-bis}, we now have
(see also Appendix \ref{app:solution_single_flash} for details)
\begin{equation}
\label{eq:sssec_generalization_source-A}
\begin{split}
\delta n_g|_{\rm ion}(\eta,\vec{x}) &= {\cal C}_{\rm ion}(\eta_{\ast,1})\widebar{\rho}_{\rm em}(\eta_{\ast,1})
\sum_{n\,=\,0}^{+\infty}g_{n}(\eta,\eta_{\ast,1}){\cal H}^{n}
\int\mathrm{d}^3y\,\abs{\vec{y}}^{n\,-\,2}\,\delta_{\rm em}(\eta_{\ast,1},\vec{x}+\vec{y})\,{e^{-\frac{\abs{\vec{y}}}{{\lambda}_{\rm eff}(\eta_{\ast,1})}}} \\
&\;\;\;\; + (\eta_{\ast,1}\to\eta_{\ast,2})\,\,,
\end{split}
\end{equation}
where we made explicit the dependence of ${\cal C}_{\rm ion}$ and $g_{n}$ on the emission time (we
always intend $\cal H$ as evaluated at $\eta$, as in Section \ref{ssec:single_flash}).
From the above equation we see that, in Fourier space, $\delta_g|_{\rm ion}$ is given by
\begin{equation}
\label{eq:sssec_generalization_source-B}
\begin{split}
\delta_g|_{\rm ion}(\eta,\vec{k}) &=
\frac{\widebar{n}_g|_{\rm ion}(\eta,\eta_{\ast,1})}{\widebar{n}_g|_{\rm ion}(\eta)}f_{\rm ion}(\eta,\eta_{\ast,1},k^2)
\delta_{\rm em}(\eta_{\ast,1},\vec{k}) + (\eta_{\ast,1}\to\eta_{\ast,2})\,\,,
\end{split}
\end{equation}
where $f_{\rm ion}(\eta,\eta_{\ast,i},k^2)$ takes the same form as in \eq{delta_perturbations-B}, and $\widebar{n}_g|_{\rm ion}(\eta,\eta_{\ast,i})$
is the background galaxy number density computed for the single flash of radiation at $\eta_{\ast,i}$,
so that $\widebar{n}_g|_{\rm ion}(\eta)$ is equal to $\widebar{n}_g|_{\rm ion}(\eta,\eta_{\ast,1})+\widebar{n}_g|_{\rm ion}(\eta,\eta_{\ast,2})$.
Then, from \eq{sssec_generalization_source-B} we see that, in full generality, we can write $\delta_g|_{\rm ion}$ as the sum
\begin{equation}
\label{eq:sssec_generalization_source-D}
\delta_g|_{\rm ion}(\eta,\vec{k}) = f_{\rm em}(\eta,k^2)\delta(\eta,\vec{k}) + f_{\epsilon_{\rm em}}(\eta,k^2)\epsilon_{\rm em}(\eta,\vec{k})\,\,.
\end{equation}
The two functions $f_{\rm em}$ and $f_{\epsilon_{\rm em}}$ are surely different since the matter
overdensity and the stochastic term do not evolve with time in the same way.
However, it is more interesting to ask if the ratio between them is scale-independent
(as was the case when we considered a single emission time in Section \ref{ssec:galaxy_statistics}: see \eq{terms_in_bias_expansion}).
The answer is negative. More precisely, since the time evolution of both $\delta$ and $\epsilon_{\rm em}$ is independent of $k$
at the order we are working at, this is due only to the dependence of the function $f_{\rm ion}$ on the emission time
(through, \eg, the mean free path).
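Schematically, writing at linear order
$\delta_{\rm em}(\eta_{\ast,i},\vec{k}) = b_{\rm em}(\eta_{\ast,i})\,[D(\eta_{\ast,i})/D(\eta)]\,\delta(\eta,\vec{k})+\epsilon_{\rm em}(\eta_{\ast,i},\vec{k})$,
with $D$ the linear growth factor, \eq{sssec_generalization_source-B} gives
\begin{equation}
f_{\rm em}(\eta,k^2) = \sum_{i\,=\,1,2}\frac{\widebar{n}_g|_{\rm ion}(\eta,\eta_{\ast,i})}{\widebar{n}_g|_{\rm ion}(\eta)}\,
b_{\rm em}(\eta_{\ast,i})\,\frac{D(\eta_{\ast,i})}{D(\eta)}\,f_{\rm ion}(\eta,\eta_{\ast,i},k^2)\,\,,
\end{equation}
while the coefficient of the stochastic part is an analogous weighted sum without the growth factors and LIMD biases:
since $f_{\rm ion}(\eta,\eta_{\ast,1},k^2)$ and $f_{\rm ion}(\eta,\eta_{\ast,2},k^2)$ have different shapes
(through, \eg, ${\lambda}_{\rm eff}(\eta_{\ast,i})$), the two sums are weighted differently and their ratio depends on $k$.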
Let us then see what happens in the limit of a continuous emission of radiation over an interval of order ${\cal H}^{-1}$. The
question now is whether or not we can predict the scale dependence of the two functions multiplying $\delta$ and $\epsilon_{\rm em}$ in \eq{sssec_generalization_source-D}.
The simplest way to see that the answer is negative is to focus on the leading order in the expansion in the mean free path.
These two functions, then, depend on all the bias coefficients $g_{0}(\eta,\eta_{\ast,i})$ for $i=1,\dots,n_{\rm flashes}$
(generalizing the result of \eq{redefine_f_ion-B} to $n_{\rm flashes}$ emission times).
As we bring $n_{\rm flashes}$ to infinity, we would then need an infinite number of functions of time, losing all predictivity.
Before proceeding to study the inhomogeneities in the number density of absorbers,
we notice that the solution for the intensity of radiation received by the galaxy
in the limit of continuous emission over an interval $\eta_f-\eta_i\sim{\cal H}^{-1}$ can be obtained simply by an integral over $\eta_\ast$.
Looking at the full solution of the RT equation in Appendix \ref{app:solution_of_boltzmann_equation},
it is straightforward to see that such limit corresponds to the replacement $\sum_{\eta_{\ast,i}}\Delta\eta\to\int_0^\eta\mathrm{d}\eta_\ast$,
where $\Delta\eta$ (defined at the beginning of Section \ref{ssec:single_flash}) is the duration of a single flash.
However, we cannot obtain the dimensionless perturbation $\delta_g|_{\rm ion}$ in terms of an integral over $\eta_\ast$ as easily.
The simplest way to see this is that $\Delta\eta$ always cancels in the expressions for $\delta_g|_{\rm ion}$ (as shown in \eq{redefine_f_ion-B}, for example).
\subsubsection*{Inhomogeneous absorbing medium}
\noindent We know that there are inhomogeneities in the number density of neutral hydrogen.
The ionization fronts propagating into the neutral medium formed Str{\" o}mgren spheres of ionized, heated gas,
so we cannot treat $n_{\text{HI}}$ as homogeneous on scales of order of the typical bubble size,
but we have to take into account $\delta_{\text{HI}}$ for scales $k\gtrsim10^{-2}\,{\rm Mpc}^{-1}$ \cite{Mao:2014rja}.
We can imagine that these inhomogeneities in the absorbing medium can also be treated by a bias expansion on sufficiently large scales.
Indeed, \cite{McQuinn:2018zwa} has recently developed such a bias expansion for the inhomogeneities in the neutral fraction
and, with it, a bias expansion for the $\rm 21cm$ signal from reionization (see also \cite{Giri:2018dln} for a recent
computation of the position-dependent $\rm 21cm$ power spectrum via separate-universe simulations).
Since we have the full solution of the RT equation (\eqsIII{boltzmann_solution}{source_definition}{tau_solution} in Appendix \ref{app:solution_of_boltzmann_equation}),
it should be easy, in principle, to study the effect of these inhomogeneities in the optical depth.
However, since $\tau$ itself is given by an integral along the line of sight,
the resulting expressions for $\widebar{n}_g|_{\rm ion}$ and $\delta n_g|_{\rm ion}$ are very complicated.
For this reason, we study the effect of an inhomogeneous $\tau$ in the case of radiation emitted in a single flash around $\eta_\ast$:
this is simpler to treat, and the generalization to continuous emission is straightforward.
Let us go back to \eqsII{observed_intensity}{optical_depth}. We can expand $n_{\text{HI}}$ in a background $\widebar{n}_{\text{HI}}$ and a dimensionless perturbation $\delta_{\text{HI}}$.
We see that, at first order in perturbations, the contribution of the $\tau$ inhomogeneities to the perturbations $\delta{\cal I}$
of the specific intensity of ionizing radiation received by the galaxies is equal to
\begin{equation}
\label{eq:sssec_generalization_tau}
\delta{\cal I}\supset{-\bigg(\frac{1+z(\eta)}{1+z_\ast}\bigg)^3} e^{-\widebar{\tau}}\,\widebar{{\cal I}}_\ast\big(\eta_\ast,E(\eta_\ast,\eta)\big)
{\int_{\eta_\ast}^\eta}\mathrm{d}\eta'\,\frac{\delta_{\text{HI}}\big(\eta',\vec{x}+\vers{n}(\eta-\eta')\big)}{\lambda_{\rm ion}\big(\eta',E(\eta',\eta)\big)}\,\,,
\end{equation}
where the energy-dependent m.f.p.~is given, as in \eq{hat_tau_approx}, by $\lambda_{\rm ion} = 1/(\sigma_{\rm bf}\widebar{n}_{\text{HI}} a)$,
and $\widebar{\tau}$ is obtained from \eq{optical_depth} by taking $n_{\text{HI}}=\widebar{n}_{\text{HI}}$.~Then,
we see that the structure of \eq{sssec_generalization_tau} is similar to that of the full solution of Appendix \ref{app:solution_of_boltzmann_equation},
if we take a homogeneous optical depth and an emissivity proportional to $\sigma_{\rm bf} \widebar{n}_{\text{HI}}$.
That is, the absorber medium plays the role of an additional ``negative'' source term (\ie~a sink),
whose emissivity is not localized around a single time $\eta_\ast$ (unlike that of the emitters).
Generalizing to an arbitrary number of emission times, this tells us that the inhomogeneities in the density of absorbers contribute to $\delta_g|_{\rm ion}$
through two additional functions of $(\eta,k^2)$, which we can call $f_{\text{HI}}$ and $f_{\epsilon_{\HI}}$. These two functions multiply $\delta$ and $\epsilon_{\rm HI}$ respectively.
There is no loss of generality in summing $f_{\text{HI}}$ and $f_{\rm em}$ into a single function, given that both multiply the matter overdensity $\delta$.
We cannot instead reabsorb $f_{\epsilon_{\HI}}$ in $f_{\epsilon_{\rm em}}$ since the two stochastic terms $\epsilon_{\text{HI}}$ and $\epsilon_{\rm em}$ are different fields.
However, the most important point to emphasize is that, again, the scale dependence of $f_{\text{HI}}$ and $f_{\epsilon_{\HI}}$
cannot be computed perturbatively with a finite number of bias parameters.
\subsection{Scattering}
\label{ssec:scatterings}
\noindent While a full discussion of the impact of scattering on the radiative-transfer equation is beyond the scope of this paper,
in this short section we briefly draw a qualitative picture of how our previous results would be affected if they play an important role.
The most important effect is the redistribution of the direction of the photons.\footnote{The scattering kernel
can be only a function of $\vers{n}\cdot\vers{n}'$ at the order in perturbations we are working at.
The reason is the same as the one we used to conclude that the emissivity and the galaxy response cannot depend on $\vers{n}$.}
Let us then consider again the case of a single flash to build some intuition.
We can imagine that, after photons are emitted isotropically at $\eta_\ast$, some of those that are initially directed
away from the galaxy are later scattered towards it at some time $\eta_{\rm sc} > \eta_\ast$.
This effectively mimics a second burst of radiation at $\eta_{\rm sc}$. Therefore, in the limit of scattering happening continuously over time,
we expect to roughly go back to the same situation that we discussed when we included inhomogeneities in the optical depth.
Finally, what happens if scattering dominates over absorption?
In this case we expect that the intensity is made isotropic on scales larger than the (comoving) diffusion length $D\sim\sqrt{1/(\sigma_{\rm sc}\widebar{n}_{\rm sc}a\cal H)}$
($\sigma_{\rm sc}$ and $\widebar{n}_{\rm sc}$ being the overall amplitude of the scattering cross section and the average number density of scatterers, respectively).
On these scales, the monopole of the intensity will follow a (sourced) diffusion equation, so once we compute $\delta_g|_{\rm ion}$
using \eqsII{definition_of_G_g-A-1}{definition_of_G_g-A-2} we see that the higher-derivative terms from RT effects can be expanded in powers of $k^2D^2$.
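The scaling of the diffusion length follows from a standard random-walk estimate: with comoving scattering mean free path
$\lambda_{\rm sc} = 1/(\sigma_{\rm sc}\widebar{n}_{\rm sc}a)$, a photon scatters $N\sim{\cal H}^{-1}/\lambda_{\rm sc}$ times per Hubble time, so
\begin{equation}
D \sim \sqrt{N}\,\lambda_{\rm sc} \sim \sqrt{\frac{\lambda_{\rm sc}}{\cal H}}
= \sqrt{\frac{1}{\sigma_{\rm sc}\widebar{n}_{\rm sc}a\,{\cal H}}}\,\,.
\end{equation}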
\section{Discussion and conclusions}
\label{sec:conclusions}
\noindent In this paper we have investigated whether it is possible to find a resummation of the higher-derivative terms in the bias expansion that come from
radiative-transfer (RT) effects.
We have shown that, at linear order in perturbations and when mainly absorption and emission play a relevant role in the RT equation, these
effects are captured by three functions of $k^2\lambda_{\rm eff}^2$, where $\lambda_{\rm eff}$ is the effective mean free path of ionizing radiation.
Whether or not it is possible to predict the scale dependence of these three functions,
instead of simply relying on their expansion in powers of $k^2\lambda_{\rm eff}^2$, depends on the following factors:
\begin{itemize}
\item the time dependence of the emissivity of the sources of ionizing radiation;
\item the time dependence of the response of galaxies to the flux of ionizing radiation;
\item the presence of inhomogeneities in the optical depth.
\end{itemize}
\begin{table}[b!]
\myfloatalign
\caption[.]{This table shows which assumptions are necessary to predict the scale dependence of the bias from RT effects, assuming no inhomogeneities in the optical depth.
$\Delta\eta_{\rm em}$ and $\Delta\eta_{G}$ are the interval over which $\varepsilon_{\rm em}$ is non-vanishing and the typical extent in time of the galaxy response, respectively.
By~\ding{51}~we mean that the higher-derivative terms can be resummed into well-defined functions of $k^2\lambda_{\rm eff}^2$, each multiplied by an RT bias coefficient
and an increasing power of ${\cal H}\lambda_{\rm eff}$. The case of an instantaneous galaxy response is discussed in \mbox{Appendix \ref{app:fourier_transform_intensity}.}}
\label{tab:conclusions_table-1}
\centering
\medskip
\begin{tabular}{lcc}
\toprule
{$\delta\tau=0$} & $\Delta\eta_{G}\ll {\cal H}^{-1}$ & $\Delta\eta_{G}\sim {\cal H}^{-1}$ \\
\midrule
$\Delta\eta_{\rm em}\ll {\cal H}^{-1}$ & \ding{51} & \ding{51} \\[2ex]
$\Delta\eta_{\rm em}\sim {\cal H}^{-1}$ & \ding{51} & \ding{55} \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[b!]
\myfloatalign
\caption[.]{Same as \tab{conclusions_table-1}, but taking into account the inhomogeneities in the optical depth.
Note that having an inhomogeneous $\tau$ is not a problem if the galaxy response is instantaneous:
we refer to Appendix \ref{app:fourier_transform_intensity} for more details.}
\label{tab:conclusions_table-2}
\centering
\medskip
\begin{tabular}{lcc}
\toprule
{$\delta\tau\neq 0$} & $\Delta\eta_{G}\ll {\cal H}^{-1}$ & $\Delta\eta_{G}\sim {\cal H}^{-1}$ \\
\midrule
$\Delta\eta_{\rm em}\ll {\cal H}^{-1}$ & \ding{51} & \ding{55} \\[2ex]
$\Delta\eta_{\rm em}\sim {\cal H}^{-1}$ & \ding{51} & \ding{55} \\
\bottomrule
\end{tabular}
\end{table}
This is summarized in \tabs{conclusions_table-1}{conclusions_table-2}. For example, let us consider the case of the galaxy
response varying on cosmological time scales; that is, the observable properties of a given galaxy sample retain a memory of the ionizing flux received at some earlier time.
Then, the dependence of these functions on the details of galaxy formation can be absorbed in
generalized RT bias coefficients, without needing a detailed ``UV'' modeling of galaxy formation, if: \emph{a}) one can neglect inhomogeneities in the optical depth;
\emph{b}) the emission of ionizing radiation only happens for a short period of time $\Delta\eta\ll{\cal H}^{-1}$.
If these assumptions do not hold, we cannot predict the dependence on $k$ of these three functions
unless we know precisely how the response of galaxies to the ionizing radiation depends on time.
It is important to stress that this does not happen in the gravity-only bias expansion.
There, the time dependence of the Green's function can be reabsorbed, order by order in perturbations and spatial derivatives,
into the time-dependent bias coefficients. Here we cannot do this because, through the received flux,
the galaxies are sensitive to the inhomogeneities in the distribution of sources and sinks of ionizing radiation evaluated along
their past light cone, and not only along the past fluid worldline.
Consequently, the time dependence of the Green's function affects also the scale dependence of the bias.
It is however important to emphasize that it is entirely possible for the response to ionizing radiation to happen on time scales much
faster than a Hubble time. For example, in the case of line emission from diffuse gas we can imagine that the response to the ambient radiation field
is controlled by the recombination rate, which has nothing to do with Hubble.
More precisely, if we wanted to be completely general, we should have considered a Fourier transform of the Green's functions of Section \ref{ssec:single_flash}
with respect to $\eta-\eta'$. This Fourier transform can have support for both $\omega\sim{\cal H}$ and for $\omega\gg{\cal H}$,
and in this work we have focused on the case where the Fourier coefficients are nonzero at low frequencies.
In this sense, our study is conservative: as we can see from \tabs{conclusions_table-1}{conclusions_table-2},
the contributions to the bias expansion due to the high-frequency part of the response are always under control.
In this paper we have followed an effective field theory approach, \ie~we have not made any specific assumption on the
Green's functions describing the response to the radiation (apart from very general assumptions on their time dependence, as discussed above).
In other words, our results apply equally well to any other tracer of the underlying matter distribution that is sensitive to RT effects.
For galaxies, the suppression of the star-formation rate coming from photo-evaporation of the gas accreting onto the parent halo
is not very relevant for halos whose mass is much larger than the Jeans mass.
We thus expect that the bias coefficients of the higher-derivative terms from these RT effects are actually a strong function of the parent halo mass,
and their amplitude is very small for $M_{h}\gg M_{\rm J}$. Since we expect RT effects to typically lead to a smooth contribution to the
galaxy power spectrum (see \fig{PS_corrections}), a contribution that is very small in amplitude can presumably
be absorbed by marginalizing over a sufficiently flexible template for the RT effects.
However, this might well lead to a degradation of constraints on the neutrino mass $m_\nu$ or equilateral non-Gaussianity $f_{\rm NL}^{\rm equil}$.
In particular, \fig{neutrino_scale_dependence} illustrates that the RT effects are expected to have a scale dependence
that is quite similar to that induced by nonzero neutrino masses.
However, we do not expect RT effects to be necessarily small for tracers like, \eg, the Lyman-$\alpha$ forest. In this case, we can imagine that
treating the higher-derivative terms perturbatively (\ie~through an expansion in powers of $k^2$) would lead to a significant loss of
constraining power on cosmological parameters in general. Indeed, the bias expansion would stop being predictive already at
the large scales where the RT effects leave their imprint on the power spectrum of these tracers.
\paragraph{Future prospects} An explicit modeling of the response to ionizing radiation is surely a way to get around this problem.
However, it is clear that any uncertainty on the theoretical errors of this model would reduce
the robustness of galaxy clustering as a cosmological probe. A second way relies on the fact that
we do not expect RT effects to induce a sizeable velocity bias. Indeed, let us consider the momentum density $\vec{p}_{\cal I}$ carried
by the ionizing radiation field, as seen by an observer comoving with the galaxies.
It is straightforward to show that, at leading order in derivatives, it is proportional to $\widebar{\rho}_{\cal I}\lambda_{\rm eff}\vec{\nabla}\delta$
(see Appendix \ref{app:momentum}), where $\widebar{\rho}_{\cal I}$ is the average energy density of the radiation field.
The momentum transfer from the radiation field to the galaxies, and consequently the contribution from RT to
the velocity bias $\vec{v}_g-\vec{v}$, is then controlled by the ratio $\widebar{\rho}_{\cal I}/\widebar{\rho}_b = \Omega_{\cal I}/\Omega_b$.
Since the density of radiation is dominated by the CMB, this ratio is much smaller than $\Omega_r/\Omega_b\approx 5\Omega_r/\Omega_m\approx 10^{-3}(1+z)$.
From this we conclude that the peculiar velocities of galaxies, and with them the higher-multipole contribution
to the galaxy overdensity in redshift space, are very weakly affected by radiative transfer
(as pointed out by \cite{Gontcho:2014nsa}), allowing us to obtain, in principle, unbiased measurements on cosmological parameters.
\paragraph{Comparison with recent work} While this work was being completed,
two papers that investigate similar topics have been published on the arXiv \textnormal{\cite{Meiksin:2018wij,Sanderbeck:2018lwc}}. In
\cite{Meiksin:2018wij} the authors compute the inhomogeneities $\delta_{\cal I}$ of the intensity $\cal I$
(integrated over angles and averaged over frequencies with some weighting $w(E)$, whose frequency dependence is
assumed to be that of the ${\text{HI}}$ photoionization cross section).
The spirit of our calculation is similar to theirs (see for example Appendix \ref{app:fourier_transform_intensity}), with the difference that they
consider directly the physically-motivated case of continued emission over Hubble time scales.
The work \cite{Sanderbeck:2018lwc}, instead, studies the impact of ionizing radiation on the clustering of tracers.
The difference with our computation is that the authors assume that the contribution of radiative transfer,
\ie~what we called $\delta_g|_{\rm ion}$, is given by $b_{\cal I}\delta_{\cal I}$, with $b_{\cal I}$
a time-dependent bias coefficient (see their Eqs.~(1), (2)). That is,
they assume that the time scale of nonlocality of the response of tracers to ionizing radiation is much shorter than a Hubble time,
so that their Green's functions are proportional to $b_{\cal I}(\eta)\delta(\eta-\eta')$.
Consequently, based on this strong simplifying assumption and after assuming a model for the emissivity and the absorption coefficient,
they are able to compute the scale dependence of these corrections to galaxy clustering without requiring an infinite number of bias coefficients.
This is in agreement with our conclusions (see \eg~\tabs{conclusions_table-1}{conclusions_table-2}).
\section*{Acknowledgements}
\noindent It is a pleasure to thank Philipp Busch, Chris Byrohl, Jens Chluba, Ildar Khabibullin, Eiichiro Komatsu, Kaloian Lozanov, Matt McQuinn, and Shun Saito for useful discussions.
G.~C. and F.~S. acknowledge support from the Starting Grant (ERC-2015-STG 678652) ``GrInflaGal'' from the European Research Council.
\section{Conclusion}
We introduce Spider-DK, a human-curated dataset based on the Spider benchmark for evaluating the generalization of text-to-SQL models, with a focus on understanding domain knowledge.
We demonstrate that the performance of existing text-to-SQL models drops dramatically on Spider-DK, even if the domain knowledge appears in the training set.
Our evaluation indicates that the models do not always understand the underlying domain knowledge when making predictions; thus, we consider improving model understanding of domain knowledge an important direction toward achieving cross-domain generalization.
\section*{Acknowledgements}
We would like to thank the anonymous reviewers for their helpful comments.
Matthew Purver is partially supported by the EPSRC under grant EP/S033564/1, and by the European Union's Horizon 2020 programme under grant agreement
825153 (EMBEDDIA, Cross-Lingual Embeddings for Less-Represented Languages in European News Media). Xinyun Chen is supported by the Facebook Fellowship. The results of this publication reflect only the authors' views and the Commission is not responsible for any use that may be made of the information it contains.
\section{Experiments}
\subsection{Experimental Setup}
We evaluate previous state-of-the-art models on Spider-DK and Spider \cite{Yu2018a}.
As discussed in Section~\ref{section:2Overall}, the Spider test set is not publicly accessible, and thus Spider-DK does not contain a test set.
For better comparison, we extracted the 535 examples corresponding to Spider-DK from Spider for evaluation, instead of using the whole Spider development set.
In addition, we select 125 examples with domain knowledge from the training set to evaluate the training effect.
Therefore, there are three evaluation sets:
\begin{itemize}[leftmargin=*,noitemsep,topsep=0em]
\item $ \textbf{Spider}_{\textnormal{\small{T}}}$: 125 examples drawn from the Spider training set.
\item $ \textbf{Spider}_{\textnormal{\small{D}}}$: 535 examples drawn from the Spider development set.
\item {\bf Spider-DK}: Spider-DK development set with 535 examples.
\end{itemize}
We evaluate open-source models that reach competitive performance on Spider: GNN \cite{Bogin2019}, IRNet \cite{Guo2019}, RAT-SQL \cite{Wang2019} with and without BERT \cite{Kenton2017}, and RAT-SQL + GAP \cite{DBLP:journals/corr/abs-2012-10309}.
We present their results on the 265 Spider-DK examples containing domain knowledge and analyze their performance on each knowledge type.
Our evaluation is based on the exact match metric defined in the original Spider benchmark, which measures whether the predicted query, ignoring condition values, is as a whole equivalent to the gold query.
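For illustration, a much-simplified, value-insensitive comparison in the spirit of this metric can be sketched as follows
(this is \emph{not} the official Spider evaluation script, which parses the queries and compares SQL components;
the string masking below is purely illustrative):
\begin{verbatim}
# Simplified sketch of a value-insensitive exact-match check.  This is
# NOT the official Spider evaluator (which parses queries and compares
# SQL components); it only illustrates ignoring condition values.
import re

def normalize(sql: str) -> str:
    sql = sql.strip().rstrip(";").lower()
    sql = re.sub(r"'[^']*'|\"[^\"]*\"", "value", sql)  # mask strings
    sql = re.sub(r"\b\d+(\.\d+)?\b", "value", sql)     # mask numbers
    return " ".join(sql.split())

def exact_match(pred: str, gold: str) -> bool:
    return normalize(pred) == normalize(gold)

# values differ but the query skeletons agree -> counted as a match
print(exact_match("SELECT avg(age) FROM dogs WHERE abandoned_y = 1",
                  "SELECT avg(age) FROM dogs WHERE abandoned_y = 'yes'"))
# a missing WHERE clause is a structural error -> no match
print(exact_match("SELECT avg(age) FROM dogs",
                  "SELECT avg(age) FROM dogs WHERE abandoned_y = 1"))
\end{verbatim}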
\begin{table}[t]
\centering
\resizebox{.99\columnwidth}{!}{
\smallskip\begin{tabular}{lccc}
\hline
\bf model & \bf $ \textbf{Spider}_{\textnormal{\small{T}}}$ &\bf $ \textbf{Spider}_{\textnormal{\small{D}}}$ & \bf Spider-DK \\
\hline \hline
GNN \cite{Bogin2019} & 61.6\% & 46.2\% & 26.0\% \\
IRNet \cite{Guo2019} & 87.2\% & 53.8\% & 33.1\% \\
RAT-SQL \cite{Wang2019} & 93.6\% & 61.1\% & 35.8\% \\
RAT-SQL + BERT \cite{Wang2019} & 92.0\% & \bf 73.3\% & 40.9\% \\
RAT-SQL + GAP \cite{DBLP:journals/corr/abs-2012-10309} & \bf 98.4\% & 67.8\% & \bf 44.1\% \\
\hline
\end{tabular}
}
\caption{Exact match accuracy on the $ \textbf{Spider}_{\textnormal{\small{T}}}$, $ \textbf{Spider}_{\textnormal{\small{D}}}$ and {\bf Spider-DK}, where models are trained on the original Spider training set.}\smallskip
\label{table:previous-models}
\end{table}
\subsection{Main Results}
Table \ref{table:previous-models} presents the exact match accuracy of different models on $ \textnormal{Spider}_{\textnormal{\small{T}}}$, $ \textnormal{Spider}_{\textnormal{\small{D}}}$, and Spider-DK. All models are trained on the original Spider training set.
Compared to $ \textnormal{Spider}_{\textnormal{\small{D}}}$, the performance of all models drops significantly, by about 20\% to 30\%, on Spider-DK.
Although Spider-DK is designed based on $ \textnormal{Spider}_{\textnormal{\small{T}}}$, on which the exact match accuracy is high, these models cannot generalize well to Spider-DK.
In particular, although RAT-SQL + BERT achieves better performance on $ \textnormal{Spider}_{\textnormal{\small{D}}}$ than RAT-SQL + GAP, RAT-SQL + GAP outperforms RAT-SQL + BERT on Spider-DK, indicating that GAP helps the model acquire a better understanding of domain knowledge.
Despite some improvement achieved by recent models, the results show that domain knowledge understanding remains a considerable gap on the way to cross-domain text-to-SQL generation.
\begin{table}[t]
\centering
\resizebox{.99\columnwidth}{!}{
\smallskip\begin{tabular}{lcccccc}
\hline
\bf Approach & \bf ALL & \bf T1 & \bf T2 & \bf T3 & \bf T4 & \bf T5 \\
\hline \hline
GNN & 6.8\%& 5.3\% & 7.6\% & 2.7\% & 8.3\% & 21.2\%\\
IRNet & 19.2\%& \bf 9.2\% & 4.6\% & 42.4\% & 4.5\% & 27.2\%\\
RAT-SQL & 16.6\%& 2.6\% & 13.8\% & 26.0\% & \bf 9.1\% & 36.4\%\\
RAT-SQL + BERT & 19.6\% & 3.9\% & 12.3\% & 41.1\% & 4.5\% & 30.3\%\\
RAT-SQL + GAP & \bf 27.1\% & 7.9\% & \bf 20.0\% & \bf 53.4\% & \bf 9.1\% & \bf42.4\%\\
\hline
\end{tabular}
}
\caption{Break down exact match accuracy in the Spider-DK examples containing domain knowledge.}\smallskip
\label{table:break-down-results}
\end{table}
\subsection{Performance on Knowledge Type Splits}
To better understand model performance in the presence of domain knowledge, we present the breakdown accuracies for the different domain knowledge types in Table \ref{table:break-down-results}.
RAT-SQL + GAP unsurprisingly achieves the best performance on all examples and outperforms other models from T2 to T5.
However, IRNet surprisingly obtains an overall accuracy close to that of RAT-SQL + BERT, because IRNet integrates ConceptNet~\cite{speer-havasi-2012-representing} to recognize country, state, and city synonyms, which improves its accuracy on T3.
GNN and RAT-SQL perform relatively poorly on T3 because they do not have extra knowledge components such as ConceptNet.
Besides, GNN trains its embeddings from scratch, and RAT-SQL uses GloVe~\cite{pennington-etal-2014-glove}, which has been shown to be worse than BERT in many scenarios.
Although ConceptNet helps IRNet on T3, it is not a general method for solving other domain knowledge problems.
However, even on the best-performing type, T3, the accuracy is still far from that on $ \textnormal{Spider}_{\textnormal{\small{D}}}$, which shows that there is still much room for improvement.
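As a concrete (and purely hypothetical) illustration of the kind of component that helps on T3, the sketch below mimics
knowledge-based cell-value linking with a hand-written synonym map; the map and function names are made up,
and a real system such as IRNet relies on ConceptNet rather than a fixed dictionary:
\begin{verbatim}
# Hypothetical sketch of knowledge-based cell-value linking for
# T3-style questions.  The synonym map is made up for illustration;
# IRNet uses ConceptNet to recognize such synonyms.
import re

SYNONYMS = {
    "american": ("country", "USA"),
    "the united states": ("country", "USA"),
    "french": ("country", "France"),
}

def link_conditions(question: str):
    """Return (column, value) condition hints triggered by synonyms."""
    q = question.lower()
    return [pair for phrase, pair in SYNONYMS.items()
            if re.search(rf"\b{re.escape(phrase)}\b", q)]

print(link_conditions("List all American airline names "
                      "and their abbreviations."))
# -> [('country', 'USA')]
\end{verbatim}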
\subsection{Error Analysis}
Table \ref{table:example-error} presents five error examples in each knowledge type drawn from the prediction of RAT-SQL + GAP.
These error predictions are similar to the training examples shown in Table \ref{table:example-of-dk}.
There are three reasons why existing models cannot perform well on Spider-DK.
The first reason is that some domain knowledge is not common enough in the training set.
For example, in T2, the phrase \verb|`from old to young'| appears more often with \verb|age|, which trains the model to output a \verb|desc age| order.
The unbalanced training data may lead the model to prefer outputting a \verb|desc| order even when the column is the \verb|`date of birth'|.
The second reason is that the models have insufficient generalization ability for similar problems. Many training examples belong to T3 and T4; however, these examples cannot cover all cases.
For example, the training data may rarely or never contain examples where the USA is substituted with the United States, but we expect the models to still handle these examples correctly.
The third reason is that a word may need to be used twice, to generate both a schema item and the SQL structure, as we discussed for T5 in Section \ref{sec:dk}.
\begin{table}[t]
\centering
\resizebox{.99\columnwidth}{!}{
\smallskip\begin{tabular}{cl}
\\ \specialrule{0.08em}{0pt}{4pt}
{\bf (T1)NL} & \textit{Find the \uline{name} of the professionals \textbf{\textrm{...}}} \\
{\bf Pred} & \ttfamily select \ulverb|first_name| from \textbf{\textrm{...}} \\
{\bf Gold} & \ttfamily select \ulverb|first_name| , \ulverb|last_name| from \textbf{\textrm{...}}
\\ \specialrule{0.05em}{4pt}{4pt}
{\bf (T2)NL} & \textbf{\textrm{...}} \textit{sorted \uline{from oldest to youngest}? } \\
{\bf Pred} & \textbf{\textrm{...}} \ttfamily order by birth\_date \ulverb|desc| \\
{\bf Gold} & \textbf{\textrm{...}} \ttfamily order by birth\_date \ulverb|asc|
\\ \specialrule{0.05em}{4pt}{4pt}
{\bf (T3)NL} & \textit{List all \uline{American} airline names and their abbreviations.} \\
{\bf Pred} & \ttfamily select airline, abbreviation from airlines \\
{\bf Gold} & \ttfamily select airline, abbreviation from airlines \\
& \ttfamily where {\uline{\ttfamily country = `USA'}}
\\ \specialrule{0.05em}{4pt}{4pt}
{\bf (T4)NL} & \textit{What is the average age of all the \uline{abandoned} dogs?} \\
{\bf Pred} & \ttfamily select avg(age) from dogs \\
{\bf Gold} & \ttfamily select avg(age) from dogs \\ & \ttfamily where {\uline{\ttfamily abandoned\_y = 1}}
\\ \specialrule{0.05em}{4pt}{4pt}
{\bf (T5)NL} & \textit{Show \uline{average} attendance for all stadiums \textbf{\textrm{...}}} \\
{\bf Pred} & \ttfamily select \ulverb|avg(average)| from stadium \textbf{\textrm{...}} \\
{\bf Gold} & \ttfamily select \ulverb|average| from stadium \textbf{\textrm{...}} \\
\bottomrule
\end{tabular}
}
\caption{Sample wrong predictions of RAT-SQL + GAP in each type of domain knowledge.}
\label{table:example-error}
\end{table}
\section{Introduction}
Research on cross-domain text-to-SQL benchmarks has led to numerous advances.
Recent works \cite{DBLP:journals/corr/abs-2101-09901,DBLP:journals/corr/abs-2010-12412,DBLP:journals/corr/abs-2103-04399} have achieved over 70\% accuracy on the Spider benchmark~\cite{Yu2018a} and over 90\% accuracy on the WikiSQL benchmark~\cite{zhongSeq2SQL2017}, which seems to suggest that existing models have already solved most problems in this field.
However, the follow-up studies from \citet{Deng2020,gan-etal-2021-towards,Suhr2020,Shaw2020,oren-etal-2020-improving,keysers2020measuring} show that the generalization performance is much worse in more challenging scenarios.
For example, \citet{Deng2020} investigate the cases when the explicit mentions of database columns are removed from the question.
Similarly, \citet{gan-etal-2021-towards} observe that model accuracy drops dramatically when schema-related words are replaced with synonyms.
On the other hand, \citet{Suhr2020} find that the generalization to other databases is much worse, due to the distribution shift of both questions and SQL queries.
These papers introduce important challenges for improving the generalization performance, i.e., the model trained on a cross-domain text-to-SQL dataset (e.g., Spider \cite{Yu2018a}) does not generalize to a new external database.
However, the performance degradation is somehow expected for the following reasons. First, removing the explicit mentions breaks the assumptions that make the schema linking effective. Second, SQL queries in other databases could come from a different distribution; e.g., according to the hardness criteria defined by Spider benchmark, over 40\% Spider SQL queries are \verb|Medium| hardness, but there are less than 10\% \verb|Medium| SQL queries in the GeoQuery dataset \cite{data-geography-original}.
In this work, we demonstrate that the generalization performance could be poor even when both the NL questions and SQL queries follow the similar distribution to the training set.
Specifically, we constructed Spider-DK, a challenging variant of the Spider development set, with the focus of evaluating the model understanding of domain knowledge.
A domain means a certain type of application scenario; for example, the Spider benchmark includes various distinct domains such as geography and university. Cross-domain text-to-SQL research aims to build a text-to-SQL model that can generate correct SQL queries and generalize to different domains. Therefore, one main challenge of cross-domain text-to-SQL generalization is to understand the different knowledge required by different domains. For example, the university domain usually needs the knowledge of different job titles and genders, while the geography domain places more emphasis on knowledge of places instead of people.
We show that the state-of-the-art models consistently fail in cases when specific domain knowledge is required for prediction, even if the domain knowledge is moderately mentioned in the training data, and the models accurately predict the corresponding training samples. Such discrepancy suggests that the models do not properly learn the domain knowledge in order to fit the training set, thus improving the model capability to capture the domain knowledge is an important direction towards achieving the cross-domain generalization for text-to-SQL applications.
To our knowledge, we are the first work investigating the text-to-SQL model capability of understanding the domain knowledge provided in the training set, and generalizing the knowledge to new problems.
\begin{table}[t]
\centering
\resizebox{.99\columnwidth}{!}{
\smallskip\begin{tabular}{cl}
\\ \specialrule{0.08em}{0pt}{4pt}
{\bf T1} & \makecell[c]{SELECT Columns Mentioned by Omission}
\\
{\bf NL} & \textit{Find the \uline{name} of the teacher who \textbf{\textrm{...}}} \\
{\bf SQL} & \ttfamily select \ulverb|firstname| , \ulverb|lastname| from \textbf{\textrm{...}}
\\ \specialrule{0.05em}{4pt}{4pt}
{\bf T2} & \makecell[c]{Simple Inference Required}
\\
{\bf NL} & \textbf{\textrm{...}} \textit{order of their date of birth \uline{from old to young}.} \\
{\bf SQL} & \textbf{\textrm{...}} \ttfamily order by date\_of\_birth \ulverb|asc|
\\ \specialrule{0.05em}{4pt}{4pt}
{\bf T3} &
\makecell[c]{Synonyms Substitution in Cell Value Word}\\
{\bf NL} & \textit{List the state in the \uline{US}} \textbf{\textrm{...}} \\
{\bf SQL} & \textbf{\textrm{...}} \ttfamily where billing\_country = \ulverb|"USA"| \textbf{\textrm{...}}
\\ \specialrule{0.05em}{4pt}{4pt}
{\bf T4} &
\makecell[c]{One Non-Cell-Value Word Generates a Condition} \\
{\bf NL} & \textit{How many students got \uline{accepted} after the tryout?} \\
{\bf SQL} & \textbf{\textrm{...}} \ttfamily from tryout where \ulverb|decision="yes"|
\\ \specialrule{0.05em}{4pt}{4pt}
{\bf T5} &
\makecell[c]{Easy to Conflict with other Domains} \\
{\bf NL} & \textbf{\textrm{...}} \textit{with \uline{max speed} higher than 1000.} \\
{\bf SQL} & \textbf{\textrm{...}} \ttfamily where \ulverb|max_speed| > 1000 \\
\bottomrule
\end{tabular}
}
\caption{Five types of domain knowledge extracted from Spider training set. We name them as T1 to T5.}
\label{table:example-of-dk}
\end{table}
\section{Spider-DK Dataset}
\subsection{Overview}
\label{section:2Overall}
We construct the Spider-DK benchmark by selecting samples from the Spider development set that require domain knowledge understanding, and we also manually modify some samples to incorporate domain knowledge.
The purpose of building Spider-DK is to simulate the scenario where specific domain knowledge is involved in the users' utterance query.
Domain knowledge is often used without being noticed, which makes some domain knowledge unavoidable.
For example, in the T5 of Table \ref{table:example-of-dk}, the direct use of the \verb|max_speed| column annotation raises a domain knowledge problem.
We discuss the details of this problem later in Section \ref{sec:dk}.
Spider-DK contains 535 NL-SQL pairs drawn from the Spider development set, where 270 pairs are the same as the original Spider samples, while the remaining 265 pairs are modified to incorporate domain knowledge.
We categorize the types of domain knowledge required in Spider-DK, which facilitates breakdown analysis.
Spider-DK is smaller than the Spider development set, because not every domain or example can be easily modified to incorporate domain knowledge.
Besides, it is hard to evaluate the model generalization ability for domain knowledge if too many original Spider examples that do not require domain knowledge are kept.
In particular, the distribution of the SQL query hardness in Spider-DK is close to the original Spider, i.e., \verb|easy| accounts for 20.6\%, \verb|medium| accounts for 41.8\%, \verb|hard| accounts for 14.8\%, and \verb|extra hard| accounts for 19.1\%~\footnote{The Spider benchmark defines four hardness levels.}.
We define five types of domain knowledge in Table \ref{table:example-of-dk}. In Spider-DK, \verb|T1| accounts for 28.7\% of samples, \verb|T2| accounts for 24.5\%, \verb|T3| accounts for 27.5\%, \verb|T4| accounts for 8.3\%, and \verb|T5| accounts for 12.5\%.
We curate Spider-DK by modifying only questions, or both questions and SQL queries, as shown in Table \ref{table:spider2spiderdk}.
We carefully add the domain knowledge into the utterance to ensure that the new utterance follows the domain knowledge required by existing Spider samples and does not raise ambiguity.
Most domain knowledge in Spider-DK is similar to that in the Spider training set.
Compared to the evaluation sets in \cite{Suhr2020}, Spider-DK is easier and closer to the training data and focuses only on domain knowledge, and we provide more discussion below.
\begin{table}[t]
\centering
\resizebox{.99\columnwidth}{!}{
\smallskip\begin{tabular}{cl}
\\ \specialrule{0.08em}{0pt}{4pt}
\multicolumn{2}{c}{Only Modify the NL }
\\
{\bf Spider} & \textbf{\textrm{...}} \textit{in the order of birth date.} \\
{\bf Spider-DK} & \textbf{\textrm{...}} \textit{order of their birth date from old to young.}
\\ \specialrule{0.05em}{4pt}{4pt}
\multicolumn{2}{c}{Modify both NL and SQL }
\\
{\bf Spider} & \textit{Compute the average age of dogs.} \\
& \ttfamily select avg(age) from dogs \\
{\bf Spider-DK} & \textit{Compute the average age of \uline{abandoned} dogs.} \\
& \ttfamily select avg(age) from dogs \\
& \ttfamily where {\uline{\ttfamily abandoned\_y = 1}} \\
\bottomrule
\end{tabular}
}
\caption{Examples of Spider question and/or SQL modifications made in Spider-DK.}
\label{table:spider2spiderdk}
\end{table}
\subsection{Domain Knowledge}
\label{sec:dk}
Different SQL databases could require very different domain knowledge. As shown in~\cite{Suhr2020}, the state-of-the-art models on Spider achieve much worse performance on earlier SQL benchmarks such as ATIS and GeoQuery~\cite{data-atis-geography-scholar,data-geography-original}. However, we argue that the failure of generalization is expected to some extent, because without seeing in-domain examples, some domain knowledge required by these datasets is even hard to infer for experienced programmers.
For example, we asked five computer science graduate students to write the SQL query for the question \verb|`how many major cities are there?'| in GeoQuery, but none of them gave the correct answer.
This question requires the domain knowledge that \verb|major| means \verb|`population > 150000'|, which is hard to infer without looking at the GeoQuery training set.
Therefore, while acquiring general-purpose domain knowledge is also important, we believe that the failure to generalize to questions requiring domain knowledge similar to the training set is more problematic, which motivates our design of the Spider-DK benchmark.
We study five types of domain knowledge (named T1 to T5), shown in Table \ref{table:example-of-dk}. T1 requires the models to understand that the user refers to multiple columns with a single expression, i.e., some columns are mentioned only by omission.
T2 requires the models to infer the correct queries, e.g., if the T2 utterance in Table \ref{table:example-of-dk} were modified from \verb|`date of birth'| to \verb|`age'|, the model should output \verb|desc| rather than \verb|asc|.
Note that the Spider training set contains both \verb|`date of birth'| and \verb|`age'| along with \verb|`old to young'|.
T3 requires the models to recognize the cell value synonym substitution.
Some synonym substitutions are based on adjective forms, such as \verb|`singer whose country is France'| and \verb|`French singer'|.
Although T4 has the fewest samples in Spider-DK, it is not uncommon in the Spider training set.
Unlike the GeoQuery \verb|major| example mentioned above, T4 only includes conditions on columns whose type is boolean-like.
For example, in Tables \ref{table:example-of-dk} and \ref{table:spider2spiderdk}, the column \verb|decision| only contains \verb|yes| and \verb|no|, while \verb|abandoned_y| only contains \verb|1| and \verb|0|.
Therefore, the key to solving T4 is whether the model can recognize that a column is boolean-like, but the difficulty is that the triggering word varies across domains; a heuristic check is sketched below.
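As an illustration of this intuition, the following minimal sketch (a hypothetical heuristic of ours, not the mechanism of any model discussed here) flags a column as boolean-like when its distinct cell values form a yes/no-style pair:
\begin{verbatim}
# hypothetical heuristic: a column is boolean-like if its distinct
# values form a yes/no-style pair, as in `decision` (yes/no)
# or `abandoned_y` (1/0)
BOOLEAN_PAIRS = [{"yes", "no"}, {"1", "0"}, {"true", "false"}, {"y", "n"}]

def is_boolean_like(cell_values):
    distinct = {str(v).strip().lower() for v in cell_values}
    return any(distinct <= pair for pair in BOOLEAN_PAIRS)

print(is_boolean_like(["yes", "no", "yes"]))  # True
print(is_boolean_like([1, 0, 0, 1]))          # True
print(is_boolean_like(["USA", "France"]))     # False
\end{verbatim}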
Although T5 seems simple and does not appear to involve domain knowledge, models that generate the SQL structure and the schema items separately are prone to mispredict T5.
A review \cite{gan-etal-2020-review} shows that most models follow this separate generation pattern, i.e., they may use the same word twice, once when generating schema items and once when generating the SQL structure.
In training data from other domains, the models learn to generate a \verb|max()| function whenever the utterance contains the word \verb|max|.
Therefore, these models may use the word \verb|max| twice and generate \verb|max(max_speed)| for the T5 utterance instead of the simple \verb|max_speed|.
\section{Introduction}
Neural networks, as a branch of machine learning, have been demonstrating their capabilities throughout the modern world.
They help us handle ordinary daily tasks or collect data for statistical purposes, which are mostly mundane tasks.
Some examples are image classification \cite{img_clf}, speech to text conversion \cite{speech} and vehicle license plate recognition \cite{lpr}.
Experts had been developing theories such as back-propagation to equip neural networks with the ability of self-learning, even before powerful computers existed (see \cite{seppo}).
The most common neural network learning processes are based on empirical evidence, from the simplest stochastic gradient descent to the most effective Adaptive Moment Estimation (Adam) algorithm \cite{adam}.
Their performance is typically compared on datasets such as the Modified National Institute of Standards and Technology (MNIST) database or the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) \cite{mobilenetv2} \cite{ssd}.
These learning algorithms have been put into practical use despite not having the most rigorous mathematical foundations.
However, Wu et al. did show deterministic convergence of stochastic gradient descent under some restrictions on the activation function \cite{det_sto}.
In contrast to the practical value of neural networks, this paper mainly focuses on their theoretical aspects \cite{main}.
Given a function class, we use neural networks to approximate its functions.
We could consider many possible settings, such as the number of layers, the number of weights, whether the weights are discrete, and whether they are bounded above.
In order to focus on the sparsity of neural networks, we will restrict the number of weights during approximation.
Moreover, the weights are assumed to be bounded above and discretized, so that they can be stored in a computer with improved memory efficiency.
We focus on the theoretical limitations of neural networks and derive important results, including the fundamental bound theorem of the function class and the representability of certain representation systems by neural networks.
The main contribution of this paper is to examine the theoretical properties of two practical examples.
We use neural networks to approximate B-splines and carry out the derivation of the optimal exponent of the class of $\beta$ cartoon-like functions.
Since we write the selected proofs in great detail, we direct the reader to the original papers \sloppy{\cite{main}, \cite{donoho2} and \cite{donoho3}} for the proofs of the other theorems.
If the proof is given, it is either because we have original result as part of the proof, or more details that are not from the original papers are provided.
\section{Notations}
We introduce the definition of neural networks.
A standard neural network should have an input layer of $d$ nodes, followed by $L$ subsequent layers.
Each layer has some nodes, and connections are only made between consecutive layers. The connections carry the edge weights, while each node itself possesses a node weight.
We will first define affine linear maps which record the edge and node weights of a neural network.
\begin{definition}\textit{Affine linear map}. A mapping $W:\mathbb R^n\to \mathbb R^m$ is an \emph{affine linear map} if there exist an $m\times n$ matrix $A$ and an $m\times 1$ vector $b$ such that
$$W(x)=Ax+b\quad\text{ for all }x\in\mathbb R^n.$$
The matrix $A$ represents edge weights, and $b$ represents node weights.
\end{definition}
\begin{definition}
\textit{Deep Neural Network}. Let $L,d,N_1,\ldots, N_L$ be positive integers with $L\geq 2$. A map $\Phi:\mathbb R^d\to \mathbb R^{N_L}$ given by
$$\Phi(x)=W_L\rho (W_{L-1}\rho(\ldots\rho(W_1(x))))\quad\text{ for }x\in\mathbb R^d,$$
with affine linear maps $W_\ell: \mathbb R^{N_{\ell-1}}\to \mathbb R^{N_\ell}, 1\leq \ell\leq L$, and the nonlinear activation function $\rho$ acting componentwise, is called a \emph{neural network}.
\end{definition}
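To make the definition concrete, here is a minimal sketch (our own, not part of the original text) that evaluates $\Phi$ with NumPy; each $W_\ell$ is stored as a pair $(A_\ell,b_\ell)$, and $\rho$ is applied componentwise after every layer except the last.
\begin{verbatim}
import numpy as np

def relu(x):
    # componentwise activation rho(x) = max{0, x}
    return np.maximum(0.0, x)

def network(weights, x, rho=relu):
    """Phi(x) = W_L rho(W_{L-1} rho(... rho(W_1(x)))).

    weights: list of (A, b) pairs, one per layer; note that only
    L - 1 activations are used, matching the definition."""
    for A, b in weights[:-1]:
        x = rho(A @ x + b)
    A, b = weights[-1]
    return A @ x + b        # final affine map, no activation

# a 3-layer map R^2 -> R with N_1 = 4, N_2 = 3, N_3 = 1
rng = np.random.default_rng(0)
ws = [(rng.standard_normal((4, 2)), rng.standard_normal(4)),
      (rng.standard_normal((3, 4)), rng.standard_normal(3)),
      (rng.standard_normal((1, 3)), rng.standard_normal(1))]
print(network(ws, np.array([0.5, -1.0])))
\end{verbatim}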
If the activation function $\rho$ is linear (hence affine linear), then the neural network would be equivalent to a single-layer neural network.
This is because the composition of affine linear maps is again an affine linear map.
Note that a neural network of $L$ layers uses only $L-1$ activation functions. Therefore, we can serially concatenate two neural networks without needing an extra layer.
In the general discussion, the total number of nodes in the neural network is denoted as
$$\mathcal N(\Phi)=d+\sum_{\ell=1}^L N_\ell.$$
For the affine linear maps $W_\ell(x)=A_\ell x+b_\ell$ involved in the neural network, we define $\mathcal M(\Phi)$ to be the number of nonzero entries in all $A_\ell$ and $b_\ell$ for $\ell=1,2,\dots,L$. This quantity is also called the \textbf{connectivity} of the neural network. We will always assume the neural network is a real-valued function, that is, $N_L=1$. The results in this paper can be appropriately generalized to $N_L>1$, but we will not elaborate on that here.
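Continuing the sketch above, the connectivity is simply a count of nonzero entries; the hypothetical helper below computes $\mathcal M(\Phi)$ for weights stored as $(A_\ell,b_\ell)$ pairs.
\begin{verbatim}
import numpy as np

def connectivity(weights):
    # M(Phi): number of nonzero entries over all A_l and b_l
    return sum(int(np.count_nonzero(A) + np.count_nonzero(b))
               for A, b in weights)
\end{verbatim}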
Finally, we define the class of neural networks, which focuses on the restriction on connectivity as well as the structure of neural networks.
\begin{definition}
\textit{Class of Neural Networks}. We let $\mathcal{NN}_{L,M,d,\rho}$ denote the collection of all neural networks $\Phi:\mathbb R^d\to \mathbb R$ with exactly $L$ layers, connectivity $\mathcal M(\Phi)$ no more than $M$, and activation function $\rho$. When $L=1$ the collection $\mathcal{NN}_{L,M,d,\rho}$ is empty.
Moreover, we define
\begin{align*}\mathcal{NN}_{\infty,M,d,\rho}&=\bigcup_{L\in\mathbb N}\mathcal{NN}_{L,M,d,\rho},\\
\mathcal{NN}_{L,\infty,d,\rho}&=\bigcup_{M\in\mathbb N}\mathcal{NN}_{L,M,d,\rho},\\
\mathcal{NN}_{\infty,\infty,d,\rho}&=\bigcup_{L\in\mathbb N}\mathcal{NN}_{L,\infty,d,\rho}.\end{align*}
\end{definition}
We will mostly use $\mathcal{NN}_{\infty, M, d, \rho}$ in this paper, because we emphasize the connectivity of neural networks rather than the number of layers.
\section{Evaluate Approximation Quality}
\subsection{Approximation by representation systems}
Before jumping into topics about approximation by neural networks, we should define the function class we are targeting.
Given a Lebesgue measurable set $\Omega\subset \mathbb R^n$, we define $L^2(\Omega)$ to be the collection of all Lebesgue measurable functions $f:\Omega\to \mathbb R$ such that $$\int_\Omega f^2\text{ is finite}. $$
We give $L^2(\Omega)$ the usual metric topology $$\|f\|_{L^2(\Omega)} = \left(\int_\Omega f^2\right)^{1/2}$$
so $L^2(\Omega)$ is a Banach space. Then, we say $\mathcal C$ is a \textbf{function class} if it is a compact subset of $L^2(\Omega)$. A countable collection of functions $\mathcal D$ contained in $L^2(\Omega)$ is called a \textbf{representation system}.
There are many approximation theories concerning approximation of a function class by a representation system.
One familiar example of approximation is when $\mathcal C=L([0,2\pi])$, the space of periodic Lebesgue-integrable functions on $[0,2\pi]$ with period $2\pi$, and when \sloppy{$\mathcal D=\{1, \sin x, \cos x, \sin 2x, \cos 2x,\ldots\}$}, the countable set of sinusoidal functions.
The results from Fourier series showed that one can use finite linear combination of elements of $\mathcal D$ to approximate $\mathcal C$ arbitrarily well.
Another example is when $\mathcal C=C([0,1])$, the space of continuous functions on $[0,1]$, and when $\mathcal D$ is the collection of monomials $\{1,x,x^2,\dots\}$.
Bernstein polynomials can be used to approximate functions in $\mathcal C$ arbitrarily well.
Moreover, when the function $f\in \mathcal C$ satisfies $f(0)=f(1)=0$, it can be approximated by polynomials with integer coefficients arbitrarily well \cite[p.~14]{conapprox}.
Based on these existing theories, we could then transfer them to the study of approximation by neural networks. The method of doing so will be introduced in Chapter 6.
To quantify the quality of approximation, we study the \textbf{error of best $M$-term approximation of $f\in\mathcal C$ in $\mathcal D$}.
\begin{definition}
Given $d\in\mathbb N, \Omega\subset \mathbb R^d$, a function class $\mathcal C\subset L^2(\Omega)$, and a representation system $\mathcal D=(\varphi_i)_{i\in I}\subset L^2(\Omega)$, we define, for $f\in\mathcal C$ and $M\in \mathbb N$,
\begin{equation}\label{bestM}\Gamma_M^{\mathcal D}(f):=\inf_{\substack{I_M\subset I, \\ \# I_M=M, (c_i)_{i\in I_M}}} \left\lVert f-\sum_{i\in I_M} c_i\varphi_i\right\rVert_{L^2(\Omega)}.\end{equation}
We call $\Gamma_M^{\mathcal D}(f)$ the \emph{best $M$-term approximation error} of $f$ in $\mathcal D$. If there exists $f_M=\sum_{i\in I_M} c_i\varphi_i$ that attains the infimum in (\ref{bestM}), then $f_M$ is called a best $M$-term approximation of $f$ in $\mathcal D$.
\end{definition}
It is clear that choosing linear combinations of more terms from $\mathcal D$ will improve the approximation. We thus have
$$\Gamma_M^{\mathcal D}(f)\leq \Gamma_N^{\mathcal D}(f)\quad\text{ if }\quad M\geq N.$$
Assume for a moment that $\Gamma_M^{\mathcal D}(f)\to 0$ as $M\to\infty$ for every $f\in \mathcal C$. To quantify the speed of convergence to zero, we introduce a positive real number $\gamma$ satisfying
\begin{equation}\label{gamma}\sup_{f\in \mathcal C}\Gamma_M^{\mathcal D}(f)\in \mathcal O(M^{-\gamma})\quad\text{ as }M\to \infty.\end{equation}
The Big-oh notation $\mathcal O(g(\cdot))$ for nonnegative function $g$ is the set of functions $f$ such that $|f(x)|\leq Cg(x)$ for some $C>0$ and for $x$ sufficiently large. In other words, relation (\ref{gamma}) implies there is a positive constant $C$ such that
$$\sup_{f\in\mathcal C}\Gamma_M^{\mathcal D}(f)\leq CM^{-\gamma}\quad\text{ for $M$ sufficiently large.}$$
Note that such a positive $\gamma$ might not exist; this happens, for example, if there is an $f\in \mathcal C$ such that $\Gamma_M^{\mathcal D}(f)\not\to 0$, in which case we set $\gamma=0$. Regardless, we always assume the existence of a positive $\gamma$ in the later discussion.
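As a heuristic numeric illustration (entirely ours, with synthetic error values), the exponent $\gamma$ in (\ref{gamma}) can be read off as the negative slope of $\log \Gamma_M^{\mathcal D}(f)$ against $\log M$:
\begin{verbatim}
import numpy as np

# synthetic best M-term errors obeying Gamma_M = 5 * M^(-2), i.e. gamma = 2
M = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
err = 5.0 * M ** -2.0

# least-squares slope of log(err) against log(M) recovers -gamma
slope, _ = np.polyfit(np.log(M), np.log(err), 1)
print(f"estimated gamma = {-slope:.3f}")   # 2.000
\end{verbatim}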
Finally, to pursue an optimal value of $\gamma$, we define the following.
\begin{definition}
Define the supremum of all $\gamma>0$ satisfying estimation (\ref{gamma}) to be $\gamma^*(\mathcal C, \mathcal D)$, called the \emph{best $M$-term approximation rate} of $\mathcal C$ in the representation system $\mathcal D$.
\end{definition}
\subsection{Approximation by neural networks}
There is an analogous definition of best $M$-term approximation error for neural networks. To achieve it, it is appropriate to treat each nonzero edge and node weight of a neural network as playing the role of an element of a representation system. The following definition will be used.
\begin{definition}
Given $d\in\mathbb N, \Omega\subset \mathbb R^d$, a function class $\mathcal C\subset L^2(\Omega)$, and an activation function $\rho: \mathbb R\to \mathbb R$, we define, for $f\in\mathcal C$ and a positive integer $M$,
\begin{equation}\label{bestMNN}\Gamma_M^{\mathcal N}(f):=\inf_{\Phi\in\mathcal {NN}_{\infty, M, d, \rho}} \lVert f-\Phi\rVert_{L^2(\Omega)}.
\end{equation}
We call $\Gamma_M^{\mathcal N}(f)$ the \emph{best $M$-edge approximation error} of $f$.
\end{definition}
Similarly, we are interested in positive real numbers $\gamma$ such that
\begin{equation}\label{gammaNN}\sup_{f\in\mathcal C}\Gamma_M^{\mathcal {N}}(f)\in\mathcal O(M^{-\gamma})\quad\text{ as }M\to \infty,\end{equation}
so we have
\begin{definition}
The supremum of all $\gamma$ satisfying estimate (\ref{gammaNN}) is denoted by $\gamma_{\mathcal {NN}}^*(\mathcal C,\rho)$, called the \emph{best $M$-edge approximation rate} of $\mathcal C$ by neural networks with activation function $\rho$.
\end{definition}
In fact, these definitions alone are not useful for discovering the complexity of a function class.
It has been shown in \cite{donoho} and \cite{grohs} that every dense and countable representation system $\mathcal D\subset L^2(\Omega)$ achieves supremum rate $\gamma^*(\mathcal C, \mathcal D)=\infty$ for every function class $\mathcal C\subset L^2(\Omega)$.
An infinite $\gamma^*(\mathcal C,\mathcal D)$ means that the best $M$-term approximation error decays faster than any polynomial rate; in particular, one can approximate elements of $\mathcal C$ arbitrarily well by finite linear combinations of elements of $\mathcal D$.
An example is when $\mathcal C=C[0,1]$, the space of continuous functions on the unit interval treated as a subspace of $L^2[0,1]$. Consider the sinusoidal basis $B=\{1,\sin2\pi x, \cos 2\pi x, \dots,\sin 2\pi nx, \cos 2\pi nx, \dots\}$, and let $\mathcal D$ be the set of finite linear combinations of elements of $B$ with rational coefficients.
We observe that $\mathcal D$ is a countable set, and by the result of Fourier series, $\mathcal D$ is dense in $\mathcal C$ in $L^2$ norm.
Therefore, for $f\in \mathcal C$ and given $\varepsilon>0$, we only need to choose one element $g\in \mathcal D$ such that $\|f-g\|_2<\varepsilon$.
The example above shows that $\gamma^*(\mathcal C, \mathcal D)=\infty$ regardless of the complexity of $\mathcal C$.
This motivates us to define a new notation in order to capture the complexity of $\mathcal C$. At least, we should expect the following:
Given a fixed representation system $\mathcal D$ and two function classes $\mathcal C_1, \mathcal C_2$ such that $\mathcal C_1$ is more complicated than $\mathcal C_2$ (for example, $\mathcal C_2\subset \mathcal C_1$), the supremum rate of the pair $(\mathcal C_1,\mathcal D)$ should be no larger than that of $(\mathcal C_2, \mathcal D)$.
Moreover, we should expect that, when the function class is complicated enough, the supremum rate becomes finite.
We develop these notions in the next chapter.
\section{Effective Search}
Suppose we are performing approximation of a function class with a computer. We store a representation system in it (which could be done by defining a function depending on the index $n\in\mathbb N$).
Then, we want to implement a method to search for the best $M$-term approximation of a function $f$.
This would pose a problem for the computer because it is impossible for it to search among infinitely many terms.
A similar situation happens when we want to use a computer to approximate functions by neural networks, with the notion of best $M$-edge approximation error.
Therefore, we will sacrifice some theoretical accuracy to make the search practically feasible.
For this reason, we call these practical versions of best $M$-term approximation the \quotes{\textit{effective} best $M$-term approximation}.
\subsection{Effective best M-term approximation}
To solve the problem of searching for $M$ terms among all terms of $\mathcal D=\{\varphi_i\}_{i=1}^\infty$, we restrict the search to the first few terms.
Specifically, let $\pi$ be a polynomial with integer coefficients; we only search for the $M$ terms among $\{\varphi_1,\varphi_2,\dots,\varphi_{\pi(M)}\}$, which is called \textbf{polynomial search}.
This makes it possible for a computer to search for candidate approximations because it knows when to stop the search.
\begin{definition}\label{bestMterm}
Given $d\in\mathbb N, \Omega\subset \mathbb R^d$, a function class $\mathcal C\subset L^2(\Omega)$, and a representation system $\mathcal D=(\varphi_i)_{i\in I}\subset L^2(\Omega)$, the supremum of all $\gamma>0$ for which there exist a polynomial $\pi$ and a constant $D>0$ such that
\begin{align*}
\sup_{f\in\mathcal C}\inf_{\substack{I_M\subset \{1, \ldots,\pi(M)\}\\ \# I_M=M, (c_i)_{i\in I_M}, \max_{i\in I_M}|c_i|\leq D}} \left\lVert f-\sum_{i\in I_M}c_i\varphi_i\right\rVert_{L^2(\Omega)}\in\mathcal O(M^{-\gamma})\text{ as }M\to \infty,
\end{align*}
will be denoted by $\gamma^{*, e}(\mathcal C,\mathcal D)$ and referred to as \emph{effective best $M$-term approximation rate} of $\mathcal C$ in the representation system $\mathcal D$.
\end{definition}
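The following sketch (entirely our own toy construction) carries out the search in Definition \ref{bestMterm} in a discretized setting: it scans all size-$M$ subsets of the first $\pi(M)$ dictionary elements, fits the coefficients by least squares on a sample grid, clips them to $[-D,D]$, and keeps the smallest error; the grid norm stands in for $\|\cdot\|_{L^2(\Omega)}$.
\begin{verbatim}
import numpy as np
from itertools import combinations

def effective_best_M_term(f, dictionary, M, depth, D, grid):
    """Search over I_M inside {0, ..., depth-1} with #I_M = M and
    coefficients bounded by D; returns the best grid-L2 error."""
    Phi = np.column_stack([phi(grid) for phi in dictionary[:depth]])
    best = np.inf
    for idx in combinations(range(depth), M):
        A = Phi[:, list(idx)]
        c, *_ = np.linalg.lstsq(A, f, rcond=None)
        c = np.clip(c, -D, D)                 # bounded coefficients
        err = np.sqrt(np.mean((f - A @ c) ** 2))
        best = min(best, err)
    return best

grid = np.linspace(0.0, 1.0, 200)
monomials = [lambda x, j=j: x ** j for j in range(10)]
f = np.exp(grid)
# depth = pi(M) with, say, pi(M) = 2M; D bounds the coefficients
print(effective_best_M_term(f, monomials, M=3, depth=6, D=10.0, grid=grid))
\end{verbatim}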
Besides using polynomial depth-search, we also restrict the coefficients so that they are bounded above.
This could make the search more feasible.
In the next chapter, we will observe that $\sup_{\mathcal D}\gamma^{*,e}(\mathcal C, \mathcal D)$ is bounded above by a quantity that only depends on the function class $\mathcal C$, where the supremum is taken across all representation systems in $L^2(\Omega)$.
\subsection{Effective best M-edge approximation}
Similarly, it has been shown that any function $f\in C([0,1]^d)$ can be approximated with an arbitrary uniform error $\varepsilon>0$ by a three-layer neural network with specific settings.
If $\rho: \mathbb R\to \mathbb R$ is an activation function which is infinitely differentiable, strictly increasing, and $\lim_{x\to \infty} \rho(x)=1, \lim_{x\to -\infty} \rho(x)=0$, then we call this a \textbf{plain sigmoidal activation function}.
Maiorov and Pinkus proved the following theorem:
\begin{theorem}\cite{maiorov}
There exists a plain sigmoidal activation function $\rho$ such that for any $d\in\mathbb N$, $f\in C([0,1]^d)$ and $\varepsilon>0$, there is a neural network $\Phi\in\mathcal{NN}_{3, M, d, \rho}$ satisfying
$$\sup_{x\in[0,1]^d} |f(x)-\Phi(x)|\leq \varepsilon.$$
Moreover, the three layers after the input layer have $N_1=3d, N_2=6d+3, N_3=1$ nodes, respectively, so the connectivity is $\mathcal M(\Phi)\leq M:=21d^2+15d+3$.
\end{theorem}
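As a quick sanity check (our own arithmetic, counting the entries of the weight matrices between consecutive layers), the stated bound is exactly the number of edge weights:
$$d\cdot 3d + 3d\cdot(6d+3) + (6d+3)\cdot 1 = 3d^2 + 18d^2+9d + 6d+3 = 21d^2+15d+3.$$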
Note that this result can be extended to $C(K)$, where $K$ is any compact $n$-dimensional interval of $\mathbb R^n$.
It is surprising that three-layer neural networks suffice to approximate continuous functions on a compact domain; it is even more remarkable that the networks need no more than $21d^2+15d+3$ connectivity while achieving an arbitrarily small uniform error.
Therefore, we have $\gamma_{\mathcal{NN}}^*(\mathcal C, \rho) = +\infty$.
However, as pointed out in \cite{main}, the magnitudes of the edge and node weights in this construction cannot be bounded above by $\mathcal O(\varepsilon^{-1})$.
Therefore, it is not reasonable to expect a computer to search for such a neural network approximant, because it can only work with bounded weights.
For this reason, we define a class of neural networks that puts more restrictions on the weights.
\begin{definition}\label{effectivenn}
Let $d, L, M\in \mathbb N$, $\rho:\mathbb R\to \mathbb R$ an activation function. Let $\pi$ be a polynomial, then $\mathcal {NN}_{L,M,d,\rho}^\pi$ is the class of neural networks in $\mathcal {NN}_{L,M,d,\rho}$ where the weights are bounded above in absolute value by $|\pi(M)|$.
\end{definition}
We do not restrict the weights to be bounded above by some fixed constant; instead, we allow the bound to grow polynomially with the connectivity.
This relaxation will help us later.
Regarding the storage of information, suppose we know an integer is between $1$ and $N$ and we want to record it using a bitstring consisting of 0s and 1s.
Since a bitstring of length $\ell$ can represent $2^\ell$ different objects, we can store an integer from $[1,N]$ with a bitstring of length $\ell= \lceil \log_2N\rceil$.
Therefore, we only need to pay a small price (note that $\log \pi(M)\in\mathcal O(\log M)$) to have a wider choice of neural network approximants.
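A minimal sketch (our own bookkeeping) of this idea in the setting used later: a weight on the grid $\eta^m\mathbb Z\cap[-\eta^{-k},\eta^{-k}]$ is an integer multiple $n\eta^m$ with $|n|\leq \eta^{-(m+k)}$, so roughly $(m+k)\log_2(\eta^{-1})+1$ bits suffice to store it.
\begin{verbatim}
import math

def quantize(w, eta, m, k):
    # nearest point of eta^m * Z, clipped to [-eta^(-k), eta^(-k)]
    step, bound = eta ** m, eta ** -k
    return max(-bound, min(bound, round(w / step) * step))

def bits_per_weight(eta, m, k):
    # grid points n * eta^m with |n| <= eta^(-(m+k)):
    # 2 * n_max + 1 values, roughly (m+k) * log2(1/eta) + 1 bits
    n_max = int(round(eta ** -(m + k)))
    return math.ceil(math.log2(2 * n_max + 1))

eta = 0.25
print(quantize(0.7391, eta, m=4, k=2))  # rounded to a 0.25^4 grid point
print(bits_per_weight(eta, m=4, k=2))   # 14, close to (4+2)*2 + 1 = 13
\end{verbatim}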
A formal discussion will be provided during the proof of fundamental bound theorem in the next chapter.
Now we define the following:
\begin{definition}\label{bestMedge}
Given $d\in\mathbb N, \Omega\subset \mathbb R^d$, a function class $\mathcal C\subset L^2(\Omega)$, and an activation function $\rho:\mathbb R\to \mathbb R$, consider $\gamma>0$ where there exist $L\in\mathbb N$ and a polynomial $\pi$ such that
$$\sup_{f\in\mathcal C}\inf_{\Phi_M \in \mathcal{NN}_{L,M,d,\rho}^\pi} \lVert f-\Phi_M\rVert_{L^2(\Omega)} \in\mathcal O(M^{-\gamma}), \quad M\to \infty.$$
The supremum of all $\gamma>0$ such that the above is satisfied by some $L$ and $\pi$ will be denoted as $\gamma_{\mathcal {NN}}^{*, e}(\mathcal C, \rho)$, referred to as the \emph{effective best $M$-edge approximation rate} by neural networks.
\end{definition}
In the next chapter, we will prove that $\sup_{\rho}\gamma_{\mathcal {NN}}^{*,e}(\mathcal C, \rho)$ is bounded above by a quantity that only depends on $\mathcal C$. Together with the last remark below Definition \ref{bestMterm}, these two results are called the fundamental bound theorem.
\section{Fundamental Bounds of Effective Approximation Rates}
After imposing the restrictions in Definitions \ref{bestMterm} and \ref{bestMedge}, we can expect that $\gamma^{*,e}(\mathcal C, \mathcal D)$ and $\gamma_{\mathcal {NN}}^{*,e}(\mathcal C,\rho)$ are no longer infinite.
In fact, there is a universal quantity depending only on the compact function class $\mathcal C$ that forms the fundamental bound of the other two approximation rates.
In words, the fundamental bound means that there is a method of approximation, depending only on the function class $\mathcal C$ itself, that performs at least as well as approximation by neural networks and by representation systems \cite{main}.
We will state the theorems here for the sake of completeness, but only prove the ones related to neural networks.
Now we introduce the min-max rate distortion theory and use it to define the previously described fundamental bound of a function class.
Let $d$ be a positive integer and $\Omega$ a subset of $\mathbb R^d$, and consider function class $\mathcal C\subset L^2(\Omega)$.
The discussion revolves around the encode-decode process for bitstrings, where a bitstring is a string of finite length consisting only of the digits 0 and 1.
Fix $\varepsilon>0$.
We choose a positive integer $\ell$ such that every $f\in\mathcal C$ can be encoded using a bitstring of length $\ell$, in a way that after decoding the bitstring, the recovered function $\widetilde f$ satisfies $\|f-\widetilde f\|_{L^2(\Omega)}\leq\varepsilon$.
We emphasize that the length $\ell$ of the bitstring should be independent of the choice of $f$.
To save computing resources, we are interested in the minimum of the bitstring length $\ell$ so that every function can be encoded into a bitstring as short as possible, while keeping the distortion error to be $\|f-\widetilde f\|_{L^2(\Omega)}\leq \varepsilon$.
Given a positive integer $\ell$, we define the set of binary encoders and the set of binary decoders to be
$$\mathfrak E^\ell:=\left\{E:\mathcal C\to \{0,1\}^\ell\right\}, \quad \mathfrak D^\ell:=\left\{D:\{0,1\}^\ell\to L^2(\Omega)\right\}.$$
An encoder-decoder pair $(E,D)\in \mathfrak E^\ell\times\mathfrak D^\ell$ is said to achieve uniform error $\varepsilon$ over the function class $\mathcal C$ if
$$\sup_{f\in\mathcal C} \lVert D(E(f))-f\rVert_{L^2(\Omega)}\leq \varepsilon.$$
These motivate the definition of the optimal exponent of the function class, which is the fundamental bound.
\begin{definition}\label{optimalrate}
Let $d\in\mathbb N, \Omega\subset \mathbb R^d,$ and $\mathcal C\subset L^2(\Omega)$. Then, for $\varepsilon>0$, the \emph{minimax code length} $L(\varepsilon,\mathcal C)$ is
$$L(\varepsilon,\mathcal C):=\min\left\{\ell\in \mathbb N: \exists (E,D)\in\mathfrak E^\ell \times \mathfrak D^\ell: \sup_{f\in\mathcal C}\lVert D(E(f))-f\rVert_{L^2(\Omega)}\leq \varepsilon\right\}.$$
Moreover, the \emph{optimal exponent} $\gamma^*(\mathcal C)$ is defined as
$$\gamma^*(\mathcal C):=\sup \left\{\gamma\in\mathbb R: L(\varepsilon,\mathcal C)\in\mathcal O\left(\varepsilon^{-\frac1\gamma}\right), \varepsilon\to 0\right\}.$$
\end{definition}
If $\gamma^*(\mathcal C)$ is large, then $L(\varepsilon,\mathcal C)$ will be small, hence better. If $\gamma^*(\mathcal C)=+\infty$ then $L(\varepsilon,\mathcal C)$ is bounded above as $\varepsilon\to 0$, which means a fixed minimax code length is enough for arbitrary $\varepsilon$.
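As a toy illustration (our own, not from the paper), take $\mathcal C$ to be the class of constant functions $f\equiv c$ on $\Omega=[0,1]$ with $c\in[0,1]$. Uniformly quantizing $c$ gives distortion at most $2^{-\ell-1}$ with $\ell$ bits, so $L(\varepsilon,\mathcal C)\approx\log_2(\varepsilon^{-1})$, which grows slower than any power $\varepsilon^{-1/\gamma}$; hence $\gamma^*(\mathcal C)=\infty$ for this very simple class.
\begin{verbatim}
import math

def encode(c, ell):
    # uniform quantizer on [0, 1]: index of the cell containing c
    return min(int(c * 2 ** ell), 2 ** ell - 1)

def decode(bits, ell):
    # cell midpoint, so the distortion is at most 2^(-ell-1)
    return (bits + 0.5) / 2 ** ell

eps = 1e-3
ell = math.ceil(math.log2(1 / eps)) - 1   # makes 2^(-ell-1) <= eps
c = 0.7391
print(abs(c - decode(encode(c, ell), ell)) <= eps)   # True
\end{verbatim}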
As noted in the remark below Definition \ref{bestMterm}, it can be shown that $\gamma^{*,e}(\mathcal C, \mathcal D)\leq \gamma^*(\mathcal C)$ for every representation system $\mathcal D$ contained in $L^2(\Omega)$. Indeed, this is a result due to \cite{donoho} and Lemma 5.26 in \cite{grohs}:
\begin{theorem}\label{effboundforD}
Let $d\in\mathbb N, \Omega\subset \mathbb R^d,$ and $\mathcal C\subset L^2(\Omega)$, and assume that the effective best $M$-term approximation rate of $\mathcal C$ in $\mathcal D\subset L^2(\Omega)$ is $\gamma^{*,e}(\mathcal C,\mathcal D)$. Then, we have
$$\gamma^{*,e}(\mathcal C,\mathcal D)\leq \gamma^*(\mathcal C)$$
for every representation system $\mathcal D\subset L^2(\Omega)$.
\end{theorem}
A similar result holds for the approximation rate $\gamma_{\mathcal{NN}}^{*,e}(\mathcal C, \rho)$. We illustrate some of the proofs here, which consist of three intermediate lemmas.
\begin{theorem}\label{weightnotbounded}\cite{main}
Let $d\in\mathbb N, \Omega\subset \mathbb R^d, \rho:\mathbb R\to \mathbb R, c>0,$ and $\mathcal C\subset L^2(\Omega)$. Further, let
$$\text{\textbf{Learn}}:\left(0,\frac12\right)\times \mathcal C\to \mathcal{NN}_{\infty,\infty,d,\rho}$$
be a map such that for each pair $(\varepsilon,f)\in (0,1/2)\times \mathcal C$, every weight of the neural network $\text{\textbf{Learn}}(\varepsilon,f)$ is represented by a bitstring with length no more than $\lceil c\log_2(\varepsilon^{-1})\rceil$, while guaranteeing that
$$\sup_{f\in \mathcal C}\lVert f-\text{\textbf{Learn}}(\varepsilon,f)\rVert_{L^2(\Omega)}\leq \varepsilon.$$
Then
$$\sup_{f\in \mathcal C}\mathcal M(\text{\textbf{Learn}}(\varepsilon,f))\notin \mathcal O\left(\varepsilon^{-\frac1\gamma}\right), \varepsilon\to 0, \quad\text{ for all }\gamma>\gamma^*(\mathcal C).$$
\end{theorem}
Note that we demand discrete weights in neural networks. The weights need not be natural numbers, but they must be discrete, e.g., elements of $(1/3^{100})\mathbb Z$, or of the form $\{x_1\cdot 10^{x_2}: (x_1,x_2)\in \mathbb Z\times \mathbb Z\}$.
In the further discussion we only need the first type (an equally spaced grid), as it suffices for the approximation results.
A proof by contradiction for this theorem can be found in Proposition 3.6 of \cite{main}.
A simple way to explain this theorem is to note that if $\gamma$ is a real number larger than the optimal exponent of $\mathcal C$, then we need more than $\mathcal O(\varepsilon^{-1/\gamma})$ connectivity for a neural network to approximate a function with error $\leq \varepsilon$.
It is worth noting that if $M>C\varepsilon^{-1/\gamma}$, where $M$ denotes the connectivity of the neural network and $C$ is a constant, then $\varepsilon>C^\gamma M^{-\gamma}$.
Thus we can catch a glimpse of the reason why $\|f-\Phi_M\|_{L^2(\Omega)}$ fails to be of order $\mathcal O(M^{-\gamma})$ in the sense of Definition \ref{bestMedge}.
This idea, although not rigorous now, serves as an important point for understanding the fundamental bound theorem.
Observe that Theorem \ref{weightnotbounded} only works for neural networks with discretized weights. To extend its use to a more general type of neural networks, we have to establish some kind of approximation by neural networks with discrete weights.
Given a neural network whose weights vary continuously but are bounded above, we can choose a small number $\delta>0$ and another neural network with the same structure, tweaking each weight a little, by at most $\delta$, so that its weights are discretized.
The choice of activation function turns out to be crucial if we want the two neural networks to have approximately the same outputs.
\begin{definition}
An activation function $\rho:\mathbb R\to \mathbb R$ is \emph{acceptable} if
\begin{enumerate}
\item it is Lipschitz-continuous, or
\item its derivative is dominated by a polynomial, which means $\rho$ is differentiable and there exists a polynomial $q(x)$ such that $|\rho'(x)|\leq |q(x)|$ on $\mathbb R$.
\end{enumerate}
\end{definition}
The above two settings are both satisfied by the ReLU function $\rho(x)=\max\{0,x\}$ and the sigmoid function $\rho(x)=(1+e^{-x})^{-1}$, to name but a few.
These settings are imposed so that two neural networks with approximately the same weights produce outputs that differ by a small error.
The proof below illustrates this idea.
\begin{theorem}\label{discrete}
Let $d,L,k,M\in\mathbb N$, $\eta\in(0,1/2)$, $\Omega\subset \mathbb R^d$ be bounded, and let $\rho:\mathbb R\to \mathbb R$ be an acceptable activation function. Then, there exists a positive integer $m:= m(k,L,\rho)$ such that if $\Phi\in\mathcal{NN}_{L,M,d,\rho}$ is a neural network with connectivity and all its weights bounded (in absolute value) by $\eta^{-k}$, then there is $\widetilde{\Phi}\in\mathcal{NN}_{L,M,d,\rho}$ such that
$$\left\lVert \widetilde{\Phi}-\Phi\right\rVert_{L^\infty(\Omega)}\leq \eta$$
and all weights of $\widetilde{\Phi}$ are elements of $\eta^m \mathbb Z\cap [-\eta^{-k}, \eta^{-k}]$.
\end{theorem}
The proof for Lipschitz-continuous $\rho$ has been given in \cite[p.~21]{main}. Thus, we provide a proof assuming $\rho'$ is dominated by a polynomial $\pi(x)$ on $\mathbb R$, for the sake of completeness.
\begin{proof}
Without loss of generality, we assume the number of nonzero node weights is upper-bounded by the number of nonzero edge weights; otherwise there is a node that is not connected by a nonzero edge weight to the next layer, so we can replace its node weight by zero, resulting in the same neural network.
Let $m$ be a positive integer to be specified later. We choose $\widetilde{\Phi}$ to be a neural network having the same structure as $\Phi$, but with each weight replaced by a nearest element of $\eta^m\mathbb Z\cap [-\eta^{-k}, \eta^{-k}]$, so the change in each weight is no more than $\eta^m/2$.
Moreover, for $x\in \eta^m\mathbb Z\cap [-\eta^{-k},\eta^{-k}]$, it can be expressed as $x=\eta^mn$, $n\in\mathbb Z$ with $|n|\leq \eta^{-m-k}$, hence each weight in $\widetilde{\Phi}$ can be represented using no more than $(k+m)\log_2\eta^{-1}+1$ bits (the extra bit is for the sign).
Let $C_{max}=\eta^{-k}$.
Let $C_W$ be the maximum of 1 and the total number of nonzero edge weights and nonzero node weights of $\Phi$, which satisfies $C_W\leq 2M\leq 2\eta^{-k}$.
For $\ell=1,\dots,L-1$ define $\Phi^\ell:\Omega\to \mathbb R^{N_\ell}$ as
$$\Phi^\ell(x):=\rho(W_\ell\rho(\dots\rho(W_1(x))))\quad\text{ for }x\in\Omega,$$
and $\widetilde{\Phi}^\ell$ accordingly.
Note that $\Phi$ is not equal to $\Phi^L$ if similarly defined, as the last layer will not pass through the activation function.
For $\ell=1,\dots,L-1$ we also let
$$e_\ell = \left\lVert\Phi^\ell-\widetilde{\Phi}^\ell\right\rVert_{L^\infty(\Omega, \mathbb R^{N_\ell})}, \quad e_L=\left\lVert\Phi-\widetilde{\Phi}\right\rVert_{L^\infty(\Omega)}.$$
Since $\Omega$ is bounded, we let $C_0$ be the maximum of 1 and $\sup\{|x|; x\in\Omega\}$. Let
$$C_\ell=\max\left\{1,\left\lVert\Phi^\ell\right\rVert_{L^\infty(\Omega,\mathbb R^{N_\ell})}, \left\lVert\widetilde{\Phi}^\ell\right\rVert_{L^\infty(\Omega,\mathbb R^{N_\ell})}\right\}\quad\text{ for }\ell=1,\dots,L-1.$$
Now we use $W_\ell, \widetilde{W_\ell}$ to denote the affine operators of $\Phi^\ell, \widetilde{\Phi}^\ell$, respectively. Since $\Omega$ is bounded, it is contained in some compact ball $B\subset \mathbb R^d$ of radius $r$ centered at the origin. Since $W_1$ and $\widetilde{W_1}$ are continuous, both $W_1(\Omega)$ and $\widetilde{W_1}(\Omega)$ are also contained in some compact set, hence bounded. Using the same argument inductively, and noticing that $\rho$ is also continuous, we see that $$\Omega, W_1(\Omega), \widetilde{W_1}(\Omega), W_{\ell+1}(\Phi^\ell(\Omega)), \widetilde{W_{\ell+1}}(\widetilde\Phi^\ell(\Omega)),\Phi(\Omega), \widetilde{\Phi}({\Omega}) $$
are all bounded in their respective space. Thus we can choose $R>0$ depends on $\eta$ and $k$ so that all sets above are contained in the closed ball of radius $R$ in their respective space.
Note that for $x,y\in \mathbb R$ with $|x|,|y|\leq R$, there is $c$ between $x$ and $y$ such that $|\rho(x)-\rho(y)|=|\rho'(c)(x-y)|\leq |\pi(c)||x-y|\leq AR^n|x-y|$, where $n$ is the degree of $\pi$ and $A>0$ is a constant such that $|\pi(c)|\leq AR^n$ for $|c|\leq R$ (we may assume $R\geq 1$). Hence it is reasonable to define $C_\rho=\max\{1, AR^n\}$ as a substitute for the corresponding quantity of the Lipschitz-continuous activation function in the original paper. From here we will prove the estimates:
\begin{equation}\label{mainE}e_1\leq C_0C_\rho C_W\eta^m, \text{ and }e_\ell\leq C_\rho C_WC_{\ell-1}\eta^m+C_\rho C_WC_{max}e_{\ell-1}.\end{equation}
We proceed by induction to prove (\ref{mainE}). For the estimate of $e_1$, write $W_1(x)=B_1x+b_1$ and similarly for $\widetilde{W_1}(x)$; then the number of nonzero entries in the column vectors $b_1,\widetilde{b_1}$ is upper-bounded by $C_W$ (recall $C_W$ is the maximum of 1 and the total number of nonzero edge weights and node weights). Hence
\begin{align*}
e_1&=\left\lVert \Phi^1-\widetilde{\Phi}^1\right\rVert_{L^\infty(\Omega,\mathbb R^{N_1})}\\
&=\left\lVert \rho(W_1(x))-\rho(\widetilde{W_1}(x))\right\rVert_{L^\infty}\\
&\leq \left\lVert\rho(B_1x+b_1)-\rho(\widetilde{B_1}x+\widetilde{b_1})\right\rVert_{L^\infty} \\
&\leq \sup_{|x|\leq R}|\rho'(x)| \|(B_1-\widetilde{B_1})x+(b_1-\widetilde{b_1})\|_{L^\infty}\\
&\leq \sup_{|x|\leq R}|\rho'(x)| \left(\frac{\eta^m}2\cdot C_WC_0+\frac{\eta^m}2\right)\\
&\leq C_0C_WC_\rho\eta^m.
\end{align*}
This proves the first part of (\ref{mainE}). The estimate above separates the errors for $B_1-\widetilde {B_1}$ and $b_1-\widetilde{b_1}$; the term $\eta^m/2$ is the maximum difference between the entries of $B_1,b_1$ and $\widetilde{B_1}, \widetilde{b_1}$, respectively.
In the estimates below we will not separate the matrix and vector parts, as the calculation is analogous. Next, we have
\begin{align*}
e_2&=\left\lVert\Phi^2-\widetilde{\Phi}^2\right\rVert_{L^\infty}\\
&=\left\lVert\rho(W_2(\Phi^1(x))) -\rho(\widetilde{W_2}(\widetilde{\Phi}^1(x)))\right\rVert_{L^\infty}\\
&\leq \left\lVert\rho(W_2(\Phi^1(x))) -\rho(\widetilde{W_2}(\Phi^1(x)))\right\rVert_{L^\infty}+\left\lVert\rho(\widetilde{W_2}(\Phi^1(x)))-\rho(\widetilde{W_2}(\widetilde{\Phi}^1(x)))\right\rVert_{L^\infty}\\
&\leq C_\rho\left\{ \left(C_W\cdot\dfrac{\eta^m}2\right)\cdot C_1 + C_W\cdot \dfrac{\eta^m}2\right\}+C_\rho C_W C_{max} e_1\\
&\leq C_\rho C_W C_1 \eta^m+C_\rho C_WC_{\max}e_1.
\end{align*}
This agrees with the formula when $\ell=2$. Assume the formula is true for some $\ell$, then
\begin{footnotesize}
\begin{align*}
e_{\ell+1}&=\left\lVert\Phi^{\ell+1}-\widetilde{\Phi}^{\ell+1}\right\rVert_{L^\infty}\\
&=\left\lVert\rho(W_{\ell+1}(\Phi^\ell(x))) -\rho(\widetilde{W_{\ell+1}}(\widetilde{\Phi}^\ell(x)))\right\rVert_{L^\infty}\\
&\leq \left\lVert\rho(W_{\ell+1}(\Phi^\ell(x))) -\rho(\widetilde{W_{\ell+1}}(\Phi^\ell(x)))\right\rVert_{L^\infty}+\left\lVert\rho(\widetilde{W_{\ell+1}}(\Phi^\ell(x))) -\rho(\widetilde{W_{\ell+1}}(\widetilde{\Phi}^\ell(x)))\right\rVert_{L^\infty}\\
&\leq C_\rho\left\{ \left(C_W\cdot\dfrac{\eta^m}2\right)\cdot C_\ell + C_W\cdot \dfrac{\eta^m}2\right\} + C_\rho C_W C_{max} e_\ell \\
&\leq C_\rho C_W C_\ell \eta^m +C_\rho C_W C_{max} e_\ell.
\end{align*}
\end{footnotesize}
At this point, (\ref{mainE}) has been proved by induction.
We also have
\begin{align*}
e_L&=\left\lVert\Phi-\widetilde{\Phi}\right\rVert_{L^\infty}\\
&=\left\lVert W_L(\Phi^{L-1}(x))-\widetilde{W_L}(\widetilde{\Phi}^{L-1}(x))\right\rVert_{L^\infty}\\
&\leq \left\lVert W_L(\Phi^{L-1}(x))-\widetilde{W_L}(\Phi^{L-1})\right\rVert_{L^\infty}+\left\lVert \widetilde{W_L}(\Phi^{L-1}(x))-\widetilde{W_L}(\widetilde{\Phi}^{L-1})\right\rVert_{L^\infty}\\
&\leq C_W\dfrac{\eta^m}2 C_{L-1}+C_W\dfrac{\eta^m}2+C_{max}C_We_{L-1}\\
&\leq C_WC_{L-1}\eta^m +C_WC_{max}e_{L-1}.
\end{align*}
For $\ell=1,\dots,L-1$, $0$ below is the zero vector in $\mathbb R^{N_\ell}$:
\begin{align*}
\left\lVert\Phi^\ell(x)\right\rVert_{L^\infty(\Omega,\mathbb R^{N_\ell})}&\leq \left\lVert\Phi^\ell(x)-\rho(0)\right\rVert_{L^\infty}+\left\lVert\rho(0)\right\rVert_{L^\infty}\\
&=|\rho(0)|+\left\lVert\rho(W_\ell\rho(\dots \rho (W_1(x))))-\rho(0)\right\rVert_{L^\infty}\\
&\leq |\rho(0)|+C_\rho C_WC_{max} C_{\ell-1},
\end{align*}
and a similar bound holds for $\left\lVert\widetilde{\Phi}^\ell(x)\right\rVert_{L^\infty(\Omega, \mathbb R^{N_\ell})}$, thus
$$C_\ell\leq |\rho(0)|+C_\rho C_W C_{max} C_{\ell-1}.$$
Starting from $C_1\leq |\rho(0)|+C_\rho C_W C_{max}C_0$, we can derive
$$C_\ell\leq |\rho(0)|\sum_{k=0}^{\ell-1} (C_\rho C_W C_{max})^k + C_0(C_\rho C_W C_{max})^\ell\quad\text{ for }\ell=1,\dots,L-1.$$
Then since $C_\rho C_W C_{max}\geq 1$ and $\rho(0)$ is finite, there is a fixed constant $C'>0$ such that
$$C_\ell\leq C'C_0(C_\rho C_W C_{max})^\ell\quad\text{ for }\ell=1,\dots,L-1.$$
Now since $C_W\leq 2M\leq \eta^{-k-1}$ (recall $\eta<1/2$) and $C_{max}\leq \eta^{-k}$, we can choose a fixed $p\in \mathbb N$ such that
$$C_\ell \leq C'C_0 C_\rho^\ell \eta^{-\ell (2k+1)}\leq \eta^{-p}\quad\text{ for }\ell=1,\dots,L-1.$$
To make things clear, we then choose $n\in\mathbb N$ such that
$$\max\{C_0C_\rho C_W, C_WC_{max},C_WC_{L-1}, C_\rho C_W C_{\ell-1}, C_\rho C_W C_{max}\}\leq \dfrac{\eta^{-n}}2.$$
Using previous estimates we deduce
$$e_\ell \leq \dfrac{\eta^{-n}}2(\eta^m+e_{\ell-1})\quad\text{ for }\ell=1,\dots,L-1$$
with the convention that $e_0=0$. Furthermore, we use induction to prove that there is an $r\in\mathbb N$ such that $e_\ell\leq \eta^{m-(\ell-1)n-r}$ for $\ell=1,\dots,L-1$ (in fact taking $r=n$ suffices). First we have $e_1\leq \frac12 \eta^{m-n}\leq \eta^{m-r}$. Assume $e_\ell\leq \eta^{m-(\ell-1)n-r}$ holds for some $\ell$; then
$$e_{\ell+1}\leq \dfrac{\eta^{-n}}2(\eta^{m}+\eta^{m-(\ell-1)n-r})\leq \frac12 (\eta^{m-n}+\eta^{m-\ell n-r})\leq \eta^{m-\ell n-r}.$$
Using the previous result, we arrive at
$$e_L\leq \dfrac12(\eta^{m-n}+\eta^{m-(L-1)n-r})\leq \eta^{m-(L-1)n-r},$$
thus taking $m=(L-1)n+r+1=Ln+1$ will be enough to show that $e_L = \lVert \Phi-\widetilde{\Phi}\rVert_{L^\infty}\leq \eta$.
\end{proof}
The theorem above extends easily to the $L^2$ norm. At the end of the proof we can choose $m$ slightly larger so that $e_L=\| \Phi-\widetilde\Phi\|_{L^\infty}\leq \eta/\sqrt{|\Omega|}$, where $|\Omega|$ denotes the Lebesgue measure of the bounded set $\Omega$; then
\begin{align*}
\|\Phi-\widetilde\Phi\|_{L^2}&=\left(\int_{\Omega}(\Phi-\widetilde{\Phi})^2 dx\right)^{1/2}\\
&\leq \|\Phi-\widetilde\Phi\|_{L^\infty} \left(\int_\Omega dx\right)^{1/2}\\
&\leq \dfrac{\eta}{\sqrt{|\Omega|}}\cdot \sqrt{|\Omega|}\\
&=\eta.
\end{align*}
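A small numeric experiment (our own, with hypothetical layer sizes) illustrates the statement of Theorem \ref{discrete}: rounding every weight of a network to the grid $\eta^m\mathbb Z$ changes the output only slightly, as measured in sup norm over sample points.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
A1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
A2, b2 = rng.standard_normal((1, 8)), rng.standard_normal(1)
relu = lambda z: np.maximum(0.0, z)
phi = lambda x: A2 @ relu(A1 @ x + b1) + b2

eta, m = 0.5, 12                          # grid eta^m * Z, step ~ 2.4e-4
q = lambda W: np.round(W / eta ** m) * eta ** m
phi_q = lambda x: q(A2) @ relu(q(A1) @ x + q(b1)) + q(b2)

xs = rng.uniform(-1.0, 1.0, size=(1000, 2))
err = max(abs((phi(x) - phi_q(x)).item()) for x in xs)
print(err)   # sup-norm deviation over the samples; tiny versus the step
\end{verbatim}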
Next, we state a converse of Theorem \ref{weightnotbounded}: if the weights are only polynomially bounded in the connectivity $M$, then for any $\gamma>\gamma^*(\mathcal C)$ the worst-case approximation error cannot stay below $CM^{-\gamma}$ for all large $M$. Formally, we have the theorem below.
\begin{theorem}\label{conversenotbounded}\cite{main}
Let $d,L\in\mathbb N, \Omega\subset \mathbb R^d$ be bounded, $\pi$ a polynomial, $\mathcal C\subset L^2(\Omega)$, and $\rho:\mathbb R\to \mathbb R$ either Lipschitz-continuous or differentiable such that $\rho'$ is dominated by a polynomial. Then, for all $C>0$ and $\gamma>\gamma^*(\mathcal C)$, we have that
$$\sup_{f\in\mathcal C}\inf_{\Phi\in\mathcal{NN}_{L,M,d,\rho}^\pi} \|f-\Phi\|_{L^2(\Omega)}\geq CM^{-\gamma}\quad\text{ for infinitely many }M\in \mathbb N. $$
\end{theorem}
Now we prove the fundamental bound theorem for neural networks.
\begin{theorem}\label{funbound}\cite{main}
Let $d\in \mathbb N, \Omega\subset \mathbb R^d$ be bounded, and let $\mathcal C\subset L^2(\Omega)$. Then, for any acceptable activation function $\rho:\mathbb R\to \mathbb R$, we have
$$\gamma_{\mathcal {NN}}^{*,e}(\mathcal C,\rho)\leq \gamma^*(\mathcal C).$$
\end{theorem}
\begin{proof}
Assume to the contrary that $\gamma_{\mathcal {NN}}^{*,e}(\mathcal C,\rho)>\gamma^*(\mathcal C)$. Then for any $\gamma\in(\gamma^*(\mathcal C), \gamma_{\mathcal {NN}}^{*,e}(\mathcal C,\rho))$, there exist $L\in\mathbb N$ and a polynomial $\pi$ such that
$$\sup_{f\in\mathcal C}\inf_{\Phi_M \in \mathcal{NN}_{L,M,d,\rho}^\pi} \lVert f-\Phi_M\rVert_{L^2(\Omega)} \in\mathcal O(M^{-\gamma}), \quad M\to \infty.$$
However, this result contradicts Theorem \ref{conversenotbounded}.
\end{proof}
\section{Transition from representation system to neural networks}
Before continuing, we should make a few more definitions to clarify the objective of the discussion.
We already know that $\gamma_{\mathcal {NN}}^{*,e}(\mathcal C,\rho)\leq \gamma^*(\mathcal C)$, where the latter quantity depends only on $\mathcal C$, while the former depends on the neural networks as well.
We want to choose $\rho$ so that $\gamma_{\mathcal {NN}}^{*,e}(\mathcal C,\rho)$ is as large as possible, but it cannot exceed $\gamma^*(\mathcal C)$.
Therefore, we introduce a terminology for when the two quantities are equal:
\begin{definition}
Let $d\in\mathbb N, \Omega\subset \mathbb R^d$ be bounded, we say that the function class $\mathcal C\subset L^2(\Omega)$ is \emph{optimally representable by neural networks} with activation function $\rho:\mathbb R\to \mathbb R$ if
$$\gamma_{\mathcal {NN}}^{*,e}(\mathcal C,\rho)= \gamma^*(\mathcal C).$$
\end{definition}
From the context of Theorem \ref{effboundforD}, we have similar terminology:
\begin{definition}
Let $d\in \mathbb N, \Omega\subset \mathbb R^d$, and assume that the effective best $M$-term approximation rate of $\mathcal C\subset L^2(\Omega)$ in $\mathcal D\subset L^2(\Omega)$ is $\gamma^{*,e}(\mathcal C,\mathcal D)$. If
$$\gamma^{*,e}(\mathcal C,\mathcal D)=\gamma^*(\mathcal C),$$
then the function class $\mathcal C$ is said to be \emph{optimally representable by representation system} $\mathcal D$.
\end{definition}
In this section, we aim to establish a similar connection between representation systems and neural networks.
We treat the neural networks as a new subject and transfer useful properties of representation systems to them, since approximation by representation systems is more widely studied.
\begin{definition}\label{transition}
Let $d\in\mathbb N, \Omega\subset \mathbb R^d, \rho:\mathbb R\to \mathbb R$, and $\mathcal D=(\varphi_i)_{i\in I}\subset L^2(\Omega)$ be a representation system. Then, $\mathcal D$ is said to be \emph{representable by neural networks} (with activation function $\rho$) if there exist $L,R\in\mathbb N$ such that for all $\eta>0$ and every $i\in I$, there is a neural network $\Phi_{i,\eta}\in\mathcal {NN}_{L,R,d,\rho}$ with
$$\|\varphi_i-\Phi_{i,\eta}\|_{L^2(\Omega)}\leq \eta.$$
If in addition, the weights of $\Phi_{i,\eta}\in\mathcal{NN}_{L,R,d,\rho}$ are bounded above by $Ai^n\eta^{-n}$ for some $A>0, n\in\mathbb N$, and if $\rho$ is acceptable, then we say that $\mathcal D$ is \emph{effectively representable by neural networks} (with activation function $\rho$).
\end{definition}
Note that we use $R$ instead of $M$ to denote connectivity. This is to be combined with the best $M$-term approximation from $\mathcal D$. Suppose $f$ is a function from the function class $\mathcal C$ and we use a best $M$-term approximation $\sum_{i\in I_M}c_i\varphi_i$ with $\varphi_i\in\mathcal D$ to approximate $f$.
Then, since each term from $\mathcal D$ can be approximated by a neural network of connectivity $R$, we expect that $f$ can be approximated by a composite neural network with total connectivity of order $RM$.
Formally, we have the theorem below. The proof can be found in Theorem 4.2 from \cite{main}.
\begin{theorem}\label{simpletransfer}
Let $d\in\mathbb N, \Omega\subset \mathbb R^d,$ and $\rho:\mathbb R\to \mathbb R$. Suppose that $\mathcal D=(\varphi_i)_{i\in I}\subset L^2(\Omega)$ is representable by neural networks. Let $f\in L^2(\Omega)$. For $M\in\mathbb N,$ let $f_M=\sum_{i\in I_M}c_i\varphi_i$, $I_M\subset I, \# I_M=M$, satisfying
$$\|f-f_M\|_{L^2(\Omega)}\leq \varepsilon,$$
where $\varepsilon\in(0,1/2)$. Then, there exist $L\in\mathbb N$ (depending on $\mathcal D$ only) and a neural network $\Phi(f,M)\in\mathcal {NN}_{L,M',d,\rho}$ with $M'\in\mathcal O(M)$ satisfying
$$\|f-\Phi(f,M)\|_{L^2(\Omega)}\leq 2\varepsilon.$$
In particular, for all function classes $\mathcal C\subset L^2(\Omega)$, it holds that
$$\gamma_{\mathcal {NN}}^*(\mathcal C,\rho)\geq \gamma^*(\mathcal C,\mathcal D).$$
\end{theorem}
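The following sketch (our own construction, under the assumption that all sub-networks share the same depth and have scalar outputs) shows the parallelization idea behind such transfer results: $M$ sub-networks $\Phi_i:\mathbb R^d\to\mathbb R$ are stacked into one network computing $\sum_i c_i\Phi_i(x)$ via block-diagonal weight matrices, so the connectivities simply add up.
\begin{verbatim}
import numpy as np

def block_diag(blocks):
    rows = sum(B.shape[0] for B in blocks)
    cols = sum(B.shape[1] for B in blocks)
    out = np.zeros((rows, cols))
    r = c = 0
    for B in blocks:
        out[r:r + B.shape[0], c:c + B.shape[1]] = B
        r, c = r + B.shape[0], c + B.shape[1]
    return out

def parallel_sum(subnets, coeffs):
    """Combine depth-L subnets (lists of (A, b) pairs, scalar output)
    into one network computing sum_i c_i * Phi_i(x)."""
    L = len(subnets[0])
    layers = []
    for ell in range(L - 1):
        As = [net[ell][0] for net in subnets]
        bs = [net[ell][1] for net in subnets]
        A = np.vstack(As) if ell == 0 else block_diag(As)
        layers.append((A, np.concatenate(bs)))
    # last layer: concatenate the output rows, scaled by the c_i
    A = np.hstack([c * net[-1][0] for c, net in zip(coeffs, subnets)])
    b = sum(c * net[-1][1] for c, net in zip(coeffs, subnets))
    layers.append((A, b))
    return layers

relu = lambda z: np.maximum(0.0, z)
def run(layers, x):
    for A, b in layers[:-1]:
        x = relu(A @ x + b)
    A, b = layers[-1]
    return A @ x + b

rng = np.random.default_rng(2)
nets = [[(rng.standard_normal((3, 2)), rng.standard_normal(3)),
         (rng.standard_normal((1, 3)), rng.standard_normal(1))]
        for _ in range(4)]
cs = [0.5, -1.0, 2.0, 0.25]
x = np.array([0.3, -0.7])
direct = sum(c * run(net, x) for c, net in zip(cs, nets))
print(np.allclose(direct, run(parallel_sum(nets, cs), x)))   # True
\end{verbatim}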
We now establish a similar result for the effective best $M$-term/edge approximation rates.
\begin{theorem}\cite{main}
Let $d\in\mathbb N, \Omega \subset \mathbb R^d$ be bounded, $\rho$ an acceptable activation function, and let $\mathcal C\subset L^2(\Omega)$. Suppose that the representation system $\mathcal D=(\varphi_i)_{i\in\mathbb N}\subset L^2(\Omega)$ is effectively representable by neural networks. Then, for all $\gamma<\gamma^{*,e}(\mathcal C,\mathcal D)$, there exist a polynomial $\pi$, constants $c>0, L\in\mathbb N$, and a map
$$\textbf{Learn}:\left(0,\frac12\right)\times \mathcal C\to \mathcal{NN}_{L,\infty,d,\rho}^\pi,$$ such that for every $f\in\mathcal C$ the weights in $\textbf{Learn}(\varepsilon,f)$ can be represented by no more than $\lceil c\log_2(\varepsilon^{-1})\rceil$ bits while $\|f-\textbf{Learn}(\varepsilon,f)\|_{L^2(\Omega)}\leq \varepsilon$ and $\mathcal M(\textbf{Learn}(\varepsilon,f))\in\mathcal O(\varepsilon^{-1/\gamma})$ for $\varepsilon\to 0$.
In particular, we have $\gamma_{\mathcal{NN}}^{*,e}(\mathcal C, \rho)\geq \gamma^{*,e}(\mathcal C, \mathcal D)$.
\end{theorem}
\begin{proof}
Fix $\gamma<\gamma^{*,e}(\mathcal C,\mathcal D)$, let $M\in\mathbb N$. By the definition of effective best $M$-term approximation rate, there is a polynomial $\pi$, constants $C,D>0$, and $I_M\subset \{1,2,\dots,\pi(M)\}$, $\#I_M=M$, with coefficients $\max_{i\in I_M}|c_i|\leq D$ such that
$$\left\|f-\sum_{i\in I_M}c_i\varphi_i\right\|_{L^2(\Omega)}\leq \dfrac{CM^{-\gamma}}2=\dfrac{\delta_M}2,$$
where we let $\delta_M=CM^{-\gamma}$. By Definition \ref{transition} of effective representability of $\mathcal D$ by neural networks, there are $L,R\in\mathbb N$ such that for each $i\in I_M$ with $\eta:=\delta_M/\max\{1,4\sum_{i\in I_M}|c_i|\}$, there is a neural network $\Phi_{i,\eta}\in\mathcal {NN}_{L,R,d,\rho}$ satisfying
$$\| \varphi_i-\Phi_{i,\eta}\|_{L^2(\Omega)}\leq \eta$$
with the weights of $\Phi_{i,\eta}$ bounded above by $A|i\eta^{-1}|^n$ for some constants $A>0, n\in\mathbb N$. We now define $\Phi(f,M)\in \mathcal{NN}_{L,RM,d,\rho}$ as the result of the $\Phi_{i,\eta}, i\in I_M$, operating in parallel, with the outputs combined using the coefficients $\{c_i\}_{i\in I_M}$; that is,
$$\Phi(f,M)=\sum_{i\in I_M}c_i \Phi_{i,\eta}.$$
This proves
$$\left\|\sum_{i\in I_M}c_i\varphi_i-\Phi(f,M)\right\|_{L^2(\Omega)}\leq \dfrac{\delta_M}4.$$
Now we represent $\Phi(f,M)$ by another neural network $\widetilde{\Phi}(f,M)\in \mathcal {NN}_{L,RM,d,\rho}$ with discrete weights by using Theorem \ref{discrete}. Since the weights of $\Phi(f,M)$ are bounded above by $A|i\eta^{-1}|^n$, with $i\leq \pi(M)$ and $\eta^{-1}\leq \max\{1,4\sum_{i\in I_M}|c_i|\} \delta_M^{-1}\leq \max\{M^\gamma, DM^{\gamma+1}\}$, the weights are polynomially bounded in $\delta_M^{-1}\sim M^{\gamma}$. Thus there is $\widetilde\Phi(f,M)\in \mathcal{NN}_{L,RM,d,\rho}$ with weights represented by no more than $\lceil c\log_2\delta_M^{-1}\rceil$ bits such that
$$\|\Phi(f,M)-\widetilde{\Phi}(f,M)\|_{L^2(\Omega)}\leq \dfrac{\delta_M}4.$$
Applying the triangle inequality to the three estimates above, we obtain
$$\|f-\widetilde{\Phi}(f,M)\|_{L^2(\Omega)}\leq \delta_M=CM^{-\gamma}.$$
For $\varepsilon\in(0,1/2)$ we define
$$\textbf{Learn}(\varepsilon,f)=\widetilde\Phi(f,M_\varepsilon)\quad\text{ with }\quad M_\varepsilon=\left\lceil\left(\dfrac{C}{\varepsilon}\right)^{1/\gamma}\right\rceil$$
Now we check that the map \textbf{Learn} satisfies all of the conditions. First, we have $\|f-\widetilde{\Phi}(f,M_\varepsilon)\|_{L^2(\Omega)}\leq CM_\varepsilon^{-\gamma}\leq \varepsilon$. Next, all weights of $\textbf{Learn}(\varepsilon,f)$ can be represented with no more than $\lceil c\log_2(\delta_{M_\varepsilon}^{-1})\rceil$ bits; since $\delta_{M_\varepsilon}$ has the same order as $M_\varepsilon^{-\gamma}$, which has the same order as $\varepsilon$, the weights can be represented by no more than $\lceil c'\log_2(\varepsilon^{-1})\rceil$ bits for some $c'>0$. Since each $\Phi_{i,\eta}$ has connectivity no more than $R$ and there are at most $M_\varepsilon$ of them, $\textbf{Learn}(\varepsilon,f)$ has connectivity no more than $RM_\varepsilon$. It is important to note from Definition \ref{transition} that $R$ does not depend on $\varepsilon$, hence we may regard it as a constant independent of $M_\varepsilon$. Thus $\textbf{Learn}(\varepsilon,f)$ has no more than
$$R(C^{1/\gamma}\varepsilon^{-1/\gamma}+1)\leq 2RC^{1/\gamma}\varepsilon^{-1/\gamma}\in \mathcal O(\varepsilon^{-1/\gamma})$$
connectivity.
\end{proof}
The theorem shows that if $\mathcal D$ is effectively representable by neural networks, then when transferring from the representation system to neural networks, we can achieve an effective best $M$-edge approximation rate $\gamma_{\mathcal{NN}}^{*,e}(\mathcal C,\rho)\geq \gamma^{*,e}(\mathcal C,\mathcal D)$. In particular, if the function class $\mathcal C$ is optimally represented by $\mathcal D$, i.e., $\gamma^{*,e}(\mathcal C,\mathcal D)=\gamma^*(\mathcal C)$, then it is also optimally represented by neural networks with activation function $\rho$, i.e., $\gamma_{\mathcal {NN}}^{*,e}(\mathcal C,\rho)=\gamma^*(\mathcal C)$, by the fundamental bounds of effective approximation rates.
\textbf{A conclusion on theoretical results}
We have seen that in suitable settings, $\gamma^{*, e}_{\mathcal{NN}}(\mathcal C,\rho)\leq \gamma^{*}(\mathcal C)$ and $\gamma^{*,e}(\mathcal C, \mathcal D)\leq\gamma^*(\mathcal C)$, which are the fundamental bounds of the function class $\mathcal C$. If in addition $\mathcal D$ is effectively representable by neural networks, then we have the double inequality:
$$\gamma^{*,e}(\mathcal C, \mathcal D)\leq \gamma_{\mathcal{NN}}^{*,e}(\mathcal C, \rho)\leq \gamma^*(\mathcal C).$$
This inequality is interesting in its own right. Given a well-studied function class $\mathcal C$ and representation system $\mathcal D$, if $\mathcal D$ is effectively representable by neural networks with an acceptable activation function, then approximating functions in $\mathcal C$ by neural networks can be done as well as by the representation system, but the rate is always capped by the optimal exponent $\gamma^*(\mathcal C)$.
\section{Application on B-spline and cartoon-like functions}
In this section we examine the practicality of using neural networks to approximate B-spline functions. Then, we prove that the class of $\beta$ cartoon-like functions has a finite optimal exponent.
\subsection{Choices of activation functions}
We will need to narrow down possible choices of activation functions, so we can have a better control over the behavior of the neural networks.
\begin{definition}\label{sigmoidal}
A continuous function $\rho:\mathbb R\to \mathbb R$ is called a \emph{sigmoidal function} of order $k\in\mathbb N, k\geq 1$, if there exists $C>0$ such that
$$\lim_{x\to -\infty}\frac1{x^k}\rho(x)=0, \lim_{x\to \infty}\frac1{x^k}\rho(x)=1, \quad\text{ and }\quad |\rho(x)|\leq C(1+|x|)^k\;\text{ for }x\in\mathbb R. $$
If in addition, $\rho$ is differentiable, then it is \emph{strongly sigmoidal} of order $k$ provided there exists constants $a,b,C>0$ such that
$$\left|\frac1{x^k}\rho(x)\right|\leq C|x|^{-a}\text{ for } x<0; \left|\frac1{x^k}\rho(x)-1\right|\leq Cx^{-a}\text{ for }x\geq 0;$$
$$|\rho(x)|\leq C(1+|x|)^k, \left|\dfrac d{dx}\rho(x)\right|\leq C|x|^b\text{ for } x\in\mathbb R. $$
\end{definition}
It is worth noting that in practice, most activation functions are sigmoidal functions of order $k=0$ or $k=1$ (with the analogous definition). For example, the sigmoid function
$$\rho(x) = \frac1{1+e^{-x}},$$
and the ReLU (Rectified Linear Unit) function
$$\rho(x) = \max\{0, x\}.$$
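As a quick sanity check of ours (not part of the cited definitions), the ReLU function is sigmoidal of order $k=1$: indeed
$$\lim_{x\to -\infty}\frac{\max\{0,x\}}{x}=0,\qquad \lim_{x\to \infty}\frac{\max\{0,x\}}{x}=1,\qquad |\max\{0,x\}|\leq 1+|x|,$$
while the sigmoid function is bounded with limits $0$ and $1$ at $\mp\infty$, matching the order-$0$ variant of the definition.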
Although it is not common to use sigmoidal functions of order $2$, let alone of even higher order, we have included their definitions for the sake of generality.
\subsection{Approximate B-Spline}
B-splines are classic building blocks for constructing continuous functions with compact support, that is, functions vanishing outside a compact subset of $\mathbb R^N$. The simplest B-spline, of order 1, is the characteristic function $N_1=\chi_{[0,1]}$ on $\mathbb R$, where
$$N_1(x)=\chi_{[0,1]}(x)=\begin{cases}1&\text{ if }x\in [0,1]\\
0&\text{ otherwise}.\end{cases}$$
By induction, assuming $N_{m-1}$ is known, one defines the higher order B-spline $N_m$ as the convolution of $N_{m-1}$ and $N_1$. Then we have
$$N_m(x):= (N_{m-1}*N_1)(x)=\int_{\mathbb R}N_{m-1}(x-t)N_1(t)dt=\int_0^1 N_{m-1}(x-t)dt.$$
Some examples of B-splines are
$$N_2(x)=\begin{cases}x&\text{ if }x\in[0,1],\\
2-x&\text{ if }x\in[1,2],\\
0&\text{ otherwise}.\end{cases}$$
and
$$N_3(x)=\begin{cases}\frac12x^2&\text{ if }x\in[0,1],\\
-x^2+3x-\frac32&\text{ if }x\in[1,2],\\
\frac12x^2-3x+\frac92&\text{ if }x\in[2,3],\\
0&\text{ otherwise}.\end{cases}$$
The B-splines enjoy many properties. For example, $N_m$ is an $(m-2)$-times continuously differentiable nonnegative function which is identically zero outside $[0,m]$. Only the continuous differentiability is not obvious. When $m=2$ we know $N_2(x)$ is continuous. Suppose $N_{m-1}$ is $(m-3)$-times continuously differentiable; then using the symmetry of convolution we have
$$N_m(x)=\int_{x-1}^x N_{m-1}(t)dt\implies N_m'(x)=N_{m-1}(x)-N_{m-1}(x-1),$$
so $N_m$ is $(m-2)$-times continuously differentiable because $N_m'$ is $(m-3)$-times continuously differentiable. There are further facts; for instance, for each $x\in\mathbb R$, the (finite) sum below
$$\sum_{n=-\infty}^\infty N_m(x+n)$$
is equal to 1. Moreover, on each subinterval $[k,k+1]$ where $N_m$ is not identically zero, it is a degree $m-1$ polynomial.
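The recursion and the partition-of-unity property can be checked numerically. The following short Python sketch (illustrative only; the grid size and sample point are arbitrary choices of ours) builds $N_2,N_3,N_4$ by discrete convolution and spot-checks that the translates of $N_4$ sum to $1$:
\begin{verbatim}
import numpy as np

# Evaluate N_m on a grid by repeated convolution with N_1 = indicator of
# [0,1], then spot-check the partition of unity sum_n N_m(x+n) = 1.
h = 1e-3
x = np.arange(0.0, 4.0, h)          # covers the support [0,4] of N_4
N1 = np.where((x >= 0) & (x <= 1), 1.0, 0.0)
N = N1.copy()
for m in range(2, 5):               # build N_2, N_3, N_4 in turn
    N = np.convolve(N, N1)[:len(x)] * h   # Riemann sum of the convolution

# translates of N_4 hitting x = 0.3 are n = 0, 1, 2, 3
total = sum(np.interp(0.3 + n, x, N) for n in range(4))
print(total)                        # approximately 1.0
\end{verbatim}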
The results of \cite{chui} prove that the representation system of B-splines $\{N_m\}$ is representable by neural networks.
We modify the proof to show that the B-splines are effectively representable by neural networks if the activation function is strongly sigmoidal. In the discussion below, we assume $\rho$ is a fixed strongly sigmoidal function of order $k$ with associated constants $a,b,C>0$.
\begin{theorem}\label{effectiveBspline}
Let $\mathcal D=\{N_i\}$ be the representation system of B-splines.
Let $D>0$ and $\varepsilon>0$.
Then, there is a corresponding countable collection $\{B\Phi_{m,D,\varepsilon}\}_{m\geq 1}\subset \mathcal{NN}_{L,R,d,\rho}$ of neural networks such that
$$\|N_m-B\Phi_{m,D,\varepsilon}\|_{L^2([-D,D])}\leq\varepsilon,$$
with the weights of $B\Phi_{m,D,\varepsilon}$ bounded above by $Am^n\varepsilon^{-n}$ for some $A>0$ and $n\in\mathbb N$ independent of $m$, where $\rho$ is a strongly sigmoidal function of order $k$.
\end{theorem}
\begin{lemma}\label{firstlemma}
Define $x_+=\max\{0,x\}$. Given $L\in\mathbb N$, $D>0$, $\varepsilon>0$, and a sigmoidal activation function $\rho$ of order $k$, there is a neural network $\Phi_{+,\varepsilon}\in \mathcal{NN}_{L+1, L+1, 1, \rho}$ such that
$$|x_+^{k^L}-\Phi_{+,\varepsilon}(x)|\leq \varepsilon\quad\text{ for }|x|\leq D. $$
In addition, the weights of $\Phi_{+,\varepsilon}$ are bounded above by $\mathcal O(\varepsilon^{-n})$ for some positive integer $n$ that only depends on $L, k, a, b$.
\end{lemma}
Note that the number of layers equals the connectivity: the neural network $\Phi_{+,\varepsilon}$ has exactly one nonzero edge weight connecting each pair of consecutive layers, and no node weights.
\begin{proof}
Let $\delta=\left(\dfrac\varepsilon{2^{k+1}C}\right)^{1/k}$; we find $B>1$ such that $$|\rho(x)|\leq \varepsilon |x|^k\text{ for }x<-B, \quad |\rho(x)-x^k|\leq \varepsilon |x|^k\text{ for }x>B.$$
Following strong sigmoidality, we can define $B=\max\{1, (C/\varepsilon)^{1/a}\}$.
We first solve the case for $D=1, L=1$. Define $P_{1,1,\varepsilon}(x)=(\delta/B)^k \rho(Bx/\delta)\in \mathcal{NN}_{2,2,1,\rho}$, then we claim $|x_+^k-P_{1,1,\varepsilon}(x)|\leq \varepsilon$ for $|x|\leq 1$. If $x< -\delta$ then
$$|x_+^k -P_{1,1,\varepsilon}(x)|=|P_{1,1,\varepsilon}(x)|\leq \left(\frac\delta B\right)^k \varepsilon \left|\frac{Bx}{\delta}\right|^k \leq \varepsilon|x|^k\leq \varepsilon.$$
If $-\delta\leq x<0$, then
$$|x_+^k -P_{1,1,\varepsilon}(x)|=|P_{1,1,\varepsilon}(x)|\leq\left(\frac \delta B\right)^k \cdot C(1+B)^k\leq \dfrac\varepsilon{2^{k+1}CB^k}\cdot C(2B)^k<\varepsilon. $$
If $0\leq x\leq \delta$, then
$$|x_+^k-P_{1,1,\varepsilon}(x)|\leq \delta^k + \left(\frac\delta B\right)^k \cdot C(1+B)^k\leq \dfrac\varepsilon2+\dfrac\varepsilon2=\varepsilon.$$
If $x> \delta$, let $y=Bx/\delta>B$ then
$$|x_+^k-P_{1,1,\varepsilon}(x)|=x^k\left|1- \left(\frac\delta{Bx}\right)^k\rho\left(\frac{Bx}\delta\right)\right|\leq |1-y^{-k}\rho(y)|\leq \varepsilon.$$
The weights of $P_{1,1,\varepsilon}$ are bounded above by the order $\mathcal O(\varepsilon^{-\frac1k-\frac1a})$. Indeed, we note that $B/\delta\in \mathcal O(\varepsilon^{-\frac1k-\frac1a})$, hence $\delta/B \in\mathcal O(1)$ by the natural assumption that $\varepsilon<1$.
For a general $D>0$, we let $$P_{1,D,\varepsilon}(x)= D^kP_{1,1,D^{-k}\varepsilon}\left(\frac xD\right)\in \mathcal{NN}_{2,2,1,\rho},$$
then for $|x|\leq D$ we have $|x/D|\leq 1$, hence
$$|x_+^k-P_{1,D,\varepsilon}(x)|=D^k\left|\left(\frac xD\right)_+^k -P_{1,1,D^{-k}\varepsilon}\left(\frac xD\right)\right|\leq D^k(D^{-k}\varepsilon)=\varepsilon.$$
The weights of $P_{1,D,\varepsilon}$ are also bounded above by $\mathcal O(\varepsilon^{-\frac1k -\frac1a})$ because $D$ is independent from $\varepsilon$.
The function $P_{1,1,\varepsilon}$ is continuous, hence uniformly continuous on any compact subset of $\mathbb R$. We thus choose $\eta>0$ such that
$$|P_{1,1,\varepsilon/2}(x)-P_{1,1,\varepsilon/2}(y)|\leq \varepsilon/2\quad\text{ if }|x-y|<\eta, |x|,|y|\leq 2.$$
This is true if we choose $\eta=\frac\varepsilon{2^{b+1}C}\min\{(\frac B\delta)^{k-1-b},1\}$, where $\eta^{-1}\in \mathcal O(\varepsilon^{-N})$ for some positive integer $N\geq 1$ depending only on $k,a,b$. Then by the mean value theorem and strong sigmoidality,
\begin{align*}
|P_{1,1,\varepsilon/2}(x)-P_{1,1,\varepsilon/2}(y)|&=\left(\frac\delta B\right)^{k} \left|\rho\left( \dfrac{Bx}{\delta}\right)-\rho\left(\frac{By}\delta\right)\right|\\
&\leq \left(\dfrac\delta B\right)^k \cdot C\left(\frac{2B}\delta\right)^b \cdot \dfrac B\delta |x-y|\\
&< 2^bC\left(\dfrac \delta B\right)^{k-1-b}\eta\leq \dfrac\varepsilon2.
\end{align*}
Let $P_{2,1,\varepsilon}(x)=P_{1,1,\varepsilon/2}(P_{1,1,\eta}(x))\in \mathcal{NN}_{3,3,1,\rho}$, then for $|x|\leq 1$,
\begin{align*}
|x_+^{k^2}-P_{2,1,\varepsilon}(x)|&\leq |(x_+^k)^k-P_{1,1,\varepsilon/2}(x_+^k)|+|P_{1,1,\varepsilon/2}(x_+^k)-P_{1,1,\varepsilon/2}(P_{1,1,\eta}(x))|\\
&\leq \frac\varepsilon2+\frac\varepsilon2=\varepsilon.
\end{align*}
The weights of $P_{1,1,\eta}$ are bounded above by the order $\mathcal O(\varepsilon^{-N(\frac1k+\frac1a)})$.
Upon concatenation, we find the weights of $P_{2,1,\varepsilon}$ are bounded above by $\mathcal O(\varepsilon^{-N(\frac1k+\frac1a)})$ as well.
Suppose we already defined $P_{\ell-1,1,\varepsilon}\in \mathcal{NN}_{\ell,\ell,1,\rho}$ with weights bounded above by $\mathcal O(\varepsilon^{-N^{\ell-2}(\frac1k+\frac1a)})$, then we define $$P_{\ell,1,\varepsilon}(x)=P_{1,1,\varepsilon/2}(P_{\ell-1,1,\eta}(x))\in\mathcal{NN}_{\ell+1,\ell+1,1,\rho},$$
so that
\begin{align*}
|x_+^{k^\ell}-P_{\ell,1,\varepsilon}(x)|&\leq |(x_+^{k^{\ell-1}})^k-P_{1,1,\varepsilon/2}(x_+^{k^{\ell-1}})|+|P_{1,1,\varepsilon/2}(x_+^{k^{\ell-1}})-P_{1,1,\varepsilon/2}(P_{\ell-1,1,\eta}(x))|\\
&\leq \frac\varepsilon2+\frac\varepsilon2=\varepsilon.
\end{align*}
By induction, the weights of $P_{\ell,1,\varepsilon}$ are bounded above by $\mathcal O(\varepsilon^{-N^{\ell-1} (\frac1k+\frac1a)})$.
Define $$\Phi_{+,\varepsilon}(x)=D^{k^L}P_{L,1,D^{-k^L}\varepsilon}\left( \frac xD\right)\in \mathcal{NN}_{L+1,L+1,1,\rho},$$
we have found a neural network such that $$|\Phi_{+,\varepsilon}(x)-x_+^{k^L}|\leq \varepsilon$$ for $|x|\leq D$, with weights bounded above by $\mathcal O(\varepsilon^{-N^{L-1}(\frac1k+\frac1a)})\subset \mathcal O(\varepsilon^{-n})$ for some $n$ that only depends on $L, k, a, b$.
\end{proof}
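To make the rescaling step concrete, the following sketch instantiates $P_{1,1,\varepsilon}(x)=(\delta/B)^k\rho(Bx/\delta)$ with the (assumed, for illustration only) order-$1$ strongly sigmoidal function $\rho(x)=x\,\sigma(x)$ and the simplifying choices $C=a=1$; it is a numerical check of ours, not part of the proof:
\begin{verbatim}
import numpy as np

def rho(x):
    # x * sigmoid(x), written with tanh for numerical stability;
    # rho(x)/x -> 1 as x -> +inf and -> 0 exponentially as x -> -inf
    return x * 0.5 * (1.0 + np.tanh(x / 2.0))

k, eps = 1, 1e-2
delta = (eps / 2 ** (k + 1)) ** (1 / k)   # delta = (eps/(2^{k+1} C))^{1/k}, C = 1
B = max(1.0, 1.0 / eps)                   # B = max{1, (C/eps)^{1/a}}, a = 1
x = np.linspace(-1, 1, 2001)
P = (delta / B) ** k * rho(B * x / delta)
err = np.max(np.abs(np.maximum(x, 0.0) ** k - P))
print(err)                                # below eps, as the lemma predicts
\end{verbatim}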
Next, we approximate the function $x^{k^L}$.
\begin{lemma}\label{secondlemma}
Under the assumptions of Lemma \ref{firstlemma}, there is a neural network $\Phi_{\varepsilon, D}\in \mathcal{NN}_{L+1,2L+2,1,\rho}$ such that
$$|x^{k^L}-\Phi_{\varepsilon,D}(x)|\leq \varepsilon\quad\text{ for }|x|\leq D. $$
In addition, the weights of $\Phi_{\varepsilon,D}$ are bounded above by $\mathcal O(\varepsilon^{-n})$ for some positive integer $n$ that only depends on $L, k, a, b, D$.
\end{lemma}
\begin{proof}
It can be verified that $x^N = x_+^N +(-1)^N(-x)_+^N$ by considering separately the cases of $N$ even and odd.
Define
$$\Phi_{\varepsilon,D}(x)=\Phi_{+,\varepsilon/2}(x)+(-1)^{k^L}\Phi_{+,\varepsilon/2}(-x)\in \mathcal{NN}_{L+1,2L+2,1,\rho},$$
This neural network is constructed by first feeding $x$ through both copies of $\Phi_{+,\varepsilon/2}$ in parallel, then applying one more affine map (namely, an inner product) at the end; it still has $L+1$ layers because a neural network does not pass through an activation function in its last layer.
Now we have
\begin{align*}
|x^{k^L}-\Phi_{\varepsilon, D}(x)|&\leq |x_+^{k^L}-\Phi_{+,\varepsilon/2}(x)|+|(-1)^{k^L}(-x)_+^{k^L}-(-1)^{k^L}\Phi_{+,\varepsilon/2}(-x)|\\
&\leq \frac\varepsilon2+\frac\varepsilon2=\varepsilon\quad\text{ for }|x|\leq D.
\end{align*}
Moreover, the weights of $\Phi_{\varepsilon, D}$ are still bounded above by $\mathcal O(\varepsilon^{-n})$, with the same $n$ as Lemma \ref{firstlemma}.
\end{proof}
Using techniques from linear algebra, we can approximate the function $x_+$.
\begin{lemma}\label{thirdlemma}
Under the same assumptions as in Lemma \ref{firstlemma}, given $\varepsilon>0$ there is a neural network $\Psi_{+,\varepsilon,D}\in \mathcal{NN}_{2,3(k+1),1,\rho}$ such that
$$|x_+-\Psi_{+,\varepsilon,D}(x)|\leq \varepsilon\quad\text{ for }|x|\leq D. $$
In addition, the weights of $\Psi_{+, \varepsilon,D}$ are bounded above by $\mathcal O(\varepsilon^{-n})$ for some positive integer $n$ that only depends on $k$ and $a$.
\end{lemma}
\begin{proof}
The first step is to find constants $\alpha_0,\alpha_1,\dots,\alpha_k$ such that
$$\sum_{\mu=0}^k \alpha_\mu (x+\mu)^k=x\quad \forall x\in\mathbb R,$$
where we emphasize $k$ is the sigmoidal order of the activation function $\rho$. By expanding coefficients, we have $(x+\mu)^k=\sum_{v=0}^k \binom kv x^v\mu^{k-v}$, hence
\begin{align*}
\sum_{\mu=0}^k \alpha_\mu \mu^{k-v}=\begin{cases}1/k&\text{ if }v=1,\\0&\text{ otherwise.}\end{cases}
\end{align*}
This is a linear system of $k+1$ equations with full rank, because the coefficient matrix is Vandermonde. Therefore the solution $\{\alpha_0,\alpha_1,\dots,\alpha_k\}$ exists. Observe that the scalars $\{\alpha_0,\dots,\alpha_k\}$ depend only on $k$.
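The existence (and size) of the $\alpha_\mu$ can also be checked numerically; the sketch below, with the arbitrary choice $k=3$ (ours, for illustration), solves the Vandermonde system and verifies the reproducing identity:
\begin{verbatim}
import numpy as np

k = 3
mu = np.arange(k + 1, dtype=float)
# row v of the system: sum_mu alpha_mu * mu^(k-v) = (1/k if v == 1 else 0)
V = np.array([mu ** (k - v) for v in range(k + 1)])
rhs = np.zeros(k + 1); rhs[1] = 1.0 / k
alpha = np.linalg.solve(V, rhs)

x = np.linspace(-5, 5, 11)
recon = sum(a * (x + m) ** k for a, m in zip(alpha, mu))
print(np.max(np.abs(recon - x)))   # zero up to floating-point error
\end{verbatim}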
Define $N$ to be the smallest positive integer such that $$N\geq \max\left\{\dfrac{2k^k\sum_\mu |\alpha_\mu|}\varepsilon, k\right\},$$
Then for $x>0$ we have
$$\sum_{\mu=0}^k \alpha_\mu N^{k-1} \left(x+\frac\mu N\right)_+^k=\frac1N \sum_{\mu=0}^k \alpha_\mu(Nx+\mu)^k=\frac1N\cdot Nx=x=x_+.$$
Observe that the above also holds for $x\leq -k/N$ because both sides are zero.
If $-k/N<x\leq 0$ then $\dfrac{\mu-k}N<x+\dfrac\mu N\leq \dfrac\mu N$ and $(x+\mu/N)_+\leq |x+\mu/N|\leq k/N$. We have
$$\left|\sum_{\mu=0}^k \alpha_\mu N^{k-1} \left(x+\frac\mu N\right)_+^k\right|\leq \left(\frac kN\right)^k N^{k-1}\sum_{\mu=0}^k |\alpha_\mu|\leq \frac\varepsilon2.$$
Thus, it is true that
$$\left|\sum_{\mu=0}^k \alpha_\mu N^{k-1} \left(x+\frac\mu N\right)_+^k-x_+\right|\leq \frac\varepsilon2\quad\forall x\in \mathbb R.$$
We then let $\eta=\varepsilon/(2N^{k-1}\sum |\alpha_\mu|)$ and define
$$\Psi_{+,\varepsilon,D}(x)=\sum_{\mu=0}^k N^{k-1}\alpha_\mu P_{1,D+1,\eta}(x+\mu/N)\in \mathcal{NN}_{2,3(k+1),1,\rho}.$$
Two points need to be clarified. First, why do we choose $P_{1,D+1,\eta}$ instead of $P_{1,D,\eta}$? Because $N\geq k$ implies $0\leq \mu/N\leq 1$, so the argument $x+\mu/N$ ranges over $(-D-1,D+1)$, and the domain must be enlarged to account for the extra shift $\mu/N$. Second, $\Psi_{+,\varepsilon,D}$ amounts to $k+1$ neural networks in $\mathcal{NN}_{2,2,1,\rho}$ operating in parallel followed by a linear combination at the end; why is $\Psi_{+,\varepsilon,D}$ not necessarily a member of $\mathcal{NN}_{2,2(k+1),1,\rho}$? Each subnetwork $P_{1,D+1,\eta}$ takes not just the input $x$ but $x+\mu/N$. The extra $\mu/N$ must be treated as a nonzero node weight (when $\mu>0$) at the first hidden layer, hence it contributes one more weight to the count.
We see
\begin{align*}
&\left|\Psi_{+,\varepsilon,D}(x)-\sum_{\mu=0}^k \alpha_\mu N^{k-1}\left(x+\frac\mu N\right)_+^k\right|\\
\leq& \sum_{\mu=0}^k |\alpha_\mu|N^{k-1}\left|P_{1,D+1,\eta}\left(x+\frac\mu N\right)-\left(x+\frac\mu N\right)_+^k\right|\\
\leq& \eta\sum_{\mu=0}^k |\alpha_\mu|N^{k-1}\\
=&\dfrac\varepsilon2\quad\text{ for }|x|\leq D.
\end{align*}
We conclude $|\Psi_{+,\varepsilon,D}(x)-x_+|\leq \varepsilon$ for $|x|\leq D$.
Moreover, we observe $\eta^{-1}\in \mathcal O(\varepsilon^{-1}\cdot \varepsilon^{-(k-1)})=\mathcal O(\varepsilon^{-k})$ since $N\in\mathcal O(\varepsilon^{-1})$, hence the weights of $\Psi_{+,\varepsilon,D}$ are dominated by $\mathcal O(\max\{\varepsilon^{-k(\frac1k+\frac1a)}, \varepsilon^{-(k-1)}\})$.
\end{proof}
Now we summarize the approximants with their corresponding monomials so far.
$$\Phi_{\varepsilon,D}\text{ used to approximate }x^{k^L}\text{ with error }\varepsilon,\text{ for }|x|\leq D.$$
$$\Psi_{+,\varepsilon,D}\text{ used to approximate }x_+\text{ with error }\varepsilon,\text{ for }|x|\leq D.$$
In the next lemma we use neural networks to approximate monomials of any positive integer degree.
\begin{lemma}\label{fourthlemma}
Let $D,\varepsilon>0$, $m\in\mathbb N$, and let $\rho$ be a sigmoidal function of order $k$. Let $L$ be the smallest integer such that $m-1\leq k^L$, and let $M=(k^L+1)(3k+2L+8)$. We construct a network $R_{D,m,\varepsilon}\in \mathcal{NN}_{L+2,M,1,\rho}$ such that
$$|x_+^{m-1}-R_{D,m,\varepsilon}(x)|\leq \varepsilon\quad\text{ for }|x|\leq D.$$
In addition, the weights of $R_{D,m,\varepsilon}$ are bounded above by $\mathcal O(m^N\varepsilon^{-N})$ for some positive integer $N$ that only depends on $L,k,a,b,D$.
\end{lemma}
\begin{proof}
Using the same idea as in Lemma \ref{thirdlemma}, we solve $\{a_i\}_{0\leq i\leq k^L}$ from
\begin{align*}
\sum_{i=0}^{k^L} a_i i^{k^L-v}=\begin{cases}\dbinom{k^L}{m-1}^{-1}&\text{ if }v=m-1,\\ 0&\text{otherwise.}\end{cases}
\end{align*}
The solutions $\{a_i\}$ do not depend on $\varepsilon$ and $D$, but they do depend on $m$.
In fact, we write the system above as $Va=b$, where $V$ is the $(k^L+1)\times (k^L+1)$ Vandermonde matrix and $a=[a_0\dots a_{k^L}]^T$. Then $V$ depends on $m$ because $L$ depends on $m$, and we have $L\in \mathcal O(\log m)$. Since all entries of $b$ are at most 1, $b$ is bounded independently of $m$.
We have $a=V^{-1}b$ and we claim that $\{a_i\}$ are bounded above by $\mathcal O(m^{n'})$ for some positive integer $n'$.
Indeed, \textcolor{blue}{\href{https://proofwiki.org/wiki/Inverse_of_Vandermonde_Matrix}{this proof}} gives a formula for inverse of a Vandermonde matrix, each entry in $V^{-1}$ is bounded above by $\mathcal O((k^L)^{n'})=\mathcal O(m^{n'})$ for some positive integer $n'$.
Next, we see $x^{m-1}=\sum_{i=0}^{k^L} a_i(x+i)^{k^L}$ for all real $x$.
In particular, the expression is also true if we replace $x$ by $x_+$. Let $$\eta=\dfrac{\varepsilon}{2k^L(k^L+2)^{k^L-1} \sum_i |a_i|},$$
we define
$$R_{1,m,\varepsilon}(x)=\sum_{i=0}^{k^L} a_i \Phi_{\eta, k^L+2}(\Psi_{+,\eta,1}(x)+i)\in \mathcal{NN}_{L+2,M,1,\rho},$$
hence the weights of $R_{1,m,\varepsilon}$ are affected by $m$.
For $|x|\leq 1$, we note that if $i=0,1,\dots, k^L$, then
$$|(\Psi_{+,\eta,1}(x)+i)-(x_++i)|\leq \eta.$$
Since $|x_++i|\leq k^L+1$, for $\eta$ small enough (by choosing $\varepsilon$ small enough) we have
$$|\Psi_{+,\eta,1}(x)+i|\leq k^L+2.$$
Now note that if $a,b$ are two real numbers, by mean value theorem
$$|a^{k^L}-b^{k^L}|\leq k^L \max\{|a|,|b|\}^{k^L-1}|a-b|,$$
thus we have
$$\left|(\Psi_{+,\eta,1}(x)+i)^{k^L}-(x_++i)^{k^L}\right|\leq k^L(k^L+2)^{k^L-1}\eta.$$
We also have
$$|\Phi_{\eta, k^L+2}(\Psi_{+,\eta,1}(x)+i)-(\Psi_{+,\eta,1}(x)+i)^{k^L}|\leq \eta.$$
Combining the two inequalities above, we have
$$|\Phi_{\eta, k^L+2}(\Psi_{+,\eta,1}(x)+i)-(x_++i)^{k^L}|\leq 2k^L(k^L+2)^{k^L-1}\eta.$$
To conclude, we observe that
\begin{align*}
\left|R_{1,m,\varepsilon}(x)-\sum_{i=0}^{k^L}a_i(x_++i)^{k^L}\right|&\leq 2k^L(k^L+2)^{k^L-1}\eta\sum_i |a_i|\leq \varepsilon\quad\text{ for }|x|\leq 1.
\end{align*}
For a general $D>0$, we define
$$R_{D,m,\varepsilon}(x)=D^{m-1}R_{1,m,D^{-m+1}\varepsilon}\left(\frac xD\right),$$
then it follows that
$$|x_+^{m-1}-R_{D,m,\varepsilon}(x)|\leq \varepsilon\quad\text{ for }|x|\leq D.$$
Moreover, the weights of $R_{D,m,\varepsilon}$ are bounded above by $\mathcal O(m^n\eta^{-n})$ for some positive integer $n$ depending on $L,k,a,b,D$, which in turn is bounded above by $\mathcal O(m^N\varepsilon^{-N})$ for another positive integer $N$ depending on $L,k,a,b,D$, because $\varepsilon\in\mathcal O(\eta m^{n'})$.
\end{proof}
Now we show that $N_m(x)$ can be approximated by a neural network with error $\varepsilon$ whose weights are bounded above by $\mathcal O(m^n\varepsilon^{-n})$.
This proves that the representation system $\{N_m\}$ can be effectively represented by neural networks. The theorem below is a reformulation of Theorem \ref{effectiveBspline}, stated in more detail.
\begin{theorem}\label{B-spline}
Let $L, m, k\in \mathbb N$ with $m\geq 2$, let $\varepsilon, D>0$, define $M=(m+1)(k^L+1)(3k+2L+8)$, and let $\rho$ be a sigmoidal function of order $k$. Then there is a network $B\Phi_{m,D,\varepsilon}\in\mathcal{NN}_{L+2, M, 1, \rho}$ such that
$$\|N_m-B\Phi_{m, D,\varepsilon}\|_{L^2([-D,D])}\leq \varepsilon.$$
In addition, the weights of $B\Phi_{m,D,\varepsilon}$ are bounded above by $\mathcal O(m^n\varepsilon^{-n})$ for some positive integer $n$ that only depends on $L,k,a,b,D$.
\end{theorem}
\begin{proof}
The first step is to present $N_m$ in a different way. We claim that
$$N_m(x)=\frac1{(m-1)!}\sum_{j=0}^m \binom mj (-1)^j (x-j)_+^{m-1}\quad\text{ for }m\geq 2.$$
The B-spline $N_2(x)$ is the hat function, equal to $x$ on $[0,1]$ and $2-x$ on $[1,2]$. By carefully considering each unit interval we see
$$N_2(x)=x_+-2(x-1)_++(x-2)_+.$$
To prove for $m>2$ we use induction. Suppose the summation is true for $m=n-1$, then using convolution we have
\begin{align*}
N_n(x)&=\int_0^1 N_{n-1}(x-t)dt\\
&=\dfrac1{(n-2)!}\sum_{j=0}^{n-1}\binom{n-1}j (-1)^j \int_0^1 (x-j-t)_+^{n-2}dt\\
&=\frac1{(n-1)!}\sum_{j=0}^{n-1}\binom{n-1}j (-1)^j\left[(x-j-t)_+^{n-1}\right]_{t=1}^{t=0}\\
&=\frac1{(n-1)!}\sum_{j=0}^{n-1}\binom{n-1}j (-1)^j[(x-j)_+^{n-1}-(x-j-1)_+^{n-1}]\\
&=\frac1{(n-1)!}\Bigg\{x_+^{n-1}+\sum_{j=1}^{n-1}\left[\binom{n-1}j+\binom{n-1}{j-1}\right](-1)^j (x-j)_+^{n-1}\\
&\qquad +\binom{n-1}{n-1}(-1)^n(x-n)_+^{n-1}\Bigg\}\\
&=\frac1{(n-1)!}\sum_{j=0}^n \binom nj(-1)^j (x-j)_+^{n-1}
\end{align*}
For simplicity we write $N_m(x)=\sum_{j=0}^m b_j (x-j)_+^{m-1}$ with $b_j=\frac1{(m-1)!}\binom mj(-1)^j$, then we also have $\sum|b_j|=2^m/(m-1)!$.
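Before proceeding, here is a small numerical cross-check of ours comparing this truncated-power representation of $N_3$ with its piecewise formula from the previous section:
\begin{verbatim}
import math

def N_truncated(x, m):
    # N_m(x) = (1/(m-1)!) sum_j C(m,j) (-1)^j (x-j)_+^{m-1}
    s = sum(math.comb(m, j) * (-1) ** j * max(x - j, 0.0) ** (m - 1)
            for j in range(m + 1))
    return s / math.factorial(m - 1)

def N3_piecewise(x):
    if 0 <= x <= 1: return 0.5 * x ** 2
    if 1 < x <= 2:  return -x ** 2 + 3 * x - 1.5
    if 2 < x <= 3:  return 0.5 * x ** 2 - 3 * x + 4.5
    return 0.0

for t in [-0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]:
    assert abs(N_truncated(t, 3) - N3_piecewise(t)) < 1e-12
\end{verbatim}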
Define $\eta=\varepsilon(\sqrt{2D}\sum|b_j|)^{-1}=\varepsilon\dfrac{(m-1)!}{2^{m}\sqrt{2D}}$, using Lemma \ref{fourthlemma} we define
$$B\Phi_{m,D,\varepsilon}(x)=\sum_{j=0}^m b_j R_{D+m,m,\eta}(x-j),$$
then for $|x|\leq D$,
\begin{align*}
|N_m(x)-B\Phi_{m, D, \varepsilon}(x)|&=\left|\sum_{j=0}^m b_j [(x-j)_+^{m-1}-R_{D+m, m, \eta}(x-j)]\right|\\
&\leq \eta\sum_{j=0}^m |b_j|\\
&=\dfrac{\varepsilon}{\sqrt{2D}}.
\end{align*}
Then, the $L^2$ norm is estimated as
\begin{align*}
\|N_m-B\Phi_{m,D,\varepsilon}\|_{L^2([-D,D])}^2&=\int_{-D}^D (N_m(x)-B\Phi_{m,D,\varepsilon}(x))^2dx\leq \varepsilon^2,
\end{align*}
which proves the $L^2$ error is bounded above by $\varepsilon$.
Finally, by the previous lemma, the weights of $B\Phi_{m,D,\varepsilon}$ are bounded above by $\mathcal O(m^n \eta^{-n})\subset \mathcal O(m^n \varepsilon^{-n})$ for some positive integer $n$, because $\eta^{-1}\leq \varepsilon^{-1}$ for sufficiently large $m$.
\end{proof}
As a final remark, since $\mathcal D=\{N_m\}$ is effectively representable by neural networks, if we let $\mathcal C$ be the class of B-spline curves, then
$$\gamma^{*,e}(\mathcal C, \mathcal D) \leq \gamma_{\mathcal{NN}}^{*,e}(\mathcal C, \rho)\leq \gamma^*(\mathcal C).$$
This summarizes the quality of approximation of B-spline curves by neural networks. In the next section we focus on the class $\mathcal C$ of $\beta$ cartoon-like functions, calculating its optimal exponent $\gamma^*(\mathcal C)$.
\section{Approximate cartoon-like function}
The result of this section is mainly due to \cite{donoho2}.
We show that a function class containing the \quotes{$\beta$ cartoon-like functions} has finite optimal exponent equal to $\beta/2$, and we illustrate the proofs.
Recall from Definition \ref{optimalrate}, we know optimal exponent is a quantity intrinsic to the function class.
Although this result is not about neural networks or representation systems, we remark that, by the fundamental bound theorem, both the effective best $M$-edge and $M$-term approximation rates for the class of $\beta$ cartoon-like functions are bounded above by $\beta/2$.
One major application of neural networks is its ability to approximate cartoon-like function.
Roughly speaking, we consider a function $f:[0,1]^2\to \mathbb R$ which is twice continuously differentiable except on some simple closed curve in $I:=[0,1]^2$, satisfying some regularity conditions.
We consider this type of functions for a natural reason.
Most image pixels are not randomly placed; they are connected in a structured way, so that the image has some smooth regions and some boundaries. In this section we only discuss a square containing a single simple closed curve, as the result generalizes to the case where multiple curves appear in a square.
First we define the curves in $I$. Generally, we pick an interior point $b_0\in I$ to serve as the center of the closed curve, then use polar coordinate to define the curve.
For $\theta\in[0,2\pi)$ let $\rho(\theta)>0$ be the radius measured from $b_0$, with $\rho(0)=\lim_{\theta\to 2\pi^-} \rho(\theta)$ (periodicity).
In Cartesian coordinate, the curve is given by
$$\alpha(\theta)=b_0+(\rho(\theta)\cos\theta, \rho(\theta)\sin\theta)\quad \forall \theta\in[0,2\pi).$$
Then, for $\beta\in(1,2]$, define $\text{HÖLDER}^\beta(C)$ to be the collection of continuously differentiable polar radius functions $\rho(\theta)$ satisfying the \textbf{boundary regularity condition} below:
$$|\rho'(\theta_1)-\rho'(\theta_2)|\leq C |\theta_1-\theta_2|^{\beta-1}.$$
The infimum of $C>0$ such that the above holds is denoted by $\|\rho\|_{\dot\Lambda^\beta}$. Thus $\|\rho\|_{\dot\Lambda^\beta}\leq C$ for every $\rho\in\text{HÖLDER}^\beta(C)$.
Denote $B(\rho)$ to be the closed region enclosed by the curve $\alpha(\theta)$.
Specifically, we are interested in the following set of curves:
\begin{footnotesize}
$$\text{STAR-SET}^\beta(C)=\left\{B(\rho): B(\rho)\subset \left[\frac1{10},\frac9{10}\right]^2, \frac1{10}\leq \rho(\theta)\leq\frac12, \theta\in [0,2\pi), \rho\in \text{HÖLDER}^\beta(C)\right\}.$$
\end{footnotesize}
The function class we are working in will be
$$\text{STAR}^\beta(C)=\{ \chi_{B(\rho)}: B(\rho)\in \text{STAR-SET}^\beta(C)\},$$
where $\chi_E$ is the characteristic function which is defined by
$$\chi_E(x)=\begin{cases}1&\text{ if }x\in E,\\0&\text{ if }x\notin E.\end{cases}$$
Since the area of $B(\rho)$ is smaller than 1, we have $\|f\|_2<1$ for all $f\in\text{STAR}^\beta(C)$.
The following is our main result:
\begin{theorem}\label{betacurve}
For $C>0$ and $\beta\in (1,2]$, we have
$$\gamma^*(\text{\textup{STAR}}^\beta(C))=\dfrac\beta2.$$
\end{theorem}
The set $\text{STAR}^\beta(C)$ is a much smaller subset of $\mathcal E^\beta(\mathbb R^2, v)$ in \cite[Theorem 6.3]{main}. The result of the theorem is also different from \cite[Theorem 1]{donoho2} because the measurement of optimality is different. However, we prove that the essential results are all equivalent. Moreover, we exclude the case $\beta=1$ since we want the boundary to be continuously differentiable.
In order to prove Theorem \ref{betacurve}, we need four steps.
\begin{enumerate}[1.]
\item An elegant fact from rate-distortion theory \cite{berger} which we are not proving here.
Suppose we have $m$ fair coins, each equally likely to land heads or tails when flipped, so there are $2^m$ possible outcomes of flipping these $m$ coins in a row.
Each outcome can be encoded into a bitstring of length $m$, consisting of zeros and ones only, and the state of the $m$ coins is recovered in the obvious way.
We are interested in encoding the flipped coins into a bitstring of length $R<m$, then designing a decoding process to recover the result of the flipped coins while accepting some information loss.
If $X$ is the original sequence of $m$ coins, we use $\hat X=\text{Dec}(\text{Enc}(X))$ to denote the sequence after recovery.
The loss of information is simply $\text{Dist}(X,\hat X)$, the number of coin positions that differ before and after recovery.
By rate-distortion theory, when $R$ is significantly smaller than $m$, the recovery must suffer a great amount of loss. There is a positive number $D_m(R)$ such that
$$\inf_{\text{Enc}, \text{Dec}}\text{Average}(\text{Dist}(X, \text{Dec}(\text{Enc}(X))))\geq D_m(R)$$ where the infimum is taken across all encoding-decoding process, and the average is taken on all $2^m$ possible sequences of coins flipped.
The rate-distortion theory guarantees that given a positive number $\rho<1/2$, there is a number $D_1(\rho)>0$ such that $R\leq \rho m$ implies $D_m(R)\geq D_1(\rho)m$. This means if we are using considerably less resource to encode the $m$ flipped coins, then a substantial fraction of information will be lost no matter how good the encoding-decoding process is (which agrees with common sense).
\item In the second step, we prove $\text{STAR}^\beta(C)$ contains a copy of $p$-hypercube, denoted by $\ell_0^p$ (which will be defined later) for $p=\frac2{\beta+1}$.
\item Next, we show if a function class $\mathcal F$ contains a copy of $\ell_0^p$ then $\gamma^*(\mathcal F)\leq \frac{2-p}{2p}$ from the result of rate-distortion theory.
Thus when $\mathcal F=\text{STAR}^\beta(C)$ we deduce that $$\gamma^*(\mathcal F)\leq \dfrac{2-\frac2{\beta+1}}{\frac{4}{\beta+1}}=\frac\beta2.$$
\item In the last step, we prove $\gamma^*(\text{STAR}^\beta(C))\geq \dfrac\beta2$ by using the discrete wedgelets dictionary.
\end{enumerate}
We should look at Step 2 and Step 3 at the same time.
Note that $\gamma>\gamma^*(\text{STAR}^\beta(C))$ if for any $\varepsilon\in(0,1/2)$, the minimax code length $L(\varepsilon, \text{STAR}^\beta(C))$ does not obey the order of $\mathcal O(\varepsilon^{-1/\gamma})$ as $\varepsilon\to 0$ (recall Definition \ref{optimalrate}).
Our objective is to prove this is true for every $\gamma>\dfrac{2-p}{2p}$ with $p=\dfrac2{\beta+1}$, thus implying the inequality $\gamma^*(\text{STAR}^\beta(C))\leq \dfrac{2-p}{2p}$.
Since the statement about minimax code length is a negative statement, we do not need to use the whole class $\text{STAR}^\beta(C)$. Instead, it is enough to rely only on the orthogonal hypercube structure embedded in $\text{STAR}^\beta(C)$. To see why hypercubes are relevant, recall the description of rate-distortion theory in Step 1: it concerns the information loss when recording a bitstring (or a sequence of coin flips). A hypercube has the same combinatorial structure, as we can regard each vertex of an $m$-dimensional hypercube as a bitstring of length $m$.
\begin{definition}
A function class $\mathcal F$ is said to contain an \emph{embedded orthogonal hypercube} of dimension $m$ and side $\delta$ if there exist $f_0\in\mathcal F$, and orthogonal functions $\psi_{i,m,\delta}$, $i=1,\dots,m$, with $\|\psi_{i,m,\delta}\|_{L^2}=\delta$, such that the collection of hypercube vertices
$$\mathcal H(m; f_0,(\psi_i)) = \left\{h=f_0+\sum_{i=1}^m \xi_i \psi_{i,m,\delta},\quad \xi_i\in\{0,1\}\right\}$$
is embed in $\mathcal F$, i.e., $\mathcal H(m; f_0,(\psi_i))\subset \mathcal F$.
\end{definition}
Note that orthogonality here means $\int \psi_{i,m,\delta}\psi_{j,m,\delta}=0$ for $i\neq j$. Moreover, it should be emphasized that $\mathcal H$ contains only the vertices of the hypercube.
\begin{definition}\label{hypercube}
A function class $\mathcal F$ \emph{contains a copy of $\ell_0^p$} if $\mathcal F$ contains embedded orthogonal hypercubes of dimension $m(\delta)$ and side $\delta$, and if, for some sequence $\delta_k\to 0$, and some constant $C>0$:
$$m(\delta_k)\geq C\delta_k^{-p}, \quad k=k_0,k_0+1,\dots$$
and also $\{m(\delta_k)\}_{k\geq k_0}=n_0+\mathbb N$ for some positive integer $n_0$.
\end{definition}
This definition makes clear that if the side length of the hypercube is small, then the dimension should be sufficiently large. The last condition requires the sequence $\{m(\delta_k)\}$ not to be too sparse, a point that was overlooked in the original paper \cite{donoho}.
Now we prove that $\text{STAR}^\beta(C)$ contains a copy of $\ell_0^p$ for $p=2/(\beta+1)$. In short, we construct $m$-dimensional hypercubes consisting of $m$ \quotes{flower petals}; since the petals have disjoint interiors, they are orthogonal with respect to the inner product of $L^2([0,1]^2)$.
\begin{theorem}\label{secondstep}
The function class $\text{\textup{STAR}}^\beta(C)$ contains a copy of $\ell_0^p$ for $p=2/(\beta+1)$.
\end{theorem}
\begin{proof}
Let $\varphi\geq 0$ be a smooth function with compact support contained in $[0,2\pi]$ satisfying $\varphi(0)=\varphi(2\pi)$. For example one can choose $\varphi(\theta)=\sin\dfrac\theta 2$ on $[0,2\pi]$. We will use this function as a generator of a copy of $\ell_0^p$ in $\text{STAR}^\beta(C)$.
We see for $a,b\in[0,2\pi]$,
$$|\varphi'(a)-\varphi'(b)|=\frac12\left|\cos\frac a2-\cos\frac b2\right|\leq \dfrac14|a-b|\leq R|a-b|^{\beta-1}$$
for some number $R$ depending only on $\beta$, hence $\|\varphi\|_{\dot\Lambda^\beta}$ exists and is bounded above by $R$. We also have $\|\varphi\|_{L^1}=\int_0^{2\pi} \sin\frac\theta2\, d\theta = 4$. For $\delta>0$ choose
$$m=m(\delta):=\left\lfloor\left(\dfrac{\delta^2}C\dfrac{\|\varphi\|_{\dot\Lambda^\beta}}{\|\varphi\|_{L^1}}\right)^{-1/(\beta+1)}\right\rfloor,\quad A=A(\delta, C):=\dfrac{\delta^2 m^{\beta+1}}{\|\varphi\|_{L^1}}$$
as in \cite{donoho2}. We define $$\varphi_{i,m}(t) = Am^{-\beta} \varphi(mt-2\pi i)\quad\text{ for }i=1,2,\dots,m$$
so $\varphi_{i,m}$ is only supported in $[2\pi i/m, 2\pi (i+1)/m]$. This implies $\{\varphi_{i,m}\}_{i=1}^m$ is orthogonal. We also have
\begin{align*}|\varphi_{i,m}'(a)-\varphi_{i,m}'(b)|&=Am^{1-\beta}|\varphi'(ma-2\pi i)-\varphi'(mb-2\pi i)|\\
&\leq Am^{1-\beta} \|\varphi\|_{\dot\Lambda^\beta} |ma-mb|^{\beta-1}\\
&=A\|\varphi\|_{\dot\Lambda^\beta} |a-b|^{\beta-1}\end{align*}
This shows $\varphi_{i,m}$ satisfies the boundary regularity condition, and from the settings of $m$ and $A$, $$\|\varphi_{i,m}\|_{\dot\Lambda^\beta}\leq A\|\varphi\|_{_{\dot\Lambda^\beta}}\leq C.$$
Fix an origin at $b_0=(1/2,1/2)$, let $r_0=1/4$ and $f_0=1_{\{r\leq r_0\}}$ be the characteristic function of the circle of radius $r_0$ centered at $b_0$. Then we define
$$\psi_{i,m}=1_{\{r\leq \varphi_{i,m}+r_0\}}-f_0, \quad i=1,2,\dots,m$$
where $1_{\{r\leq \varphi_{i,m}+r_0\}}$ is the characteristic function of the region with radius function $\varphi_{i,m}+r_0$ centered at $b_0$.
The graph of $\psi_{i,m}$ can be treated as one of the $m$ petals of a flower centered at $b_0$.
The collection $\{\psi_{i,m}\}_{i=1}^m$ is orthogonal in $L^2([0,1]^2)$ because the functions $\{\varphi_{i,m}\}_{i=1}^m$ have disjoint supports in $[0,2\pi]$.
For $\psi_{i,m}$ to define an element of $\text{STAR}^\beta(C)$, its radius function should lie between $1/10$ and $1/2$; hence we require $\varphi_{i,m}\leq 1/4$ as well.
We show this can be true if $\delta$ is small enough. For $t\in\mathbb R$,
\begin{align*}
\varphi_{i,m}(t)&\leq Am^{-\beta}\\
&=\dfrac{\delta^2 m}{\|\varphi\|_{L^1}}\\
&\leq C' \delta^{\frac{2\beta}{\beta+1}}\leq \frac14
\end{align*}
for sufficiently small $\delta$, and hence for every smaller $\delta$.
For each bitstring $\xi=(\xi_1,\dots,\xi_m)\in \{0,1\}^m$ of length $m$, we have a correspondence between radius functions and characteristic functions:
$$r_\xi = \dfrac14+\sum_{i=1}^m \xi_i \varphi_{i,m}\iff f_\xi =f_0+\sum_{i=1}^m \xi_i\psi_{i,m},$$
which means if the region centered at $b_0=(1/2,1/2)$ is enclosed by the radius function $r_\xi$, then its characteristic function is $f_\xi$.
Now we know $\{\psi_{i,m}\}$ is a collection of orthogonal functions because they are supported on disjoint regions; each has norm $\Delta = \|\psi_{i,m}\|_2$, independent of $i$ and satisfying
$$\Delta^2 = \|\psi_{i,m}\|_2^2 = Am^{-\beta} \int_{2\pi i/m}^{2\pi(i+1)/m}\varphi(mt-2\pi i)\,dt=Am^{-\beta-1} \|\varphi\|_{L^1}=\delta^2,$$
hence $\Delta=\delta$. We have verified that the functions $\{\psi_{i,m}\}$ are orthogonal, have norm $\delta$, and satisfy the boundary regularity condition. Therefore, the hypercube $\mathcal H(m; f_0, (\psi_{i,m}))$ embeds in $\text{STAR}^\beta(C)$.
Now we choose $\delta_0>0$ small enough such that $C'\delta_0^{\frac{2\beta}{\beta+1}}\leq1/4$ and also
$$\left\lfloor\left(\dfrac{\delta_0^2}C\dfrac{\|\varphi\|_{\dot\Lambda^\beta}}{\|\varphi\|_{L^1}}\right)^{-1/(\beta+1)}\right\rfloor\geq \frac12 \left(\dfrac{\delta_0^2}C\dfrac{\|\varphi\|_{\dot\Lambda^\beta}}{\|\varphi\|_{L^1}}\right)^{-1/(\beta+1)},$$
Note that the right-hand side above increases as $\delta\to 0$. For $\delta\in(0,\delta_0)$,
$$m(\delta)\geq \frac12 \left(\dfrac{\delta^2}C\dfrac{\|\varphi\|_{\dot\Lambda^\beta}}{\|\varphi\|_{L^1}}\right)^{-1/(\beta+1)}=C_0 \delta^{-\frac2{\beta+1}}$$
for some constant $C_0$ depending only on $\beta, C$ and the choice of $\varphi$. We also choose a sequence $\{\delta_k\}\subset (0,\delta_0)$ such that $m(\delta_{k+1})-m(\delta_k)=1$ for all $k\geq k_0$ sufficiently large, as it is clearly possible from the definition of $m(\delta)$.
\end{proof}
Now we prove an upper bound for $\gamma^*(\text{STAR}^\beta(C))$.
\begin{theorem}\label{thirdstep}
If a function class $\mathcal F$ contains a copy of $\ell_0^p$, $p<2$, and there is $A>0$ such that $\|f\|_{L^2}\leq A$ for all $f\in\mathcal F$, then $\gamma^*(\mathcal F)\leq \dfrac{2-p}{2p}$.
We assume $\mathcal F$ has an orthonormal countable basis $\{\varphi_i\}$ which are not necessarily members of $\mathcal F$, in the sense that $\|\varphi_i\|_{L^2}=1, \int \varphi_i\varphi_j=0$ for $i\neq j$, and that every $f\in\mathcal F$ can be expressed as
$$f(x)=\sum_{i=1}^\infty \theta_i \varphi_i(x)$$
for real numbers $\{\theta_i\}$.
\end{theorem}
\begin{proof}
For $\gamma>(2-p)/(2p)$, we show that for $\varepsilon\in(0,1/2)$, the minimax code length $L(\varepsilon,\mathcal F)\notin \mathcal O(\varepsilon^{-1/\gamma})$ as $\varepsilon\to 0$. This can be done by contradiction. The main idea is to show that if we assume $L(\varepsilon,\mathcal F)\in \mathcal O(\varepsilon^{-1/\gamma})$, then the encoding length (as defined in Definition \ref{optimalrate}) will be too small to preserve enough information about functions in $\mathcal F$. The encoding-decoding process is thus unable to achieve error bound $\sup_{f\in\mathcal F}\|f-D(E(f))\|_{L^2}\leq \varepsilon$ \textit{\textbf{no matter how good}} the encoding-decoding process is. This can be treated as an analogue of a result from rate-distortion theory.
Under the assumption, $\mathcal F$ has an embedded hypercube $\mathcal H$ of side $\delta$ (which can be chosen arbitrarily small) and dimension $m:=m(\delta)\geq C \delta^{-p}$ for some fixed $C>0$. Let $\xi=(\xi_1, \xi_2,\dots,\xi_m)\in \{0,1\}^m$; then each vertex of the hypercube $\mathcal H(m; f_0, (\psi_{i,m}))$ can be identified with
$$h=f_0+\sum_{i=1}^m \xi_i \psi_{i,m}.$$
Suppose now we have a method of representing functions $f\in\mathcal F$ approximately by $R$ bits; that is, for each $f\in \mathcal F$ we define $e:=\text{Enc}(f)\in \{0,1\}^R$, together with a suitable decoding process mapping $e$ to $\widetilde f$.
Now we use the above process to encode a vertex $h$, obtaining a point $\widetilde h$ which is not necessarily a vertex of the hypercube. Let $\widehat h\in\mathcal H$ be the closest vertex to $\widetilde h$ in $L^2$ norm; then we can assign a bitstring $\widehat\xi$ to $\widehat h$, and the whole process from $\xi$ to $\widehat \xi$ can be described as $\widehat \xi = \text{Dec}(\text{Enc}(\xi))$.
By orthogonality of hypercube, we have
\begin{align*}
\|h-\widehat h\|_2^2 &= \delta^2 \sum_{i=1}^m (\xi_i-\widehat\xi_i)^2\\
&=\delta^2 \text{Dist}(\xi, \widehat\xi).\\
\therefore \max_{h\in\mathcal H} \{\|h-\widehat h\|_2^2 \} &\geq\text{Average}_{h\in\mathcal H} \{\|h-\widehat h\|_2^2\}\\
&=\delta^2 \text{Average}_{h\in\mathcal H}(\text{Dist}(\xi, \widehat \xi))\\
&\geq \delta^2 D_m(R)
\end{align*}
where the last result is from rate-distortion theory.
For the sake of contradiction, we now fix $\gamma>(2-p)/(2p)$ and prove that, for every pair of encoder-decoder $(E,D)\in\mathfrak E^\ell\times \mathfrak D^\ell$ with $\ell \in \mathcal O(\varepsilon^{-1/\gamma})$, the uniform error $\sup_{f\in\mathcal F} \|D(E(f))-f\|_{L^2}$ is \textbf{not} bounded above by $\varepsilon$. Note that we consider every encoding process that chooses $n$ terms among the first $\pi(n)$ terms, for $\pi$ a fixed polynomial. The upshot is that the error is not $\mathcal O(\varepsilon)$ for any decoding process.
Note that every $f\in \mathcal F$ has a corresponding basis representation
$$f=\sum_{i=1}^\infty \theta_i\varphi_i.$$
Using a polynomial depth-search encoding process, we choose $I_n\subset \{1,2,\dots,\pi(n)\}$ containing $n$ integers, then select the coefficients $\{\theta_i\}_{i\in I_n}$, i.e. the sum $\sum_{i\in I_n} \theta_i\varphi_i$, to approximate $f$. However, this is not yet how we encode $f$, as the coefficients $\{\theta_i\}_{i\in I_n}$ are not discrete. Since $|\theta_i|\leq \|f\|_2\leq A$, we choose the number $\widetilde \theta_i$ in $[-A,A]\cap \eta\mathbb Z$, with $\eta=n^{-2/p}$, that is closest to $\theta_i$. The encoding-decoding process is then symbolically defined by $D(E(f))=\widetilde f=\sum_{i\in I_n} \widetilde \theta_i \varphi_i$, as the coefficients $\widetilde\theta_i$ can be naturally decoded by the same rule. To write $E(f)$ explicitly as a bitstring, we first encode the indices: there are $n$ of them, and each index can be encoded by at most $\log_2\pi(n)=\mathcal O(\log n)$ bits, so $I_n$ is encoded using $\mathcal O(n\log n)$ bits. Next, since $[-A,A]\cap \eta\mathbb Z$ contains at most $\lceil2An^{2/p}\rceil$ elements, each coefficient can be encoded using $\log_2\lceil2An^{2/p}\rceil=\mathcal O(\log n)$ bits, hence $\mathcal O(n\log n)$ bits suffice for the $n$ coefficients. Finally, we define $E(f)$ to be this bitstring of length $\ell$ with
$$\ell\leq R(n):=n\log_2\pi(n)+n\log_2\lceil2An^{2/p}\rceil\leq C_1 n\log n$$
(in fact $R(n)$ and $n\log n$ have the same order).
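The bit accounting above can be made concrete; the toy sketch below (our illustration, with the assumed search-depth polynomial $\pi(n)=n^2$ and the simplifying choices $A=p=1$) tallies the code length and exhibits its $\mathcal O(n\log n)$ growth:
\begin{verbatim}
import math

def code_length(n, A=1.0, p=1.0):
    pi_n = n ** 2                                  # assumed pi(n) = n^2
    bits_idx = n * math.ceil(math.log2(pi_n))      # n indices in {1,...,pi(n)}
    grid = math.ceil(2 * A * n ** (2 / p))         # size of [-A,A] on the eta-grid
    bits_coef = n * math.ceil(math.log2(grid))     # n quantized coefficients
    return bits_idx + bits_coef

for n in (10, 100, 1000):
    print(n, code_length(n))                       # grows like n log n
\end{verbatim}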
Let $h\in \mathcal H$ be a vertex, using the encoding process above we have $\widetilde h$. Let $\widehat h\in \mathcal H$ be the closest vertex to $\widetilde h$. By rate-distortion theory, for $\rho<1/2$ if $n$ obeys $R(n)\leq \rho m$, then
$$\max_{h\in\mathcal H} \{\|h-\widehat h\|_2^2\}\geq \delta^2 D_m(R(n))\geq \delta^2 D_1(\rho)m.$$
By construction, we have $\|\widetilde h - h\|_2\geq \|\widetilde h-\widehat h\|_2$, thus by triangle inequality we have
$$\|\widehat h-h\|_2\leq \|\widehat h-\widetilde h\|_2+\|\widetilde h-h\|_2\leq 2\|\widetilde h -h\|_2.$$
Now we refer to Definition \ref{optimalrate}. Estimation gives
\begin{align*}
\sup_{f\in\mathcal F} \{\|f-D(E(f))\|_2^2\}&\geq \max_{h\in\mathcal H}\{\|h-\widetilde h\|_2^2\}\\
&\geq \dfrac12 \max_{h\in\mathcal H}\{\|h-\widehat h\|_2^2\}\\
&\geq \dfrac12\delta^2 D_1(\rho)m.
\end{align*}
Now assume $L(\varepsilon,\mathcal F)\in \mathcal O(\varepsilon^{-1/\gamma})$. Fix $\rho<1/2$; we choose $m:=m_k$ to be the smallest admissible dimension (see Definition \ref{hypercube}) and $\delta:=\delta_k$ such that $R(n)\leq C_1n\log n\leq \rho m$, and hence $m\geq C\delta^{-p}$. From the denseness of dimensions required in Definition \ref{hypercube}, we also have $\rho(m-1)<C_1n\log n\leq \rho m$. We then have $\delta^2m \geq m\cdot C^{2/p} m^{-2/p}=C^{2/p} m^{-(2-p)/p}\geq C'\left(\dfrac{n\log n+\rho}\rho\right)^{-\frac{2-p}p}$. Notice that if we only knew $C_1 n\log n\leq \rho m$, the last inequality would not follow; it is therefore important that $m$ can be any integer larger than a fixed integer, not merely sufficiently large, since the latter could leave the admissible dimensions too widely separated (compare Definition \ref{hypercube} with the original definition of the hypercube in \cite{donoho2}).
This shows
$$\sup_{f\in\mathcal F} \{\|f-D(E(f))\|_2\}\geq \sqrt{C'} (n\log n+\rho)^{-\frac{2-p}{2p}}\geq C'' \varepsilon^{\frac{2-p}{2p\gamma}}\quad\text{ for small }\varepsilon.$$
Now $\dfrac{2-p}{2p\gamma}<1$, hence the term $\varepsilon^{\frac{2-p}{2p\gamma}}$ is strictly larger than $\varepsilon$ itself as $\varepsilon\to 0$, contradicting $\sup_{f\in\mathcal F}\{\|f-D(E(f))\|_2\}\leq \varepsilon$ (recall the definition of $L(\varepsilon,\text{STAR}^\beta(C))$). We deduce that $L(\varepsilon,\mathcal F)\notin \mathcal O(\varepsilon^{-1/\gamma})$ for every $\gamma>\dfrac{2-p}{2p}$, hence $\gamma^*(\mathcal F)\leq \dfrac{2-p}{2p}$.
\end{proof}
In the original proof, the author did not use the notion of optimal exponent; instead he used another quantity called the \quotes{optimal degree of sparsity}. It concerns the representation $f=\sum_{i=1}^\infty \theta_i \varphi_i$ and whether one can tolerate only a few of the $\theta_i$ being significantly large while all the others are negligible, hence the word \enquote{sparsity}. Our proof above transfers the result on the optimal degree of sparsity to the optimal exponent.
Although not mentioned in \cite{donoho2}, it is important to emphasize that the functions in $\text{STAR}^\beta(C)$ can be expanded in an orthonormal basis $\{\varphi_i\}$, whose elements are not members of the function class itself.
Evidently, an element with $\|\cdot \|_{L^2}=1$ cannot belong to $\text{STAR}^\beta(C)$.
This is not a problem, as we only extract information about the $\theta_i$ in the decomposition $f=\sum \theta_i \varphi_i$. We can also easily find an orthonormal basis of $L^2([0,1]^2)$:
first note that $L^2([0,1]^2)$ is a separable linear space (\cite{brezis}, Theorem 4.13), hence it has a countable dense subset $\{\alpha_i\}_{i\in\mathbb N}$;
then the Gram-Schmidt process yields an orthonormal basis $\{\varphi_i\}_{i\in\mathbb N}$.
Any element of $L^2([0,1]^2)$ can then be represented as a countable linear combination of elements of this orthonormal basis.
The last step is to show that the optimal exponent of $\text{STAR}^\beta(C)$ is bounded below by $\beta/2$. Since the proof is constructive, it is very detailed. We first state the result.
\begin{theorem}\label{fourthstep}
The optimal exponent of $\text{STAR}^\beta(C)$ obeys the lower bound $\gamma^*(\text{STAR}^\beta(C))\geq \dfrac\beta2$.
\end{theorem}
We introduce some notation; then we directly apply Theorem 8.4 from \cite{donoho3}, with modifications, in order to prove Theorem \ref{fourthstep} of this paper.
\begin{enumerate}[1.]
\item \textbf{\textit{Edgels and Edgelets}}
Let $m=2^k$ be a dyadic integer; we define the \textbf{latticework} $\mathcal L(m)$ to be the union of all dyadic squares of edge length $1/m$ in $[0,1]^2$ whose vertices are integer multiples of $1/m$.
Given two points $v_1,v_2\in[0,1]^2$, an \textbf{edgel} is the line segment $e=\overline{v_1v_2}$.
Given $n=2^J$, and $m=2^j$ with $0\leq j\leq J$, we want a countable collection of edgels joining points on the boundaries of squares, because freely joining lines would result in an uncountable set of edges. Consider the collection of all dyadic vertices in $\mathcal L(n)$, of which there are $(n+1)^2$. Constructing an edgel from each pair of vertices gives a collection of $\binom{(n+1)^2}2\in\mathcal O(n^4)$ elements, which is too big; we usually want a smaller collection.
Let $\delta=2^{-J-K}$ be an even finer scale. On the boundary of each dyadic square $S$ of side length $1/m$, starting from the upper-left vertex, we pin down vertices $v_{i,S}$ in clockwise fashion, with consecutive points at distance $\delta$. There are $M_j=4\times 2^{J+K-j}$ such vertices on $S$. Let
$$E_\delta(S) = \{e=\overline{v_{i,S}v_{j,S}}, 0\leq i,j< M_j\},$$
then $E_\delta(S)$ contains $\binom{M_j}2$ edgels. We define the set of \textbf{edgelets} $\mathcal E(n,\delta)$ containing all possible edgels in some $E_\delta(S)$ for some $0\leq j\leq J$. The cardinality of $\mathcal E(n,\delta)$ is given as
$$\#\mathcal E(n,\delta)=\sum_{j=0}^J 4^j\binom{M_j}2\leq \sum_{j=0}^J 4^j\dfrac{M_j^2}2=\sum_{j=0}^J 4^j\cdot 8\cdot 2^{2J+2K-2j}=8(J+1)\delta^{-2}.$$
In the case $\delta=2^{-J}=1/n$, we get $\#\mathcal E(n,\delta)\leq 8(\log_2n+1)n^2\in \mathcal O(n^2\log n)$. Hence roughly $\mathcal O(n^2)$ elements suffice as a basis for edgel approximation. Although each edgelet in $\mathcal E(n,\delta)$ is contained in a small dyadic square, the collection is manageable and still expresses many rotations, locations and anisotropic features. Moreover, we can chain multiple edgelets together to approximate longer edgels.
\item \textbf{\textit{Recursive Dyadic Partition (RDP)}}
This is a discussion of an adaptive approach used to approximate functions in $\text{STAR}^\beta(C)$. In general, a simple closed curve in $[0,1]^2$ may have some very smooth parts and some ill-behaved parts. We therefore hope to spend the least resources approximating them: large dyadic squares for the smooth parts, and finer-scale dyadic squares for the parts with more perturbation.
We now define the \textbf{Recursive Dyadic Partition (RDP)}. The trivial RDP is $\{[0,1]^2\}$. If $\mathcal P$ is an existing RDP, we may choose a square $S\in \mathcal P$ and perform a quad-split, partitioning it into four smaller dyadic squares (called children); a finer RDP is constructed by adding these four squares to $\mathcal P$ and removing the original parent. We can also construct a coarser RDP from an existing one by merging four adjacent dyadic squares (called siblings) into one larger square (their ancestor), adding the larger square to the RDP and removing the four of them.
We can think of RDP as a quadtree $Q$, the bottom-most squares are those elements in the RDP, and in between the tree are ancestors of the elements in the RDP.
Now we bring the concept of edgelets into RDPs. For $n=2^J$ and $\delta=2^{-J-K}$, we define an \textbf{Edgelet-decorated RDP (ED-RDP)} to be an RDP $\mathcal P$ in which each $S\in\mathcal P$ may be split by an edgelet in $\mathcal E(n,\delta)$. A typical element of an ED-RDP is either a dyadic square (unsplit) or a wedgelet resulting from a split square. We also assume that if a square is split into two wedgelets, the wedgelets cannot be split again. Since each square in an ED-RDP can be split into at most two wedgelets, if $w$ is a wedgelet already in $\overline{\mathcal P}$, then no other wedgelet can be contained in the same square where $w$ lives: if two wedgelets in $\overline{\mathcal P}$ formed a square, there would have been no point in splitting that square in the first place. We denote the collection of all ED-RDPs by ED-RDP$(n,\delta)$.
\item \textbf{Averages as $n\times n$ array}
Given an ED-RDP $\overline{\mathcal P}\in$ ED-RDP$(n,\delta)$, we choose a square/wedgelet $P\in \overline{\mathcal P}$ and let $1_P$ be its characteristic function on $[0,1]^2$. We define $\widetilde{1_P}$ to be the $n\times n$ array whose $(i,j)$ entry equals the average of $1_P$ over the square $S_{i,j}=[(j-1)/n,j/n]\times [(i-1)/n,i/n]$. Formally,
$$(\widetilde{1_P})_{i,j}=n^2\int_{S_{i,j}} 1_P,$$
and we define $L(\overline{\mathcal P})$ to be the linear space of $n\times n$ arrays spanned by the collection $\{\widetilde{1_P}\}_{P\in\overline{\mathcal P}}$.
We can also regard $\widetilde 1_P$ as a function on $[0,1]^2$, which takes $n^2$ (possibly) different values on each dyadic square, then $L(\overline{\mathcal P})$ can be regarded as a subspace of functions in $L^2([0,1]^2)$.
By construction of $\overline{\mathcal P}$, two different members $P_1,P_2$ have disjoint interiors, and if $S$ is a dyadic square of side length $1/n$ whose interior meets $P_1$, then $S$ must be disjoint from $P_2$.
This shows two different $n\times n$ arrays $\widetilde{1_{P_1}}, \widetilde{1_{P_2}}$ are orthogonal, with inner product defined by $(f,g)=\int_{[0,1]^2}fg$.
For $f\in\text{STAR}^\beta(C)$ defined on $[0,1]^2$ and $P\in\overline{\mathcal P}$, we define the projection coefficient by the integral $$f_P:=\int_{[0,1]^2}\dfrac{f\widetilde{1_P}}{(\widetilde{1_P},\widetilde{1_P})}$$
Then we define
$$\text{Ave}(f| \overline{\mathcal P})=\sum_{P\in\overline{\mathcal P}} f_P \widetilde{1_P}$$
which is the least-squares projection of $f$ onto the linear space $L(\overline{\mathcal P})$. Again, we may regard $\text{Ave}(f|\overline{\mathcal P})$ as an array or as a function on $[0,1]^2$; a small numerical sketch of these cell averages and projection coefficients follows this list.
\end{enumerate}
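As promised, here is a small numerical sketch of ours (the quarter disc and the test function are arbitrary choices) of the cell-average array $\widetilde{1_P}$ and the projection coefficient $f_P$:
\begin{verbatim}
import numpy as np

n, s = 16, 8                        # n x n cells, s x s samples per cell
t = (np.arange(n * s) + 0.5) / (n * s)
X, Y = np.meshgrid(t, t)
P = (X ** 2 + Y ** 2 <= 0.25).astype(float)   # indicator of a quarter disc
avg = P.reshape(n, s, n, s).mean(axis=(1, 3)) # the n x n array for 1_P

f = X * Y                                     # a test function f
f_avg = f.reshape(n, s, n, s).mean(axis=(1, 3))
# f_P = (f, 1_P~) / (1_P~, 1_P~); the common 1/n^2 cell factor cancels
f_P = (f_avg * avg).sum() / (avg * avg).sum()
print(f_P)
\end{verbatim}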
Donoho proved the following theorem, noted as Lemma 8.4 in his paper \cite{donoho3}.
\begin{theorem}\label{donohomain}
Let $1<\beta\leq 2$, $0<C<\infty$, $n=2^J$, $\delta=2^{-J-K}$. For each $f\in \text{STAR}^\beta(C)$ there exists a corresponding RDP $\mathcal P_f$ with fewer than $n'=K_1\cdot n+K_2$ elements, and a corresponding ED-RDP$(n,\delta)$, denoted $\overline{\mathcal P}_f$, such that
$$\|f-\text{Ave}(f|\overline{\mathcal P}_f)\|_{L^2}^2\leq K_\beta C_\beta n^{-\beta}+\delta,$$
where $K_1,K_2$ are constants and $K_\beta$ depends on $\beta$ only.
\end{theorem}
We now apply this result to prove Theorem \ref{fourthstep}, with some modification.
\begin{proof}[Proof for Theorem \ref{fourthstep}]
The optimal exponent is based on an encoding-decoding process which encodes the function into a bitstring. Therefore, we want to extract a countable dictionary based on $\{\widetilde{1_P}\}_{P\in \overline{\mathcal P}}$, take finite linear combinations subject to polynomial search depth, and then discretize the coefficients to form an approximation.
The approximant $\text{Ave}(f|\overline{\mathcal P_f})$ is a linear combination of $\{\widetilde 1_P\}_{P\in\overline{\mathcal P}_f}$. As noted before, this is an orthogonal system; we can turn it into an orthonormal system $\{\phi_P\}_{P\in\overline{\mathcal P}_f}$ by defining
$$\phi_P = \dfrac{\widetilde 1_P}{\sqrt{(\widetilde 1_P, \widetilde 1_P)}}=\dfrac{\widetilde 1_P}{\sqrt{\int_{[0,1]^2}(\widetilde 1_P)^2}},$$
and it is convenient to use the notation $\|\widetilde 1_P\|_{L^2} = \sqrt{\int_{[0,1]^2} (\widetilde 1_P)^2}$. Then we have the below orthonormal presentation
$$\text{Ave}(f|\overline{\mathcal P}_f)=\sum_{P\in \overline{\mathcal P}_f}f_P\|\widetilde 1_P\|_{L^2} \phi_P.$$
By the classical Cauchy-Schwarz inequality, we have
$$\left|f_P\|\widetilde 1_P\|_{L^2}\right|=\dfrac1{\|\widetilde 1_P\|_{L^2}} \left|\int_{[0,1]^2} f\widetilde 1_P\right|\leq \|f\|_{L^2}\leq 1.$$
Now we choose $\eta=n^{-2}$ and define $\theta_P$ to be the integer multiple of $\eta$ closest to $f_P\|\widetilde 1_P\|_{L^2}$; in case of a tie, we take $\theta_P$ closer to zero. This discretization gives $\theta_P\in[-1-\eta,1+\eta]\cap \eta\mathbb Z$, and the latter set has at most $3\eta^{-1}$ elements when $\eta$ is small enough. We define the discrete, slightly distorted approximant of $f$ as
$$\widetilde f = \sum_{P\in \overline{\mathcal P}_f}\theta_P \phi_P.$$
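The quantization rule just described (nearest point of $\eta\mathbb Z$, ties broken toward zero) can be written out explicitly; a small sketch of ours:
\begin{verbatim}
import numpy as np

def quantize(c, eta):
    # round each entry of c to the nearest point of eta*Z; on a tie,
    # take the candidate closer to zero
    lo = np.floor(c / eta) * eta
    hi = lo + eta
    pick_lo = np.abs(c - lo) < np.abs(c - hi)
    tie = np.isclose(np.abs(c - lo), np.abs(c - hi))
    pick_lo = np.where(tie, np.abs(lo) <= np.abs(hi), pick_lo)
    return np.where(pick_lo, lo, hi)

eta = 1e-4                          # eta = n^{-2} with n = 100
print(quantize(np.array([0.33337, -0.00005, 0.99995]), eta))
\end{verbatim}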
Then we find that when $J=K$, that is $\delta=1/n^2$, we have
\begin{align*}
\|f-\widetilde f\|_{L^2}&\leq \|f-\text{Ave}(f|\overline{\mathcal P}_f)\|_{L^2}+\|\text{Ave}(f|\overline{\mathcal P}_f)-\widetilde f\|_{L^2}\\
&\leq \sqrt{K_\beta C_\beta n^{-\beta}+\delta}+\sum_{P\in \overline{\mathcal P}_f} |\theta_P-f_P\|\widetilde 1_P\|_{L^2}|\\
&\leq K_\beta^{1/2}C_\beta^{1/2}n^{-\beta/2} + \delta^{1/2} +\frac12 (\# \overline{\mathcal P}_f)\eta\\
&\leq K_\beta^{1/2}C_\beta^{1/2}n^{-\beta/2} + n^{-1} + K_3\cdot n^{-1}
\end{align*}
where $K_3$ is some constant. Notice that the term with $n^{-\beta/2}$ dominates the whole expression.
Now we fix $\gamma<\beta/2$. Given $\varepsilon>0$, we exhibit a pair $(E_\ell,D_\ell)$ of encoding-decoding processes whose encoding length $\ell$ obeys the order $\mathcal O(\varepsilon^{-1/\gamma})$ and whose error over all $f\in \text{STAR}^\beta(C)$ satisfies $$\sup_{f\in\text{STAR}^\beta(C)}\|f-D_\ell(E_\ell(f))\|_{L^2}\leq \varepsilon.$$
This will prove $L(\varepsilon,\text{STAR}^\beta(C))\leq \inf_{(E,D)}\ell\in \mathcal O(\varepsilon^{-1/\gamma})$. At this point only $\varepsilon>0$ is fixed. We choose the smallest positive integer $J$ such that $n=2^J$ satisfies
$$(K_\beta C_\beta)^{1/2} n^{-\beta/2}+(1+K_3)n^{-1}\leq \varepsilon.$$
This implies the above inequality fails when one substitutes $n$ by $n/2$. Thus
\begin{align*}
(2^{\beta/2}(K_\beta C_\beta)^{1/2}+2(1+K_3))n^{-\beta/2}&\geq (K_\beta C_\beta)^{1/2} \left(\frac n2\right)^{-\beta/2}+(1+K_3)\left(\dfrac n2\right)^{-1}\\
&>\varepsilon,
\end{align*}
which means there is a constant $Q(\beta,C)$ only depends on $\beta$ and $C$ such that
$$n\leq Q(\beta,C)\varepsilon^{-2/\beta}.$$
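Concretely, this parameter choice amounts to scanning dyadic $n$; the toy loop below (illustrative only, with placeholder constants $K_\beta C_\beta = K_3 = 1$) returns the smallest admissible $n=2^J$:
\begin{verbatim}
import math

def choose_n(eps, beta, KC=1.0, K3=1.0):
    # smallest n = 2^J with sqrt(KC) n^{-beta/2} + (1+K3) n^{-1} <= eps
    J = 0
    while math.sqrt(KC) * 2 ** (-J * beta / 2) + (1 + K3) * 2 ** (-J) > eps:
        J += 1
    return 2 ** J

print(choose_n(1e-3, beta=1.5))     # n of order eps^{-2/beta}
\end{verbatim}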
Next, we describe an encoding process that records $\widetilde f$ exactly. Note that $J$ is determined by $\varepsilon$ alone; we encode $J$ into the bitstring using $\log_2 J=\log_2\log_2 n$ bits.
To record which orthonormal elements $\phi_P$ are used from $\overline{\mathcal P}_f\in \text{ED-RDP}(n,\delta)$, recall that the collection of all edgelets of dyadic squares with side length $\geq 1/n$, drawn at the $\delta=1/n^2$ equally spaced levels, has cardinality
$$\#\mathcal E(n,\delta)\leq 8(J+1)\delta^{-2}\leq 8(\log_2n+1)n^4\leq 16n^5.$$
This is also our required polynomial search depth; we may index each element of $\mathcal E(n,\delta)$ by an integer up to $16n^5$. Moreover, we choose at most $\# \overline{\mathcal P}_f\leq K_3\cdot n$ elements to approximate $f$, and each index requires $\log_2 (16n^5)$ bits to encode. Therefore, we record the positions of the chosen elements by encoding their indices in order, using at most $K_3\cdot n\log_2(16n^5)$ bits. Then we record the coefficient $\theta_P$ of each chosen orthonormal element $\phi_P$; recall each coefficient can be encoded using $\log_2(3\eta^{-1})$ bits, hence at most $K_3\cdot n\log_2(3\eta^{-1})=K_3\cdot n\log_2(3n^2)$ bits encode all coefficients. It is clear that we can decode this bitstring back to $\widetilde f$ in the natural way. We conclude that the above process describes an encoding-decoding pair $(E_\ell,D_\ell)$ with $D_\ell(E_\ell(f))=\widetilde f$ for all $f\in\text{STAR}^\beta(C)$, where the encoding length obeys
$$\ell \leq K_3\cdot n\log_2(16n^5)+K_3\cdot n\log_2(3n^2) + \log_2\log_2n\leq K_4 n\log n $$
for some constant $K_4$ independent of $\beta$ and $C$. Now we find that
\begin{align*}L(\varepsilon,\text{STAR}^\beta(C))\leq K_4n\log n&\leq K_4 Q(\beta,C)\varepsilon^{-2/\beta} \log(Q(\beta,C)\varepsilon^{-2/\beta})\in \mathcal O(\varepsilon^{-1/\gamma}).\end{align*}
This completes the proof.
\end{proof}
\section{Conclusion}
In this paper, we focus on the theoretical side of neural networks: we study the problem of neural network approximation by emphasizing the networks' intrinsic limitations, rather than analyzing learning algorithms such as stochastic gradient descent and adaptive moment estimation. Furthermore, we found that a meaningful discussion of the approximation quality of neural networks becomes possible once we impose suitable restrictions on the network settings.
This leads us to the effective best $M$-term/$M$-edge approximation rates.
To link neural networks to familiar objects such as the Fourier series, we define (effective) representability of representation systems by neural networks, and then transfer existing knowledge about representation systems to neural networks. This transfer is part of what makes neural networks such a powerful tool in modern technology.
Finally, we present two applications of the preceding settings and theoretical results. The first is the approximation of B-spline functions by neural networks, where the B-spline functions are in turn used to generate B-spline curves. We have shown that the class of B-spline functions is effectively representable by neural networks, which enables us to reconstruct B-spline curves with acceptable distortion. The second applies rate-distortion theory and the discrete wedgelet construction to prove that the class of $\beta$ cartoon-like functions has a finite optimal exponent. These two practical adaptations of the theory show that the theoretical and practical aspects of neural networks can be combined, suggesting that examining the theoretical limitations of gradient descent, adaptive moment estimation, and other learning algorithms might be possible after all. The results stated in this paper can therefore be used to probe the ultimate limitations of neural networks.
\section{Introduction}
\label{section:intro}
The Learning with Errors (LWE) problem has served as a foundation for many lattice-based cryptographic schemes~\cite{peikert2015decade}. Informally, LWE asks one to solve noisy random linear equations. To be more precise,
the goal is to find a secret vector $\bm s \in \mathbb{Z}_q^n$
given polynomially many samples of the form $(\bm a_i, b_i)$, where $\bm a_i \in \mathbb{Z}_q^n$ is uniformly chosen and $b_i \approx \langle \bm a_i, \bm s \rangle \pmod{q}$. In the absence of noise, LWE can be efficiently solved using Gaussian elimination. However, LWE is known to be hard assuming hardness of worst-case lattice problems such as Gap Shortest Vector Problem (GapSVP) or Shortest Independent Vectors Problem (SIVP) in the sense that there is a polynomial-time quantum reduction from these worst-case lattice problems to LWE~\cite{regev2005lwe}.
In this work, we introduce a new problem, called Continuous LWE (CLWE). As the name suggests, this problem can be seen as a continuous analogue of LWE, where equations in $\mathbb{Z}_q^n$ are replaced with vectors in $\mathbb{R}^n$ (see Figure~\ref{fig:plotinhom}).
More precisely, CLWE considers noisy inner products $z_i \approx \gamma \langle \bm y_i, \bm w \rangle \pmod{1}$, where the noise is drawn from a Gaussian distribution of width $\beta > 0$, $\gamma > 0$ is a problem parameter, $\bm w \in \mathbb{R}^{n}$ is a secret unit vector, and the public vectors $\bm y_i \in \mathbb{R}^n$ are drawn from the standard Gaussian. Given polynomially many samples of the form $(\bm y_i, z_i)$, CLWE asks one to find the secret direction $\bm w$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{plots/inhomogeneous_clwe_2d.png}
\caption{Scatter plot of two-dimensional CLWE samples. Color indicates the last ($z$) coordinate.}
\label{fig:plotinhom}
\end{figure}
One can also consider a closely related homogeneous variant of CLWE (see Figure~\ref{fig:plothom}). This distribution, which we call homogeneous CLWE, can be obtained by essentially conditioning on $z_i \approx 0$. It is a mixture of ``Gaussian pancakes'' of width $\approx \beta/\gamma$ in the secret direction and width $1$ in the
remaining $n-1$ directions. The Gaussian components are equally spaced, with a separation of $\approx 1/\gamma$. (See Definition~\ref{def:hclwe} for the precise statement.)
\begin{figure}[ht]
\centering
\begin{minipage}[t]{0.36\textwidth}
\includegraphics[width=\textwidth]{plots/homogeneous_clwe_2d_sparse.png}
\end{minipage}
\begin{minipage}[t]{0.54\textwidth}
\includegraphics[width=\textwidth]{plots/homogeneous_clwe_1d.png}
\end{minipage}\hspace{0.05\textwidth}
\caption{Left: Scatter plot of two-dimensional homogeneous CLWE samples.
Right: Unnormalized probability densities of homogeneous CLWE (blue) and Gaussian (orange) along the hidden direction.}
\label{fig:plothom}
\end{figure}
Our main result is that CLWE (and homogeneous CLWE) enjoy hardness guarantees similar to those of LWE.
\begin{theorem}[Informal]
\label{thm:main-informal}
Let $n$ be an integer, $\beta = \beta(n) \in (0,1)$ and $\gamma = \gamma(n) \geq 2\sqrt{n}$ such that the ratio $\gamma/\beta$ is polynomially bounded.
If there exists an efficient algorithm that solves $\mathrm{CLWE}_{\beta, \gamma}$, then there exists an efficient quantum algorithm that approximates worst-case lattice problems to within polynomial factors.
\end{theorem}
Although we defined CLWE above as a search problem of finding the hidden direction,
Theorem~\ref{thm:main-informal} is actually stronger, and applies to the decision variant of CLWE in which the goal is to distinguish CLWE samples $(\bm y_i, z_i)$ from samples where the noisy inner product $z_i$ is replaced by a random number distributed uniformly on $[0,1)$ (and similarly for the homogeneous variant).
\paragraph{Motivation: Lattice algorithms.}
Our original motivation to consider CLWE is as a possible approach to finding quantum algorithms for lattice problems. Indeed, the reduction above (just like the reduction to LWE~\cite{regev2005lwe}), can be interpreted in an algorithmic way: in order to quantumly solve worst-case lattice problems, ``all'' we have to do is solve CLWE (classically or quantumly). The elegant geometric nature of CLWE opens up a new toolbox of techniques that can potentially be used for solving lattice problems,
such as sum-of-squares-based techniques and algorithms for learning mixtures of Gaussians~\cite{moitrav2010mixture}.
Indeed, some recent algorithms (e.g.,~\cite{klivanskothari2019list-dec,raghavendrayau2020list-dec}) solve problems that include CLWE or homogeneous CLWE as a special case (or nearly so), yet as far as we can tell, so far none of the known results leads to an improvement over the state of the art in lattice algorithms.
To demonstrate the usefulness of CLWE as an algorithmic target, we show in Section~\ref{section:subexp} a simple moment-based algorithm that solves CLWE in time $\exp(\gamma^2)$.
Even though this does not imply subexponential time algorithms for lattice problems (since Theorem~\ref{thm:main-informal} requires $\gamma > \sqrt{n}$), it is interesting to contrast this algorithm with an analogous algorithm for LWE by Arora and Ge~\cite{arora2011subexplwe}. The two algorithms have the same running time (where $\gamma$ is replaced by the absolute noise $\alpha q$ in the LWE samples), and both rely on related techniques (moments in our case, powering in Arora-Ge's), yet the Arora-Ge algorithm is technically more involved than our rather trivial algorithm (which just amounts to computing the empirical covariance matrix). We interpret this as an encouraging sign that CLWE might be a better algorithmic target than LWE.
\paragraph{Motivation: Hardness of learning Gaussian mixtures.}
Learning mixtures of Gaussians is a classical problem in machine learning~\cite{pearson1984gmm}. Efficient algorithms are known for the task if the Gaussian components are guaranteed to be sufficiently well separated (e.g.,~\cite{dasgupta1999gmm,vempala-wang2002spectralgmm,arora-kannan2005,dasgupta-schulman2007em,brubaker-vempala2008pca,regev2017gmm,hopkins2018gmm,kothari-steinhardt2018clustering,diakonikolas2018spherical-gmm}).
Without such strong separation requirements, it is known that efficiently recovering the individual components of a mixture (technically known as ``parameter estimation") is in general impossible~\cite{moitrav2010mixture}; intuitively, this exponential information theoretic lower bound holds because the Gaussian components ``blur into each other", despite being mildly separated pairwise.
This leads to the question of whether there exists an efficient algorithm that can learn mixtures of Gaussians without strong separation requirement, not in the above strong parameter estimation sense (which is impossible), but rather in the much weaker density estimation sense, where the goal is merely to output an approximation of the given distribution's density function. See~\cite{diakonikolas2016structured,moitra2018} for the precise statement and~\cite{diakonikolas2017sqgaussian} where a super-polynomial lower bound for density estimation is shown in the restricted statistical query (SQ) model~\cite{kearnsSQ1998,feldman2017planted-clique}. Our work provides a negative answer to this open question,
showing that learning Gaussian mixtures is computationally difficult even if the goal is only to output an estimate of the density (see Proposition~\ref{prop:mixture-learning-hardness}). It is worth noting that our hard instance has almost non-overlapping components, i.e., the pairwise statistical distance between distinct Gaussian components is essentially 1, a property shared by the SQ-hard instance of~\cite{diakonikolas2017sqgaussian}.
\paragraph{Motivation: Robust machine learning.}
Variants of CLWE have already been analyzed in the context of robust machine learning~\cite{bubeck2019}, in which the goal is to learn a classifier that is robust against adversarial examples at test time~\cite{szegedy2014adversarial-examples}. In particular, Bubeck et al.~\cite{bubeck2019} use the SQ-hard Gaussian mixture instance of Diakonikolas et al.~\cite{diakonikolas2017sqgaussian} to establish SQ lower bounds for
learning a certain binary classification task, which can be seen as a variant of homogeneous CLWE. The key difference between our distribution and that of~\cite{diakonikolas2017sqgaussian,bubeck2019} is that our distribution has equal spacing between the ``layers" along the hidden direction, whereas their ``layers" are centered around roots of Hermite polynomials (the goal being to exactly match the lower moments of the standard Gaussian). The connection to lattices, which we make for the first time here, answers an open question by Bubeck et al.~\cite{bubeck2019}.
As additional evidence of the similarity between homogeneous CLWE and the distribution considered in~\cite{diakonikolas2017sqgaussian, bubeck2019}, we prove a super-polynomial SQ lower bound for homogeneous CLWE (even with super-polynomial precision). For $\gamma=\Omega(\sqrt{n})$, this result translates to an exponential SQ lower bound for exponential precision, which corroborates our computational hardness result based on worst-case lattice problems. The uniform spacing in the hidden structure of homogeneous CLWE leads to a simplified proof of the SQ lower bound compared to previous works, which considered non-uniform spacing between the Gaussian components. Note that computational hardness does not automatically imply SQ hardness as query functions in the SQ framework need not be efficiently computable.
Bubeck et al.~\cite{bubeck2019} were also interested in a variant of the learning problem where instead of \emph{one} hidden direction, there are $m \ge 1$ orthogonal hidden directions. So, for instance, the ``Gaussian pancakes'' in the $m=1$ case above are replaced with ``Gaussian baguettes'' in the case $m=2$, forming an orthogonal grid in the secret two-dimensional space. As we show in Section~\ref{section:k-hc}, our computational hardness easily extends to the $m>1$ case using a relatively standard hybrid argument. The same is true for the SQ lower bound we show in Section~\ref{section:sq-lb} (as well as for the SQ lower bound in~\cite{diakonikolas2017sqgaussian,bubeck2019}; the proof is nearly identical). The advantage of the $m>1$ variant is that the distance between the Gaussian mixture components increases from $\approx 1/\gamma$ (which can be as high as $\approx 1/\sqrt{n}$ if we want our hardness to hold) to $\approx \sqrt{m}/\gamma$ (which can be as high as $\approx 1$ by taking $m \approx n$). This is a desirable feature for showing hardness of robust machine learning.
\paragraph{Motivation: Cryptographic applications.}
Given the wide range of cryptographic applications of LWE~\cite{peikert2015decade}, it is only natural to expect that CLWE would also be useful for some cryptographic tasks, a question we leave for future work. CLWE's clean and highly symmetric definition should make it a better fit for some applications; its continuous nature, however, might require a discretization step due to efficiency considerations.
\paragraph{Analogy with LWE.}
As argued above, there are apparently nontrivial differences between CLWE and LWE, especially in terms of possible algorithmic approaches. However, there is undoubtedly also strong similarity between the two.
In terms of parameters, the $\gamma$ parameter in CLWE (density of layers) plays the role of the absolute noise level $\alpha q$ in LWE. And the $\beta$ parameter in CLWE plays the role of the relative noise parameter $\alpha$ in LWE. Using this correspondence between the parameters, the hardness proved for CLWE in Theorem~\ref{thm:main-informal} is essentially identical to the one proved for LWE in~\cite{regev2005lwe}. The similarity extends even to the noiseless case, where $\alpha = 0$ in LWE and $\beta = 0$ in CLWE. In particular, in Section~\ref{section:lll-clwe} we present an efficient LLL-based algorithm for solving noiseless CLWE, which is analogous to Gaussian elimination for noiseless LWE.
\paragraph{Comparison with previous work.}
The CLWE problem is related to the hard problem introduced in the seminal work of Ajtai and Dwork~\cite{ajtai97adcrypto}. Specifically, both problems involve finding a hidden direction in samples from a continuous distribution. One crucial difference, though, is in the density of the layers. Whereas in our hardness result the separation between the layers can be as large as $\approx 1/\sqrt{n}$, in Ajtai and Dwork the separation is exponentially small. This larger separation in CLWE is more than just a technicality. First, it is the reason we need to employ the quantum machinery from the LWE hardness proof~\cite{regev2005lwe}. Second, it is nearly tight, as demonstrated by the algorithm in Section~\ref{section:subexp}. Third, it is necessary for applications such as hardness of learning Gaussian mixtures. Finally, this larger separation is analogous to the main difference between LWE and earlier work~\cite{regev2004harmonic}, and is what leads to the relative efficiency of LWE-based cryptography.
\paragraph{Acknowledgements.}
We thank Aravindan Vijayaraghavan and Ilias Diakonikolas for useful comments.
\subsection{Technical Overview}
\label{section:technical-overview}
Broadly speaking, our proof follows the iterative structure of the original LWE hardness proof~\cite{regev2005lwe} (in fact, one might say most of the ingredients for CLWE were already present in that 2005 paper!).
We also make use of some recent techniques, such as a way to reduce to decision problems directly~\cite{peikert2017ringlwe}.
In more detail, as in previous work,
our main theorem boils down to solving the following problem: we are given a $\mathrm{CLWE}_{\beta,\gamma}$ oracle and polynomially many samples from $D_{L,r}$, the
discrete Gaussian distribution on $L$ of width $r$,%
\footnote{We actually require samples from $D_{L,r_i}$ for polynomially many $r_i$'s satisfying $r_i \geq r$, see Section~\ref{section:clwe-hardness}.} and our goal is to solve $\mathrm{BDD}_{L^*,\gamma/r}$, which is the problem of finding the closest vector in the dual lattice $L^*$ given a vector $\bm t$ that is within distance $\gamma/r$ of $L^*$. (It is known that $\mathrm{BDD}_{L^*,1/r}$ can be efficiently solved even if all we are given is polynomially many samples from $D_{L,r}$, without any need for an oracle~\cite{aharonov2005conp}; the point here is that the CLWE oracle allows us to extend the decoding radius from $1/r$ to $\gamma/r$.)
Once this is established, the main theorem follows from previous work~\cite{peikert2017ringlwe,regev2005lwe}. Very briefly, the resulting BDD solution is used in a quantum procedure to produce discrete Gaussian samples that are shorter than the ones we started with. This process is then repeated, until eventually we end up with the desired short discrete Gaussian samples. We remark that this process incurs a $\sqrt{n}$ loss in the Gaussian width (Lemma~\ref{lem:reg05quantumstep}), and the reason we require $\gamma \ge 2\sqrt{n}$ is to overcome this loss.
We now explain how we solve the above problem. For simplicity, assume for now that we have a \emph{search} CLWE oracle that recovers the secret exactly. (Our actual reduction is stronger and only requires a \emph{decision} CLWE oracle.) Let the given BDD instance be $\bm u + \bm w$, where $\bm u \in L^*$ and $\|\bm w\| = \gamma/r$. We will consider the general case of $\|\bm w\| \le \gamma/r$ in Section~\ref{section:clwe-hardness}.
The main idea is to generate CLWE samples whose secret is essentially the desired BDD solution $\bm w$, which would then complete the proof. To begin, take a sample from the discrete Gaussian distribution $\bm y \sim D_{L,r}$ (as provided to us) and consider the inner product
\begin{align*}
\langle \bm y, \bm u + \bm w \rangle = \langle \bm y, \bm w \rangle \pmod 1 \; ,
\end{align*}
where the equality holds since $\langle \bm y, \bm u \rangle \in \mathbb{Z}$ by definition.
The $(n+1)$-dimensional vector $(\bm y, \langle \bm y, \bm w \rangle \bmod 1)$ is almost a CLWE sample (with parameter $\gamma$ since $\gamma = r\|\bm w\|$ is the width of $\langle \bm y, \bm w \rangle$) --- the only problem is that in CLWE the $\bm y$'s need to be distributed according to a standard Gaussian, but here the $\bm y$'s are distributed according to a \emph{discrete} Gaussian over $L$. To complete the transformation into bona fide CLWE samples, we add Gaussian noise of appropriate variance to both $\bm y$ and $\langle \bm y, \bm w \rangle$ (and rescale $\bm y$ so that it is distributed according to the standard Gaussian distribution). We then apply the search $\mathrm{CLWE}_{\beta,\gamma}$ oracle on these CLWE samples to recover $\bm w$ and thereby solve $\mathrm{BDD}_{L^*,\gamma/r}$.
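The transformation just described is simple enough to spell out numerically. The following Python sketch instantiates it for the toy case $L = \mathbb{Z}^n$ (so $L^* = \mathbb{Z}^n$ and the term $\langle \bm x, \bm u \rangle$ vanishes modulo 1); the parameters are illustrative only, and the widths follow this paper's convention that $D_{\mathbb{R},s}$ has variance $s^2/(2\pi)$:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def sample_dZ(r, size):
    # exact D_{Z,r} up to truncation at +-10r (negligible tails)
    ks = np.arange(-int(10 * r) - 1, int(10 * r) + 2)
    p = np.exp(-np.pi * (ks / r) ** 2)
    return rng.choice(ks, size=size, p=p / p.sum())

n, r, s1, s2 = 8, 20.0, 5.0, 0.05
w = 0.01 * rng.standard_normal(n)        # BDD offset (the secret)
t = np.hypot(r, s1)

x = sample_dZ(r, (1000, n)).astype(float)            # x ~ D_{Z^n, r}
v = rng.normal(0.0, s1 / np.sqrt(2 * np.pi), x.shape)
e = rng.normal(0.0, s2 / np.sqrt(2 * np.pi), len(x))
y = (x + v) / t                                      # Gaussian-like component
z = (x @ w + e) % 1.0                                # noisy <x, w> mod 1
\end{verbatim}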
As mentioned previously, our main result actually uses a \emph{decision} CLWE oracle, which does not recover the secret $\bm w$ immediately. Working with this decision oracle requires some care. To that end, our proof will incorporate the ``oracle hidden center'' finding procedure from~\cite{peikert2017ringlwe}, the details of which can be found in Section~\ref{section:solve-bdd-with-clwe}.
\section{Preliminaries}
\begin{definition}[Statistical distance] For two distributions $\mathcal{D}_1$ and $\mathcal{D}_2$ over $\mathbb{R}^n$ with density functions $\phi_1$ and $\phi_2$, respectively, we define the \emph{statistical distance} between them as
\begin{align*}
\Delta(\mathcal{D}_1,\mathcal{D}_2) = \frac{1}{2}\int_{\mathbb{R}^n}|\phi_1(\bm x)-\phi_2(\bm x)|d\bm x
\; .
\end{align*}
\end{definition}
We denote the statistical distance by $\Delta(\phi_1,\phi_2)$ if only the density functions are specified.
Moreover, for random variables $X_1 \sim \mathcal{D}_1$ and $X_2 \sim \mathcal{D}_2$, we also denote $\Delta(X_1,X_2) = \Delta(\mathcal{D}_1,\mathcal{D}_2)$. One important fact is that applying (possibly a randomized) function cannot increase statistical distance, i.e., for random variables $X, Y$ and function $f$,
\begin{align*}
\Delta(f(X),f(Y)) \leq \Delta(X,Y)
\; .
\end{align*}
We define the \emph{advantage} of an algorithm $\mathcal{A}$ solving the decision problem of distinguishing two distributions $\mathcal{D}_n$ and $\mathcal{D}'_n$ parameterized by $n$ as
\begin{align*}
\Bigl| \Pr_{x \sim \mathcal{D}_n}[\mathcal{A}(x) = \mathrm{YES}] - \Pr_{x \sim \mathcal{D}'_n}[\mathcal{A}(x) = \mathrm{YES}] \Bigr|
\; .
\end{align*}
Moreover, we define the \emph{advantage} of an algorithm $\mathcal{A}$ solving the \emph{average-case} decision problem of distinguishing two distributions $\mathcal{D}_{n, s}$ and $\mathcal{D}'_{n, s}$ parameterized by $n$ and $s$, where $s$ is equipped with some distribution $\mathcal{S}_n$, as
\begin{align*}
\Bigl| \Pr_{s \sim \mathcal{S}_n}[\mathcal{A}^{\mathcal{B}_{n, s}}(1^n) = \mathrm{YES}] - \Pr_{s \sim \mathcal{S}_n}[\mathcal{A}^{\mathcal{B}'_{n, s}}(1^n) = \mathrm{YES}] \Bigr|
\; ,
\end{align*}
where $\mathcal{B}_{n, s}$ and $\mathcal{B}'_{n, s}$ are respectively the sampling oracles of $\mathcal{D}_{n, s}$ and $\mathcal{D}'_{n, s}$.
We say that an algorithm $\mathcal{A}$ has \emph{non-negligible advantage} if its advantage is a non-negligible function in $n$, i.e., a function in $\Omega(n^{-c})$ for some constant $c > 0$.
\subsection{Lattices and Gaussians}
\paragraph{Lattices.}
A \emph{lattice} is a discrete additive subgroup of $\mathbb{R}^n$.
Unless specified otherwise, we assume all lattices are full rank, i.e., their linear span is $\mathbb{R}^n$.
For an $n$-dimensional lattice $L$, a set of linearly independent vectors $\{\bm b_1, \dots, \bm b_n\}$ is called a \emph{basis} of $L$ if $L$ is generated by the set, i.e., $L = B \mathbb{Z}^n$ where $B = [\bm b_1, \dots, \bm b_n]$.
The \emph{determinant} of a lattice $L$ with basis $B$ is defined as $\det(L) = |\det(B)|$; it is easy to verify that the determinant does not depend on the choice of basis.
The \emph{dual lattice} of a lattice $L$, denoted by $L^*$, is defined as
\begin{align*}
L^* = \{ \bm y \in \mathbb{R}^n \mid \langle \bm x, \bm y \rangle \in \mathbb{Z} \text{ for all } \bm x \in L\}
\; .
\end{align*}
If $B$ is a basis of $L$ then $(B^T)^{-1}$ is a basis of $L^*$; in particular, $\det(L^*) = \det(L)^{-1}$.
\begin{definition} For an $n$-dimensional lattice $L$ and $1 \le i \le n$, the \emph{$i$-th successive minimum} of $L$ is defined as
\begin{align*}
\lambda_i(L) = \inf \{r \mid \dim(\operatorname{span}(L \cap \overline{B}(\bm 0,r))) \geq i\}
\; ,
\end{align*}
where $\overline{B}(\bm 0,r)$ is the closed ball of radius $r$ centered at the origin.
\end{definition}
We define the function $\rho_s(\bm x) = \exp(-\pi\|\bm x/s\|^2)$. Note that $\rho_s(\bm x) / s^n$, where $n$ is the dimension of $\bm x$, is the probability density of the Gaussian distribution with covariance $s^2/(2\pi)\cdot I_n$.
\begin{definition}[Discrete Gaussian] For lattice $L \subset \mathbb{R}^n$, vector $\bm y \in \mathbb{R}^n$, and parameter $r > 0$, the \emph{discrete Gaussian distribution} $D_{\bm y+L,r}$ on coset $\bm y+L$ with width $r$ is defined to have support $\bm y+L$ and probability mass function proportional to $\rho_r$.
\end{definition}
For $\bm y = \bm 0$, we simply denote the discrete Gaussian distribution on lattice $L$ with width $r$ by $D_{L,r}$.
Abusing notation, we denote the $n$-dimensional \emph{continuous Gaussian distribution} with zero mean and isotropic variance $r^2/(2\pi)$ as $D_{\mathbb{R}^n,r}$.
Finally, we omit the subscript $r$ when $r = 1$ and refer to $D_{\mathbb{R}^n}$ as the \emph{standard} Gaussian (despite it having covariance $I_n/(2\pi)$).
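Since $D_{\mathbb{Z},r}$ has rapidly decaying tails, it can be sampled exactly up to a negligible truncation error. A minimal Python sketch:
\begin{verbatim}
import numpy as np

def sample_dZ(r, size, rng=np.random.default_rng()):
    """Sample D_{Z,r}: mass at k proportional to exp(-pi*k^2/r^2),
    truncated at +-10r, beyond which the remaining mass is negligible."""
    ks = np.arange(-int(10 * r) - 1, int(10 * r) + 2)
    p = np.exp(-np.pi * (ks / r) ** 2)
    return rng.choice(ks, size=size, p=p / p.sum())
\end{verbatim}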
\begin{claim}[{\cite[Fact 2.1]{peikert2010sampler}}]
\label{claim:complete-squares}
For any $r_1, r_2 > 0$ and vectors $\bm x, \bm c_1, \bm c_2 \in \mathbb{R}^n$,
let $r_0 = \sqrt{r_1^2 + r_2^2}$, $r_3 = r_1 r_2 / r_0$, and $\bm c_3 = (r_3/r_1)^2 \bm c_1 + (r_3/r_2)^2 \bm c_2$.
Then
\begin{align*}
\rho_{r_1}(\bm x-\bm c_1) \cdot \rho_{r_2}(\bm x - \bm c_2) = \rho_{r_0}(\bm c_1 - \bm c_2) \cdot \rho_{r_3}(\bm x-\bm c_3)
\; .
\end{align*}
\end{claim}
\paragraph{Fourier analysis.} We briefly review basic tools of Fourier analysis required later on. The Fourier transform of a function $f: \mathbb{R}^n \to \mathbb{C}$ is defined to be
\begin{align*}
\hat{f}(\bm w) = \int_{\mathbb{R}^n} f(\bm x)e^{-2\pi i \langle \bm x, \bm w \rangle}d\bm x
\; .
\end{align*}
An elementary property of the Fourier transform is that if $f(\bm w) = g(\bm w+\bm v)$ for some $\bm v \in \mathbb{R}^n$, then $\hat{f}(\bm w) = e^{2\pi i \langle \bm v, \bm w \rangle}\hat{g}(\bm w)$. Another important fact is that the Fourier transform of a Gaussian is also a Gaussian, i.e., $\hat{\rho} = \rho$; more generally, $\hat{\rho}_s = s^n \rho_{1/s}$.
We also exploit the Poisson summation formula stated below. Note that we denote by $f(A) = \sum_{\bm x \in A} f(\bm x)$ for any function $f$ and any discrete set $A$.
\begin{lemma}[Poisson summation formula] For any lattice $L$ and any function $f$,\footnote{To be precise, $f$ needs to satisfy some niceness conditions; this will always hold in our applications.}
\label{lem:poisson-sum}
\begin{align*}
f(L) = \det(L^*)\cdot \widehat{f}(L^*)
\; .
\end{align*}
\end{lemma}
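Both facts are easy to confirm numerically. The following sketch checks the Poisson summation formula for the one-dimensional lattice $L = c\mathbb{Z}$ (so $L^* = (1/c)\mathbb{Z}$ and $\det(L^*) = 1/c$) with $f = \rho_s$, using $\hat{\rho}_s = s\rho_{1/s}$:
\begin{verbatim}
import numpy as np

def rho_sum(xs, s):
    return np.exp(-np.pi * (xs / s) ** 2).sum()

c, s = 0.7, 1.3
ks = np.arange(-200, 201)
lhs = rho_sum(c * ks, s)                 # rho_s(L),  L = c*Z
rhs = (s / c) * rho_sum(ks / c, 1 / s)   # det(L*) * s * rho_{1/s}(L*)
assert abs(lhs - rhs) < 1e-12
\end{verbatim}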
\paragraph{Smoothing parameter.} An important lattice parameter induced by discrete Gaussian which will repeatedly appear in our work is the \emph{smoothing parameter}, defined as follows.
\begin{definition}[Smoothing parameter]
For lattice $L$ and real $\varepsilon > 0$, we define the \emph{smoothing parameter} $\eta_{\varepsilon}(L)$ as
\begin{align*}
\eta_{\varepsilon}(L) = \inf \{s \mid \rho_{1/s}(L^* \setminus \{\bm 0\}) \leq \varepsilon\}
\; .
\end{align*}
\end{definition}
Intuitively, this parameter is the width beyond which the discrete Gaussian distribution behaves like a continuous Gaussian. This is formalized in the lemmas below.
\begin{lemma}[{\cite[Claim 3.9]{regev2005lwe}}]
\label{lem:smoothing-gaussian}
For any $n$-dimensional lattice $L$, vector $\bm u \in \mathbb{R}^n$, and $r,s > 0$ satisfying $rs/t \geq \eta_\varepsilon(L)$ for some $\varepsilon < \frac{1}{2}$, where $t = \sqrt{r^2+s^2}$, the statistical distance between $D_{\bm u+L,r} + D_{\mathbb{R}^n, s}$ and $D_{\mathbb{R}^n, t}$ is at most $4\varepsilon$.
\end{lemma}
\begin{lemma}[{\cite[Lemma 2.5]{peikert2017ringlwe}}]
\label{lem:smoothing-uniform}
For any $n$-dimensional lattice $L$, real $\varepsilon > 0$, and $r \geq \eta_{\varepsilon}(L)$, the statistical distance between $D_{\mathbb{R}^n, r} \bmod L$ and the uniform distribution over $\mathbb{R}^n / L$ is at most $\varepsilon/2$.
\end{lemma}
Lemma~\ref{lem:smoothing-gaussian} states that if we take a sample from $D_{L,r}$ and add continuous Gaussian noise $D_{\mathbb{R}^n,s}$ to the sample, the resulting distribution is statistically close to $D_{\mathbb{R}^n,\sqrt{r^2+s^2}}$, which is precisely what one gets by adding two continuous Gaussian distributions of width $r$ and $s$. Unless specified otherwise, we always assume $\varepsilon$ is negligibly small in $n$, say $\varepsilon = \exp(-n)$.
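This statement is easy to probe numerically: the density of $D_{\mathbb{Z},r} + D_{\mathbb{R},s}$ is an explicit Gaussian mixture whose total variation distance to $D_{\mathbb{R},t}$ can be integrated on a grid. A minimal Python sketch (the widths are chosen so that $rs/t$ comfortably exceeds $\eta_\varepsilon(\mathbb{Z})$ for a reasonably small $\varepsilon$):
\begin{verbatim}
import numpy as np

r, s = 2.0, 2.0
t = np.hypot(r, s)
ks = np.arange(-60, 61)
pk = np.exp(-np.pi * (ks / r) ** 2); pk /= pk.sum()      # D_{Z,r}

xs = np.linspace(-15, 15, 20001)
dx = xs[1] - xs[0]
gauss = lambda x, w: np.exp(-np.pi * (x / w) ** 2) / w   # density of D_{R,w}

mix = sum(p * gauss(xs - k, s) for p, k in zip(pk, ks))  # D_{Z,r} + D_{R,s}
tv = 0.5 * np.abs(mix - gauss(xs, t)).sum() * dx
print(tv)   # small, as promised by the lemma
\end{verbatim}
The following are some useful upper and lower bounds on the smoothing parameter $\eta_\varepsilon(L)$.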
\begin{lemma}[{\cite[Lemma 2.6]{peikert2017ringlwe}}]
\label{lem:smoothing-dual}
For any $n$-dimensional lattice $L$ and $\varepsilon = \exp(-c^2n)$,
\begin{align*}
\eta_\varepsilon(L) \leq c\sqrt{n}/\lambda_1(L^*)
\; .
\end{align*}
\end{lemma}
\begin{lemma}[{\cite[Lemma 3.3]{micciancio2007average}}]
\label{lem:smoothing-primal}
For any $n$-dimensional lattice $L$ and $\varepsilon > 0$,
\begin{align*}
\eta_\varepsilon(L) \leq \sqrt{\frac{\ln(2n(1+1/\varepsilon))}{\pi}}\cdot \lambda_n(L)
\; .
\end{align*}
\end{lemma}
\begin{lemma}[{\cite[Claim 2.13]{regev2005lwe}}]
\label{lem:smoothing-lb}
For any $n$-dimensional lattice $L$ and $\varepsilon > 0$,
\begin{align*}
\eta_\varepsilon(L) \geq \sqrt{\frac{\ln 1/\varepsilon}{\pi}}\cdot\frac{1}{\lambda_1(L^*)}
\; .
\end{align*}
\end{lemma}
\paragraph{Computational problems.}
GapSVP and SIVP are among the main computational problems on lattices and are believed to be computationally hard (even with quantum computation) for polynomial approximation factor $\alpha(n)$. We also define two additional problems, DGS and BDD.
\begin{definition}[GapSVP]
For an approximation factor $\alpha = \alpha(n)$, an instance of $\mathrm{GapSVP}_\alpha$ is given by an $n$-dimensional lattice $L$ and a number $d > 0$. In \textnormal{YES} instances, $\lambda_1(L) \leq d$, whereas in \textnormal{NO} instances, $\lambda_1(L) > \alpha \cdot d$.
\end{definition}
\begin{definition}[SIVP]
For an approximation factor $\alpha = \alpha(n)$, an instance of $\mathrm{SIVP}_\alpha$ is given by an $n$-dimensional lattice $L$. The goal is to output a set of $n$ linearly independent lattice vectors of length at most $\alpha \cdot \lambda_n(L)$.
\end{definition}
\begin{definition}[DGS]
For a function $\varphi$ that maps lattices to non-negative reals, an instance of $\mathrm{DGS}_\varphi$ is given by a lattice $L$ and a parameter $r \geq \varphi(L)$. The goal is to output an independent sample whose distribution is within negligible statistical distance of $D_{L,r}$.
\end{definition}
\begin{definition}[BDD]
For an $n$-dimensional lattice $L$ and distance bound $d > 0$, an instance of $\mathrm{BDD}_{L,d}$ is given by a vector $\bm t = \bm w + \bm u$, where $\bm u \in L$ and $\|\bm w\| \leq d$. The goal is to output $\bm w$.
\end{definition}
\subsection{Learning with errors}
\label{prelim:lwe}
We now define the learning with errors (LWE) problem. This definition will not be used in the sequel, and is included for completeness. Let $n$ and $q$ be positive integers, and $\alpha > 0$ an error rate. We denote the quotient ring of integers modulo $q$ as $\mathbb{Z}_q = \mathbb{Z}/q\mathbb{Z}$ and quotient group of reals modulo the integers as $\mathbb{T} = \mathbb{R}/\mathbb{Z} = [0, 1)$.
\begin{definition}[LWE distribution] For integer $q \ge 2$ and vector $\bm s \in \mathbb{Z}_q^n$, the \emph{LWE distribution} $A_{\bm s, \alpha}$ over $\mathbb{Z}_q^n \times \mathbb{T}$ is sampled by independently choosing uniformly random $\bm a \in \mathbb{Z}_q^n$ and $e \sim D_{\mathbb{R},\alpha}$, and outputting $(\bm a, (\langle \bm a, \bm s \rangle/q + e) \bmod 1)$.
\end{definition}
\begin{definition} For an integer $q = q(n) \geq 2$ and error parameter $\alpha = \alpha(n) > 0$, the average-case decision problem $\mathrm{LWE}_{q,\alpha}$ is to distinguish the following two distributions over $\mathbb{Z}_q^n \times \mathbb{T}$: (1) the LWE distribution $A_{\bm s, \alpha}$ for some uniformly random $\bm s \in \mathbb{Z}_q^n$ (which is fixed for all samples), or (2) the uniform distribution.
\end{definition}
\subsection{Continuous learning with errors}
\label{prelim:comb}
We now define the CLWE distribution, which is the central subject of our analysis.
\begin{definition}[CLWE distribution]
For unit vector $\bm w \in \mathbb{R}^n$ and parameters $\beta, \gamma > 0$, define the \emph{CLWE distribution} $A_{{\bm w}, \beta, \gamma}$ over $\mathbb{R}^{n+1}$ to have density at $(\bm y,z)$ proportional to
\begin{align*}
\rho(\bm y) \cdot \sum_{k \in \mathbb{Z}} \rho_\beta(z+k-\gamma\langle \bm y, \bm w \rangle)
\; .
\end{align*}
\end{definition}
Equivalently, a sample $(\bm y, z)$ from the CLWE distribution $A_{\bm w, \beta, \gamma}$ is given by the $(n+1)$-dimensional vector $(\bm y, z)$ where $\bm y \sim D_{\mathbb{R}^n}$ and
$z = (\gamma \langle \bm y, \bm w \rangle + e) \bmod 1$ where $e \sim D_{\mathbb{R},\beta}$.
The vector $\bm w$ is the hidden direction, $\gamma$ is the density of layers, and $\beta$ is the noise added to each equation. From the CLWE distribution, we can arrive at the homogeneous CLWE distribution by conditioning on $z = 0$. A formal definition is given as follows.
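This sampling description translates directly into code. A minimal Python sketch (recall that in our normalization the standard Gaussian $D_{\mathbb{R}^n}$ has covariance $I_n/(2\pi)$, so a width-$r$ Gaussian has numpy standard deviation $r/\sqrt{2\pi}$):
\begin{verbatim}
import numpy as np

def clwe_samples(w, beta, gamma, N, rng=np.random.default_rng()):
    """Draw N samples (y, z) from A_{w, beta, gamma}."""
    y = rng.normal(0.0, 1.0 / np.sqrt(2 * np.pi), (N, len(w)))
    e = rng.normal(0.0, beta / np.sqrt(2 * np.pi), N)
    z = (gamma * (y @ w) + e) % 1.0
    return y, z

y, z = clwe_samples(np.array([1.0, 0.0]), beta=0.05, gamma=2.0, N=5)
\end{verbatim}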
\begin{definition}[Homogeneous CLWE distribution]\label{def:hclwe}
For unit vector $\bm w \in \mathbb{R}^n$ and parameters $\beta, \gamma > 0$, define the \emph{homogeneous CLWE distribution} $H_{\bm w, \beta, \gamma}$ over $\mathbb{R}^n$ to have density at $\bm y$ proportional to
\begin{align}\label{eqn:hclwe-def}
\rho(\bm y) \cdot \sum_{k \in \mathbb{Z}} \rho_\beta(k-\gamma\langle \bm y, \bm w \rangle)
\; .
\end{align}
\end{definition}
The homogeneous CLWE distribution can be equivalently defined as a mixture of Gaussians.
To see this, notice that Eq.~\eqref{eqn:hclwe-def} is equal to
\begin{align}\label{eqn:hclwe-mixture-def}
\sum_{k \in \mathbb{Z}} \rho_{\sqrt{\beta^2+\gamma^2}}(k) \cdot
\rho(\pi_{\bm w^\perp}(\bm y)) \cdot \rho_{\beta/\sqrt{\beta^2+\gamma^2}}\Big(\langle \bm y, \bm w \rangle -\frac{\gamma}{\beta^2+\gamma^2}k\Big) \; ,
\end{align}
where $\pi_{\bm w^\perp}$ denotes the projection on the orthogonal space to $\bm w$.
Hence, $H_{\bm w, \beta, \gamma}$ can be viewed as a mixture of Gaussian components of width
$\beta/\sqrt{\beta^2+\gamma^2}$ (which is roughly $\beta/\gamma$ for $\beta \ll \gamma$) in the secret direction, and width $1$ in the orthogonal space. The components are equally spaced, with a separation of $\gamma/(\beta^2+\gamma^2)$ between them (which is roughly $1/\gamma$ for $\beta \ll \gamma$).
We remark that the integral of~\eqref{eqn:hclwe-def} (or equivalently, of~\eqref{eqn:hclwe-mixture-def}) over all $\bm y$ is
\begin{align}\label{eqn:hclwe-def-normalization}
Z = \frac{\beta}{\sqrt{\beta^2+\gamma^2}} \cdot \rho\Bigg(\frac{1}{\sqrt{\beta^2+\gamma^2}}\mathbb{Z}\Bigg) \; .
\end{align}
This is easy to see since the integral over $\bm y$ of the product of the last two $\rho$ terms in~\eqref{eqn:hclwe-mixture-def} is $\beta/\sqrt{\beta^2+\gamma^2}$ independently of $k$.
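The equality of~\eqref{eqn:hclwe-def} and~\eqref{eqn:hclwe-mixture-def} follows by completing the square in $\langle \bm y, \bm w\rangle$, and can also be confirmed numerically point by point, as in the following sketch:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

beta, gamma = 0.1, 2.0
w = np.array([0.6, 0.8])                      # unit vector
rho = lambda x, s=1.0: np.exp(-np.pi * (np.asarray(x) / s) ** 2)
ks = np.arange(-50, 51)
g2 = beta ** 2 + gamma ** 2

for y in rng.normal(size=(5, 2)):
    a = y @ w
    lhs = rho(y).prod() * rho(ks - gamma * a, beta).sum()
    rhs = rho(y - a * w).prod() * (rho(ks, np.sqrt(g2))
          * rho(a - gamma * ks / g2, beta / np.sqrt(g2))).sum()
    assert np.isclose(lhs, rhs)
\end{verbatim}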
\begin{definition} For parameters $\beta, \gamma > 0$, the average-case decision problem $\mathrm{CLWE}_{\beta, \gamma}$ is to distinguish the following two distributions over $\mathbb{R}^n \times \mathbb{T}$: (1) the CLWE distribution $A_{\bm w, \beta, \gamma}$ for some uniformly random unit vector $\bm w \in \mathbb{R}^n$ (which is fixed for all samples), or (2) $D_{\mathbb{R}^n} \times U$.
\end{definition}
\begin{definition} For parameters $\beta, \gamma > 0$, the average-case decision problem $\mathrm{hCLWE}_{\beta, \gamma}$ is to distinguish the following two distributions over $\mathbb{R}^n$: (1) the homogeneous CLWE distribution $H_{\bm w, \beta, \gamma}$ for some uniformly random unit vector $\bm w \in \mathbb{R}^n$ (which is fixed for all samples), or (2) $D_{\mathbb{R}^n}$.
\end{definition}
Note that $\mathrm{CLWE}_{\beta,\gamma}$ and $\mathrm{hCLWE}_{\beta,\gamma}$ are defined as average-case problems. We could have equally well defined them to be worst-case problems, requiring the algorithm to distinguish the distributions for \emph{all} hidden directions $\bm w \in \mathbb{R}^n$. The following claim shows that the two formulations are equivalent.
\begin{claim} \label{claim:ic-worst-to-ic}
For any $\beta, \gamma > 0$,
there is a polynomial-time reduction from worst-case $\mathrm{CLWE}_{\beta,\gamma}$ to (average-case) $\mathrm{CLWE}_{\beta,\gamma}$.
\end{claim}
\begin{proof}
Given CLWE samples $\{(\bm y_i,z_i)\}_{i=1}^{\operatorname{poly}(n)}$ from $A_{\bm w,\beta,\gamma}$, we apply a random rotation $\bm R$, giving us samples of the form $\{(\bm R\bm y_i,z_i)\}_{i=1}^{\operatorname{poly}(n)}$. Since the standard Gaussian is rotationally invariant and $\langle \bm y, \bm w \rangle = \langle \bm R \bm y, \bm R\bm w \rangle$, the rotated CLWE samples are distributed according to $A_{\bm R\bm w,\beta,\gamma}$. Since $\bm R$ is a random rotation, the direction $\bm R \bm w$ is uniformly distributed on the sphere.
\end{proof}
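A random rotation as used in this proof can be generated by a QR decomposition of a Gaussian matrix. A short Python sketch (here \texttt{Y} and \texttt{z} stand for hypothetical arrays of CLWE samples, one sample per row of \texttt{Y}):
\begin{verbatim}
import numpy as np

def haar_rotation(n, rng=np.random.default_rng()):
    """Uniformly random orthogonal matrix: QR of a Gaussian matrix,
    with column signs fixed so the distribution is exactly Haar."""
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))

R = haar_rotation(3)
# rotated samples (Y @ R.T, z) are then distributed as A_{Rw, beta, gamma}
\end{verbatim}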
\section{Hardness of CLWE}
\label{section:clwe-hardness}
\subsection{Background and overview}
\label{section:clwe-background}
In this section, we give an overview of the quantum reduction from worst-case lattice problems to CLWE. Our goal is to show that we can efficiently solve worst-case lattice problems, in particular GapSVP and SIVP, using an oracle for $\mathrm{CLWE}$ (and with quantum computation). We first state our main theorem, which was stated informally as Theorem~\ref{thm:main-informal} in the introduction.
\begin{theorem}
\label{thm:clwe-intro}
Let $\beta = \beta(n) \in (0,1)$ and $\gamma = \gamma(n) \geq 2\sqrt{n}$ be such that $\gamma/\beta$ is polynomially bounded. Then there is a polynomial-time quantum reduction from $\mathrm{DGS}_{2\sqrt{n}\eta_\varepsilon(L)/\beta}$ to $\mathrm{CLWE}_{\beta,\gamma}$.
\end{theorem}
Using standard reductions from GapSVP and SIVP to DGS (see, e.g.,~\cite[Section 3.3]{regev2005lwe}), our main theorem immediately implies the following corollary.
\begin{corollary}
Let $\beta = \beta(n) \in (0,1)$ and $\gamma = \gamma(n) \geq 2\sqrt{n}$ such that $\gamma/\beta$ is polynomially bounded. Then, there is a polynomial-time quantum reduction from $\mathrm{SIVP}_\alpha$ and $\mathrm{GapSVP}_\alpha$ to $\mathrm{CLWE}_{\beta,\gamma}$ for some $\alpha = \tilde{O}(n/\beta)$.
\end{corollary}
Based on previous work, to prove Theorem~\ref{thm:clwe-intro}, it suffices to prove the following lemma, which is the goal of this section.
\begin{lemma}
\label{lem:bdddgs-to-clwe}
Let $\beta=\beta(n) \in (0,1)$ and $\gamma=\gamma(n) \geq 2\sqrt{n}$ such that $q = \gamma/\beta$ is polynomially bounded. There exists a probabilistic polynomial-time (classical) algorithm with access to an oracle that solves $\mathrm{CLWE}_{\beta,\gamma}$, that takes as input a lattice $L \subset \mathbb{R}^n$, parameters $\beta, \gamma$, and $r \geq 2q \cdot \eta_{\varepsilon}(L)$, and $\operatorname{poly}(n)$ many samples from the discrete Gaussian distribution $D_{L,r_i}$ for $\operatorname{poly}(n)$ parameters $r_i \geq r$ and solves $\mathrm{BDD}_{L^*,d}$ for $d = \gamma/(\sqrt{2}r)$.
\end{lemma}
In other words, we can implement an oracle for $\mathrm{BDD}_{L^*,\gamma/(\sqrt{2}r)}$ using polynomially many discrete Gaussian samples and the CLWE oracle as a sub-routine.
The proof of Lemma~\ref{lem:bdddgs-to-clwe} will be given in Section~\ref{section:clwe-from-bdddgs}
(which is the novel contribution) and Section~\ref{section:solve-bdd-with-clwe}
(which mainly follows~\cite{peikert2017ringlwe}).
In the rest of this subsection, we briefly explain how Theorem~\ref{thm:clwe-intro} follows from Lemma~\ref{lem:bdddgs-to-clwe}.
This derivation is already implicit in past work~\cite{peikert2017ringlwe,regev2005lwe}, and is included here mainly for completeness.
Readers familiar with the reduction may skip directly to Section~\ref{section:clwe-from-bdddgs}.
The basic idea is to start with samples from a very wide discrete Gaussian (which can be efficiently sampled) and then iteratively sample from narrower discrete Gaussians, until eventually we end up with short discrete Gaussian samples, as required (see Figure~\ref{fig:lwe-diagram}). Each iteration consists of two steps: the first classical step is given by Lemma~\ref{lem:bdddgs-to-clwe}, allowing us to solve BDD on the dual lattice; the second step is quantum and is given in Lemma~\ref{lem:reg05quantumstep} below, which shows that solving BDD leads to sampling from narrower discrete Gaussian.
\begin{figure}[ht]
\centering
\input{plots/clwe-iteration.tex}
\caption{Two iterations of the reduction.}
\label{fig:lwe-diagram}
\end{figure}
\begin{lemma}[{\cite[Lemma 3.14]{regev2005lwe}}]
\label{lem:reg05quantumstep}
There exists an efficient quantum algorithm that, given any $n$-dimensional lattice $L$, a number $d < \lambda_1(L^*)/2$, and an oracle that solves $\mathrm{BDD}_{L^*,d}$, outputs a sample from $D_{L,\sqrt{n}/(\sqrt{2}d)}$.
\end{lemma}
Similar to~\cite{peikert2017ringlwe}, there is a subtle requirement in Lemma~\ref{lem:bdddgs-to-clwe} that we need discrete Gaussian samples from several different parameters $r' \geq r$. However, this is a non-issue since an oracle for $\mathrm{BDD}_{L^*,\gamma/(\sqrt{2}r)}$ also solves $\mathrm{BDD}_{L^*,\gamma/(\sqrt{2}r')}$ for any $r' \ge r$, so Lemma~\ref{lem:reg05quantumstep} in fact allows us to efficiently sample from $D_{L,r'\sqrt{n}/\gamma}$ for any $r' \ge r$.
\subsection{CLWE samples from BDD}
\label{section:clwe-from-bdddgs}
In this subsection we prove Lemma~\ref{lem:bdd-to-clwe}, showing how to generate CLWE samples from the given BDD instance using discrete Gaussian samples.
In the next subsection we will show how to solve the BDD instance by applying the decision CLWE oracle to these samples, thereby completing the proof of Lemma~\ref{lem:bdddgs-to-clwe}.
\begin{lemma}
\label{lem:bdd-to-clwe}
There is an efficient algorithm that takes as input an $n$-dimensional lattice $L$, a vector $\bm w + \bm u$ where $\bm u \in L^*$, reals $r, s_1, s_2 > 0$ such that $rs_1/\sqrt{\|\bm w\|^2(r s_1/s_2)^2+t^2} \geq \eta_\varepsilon(L)$ for some $\varepsilon < \frac{1}{2}$ and $t = \sqrt{r^2+s_1^2}$, and samples from $D_{L,r}$, and outputs samples that are within statistical distance $8 \varepsilon$ of the CLWE distribution $A_{\bm w', \beta, \gamma}$ for $\bm w' = \bm w/\|\bm w\|$, $\beta = \|\bm w\|\sqrt{(rs_1/t)^2+(s_2/\|\bm w\|)^2}$ and $\gamma = \|\bm w\|r^2/t$.
\end{lemma}
\begin{proof}
We start by describing the algorithm. For each $\bm x$ from the given samples from $D_{L,r}$, do the following. First, take the inner product $\langle \bm x, \bm w + \bm u \rangle$, which gives us
\begin{align*}
\langle \bm x, \bm w + \bm u \rangle &= \langle \bm x, \bm w \rangle \bmod 1
\; .
\end{align*}
Appending this inner product modulo 1 to the sample $\bm x$, we get $(\bm x, \langle \bm x, \bm w \rangle \bmod 1)$.
Next, we ``smooth out" the lattice structure of $\bm x$ by adding Gaussian noise $\bm v \sim D_{\mathbb{R}^n,s_1}$ to $\bm x$ and $e \sim D_{\mathbb{R},s_2}$ to $\langle \bm x, \bm w \rangle$ (modulo 1). Then, we have
\begin{align}
(\bm x + \bm v, (\langle \bm x, \bm w \rangle + e) \bmod 1) \label{eqn:clwe-sample-raw}\; .
\end{align}
Finally, we normalize the first component by $t$ so that its marginal distribution has unit width, giving us
\begin{align}
((\bm x + \bm v)/t,(\langle \bm x, \bm w \rangle + e) \bmod 1) \label{eqn:clwe-sample-raw-normalized}\;,
\end{align}
which the algorithm outputs.
Our goal is to show that the distribution of \eqref{eqn:clwe-sample-raw-normalized} is within statistical distance $8\varepsilon$ of the CLWE distribution $A_{\bm w',\beta,\gamma}$, given by
\begin{align*}
(\bm y', (\gamma \langle \bm y', \bm w' \rangle + e') \bmod 1) \; ,
\end{align*}
where $\bm y' \sim D_{\mathbb{R}^n}$ and $e' \sim D_{\mathbb{R},\beta}$.
Because applying a function cannot increase statistical distance (specifically, dividing the first component by $t$ and taking mod $1$ of the second), it suffices to show that the distribution of
\begin{align}
(\bm x + \bm v, \langle \bm x, \bm w \rangle + e) \label{eqn:clwe-sample-1}\; ,
\end{align}
is within statistical distance $8\varepsilon$ of that of
\begin{align}
(\bm y, (r/t)^2 \langle \bm y, \bm w \rangle + e') \label{eqn:clwe-sample-2}\; ,
\end{align}
where $\bm y \sim D_{\mathbb{R}^n,t}$ and $e' \sim D_{\mathbb{R},\beta}$. First, observe that by Lemma~\ref{lem:smoothing-gaussian}, the statistical distance between the marginals on the first component (i.e., between $\bm x +\bm v$ and $\bm y$) is at most $4\varepsilon$. It is therefore sufficient to bound the statistical distance between the second components conditioned on any fixed value $\overline{\bm y}$ of the first component.
Conditioned on the first component being $\overline{\bm y}$, the second component in~\eqref{eqn:clwe-sample-1} has the same distribution as
\begin{align}
\langle \bm x + \bm h , \bm w \rangle \label{eqn:clwe-sample-3}
\end{align}
where $\bm h \sim D_{\mathbb{R}^n,s_2/\|\bm w\|}$,
and the second component in~\eqref{eqn:clwe-sample-2} has the same distribution as
\begin{align}
\langle (r/t)^2 \overline{\bm y} + \bm h' , \bm w \rangle \label{eqn:clwe-sample-4}
\end{align}
where $\bm h' \sim D_{\mathbb{R}^n,\beta/\|\bm w\|}$.
By Claim~\ref{clm:lattice_conditional} below, conditioned on $\bm x+\bm v = \overline{\bm y}$, the distribution of $\bm x$ is
$(r/t)^2\overline{\bm y} + D_{L-(r/t)^2\overline{\bm y}, rs_1/t}$. Therefore, by Lemma~\ref{lem:smoothing-gaussian}, the conditional distribution of $\bm x + \bm h$ given $\bm x+\bm v=\overline{\bm y}$ is within statistical distance $4 \varepsilon$ of that of $(r/t)^2\overline{\bm y} + \bm h'$. Since statistical distance cannot increase by applying a function (inner product with $\bm w$ in this case), \eqref{eqn:clwe-sample-3} is within statistical distance $4\varepsilon$ of \eqref{eqn:clwe-sample-4}. Hence, the distribution of \eqref{eqn:clwe-sample-1} is within statistical distance $8\varepsilon$ of that of \eqref{eqn:clwe-sample-2}.
\end{proof}
\begin{claim}
\label{clm:lattice_conditional}
Let $\bm y = \bm x + \bm v$, where $\bm x \sim D_{L,r}$ and $\bm v \sim D_{\mathbb{R}^n,s}$. Then, the conditional distribution of $\bm x$ given $\bm y = \overline{\bm y}$ is $(r/t)^2\overline{\bm y} + D_{L-(r/t)^2\overline{\bm y}, rs/t}$ where $t = \sqrt{r^2+s^2}$.
\end{claim}
\begin{proof}
Observe that $\bm x$ conditioned on $\bm y = \overline{\bm y}$ is a discrete random variable supported on $L$.
The probability of $\bm x$ given $\bm y = \overline{\bm y}$ is proportional to
\begin{align*}
\rho_r(\bm x) \cdot \rho_s(\overline{\bm y}-\bm x) = \rho_t(\overline{\bm y}) \cdot \rho_{rs/t}(\bm x-(r/t)^2\overline{\bm y}) \propto \rho_{rs/t}(\bm x-(r/t)^2\overline{\bm y})
\; ,
\end{align*}
where the equality follows from Claim~\ref{claim:complete-squares}. Hence, the conditional distribution of $\bm x-(r/t)^2\bm y$ given $\bm y = \overline{\bm y}$ is $D_{L-(r/t)^2\overline{\bm y}, rs/t}$.
\end{proof}
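Claim~\ref{clm:lattice_conditional} can also be verified numerically in one dimension with $L = \mathbb{Z}$: the posterior of $\bm x$ given $\bm y = \overline{\bm y}$, computed from Bayes' rule, coincides with the shifted discrete Gaussian predicted by the claim. A sketch:
\begin{verbatim}
import numpy as np

r, s, ybar = 2.0, 1.5, 0.3
t = np.hypot(r, s)
xs = np.arange(-40, 41)                       # L = Z, truncated

post = (np.exp(-np.pi * (xs / r) ** 2)
        * np.exp(-np.pi * ((ybar - xs) / s) ** 2))
post /= post.sum()                            # Pr[x | y = ybar]

c = (r / t) ** 2 * ybar                       # predicted center
pred = np.exp(-np.pi * ((xs - c) / (r * s / t)) ** 2)
pred /= pred.sum()                            # c + D_{Z - c, rs/t}

assert np.allclose(post, pred)
\end{verbatim}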
\subsection{Solving BDD with the CLWE oracle}
\label{section:solve-bdd-with-clwe}
In this subsection, we complete the proof of Lemma~\ref{lem:bdddgs-to-clwe}. We first give some necessary background on the Oracle Hidden Center Problem (OHCP)~\cite{peikert2017ringlwe}. The problem asks one to search for a ``hidden center" $\bm w^*$ using a decision oracle whose acceptance probability depends only on the distance to $\bm w^*$. The problem's precise statement is as follows.
\begin{definition}[OHCP]
\label{definition:ohcp}
For parameters $\varepsilon, \delta \in [0,1)$ and $\zeta \geq 1$, the $(\varepsilon, \delta, \zeta)$-\emph{OHCP} is the following approximate search problem for a ``hidden center" $\bm w^*$. The input is a scale parameter $d > 0$ together with access to a randomized oracle $\mathcal{O} : \mathbb{R}^n \times \mathbb{R}^{\geq 0} \rightarrow \{0,1\}$ whose acceptance probability $p(\bm w,t)$ depends only on $\exp(t)\|\bm w-\bm w^*\|$, for some (unknown) ``hidden center" $\bm w^* \in \mathbb{R}^n$ with $\delta d \leq \|\bm w^*\| \leq d$ and for any $\bm w \in \mathbb{R}^n$ with $\|\bm w-\bm w^*\| \leq \zeta d$. The goal is to output $\bm w$ such that $\|\bm w-\bm w^*\| \leq \varepsilon d$.
\end{definition}
Notice that OHCP corresponds to our problem since we want to solve BDD, which is equivalent to finding the ``hidden" offset vector $\bm w^*$, using a decision oracle for $\mathrm{CLWE}_{\beta, \gamma}$. The acceptance probability of the $\mathrm{CLWE}_{\beta,\gamma}$ oracle will depend on the distance between our guess $\bm w$ and the true offset $\bm w^*$. For OHCP, we have the following result from~\cite{peikert2017ringlwe}.
\begin{lemma}[\cite{peikert2017ringlwe}, Proposition 4.4]
\label{lem:ohcp}
There is a poly$(\kappa, n)$-time algorithm that takes as input a confidence parameter $\kappa \geq 20 \log(n+1)$ (and the scale parameter $d > 0$) and solves $(\exp(-\kappa), \exp(-\kappa), 1+1/\kappa)$-OHCP in dimension $n$ except with probability $\exp(-\kappa)$, provided that the oracle $\mathcal{O}$ corresponding to the OHCP instance satisfies the following conditions. For some $p(\infty) \in [0, 1]$ and $t^* \ge 0$,
\begin{enumerate}
\item $p(\bm 0, t^*) - p(\infty) \geq 1/\kappa$;
\item $|p(\bm 0, t) - p(\infty)| \leq 2 \exp(-t/\kappa)$ for any $t \geq 0$; and
\item $p(\bm w, t)$ is $\kappa$-Lipschitz in $t$ for any $\bm w \in \mathbb{R}^n$ such that $\|\bm w\| \leq (1+1/\kappa)d$ \;.
\end{enumerate}
Furthermore, each of the algorithm's oracle calls takes the form $\mathcal{O}(\cdot,i\Delta)$ for some $\Delta < 1$ that depends only on $\kappa$ and $n$ and $0 \leq i \leq \operatorname{poly}(\kappa,n)$.
\end{lemma}
The main idea in the proof of Lemma~\ref{lem:ohcp} is performing a guided random walk with advice from the decision oracle $\mathcal{O}$. The decision oracle $\mathcal{O}$ rejects a random step with high probability if it increases the distance $\|\bm w - \bm w^*\|$. Moreover, there is non-negligible probability of decreasing the distance by a factor $\exp(1/n)$ unless $\log \|\bm w-\bm w^*\| \leq -\kappa$. Hence, with sufficiently many steps, the random walk will reach $\widehat{\bm w}$, a guess of the hidden center, which is within $\exp(-\kappa)$ distance to $\bm w^*$ with high probability.
Our goal is to show that we can construct an oracle $\mathcal{O}$ satisfying the above conditions using an oracle for $\mathrm{CLWE}_{\beta, \gamma}$. Then, it follows from Lemma~\ref{lem:ohcp} that BDD with discrete Gaussian samples can be solved using an oracle for CLWE. We first state some lemmas useful for our proof. Lemma~\ref{lem:closest-plane} is Babai's closest plane algorithm and Lemma~\ref{lem:tvbound} is an upper bound on the statistical distance between two one-dimensional Gaussian distributions.
\begin{lemma}[\cite{lenstra1982lll,babai1986cvp}]
\label{lem:closest-plane}
For any $n$-dimensional lattice $L$, there is an efficient algorithm that solves $\mathrm{BDD}_{L,d}$ for $d = 2^{-n/2}\cdot \lambda_1(L)$.
\end{lemma}
\begin{lemma}[{\cite[Theorem 1.3]{devroye2018tv}}] For all $\mu_1, \mu_2 \in \mathbb{R}$, and $\sigma_1, \sigma_2 > 0$, we have
\label{lem:tvbound}
\begin{align*}
\Delta\big(\mathcal{N}(\mu_1,\sigma_1),\mathcal{N}(\mu_2,\sigma_2)\big) \leq \frac{3|\sigma_1^2-\sigma_2^2|}{2\max(\sigma_1^2,\sigma_2^2)}+\frac{|\mu_1-\mu_2|}{2\max(\sigma_1,\sigma_2)}
\; ,
\end{align*}
where $\mathcal{N}(\mu,\sigma)$ denotes the Gaussian distribution with mean $\mu$ and standard deviation $\sigma$.
\end{lemma}
Now, we prove Lemma~\ref{lem:bdddgs-to-clwe}, restated below.
{
\def\ref{lem:bdddgs-to-clwe}{\ref{lem:bdddgs-to-clwe}}
\begin{lemma}
Let $\beta=\beta(n) \in (0,1)$ and $\gamma=\gamma(n) \geq 2\sqrt{n}$ such that $q = \gamma/\beta$ is polynomially bounded. There exists a probabilistic polynomial-time (classical) algorithm with access to an oracle that solves $\mathrm{CLWE}_{\beta,\gamma}$, that takes as input a lattice $L \subset \mathbb{R}^n$, parameters $\beta, \gamma$, and $r \geq 2q \cdot \eta_{\varepsilon}(L)$, and $\operatorname{poly}(n)$ many samples from the discrete Gaussian distribution $D_{L,r_i}$ for $\operatorname{poly}(n)$ parameters $r_i \geq r$ and solves $\mathrm{BDD}_{L^*,d}$ for $d = \gamma/(\sqrt{2}r)$.
\end{lemma}
\addtocounter{theorem}{-1}
}
\begin{proof}
Let $d' = (1-1/(2n))\cdot d$. By~\cite[Corollary 2]{lyubashevsky2009bdd}, it suffices to solve $\mathrm{BDD}_{L^*,d'}$.
Let $\kappa = \operatorname{poly}(n)$ with $\kappa \geq 8qn\ell$ be such that the advantage of our $\mathrm{CLWE}_{\beta, \gamma}$ oracle is at least $1/\kappa$, where $\ell \geq 1$ is the number of samples required by the oracle.
Given as input a lattice $L \subset \mathbb{R}^n$, a parameter $r \geq 2q \cdot \eta_{\varepsilon}(L)$, samples from $D_{L,r_i}$ for $1 \leq i \leq \operatorname{poly}(n)$, and a BDD instance $\bm w^* + \bm u$ where $\bm u \in L^*$ and $\|\bm w^*\| \leq d'$, we want to recover $\bm w^*$. Without loss of generality, we can assume that $\|\bm w^*\| \geq \exp(-n/2)\cdot \lambda_1(L^*) \geq (2q/r)\cdot \exp(-n/2)$ (Lemma~\ref{lem:smoothing-lb}), since we can otherwise find $\bm w^*$ efficiently using Babai's closest plane algorithm (Lemma~\ref{lem:closest-plane}).
We will use the CLWE oracle to simulate an oracle $\mathcal{O}: \mathbb{R}^n \times \mathbb{R}^{\ge 0} \rightarrow \{0,1\}$ such that the probability that $\mathcal{O}(\bm w,t)$ outputs 1 (``accepts") only depends on $\exp(t)\|\bm w-\bm w^*\|$. Our oracle $\mathcal{O}$ corresponds to the oracle in Definition~\ref{definition:ohcp} with $\bm w^*$ as the ``hidden center". We will use Lemma~\ref{lem:ohcp} to find $\bm w^*$.
On input $(\bm w, t)$, our oracle $\mathcal{O}$ receives $\ell$ independent samples from $D_{L,\exp(t)r}$. Then, we generate CLWE samples using the procedure from Lemma~\ref{lem:bdd-to-clwe}. The procedure takes as input these $\ell$ samples, the vector $\bm u + \bm w^* - \bm w$ where $\bm u \in L^*$, and parameters $\exp(t) r, \exp(t) s_1, s_2$. Our choice of $s_1$ and $s_2$ will be specified below. Note that the CLWE oracle requires the ``hidden direction" $(\bm w-\bm w^*)/\|\bm w-\bm w^*\|$ to be uniformly distributed on the unit sphere. To this end, we apply the worst-to-average case reduction from Claim~\ref{claim:ic-worst-to-ic}. Let $S_{\bm w, t}$ be the resulting CLWE distribution. Our oracle $\mathcal{O}$ then calls the $\mathrm{CLWE}_{\beta,\gamma}$ oracle on $S_{\bm w,t}^\ell$ and outputs 1 if and only if it accepts.
Using the oracle $\mathcal{O}$, we can run the procedure from Lemma~\ref{lem:ohcp} with confidence parameter $\kappa$ and scale parameter $d'$. The output of this procedure will be some approximation $\widehat{\bm w}$ to the oracle's ``hidden center" with the guarantee that $\|\widehat{\bm w}-\bm w^*\| \leq \exp(-\kappa)d'$. Finally, running Babai's algorithm on the vector $\bm u+\bm w^*-\widehat{\bm w}$ will give us $\bm w^*$ exactly since
\begin{align*}
\|\widehat{\bm w}-\bm w^*\| \leq \exp(-\kappa)d \leq \beta\exp(-\kappa)/\eta_\varepsilon(L) \leq 2^{-n}\lambda_1(L^*)
\; ,
\end{align*}
where the last inequality is from Lemma~\ref{lem:smoothing-dual}.
The running time of the above procedure is clearly polynomial in $n$. It remains to check that our oracle $\mathcal{O}$ (1) is a valid instance of $(\exp(-\kappa),\exp(-\kappa),1+1/\kappa)$-OHCP with hidden center $\bm w^*$ and (2) satisfies all the conditions of Lemma~\ref{lem:ohcp}. First, note that $S_{\bm w, t}$ will be negligibly close in statistical distance to the CLWE distribution with parameters
\begin{align*}
\beta' &= \sqrt{(\exp(t)\|\bm w-\bm w^*\|)^2s_1'^2+s_2^2}
\; , \\
\gamma' &= \exp(t)\|\bm w-\bm w^*\|r'
\; ,
\end{align*}
where $r' = r^2/\sqrt{r^2+s_1^2}$ and $s_1' = rs_1/\sqrt{r^2+s_1^2}$ as long as $r,s_1,s_2$ satisfy the conditions of Lemma~\ref{lem:bdd-to-clwe}. Then, we set $s_1 = r/(\sqrt{2}q)$ and choose $s_2$ such that
\begin{align*}
s_2^2 = {\beta}^2 - (s_1'/r')^2{\gamma}^2 = {\beta}^2 - (s_1/r)^2{\gamma}^2 = {\beta}^2/2
\; .
\end{align*}
Lemma~\ref{lem:bdd-to-clwe} requires $rs_1/\sqrt{r^2\|\bm w-\bm w^*\|^2(s_1/s_2)^2+r^2+s_1^2} \geq \eta_{\varepsilon}(L)$. We know that $r \geq 2q\cdot \eta_{\varepsilon}(L)$ and $s_1 \geq \sqrt{2}\cdot \eta_{\varepsilon}(L)$, so it remains to determine a sufficient condition for the aforementioned inequality. Observe that for any $\bm w$ such that $\|\bm w-\bm w^*\| \leq d$, the condition $s_2 \geq 2d\cdot\eta_\varepsilon(L)$ is sufficient. Since $r \geq 2(\gamma/\beta)\cdot \eta_{\varepsilon}(L)$, this translates to $s_2 \geq \beta/(\sqrt{2})$. Hence, the transformation from Lemma~\ref{lem:bdd-to-clwe} will output samples negligibly close to CLWE samples for our choice of $s_1$ and $s_2$ as long as $\|\bm w-\bm w^*\| \leq d$ (beyond the BDD distance bound $d'$).
Since $S_{\bm w,t}$ is negligibly close to the CLWE distribution, the acceptance probability $p(\bm w,t)$ of $\mathcal{O}$ only depends on $\exp(t)\|\bm w-\bm w^*\|$. Moreover, by assumption $\|\bm w^*\| \geq \exp(-n/2) \cdot (2q/r) \geq \exp(-\kappa)d'$. Hence, $\mathcal{O}, \kappa, d'$ correspond to a valid instance of $(\exp(-\kappa),\exp(-\kappa),1+1/\kappa)$-OHCP with ``hidden center" $\bm w^*$.
Next, we show that $p(\bm w,t)$ of $\mathcal{O}$ satisfies all three conditions of Lemma~\ref{lem:ohcp} with $p(\infty)$ taken to be the acceptance probability of the CLWE oracle on samples from $D_{\mathbb{R}^n} \times U$.
Item~1 of Lemma~\ref{lem:ohcp} follows from our assumption that our $\mathrm{CLWE}_{\beta,\gamma}$ oracle has advantage $1/\kappa$, and by our choice of $r$, $s_1$, and $s_2$, when $t^* = \log(\gamma/(\|\bm w^*\|r')) > \log(\sqrt{2})$, the generated CLWE samples satisfy $\gamma'(t^*) = \gamma$ and $\beta'(t^*) = \beta$. Hence, $p(\bm 0,t^*) - p(\infty) \geq 1/\kappa$.
We now show that Item~2 holds, which states that $|p(\bm 0,t)-p(\infty)| \leq 2 \exp(-t/\kappa)$ for any $t > 0$. We will show that $S_{\bm 0, t}$ converges exponentially fast to $D_{\mathbb{R}^n} \times U$ in statistical distance. Let $f(\bm y,z)$ be the probability density of $S_{\bm 0, t}$. Then,
\begin{align*}
\Delta(S_{\bm 0,t},D_{\mathbb{R}^n}\times U) &= \frac{1}{2}\int |f(\bm y,z)-\rho(\bm y)U(z)|\, d\bm y \, dz \\
&= \frac{1}{2} \int \Big(\int |f(z|\bm y)-U(z)|dz\Big)\rho(\bm y) d\bm y
\; .
\end{align*}
Hence, it suffices to show that the conditional density of $z$ given $\bm y$ for $S_{\bm 0,t}$ converges exponentially fast to the uniform distribution on $\mathbb{T}$. Notice that the conditional distribution of $z$ given $\bm y$ is the Gaussian distribution with width parameter $\beta' \geq \exp(t)\|\bm w^*\|r/(2q) \geq \exp(t-n/2)$, where we have used our assumption that $\|\bm w^*\| \geq (2q/r)\cdot \exp(-n/2)$. By Lemma~\ref{lem:smoothing-dual} applied to $\mathbb{Z}$, we know that $\beta'$ is larger than $\eta_{\varepsilon}(\mathbb{Z})$ for $\varepsilon = \exp(-\exp(2t-n))$. Hence, one sample from this conditional distribution is within statistical distance $\varepsilon$ of the uniform distribution by Lemma~\ref{lem:smoothing-uniform}. By the triangle inequality applied to $\ell$ samples,
\begin{align*}
\Delta\Big(S_{\bm 0, t}^\ell, (D_{\mathbb{R}^n} \times U)^\ell\Big) \leq \min(1, \ell \exp(-\exp(2t-n))) \leq 2\exp(-t/\kappa)
\; ,
\end{align*}
where in the last inequality, we use the fact that we can choose $\kappa$ such that $2\exp(-t/\kappa) \geq 1$ unless $t \geq \kappa/2$. When $t \geq \kappa/2 \geq 4qn\ell$, we have $\ell \exp(-\exp(2t-n)) \ll \exp(-t/\kappa)$.
It remains to verify Item~3, which states that $p(\bm w, t)$ is $\kappa$-Lipschitz in $t$ for any $\|\bm w\| \leq (1+1/\kappa)d' \leq d$. We show this by bounding the statistical distance between $S_{\bm w,t_1}$ and $S_{\bm w,t_2}$ for $t_1 \geq t_2$. With a slight abuse of notation, let $f_{t_i}(\bm y,z)$ be the probability density of $S_{\bm w,t_i}$ and let $(\beta_i, \gamma_i)$ be the corresponding CLWE distribution parameters. For simplicity, also denote the hidden direction by $\bm w' = (\bm w-\bm w^*)/\|\bm w-\bm w^*\|$. Then,
\begin{align}
\Delta(f_{t_1}, f_{t_2})
&= \frac{1}{2}
\int \Big(\int |f_{t_1}(z|\bm y)-f_{t_2}(z|\bm y)|dz\Big) \rho(\bm y)d\bm y \nonumber \\
&= \int \Delta\Big(\mathcal{N}(\gamma_1\langle\bm y,\bm w'\rangle,\beta_1/\sqrt{2\pi}),\mathcal{N}(\gamma_2\langle\bm y,\bm w'\rangle,\beta_2/\sqrt{2\pi})\Big) \rho(\bm y)d\bm y \nonumber \\
&\leq \frac{1}{2} \int \Big(3(1-(\beta_2/\beta_1)^2) + \sqrt{2\pi}(\gamma_1-\gamma_2)/\beta_1\cdot|\langle \bm y, \bm w' \rangle|\Big)\cdot \rho(\bm y)d\bm y \label{eqn:devroye-tv}\\
&\leq \operatorname*{\mathbb{E}}_{\bm y \sim \rho}[M(\bm y)]
\cdot \Big(1-\exp(-2(t_1-t_2))\Big) \text{ where } M(\bm y)
= \frac{1}{2}\Big(3+2\sqrt{\pi} q \cdot|\langle \bm y, \bm w' \rangle|\Big) \nonumber \\
&\leq \operatorname*{\mathbb{E}}_{\bm y \sim \rho}[M(\bm y)] \cdot 2(t_1-t_2) \label{eqn:linear-bound} \\
&\leq (\kappa/\ell)\cdot (t_1-t_2) \label{eqn:exp-half-gaussian}
\; ,
\end{align}
where \eqref{eqn:devroye-tv} follows from Lemma~\ref{lem:tvbound}, \eqref{eqn:linear-bound} uses the fact that $1-\exp(-2(t_1-t_2)) \leq 2(t_1-t_2)$, and \eqref{eqn:exp-half-gaussian} uses the fact that $\operatorname*{\mathbb{E}}_{\bm y \sim \rho}[M(\bm y)] \leq 4q \leq \kappa/(2\ell)$. Using the triangle inequality over $\ell$ samples, the statistical distance between $S_{\bm w,t_1}^\ell$ and $S_{\bm w,t_2}^\ell$ is at most
\begin{align*}
\min(1,\ell\cdot(\kappa/\ell)(t_1-t_2)) \leq \kappa(t_1-t_2)
\; .
\end{align*}
Therefore, $p(\bm w,t)$ is $\kappa$-Lipschitz in $t$.
\end{proof}
\section{Hardness of Homogeneous CLWE}
\label{section:hc}
In this section, we show the hardness of homogeneous CLWE by reducing from CLWE, whose hardness was established in the previous section.
The main step of the reduction is to transform CLWE samples to homogeneous CLWE samples using rejection sampling (Lemma~\ref{lem:ic-to-hc}).
Consider the samples $(\bm y, z) \sim A_{\bm w,\beta,\gamma}$ in $\mathrm{CLWE}_{\beta,\gamma}$. If we condition $\bm y$ on $z = 0 \pmod{1}$, then we get exactly samples $\bm y \sim H_{\bm w,\beta,\gamma}$ for $\mathrm{hCLWE}_{\beta,\gamma}$. However, this approach is impractical, as $z = 0 \pmod{1}$ happens with probability 0. Instead, we condition $\bm y$ on $z$ being close to $0 \pmod{1}$. One can imagine that the resulting samples $\bm y$ will still have a ``wavy'' probability density in the direction of $\bm w$ with spacing $1/\gamma$, which accords with the picture of homogeneous CLWE. To avoid throwing away too many samples, we perform rejection sampling with a small ``window'' $\delta = 1/\operatorname{poly}(n)$. Formally, we have the following lemma.
\begin{lemma}
\label{lem:ic-to-hc}
There is a $\operatorname{poly}(n, 1/\delta)$-time probabilistic algorithm that takes as input a parameter $\delta \in (0,1)$ and samples from $A_{\bm w,\beta,\gamma}$, and outputs samples from $H_{\bm w,\sqrt{\beta^2+\delta^2},\gamma}$.
\end{lemma}
\begin{proof}
Without loss of generality assume that $\bm w = \bm e_1$.
By definition, the probability density of sample $(\bm y, z) \sim A_{\bm w,\beta,\gamma}$ is
\begin{align*}
p(\bm y, z) = \frac{1}{\beta}\cdot \rho(\bm y) \cdot \sum_{k \in \mathbb{Z}} \rho_\beta(z+k-\gamma y_1)
\; .
\end{align*}
Let $g : \mathbb{T} \to [0,1]$ be the function $g(z) = g_0(z) / M$, where $g_0(z) = \sum_{k \in \mathbb{Z}} \rho_\delta(z+k)$
and $M = \sup_{z \in \mathbb{T}} g_0(z)$.
We perform rejection sampling on the samples $(\bm y, z)$ with acceptance probability $\Pr[\mathrm{accept} | \bm y, z] = g(z)$.
We remark that $g(z)$ is efficiently computable (see~\cite[Section 5.2]{BrakerskiLPRS13}).
The probability density of outputting $\bm y$ and accept is
\begin{align*}
\int_\mathbb{T} p(\bm y, z) g(z) d z
&= \frac{\rho(\bm y)}{\beta M} \cdot \int_\mathbb{T} \sum_{k_1, k_2 \in \mathbb{Z}} \rho_\beta(z+k_1-\gamma y_1) \rho_\delta(z+k_2) d z \\
&= \frac{\rho(\bm y)}{\beta M} \cdot \int_\mathbb{T} \sum_{k, k_2 \in \mathbb{Z}} \rho_{\sqrt{\beta^2+\delta^2}}(k-\gamma y_1) \rho_{\beta\delta/\sqrt{\beta^2+\delta^2}} \Bigl( z+k_2+\frac{\delta^2 (k-\gamma y_1)}{\beta^2+\delta^2} \Bigr) d z \\
&= \frac{\delta}{M \sqrt{\beta^2+\delta^2}} \cdot \rho(\bm y) \cdot \sum_{k \in \mathbb{Z}} \rho_{\sqrt{\beta^2+\delta^2}}(k-\gamma y_1)
\; ,
\end{align*}
where the second equality follows from Claim~\ref{claim:complete-squares}.
This shows that the conditional distribution of $\bm y$ upon acceptance is indeed $H_{\bm e_1,\sqrt{\beta^2+\delta^2},\gamma}$.
Moreover, a byproduct of this calculation is that the expected acceptance probability is $\Pr[\mathrm{accept}] = Z \delta / (M \sqrt{\beta^2+\delta^2})$, where, according to Eq.~\eqref{eqn:hclwe-def-normalization},
\begin{align*}
Z
&= \sqrt\frac{\beta^2+\delta^2}{\beta^2+\delta^2+\gamma^2} \cdot \rho_{\sqrt{\beta^2+\delta^2+\gamma^2}}(\mathbb{Z}) \\
&= \sqrt{\beta^2+\delta^2} \cdot \rho_{1/\sqrt{\beta^2+\delta^2+\gamma^2}}(\mathbb{Z}) \\
&\ge \sqrt{\beta^2+\delta^2}
\; ,
\end{align*}
and the second equality uses Lemma~\ref{lem:poisson-sum}.
Observe that
\begin{align*}
g_0(z) &= \sum_{k \in \mathbb{Z}} \rho_\delta(z+k) \\
&\leq 2 \cdot \sum_{k = 0}^\infty \rho_\delta(k) \\
&< 2 \cdot \sum_{k = 0}^\infty \exp(-\pi k)
< 4
\end{align*}
since $\delta < 1$, implying that $M \le 4$.
Therefore, $\Pr[\mathrm{accept}] \ge \delta/4$, and so the rejection sampling procedure has $\operatorname{poly}(n, 1/\delta)$ expected running time.
\end{proof}
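For concreteness, the rejection step is easy to express in code. The following Python sketch is an illustration of the procedure of Lemma~\ref{lem:ic-to-hc} only: it truncates the sum defining $g_0$ to $|k| \le K$ (a negligible approximation for $\delta < 1$) and uses the fact that the supremum $M$ is attained at $z = 0$.
\begin{verbatim}
import numpy as np

def clwe_to_hclwe(ys, zs, delta, K=50, rng=np.random.default_rng(0)):
    """Rejection-sample CLWE pairs (ys, zs) down to homogeneous CLWE points."""
    ks = np.arange(-K, K + 1)

    def g0(z):
        z = np.atleast_1d(z)
        return np.exp(-np.pi * (z[:, None] + ks) ** 2 / delta**2).sum(axis=1)

    M = g0(0.0)[0]                      # sup_z g_0(z), attained at z = 0 (mod 1)
    accept = rng.random(len(zs)) < g0(zs) / M
    return ys[accept]                   # ~ H_{w, sqrt(beta^2 + delta^2), gamma}
\end{verbatim}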
The above lemma reduces CLWE to homogeneous CLWE with slightly worse parameters. Hence, homogeneous CLWE is as hard as CLWE.
Specifically, combining Theorem~\ref{thm:clwe-intro} (with $\beta$ taken to be $\beta/\sqrt{2}$) and Lemma~\ref{lem:ic-to-hc} (with $\delta$ also taken to be $\beta/\sqrt{2}$), we obtain the following corollary.
\begin{corollary}
\label{cor:hc}
For any $\beta = \beta(n) \in (0,1)$ and $\gamma = \gamma(n) \geq 2\sqrt{n}$ such that $\gamma/\beta$ is polynomially bounded,
there is a polynomial-time quantum reduction from $\mathrm{DGS}_{2\sqrt{2n}\eta_\varepsilon(L)/\beta}$ to $\mathrm{hCLWE}_{\beta,\gamma}$.
\end{corollary}
\section{Hardness of Density Estimation for Gaussian Mixtures}
\label{section:mixture-hardness}
In this section, we prove the hardness of density estimation for $k$-mixtures of $n$-dimensional Gaussians by showing a reduction from homogeneous CLWE. This answers an open question regarding its computational complexity~\cite{diakonikolas2016structured,moitra2018}.
We first formally define density estimation for Gaussian mixtures.
\begin{definition}[Density estimation of Gaussian mixtures]
Let $\mathcal{G}_{n,k}$ be the family of $k$-mixtures of $n$-dimensional Gaussians. The problem of \emph{density estimation} for $\mathcal{G}_{n,k}$ is the following. Given $\delta > 0$ and sample access to an unknown $P \in \mathcal{G}_{n,k}$, with probability $9/10$, output a hypothesis distribution $Q$ (in the form of an evaluation oracle) such that $\Delta(Q,P) \le \delta$.
\end{definition}
For our purposes, we fix the precision parameter $\delta$ to a very small constant, say, $\delta = 10^{-3}$. Now we show a reduction from $\mathrm{hCLWE}_{\beta,\gamma}$ to the problem of density estimation for Gaussian mixtures. Corollary~\ref{cor:hc} shows that $\mathrm{hCLWE}_{\beta,\gamma}$ is hard for $\gamma \ge 2\sqrt{n}$ (assuming worst-case lattice problems are hard). Hence, by taking $\gamma = 2\sqrt{n}$ and $g(n) = O(\log n)$ in Proposition~\ref{prop:mixture-learning-hardness}, we rule out the possibility of a $\operatorname{poly}(n,k)$-time density estimation algorithm for $\mathcal{G}_{n,k}$ under the same hardness assumption.
\begin{proposition}
\label{prop:mixture-learning-hardness}
Let $\beta = \beta(n) \in (0,1/32)$, $\gamma = \gamma(n) \ge 1$, and $g(n) \ge 4\pi$. For $k = 2\gamma \sqrt{g(n)/\pi}$, if there is an $\exp(g(n))$-time algorithm that solves density estimation for $\mathcal{G}_{n,2k+1}$, then there is an $O(\exp(g(n)))$-time algorithm that solves $\mathrm{hCLWE}_{\beta,\gamma}$.
\end{proposition}
\begin{proof}
We apply the density estimation algorithm $\mathcal{A}$ to the given unknown distribution $P$. As we will show below, with constant probability, it outputs a density estimate $f$ satisfying $\Delta(f,P) < 2\delta = 2 \cdot 10^{-3}$ (even though $H_{\bm w,\beta,\gamma}$ has infinitely many components). We then test whether $P = D_{\mathbb{R}^n}$ using the following procedure, repeated $m=1/(6\sqrt{\delta})$ times: we draw $\bm x \sim D_{\mathbb{R}^n}$ and check whether the following holds
\begin{align}
\frac{f(\bm x)}{D(\bm x)} \in [1-\sqrt{\delta},1+\sqrt{\delta}] \label{eqn:equality-test}\;,
\end{align}
where $D$ denotes the density of $D_{\mathbb{R}^n}$. We output $P = D_{\mathbb{R}^n}$ if Eq.~\eqref{eqn:equality-test} holds for all $m$ independent trials and $P = H_{\bm w,\beta,\gamma}$ otherwise.
Since $\Delta(H_{\bm w,\beta,\gamma},D_{\mathbb{R}^n}) > 1/2$ (Claim~\ref{claim:hclwe-tv-distance}), it is not hard to see that this test solves $\mathrm{hCLWE}_{\beta,\gamma}$ with probability at least $2/3$ (see~\cite[Observation 24]{rubinfeld-servedio2009monotone} for a closely related statement). Moreover, the total running time is $O(\exp(g(n)))$ since this test uses a constant number of samples.
If $P = D_{\mathbb{R}^n}$, it is obvious that $\mathcal{A}$ outputs a close density estimate with constant probability since $D_{\mathbb{R}^n} \in \mathcal{G}_{n,2k+1}$. It remains to consider the case $P = H_{\bm w,\beta,\gamma}$. To this end, we observe that $H_{\bm w,\beta,\gamma}$ is close to a $(2k+1)$-mixture of Gaussians. Indeed, by Claim~\ref{claim:hclwe-truncation} below,
\begin{align}
\Delta(H_{\bm w,\beta,\gamma},H^{(k)}) \le 2\exp(-\pi\cdot k^2/(\beta^2+\gamma^2)) < 2\exp(-\pi \cdot k^2/(2\gamma^2)) \nonumber \;,
\end{align}
where $H^{(k)}$ is the distribution given by truncating $H_{\bm w,\beta,\gamma}$ to the $(2k+1)$ central mixture components.
Hence, the statistical distance between the joint distribution of $\exp(g(n))$ samples from $H_{\bm w,\beta,\gamma}$ and that of $\exp(g(n))$ samples from $H^{(k)}$ is bounded by
\begin{align}
2\exp(-\pi \cdot k^2/(2\gamma^2))\cdot\exp(g(n)) = 2\exp(-g(n)) \le 2\exp(-4\pi) \; .\nonumber
\end{align}
Since the two distributions are statistically close, a standard argument shows that $\mathcal{A}$ will output $f$ satisfying $\Delta(f,H_{\bm w,\beta,\gamma}) \le \Delta(f,H^{(k)}) + \Delta(H^{(k)},H_{\bm w,\beta,\gamma}) < 2\delta$ with constant probability.
\end{proof}
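The test in the proof above is equally short in code. The following Python sketch is a minimal illustration, in which the evaluation oracle \texttt{f} for the hypothesis density is an assumed input; widths follow the paper's convention that $D_{\mathbb{R}^n}$ has density $\exp(-\pi\|\bm x\|^2)$, i.e., standard deviation $1/\sqrt{2\pi}$ per coordinate.
\begin{verbatim}
import numpy as np

def accepts_gaussian(f, n, delta=1e-3, rng=np.random.default_rng(0)):
    """Run the equality test of Eq. (equality-test) m times."""
    m = int(np.ceil(1 / (6 * np.sqrt(delta))))
    for _ in range(m):
        x = rng.normal(scale=1 / np.sqrt(2 * np.pi), size=n)  # x ~ D_{R^n}
        ratio = f(x) / np.exp(-np.pi * x @ x)                 # f(x) / D(x)
        if not (1 - np.sqrt(delta) <= ratio <= 1 + np.sqrt(delta)):
            return False   # declare P = H_{w, beta, gamma}
    return True            # declare P = D_{R^n}
\end{verbatim}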
\begin{claim}
\label{claim:hclwe-tv-distance}
Let $\beta = \beta(n) \in (0,1/32)$ and $\gamma = \gamma(n) \ge 1$. Then,
\begin{align*}
\Delta(H_{\bm w,\beta,\gamma},D_{\mathbb{R}^n}) > 1/2\;.
\end{align*}
\end{claim}
\begin{proof}
Let $\gamma' = \sqrt{\beta^2+\gamma^2} > \gamma$. Let $\bm y \in \mathbb{R}^n$ be a random vector distributed according to $H_{\bm w,\beta,\gamma}$. Using the Gaussian mixture form of~\eqref{eqn:hclwe-mixture-def}, we observe that $\langle \bm y, \bm w \rangle \bmod{\gamma/\gamma'^2}$ is distributed according to $D_{\beta/\gamma'} \bmod{\gamma/\gamma'^2}$. Since statistical distance cannot increase by applying a function (inner product with $\bm w$ and then applying the modulo operation in this case), it suffices to lower bound the statistical distance between $D_{\beta/\gamma'} \bmod{\gamma/\gamma'^2}$ and $D \bmod{\gamma/\gamma'^2}$, where $D$ denotes the 1-dimensional standard Gaussian.
By a Chernoff bound, for every $\zeta>0$, at least a $1-\zeta$ fraction of the mass of $D_{\beta/\gamma'}$ is contained in $[- a \cdot (\beta/\gamma'), a \cdot (\beta/\gamma')]$, where $a = \sqrt{\log(1/\zeta)}$. Hence, $D_{\beta/\gamma'} \bmod{\gamma/\gamma'^2}$ is at least $1-2a\beta \gamma'/\gamma-\zeta$ far in statistical distance from the uniform distribution over $\mathbb{R}/(\gamma/\gamma'^2)\mathbb{Z}$, which we denote by $U$.
Moreover, by Lemma~\ref{lem:smoothing-uniform} and Lemma~\ref{lem:smoothing-dual}, $D \bmod{\gamma/\gamma'^2}$ is within statistical distance $\varepsilon/2 = \exp(-\gamma'^4/\gamma^2)/2$ from $U$. Therefore,
\begin{align}
\Delta(D_{\beta/\gamma'} \bmod{\gamma/\gamma'^2},D \bmod{\gamma/\gamma'^2})
&\ge \Delta(D_{\beta/\gamma'} \bmod{\gamma/\gamma'^2},U) - \Delta(U,D \bmod{\gamma/\gamma'^2}) \nonumber \\
&\ge 1-2a\beta\gamma'/\gamma-\zeta-\varepsilon/2 \nonumber \\
&> 1-2\sqrt{2}a\beta-\zeta-\exp(-\gamma^2)/2 \label{eqn:hclwe-tv-plug-in-values} \\
&> 1/2 \nonumber\;,
\end{align}
where we set $\zeta = \exp(-2)$ and use the fact that $\beta \le 1/32$ and $\gamma \ge 1$ in \eqref{eqn:hclwe-tv-plug-in-values}.
\end{proof}
\begin{claim}
\label{claim:hclwe-truncation}
Let $\beta = \beta(n) \in (0,1), \gamma = \gamma(n) \ge 1$, and $k \in \mathbb{Z}^{+}$. Then,
\begin{align}
\Delta(H_{\bm w,\beta,\gamma},H^{(k)}) \le 2\exp(-\pi\cdot k^2/(\beta^2+\gamma^2))\nonumber \;,
\end{align}
where $H^{(k)}$ is the distribution given by truncating $H_{\bm w,\beta,\gamma}$ to the central $(2k+1)$ mixture components.
\end{claim}
\begin{proof}
We express $H_{\bm w,\beta,\gamma}$ in its Gaussian mixture form given in Eq.~\eqref{eqn:hclwe-mixture-def} and define a random variable $X$ taking on values in $\mathbb{Z}$ such that the probability of $X = i$ is equal to the probability that a sample comes from the $i$-th component in $H_{\bm w,\beta,\gamma}$. Then, we observe that $H^{(k)}$ is the distribution given by conditioning on $|X| \le k$. Since $X$ is a discrete Gaussian random variable with distribution $D_{\mathbb{Z},\sqrt{\beta^2+\gamma^2}}$, we observe that $\Pr[|X| > k] \le \varepsilon := 2\exp(-\pi \cdot k^2/(\beta^2+\gamma^2))$ by~\cite[Lemma 2.8]{micciancio-peikert2012trapdoor}.
Since conditioning on an event of probability $1-\varepsilon$ cannot change the statistical distance by more than $\varepsilon$, we have
\begin{align}
\Delta(H_{\bm w,\beta,\gamma}, H^{(k)}) \le \varepsilon \nonumber \;.
\end{align}
\end{proof}
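The mixture form used in the two claims above also yields a direct sampler for $H_{\bm w,\beta,\gamma}$, which is convenient for experiments. The following Python sketch is an illustration only: the layer index is drawn from $D_{\mathbb{Z},\sqrt{\beta^2+\gamma^2}}$ truncated to $|k| \le K$, incurring exactly the error bounded in Claim~\ref{claim:hclwe-truncation}.
\begin{verbatim}
import numpy as np

def hclwe_samples(w, beta, gamma, num, K=None, rng=np.random.default_rng(0)):
    """Sample H_{w,beta,gamma} via its Gaussian mixture form."""
    w = np.asarray(w, dtype=float)
    gp2 = beta**2 + gamma**2                     # gamma'^2
    K = K if K is not None else int(np.ceil(10 * np.sqrt(gp2)))
    ks = np.arange(-K, K + 1)
    p = np.exp(-np.pi * ks**2 / gp2)             # layer weights ~ D_{Z, gamma'}
    k = rng.choice(ks, size=num, p=p / p.sum())
    # along w: centers k*gamma/gamma'^2, Gaussian of width beta/gamma'
    along = k * gamma / gp2 + rng.normal(scale=beta / np.sqrt(2 * np.pi * gp2),
                                         size=num)
    x = rng.normal(scale=1 / np.sqrt(2 * np.pi), size=(num, len(w)))
    return x + np.outer(along - x @ w, w)        # replace the w-component
\end{verbatim}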
\section{LLL Solves Noiseless CLWE}
\label{section:lll-clwe}
The noiseless CLWE problem ($\beta = 0$) can be solved in polynomial time using LLL. This applies both to the homogeneous and the inhomogeneous versions, as well as to the search version. The argument can be extended to the case of exponentially small $\beta>0$.
The key idea is to take samples $(\bm y_i, z_i)$ and find integer coefficients $c_1,\ldots,c_m$ such that $\bm y = \sum_{i=1}^m c_i \bm y_i$ is short, say
$\|\bm y\| \ll 1/\gamma$. By Cauchy--Schwarz, $|\gamma \langle \bm y, \bm w \rangle| \leq \gamma\|\bm y\| < 1/2$, so the equation $\gamma \langle \bm y, \bm w \rangle = \sum_{i=1}^m c_i z_i$ holds over the reals (not just modulo 1). This is formalized in Theorem~\ref{thm:lll-noiseless-clwe}. We first state Minkowski's Convex Body Theorem, which we will use in the proof of our procedure.
\begin{lemma}[\cite{minkowski1910geometrie}]
\label{lem:minkowski-cvx}
Let $L$ be a full-rank $n$-dimensional lattice. Then, for any centrally-symmetric convex set $S$, if $\operatorname{vol}(S) > 2^n \cdot |\det(L)|$, then $S$ contains a non-zero lattice point.
\end{lemma}
\begin{theorem}
\label{thm:lll-noiseless-clwe}
Let $\gamma = \gamma(n)$ be a polynomial in $n$. Then, there exists a polynomial-time algorithm for solving $\mathrm{CLWE}_{0,\gamma}$.
\end{theorem}
\begin{proof}
Take $n+1$ CLWE samples $\{(\bm y_i,z_i)\}_{i=1}^{n+1}$ and consider the matrix
\begin{align*}
Y = \begin{bmatrix}
\bm y_1 & \cdots & \bm y_n & \bm y_{n+1} \\
0 & \cdots & 0 & \delta
\end{bmatrix} \; ,
\end{align*}
where $\delta = 2^{-3n^2}$.
Consider the lattice $L$ generated by the columns of $Y$. Since $\bm y_i$'s are drawn from the Gaussian distribution, $L$ is full rank. By Hadamard's inequality, and the fact that with probability exponentially close to $1$, $\|\bm y_i\| \leq \sqrt{n}$ for all $i$, we have
\begin{align*}
|\det(L)| \leq \delta \cdot n^{n/2} < 2^{-2n^2} \; .
\end{align*}
Now consider the $(n+1)$-dimensional cube $S$ centered at $\bm 0$ with side length $2^{-n+1}$. Then, $\operatorname{vol}(S) = 2^{-(n-1)(n+1)} > 2^{n+1} \cdot |\det(L)|$, so by Lemma~\ref{lem:minkowski-cvx}, $L$ contains a non-zero vector $\bm v$ satisfying $\|\bm v\|_{\infty} \leq 2^{-n}$ and so $\| \bm v \|_2 \leq \sqrt{n+1}\cdot 2^{-n}$.
Applying the LLL algorithm~\cite{lenstra1982lll} gives us an integer combination of the columns of $Y$ whose length is within a $2^{(n+1)/2}$ factor of the shortest vector in $L$, and which therefore has $\ell_2$ norm less than $\sqrt{n+1} \cdot 2^{-(n-1)/2}$.
Let $\bm y$ be the corresponding combination of the $\bm y_i$ vectors (which is equivalently given by the first $n$ coordinates of the output of LLL) and
$z \in (-1/2,1/2]$ a representative of the corresponding integer combination of the $z_i$ mod 1.
Then, we have $\|\bm y\|_2 \leq \sqrt{n+1} \cdot 2^{-(n-1)/2}$, so that $|\gamma \cdot \langle \bm y, \bm w \rangle| \leq \gamma\|\bm y\|_2 < 1/2$ for sufficiently large $n$, and therefore we obtain the linear equation $\gamma \cdot \langle \bm y,\bm w \rangle = z$ over the reals (without mod 1).
We now repeat the above procedure $n$ times, and recover $\bm w$ by solving the resulting $n$ linear equations.
It remains to argue why the $n$ vectors $\bm y$ we collect are linearly independent.
First, note that the output $\bm y$ is guaranteed to be a non-zero vector since, with probability $1$, no non-zero integer combination of the Gaussian distributed $\bm y_i$ equals $\bm 0$.
Next, note that LLL is equivariant to rotations, i.e., if we rotate the input basis then the output vector is rotated by the same rotation. Moreover, spherical Gaussians are rotationally invariant. Hence, the distribution of the output vector $\bm y \in \mathbb{R}^n$
is also rotationally invariant. Therefore, repeating the above procedure $n$ times gives us $n$ linearly independent vectors, since each new vector falls outside the span of the previous ones with probability $1$.
\end{proof}
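The following Python sketch illustrates one pass of this procedure. The use of \texttt{fpylll} for LLL, the choice of scaling, and the recovery of the coefficients by solving a linear system are choices of the sketch rather than of the theorem, and floating-point arithmetic limits it to small $n$.
\begin{verbatim}
import numpy as np
from fpylll import IntegerMatrix, LLL   # assumed dependency; any LLL routine works

def noiseless_clwe_equation(ys, zs):
    """One linear equation from n+1 noiseless samples (rows of ys, labels zs)."""
    n = ys.shape[1]
    Y = np.zeros((n + 1, n + 1))
    Y[:n, :] = ys.T                         # columns of Y are the samples y_i
    Y[n, n] = 2.0 ** (-3 * n * n)           # the delta from the proof
    sc = 2 ** (3 * n * n + 2 * n)           # scale so the delta entry stays integral
    rows = [[round(sc * Y[i, j]) for i in range(n + 1)] for j in range(n + 1)]
    A = IntegerMatrix.from_matrix(rows)     # row j = scaled j-th column of Y
    LLL.reduction(A)
    v = np.array([A[0, i] for i in range(n + 1)], dtype=float) / sc
    c = np.rint(np.linalg.solve(Y, v)).astype(int)   # coefficients c_1..c_{n+1}
    z = (c @ zs + 0.5) % 1.0 - 0.5          # representative of sum c_i z_i mod 1
    return ys.T @ c, z                      # gamma * <y, w> = z over the reals
\end{verbatim}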
\section{Subexponential Algorithm for Homogeneous CLWE}
\label{section:subexp}
For $\gamma = o(\sqrt{n})$, the covariance matrix of the homogeneous CLWE distribution reveals its discrete structure, which leads to a subexponential-time algorithm for the problem. This explains why the hardness results for homogeneous CLWE do not extend below $\gamma \approx \sqrt{n}$.
We define the \emph{noiseless homogeneous CLWE distribution} $H_{\bm w, \gamma}$ as $H_{\bm w, \beta, \gamma}$ with $\beta = 0$.
We begin with a claim that will allow us to focus on the noiseless case.
\begin{claim}
\label{claim:noiseless-is-sufficient}
By adding Gaussian noise $\ngauss{\beta/\gamma}$ to $H_{\bm w,\gamma}$ and then rescaling by a factor of $\gamma/\sqrt{\beta^2+\gamma^2}$, the resulting distribution is $H_{\bm w, \tilde{\beta}, \tilde{\gamma}}$, where $\tilde{\gamma} = \gamma/\sqrt{1+(\beta/\gamma)^2}$ and $\tilde{\beta} = \tilde{\gamma}(\beta/\gamma)$.\footnote{%
Equivalently, in terms of the Gaussian mixture representation of Eq.~\eqref{eqn:hclwe-mixture-def}, the resulting distribution has layers spaced by $1/\sqrt{\gamma^2+\beta^2}$
and of width $\beta/\sqrt{\gamma^2+\beta^2}$.
}
\end{claim}
\begin{proof}
Without loss of generality, suppose $\bm w = \bm e_1$.
Let $\bm z \sim H_{\bm w,\gamma} + \ngauss{\beta/\gamma}$ and $\tilde{\bm z} = \gamma\bm z/\sqrt{\beta^2+\gamma^2}$.
It is easy to verify that the marginal density of $\tilde{\bm z}$ on the subspace $\bm e_1^\perp$ is simply $\rho$.
Hence we focus on computing the densities of $z_1$ and $\tilde{z}_1$.
The density can be computed by convolving the probability densities of $H_{\bm w,\gamma}$ and $\ngauss{\beta/\gamma}$ as follows:
\begin{align*}
H_{\bm w,\gamma} * \ngauss{\beta/\gamma}(z_1) &\propto \sum_{k \in \mathbb{Z}} \rho(k/\gamma)\cdot \rho_{\beta/\gamma}(z_1-k/\gamma) \\
&= \rho_{\sqrt{\beta^2+\gamma^2}/\gamma}(z_1) \cdot \sum_{k \in \mathbb{Z}} \rho_{\beta/\sqrt{\beta^2+\gamma^2}}\Big(k / \gamma - \frac{\gamma^2}{\beta^2+\gamma^2}z_1 \Big) \\
&= \rho(\tilde{z}_1) \cdot \sum_{k \in \mathbb{Z}} \rho_{\tilde{\beta}}\Big(k - \tilde{\gamma} \tilde{z}_1\Big)
\; ,
\end{align*}
where the second to last equality follows from Claim~\ref{claim:complete-squares}.
This verifies that the resulting distribution is indeed $H_{\bm w, \tilde{\beta}, \tilde{\gamma}}$.
\end{proof}
Claim~\ref{claim:noiseless-is-sufficient} implies that a homogeneous CLWE distribution with $\beta > 0$ is equivalent to a noiseless homogeneous CLWE distribution with independent Gaussian noise added. We will therefore first analyze the noiseless case, and then derive the covariance of the noisy (i.e., $\beta > 0$) case by adding independent Gaussian noise and rescaling.
\begin{lemma}
\label{lem:covariance-hclwe-noiseless}
Let $\Sigma \succ 0$ be the covariance matrix of the
$n$-dimensional noiseless homogeneous CLWE distribution
$H_{\bm w,\gamma}$ with $\gamma \ge 1$. Then,
\begin{align*}
\Big\|\Sigma - \frac{1}{2\pi} I_n \Big\| \geq \gamma^2 \exp(-\pi\gamma^2) \; ,
\end{align*}
where $\|\cdot\|$ denotes the spectral norm.
\end{lemma}
\begin{proof}
Without loss of generality, let $\bm w = \bm e_1$.
Then $H_{\bm w,\gamma} = D_{L} \times D_{\mathbb{R}^{n-1}}$, where $L$ is the one-dimensional lattice $(1/\gamma)\mathbb{Z}$.
Hence, $\Sigma = \operatorname{diag}(\operatorname*{\mathbb{E}}_{x \sim D_{L}}[x^2], \frac{1}{2\pi},\dots,\frac{1}{2\pi})$, so it suffices to show that
\begin{equation*}
\Big| \operatorname*{\mathbb{E}}_{x \sim D_{L}}[x^2] - \frac{1}{2\pi} \Big| \ge \gamma^2 \exp(-\pi\gamma^2)
\; .
\end{equation*}
Define $g(x) = x^2 \cdot \rho(x)$.
The Fourier transform of $\rho$ is itself; the Fourier transform of $g$ is given by
\begin{align*}
\widehat{g}(y) = \Big(\frac{1}{2\pi}-y^2\Big) \rho(y)
\; .
\end{align*}
By definition and Poisson's summation formula (Lemma~\ref{lem:poisson-sum}), we have
\begin{align}
\operatorname*{\mathbb{E}}_{x \sim D_{L}}[x^2]
&= \frac{g(L)}{\rho(L)} \nonumber \\
&= \frac{\det(L^*)\cdot \widehat{g}(L^*)}{\det(L^*)\cdot \rho(L^*)}
= \frac{\widehat{g}(L^*)}{\rho(L^*)} \nonumber \; ,
\end{align}
where $L^* = ((1/\gamma)\mathbb{Z})^* = \gamma \mathbb{Z}$.
Combining this with the expression for $\widehat{g}$, we have
\begin{align*}
\Bigl|\operatorname*{\mathbb{E}}_{x \sim D_{L}}[x^2]-\frac{1}{2\pi}\Bigr| &= \frac{\sum_{y \in L^*}y^2\rho(y)}{1+\rho(L^*\setminus\{0\})} \\
&\geq \gamma^2 \exp(-\pi \gamma^2) \; ,
\end{align*}
where we use the fact that for $\gamma \ge 1$,
\begin{align*}
\rho(\gamma \mathbb{Z}\setminus\{0\}) \le
\rho(\mathbb{Z}\setminus\{0\}) &< 2\sum_{k=1}^\infty \exp(-\pi k) = \frac{2\exp(-\pi)}{1-\exp(-\pi)} < 1
\; .
\qedhere
\end{align*}
\end{proof}
Combining Claim~\ref{claim:noiseless-is-sufficient} and Lemma~\ref{lem:covariance-hclwe-noiseless}, we get the following corollary for the noisy case.
\begin{corollary}
\label{cor:covariance-hclwe}
Let $\Sigma \succ 0$ be the covariance matrix of
$n$-dimensional homogeneous CLWE distribution
$H_{\bm w,\beta,\gamma}$ with $\gamma \ge 1$ and $\beta > 0$. Then,
\begin{align*}
\Big\|\Sigma - \frac{1}{2\pi} I_n \Big\| \geq \gamma^2 \exp(-\pi(\beta^2+\gamma^2)) \; ,
\end{align*}
where $\|\cdot\|$ denotes the spectral norm.
\end{corollary}
\begin{proof}
Using Claim~\ref{claim:noiseless-is-sufficient}, we can view samples from $H_{\bm w,\beta,\gamma}$ as samples from $H_{\bm w,\gamma'}$ with independent Gaussian noise of width $\beta'/\gamma'$ added and rescaled by $\gamma'/\sqrt{\beta'^2+\gamma'^2}$, where $\beta', \gamma'$ are given by
\begin{align*}
\beta' &= \beta \sqrt{1+(\beta/\gamma)^2} \; , \\
\gamma' &= \sqrt{\beta^2+\gamma^2} \;.
\end{align*}
Let $\Sigma$ be the covariance of $H_{\bm w,\beta,\gamma}$ and let $\Sigma_0$ be the covariance of $H_{\bm w,\gamma'}$. Since the Gaussian noise added to $H_{\bm w,\gamma'}$ is independent and $\beta'/\gamma' = \beta/\gamma$,
\begin{align*}
\Sigma = \frac{1}{1+(\beta/\gamma)^2}\Big(\Sigma_0 + \frac{(\beta/\gamma)^2}{2\pi} I_n\Big) \;.
\end{align*}
Hence,
\begin{align*}
\Big\|\Sigma - \frac{1}{2\pi}I_n\Big\| &= \frac{1}{1+(\beta/\gamma)^2} \Big\|\Big(\Sigma_0 + \frac{(\beta/\gamma)^2}{2\pi}I_n\Big)-\frac{1+(\beta/\gamma)^2}{2\pi}I_n\Big\| \\
&= \frac{1}{1+(\beta/\gamma)^2}\Big\|\Sigma_0 - \frac{1}{2\pi}I_n \Big\| \\
&\geq \gamma^2 \exp(-\pi(\beta^2+\gamma^2)) \; ,
\end{align*}
where the last inequality follows from Lemma~\ref{lem:covariance-hclwe-noiseless}.
\end{proof}
We use the following lemma, which provides an upper bound on the error in estimating the covariance matrix by samples. The sub-gaussian norm of a random variable $Y$ is defined as $\|Y\|_{\psi_2} = \inf\{t > 0 \mid \mathbb{E}[\exp(Y^2/t^2)] \leq 2\}$ and that of an $n$-dimensional random vector $\bm y$ is defined as $\|\bm y\|_{\psi_2} = \sup_{\bm u \in \mathbb{S}^{n-1}}\|\langle \bm y, \bm u \rangle\|_{\psi_2}$.
\begin{lemma}[{\cite[Theorem 4.6.1]{vershynin2018high}}]
\label{lem:covariance-estimate}
Let $A$ be an $m\times n$ matrix whose rows $A_i$ are independent, mean zero, sub-gaussian isotropic random vectors in $\mathbb{R}^n$. Then for any $u \geq 0$ we have
\begin{align*}
\Big\|\frac{1}{m}A^TA-I_n\Big\| \leq K^2 \max(\delta,\delta^2) \; \text{ where } \delta = C\Big(\sqrt{\frac{n}{m}} +\frac{u}{\sqrt{m}}\Big)\;,
\end{align*}
with probability at least $1-2e^{-u^2}$ for some constant $C > 0$. Here, $K = \max_i \|A_i\|_{\psi_2}$.
\end{lemma}
Combining Corollary~\ref{cor:covariance-hclwe} and Lemma~\ref{lem:covariance-estimate}, we obtain the following theorem for distinguishing the homogeneous CLWE distribution from the standard Gaussian.
\begin{theorem}
\label{thm:subexp-hclwe}
Let $\gamma = n^{\varepsilon}$, where $\varepsilon < 1/2$ is a constant, and let $\beta = \beta(n) \in (0,1)$. Then, there exists an $\exp(O(n^{2\varepsilon}))$-time algorithm that solves $\mathrm{hCLWE}_{\beta,\gamma}$.
\end{theorem}
\begin{proof}
Our algorithm takes $m$ samples from the unknown input distribution $P$ and computes the sample covariance matrix $\Sigma_m = (1/m)A^TA$, where $A$'s rows are the samples, and its eigenvalues $\mu_1, \ldots, \mu_n$. Then, it determines whether $P$ is a homogeneous CLWE distribution or not by testing that
\begin{align*}
\Bigl|\mu_i - \frac{1}{2\pi}\Bigr| \le \frac{1}{2}\cdot \gamma^2\exp(-\pi (\beta^2+\gamma^2)) \; \text{ for all } i \in [n]\;.
\end{align*}
The running time of this algorithm is $O(n^2 m) = \exp(O(n^{2\varepsilon}))$. To show correctness, we first consider the case $P = D_{\mathbb{R}^n}$. The standard Gaussian distribution satisfies the conditions of Lemma~\ref{lem:covariance-estimate} (after rescaling by $1/(2\pi)$). Hence, the eigenvalues of $\Sigma_m$ will be within distance $O(\sqrt{n/m})$ from $1/(2\pi)$ with high probability.
Now consider the case $P = H_{\bm w,\beta,\gamma}$. We can assume $\bm w=\bm e_1$ without loss of generality since eigenvalues are invariant under rotations. Denote by $\bm y$ a random vector distributed according to $H_{\bm w,\beta,\gamma}$ and $\sigma^2 = \operatorname*{\mathbb{E}}_{\bm y \sim H_{\bm w,\beta,\gamma}}[y_1^2]$. The covariance of $P$ is given by
\begin{align}
\Sigma = \begin{pmatrix} \sigma^2 & \bm 0 \\ \bm 0 & \frac{1}{2\pi}I_{n-1} \end{pmatrix} \label{eqn:hclwe-covariance-matrix} \; .
\end{align}
Now consider the sample covariance $\Sigma_m$ of $P$ and let $\sigma_m^2 = \bm w^T\Sigma_m\bm w = (1/m)\sum_{i=1}^m A_{i1}^2$. Since the $A_{i1}$'s are sub-gaussian random variables~\cite[Lemma 2.8]{micciancio-peikert2012trapdoor}, $\sigma_m^2-\sigma^2$ is a sum of $m$ independent, mean-zero, sub-exponential random variables. For $m = \omega(n)$, Bernstein's inequality~\cite[Corollary 2.8.3]{vershynin2018high} implies that $|\sigma_m^2-\sigma^2| = O(\sqrt{n/m})$ with high probability. By Corollary~\ref{cor:covariance-hclwe}, we know that
\begin{align*}
\Big|\sigma^2 - \frac{1}{2\pi}\Big| \ge \gamma^2\exp(-\pi(\beta^2+\gamma^2)) \;.
\end{align*}
Hence, if we choose $m = \exp(c\gamma^2)$ with some sufficiently large constant $c$, then $\Sigma_m$ will have an eigenvalue that is noticeably far from $1/(2\pi)$ with high probability.
\end{proof}
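The distinguisher above amounts to an eigenvalue test on the sample covariance, as in the following Python sketch (an illustration only; note that the threshold $\gamma^2\exp(-\pi(\beta^2+\gamma^2))$ underflows double precision well before $\gamma$ reaches $\sqrt{n}$, consistent with the $m = \exp(c\gamma^2)$ sample requirement).
\begin{verbatim}
import numpy as np

def covariance_test(samples, beta, gamma):
    """Decide between H_{w,beta,gamma} and D_{R^n} from an m x n sample array."""
    m, n = samples.shape
    cov = samples.T @ samples / m           # both distributions have mean zero
    mu = np.linalg.eigvalsh(cov)
    gap = 0.5 * gamma**2 * np.exp(-np.pi * (beta**2 + gamma**2))
    if np.all(np.abs(mu - 1 / (2 * np.pi)) <= gap):
        return "standard Gaussian"
    return "homogeneous CLWE"
\end{verbatim}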
\section{SQ Lower Bound for Homogeneous CLWE}
\label{section:sq-lb}
Statistical Query (SQ) algorithms~\cite{kearnsSQ1998} are a restricted class of algorithms that are only allowed to query expectations of functions of the input distribution without directly accessing individual samples. To be more precise, SQ algorithms access the input distribution indirectly via the STAT($\tau$) oracle, which given a query function $f$ and data distribution $D$, returns a value contained in the interval $\mathbb{E}_{x \sim D} [f(x)]+[-\tau, \tau]$ for some precision parameter $\tau$.
In this section, we prove SQ hardness of distinguishing homogeneous CLWE distributions from the standard Gaussian. In particular, we show that SQ algorithms that solve homogeneous CLWE require a super-polynomial number of queries even with super-polynomial precision. This is formalized in Theorem~\ref{thm:hclwe-sq-lb}.
\begin{theorem}
\label{thm:hclwe-sq-lb}
Let $\beta = \beta(n) \in (0,1)$ and $\gamma = \gamma(n) \geq \sqrt{2}$. Then, any (randomized) SQ algorithm with precision $\tau \geq 4 \cdot \exp(-\pi \cdot \gamma^2/4)$ that successfully solves $\mathrm{hCLWE}_{\beta, \gamma}$ with probability $\eta > 1/2$ requires at least $(2\eta-1)\cdot \exp(c n)\cdot \tau^2\beta^2/(4\gamma^2)$ statistical queries of precision $\tau$ for some constant $c > 0$.
\end{theorem}
Note that when $\gamma = \Omega(\sqrt{n})$ and $\gamma/\beta = \operatorname{poly}(n)$, even
exponential precision $\tau = \exp(-O(n))$ results in a query lower bound that grows as $\exp(\tilde{\Omega}(n))$. This establishes an unconditional hardness result for SQ algorithms in the parameter regime $\gamma = \Omega(\sqrt{n})$, which is consistent with our computational hardness result based on worst-case lattice problems. The uniform spacing in homogeneous CLWE distributions gives us tight control over their pairwise correlation (see definition in \eqref{eqn:pairwise-corr}), which leads to a simple proof of the SQ lower bound.
We first provide some necessary background on the SQ framework. We denote by $\mathcal{B}(\mathcal{U},D)$ the decision problem in which the input distribution $P$ either equals $D$ or belongs to $\mathcal{U}$, and the goal of the algorithm is to identify whether $P=D$ or $P \in \mathcal{U}$. For our purposes, $D$ will be the standard Gaussian $D_{\mathbb{R}^n}$ and $\mathcal{U}$ will be a finite set of homogeneous CLWE distributions. Abusing notation, we denote by $D(x)$ the density of $D$. Following \cite{feldman2017planted-clique}, we define the \emph{pairwise correlation} between two distributions $P, Q$ relative to $D$ as
\begin{align}
\chi_D(P,Q) := \mathbb{E}_{\bm x \sim D} \left[\left(\frac{P(\bm x)}{D(\bm x)}-1 \right)\cdot\left(\frac{Q(\bm x)}{D(\bm x)}-1 \right) \right] = \mathbb{E}_{\bm x \sim D} \left[\frac{P(\bm x)Q(\bm x)}{D(\bm x)^2}\right] -1 \label{eqn:pairwise-corr}\; .
\end{align}
Lemma~\ref{lem:decision-lb} below establishes a lower bound on the number of statistical queries required to solve $\mathcal{B}(\mathcal{U},D)$ in terms of pairwise correlation between distributions in $\mathcal{U}$.
\begin{lemma}[{\cite[Lemma 3.10]{feldman2017planted-clique}}]
\label{lem:decision-lb}
Let $D$ be a distribution and $\mathcal{U}$ be a set of distributions both over a domain $X$ such that for any $P, Q \in \mathcal{U}$
\begin{align}
|\chi_D(P,Q)| \leq \begin{cases} \delta &\mbox{if } P = Q \\
\varepsilon &\mbox{otherwise }\; \end{cases} \nonumber\;.
\end{align}
Let $\tau \ge \sqrt{2\varepsilon}$. Then, any (randomized) SQ algorithm that solves $\mathcal{B}(\mathcal{U},D)$ with success probability $\eta > 1/2$ requires at least $(2\eta-1)\cdot|\mathcal{U}|\cdot\tau^2/(2\delta)$ queries to $\operatorname{STAT}(\tau)$.
\end{lemma}
The following proposition establishes a tight upper bound on the pairwise correlation between homogeneous CLWE distributions. To deduce Theorem~\ref{thm:hclwe-sq-lb} from Lemma~\ref{lem:decision-lb} and Proposition~\ref{prop:avg-corr}, we take a set of unit vectors $\mathcal{U}$ such that any two distinct vectors $\bm v, \bm w \in \mathcal{U}$ satisfy $|\langle \bm v, \bm w \rangle| \leq 1/\sqrt{2}$, and identify it with the set of homogeneous CLWE distributions $\{H_{\bm w,\beta,\gamma}\}_{\bm w \in \mathcal{U}}$. A standard probabilistic argument shows that such a $\mathcal{U}$ can be as large as $\exp(\Omega(n))$, which proves Theorem~\ref{thm:hclwe-sq-lb}.
\begin{proposition}
\label{prop:avg-corr}
Let $\bm v, \bm w \in \mathbb{R}^n$ be unit vectors and let $H_{\bm v}, H_{\bm w}$ be $n$-dimensional homogeneous CLWE distributions with parameters $\gamma \geq 1, \beta \in (0,1)$, and hidden direction $\bm v$ and $\bm w$, respectively. Then, for any $\alpha \ge 0$ that satisfies $\gamma^2(1-\alpha^2) \ge 1$,
\begin{align}
|\chi_{D}(H_{\bm v},H_{\bm w})| \leq \begin{cases} 2(\gamma/\beta)^2 &\text{ if } \bm v = \bm w \\ 8\exp(-\pi\cdot \gamma^2(1-\alpha^2)) &\text{ if } |\langle \bm v, \bm w \rangle| \leq \alpha \end{cases} \nonumber \;.
\end{align}
\end{proposition}
\begin{proof}
We will show that computing $\chi_D(H_{\bm v},H_{\bm w})$ reduces to evaluating the Gaussian mass of two lattices $L_1$ and $L_2$ defined below. Then, we will tightly bound the Gaussian mass using Lemma~\ref{lem:poisson-sum} and Lemma~\ref{lem:smoothing-primal}, which will result in upper bounds on $|\chi_D(H_{\bm v},H_{\bm w})|$. We define $L_1$ and $L_2$ by specifying their bases $B_1$ and $B_2$, respectively.
\begin{align*}
B_1 &= \frac{1}{\sqrt{\beta^2+\gamma^2}} \begin{pmatrix} 1 & 0 \\
0 & 1 \end{pmatrix}
\; ,\\
B_2 &= \frac{1}{\sqrt{\beta^2+\gamma^2}}\begin{pmatrix} 1 & 0 \\
-\frac{\alpha \gamma^2}{\zeta\sqrt{\beta^2+\gamma^2}} & \frac{\sqrt{\beta^2+\gamma^2}}{\zeta} \end{pmatrix}
\; ,
\end{align*}
where $\zeta = \sqrt{(\beta^2+\gamma^2) -\alpha^2\gamma^4/(\beta^2+\gamma^2)}$. Then the bases of the dual lattices $L_1^*$ and $L_2^*$ are $B_1^{-T}$ and $B_2^{-T}$, respectively. Note that $\lambda_2(L_1)^2 = 1/(\beta^2+\gamma^2)$ and that the two columns of $B_2$ have the same norm, and so
\begin{align}
\lambda_2(L_2)^2 &\leq \frac{1}{\beta^2+\gamma^2}\cdot \max\Big\{1+\frac{\alpha^2\gamma^4}{\zeta^2(\beta^2+\gamma^2)},\frac{\beta^2+\gamma^2}{\zeta^2}\Big\} \nonumber\\
&= \frac{1}{\zeta^2} \label{eqn:lambda2-general} \\
&\leq \frac{1}{\gamma^2(1-\alpha^2)} \label{eqn:lambda2-simple} \; .
\end{align}
Now define the density ratio $a(t) := H(t)/D(t)$, where $D$ is the standard Gaussian and $H$ is the marginal distribution of homogeneous CLWE with parameters $\beta, \gamma$ along the hidden direction. We immediately obtain
\begin{align}
a(t) &= \frac{1}{Z} \sum_{k \in \mathbb{Z}} \rho_{\beta/\gamma}(t-k/\gamma) \label{eq:sq-density-ratio}
\; ,
\end{align}
where $Z = \int_\mathbb{R} \rho(t) \cdot \sum_{k \in \mathbb{Z}} \rho_{\beta/\gamma}(t-k/\gamma) dt$. By Eq.~\eqref{eqn:hclwe-def-normalization}, $Z$ is given by
\begin{align*}
Z = \frac{\beta}{\sqrt{\beta^2+\gamma^2}} \cdot \rho\Bigg(\frac{1}{\sqrt{\beta^2+\gamma^2}}\mathbb{Z}\Bigg) \; .
\end{align*}
Moreover, we can express $Z^2$ in terms of the Gaussian mass of $L_1$ as
\begin{align*}
Z^2 = \frac{\beta^2}{\beta^2+\gamma^2}\cdot \rho(L_1) \; .
\end{align*}
$\chi_D(H_{\bm v},H_{\bm w})$ can be expressed in terms of $a(t)$ as
\begin{align}
\chi_D(H_{\bm v},H_{\bm w}) = \operatorname*{\mathbb{E}}_{\bm x \sim D}\Big[a(\langle \bm x, \bm w \rangle)\cdot a(\langle \bm x, \bm v \rangle)\Big] - 1 \label{eqn:avg-corr-simplified}\;.
\end{align}
Without loss of generality, assume $\bm v = \bm e_1$ and $\bm w = \alpha \bm e_1 + \xi \bm e_2$, where $\xi = \sqrt{1-\alpha^2}$. We first compute the pairwise correlation for $\bm v \neq \bm w$. For notational convenience, we write $\varepsilon = 8\exp(-\pi\cdot\gamma^2(1-\alpha^2))$.
\begin{align}
\chi_{D}(H_{\bm v},H_{\bm w}) + 1&= \operatorname*{\mathbb{E}}_{\bm x \sim D} \Big[a(x_1)\cdot a(\alpha x_1 + \xi x_2)\Big] \nonumber \\
&= \frac{1}{Z^2}\sum_{k, \ell \in \mathbb{Z}} \int \int \rho_{\beta}(\gamma x_1 - k)\cdot \rho_{\beta}((\gamma \alpha x_1 + \gamma \xi x_2) - \ell)\cdot \rho(x_1) \cdot \rho(x_2) dx_1 dx_2 \nonumber \\
&= \frac{1}{Z^2}\cdot\frac{\beta}{\sqrt{(\gamma\xi)^2+\beta^2}}\sum_{k, \ell \in \mathbb{Z}} \int \rho_{\beta}(\gamma x_1 - k) \cdot \rho(x_1) \cdot \rho_{\sqrt{1+\beta^2/(\gamma\xi)^2}} (\ell/(\gamma\xi)-(\alpha/\xi) x_1) dx_1 \nonumber\\
&= \frac{1}{Z^2}\cdot\frac{\beta}{\sqrt{(\gamma\xi)^2+\beta^2}}\cdot\frac{\beta\sqrt{(\gamma\xi)^2+\beta^2}}{\zeta\sqrt{\beta^2+\gamma^2}} \sum_{k, \ell \in \mathbb{Z}} \rho_{\sqrt{\beta^2+\gamma^2}}(k) \cdot \rho_{\zeta}\Big(\ell - \gamma^2 \alpha \cdot k/(\beta^2+\gamma^2)\Big) \nonumber\\
&= \frac{\sqrt{\beta^2+\gamma^2}}{\zeta}\cdot \frac{\sum_{k, \ell \in \mathbb{Z}} \rho_{\sqrt{\beta^2+\gamma^2}}(k) \cdot \rho_{\zeta}\Big(\ell - \gamma^2 \alpha \cdot k/(\beta^2+\gamma^2)\Big)}{\rho(L_1)} \nonumber\\
&= \frac{\sqrt{\beta^2+\gamma^2}}{\zeta} \cdot \frac{\rho(L_2)}{\rho(L_1)} \nonumber\\
&= \frac{\sqrt{\beta^2+\gamma^2}}{\zeta}\cdot\frac{\det(L_2^*)}{\det(L_1^*)} \cdot \frac{\rho(L_2^*)}{\rho(L_1^*)}\nonumber \\
&= \frac{\rho(L_2^*)}{\rho(L_1^*)} \label{eqn:poisson-distinct-directions}\\
&\in \Big[\frac{1}{1+\varepsilon}, 1+\varepsilon\Big] \nonumber \; .
\end{align}
In \eqref{eqn:poisson-distinct-directions}, we used the Poisson summation formula (Lemma~\ref{lem:poisson-sum}). The last line follows from \eqref{eqn:lambda2-simple} and Lemma~\ref{lem:smoothing-primal}, which implies that for any 2-dimensional lattice $L$ satisfying $\lambda_2(L) \leq 1$,
\begin{align}
\rho(L^*\setminus\{\bm 0\}) \leq 8\exp(-\pi/\lambda_2(L)^2) \label{eqn:2d-gaussian-mass-bound}\; .
\end{align}
Now consider the case $\bm v = \bm w$. Using \eqref{eqn:lambda2-general}, we get an upper bound $\lambda_2(L_2) \leq 1/\beta$ when $\alpha = 1$. It follows that $\lambda_2((\beta/\gamma)L_2) \le 1/\gamma \le 1$. Hence,
\begin{align}
\chi_{D}(H_{\bm v},H_{\bm v}) + 1
&= \frac{\sqrt{\beta^2+\gamma^2}}{\zeta} \cdot \frac{\rho(L_2)}{\rho(L_1)} \nonumber\\
&\leq \frac{\sqrt{\beta^2+\gamma^2}}{\zeta}\cdot \frac{\rho((\beta/\gamma)L_2)}{\rho(L_1)} \nonumber \\
&= \frac{\sqrt{\beta^2+\gamma^2}}{\zeta} \cdot \frac{\det((\gamma/\beta)L_2^*)}{\det(L_1^*)} \cdot \frac{\rho((\gamma/\beta)L_2^*)}{\rho(L_1^*)} \nonumber \\
&= \frac{\gamma^2}{\beta^2}\cdot\frac{\rho((\gamma/\beta)L_2^*)}{\rho(L_1^*)} \label{eqn:poisson-same-direction}\\
&\leq 2(\gamma/\beta)^2 \label{eqn:chi-correlation-ub} \; ,
\end{align}
where we used Lemma~\ref{lem:poisson-sum} in \eqref{eqn:poisson-same-direction}, and in \eqref{eqn:chi-correlation-ub} we used \eqref{eqn:2d-gaussian-mass-bound} together with the fact that $\lambda_2((\beta/\gamma)L_2) \leq 1$ to deduce $\rho((\gamma/\beta)L_2^*\setminus\{\bm 0\}) \leq 1$.
\end{proof}
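For intuition, the quantities appearing in this proof are easy to estimate numerically. The following Python sketch evaluates the density ratio $a(t)$ of Eq.~\eqref{eq:sq-density-ratio} with the sums over $k$ truncated to $|k| \le K$, and estimates $\chi_D(H_{\bm v},H_{\bm w})$ via Eq.~\eqref{eqn:avg-corr-simplified} by Monte Carlo; the estimator has high variance when $\gamma/\beta$ is large, so this is an illustration only.
\begin{verbatim}
import numpy as np

def density_ratio(beta, gamma, K=50):
    """The ratio a(t) = H(t)/D(t), truncated to |k| <= K."""
    ks = np.arange(-K, K + 1)
    gp2 = beta**2 + gamma**2
    Z = beta / np.sqrt(gp2) * np.exp(-np.pi * ks**2 / gp2).sum()
    return lambda t: np.exp(-np.pi * (gamma / beta) ** 2
                            * (t[:, None] - ks / gamma) ** 2).sum(axis=1) / Z

def chi_correlation(a, v, w, num=10**6, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of chi_D(H_v, H_w)."""
    x = rng.normal(scale=1 / np.sqrt(2 * np.pi), size=(num, len(v)))
    return np.mean(a(x @ v) * a(x @ w)) - 1.0
\end{verbatim}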
\section{Extension of Homogeneous CLWE to \texorpdfstring{$m \ge 1$}{m>=1} Hidden Directions}
\label{section:k-hc}
In this section, we generalize the hardness result to the setting where the homogeneous CLWE distribution has $m \ge 1$ hidden directions.
The proof is a relatively standard hybrid argument.
\begin{definition}[$m$-Homogeneous CLWE distribution]
For $0 \le m \le n$, matrix $\bm W \in \mathbb{R}^{n \times m}$ with orthonormal columns $\bm w_1,\ldots,\bm w_m$, and $\beta, \gamma > 0$, define the \emph{$m$-homogeneous CLWE distribution} $H_{\bm W, \beta, \gamma}$ over $\mathbb{R}^n$ to have density at $\bm y$ proportional to
\begin{align*}
\rho(\bm y) \cdot \prod_{i = 1}^m \sum_{k \in \mathbb{Z}} \rho_\beta(k-\gamma\langle \bm y, \bm w_i \rangle)
\; .
\end{align*}
\end{definition}
Note that the $0$-homogeneous CLWE distribution is just $D_{\mathbb{R}^n}$ regardless of $\beta$ and $\gamma$.
\begin{definition} For parameters $\beta, \gamma > 0$ and $1 \le m \le n$, the average-case decision problem $\mathrm{hCLWE}_{\beta, \gamma}^{(m)}$ is to distinguish the following two distributions over $\mathbb{R}^n$: (1) the $m$-homogeneous CLWE distribution $H_{\bm W, \beta, \gamma}$ for some matrix $\bm W \in \mathbb{R}^{n \times m}$ (which is fixed for all samples) with orthonormal columns chosen uniformly from the set of all such matrices, or (2) $D_{\mathbb{R}^n}$.
\end{definition}
\begin{lemma}
\label{lem:hc-to-k-hc}
For any $\beta, \gamma > 0$ and positive integer $m = m(n)$ such that $m \le n$ and $n - m = \Omega(n^c)$ for some constant $c > 0$,
if there exists an efficient algorithm that solves $\mathrm{hCLWE}_{\beta,\gamma}^{(m)}$ with non-negligible advantage,
then there exists an efficient algorithm that solves $\mathrm{hCLWE}_{\beta,\gamma}$ with non-negligible advantage.
\end{lemma}
\begin{proof}
Suppose $\mathcal{A}$ is an efficient algorithm that solves $\mathrm{hCLWE}_{\beta,\gamma}^{(m)}$ with non-negligible advantage
in dimension $n$.
Then consider the following algorithm $\mathcal{B}$ that uses $\mathcal{A}$ as an oracle and solves $\mathrm{hCLWE}_{\beta,\gamma}$ in dimension $n' = n-m+1$.
\begin{enumerate}
\item Input: $n'$-dimensional samples, drawn from either $\mathrm{hCLWE}_{\beta,\gamma}$ or $D_{\mathbb{R}^{n'}}$;
\item Choose $0 \le i \le m-1$ uniformly at random;
\item Append $m-1 = n-n'$ coordinates to the given samples, where the first $i$ appended coordinates are drawn from $H_{\bm I_i, \beta, \gamma}$ (with $\bm I_i$ denoting the rank-$i$ identity matrix) and the rest of the coordinates are drawn from $D_{\mathbb{R}^{m - i -1}}$;
\item Rotate the augmented samples using a uniformly random rotation from the orthogonal group $O(n)$;
\item Call $\mathcal{A}$ with the samples and output the result.
\end{enumerate}
As $n = O({n'}^{1/c})$, $\mathcal{B}$ is an efficient algorithm.
Moreover, the samples passed to $\mathcal{A}$ are effectively drawn from either $\mathrm{hCLWE}_{\beta,\gamma}^{(i+1)}$ or $\mathrm{hCLWE}_{\beta,\gamma}^{(i)}$.
Therefore, the advantage of $\mathcal{B}$ is at least a $1/m$ fraction of the advantage of $\mathcal{A}$, and hence non-negligible (in terms of $n$, and thus also in terms of $n'$) as well.
\end{proof}
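The sample transformation performed by $\mathcal{B}$ (steps 2--4) is illustrated by the following Python sketch, in which the sampler \texttt{hclwe\_planted} for $H_{\bm I_i,\beta,\gamma}$ is an assumed input and the Haar-random rotation is realized by a QR decomposition with sign correction.
\begin{verbatim}
import numpy as np

def lift_samples(samples, m, hclwe_planted, rng=np.random.default_rng(0)):
    """Lift n'-dimensional samples to n = n' + m - 1 dimensions (steps 2-4).
    hclwe_planted(i, num) must return num points of H_{I_i,beta,gamma} in R^i
    (an empty array for i = 0)."""
    num, n_prime = samples.shape
    i = int(rng.integers(m))                                   # step 2
    planted = hclwe_planted(i, num).reshape(num, i)
    gauss = rng.normal(scale=1 / np.sqrt(2 * np.pi), size=(num, m - 1 - i))
    aug = np.hstack([samples, planted, gauss])                 # step 3
    n = aug.shape[1]
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    q = q * np.sign(np.diag(r))                                # Haar-random rotation
    return aug @ q.T                                           # step 4
\end{verbatim}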
Combining Corollary~\ref{cor:hc} and Lemma~\ref{lem:hc-to-k-hc}, we obtain the following corollary.
\begin{corollary}
For any $\beta = \beta(n) \in (0,1)$ and $\gamma = \gamma(n) \geq 2\sqrt{n}$ such that $\gamma/\beta$ is polynomially bounded,
and positive integer $m = m(n)$ such that $m \le n$ and $n - m = \Omega(n^c)$ for some constant $c > 0$,
there is a polynomial-time quantum reduction from $\mathrm{DGS}_{2\sqrt{2 n}\eta_\varepsilon(L)/\beta}$ to $\mathrm{hCLWE}_{\beta,\gamma}^{(m)}$.
\end{corollary}
\printbibliography
\end{document}
\section{Introduction}
\label{section:intro}
The Learning with Errors (LWE) problem has served as a foundation for many lattice-based cryptographic schemes~\cite{peikert2015decade}. Informally, LWE asks one to solve noisy random linear equations. To be more precise,
the goal is to find a secret vector $\bm s \in \mathbb{Z}_q^n$
given polynomially many samples of the form $(\bm a_i, b_i)$, where $\bm a_i \in \mathbb{Z}_q^n$ is uniformly chosen and $b_i \approx \langle \bm a_i, \bm s \rangle \pmod{q}$. In the absence of noise, LWE can be efficiently solved using Gaussian elimination. However, LWE is known to be hard assuming hardness of worst-case lattice problems such as Gap Shortest Vector Problem (GapSVP) or Shortest Independent Vectors Problem (SIVP) in the sense that there is a polynomial-time quantum reduction from these worst-case lattice problems to LWE~\cite{regev2005lwe}.
In this work, we introduce a new problem, called Continuous LWE (CLWE). As the name suggests, this problem can be seen as a continuous analogue of LWE, where equations in $\mathbb{Z}_q^n$ are replaced with vectors in $\mathbb{R}^n$ (see Figure~\ref{fig:plotinhom}).
More precisely, CLWE considers noisy inner products $z_i \approx \gamma \langle \bm y_i, \bm w \rangle \pmod{1}$, where the noise is drawn from a Gaussian distribution of width $\beta > 0$, $\gamma > 0$ is a problem parameter, $\bm w \in \mathbb{R}^{n}$ is a secret unit vector, and the public vectors $\bm y_i \in \mathbb{R}^n$ are drawn from the standard Gaussian. Given polynomially many samples of the form $(\bm y_i, z_i)$, CLWE asks one to find the secret direction $\bm w$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{plots/inhomogeneous_clwe_2d.png}
\caption{Scatter plot of two-dimensional CLWE samples. Color indicates the last ($z$) coordinate.}
\label{fig:plotinhom}
\end{figure}
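For illustration, CLWE samples are straightforward to generate once the convention for widths is fixed: throughout, a width-$s$ Gaussian has density proportional to $\exp(-\pi x^2/s^2)$, i.e., standard deviation $s/\sqrt{2\pi}$. The following Python sketch (not part of any reduction) produces samples $(\bm y_i, z_i)$ for a given hidden unit direction $\bm w$.
\begin{verbatim}
import numpy as np

def clwe_samples(w, beta, gamma, num, rng=np.random.default_rng(0)):
    """Generate num CLWE samples (y, z) for a unit hidden direction w."""
    w = np.asarray(w, dtype=float)
    ys = rng.normal(scale=1 / np.sqrt(2 * np.pi), size=(num, len(w)))  # y ~ D_{R^n}
    noise = rng.normal(scale=beta / np.sqrt(2 * np.pi), size=num)      # width beta
    zs = (gamma * ys @ w + noise) % 1.0                                # on the torus
    return ys, zs
\end{verbatim}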
One can also consider a closely related homogeneous variant of CLWE (see Figure~\ref{fig:plothom}). This distribution, which we call homogeneous CLWE, can be obtained by essentially conditioning on $z_i \approx 0$. It is a mixture of ``Gaussian pancakes'' of width $\approx \beta/\gamma$ in the secret direction and width $1$ in the
remaining $n-1$ directions. The Gaussian components are equally spaced, with a separation of $\approx 1/\gamma$. (See Definition~\ref{def:hclwe} for the precise statement.)
\begin{figure}[ht]
\centering
\begin{minipage}[t]{0.36\textwidth}
\includegraphics[width=\textwidth]{plots/homogeneous_clwe_2d_sparse.png}
\end{minipage}
\begin{minipage}[t]{0.54\textwidth}
\includegraphics[width=\textwidth]{plots/homogeneous_clwe_1d.png}
\end{minipage}\hspace{0.05\textwidth}
\caption{Left: Scatter plot of two-dimensional homogeneous CLWE samples.
Right: Unnormalized probability densities of homogeneous CLWE (blue) and Gaussian (orange) along the hidden direction.}
\label{fig:plothom}
\end{figure}
Our main result is that CLWE (and homogeneous CLWE) enjoy hardness guarantees similar to those of LWE.
\begin{theorem}[Informal]
\label{thm:main-informal}
Let $n$ be an integer, $\beta = \beta(n) \in (0,1)$ and $\gamma = \gamma(n) \geq 2\sqrt{n}$ such that the ratio $\gamma/\beta$ is polynomially bounded.
If there exists an efficient algorithm that solves $\mathrm{CLWE}_{\beta, \gamma}$, then there exists an efficient quantum algorithm that approximates worst-case lattice problems to within polynomial factors.
\end{theorem}
Although we defined CLWE above as a search problem of finding the hidden direction,
Theorem~\ref{thm:main-informal} is actually stronger, and applies to the decision variant of CLWE in which the goal is to distinguish CLWE samples $(\bm y_i, z_i)$ from samples where the noisy inner product $z_i$ is replaced by a random number distributed uniformly on $[0,1)$ (and similarly for the homogeneous variant).
\paragraph{Motivation: Lattice algorithms.}
Our original motivation to consider CLWE is as a possible approach to finding quantum algorithms for lattice problems. Indeed, the reduction above (just like the reduction to LWE~\cite{regev2005lwe}), can be interpreted in an algorithmic way: in order to quantumly solve worst-case lattice problems, ``all'' we have to do is solve CLWE (classically or quantumly). The elegant geometric nature of CLWE opens up a new toolbox of techniques that can potentially be used for solving lattice problems,
such as sum-of-squares-based techniques and algorithms for learning mixtures of Gaussians~\cite{moitrav2010mixture}.
Indeed, some recent algorithms (e.g.,~\cite{klivanskothari2019list-dec,raghavendrayau2020list-dec}) solve problems that include CLWE or homogeneous CLWE as a special case (or nearly so), yet as far as we can tell, none of the known results leads to an improvement over the state of the art in lattice algorithms.
To demonstrate the usefulness of CLWE as an algorithmic target, we show in Section~\ref{section:subexp} a simple moment-based algorithm that solves CLWE in time $\exp(\gamma^2)$.
Even though this does not imply subexponential time algorithms for lattice problems (since Theorem~\ref{thm:main-informal} requires $\gamma > \sqrt{n}$), it is interesting to contrast this algorithm with an analogous algorithm for LWE by Arora and Ge~\cite{arora2011subexplwe}. The two algorithms have the same running time (where $\gamma$ is replaced by the absolute noise $\alpha q$ in the LWE samples), and both rely on related techniques (moments in our case, powering in Arora-Ge's), yet the Arora-Ge algorithm is technically more involved than our rather trivial algorithm (which just amounts to computing the empirical covariance matrix). We interpret this as an encouraging sign that CLWE might be a better algorithmic target than LWE.
\paragraph{Motivation: Hardness of learning Gaussian mixtures.}
Learning mixtures of Gaussians is a classical problem in machine learning~\cite{pearson1984gmm}. Efficient algorithms are known for the task if the Gaussian components are guaranteed to be sufficiently well separated (e.g.,~\cite{dasgupta1999gmm,vempala-wang2002spectralgmm,arora-kannan2005,dasgupta-schulman2007em,brubaker-vempala2008pca,regev2017gmm,hopkins2018gmm,kothari-steinhardt2018clustering,diakonikolas2018spherical-gmm}).
Without such strong separation requirements, it is known that efficiently recovering the individual components of a mixture (technically known as ``parameter estimation'') is in general impossible~\cite{moitrav2010mixture}; intuitively, this exponential information-theoretic lower bound holds because the Gaussian components ``blur into each other'', despite being mildly separated pairwise.
This leads to the question of whether there exists an efficient algorithm that can learn mixtures of Gaussians without strong separation requirement, not in the above strong parameter estimation sense (which is impossible), but rather in the much weaker density estimation sense, where the goal is merely to output an approximation of the given distribution's density function. See~\cite{diakonikolas2016structured,moitra2018} for the precise statement and~\cite{diakonikolas2017sqgaussian} where a super-polynomial lower bound for density estimation is shown in the restricted statistical query (SQ) model~\cite{kearnsSQ1998,feldman2017planted-clique}. Our work provides a negative answer to this open question,
showing that learning Gaussian mixtures is computationally difficult even if the goal is only to output an estimate of the density (see Proposition~\ref{prop:mixture-learning-hardness}). It is worth noting that our hard instance has almost non-overlapping components, i.e., the pairwise statistical distance between distinct Gaussian components is essentially 1, a property shared by the SQ-hard instance of~\cite{diakonikolas2017sqgaussian}.
\paragraph{Motivation: Robust machine learning.}
Variants of CLWE have already been analyzed in the context of robust machine learning~\cite{bubeck2019}, in which the goal is to learn a classifier that is robust against adversarial examples at test time~\cite{szegedy2014adversarial-examples}. In particular, Bubeck et al.~\cite{bubeck2019} use the SQ-hard Gaussian mixture instance of Diakonikolas et al.~\cite{diakonikolas2017sqgaussian} to establish SQ lower bounds for
learning a certain binary classification task, which can be seen as a variant of homogeneous CLWE. The key difference between our distribution and that of~\cite{diakonikolas2017sqgaussian,bubeck2019} is that our distribution has equal spacing between the ``layers'' along the hidden direction, whereas their ``layers'' are centered around roots of Hermite polynomials (the goal being to exactly match the lower moments of the standard Gaussian). The connection to lattices, which we make for the first time here, answers an open question by Bubeck et al.~\cite{bubeck2019}.
As additional evidence of the similarity between homogeneous CLWE and the distribution considered in~\cite{diakonikolas2017sqgaussian, bubeck2019}, we prove a super-polynomial SQ lower bound for homogeneous CLWE (even with super-polynomial precision). For $\gamma=\Omega(\sqrt{n})$, this result translates to an exponential SQ lower bound for exponential precision, which corroborates our computational hardness result based on worst-case lattice problems. The uniform spacing in the hidden structure of homogeneous CLWE leads to a simplified proof of the SQ lower bound compared to previous works, which considered non-uniform spacing between the Gaussian components. Note that computational hardness does not automatically imply SQ hardness as query functions in the SQ framework need not be efficiently computable.
Bubeck et al.~\cite{bubeck2019} were also interested in a variant of the learning problem where instead of \emph{one} hidden direction, there are $m \ge 1$ orthogonal hidden directions. So, for instance, the ``Gaussian pancakes'' in the $m=1$ case above are replaced with ``Gaussian baguettes'' in the case $m=2$, forming an orthogonal grid in the secret two-dimensional space. As we show in Section~\ref{section:k-hc}, our computational hardness easily extends to the $m>1$ case using a relatively standard hybrid argument. The same is true for the SQ lower bound we show in Section~\ref{section:sq-lb} (as well as for the SQ lower bound in~\cite{diakonikolas2017sqgaussian,bubeck2019}; the proof is nearly identical). The advantage of the $m>1$ variant is that the distance between the Gaussian mixture components increases from $\approx 1/\gamma$ (which can be as high as $\approx 1/\sqrt{n}$ if we want our hardness to hold) to $\approx \sqrt{m}/\gamma$ (which can be as high as $\approx 1$ by taking $m \approx n$). This is a desirable feature for showing hardness of robust machine learning.
\paragraph{Motivation: Cryptographic applications.}
Given the wide range of cryptographic applications of LWE~\cite{peikert2015decade}, it is only natural to expect that CLWE would also be useful for some cryptographic tasks, a question we leave for future work. CLWE's clean and highly symmetric definition should make it a better fit for some applications; its continuous nature, however, might require a discretization step due to efficiency considerations.
\paragraph{Analogy with LWE.}
As argued above, there are apparently nontrivial differences between CLWE and LWE, especially in terms of possible algorithmic approaches. However, there is undoubtedly also strong similarity between the two.
In terms of parameters, the $\gamma$ parameter in CLWE (density of layers) plays the role of the absolute noise level $\alpha q$ in LWE. And the $\beta$ parameter in CLWE plays the role of the relative noise parameter $\alpha$ in LWE. Using this correspondence between the parameters, the hardness proved for CLWE in Theorem~\ref{thm:main-informal} is essentially identical to the one proved for LWE in~\cite{regev2005lwe}. The similarity extends even to the noiseless case, where $\alpha = 0$ in LWE and $\beta = 0$ in CLWE. In particular, in Section~\ref{section:lll-clwe} we present an efficient LLL-based algorithm for solving noiseless CLWE, which is analogous to Gaussian elimination for noiseless LWE.
\paragraph{Comparison with previous work.}
The CLWE problem is related to the hard problem introduced in the seminal work of Ajtai and Dwork~\cite{ajtai97adcrypto}. Specifically, both problems involve finding a hidden direction in samples from a continuous distribution. One crucial difference, though, is in the density of the layers. Whereas in our hardness result the separation between the layers can be as large as $\approx 1/\sqrt{n}$, in Ajtai and Dwork the separation is exponentially small. This larger separation in CLWE is more than just a technicality. First, it is the reason we need to employ the quantum machinery from the LWE hardness proof~\cite{regev2005lwe}. Second, it is nearly tight, as demonstrated by the algorithm in Section~\ref{section:subexp}. Third, it is necessary for applications such as hardness of learning Gaussian mixtures. Finally, this larger separation is analogous to the main difference between LWE and earlier work~\cite{regev2004harmonic}, and is what leads to the relative efficiency of LWE-based cryptography.
\paragraph{Acknowledgements.}
We thank Aravindan Vijayaraghavan and Ilias Diakonikolas for useful comments.
\subsection{Technical Overview}
\label{section:technical-overview}
Broadly speaking, our proof follows the iterative structure of the original LWE hardness proof~\cite{regev2005lwe} (in fact, one might say most of the ingredients for CLWE were already present in that 2005 paper!).
We also make use of some recent techniques, such as a way to reduce to decision problems directly~\cite{peikert2017ringlwe}.
In more detail, as in previous work,
our main theorem boils down to solving the following problem: we are given a $\mathrm{CLWE}_{\beta,\gamma}$ oracle and polynomially many samples from $D_{L,r}$, the
discrete Gaussian distribution on $L$ of width $r$,%
\footnote{We actually require samples from $D_{L,r_i}$ for polynomially many $r_i$'s satisfying $r_i \geq r$, see Section~\ref{section:clwe-hardness}.} and our goal is to solve $\mathrm{BDD}_{L^*,\gamma/r}$, which is the problem of finding the closest vector in the dual lattice $L^*$ given a vector $\bm t$ that is within distance $\gamma/r$ of $L^*$. (It is known that $\mathrm{BDD}_{L^*,1/r}$ can be efficiently solved even if all we are given is polynomially many samples from $D_{L,r}$, without any need for an oracle~\cite{aharonov2005conp}; the point here is that the CLWE oracle allows us to extend the decoding radius from $1/r$ to $\gamma/r$.)
Once this is established, the main theorem follows from previous work~\cite{peikert2017ringlwe,regev2005lwe}. Very briefly, the resulting BDD solution is used in a quantum procedure to produce discrete Gaussian samples that are shorter than the ones we started with. This process is then repeated, until eventually we end up with the desired short discrete Gaussian samples. We remark that this process incurs a $\sqrt{n}$ loss in the Gaussian width (Lemma~\ref{lem:reg05quantumstep}), and the reason we require $\gamma \ge 2\sqrt{n}$ is to overcome this loss.
We now explain how we solve the above problem. For simplicity, assume for now that we have a \emph{search} CLWE oracle that recovers the secret exactly. (Our actual reduction is stronger and only requires a \emph{decision} CLWE oracle.) Let the given BDD instance be $\bm u + \bm w$, where $\bm u \in L^*$ and $\|\bm w\| = \gamma/r$. We will consider the general case of $\|\bm w\| \le \gamma/r$ in Section~\ref{section:clwe-hardness}.
The main idea is to generate CLWE samples whose secret is essentially the desired BDD solution $\bm w$, which would then complete the proof. To begin, take a sample from the discrete Gaussian distribution $\bm y \sim D_{L,r}$ (as provided to us) and consider the inner product
\begin{align*}
\langle \bm y, \bm u + \bm w \rangle = \langle \bm y, \bm w \rangle \pmod 1 \; ,
\end{align*}
where the equality holds since $\langle \bm y, \bm u \rangle \in \mathbb{Z}$ by definition.
The $(n+1)$-dimensional vector $(\bm y, \langle \bm y, \bm w \rangle \bmod 1)$ is almost a CLWE sample (with parameter $\gamma$ since $\gamma = r\|\bm w\|$ is the width of $\langle \bm y, \bm w \rangle$) --- the only problem is that in CLWE the $\bm y$'s need to be distributed according to a standard Gaussian, but here the $\bm y$'s are distributed according to a \emph{discrete} Gaussian over $L$. To complete the transformation into bona fide CLWE samples, we add Gaussian noise of appropriate variance to both $\bm y$ and $\langle \bm y, \bm w \rangle$ (and rescale $\bm y$ so that it is distributed according to the standard Gaussian distribution). We then apply the search $\mathrm{CLWE}_{\beta,\gamma}$ oracle on these CLWE samples to recover $\bm w$ and thereby solve $\mathrm{BDD}_{L^*,\gamma/r}$.
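To make this concrete, the following is a minimal numerical sketch (Python with numpy; not part of the reduction) for the toy case $L = \mathbb{Z}^n$, where $D_{L,r}$ is a product of one-dimensional discrete Gaussians. All parameter values and helper names (such as \texttt{sample\_dz}) are our own illustrative choices; widths follow this paper's convention that width $s$ corresponds to standard deviation $s/\sqrt{2\pi}$, and the noise widths are chosen as in the proof of Lemma~\ref{lem:bdddgs-to-clwe}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_dz(r, size):
    # Exact sampler for the 1-D discrete Gaussian D_{Z,r} (mass proportional
    # to exp(-pi*k^2/r^2)), truncated to |k| <= 12r; the lost mass is negligible.
    ks = np.arange(-int(12 * r), int(12 * r) + 1)
    p = np.exp(-np.pi * (ks / r) ** 2)
    return rng.choice(ks, size=size, p=p / p.sum())

# Toy parameters (L = Z^n, so L* = Z^n and D_{L,r} is coordinatewise).
n, r, beta, gamma = 8, 10.0, 0.1, 2.8
q = gamma / beta
s1, s2 = r / (np.sqrt(2) * q), beta / np.sqrt(2)   # noise widths
t = np.hypot(r, s1)
w = rng.normal(size=n)
w *= (gamma * t / r**2) / np.linalg.norm(w)        # BDD offset: gamma = ||w|| r^2 / t

def clwe_sample():
    x = sample_dz(r, n)                            # x ~ D_{Z^n,r}
    z = np.dot(x, w) % 1.0                         # <x, u + w> = <x, w> (mod 1), u in L*
    y = x + rng.normal(scale=s1 / np.sqrt(2 * np.pi), size=n)  # smooth out the lattice
    z = (z + rng.normal(scale=s2 / np.sqrt(2 * np.pi))) % 1.0  # noise on the label
    return y / t, z                                # rescale to unit width
\end{verbatim}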
As mentioned previously, our main result actually uses a \emph{decision} CLWE oracle, which does not recover the secret $\bm w$ immediately. Working with this decision oracle requires some care. To that end, our proof will incorporate the ``oracle hidden center'' finding procedure from~\cite{peikert2017ringlwe}, the details of which can be found in Section~\ref{section:solve-bdd-with-clwe}.
\section{Preliminaries}
\begin{definition}[Statistical distance] For two distributions $\mathcal{D}_1$ and $\mathcal{D}_2$ over $\mathbb{R}^n$ with density functions $\phi_1$ and $\phi_2$, respectively, we define the \emph{statistical distance} between them as
\begin{align*}
\Delta(\mathcal{D}_1,\mathcal{D}_2) = \frac{1}{2}\int_{\mathbb{R}^n}|\phi_1(\bm x)-\phi_2(\bm x)|d\bm x
\; .
\end{align*}
\end{definition}
We denote the statistical distance by $\Delta(\phi_1,\phi_2)$ if only the density functions are specified.
Moreover, for random variables $X_1 \sim \mathcal{D}_1$ and $X_2 \sim \mathcal{D}_2$, we also denote $\Delta(X_1,X_2) = \Delta(\mathcal{D}_1,\mathcal{D}_2)$. One important fact is that applying a (possibly randomized) function cannot increase statistical distance, i.e., for random variables $X, Y$ and function $f$,
\begin{align*}
\Delta(f(X),f(Y)) \leq \Delta(X,Y)
\; .
\end{align*}
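As a quick numerical illustration (not needed anywhere in the proofs), the following Python snippet estimates the statistical distance between two one-dimensional Gaussians on a grid and checks that folding both densities through the map $f(x) = x \bmod 1$ does not increase it; the grid resolution and the means are arbitrary choices of ours.
\begin{verbatim}
import numpy as np

xs = np.linspace(-8.0, 8.0, 160000, endpoint=False)
dx = xs[1] - xs[0]
phi = lambda mu: np.exp(-np.pi * (xs - mu) ** 2)      # width-1 Gaussian densities
d_before = 0.5 * np.abs(phi(0.0) - phi(0.3)).sum() * dx
fold = lambda p: p.reshape(-1, 10000).sum(axis=0)     # fold the 16 unit periods
d_after = 0.5 * np.abs(fold(phi(0.0)) - fold(phi(0.3))).sum() * dx
assert d_after <= d_before + 1e-9                     # data-processing inequality
\end{verbatim}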
We define the \emph{advantage} of an algorithm $\mathcal{A}$ solving the decision problem of distinguishing two distributions $\mathcal{D}_n$ and $\mathcal{D}'_n$ parameterized by $n$ as
\begin{align*}
\Bigl| \Pr_{x \sim \mathcal{D}_n}[\mathcal{A}(x) = \mathrm{YES}] - \Pr_{x \sim \mathcal{D}'_n}[\mathcal{A}(x) = \mathrm{YES}] \Bigr|
\; .
\end{align*}
Moreover, we define the \emph{advantage} of an algorithm $\mathcal{A}$ solving the \emph{average-case} decision problem of distinguishing two distributions $\mathcal{D}_{n, s}$ and $\mathcal{D}'_{n, s}$ parameterized by $n$ and $s$, where $s$ is equipped with some distribution $\mathcal{S}_n$, as
\begin{align*}
\Bigl| \Pr_{s \sim \mathcal{S}_n}[\mathcal{A}^{\mathcal{B}_{n, s}}(1^n) = \mathrm{YES}] - \Pr_{s \sim \mathcal{S}_n}[\mathcal{A}^{\mathcal{B}'_{n, s}}(1^n) = \mathrm{YES}] \Bigr|
\; ,
\end{align*}
where $\mathcal{B}_{n, s}$ and $\mathcal{B}'_{n, s}$ are respectively the sampling oracles of $\mathcal{D}_{n, s}$ and $\mathcal{D}'_{n, s}$.
We say that an algorithm $\mathcal{A}$ has \emph{non-negligible advantage} if its advantage is a non-negligible function in $n$, i.e., a function in $\Omega(n^{-c})$ for some constant $c > 0$.
\subsection{Lattices and Gaussians}
\paragraph{Lattices.}
A \emph{lattice} is a discrete additive subgroup of $\mathbb{R}^n$.
Unless specified otherwise, we assume all lattices are full rank, i.e., their linear span is $\mathbb{R}^n$.
For an $n$-dimensional lattice $L$, a set of linearly independent vectors $\{\bm b_1, \dots, \bm b_n\}$ is called a \emph{basis} of $L$ if $L$ is generated by the set, i.e., $L = B \mathbb{Z}^n$ where $B = [\bm b_1, \dots, \bm b_n]$.
The \emph{determinant} of a lattice $L$ with basis $B$ is defined as $\det(L) = |\det(B)|$; it is easy to verify that the determinant does not depend on the choice of basis.
The \emph{dual lattice} of a lattice $L$, denoted by $L^*$, is defined as
\begin{align*}
L^* = \{ \bm y \in \mathbb{R}^n \mid \langle \bm x, \bm y \rangle \in \mathbb{Z} \text{ for all } \bm x \in L\}
\; .
\end{align*}
If $B$ is a basis of $L$ then $(B^T)^{-1}$ is a basis of $L^*$; in particular, $\det(L^*) = \det(L)^{-1}$.
\begin{definition} For an $n$-dimensional lattice $L$ and $1 \le i \le n$, the \emph{$i$-th successive minimum} of $L$ is defined as
\begin{align*}
\lambda_i(L) = \inf \{r \mid \dim(\operatorname{span}(L \cap \overline{B}(\bm 0,r))) \geq i\}
\; ,
\end{align*}
where $\overline{B}(\bm 0,r)$ is the closed ball of radius $r$ centered at the origin.
\end{definition}
We define the function $\rho_s(\bm x) = \exp(-\pi\|\bm x/s\|^2)$. Note that $\rho_s(\bm x) / s^n$, where $n$ is the dimension of $\bm x$, is the probability density of the Gaussian distribution with covariance $s^2/(2\pi)\cdot I_n$.
\begin{definition}[Discrete Gaussian] For lattice $L \subset \mathbb{R}^n$, vector $\bm y \in \mathbb{R}^n$, and parameter $r > 0$, the \emph{discrete Gaussian distribution} $D_{\bm y+L,r}$ on coset $\bm y+L$ with width $r$ is defined to have support $\bm y+L$ and probability mass function proportional to $\rho_r$.
\end{definition}
For $\bm y = \bm 0$, we simply denote the discrete Gaussian distribution on lattice $L$ with width $r$ by $D_{L,r}$.
Abusing notation, we denote the $n$-dimensional \emph{continuous Gaussian distribution} with zero mean and isotropic variance $r^2/(2\pi)$ as $D_{\mathbb{R}^n,r}$.
Finally, we omit the subscript $r$ when $r = 1$ and refer to $D_{\mathbb{R}^n}$ as the \emph{standard} Gaussian (despite it having covariance $I_n/(2\pi)$).
\begin{claim}[{\cite[Fact 2.1]{peikert2010sampler}}]
\label{claim:complete-squares}
For any $r_1, r_2 > 0$ and vectors $\bm x, \bm c_1, \bm c_2 \in \mathbb{R}^n$,
let $r_0 = \sqrt{r_1^2 + r_2^2}$, $r_3 = r_1 r_2 / r_0$, and $\bm c_3 = (r_3/r_1)^2 \bm c_1 + (r_3/r_2)^2 \bm c_2$.
Then
\begin{align*}
\rho_{r_1}(\bm x-\bm c_1) \cdot \rho_{r_2}(\bm x - \bm c_2) = \rho_{r_0}(\bm c_1 - \bm c_2) \cdot \rho_{r_3}(\bm x-\bm c_3)
\; .
\end{align*}
\end{claim}
\paragraph{Fourier analysis.} We briefly review basic tools of Fourier analysis required later on. The Fourier transform of a function $f: \mathbb{R}^n \to \mathbb{C}$ is defined to be
\begin{align*}
\hat{f}(\bm w) = \int_{\mathbb{R}^n} f(\bm x)e^{-2\pi i \langle \bm x, \bm w \rangle}d\bm x
\; .
\end{align*}
An elementary property of the Fourier transform is that if $f(\bm w) = g(\bm w+\bm v)$ for some $\bm v \in \mathbb{R}^n$, then $\hat{f}(\bm w) = e^{2\pi i \langle \bm v, \bm w \rangle}\hat{g}(\bm w)$. Another important fact is that the Fourier transform of a Gaussian is also a Gaussian, i.e., $\hat{\rho} = \rho$; more generally, $\hat{\rho}_s = s^n \rho_{1/s}$.
We also exploit the Poisson summation formula stated below. Note that for any function $f$ and any discrete set $A$, we write $f(A) = \sum_{\bm x \in A} f(\bm x)$.
\begin{lemma}[Poisson summation formula] For any lattice $L$ and any function $f$,\footnote{To be precise, $f$ needs to satisfy some niceness conditions; this will always hold in our applications.}
\label{lem:poisson-sum}
\begin{align*}
f(L) = \det(L^*)\cdot \widehat{f}(L^*)
\; .
\end{align*}
\end{lemma}
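As a sanity check (illustrative only), the formula can be verified numerically for the one-dimensional lattice $L = a\mathbb{Z}$, for which $L^* = (1/a)\mathbb{Z}$ and $\det(L^*) = 1/a$, with $f = \rho_s$ and $\hat{\rho}_s = s\rho_{1/s}$; the values of $a$ and $s$ and the truncation range are arbitrary.
\begin{verbatim}
import numpy as np

a, s = 0.7, 1.3
k = np.arange(-200, 201)
lhs = np.exp(-np.pi * (a * k / s) ** 2).sum()            # f(L) = rho_s(aZ)
rhs = (s / a) * np.exp(-np.pi * (s * k / a) ** 2).sum()  # det(L*) * f_hat(L*)
assert abs(lhs - rhs) < 1e-12                            # Poisson summation
\end{verbatim}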
\paragraph{Smoothing parameter.} An important lattice parameter induced by the discrete Gaussian, which will appear repeatedly in our work, is the \emph{smoothing parameter}, defined as follows.
\begin{definition}[Smoothing parameter]
For lattice $L$ and real $\varepsilon > 0$, we define the \emph{smoothing parameter} $\eta_{\varepsilon}(L)$ as
\begin{align*}
\eta_{\varepsilon}(L) = \inf \{s \mid \rho_{1/s}(L^* \setminus \{\bm 0\}) \leq \varepsilon\}
\; .
\end{align*}
\end{definition}
Intuitively, this parameter is the width beyond which the discrete Gaussian distribution behaves like a continuous Gaussian. This is formalized in the lemmas below.
\begin{lemma}[{\cite[Claim 3.9]{regev2005lwe}}]
\label{lem:smoothing-gaussian}
For any $n$-dimensional lattice $L$, vector $\bm u \in \mathbb{R}^n$, and $r,s > 0$ satisfying $rs/t \geq \eta_\varepsilon(L)$ for some $\varepsilon < \frac{1}{2}$, where $t = \sqrt{r^2+s^2}$, the statistical distance between $D_{\bm u+L,r} + D_{\mathbb{R}^n, s}$ and $D_{\mathbb{R}^n, t}$ is at most $4\varepsilon$.
\end{lemma}
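As an aside, the following numerical sketch (our own parameter and grid choices) illustrates the lemma for $L = \mathbb{Z}$: with $r$ and $s$ comfortably above the smoothing parameter of $\mathbb{Z}$, the printed statistical distance between the density of $D_{\mathbb{Z},r} + D_{\mathbb{R},s}$ and that of $D_{\mathbb{R},t}$, $t = \sqrt{r^2+s^2}$, is on the order of $10^{-6}$.
\begin{verbatim}
import numpy as np

r, s = 3.0, 3.0
t = np.hypot(r, s)
xs = np.linspace(-15.0, 15.0, 30001)
ks = np.arange(-60, 61)
w = np.exp(-np.pi * (ks / r) ** 2); w /= w.sum()          # weights of D_{Z,r}
mix = (w[:, None] * np.exp(-np.pi * ((xs - ks[:, None]) / s) ** 2) / s).sum(axis=0)
cont = np.exp(-np.pi * (xs / t) ** 2) / t                 # density of D_{R,t}
print(0.5 * np.trapz(np.abs(mix - cont), xs))             # tiny statistical distance
\end{verbatim}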
\begin{lemma}[{\cite[Lemma 2.5]{peikert2017ringlwe}}]
\label{lem:smoothing-uniform}
For any $n$-dimensional lattice $L$, real $\varepsilon > 0$, and $r \geq \eta_{\varepsilon}(L)$, the statistical distance between $D_{\mathbb{R}^n, r} \bmod L$ and the uniform distribution over $\mathbb{R}^n / L$ is at most $\varepsilon/2$.
\end{lemma}
Lemma~\ref{lem:smoothing-gaussian} states that if we take a sample from $D_{L,r}$ and add continuous Gaussian noise $D_{\mathbb{R}^n,s}$ to the sample, the resulting distribution is statistically close to $D_{\mathbb{R}^n,\sqrt{r^2+s^2}}$, which is precisely what one gets by adding two continuous Gaussian distributions of width $r$ and $s$. Unless specified otherwise, we always assume $\varepsilon$ is negligibly small in $n$, say $\varepsilon = \exp(-n)$. The following are some useful upper and lower bounds on the smoothing parameter $\eta_\varepsilon(L)$.
\begin{lemma}[{\cite[Lemma 2.6]{peikert2017ringlwe}}]
\label{lem:smoothing-dual}
For any $n$-dimensional lattice $L$ and $\varepsilon = \exp(-c^2n)$,
\begin{align*}
\eta_\varepsilon(L) \leq c\sqrt{n}/\lambda_1(L^*)
\; .
\end{align*}
\end{lemma}
\begin{lemma}[{\cite[Lemma 3.3]{micciancio2007average}}]
\label{lem:smoothing-primal}
For any $n$-dimensional lattice $L$ and $\varepsilon > 0$,
\begin{align*}
\eta_\varepsilon(L) \leq \sqrt{\frac{\ln(2n(1+1/\varepsilon))}{\pi}}\cdot \lambda_n(L)
\; .
\end{align*}
\end{lemma}
\begin{lemma}[{\cite[Claim 2.13]{regev2005lwe}}]
\label{lem:smoothing-lb}
For any $n$-dimensional lattice $L$ and $\varepsilon > 0$,
\begin{align*}
\eta_\varepsilon(L) \geq \sqrt{\frac{\ln 1/\varepsilon}{\pi}}\cdot\frac{1}{\lambda_1(L^*)}
\; .
\end{align*}
\end{lemma}
\paragraph{Computational problems.}
GapSVP and SIVP are among the main computational problems on lattices, and they are believed to be computationally hard (even with quantum computation) for any polynomial approximation factor $\alpha(n)$. We also define two additional problems, DGS and BDD.
\begin{definition}[GapSVP]
For an approximation factor $\alpha = \alpha(n)$, an instance of $\mathrm{GapSVP}_\alpha$ is given by an $n$-dimensional lattice $L$ and a number $d > 0$. In \textnormal{YES} instances, $\lambda_1(L) \leq d$, whereas in \textnormal{NO} instances, $\lambda_1(L) > \alpha \cdot d$.
\end{definition}
\begin{definition}[SIVP]
For an approximation factor $\alpha = \alpha(n)$, an instance of $\mathrm{SIVP}_\alpha$ is given by an $n$-dimensional lattice $L$. The goal is to output a set of $n$ linearly independent lattice vectors of length at most $\alpha \cdot \lambda_n(L)$.
\end{definition}
\begin{definition}[DGS]
For a function $\varphi$ that maps lattices to non-negative reals, an instance of $\mathrm{DGS}_\varphi$ is given by a lattice $L$ and a parameter $r \geq \varphi(L)$. The goal is to output an independent sample whose distribution is within negligible statistical distance of $D_{L,r}$.
\end{definition}
\begin{definition}[BDD]
For an $n$-dimensional lattice $L$ and distance bound $d > 0$, an instance of $\mathrm{BDD}_{L,d}$ is given by a vector $\bm t = \bm w + \bm u$, where $\bm u \in L$ and $\|\bm w\| \leq d$. The goal is to output $\bm w$.
\end{definition}
\subsection{Learning with errors}
\label{prelim:lwe}
We now define the learning with errors (LWE) problem. This definition will not be used in the sequel, and is included for completeness. Let $n$ and $q$ be positive integers, and $\alpha > 0$ an error rate. We denote the quotient ring of integers modulo $q$ by $\mathbb{Z}_q = \mathbb{Z}/q\mathbb{Z}$ and the quotient group of the reals modulo the integers by $\mathbb{T} = \mathbb{R}/\mathbb{Z} = [0, 1)$.
\begin{definition}[LWE distribution] For integer $q \ge 2$ and vector $\bm s \in \mathbb{Z}_q^n$, the \emph{LWE distribution} $A_{\bm s, \alpha}$ over $\mathbb{Z}_q^n \times \mathbb{T}$ is sampled by independently choosing uniformly random $\bm a \in \mathbb{Z}_q^n$ and $e \sim D_{\mathbb{R},\alpha}$, and outputting $(\bm a, (\langle \bm a, \bm s \rangle/q + e) \bmod 1)$.
\end{definition}
\begin{definition} For an integer $q = q(n) \geq 2$ and error parameter $\alpha = \alpha(n) > 0$, the average-case decision problem $\mathrm{LWE}_{q,\alpha}$ is to distinguish the following two distributions over $\mathbb{Z}_q^n \times \mathbb{T}$: (1) the LWE distribution $A_{\bm s, \alpha}$ for some uniformly random $\bm s \in \mathbb{Z}_q^n$ (which is fixed for all samples), or (2) the uniform distribution.
\end{definition}
\subsection{Continuous learning with errors}
\label{prelim:comb}
We now define the CLWE distribution, which is the central subject of our analysis.
\begin{definition}[CLWE distribution]
For unit vector $\bm w \in \mathbb{R}^n$ and parameters $\beta, \gamma > 0$, define the \emph{CLWE distribution} $A_{{\bm w}, \beta, \gamma}$ over $\mathbb{R}^n \times \mathbb{T}$ to have density at $(\bm y,z)$ proportional to
\begin{align*}
\rho(\bm y) \cdot \sum_{k \in \mathbb{Z}} \rho_\beta(z+k-\gamma\langle \bm y, \bm w \rangle)
\; .
\end{align*}
\end{definition}
Equivalently, a sample $(\bm y, z)$ from the CLWE distribution $A_{\bm w, \beta, \gamma}$ is given by the $(n+1)$-dimensional vector $(\bm y, z)$ where $\bm y \sim D_{\mathbb{R}^n}$ and
$z = (\gamma \langle \bm y, \bm w \rangle + e) \bmod 1$ where $e \sim D_{\mathbb{R},\beta}$.
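In code, this sampling procedure reads as follows (a sketch; the dimension and parameters are arbitrary choices of ours, and width $s$ corresponds to standard deviation $s/\sqrt{2\pi}$ under our normalization of $\rho$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, beta = 16, 0.05
gamma = 2 * np.sqrt(n)
w = rng.normal(size=n); w /= np.linalg.norm(w)     # hidden unit direction

def clwe_sample(num):
    y = rng.normal(scale=1 / np.sqrt(2 * np.pi), size=(num, n))  # y ~ D_{R^n}
    e = rng.normal(scale=beta / np.sqrt(2 * np.pi), size=num)    # e ~ D_{R,beta}
    z = (gamma * (y @ w) + e) % 1.0
    return y, z
\end{verbatim}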
The vector $\bm w$ is the hidden direction, $\gamma$ is the density of layers, and $\beta$ is the noise added to each equation. From the CLWE distribution, we can arrive at the homogeneous CLWE distribution by conditioning on $z = 0$. A formal definition is given as follows.
\begin{definition}[Homogeneous CLWE distribution]\label{def:hclwe}
For unit vector $\bm w \in \mathbb{R}^n$ and parameters $\beta, \gamma > 0$, define the \emph{homogeneous CLWE distribution} $H_{\bm w, \beta, \gamma}$ over $\mathbb{R}^n$ to have density at $\bm y$ proportional to
\begin{align}\label{eqn:hclwe-def}
\rho(\bm y) \cdot \sum_{k \in \mathbb{Z}} \rho_\beta(k-\gamma\langle \bm y, \bm w \rangle)
\; .
\end{align}
\end{definition}
The homogeneous CLWE distribution can be equivalently defined as a mixture of Gaussians.
To see this, notice that Eq.~\eqref{eqn:hclwe-def} is equal to
\begin{align}\label{eqn:hclwe-mixture-def}
\sum_{k \in \mathbb{Z}} \rho_{\sqrt{\beta^2+\gamma^2}}(k) \cdot
\rho(\pi_{\bm w^\perp}(\bm y)) \cdot \rho_{\beta/\sqrt{\beta^2+\gamma^2}}\Big(\langle \bm y, \bm w \rangle -\frac{\gamma}{\beta^2+\gamma^2}k\Big) \; ,
\end{align}
where $\pi_{\bm w^\perp}$ denotes the projection on the orthogonal space to $\bm w$.
Hence, $H_{\bm w, \beta, \gamma}$ can be viewed as a mixture of Gaussian components of width
$\beta/\sqrt{\beta^2+\gamma^2}$ (which is roughly $\beta/\gamma$ for $\beta \ll \gamma$) in the secret direction, and width $1$ in the orthogonal space. The components are equally spaced, with a separation of $\gamma/(\beta^2+\gamma^2)$ between them (which is roughly $1/\gamma$ for $\beta \ll \gamma$).
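This mixture form also yields a direct sampler for $H_{\bm w,\beta,\gamma}$, sketched below with arbitrary toy parameters: draw the component index from $D_{\mathbb{Z},\sqrt{\beta^2+\gamma^2}}$ (truncated to a finite range, which loses only negligible mass), then draw the coordinate along $\bm w$ around the component's center, and a standard Gaussian orthogonally.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, beta = 16, 0.05
gamma = 2 * np.sqrt(n)
w = rng.normal(size=n); w /= np.linalg.norm(w)
g2 = beta**2 + gamma**2
ks = np.arange(-200, 201)
p = np.exp(-np.pi * ks**2 / g2); p /= p.sum()      # component weights

def hclwe_sample():
    k = rng.choice(ks, p=p)
    along = gamma * k / g2 + rng.normal(scale=(beta / np.sqrt(g2)) / np.sqrt(2 * np.pi))
    y = rng.normal(scale=1 / np.sqrt(2 * np.pi), size=n)
    return y - (y @ w) * w + along * w             # standard Gaussian orthogonal to w
\end{verbatim}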
We remark that the integral of~\eqref{eqn:hclwe-def} (or equivalently, of~\eqref{eqn:hclwe-mixture-def}) over all $\bm y$ is
\begin{align}\label{eqn:hclwe-def-normalization}
Z = \frac{\beta}{\sqrt{\beta^2+\gamma^2}} \cdot \rho\Bigg(\frac{1}{\sqrt{\beta^2+\gamma^2}}\mathbb{Z}\Bigg) \; .
\end{align}
This is easy to see since the integral over $\bm y$ of the product of the last two $\rho$ terms in~\eqref{eqn:hclwe-mixture-def} is $\beta/\sqrt{\beta^2+\gamma^2}$ independently of $k$.
\begin{definition} For parameters $\beta, \gamma > 0$, the average-case decision problem $\mathrm{CLWE}_{\beta, \gamma}$ is to distinguish the following two distributions over $\mathbb{R}^n \times \mathbb{T}$: (1) the CLWE distribution $A_{\bm w, \beta, \gamma}$ for some uniformly random unit vector $\bm w \in \mathbb{R}^n$ (which is fixed for all samples), or (2) $D_{\mathbb{R}^n} \times U$, where $U$ denotes the uniform distribution on $\mathbb{T}$.
\end{definition}
\begin{definition} For parameters $\beta, \gamma > 0$, the average-case decision problem $\mathrm{hCLWE}_{\beta, \gamma}$ is to distinguish the following two distributions over $\mathbb{R}^n$: (1) the homogeneous CLWE distribution $H_{\bm w, \beta, \gamma}$ for some uniformly random unit vector $\bm w \in \mathbb{R}^n$ (which is fixed for all samples), or (2) $D_{\mathbb{R}^n}$.
\end{definition}
Note that $\mathrm{CLWE}_{\beta,\gamma}$ and $\mathrm{hCLWE}_{\beta,\gamma}$ are defined as average-case problems. We could have equally well defined them to be worst-case problems, requiring the algorithm to distinguish the distributions for \emph{all} hidden directions $\bm w \in \mathbb{R}^n$. The following claim shows that the two formulations are equivalent.
\begin{claim} \label{claim:ic-worst-to-ic}
For any $\beta, \gamma > 0$,
there is a polynomial-time reduction from worst-case $\mathrm{CLWE}_{\beta,\gamma}$ to (average-case) $\mathrm{CLWE}_{\beta,\gamma}$.
\end{claim}
\begin{proof}
Given CLWE samples $\{(\bm y_i,z_i)\}_{i=1}^{\operatorname{poly}(n)}$ from $A_{\bm w,\beta,\gamma}$, we apply a random rotation $\bm R$, giving us samples of the form $\{(\bm R\bm y_i,z_i)\}_{i=1}^{\operatorname{poly}(n)}$. Since the standard Gaussian is rotationally invariant and $\langle \bm y, \bm w \rangle = \langle \bm R \bm y, \bm R\bm w \rangle$, the rotated CLWE samples are distributed according to $A_{\bm R\bm w,\beta,\gamma}$. Since $\bm R$ is a random rotation, the direction $\bm R \bm w$ is uniformly distributed on the unit sphere.
\end{proof}
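A sketch of this re-randomization in code (the Haar-random rotation is generated by the standard QR-with-sign-correction recipe; all names are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(n):
    # QR of a Gaussian matrix, with sign correction, is Haar-distributed on O(n).
    Q, R = np.linalg.qr(rng.normal(size=(n, n)))
    return Q * np.sign(np.diag(R))

def rerandomize(ys, zs):
    R = random_rotation(ys.shape[1])
    return ys @ R.T, zs            # samples now follow A_{Rw,beta,gamma}
\end{verbatim}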
\section{Hardness of CLWE}
\label{section:clwe-hardness}
\subsection{Background and overview}
\label{section:clwe-background}
In this section, we give an overview of the quantum reduction from worst-case lattice problems to CLWE. Our goal is to show that we can efficiently solve worst-case lattice problems, in particular GapSVP and SIVP, using an oracle for $\mathrm{CLWE}$ (and with quantum computation). We first state our main theorem, which was stated informally as Theorem~\ref{thm:main-informal} in the introduction.
\begin{theorem}
\label{thm:clwe-intro}
Let $\beta = \beta(n) \in (0,1)$ and $\gamma = \gamma(n) \geq 2\sqrt{n}$ be such that $\gamma/\beta$ is polynomially bounded. Then there is a polynomial-time quantum reduction from $\mathrm{DGS}_{2\sqrt{n}\eta_\varepsilon(L)/\beta}$ to $\mathrm{CLWE}_{\beta,\gamma}$.
\end{theorem}
Using standard reductions from GapSVP and SIVP to DGS (see, e.g.,~\cite[Section 3.3]{regev2005lwe}), our main theorem immediately implies the following corollary.
\begin{corollary}
Let $\beta = \beta(n) \in (0,1)$ and $\gamma = \gamma(n) \geq 2\sqrt{n}$ be such that $\gamma/\beta$ is polynomially bounded. Then, there is a polynomial-time quantum reduction from $\mathrm{SIVP}_\alpha$ and $\mathrm{GapSVP}_\alpha$ to $\mathrm{CLWE}_{\beta,\gamma}$ for some $\alpha = \tilde{O}(n/\beta)$.
\end{corollary}
Based on previous work, to prove Theorem~\ref{thm:clwe-intro}, it suffices to prove the following lemma, which is the goal of this section.
\begin{lemma}
\label{lem:bdddgs-to-clwe}
Let $\beta=\beta(n) \in (0,1)$ and $\gamma=\gamma(n) \geq 2\sqrt{n}$ be such that $q = \gamma/\beta$ is polynomially bounded. There exists a probabilistic polynomial-time (classical) algorithm with access to an oracle solving $\mathrm{CLWE}_{\beta,\gamma}$ that takes as input a lattice $L \subset \mathbb{R}^n$, parameters $\beta, \gamma$, a number $r \geq 2q \cdot \eta_{\varepsilon}(L)$, and $\operatorname{poly}(n)$ many samples from the discrete Gaussian distributions $D_{L,r_i}$ for $\operatorname{poly}(n)$ many parameters $r_i \geq r$, and solves $\mathrm{BDD}_{L^*,d}$ for $d = \gamma/(\sqrt{2}r)$.
\end{lemma}
In other words, we can implement an oracle for $\mathrm{BDD}_{L^*,\gamma/(\sqrt{2}r)}$ using polynomially many discrete Gaussian samples and the CLWE oracle as a sub-routine.
The proof of Lemma~\ref{lem:bdddgs-to-clwe} will be given in Section~\ref{section:clwe-from-bdddgs}
(which is the novel contribution) and Section~\ref{section:solve-bdd-with-clwe}
(which mainly follows~\cite{peikert2017ringlwe}).
In the rest of this subsection, we briefly explain how Theorem~\ref{thm:clwe-intro} follows from Lemma~\ref{lem:bdddgs-to-clwe}.
This derivation is already implicit in past work~\cite{peikert2017ringlwe,regev2005lwe}, and is included here mainly for completeness.
Readers familiar with the reduction may skip directly to Section~\ref{section:clwe-from-bdddgs}.
The basic idea is to start with samples from a very wide discrete Gaussian (which can be efficiently sampled) and then iteratively sample from narrower and narrower discrete Gaussians, until eventually we end up with short discrete Gaussian samples, as required (see Figure~\ref{fig:lwe-diagram}). Each iteration consists of two steps: the first step is classical and is given by Lemma~\ref{lem:bdddgs-to-clwe}, allowing us to solve BDD on the dual lattice; the second step is quantum and is given in Lemma~\ref{lem:reg05quantumstep} below, which shows that solving BDD leads to sampling from a narrower discrete Gaussian.
\begin{figure}[ht]
\centering
\input{plots/clwe-iteration.tex}
\caption{Two iterations of the reduction.}
\label{fig:lwe-diagram}
\end{figure}
\begin{lemma}[{\cite[Lemma 3.14]{regev2005lwe}}]
\label{lem:reg05quantumstep}
There exists an efficient quantum algorithm that, given any $n$-dimensional lattice $L$, a number $d < \lambda_1(L^*)/2$, and an oracle that solves $\mathrm{BDD}_{L^*,d}$, outputs a sample from $D_{L,\sqrt{n}/(\sqrt{2}d)}$.
\end{lemma}
As in~\cite{peikert2017ringlwe}, there is a subtle requirement in Lemma~\ref{lem:bdddgs-to-clwe}: we need discrete Gaussian samples with several different parameters $r' \geq r$. However, this is a non-issue, since an oracle for $\mathrm{BDD}_{L^*,\gamma/(\sqrt{2}r)}$ also solves $\mathrm{BDD}_{L^*,\gamma/(\sqrt{2}r')}$ for any $r' \ge r$, so Lemma~\ref{lem:reg05quantumstep} in fact allows us to efficiently sample from $D_{L,r'\sqrt{n}/\gamma}$ for any $r' \ge r$.
\subsection{CLWE samples from BDD}
\label{section:clwe-from-bdddgs}
In this subsection we prove Lemma~\ref{lem:bdd-to-clwe}, showing how to generate CLWE samples from the given BDD instance using discrete Gaussian samples.
In the next subsection we will show how to solve the BDD instance by applying the decision CLWE oracle to these samples, thereby completing the proof of Lemma~\ref{lem:bdddgs-to-clwe}.
\begin{lemma}
\label{lem:bdd-to-clwe}
There is an efficient algorithm that takes as input an $n$-dimensional lattice $L$, a vector $\bm w + \bm u$ where $\bm u \in L^*$, reals $r, s_1, s_2 > 0$ such that $rs_1/\sqrt{\|\bm w\|^2(r s_1/s_2)^2+t^2} \geq \eta_\varepsilon(L)$ for some $\varepsilon < \frac{1}{2}$ and $t = \sqrt{r^2+s_1^2}$, and samples from $D_{L,r}$, and outputs samples that are within statistical distance $8 \varepsilon$ of the CLWE distribution $A_{\bm w', \beta, \gamma}$ for $\bm w' = \bm w/\|\bm w\|$, $\beta = \|\bm w\|\sqrt{(rs_1/t)^2+(s_2/\|\bm w\|)^2}$ and $\gamma = \|\bm w\|r^2/t$.
\end{lemma}
\begin{proof}
We start by describing the algorithm. For each $\bm x$ from the given samples from $D_{L,r}$, do the following. First, take the inner product $\langle \bm x, \bm w + \bm u \rangle$, which gives us
\begin{align*}
\langle \bm x, \bm w + \bm u \rangle &= \langle \bm x, \bm w \rangle \bmod 1
\; .
\end{align*}
Appending this inner product modulo 1 to the sample $\bm x$, we get $(\bm x, \langle \bm x, \bm w \rangle \bmod 1)$.
Next, we ``smooth out" the lattice structure of $\bm x$ by adding Gaussian noise $\bm v \sim D_{\mathbb{R}^n,s_1}$ to $\bm x$ and $e \sim D_{\mathbb{R},s_2}$ to $\langle \bm x, \bm w \rangle$ (modulo 1). Then, we have
\begin{align}
(\bm x + \bm v, (\langle \bm x, \bm w \rangle + e) \bmod 1) \label{eqn:clwe-sample-raw}\; .
\end{align}
Finally, we normalize the first component by $t$ so that its marginal distribution has unit width, giving us
\begin{align}
((\bm x + \bm v)/t,(\langle \bm x, \bm w \rangle + e) \bmod 1) \label{eqn:clwe-sample-raw-normalized}\;,
\end{align}
which the algorithm outputs.
Our goal is to show that the distribution of \eqref{eqn:clwe-sample-raw-normalized} is within statistical distance $8\varepsilon$ of the CLWE distribution $A_{\bm w',\beta,\gamma}$, given by
\begin{align*}
(\bm y', (\gamma \langle \bm y', \bm w' \rangle + e') \bmod 1) \; ,
\end{align*}
where $\bm y' \sim D_{\mathbb{R}^n}$ and $e' \sim D_{\mathbb{R},\beta}$.
Because applying a function cannot increase statistical distance (specifically, dividing the first component by $t$ and taking mod $1$ of the second), it suffices to show that the distribution of
\begin{align}
(\bm x + \bm v, \langle \bm x, \bm w \rangle + e) \label{eqn:clwe-sample-1}\; ,
\end{align}
is within statistical distance $8\varepsilon$ of that of
\begin{align}
(\bm y, (r/t)^2 \langle \bm y, \bm w \rangle + e') \label{eqn:clwe-sample-2}\; ,
\end{align}
where $\bm y \sim D_{\mathbb{R}^n,t}$ and $e' \sim D_{\mathbb{R},\beta}$. First, observe that by Lemma~\ref{lem:smoothing-gaussian}, the statistical distance between the marginals on the first component (i.e., between $\bm x +\bm v$ and $\bm y$) is at most $4\varepsilon$. It is therefore sufficient to bound the statistical distance between the second components conditioned on any fixed value $\overline{\bm y}$ of the first component.
Conditioned on the first component being $\overline{\bm y}$, the second component in~\eqref{eqn:clwe-sample-1} has the same distribution as
\begin{align}
\langle \bm x + \bm h , \bm w \rangle \label{eqn:clwe-sample-3}
\end{align}
where $\bm h \sim D_{\mathbb{R}^n,s_2/\|\bm w\|}$,
and the second component in~\eqref{eqn:clwe-sample-2} has the same distribution as
\begin{align}
\langle (r/t)^2 \overline{\bm y} + \bm h' , \bm w \rangle \label{eqn:clwe-sample-4}
\end{align}
where $\bm h' \sim D_{\mathbb{R}^n,\beta/\|\bm w\|}$.
By Claim~\ref{clm:lattice_conditional} below, conditioned on $\bm x+\bm v = \overline{\bm y}$, the distribution of $\bm x$ is
$(r/t)^2\overline{\bm y} + D_{L-(r/t)^2\overline{\bm y}, rs_1/t}$. Therefore, by Lemma~\ref{lem:smoothing-gaussian}, the conditional distribution of $\bm x + \bm h$ given $\bm x+\bm v=\overline{\bm y}$ is within statistical distance $4 \varepsilon$ of that of $(r/t)^2\overline{\bm y} + \bm h'$. Since statistical distance cannot increase by applying a function (inner product with $\bm w$ in this case), \eqref{eqn:clwe-sample-3} is within statistical distance $4\varepsilon$ of \eqref{eqn:clwe-sample-4}. Hence, the distribution of \eqref{eqn:clwe-sample-1} is within statistical distance $8\varepsilon$ of that of \eqref{eqn:clwe-sample-2}.
\end{proof}
\begin{claim}
\label{clm:lattice_conditional}
Let $\bm y = \bm x + \bm v$, where $\bm x \sim D_{L,r}$ and $\bm v \sim D_{\mathbb{R}^n,s}$. Then, the conditional distribution of $\bm x$ given $\bm y = \overline{\bm y}$ is $(r/t)^2\overline{\bm y} + D_{L-(r/t)^2\overline{\bm y}, rs/t}$ where $t = \sqrt{r^2+s^2}$.
\end{claim}
\begin{proof}
Observe that $\bm x$ conditioned on $\bm y = \overline{\bm y}$ is a discrete random variable supported on $L$.
The probability of $\bm x$ given $\bm y = \overline{\bm y}$ is proportional to
\begin{align*}
\rho_r(\bm x) \cdot \rho_s(\overline{\bm y}-\bm x) = \rho_t(\overline{\bm y}) \cdot \rho_{rs/t}(\bm x-(r/t)^2\overline{\bm y}) \propto \rho_{rs/t}(\bm x-(r/t)^2\overline{\bm y})
\; ,
\end{align*}
where the equality follows from Claim~\ref{claim:complete-squares}. Hence, the conditional distribution of $\bm x-(r/t)^2\bm y$ given $\bm y = \overline{\bm y}$ is $D_{L-(r/t)^2\overline{\bm y}, rs/t}$.
\end{proof}
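One can also check Claim~\ref{clm:lattice_conditional} empirically for $L = \mathbb{Z}$ by Monte Carlo (illustrative only; the bin width and sample count are arbitrary choices of ours): conditioning on $\bm y$ falling in a narrow window around $\overline{\bm y}$, the conditional mean and width of $\bm x$ match $(r/t)^2\overline{\bm y}$ and $rs/t$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
r, s = 3.0, 2.0
t = np.hypot(r, s)
ks = np.arange(-60, 61)
p = np.exp(-np.pi * (ks / r) ** 2); p /= p.sum()
x = rng.choice(ks, size=2_000_000, p=p)                    # x ~ D_{Z,r}
y = x + rng.normal(scale=s / np.sqrt(2 * np.pi), size=x.size)
ybar = 1.3
sel = np.abs(y - ybar) < 0.01                              # condition on y ~ ybar
# Claim: x | y = ybar is (r/t)^2*ybar + D_{Z-(r/t)^2*ybar, rs/t}.
print(x[sel].mean(), (r / t) ** 2 * ybar)                  # centers nearly agree
print(x[sel].std(), (r * s / t) / np.sqrt(2 * np.pi))      # widths nearly agree
\end{verbatim}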
\subsection{Solving BDD with the CLWE oracle}
\label{section:solve-bdd-with-clwe}
In this subsection, we complete the proof of Lemma~\ref{lem:bdddgs-to-clwe}. We first give some necessary background on the Oracle Hidden Center Problem (OHCP)~\cite{peikert2017ringlwe}. The problem asks one to search for a ``hidden center" $\bm w^*$ using a decision oracle whose acceptance probability depends only on the distance to $\bm w^*$. The problem's precise statement is as follows.
\begin{definition}[OHCP]
\label{definition:ohcp}
For parameters $\varepsilon, \delta \in [0,1)$ and $\zeta \geq 1$, the $(\varepsilon, \delta, \zeta)$-\emph{OHCP} is the following approximate search problem. We are given a scale parameter $d > 0$ and access to a randomized oracle $\mathcal{O} : \mathbb{R}^n \times \mathbb{R}^{\geq 0} \rightarrow \{0,1\}$ whose acceptance probability $p(\bm w,t)$ depends only on $\exp(t)\|\bm w-\bm w^*\|$ for any $\bm w \in \mathbb{R}^n$ with $\|\bm w-\bm w^*\| \leq \zeta d$, where $\bm w^* \in \mathbb{R}^n$ is an (unknown) ``hidden center'' satisfying $\delta d \leq \|\bm w^*\| \leq d$. The goal is to output $\bm w$ such that $\|\bm w-\bm w^*\| \leq \varepsilon d$.
\end{definition}
Notice that OHCP corresponds to our setting: we want to solve BDD, which amounts to finding the ``hidden'' offset vector $\bm w^*$, using a decision oracle for $\mathrm{CLWE}_{\beta, \gamma}$. The acceptance probability of the $\mathrm{CLWE}_{\beta,\gamma}$ oracle will depend on the distance between our guess $\bm w$ and the true offset $\bm w^*$. For OHCP, we have the following result from~\cite{peikert2017ringlwe}.
\begin{lemma}[\cite{peikert2017ringlwe}, Proposition 4.4]
\label{lem:ohcp}
There is a poly$(\kappa, n)$-time algorithm that takes as input a confidence parameter $\kappa \geq 20 \log(n+1)$ (and the scale parameter $d > 0$) and solves $(\exp(-\kappa), \exp(-\kappa), 1+1/\kappa)$-OHCP in dimension $n$ except with probability $\exp(-\kappa)$, provided that the oracle $\mathcal{O}$ corresponding to the OHCP instance satisfies the following conditions. For some $p(\infty) \in [0, 1]$ and $t^* \ge 0$,
\begin{enumerate}
\item $p(\bm 0, t^*) - p(\infty) \geq 1/\kappa$;
\item $|p(\bm 0, t) - p(\infty)| \leq 2 \exp(-t/\kappa)$ for any $t \geq 0$; and
\item $p(\bm w, t)$ is $\kappa$-Lipschitz in $t$ for any $\bm w \in \mathbb{R}^n$ such that $\|\bm w\| \leq (1+1/\kappa)d$ \;.
\end{enumerate}
Furthermore, each of the algorithm's oracle calls takes the form $\mathcal{O}(\cdot,i\Delta)$ for some $\Delta < 1$ that depends only on $\kappa$ and $n$ and $0 \leq i \leq \operatorname{poly}(\kappa,n)$.
\end{lemma}
The main idea in the proof of Lemma~\ref{lem:ohcp} is performing a guided random walk with advice from the decision oracle $\mathcal{O}$. The decision oracle $\mathcal{O}$ rejects a random step with high probability if it increases the distance $\|\bm w - \bm w^*\|$. Moreover, there is non-negligible probability of decreasing the distance by a factor $\exp(1/n)$ unless $\log \|\bm w-\bm w^*\| \leq -\kappa$. Hence, with sufficiently many steps, the random walk will reach $\widehat{\bm w}$, a guess of the hidden center, which is within distance $\exp(-\kappa)\cdot d$ of $\bm w^*$ with high probability.
Our goal is to show that we can construct an oracle $\mathcal{O}$ satisfying the above conditions using an oracle for $\mathrm{CLWE}_{\beta, \gamma}$. Then, it follows from Lemma~\ref{lem:ohcp} that BDD with discrete Gaussian samples can be solved using an oracle for CLWE. We first state some lemmas useful for our proof. Lemma~\ref{lem:closest-plane} is Babai's closest plane algorithm and Lemma~\ref{lem:tvbound} is an upper bound on the statistical distance between two one-dimensional Gaussian distributions.
\begin{lemma}[\cite{lenstra1982lll,babai1986cvp}]
\label{lem:closest-plane}
For any $n$-dimensional lattice $L$, there is an efficient algorithm that solves $\mathrm{BDD}_{L,d}$ for $d = 2^{-n/2}\cdot \lambda_1(L)$.
\end{lemma}
\begin{lemma}[{\cite[Theorem 1.3]{devroye2018tv}}] For all $\mu_1, \mu_2 \in \mathbb{R}$, and $\sigma_1, \sigma_2 > 0$, we have
\label{lem:tvbound}
\begin{align*}
\Delta\big(\mathcal{N}(\mu_1,\sigma_1),\mathcal{N}(\mu_2,\sigma_2)\big) \leq \frac{3|\sigma_1^2-\sigma_2^2|}{2\max(\sigma_1^2,\sigma_2^2)}+\frac{|\mu_1-\mu_2|}{2\max(\sigma_1,\sigma_2)}
\; ,
\end{align*}
where $\mathcal{N}(\mu,\sigma)$ denotes the Gaussian distribution with mean $\mu$ and standard deviation $\sigma$.
\end{lemma}
Now, we prove Lemma~\ref{lem:bdddgs-to-clwe}, restated below.
{
\def\ref{lem:bdddgs-to-clwe}{\ref{lem:bdddgs-to-clwe}}
\begin{lemma}
Let $\beta=\beta(n) \in (0,1)$ and $\gamma=\gamma(n) \geq 2\sqrt{n}$ be such that $q = \gamma/\beta$ is polynomially bounded. There exists a probabilistic polynomial-time (classical) algorithm with access to an oracle solving $\mathrm{CLWE}_{\beta,\gamma}$ that takes as input a lattice $L \subset \mathbb{R}^n$, parameters $\beta, \gamma$, a number $r \geq 2q \cdot \eta_{\varepsilon}(L)$, and $\operatorname{poly}(n)$ many samples from the discrete Gaussian distributions $D_{L,r_i}$ for $\operatorname{poly}(n)$ many parameters $r_i \geq r$, and solves $\mathrm{BDD}_{L^*,d}$ for $d = \gamma/(\sqrt{2}r)$.
\end{lemma}
\addtocounter{theorem}{-1}
}
\begin{proof}
Let $d' = (1-1/(2n))\cdot d$. By~\cite[Corollary 2]{lyubashevsky2009bdd}, it suffices to solve $\mathrm{BDD}_{L^*,d'}$.
Let $\kappa = \operatorname{poly}(n)$ with $\kappa \geq 8qn\ell$ be such that the advantage of our $\mathrm{CLWE}_{\beta, \gamma}$ oracle is at least $1/\kappa$, where $\ell \geq 1$ is the number of samples required by the oracle.
Given as input a lattice $L \subset \mathbb{R}^n$, a parameter $r \geq 2q \cdot \eta_{\varepsilon}(L)$, samples from $D_{L,r_i}$ for $1 \leq i \leq \operatorname{poly}(n)$, and a BDD instance $\bm w^* + \bm u$ where $\bm u \in L^*$ and $\|\bm w^*\| \leq d'$, we want to recover $\bm w^*$. Without loss of generality, we can assume that $\|\bm w^*\| \geq \exp(-n/2)\cdot \lambda_1(L^*) \geq (2q/r)\cdot \exp(-n/2)$ (Lemma~\ref{lem:smoothing-lb}), since we can otherwise find $\bm w^*$ efficiently using Babai's closest plane algorithm (Lemma~\ref{lem:closest-plane}).
We will use the CLWE oracle to simulate an oracle $\mathcal{O}: \mathbb{R}^n \times \mathbb{R}^{\ge 0} \rightarrow \{0,1\}$ such that the probability that $\mathcal{O}(\bm w,t)$ outputs 1 (``accepts") only depends on $\exp(t)\|\bm w-\bm w^*\|$. Our oracle $\mathcal{O}$ corresponds to the oracle in Definition~\ref{definition:ohcp} with $\bm w^*$ as the ``hidden center". We will use Lemma~\ref{lem:ohcp} to find $\bm w^*$.
On input $(\bm w, t)$, our oracle $\mathcal{O}$ receives $\ell$ independent samples from $D_{L,\exp(t)r}$. Then, we generate CLWE samples using the procedure from Lemma~\ref{lem:bdd-to-clwe}. The procedure takes as input these $\ell$ samples, the vector $\bm u + \bm w^* - \bm w$ where $\bm u \in L^*$, and parameters $\exp(t) r, \exp(t) s_1, s_2$. Our choice of $s_1$ and $s_2$ will be specified below. Note that the CLWE oracle requires the ``hidden direction" $(\bm w-\bm w^*)/\|\bm w-\bm w^*\|$ to be uniformly distributed on the unit sphere. To this end, we apply the worst-to-average case reduction from Claim~\ref{claim:ic-worst-to-ic}. Let $S_{\bm w, t}$ be the resulting CLWE distribution. Our oracle $\mathcal{O}$ then calls the $\mathrm{CLWE}_{\beta,\gamma}$ oracle on $S_{\bm w,t}^\ell$ and outputs 1 if and only if it accepts.
Using the oracle $\mathcal{O}$, we can run the procedure from Lemma~\ref{lem:ohcp} with confidence parameter $\kappa$ and scale parameter $d'$. The output of this procedure will be some approximation $\widehat{\bm w}$ to the oracle's ``hidden center" with the guarantee that $\|\widehat{\bm w}-\bm w^*\| \leq \exp(-\kappa)d'$. Finally, running Babai's algorithm on the vector $\bm u+\bm w^*-\widehat{\bm w}$ will give us $\bm w^*$ exactly since
\begin{align*}
\|\widehat{\bm w}-\bm w^*\| \leq \exp(-\kappa)d \leq \beta\exp(-\kappa)/\eta_\varepsilon(L) \leq 2^{-n}\lambda_1(L^*)
\; ,
\end{align*}
where the last inequality is from Lemma~\ref{lem:smoothing-dual}.
The running time of the above procedure is clearly polynomial in $n$. It remains to check that our oracle $\mathcal{O}$ (1) is a valid instance of $(\exp(-\kappa),\exp(-\kappa),1+1/\kappa)$-OHCP with hidden center $\bm w^*$ and (2) satisfies all the conditions of Lemma~\ref{lem:ohcp}. First, note that $S_{\bm w, t}$ will be negligibly close in statistical distance to the CLWE distribution with parameters
\begin{align*}
\beta' &= \sqrt{(\exp(t)\|\bm w-\bm w^*\|)^2s_1'^2+s_2^2}
\; , \\
\gamma' &= \exp(t)\|\bm w-\bm w^*\|r'
\; ,
\end{align*}
where $r' = r^2/\sqrt{r^2+s_1^2}$ and $s_1' = rs_1/\sqrt{r^2+s_1^2}$ as long as $r,s_1,s_2$ satisfy the conditions of Lemma~\ref{lem:bdd-to-clwe}. Then, we set $s_1 = r/(\sqrt{2}q)$ and choose $s_2$ such that
\begin{align*}
s_2^2 = {\beta}^2 - (s_1'/r')^2{\gamma}^2 = {\beta}^2 - (s_1/r)^2{\gamma}^2 = {\beta}^2/2
\; .
\end{align*}
Lemma~\ref{lem:bdd-to-clwe} requires $rs_1/\sqrt{r^2\|\bm w-\bm w^*\|^2(s_1/s_2)^2+r^2+s_1^2} \geq \eta_{\varepsilon}(L)$. We know that $r \geq 2q\cdot \eta_{\varepsilon}(L)$ and $s_1 \geq \sqrt{2}\cdot \eta_{\varepsilon}(L)$, so it remains to determine a sufficient condition for the aforementioned inequality. Observe that for any $\bm w$ such that $\|\bm w-\bm w^*\| \leq d$, the condition $s_2 \geq 2d\cdot\eta_\varepsilon(L)$ is sufficient. Since $r \geq 2(\gamma/\beta)\cdot \eta_{\varepsilon}(L)$, we have $2d\cdot\eta_\varepsilon(L) \leq \beta/\sqrt{2}$, so our choice $s_2 = \beta/\sqrt{2}$ suffices. Hence, the transformation from Lemma~\ref{lem:bdd-to-clwe} will output samples negligibly close to CLWE samples for our choice of $s_1$ and $s_2$ as long as $\|\bm w-\bm w^*\| \leq d$ (slightly beyond the BDD distance bound $d'$).
Since $S_{\bm w,t}$ is negligibly close to the CLWE distribution, the acceptance probability $p(\bm w,t)$ of $\mathcal{O}$ only depends on $\exp(t)\|\bm w-\bm w^*\|$. Moreover, by assumption $\|\bm w^*\| \geq \exp(-n/2) \cdot (2q/r) \geq \exp(-\kappa)d'$. Hence, $\mathcal{O}, \kappa, d'$ correspond to a valid instance of $(\exp(-\kappa),\exp(-\kappa),1+1/\kappa)$-OHCP with ``hidden center" $\bm w^*$.
Next, we show that $p(\bm w,t)$ of $\mathcal{O}$ satisfies all three conditions of Lemma~\ref{lem:ohcp} with $p(\infty)$ taken to be the acceptance probability of the CLWE oracle on samples from $D_{\mathbb{R}^n} \times U$.
Item~1 of Lemma~\ref{lem:ohcp} follows from our assumption that the $\mathrm{CLWE}_{\beta,\gamma}$ oracle has advantage at least $1/\kappa$: by our choice of $r$, $s_1$, and $s_2$, at $t^* = \log(\gamma/(\|\bm w^*\|r')) > \log\sqrt{2}$ the generated CLWE samples satisfy $\gamma'(t^*) = \gamma$ and $\beta'(t^*) = \beta$. Hence, $p(\bm 0,t^*) - p(\infty) \geq 1/\kappa$.
We now show that Item~2 holds, which states that $|p(\bm 0,t)-p(\infty)| \leq 2 \exp(-t/\kappa)$ for any $t > 0$. We will show that $S_{\bm 0, t}$ converges exponentially fast to $D_{\mathbb{R}^n} \times U$ in statistical distance. Let $f(\bm y,z)$ be the probability density of $S_{\bm 0, t}$. Then,
\begin{align*}
\Delta(S_{\bm 0,t},D_{\mathbb{R}^n}\times U) &= \frac{1}{2}\int |f(z|\bm y)-U(z)|\rho(\bm y)d\bm y dz \\
&= \frac{1}{2} \int \Big(\int |f(z|\bm y)-U(z)|dz\Big)\rho(\bm y) d\bm y
\; .
\end{align*}
Hence, it suffices to show that the conditional density of $z$ given $\bm y$ for $S_{\bm 0,t}$ converges exponentially fast to the uniform distribution on $\mathbb{T}$. Notice that the conditional distribution of $z$ given $\bm y$ is the Gaussian distribution with width parameter $\beta' \geq \exp(t)\|\bm w^*\|r/(2q) \geq \exp(t-n/2)$, where we have used our assumption that $\|\bm w^*\| \geq (2q/r)\cdot \exp(-n/2)$. By Lemma~\ref{lem:smoothing-dual} applied to $\mathbb{Z}$, we know that $\beta'$ is larger than $\eta_{\varepsilon}(\mathbb{Z})$ for $\varepsilon = \exp(-\exp(2t-n))$. Hence, one sample from this conditional distribution is within statistical distance $\varepsilon$ of the uniform distribution by Lemma~\ref{lem:smoothing-uniform}. By the triangle inequality applied to $\ell$ samples,
\begin{align*}
\Delta\Big(S_{\bm 0, t}^\ell, (D_{\mathbb{R}^n} \times U)^\ell\Big) \leq \min(1, \ell \exp(-\exp(2t-n))) \leq 2\exp(-t/\kappa)
\; ,
\end{align*}
where in the last inequality, we use the fact that we can choose $\kappa$ to be such that $2\exp(-t/\kappa) \geq 1$ unless $t \geq \kappa/2$. And when $t \geq \kappa/2 \geq 4qn\ell$, we have $\ell \exp(-\exp(2t-n)) \ll \exp(-t/\kappa)$.
It remains to verify Item~3, which states that $p(\bm w, t)$ is $\kappa$-Lipschitz in $t$ for any $\|\bm w\| \leq (1+1/\kappa)d' \leq d$. We show this by bounding the statistical distance between $S_{\bm w,t_1}$ and $S_{\bm w,t_2}$ for $t_1 \geq t_2$. With a slight abuse in notation, let $f_{t_i}(\bm y,z)$ be the probability density of $S_{\bm w,t_i}$ and let $(\beta_i, \gamma_i)$ be the corresponding CLWE distribution parameters. For simplicity, also denote the hidden direction by $\bm w' = (\bm w-\bm w^*)/\|\bm w-\bm w^*\|$. Then,
\begin{align}
\Delta(f_{t_1}, f_{t_2})
&= \frac{1}{2}
\int \Big(\int |f_{t_1}(z|\bm y)-f_{t_2}(z|\bm y)|dz\Big) \rho(\bm y)d\bm y \nonumber \\
&= \int \Delta\Big(\mathcal{N}(\gamma_1\langle\bm y,\bm w'\rangle,\beta_1/\sqrt{2\pi}),\mathcal{N}(\gamma_2\langle\bm y,\bm w'\rangle,\beta_2/\sqrt{2\pi})\Big) \rho(\bm y)d\bm y \nonumber \\
&\leq \frac{1}{2} \int \Big(3(1-(\beta_2/\beta_1)^2) + \sqrt{2\pi}(\gamma_1-\gamma_2)/\beta_1\cdot|\langle \bm y, \bm w' \rangle|\Big)\cdot \rho(\bm y)d\bm y \label{eqn:devroye-tv}\\
&\leq \operatorname*{\mathbb{E}}_{\bm y \sim \rho}[M(\bm y)]
\cdot \Big(1-\exp(-2(t_1-t_2))\Big) \text{ where } M(\bm y)
= \frac{1}{2}\Big(3+2\sqrt{\pi} q \cdot|\langle \bm y, \bm w' \rangle|\Big) \nonumber \\
&\leq \operatorname*{\mathbb{E}}_{\bm y \sim \rho}[M(\bm y)] \cdot 2(t_1-t_2) \label{eqn:linear-bound} \\
&\leq (\kappa/\ell)\cdot (t_1-t_2) \label{eqn:exp-half-gaussian}
\; ,
\end{align}
where \eqref{eqn:devroye-tv} follows from Lemma~\ref{lem:tvbound}, \eqref{eqn:linear-bound} uses the fact that $1-\exp(-2(t_1-t_2)) \leq 2(t_1-t_2)$, and \eqref{eqn:exp-half-gaussian} uses the fact that $\operatorname*{\mathbb{E}}_{\bm y \sim \rho}[M(\bm y)] \leq 4q \leq \kappa/(2\ell)$. Using the triangle inequality over $\ell$ samples, the statistical distance between $S_{\bm w,t_1}^\ell$ and $S_{\bm w,t_2}^\ell$ is at most
\begin{align*}
\min(1,\ell\cdot(\kappa/\ell)(t_1-t_2)) \leq \kappa(t_1-t_2)
\; .
\end{align*}
Therefore, $p(\bm w,t)$ is $\kappa$-Lipschitz in $t$.
\end{proof}
\section{Hardness of Homogeneous CLWE}
\label{section:hc}
In this section, we show the hardness of homogeneous CLWE by reducing from CLWE, whose hardness was established in the previous section.
The main step of the reduction is to transform CLWE samples to homogeneous CLWE samples using rejection sampling (Lemma~\ref{lem:ic-to-hc}).
Consider the samples $(\bm y, z) \sim A_{\bm w,\beta,\gamma}$ in $\mathrm{CLWE}_{\beta,\gamma}$. If we condition $\bm y$ on $z = 0 \pmod{1}$, then we get exactly samples $\bm y \sim H_{\bm w,\beta,\gamma}$ for $\mathrm{hCLWE}_{\beta,\gamma}$. However, this approach is impractical, as $z = 0 \pmod{1}$ happens with probability $0$. Instead, we condition $\bm y$ on $z \approx 0 \pmod{1}$. The resulting samples $\bm y$ then still have a ``wavy'' probability density in the direction of $\bm w$ with spacing $1/\gamma$, which accords with the picture of homogeneous CLWE. To avoid throwing away too many samples, we will do rejection sampling with some small ``window'' $\delta = 1/\operatorname{poly}(n)$. Formally, we have the following lemma.
\begin{lemma}
\label{lem:ic-to-hc}
There is a $\operatorname{poly}(n, 1/\delta)$-time probabilistic algorithm that takes as input a parameter $\delta \in (0,1)$ and samples from $A_{\bm w,\beta,\gamma}$, and outputs samples from $H_{\bm w,\sqrt{\beta^2+\delta^2},\gamma}$.
\end{lemma}
\begin{proof}
Without loss of generality assume that $\bm w = \bm e_1$.
By definition, the probability density of sample $(\bm y, z) \sim A_{\bm w,\beta,\gamma}$ is
\begin{align*}
p(\bm y, z) = \frac{1}{\beta}\cdot \rho(\bm y) \cdot \sum_{k \in \mathbb{Z}} \rho_\beta(z+k-\gamma y_1)
\; .
\end{align*}
Let $g : \mathbb{T} \to [0,1]$ be the function $g(z) = g_0(z) / M$, where $g_0(z) = \sum_{k \in \mathbb{Z}} \rho_\delta(z+k)$
and $M = \sup_{z \in \mathbb{T}} g_0(z)$.
We perform rejection sampling on the samples $(\bm y, z)$ with acceptance probability $\Pr[\mathrm{accept} | \bm y, z] = g(z)$.
We remark that $g(z)$ is efficiently computable (see~\cite[Section 5.2]{BrakerskiLPRS13}).
The probability density of outputting $\bm y$ and accept is
\begin{align*}
\int_\mathbb{T} p(\bm y, z) g(z) d z
&= \frac{\rho(\bm y)}{\beta M} \cdot \int_\mathbb{T} \sum_{k_1, k_2 \in \mathbb{Z}} \rho_\beta(z+k_1-\gamma y_1) \rho_\delta(z+k_2) d z \\
&= \frac{\rho(\bm y)}{\beta M} \cdot \int_\mathbb{T} \sum_{k, k_2 \in \mathbb{Z}} \rho_{\sqrt{\beta^2+\delta^2}}(k-\gamma y_1) \rho_{\beta\delta/\sqrt{\beta^2+\delta^2}} \Bigl( z+k_2+\frac{\delta^2 (k-\gamma y_1)}{\beta^2+\delta^2} \Bigr) d z \\
&= \frac{\delta}{M \sqrt{\beta^2+\delta^2}} \cdot \rho(\bm y) \cdot \sum_{k \in \mathbb{Z}} \rho_{\sqrt{\beta^2+\delta^2}}(k-\gamma y_1)
\; ,
\end{align*}
where the second equality follows from Claim~\ref{claim:complete-squares}.
This shows that the conditional distribution of $\bm y$ upon acceptance is indeed $H_{\bm e_1,\sqrt{\beta^2+\delta^2},\gamma}$.
Moreover, a byproduct of this calculation is that the expected acceptance probability is $\Pr[\mathrm{accept}] = Z \delta / (M \sqrt{\beta^2+\delta^2})$, where, according to Eq.~\eqref{eqn:hclwe-def-normalization},
\begin{align*}
Z
&= \sqrt\frac{\beta^2+\delta^2}{\beta^2+\delta^2+\gamma^2} \cdot \rho_{\sqrt{\beta^2+\delta^2+\gamma^2}}(\mathbb{Z}) \\
&= \sqrt{\beta^2+\delta^2} \cdot \rho_{1/\sqrt{\beta^2+\delta^2+\gamma^2}}(\mathbb{Z}) \\
&\ge \sqrt{\beta^2+\delta^2}
\; ,
\end{align*}
and the second equality uses Lemma~\ref{lem:poisson-sum}.
Observe that
\begin{align*}
g_0(z) &= \sum_{k \in \mathbb{Z}} \rho_\delta(z+k) \\
&\leq 2 \cdot \sum_{k = 0}^\infty \rho_\delta(k) \\
&< 2 \cdot \sum_{k = 0}^\infty \exp(-\pi k)
< 4
\end{align*}
since $\delta < 1$, implying that $M \le 4$.
Therefore, $\Pr[\mathrm{accept}] \ge \delta/4$, and so the rejection sampling procedure has $\operatorname{poly}(n, 1/\delta)$ expected running time.
\end{proof}
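In code, the rejection step looks as follows (a sketch: the value of $\delta$ is an arbitrary choice, and $g_0$ is truncated to $k \in \{-1,0,1\}$, which for $\delta < 1$ changes $g$ only negligibly):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
delta = 0.05

def g0(z):
    return sum(np.exp(-np.pi * ((z + k) / delta) ** 2) for k in (-1, 0, 1))

M = g0(0.0)                               # sup of g0 over T, attained at z = 0

def to_hclwe(samples):
    # samples from A_{w,beta,gamma}; accepted y's follow
    # H_{w,sqrt(beta^2+delta^2),gamma}.
    return [y for (y, z) in samples if rng.random() < g0(z) / M]
\end{verbatim}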
The above lemma reduces CLWE to homogeneous CLWE with slightly worse parameters. Hence, homogeneous CLWE is as hard as CLWE.
Specifically, combining Theorem~\ref{thm:clwe-intro} (with $\beta$ taken to be $\beta/\sqrt{2}$) and Lemma~\ref{lem:ic-to-hc} (with $\delta$ also taken to be $\beta/\sqrt{2}$), we obtain the following corollary.
\begin{corollary}
\label{cor:hc}
For any $\beta = \beta(n) \in (0,1)$ and $\gamma = \gamma(n) \geq 2\sqrt{n}$ such that $\gamma/\beta$ is polynomially bounded,
there is a polynomial-time quantum reduction from $\mathrm{DGS}_{2\sqrt{2n}\eta_\varepsilon(L)/\beta}$ to $\mathrm{hCLWE}_{\beta,\gamma}$.
\end{corollary}
\section{Hardness of Density Estimation for Gaussian Mixtures}
\label{section:mixture-hardness}
In this section, we prove the hardness of density estimation for $k$-mixtures of $n$-dimensional Gaussians by showing a reduction from homogeneous CLWE. This answers an open question regarding its computational complexity~\cite{diakonikolas2016structured,moitra2018}.
We first formally define density estimation for Gaussian mixtures.
\begin{definition}[Density estimation of Gaussian mixtures]
Let $\mathcal{G}_{n,k}$ be the family of $k$-mixtures of $n$-dimensional Gaussians. The problem of \emph{density estimation} for $\mathcal{G}_{n,k}$ is the following. Given $\delta > 0$ and sample access to an unknown $P \in \mathcal{G}_{n,k}$, with probability $9/10$, output a hypothesis distribution $Q$ (in the form of an evaluation oracle) such that $\Delta(Q,P) \le \delta$.
\end{definition}
For our purposes, we fix the precision parameter $\delta$ to a very small constant, say, $\delta = 10^{-3}$. Now we show a reduction from $\mathrm{hCLWE}_{\beta,\gamma}$ to the problem of density estimation for Gaussian mixtures. Corollary~\ref{cor:hc} shows that $\mathrm{hCLWE}_{\beta,\gamma}$ is hard for $\gamma \ge 2\sqrt{n}$ (assuming worst-case lattice problems are hard). Hence, by taking $\gamma = 2\sqrt{n}$ and $g(n) = O(\log n)$ in Proposition~\ref{prop:mixture-learning-hardness}, we rule out the possibility of a $\operatorname{poly}(n,k)$-time density estimation algorithm for $\mathcal{G}_{n,k}$ under the same hardness assumption.
\begin{proposition}
\label{prop:mixture-learning-hardness}
Let $\beta = \beta(n) \in (0,1/32)$, $\gamma = \gamma(n) \ge 1$, and $g(n) \ge 4\pi$. For $k = 2\gamma \sqrt{g(n)/\pi}$, if there is an $\exp(g(n))$-time algorithm that solves density estimation for $\mathcal{G}_{n,2k+1}$, then there is a $O(\exp(g(n)))$-time algorithm that solves $\mathrm{hCLWE}_{\beta,\gamma}$.
\end{proposition}
\begin{proof}
We apply the density estimation algorithm $\mathcal{A}$ to the given unknown distribution $P$. As we will show below, with constant probability, it outputs a density estimate $f$ that satisfies $\Delta(f,P) < 2\delta = 2 \cdot 10^{-3}$ (and this even though $H_{\bm w,\beta,\gamma}$ has infinitely many components). We then test whether $P = D_{\mathbb{R}^n}$ or not using the following procedure, repeated $m=1/(6\sqrt{\delta})$ times. We draw $\bm x \sim D_{\mathbb{R}^n}$ and check whether the following holds
\begin{align}
\frac{f(\bm x)}{D(\bm x)} \in [1-\sqrt{\delta},1+\sqrt{\delta}] \label{eqn:equality-test}\;,
\end{align}
where $D$ denotes the density of $D_{\mathbb{R}^n}$. We output $P = D_{\mathbb{R}^n}$ if Eq.~\eqref{eqn:equality-test} holds for all $m$ independent trials and $P = H_{\bm w,\beta,\gamma}$ otherwise.
Since $\Delta(H_{\bm w,\beta,\gamma},D_{\mathbb{R}^n}) > 1/2$ (Claim~\ref{claim:hclwe-tv-distance}), it is not hard to see that this test solves $\mathrm{hCLWE}_{\beta,\gamma}$ with probability at least $2/3$ (see~\cite[Observation 24]{rubinfeld-servedio2009monotone} for a closely related statement). Moreover, the total running time is $O(\exp(g(n)))$ since this test uses a constant number of samples.
If $P = D_{\mathbb{R}^n}$, it is obvious that $\mathcal{A}$ outputs a close density estimate with constant probability since $D_{\mathbb{R}^n} \in \mathcal{G}_{n,2k+1}$. It remains to consider the case $P = H_{\bm w,\beta,\gamma}$. To this end, we observe that $H_{\bm w,\beta,\gamma}$ is close to a $(2k+1)$-mixture of Gaussians. Indeed, by Claim~\ref{claim:hclwe-truncation} below,
\begin{align}
\Delta(H_{\bm w,\beta,\gamma},H^{(k)}) \le 2\exp(-\pi\cdot k^2/(\beta^2+\gamma^2)) < 2\exp(-\pi \cdot k^2/(2\gamma^2)) \nonumber \;,
\end{align}
where $H^{(k)}$ is the distribution given by truncating $H_{\bm w,\beta,\gamma}$ to the $(2k+1)$ central mixture components.
Hence, the statistical distance between the joint distribution of $\exp(g(n))$ samples from $H_{\bm w,\beta,\gamma}$ and that of $\exp(g(n))$ samples from $H^{(k)}$ is bounded by
\begin{align}
2\exp(-\pi \cdot k^2/(2\gamma^2))\cdot\exp(g(n)) = 2\exp(-g(n)) \le 2\exp(-4\pi) \; .\nonumber
\end{align}
Since the two distributions are statistically close, a standard argument shows that $\mathcal{A}$ will output $f$ satisfying $\Delta(f,H_{\bm w,\beta,\gamma}) \le \Delta(f,H^{(k)}) + \Delta(H^{(k)},H_{\bm w,\beta,\gamma}) < 2\delta$ with constant probability.
\end{proof}
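For concreteness, here is the identity test from the proof above as a sketch (the argument \texttt{f} is a hypothetical evaluation oracle for the density estimate output by $\mathcal{A}$; \texttt{D} is the density of $D_{\mathbb{R}^n}$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
delta = 1e-3                                             # the constant fixed above

def is_standard_gaussian(f, n):
    m = int(1 / (6 * np.sqrt(delta)))                    # number of trials
    D = lambda x: np.exp(-np.pi * np.dot(x, x))          # density of D_{R^n}
    for _ in range(m):
        x = rng.normal(scale=1 / np.sqrt(2 * np.pi), size=n)
        if not (1 - np.sqrt(delta) <= f(x) / D(x) <= 1 + np.sqrt(delta)):
            return False                                 # declare P = H_{w,beta,gamma}
    return True                                          # declare P = D_{R^n}
\end{verbatim}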
\begin{claim}
\label{claim:hclwe-tv-distance}
Let $\beta = \beta(n) \in (0,1/32)$ and $\gamma = \gamma(n) \ge 1$. Then,
\begin{align*}
\Delta(H_{\bm w,\beta,\gamma},D_{\mathbb{R}^n}) > 1/2\;.
\end{align*}
\end{claim}
\begin{proof}
Let $\gamma' = \sqrt{\beta^2+\gamma^2} > \gamma$. Let $\bm y \in \mathbb{R}^n$ be a random vector distributed according to $H_{\bm w,\beta,\gamma}$. Using the Gaussian mixture form of~\eqref{eqn:hclwe-mixture-def}, we observe that $\langle \bm y, \bm w \rangle \bmod{\gamma/\gamma'^2}$ is distributed according to $D_{\beta/\gamma'} \bmod{\gamma/\gamma'^2}$. Since statistical distance cannot increase by applying a function (inner product with $\bm w$ and then applying the modulo operation in this case), it suffices to lower bound the statistical distance between $D_{\beta/\gamma'} \bmod{\gamma/\gamma'^2}$ and $D \bmod{\gamma/\gamma'^2}$, where $D$ denotes the 1-dimensional standard Gaussian.
By a Gaussian tail bound, for all $\zeta>0$, at least $1-\zeta$ of the mass of $D_{\beta/\gamma'}$ is contained in $[- a \cdot (\beta/\gamma'), a \cdot (\beta/\gamma')]$, where $a = \sqrt{\log(1/\zeta)}$. Hence, $D_{\beta/\gamma'} \bmod{\gamma/\gamma'^2}$ is at least $1-2a\beta \gamma'/\gamma-\zeta$ far in statistical distance from the uniform distribution over $\mathbb{R}/(\gamma/\gamma'^2)\mathbb{Z}$, which we denote by $U$.
Moreover, by Lemma~\ref{lem:smoothing-uniform} and Lemma~\ref{lem:smoothing-dual}, $D \bmod{\gamma/\gamma'^2}$ is within statistical distance $\varepsilon/2 = \exp(-\gamma'^4/\gamma^2)/2$ from $U$. Therefore,
\begin{align}
\Delta(D_{\beta/\gamma'} \bmod{\gamma/\gamma'^2},D \bmod{\gamma/\gamma'^2})
&\ge \Delta(D_{\beta/\gamma'} \bmod{\gamma/\gamma'^2},U) - \Delta(U,D \bmod{\gamma/\gamma'^2}) \nonumber \\
&\ge 1-2a\beta\gamma'/\gamma-\zeta-\varepsilon/2 \nonumber \\
&> 1-2\sqrt{2}a\beta-\zeta-\exp(-\gamma^2)/2 \label{eqn:hclwe-tv-plug-in-values} \\
&> 1/2 \nonumber\;,
\end{align}
where we set $\zeta = \exp(-2)$ and use the fact that $\beta \le 1/32$ and $\gamma \ge 1$ in \eqref{eqn:hclwe-tv-plug-in-values}.
\end{proof}
\begin{claim}
\label{claim:hclwe-truncation}
Let $\beta = \beta(n) \in (0,1), \gamma = \gamma(n) \ge 1$, and $k \in \mathbb{Z}^{+}$. Then,
\begin{align}
\Delta(H_{\bm w,\beta,\gamma},H^{(k)}) \le 2\exp(-\pi\cdot k^2/(\beta^2+\gamma^2))\nonumber \;,
\end{align}
where $H^{(k)}$ is the distribution given by truncating $H_{\bm w,\beta,\gamma}$ to the central $(2k+1)$ mixture components.
\end{claim}
\begin{proof}
We express $H_{\bm w,\beta,\gamma}$ in its Gaussian mixture form given in Eq.~\eqref{eqn:hclwe-mixture-def} and define a random variable $X$ taking on values in $\mathbb{Z}$ such that the probability of $X = i$ is equal to the probability that a sample comes from the $i$-th component in $H_{\bm w,\beta,\gamma}$. Then, we observe that $H^{(k)}$ is the distribution given by conditioning on $|X| \le k$. Since $X$ is a discrete Gaussian random variable with distribution $D_{\mathbb{Z},\sqrt{\beta^2+\gamma^2}}$, we observe that $\Pr[|X| > k] \le \varepsilon := 2\exp(-\pi \cdot k^2/(\beta^2+\gamma^2))$ by~\cite[Lemma 2.8]{micciancio-peikert2012trapdoor}.
Since conditioning on an event of probability $1-\varepsilon$ cannot change the statistical distance by more than $\varepsilon$, we have
\begin{align}
\Delta(H_{\bm w,\beta,\gamma}, H^{(k)}) \le \varepsilon \nonumber \;.
\end{align}
\end{proof}
\section{LLL Solves Noiseless CLWE}
\label{section:lll-clwe}
The noiseless CLWE problem ($\beta = 0$) can be solved in polynomial time using LLL. This applies both to the homogeneous and the inhomogeneous versions, as well as to the search version. The argument can be extended to the case of exponentially small $\beta>0$.
The key idea is to take samples $(\bm y_i, z_i)$, and find integer coefficients $c_1,\ldots,c_m$ such that $\bm y = \sum_{i=1}^m c_i \bm y_i$ is short, say
$\|\bm y\| \ll 1/\gamma$. By Cauchy--Schwarz, $|\gamma \langle \bm y, \bm w \rangle| \le \gamma\|\bm y\| \ll 1$, so the equation $\gamma \langle \bm y, \bm w \rangle = \sum_{i=1}^m c_i z_i$ holds over the reals (not just modulo 1). This is formalized in Theorem~\ref{thm:lll-noiseless-clwe}. We first state Minkowski's Convex Body Theorem, which we will use in the proof of our procedure.
\begin{lemma}[\cite{minkowski1910geometrie}]
\label{lem:minkowski-cvx}
Let $L$ be a full-rank $n$-dimensional lattice. Then, for any centrally-symmetric convex set $S$, if $\operatorname{vol}(S) > 2^n \cdot |\det(L)|$, then $S$ contains a non-zero lattice point.
\end{lemma}
\begin{theorem}
\label{thm:lll-noiseless-clwe}
Let $\gamma = \gamma(n)$ be a polynomial in $n$. Then, there exists a polynomial-time algorithm for solving $\mathrm{CLWE}_{0,\gamma}$.
\end{theorem}
\begin{proof}
Take $n+1$ CLWE samples $\{(\bm y_i,z_i)\}_{i=1}^{n+1}$ and consider the matrix
\begin{align*}
Y = \begin{bmatrix}
\bm y_1 & \cdots & \bm y_n & \bm y_{n+1} \\
0 & \cdots & 0 & \delta
\end{bmatrix} \; ,
\end{align*}
where $\delta = 2^{-3n^2}$.
Consider the lattice $L$ generated by the columns of $Y$. Since the $\bm y_i$'s are drawn from the Gaussian distribution, $L$ is full rank with probability $1$. By Hadamard's inequality, and the fact that with probability exponentially close to $1$, $\|\bm y_i\| \leq \sqrt{n}$ for all $i$, we have
\begin{align*}
|\det(L)| \leq \delta \cdot n^{n/2} < 2^{-2n^2} \; .
\end{align*}
Now consider the $(n+1)$-dimensional cube $S = \{\bm x \in \mathbb{R}^{n+1} : \|\bm x\|_\infty \le 2^{-n}\}$ centered at $\bm 0$. Then, $\operatorname{vol}(S) = 2^{(1-n)(n+1)} > 2^{n+1} \cdot |\det(L)|$ for $n \ge 2$, and by Lemma~\ref{lem:minkowski-cvx}, $L$ contains a non-zero vector $\bm v$ satisfying $\|\bm v\|_{\infty} \leq 2^{-n}$ and so $\| \bm v \|_2 \leq \sqrt{n+1}\cdot 2^{-n}$.
Applying the LLL algorithm~\cite{lenstra1982lll} gives us an integer combination of the columns of $Y$ whose length is within a $2^{(n+1)/2}$ factor of the shortest vector in $L$, and which therefore has $\ell_2$ norm less than $\sqrt{n+1} \cdot 2^{-(n-1)/2}$.
Let $\bm y$ be the corresponding combination of the $\bm y_i$ vectors (which is equivalently given by the first $n$ coordinates of the output of LLL) and
$z \in (-1/2,1/2]$ a representative of the corresponding integer combination of the $z_i$ mod 1.
Then, we have $\|\bm y\|_2 \leq \sqrt{n+1} \cdot 2^{-(n-1)/2}$, so $\gamma\|\bm y\|_2 < 1/2$ for all sufficiently large $n$ (recall that $\gamma$ is polynomial in $n$), and we obtain the linear equation $\gamma \cdot \langle \bm y,\bm w \rangle = z$ over the reals (without mod 1).
We now repeat the above procedure $n$ times, and recover $\bm w$ by solving the resulting $n$ linear equations.
It remains to argue why the $n$ vectors $\bm y$ we collect are linearly independent.
First, note that the output $\bm y$ is guaranteed to be a non-zero vector since with probability $1$, no integer combination of the Gaussian distributed $\bm y_i$ is $\bm 0$.
Next, note that LLL is equivariant to rotations, i.e., if we rotate the input basis then the output vector will also be rotated by the same rotation. Moreover, spherical Gaussians are rotationally invariant. Hence, the distribution of the output vector $\bm y \in \mathbb{R}^n$
is also rotationally invariant. Therefore, repeating the above procedure $n$ times will give us $n$ linearly independent vectors.
\end{proof}
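
To make the attack concrete, the following Python sketch implements one round of the above procedure using the \texttt{fpylll} bindings for LLL. It is an illustration only: the list \texttt{samples} stands in for a hypothetical oracle returning $n+1$ noiseless pairs $(\bm y_i, z_i)$, and the coefficient-recovery step uses floating point for readability (a real implementation would track the LLL transformation matrix in exact arithmetic, since the scaled entries are astronomically large).
\begin{verbatim}
# Sketch: recover one linear equation gamma*<y,w> = z from n+1
# noiseless CLWE samples (y_i, z_i). Illustrative only; exact
# integer arithmetic is needed at realistic scales.
import numpy as np
from fpylll import IntegerMatrix, LLL

def one_linear_equation(samples):
    n = len(samples) - 1                 # n+1 samples in dimension n
    scale = 2 ** (3 * n * n)             # delta = 2^{-3n^2} becomes 1
    rows = [[int(round(c * scale)) for c in y] + [0]
            for (y, _) in samples]
    rows[n][-1] = 1                      # the delta entry
    B = IntegerMatrix.from_matrix(rows)
    LLL.reduction(B)                     # first basis row is now short
    v = np.array([B[0, j] for j in range(n + 1)], dtype=float)
    c = np.rint(np.linalg.solve(np.array(rows, dtype=float).T, v))
    y = sum(ci * np.asarray(yi) for ci, (yi, _) in zip(c, samples))
    z = (sum(ci * zi for ci, (_, zi) in zip(c, samples)) + 0.5) % 1 - 0.5
    return y, z              # gamma*<y,w> = z holds over the reals
\end{verbatim}
Repeating this $n$ times, as in the proof, yields $n$ linearly independent equations from which $\bm w$ can be solved.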
\section{Subexponential Algorithm for Homogeneous CLWE}
\label{section:subexp}
For $\gamma = o(\sqrt{n})$, the covariance matrix reveals the discrete structure of homogeneous CLWE, which leads to a subexponential-time algorithm for the problem. This explains why our hardness results for homogeneous CLWE, which require $\gamma \geq 2\sqrt{n}$, cannot be extended much below $\sqrt{n}$.
We define \emph{noiseless homogeneous CLWE distribution} $H_{\bm w, \gamma}$ as $H_{\bm w, \beta, \gamma}$ with $\beta = 0$.
We begin with a claim that will allow us to focus on the noiseless case.
\begin{claim}
\label{claim:noiseless-is-sufficient}
By adding Gaussian noise $\ngauss{\beta/\gamma}$ to $H_{\bm w,\gamma}$ and then rescaling by a factor of $\gamma/\sqrt{\beta^2+\gamma^2}$, the resulting distribution is $H_{\bm w, \tilde{\beta}, \tilde{\gamma}}$, where $\tilde{\gamma} = \gamma/\sqrt{1+(\beta/\gamma)^2}$ and $\tilde{\beta} = \tilde{\gamma}(\beta/\gamma)$.\footnote{%
Equivalently, in terms of the Gaussian mixture representation of Eq.~\eqref{eqn:hclwe-mixture-def}, the resulting distribution has layers spaced by $1/\sqrt{\gamma^2+\beta^2}$
and of width $\beta/\sqrt{\gamma^2+\beta^2}$.
}
\end{claim}
\begin{proof}
Without loss of generality, suppose $\bm w = \bm e_1$.
Let $\bm z \sim H_{\bm w,\gamma} + \ngauss{\beta/\gamma}$ and $\tilde{\bm z} = \gamma\bm z/\sqrt{\beta^2+\gamma^2}$.
It is easy to verify that the marginal density of $\tilde{\bm z}$ on the subspace $\bm e_1^\perp$ is simply $\rho$.
Hence we focus on calculating the density of $z_1$ and $\tilde{z}_1$.
The density can be computed by convolving the probability densities of $H_{\bm w,\gamma}$ and $\ngauss{\beta/\gamma}$ as follows.
\begin{align*}
H_{\bm w,\gamma} * \ngauss{\beta/\gamma}(z_1) &\propto \sum_{k \in \mathbb{Z}} \rho(k/\gamma)\cdot \rho_{\beta/\gamma}(z_1-k/\gamma) \\
&= \rho_{\sqrt{\beta^2+\gamma^2}/\gamma}(z_1) \cdot \sum_{k \in \mathbb{Z}} \rho_{\beta/\sqrt{\beta^2+\gamma^2}}\Big(k / \gamma - \frac{\gamma^2}{\beta^2+\gamma^2}z_1 \Big) \\
&= \rho(\tilde{z}_1) \cdot \sum_{k \in \mathbb{Z}} \rho_{\tilde{\beta}}\Big(k - \tilde{\gamma} \tilde{z}_1\Big)
\; ,
\end{align*}
where the second to last equality follows from Claim~\ref{claim:complete-squares}.
This verifies that the resulting distribution is indeed $H_{\bm w, \tilde{\beta}, \tilde{\gamma}}$.
\end{proof}
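
The completing-the-square identity at the heart of this proof is easy to check numerically; a minimal sketch, with illustrative parameter values $\gamma = 2$ and $\beta = 0.3$:
\begin{verbatim}
# Check: convolving the noiseless layers with rho_{beta/gamma} and
# rescaling gives the layered density with the tilde parameters.
import numpy as np

gamma, beta = 2.0, 0.3
gbar = np.sqrt(beta**2 + gamma**2)
tgam = gamma**2 / gbar        # tilde gamma = gamma/sqrt(1+(beta/gamma)^2)
tbeta = beta * gamma / gbar   # tilde beta  = tilde gamma * (beta/gamma)
rho = lambda x, s=1.0: np.exp(-np.pi * (x / s) ** 2)

z = np.linspace(-2, 2, 801)   # points along the hidden direction
ks = np.arange(-30, 31)       # truncated mixture; tails are negligible
lhs = sum(rho(k / gamma) * rho(z - k / gamma, beta / gamma) for k in ks)
zt = gamma * z / gbar         # rescaled coordinate tilde z_1
rhs = rho(zt) * sum(rho(k - tgam * zt, tbeta) for k in ks)
print(np.allclose(lhs, rhs))  # True: the densities agree pointwise
\end{verbatim}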
Claim~\ref{claim:noiseless-is-sufficient} implies that a homogeneous CLWE distribution with $\beta > 0$ is equivalent to a noiseless homogeneous CLWE distribution with independent Gaussian noise added (followed by rescaling). We will first analyze the noiseless case and then derive the covariance in the noisy (i.e., $\beta > 0$) case by adding independent Gaussian noise and rescaling.
\begin{lemma}
\label{lem:covariance-hclwe-noiseless}
Let $\Sigma \succ 0$ be the covariance matrix of the
$n$-dimensional noiseless homogeneous CLWE distribution
$H_{\bm w,\gamma}$ with $\gamma \ge 1$. Then,
\begin{align*}
\Big\|\Sigma - \frac{1}{2\pi} I_n \Big\| \geq \gamma^2 \exp(-\pi\gamma^2) \; ,
\end{align*}
where $\|\cdot\|$ denotes the spectral norm.
\end{lemma}
\begin{proof}
Without loss of generality, let $\bm w = \bm e_1$.
Then $H_{\bm w,\gamma} = D_{L} \times D_{\mathbb{R}^{n-1}}$ where $L$ is the one-dimensional lattice $(1/\gamma)\mathbb{Z}$.
Then, $\Sigma = \operatorname{diag}(\operatorname*{\mathbb{E}}_{x \sim D_{L}}[x^2], \frac{1}{2\pi},\dots,\frac{1}{2\pi})$, so it suffices to show that
\begin{equation*}
\Big| \operatorname*{\mathbb{E}}_{x \sim D_{L}}[x^2] - \frac{1}{2\pi} \Big| \ge \gamma^2 \exp(-\pi\gamma^2)
\; .
\end{equation*}
Define $g(x) = x^2 \cdot \rho(x)$.
The Fourier transform of $\rho$ is itself; the Fourier transform of $g$ is given by
\begin{align*}
\widehat{g}(y) = \Big(\frac{1}{2\pi}-y^2\Big) \rho(y)
\; .
\end{align*}
By definition and Poisson's summation formula (Lemma~\ref{lem:poisson-sum}), we have
\begin{align}
\operatorname*{\mathbb{E}}_{x \sim D_{L}}[x^2]
&= \frac{g(L)}{\rho(L)} \nonumber \\
&= \frac{\det(L^*)\cdot \widehat{g}(L^*)}{\det(L^*)\cdot \rho(L^*)}
= \frac{\widehat{g}(L^*)}{\rho(L^*)} \nonumber \; ,
\end{align}
where $L^* = ((1/\gamma)\mathbb{Z})^* = \gamma \mathbb{Z}$.
Combining this with the expression for $\widehat{g}$, we have
\begin{align*}
\Bigl|\operatorname*{\mathbb{E}}_{x \sim D_{L}}[x^2]-\frac{1}{2\pi}\Bigr| &= \frac{\sum_{y \in L^*}y^2\rho(y)}{1+\rho(L^*\setminus\{0\})} \\
&\geq \gamma^2 \exp(-\pi \gamma^2) \; ,
\end{align*}
where we use the fact that for $\gamma \ge 1$,
\begin{align*}
\rho(\gamma \mathbb{Z}\setminus\{0\}) \le
\rho(\mathbb{Z}\setminus\{0\}) &< 2\sum_{k=1}^\infty \exp(-\pi k) = \frac{2\exp(-\pi)}{1-\exp(-\pi)} < 1
\; .
\qedhere
\end{align*}
\end{proof}
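
The bound of the lemma can also be verified by direct summation; a short numerical sketch:
\begin{verbatim}
# Compare E_{x ~ D_L}[x^2] with 1/(2*pi) for L = (1/gamma)Z and
# check the lower bound gamma^2 * exp(-pi*gamma^2) of the lemma.
import numpy as np

for gamma in [1.0, 1.5, 2.0]:
    k = np.arange(-200, 201)
    x = k / gamma
    w = np.exp(-np.pi * x**2)            # unnormalized D_L weights
    second_moment = (x**2 * w).sum() / w.sum()
    gap = abs(second_moment - 1 / (2 * np.pi))
    print(gamma, gap, gamma**2 * np.exp(-np.pi * gamma**2) <= gap)
\end{verbatim}
For each tested $\gamma$ the last column prints \texttt{True}, and the gap decays at the predicted $\exp(-\pi\gamma^2)$ rate.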
Combining Claim~\ref{claim:noiseless-is-sufficient} and Lemma~\ref{lem:covariance-hclwe-noiseless}, we get the following corollary for the noisy case.
\begin{corollary}
\label{cor:covariance-hclwe}
Let $\Sigma \succ 0$ be the covariance matrix of
the $n$-dimensional homogeneous CLWE distribution
$H_{\bm w,\beta,\gamma}$ with $\gamma \ge 1$ and $\beta > 0$. Then,
\begin{align*}
\Big\|\Sigma - \frac{1}{2\pi} I_n \Big\| \geq \gamma^2 \exp(-\pi(\beta^2+\gamma^2)) \; ,
\end{align*}
where $\|\cdot\|$ denotes the spectral norm.
\end{corollary}
\begin{proof}
Using Claim~\ref{claim:noiseless-is-sufficient}, we can view samples from $H_{\bm w,\beta,\gamma}$ as samples from $H_{\bm w,\gamma'}$ with independent Gaussian noise of width $\beta'/\gamma'$ added and rescaled by $\gamma'/\sqrt{\beta'^2+\gamma'^2}$, where $\beta', \gamma'$ are given by
\begin{align*}
\beta' &= \beta \sqrt{1+(\beta/\gamma)^2} \; , \\
\gamma' &= \sqrt{\beta^2+\gamma^2} \;.
\end{align*}
Let $\Sigma$ be the covariance of $H_{\bm w,\beta,\gamma}$ and let $\Sigma_0$ be the covariance of $H_{\bm w,\gamma'}$. Since the Gaussian noise added to $H_{\bm w,\gamma'}$ is independent and $\beta'/\gamma' = \beta/\gamma$,
\begin{align*}
\Sigma = \frac{1}{1+(\beta/\gamma)^2}\Big(\Sigma_0 + \frac{(\beta/\gamma)^2}{2\pi} I_n\Big) \;.
\end{align*}
Hence,
\begin{align*}
\Big\|\Sigma - \frac{1}{2\pi}I_n\Big\| &= \frac{1}{1+(\beta/\gamma)^2} \Big\|\Big(\Sigma_0 + \frac{(\beta/\gamma)^2}{2\pi}I_n\Big)-\frac{1+(\beta/\gamma)^2}{2\pi}I_n\Big\| \\
&= \frac{1}{1+(\beta/\gamma)^2}\Big\|\Sigma_0 - \frac{1}{2\pi}I_n \Big\| \\
&\geq \gamma^2 \exp(-\pi(\beta^2+\gamma^2)) \; ,
\end{align*}
where the last inequality follows from Lemma~\ref{lem:covariance-hclwe-noiseless}.
\end{proof}
We use the following lemma, which provides an upper bound on the error in estimating the covariance matrix by samples. The sub-gaussian norm of a random variable $Y$ is defined as $\|Y\|_{\psi_2} = \inf\{t > 0 \mid \mathbb{E}[\exp(Y^2/t^2)] \leq 2\}$ and that of an $n$-dimensional random vector $\bm y$ is defined as $\|\bm y\|_{\psi_2} = \sup_{\bm u \in \mathbb{S}^{n-1}}\|\langle \bm y, \bm u \rangle\|_{\psi_2}$.
\begin{lemma}[{\cite[Theorem 4.6.1]{vershynin2018high}}]
\label{lem:covariance-estimate}
Let $A$ be an $m\times n$ matrix whose rows $A_i$ are independent, mean zero, sub-gaussian isotropic random vectors in $\mathbb{R}^n$. Then for any $u \geq 0$ we have
\begin{align*}
\Big\|\frac{1}{m}A^TA-I_n\Big\| \leq K^2 \max(\delta,\delta^2) \; \text{ where } \delta = C\Big(\sqrt{\frac{n}{m}} +\frac{u}{\sqrt{m}}\Big)\;,
\end{align*}
with probability at least $1-2e^{-u^2}$ for some constant $C > 0$. Here, $K = \max_i \|A_i\|_{\psi_2}$.
\end{lemma}
Combining Corollary~\ref{cor:covariance-hclwe} and Lemma~\ref{lem:covariance-estimate}, we obtain the following theorem for distinguishing the homogeneous CLWE distribution from the Gaussian distribution.
\begin{theorem}
\label{thm:subexp-hclwe}
Let $\gamma = n^{\varepsilon}$, where $\varepsilon < 1/2$ is a constant, and let $\beta = \beta(n) \in (0,1)$. Then, there exists an $\exp(O(n^{2\varepsilon}))$-time algorithm that solves $\mathrm{hCLWE}_{\beta,\gamma}$.
\end{theorem}
\begin{proof}
Our algorithm takes $m$ samples from the unknown input distribution $P$ and computes the sample covariance matrix $\Sigma_m = (1/m)A^TA$, where $A$'s rows are the samples, and its eigenvalues $\mu_1, \ldots, \mu_n$. Then, it determines whether $P$ is a homogeneous CLWE distribution or not by testing that
\begin{align*}
\Bigl|\mu_i - \frac{1}{2\pi}\Bigr| \le \frac{1}{2}\cdot \gamma^2\exp(-\pi (\beta^2+\gamma^2)) \; \text{ for all } i \in [n]\;.
\end{align*}
The running time of this algorithm is $O(n^2 m) = \exp(O(n^{2\varepsilon}))$. To show correctness, we first consider the case $P = D_{\mathbb{R}^n}$. The standard Gaussian distribution satisfies the conditions of Lemma~\ref{lem:covariance-estimate} (after rescaling by $1/(2\pi)$). Hence, the eigenvalues of $\Sigma_m$ will be within distance $O(\sqrt{n/m})$ from $1/(2\pi)$ with high probability.
Now consider the case $P = H_{\bm w,\beta,\gamma}$. We can assume $\bm w=\bm e_1$ without loss of generality since eigenvalues are invariant under rotations. Denote by $\bm y$ a random vector distributed according to $H_{\bm w,\beta,\gamma}$ and $\sigma^2 = \operatorname*{\mathbb{E}}_{\bm y \sim H_{\bm w,\beta,\gamma}}[y_1^2]$. The covariance of $P$ is given by
\begin{align}
\Sigma = \begin{pmatrix} \sigma^2 & \bm 0 \\ \bm 0 & \frac{1}{2\pi}I_{n-1} \end{pmatrix} \label{eqn:hclwe-covariance-matrix} \; .
\end{align}
Now consider the sample covariance $\Sigma_m$ of $P$ and write $\sigma_m^2 = \bm w^T\Sigma_m\bm w = (1/m)\sum_{i=1}^m A_{i1}^2$. Since the $A_{i1}$'s are sub-gaussian random variables~\cite[Lemma 2.8]{micciancio-peikert2012trapdoor}, $\sigma_m^2-\sigma^2$ is a sum of $m$ independent, mean-zero, sub-exponential random variables. For $m = \omega(n)$, Bernstein's inequality~\cite[Corollary 2.8.3]{vershynin2018high} implies that $|\sigma_m^2-\sigma^2| = O(\sqrt{n/m})$ with high probability. By Corollary~\ref{cor:covariance-hclwe}, we know that
\begin{align*}
\Big|\sigma^2 - \frac{1}{2\pi}\Big| \ge \gamma^2\exp(-\pi(\beta^2+\gamma^2)) \;.
\end{align*}
Hence, if we choose $m = \exp(c\gamma^2)$ with some sufficiently large constant $c$, then $\Sigma_m$ will have an eigenvalue that is noticeably far from $1/(2\pi)$ with high probability.
\end{proof}
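
A compact sketch of this distinguisher (\texttt{numpy}; the array of input samples is assumed given, and the factor $1/2$ matches the test above):
\begin{verbatim}
# Spectral test: flag any covariance eigenvalue that deviates from
# 1/(2*pi) by more than half the bound of the corollary.
import numpy as np

def looks_like_hclwe(samples, beta, gamma):
    m, n = samples.shape                       # rows are samples
    cov = samples.T @ samples / m              # sample covariance
    eig = np.linalg.eigvalsh(cov)
    thr = 0.5 * gamma**2 * np.exp(-np.pi * (beta**2 + gamma**2))
    return bool(np.any(np.abs(eig - 1 / (2 * np.pi)) > thr))
\end{verbatim}
With $m = \exp(c\gamma^2)$ samples, the $O(\sqrt{n/m})$ estimation error stays below the threshold in the Gaussian case, while the hCLWE eigenvalue exceeds it, with high probability.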
\section{SQ Lower Bound for Homogeneous CLWE}
\label{section:sq-lb}
Statistical Query (SQ) algorithms~\cite{kearnsSQ1998} are a restricted class of algorithms that are only allowed to query expectations of functions of the input distribution without directly accessing individual samples. More precisely, SQ algorithms access the input distribution indirectly via the STAT($\tau$) oracle, which, given a query function $f$ and data distribution $D$, returns a value contained in the interval $\mathbb{E}_{x \sim D} [f(x)]+[-\tau, \tau]$ for some precision parameter $\tau$.
In this section, we prove SQ hardness of distinguishing homogeneous CLWE distributions from the standard Gaussian. In particular, we show that SQ algorithms that solve homogeneous CLWE require a super-polynomial number of queries even with super-polynomial precision. This is formalized in Theorem~\ref{thm:hclwe-sq-lb}.
\begin{theorem}
\label{thm:hclwe-sq-lb}
Let $\beta = \beta(n) \in (0,1)$ and $\gamma = \gamma(n) \geq \sqrt{2}$. Then, any (randomized) SQ algorithm with precision $\tau \geq 4 \cdot \exp(-\pi \cdot \gamma^2/4)$ that successfully solves $\mathrm{hCLWE}_{\beta, \gamma}$ with probability $\eta > 1/2$ requires at least $(2\eta-1)\cdot \exp(c n)\cdot \tau^2\beta^2/(4\gamma^2)$ statistical queries of precision $\tau$ for some constant $c > 0$.
\end{theorem}
Note that when $\gamma = \Omega(\sqrt{n})$ and $\gamma/\beta = \operatorname{poly}(n)$, even
exponential precision $\tau = \exp(-O(n))$ results in a query lower bound that grows as $\exp(\tilde{\Omega}(n))$. This establishes an unconditional hardness result for SQ algorithms in the parameter regime $\gamma = \Omega(\sqrt{n})$, which is consistent with our computational hardness result based on worst-case lattice problems. The uniform spacing in homogeneous CLWE distributions gives us tight control over their pairwise correlation (see definition in \eqref{eqn:pairwise-corr}), which leads to a simple proof of the SQ lower bound.
We first provide some necessary background on the SQ framework. We denote by $\mathcal{B}(\mathcal{U},D)$ the decision problem in which the input distribution $P$ either equals $D$ or belongs to $\mathcal{U}$, and the goal of the algorithm is to identify whether $P=D$ or $P \in \mathcal{U}$. For our purposes, $D$ will be the standard Gaussian $D_{\mathbb{R}^n}$ and $\mathcal{U}$ will be a finite set of homogeneous CLWE distributions. Abusing notation, we denote by $D(x)$ the density of $D$. Following \cite{feldman2017planted-clique}, we define the \emph{pairwise correlation} between two distributions $P, Q$ relative to $D$ as
\begin{align}
\chi_D(P,Q) := \mathbb{E}_{\bm x \sim D} \left[\left(\frac{P(\bm x)}{D(\bm x)}-1 \right)\cdot\left(\frac{Q(\bm x)}{D(\bm x)}-1 \right) \right] = \mathbb{E}_{\bm x \sim D} \left[\frac{P(\bm x)Q(\bm x)}{D(\bm x)^2}\right] -1 \label{eqn:pairwise-corr}\; .
\end{align}
Lemma~\ref{lem:decision-lb} below establishes a lower bound on the number of statistical queries required to solve $\mathcal{B}(\mathcal{U},D)$ in terms of pairwise correlation between distributions in $\mathcal{U}$.
\begin{lemma}[{\cite[Lemma 3.10]{feldman2017planted-clique}}]
\label{lem:decision-lb}
Let $D$ be a distribution and $\mathcal{U}$ be a set of distributions both over a domain $X$ such that for any $P, Q \in \mathcal{U}$
\begin{align}
|\chi_D(P,Q)| \leq \begin{cases} \delta &\mbox{if } P = Q \\
\varepsilon &\mbox{otherwise }\; \end{cases} \nonumber\;.
\end{align}
Let $\tau \ge \sqrt{2\varepsilon}$. Then, any (randomized) SQ algorithm that solves $\mathcal{B}(\mathcal{U},D)$ with success probability $\eta > 1/2$ requires at least $(2\eta-1)\cdot|\mathcal{U}|\cdot\tau^2/(2\delta)$ queries to $\operatorname{STAT}(\tau)$.
\end{lemma}
The following proposition establishes a tight upper bound on the pairwise correlation between homogeneous CLWE distributions. To deduce Theorem~\ref{thm:hclwe-sq-lb} from Lemma~\ref{lem:decision-lb} and Proposition~\ref{prop:avg-corr}, we take a set of unit vectors $\mathcal{U}$ such that any two distinct vectors $\bm v, \bm w \in \mathcal{U}$ satisfy $|\langle \bm v, \bm w \rangle| \leq 1/\sqrt{2}$, and identify it with the set of homogeneous CLWE distributions $\{H_{\bm w,\beta,\gamma}\}_{\bm w \in \mathcal{U}}$. A standard probabilistic argument shows that such a $\mathcal{U}$ can be as large as $\exp(\Omega(n))$, which proves Theorem~\ref{thm:hclwe-sq-lb}.
\begin{proposition}
\label{prop:avg-corr}
Let $\bm v, \bm w \in \mathbb{R}^n$ be unit vectors and let $H_{\bm v}, H_{\bm w}$ be $n$-dimensional homogeneous CLWE distributions with parameters $\gamma \geq 1, \beta \in (0,1)$, and hidden direction $\bm v$ and $\bm w$, respectively. Then, for any $\alpha \ge 0$ that satisfies $\gamma^2(1-\alpha^2) \ge 1$,
\begin{align}
|\chi_{D}(H_{\bm v},H_{\bm w})| \leq \begin{cases} 2(\gamma/\beta)^2 &\text{ if } \bm v = \bm w \\ 8\exp(-\pi\cdot \gamma^2(1-\alpha^2)) &\text{ if } |\langle \bm v, \bm w \rangle| \leq \alpha \end{cases} \nonumber \;.
\end{align}
\end{proposition}
\begin{proof}
We will show that computing $\chi_D(H_{\bm v},H_{\bm w})$ reduces to evaluating the Gaussian mass of two lattices $L_1$ and $L_2$ defined below. Then, we will tightly bound the Gaussian mass using Lemma~\ref{lem:poisson-sum} and Lemma~\ref{lem:smoothing-primal}, which will result in upper bounds on $|\chi_D(H_{\bm v},H_{\bm w})|$. We define $L_1$ and $L_2$ by specifying their bases $B_1$ and $B_2$, respectively.
\begin{align*}
B_1 &= \frac{1}{\sqrt{\beta^2+\gamma^2}} \begin{pmatrix} 1 & 0 \\
0 & 1 \end{pmatrix}
\; ,\\
B_2 &= \frac{1}{\sqrt{\beta^2+\gamma^2}}\begin{pmatrix} 1 & 0 \\
-\frac{\alpha \gamma^2}{\zeta\sqrt{\beta^2+\gamma^2}} & \frac{\sqrt{\beta^2+\gamma^2}}{\zeta} \end{pmatrix}
\; ,
\end{align*}
where $\zeta = \sqrt{(\beta^2+\gamma^2) -\alpha^2\gamma^4/(\beta^2+\gamma^2)}$. Then the bases of the dual lattices $L_1^*$ and $L_2^*$ are $B_1^{-T}$ and $B_2^{-T}$, respectively. Note that $\lambda_2(L_1)^2 = 1/(\beta^2+\gamma^2)$ and that the two columns of $B_2$ have the same norm, and so
\begin{align}
\lambda_2(L_2)^2 &\leq \frac{1}{\beta^2+\gamma^2}\cdot \max\Big\{1+\frac{\alpha^2\gamma^4}{\zeta^2(\beta^2+\gamma^2)},\frac{\beta^2+\gamma^2}{\zeta^2}\Big\} \nonumber\\
&= \frac{1}{\zeta^2} \label{eqn:lambda2-general} \\
&\leq \frac{1}{\gamma^2(1-\alpha^2)} \label{eqn:lambda2-simple} \; .
\end{align}
Now define the density ratio $a(t) := H(t)/D(t)$, where $D$ is the standard Gaussian and $H$ is the marginal distribution of homogeneous CLWE with parameters $\beta, \gamma$ along the hidden direction. We immediately obtain
\begin{align}
a(t) &= \frac{1}{Z} \sum_{k \in \mathbb{Z}} \rho_{\beta/\gamma}(t-k/\gamma) \label{eq:sq-density-ratio}
\; ,
\end{align}
where $Z = \int_\mathbb{R} \rho(t) \cdot \sum_{k \in \mathbb{Z}} \rho_{\beta/\gamma}(t-k/\gamma) dt$. By Eq.~\eqref{eqn:hclwe-def-normalization}, $Z$ is given by
\begin{align*}
Z = \frac{\beta}{\sqrt{\beta^2+\gamma^2}} \cdot \rho\Bigg(\frac{1}{\sqrt{\beta^2+\gamma^2}}\mathbb{Z}\Bigg) \; .
\end{align*}
Moreover, we can express $Z^2$ in terms of the Gaussian mass of $L_1$ as
\begin{align*}
Z^2 = \frac{\beta^2}{\beta^2+\gamma^2}\cdot \rho(L_1) \; .
\end{align*}
$\chi_D(H_{\bm v},H_{\bm w})$ can be expressed in terms of $a(t)$ as
\begin{align}
\chi_D(H_{\bm v},H_{\bm w}) = \operatorname*{\mathbb{E}}_{\bm x \sim D}\Big[a(\langle \bm x, \bm w \rangle)\cdot a(\langle \bm x, \bm v \rangle)\Big] - 1 \label{eqn:avg-corr-simplified}\;.
\end{align}
Without loss of generality, assume $\bm v = \bm e_1$ and $\bm w = \alpha \bm e_1 + \xi \bm e_2$, where $\xi = \sqrt{1-\alpha^2}$. We first compute the pairwise correlation for $\bm v \neq \bm w$. For notational convenience, we set $\varepsilon = 8\cdot\exp(-\pi\cdot\gamma^2(1-\alpha^2))$.
\begin{align}
\chi_{D}(H_{\bm v},H_{\bm w}) + 1&= \operatorname*{\mathbb{E}}_{\bm x \sim D} \Big[a(x_1)\cdot a(\alpha x_1 + \xi x_2)\Big] \nonumber \\
&= \frac{1}{Z^2}\sum_{k, \ell \in \mathbb{Z}} \int \int \rho_{\beta}(\gamma x_1 - k)\cdot \rho_{\beta}((\gamma \alpha x_1 + \gamma \xi x_2) - \ell)\cdot \rho(x_1) \cdot \rho(x_2) dx_1 dx_2 \nonumber \\
&= \frac{1}{Z^2}\cdot\frac{\beta}{\sqrt{(\gamma\xi)^2+\beta^2}}\sum_{k, \ell \in \mathbb{Z}} \int \rho_{\beta}(\gamma x_1 - k) \cdot \rho(x_1) \cdot \rho_{\sqrt{1+\beta^2/(\gamma\xi)^2}} (\ell/(\gamma\xi)-(\alpha/\xi) x_1) dx_1 \nonumber\\
&= \frac{1}{Z^2}\cdot\frac{\beta}{\sqrt{(\gamma\xi)^2+\beta^2}}\cdot\frac{\beta\sqrt{(\gamma\xi)^2+\beta^2}}{\zeta\sqrt{\beta^2+\gamma^2}} \sum_{k, \ell \in \mathbb{Z}} \rho_{\sqrt{\beta^2+\gamma^2}}(k) \cdot \rho_{\zeta}\Big(\ell - \gamma^2 \alpha \cdot k/(\beta^2+\gamma^2)\Big) \nonumber\\
&= \frac{\sqrt{\beta^2+\gamma^2}}{\zeta}\cdot \frac{\sum_{k, \ell \in \mathbb{Z}} \rho_{\sqrt{\beta^2+\gamma^2}}(k) \cdot \rho_{\zeta}\Big(\ell - \gamma^2 \alpha \cdot k/(\beta^2+\gamma^2)\Big)}{\rho(L_1)} \nonumber\\
&= \frac{\sqrt{\beta^2+\gamma^2}}{\zeta} \cdot \frac{\rho(L_2)}{\rho(L_1)} \nonumber\\
&= \frac{\sqrt{\beta^2+\gamma^2}}{\zeta}\cdot\frac{\det(L_2^*)}{\det(L_1^*)} \cdot \frac{\rho(L_2^*)}{\rho(L_1^*)}\nonumber \\
&= \frac{\rho(L_2^*)}{\rho(L_1^*)} \label{eqn:poisson-distinct-directions}\\
&\in \Big[\frac{1}{1+\varepsilon}, 1+\varepsilon\Big] \nonumber \; .
\end{align}
In \eqref{eqn:poisson-distinct-directions}, we used the Poisson summation formula (Lemma~\ref{lem:poisson-sum}). The last line follows from \eqref{eqn:lambda2-simple} and Lemma~\ref{lem:smoothing-primal}, which implies that for any 2-dimensional lattice $L$ satisfying $\lambda_2(L) \leq 1$,
\begin{align}
\rho(L^*\setminus\{\bm 0\}) \leq 8\exp(-\pi/\lambda_2(L)^2) \label{eqn:2d-gaussian-mass-bound}\; .
\end{align}
Now consider the case $\bm v = \bm w$. Using \eqref{eqn:lambda2-general}, we get an upper bound $\lambda_2(L_2) \leq 1/\beta$ when $\alpha = 1$. It follows that $\lambda_2((\beta/\gamma)L_2) \le 1/\gamma \le 1$. Hence,
\begin{align}
\chi_{D}(H_{\bm v},H_{\bm v}) + 1
&= \frac{\sqrt{\beta^2+\gamma^2}}{\zeta} \cdot \frac{\rho(L_2)}{\rho(L_1)} \nonumber\\
&\leq \frac{\sqrt{\beta^2+\gamma^2}}{\zeta}\cdot \frac{\rho((\beta/\gamma)L_2)}{\rho(L_1)} \nonumber \\
&= \frac{\sqrt{\beta^2+\gamma^2}}{\zeta} \cdot \frac{\det((\gamma/\beta)L_2^*)}{\det(L_1^*)} \cdot \frac{\rho((\gamma/\beta)L_2^*)}{\rho(L_1^*)} \nonumber \\
&= \frac{\gamma^2}{\beta^2}\cdot\frac{\rho((\gamma/\beta)L_2^*)}{\rho(L_1^*)} \label{eqn:poisson-same-direction}\\
&\leq 2(\gamma/\beta)^2 \label{eqn:chi-correlation-ub} \; ,
\end{align}
where we used Lemma~\ref{lem:poisson-sum} in \eqref{eqn:poisson-same-direction}, and in \eqref{eqn:chi-correlation-ub} we used \eqref{eqn:2d-gaussian-mass-bound} together with the fact that $\lambda_2((\beta/\gamma)L_2) \leq 1$ to deduce $\rho((\gamma/\beta)L_2^*\setminus\{\bm 0\}) \leq 1$.
\end{proof}
\section{Extension of Homogeneous CLWE to \texorpdfstring{$m \ge 1$}{m>=1} Hidden Directions}
\label{section:k-hc}
In this section, we generalize the hardness result to the setting where the homogeneous CLWE distribution has $m \ge 1$ hidden directions.
The proof is a relatively standard hybrid argument.
\begin{definition}[$m$-Homogeneous CLWE distribution]
For $0 \le m \le n$, matrix $\bm W \in \mathbb{R}^{n \times m}$ with orthonormal columns $\bm w_1,\ldots,\bm w_m$, and $\beta, \gamma > 0$, define the \emph{$m$-homogeneous CLWE distribution} $H_{\bm W, \beta, \gamma}$ over $\mathbb{R}^n$ to have density at $\bm y$ proportional to
\begin{align*}
\rho(\bm y) \cdot \prod_{i = 1}^m \sum_{k \in \mathbb{Z}} \rho_\beta(k-\gamma\langle \bm y, \bm w_i \rangle)
\; .
\end{align*}
\end{definition}
Note that the $0$-homogeneous CLWE distribution is just $D_{\mathbb{R}^n}$ regardless of $\beta$ and $\gamma$.
\begin{definition} For parameters $\beta, \gamma > 0$ and $1 \le m \le n$, the average-case decision problem $\mathrm{hCLWE}_{\beta, \gamma}^{(m)}$ is to distinguish the following two distributions over $\mathbb{R}^n$: (1) the $m$-homogeneous CLWE distribution $H_{\bm W, \beta, \gamma}$ for some matrix $\bm W \in \mathbb{R}^{n \times m}$ (which is fixed for all samples) with orthonormal columns chosen uniformly from the set of all such matrices, or (2) $D_{\mathbb{R}^n}$.
\end{definition}
\begin{lemma}
\label{lem:hc-to-k-hc}
For any $\beta, \gamma > 0$ and positive integer $m = m(n)$ such that $m \le n$ and $n - m = \Omega(n^c)$ for some constant $c > 0$,
if there exists an efficient algorithm that solves $\mathrm{hCLWE}_{\beta,\gamma}^{(m)}$ with non-negligible advantage,
then there exists an efficient algorithm that solves $\mathrm{hCLWE}_{\beta,\gamma}$ with non-negligible advantage.
\end{lemma}
\begin{proof}
Suppose $\mathcal{A}$ is an efficient algorithm that solves $\mathrm{hCLWE}_{\beta,\gamma}^{(m)}$ with non-negligible advantage
in dimension $n$.
Then consider the following algorithm $\mathcal{B}$ that uses $\mathcal{A}$ as an oracle and solves $\mathrm{hCLWE}_{\beta,\gamma}$ in dimension $n' = n-m+1$.
\begin{enumerate}
\item Input: $n'$-dimensional samples, drawn from either $\mathrm{hCLWE}_{\beta,\gamma}$ or $D_{\mathbb{R}^{n'}}$;
\item Choose $0 \le i \le m-1$ uniformly at random;
\item Append $m-1 = n-n'$ coordinates to the given samples, where the first $i$ appended coordinates are drawn from $H_{\bm I_i, \beta, \gamma}$ (with $\bm I_i$ denoting the rank-$i$ identity matrix) and the rest of the coordinates are drawn from $D_{\mathbb{R}^{m - i -1}}$;
\item Rotate the augmented samples using a uniformly random rotation from the orthogonal group $O(n)$;
\item Call $\mathcal{A}$ with the samples and output the result.
\end{enumerate}
As $n = O({n'}^{1/c})$, $\mathcal{B}$ is an efficient algorithm.
Moreover, the samples passed to $\mathcal{A}$ are effectively drawn from either $\mathrm{hCLWE}_{\beta,\gamma}^{(i+1)}$ or $\mathrm{hCLWE}_{\beta,\gamma}^{(i)}$.
Therefore the advantage of $\mathcal{B}$ is at least $1/m$ fraction of the advantage of $\mathcal{A}$, which would be non-negligible (in terms of $n$, and thus also in terms of $n'$) as well.
\end{proof}
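
A sketch of the sample transformation performed in steps 2--4 of $\mathcal{B}$ (the one-dimensional layered sampler \texttt{sample\_layered} is a placeholder oracle, not part of our construction):
\begin{verbatim}
# Append i layered coordinates and m-1-i Gaussian coordinates to
# each sample, then apply one common Haar-random rotation.
import numpy as np

rng = np.random.default_rng()

def augment(samples, i, m, sample_layered):
    N, n_prime = samples.shape           # N samples of dimension n'
    n = n_prime + m - 1
    cols = [sample_layered(N) for _ in range(i)]
    cols += [rng.standard_normal(N) / np.sqrt(2 * np.pi)
             for _ in range(m - 1 - i)]  # D_R has variance 1/(2*pi)
    aug = np.column_stack([samples] + cols) if cols else samples
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    Q = Q * np.sign(np.diag(R))          # Haar-distributed rotation
    return aug @ Q.T
\end{verbatim}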
Combining Corollary~\ref{cor:hc} and Lemma~\ref{lem:hc-to-k-hc}, we obtain the following corollary.
\begin{corollary}
For any $\beta = \beta(n) \in (0,1)$ and $\gamma = \gamma(n) \geq 2\sqrt{n}$ such that $\gamma/\beta$ is polynomially bounded,
and positive integer $m = m(n)$ such that $m \le n$ and $n - m = \Omega(n^c)$ for some constant $c > 0$,
there is a polynomial-time quantum reduction from $\mathrm{DGS}_{2\sqrt{2 n}\eta_\varepsilon(L)/\beta}$ to $\mathrm{hCLWE}_{\beta,\gamma}^{(m)}$.
\end{corollary}
\printbibliography
\end{document}
\section{Introduction}
Many Android apps collect a lot of privacy-sensitive information from users and share it with multiple parties, e.g., app servers, ad/tracking companies. Such data collection and sharing, leading to privacy breaches, has been extensively studied~\cite{50_ways,yang2013appintent,naseri2019accessileaks,pham2019hidemyapp}. However, these studies mostly deal with information exposure via insecure channels (e.g., incorrect deployment of HTTPS, or using HTTP), or via the standard HTTPS channel. On the other hand, some apps use additional side/covert channels, standard and non-standard protocols, with/without additional encryption layers (\emph{custom encryption}), for data transmission. The privacy profile of these apps (some of which are very popular) remains largely unscrutinized, even though some prominent examples exist (e.g., deceptive location-tracking by the ad company InMobi, fined by the US FTC~\cite{inmobi}).
Challenges of studying these channels include dealing with non-standard protocols (e.g., custom implementations over TCP/UDP), and detecting and decrypting
\mbox{custom encryption layers.}
Several studies~\cite{obfus_res,50_ways,spreitzer2018scandroid,block2017autonomic} have focused on the design, detection, and prevention of side and covert channels. Continella et al.~\cite{obfus_res} designed a framework to detect privacy leaks that is resilient to various obfuscation techniques (e.g., encoding, formatting). Reardon et al.~\cite{50_ways} looked into network traffic collected from apps/libraries to identify side and covert channels used to send sensitive information. Spreitzer et al.~\cite{spreitzer2018scandroid} developed an automated framework to detect side channel leaks (e.g., transmitted/received bytes, free/used space, CPU time) from Android APIs. Limitations of these studies include: lack of (or insufficient) support for non-HTTP protocols, custom encryption layers (beyond HTTPS),
and modern anti-reverse engineering evasion techniques; handling only a few fixed privacy-sensitive items (e.g., Android ID, contacts, IMEI, location, and phone number) sent over custom encrypted channels; shallow interaction with the apps (e.g., lack of sign-in/up support); and dependence on old/obsolete versions of Android.
By addressing the above limitations of state-of-the-art privacy analysis tools, we design and implement \textit{\textit{ThirdEye}}\footnote{In addition to permission checks and network flow monitoring, we add a \emph{third} perspective to observe app behaviors. In many Asian legends, the \emph{third eye} is meant to provide ``perception beyond ordinary sight'' -- see \url{https://en.wikipedia.org/wiki/Third_eye}.} that can effectively and automatically detect privacy exposures in non-standard channels over HTTP/HTTPS and non-HTTP protocols, where apps use one or more layers of \textit{custom encryption}. We also consider custom encryption use and covert channels over storage media. \textit{ThirdEye}\ is designed for efficient and large-scale automated analysis, with real Android devices.
\new{
With \textit{ThirdEye}, we target the following goals in regards to the use of non-standard custom encryption channels: (i) to effectively and efficiently detect privacy leaks that occur through these channels; (ii) to identify security weaknesses introduced by these channels; (iii) to perform a measurement study of the prevalence of privacy leakage and security weaknesses in commonly used Android apps, due to these channels.
}
\textit{ThirdEye}\ is powered by four main components: the \emph{device manager} orchestrates app installs/launches/uninstalls on real Android devices, while maintaining the connection with a test desktop; the \emph{UI interactor} systematically traverses menus and UI items for comprehensive functionality coverage of an app; the \emph{operations logger} performs network/cryptographic/file API instrumentations using \textit{Frida} API hooking for capturing all inputs/outputs from these APIs; the \emph{data flow inspector} detects privacy and security issues in the collected network traffic/files. Besides privacy breaches, we also identify several security weaknesses in these non-standard channels, including: the use of fixed keys and weak/broken algorithms for encryption/decryption in files and network communication.
We implement \textit{\textit{ThirdEye}} on Android 12, which significantly extends privacy and security features compared to older versions; note that several past tools (designed primarily for standard protocols, notably HTTP/S) are becoming much less effective or even incompatible on newer versions of Android. Our UI interactor is more comprehensive and systematic than Android Monkey; we explore all UI elements based on their parameters and avoid visiting duplicate elements and pop-up advertisements/in-app purchases. The ability to handle automated sign-up and sign-in (where possible) helps us cover app functionalities beyond the login prompt (where most tools stop). For improved automation on real Android devices, we provide comprehensive recovery from app crashing/freezing, and from phone states that impede effective analysis (e.g., apps that can change WiFi settings). We bypass common anti-analysis techniques (e.g., root/package-installer/mock-location detection) to increase our app coverage. However, our implementation is currently unable to decode complex obfuscation, or protocols such as QUIC and DTLS; we also do not support native app code built with the Android NDK.
Our implementation requires significant engineering efforts (approx.\ 5.5K and 1.5K LOC of Python and JavaScript code) to realize our design goals. We also leverage several existing tools including \textit{Frida}, \textit{Androguard}~\cite{androguard}, \textit{mitmproxy}~\cite{mitmproxy}, \textit{tcpdump}~\cite{tcpdump}, \textit{AndroidViewClient}~\cite{androidviewclient}, \textit{Python Translators Library}~\cite{translators} (for Google translation), \textit{adb}~\cite{adb} and Android internal commands. We mainly use Frida~\cite{Frida} to collect cryptographic parameters, trace shared storage and app-generated network traffic, and evade anti-reverse engineering mitigations. Additionally, we integrate Frida and Androguard to create a rule-based API logger that allows us to detect and collect non-SDK encryption/decryption APIs parameters. We use mitmproxy and tcpdump to capture HTTP/S and non-HTTP/S traffic, respectively. Our UI interactor is built on top of AndroidViewClient to traverse different objects, including buttons and inputs. We use the Google Translate API to enable support of non-English apps.
\vspace{2pt}
\noindent Our contributions and notable findings include:
\noindent(\textbf{1}) We design \textit{\textit{ThirdEye}} to find privacy and security exposures from various non-standard and covert channels. Unlike past work, \textit{\textit{ThirdEye}} can uncover privacy exposures and security issues
in both HTTP/HTTPS and non-HTTP protocols (i.e., protocols over TCP/UDP), and shared storage (on-device).
\noindent(\textbf{2}) Our implementation of \textit{ThirdEye}\ enables efficient, large-scale automated analysis of thousands of apps on real Android devices. We used two Android devices (Pixel 4 and Pixel 6) running factory images with Android 12, to evaluate \numc{15522} top-apps from various categories in Androidrank~\cite{androidrank}; \numc{12598} (out of \numc{15522}, 81.2\%) apps were successfully analyzed (others failed for various download/compatibility issues). \textit{ThirdEye}\ successfully uncovered numerous novel privacy leakages and security issues.
\noindent(\textbf{3}) We identify that \printpercentoutof{2887}{12598} apps use custom encryption/decryption for network transmission and for storing content on the shared device storage; \printpercentoutof{2383}{2887} of them transmit on-device information that is commonly used for user tracking (e.g., advertising ID, router SSID, device email, list of installed apps).
More importantly, \printpercentoutof{2156}{2383} of these apps send at least one such on-device item to at least one host exclusively under custom encryption, and \printpercentoutof{1719}{2383} apps send at least one such item exclusively under custom encryption to every host that receives it. %
All these serious privacy leakages would be missed by existing analysis tools.
\noindent(\textbf{4}) Besides privacy leakage, we also identify that the use of custom encryption channels can seriously undermine data security, e.g., due to the use of fixed keys, insecure cryptographic parameters and weak/broken cipher algorithms (e.g., RC4, DES). \numc{299} apps transmit their insecure encrypted content over plain HTTP and non-HTTP protocols. %
In addition, we identified \numc{22} apps that perform their authentication over a secure channel (HTTPS) and then expose their authentication token over an insecure channel (HTTP and non-HTTP).
All these security issues enable a network adversary to read (even modify) sensitive plaintext information from encrypted traffic (e.g., using extracted fixed keys or breaking weak ciphers/keys).
\noindent(\textbf{5}) We also identify security and privacy issues beyond custom encrypted channels. For example,
we found that \numc{102} apps transmit their neighbors' wireless SSIDs to possibly track nearby users and their locations; \numc{202} apps collect/share the Android \textit{dummy0} interface information (not protected by runtime/special permissions) that can be used for user tracking;
\numc{26} apps appear to allow UDP amplification, which can possibly be exploited in DDoS attacks.
\noindent(\textbf{6}) Besides app servers, tracking domains also receive various on-device information via non-standard channels. For example, \textit{appsflyer.com} may receive (depending on the app that includes the AppsFlyer SDK) items such as WiFi ESSID, WiFi MAC, operator, device email, build fingerprint, ad ID, and device ID, from 1386 of our tested apps with cumulative installations of over 24 billion. %
We will open source our tool at: \url{https://github.com/SajjadPourali/ThirdEye}.
We notified Google about the major privacy issues that we observed. We also contacted developers of 47 apps with significant security risks.
\begin{newtext}
\section{Threat model}
As we explore security issues due to the use of non-standard communication and custom encryption besides privacy exposure, here we also provide our threat model with different types of attackers, their capabilities, and goals. We exclude attacks that require compromising a user device or an app server. The attacks also do not involve other parties in the Android ecosystem such as device OEM providers, app developers, and app stores. The attacker cannot break modern crypto primitives, except when a key is exposed, or when a weak primitive is used—e.g., the attacker can brute-force a DES key, but not an AES-128 key (unless, e.g., a fixed AES key embedded in the app is used). The attacker can also monitor app behaviors on her own device (e.g., function hooking in a rooted phone), unless the app deploys active anti-analysis techniques that cannot be easily bypassed.
\subhead{On-path network attacker} This adversary has full access to the network communication between an app user and an app server, and can decrypt the encrypted content of network traffic, if insecure cryptographic keys (e.g., fixed keys extracted from an app), and weak algorithms (e.g., DES) are used. Such decryption will directly allow the adversary to access privacy-sensitive user information. The adversary may also get access to authentication tokens (if present) from such network traffic, which may lead to session hijacking and account takeover attacks.
\subhead{Co-located app attacker} This adversary has a regular app installed on the victim user's device. With such co-located malicious apps, the attacker can access shared encrypted files saved by other apps on the same device, and decrypt such content, if insecure cryptographic keys or weak algorithms are used for encryption. This decryption may also expose a user's private data.
\subhead{Device-owner attacker} In this case, we treat the device owner as the attacker, who would like to access protected (e.g., under custom encryption) service provider-content saved or processed on the device itself. This access may allow the attacker e.g., free access to VPN premium/paid services from the app provider.
\end{newtext}
\section{System design}
In this section, we provide our design details; see Figure~\ref{fig:system_design} for an overview. To determine privacy and security issues resulting from non-standard and covert channels in apps,
we leverage the network traffic captured from communication channels (HTTP/HTTPS and non-HTTP protocols), cryptographic API parameters, and file operations recorded during app execution. Our methodology requires rooted Android devices, and
consists of four main modules: the \textit{device manager} controls test devices and ensures that test prerequisites are satisfied; the \textit{UI interactor}
traverses and interacts with app menus to maximize code coverage; the \textit{operations logger}
locates cryptographic APIs, instruments cryptographic API parameters, captures network traffic, and extracts file operations; and the \textit{data flow inspector}
processes data flows to detect privacy and security issues.\looseness=-1
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.61]{./charts/system_diagram.pdf}
\vspace{-5pt}
\caption{\textit{ThirdEye}\ design overview}
\label{fig:system_design}
\end{figure}
\subsection{Device manager}
As part of setting up the prerequisites, the device manager initiates a connection between the test desktop and Android devices through
ADB, and ensures that the devices are connected to the Internet. If the connection through ADB is successful, we uninstall all non-system apps (e.g., YouTube, Google Chrome) except our helper app that fixes the GPS location,
and prepare the device(s) for orchestration.
Given a candidate list of apps,
the device manager performs a cleanup (e.g., remove SD card content) before loading each app, removes the remnants left from the run of the previous app, downloads the corresponding app APK file from Google Play, or from alternative marketplaces (\url{apkpure.com} and \url{apktada.com}),
installs the app on the device,
sets all required runtime and special permissions, and proceeds with the analysis. Otherwise, the app is skipped.
Among special permissions, we consider only \textit{Accessibility}, and \textit{Notification Listener}; we exclude the ones requiring specific setup (e.g., VR permission), or the ones that can significantly affect UI/device operations (e.g., \textit{Display over other} or \textit{Device admin}).
The device manager then launches the app and monitors its progress. It can also detect and recover from possible failures (e.g., app closures, Internet outages).
It closes all installed apps to reduce the chance of UI misinteractions and of capturing traffic originating from them (including traffic originating from the OS), but keeps running Android system services and other required apps (e.g., the Android launcher) to keep the device functional.
To bypass commonly used runtime anti-reverse-engineering protections and to simulate benign device conditions, we use several modules covering: root-detection bypass, mock-location detection, package-installer detection, detection of the use of Frida, certificate pinning, and ADB detection (detailed in Appendix~\ref{sec:runtime_anti_rev}).
\subsection{UI interactor} \label{sec:ui_interface_interactions}
During app execution, %
the UI interactor interacts with the app to increase the code coverage. %
We ensure that the target app is running in the foreground, and then explore and find different UI elements (including buttons, inputs, check-boxes) and interact with them (see Fig.~\ref{fig:ui_interaction} in the appendix).
To find inputs/clickable UI elements, we use a predefined keyword list (in English); see Table~\ref{tab:keyword_list} in the appendix.
To accommodate UI elements with non-English labels, we use the \textit{googletrans} Python module~\cite{googletrans} to translate the labels into English.
We then populate the input fields (if any) using a predefined list of inputs, and trigger the clickable UI elements based on priority, which is determined by a clickable element's position in the list of keywords (e.g., the keyword \textit{not now}
has a higher priority than \textit{click}).
After each click,
we add the clicked element to a list to avoid redundant actions. Clicking on an element may open new activities or trigger actions, and we follow the same steps for those new activities until all elements in the foreground app UI are explored. The \textit{back} button is clicked to go to the previous UI window (also used to avoid pop-up advertisements and in-app purchase windows).
We also identify and utilize the sign-up/sign-in functionalities to login to apps, e.g., by first using our Google test sign-in credentials in supported apps, and then creating an app-specific account (if possible); see Sec.~\ref{sec:interact_with_elements} for details.
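
A heavily condensed sketch of this traversal loop, built on AndroidViewClient, is shown below (the keyword list, exploration budget, and matching logic are simplified relative to our implementation):
\begin{verbatim}
# Simplified UI traversal: click the highest-priority unvisited
# keyword element, or go back if nothing new is found.
from com.dtmilano.android.viewclient import ViewClient

KEYWORDS = ["not now", "agree", "ok", "continue"]  # priority order

device, serialno = ViewClient.connectToDeviceOrExit()
vc = ViewClient(device, serialno)
visited = set()
for _ in range(100):                     # bounded exploration budget
    vc.dump()                            # refresh the view hierarchy
    for kw in KEYWORDS:
        view = vc.findViewWithText(kw)
        if view is not None and view.getUniqueId() not in visited:
            visited.add(view.getUniqueId())
            view.touch()                 # click the element
            break
    else:
        device.press('KEYCODE_BACK')     # nothing new: go back
\end{verbatim}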
\subsection{Operations logger} \label{sec:logger}
Apps may use socket APIs to communicate through non-HTTP channels in the transport layer and above (i.e., over TCP/UDP).
We use tcpdump~\cite{tcpdump} to store all network traffic in \textit{pcap} files, and thus capture both HTTP and non-HTTP flows. We also log network tuples by hooking relevant APIs using Frida, to capture app-specific network communication over sockets.
For detecting covert channels and misuses in shared storage, we hook \textit{open} and \textit{move} file API methods, to detect files that are opened/moved during an app's execution; we save these files for further analysis.
We use mitmproxy to capture/decrypt HTTPS traffic. The tcpdump data (not limited to HTTP/HTTPS) along with mitmproxy traffic obtained during network instrumentation is used to identify non-HTTP traffic. %
For cryptographic instrumentation, we capture (through Frida API hooking) API parameters used in cryptographic operations: plaintext, ciphertext, keys, initialization vectors (IVs), and cipher algorithms. To extract the parameters of Android SDK API, we hook \textit{init()}, \textit{update()} and \textit{doFinal()} API methods (of \textit{javax.crypto.Cipher API}~\cite{class_cipher}); note that \textit{getIV()} and \textit{getAlgorithm()} methods are called by the \textit{init() hook}. We define a non-SDK API as a third-party library used in an app, or a custom functionality implemented in an app that is not part of the Android SDK. %
Currently, we do not specifically handle obfuscated non-SDK APIs;
we look for \textit{encrypt} and \textit{decrypt} strings in method names to identify non-SDK APIs, and such keyword matching will not work with all obfuscated code.
If we find that an app uses encrypted/covert channels, we test the app on two separate devices, to identify fixed cryptographic keys used by the app.
We label a cryptographic key as fixed, when the same key value is returned from multiple runs of the instrumentation (on the same device and on different devices). %
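
For illustration, a minimal version of the cryptographic-operations logger is sketched below: Python drives Frida, and the embedded JavaScript hooks \textit{javax.crypto.Cipher.doFinal()} to record the algorithm, IV, plaintext, and ciphertext. The package name is a placeholder, and our full logger additionally hooks \textit{init()}, \textit{update()}, and the rule-based non-SDK APIs.
\begin{verbatim}
import frida

JS = r"""
Java.perform(function () {
  function toHex(a) {              // Java byte[] -> hex string
    var s = '';
    for (var i = 0; i < a.length; i++)
      s += ('0' + (a[i] & 0xff).toString(16)).slice(-2);
    return s;
  }
  var Cipher = Java.use('javax.crypto.Cipher');
  Cipher.doFinal.overload('[B').implementation = function (input) {
    var output = this.doFinal(input);
    var iv = this.getIV();         // may be null (e.g., ECB mode)
    send({algorithm: this.getAlgorithm(), iv: iv ? toHex(iv) : null,
          input: toHex(input), output: toHex(output)});
    return output;
  };
});
"""

session = frida.get_usb_device().attach("com.example.app")
script = session.create_script(JS)
script.on("message", lambda msg, data: print(msg))
script.load()
input()    # keep the hook alive while the app runs
\end{verbatim}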
\subsection{Data flow inspector}
This module detects privacy/security issues in non-standard and covert
channels in the collected network traffic.
We also leverage the collected parameters (i.e., ciphertext, plaintext, algorithm, key, IV) of encryption/decryption functions by hooking cryptographic API methods. We then search for the logged ciphertext in the captured content, and map/store the ciphertext with the corresponding plaintext.
We also check files stored on the device, including images, audio and video files. We categorize the captured content into HTTP, HTTPS, non-HTTP, and file.
For privacy issues, we extract Personally Identifiable Information (PII) and personal data (e.g., contacts, messages, images, audio, video) stored on the device to identify privacy exposures. We create copies of this data in different encoding formats (e.g., Base64, hex), and search for these copies (exact and partial matches) within the network traffic, and within the magic headers (i.e., file signatures used to determine the content of a file) of transmitted media content (image, audio, video) originating from the device (i.e., outgoing). Finally, we store the results of the content search in a database.%
For security issues, we check for situations where data sent over secure network channels is subsequently sent over insecure channels using HTTP/non-HTTP --- e.g., an authentication token sent over HTTPS is subsequently sent over HTTP.
We also look for fixed/hard-coded keys in the app/library code, and the use of weak encryption algorithms (e.g., RC4) to encrypt data that is passed through insecure channels.
For covert channels, we check for files on shared storage that are opened by different apps. If we find files with common paths reused in multiple apps, we flag those as possible covert channels. %
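
A sketch of the multi-encoding search used by the data flow inspector (partial matching and the media magic-header checks are omitted for brevity):
\begin{verbatim}
# Expand each on-device value into the encodings we match against
# the captured traffic, then record which PII items were found.
import base64

def encodings(value: str):
    raw = value.encode()
    yield raw                          # plain
    yield base64.b64encode(raw)        # Base64
    yield raw.hex().encode()           # lowercase hex
    yield raw.hex().upper().encode()   # uppercase hex

def find_pii(traffic: bytes, pii: dict):
    hits = []
    for name, value in pii.items():
        for enc in encodings(value):
            if enc in traffic:
                hits.append(name)
                break
    return hits
\end{verbatim}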
\section{Implementation}
We use Python to implement \textit{ThirdEye}, and leverage the use of other Android command line utilities (e.g., ADB) to manage the orchestration of app executions. In addition, we use tcpdump and mitmproxy to capture network traffic and decrypt HTTPS communication.
We use Frida to instrument API methods, and
implement the UI interactor component by extending AndroidViewClient.
We discuss our implementation details below. %
\subsection{Pre-execution steps}
To prepare an Android device for instrumentation, we first manually set the Android built-in WiFi proxy on the target device and import initial data on the device, including sample media files, contacts, and SMS messages. We also remove the device lock and increase display sleep timeout to avoid deadlocks in the UI interactor module.
We then use the \emph{app manager} component to handle downloading, installing and executing the latest and most compatible version (for our device hardware and OS) of a target app from Google Play. The app manager utilizes the UI interactor module to open and interact with Google Play, which is used to install apps; see Sec.~\ref{sec:Google-Play-store-integration}. The \textit{apkpure.com} and \textit{apktada.com} marketplaces are also checked if a target app is unavailable in Google Play (apps may fail to install from Google Play due to e.g., region locking).
During the first install of an app, we store all APK and OBB files, to avoid downloading them again for subsequent testing. %
The available permissions on a target device and the runtime permissions required to launch an app on the given device are extracted using the \textit{package manager (pm list permissions)}~\cite{pm} and \textit{dumpsys package <package>}~\cite{dumpsys} commands, respectively. We grant all the requested runtime permissions using the \textit{pm grant <package> <permission>} command.
Apps may also request special permissions (e.g., \textit{Accessibility}, \textit{Notification Listener}), which are only set via Android settings. We use the \textit{dumpsys package <package>} command to fetch services that request special permissions, and then execute the \textit{settings put} or \textit{cmd}~\cite{cmd} command (depending on the type of the requested permission) to grant the special permissions. %
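
For illustration, a condensed sketch of the runtime-permission granting step (Python driving \textit{adb}; the \textit{dumpsys} output parsing is simplified relative to our implementation, and non-runtime permissions simply fail to be granted and are ignored):
\begin{verbatim}
import subprocess

def adb_shell(serial, cmd):
    return subprocess.run(["adb", "-s", serial, "shell", cmd],
                          capture_output=True, text=True).stdout

def grant_runtime_permissions(serial, package):
    dump = adb_shell(serial, f"dumpsys package {package}")
    for line in dump.splitlines():
        token = line.strip().split(":")[0]
        if token.startswith("android.permission."):
            adb_shell(serial, f"pm grant {package} {token}")
\end{verbatim}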
\subsection{In-execution controller} %
We make sure that only our target app is installed on the device. Package names of all installed apps on the target device are extracted using the \textit{cmd package list packages} command. These package names are matched against that of the target app and allowed system apps (i.e., dependencies). Any apps with unmatched package names (i.e., non-system apps) are removed. Then,
prior to an app execution, we also ensure that all opened unwanted apps (e.g., Camera, Contacts)
are closed using the \textit{pm clear} command~\cite{adb_clear}.
We also verify that the ADB connection between the test desktop and devices, and the Internet connection from the devices are successful.
We detect apps running in the background and foreground using the \textit{dumpsys activity activities} command~\cite{dumpsys}.
The output of this command returns structures\footnote{\url{https://android.googlesource.com/platform/frameworks/base/+/7efcc0c/services/java/com/android/server/am/ActivityStack.java}} showing foreground activity (\textit{mResumedActivity}) and background activity (\textit{mLastPausedActivity}) information. We make sure that the test app is always running in the foreground.
\begin{sloppypar}
Apps with certain permissions (e.g., CHANGE\_WIFI\_STATE) can perform disruptive operations (e.g., change WiFi connectivity state, screen rotation), which can affect our app analysis.\footnote{Although the \textit{setWifiEnabled} method was deprecated in Android 10, it still works in Android 12, for apps built with a lower SDK API level --- see \url{https://developer.android.com/reference/android/net/wifi/WifiManager.html\#setWifiEnabled(boolean)}}
If an app disables WiFi, the Internet connection is lost, and if the screen rotates, a click event may trigger at the wrong position of the screen; to avoid such situations, \textit{svc wifi enable}~\cite{turn_on_off_wifi},
and \textit{content insert}~\cite{rotate_android_devices},
commands are used, respectively. Furthermore, because of the variation in the strength of the GPS signals received by a device, searching for the exact GPS location in the saved network traffic is problematic. The received GPS coordinates from satellites may vary slightly even when the device position is fixed.
Therefore, to return a fixed GPS value, we use our own GPS mocking app.
\end{sloppypar}
During UI interactions, it is possible to have accidental app closures or crashes.
Crashed or frozen apps are identified by inspecting the \textit{mCurrentFocus} structure that contains the current foreground window activity details. This structure is returned by the \textit{dumpsys activity activities} command. Therefore, the \textit{mCurrentFocus} structure is inspected prior to UI interactions to detect crashes/freezes. The timestamp of the crash/freeze (if any) is also recorded. For crashed apps, we extract error logs using \textit{logcat}~\cite{logcat} (frozen apps do not produce any error logs). If an app crashes at startup, we try to rerun the app up to five times before skipping it. If the app crashes during execution, information collected so far is saved. %
When the app analysis completes, we save the analysis data on the device, if any (i.e., \textit{pcap} file, and files created by the apps). %
\subsection{User interface interaction}
We implement this module by extending the AndroidViewClient library that is designed to automate test scripts. AndroidViewClient provides UI based device interaction APIs (e.g., \textit{find}, \textit{click}, \textit{fill}). To find UI elements, it requires matching (exact/partial) keywords in a predefined list with UI element labels. Therefore, proper knowledge of the app view is required to determine what keyword list to use.
\subsubsection{UI element finder}
We use the \textit{dump} function in AndroidViewClient to get the foreground window content that contains all the window elements. If the element text in the UI window is non-English, the specific language is automatically detected and translated by googletrans~\cite{googletrans}.
To speed up the translation process, we store the translation result (i.e., original text and its English translation) in a database. We check this database before using googletrans for determining the translation of non-English text in the foreground window of the next app.
\subsubsection{UI element selection} \label{element_selection}
We create two separate lists for clickable and fillable elements.
The priority of selecting an element from these lists depends on the order of the elements in them (e.g., the keyword \textit{not now} has a higher priority than \textit{click}). This ordering reflects the typical order in which elements appear in the UI, e.g., \textit{accept}/\textit{submit} elements will appear after clicking \textit{agree}; see Table~\ref{tab:keyword_list} and Table~\ref{tab:input_list} in the appendix. We created the priority order based on our observations from manually exploring several apps.
We then match the elements in the keyword list (based on the priority order) with the elements of app UI in foreground.
The clickable list contains the keywords popularly used for clickable elements, with an optional exclude-keyword list for each keyword to prevent interacting with similar words --- e.g., the keyword \textit{agree} has an exclude list containing \textit{agreement} and \textit{disagree}, so that matching \textit{agree} neither falsely matches the similar word \textit{agreement} nor triggers the opposite action \textit{disagree}.
The fillable list contains common keywords along with fillable values --- e.g., keyword \textit{email} with \textit{mymail@email.com} value (see Table~\ref{tab:input_list} in the appendix).
\begin{sloppypar}
To select clickable elements, we consider UI elements with the checkable/clickable property enabled, and with at least one of the following values in their class attributes: \textit{android.widget.checkedtextview}, \textit{android.view.view}, \textit{android.widget.button}, \textit{android.widget.textview}, \textit{android.widget.imageview} and \textit{android.widget.imagebutton}. To select fillable elements, we consider UI elements with the \textit{android.widget.edittext} value in class attributes.
\end{sloppypar}
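A simplified Python sketch of this filtering step (assuming each dumped UI element is available as a dict of its attributes; the structure is illustrative):
\begin{verbatim}
CLICKABLE_CLASSES = {
    "android.widget.checkedtextview", "android.view.view",
    "android.widget.button", "android.widget.textview",
    "android.widget.imageview", "android.widget.imagebutton",
}

def split_elements(views):
    """views: list of per-element attribute dicts from a UI dump."""
    clickable, fillable = [], []
    for v in views:
        cls = v.get("class", "").lower()
        if cls == "android.widget.edittext":
            fillable.append(v)           # candidate for filling
        elif (v.get("clickable") == "true" or v.get("checkable") == "true") \
                and cls in CLICKABLE_CLASSES:
            clickable.append(v)          # candidate for clicking
    return clickable, fillable
\end{verbatim}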
\subsubsection{Interacting with UI elements} \label{sec:interact_with_elements}
Prior to interacting with UI elements, we wait for \numc{10} seconds to allow the target app to load. Then we find and fill all the fillable UI elements of the app UI (running in the foreground). If an app (e.g., a secure wallet app) prompts for the number pad, e.g., for a custom security PIN, %
we key in the digit 9 ten times, a sequence that we later search for in the collected network traces and files (any fixed numeric sequence can be used).
\subhead{Identifying duplicate UI element visits}
To prevent duplicate visits to a UI element, we assign a unique ID to each element.
The ID is the concatenation of the element's \textit{view} attributes and a \textit{window-hash}, defined as the SHA256 digest of the output of the \textit{dumpsys window | grep applicationId} command.
The element attributes (of the \textit{view}) that we leverage are: element ID, clickable, enabled. We order the element attributes prior to concatenating them with the \textit{window-hash}.
Since the unique ID preserves the order of element interactions, we can prevent duplicate visits to both UI elements and UI paths; a minimal sketch follows.
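The ID computation can be sketched in Python as follows (assuming \textit{adb} is on the PATH and one device is connected; the attribute key names are illustrative):
\begin{verbatim}
import hashlib
import subprocess

def window_hash():
    # assumption: adb is on PATH and exactly one device is connected
    out = subprocess.run("adb shell dumpsys window | grep applicationId",
                         shell=True, capture_output=True, text=True).stdout
    return hashlib.sha256(out.encode()).hexdigest()

def element_uid(view_attrs, whash):
    # view_attrs: dict of a UI element's attributes (illustrative key names);
    # attributes are ordered so the same element always yields the same ID
    keys = sorted(["resource-id", "clickable", "enabled"])
    return "|".join(str(view_attrs.get(k, "")) for k in keys) + "|" + whash
\end{verbatim}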
\begin{sloppypar}
\subhead{Identifying pop-up advertisements}
Prior to interacting with an app window, we check if it contains a pop-up advertisement; if so, the \textit{back} button is triggered to traverse to the previous app window. We currently consider ads served by the two most common (pop-up) ad platforms as we empirically observed in the top-100 Androidrank apps: \textit{Google AdMob}~\cite{Admob} for non-gaming apps and \textit{Unity Ad Units}~\cite{AdUnits} for gaming apps.
We detect AdMob pop-up ads using
the \textit{com.google.android.gms.ads.AdActivity} activity, and Unity pop-up ads using
\textit{com.unity3d.ads.adunit.AdUnitActivity} and \textit{com.unity3d.services.ads.adunit.AdUnitActivity} activities. The support for other ad management SDKs can be easily added.
\end{sloppypar}
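A minimal Python sketch of the pop-up ad check (the activity names are those listed above; a real implementation would restrict the match to the focused activity):
\begin{verbatim}
import subprocess

AD_ACTIVITIES = (
    "com.google.android.gms.ads.AdActivity",           # Google AdMob
    "com.unity3d.ads.adunit.AdUnitActivity",           # Unity Ad Units
    "com.unity3d.services.ads.adunit.AdUnitActivity",  # Unity Ad Units
)

def foreground_is_ad():
    # search the activity dump for a known pop-up ad activity; a real
    # implementation would restrict the match to the focused activity
    out = subprocess.run("adb shell dumpsys activity activities",
                         shell=True, capture_output=True, text=True).stdout
    return any(act in out for act in AD_ACTIVITIES)

# if foreground_is_ad(): trigger back, e.g.,
#   adb shell input keyevent KEYCODE_BACK
\end{verbatim}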
\begin{sloppypar}
\subhead{Identifying Google in-app purchases}
In-app purchase features can negatively affect the analysis by deviating the UI interactor to deal with third-party components, instead of the app itself. To address this issue, we use \textit{dumpsys activity} to detect the Google Play in-app billing (in-app purchase) activities. We identified the activities that belong to Google's in-app purchase windows.\footnote{The activities are: \textit{com.google.android.finsky.activities.MarketDeepLinkHandlerActivity}, \textit{com.google.android.finsky.billing.acquire.LockToPortraitUiBuilderHostActivity}, and
\textit{com.google.android.finsky.billing.acquire.SheetUiBuilderHostActivity}.} Therefore, we trigger the \textit{back} button to go to the previous window, if we encounter a Google in-app billing window.
\end{sloppypar}
\subhead{Google sign-in authentication}
If the \textit{google} keyword appears in the clickable UI elements,
it usually indicates the app's support for Google sign-in; we also check for relevant activities.\footnote{The activities are: \textit{com.google.android.gms/signin.activity.ConsentActivity} and \textit{com.google.android.gms/auth.uiflows.consent.BrowserConsentActivity}.} We then use the email address registered with the device to authenticate. The UI interactor also grants access for sign-in activities and relevant permissions by clicking the \textit{confirm} button (if prompted).
\subhead{Terminating UI interaction}
To prevent the exhaustion of system resources, \textit{ThirdEye} interacts with an app until one of the following conditions is met: (i) the number of interaction attempts with UI elements reaches 100; (ii) no new elements are found; (iii) the app cannot be opened
even after 5 consecutive attempts; (iv)
the duration of the interactions reaches 5 minutes. %
\subsubsection{Google Play Store integration}
\label{sec:Google-Play-store-integration}
We use Google Play as the default app market. The target app's installation window is opened by sending an Android intent with the URI \url{market://details?id=PKGNAME}.
Then the UI interactor (see Sec.~\ref{sec:ui_interface_interactions}) installs the app from Google Play, by clicking the \textit{Install} button (if available).
We check every 10 seconds for the presence of an \textit{open} button, to confirm that the app is successfully installed; after 200 seconds, the installation is skipped (a minimal sketch of this polling loop follows). To deal with common app installation prompts (e.g., agreeing to install, consenting to permissions, providing credit card information), we handle various UI elements with labels including: \textit{try again}, \textit{retry}, \textit{accept}, \textit{update}, \textit{skip}, \textit{no thanks}, \textit{continue}, and \textit{ok}.\looseness=-1
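A minimal sketch of this polling loop with AndroidViewClient (the \textit{vc} instance and button-label handling are simplified):
\begin{verbatim}
import time

def wait_for_install(vc, timeout=200, poll=10):
    """Poll the Play Store UI for an 'Open' button to confirm installation.
    vc is an AndroidViewClient ViewClient instance (a sketch)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        vc.dump()                         # refresh the foreground view tree
        if vc.findViewWithText("Open"):   # button appears once installed
            return True
        time.sleep(poll)
    return False                          # skip the app after the timeout
\end{verbatim}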
\subsection{Instrumentation}
We describe below the instrumentation methods used in \textit{ThirdEye}; these methods rely primarily on Frida, complemented by other tools, to comprehensively collect the instrumented data.
\subhead{Network and file instrumentations}
We use tcpdump to collect non-HTTP traffic, and mitmproxy to capture HTTP/HTTPS traffic.
We use the global proxy settings of Android devices to forward the HTTP/HTTPS traffic to an mitmproxy server running on the test desktop. As some apps ignore the proxy setting,
we hook (via Frida) the remote TCP connections with port 443 that bypass the global proxy, and forward the traffic to our desktop mitmproxy. %
For files, we hook \textit{open}, \textit{remove}, \textit{rename}, \textit{read} and \textit{write} Bionic library functions,
which are used for shared storage operations. These functions cover file operations used in both Android SDK and NDK. We store read and write buffers, and process them later.
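As an illustration of the native-level hooks, a minimal sketch using Frida's Python bindings to intercept Bionic's \textit{open} (the package name is a placeholder; the other listed functions are hooked analogously):
\begin{verbatim}
import frida  # pip install frida

JS = """
// intercept Bionic open() to log file paths and descriptors
Interceptor.attach(Module.findExportByName('libc.so', 'open'), {
    onEnter: function (args) {
        this.path = args[0].readUtf8String();
    },
    onLeave: function (retval) {
        send({op: 'open', path: this.path, fd: retval.toInt32()});
    }
});
"""

def on_message(message, data):
    if message["type"] == "send":
        print(message["payload"])

device = frida.get_usb_device()
session = device.attach("com.example.app")  # hypothetical target process
script = session.create_script(JS)
script.on("message", on_message)
script.load()
input("hooking open(); press Enter to stop\n")
\end{verbatim}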
\subhead{Rule-based API hooking}
We implement a rule-based hooking module using Androguard
and Frida.
This module provides the ability to define selection criteria and actions on API methods in DEX files to choose and trigger dynamic actions (e.g., logging or changing parameters) by accepting callback functions. We use Androguard to select methods based on defined criteria and Frida to perform the defined action. Androguard is used to fetch all the declared API methods in the DEX files that use \textit{EncodedMethod} (an Androguard Object), which contains the method name, parameters, return type, class name, Dalvik bytecode (of the method). Since Androguard works with \textit{Dalvik} method definition syntax, and Frida uses Java method definition syntax, our module maps Androguard results to Java format. Then we create a hooking script for Frida, based on the defined callback functions that would be executed by Frida when called.
We primarily use this module to evade root detection and to log non-SDK encryption/decryption methods; however, it is generic enough to be reused for other purposes. %
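A simplified sketch of the method-selection step using Androguard (matching on method names only; our actual criteria also cover parameters and return types):
\begin{verbatim}
from androguard.misc import AnalyzeAPK  # pip install androguard

def dalvik_to_java(class_name):
    # 'Lcom/foo/Bar;' -> 'com.foo.Bar' (Frida uses Java-style names)
    return class_name[1:-1].replace("/", ".")

a, d, dx = AnalyzeAPK("app.apk")
candidates = []
for dvm in d:                    # one DalvikVMFormat per DEX file
    for m in dvm.get_methods():  # EncodedMethod objects
        name = m.get_name().lower()
        if "encrypt" in name or "decrypt" in name:
            candidates.append((dalvik_to_java(m.get_class_name()),
                               m.get_name(), m.get_descriptor()))
# 'candidates' then drives the generation of Frida Java.use(...) hooks
\end{verbatim}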
\subhead{Cryptographic instrumentation}
To collect cryptographic parameters, we log the input parameters, return value, execution timestamp and object ID of each method.
For this purpose, we hook \textit{init()} (for the parameters such as the key, IV, algorithm, and operation type), and \textit{doFinal()} and \textit{update()} (for plaintext and ciphertext) Android SDK cryptographic API methods from \textit{javax.crypto.Cipher}.
To relate these API calls in sequence, %
we use their object IDs and execution timestamps. \new{Note that we can collect the necessary plaintext items from \emph{encrypt} functions alone---i.e., we log the inputs before they are encrypted---and thus we are unaffected if apps do not invoke \emph{decrypt} functions.} %
Android SDK cryptographic APIs cover both single-part and multi-part encryption/decryption. Multi-part operations are usually used when the data is not contiguous in memory, e.g., large files, socket streams. To defragment multi-part blocks, we trace back \textit{update()} and \textit{doFinal()} functions based on their object hashcode~\cite{java7_api_spec}
and calling timestamp, until a \textit{javax.crypto.Cipher} object initialization or a cryptographic parameter modification occurs.
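A simplified Frida script for the \textit{doFinal} hook, loaded via the Python bindings as in the earlier sketch (one overload shown; \textit{init()} and \textit{update()} are hooked analogously):
\begin{verbatim}
# loaded via frida's Python bindings as in the earlier sketch (illustrative)
CIPHER_JS = """
Java.perform(function () {
    var Cipher = Java.use('javax.crypto.Cipher');
    var B64 = Java.use('java.util.Base64');
    Cipher.doFinal.overload('[B').implementation = function (input) {
        var output = this.doFinal(input);
        send({
            api: 'doFinal',
            object_id: this.hashCode(),   // relates init/update/doFinal calls
            ts: Date.now(),               // execution timestamp
            algorithm: this.getAlgorithm().toString(),
            input: B64.getEncoder().encodeToString(input),
            output: B64.getEncoder().encodeToString(output)
        });
        return output;
    };
});
"""
\end{verbatim}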
We also look for non-SDK cryptographic APIs in apps. We leverage our rule-based logger to find all methods with \textit{encrypt} or \textit{decrypt} in their names, which accept at least one argument of byte or string type, and return a byte or string type. After identifying the specific API methods, we automate the creation of the corresponding Frida API method hooks, and log their arguments and return values. In addition, we inspect the logged arguments to detect potential cryptographic keys, by looking for arguments that are \numc{128}, \numc{192}, or \numc{256} bits in length and appear together with other arguments of any other length.
We identify nested encryption/decryption by recursively checking ciphertext for the corresponding plaintext in the collected instrumented data. For each level of nested encryption, we create a new encryption entity with corresponding parameters. If the nested plaintext is compressed, we also consider its decompressed value.
\begin{sloppypar}
\subhead{Android ID, PII and device Info}
We manually run the \textit{getprop}, \textit{ifconfig} and \textit{dumpsys} commands (using ADB) to extract all available persistent PII and device information in JSON format, except for three identifiers that are not persistent --- \textit{Advertising ID}, \textit{Android ID} and \textit{Dummy0 Address} --- which are automatically extracted during app interaction via the \textit{getAdvertisingIdInfo} API, the \textit{Settings.Secure{\#}ANDROID\_ID} API and the \textit{ifconfig} command (with ADB), respectively. Note that apps installed on devices running Android 8.0 and above get a unique Android ID value for each app, scoped by the app signing key, user and device. The non-persistent data is stored individually for each analyzed app, allowing us to perform multi-device analysis by choosing the appropriate PII items collected during our inspection.%
\end{sloppypar}
\subsection{Inspection}\label{sec:inspection}
We categorize the collected network communication and file operations of each app into four categories: HTTP, HTTPS, non-HTTP, and file. Then we store the details of the instrumented data (i.e., destination, direction to/from the device, headers of HTTP/HTTPS traffic, and content) in separate lists maintained for each category. Before storing the information in the lists, we use \textit{python-magic}~\cite{python_magic} to identify and decompress the compressed data (if any). We then search for PII data in these lists.
\subhead{Non-HTTP communication}
To extract non-HTTP communication, we remove system traffic from the captured pcap file, and then parse it using dpkt~\cite{dpkt} to determine whether the corresponding TCP/UDP packets are non-HTTP, i.e., whether their application-layer protocol is neither TLS nor HTTP/S (a heuristic sketch follows).%
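A minimal dpkt-based sketch of this classification (assuming Ethernet framing; the HTTP/TLS checks are heuristic simplifications of our actual parsing):
\begin{verbatim}
import dpkt  # pip install dpkt

def non_http_payloads(pcap_path):
    """Yield TCP/UDP payloads that look neither like HTTP nor TLS."""
    methods = (b"GET", b"POST", b"PUT", b"HEAD",
               b"DELETE", b"OPTIONS", b"HTTP/1.")
    with open(pcap_path, "rb") as f:
        for ts, buf in dpkt.pcap.Reader(f):
            eth = dpkt.ethernet.Ethernet(buf)
            ip = eth.data
            if not isinstance(ip, (dpkt.ip.IP, dpkt.ip6.IP6)):
                continue
            seg = ip.data
            if not isinstance(seg, (dpkt.tcp.TCP, dpkt.udp.UDP)) or not seg.data:
                continue
            payload = bytes(seg.data)
            is_http = payload.startswith(methods)
            # TLS records start with a content type (0x14-0x17), version 0x03xx
            is_tls = payload[:1] in (b"\x14", b"\x15", b"\x16", b"\x17") \
                     and payload[1:2] == b"\x03"
            if not is_http and not is_tls:
                yield ts, payload
\end{verbatim}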
\subhead{Defragmentation}
Fragmentation can affect our inspection of network traffic: the standard MTU~\cite{what_is_mtu} of IP datagrams over Ethernet is \numc{1500} bytes (same for the WiFi interface~\cite{rfc0894}), so any IP datagram over \numc{1500} bytes will be fragmented. As a result, we would miss PII values (if any) that are split between multiple packets. To overcome this problem, we reassemble fragmented packets to recover the original data,
and use the dpkt~\cite{dpkt} library to parse the TCP and UDP traffic data.
\subhead{Identifying encrypted data}
We extract ciphertext values from cryptographic APIs (e.g., \textit{Cipher}) and search them in the lists created for all categories (i.e., HTTP, HTTPS, non-HTTP, file). If a ciphertext value is found in the content of any of the lists, we add its cryptographic parameters to a new list with the same name and an additional \textit{encrypted} suffix. Apps can send the ciphertext in chunks; therefore, prior to searching the lists of the different categories, we split each ciphertext into 18-byte chunks (assuming 128-bit cipher blocks), so that each chunk spans more than one cipher block, reducing the chance of false matches from identical blocks; a minimal sketch follows. %
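\begin{verbatim}
def search_ciphertext(ciphertext, category_lists):
    """Search 18-byte ciphertext chunks (each spans >1 AES block).
    category_lists: e.g., {'http': [...], 'https': [...],
                           'non-http': [...], 'file': [...]}"""
    chunks = [ciphertext[i:i + 18] for i in range(0, len(ciphertext), 18)]
    hits = []
    for category, contents in category_lists.items():
        if any(chunk in blob for blob in contents for chunk in chunks):
            hits.append(category)
    return hits
\end{verbatim}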
\subhead{Search strategy}
The content in the network traffic can be transformed into different forms. One transformation (e.g., capitalize, upper case, lower case, Base64) or several in combination (e.g., \textit{md5-hex} --- an MD5 hash with hex encoding; similarly \textit{sha1-hex}, \textit{sha256-hex}, \textit{md5-raw}, \textit{sha1-raw}, \textit{sha256-raw}) may be applied to the content.
Therefore, we compile a set of values (e.g., PII, list of keywords, cryptographic keys and fillable content; see Table~\ref{tab:input_list} in the appendix), apply the mentioned transformations to each value in the set, and save the results in a separate list, which we then use to search for and identify privacy/security issues (sketched below).
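A minimal Python sketch of the transformation generation (hex and raw digest forms shown; the encodings are those described above):
\begin{verbatim}
import base64
import hashlib

def searched_forms(value):
    """Generate the transformed forms of a PII value that we search for."""
    variants = {value, value.lower(), value.upper(), value.capitalize()}
    forms = set()
    for v in variants:
        b = v.encode()
        forms.add(b)
        forms.add(base64.b64encode(b))
        for algo in ("md5", "sha1", "sha256"):
            d = hashlib.new(algo, b)
            forms.add(d.hexdigest().encode())  # e.g., md5-hex
            forms.add(d.digest())              # e.g., md5-raw
    return forms
\end{verbatim}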
\subhead{Detecting insecure cryptographic parameters}
We use \textit{apktool}~\cite{apktool} to unpack APK files, and search the collected keys in different encoding formats (plain, Base64, hex case-insensitive) over all the unpacked content of APK files, to determine if any of the fixed keys are hard-coded (see Sec.~\ref{sec:logger}).
Thereafter, we collect the keys of the traced ciphertexts in the network communication or files.
If we detect hard-coded/fixed (i.e., reused) keys from the network communication in multiple runs, and on the same or different devices, we mark them as insecure keys. %
\subhead{App and system traffic separation}
The captured traffic from tcpdump and mitmproxy may contain traffic from system processes running on a device, which is separate from the app traffic. To ensure that we only analyze the traffic of the target app, we filter the captured network packets using the collected network tuples by API hooking and their timestamps (see Table~\ref{tab:instrumentation_network_api} in appendix). We hook the process ID of the target app, to ensure system/app traffic separation --- all hooks are at the app level. %
\section{Results}
In this section, we summarize our findings on privacy and security issues of Android apps that use non-standard and covert communication channels. %
\new{
Instead of choosing top-downloaded apps, which may not cover various app categories, we selected apps from Androidrank~\cite{androidrank} for our evaluation. Androidrank ranks Google Play apps in 33 categories based on various metrics, such as total downloads, total number of user ratings, and average user ratings. We collected all available \numc{15522} %
unique free apps for our evaluation from all categories (note that there are overlaps between app categories). This dataset contains apps that are highly popular globally (e.g., 1B+ installs), but also apps that are top-ranked (within top-500) in a specialized app category with a relatively small number of installations (e.g., 10K+); see Fig.~\ref{fig:comp_pct_apps_installs} in the appendix.
}
\textit{ThirdEye} could download \numc{15327} apps,
and successfully analyzed \numc{12598} apps; the remaining \numc{2729} apps failed for various reasons, e.g., app incompatibility with Android 12, geo-blocking, unknown reverse engineering protection, and app-crashing due to the use of Frida method hooking.
We ran our experiments between Nov.\ 25, 2021--Jan.\ 6, 2022.
We used two Android devices (Pixel 4 and Pixel 6) running factory images with Android 12, and a desktop running Ubuntu 21.04, Core i9-10900, 64GB RAM, 2TB storage. Most apps finished their execution (i.e., all their UI interactions were completed) within 5 minutes; we terminated the execution of \printpercentoutof{1329}{12598} apps at the 5-minute threshold (see Fig.~\ref{fig:app_interaction_timings} in the appendix).
For a summary of our results, see Tables~\ref{tab:on-device-info} and \ref{tab:content_types_on_channels}. We also provide several examples of prominent privacy/security issues from our findings in Sec.~\ref{sec:discussion}. \new{We report some additional network security results in Appendix~\ref{sec:netsec-misc}.}
We categorize privacy-sensitive data into \textit{Device}, \textit{Network}, \textit{Network Location}, \textit{GPS Location}, and \textit{User} categories; see Table~\ref{tab:on-device-info}.
We also label the likely use of the available data as \textit{Persistent ID}, \textit{Short-term}, \textit{Profiling}, \textit{Location Data}, and \textit{User Assets}. Items labeled as \textit{Persistent ID} and \textit{Short-term} are generally used for tracking; \textit{Persistent IDs} do not change with time, and \textit{Short-term} items can identify a user for a short duration (and can be used for long-term tracking if combined with other items). \textit{Profiling} items can identify a user or a user group to varying degrees; their accuracy improves when combined.
\newcommand{\STAB}[1]{\begin{tabular}{@{}c@{}}#1\end{tabular}}
\begin{table*}[htb]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{ @{}c@{ } l l l |r r |r r |r r |r r r r}
\multirow{2}{*}[0ex]{\small \textbf{Category}}&\multirow{2}{*}[0ex]{\small \textbf{Data Type}}&\multirow{2}{*}[0ex]{\small \textbf{Protection level}}&\multicolumn{1}{l}{\multirow{2}{*}[0ex]{\small \textbf{Purpose/Use}}} &\multicolumn{2}{c}{\textbf{HTTPS}}&\multicolumn{2}{c}{\textbf{HTTP}}&\multicolumn{2}{c}{\textbf{Non--HTTP}}&\multicolumn{3}{c}{\textbf{Network-wide}} \\
&&&&{\cellcolor{gray!25}\small Regular}&{\cellcolor{gray!25}\small \shortstack{Custom\\Encrypted}}&{\cellcolor{gray!25}\small Regular}&{\cellcolor{gray!25}\small \shortstack{Custom\\Encrypted}}&{\cellcolor{gray!25}\small Regular}&{\cellcolor{gray!25}\small \shortstack{Custom\\Encrypted}}&{\cellcolor{gray!25}\small Regular}&{\cellcolor{gray!25}\small \shortstack{Custom\\Encrypted}}&{\cellcolor{gray!25}\small Overall}\\
\hline
\multirow{11}{*}{\STAB{\rotatebox[origin=c]{90}{{\small Device}}}}
&Device ID&Normal&Persistent ID$^\ddagger$&4504&526&347&100&30&10&4678&595&4818\\
&Advertising ID&Normal&Persistent ID$^\dagger$&7812&1990&312&100&5&3&7841&2034&8006\\
&Bootloader&Normal&Profiling&131&27&2&0&1&0&133&27&160\\
&Build Fingerprint&Normal&Profiling&474&51&8&8&3&0&482&58&532\\
&CPU Model&Normal&Profiling&795&128&22&3&4&0&814&131&919\\
&Display ID&Normal&Profiling&10376&1705&1925&18&8&0&10725&1712&10726\\
&Device Name&Normal&Profiling&3960&1605&190&14&20&6&4057&1614&4918\\
&Device Resolution&Normal&Profiling&489&29&28&4&0&0&515&33&540\\
&Device ABI&Normal&Profiling&1754&1548&26&11&6&0&1552&1773&2971\\
&Device Model&Normal&Profiling&12031&1966&2029&92&93&8&12289&2007&12289\\
&Dummy0 Interface&Normal&Short-term&183&8&5&3&0&3&187&14&201\\
\hline
\multirow{6}{*}{\STAB{\rotatebox[origin=c]{90}{{\small Network}}}}
&Operator&Normal&Profiling&5253&1595&106&17&12&0&5274&1607&5563\\
&Device WiFi IP&Normal&Short-term&1030&172&23&12&21&2&1060&179&1210\\
&Device WiFi IP6&Normal&Short-term&64&10&2&1&0&0&65&11&75\\
&Device Proxy IP&Normal&Short-term&36&11&2&5&0&1&38&17&53\\
&Default Gateway IP&Normal&Short-term&736&139&27&11&13&3&764&149&888\\
&WiFi MAC&Normal&Persistent ID$^\dagger$&63&9&19&9&2&3&80&17&88\\
\hline
\multirow{4}{*}{\STAB{\rotatebox[origin=c]{90}{\parbox{1cm}{\small\centering Network\\Location}}}}
&Router ESSID&Dangerous&Location Data&216&39&4&8&0&5&218&51&260\\
&Router BSSID&Dangerous&Location Data&207&37&19&4&0&5&217&46&255\\
&Neighbor Router ESSID&Dangerous&Location Data&74&15&2&2&0&0&75&17&91\\
&Neighbor Router BSSID&Dangerous&Location Data&61&22&0&1&0&0&61&13&74\\
\hline
\multirow{3}{*}{\STAB{\rotatebox[origin=c]{90}{\parbox{1.1cm}{\small\centering GPS\\Location}}}}
&GPS ($\leq$7 meter accuracy) &Dangerous&Location Data&1352&68&65&14&1&0&1397&80&1448\\
&GPS (78 meter accuracy) &Dangerous&Location Data&1637&71&81&15&1&0&1687&84&1738\\
&GPS (787 meter accuracy) &Dangerous&Location Data&1742&74&84&16&1&0&1792&88&1844\\
\hline
\multirow{6}{*}{\STAB{\rotatebox[origin=c]{90}{{\small User Assets}}}}
&List of Apps&Normal&Profiling&38&15&4&5&1&0&41&20&61\\
&SMS&Dangerous&User Asset&1&0&0&0&0&0&1&0&1\\
&Phone Number&Dangerous&Persistent ID&26&5&1&1&0&0&27&6&32\\
&Contacts&Dangerous&User Asset&7&1&0&0&0&0&7&1&8\\
&Device Email&Dangerous&Persistent ID&924&42&21&3&1&0&941&44&966\\
&User Files&Dangerous&User Asset&-&45&-&7&-&0&-&52&52\\\hline
\end{tabular}
}
\caption{Transmission of the on-device information over the network -- without or with custom encryption, listed under the \emph{Regular} and \emph{Encrypted} columns, respectively. Items marked with $^\dagger$ can be fixed or reset/changed by the user or system choice; $^\ddagger$ marked items are considered persistent up to Android version 8 (app-specific afterward). The last column (Overall) represents all the apps that use regular or custom encrypted channels, excluding the apps that use both channels for the same data type.}%
\label{tab:on-device-info}
\vspace{-10pt}
\end{table*}
\subsection{\mbox{Characteristics of encrypted communication}}
\label{Sec:Result-Characteristics}
\subhead{Prevalence of the use of encryption}
We found that \printpercentoutof{6075}{12598} apps triggered encryption/decryption related calls from our Frida API hooking. Among these, we identified \printpercentoutof{2887}{6075} apps that send network traffic and/or %
use file storage with data originating from the hooked encryption/decryption calls; the remaining apps possibly use such calls for internal/local purposes.
We found 4 apps that used two nested layers of encryption, although no relevant traffic was observed during our test window; e.g., \textit{com.mci.balagh} (Ministry of Commerce of Saudi Arabia) app, hard-coded its remote server address in an encrypted form (nested), and subsequently decrypted twice. %
In terms of encryption type, we observed \numc{2597}, \numc{598}, \numc{119} apps used symmetric, public key, non-SDK encryption,
respectively;
see Table~\ref{tab:prevalance_enc} (in the appendix).
\subhead{Encrypted communication content} To identify the type of content sent over encrypted channels, we created a list of keywords (see the \textit{Data Type} column in Table~\ref{tab:on-device-info}):
device information used for tracking (e.g., network operator, build fingerprint), network information (e.g., device MAC), GPS coordinates in different accuracies, network location (e.g., via own/neighbor router info), and user assets (e.g., contact list, SMS). We also extract authentication tokens and session IDs embedded
in JSON, XML, HTTP headers, form-urlencoded, and form-data data structures, besides authentication passwords (see the \textit{User Credentials} column in Table~\ref{tab:content_types_on_channels}). We did not verify the tokens used for \textit{User Credentials} (except a few selected ones for manual verification, e.g., \textit{com.peppermint.livechat.findbeauty}).
Apps also exchange symmetric encryption keys over HTTP/HTTPS and non-standard channels: 82 apps sent and 10 apps received such keys over HTTP; 154 apps sent and 71 received such keys over HTTPS; and 8 apps sent such keys over non-HTTP.
\subhead{Encrypted communication channels}
To understand information leakage between different transmission channels, we categorize such channels into the following four categories.
We consider that an app transmits a leaked item (e.g., Device ID) through a \textit{Regular} channel, if the app shares the item using HTTP/S; the app may also apply custom encryption for this transmission (e.g., to the same or different hosts). For \textit{Custom Encrypted}, the leaked item is shared via at least one channel after processing the item with one or more additional encryption layers; the same item may also be shared via \textit{Regular} channels. We use \textit{Custom Encrypted for Some Hosts} for apps that share the leaked item with one or more distinct remote hosts, only under custom encryption; this leakage will be missed by other tools (although the same information leakage will be detected for other hosts when shared via \emph{Regular} channels).
If an app uses only custom encrypted channels for sharing the leaked item, which is not shared via \emph{Regular} channels, we count such app under \textit{Only Custom Encrypted}; existing tools cannot detect any leakage from this category.
See Table~\ref{tab:content_types_on_channels} for overall results for these channels, and Sec.~\ref{sec:discussion} for prominent examples.
\begin{sloppypar}
\subhead{Recipients of encrypted traffic} %
\numc{1291} and \numc{786} unique remote servers with registered domain names and subdomain names, respectively, were contacted by the \numc{2887} apps that used additional layers of encryption. See Table~\ref{tab:encrypted_destinations} (in the appendix) for the top-10 remote servers (all tracking SDKs) receiving various on-device information. Some destinations receive several on-device information items (e.g., \textit{appsflyer} receives items such as WiFi ESSID, WiFi MAC, operator, device email, build fingerprint, advertisement ID, device ID), while others receive very basic items (e.g., \textit{scorecardresearch.com} receives only the advertisement ID). %
More importantly,
22 apps sent users' GPS coordinates to these domains:
\numc{10} apps to \textit{appsflyer.com} (3 only under custom encryption), \numc{8} apps to \textit{supersonicads.com}, \numc{3} apps to \textit{batch.com} (2 only under custom encryption), and one app to \textit{pangle.io}.
\textit{Appsflyer.com} also received search terms from two applications (\textit{ru.labirint.android} and \textit{com.lotte}), and the user-entered phone number from another app (\textit{vn.gapo.app}).
\end{sloppypar}
\subhead{Encrypted channels with packers}
Android apps can use packers to protect apps from being copied or altered (e.g., by encrypting class DEX files). We used \textit{APKiD}\footnote{\textit{APKiD} (\url{https://githubhelp.com/l9sk/APKiD}) provides information on how an APK is formulated, e.g., compilers, packers, obfuscators.}
to identify the prevalence of packers in apps. We found that \printpercentoutof{121}{12598} apps use packers for Java implementations. These apps can contain API methods not detectable by common static analysis tools (e.g., \textit{Apktool} and Androguard). \textit{ThirdEye} uncovered \printpercentoutof{51}{121} apps that use cryptographic APIs, and \printpercentoutof{34}{51} apps that use custom-encrypted channels leveraging packers.
In addition, by analyzing 20 randomly selected apps that use \textit{appsflyer} tracking SDK, we found all of those apps included packed \textit{appsflyer} SDK, but \textit{APKiD} failed to identify the packed SDK. This SDK was used in \printpercentoutof{1386}{2887} apps that used custom-encrypted channels to send tracking information (see under ``Recipients of encrypted traffic'').
\begin{table*}[!htb]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{ l l r r r r r r r r r r r}
\multirow{2}{*}[0ex]{\textbf{Protocol}} & \multirow{2}{*}[0ex]{\textbf{Channel}} & \multirow{2}{*}[0ex]{\textbf{Device}}& \multirow{2}{*}[0ex]{\textbf{Network}} &\multicolumn{2}{c}{\textbf{Network Location}}&\multirow{2}{*}[0ex]{\begin{tabular}{@{}c@{}}\textbf{GPS} \\ \textbf{Location}\end{tabular}} & \multirow{2}{*}{\begin{tabular}{@{}c@{}}\textbf{User} \\ \textbf{Assets}\end{tabular}}& \multicolumn{2}{c}{\textbf{Credentials}} &
\multirow{2}{*}{\begin{tabular}{@{}c@{}}\textbf{Key} \\ \textbf{Transmission}\end{tabular}}\\
&&&&\cellcolor{gray!25}Own Router&\cellcolor{gray!25}Neighbor Router&&&\cellcolor{gray!25}Password&\cellcolor{gray!25}Token&\\ %
\hline
\multirow{4}{*}{HTTP} & Regular &2109&150&20&2&197&26&8&157&0\\
& Custom Encrypted &191&34&8&2&17&15&0&20&87\\
& Custom Encrypted for Some Hosts &93&32&7&2&13&13&0&17&86\\
& {\bf Only Custom Encrypted} &\textbf{36}&\textbf{17}&\textbf{7}&\textbf{2}&\textbf{10}&\textbf{13}&\textbf{0}&\textbf{15}&\textbf{86}\\
\hline
\multirow{4}{*}{HTTPS} & Regular &12178&5640&256&80&2442&985&327&9019&0\\
& Custom Encrypted &2272&1686&49&20&87&104&15&378&182\\
& Custom Encrypted for Some Hosts &1953&1663&45&20&83&97&15&316&181\\
& {\bf Only Custom Encrypted} &\textbf{1443}&\textbf{429}&\textbf{39}&\textbf{19}&\textbf{46}&\textbf{85}&\textbf{15}&\textbf{221}&\textbf{181}\\
\hline
\multirow{4}{*}{Non-HTTP} &Regular &120&26&0&0&1&2&0&0&0\\
& Custom Encrypted&10&5&5&0&0&0&0&4&8\\
& Custom Encrypted for Some Hosts&10&5&5&0&0&0&0&4&8\\
& {\bf Only Custom Encrypted} &\textbf{3}&\textbf{1}&\textbf{5}&\textbf{0}&\textbf{0}&\textbf{0}&\textbf{0}&\textbf{1}&\textbf{8}\\
\hline%
\multirow{4}{*}{Overall} & Regular&12420&5690&266&81&2576&1006&334&9063&0\\
& Custom Encrypted&2350&1707&61&22&102&117&15&398&263\\
& Custom Encrypted for Some Hosts &1996&1681&58&22&95&109&15&337&263\\
& {\bf Only Custom Encrypted} &\textbf{1481}&\textbf{451}&\textbf{51}&\textbf{21}&\textbf{56}&\textbf{97}&\textbf{15}&\textbf{237}&\textbf{263}\\
\hline
\end{tabular}}
\caption{Content types sent over different protocols and channels. For channel categories, see ``Encrypted communication channel'' in Sec.~\ref{Sec:Result-Characteristics}.} %
\label{tab:content_types_on_channels}
\vspace{-10pt}
\end{table*}
\subsection{Insecure key management and weak ciphers}
We found \printpercentoutof{2421}{2887} apps sent data with custom encryption using fixed keys
(on the same device in two different installations); \printpercentoutof{2112}{2421} apps used symmetric and \printpercentoutof{502}{2421} apps used public-key ciphers. %
On the other hand, \printpercentoutof{1780}{2421} apps used fixed keys across devices; \printpercentoutof{1593}{1780} and \printpercentoutof{341}{1780} of them used symmetric and public-key ciphers, respectively.
Moreover, we identified \printpercentoutof{561}{2421} apps with hard-coded keys. %
We also identified \printpercentoutof{154}{2421} apps used both symmetric and public-key ciphers with fixed keys.
We also observed that \numc{27} apps used custom encryption to store their content in the device shared storage; \numc{26} apps used symmetric keys, and one used a public key; \numc{9} apps stored ciphertext (generated using symmetric fixed keys) in shared storage, %
exposing
various content types (e.g., device information, inputs, network data) to other apps; see Table~\ref{tab:onsecure_key_mgmt} (in the appendix) and Sec.~\ref{Sec:Privacy_exposures_from_files}.
In terms of the use of broken/weak cryptographic algorithms, and short-length keys, we observed that even Android 12's cryptographic API does not restrict such usage; see Table~\ref{tab:weak_cipher_protocols} (in appendix). We identified \printpercentoutof{262}{2887} apps used insecure algorithms, e.g., DES (\numc{106}), RC4 (\numc{3}), 3DES (\numc{34}), RSA-384 (\numc{1}), and RSA-512 (\numc{60}). %
The use of fixed keys and weak ciphers can lead to serious privacy/security issues, depending on the app; see Sec.~\ref{sec:discussion} under ``New security vulnerabilities''. Note that if an app uses a fixed/hard-coded key to encrypt data sent over HTTPS, then this will not lead to data exposure to a network attacker.
\subsection{Apps sending geolocation information}
\subhead{GPS and router SSID}
We observed that \printpercentoutof{2727}{12598} apps sent GPS coordinates~\cite{wiki:Decimal_degrees} and router's SSID to remote servers; \printpercent{129}{2727} of them used additional encryption to send this information; see Table~\ref{tab:on-device-info}.
Interestingly, \numc{197} apps sent GPS coordinates to third-party services, but not to their own servers.
For example, the official app of Russian Post (\textit{com.octopod.russianpost.client.android}) sent GPS coordinates (via regular HTTPS) to \textit{tracker-api.my.com}, a subsidiary of the Russian social media company VK (\url{vk.com}). %
On the other hand, \textit{com.cashingltd.cashing}, \textit{com.tdcm.android.trueyou} sent GPS coordinates to \textit{appsflyer.com} only under custom encryption.
\subhead{Neighbor's router scanning}
Apps with location permission can collect BSSID, ESSID from the app user's router, as well as all nearby wireless routers. Such router IDs have been used to determine physical location since 2011 (e.g.,~\cite{bssid-google-2011}), and currently public databases of such ID-location mapping exist for hundreds of millions of routers (see e.g., \url{wigle.net}, \url{mylnikov.org}); this has also been exacerbated by the increasing adoption of IPv6~\cite{bssid-bh2021}. A user's location-capable app thus can reveal not only her location, but also the location of her neighbors (irrespective of the apps/devices used by them). We found \numc{102} apps that sent neighboring router IDs to their servers (notable apps: PayPal, PayPal Business, Yandex, Mail.ru, VK, Kaspersky Security and VPN). More importantly, \printpercent{22}{102} apps sent such IDs only via custom encrypted channels; a notable example: Baidu Search (\textit{com.baidu.searchbox}). Even after a user moves to a new location with her old router, her location change can still be exposed, if she has a neighbor with a location-capable app. The \numc{102} apps that we identified, have been mostly downloaded by users from Russia (\numc{66480721} users), Brazil (\numc{41163244}), Indonesia (\numc{9566304}), USA (\numc{8802562}), and India (\numc{6749443}); estimated download numbers are from \textit{similarweb}~\cite{similarweb} (Q2, 2021, for Google Play apps). Some of these non-Google-Play apps are also very popular; e.g., \textit{com.baidu.searchbox} and \textit{com.sina.weibo}, ranked 9th and 12th, respectively, in \url{AppinChina.co} app store.
\subsection{Exposures via files} %
\label{Sec:Privacy_exposures_from_files}
\subhead{Leftover files in external storage}
Among our analyzed apps that created files in external storage, 128 apps stored device information, 12 stored GPS coordinates, and 10 stored network information.
\printpercentoutof{27}{150} of these apps used \textit{custom encryption}\ to store content in external storage; \printpercentoutof{9}{27} apps stored device info and one of those apps stored authentication tokens; e.g., \textit{ru.mediafort.povarenok} stored the DES-encrypted value of the device email (i.e., device account) in \textit{mediafort/data.txt}; \textit{tw.comico} stored user authentication tokens with a fixed key.
\subhead{Covert channels}
We found \numc{44} apps that stored device information in common folder paths in shared storage, and \numc{104} apps that checked for the existence of these paths. These files can be used as inter-process communication (IPC)/covert channels --- \numc{4} apps from different vendors wrote the device WiFi MAC address to the \textit{.cc/.adfwe.dat} file path, and \numc{8} apps from different vendors checked the existence of this path; \numc{20} apps saved the MD5 hash of the WiFi interface MAC address, and \numc{67} apps checked the existence of that path. Moreover, we detected that the \textit{app.buzz.share} app and its lite version, with over \numc{110} million downloads, stored identifiers, such as the device ID, in a file (\textit{bytedance/device\_parameters\_i18n.dat}) encrypted with DES.
Three more apps from different vendors saved the same data, key, and encryption algorithm information to the same path.
\begin{newtext}
\section{Effectiveness and Limitations}
\subhead{Overall effectiveness}
We verified our initial results through manual inspection, and refined the tool before conducting the large-scale study. Note that we do not have any ground truth on the targeted leakage, and we also cannot rely on any existing tools for accuracy; e.g., AGRIGENTO~\cite{obfus_res} could have been used in some limited cases (e.g., for the data types considered by the tool), but it is now outdated (designed for Android 4).
\textit{ThirdEye}'s effectiveness is apparent from the numerous new privacy exposures of various types that we uncovered. %
However, for some apps, our analysis may fail to fully identify the security and privacy issues due to the use of non-standard and custom encryption channels---see below under limitations. We first summarize the strengths of \textit{ThirdEye}\ components, which, in combination, contribute to our overall effectiveness.
Our UI interactor (partially) supports custom registration/login and Google sign-in, detects already explored widgets/objects to prevent duplicate interaction/exploration, and avoids non-targeted activities (e.g., ads). In contrast, Android Monkey lacks these features, and hence can take longer for the same code coverage and miss anything beyond the login page. We report the results of a preliminary experimental comparison with Monkey below.
Our operations logger performs network/cryptographic/file API instrumentations. It is resistant to obfuscation/packing for identifying Android SDK cryptographic APIs, supports HTTP/S and non-HTTP protocols, supports (unobfuscated) 3rd-party encryption/decryption API, supports defragmentation of multi-part cryptographic functions and network packets. These features help us to understand a lot of custom-encrypted and non-HTTP traffic, and identify more privacy exposures compared to existing work.
Our data flow inspector detects privacy issues in the collected network traffic/files, by matching actual plaintext (collected by the app operations logger) and ciphertext from the network (after handling any IP defragmentation)---i.e., our reported exposures indeed happen during app runtime.
This helps us to avoid false positives. We reliably detect the use of weak cryptographic keys and algorithms; support various privacy-sensitive items (easily extensible); support various encoding schemes, and nested encoding and encryption; support file detection within encrypted traffic; and distinguish between app and OS traffic. These features enable the data flow inspector to accurately detect privacy and (potential) security problems.
\subhead{UI interactor vs.\ Android Monkey}
To compare the effectiveness of our UI interactor against Android Monkey (commonly used in past studies~\cite{50_ways,obfus_res,bakopoulou2020exposures}), we randomly chose 150 apps that exceeded the 5-minute threshold from our results. We set up two new experiments with a 10-minute threshold (following~\cite{50_ways}): in one experiment we used our UI interactor, and in another, we used the Monkey as the UI exerciser. We also configured Monkey to generate a large number of UI events, by setting a short interval of 0.3 seconds between events. Note that our interactor generates far fewer events---on average, 10 seconds per event, as we keep states to avoid duplicate events, perform text analysis, and use the online Google Translate service. We used the latest versions of the 150 randomly chosen apps, and 115 of them completed the analysis without any unexpected termination (in both Monkey and UI-interactor; we did not consider any partial results in this test).
In the end, our interactor spent about 7.4 minutes (444.36 seconds) on average for each app, while Monkey used the full 10-minute window (600 seconds) for each app. We compared our UI interactor and Monkey in terms of the detected various privacy-related items (a total of 24 types): on average, \textit{ThirdEye}\ detected approximately 6.5\% more apps with privacy leaks with our UI interactor compared to Monkey; see Fig.~\ref{fig:comp_pct_apps_data_types} (in the appendix). Most apps transmitted more privacy items when instrumented with our UI interactor. We also found that Monkey sent more duplicate items to the same host, or to new hosts (not detected by us). Most new hosts in Monkey received device names that appeared in the user-agent of an app's WebView pages that we intentionally avoided interaction with (e.g., ad windows, non-Google 3rd-party logins).
Additionally, we manually checked the support of login for these apps, as the UI interactor can detect privacy leaks from app features available only after a successful login. We detected that 77/115 apps require authentication: 40 only supported app-specific authentication; 34 supported both Google and app-specific authentication; and 3 supported only Google sign-in. We succeeded in automatically logging in to 19 apps with Google sign-in and to 4 apps with app-specific registration/sign-in. %
This partial support of login helps us to explore more app features and related leaks compared to Monkey.
\subhead{Analysis time threshold: 5 vs.\ 10-minute window}
From the UI interactor vs.\ Monkey experiment, we estimate the privacy leaks undetected due to our 5-minute test window, by comparing leaks that occur before and after the threshold. Overall, more leaks are detected with a higher threshold, but the difference is not very significant. 6/115 apps sent the following privacy-related items after the 5-minute threshold: 1 sent the device name, 2 the device email, 1 WiFi info (router BSSID, ESSID, and neighbor router ESSID), 1 the dummy0 interface, and 1 the device name; i.e., 109/115 apps did not leak any new privacy-related items after the 5-minute threshold (and before the end of the 10-minute window). We also observed that most apps requiring over 5 minutes are WebView apps with many pages/widgets. Note that the analysis duration is configurable---a trade-off between coverage and total analysis time/resources.
\end{newtext}
\subhead{Limitations}
(1) Although we were able to identify PII information sent over the network with multiple forms of obfuscations (e.g., encryption, encoding, hashing), our results are a lower bound as we may not have identified traffic with more complex or unknown obfuscation techniques.
(2) Besides obfuscated PII, obfuscated method names may also reduce \textit{ThirdEye}'s effectiveness, as we rely on method names for hooking possible encryption/decryption functions. Obfuscation tools such as ProGuard cannot modify method names in the Android cryptographic SDK (or any Android Framework APIs), allowing us to hook such functions successfully. However, these tools may hide from us the names of the custom-developed cryptographic functions, and as such, \textit{ThirdEye}\ cannot (automatically) find and hook these functions. From our measurement, we found a total of 119 apps that used non-SDK encryption; these apps either did not use obfuscation, or used some tools which did not obfuscate the method names. However, we could not measure how many apps with non-SDK encryption that we missed. Past studies measured the overall use of obfuscation tools by app developers, e.g., 24.92\% of 1.7M apps were obfuscated according to Wermke et al.~\cite{wermke2018large} (but no data on the use of non-SDK cryptographic implementations).
(3) Our instrumentations do not cover apps built solely using the native NDKs.
Instead, our methodology indirectly covers NDK functions that are wrapped in SDKs.
(4) The AndroidViewClient that we used to automate UI interactions, cannot handle animated UI elements (e.g., a button with an animation). We also do not handle UI elements created with third-party views (i.e., not extending \textit{View}~\cite{view} class) and images. Our support for authentication is also limited to Google sign-in and custom registrations, and our UI interactor currently does not complete steps that require verification via SMS/email for registration/login. For apps with unhandled UI elements and logins, we currently fail to detect privacy and security issues in features behind these elements or logins. (5)
As we do not know which apps, or which app features in an app may use non-standard/custom-encrypted channels, there is no guarantee that our UI interactor would trigger \emph{all} such covert channel related behaviors/features. We systematically go through all app UIs and trigger as many actions as we can to find these channels, if used by a target app.
(6) Our network instrumentation currently does not support HTTP/3 (QUIC), DTLS, and TLS without HTTPS.(7) Our Frida instrumentation works for the evaluated apps; however, if apps use advanced techniques, such as observing the memory map, our instrumentation can still be detected.
\section{Case Studies and Discussion} %
\label{sec:discussion}
In this section, we summarize privacy implications of the use of non-standard and covert channels to collect/send sensitive personal/device information. We also discuss the new security vulnerabilities introduced by these channels. We highlight such critical privacy and security implications using high-profile apps/SDKs as examples; see also Table~\ref{tab:nice_table} in the appendix. %
\subhead{Hiding privacy exposures} %
As we observed, a significant number of apps use non-standard and covert channels to hide the collection of PII/device information --- shared with their own app servers or third-party servers/trackers, or both. This may be due to increased scrutiny by the app markets, e.g., Google Play Protect~\cite{google_play_protect},
or due to the added privacy measures in newer versions of Android (10 and up); we cannot be certain about app developers' motivation on this.
However, such practices are certainly detrimental to user privacy.
We list a few examples below.
\\$\bullet$ Dailyhunt (\textit{com.eterno}, 100M+ installs), a top news app in India, sends users' contact list to its servers using an additional encrypted channel over HTTPS. It compresses and encrypts each contact's details using AES-128-CBC with a random key and a null IV, and sends the encrypted contacts to its server. The random key is also sent encrypted under a hard-coded RSA-1024 key. %
\\$\bullet$ SHAREit (\textit{com.lenovo.anyshare.gps}, 1B+ installs), an extremely popular app to securely share/manage large files, sends device GPS location to third-party adv/tracking domains (\textit{adcs.rqmob.com}~\cite{list_base_rules_block_apps}
and \textit{dt.beyla.site}~\cite{sitereview_dt_beyla_site})
under custom encryption over HTTP, and to \textit{mcds.wshareit.com} over regular HTTPS. For custom encryption, \textit{SHAREit} uses an AES-128 random key generated on the target device, which is sent encrypted via HTTP under a hard-coded RSA-1024 key.
Similarly, the Amazon Alexa (\textit{com.amazon.dee.app}, 50M+ installs) app sends the device email and the WiFi IP address (as cookie parameters) to their servers, only under custom encryption over HTTPS. %
This app is used to set up Alexa-enabled devices for automated tasks (e.g., creating shopping lists).
\\$\bullet$ With IPv6, the device interface's hardware MAC address is embedded in the IPv6 address~\cite{bssid-bh2021}, which is made available via the dummy interface (dummy0)~\cite{creating_dummy_interface}. Although the MAC address is randomized (by default from Android 10), the corresponding IPv6 address is fixed until the next reboot of the device~\cite{dummy_interface}, and as such, can be used to track users between device reboots (a relatively infrequent event).
This technique is apparently being used by \numc{202} apps, %
including very high-profile apps such as \textit{com.baidu.searchbox}, and \textit{com.paypal.android.p2pmobile}, where the apps explicitly collect the dummy0 interface information but use IPv4 for communication;
\printpercentoutof{14}{202} of them send such info via non-standard channels. %
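For concreteness, the standard EUI-64 derivation that embeds a MAC address in an IPv6 interface identifier (and thus makes the MAC recoverable from the address) can be sketched as follows (a minimal illustrative sketch, not \textit{ThirdEye}'s code):
\begin{verbatim}
def eui64_link_local(mac):
    """Standard EUI-64 expansion: the MAC is recoverable from the address."""
    b = [int(x, 16) for x in mac.split(":")]
    b[0] ^= 0x02                         # flip the universal/local bit
    eui = b[:3] + [0xFF, 0xFE] + b[3:]   # insert ff:fe in the middle
    groups = ["%02x%02x" % (eui[i], eui[i + 1]) for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

# eui64_link_local("02:00:4c:4f:4f:50") == "fe80::0000:4cff:fe4f:4f50"
\end{verbatim}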
\subhead{New security vulnerabilities}
We list example apps where new vulnerabilities resulted from the use of custom encrypted channels.
\\$\bullet$ UC Browser (\textit{com.UCMobile.intl}), a mobile browser with 500M+ installs, sends device information (e.g., device ID, operator, WiFi MAC, advertisement ID), and GPS location over a custom encryption channel under plain HTTP protocol with fixed-keys, and thus exposes the collected information to any network adversary.
\\$\bullet$ CamScanner (\textit{com.intsig.camscanner}), a widely-used app for document scanning (3.9M+ installations), encrypts a user's Firebase token using a random symmetric key, which is then encrypted using a hard-coded RSA-512 public key; the resulting ciphertext (the Firebase token and random key) is then sent to \textit{54.183.90.125:8090} and \textit{54.177.44.214:8090} using a non-standard protocol over TCP. We extracted the corresponding RSA-512 private key, and then we could recover the symmetric key and in turn, access the plaintext Firebase token,
just by collecting ciphertext from the network.
\\$\bullet$ We found \numc{22} apps that sent authentication tokens over a secure HTTPS channel initially, but then exposed such tokens over an insecurely-implemented encrypted channel over HTTP or non-HTTP; e.g., \textit{com.peppermint.livechat.findbeauty} (a dating app, 5M+ installations) sent the user token over a non-HTTP channel that is AES-ECB encrypted with a hard-coded key.
\\$\bullet$ We also found 5 VPN clients that use the shadowsocks~\cite{shadowsocks} protocol to receive free and premium server credentials from a proxy server through an encrypted channel. After decrypting the credentials on the device, the client checks whether the user can authenticate and connect to premium or free servers. Since this check occurs at the client side, and \textit{ThirdEye} can find the corresponding encryption parameters, we obtained the connection information and credentials (e.g., server address, password) required to connect to the server.\\
\begin{newtext}
\subhead{To use or not to use non-standard channels and custom encryption} %
We checked several apps manually to understand their reasons for using non-standard and custom encryption channels. The examples we observed do not clearly justify such channels, at least not in an obvious way (there may be deployment/operation constraints we are not aware of). Notable cases with raw TCP/UDP connections: Forex Event - Platform Trading (\textit{com.bonus.welfare}) uses a plain TCP channel to receive the latest shares and forex events; Modern Combat 5 (\textit{com.gameloft.android.ANMP.GloftM5HM}) uses a TCP channel apparently as a game control channel, and sends game server details and the user access tokens as plaintext; and Netspark Real-time filter (\textit{con.netspark.mobile}), a parental control app, sends real-time device activities (e.g., application-related events) to their server using a plain UDP channel.
The use of custom encryption should be avoided in general; as evident from our results, most app developers fail to use such encryption securely, e.g., about 87\% of apps used fixed keys for their symmetric cryptographic operations, where the ciphertext is indeed sent to the network. For specific app issues, we suggest the following fixes. For example, Recipes in Russian (\textit{ru.mediafort.povarenok}) could use their own public key to encrypt and store the device email on the shared storage; Comico (\textit{tw.comico}) could do the same to store their user authentication tokens; CamScanner's RSA-512, and both Dailyhunt and SHAREit's RSA-1024 keys could be replaced with a stronger one (e.g., RSA-2048); UC Browser could simply use HTTPS (instead of custom encryption over HTTP with a fixed key); for the 5 VPN apps that expose premium account checks at the end client side, these apps should perform the validation at their server-ends; and 22 apps that use custom encryption to share securely-established session tokens, should simply use HTTPS.
In the end, custom encryption is generally not the solution for any of the reasons that we observed---all of which can be easily met by Android's default crypto support. Besides using HTTPS properly for communication, app developers should rely on Android Keystore for local key management, and Android EncryptedFile and EncryptedSharedPreferences for securely storing local data~\cite{work_with_data_more_securely}.
To protect confidentiality of selected private content against third-party content-scanning/distribution services (e.g., allowing CDNs to scan HTTPS traffic), custom encryption may be used, but only under HTTPS (to limit any weakness of custom encryption to the CDNs, instead of any on-path attacker). To avoid the use of custom encryption over non-standard channels, e.g., AES-over-UDP/TCP, developers should instead choose QUIC.
\end{newtext}
\section{Related work}
\subhead{Privacy leakage via covert channels}
Side channels allow apps to access protected data/system resources, while covert channels allow an app with access to permission-protected data to share it with another app that lacks the corresponding permission, which can then leak the information. Reardon et al.~\cite{50_ways} automated the execution of \numc{88000} apps (at system and network levels), monitored sensitive data sent over the network by apps, and flagged apps that should not have had access to the transmitted data due to lack of permissions.
The authors also reverse-engineered the respective components of apps to determine how the data was accessed, and studied the abuse from side and covert channels.
Examples from their findings include: 5 apps collect MAC addresses of connected WiFi base stations from ARP cache; an analytic provider (\textit{Unity}) obtained device MAC address using the \textit{ioctl} system call (42 apps were found to exploit this); third-party libraries from \textit{Baidu} and \textit{Salmonads}, wrote phone's IMEI to the SD card, and other apps that do not have permission to access the IMEI, can read from the SD card (13 such apps were found). They also found that \printpercentoutof{153}{88000} apps used hard-coded encryption keys.
Palfinger et al.~\cite{androtime} built a framework to identify timing side channels (e.g., via querying installed apps, active accounts, files, browser logins) in Android APIs.
The leaked information can be used to fingerprint users, identify user habits or infer user identity.
Bakopoulou et al.~\cite{bakopoulou2020exposures} intercepted the network traffic from 400 popular apps (with monkeyrunner), and performed manual/automated analysis to understand PII exposures. They found 29 apps exposed the ad ID and location info via UDP; 7 apps exposed Android ID, and another exposed username over plain TCP. %
We implement \textit{ThirdEye}\ to detect information leaks from non-standard and covert channels not reported in past studies --- e.g., malicious apps revealing neighbor's BSSID, obfuscation using nested encryption/decryption.
In addition to HTTP/HTTPS, we also capture traffic from other network protocols above TCP/UDP.
We found \printpercentoutof{2880}{2887} apps that send/receive data over custom encrypted channels to/from the network; \printpercentoutof{414}{2880} of these apps used hard-coded keys in their communications. We also look for more fine-grained privacy-sensitive information --- e.g., GPS coordinates at different accuracies, and user credentials.
\subhead{Obfuscation-resilient privacy leakage detection tools}
Mobile apps and ad libraries can leverage various obfuscation techniques (i.e., encoding, encryption, formatting) to hide the leakage of users' private information. Continella et al.~\cite{obfus_res} developed a tool (\textit{AGRIGENTO}) based on blackbox differential analysis (i.e., using a baseline, and observing the network traffic flow after modifying the sources of private information) for privacy leak detection resilient to underlying obfuscations. AGRIGENTO (implemented on Android 4) captures HTTP/HTTPS traffic using mitmproxy. The authors evaluated AGRIGENTO using the 100 most popular apps from Google Play, and identified that 46 of them had privacy leaks; with manual inspection, the authors found that \printpercentoutof{4}{46} of those apps were false positives.
AGRIGENTO does not consider non-deterministic sources (e.g., one-time non-reusable keys, authentication tokens), and focuses on privacy leakages only from deterministic sources, i.e., Android ID, contacts, ICCID, IMEI, IMSI, location, MAC address, and phone number.
With AGRIGENTO, Continella et al.~\cite{obfus_res} also reported false positives for specific sources of information leaked in a number of apps --- Android ID (5), IMEI (9), MAC address (11), IMSI (13), ICCID (13), location (11), phone number (16), contacts (13).
In contrast to AGRIGENTO, \textit{ThirdEye}\ uses more comprehensive UI interactions, and relies on deep packet inspection; therefore, it can capture more privacy leaks from \emph{both} deterministic and non-deterministic sources.
\subhead{UI automation frameworks}
Past work~\cite{50_ways,obfus_res,syscall,explore_arm} has mostly relied on Appium~\cite{appium} and monkeyrunner~\cite{monkeyrunner} for Android UI automation. Appium uses app-specific scripts to drive automation relating to interactions with UI elements. %
Monkeyrunner solely uses random clicks on UI elements for automation. %
Dynodroid~\cite{dynodroid} focuses on processing automatic input. SmartDroid~\cite{smartdroid,dcdroid} automatically reveals UI-based trigger conditions of sensitive behaviors of Android apps, but it cannot interact with WebView (commonly used by recent apps). Patel et al.~\cite{patel2018effectiveness} found that random testing with monkeyrunner is extremely efficient, effective, and leads to higher coverage. In contrast, Wang et al.~\cite{dcdroid} argue that monkeyrunner is unsuited for UI automation (for testing specific SSL/TLS vulnerabilities), as its random clicks do not precisely target the specific area on the UI.
They leverage AndroidViewClient for UI interactions (e.g., check a radio button, input content to a text box), based on the priority of a UI element; the priority depends on the vulnerabilities in the SSL/TLS implementation. %
In contrast, we prioritize UI element interactions based on a list of clickable/fillable elements.
Our UI interactor is also built on top of AndroidViewClient, but achieves better code coverage (e.g., it accommodates UI elements with non-English labels), is not restricted to triggering UI elements associated with vulnerable SSL/TLS implementations, and supports running on multiple devices.%
\subhead{Root detection evasion} %
We implement effective evasion mechanisms to bypass various root detection techniques incorporated by some apps. We use rule-based API hooking, and support both Android SDK and NDK based root detection. We surpass the capabilities of common tools including \textit{RootCloak}~\cite{rootcloak}, \textit{RootCloak Plus}~\cite{rootcloakplus} and \textit{xCon}~\cite{xcon}, and handle more modern root detection measures; e.g., \textit{RootCloak} only supports up to Android \numc{6} and cannot bypass detection libraries such as \textit{RootBeer}~\cite{rootbeer}. We support Android 12 and can bypass more complicated techniques, including the latest version of \textit{RootBeer}, which is used
in \printpercentoutof{178}{6075} apps that trigger encryption/decryption APIs in our test.
\subhead{Defense against anti-reverse engineering techniques}
Android apps are prone to efficient reverse engineering, as apps are written in a high-level language (i.e., Java) that can be decompiled into simple bytecode~\cite{lim2016android}. To protect apps from reverse engineering, past studies~\cite{ghosh2013shielding,xu2015toward,sun2015android} have %
discussed different obfuscation, dynamic code loading, packing, encryption and anti-debugging techniques and their detection and evasion.
We use Frida~\cite{Frida} API hooking to implement evasion against dynamic anti-reverse engineering detection techniques (e.g., checks for root access, the package installer, mock location, certificate pinning, ADB).
Some of our techniques are more effective (e.g., mock location detection) compared to existing anti-evasion measures.
\begin{newtext}
\subhead{Summary of differences with existing work}
AGRIGENTO~\cite{obfus_res} is closest to our work; however, we cannot compare with it experimentally, as it was developed for the now-outdated Android 4. In terms of methodology, AGRIGENTO detects leakage of eight predefined, deterministic privacy-sensitive values: AndroidID, contacts, ICCID, IMEI, IMSI, location, MAC-address, and phone-number. %
We detect both fixed and dynamic values from deterministic/non-deterministic sources, as we have access to the plaintext corresponding to the full request. Also, due to its use of differential analysis, AGRIGENTO produces a significant number of false positives.
Reardon et al.~\cite{50_ways} looked into unauthorized access and transmission of private data where an app does not have the necessary permissions. However, they did not address authorized/unauthorized privacy leakage via encrypted (beyond HTTPS) or non-HTTP channels, which is the focus of \textit{ThirdEye}. More concretely, from our results as summarized in Table~\ref{tab:content_types_on_channels}, everything except the ``Regular'' channels will be missed by other tools, except AGRIGENTO (albeit with partial detection only, as discussed above). Note that AGRIGENTO also does not consider security problems, and as such, issues reported under ``Credentials'' and ``Key Transmission'' in Table~\ref{tab:content_types_on_channels} will be missed. We also detect more privacy-sensitive data types (a total of 30 in the current implementation), compared to existing work (a total of 11 types in~\cite{obfus_res, 50_ways}). This can be attributed to our use of various known techniques in combination, such as bypassing runtime evasion, collecting non-HTTP traffic via tcpdump, and logging non-SDK encryption/decryption methods and cryptographic APIs using rule-based API hooking (which can capture any runtime activities based on predefined criteria).
\end{newtext}
\section{Conclusion}
Considering the significant threat arising from non-standard and covert channels in the Android ecosystem, a better understanding of privacy exposures and security issues is necessary. However, identifying privacy exposure via such channels is not straightforward; thus, users and app market providers may remain unaware of the privacy leakages and security problems introduced by these channels.
We introduce \textit{ThirdEye}, a tool that can detect covert channels with multiple levels of obfuscation (e.g., encrypted data over HTTPS, encryption at nested levels). We also found security weaknesses caused by the use of custom-encrypted/covert channels (e.g., vulnerable keys and encryption algorithms).
With the findings and contributions from our study, we hope to spur further research in the context of non-standard and covert channels.
\begin{acks}
We are grateful to all anonymous CCS2022 reviewers for their insightful suggestions and comments, and for their guidance on the final version of this paper. We also appreciate the help we received from the members of Concordia's Madiba Security
Research Group. The third author is supported in part by an
NSERC Discovery Grant.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{\medskip Introduction\label{introduction}}
Numerical simulations of the freely decaying, incompressible Navier-Stokes
equations in two dimensions have shown that under appropriate conditions and
after a relatively short period of chaotic mixing, the vorticity becomes
strongly localized in a collection of vortices which move in a background of
weak vorticity gradients [\cite{mcwilliams}]. As long as their sizes are much
smaller than the extension of the domain, the collection of vortices may
evolve self-similarly in time [\cite{santangelo} ,\cite{car1} ,\cite{car2}
,\cite{cardoso} ,\cite{borue} ,\cite{Warn}] until one large-scale structure
remains. If the corresponding Reynolds number is large enough, the time
evolution of these so-called coherent structures is usually given by a uniform
translation or rotation and by relatively slow decay and diffusion, the last
two being due to the presence of a non-vanishing viscosity. In other words, in a
co-translating or co-rotating frame of reference, one has quasi-stationary
structures (QSS) which are, to a good approximation, stationary solutions of
the inviscid Euler equations. Accordingly, their corresponding
vorticity\footnote{We use the subscript $S$ in order to indicate that the
field corresponds to an observed QSS.} fields $\omega_{S}(x,y)$ and stream
functions $\psi_{S}(x,y)$ are, to a good approximation, functionally related,
i.e., $\omega_{S}(x,y)\approx\omega_{S}(\psi_{S}(x,y)).$ Similar phenomena
have been observed in the quasi two-dimensional flows studied in the
laboratory [\cite{flor} ,\cite{marteau}]. The only exception to this rule is
provided by the large-scale, oscillatory states that occasionally result at
the end of the chaotic mixing period [\cite{segre} ,\cite{brands2}]. In many
cases, e.g., when the initial vorticity field is randomly distributed in
space, the formation of the QSS corresponds to the segregation of
different-sign vorticity and the subsequent coalescence of equal-sign
vorticity, i.e., to a \emph{spatial demixing} of vorticity.\newline Besides
the theoretical fluid-dynamics context, a good understanding of the
above-described process has implications in many other physically interesting
situations like: geophysical flows [\cite{hopf}], plasmas in magnetic fields
[\cite{kraich2} ], galaxy structure [\cite{stellar}], etc. For these reasons
numerical and experimental studies are still being performed and have already
led to a number of ``scatter plots'', i.e., to the determination of the
$\omega_{S}$-$\psi_{S}$ functional relation as a characterization of the QSS
which appear under different circumstances. Simultaneously, on the theoretical
side, approaches have been proposed which attempt, among other things, to
predict the QSS directly from the initial vorticity field; if successful in
this, such methods would also alleviate the need to perform costly
numerical and laboratory studies.
The above-mentioned studies point out the large enstrophy decay that often
takes place during the formation of the QSS; sometimes, also the evolution of
the skewness is reported. But for these two lowest-order moments, little
attention has been paid to the evolution of the vorticity-area distribution,
defined in equation (\ref{G}), during the formation of the QSS. In the context
of 2D flows, this distribution plays a very important role: with appropriate
boundary conditions, it is conserved by the inviscid Euler equations and the
stationary solutions are the maximizers of the energy for the given vorticity
distribution [\cite{Arnold1} ,\cite{Arnold2}], see also Subsection
\ref{vorti-redistribu}\ref{general vorti}.
In the present work we study the time evolution of the vorticity-area
distribution in two-dimensional, incompressible and viscous fluids. Many of the
ideas we present should be applicable also to more realistic systems, e.g.,
when potential vorticity is the Lagrangian invariant. The paper is structured
as follows: in the following Section, we derive, from the Navier-Stokes
equation, the time evolution of the vorticity-area density. It is an
advection-diffusion equation with a time dependent, \emph{negative} diffusion
coefficient. For the purpose of illustration, explicit calculations are
presented in Subsection \ref{Vorticity-area}\ref{Gaussian} for the case of a
Gaussian monopole and, in Subsection \ref{Vorticity-area}\ref{self-similar},
for the case of self-similar decay described by Bartello and Warn in
[\cite{Warn}]. Based on these diffusion coefficients, it would be interesting
to set up a classification distinguishing the various typical scenarios
leading to the QSS. Considering the QSS, a very natural but difficult question
arises about its relation to theoretical predictions, namely, it is not
trivial to quantify how good or bad the agreement between observation and
prediction is, as already stressed, e.g., in [\cite{Majda} ]. For this and
related reasons, in Section \ref{vorti-redistribu}, we show that a perfect
agreement between an observed QSS and the corresponding prediction obtained
through the statistical-mechanics approach as developed by J. Miller et al.
[\cite{millerPRL} ,\cite{robertJSP} ,\cite{millerPRA}] and by Robert and
Sommeria [\cite{robertfrans} ,\cite{robsom}] would imply the equality of the
difference in the moments of the initial and final vorticity distributions on
the one hand and a set of quantities that can be directly obtained from the
experimental $\omega_{S}$-$\psi_{S}$ relation on the other side. The details
of the proof can be found in the Appendix. In \ref{vorti-redistribu}\ref{new
ExperimentAnalysis}, we discuss how to use these quantities as yardsticks in
order to quantify the validity of the statistical-mechanics approach in
numerical and laboratory experiments. In the last Section we summarize our
results and add some comments.
\section{Vorticity-area distribution\label{Vorticity-area}}
\subsection{Time evolution\label{anti-diffusion}}
It turns out that the vorticity-area density undergoes an anti-diffusion
process as we next show. The time dependent vorticity-area density
$G(\sigma,t)$ is given by
\begin{equation}
G(\sigma,t):=\int_{A}\!dxdy\,\delta(\sigma-\omega(x,y,t)),\label{G}%
\end{equation}
where $A$ denotes the domain and the vorticity field $\omega(x,y,t):=%
\vec{k}\cdot(\mathbf{\nabla}\times\vec{v})$ evolves according to the
Navier-Stokes equation
\begin{equation}
\frac{\partial\omega}{\partial t}+\vec{v}\cdot\mathbf{\nabla}\omega=\nu
\Delta\omega,\label{NS}%
\end{equation}
where the incompressibility condition $\mathbf{\nabla\cdot}\vec{v}=0$ has been
taken into account. The Navier-Stokes equation determines the time evolution
of the vorticity-area density; one has
\begin{align*}
\frac{\partial G(\sigma,t)}{\partial t} & =\int_{A}\!dxdy\,\delta^{\prime
}(\sigma-\omega(x,y,t))\vec{v}\cdot\mathbf{\nabla}\omega-\nu\int
_{A}\!dxdy\,\delta^{\prime}(\sigma-\omega(x,y,t))\Delta\omega\\
& =-\int_{A}\!dxdy\,\vec{v}\cdot\mathbf{\nabla}\delta(\sigma-\omega
(x,y,t))-\nu\frac{\partial\;}{\partial\sigma}\int_{A}\!dxdy\,\delta
(\sigma-\omega(x,y,t))\Delta\omega.
\end{align*}
We will assume impermeable boundaries $\partial A$, i.e., that the velocity
component perpendicular to $\partial A$ vanishes. Therefore, the first
integral in the last expression is zero. Partial integration of the second
integral leads to
\begin{equation}
\frac{\partial G(\sigma,t)}{\partial t}=-\nu\frac{\partial^{2}\;}%
{\partial\sigma^{2}}\int_{A}\!dxdy\,\delta(\sigma-\omega(x,y,t))\left|
\mathbf{\nabla}\omega\right| ^{2}-\nu\frac{\partial\;}{\partial\sigma}%
\oint_{\partial A}\delta(\sigma-\omega)\mathbf{\nabla}\omega\cdot\vec{n}\,dl.
\end{equation}
The last term represents the net vorticity generation or destruction that
occurs on the boundary $\partial A$ of the domain, $\vec{n}$ is the outward
oriented, unit vector normal to the boundary. This source term vanishes in
some special cases like doubly periodic boundary conditions or when the
support of the vorticity field remains always away from the boundary.
\newline From the definition of $G(\sigma,t)$ one sees that if $G(\sigma,t)=0$
and $\left| \mathbf{\nabla}\omega\right| $ is finite, then also the
integrals in the last expression must vanish. Consequently, we can define an
effective diffusion coefficient $D(\sigma,t)$ through
\begin{equation}
\nu\int_{A}\!dxdy\,\delta(\sigma-\omega(x,y,t))\left| \mathbf{\nabla}%
\omega\right| ^{2}=:D(\sigma,t)G(\sigma,t).\label{difcoef}%
\end{equation}
From this definition it is clear that $D(\sigma,t)\geq0$ is the average of
$\nu\left| \nabla\omega\right| ^{2}$ over the area on which the vorticity
takes the value $\sigma.$ Therefore, the time evolution of $G(\sigma,t)$ can
be written as an advection-diffusion equation in $\sigma$-space with a source
term, i.e.,
\begin{equation}
\frac{\partial G(\sigma,t)}{\partial t}=-\frac{\partial\;}{\partial\sigma
}\left[ s(\sigma,t)+\frac{\partial D(\sigma,t)}{\partial\sigma}%
G(\sigma,t)+D(\sigma,t)\frac{\partial G(\sigma,t)}{\partial\sigma}\right]
,\label{dG/dt}%
\end{equation}
with a \emph{negative }diffusion coefficient $-D(\sigma,t),$ a ``velocity
field'' $\partial D(\sigma,t)/\partial\sigma$ and where minus the $\sigma
$-derivative of $s(\sigma,t):=\nu\oint_{\partial A}\delta(\sigma
-\omega)\mathbf{\nabla}\omega\cdot\vec{n}\,dl$ is the vorticity source at the boundary.
Introducing the vorticity moments
\[
\Gamma_{m}(t):=\int\!d\sigma\,\sigma^{m}G(\sigma,t)=\int_{A}\!dxdy\,\omega
^{m}(x,y,t),
\]
one can check that the equations above imply that the first moment of the
distribution $G(\sigma,t),$ i.e., the total circulation $\Gamma_{1}$, evolves
according to
\[
\frac{d\Gamma_{1}}{dt}=\int\!d\sigma\,s(\sigma,t),
\]
and that the even moments $\Gamma_{2n}(t):=\int\!d\sigma\,\sigma^{2n}%
G(\sigma,t)$ change in time according to
\[
\frac{d\Gamma_{2n}}{dt}=2n\int\!d\sigma\,\sigma^{2n-1}s(\sigma,t)-\nu
2n(2n-1)\int\!dxdy\,\omega^{2(n-1)}\left| \mathbf{\nabla}\omega\right| ^{2}.
\]
In particular, when the boundary source term $s(\sigma,t)$ vanishes, the total
circulation $\Gamma_{1}$ is conserved and the even moments \emph{decay} in
time
\begin{equation}
\frac{d\Gamma_{2n}}{dt}=-\nu2n(2n-1)\int\!dxdy\,\omega^{2(n-1)}\left|
\mathbf{\nabla}\omega\right| ^{2}\leq0.\label{decay}%
\end{equation}
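For completeness, both statements follow from (\ref{dG/dt}) after
integrations by parts in $\sigma,$ assuming that $G(\sigma,t)$ vanishes for
large $\left| \sigma\right| $ so that the boundary terms in $\sigma$-space
drop out:
\begin{align*}
\frac{d\Gamma_{m}}{dt} & =-\int\!d\sigma\,\sigma^{m}\frac{\partial\;}%
{\partial\sigma}\left[ s(\sigma,t)+\frac{\partial\;}{\partial\sigma}\left(
D(\sigma,t)G(\sigma,t)\right) \right] \\
& =m\int\!d\sigma\,\sigma^{m-1}\left[ s(\sigma,t)+\frac{\partial\;}%
{\partial\sigma}\left( D(\sigma,t)G(\sigma,t)\right) \right] \\
& =m\int\!d\sigma\,\sigma^{m-1}s(\sigma,t)-\nu m(m-1)\int_{A}\!dxdy\,\omega
^{m-2}\left| \mathbf{\nabla}\omega\right| ^{2},
\end{align*}
where the last equality uses one further integration by parts together with
the definition (\ref{difcoef}); the cases $m=1$ and $m=2n$ give the two
evolution equations above.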
Therefore, in $\sigma$-space one has anti-diffusion at all times. When the
boundary source term $s(\sigma,t)$ vanishes, $\left| \mathbf{\nabla}%
\omega\right| \rightarrow0$ as $t\rightarrow\infty$ and the final
vorticity-area distribution is
\[
\lim_{t\rightarrow\infty}G(\sigma,t)=A\delta(\sigma-\bar{\omega}%
)\quad\mathrm{and}\quad\lim_{t\rightarrow\infty}D(\sigma,t)=0,
\]
where $A$ is the area of the domain and $\bar{\omega}$ is the average
vorticity, i.e., $\bar{\omega}=\Gamma_{1}/A.$
In most cases, it is not possible to make an a priori calculation of the
effective diffusion coefficient $D(\sigma,t).$ On the other hand, its
computation is straightforward if a solution of the Navier-Stokes equation is
known. These diffusion coefficients and, in particular the product
$D(\sigma,t)G(\sigma,t),$ confer Figs. 1 and 2 in \ref{self-similar}, may lead
to a classification of different scenarios for and stages in the formation of
QSS. It is also worthwhile recalling that conditional averages very similar to
$D(\sigma,t),$ confer (\ref{difcoef}), play an important role in, e.g., the
advection of passive scalars by a random velocity field. In the case of
passive-scalar advection by self-similar, stationary turbulent flows,
Kraichnan has proposed a way of computing such quantities [ \cite{Kraichnan}%
].\newline For the purpose of illustration, in the next Subsection, we use an
exact analytic solution in order to compute the corresponding $G(\sigma,t)$
and $D(\sigma,t).$ Another tractable case occurs when the flow evolves
self-similarly; in Subsection \ref{Vorticity-area}\ref{self-similar} we apply
these ideas to the self-similar evolution data obtained by Bartello and Warn
[\cite{Warn}].
\subsection{A simple example\label{Gaussian}}
As a simple example consider the exact solution of the Navier-Stokes equation
in an infinite domain given by a Gaussian monopole with circulation
$\Gamma_{1}$,
\[
\omega_{G}(x,y,t)=\Gamma_{1}\left( 4\pi\nu t\right) ^{-1}\exp(-r^{2}/4\nu
t),\;\mathrm{with}\;r^{2}:=x^{2}+y^{2}\;\mathrm{and}\;t\geq0.
\]
Then, in cylindrical coordinates $(r,\phi),$%
\begin{align*}
\delta(\sigma-\omega_{G}(x,y,t))\,dxdy & =\delta(r^{2}-R^{2}(\sigma
,t))\,\frac{1}{2}dr^{2}d\phi/\left| \frac{\partial\omega_{G}}{\partial r^{2}%
}\right| _{r^{2}=R^{2}},\\
\mathrm{where}\;\left. \omega_{G}(x,y,t)\right| _{r=R} & =\sigma
,\;\mathrm{i.e.,}\;R^{2}(\sigma,t):=-4\nu t\ln(\frac{4\pi\nu t\,\sigma}%
{\Gamma_{1}})\\
\mathrm{and}\;\left. \frac{\partial\omega_{G}}{\partial r^{2}}\right|
_{r^{2}=R^{2}} & =-\frac{\sigma}{4\nu t},
\end{align*}
and we have that for such a Gaussian monopole, the vorticity-area density is
\begin{align*}
G_{G}(\sigma,t) & =0,\;\mathrm{for}\;\sigma<0,\\
& =4\pi\nu t\sigma^{-1},\;\mathrm{for}\;0<\sigma\leq\sigma_{\max}(t),\\
& =0,\;\mathrm{for}\;\sigma_{\max}(t)<\sigma,\\
\mathrm{with}\;\sigma_{\max}(t) & \equiv\Gamma_{1}\left( 4\pi\nu t\right)
^{-1}.
\end{align*}
The divergence at $\sigma\rightarrow0$ is due to the increasingly large areas
occupied by vanishingly small vorticity associated with the tails of the
Gaussian profile. Due to this divergence, the density $G_{G}(\sigma,t)$ is not
integrable, as is to be expected since the domain $A$ is infinite. In spite of this
divergence, all the $\sigma$-moments are finite, in particular, the first
moment, i.e., the circulation, equals $\Gamma_{1}$ and the second moment,
i.e., the enstrophy, is $\Gamma_{1}^{2}/8\pi\nu t.$ As expected, the
circulation is constant in time while the enstrophy decays to zero\footnote{By
contrast, the \emph{spatial} second moments of $\omega(x,y,t)$ \emph{increase}
in time like $2\nu t$, e.g., $\int_{A}\!dxdy\,x^{2}\,\omega_{G}(x,y,t)=2\nu t\,\Gamma_{1}.$}.
It is also interesting to notice that while the maximum vorticity value,
$\sigma_{\max}(t)=\Gamma_{1}/4\pi\nu t,$ occupies only one point, i.e., a set
of zero dimension, the density $G_{G}(\sigma,t)$ remains finite for
$\sigma\nearrow\sigma_{\max}(t),$ more precisely, $\lim_{\sigma\nearrow
\sigma_{\max}(t)}G_{G}(\sigma,t)=\left( 4\pi\nu t\right) ^{2}/\Gamma_{1}.$
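As a consistency check, the first two moments of $G_{G}(\sigma,t)$ can be
computed directly from the density above:
\begin{align*}
\Gamma_{1} & =\int_{0}^{\sigma_{\max}(t)}\!d\sigma\,\sigma\,\frac{4\pi\nu
t}{\sigma}=4\pi\nu t\,\sigma_{\max}(t)=\Gamma_{1},\\
\Gamma_{2} & =\int_{0}^{\sigma_{\max}(t)}\!d\sigma\,\sigma^{2}\,\frac
{4\pi\nu t}{\sigma}=2\pi\nu t\,\sigma_{\max}^{2}(t)=\frac{\Gamma_{1}^{2}}%
{8\pi\nu t},
\end{align*}
in agreement with the values quoted above.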
Moreover, in this simple example one can also compute
\[
\nu\int\!dxdy\,\delta(\sigma-\omega_{G}(x,y,t))\left| \mathbf{\nabla}%
\omega_{G}\right| ^{2}=4\pi\nu\sigma\ln\left( \frac{\sigma_{\max}(t)}%
{\sigma}\right) ,\;\mathrm{for}\;0<\sigma\leq\sigma_{\max}(t),
\]
so that the corresponding effective diffusion coefficient is
\[
D_{G}(\sigma,t)=\frac{\sigma^{2}}{t}\ln\left( \frac{\sigma_{\max}(t)}{\sigma
}\right) ,\;\mathrm{for}\;0<\sigma\leq\sigma_{\max}(t).
\]
The vanishing of this $D_{G}(\sigma,t)$ with $\sigma\rightarrow0$ corresponds
to the vanishingly small spatial gradients of vorticity at large distances
from the vortex core; this gradient vanishes also at the center of the vortex
leading to a (weaker) vanishing of $D_{G}(\sigma,t)$ for $\sigma\nearrow
\sigma_{\max}(t)=\Gamma_{1}/4\pi\nu t.$ The Gaussian vortex is totally
dominated by viscosity and yet the corresponding effective diffusion
coefficient $D_{G}(\sigma,t)$ is \emph{not} proportional to the viscosity
$\nu,$ as one would naively expect, but to $\ln\nu.$ In Fig. 1 we plot the
dimensionless quantities $\left( 4\pi\nu\sigma_{\max}\right) ^{-1}%
D_{G}(\sigma,t)G_{G}(\sigma,t)$ and $\Gamma_{1}\left( 4\pi\nu\sigma_{\max
}^{3}\right) ^{-1}D_{G}(\sigma,t)$ as functions of $\sigma/\sigma_{\max}$.%
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
height=1.8265in,
width=2.6792in
]%
{fig1.eps}%
\caption{Plot of the dimensionless quantities (upper curve)
$(4\pi\nu\sigma_{\max})^{-1}D_{G}(\sigma,t)G_{G}(\sigma,t)$ %
and (lower curve) %
$\Gamma_{1}(4\pi\nu\sigma_{\max}^{3})^{-1}D_{G}(\sigma,t)$, %
both as function of %
$x\equiv\sigma/\sigma_{\max}$ %
in the case of a Gaussian monopole.}%
\end{center}
\end{figure}
\subsection{Self-similar decay\label{self-similar}}
In the case of a collection of vortices that evolves self-similarly in time,
it is convenient to introduce the dimensionless independent variable
$\xi:=(\sigma-\bar{\omega})t$ and the dimensionless functions\footnote{We
assume that $A$ is finite and that $t>0.$} $\tilde{G}(\xi):=G(\sigma
-\bar{\omega},t)/At$ and $\tilde{D}(\xi):=D(\sigma-\bar{\omega},t)/(\sigma
-\bar{\omega})^{3}$. When the boundary source term $s(\sigma,t)$ vanishes, as
it does in the case of a doubly periodic domain or when the vorticity support
remains well separated from the boundary, the self-similar form of
(\ref{dG/dt}) is
\[
\frac{d\;}{d\xi}\left( \xi\tilde{G}(\xi)\right) =-\frac{d^{2}\;}{d\xi^{2}%
}\left( \xi^{3}\tilde{D}(\xi)\tilde{G}(\xi)\right) ,
\]
or
\[
\xi\tilde{G}(\xi)=-\frac{d\;}{d\xi}\left( \xi^{3}\tilde{D}(\xi)\tilde{G}%
(\xi)\right) +cte.
\]
Assuming that there are no singularities in the vorticity field, one has
$G(\sigma,t)\underset{\left| \sigma\right| \rightarrow\infty}{\rightarrow}0$
and it follows then that the constant in the last expression must be zero.
Measuring the self-similar density $\tilde{G}(\xi),$ one can solve the last
equation for the corresponding diffusion coefficient $\tilde{D}(\xi)$ and get
\[
\tilde{D}(\xi)=-\frac{1}{\xi^{3}\tilde{G}(\xi)}\int_{b}^{\xi}ds\,s\tilde
{G}(s),
\]
where the value of the lower limit of integration $b$ must be chosen according
to an appropriate ``boundary condition'' as illustrated below. If for large
$\left| \xi\right| $ the dimensionless vorticity density $\tilde{G}(\xi)$
decays algebraically like $\left| \xi\right| ^{-2\alpha}$ (see below) and
$2\alpha<3,$ then it follows that the most general decay of $\tilde{D}(\xi)$
is of the form $\left| \xi\right| ^{-1}+a\left| \xi\right| ^{(2\alpha-3)}$
with $a$ an appropriate constant.
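This can be verified by inserting the algebraic tail into the integral
formula for $\tilde{D}(\xi)$: for $\xi\rightarrow+\infty$ and $\tilde{G}%
(\xi)\simeq c\,\xi^{-2\alpha}$ one finds
\[
\tilde{D}(\xi)\simeq-\frac{\xi^{2\alpha}}{c\,\xi^{3}}\cdot\frac{c\left(
\xi^{2-2\alpha}-b^{2-2\alpha}\right) }{2-2\alpha}=\frac{b^{2-2\alpha}%
\,\xi^{2\alpha-3}-\xi^{-1}}{2-2\alpha},
\]
which exhibits the two powers quoted above, the constant $a$ being fixed by
the choice of $b$; taking $b=\xi_{M}$, as in the second application below,
ensures $\tilde{D}(\xi)\geq0$ for $\xi\leq\xi_{M}.$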
We apply these results to two specific cases: 1) If the self-similar vorticity
distribution happens to be Gaussian, i.e., if $\tilde{G}(\xi)=\left( \xi
_{o}\sqrt{2\pi}\right) ^{-1}\exp\left( -\xi^{2}/2\xi_{o}^{2}\right) ,$
then, with $\tilde{D}(\pm\infty)=0$ as ``boundary condition'', one obtains that
$\tilde{D}(\xi)=\xi_{o}^{2}\xi^{-3}$ or, going back to the original
quantities, $G(\sigma,t)=At\left( \xi_{o}\sqrt{2\pi}\right) ^{-1}\exp\left[
-t^{2}(\sigma-\bar{\omega})^{2}/2\xi_{o}^{2}\right] $ and the negative
diffusion coefficient is $-D(\sigma,t)=-\xi_{o}^{2}t^{-3}.$ This is the only
self-similar case with a $\sigma$-independent $D\left( \sigma,t\right) .$ In
this case, the time-evolution equation (\ref{dG/dt}) takes a particularly
simple form, namely
\begin{align*}
\frac{\partial G(\sigma,t)}{\partial t} & =-\xi_{o}^{2}t^{-3}\frac
{\partial^{2}G(\sigma,t)}{\partial\sigma^{2}},\\
\mathrm{i.e.}\;\frac{\partial G(\sigma,\tau)}{\partial\tau} & =+\frac{1}%
{2}\frac{\partial^{2}G(\sigma,\tau)}{\partial\sigma^{2}}\;\mathrm{with}%
\;\tau:=\xi_{o}^{2}t^{-2}.
\end{align*}
In agreement with our findings in IIA, we see that in this case the squared
width of the Gaussian distribution \emph{decreases} in time like $\tau=\xi
_{o}^{2}t^{-2}.$\newline 2) The second application is to the self-similar
distributions found by Bartello and Warn [\cite{Warn}] in their simulations
performed in a doubly periodic domain of size $A$. Qualitatively speaking,
their results can be summarized by the following expression
\begin{align}
\tilde{G}_{s}(\xi) & =c\left( \xi_{o}^{2}+\xi^{2}\right) ^{-\alpha
},\;\left| \xi\right| \leq\xi_{M}\label{refer}\\
\tilde{G}_{s}(\xi) & =0,\;\left| \xi\right| >\xi_{M}.\nonumber
\end{align}
with $\xi_{o}\simeq10,$ $\alpha\simeq0.7$ and $\xi_{M}$ growing\footnote{This
time-dependence destroys the exact self-similarity of these solutions.}
approximately like $\sqrt{t}$, from 200 to 500. The value of $c$ is such
that $\int\!d\xi\,\tilde{G}(\xi)=1.$ Vorticity values such that $\left|
\sigma\right| t<\xi_{o}$ are associated mainly with thin filaments in the
background ``sea'' while those such that $\left| \sigma\right| t>\xi_{o}$
correspond to the localized vortices. At the positions with the largest
vorticity value the gradient $\nabla\omega$ vanishes, therefore, it is natural
to take as ``boundary condition'' $\tilde{D}_{s}(\xi_{M})=0.$ One gets then
the following effective diffusion coefficient in vorticity-space
\[
\tilde{D}_{s}(\xi)=\frac{\left( \xi_{o}^{2}+\xi^{2}\right) ^{\alpha}%
}{2(1-\alpha)\left| \xi\right| ^{3}}\left[ \left( \xi_{o}^{2}+\xi_{M}%
^{2}\right) ^{\left( 1-\alpha\right) }-\left( \xi_{o}^{2}+\xi^{2}\right)
^{(1-\alpha)}\right] ,\;\left| \xi\right| \leq\xi_{M}(t).
\]
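This follows from the integral formula for $\tilde{D}(\xi)$ with $b=\xi_{M}%
$, since
\[
\int_{\xi_{M}}^{\xi}\!ds\,s\,c\left( \xi_{o}^{2}+s^{2}\right) ^{-\alpha
}=\frac{c}{2(1-\alpha)}\left[ \left( \xi_{o}^{2}+\xi^{2}\right) ^{1-\alpha
}-\left( \xi_{o}^{2}+\xi_{M}^{2}\right) ^{1-\alpha}\right] .
\]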
Going back to the original variables, this effective diffusion coefficient
reads,
\[
D_{s}(\sigma,t)=\frac{\left( \xi_{o}^{2}+\sigma^{2}t^{2}\right) ^{\alpha}%
}{2(1-\alpha)t^{3}}\left[ \left( \xi_{o}^{2}+\xi_{M}^{2}\right) ^{\left(
1-\alpha\right) }-\left( \xi_{o}^{2}+\sigma^{2}t^{2}\right) ^{(1-\alpha
)}\right] ,\quad\left| \sigma\right| t\leq\xi_{M}(t).
\]
The average of $\left| \nabla\omega\right| ^{2}$ in the thin filaments is
not zero so that at $\sigma=0$ we have
\[
D_{s}(0,t)=\frac{\xi_{o}^{2}}{2(1-\alpha)t^{3}}\left[ \left( 1+\frac{\xi
_{M}^{2}}{\xi_{o}^{2}}\right) ^{\left( 1-\alpha\right) }-1\right]
\simeq\frac{\xi_{o}^{2\alpha}\xi_{M}^{2\left( 1-\alpha\right) }}%
{2(1-\alpha)t^{3}}.
\]
The origin, $\sigma=0$ is a local minimum of $D_{s}(\sigma,t),$ moreover there
is one maximum, namely
\[
\max D(\sigma,t)=\alpha^{\frac{\alpha}{1-\alpha}}\frac{\xi_{o}^{2}+\xi_{M}%
^{2}(t)}{2t^{3}}\quad\mathrm{at}\quad\sigma^{\ast}t=\sqrt{\alpha^{\frac{\alpha
}{1-\alpha}}\left( \xi_{o}^{2}+\xi_{M}^{2}(t)\right) -\xi_{o}^{2}}%
\simeq\alpha^{\frac{\alpha}{2(1-\alpha)}}\xi_{M}.
\]
For large $t,$ the maximum decays like $t^{-2}$ while $D(0,t)$ decays faster,
namely like $t^{-(2+\alpha)}.$ \newline In Fig.2, we plot the dimensionless
quantities $t^{3}D_{s}(\sigma,t)\xi_{M}^{-2}$ and $4\left( cA\xi
_{M}^{2(1-\alpha)}\xi_{o}^{2\alpha}\right) ^{-1}t^{2}D_{s}(\sigma
,t)G_{s}(\sigma,t)$ as functions of $x\equiv\sigma t/\xi_{M}$.%
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
height=2.3886in,
width=3.5034in
]%
{fig2.eps}%
\caption{Plot of the dimensionless quantities (full curve) $t^{3}D_{s}%
(\sigma,t)\xi_{M}^{-2}$ and (dotted curve) $4\left( cA\xi_{M}^{2(1-\alpha
)}\xi_{o}^{2\alpha}\right) ^{-1}t^{2}D_{s}(\sigma,t)G_{s}(\sigma,t)$ as
functions of $x\equiv\sigma t/\xi_{M}$ in the case of self-similar decay with
$\xi_{o}=10$ and for $\xi_{M}=300$, as discussed in the main text.}%
\end{center}
\end{figure}
\section{The changes in the vorticity-area
distribution\label{vorti-redistribu}}
\subsection{General considerations\label{general vorti}}
In comparison to the case of a passive scalar, the \emph{spatial }distribution
of vorticity does not play such a central role as one realizes from Arnold's
observation that the stationary solutions of the Euler equations in two
dimensions correspond to energy extrema under the constraint of fixed
vorticity areas [\cite{Arnold1} ,\cite{Arnold2}]. Arnold's observation says
then that the stationary solutions of the 2D Euler equations, $\omega
_{S}(x,y),$ are the states with extremal values of the energy compatible with
the vorticity-area density
\[
G_{S}(\sigma):=\int\!dxdy\,\delta(\sigma-\omega_{S}(x,y)).
\]
Therefore, the vorticity-area density $G_{S}(\sigma)$ of a QSS (and the
geometry of the domain) determine $\omega_{S}(x,y),$ the spatial distribution
of vorticity in the coherent structure. From this we conclude that when
studying the process leading to the QSS in viscous fluids, special attention
should be paid to the differences between the initial vorticity-area density
$G(\sigma,0)$ and $G_{S}(\sigma),$ the one in the QSS. A convenient way of
studying these changes would be through the differences in the moments of
these distributions, which will be denoted by $\Delta_{n},$ i.e.,
\begin{align*}
\Delta_{n} & :=\Gamma_{n}^{o}-\Gamma_{n}^{S},\;\mathrm{with}\\
\Gamma_{n}^{o} & :=\int\!dxdy\,\omega^{n}(x,y,0)=\int\!d\sigma\,\sigma
^{n}G(\sigma,0)\;\mathrm{and}\\
\Gamma_{n}^{S} & :=\int\!dxdy\,\omega_{S}^{n}(x,y)=\int\!d\sigma\,\sigma
^{n}G_{S}(\sigma).
\end{align*}
The dimensionless ratios $\Delta_{n}/\Gamma_{n}^{S}$ would offer a good
characterization of the changes experienced by the vorticity-area
distribution. In the next Subsections we present another possibility, linked
to the predictions of a QSS according to a statistical-mechanics approach,
which is constructed from an $\omega_{S}(\psi_{S})$ relation measured either
in experiments or in numerical simulations.
\subsection{The changes in the moments according to the statistical mechanics
approach\label{New yardsticks}}
It is proven in the Appendix that when the quasi-stationary vorticity field
$\omega_{S}(x,y)$ and the initial field $\omega(x,y,0)$ are related as
predicted by the statistical mechanical theory, then the observed $\Delta_{n}$
take the values $\delta_{n}$%
\begin{align}
\delta_{n} & =\int\!dxdy\,i_{n}(\psi),\label{recursion2}\\
\mathrm{with}\;i_{1} & :=0\;\mathrm{and}\nonumber\\
i_{n+1}(\psi) & =\left[ \Omega(\psi)-\frac{1}{\beta}\frac{d\;}{d\psi
}\right] i_{n}(\psi)+\left( -\beta\right) ^{-1}\frac{d\Omega^{n}}{d\psi
}.\nonumber
\end{align}
defined in terms of the associated $\Omega(\psi)$ relation and an inverse
temperature $\beta.$ In particular, for $n\leq4,$ we have
\begin{align}
\delta_{1} & =0,\label{chiqui}\\
\delta_{2} & =-\beta^{-1}\int\!dxdy\,(d\Omega/d\psi),\nonumber\\
\delta_{3} & =\beta^{-2}\int\!dxdy\left[ \frac{d^{2}\Omega}{d\psi^{2}%
}-3\beta\Omega\frac{d\Omega}{d\psi}\right] ,\nonumber\\
\delta_{4} & =-\beta^{-3}\int\!dxdy\,\left[ \frac{d^{3}\Omega}{d\psi^{3}%
}-4\beta\Omega\frac{d^{2}\Omega}{d\psi^{2}}+6\beta^{2}\Omega^{2}\frac{d\Omega
}{d\psi}-3\beta\left( \frac{d\Omega}{d\psi}\right) ^{2}\right] .\nonumber
\end{align}
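For instance, the first two non-trivial entries of (\ref{chiqui}) follow
directly from the recursion (\ref{recursion2}):
\begin{align*}
i_{2} & =\left[ \Omega-\frac{1}{\beta}\frac{d\;}{d\psi}\right] i_{1}%
-\frac{1}{\beta}\frac{d\Omega}{d\psi}=-\frac{1}{\beta}\frac{d\Omega}{d\psi
},\\
i_{3} & =\left[ \Omega-\frac{1}{\beta}\frac{d\;}{d\psi}\right] i_{2}%
-\frac{1}{\beta}\frac{d\Omega^{2}}{d\psi}=\frac{1}{\beta^{2}}\frac{d^{2}%
\Omega}{d\psi^{2}}-\frac{3}{\beta}\Omega\frac{d\Omega}{d\psi},
\end{align*}
and integrating over the domain yields $\delta_{2}$ and $\delta_{3}$ as
stated.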
In the next Subsection we propose the use of $\delta_{n}$ as yardsticks in
order to quantify the departure of the observed changes $\Delta_{n}$ from the
theoretical predictions.
\subsection{Analysis of experimental results\label{new ExperimentAnalysis}}
In many cases, the predictions obtained from different theories are not very
different and it is not obvious which prediction agrees better with the
experimental data, see, e.g., [\cite{Majda} ,\cite{brandsX}] . Therefore, it
is important to develop objective, quantitative measures of such an agreement.
The results of the previous Subsection lead us to conclude that there is
useful information encoded in the functional dependence of $\omega_{S}%
(\psi_{S})$ and that this information can be used for the quantification of
the vorticity redistribution process in any experiment or numerical
simulation, i.e., also when (\ref{w-psi}) and (\ref{moments}) do not
necessarily hold, as long as there is no leakage and the creation or destruction
of vorticity at the boundary is negligible. We propose that one should
proceed as follows: \newline 1) Identify the predicted $\Omega(\psi)$
relation of the preceding Subsection and the Appendix with the $\omega
_{S}(x,y)\approx\omega_{S}(\psi_{S})$ of the observed QSS; usually, this
satisfies $d\omega_{S}/d\psi_{S}\neq0,$ \newline 2) Determine an effective
value of $\beta$ from $\Delta_{2},$ the measured change in the second moment
of the vorticity-area distribution and from the measured $\omega_{S}(\psi
_{S})\ $relation by, confer the second line of (\ref{chiqui}),
\begin{equation}
\beta:=-\frac{\int\!dxdy\,(d\omega_{S}/d\psi_{S})}{\Delta_{2}}.\label{T2}%
\end{equation}
Since $\Delta_{2}\geq0,$ the sign of $\beta$ is always opposite to that of
$d\omega_{S}/d\psi_{S}.$\newline 3) Using this value of $\beta$ compute from
equation (\ref{recursion2}), with $\Omega(\psi)$ replaced by $\omega_{S}%
(\psi_{S}),$ the values of the yardsticks $\delta_{n}$ for $n\geq3.$%
\newline 4) The measured changes in the third and higher moments, $\Delta_{n}$
with $n\geq3,$ should be quantified by the dimensionless\ numbers $\alpha
_{n}:=\Delta_{n}/\delta_{n}.$ These numbers are all equal to 1 if and only if
the QSS agrees with the statistical mechanical prediction corresponding to the
initial distribution $G(\sigma,0)$ with equal initial and final energies,
\begin{equation}
\omega_{S}(\psi_{S})\;\mathrm{corresponds\;to\;a\;statistical\;equilibrium}%
\leftrightarrow\alpha_{3}=\cdots=\alpha_{n}=\cdots=1.\label{agreement}%
\end{equation}
It is for this reason that one should prefer these dimensionless quantities to
other ones like, e.g., $\Delta_{n}/\Gamma_{n}^{S}$.
In closing, it may be worthwhile recalling that, at least in some cases, the
agreement between an experiment and the statistical mechanical prediction can
be greatly improved by taking as ``initial condition'' not the field at the
start of the experiment but a later one, after some preliminary mixing has
taken place but well before the QSS appears, see [ \cite{brands}]. This
improvement can be quantified by measuring the convergence of the
corresponding $\alpha_{n}=\Delta_{n}/\delta_{n}$ towards 1. In other cases, a
detailed consideration of the boundary is necessary and, sometimes, the
statistical mechanics approach may be applicable in a well-chosen subdomain,
see [\cite{chavan} ,\cite{brandsX}].
\subsection{Examples\label{examples}}
A possible $\Omega(\psi)$ relation resulting from the statistical mechanical
theory that may be compared successfully to many experimentally found curves
is
\begin{equation}
\Omega_{t}(\psi)=\Omega_{o}\frac{\sinh\chi\psi}{B+C\cosh\chi\psi
},\label{hyper}%
\end{equation}
with appropriately chosen constants $\Omega_{o},B,C$ and $\chi.$ In
particular, the flattening in the scatter plots observed in [\cite{rasmussen}]
can be fitted by this expression while the case $C=0,$ corresponds to the
identical point-vortices model [\cite{montPFA} ,\cite{montPFA2}] and the case
$\chi\rightarrow0$ with $\chi\Omega_{o}/(B+C)\rightarrow$finite, corresponds
to a linear scatter-plot. In all these cases, the $\delta_{n}$ can be derived,
as explained in the Appendix, by means of a cumulant generating function,
confer (\ref{integral2}),
\begin{align*}
\kappa_{t}(\lambda,\psi) & =-\beta\Omega_{o}\int^{\lambda}\frac{\sinh
\chi\left( \psi+\xi\right) }{B+C\cosh\chi\left( \psi+\xi\right) }d\xi\\
& =-\frac{\beta\Omega_{o}}{\chi C}\ln\frac{B+C\cosh\chi\left( \psi
+\lambda\right) }{B+C\cosh\chi\psi}.
\end{align*}
Expanding this in powers of $\lambda$ leads to the cumulants of the
microscopic vorticity distribution. In particular,
\begin{align*}
\left\langle \sigma^{2}\right\rangle _{t}-\Omega_{t}^{2} & =-\beta^{-1}%
\frac{d\Omega_{t}}{d\psi}\\
& =-\frac{\chi\Omega_{o}}{\beta}\frac{B\cosh\chi\psi+C}{\left( B+C\cosh
\chi\psi\right) ^{2}}.
\end{align*}
Integrating this over the area, one obtains that
\[
\Delta_{2}^{t}=-\frac{\chi\Omega_{o}}{\beta}\int\!dxdy\,\frac{B\cosh\chi
\psi+C}{\left( B+C\cosh\chi\psi\right) ^{2}}.
\]
Knowledge of $\Delta_{2}^{t}$ and of $\Omega_{t}(\psi)$ allows us to determine
the value of $\beta.$ Once $\beta$ has been fixed, one can apply
(\ref{recursion2}) and (\ref{agreement}) in order to quantify the higher-order
moments $\Delta_{n}$ and in so doing to estimate the validity of equation
(\ref{hyper}) as the prediction from the statistical-mechanics approach.
If the scatter-plot is linear, $\Omega(\psi)=k\psi,$ then (\ref{integral2})
tells us that
\[
\kappa_{linear}(\lambda,\psi)=-\beta k\left[ \lambda\psi+\frac{1}{2}%
\lambda^{2}\right] .
\]
This implies a simple relation between the initial vorticity-area distribution
$G(\sigma,0)$ and $G_{S}(\sigma),$ the distribution in the QSS, namely
\[
\int\!d\sigma\,\exp(-\lambda\beta\sigma)G\left( \sigma,0\right) =\exp
(-\frac{1}{2}\lambda^{2}\beta k)\int\!d\sigma\,\exp(-\lambda\beta\sigma
)G_{S}\left( \sigma\right) .
\]
Recall that, as already indicated immediately after (\ref{T2}), $\beta
k\leq0.$ Assuming that the Laplace transformation can be inverted, we get for
the linear case $\Omega(\psi)=k\psi,$ that
\[
G\left( \sigma,0\right) =\int\!d\tau\,\frac{1}{\sqrt{2\pi\left| k/\beta
\right| }}\exp\left( -\frac{\left( \sigma-\tau\right) ^{2}}{2\left|
k/\beta\right| }\right) G_{S}\left( \tau\right) .
\]
In particular, if the QSS has a Gaussian vorticity-area density with a
variance equal to $\Sigma^{2}$ and the predictions of the statistical
mechanical approach hold, then the initial distribution $G\left(
\sigma,0\right) $ must also be a Gaussian with variance equal to $\left(
\Sigma^{2}-k/\beta\right) \geq\Sigma^{2},$ in agreement with the results in [
\cite{millerPRA} ,\cite{thesis}]. Notice that knowing $G(\sigma,0)$ and the
energy of the initial state is not enough in order to determine the spatial
dependence of the initial vorticity field $\omega(x,y,0)$.
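As a check on the Gaussian example, convolving two zero-mean Gaussians adds
their variances: with $G_{S}(\sigma)=A\left( 2\pi\Sigma^{2}\right)
^{-1/2}\exp\left( -\sigma^{2}/2\Sigma^{2}\right) $ one finds
\[
G\left( \sigma,0\right) =\frac{A}{\sqrt{2\pi\left( \Sigma^{2}%
-k/\beta\right) }}\exp\left( -\frac{\sigma^{2}}{2\left( \Sigma^{2}%
-k/\beta\right) }\right) ,
\]
indeed a Gaussian with variance $\Sigma^{2}-k/\beta=\Sigma^{2}+\left|
k/\beta\right| \geq\Sigma^{2}.$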
\section{Conclusions\label{Conclusions}}
In this paper, the object at the center of our attention has been the
vorticity-area density $G(\sigma,t)$ and its time evolution in
two-dimensional, viscous flows. In Section \ref{Vorticity-area} we have shown
that this density evolves according to an advection-diffusion equation,
equation (\ref{dG/dt}), with a time dependent, negative diffusion coefficient.
If vorticity is destroyed or created at the domain boundaries then the
evolution equation contains also a source term. The equation is exact: it
follows from the Navier-Stokes equation with no approximations made. For the
purpose of illustration, explicit calculations have been presented for the
case of a Gaussian monopole in \ref{Vorticity-area}\ref{Gaussian} and
for the case of self-similar decay in \ref{Vorticity-area}\ref{self-similar}.
We think that it will be instructive to apply these ideas to the analysis of
data that is available from numerical simulations and laboratory experiments.
In fact, it should be possible to determine this effective diffusion
coefficient $D(\sigma,t)$ on the basis of such data. Then it would be of
interest to establish a quantitative classification of the QSS formation
processes, e.g., by considering the various possible behaviours of this
coefficient and, in particular, confer\ Figs.\ 1 and 2, of $G(\sigma
,t)D(\sigma,t)$. In the case of self-similar decay, one could attempt a
closure approximation in order to predict the effective diffusion coefficient
like it is done, for very similar quantities, in the theory of passive-scalar
dispersion by random velocity fields, e.g., Kraichnan's linear Ansatz
[\cite{Kraichnan}].\newline In Section \ref{vorti-redistribu} we considered
the changes in $G(\sigma,t)$ when starting from an arbitrary vorticity field
and ending at a high-Reynolds' number, quasi-stationary state characterized by
an $\omega_{S}(\psi_{S})$ relation. In \ref{vorti-redistribu}\ref{New
yardsticks} and in the Appendix, we showed how to generate from such an
$\omega$-$\psi$ plot an infinite set of moments $\delta_{n}$, confer
(\ref{recursion2}). The changes $\Delta_{n}$ in the moments of the vorticity
distribution that are observed in a numerical simulation or in the laboratory
equal these $\delta_{n}$ if and only if the initial and final distributions
are related to each other in the way predicted by the statistical mechanical
approach. Therefore, these changes in the vorticity distribution moments can
be quantified in terms of the dimensionless ratios $\Delta_{n}/\delta_{n}.$
The deviations of the ratios $\Delta_{n}/\delta_{n}$ from the value 1 as
determined on the basis of the data gives a direct way of quantifying the
validity of the statistical-mechanics approach. In \ref{vorti-redistribu}%
\ref{new ExperimentAnalysis} we discussed how to apply this to experimental
measurements provided that the leakage or creation and destruction of
vorticity at the boundaries is negligible. Finally, in Subsection
\ref{vorti-redistribu}\ref{examples}, we considered two relevant $\omega
$-$\psi$ relations: a linear one and a much more general one, given by
equation (\ref{hyper}). Many of these ideas should be applicable also to more
realistic systems, e.g., when potential vorticity is a Lagrangian invariant.
{\large Acknowledgements}: We have benefitted from numerous discussions with
H. Brands, from P.-H. Chavanis' comments and from the comments of the
referees. The interest shown by H. Clercx, G.-J. van Heijst, S. Maassen and
J.J. Rasmussen has been most stimulating.
\section{Appendix\label{appendix}}
We prove now that if and only if the QSS happens to coincide with the
prediction of the statistical mechanical approach, then the changes in the
moments $\Delta_{n}$ take the values $\delta_{n}$ as defined in
(\ref{recursion2}). \newline Recall that in the statistical mechanical
approach one identifies $\omega_{S}(x,y),$ the vorticity of the QSS, with
$\left\langle \sigma\right\rangle :=\int\!d\sigma\,\sigma\rho(\sigma,\psi),$
the average value of the microscopic vorticity $\sigma$ with respect to a
vorticity distribution $\rho(\sigma,\psi)$ which is given by
\begin{align}
\rho(\sigma,\psi) & :=Z^{-1}\exp\left[ -\beta\sigma\psi(x,y)+\mu
(\sigma)\right] ,\label{w-psi}\\
\mathrm{with\;}Z(\psi) & :=\int\!d\sigma\,\exp\left[ -\beta\sigma\psi
+\mu(\sigma)\right] \nonumber\\
\mathrm{and\;define\;}\Omega(\psi) & :=\left\langle \sigma\right\rangle
=\int\!d\sigma\,\sigma\rho(\sigma,\psi).\nonumber
\end{align}
In (\ref{w-psi}), $\beta$ and $\mu(\sigma)$ are Lagrange multipliers such that
the energy per unit mass and the microscopic-vorticity area distribution
$g(\sigma):=\int\!dxdy\,\rho(\sigma,\psi(x,y))$ have the same values as in the
initial distribution, i.e., $g(\sigma)=G(\sigma,0)$. Consequently, the
spatially integrated moments are given by
\begin{align}
\int\!dxdy\,\left\langle \sigma^{n}\right\rangle & =\int\!d\sigma
\,\sigma^{n}g(\sigma)=\int\!d\sigma\,\sigma^{n}G(\sigma,0)=\Gamma_{n}%
^{o}\label{moments}\\
\mathrm{while\;}\Gamma_{n}^{S} & =\int\!dxdy\,\Omega^{n}(\psi)=\int
\!dxdy\,\left\langle \sigma\right\rangle ^{n}.\nonumber
\end{align}
Denote by $\delta_{n}$ the predicted change in the $n$-th moment, i.e.,
\begin{equation}
\delta_{n}=\int\!dxdy\,\left[ \left\langle \sigma^{n}\right\rangle
-\left\langle \sigma\right\rangle ^{n}\right] =:\int\!dxdy\,i_{n}%
.\label{Deltas}%
\end{equation}
To the probability distribution $\rho(\sigma,\psi),$ defined in (\ref{w-psi}),
we associate a cumulant generating function
\begin{equation}
\kappa(\lambda,\psi):=\ln\left\langle \exp\left( -\lambda\beta\sigma\right)
\right\rangle .\label{generate2}%
\end{equation}
This satisfies
\begin{equation}
\frac{\partial\kappa(\lambda,\psi)}{\partial\lambda}=-\beta\Omega(\psi
+\lambda)\,,\label{integral2}%
\end{equation}
as it can be shown by first noticing that
\[
\left\langle \exp\left( -\lambda\beta\sigma\right) \right\rangle
=\frac{Z(\psi+\lambda)}{Z(\psi)}%
\]
so that $\kappa(\lambda,\psi)=\ln Z(\psi+\lambda)-\ln Z(\psi)$ and then using
$\int\!d\sigma\,\sigma\rho(\sigma,\psi+\lambda)=\Omega(\psi+\lambda)$ for the
computation of $\partial\kappa/\partial\lambda$. Expanding both sides of
(\ref{integral2}) in powers of $\lambda$, it follows that $\kappa_{n}(\psi),$
the $n$-th local cumulant of $-\beta\sigma,$ is related to $\Omega(\psi)$ by
\begin{equation}
\kappa_{n}(\psi)=-\beta\frac{d^{n-1}}{d\psi^{n-1}}\Omega(\psi).\label{kappas2}%
\end{equation}
For example, for $1\leq n\leq4,$ these equalities read
\begin{align*}
\beta\left\langle \sigma\right\rangle & =\beta\Omega,\\
\beta^{2}\left[ \left\langle \sigma^{2}\right\rangle -\Omega^{2}\right] &
=-\beta\frac{d\Omega}{d\psi},\\
\beta^{3}\left[ \left\langle \sigma^{3}\right\rangle -3\Omega\left\langle
\sigma^{2}\right\rangle +2\Omega^{3}\right] & =\beta\frac{d^{2}\Omega
}{d\psi^{2}},\\
\beta^{4}\left[ \left\langle \sigma^{4}\right\rangle -3\left\langle
\sigma^{2}\right\rangle ^{2}-4\left\langle \sigma^{3}\right\rangle
\Omega+12\left\langle \sigma^{2}\right\rangle \Omega^{2}-\Omega^{4}\right]
& =-\beta\frac{d^{3}\Omega}{d\psi^{3}}.
\end{align*}
For our purposes, it is essential to eliminate from these equations products
like $\Omega\left\langle \sigma^{2}\right\rangle $ and $\left\langle
\sigma^{2}\right\rangle ^{2}$ because their integrals over the whole domain
cannot be related to known quantities. In fact, the differences $\delta_{n}$
can be directly expressed in terms of $\Omega(\psi)$ and its derivatives,
i.e., in terms of known quantities, as we now show. To this end, one considers
first the generating function of the local moment differences $i_{n}$, which
will be denoted by $i(\lambda,\psi).$ One has that, confer (\ref{Deltas}),
\[
i(\lambda,\psi):=\left\langle \exp\left( -\lambda\beta\sigma\right)
\right\rangle -\exp\left( -\lambda\beta\left\langle \sigma\right\rangle
\right) .
\]
This is related to the cumulant generating function $\kappa(\lambda,\psi),$
confer equation (\ref{generate2}), by
\[
\kappa(\lambda,\psi)=\ln\left[ i(\lambda,\psi)+\exp\left( -\lambda
\beta\left\langle \sigma\right\rangle \right) \right] .
\]
Expanding both sides of this identity in powers of $\lambda$ and making use of
(\ref{kappas2}), one obtains the recursive expressions for the $i_{n}(\psi)$
given in equation (\ref{recursion2}); integrating these over the area $A,$ one
finally gets the results stated in \ref{vorti-redistribu}\ref{New yardsticks}.
\label{Biblio}\nopagebreak[4]
\section{Introduction}
\label{sec:intro}
Although multicore hardware is now pervasive there are relatively few legacy codes
that can profit from it. In some cases the underlying algorithm seems inherently sequential,
rendering parallelization difficult if not impossible. In other cases the underlying algorithm
may in principle be easy to parallelize. Nevertheless, existing
widely-used and debugged sequential codes
may be extremely complex and adding parallelization directly into such codes can
be difficult and error prone.
A simpler approach would be to design a wrapper that provides the parallelization
and hopefully requires very little modification of the existing code.
This approach was successfully applied by the authors to
the vertex enumeration code \progname{lrs}\xspace, resulting in the parallel \progname{mplrs}\xspace code~\cite{AJ15b}.
The ideas used in \progname{mplrs}\xspace can be used for a wide variety of tree search algorithms
including reverse search, backtracking, branch and bound, etc.
This motivated us to develop a generic wrapper, \progname{mts}\xspace, for parallelizing them.
In this note we explain how to modify a reverse search code to enable parallelization
by \progname{mts}\xspace.
We then describe in detail how to parallelize two simple existing reverse search codes.
Finally, we give computational results showing nearly linear speedup on a cluster with
192 cores.
\section{Background}
\label{sec:back}
Reverse search is a technique for generating large, relatively unstructured, sets of discrete
objects~\cite{AF93}.
Some simple C implementations were given in
the tutorial~\cite{tutorial}, which we will extend in this note to
allow parallel processing via the \progname{mts}\xspace package.
We first give an outline of reverse search and the necessary
modifications required for parallelization.
This description is essentially the one given in~\cite{AJ15b}.
In its most basic form, reverse search can be viewed as the traversal of a spanning tree, called the reverse
search tree $T$, of a graph $G=(V,E)$ whose nodes are the objects to be generated. Edges in the graph are
specified by an adjacency oracle, and the subset of edges of the reverse search tree are
determined by an auxiliary function, which can be thought of as a local search function $f$ for an
optimization problem defined on the set of objects to be generated. One vertex, $v^*$, is designated
as the {\em target} vertex. For every other vertex $v \in V$
repeated application of $f$ must generate a
path in $G$ from $v$ to $v^*$. The set of these paths defines the reverse search tree $T$, which has root $v^*$.
A reverse search is initiated at $v^*$, and only edges of the reverse search tree are traversed.
When a node is visited, the corresponding object is output. Since there is no possibility of
visiting a node by different paths, the visited nodes do not
need to be stored. Backtracking can be performed in the
standard way using a stack, but this is not required as the local search function can be used for
this purpose. This implies two critical features that are useful for effective parallelization.
Firstly, it is not necessary to store more than one node of the tree at any
given time and no database is required for visited nodes.
Secondly, it is possible to {\em restart} the enumeration process from
any given node in the tree using only a description of this one node.
In the basic setting described here a few properties are required. Firstly, the
underlying graph $G$ must be connected and an upper bound on the maximum vertex degree, $\Delta$, must
be known. The performance of the method depends on $G$ having $\Delta$ as low as
possible. The adjacency oracle must be capable of generating the adjacent vertices of some given
vertex $v$ sequentially and without repetition. This is done by specifying a function
$\textrm{Adj}(v,j)$, where $v$ is a vertex of $G$ and $j = 1,2,\ldots,\Delta$. Each value of $\textrm{Adj}(v, j)$ is
either a vertex adjacent to $v$ or null. Each vertex adjacent to $v$ appears precisely once as $j$ ranges
over its possible values. For each vertex $v \neq v^*$
the local search function $f(v)$ returns the tuple $(u,j)$ where $v = \textrm{Adj}(u,j)$ such that $u$
is $v$'s parent in $T$.
Pseudocode is given in Algorithm~\ref{rsalg1} and C implementations for
several simple enumeration problems are given at~\cite{tutorial}.
For convenience later, we do not output the root vertex $v^*$ in the pseudocode shown.
Note that the vertices are output as a continuous stream.
\begin{algorithm}
\begin{algorithmic}[1]
\Procedure{rs}{$v^*$, $\Delta$, $\textrm{Adj}$, $f$}
\State $v \gets v^*$~~~$j \gets 0$~~~$\ensuremath{\mathit{depth}}\xspace \gets 0 $
\Repeat
\While {$j < \Delta$}
\State $j \gets j+1$
\If {$f(\textrm{Adj}(v,j)) = v$} \Comment{forward step}
\State $v \gets \textrm{Adj}(v,j)~~~~~$
\State $j \gets 0$
\State $\ensuremath{\mathit{depth}}\xspace \gets \ensuremath{\mathit{depth}}\xspace+1$
\State {\bf output $v$}
\EndIf
\EndWhile
\If {$\ensuremath{\mathit{depth}}\xspace > 0$} \Comment{backtrack step}
\State $(v,j) \gets f(v)$
\State $\ensuremath{\mathit{depth}}\xspace \gets \ensuremath{\mathit{depth}}\xspace-1 $
\EndIf
\Until {$\ensuremath{\mathit{depth}}\xspace=0$ {\bf and} $j=\Delta$}
\EndProcedure
\end{algorithmic}
\caption{Generic Reverse Search}
\label{rsalg1}
\end{algorithm}
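To make the adjacency-oracle contract concrete, the following is a
compilable C sketch of Algorithm~\ref{rsalg1} for a toy instance of our own
(it is not taken from the codes at~\cite{tutorial}): subsets of
$\{0,\ldots,N-1\}$ are encoded as bitmasks, $\textrm{Adj}(v,j)$ toggles
element $j-1$, and $f$ deletes the largest element, so that the root $v^*$
is the empty set.
\begin{verbatim}
#include <stdio.h>

#define N 4                      /* ground set size, so Delta = N  */
typedef unsigned int node;       /* a subset, stored as a bitmask  */

static node adj(node v, int j)   /* adjacency oracle, j = 1..N     */
{   return v ^ (1u << (j - 1));  /* toggle element j-1             */
}

static node f(node v)            /* local search: drop largest elt */
{   int h;
    for (h = N - 1; h >= 0; h--)
        if (v & (1u << h))
            return v ^ (1u << h);
    return v;                    /* the root maps to itself        */
}

static void rs(node root)        /* Algorithm 1, no explicit stack */
{   node v = root;
    int j = 0, depth = 0;
    do {
        while (j < N) {
            j++;
            if (f(adj(v, j)) == v) {     /* forward step           */
                v = adj(v, j);
                j = 0;
                depth++;
                printf("%x\n", v);       /* output v               */
            }
        }
        if (depth > 0) {                 /* backtrack step         */
            node u = f(v);
            for (j = 1; adj(u, j) != v; j++)
                ;                        /* recover j: Adj(u,j)=v  */
            v = u;
            depth--;
        }
    } while (depth > 0 || j < N);
}

int main(void) { rs(0u); return 0; }
\end{verbatim}
The only application-specific parts are \texttt{adj} and \texttt{f}; since
\texttt{f} here returns just the parent $u$, the index $j$ with
$\textrm{Adj}(u,j)=v$ is recovered by a short scan, which is one way a C
code can realize the tuple returned by $f$ in the pseudocode.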
Observe that Algorithm~\ref{rsalg1} does not
require the parameter $v^*$ to be the root of the search tree. If
an arbitrary node in the tree is given, the algorithm reports the subtree
rooted at this node and terminates. This is the key property that
allows parallelization, as we discuss next.
\section{Maximum depth and budgeting}
\label{sec:budg}
In order to parallelize Algorithm~\ref{rsalg1} we need to break up the search
of the entire tree $T$
into searching a set of subtrees.
We do this in two ways: by limiting the search depth and by limiting the number
of nodes visited. In both cases, unexplored subtrees are generated and these
must be flagged for later processing.
To implement this we will supply three additional parameters:
\begin{itemize}
\item
\ensuremath{\mathit{start\_vertex}}\xspace is the vertex from which the reverse search should be initiated and replaces $v^*$.
\item
\ensuremath{\mathit{max\_depth}}\xspace is the depth at which forward steps are terminated.
\item
\ensuremath{\mathit{max\_nodes}}\xspace is the number of nodes to generate before terminating and reporting unexplored subtrees.
\end{itemize}
Both \ensuremath{\mathit{max\_depth}}\xspace and \ensuremath{\mathit{max\_nodes}}\xspace are assumed to be positive, for otherwise there is no work to do.
The modified algorithm is shown in Algorithm~\ref{bts}.
\begin{algorithm}[htb]
\begin{algorithmic}[1]
\Procedure{bts}{$\ensuremath{\mathit{start\_vertex}}\xspace$, $\Delta$, $\textrm{Adj}$, $f$, $\ensuremath{\mathit{max\_depth}}\xspace$, $\ensuremath{\mathit{max\_nodes}}\xspace$}
\State $j \gets 0~~~v \gets \ensuremath{\mathit{start\_vertex}}\xspace~~~\ensuremath{\mathit{count}}\xspace \gets 0 ~~~\ensuremath{\mathit{depth}}\xspace \gets 0$
\Repeat
\State $\ensuremath{\mathit{unexplored}}\xspace \gets \ensuremath{\mathbf{false}}\xspace$
\While {$j < \Delta$ {\bf and} $\ensuremath{\mathit{unexplored}}\xspace = \ensuremath{\mathbf{false}}\xspace$ }
\State $j \gets j+1$
\If {$f(\textrm{Adj}(v,j)) = v$} \Comment{forward step}
\State $v \gets \textrm{Adj}(v,j)~~~~~$
\State $j \gets 0$
\State $\ensuremath{\mathit{count}}\xspace \gets \ensuremath{\mathit{count}}\xspace+1$
\State $\ensuremath{\mathit{depth}}\xspace \gets \ensuremath{\mathit{depth}}\xspace + 1$
\If {$\ensuremath{\mathit{count}}\xspace \ge \ensuremath{\mathit{max\_nodes}}\xspace$ {\bf or} $\ensuremath{\mathit{depth}}\xspace = \ensuremath{\mathit{max\_depth}}\xspace$} \Comment{budget is exhausted}
\State $\ensuremath{\mathit{unexplored}}\xspace \gets \ensuremath{\mathbf{true}}\xspace$
\EndIf
\State \ensuremath{\textrm{put\_output}}\xspace $(v,\ensuremath{\mathit{unexplored}}\xspace)$
\EndIf
\EndWhile
\If {$\ensuremath{\mathit{depth}}\xspace > 0$} \Comment{backtrack step}
\State $(v,j) \gets f(v)$
\State $\ensuremath{\mathit{depth}}\xspace \gets \ensuremath{\mathit{depth}}\xspace - 1$
\EndIf
\Until {$\ensuremath{\mathit{depth}}\xspace = 0$ {\bf and} $j=\Delta$}
\EndProcedure
\end{algorithmic}
\caption{Budgeted Reverse Search}
\label{bts}
\end{algorithm}
Comparing Algorithm~\ref{rsalg1} and Algorithm~\ref{bts}, we note several changes.
Firstly an integer variable \ensuremath{\mathit{count}}\xspace is introduced to keep track
of how many tree nodes have been visited, a process we call {\em budgeting}.
Secondly, a flag \ensuremath{\mathit{unexplored}}\xspace is introduced to distinguish the tree nodes whose subtrees
have not been explored. It is initialized as {\em false} on line 4.
The flag is set to \ensuremath{\mathbf{true}}\xspace in line 13 if either
the budget of \ensuremath{\mathit{max\_nodes}}\xspace has been exhausted
or a depth of \ensuremath{\mathit{max\_depth}}\xspace has been reached.
Each node encountered on a forward step is output via the routine \ensuremath{\textrm{put\_output}}\xspace
on line 15.
In single processor mode the output is simply sent to the output file with a flag added
to unexplored nodes. In multi-processor mode, the output is synchronized and
unexplored nodes are returned to the controlling master process.
Backtracking is as in Algorithm~\ref{rsalg1}.
After each backtrack step the \ensuremath{\mathit{unexplored}}\xspace flag is set to \ensuremath{\mathbf{false}}\xspace in line 4.
If the budget constraint has been exhausted then \ensuremath{\mathit{unexplored}}\xspace will again be set
to \ensuremath{\mathbf{true}}\xspace in line 13 after the first forward step.
In this way all unexplored siblings of nodes on the backtrack path to the root are flagged
in \ensuremath{\textrm{put\_output}}\xspace.
If the budget is not exhausted then forward steps continue until either it is exhausted, \ensuremath{\mathit{max\_depth}}\xspace is reached, or a leaf is encountered.
To output all nodes in the subtree of $T$ rooted at $v$ we set
$\ensuremath{\mathit{start\_vertex}}\xspace=v$, $\ensuremath{\mathit{max\_nodes}}\xspace=+\infty$ and $\ensuremath{\mathit{max\_depth}}\xspace=+\infty$.
This reduces to Algorithm~\ref{rsalg1} if $v=v^*$.
To break up $T$ into subtrees we have two options that can be combined.
Firstly we can set the \ensuremath{\mathit{max\_depth}}\xspace parameter, causing all nodes at
that depth to be flagged as unexplored.
Secondly we can set the budget parameter \ensuremath{\mathit{max\_nodes}}\xspace.
In this case, once this many nodes have been explored the current node
and all unexplored siblings on the backtrack path to the root are output
and flagged as unexplored.
Consider the tree in Figure~\ref{fig:example} which has 25 nodes, $\Delta=6$ and is
rooted at vertex 0.
For convenience the nodes are numbered 0,1,\ldots,24 in reverse search order
but this is in no way essential.
If we set $\ensuremath{\mathit{max\_depth}}\xspace=1$ and $\ensuremath{\mathit{max\_nodes}}\xspace=+\infty$ then only nodes 1,7,18,22 are visited
and output
with $\ensuremath{\mathit{unexplored}}\xspace=\ensuremath{\mathbf{true}}\xspace$.
Now suppose we set the parameter $\ensuremath{\mathit{max\_nodes}}\xspace=13$ and $\ensuremath{\mathit{max\_depth}}\xspace=+\infty$.
Firstly nodes 1,\ldots,12 are output
with $\ensuremath{\mathit{unexplored}}\xspace=\ensuremath{\mathbf{false}}\xspace$.
Then, nodes 13,15,16,18,22 are output with $\ensuremath{\mathit{unexplored}}\xspace=\ensuremath{\mathbf{true}}\xspace$.
Alternatively, if we set $\ensuremath{\mathit{max\_nodes}}\xspace=8$ then nodes 1,2,\ldots,7 are output with $\ensuremath{\mathit{unexplored}}\xspace=\ensuremath{\mathbf{false}}\xspace$
and nodes 8,9,10,11,15,16,18,22 are output with $\ensuremath{\mathit{unexplored}}\xspace=\ensuremath{\mathbf{true}}\xspace$.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[grow cyclic, align=flush center,
level 1/.style={level distance=3cm,sibling angle=90},
level 2/.style={level distance=1.6cm,sibling angle=45}]
\node{$0$}
child { node {$1$}
child { node {$2$} }
child { node {$3$} }
child { node {$4$} }
child { node {$5$} }
child { node {$6$} }
}
child { node {$7$}
child { node {$8$} }
child { node {$9$} }
child { node {$10$} }
child { node {$11$}
child { node {$12$} }
child { node {$13$}
child { node {$14$} }
}
}
child { node {$15$} }
child { node {$16$}
child { node {$17$} }
}
}
child { node {$18$}
child { node {$19$} }
child { node {$20$ } }
child { node {$21$} }
}
child { node {$22$}
child { node {$23$} }
child { node {$24$} }
};
\end{tikzpicture}
\caption{Tree with 25 nodes and $\Delta=6$}
\label{fig:example}
\end{figure}
\section{An example \progname{mts}\xspace interface: topsorts}
\label{sec:mts}
In the tutorial~\cite{tutorial} a C implementation ({\em per.c}) is given
for the reverse search algorithm for generating permutations.
A small modification of this code generates all
linear extensions of a partially ordered set that is given by
a directed acyclic graph (DAG). Such linear extensions are also called
topological sorts or topological orderings.
The code modification is given as Exercise 5.1 and a solution to
the exercise
({\em topsorts.c}) appears at the URL~\cite{tutorial}.
In this section we describe in detail how to modify this code to allow
parallelization via the \progname{mts}\xspace interface.
It is convenient to describe the procedure as two phases. Phase
1 implements max-depth and budgeting and organizes the internal data in a suitable
way. This involves modifying an implementation
of Algorithm~\ref{rsalg1} to an implementation of Algorithm~\ref{bts}
that can be independently tested. We also prepare a global data structure bts\_data
which contains problem data obtained from the input.
In Phase 2 we build a node structure for use by the \progname{mts}\xspace wrapper and add necessary
routines to allow initialization and I/O in a parallel setting. In practice
this involves sharing a common header file with \progname{mts}\xspace. The resulting program
can be compiled as a stand-alone code or as a parallel code with no change in the
source files.
The code shown in the two subsections below is essentially that given at
\cite{tutorial}, with a few nonessential parts deleted.
\subsection{Phase 1 : atopsorts.c}
\begin{itemize}
\item
Internal data grouped in bts\_data structure
{\footnotesize
\begin{lstlisting}
struct bts_data {
int n; /* number of nodes and edges in graph */
int m;
int A[100][100]; /* the graph */
int countonly; /* TRUE means nodes are counted but not output */
};
\end{lstlisting}
}
\item
Initialization of bts\_data from the input file via {\em bts\_init}:
{\footnotesize
\begin{lstlisting}
bts_data *bts_init(int argc, char **argv)
{
int i,j,k,m,n;
bts_data *b = malloc(sizeof(bts_data));
b->countonly=FALSE;
/* process command line arguments */
for (i=1; i<argc; i++)
if (strcmp(argv[i],"-countonly") == 0)
b->countonly=TRUE;
/* read input file and build bts_data */
scanf("%d %d", &b->n, &b->m);
n=b->n;
m=b->m;
for (i=1; i<=n; i++)
for (j=1; j<=n; j++)
b->A[i][j]=0;
for (k=1; k<=m; k++)
{
scanf("%d %d", &i, &j);
b->A[i][j]=1;
}
return b;
}
\end{lstlisting}
}
\item
\ensuremath{\mathit{max\_nodes}}\xspace and \ensuremath{\mathit{max\_depth}}\xspace parameters are introduced to the reverse
search code as described in Algorithm~\ref{bts} (a sketch of the omitted
oracle {\em reverse()} follows this list):
{\footnotesize
\begin{lstlisting}
long bts(bts_data *b, perm v, long max_depth, long max_nodes)
{
int j=0, depth=0, unexplored;
long count=0;
int n = b->n;
int maxdeg=n-1;
do
{
unexplored = FALSE;
while (j < maxdeg && !unexplored)
{
j++;
if (reverse(b, v, j))
{ /* forward step */
Adj(v, j);
depth++;
count++;
if (count >= max_nodes || depth == max_depth)
unexplored=TRUE;
put_output(b,v,depth,unexplored);
j = 0;
}
} /* end while */
if(depth > 0)
{ /* backtrack step */
j = backtrack(b, v);
depth--;
}
} while(depth > 0 || j < maxdeg);
return count;
}
\end{lstlisting}
}
\item
{\em main()} is modified to call {\em bts\_init}, process command line arguments to test
budgeting and call {\em bts()} to perform the tree search:
{\footnotesize
\begin{lstlisting}
int main(int argc, char **argv)
/* generate topsorts with budgeting and bts_data structure */
{
bts_data *b;
perm v;
long count;
int i,n;
long max_nodes=LONG_MAX, max_depth=LONG_MAX;
b=bts_init(argc,argv);
n=b->n;
for (i=1; i<=n; i++)
v[i] = i;
for (i=1; i<argc; i++)
{
if (strcmp(argv[i],"-maxd") == 0)
max_depth = atoi(argv[i+1]);
if (strcmp(argv[i],"-maxnodes") == 0)
max_nodes = atoi(argv[i+1]);
}
/* the tree search is now initiated from root */
put_output(b,v,0,0); /* first output */
count = bts(b,v,max_depth,max_nodes);
/* finish up */
printf("\nnumber of permutations=%ld\n", count);
free(b);
return 0;
}
\end{lstlisting}
}
\end{itemize}
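The listing above calls the routines {\em reverse()}, {\em Adj()} and
{\em backtrack()}, whose definitions are omitted here; the actual code is in
the topsorts programs at~\cite{tutorial}. As a rough guide, the following
sketch shows one way the child test $f(\textrm{Adj}(v,j))=v$ of
Algorithm~\ref{rsalg1} can be realized for linear extensions. It assumes that
{\em perm} is an array of longs, that node labels are topologically sorted (so
the root is the identity permutation), that $\textrm{Adj}(v,j)$ transposes
positions $j$ and $j+1$, and that $f$ swaps the first adjacent descent; it is
illustrative only, not the tutorial code.
{\footnotesize
\begin{lstlisting}
static void apply_f(bts_data *b, perm v)
{ /* local search step: swap the first adjacent descent; descent
     pairs in a linear extension are incomparable, so the swap
     yields another linear extension */
  int i;
  for (i=1; i<b->n; i++)
    if (v[i] > v[i+1])
    {
      long t=v[i]; v[i]=v[i+1]; v[i+1]=t;
      return;
    }
}

int reverse(bts_data *b, perm v, int j)
{ /* is Adj(v,j) a child of v, i.e. does f(Adj(v,j)) = v ? */
  perm w;
  int i;
  long t;
  for (i=1; i<=b->n; i++)
    w[i] = v[i];
  if (b->A[w[j]][w[j+1]])  /* an edge forbids the transposition */
    return FALSE;
  t=w[j]; w[j]=w[j+1]; w[j+1]=t;  /* Adj(w,j) */
  apply_f(b, w);
  for (i=1; i<=b->n; i++)  /* child iff f maps it back to v */
    if (w[i] != v[i])
      return FALSE;
  return TRUE;
}
\end{lstlisting}
}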
\subsection{Phase 2 : btopsorts.c}
\label{subsec:btopphase2}
In the second phase we add the `hooks' that allow communication with \progname{mts}\xspace.
This involves defining a Node structure which holds all necessary information
about a node in the search tree. The roots of unexplored subtrees are maintained
by \progname{mts}\xspace for parallel processing. Therefore whenever a search terminates due
to the \ensuremath{\mathit{max\_nodes}}\xspace or \ensuremath{\mathit{max\_depth}}\xspace restrictions, the Node structure of each unexplored
tree node is returned to \progname{mts}\xspace. As we do not wish to customize \progname{mts}\xspace for
each application, we use a very generic node structure. The user should pack
and unpack the necessary data into this structure as required; a sketch of
such pack/unpack routines follows this list. The Node
structure is defined in the {\em mts.h} header.
We now give some more specific details.
Most of the code is common to single and multithread mode. The few
exceptional cases are handled by
the compile switch {\em \#ifdef MTS}.
\begin{itemize}
\item
The generic Node data structure in {\em mts.h} is:
{\footnotesize
\begin{lstlisting}
typedef struct node_v {
long *vlong; /* tree node longs (for user) */
unsigned int size_vlong;/* number of longs in vlong */
int *vint; /* tree node ints (for user) */
unsigned int size_vint; /* number of ints in vint */
char *vchar; /* tree node chars (for user) */
unsigned int size_vchar;/* number of chars in vchar */
float *vfloat; /* tree node floats (for user) */
unsigned int size_vfloat;/*number of floats in vfloat */
int unexplored; /* this subtree is unexplored */
long depth; /* depth of v in tree */
struct node_v *next; /* for list of unexplored nodes */
} Node;
\end{lstlisting}
}
This is not user modifiable without changing \progname{mts}\xspace.c itself.
\item
The tree search begins from a single root node, which is allocated and
initialized in:
{\footnotesize
\begin{lstlisting}
Node *get_root(bts_data *b)
/* return a node which is the root of the search tree */
{
int i,n;
Node *root = malloc(sizeof(Node));
n=b->n;
root->vlong = malloc(sizeof(long)*(n+1));
root->size_vlong = n+1;
root->vint = NULL; root->size_vint =0;
root->vchar = NULL; root->size_vchar = 0;
root->vfloat = NULL; root->size_vfloat = 0;
root->depth = 0;
root->unexplored = FALSE;
for (i=1; i<=n; i++) /* root permutation for topsort is 1,2,...,n */
root->vlong[i] = i;
if (b->countonly == FALSE)
put_output(b,root); /* first output ! */
return root;
}
\end{lstlisting}
}
In {\em btopsorts.c} a call to {\em get\_root()} is made immediately after the call to {\em bts\_init()}.
When using \progname{mts}\xspace, a call is made to {\em get\_root()} as part of the
initialization process. This is called only once, \progname{mts}\xspace deals with transferring
nodes between processors.
\item
Options for {\em btopsorts.c} are collected in the structure:
{\footnotesize
\begin{lstlisting}
const mtsoption bts_options[] = {
{"-countonly", 0}, /* -countonly option has no parameters */
{"-prune", 1}, /* -prune option has one parameter */
};
const int bts_options_size = 2;
\end{lstlisting}
}
which has currently two user defined options.
The first option, {\em -countonly}, takes no parameters and the second, {\em -prune} (described in Section~\ref{extras}),
takes one.
Additional options can be added here as long as bts\_options\_size is updated
and the options do not
conflict with the \progname{mts}\xspace option list (described in Section~\ref{subsec:mtsopts}).
\item
The input file is handled by \progname{mts}\xspace which converts it to a string pointed to by file pointer $f$. The
user simply needs to convert all {\em scanf(\ldots)} statements to
{\em fscanf(f, \ldots)}.
{\em mtslib.c} contains {\em get\_input()} to convert input to Data for internal use.
No user modification is required or recommended.
Output is handled by \progname{mts}\xspace in multiprocessor mode. Output to stdout and stderr
is controlled by the values set for b$\rightarrow$output and b$\rightarrow$err in {\em bts\_init()}.
This is transparent to the user, who needs
only replace all instances of {\em printf(\ldots)} by {\em tsprintf(b$\rightarrow$output,\ldots)}
and also instances of {\em fprintf(stderr,\ldots)} by {\em tsprintf(b$\rightarrow$err,\ldots)}.
These I/O handling considerations require a slight modification of
{\em bts\_data} and associated declarations; note in particular the two extra
lines required in the definition of {\em bts\_data}
and the compile switch.
{\footnotesize
\begin{lstlisting}
#ifdef MTS
#define tsprintf stream_printf
#define tstream mts_stream
#else
#define tsprintf fprintf
#define tstream FILE
#endif
typedef long Perm[100];
/* bts_data is built from the input and can be user modified */
/* except where mentioned */
struct bts_data {
int n; /* number of nodes and edges in graph */
int m;
int A[100][100]; /* the graph */
int countonly; /* TRUE means nodes are counted but not output */
int prune; /* mark unexplored only if nchild > prune */
tstream *output; /* output stream for stdout */
tstream *err; /* output stream for stderr */
};
\end{lstlisting}
}
\item
A few small changes to {\em put\_output()}, making use of the compile switch,
are required
to be sure the output goes in the right place:
{\footnotesize
\begin{lstlisting}
void put_output(bts_data *b, Node *treenode)
{
int i;
if(!b->countonly)
{
for (i=1; i<=b->n; i++)
tsprintf(b->output,"
tsprintf(b->output, " d
#ifndef MTS
if (treenode->unexplored)
tsprintf(b->output, " *unexplored");
#endif
tsprintf(b->output, "\n");
}
#ifdef MTS
if (treenode->unexplored)
return_unexp(treenode);
#endif
}
\end{lstlisting}
}
The {\em return\_unexp()} routine is supplied in {\em mts.c}.
\item
A few small changes are required to {\em bts\_init()}. Omitting
code already given above, this becomes:
{\footnotesize
\begin{lstlisting}
bts_data *bts_init(int argc, char **argv, Data *in, int proc_no)
{
int i, j, k, n,m;
FILE *f;
bts_data *b = malloc(sizeof(bts_data));
b->countonly=FALSE;
/* process command line arguments to btopsorts - as above, omitted */
f = open_string(in->input);
#ifdef MTS
b->output = open_stream(MTSOUT);
b->err = open_stream(MTSERR);
#else
b->output = stdout;
b->err = stderr;
#endif
i = fscanf(f, "%d %d", &b->n, &b->m);
if (i == EOF )
{
tsprintf(b->err, "no input found\n");
fclose(f);
#ifdef MTS
close_string();
#endif
return NULL;
}
n=b->n;
m=b->m;
/* read input data containing the input graph - as above, omitted */
return b;
}
\end{lstlisting}
}
\item
Finally there is {\em bts\_finish()} which is called by the master program once
after all worker processors have terminated. This routine is not required for reverse search
applications and is included only for future compatibility.
It is intended to be used by applications that use shared\_data, such as
branch and bound, to provide
a final solution to a problem.
{\footnotesize
\begin{lstlisting}
void bts_finish(bts_data *bdata, shared_data *sdata, int size, int checkpointing)
{
return;
}
\end{lstlisting}
}
\end{itemize}
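As noted above, the user is responsible for packing and unpacking the
application state into the generic Node structure. A minimal sketch, assuming
that the permutation and its depth are the entire state of a topsort tree
node, is given below; it mirrors {\em get\_root()}, and the names
{\em pack\_node}/{\em unpack\_node} are illustrative rather than part of \progname{mts}\xspace.
{\footnotesize
\begin{lstlisting}
Node *pack_node(bts_data *b, Perm v, long depth, int unexplored)
{ /* allocate a Node and copy the permutation into vlong */
  int i, n=b->n;
  Node *node = malloc(sizeof(Node));
  node->vlong = malloc(sizeof(long)*(n+1));
  node->size_vlong = n+1;
  node->vint = NULL;   node->size_vint = 0;
  node->vchar = NULL;  node->size_vchar = 0;
  node->vfloat = NULL; node->size_vfloat = 0;
  node->depth = depth;
  node->unexplored = unexplored;
  for (i=1; i<=n; i++)   /* the permutation is the whole state */
    node->vlong[i] = v[i];
  return node;
}

void unpack_node(bts_data *b, Node *node, Perm v, long *depth)
{ /* recover the permutation and depth from a received Node */
  int i;
  for (i=1; i<=b->n; i++)
    v[i] = node->vlong[i];
  *depth = node->depth;
}
\end{lstlisting}
}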
By itself {\em btopsorts.c} compiles to the standalone program \progname{btopsorts}\xspace, but it
becomes the parallel code \progname{mtopsorts}\xspace if compiled with the
{\em mts.c} wrapper using the MTS flag.
\subsection{Extras}
\label{extras}
There are some additional features implemented in \progname{mts}\xspace that are not essential
but may prove useful.
They are documented in the code available at~\cite{tutorial} and include:
\begin{itemize}
\item
{\em cleanstop()} and {\em emergencystop()}.
These allow a user process to cleanly close down the entire parallel execution.
\item
Shared data. This allows user processes to pass data back to \progname{mts}\xspace
which may be useful for other processes. This is not required for reverse
search applications but will be useful for applications such as branch and bound,
satisfiability, game tree search, etc.
\item
Output blocks. Although \progname{mts}\xspace guarantees that output from each call to
{\em stream\_printf} is synchronized with other processes, it may be
desirable to synchronize a block of {\em stream\_printf} output. If
the application opens an \emph{output block}, all output produced until
the application closes this \emph{output block} (or \emph{bts} returns) will
be printed as a block without interruption from other workers.
\item
Pruning. The efficiency of \progname{mts}\xspace depends on keeping the job list non-empty
until the end of the computation, without letting it get too large. Depending on
the application, there may be a substantial restart cost for each unexplored
subtree. Surely there is no need to return a leaf as an unexplored node, and the
{\em prune=0} option checks for this. Further, if an unexplored node has only
one child it may be advantageous to explore further, terminating either at
a leaf or at a node with two or more children, which is returned
as {\em unexplored}. The {\em prune=1}
option handles this condition, meaning that no isolated nodes or paths are
returned as unexplored.
Note that pruning is not a built-in \progname{mts}\xspace option; it is an example option
that applications may wish to include. An example of pruning is implemented
in \progname{mtopsorts}\xspace, as mentioned in Section~\ref{subsec:btopphase2};
a sketch of the underlying child-count test follows this list.
\end{itemize}
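A sketch of the child-count test behind the {\em -prune} option is given
below. It matches the comment in the {\em bts\_data} definition ({\em mark
unexplored only if nchild > prune}); the name {\em num\_children} is
illustrative, and {\em reverse()} is the child test sketched in
Section~\ref{sec:mts}.
{\footnotesize
\begin{lstlisting}
int num_children(bts_data *b, perm v)
{ /* count the children of v, one candidate per index j */
  int j, nchild=0;
  for (j=1; j<b->n; j++)
    if (reverse(b, v, j))
      nchild++;
  return nchild;
}

/* ... inside bts(), instead of flagging unconditionally:
   if (num_children(b, v) > b->prune)
       unexplored = TRUE;
*/
\end{lstlisting}
}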
\subsection{A second example : spanning trees}
\label{sec:tree}
In the tutorial~\cite{tutorial} a C implementation ({\em tree.c}) is given
for the reverse search algorithm for all spanning trees of the complete graph.
An extension of this to generate all spanning trees of
a given graph is stated as Exercise 6.3.
Applying Phase 1 and 2 as described above results in the codes {\em atree.c}
and {\em btree.c}. The \progname{mts}\xspace wrapper may be directly compiled with {\em btree.c}
to provide the parallel implementation \progname{mtree}\xspace.
All of these codes are given at the URL~\cite{tutorial}.
\section{Using the \progname{mts}\xspace interface}
\label{subsec:mtsopts}
\subsection{Building \progname{mts}\xspace}
Applications may have additional requirements, but the
requirements for \progname{mts}\xspace are fairly simple. The most important
is an MPI implementation. We develop and test on Open MPI and Intel MPI,
but any reasonably modern implementation should work. We use the common
\textsf{mpicc} compiler wrapper to build \progname{mts}\xspace. This is installed as part of
the MPI implementation; note that when installing MPI via binary packages,
the development package (if separate from the runtime
support) will likely be needed to compile.
\progname{mts}\xspace should work on any SMP workstation, properly-configured cluster or general-purpose
supercomputer. Configuring MPI for clusters is beyond the scope of
this document and depends on the implementation chosen.
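As a concrete illustration, the topsorts example of Section~\ref{sec:mts} can
be built roughly as follows; the exact file names and flags follow the
distribution at~\cite{tutorial}, whose makefile is authoritative.
{\footnotesize
\begin{lstlisting}
# stand-alone single processor binary
cc -O2 -o btopsorts btopsorts.c
# parallel binary: compile the mts wrapper together with the
# application, turning on the MTS compile switch
mpicc -O2 -DMTS -o mtopsorts mts.c btopsorts.c
\end{lstlisting}
}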
\subsection{\progname{mts}\xspace options}
\label{options}
Similar to the {\em bts\_options} array shown in
Section~\ref{subsec:btopphase2}, the built-in \progname{mts}\xspace options are collected in
an {\em mts\_options} array in {\em mts.c}:
{\footnotesize
\begin{lstlisting}
const mtsoption mts_options[] = { /* external linkage for */
{"-maxd", 1}, /* curious applications */
{"-maxnodes", 1},
{"-scale", 1},
{"-lmin", 1},
{"-lmax", 1},
{"-maxbuf", 1},
{"-freq", 1},
{"-hist", 1},
{"-checkp", 1},
{"-stop", 1},
{"-restart", 1}
};
const int mts_options_size = 11;
\end{lstlisting}
}
\noindent which currently has eleven built-in options, similar to the options
supported by \progname{mplrs}\xspace~\cite{AJ15b}. The first option {\em -maxd} takes a parameter that
specifies \ensuremath{\mathit{max\_depth}}\xspace, and likewise {\em -maxnodes} takes a parameter
specifying \ensuremath{\mathit{max\_nodes}}\xspace. The \ensuremath{\mathit{max\_nodes}}\xspace parameter is scaled by the scaling
factor given by the {\em -scale} option. The {\em -lmin} and {\em -lmax}
parameters specify when the budget is modified based on the number of unexplored
subtrees available. The idea is to use smaller budgets to quickly grow the
job list when it is small, and larger budgets to avoid overhead when the job
list is sufficiently large.
More precisely, the \ensuremath{\mathit{max\_depth}}\xspace is used only if the
number of unexplored subtrees available ($|L|$ in~\cite{AJ15b}) is less than
the number of processes in use times the \ensuremath{\mathit{lmin}}\xspace value. The $\ensuremath{\mathit{max\_nodes}}\xspace$ value
is scaled by the scaling factor if the number of unexplored subtrees available
is larger than the number of processes times the \ensuremath{\mathit{lmax}}\xspace value.
For example, with 128 workers and the defaults given below, \ensuremath{\mathit{max\_depth}}\xspace
applies while $|L| < 128$, and the budget grows to $40 \times 5000$ nodes once
$|L| > 384$.
While these parameters have default values used in this paper, it is very
likely that different applications will require different values
to attain good performance. The next two parameters, {\em -hist} and
{\em -freq} allow one to specify \emph{histogram} and \emph{frequency} files.
These collect statistics on the degree of parallelization obtained, and can
easily be plotted using \progname{gnuplot}\xspace. This is intended to help the user tune
\progname{mts}\xspace to their application. We explain the format and usage of these files
in Section~\ref{hist}.
The last three options relate to checkpoint files and restarts.
The {\em -checkp} file is used to specify a \emph{checkpoint} file. If the run
is interrupted (e.g., by the application using \emph{cleanstop()} to request
a checkpoint), \progname{mts}\xspace
will save the current list of unexplored nodes, shared data, and other
information needed to restart. The {\em -restart} option is used to
specify a checkpoint file to restart from. The {\em -stop} option is
used to specify a \emph{stop} file. When this option is used, \progname{mts}\xspace will
periodically check for the existence of the specified file. If it exists,
\progname{mts}\xspace will checkpoint and exit when possible.
Finally, the option {\em -maxbuf} controls the ``streaminess''
of the output, where the parameter
specifies a number of bytes. If the buffer is larger than the specified
number of bytes and ends in a newline, it is flushed if possible. Smaller
values increase streaminess but may increase overhead.
Currently the default parameters are:
\[
\ensuremath{\mathit{max\_depth}}\xspace=2~~ \ensuremath{\mathit{max\_nodes}}\xspace=5000~~ \ensuremath{\mathit{scale}}\xspace=40~~ \ensuremath{\mathit{lmin}}\xspace=1~~ \ensuremath{\mathit{lmax}}\xspace=3~~ \ensuremath{\mathit{maxbuf}}\xspace=1048576.
\]
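As an illustration, a run of \progname{mtopsorts}\xspace with the default parameters made
explicit might look as follows; the process count, file names and the
placement of the input file on the command line are illustrative only.
{\footnotesize
\begin{lstlisting}
mpirun -np 192 mtopsorts input.txt -maxd 2 -maxnodes 5000 \
       -scale 40 -hist hist.txt -freq freq.txt
\end{lstlisting}
}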
\section{Computational results}
The tests were performed at Kyoto University on \compname{mai32}\xspace, a cluster of 5 nodes
with a total of 192 identical processor cores, consisting of:
\begin{itemize}
\item
\compname{mai32abcd}\xspace: 4 nodes, each containing: 2x Opteron 6376 (16-core 2.3GHz), 32GB memory, 500GB hard drive (128 cores in total)
\item
\compname{mai32ef}\xspace: 4x Opteron 6376 (16-core 2.3GHz), 64 cores, 256GB memory, 4TB hard drive.
\end{itemize}
\subsection{Linear extensions: \progname{mtopsorts}\xspace}
\label{subsec:mtop}
The tests were performed using the following codes:
\begin{itemize}
\item
\progname{VR}\xspace: obtained from the Combinatorial Object Server~\cite{COS}, generates linear
extensions in lexicographic order via the
Varol-Rotem algorithm~\cite{VR81} (Algorithm V in Section 7.2.1.2 of~\cite{knuth11})
\item
\progname{Genle}\xspace: also obtained from~\cite{COS}, generates linear extensions in Gray code order
using the algorithm of Pruesse and Rotem~\cite{PR91}
\item
\progname{btopsorts}\xspace: derived from the reverse search code {\em topsorts.c}~\cite{tutorial} as
described in Section~\ref{sec:mts}
\item
\progname{mtopsorts}\xspace: \progname{mts}\xspace parallelization of \progname{btopsorts}\xspace.
\end{itemize}
For the tests all codes were used in count-only mode due to the enormous output that
would otherwise be generated. All codes were run with the default parameters given at the end of
Section~\ref{options}.
The problems chosen were the following graphs which are listed in order of increasing
edge density:
\begin{itemize}
\item
{\em pm22} : the partial order $a_1 < a_2,\ldots, a_{21} < a_{22}$,
$a_1 < a_3 <\ldots< a_{21}$ that generates all perfect matchings on 22 elements
\item
{\em cat42} : the partial order $a_1 < a_2,\ldots, a_{41} < a_{42}$;
$~~~a_1 < a_3,\ldots, a_{39} < a_{41}$; $~~~a_2 < a_4,\ldots, a_{40} < a_{42}$
that generates all the 2 x 21 Young Tableaux, whose cardinality is the
21st Catalan number
\item
$K_{8,9}$ : the partial order $a_1 < a_9,\ldots, a_1 < a_{17}$
$~~\ldots~~$ $a_8 < a_9,\ldots, a_8 < a_{17}$ corresponding to
the complete bipartite graph $K_{8,9}$
with all edges directed from the smaller part to the larger.
\end{itemize}
The constructions for the first two partial orders
are well known (see, e.g., Section 7.2.1.2 of~\cite{knuth11}).
\begin{table}[htbp]
\centering
\scalebox{0.88}{
\begin{tabular}{||c c c c||c|c|c||c|c|c|c|c||}
\hline
Graph &m&n & No. of perms &\progname{VR}\xspace& \progname{Genle}\xspace & \progname{btopsorts}\xspace &\multicolumn{5}{|c||}{\progname{mtopsorts}\xspace } \\
& & & & & & & 12 & 24 & 48 & 96 & 192 \\
\hline
{\em pm22} &22&21 & 13,749,310,575 & 179 & 14 & 12723 & 1172 &595& 360&206 &125 \\
& & & & & & (1) & (11) & (21) & (35) & (62) & (102) \\
\hline
{\em cat42} &42 &61 & 24,466,267,020 & 654 & 171 &45674 &4731 &2699&1293&724 &408 \\
& & & & & & (1) & (9.7) & (17) & (35) & (63) & (112) \\
\hline
$K_{8,9}$ &17&72 & 14,631,321,600 & 159 & 5 & 8957 &859&445&249 & 137 &85 \\
& & & & & & (1) & (10) & (20) & (36) & (65) & (105) \\
\hline
\end{tabular}
}
\caption{Linear extensions: \compname{mai32}\xspace, times in secs (speedup on \progname{btopsorts}\xspace)}
\label{tab:tops}
\end{table}
The computational results are given in Table~\ref{tab:tops}. First observe that the reverse search
code \progname{btopsorts}\xspace is very slow, over 900 times slower than \progname{Genle}\xspace and
over 70 times slower than \progname{VR}\xspace on {\em pm22} for example. However
the parallel \progname{mts}\xspace code obtains excellent speedups and is faster than \progname{VR}\xspace on all problems when
192 cores are used.
\subsection{Spanning trees: \progname{mtree}\xspace}
The tests were performed using the following codes:
\begin{itemize}
\item
\progname{grayspan}\xspace: Knuth's implementation~\cite{knuthcode} of an algorithm that generates all spanning trees of a given graph,
changing only one edge at a time, as described in
Malcolm Smith's M.S. thesis, {\em Generating spanning trees} (University
of Victoria, 1997)
\item
\progname{grayspspan}\xspace: Knuth's improved implementation of \progname{grayspan}\xspace:
``This program combines the ideas of \progname{grayspan}\xspace
and {\em spspan}, resulting in a glorious routine that generates
all spanning trees of a given graph, changing only one edge at a time,
with `guaranteed efficiency'---in the sense that the total running
time is $O(m+n+t)$ when there are $m$ edges, $n$ vertices, and $t$
spanning trees.''\cite{knuthcode}
\item
\progname{btree}\xspace: derived from the reverse search code {\em tree.c}~\cite{tutorial} as described in Section~\ref{sec:tree}
\item
\progname{mtree}\xspace: \progname{mts}\xspace parallelization of \progname{btree}\xspace.
\end{itemize}
\noindent
Both \progname{grayspan}\xspace and \progname{grayspspan}\xspace are described in detail in Knuth~\cite{knuth11}.
Again all codes were used in count-only mode
and with the default parameters given at the end of Section~\ref{options}.
The problems chosen were the following graphs which are listed in order of increasing
edge density:
\begin{itemize}
\item
{\em 8-cage} : the Tutte-Coxeter graph, which is the unique minimal graph of girth 8
\item
$P_5C_5$ : 5X5 cylinder $P_5 \Box C_5$
\item
$C_5C_5$ : 5X5 cylinder $C_5 \Box C_5$
\item
$K_{7,7}$ : the complete bipartite graph with 7 vertices in each part
\item
$K_{12}$ : the complete graph on 12 vertices.
\end{itemize}
The latter 4 graphs were motivated by Table 5 in~\cite{knuth11}: $P_5C_5$ appears therein
and the other graphs are larger versions of examples in the table.
\label{sec:comp}
\begin{table}[h!tbp]
\centering
\scalebox{0.9}{
\begin{tabular}{||c c c c||c|c|c||c|c|c|c|c||}
\hline
Graph &m&n & No. of trees &\progname{grayspan}\xspace& \progname{grayspspan}\xspace & \progname{btree}\xspace &\multicolumn{5}{|c||}{\progname{mtree}\xspace } \\
& & & & & & & 12 & 24 & 48 & 96 & 192 \\
\hline
{\em 8-cage} &30&45 & 23,066,015,625 & 3166 & 730 & 10008 &1061 &459& 238&137 & 92 \\
& & & & & & (1) & (9.4) & (21) & (42) & (73) & (109) \\
\hline
$P_5C_5$ & 25& 45 & 38,720,000,000 & 3962 & 1212& 8918 & 851 &455& 221&137 & 122 \\
& & & & & & (1) & (10) & (20) & (40) & (65) & (73) \\
\hline
$C_5C_5$ & 25 & 50 &1,562,500,000,000 & 131092 & 41568&230077 &26790 &13280&7459& 4960 & 4244 \\
& & & & & & (1) & (8.6) & (17) & (31) & (46) & (54) \\
\hline
$ K_{7,7}$ & 14 & 49 & 13,841,287,201 & 699 & 460 & 2708 &259 &142& 68& 51 & 61 \\
& & & & & & (1) & (10) & (19) & (40) & (53) & (44) \\
\hline
$K_{12}$ & 12 & 66 & 61,917,364,224 & 2394 & 1978 & 3179 &310 &172& 84& 97 & 148 \\
& & & & & & (1) & (10) & (18) & (38) & (33) & (21) \\
\hline
\end{tabular}
}
\caption{Spanning tree generation: \compname{mai32}\xspace, times in secs (speedup on \progname{btree}\xspace)}
\label{tab:trees}
\end{table}
The computational results are given in Table~\ref{tab:trees}. This time the reverse search
code is a bit more competitive: about 3 times slower than \progname{grayspan}\xspace and
about 14 times slower than \progname{grayspspan}\xspace on {\em 8-cage} for example.
The parallel \progname{mts}\xspace code runs about as fast as \progname{grayspspan}\xspace on all problems when
12 cores are used and is significantly faster after that. Near linear speedups are
obtained up to 48-cores but then tail off. For the two dense graphs
$ K_{7,7}$ and $K_{12}$ the performance of \progname{mts}\xspace is actually worse with 192 cores than with 96.
\section{Evaluating and improving performance}
\label{hist}
The amount of work contained in a node can vary dramatically between
applications, and so \progname{mts}\xspace includes features intended to help tune
its budgeting to a particular application. We briefly mentioned
histogram and frequency files in Section~\ref{subsec:mtsopts};
here we take a closer look at using these to improve performance.
We focus on \progname{mtopsorts}\xspace and the $K_{8,9}$ instance introduced in
Section~\ref{subsec:mtop}.
When \progname{mts}\xspace is used with the \emph{-hist} option, it periodically writes
a line to the specified histogram file. The line contains (in order
and separated by whitespace):
\begin{enumerate}
\item Time in seconds since execution began (floating point)
\item The number of workers currently busy
\item The number of unexplored nodes currently in the joblist
\item The number of workers owing the master a message about
unexplored nodes
\item Unused (currently $0$)
\item Unused (currently $0$)
\item The total number of unexplored nodes (jobs) the master has seen
since execution began.
\end{enumerate}
For example, the following is part of a histogram file generated by
\progname{mtopsorts}\xspace using 128 cores and default parameters:
\begin{verbatim}
1.003161 114 1780 114 0 0 6297
2.005773 120 1675 120 0 0 9946
3.006864 117 1679 117 0 0 13665
\end{verbatim}
This indicates that at (approximately) 3 seconds, there were
117 busy workers, 1679 jobs in the list and 117 workers owing messages, and
that 13665 total jobs had existed (i.e. completed jobs, currently executing
jobs, and the jobs currently in the list).
The number of workers owing messages to the master is always at least
the number of busy workers (since any busy worker must report the number
of unexplored nodes it will return). It could become larger, for example
if these messages are being delayed due to a saturated interconnect.
However, the other entries are likely to be of more interest.
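Since the format is fixed, the file is also easy to post-process directly. A
small sketch (illustrative, not part of the distribution) that parses one line
into its seven documented fields:
{\footnotesize
\begin{lstlisting}
#include <stdio.h>

int read_hist_line(FILE *f, double *t, long *busy, long *joblist,
                   long *owing, long *total)
{ /* two fields are currently unused and are discarded */
  long unused1, unused2;
  return fscanf(f, "%lf %ld %ld %ld %ld %ld %ld", t, busy,
                joblist, owing, &unused1, &unused2, total) == 7;
}
\end{lstlisting}
}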
The \progname{mts}\xspace distribution includes a script (\texttt{plotL.gp}) to produce
graphical plots from these histogram files using \progname{gnuplot}\xspace. The script
produces three plots, with time on the horizontal axis and the
second to fourth columns on the vertical axis. Figure~\ref{fig:defplot}
contains the plots produced for the \progname{mtopsorts}\xspace run with default parameters
in the example above.
\begin{figure}[h!tbp]
\begin{minipage}{0.5\textwidth}
\includegraphics[width=\textwidth]{badparams/plotL-k89default-busy}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\includegraphics[width=\textwidth]{badparams/plotL-k89default-L}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\includegraphics[width=\textwidth]{badparams/plotL-k89default-msg}
\end{minipage}
\caption{Histograms for \progname{mtopsorts}\xspace on $K_{8,9}$: busy workers, job list size, messages owed }
\label{fig:defplot}
\end{figure}
In Figure~\ref{fig:defplot} using the default parameters, we see that the master is struggling to keep
all workers busy despite having jobs available. This suggests that we can
improve performance by using better parameters.
In this case, a larger \emph{-scale} or \emph{-maxnodes} value may help,
since it will allow workers to do more work (assuming a sufficiently large
subproblem) before contacting the master.
\begin{figure}[h!tbp]
\centering
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\textwidth]{badparams/plotL-k89maxn1000000-busy}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\textwidth]{badparams/plotL-k89maxn1000000-L}
\end{minipage}
\caption{Histograms with \emph{-maxnodes} $1000000$:
busy workers (l), joblist size (r)}
\label{fig:maxnplot}
\end{figure}
For example, we might try increasing \emph{-maxnodes}.
Figure~\ref{fig:maxnplot} shows\footnote{We omit the third plot; the number
of messages owed normally tracks the number of busy workers.}
the result of using the (much larger) value $1000000$. There
we see the opposite problem; many workers become idle
because the job list becomes empty. This suggests that \emph{-maxnodes}
$1000000$ is not suitable in this case; even when the job list is nearly
empty the master schedules jobs with large budgets.
Instead of dramatically increasing \emph{-maxnodes}, we might try a
larger \emph{-scale} together with a modest increase in \emph{-maxnodes}.
This will leave budgets relatively low when the job list is small,
and increase the budget by a larger amount when the job list is large.
Figure~\ref{fig:scaleplot} shows the result of using a value of $200$ for
scaling and $10000$ for \emph{-maxnodes}. These parameters produce less
than half the total number of jobs compared to the default
parameters, and increase overall performance by about five
percent on this particular input.
\begin{figure}[h!tbp]
\centering
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\textwidth]{badparams/plotL-k89scale200maxn10000-busy}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\textwidth]{badparams/plotL-k89scale200maxn10000-L}
\end{minipage}
\caption{Histograms with \emph{-scale} $200$ \emph{-maxnodes}
$10000$ on $K_{8,9}$:
busy workers (l), joblist size (r)}
\label{fig:scaleplot}
\end{figure}
In addition to the performance histograms, \progname{mts}\xspace can generate
\emph{frequency} files. These files contain one number per line,
giving the values that the application's \emph{bts()} function
returns on each job. In our sample applications, this value is
the number of nodes that \emph{bts()} visited -- and so the
frequency file contains the actual size of each job. This can
provide statistical information about the tree that is helpful
when tuning the parameters for better performance. For example,
it may be particularly helpful to implement and use pruning if many
jobs correspond to leaves. Likewise, increasing the budget will
have limited effect if only a few jobs use the full budget.
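Such a file is also easy to summarize directly. The following sketch
(illustrative, not part of the distribution) reports how many jobs did
essentially no work and how many consumed a full budget of
\ensuremath{\mathit{max\_nodes}}\xspace nodes:
{\footnotesize
\begin{lstlisting}
#include <stdio.h>

void freq_summary(FILE *f, long budget)
{ /* one job size per line, as produced with the -freq option */
  long size, jobs=0, leaves=0, full=0;
  while (fscanf(f, "%ld", &size) == 1)
  {
    jobs++;
    if (size <= 1) leaves++;      /* trivial jobs: pruning candidates */
    if (size >= budget) full++;   /* jobs that used the entire budget */
  }
  printf("%ld jobs: %ld trivial, %ld used the full budget\n",
         jobs, leaves, full);
}
\end{lstlisting}
}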
The \progname{mts}\xspace distribution contains a script (\texttt{plotD.gp}) that
uses \progname{gnuplot}\xspace to plot the distribution of subproblem sizes.
Figure~\ref{fig:freqplot} shows the distribution of subproblem
sizes that was produced in a run of \progname{mtopsorts}\xspace on $K_{8,9}$ with
default parameters.
\begin{figure}[h!tbp]
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\textwidth]{badparams/freqk89-default-full}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\textwidth]{badparams/freqk89-default-small}
\end{minipage}
\caption{Subproblem sizes in $K_{8,9}$, default parameters: all (l) small subproblems only (r)}
\label{fig:freqplot}
\end{figure}
\section{Conclusions}
We have presented a generic framework for parallelizing reverse search codes requiring only
minimal changes to the legacy code. Two features of our approach are that the parallelizing wrapper
does not need to be user modified and the modified legacy code can be tested in standalone
single processor mode. Applying this framework to
two very basic reverse search codes we obtained comparable results
to that previously obtained by the customized \progname{mplrs}\xspace wrapper applied to the
complex \progname{lrs}\xspace code~\cite{AJ15b}. We expect that many other
reverse search applications will obtain similar speedups when parallelized with \progname{mts}\xspace.
Ongoing work is to extend the use of \progname{mts}\xspace to more complex search tree algorithms
such as branch and bound, SAT solvers, game tree search, etc. Since these applications require
some sharing of common data it will be interesting to see if the same
sorts of speedups can be obtained.
\bibliographystyle{spmpsci}
\section{Introduction}
In recent years, technological advancements in computational power have fueled impressive progress in AI tasks based on deep learning. The introduction of AlexNet\cite{AlexNet} in 2012 marked a new beginning for Deep Neural Networks (DNNs) in the computer vision space. Since then, many models that challenge, or even outperform, human estimation accuracy have been developed for a variety of computer vision tasks. However, the accuracy gains of a model usually come at the cost of longer inference time and higher resource requirements.
The execution of DNN tasks quickly finds its way into handheld, embedded, and edge platforms, which are typically resource-constrained. The choice of a suitable platform depends mainly on the power consumption of the system and its performance on specific tasks. This work captures the performance characterisation of a number of widely available resource-constrained embedded devices on three widely adopted inference tasks. The selected models are presented in \autoref{DNNModels}, along with information regarding their usage and computational demands.
\begin{table}[t]
\caption{DNN models used in the evaluation}
\vspace{-0.05in}
\resizebox{\columnwidth}{!}{
\setlength\tabcolsep{2.5pt}
\begin{tabular}{llllrrr}
\hline
\textbf{Models} & \textbf{Task} & \textbf{Dataset} & \textbf{Year} & \textbf{MAC(G)} & \textbf{Params(M)} & \textbf{Top-1} \\
\textbf{} & \textbf{} & \textbf{} & \textbf{} & \textbf{} & \textbf{} & \textbf{ Accuracy}\\ \hline
Alexnet\cite{AlexNet} & Image & ImageNet & 2012 & 0.73 & 60.97 & 56.62\%\\
InceptionV1\cite{IncV1} & Classification & ImageNet & 2014 & 1.59 & 7.00 & 68.70\%\\
InceptionV4\cite{IncV4} & & ImageNet & 2017 & 12.27 & 42.62 & 80.08\%\\
SqueezeNetV1.1\cite{squeeze} & & ImageNet & 2016 & 0.39 & 1.24 & 58.18\%\\
DenseNet121\cite{dense} & & ImageNet & 2017 & 3.08 & 7.98 & 74.43\%\\
DenseNet161\cite{dense} & & ImageNet & 2017 & 8.52 & 28.73 & 77.14\%\\
DenseNet169\cite{dense} & & ImageNet & 2017 & 3.72 & 14.15 & 75.60\%\\
ResNet50\cite{res} & &ImageNet & 2015 & 3.87 & 25.56 & 76.13\%\\
ResNet101\cite{res} & & ImageNet & 2015 & 7.59 & 44.55 & 77.37\%\\
ResNet152\cite{res} & & ImageNet & 2015 & 11.30 & 60.19 & 78.31\%\\
VGG16\cite{vgg} & & ImageNet & 2014 & 15.47 & 138.36 & 71.59\%\\
VGG19\cite{vgg} & & ImageNet & 2014 & 19.63 & 143.67 & 72.38\%\\
MobileNetV1\cite{MobileNets} & & ImageNet & 2017 & 0.57 & 4.23 & 70.81\% \\
MobileNetV2\cite{MobileNets} & & ImageNet & 2017 & 0.44 & 3.51 & 71.90\%\\ \hline
SSD300\cite{ssd} & Object & VOC0712 & 2016 & 31.37 & 26.28 & 75.8 mAP\\
SSD512\cite{ssd} & Detection & VOC0712 & 2016 & 90.21 & 27.19 & 78.5 mAP\\
YOLO-V3\cite{yolov3} & & COCO & 2018 & 27.27 & 61.87 & 55.3 mAP\\ \hline
FCN8\cite{fcn8} & Image & VOC0712 & 2015 & 181.55 & 134.49 & 75.9\%\\
Dilation\cite{dilation} & Segmentation & CityScapes & 2015 & 2650.00 & 134.46 & - \tablefootnote{No official report is available for the specific dataset.} \\ \hline
\end{tabular}}
\label{DNNModels}
\vspace{-0.15in}
\end{table}
A large portion of the available platforms are paired with a software development kit (SDK) tailored for the optimised execution of DNNs. These kits provide software support and include tools and libraries that implement low-level functions for the specific hardware architecture, aiming to maximise the system's performance under machine learning workloads. Such software support is instrumental in allowing the hardware platform to deliver high performance under machine learning workloads, so any performance comparison of these devices under ML loads must take the available SDK into account.
The focus of this study is on characterizing the performance of widely utilised embedded systems for inference in real-life applications. The paper reports the latency of the systems under investigation at a batch size of 1, in order to characterise latency-critical applications, as well as at larger batch sizes, in order to explore the impact of batch size on system throughput for throughput-oriented applications. Furthermore, we extend our analysis by providing real-time power consumption measurements across all inference workloads, as well as the energy efficiency of the embedded platforms executing each available model.
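For clarity, the metrics reported in the remainder of the paper follow their usual definitions: for a batch of $B$ inputs processed in time $t_B$ under average power $P$,
\[
\textrm{throughput} = \frac{B}{t_B}, \qquad
\textrm{energy efficiency} = \frac{B}{t_B \cdot P}\;\;\textrm{(inferences/Joule)},
\]
while latency is the value of $t_B$ for $B=1$.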
The rest of the paper is organized as follows. Section II presents related previous work. Section III describes the hardware platforms that were used, along with a brief overview of the available toolkits for accelerating DNN inference. Sections IV and V present the experimental results and a discussion of them, respectively, and Section VI provides our conclusions.
\section{Related Work}
Some notable works have analyzed the execution performance of DNN models on various deployment platforms. In their work \cite{blott}, Blott {\em et al.} focus on how accelerators benefit from different optimization techniques, such as pruning, quantization and numerical representations. However, their experiments include a small number of publicly available models, and they focus only on the performance capabilities of their platforms, rather than on real power consumption and energy efficiency. Ignatov {\em et al.} \cite{AI} present an extended benchmark analysis of Android smartphones' performance, running various DNNs and tasks on their available Systems on Chip (SoCs). Another approach for increasing the performance of Convolutional Neural Networks (CNNs) was made by Hegde {\em et al.} in \cite{CaffePresso}. Their work focuses on running CNNs on hardware accelerators integrated into commodity SoC platforms. Almeida {\em et al.} \cite{Embench} present a deep performance analysis of modern commodity platforms used in desktop and server environments. Their work is mainly focused on running DNN models for the image classification task, but without utilizing the available SDK toolsets that might accelerate inference execution. The MLPerf benchmark \cite{mlperf} reports a number of results on both inference and training tasks, focusing on specific scenarios and metrics. The authors have provided a protocol for users to investigate the performance of a system and upload their results.
This work is aligned with the above approaches and contributes to the field by focusing on the performance characterisation of a number of resource-constrained platforms under a large number of workloads across three different computer vision tasks, providing insights on their performance as well as on their real-time power consumption and energy efficiency.
\section{Hardware Platforms} \label{HW}
Our performance landscape is based on resource-constrained platforms. These platforms are usually integrated into embedded systems or desktop environments where both computational and memory resources are limited. Following this approach, our platform list includes embedded SoCs that are especially designed for inference tasks, as well as general-purpose units that are commonly included in many modern commodity systems.
\subsection{Intel}
The spectrum of Intel's platforms is very wide, including numerous processor units for both desktop and high-end server environments. A conventional solution for resource-constrained systems and for edge computing is a high-end general-purpose CPU, such as the i7 6700. This processor offers 4 physical cores and, with HyperThreading enabled, reaches 8 logical cores. Its base frequency is set at 3.4 GHz with a thermal design power (TDP) of 65W. The i7 6700 is integrated into a desktop environment along with 8GB of DDR4 RAM. However, many systems are designed to provide high performance with low power consumption. For that purpose, Intel has designed the Neural Compute Stick 2 (NCS2), a special platform based on the Myriad X Vision Processing Unit. The NCS2 can deliver high computational performance on computer vision tasks at about 1 watt. Both platforms have been tested under Intel's OpenVino\textsuperscript{TM} toolkit, which enables deep learning inference at the edge and at the same time maximizes the performance of Intel's platforms. The preferred precision for DNN models on CPUs is single-precision floating point (FP32), but other precisions are also supported, such as half-precision floating point (FP16) and fixed point. The NCS2, on the other hand, supports only FP16 models.
\subsection{Nvidia}
High-end GPUs are routinely used to achieve high throughput in computer vision tasks, for either training or inference. Such platforms are massively parallel and throughput-oriented, but they are also a more power-costly option compared to CPUs. An example is the mid-range Nvidia GeForce GTX 1060 6GB GPU, whose peak power is 120W. The GeForce GTX 1060 6GB integrates 1280 CUDA cores, but the absence of Tensor cores negatively affects its performance at FP16 precision. A power-efficient option recommended by Nvidia is the Jetson TX2. This embedded system is the second installment of the Jetson family and is a 15W AI supercomputer at its maximum performance. The Jetson TX2 integrates a 256-core GPU based on the Nvidia Pascal\textsuperscript{TM} architecture, which, contrary to the aforementioned platform, supports the execution of models at both FP32 and FP16 precision at full rate. Furthermore, the Jetson TX2 integrates 8GB of LPDDR4 RAM, which is shared between the integrated GPU and the CPU. To achieve higher performance on both Nvidia platforms, we developed the computer vision tasks using TensorRT, an SDK for increasing throughput and reducing latency in deep learning inference applications. For a platform to exploit the benefits of TensorRT, the system integrating the Nvidia GPU must support both CUDA and cuDNN.
\subsection{Arm}
Many power-constrained systems (i.e. mobile devices) integrate ARM Cortex-A series processors due to their relatively high performance and very low power consumption and, more recently, their support for 64-bit instruction sets. To fully utilise Cortex-A CPUs for deep learning inference, ARM provides the NN SDK, which optimises the mapping of machine learning workloads onto ARM devices. This SDK is accompanied by Arm's Compute Library, a repository of low-level functions for the accelerated execution of (mainly computer vision) algorithms and functions on ARM processors. In our performance analysis we use the ARM Cortex-A57, a processor unit with 4 physical cores and a frequency of 2GHz, and the ARM Cortex-A53, a processor that integrates 4 physical cores clocked at 1.5 GHz. The tested DNN models use FP32 precision, but fixed point is also supported by the SDK. The ARM Cortex-A57 is accompanied by 8 GB of RAM, and the ARM Cortex-A53 by 4 GB.
\begin{table*}[t]
\centering
\caption{Inference Time (msec) per DNN model and platform (Batch Size = 1)}
\vspace{-0.05in}
\centering
\begin{tabular}{lrrrrrrrrr}
\hline
& \textbf{Intel} & \textbf{Intel} & \textbf{Nvidia} & \textbf{Nvidia} & \textbf{Nvidia} & \textbf{Arm} & \textbf{Arm} & \textbf{Xilinx} & \textbf{Xilinx} \\
\textbf{Models} & \textbf{i7 6700} & \textbf{NCS2} & \textbf{GTX 1060} & \textbf{TX2(FP32)} & \textbf{TX2(FP16)} & \textbf{Cortex-A57} & \textbf{Cortex-A53} & \textbf{ZCU102(DPU)} & \textbf{Alveo U50(DPU)} \\ \hline
\textbf{Alexnet} & 18.32 & 24.90 & 2.78 & 11.43 & 7.40 & 141.00 & 95.00 & - & - \\
\textbf{Inception-V1} & 16.11 & 23.65 & 3.71 & 9.83 & 5.87 & 213.50 & 375.00 & 11.00 & 9.15\\
\textbf{Inception-V4} & 94.69 & 141.98 & 22.34 & 83.92 & 41.41 & 1170.00 & 2078.50 & - & 39.51 \\
\textbf{SqueezeNet1.1} & 3.13 & 10.21 & 2.15 & 3.38 & 2.32 & 77.50 & 106.00 & - & 5.45 \\
\textbf{DenseNet 121} & 27.00 & 50.20 & 13.37 & 29.74 & 20.21 & 339.00 & 667.00 & - & - \\
\textbf{DenseNet 161} & 66.53 & 139.38 & 22.58 & 66.03 & 45.46 & 773.00 & 1455.50 & - & -\\
\textbf{DenseNet 169} & 32.08 & 64.80 & 17.65 & 36.93 & 26.65 & 425.00 & 821.00 & - & -\\
\textbf{ResNet 50} & 33.81 & 56.98 & 5.77 & 21.51 & 11.96 & 807.50 & 1008.50 & 21.50 & 14.74\\
\textbf{ResNet 101} & 63.19 & 102.30 & 9.62 & 38.2 & 20.92 & 1441.00 & 1890.50 & 35.00 & 22.70\\
\textbf{ResNet 152} & 92.97 & 152.95 & 13.98 & 55.15 & 30.18 & 2028.50 & 2662.50 & 48.00 & 30.96\\
\textbf{VGG16} & 142.58 & 177.89 & 10.83 & 65.88 & 38.03 & 1029.50 & 1595.50 & 54.50 & 43.47\\
\textbf{VGG19} & 145.24 & 215.96 & 12.70 & 79.32 & 45.66 & 1344.00 & 2026.50 & 62.50 & 50.52\\
\textbf{MobileNetV1} & 5.58 & 20.67 & 1.91 & 6.09 & 5.36 & 82.20 & 158.10 & - & -\\
\textbf{MobileNetV2} & 5.51 & 31.16 & 3.16 & 8.79 & 7.49 & 127.60 & 214.80 & - & -\\
\hline
\textbf{SSD300} & 184.40 & 610.55 & 21.12 & 118.17 & 117.64 & - & - & - & -\\
\textbf{SSD512} & 559.33 & 1383.74 & 43.39 & 279.07 & 278.85 & - & - & - & -\\
\textbf{YOLO-V3} & 204.25 & 596.23 & 36.85 & 277.00 & 253.93 & - & - & - & -\\ \hline
\textbf{FCN8} & 1117.67 & - & 91.29 & 609.02 & 633.60 & - & - & - & -\\
\textbf{Dilation} & 14377.20 & - & 1289.65 & 10738.42 & 10534.96 & - & - & - & -\\ \hline
\end{tabular}
\label{InferenceResults}
\vspace{-0.15in}
\end{table*}
\subsection{Xilinx}
FPGAs are re-programmable platforms that can be tailored to implement custom processor units or co-processors for specific tasks. Following this approach, Xilinx has released Vitis AI\footnote{https://github.com/Xilinx/Vitis-AI}, a development kit with all the comprehensive tools that enable the fast development of systems targeting FPGAs, providing at the same time a highly optimised mapping of a DNN workload onto an FPGA. Along with the tools, Xilinx has designed a special Deep-Learning Processor Unit (DPU) that can be mapped onto specific Xilinx FPGA platforms in order to enable deep learning inference. However, the DPU can only execute fixed-point DNN models. The use of fixed-point computations has important performance and resource advantages, and the conversion is performed without losing prediction accuracy. The conversion of models into their fixed-point precision, as well as all DPU system calls for inference execution, are part of Vitis AI. In our case, two boards are used for mapping the DPU: the ZCU102, using DNNDK, and the Alveo U50, using VART. All model transformations into fixed-point precision are performed using the Vitis AI toolset. Furthermore, multiple threads can be used to increase the throughput and the overall performance of the DPU. The performance reported in the next section is the best we achieved using multiple threads.
\begin{table}
\centering
\caption{Targeted platforms and SDKs}
\vspace{-0.05in}
\setlength\tabcolsep{2.5pt}
\begin{tabular}{lll}
\hline
\textbf{Vendor} & \textbf{Platform} & \textbf{SDK}
\\ \hline
Intel & Neural Compute Stick 2 (NCS2) & OpenVino
\\ \hline
Nvidia & Jetson TX2 & TensorRT
\\ \hline
Arm & Cortex-A57 & ArmNN SDK
\\ \hline
Arm & Cortex-A53 & ArmNN SDK
\\ \hline
Xilinx & ZCU102 (DPU) & DNNDK (Vitis AI)
\\ \hline
Xilinx & Alveo U50 (DPU) & VART (Vitis AI)
\\ \hline
Intel & i7 6700 & OpenVino
\\ \hline
Nvidia & GeForce GTX1060 6GB & TensorRT
\\ \hline
\end{tabular}
\label{Platforms}
\vspace{-0.15in}
\end{table}
\section{Evaluation}
The workloads (i.e. DNN models) defined in \autoref{DNNModels} are used for the evaluation of the above systems (Section \ref{HW}). Each workload was developed by integrating the corresponding SDK of the target platform with the main application. In order to provide insight into the relative performance of the embedded systems with respect to systems without such constraints, the process was extended to target a desktop-rated CPU (Intel i7 6700, with 4 physical cores, 8 logical cores using HyperThreading, and a base frequency of 3.4 GHz) and GPU (Nvidia GeForce GTX 1060 6GB, with 1280 CUDA cores and a peak power of 120W). \autoref{Platforms} lists the targeted platforms and their associated SDKs.
\subsection{System preparation}
Tensorflow and Caffe, two of the most widely used frameworks for machine learning application development, are supported by all platforms under investigation. The provided SDKs, accept models described in these frameworks and, prior to deployment, they perform their conversion to an Intermediate Representation (IR) format, performing a number of optimisations tailored to the targeted device architecture in order to improve the performance of the deployed system. By using the function calls that each toolkit provides, the new representation is mapped into the target platform for an optimised execution.
In all DNN deployments investigated in this work, the above pre-processing step is performed, capturing the impact of the tools' optimisations on the performance of the system. The reported inference time assumes that the data to be processed are already loaded in the off-chip memory of the system, and only reflects the time it takes for the platform to execute the workload. More specifically, in the case of the Jetson, the Arm boards, and the Xilinx DPUs, all the input data and parameters of the models are assumed to be in the memory of the board, whereas for both the i7 CPU and the NCS2 the data are assumed to be in the host memory, and for the GTX 1060 on the GPU board.
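The timing itself amounts to a wall-clock measurement around the SDK-specific inference call. A minimal sketch of such a measurement is shown below; the {\em run\_inference} callback is a placeholder for whichever SDK call executes the network, not an API of any of the toolkits above.
{\footnotesize
\begin{verbatim}
#include <time.h>

double time_inference_ms(void (*run_inference)(void))
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    run_inference();             /* SDK-specific execution call */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) * 1e3
         + (t1.tv_nsec - t0.tv_nsec) / 1e6;
}
\end{verbatim}
}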
It should be noted that the maximum batch size that could be investigated for each embedded platform was dictated by the amount of off-chip memory. As such, the maximum batch size is set to 128 for Image Classification, and to 8 for Image Segmentation. Nevertheless, a number of platforms failed to execute the workload under certain batch size values, as they were running out of memory.
\begin{figure*}[t]
\vspace{-0.05in}
\centering
\includegraphics[width=1\textwidth]{pareto.pdf}
\vspace{-0.45in}
\caption{ {Accuracy (\%) vs Inference Time (ms) and Pareto Front (black line) for Top-5 (red triangle) \& Top-1 (blue square) accuracy for Image Classification Networks. At all cases Batch Size equals to 1.}}
\label{fig:pareto}
\vspace{-0.15in}
\end{figure*}
\subsection{Inference Time} \label{Inference Time}
\autoref{InferenceResults} reports the inference time of each platform-DNN model pair when a latency-critical application is targeted, by setting the batch size equal to 1.
Both Arm Cortex-A CPUs need considerably more time to execute the DNNs than all the other platforms, with the A57 achieving inference times shorter than the A53's by margins ranging from 24\% for ResNet152 to 49\% for DenseNet121. The next most performant platform is the NCS2, which demonstrated up to $18x$ faster inference relative to the weaker ARM processor on the Image Classification models; its performance is comparable to the desktop CPU (i.e. the i7), with both systems utilising Intel's OpenVINO\textsuperscript{TM} toolkit. However, the NCS2 does not perform on par with the CPU device on Object Detection tasks, achieving inference times around 3 times longer than the i7 CPU. Image Segmentation performance results could not be obtained for the NCS2, as the application was timed out by the toolkit after a long execution.
Jetson TX2 outperforms Intel's NCS2 platform by a factor of $2.52x$ on average, while also being able to execute the Image Segmentation tasks. When an FP16 regime is used, the TX2 is outperformed by the desktop GPU GTX 1060 by a factor of $2.20x$ on average on Image Classification. One of the key differences between the GTX 1060 and the Jetson TX2 is the memory system: data in the TX2 are fetched from LPDDR4 RAM, while in the GTX 1060 from a Video RAM that delivers higher memory bandwidth. For Object Detection and Image Segmentation the latency gap between the two platforms increases further, and the GTX 1060 is faster by an order of magnitude compared to the TX2. Considering different precisions, FP16 on the Jetson TX2 is faster by $35\%$ on average compared to FP32 across all Image Classification models. However, this improvement does not carry over to the other two tasks, where the obtained latency results for both precision formats are similar.
Finally, the performance of the Xilinx DPU on both FPGA boards, the Alveo U50 and the ZCU102, falls between that of the TX2 under FP32 and FP16. All deployed models are in fixed-point precision, so their complexity and computational demands are significantly lower compared to their floating point-based counterparts. Additionally, the Alveo U50 achieves on average $37\%$ better performance than the ZCU102 on image classification models. Meanwhile, results for some networks could not be obtained, as the tested models are not supported by the Vitis AI toolchain.
\begin{figure*}[t]
\vspace{-0.05in}
\centering
\includegraphics[width=0.9\textwidth]{graphs.pdf}
\vspace{-0.15in}
\caption{Platform throughput for various DNN workloads and input batch sizes.}
\label{fig:throughput}
\vspace{-0.15in}
\end{figure*}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{power.pdf}
\caption{{Dynamic Power Consumption of embedded platforms and Alveo U50 for Batch Size = 1.}}
\label{fig:power}
\end{figure}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{energy_delay.pdf}
\caption{{Energy Delay Product of embedded platforms and Alveo U50 for Batch Size = 1.}}
\label{fig:delay}
\end{figure}
\begin{comment}
\begin{table*}[t]
\centering
\caption{Maximum achieved Throughput (Images/Sec), first value in each cell, and in which Batch Size the maximum value is achieved, second value in each cell.}
\vspace{-0.05in}
\centering
\begin{tabular}{lrrrrrrrr}
\hline
\textbf{Models} & \textbf{\begin{tabular}[c]{@{}r@{}}Intel\\ i7 6700\end{tabular}} & \textbf{\begin{tabular}[c]{@{}r@{}}Intel\\ NCS2\end{tabular}} & \textbf{\begin{tabular}[c]{@{}r@{}}Nvidia \\ GTX 1060\end{tabular}} & \textbf{\begin{tabular}[c]{@{}r@{}}Nvidia \\ Jetson TX2(FP32)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}r@{}}Nvidia \\ Jetson TX2(FP16)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}r@{}}Arm\\ Cortex-A57\end{tabular}} & \textbf{\begin{tabular}[c]{@{}r@{}}Arm\\ Cortex-A53\end{tabular}} & \textbf{\begin{tabular}[c]{@{}r@{}}Xilinx \\ DPU\end{tabular}} \\ \hline
\textbf{Alexnet} & 217.21 / 128 & 45.20 / 64 & 2393.64 / 128 & 310.85 / 128 & 533.71 / 128 & 20.99 / 128 & 10.94 / 64 & - \\
\textbf{Inception-V1} & 102.56 / 128 & 47.94 / 128 & 969.42 / 128 & 134.11 / 128 & 240.42 / 128 & 4.68 / 1 & 2.67 / 1 & 315.27 / 64 \\
\textbf{Inception-V4} & 15.31 / 128 & 7.24 / 128 & 134.18 / 128 & 16.14 / 128 & 33.01 / 128 & 0.85 / 1 & 0.48 / 1 & - \\
\textbf{SqueezeNet1.1} & 334.56 / 4 & 125.40 / 128 & 3185.82 / 128 & 412.38 / 128 & 695.79 / 128 & 12.90 / 1 & 9.43 / 1 & - \\
\textbf{DenseNet 121} & 37.04 / 1 & 21.08 / 128 & 293.46 / 128 & 43.79 / 128 & 60.76 / 32 & 3.08 / 4 & 1.55 / 2 & - \\
\textbf{DenseNet 161} & 15.03 / 1 & 7.35 / 64 & 123.14 / 128 & 17.19 / 128 & 25.24 / 64 & 1.40 / 8 & 0.72 / 2 & - \\
\textbf{DenseNet 169} & 31.17 / 1 & 16.13 / 128 & 230.52 / 128 & 34.01 / 128 & 48.18 / 128 & 2.48 / 2 & 1.24 / 2 & - \\
\textbf{ResNet 50} & 53.17 / 128 & 18.44 / 128 & 470.85 / 128 & 60.36 / 128 & 113.04 / 128 & 3.34 / 32 & 1.73 / 32 & 150.77 / 64 \\
\textbf{ResNet 101} & 26.54 / 128 & 10.05 / 64 & 267.30 / 128 & 32.73 / 128 & 61.76 / 128 & 1.91 / 64 & 0.95 / 32 & 86.43 / 64 \\
\textbf{ResNet 152} & 17.61 / 128 & 6.71 / 128 & 175.29 / 32 & 22.36 / 128 & 42.21 / 128 & 1.33 / 64 & 0.66 / 64 & 61.33 / 128 \\
\textbf{VGG16} & 13.11 / 128 & 5.73 / 128 & 162.38 / 64 & 19.39 / 16 & 35.86 / 32 & 1.58 / 8 & 0.86 / 8 & 50.89 / 64 \\
\textbf{VGG19} & 10.14 / 128 & 4.71 / 128 & 130.26 / 128 & 15.39 / 16 & 28.17 / 32 & 1.38 / 32 & 0.74 / 16 & 45.78 / 64 \\
\textbf{MobileNetV1} & 241.64 / 64 & 55.64 / 128 & 1672.33 / 128 & 211.90 / 128 & 230.64 / 128 & 12.17 / 1 & 6.33 / 1 & - \\
\textbf{MobileNetV2} & 268.21 / 64 & 35.31 / 64 & 1270.72 / 128 & 166.87 / 32 & 168.87 / 64 & 8.55 / 16 & 4.61 / 64 & - \\ \hline
\textbf{SSD300} & 5.42 / 1 & 1.64 / 1 & 47.35 / 1 & 8.46 / 1 & 8.50 / 1 & - & - & - \\
\textbf{SSD512} & 1.79 / 1 & 0.72 / 1 & 23.04 / 1 & 3.58 / 1 & 3.59 / 1 & - & - & - \\
\textbf{YOLO-V3} & 4.89 / 1 & 1.68 / 1 & 27.13 / 1 & 3.61 /1 & 3.94 / 1 & - & - & - \\ \hline
\textbf{FCN8} & 1.03 / 8 & - & 12.12 / 8 & 1.43 / 8 & 1.58 / 1 & - & - & - \\
\textbf{Dilation} & 0.07 / 2 & - & 0.78 / 1 & 0.09 / 2 & 0.10 / 2 & - & - & - \\ \hline
\end{tabular}
\label{ThroughputBatchSize}
\vspace{-0.15in}
\end{table*}
\end{comment}
\subsection{Pareto Efficiency}
While \autoref{InferenceResults} provides a clear picture of the capabilities of each platform in various inference applications, a correlation between inference time and model accuracy on a specific platform is also needed. In \autoref{fig:pareto}, we scatter-plot the models trained on the ImageNet dataset according to their Top-1 \& Top-5 accuracy and inference time. We choose to visualize the Jetson TX2 tuned to both of its available representative formats, the Arm Cortex-A57 tuned for FP32 models, and the DPU integrated in the Alveo U50 using the INT8 representative format, while keeping each batch to a single image per inference run. Models with low inference time tend to have worse accuracy, as happens with the SqueezeNet and AlexNet models. On the other hand, InceptionV4 is one of the slowest tested models, yet it provides the best results in both Top-1 \& Top-5 accuracy.
The Pareto front addresses the question of which model is best in each case, depending on the preferred platform. The previously described configurations cover a large spectrum of platforms and representative formats that can be leveraged by a developer. On the Jetson TX2, in both representative formats, ResNet models tend to have higher accuracy and lower execution times than their DenseNet counterparts. Meanwhile, for applications where inference time is critical, the use of SqueezeNet or MobileNet models is preferred over AlexNet and InceptionV1, as their accuracy is significantly higher in each case. However, the performance benefits of FP16 precision apply differently to each network (Jetson TX2 columns in \autoref{Inference Time}), so it is safe to assume that the networks belonging to the Pareto front may vary when the network precision changes on a single platform. On the other hand, the Arm Cortex-A57 yields quite different results: DenseNet networks are preferred on this processor over their ResNet counterparts, while MobileNetV1 performs considerably better than SqueezeNet1.1 at the same inference time. Lastly, the DPU on the Alveo U50 tends to behave similarly to the Jetson TX2, but the unavailability of a few models reduces the choices. In general, a model on the Pareto front is the appropriate choice, depending on the requirements of the application and the targeted platform.
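The dominance criterion behind the front reduces to a few lines of code; the sketch below assumes the standard definition (a model is kept if no other model is both strictly faster and strictly more accurate), and the listed latency and accuracy numbers are purely illustrative, not our measurements:
\begin{verbatim}
def pareto_front(models):
    # models: list of (name, inference_time_ms, top5_accuracy_percent)
    front = []
    for name, t, acc in models:
        dominated = any(t2 < t and acc2 > acc
                        for _, t2, acc2 in models)
        if not dominated:
            front.append(name)
    return front

example = [("AlexNet", 12.0, 79.1), ("SqueezeNet1.1", 9.0, 80.6),
           ("ResNet50", 35.0, 92.9), ("InceptionV4", 98.0, 95.2)]
print(pareto_front(example))  # SqueezeNet1.1, ResNet50, InceptionV4
\end{verbatim}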
\subsection{Throughput}
In subsection \ref{Inference Time}, we evaluated the performance of the platforms with each batch including only one image per inference run, with the total inference time as the comparison metric. As platforms may perform better by processing multiple images in parallel, we evaluate the performance for batch sizes ranging from 1 to 128 for Image Classification and from 1 to 8 for Image Segmentation on all tested platforms. We then calculate the throughput for each scenario and report the results in \autoref{fig:throughput}. Both axes of the graphs are on a logarithmic scale in order to better visualize the differences in throughput between platforms.
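The conversion from a measured batch latency to the reported throughput is straightforward; the following one-liner states it explicitly (the numbers are illustrative):
\begin{verbatim}
def throughput(batch_size, batch_latency_sec):
    # images processed per second for one batched inference run
    return batch_size / batch_latency_sec

# e.g. a batch of 32 images executed in 0.25 s yields 128 images/sec
print(throughput(32, 0.25))
\end{verbatim}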
\begin{figure}[t]
\includegraphics[width=\columnwidth]{energy_image.pdf}
\caption{{Energy per Image of embedded platforms and Alveo U50 for Batch Size = 1.}}
\label{fig:energy}
\end{figure}
The throughput of the NCS2 shows only a minimal improvement, which cannot even be clearly discerned in the graphs, irrespective of the DNN model and in comparison to the other platforms. The best performance of this platform is mostly achieved by streaming batches containing only one image, as its inference time increases linearly with the batch size. On the other hand, the two ARM Cortex-A CPUs exhibit the same behaviour as the batch size increases, again irrespective of the model: the throughput of both processors increases up to a point beyond which the metric cannot be further improved. Both ARM CPUs tend to perform better with larger batch sizes, but their performance is still very limited compared to the other platforms. Note that in the case of the DenseNet and MobileNet models the throughput tends to worsen as the batch size increases.
As the batch size increases, the behaviour of the high-end Intel CPU depends on the DNN model of the task. In general the throughput improves slightly up to batches of 16 or 32 images, except for AlexNet, where the throughput keeps increasing further. However, for DenseNet and SqueezeNet the best performance is achieved when the batch size equals 1.
For the remaining platforms, as the batch size increases the throughput also improves and more images can be processed each second. However, this improvement is minimal on the Jetson TX2, regardless of the model's precision. On the other hand, the throughput of the standalone GPU is greatly improved on all tested models, as this metric tends to keep increasing even at large batch sizes. Last but not least, by increasing the batch size from one to two or four, the DPU on both FPGA platforms displays a large improvement in performance and its throughput increases significantly. However, increasing the batch size further offers minimal improvement in throughput.
The DNNs used for the image segmentation task require a huge number of MAC operations, at least an order of magnitude more than the other models in our list. Due to the limited memory resources of most of our platforms, measurements for larger batch sizes were not possible. Using the Dilation network, the throughput tends to worsen on both the CPU and the standalone GPU, while on the Jetson the value remains almost stable. On the contrary, FCN8 tends to perform slightly better. Lastly, from both of these graphs we can conclude that the throughput of the image segmentation task converges for both floating-point precision formats on the Jetson TX2.
\subsection{Power Consumption and Energy per Image}
While inference time and throughput provide useful insights about the performance of a selected configuration (preferred model and platform), the power consumption of a target system running a vision kernel is another important factor, especially for embedded devices. \autoref{fig:power} reports real-time measurements of the power that each embedded platform needs to process batches of one image for each model. Each entry of the graph is obtained by measuring the vision kernel's dynamic power, excluding the static power required to power the rest of the platform; thus, each value represents the extra power that the platform consumes relative to its initial idle state. Furthermore, in \autoref{fig:energy} we provide energy-per-image results for each configuration, while \autoref{fig:delay} reports the energy delay product (EDP). Both figures are normalized, with the Jetson TX2 tuned with FP16 networks used as the baseline for comparison between platforms.
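The three metrics reported in these figures reduce to simple formulas; the sketch below reconstructs them from the definitions above, with purely illustrative numbers rather than our measurements:
\begin{verbatim}
def dynamic_power(total_watts, idle_watts):
    # dynamic power excludes the platform's static (idle) draw
    return total_watts - idle_watts

def energy_per_image(dyn_power_watts, latency_sec, batch_size=1):
    return dyn_power_watts * latency_sec / batch_size   # joules/image

def edp(energy_joules, latency_sec):
    return energy_joules * latency_sec                  # joule-seconds

p = dynamic_power(total_watts=9.5, idle_watts=2.0)      # 7.5 W
e = energy_per_image(p, latency_sec=0.020)              # 0.15 J
print(p, e, edp(e, 0.020))
\end{verbatim}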
At first glance, the Jetson TX2 consumes on average $7.72$ Watts when models with FP32 precision are used. By using the lower supported representative format, FP16 for the Jetson, the average power consumption is reduced by only $4.58$\%. In contrast, the use of ARM processors for computer vision inference tasks can reduce the power consumption to a few watts. In particular, less than two watts are needed by the Cortex-A53 to execute all ImageNet models for the image classification kernel. Finally, the power results for the platforms that integrate the DPU accelerator are quite different from all the above cases. The power that the FPGAs need for the inference procedure increases with the complexity of the model, so a less computationally intensive network with few MAC operations, like InceptionV1 or SqueezeNet1.1, needs fewer watts than the ResNet or VGG networks. Meanwhile, the DPU on the ZCU102 can rival both power-efficient ARM CPUs. On the other hand, the Alveo U50 consumes on average $17.13$ and up to $20$ Watts, a non-trivial amount compared to the embedded platforms. However, this higher dynamic power consumption is expected from the Alveo, as it is an FPGA card designed for data centers.
Although the use of lower precision does not significantly affect the power consumption, the energy needed per image is reduced by almost 20\%, mainly because of the lower inference time; hence, the use of an FP16 model is both time- and energy-efficient. On the other hand, with the power-efficient ARM CPUs each inference task needs considerably more energy than on the other platforms, exceeding $5x$ in some cases. However, the energy consumption of the DPU integrated in the ZCU102 is even lower than that of the Jetson TX2. An inference application can therefore leverage the efficiency of the ZCU102 platform in both power and energy, outperforming at the same time all platforms except the dedicated devices: the GTX 1060 and the Alveo U50. The latter platform has an expectedly high energy consumption for the inference workload, due to its high power draw.
The energy delay product is used as a metric that balances the energy consumption against the delay incurred by the execution of the model during the inference task. Specifically, as reported in \autoref{fig:delay}, the EDP of both ARM processors is exceptionally high compared to the other platforms, as their energy consumption is strongly affected by their long inference times. Meanwhile, the EDP of both the Jetson TX2 and the ZCU102 is barely affected by inference time, as can be seen by comparing the results with the corresponding energy-per-image measurements in \autoref{fig:energy}. Finally, the energy delay product of the Alveo U50 depends on the kernel's workload: workloads with low inference time, such as SqueezeNet1.1 and InceptionV1, present higher EDP compared to the rest of the models, as the execution time does not balance well with the energy consumption.
\section{Discussion}
For portability reasons, mapping inference tasks onto particular commodity hardware platforms typically does not involve the use of the platform-specific SDKs and/or libraries. Unfortunately, this approach leads to sub-optimal mappings, as generic approaches and libraries cannot exploit all of the platform's compute resources as well as is possible with the more customized tools and libraries.
Moreover, without such support the use of special processor units such as the NCS2 and the DPU is not possible. In order to avoid such problems and to maximize the performance of an inference process, the use of the platform-specific software support is the only solution: the existing toolkits first transform the initial model into an optimised IR, and then map it onto the platform for a more efficient execution.
In the previous section we offered an extensive overview of the performance and energy consumption of the most common commodity platforms. While most of the reported inference-time results are quite good, even on platforms with a RISC architecture like the ARM Cortex-A series and, to a greater extent, on the NCS2, in most cases the inference process involves large sets of images. Increasing the batch size affects each platform differently: in most cases, a larger batch size leads to an increase in throughput, though this improvement is limited by the memory bandwidth and the computational resources. The Jetson TX2 is an example of this: the memory shared between its CPU and GPU, as well as its limited memory bandwidth, limits the performance of the integrated GPU. Similarly, data transfer to the NCS2 is a costly procedure, as the device is plugged into a USB port, while this platform also has limited computational capabilities compared to a general-purpose processor.
A lot of effort is being put into developing customized networks that improve the performance of inference tasks on specific platforms, like FPGAs or GPUs. We avoid utilizing such networks in our results, as the scope of this paper is a unified performance landscape enabling a fair comparison between the tested platforms, each utilizing its respective SDK. Moreover, most of the adopted networks are used as a starting point, either for application or network development: the ResNet networks and YOLO-V3 are widely adopted by the community, while MobileNet dominates in latency-critical applications. Furthermore, both VGG16 and VGG19 are integrated as a core part of larger networks for feature extraction, like SSD for object detection and Dilation for image segmentation.
The standard representative format for deep neural networks is FP32, as such a high-precision format is very important for the accuracy of the networks' results. Maintaining the FP32 format for network training is necessary, as the network must back-propagate its results in order to improve accuracy by modifying the weights of each layer. Keeping such a high precision is, however, optional for inference tasks, so hardware vendors, especially Xilinx, are pushing developers to utilize low-precision networks in their inference tasks. Lowering the precision format to FP16, or using techniques like quantization to acquire INT8 networks, comes with a trade-off in the form of lower accuracy of the network's results. However, the benefits are substantial: the Jetson TX2 is significantly faster using FP16 precision in the Image Classification task, and the NCS2 can compete with even a high-end CPU. The DPU is another example of how utilizing INT8 networks can outperform most platforms, while keeping the task power- and energy-efficient.
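As a minimal illustration of this precision trade-off (not the vendors' conversion pipelines, which are considerably more sophisticated), casting a tensor of FP32 weights to FP16 already introduces a measurable representation error:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
w32 = rng.standard_normal(10000).astype(np.float32)
w16 = w32.astype(np.float16)               # lossy cast to half precision
err = np.abs(w32 - w16.astype(np.float32))
print(f"max abs error: {err.max():.2e}, "
      f"mean abs error: {err.mean():.2e}")
\end{verbatim}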
In addition to image classification, other computer vision tasks are used extensively by the community. While more and more models are developed to support these kinds of tasks, the computational and memory demands of these DNNs are very large. Still, increasing the batch size or converting the model to FP16 precision may not affect the overall performance, which tends to remain stable in either case.
\section{Conclusion}
In this work, we have performed an experimental evaluation of embedded DNN platforms using their respective SDKs and state-of-the-art networks for image classification, object detection and image segmentation. The evaluation shows that both the Xilinx DPU and the Jetson TX2 achieve the best performance for latency-critical applications across all embedded devices, with the Jetson able to support the execution of more state-of-the-art models than the Xilinx DPU. Moreover, the evaluation of the embedded systems for throughput-oriented applications shows that their performance under those benchmarks is largely insensitive to batch size, and maximum utilization of the device can be achieved with modest batch sizes. The device that demonstrates the highest throughput gains with respect to batch size is the Xilinx DPU, as its performance improves significantly when the batch size is increased from 1 to 4. Furthermore, insights about the power demands and energy consumption are given for each configuration, confirming that the utilization of lower precision and the use of special processor units comes with significant benefits. In summary, the results provide a reference guide for the expected performance as well as the real-time power and energy consumption of the above widely used platforms under state-of-the-art DNN workloads, which can guide researchers and practitioners in adopting a specific platform for a target system.
\bibliographystyle{plain}
\section{Introduction}
In the standard model (SM), the decay \ensuremath{B^0_s\ensuremath{\rightarrow}\xspace \mu^+\mu^-}\ has a highly suppressed
rate\cite{Buras:2003td} of $\ensuremath{{\cal B}} = (3.42\pm0.54)\times 10^{-9}$ since
it involves a $b \ensuremath{\rightarrow}\xspace s$ transition and requires an internal quark
annihilation which further suppresses the decay relative to the
electroweak `penguin' $b \ensuremath{\rightarrow}\xspace s \gamma$ decay. In addition, the decays
are helicity suppressed by factors of $m_{\ell}^{2}$. To date these
decays have not been observed; upper limits on these branching
fractions are a topic of frequent updates at the
$B$-factories\cite{Chang:2003yy,Aubert:2004gm} (for the decay $\ensuremath{B^0_d\ensuremath{\rightarrow}\xspace \mu^+\mu^-}$)
and the Tevatron.\cite{Abazov:2004dj,CDF:prelim} Currently the best
upper limit is from the CDF collaboration\cite{CDF:prelim} with
$\ensuremath{{\cal B}}(\ensuremath{B^0_s\ensuremath{\rightarrow}\xspace \mu^+\mu^-}) < 1.0\times 10^{-7}$ at the 95\% confidence level.
Since these processes are highly suppressed in the SM, they are
potentially sensitive probes of physics beyond the SM (see
Fig.~\ref{f:b2llnp}). In the minimal supersymmetric extension of the
SM (MSSM) the branching fraction for these decays can be substantially
enhanced, especially at large $\tan \beta$.\cite{Babu:1999hn} For MSSM
with modified minimal flavor violation at large $\tan\beta$, the
branching fraction can be increased by up to four orders of
magnitude.\cite{Bobeth:2002ch} \ensuremath{B^0_{s (d)}\ensuremath{\rightarrow}\xspace \mu^+\mu^-}\ decays can also be enhanced in
specific models containing leptoquarks~\cite{Davidson:1993qk} and
supersymmetric (SUSY) models without R-parity.\cite{Roy:1991qr}
\begin{figure}[!htb]
\centerline{\psfig{file=fig1.eps,width=2.6in}}
\caption{Feynman graphs for the decay \ensuremath{B^0_{s (d)}\ensuremath{\rightarrow}\xspace \mu^+\mu^-}\ illustrating possible
new physics contributions.}
\label{f:b2llnp}
\end{figure}
There has been some
interest\cite{Kane:2003wg,Baek:2004tm,Dedes:2004yc} in using the decay
mode \ensuremath{B^0_s\ensuremath{\rightarrow}\xspace \mu^+\mu^-}\ to `measure' the key parameter \ensuremath{\tan\beta}\xspace\ of the MSSM and to
constrain other extensions of the SM. Lower bounds on \ensuremath{\tan\beta}\xspace\ can be
obtained from $\ensuremath{{\cal B}}(\ensuremath{B^0_s\ensuremath{\rightarrow}\xspace \mu^+\mu^-})$ with general model-independent
assumptions. Since \ensuremath{\tan\beta}\xspace\ is also constrained from above due to general
principles, a lower bound is tantamount to a measurement of \ensuremath{\tan\beta}\xspace.
\section{The CMS Experiment}
The Compact Muon Solenoid (CMS) detector is well suited for the study
of leptonic $B$ decays. The main components for this analysis are the
tracker and muon systems.
The CMS tracker is an all-silicon detector, comprising an inner pixel
vertex detector and an outer strip tracker, and is immersed in
a magnetic field of 4\Tesla. The pixel detector provides
high-precision measurements of space points close to the interaction
region for vertexing and effective pattern recognition in the
high-track multiplicity environment at the LHC. The pixel detector is
composed of 1440 modules arranged in three barrel layers (at a radial
distance of $r = 4.4,\, 7.3,\, 10.2\ensuremath{\mbox{\,cm}}\xspace$ from the beampipe) and four
endcap disks (at $z = \pm34.5,\, \pm46.5\ensuremath{\mbox{\,cm}}\xspace$ from the interaction
point). The barrel detector comprises 672 modules and 96
half-modules, while the forward detector is built of 96 blades with 672
modules. With a pixel size of $d_\phi\times d_z = 100\ensuremath{\,\mu\mbox{m}\xspace}\times
150\ensuremath{\,\mu\mbox{m}\xspace}$, a hit resolution of 10--20\ensuremath{\,\mu\mbox{m}\xspace}\ is achieved. The strip
tracker allows the precise determination of charged particle momenta.
It is arranged in the central part as inner (TIB) and outer (TOB)
barrel, and in the forward regions as inner discs (TID) and endcaps
(TEC). The barrel part consists of 10 layers (4 in the TIB, 6 in the
TOB) and 12 layers in the forward part (3 in the TID, 9 in the TEC).
The pitch in the strip tracker varies between 80--180\ensuremath{\,\mu\mbox{m}\xspace}. The
material inside the active volume of the tracker increases from
$\approx 0.4X_0$ at pseudo-rapidity $\eta = 0$ to around $1X_0$ at
$|\eta| \approx 1.6$, before decreasing to $\approx 0.6X_0$ at $|\eta|
= 2.5$.
The CMS muon system, incorporated into the magnet return yoke, is
divided into a barrel ($|\eta| < 1.2$) and forward parts ($1.2 <
|\eta| < 2.4$). In the barrel region, where the neutron induced
background and the muon rate is small, drift tube (DT) chambers are
used. In the two endcaps cathode strip chambers (CSC) are deployed.
In addition, resistive plate chambers (RPC) are used both in the
barrel and the endcap region. The RPC spatial resolution is coarser
than for the DT and CSC, but their excellent time resolution allows
the unambiguous identification of the correct bunch crossing.
\section{Event Samples}
Monte Carlo (MC) event samples were generated with {\sc Pythia~6.227}
and passed through a full detector simulation based on {\sc Geant~4}.
On average five pile-up events were included, appropriate for a
luminosity of $\ensuremath{{\cal L}} = 2\times10^{33} \ensuremath{\mbox{\,cm}}\xspace^{-2}\sec^{-1}$.
Both signal and background MC event samples have been generated as
minimum bias QCD events. In the signal sample, \ensuremath{B_s}\xspace\ mesons decay as
\ensuremath{B^0_s\ensuremath{\rightarrow}\xspace \mu^+\mu^-}. The muons are required to have transverse momentum $\mbox{$p_\perp$}\xspace^\mu >
3\ensuremath{\mbox{\,Ge\kern -0.1em V}}\xspace$ and $|\eta^\mu| < 2.4$; the \ensuremath{B_s}\xspace\ must have $\mbox{$p_\perp$}\xspace^{\ensuremath{B_s}\xspace} > 5\ensuremath{\mbox{\,Ge\kern -0.1em V}}\xspace$.
The background sample contains two muons with $\mbox{$p_\perp$}\xspace > 3\ensuremath{\mbox{\,Ge\kern -0.1em V}}\xspace$ and
$|\eta| < 2.4$. Their separation in azimuth and pseudorapidity
$\ensuremath{\Delta R(\mu\mu)}\equiv \sqrt{\Delta\phi^2 + \Delta\eta^2}$ is required to be $0.3
< \ensuremath{\Delta R(\mu\mu)} < 1.8$. Currently the background simulation does not include
muons from hadronic in-flight decays or punch-through. It is
estimated that this hadronic component will increase the background
level by about 10\%. Background events from rare
$B_d,\,B_u,\,B_s,\,B_c,\,\Lambda_b$ decays are not included.
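The separation variable defined above reduces to a few lines of code; the only subtlety is wrapping the azimuthal difference into $[-\pi,\pi)$ (a sketch, with illustrative inputs):
\begin{verbatim}
import math

def delta_r(phi1, eta1, phi2, eta2):
    # wrap the azimuthal difference into [-pi, pi)
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.sqrt(dphi**2 + (eta1 - eta2)**2)

# the background sample requires 0.3 < delta_r < 1.8
print(delta_r(0.1, 0.5, 1.0, -0.3))  # ~1.20
\end{verbatim}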
In total 20000 signal events and about 15000 background events have
been analyzed. The small size of the background sample is the
limiting factor of the present study.
\section{Trigger Strategy}
The CMS detector has a twofold trigger strategy. The first level
trigger, with a latency of $3.2\,\mu\sec$, is based on information
from the calorimeters and the muon system. The threshold for
inclusive isolated single muons is at $\mbox{$p_\perp$}\xspace^\mu>14\ensuremath{\mbox{\,Ge\kern -0.1em V}}\xspace$, for dimuon
events both muons must have $\mbox{$p_\perp$}\xspace>3\ensuremath{\mbox{\,Ge\kern -0.1em V}}\xspace$. The expected trigger rates
amount to 2.7\ensuremath{\mbox{\, kHz}}\xspace\ and 0.9\ensuremath{\mbox{\, kHz}}\xspace, respectively. The L1 trigger output
rate is at most 100\ensuremath{\mbox{\, kHz}}\xspace.
The high-level trigger (HLT) is a software trigger, running on a large
processor farm. The HLT reduces the overall trigger rate by three
orders of magnitude to about 100\ensuremath{\mbox{\, Hz}}\xspace. To fit into the tight time
constraints imposed by the high input rate, tracking at the HLT is
sped up by two concepts: (1) `Regional seed generation' limits track
seeding to specific regions of interest, {\it e.g.}, a cone around the L1
muon candidate direction. (2) `Partial tracking' pursues track
reconstruction only until some criteria are met, {\it e.g.}, a resolution of
2\% on the transverse momentum. Already with six reconstructed hits,
both the efficiency and the resolution are comparable to the full
tracking performance.
The HLT strategy for \ensuremath{B^0_s\ensuremath{\rightarrow}\xspace \mu^+\mu^-}\ proceeds along the following path. (1)
Verification of the two L1 muon candidates. (2) Tracks reconstructed
only with the pixel detector are used to compute a list of possible
primary vertices, the three most significant are retained. (3)
Regional track reconstruction with up to six hits is performed in
cones around the L1 muon candidates. (4) Reconstructed tracks with
$\mbox{$p_\perp$}\xspace > 4\ensuremath{\mbox{\,Ge\kern -0.1em V}}\xspace$ are paired and retained if their invariant mass falls
into predefined regions for the signal and sidebands. (5) The two
tracks must have opposite charge and are fit to a common vertex; the
event is retained only when the fit $\chi^2 < 20$ and the
three-dimensional flight length $l_{3d} > 150\ensuremath{\,\mu\mbox{m}\xspace}$. With this
selection, the event rate was estimated\cite{Sphicas:2002gg} to be
$<1.7\ensuremath{\mbox{\, Hz}}\xspace$.
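Steps (4) and (5) of this selection can be summarized in the following sketch (our reconstruction, not CMS trigger code: the vertex fit is replaced by a placeholder, and the mass window below is an illustrative signal-plus-sideband region):
\begin{verbatim}
import math

MU_MASS = 0.1057  # GeV

def four_vector(pt, eta, phi, m=MU_MASS):
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    return (math.sqrt(px*px + py*py + pz*pz + m*m), px, py, pz)

def dimuon_mass(mu1, mu2):
    a, b = four_vector(*mu1[:3]), four_vector(*mu2[:3])
    e, px, py, pz = (a[i] + b[i] for i in range(4))
    return math.sqrt(max(e*e - px*px - py*py - pz*pz, 0.0))

def hlt_filter(muons, vertex_fit, m_lo=5.0, m_hi=5.8):
    # muons: list of (pt, eta, phi, charge); cuts follow steps (4)-(5)
    passed = []
    for i, m1 in enumerate(muons):
        for m2 in muons[i + 1:]:
            if min(m1[0], m2[0]) <= 4.0:          # pT > 4 GeV
                continue
            if m1[3] + m2[3] != 0:                # opposite charge
                continue
            if not (m_lo < dimuon_mass(m1, m2) < m_hi):
                continue
            chi2, l3d_cm = vertex_fit(m1, m2)     # placeholder fit
            if chi2 < 20.0 and l3d_cm > 0.015:    # 150 microns
                passed.append((m1, m2))
    return passed

print(len(hlt_filter([(5.0, 0.3, 0.1, +1), (6.0, -0.2, 1.0, -1)],
                     lambda a, b: (5.0, 0.05))))  # 1
\end{verbatim}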
\section{Offline Analysis}
The offline analysis selection focuses on a secondary vertex that is
well measured and separated from the primary vertex and consistent
with the decay of an isolated \ensuremath{B_s}\xspace\ meson.
The primary vertex is determined from all tracks with $\mbox{$p_\perp$}\xspace > 0.9\ensuremath{\mbox{\,Ge\kern -0.1em V}}\xspace$,
because of the rather low-multiplicity track environment.
The two muons must have opposite charge, $\mbox{$p_\perp$}\xspace^\mu >
\vuse{cut:default:ptlo} $, and $|\eta^\mu| < \vuse{cut:default:etahi}
$. The azimuthal and pseudorapidity separation of the two muons
$\vuse{cut:default:rmmlo} < \ensuremath{\Delta R(\mu\mu)} < \vuse{cut:default:rmmhi} $ provides
a powerful reduction of gluon-gluon fusion background with both
$b$-hadrons decaying semileptonically: The muons of those $b$-hadrons
tend to be back-to-back, while the signal shows a peaked distribution
with a maximum at $\ensuremath{\Delta R(\mu\mu)} \sim 1$.
\ensuremath{B_s}\xspace\ candidates are formed by vertexing the two muon candidates. The
vertex quality is required to be $\chi^2 < \vuse{cut:default:chi2} $.
The transverse momentum vector of the \ensuremath{B_s}\xspace\ candidate must be close to
the displacement of the secondary vertex from the primary vertex: the
cosine of the opening angle $\alpha$ between the two vectors must
fulfill $\cos(\alpha) > \vuse{cut:default:cosalpha} $. The
significance of the \ensuremath{B_s}\xspace\ candidate flight length $l_{xy}$ in the
transverse plane is defined as $l_{xy}/\sigma_{xy}$ (illustrated in
Fig.~\ref{f:selection}), where $\sigma_{xy}$ is the error on the
flight length with mean $\langle\sigma_{xy}\rangle = 120\ensuremath{\,\mu\mbox{m}\xspace}$; we
require $l_{xy}/\sigma_{xy} > \vuse{cut:default:lxy/sxy} $.
\begin{figure}[!htb]
\centerline{\psfig{file=fig2.eps,width=2.6in}}
\caption{Decay length significance in
the transverse plane for signal and background MC events. Both
histograms are normalized to unity.}
\label{f:selection}
\end{figure}
In high-\mbox{$p_\perp$}\xspace\ gluon-splitting events the \ensuremath{b\overline b}\xspace\ quark pair moves
close together due to its boost, and the two decay vertices of the
resulting $b$-hadrons cannot be well separated in all cases. However,
because of color reconnection, the hadronic activity around the dimuon
direction is enhanced compared to the signal decay (where only one
colorless \ensuremath{B_s}\xspace\ meson decays). This is exploited in isolation
requirements: The isolation $I$ is determined from the dimuon
transverse momentum and charged tracks with $\mbox{$p_\perp$}\xspace>0.9\ensuremath{\mbox{\,Ge\kern -0.1em V}}\xspace$ in a cone
with half-radius $r = \vuse{cut:default:isocone} $ around the dimuon
direction as $I \equiv \mbox{$p_\perp$}\xspace^{\mu\mu}/(\mbox{$p_\perp$}\xspace^{\mu\mu} + \sum_{trk}|\mbox{$p_\perp$}\xspace|)$;
we require $I > \vuse{cut:default:isolation} $.
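A sketch of this isolation computation (the tracks passed in are assumed to be pre-selected inside the cone around the dimuon direction; the numbers are illustrative):
\begin{verbatim}
def isolation(pt_dimuon, cone_track_pts, pt_min=0.9):
    # I = pT(mumu) / (pT(mumu) + sum of charged-track pT in the cone)
    cone_sum = sum(pt for pt in cone_track_pts if pt > pt_min)
    return pt_dimuon / (pt_dimuon + cone_sum)

# a well-isolated candidate: little hadronic activity near the dimuon
print(isolation(12.0, [0.5, 1.2, 0.8]))  # ~0.91
\end{verbatim}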
\begin{figure}[!htb]
\centerline{\psfig{file=fig4.eps,width=2.6in}}
\caption{Dimuon mass distribution in signal MC events. The curve
is a fit of two Gaussians, the displayed parameters indicate the
average mean and sigma. The histogram is normalized to unity.}
\label{f:breco}
\end{figure}
Figure~\ref{f:breco} illustrates the mass resolution obtained on the
signal MC event sample. The distribution is fit with two Gaussians,
the quoted width $\sigma = \vuse{mBgSigma:s0} \ensuremath{\mbox{\,Me\kern -0.1em V}}\xspace$ is determined
according to
\begin{eqnarray*}
\sigma^2 = \frac{N_n^2\sigma_n^2 + N_w^2\sigma_w^2}{N_n^2 + N_w^2},
\end{eqnarray*}
where $\sigma_n = \vuse{mBg1s:s0} \ensuremath{\mbox{\,Me\kern -0.1em V}}\xspace $ ($\sigma_w = \vuse{mBg2s:s0} \ensuremath{\mbox{\,Me\kern -0.1em V}}\xspace$)
and $N_n = \vuse{mBg1n:s0} $ ($N_w = \vuse{mBg2n:s0} $) are the width
and normalization of the narrow (wide) Gaussian, respectively.
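The combined width can be evaluated directly from this formula; the sketch below uses illustrative inputs, since the fitted values above are set by document macros:
\begin{verbatim}
import math

def combined_sigma(n_n, s_n, n_w, s_w):
    # weighted combination of the narrow and wide Gaussian widths
    return math.sqrt((n_n**2 * s_n**2 + n_w**2 * s_w**2)
                     / (n_n**2 + n_w**2))

print(combined_sigma(n_n=800.0, s_n=36.0,
                     n_w=200.0, s_w=90.0))  # ~41.2 MeV
\end{verbatim}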
Given the limited statistics of the background sample, no event
remains after the application of all selection requirements. However,
the absence of correlation with the other selection requirements allows
the isolation and $\chi^2$ requirements to be factorized from the
other cuts in the determination of the total background rejection
factor. The dominant sources of uncertainty on the signal
($\pm\vuse{sgError} \%$) and background ($^{+\vuse{bgError} }_{-100}
\%$) yield are the statistical component of the background sample, the
impact of the misalignment on the transverse flight length
significance, and the assumption of factorizing cuts.
The total selection efficiency for signal events is $\varepsilon_S =
\vuse{eAllCutsFact:s0} \pm \vuse{eAllCutsFactE:s0} $ and the
background reduction factor is $\varepsilon_B = \vuse{eAllCutsFact:m0}
$. With this selection, the first $\vuse{Lumi:d0} \ensuremath{\mbox{\,fb}^{-1}}\xspace$ of
integrated luminosity will yield $n_S = \vuse{nAllCutsFact:s0}
\pm\vuse{nMassAllCutsFactE:s0} $ signal events and $n_B =
\vuse{nMassAllCutsFact:m0} ^{+\vuse{nMassAllCutsFactE:m0}
}_{-\vuse{nMassAllCutsFact:m0} } $ background events in a mass window
of $m_{\ensuremath{B_s}\xspace} \pm 0.1\ensuremath{\mbox{\,Ge\kern -0.1em V}}\xspace$. With this background estimate, the upper
limit on the branching fraction is $\ensuremath{{\cal B}}(\ensuremath{B^0_s\ensuremath{\rightarrow}\xspace \mu^+\mu^-}) \le
\vuse{ExpectedUpperLimit} $ at the 90\% C.L.
\section{Conclusions}
This study is limited by the size of the background MC sample. In the
future, it will include larger background samples and a detailed
simulation of rare $b$-hadron decays. The search for \ensuremath{B^0_s\ensuremath{\rightarrow}\xspace \mu^+\mu^-}\ promises
an interesting start-up analysis with the possibility of setting tight
constraints on the MSSM. With sufficient integrated luminosity, the
precision measurement of the \ensuremath{B^0_s\ensuremath{\rightarrow}\xspace \mu^+\mu^-}\ branching fraction will set
constraints on models of new physics.
\input references.tex
\end{document}
\subsection*{Outline of the proof}
Outside of applying the hyperbolicity axioms of Masur and Minsky, our methods of proof, although intricate, are mostly self-contained, depending on basic tools from the theory of group actions on trees, including Bass-Serre theory and Stallings folds. Beyond the methods there are important motivations coming from the proof of Masur and Minsky, in particular the definition of the projection maps that play a role in verifying the Masur--Minsky axioms.
\subparagraph{Section~\ref{SectionFreeSplittingComplex}.} We give the basic concepts underlying the construction of the free splitting complex $\FS(F_n)$, including definitions of collapse maps, and Lemma~\ref{LemmaFSSimplices} which contains the technical results about free splittings that are needed to verify that $\FS(F_n)$ is, indeed, a simplicial complex. The proof of that lemma is given in Section~\ref{SectionFSSimplicesProof}. Collapse maps are also needed to understand the first barycentric subdivision $\FS'(F_n)$, which is what we actually use in our proof of hyperbolicity. In brief, $\FS'(F_n)$ has a \emph{vertex} for each conjugacy class of free splitting $F \act T$, and an oriented edge for each collapse relation $T \collapsesto S$. Since the composition of two collapse maps is a collapse map, the collapse relation is transitive, from which it follows that each geodesic in the \nb{1}skeleton of $\FS'(F_n)$ is a ``zig-zag path'' that alternates between collapses and expansions.
\subparagraph{Sections~\ref{SectionFoldPaths} and~\ref{SectionMasurMinsky}.} Following Stallings' method \cite{Stallings:folding} as extended by Bestvina and Feighn \cite{BestvinaFeighn:bounding}, we define a system of paths in $\FS'(F)$ called \emph{fold paths}. We also review the criterion for hyperbolicity due to Masur and Minsky \cite{MasurMinsky:complex1}, which is concerned with families of paths and projection maps to those paths that satisfy certain axioms, which we refer to as the \emph{Coarse Retraction}, \emph{Coarse Lipschitz}, and \emph{Strong Projection} axioms.
The first step of progress on the Main Theorem is the statement of Proposition~\ref{PropFoldContractions} which asserts the existence of a system of projection maps, one such map from the ambient space $\FS'(F_n)$ to each fold path, that satisfy the Masur-Minsky axioms.
\subparagraph{Section~\ref{SectionCombing}.} We introduce the concept of combing of fold paths. The combing process has as input a fold path $S_0 \mapsto\cdots\mapsto S_K$ plus a single edge in $\FS'(F_n)$ with one endpoint $S_K$ and opposite endpoint denoted $S'_K$, which can be either a collapse $S_K \collapsesto S'_K$ or an expand $S_K \expandsto S'_K$. The output is a fold path (roughly speaking) from some $S'_0$ to $S'_K$ which stays a uniformly bounded distance from the input path, and which has the following rather strong asynchronous fellow traveller property: every free splitting along the input fold path from $S_0$ to $S_K$ is connected by a single edge to some free splitting along the output path from $S'_0$ to $S'_K$. The result of the combing process is a \emph{combing rectangle}, the general form of which is depicted in Figure~\ref{FigureCombingRectangle}. These rectangles are certain commutative diagrams of fold maps and collapse maps that can be viewed as living in the \nb{1}skeleton of $\FS'(F_n)$. We use many such diagrams throughout the paper, both as formal tools and as visualization aids.
Section~\ref{SectionCombingRectangles} contains basic definitions and properties regarding combing rectangles. In this section we also take the next step of progress in the proof of the Main Theorem, by using combing to define the system of projection maps to fold paths, and we state Proposition~\ref{PropProjToFoldPath} which asserts that these particular projection maps satisfy the Masur--Minsky axioms. Section~\ref{SectionCombingConstructions} contains the statements and proofs of various useful constructions of combing rectangles.
\subparagraph{Section~\ref{SectionFSU}.} We introduce \emph{free splitting units} as a way of subdividing a fold path into subpaths each of which has uniformly bounded diameter in $\FS'(F_n)$ (see Proposition~\ref{LemmaUnitsLipschitz}) but which nevertheless measure progress through $\FS'(F_n)$ (as stated later in Proposition~\ref{PropFoldPathQuasis}). Section~\ref{SectionDiamBounds} contains important diameter bounds for subsegments of fold paths. Section~\ref{SectionFSUDefsPropsApps} uses these diameter bounds to formulate the definition of free splitting units. Once they are defined, we are able to use the diameter bounds to quickly verify the \emph{Coarse Retraction} axiom; see Proposition~\ref{PropCoarseRetract}.
\subparagraph{Section~\ref{SectionMainProof}.} We verify the \emph{Coarse Lipschitz} and \emph{Strong Projection} axioms, completing the proof of the Main Theorem. In this section we also verify that when a fold path is parameterized by free splitting units it becomes a quasigeodesic in $\FS'(F_n)$; see Proposition~\ref{PropFoldPathQuasis}. See the beginning of Section~\ref{SectionMainProof} for a sketch of the proof of the Main Theorem.
\section{The free splitting complex}
\label{SectionFreeSplittingComplex}
We begin with some basic notations used throughout the paper.
For the rest of the paper we shall fix a free group $F$ of finite rank~$\ge 2$.
A \emph{graph} is a 1-dimensional simplicial complex equipped with the CW topology. A \emph{tree} $T$ is a contractible graph. \emph{Simplicial maps} between graphs and trees are maps taking each vertex to a vertex, and taking each edge to a vertex or to another edge preserving barycentric coordinates. We use $G \act T$ to denote an action of a group $G$ on~$T$, which by definition is a homomorphism $G \mapsto \Aut(T)$ from $G$ to the group of simplicial automorphisms of $T$. The action associates to each $\gamma \in G$ a simplicial automorphism of $T$ denoted $x \mapsto \gamma \cdot x$, a notation that extends to subsets of $T$ by $\gamma \cdot A = \{\gamma \cdot x \suchthat x \in A\}$. The \emph{stabilizer} of a subset $A \subset T$ is the subgroup $\Stab_T(A) = \{\gamma \in G \suchthat \gamma \cdot A = A\}$. Given two actions $G \act S,T$, a function $f \colon S \to T$ is said to be \emph{equivariant} if $f(\gamma \cdot x) = \gamma \cdot f(x)$ for all $x \in S$, $\gamma \in G$.
Given a set $A$ and a subset $B \subset A$ we denote the set theoretic complement as~$A-B$. Given a graph $X$ and a subgraph $Y \subset X$ we denote the graph theoretic complement as $X \setminus Y$, whose topological description is the closure of $X-Y$.
\subsection{Free splittings, maps, natural vertices and edges, edgelets}
\label{SectionBasic}
Recall from the introduction that a \emph{free splitting of $F$}\index{free splitting} is an action $F \act T$ where $T$ is a tree that is not a point, the action is \emph{minimal} meaning that there is no proper $F$-invariant subtree, and for every edge $e \subset T$ the subgroup $\Stab_T(e)$ is trivial. We use without comment the basic fact that every homeomorphism of a tree $T$ either fixes a point or translates along a properly embedded copy of $\reals$ called its \emph{axis}, and that minimality of an action $F \act T$ is equivalent to the statement that $T$ is the union of the axes of the elements of $F$ that have no fixed point in~$T$. We also use without comment the fact that every free splitting is cocompact, that is, there is a finite number of orbits of vertices and of edges; this follows from Bass-Serre theory \cite{ScottWall} combined with the fact that the rank of $F$ is finite.
Given a free splitting $F \act T$, from Bass-Serre theory \cite{ScottWall} it follows that the set of conjugacy classes in $F$ of nontrivial vertex stabilizers of $T$ forms a free factor system in the sense of \BookOne, which means that by appropriate choice of representatives $H_1 = \Stab_T(v_1),\ldots,H_k=\Stab_T(v_k)$ of each conjugacy class --- where $v_1,\ldots,v_k$ are the corresponding vertex orbit representatives --- there exists a free factorization of the form $F = H_1 * \cdots * H_k * B$, with $B$ possibly trivial. We refer to this free factor system as the \emph{vertex group system} of~$F \act T$, and denote it $\F(T)$. Notice that a free splitting $F \act T$ is properly discontinuous if and only if $\F(T) = \emptyset$, if and only if every vertex has finite valence.
\begin{definition}[Maps between free splittings]
Given free splittings $F \act S,T$, a \indexemph{map} from $S$ to $T$ is defined to be an $F$-equivariant simplicial map $f \colon S \to T$.
\end{definition}
We will encounter several different kinds of maps, most commonly ``collapse maps'' defined in Section~\ref{SectionCollapseMaps}, ``foldable maps'' defined in Section~\ref{SectionFoldableMaps}, and ``folds'' defined in Section~\ref{SectionFoldFactorizations}. The category of maps will usually suffice for much of this paper, but we will occasionally have to consider more general equivariant continuous functions between free splittings, for example conjugacies.
We will sometimes emphasize the role of the action of $F$ by referring to a ``free splitting over $F$'' or a ``map over $F$'', and we shall use similar terminology for more complicated objects introduced later on that are built out of free splittings and maps over $F$.
Recall from the introduction that a \indexemph{conjugacy} between free splittings $F \act S,T$ is an equivariant homeomorphism between $S$ and $T$. A conjugacy \emph{need not be} a map as just defined, i.e.\ it need not take vertices to vertices or edges to edges, and even if it does it need not preserve barycentric coordinates. Notice that if one is given a map $f \colon S \to T$ as just defined --- an equivariant simplicial map --- then $f$ is a conjugacy if and only if it is locally injective: for if $f$ is locally injective then it is evidently injective, and it is surjective by minimality of the action $F \act T$, and so $f$ is a simplicial isomorphism and hence a homeomorphism.
Given a free splitting $F \act T$, recall also from the introduction the \indexemph{natural cell structure} on $T$, a CW structure whose $0$-skeleton is the set of \emph{natural vertices} which are the vertices of valence~$\ge 3$. Implicit in the definition of the natural cell structure is the fact that each point of~$T$ which is not a natural vertex is contained in the interior of a unique \emph{natural edge}, which is an arc of $T$ each of whose endpoints is a natural vertex and none of whose interior points is a natural vertex. If this fact were not true then $T$ would contain a valence~$1$ vertex, violating minimality, or $T$ would contain arbitrarily long simplicial arcs with no natural vertices. In the latter case, by cocompactness it would follow that $T$ is homeomorphic to a line: but then either the action would be properly discontinuous implying that $F$ has rank~$1$ which is a contradiction; or the kernel of the action would be a free factor of corank~$1$, contradicting that edge stabilizers are trivial. We have also defined the notion of a $k$-edge free splitting $F \act T$ meaning that $T$ has $k$ orbits of natural edges; this notion is invariant under conjugacy. In terms of Bass-Serre theory \cite{ScottWall}, the number of orbits of natural vertices of a free splitting $F \act T$ equals the number of points in the quotient graph of groups $T/F$ which either have a nontrivial group or have valence~$\ge 3$.
The word ``natural'' in this context refers to naturality in the category of free splittings and conjugacies: every conjugacy is an automorphism of the natural cell structure, and in particular preserves the numbers of orbits of natural vertices and edges. On this basis one might have wished to refer to a valence~$1$ vertex as ``natural'', were it not for the fact that $T$ has no vertices of valence~$1$, by virtue of minimality of the action $F \act T$.
\paragraph{Remark on terminology.} Outside of discussions involving natural cell structures and nonsimplicial conjugacies, we work primarily in the simplicial category: a free splitting $F \act T$ comes equipped with a simplicial structure on the tree $T$ which is invariant under the action of~$F$; maps between free splittings are $F$-equivariant simplicial maps. This will be particularly convenient when we encounter subcomplexes of the simplicial structure which are not subcomplexes of the natural cell structure, for example in the results of Sections~\ref{SectionPushingDownPeaks} and~\ref{SectionProofFSUContraction} where the heart of the proof of the Main Theorem resides.
For any free splitting $F \act T$, in order to distinguish between the natural edges of~$T$ and the edges of the given simplicial structure on $T$ we shall refer to the latter as the \emph{edgelets}\index{edgelet} of $T$. This word is meant to evoke the phenomenon that, fairly often, there are many, many, many edgelets in a single natural edge, and we often visualize the edgelets as being very, very, very tiny.
\subsection{Collapse maps.}
\label{SectionCollapseMaps}
In order to define the free splitting complex of $F$ rigorously we need some preliminaries regarding collapse maps.
Given two free splittings $F \act S,T$, a map $f \colon S \to T$ is called a \emph{collapse map}\index{map!collapse}\index{collapse} if $f$ is injective over the interior of each edgelet of $T$. The \emph{collapsed subgraph} $\sigma \subset S$ is the $F$-equivariant subgraph which is the union of those edgelets of $F$ which are collapsed to a vertex by the map $f$. We put $\sigma$ into the notation by writing $f \colon S \xrightarrow{[\sigma]} T$, the square brackets highlighting that $\sigma$ is the name of the collapsed graph, whereas the notation $S \xrightarrow{f} T$ tells us the name of the collapse map $f$ itself. Note that $\sigma \subset S$ is a \emph{proper} subgraph, meaning that $\sigma \ne S$.
Here are some basic facts about collapse maps. Items~\pref{ItemCollapseComponents} and~\pref{ItemNondegIsComponent} will be used without mention throughout the paper. Item~\pref{ItemCollapseFrontierHull} will be needed for the proof of Proposition~\ref{PropCBE}.
\begin{lemma}\label{LemmaCollapseProps}
For any free splittings $F \act S,T$, any collapse map $f \colon S \xrightarrow{[\sigma]} T$, and any vertex $v \in T$, the following hold:
\begin{enumerate}
\item \label{ItemCollapseComponents}
The subgraph $f^\inv(v)$ is connected.
\item \label{ItemNondegIsComponent}
$f^\inv(v)$ does not degenerate to a point if and only if it is a component of $\sigma$.
\item \label{ItemCollapseFrontierHull}
$f^\inv(v)$ is the convex hull of its frontier in $S$.
\end{enumerate}
\end{lemma}
\begin{proof} Denote $\sigma_v = f^\inv(v)$. Given vertices $w_1 \ne w_2 \in \sigma_v$, if the segment $[w_1,w_2]$ does not map to $v$ then $f[w_1,w_2]$ is a nondegenerate finite tree and there must exist two edgelets in $[w_1,w_2]$ with the same image in that tree, contradicting the definition of a collapse~map; this proves that $\sigma_v$ is connected. If $\sigma_v$ is nondegenerate, i.e.\ if it contains an edgelet, then each of its edgelets being in $\sigma$ it follows by connectivity that $\sigma_v$ is a subset of $\sigma$. It is moreover a maximal connected subset of $\sigma$ --- a component of $\sigma$ --- because any edgelet of~$S$ incident to a vertex of $\sigma_v$ but not in $\sigma_v$ does not have constant image under $f$ and so is not contained in $\sigma$. This proves~\pref{ItemCollapseComponents} and~\pref{ItemNondegIsComponent}.
To prove \pref{ItemCollapseFrontierHull}, let $\Fr$ be the frontier of $\sigma_v$ in $S$ and let $\Hull \subset S$ be the convex hull of $\Fr$. By connectivity we have $\Hull \subset \sigma_v$. If the opposite conclusion did not hold then there would be an edgelet $e \subset \sigma_v \setminus \Hull$. Only one of its two complementary components $S \setminus e = S_0 \disjunion S_1$ can contain a point of $\Fr$, and so up to interchanging indices we have $\Hull \subset S_0$. Since $S_1$ is disjoint from $\Fr$ but contains the point $x = e \intersect S_1 \subset e \subset \sigma_v$, it follows that $S_1 \subset \sigma_v \subset \sigma$. The point $x$ is the unique frontier point of $S_1$. Choose $\gamma \in F$ having an axis $L$ contained in $S_1$. Let $z$ be the point of $L$ closest to $x$. For each $y \in S \setminus S_1$, $z$ is also the point of $L$ closest to $y$, and so $\gamma(z)$ is the point of $L$ closest to $\gamma(y)$. But $\gamma(z) \ne z$ and so $\gamma(y) \in S_1 \subset \sigma$, implying that $y \in \sigma$ and contradicting properness of $\sigma$.
\end{proof}
From Lemma~\ref{LemmaCollapseProps}~\pref{ItemCollapseComponents}, given a collapse map $f \colon S \xrightarrow{[\sigma]} T$ it follows that $\sigma$ determines $T$ up to simplicial conjugacy, in that the map $S \mapsto T$ induces a simplicial isomorphism between $T$ and the quotient tree obtained from $S$ by collapsing each component of $\sigma$ to a point, and furthermore this simplicial isomorphism is $F$-equivariant. In this situation we often say that \emph{$T$ is obtained by collapsing $\sigma$}.
Furthermore, any choice of collapsed subgraph may be used, in the sense that for any free splitting $F \act S$ and any $F$-equivariant, proper subgraph $\sigma \subset S$ there exists a free splitting $T$ and a collapse map $S \xrightarrow{[\sigma]} T$. The tree $T$ is defined as the quotient of $S$ obtained by collapsing to a point each component of $\sigma$. Since $\sigma$ is proper, $T$ is not a point. Since $\sigma$ is equivariant, the action $F \act S$ descends to an action $F \act T$. This action is minimal because $T$ is a union of axes of elements of $F$: for each edge $e \subset T$ there exists a unique pre-image edge $e' \subset S$ such that $e'$ maps to $e$, and there exists $\gamma \in F$ whose axis in $S$ contains $e'$, so the axis of $\gamma$ in $T$ contains~$e$. The stabilizer of an edge $e \subset T$ equals the stabilizer of the pre-image edge and so is trivial. This shows that $F \act T$ is a free splitting, and by construction the quotient map $S \xrightarrow{[\sigma]} T$ is a collapse map.
The (nonsimplicial) conjugacy type of the collapsed tree actually depends only on the ``natural core'' of the collapsed subgraph. To be precise, given a free splitting $F \act S$ and a proper, $F$-equivariant subgraph $\sigma \subset S$, define the \emph{natural core}\index{natural core} of $\sigma$ to be the largest natural subcomplex of $S$ contained in $\sigma$ whose components are all nondegenerate. For any collapse maps $S \xrightarrow{[\sigma]} T$, $S \xrightarrow{[\sigma']} T'$, if $\sigma,\sigma'$ have the same natural core then there exists a conjugacy $T \to T'$, although this conjugacy need not be a simplicial map with respect to the given simplicial structures of $T,T'$.
\bigskip
Given free splittings $F \act S,T$, we say that $S$ \emph{collapses} to $T$\index{collapse} or that $T$ \emph{expands} to~$S$,\index{expansion} denoted $S \collapsesto T$ or $T \expandsto S$, if there exists a function $S \mapsto T$ which is a collapse map with respect to some simplicial subdivisions of the natural cell structures on $S$ and~$T$. These relations are well-defined on the conjugacy classes of $S,T$, indeed $S \collapsesto T$ if and only if there exist a function $S \mapsto T$ which is a collapse map with respect to the natural cell structures themselves. Even when it is known that $S \collapsesto T$, notice that there might not exist a collapse map $S \mapsto T$ without first changing the simplicial structures on $S$ and/or~$T$, for example if $T$ is subdivided so finely that it has more edgelet orbits than~$S$. The collapse and expand relations are transitive, e.g.\ if $S \collapsesto S' \collapsesto S''$ then $S \collapsesto S''$, for if $S \mapsto S' \mapsto S''$ are collapse maps of natural cell structures then the composition $S \mapsto S''$ is a collapse map of natural cell structures.
In several places throughout the paper we use without comment the fact that every free splitting $F \act T$ has a \emph{properly discontinuous expansion} $T \expands S$, meaning that the free splitting $F \act S$ is properly discontinuous; see \cite{HandelMosher:distortion}, Section~3.2 for a proof, under the heading ``How to construct trees in $\K^T_n$'', Steps~1 and~2. When a properly discontinuous expansion $T \expands S$ is chosen, with collapse map $S \xrightarrow{[\sigma]} T$, the vertex group system of $T$ is represented in $S$ as the conjugacy classes of the stabilizers of the infinite components of~$\sigma$.
\subsection{The free splitting complex in terms of collapse maps.}
\label{SectionFSInTermsOfCollapsing}
The following result contains the technical facts needed to justify the construction of the simplicial complex $\FS(F)$. For any free splitting $F \act T$ and any proper $F$-invariant natural subgraph $\sigma \subset T$ let $T \xrightarrow{[\sigma]} T_\sigma$ be the corresponding collapse map, the quotient map obtained by collapsing to a point each component of $\sigma$. If $T$ is a $(K+1)$-edge free splitting then for each $k=0,\ldots,K$ let $\F_k(T)$ be the set of conjugacy classes of $(k+1)$-edge free splittings of the form $T_\sigma$, indexed by those natural subgraphs $\sigma \subset T$ that contain exactly $K-k$ natural edge orbits of $T$. There are exactly $\binom{K+1}{k+1}=\frac{(K+1)!}{(k+1)! (K-k)!}$ choices of such $\sigma$, although a priori one does not know where the cardinality of the set $\F_k(T)$ lies in the interval from $1$ to $\binom{K+1}{k+1}$, because one does not know whether collapsing two distinct $F$-invariant natural subgraphs results in nonconjugate free splittings. Furthermore one does not know a priori how the conjugacy class of $T$ depends on, say, the set $\F_0(T)$ of conjugacy classes of 1-edge collapses of~$T$. The following lemma resolves these issues as one might hope; the lemma will be proved in Section~\ref{SectionFSSimplicesProof}.
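To make the count concrete, here is a small worked instance (an illustration only, not part of the lemma): if $F \act T$ is a $3$-edge free splitting, so that $K=2$, then the number of choices of $\sigma$ containing exactly $K-k$ natural edge orbits is
\[
\binom{3}{1} = 3 \ \ (k=0), \qquad \binom{3}{2} = 3 \ \ (k=1), \qquad \binom{3}{3} = 1 \ \ (k=2),
\]
so a priori the cardinality of $\F_0(T)$ lies between $1$ and $3$, as does that of $\F_1(T)$, while $\F_2(T)$ consists of the conjugacy class of $T$ alone; correspondingly, the simplex $\<T\>$ constructed below is a triangle, with at most $3$ distinct vertex labels, at most $3$ distinct edge labels, and a single $2$-dimensional face.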
\begin{lemma}\label{LemmaFSSimplices} For any free splittings $F \act T,T'$ the following hold:
\begin{enumerate}
\item\label{ItemSimplexExists}
For any two $F$-equivariant natural subgraphs $\sigma_1,\sigma_2 \subset T$ we have $\sigma_1 = \sigma_2$ if and only if $T_{\sigma_1}$, $T_{\sigma_2}$ are conjugate.
\item\label{ItemSimplexUnique}
$\F_0(T)=\F_0(T')$ if and only if $T,T'$ are conjugate.
\end{enumerate}
\end{lemma}
By applying item~\pref{ItemSimplexExists} of this lemma we may define a collapse $T \collapse U$ to be \emph{proper}\index{collapse!proper and improper} if it satisfies any of the following equivalent conditions: $U,T$ are not conjugate; \emph{for any} map $T \xrightarrow{[\sigma]} U$ which is a collapse map with respect to some subdivision of the natural cell structures, the natural core of $\sigma$ is nonempty. We also refer to the collapse maps of the latter type as \emph{proper collapse maps}. Notice that properness of a collapse relation $T \collapse U$ is also equivalent to the statement that \emph{there exists} a map $T \xrightarrow{[\sigma]} U$ which is a collapse map with respect to some subdivision of the natural structures, such that the natural core of $\sigma$ is nonempty. A collapse relation $T \collapse U$ which is not proper is~\emph{improper}.
Before proving this lemma we apply it to the construction of $\FS(F)$. From item~(1) it follows that we can associate an abstract $K$-simplex denoted $\<T\>$ to the conjugacy class of each $(K+1)$-edge free splitting $F \act T$, where the $k$-dimensional faces of $\<T\>$ are labelled by the conjugacy classes of those free splittings of the form~$T_\sigma$ such that $\sigma$ contains exactly $K-k$ natural edge orbits of $T$, and where $T_\sigma$ is a face of $T_{\sigma'}$ if and only if $\sigma' \subset \sigma$. We can then glue these simplices together, where for each collapse relation $T \collapses U$ the simplex $\<U\>$ is glued to the unique face of the simplex $\<T\>$ that is labelled by the conjugacy class of $U$ and where the gluing preserves the labelling of subfaces. From item~(2) it follows that the result of these gluings is a simplicial complex. We have proved:
\begin{corollary}
There exists a simplicial complex $\FS(F)$ whose $K$-simplices $\<T\>$ are in one-to-one correspondence with the conjugacy classes of $(K+1)$-edge free splittings $F \act T$, such that for any pair of simplices $\<T\>$, $\<U\>$ we have $\<U\> \subset \<T\>$ if and only if $U \expandsto T$.
\end{corollary}
An alternate and better known approach to this corollary is to appeal to Hatcher's construction of the sphere complex \cite{Hatcher:HomStability}; see for example Aramayona--Souto \cite{AramayonSouto:FreeSplittings} which constructs the 1-skeleton of $\FS(F)$ in this manner.
The dimension of $\FS(F)$ equals $3 \cdot \rank(F) - 4$, the number $3 \cdot \rank(F)-3$ being the maximum number of natural edge orbits of a free splitting $F \act T$, the maximum occurring if and only if every natural vertex of $T$ has valence~$3$ (which implies that the action $F \act T$ is properly discontinuous).
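These numbers come from the standard Euler characteristic count, which we sketch in the properly discontinuous case, where the quotient $G = T/F$ is a finite graph with $\pi_1(G) \cong F$, with $V$ natural vertices and $E$ natural edges:
$$V - E = \chi(G) = 1 - \rank(F), \qquad\qquad 2E = \sum_{v} \text{valence}(v) \ge 3V,$$
and combining the two, $E \le 3(E-V) = 3 \cdot \rank(F) - 3$, with equality if and only if every natural vertex of $G$ has valence exactly~$3$.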
We usually work with the first barycentric subdivision of $\FS(F)$, denoted $\FS'(F)$. Gromov hyperbolicity of $\FS(F)$ is equivalent to Gromov hyperbolicity of $\FS'(F)$ because, as with any connected simplicial complex, the identity map is a quasi-isometry between their geodesic simplicial metrics (connectivity follows from Hatcher's proof of contractibility \cite{Hatcher:HomStability}, or from the construction of Stallings fold paths reviewed in Section~\ref{SectionFoldPaths}). The simplicial complex $\FS'(F)$ has one $0$-simplex associated to each conjugacy class of free splittings, and it has a $k$-simplex associated to each sequence of conjugacy classes of free splittings obtained from any chain of $k$ proper expansions $T_0 \expandsto T_1 \expandsto \cdots \expandsto T_k$. In particular, an edge in $\FS'(F)$ oriented from $S$ to $T$ can be written uniquely as either an expand $S \expandsto T$ or a collapse $S \collapsesto T$; uniqueness follows from asymmetry of the collapse relation, which is a consequence of Lemma~\ref{LemmaFSSimplices}~\pref{ItemSimplexExists}.
As mentioned earlier, the relations of collapse and expand are transitive. It follows that every geodesic in the one-skeleton of $\FS'(F)$ can be written as an alternating sequence of expands and collapses, for example starting with an expand $T_0 \expand T_1 \collapse T_2 \expand T_3 \collapse T_4 \expand T_5 \collapse \cdots$ or starting with a collapse $T_0 \collapse T_1 \expand T_2 \collapse T_3 \expand T_4 \collapse T_5 \expand \cdots$. Any edge path in $\FS'(F)$ that alternates between expands and collapses is called a \emph{zig-zag path}\index{zig-zag} in $\FS'(F)$.
Throughout the paper, given free splittings $F \act S,T$, we use the notation $d(S,T)$ to denote the length of the shortest edge path in the simplicial complex $\FS'(F)$ between the vertices represented by $S$ and $T$. We must prove that this metric is Gromov hyperbolic.
\subsection{Proof of Lemma \ref{LemmaFSSimplices}}
\label{SectionFSSimplicesProof}
While the proof is surely standard, we are not aware of any proof in the literature, so we provide the details.
To each free splitting $F \act S$ and each oriented natural edge $\eta \subset S$ we associate a clopen decomposition $\partial F = \C_-(\eta) \disjunion \C_+(\eta)$ as follows. Choose a proper expansion $S \expandsto R$ with collapse map $f \colon R \to S$. Let $\eta_R \subset R$ be the unique oriented natural edge that maps to $\eta$ under the collapse $R \mapsto S$. The subgraph $R \setminus \eta_R$ has two components, incident to initial and terminal vertices of $\eta_R$, whose end spaces are $\C_-(\eta), \C_+(\eta) \subset \partial F$, respectively. If one chooses any other proper expansion $S \expandsto R'$ with oriented natural edge $\eta_{R'}$ mapping to $\eta$, then as shown in \cite{HandelMosher:distortion} Lemma~17 there exists a sequence of collapses and expansions $R=R_0,\ldots,R_K=R'$ and oriented natural edges $\eta_R=\eta_{R_0} \subset R_0, \eta_{R_1} \subset R_1, \ldots, \eta_{R'}=\eta_{R_K} \subset R_K$ such that for each $k=1,\ldots,K$ the edges $\eta_{k-1}$, $\eta_k$ correspond to each other under the collapse map between $R_{k-1}$ and $R_k$ (whichever direction that map goes). It immediately follows that $\C_-(\eta_k)$, $\C_+(\eta_k)$ are each constant along this sequence. This shows that $\C_-(\eta),\C_+(\eta)$ are both well-defined independent of the choice of $R$. Denote the unordered pair by $\C(\eta)=\{\C_-(\eta),\C_+(\eta)\}$.
Note that for each $\gamma \in F$ and $\eta \subset S$ we have $\C(\gamma \cdot \eta) = \gamma \cdot \C(\eta)$. Also, given natural edges $\eta \ne \eta' \subset S$ we have $\C(\eta) \ne \C(\eta')$: for the proof we may assume the action $F \act S$ is properly discontinuous, so $S \setminus (\eta \union \eta')$ has three components, each infinite; choosing a ray in each we see that three of the four sets $\C_-(\eta) \intersect \C_-(\eta')$, \, $\C_-(\eta) \intersect \C_+(\eta')$, \, $\C_+(\eta) \intersect \C_-(\eta')$, \, $\C_+(\eta) \intersect \C_+(\eta')$ are nonempty, and so $\C(\eta) \ne \C(\eta')$. Also, for any collapse $S \xrightarrow{g} T$ and any edges $\eta_S \subset S$, $\eta_T \subset T$ such that $g(\eta_S)=\eta_T$, we have $\C(\eta_S) = \C(\eta_T)$, for in defining $\C(\eta_S)$ we can choose any proper expansion with collapse $R \xrightarrow{f} S$, in defining $\C(\eta_T)$ we can choose the same $R$ with collapse $R \xrightarrow{f} S \xrightarrow{g} T$, and one sees that the same edge of $R$ maps to $\eta_S$ and to $\eta_T$ under these collapse maps.
Consider now $T$, $T_{\sigma_1}$, and $T_{\sigma_2}$ as in~\pref{ItemSimplexExists} and suppose there exists a conjugacy $T_{\sigma_1} \to T_{\sigma_2}$, inducing a bijection of natural edges. If $e_i \subset T_{\sigma_i}$ $(i=1,2)$ correspond under this bijection, pull back under the collapse maps $T \to T_{\sigma_i}$ to obtain natural edges $e'_i \subset T$. From the previous paragraph it follows that $\C(e'_1) = \C(e_1) = \C(e_2) = \C(e'_2)$ which implies that $e'_1=e'_2$. Thus, a natural edge of $T$ is collapsed by $T \mapsto T_{\sigma_1}$ if and only if it is collapsed by $T \mapsto T_{\sigma_2}$, which implies that $\sigma_1=\sigma_2$. This proves~\pref{ItemSimplexExists}.
\bigskip
To prove \pref{ItemSimplexUnique}, given a free splitting $F \act T$, let $\C(T) = \union_{\eta\subset T} \{\C_-(\eta), \C_+(\eta)\}$ taken over all oriented natural edges $\eta \subset T$. The set $\C(T)$ is an $F$-invariant set of clopens in $\partial F$ depending only on the conjugacy class of $T$. Since $\C(T) = \union_{T' \in \F_0(T)} \C(T')$, it follows that $\F_0(T)$ determines $\C(T)$, and so it suffices to show that $\C(T)$ determines the conjugacy class of $T$. The set $\C(T)$ does determine the oriented natural edges of $T$, which are in bijective, $F$-equivariant correspondence with $\C(T)$ itself via $\eta \leftrightarrow \C_+(\eta)$. Also, the unoriented natural edges of $T$ are in bijective, $F$-equivariant correspondence with subsets of $\C(T)$ of cardinality~$2$ which are partitions of $\partial F$. It remains to show that $\C(T)$ also determines the natural vertices of $T$ and the ``initial vertex'' relation between oriented natural edges and vertices.
Associated to each natural vertex $v \in T$ there is a subset $\D(v) \subset \C(T)$ consisting of all $\C_+(\eta) \in \C(T)$ such that $v$ is the initial vertex of $\eta$. If we can show that $\C(T)$ determines the collection $\{\D(v) \suchthat \text{$v$ is a natural vertex of $T$}\}$ then we will be done, because the initial vertex relation is then also determined: $v$ is an initial vertex of $\eta$ if and only if $\C_+(\eta) \in \D(v)$. Noting that the valence of $v$ equals the cardinality of $\D(v)$, we show first that $\C(T)$ determines the finite cardinality sets~$\D(v)$.
Define a relation on the set of subsets of $\C(T)$: given two subsets $\D,\C \subset \C(T)$ we write $\D \sqsubset \C$ if for every $D \in \D$ there exists $C \in \C$ such that $D \subset C$.
If $v \in T$ is a natural vertex of finite valence then $\D(v)$ is a partition of $\partial F$ of finite cardinality $\ge 3$. Furthermore, for every cardinality~2 subset $\C \subset \C(T)$ which is a partition of $\partial F$ --- i.e.\ every subset of the form $\C=\{\C_-(\eta),\C_+(\eta)\}$ for some oriented natural edge $\eta \subset T$ --- if $\D(v) \sqsubset \C$ then there exists $D \in \D(v)$ and $C \in \C$ such that $D=C$.
We claim that the converse holds: suppose $\D \subset \C(T)$ is a partition of $\partial F$ of finite cardinality $\ge 3$, and suppose that $\D$ satisfies the property that for every $\C \subset \C(T)$ of cardinality~$2$ which is a partition of $\partial F$, if $\D \sqsubset \C$ then there exists $D \in \D$ and $C \in \C$ such that $D=C$; then it follows that there exists a natural vertex $v \in T$ such that $\D=\D(v)$.

To prove this claim, write $\D = \{\C_+(\eta_i)\}_{i \in I}$ for some finite set~$I$. Note that if $i \ne j \in I$ then $\eta_i,\eta_j$ have disjoint interiors, because otherwise they are opposite orientations of the same edge $\eta$ and $\D=\{\C_+(\eta),\C_-(\eta)\}$, contradicting that $\D$ has cardinality $\ge 3$. Also, if $i \ne j$ then the shortest path in~$T$ intersecting both of $\eta_i,\eta_j$ intersects them in their respective initial vertices, because $\C_+(\eta_i) \intersect \C_+(\eta_j) = \emptyset$. It follows that $T - \union_{i \in I} \interior(\eta_i)$ has a component $\tau$ that intersects each $\eta_i$ in its initial endpoint. If $\eta$ is any oriented natural edge such that $\tau\intersect \eta$ is the initial vertex of $\eta$ then $\C_+(\eta) \in \D$, for otherwise $\C_+(\eta) \subset \partial F - \union \D$, contradicting that $\D$ is a partition of $\partial F$. It follows that $\{\eta_i\}$ is precisely the set of oriented natural edges not in $\tau$ but with initial vertex in $\tau$.

Suppose that $\tau$ is a nondegenerate tree. If $\tau$ has finite diameter, pick any natural edge $\eta \subset \tau$, and note that $\D \sqsubset \C(\eta)$ but there does not exist any $D \in \D$ and $C \in \C(\eta)$ for which $D=C$, a contradiction. If $\tau$ has infinite diameter, any ray in $\tau$ determines an element of $\partial F - \union \D$, contradicting that $\D$ partitions $\partial F$. It follows that $\tau$ is a degenerate tree consisting of a single natural vertex $v \in T$ of valence $\ge 3$, and that $\D=\D(v)$, proving the claim.
To summarize, the natural, finite valence vertices of $T$ are determined by $\C(T)$ in the following manner: they are in $F$-equivariant bijective correspondence, via the correspondence $v \leftrightarrow \D(v)$, with the subsets $\D \subset \C(T)$ which are partitions of $\partial F$ of finite cardinality $\ge 3$, having the property that for every two-element subset $\E \subset \C(T)$ which is a partition of $\partial F$, if $\D \sqsubset \E$ then there exists $D \in \D$ and $E \in \E$ such that $D=E$.
It remains to describe a similar scheme by which $\C(T)$ determines the infinite valence vertices of~$T$. If $v \in T$ is a natural vertex of infinite valence then $\D(v)$ is an infinite partition of $\partial F - \partial \Stab_T(v)$, it is invariant under the action of the free factor $\Stab_T(v) \subgroup F$ on $\partial F$, and it has the property that for any cardinality~$2$ subset $\C \subset \C(T)$ which is a partition of $\partial F$, if $\D(v) \sqsubset \C$ then there exists $D \in \D(v)$ and $C \in \C$ such that $D=C$.

Conversely, let $\D \subset \C(T)$ be an infinite subset for which there exists a proper, nontrivial free factor $A \subgroup F$ such that $\D$ is a clopen partition of $\partial F - \partial A$ and $\D$ is invariant under the action of $A$ on $\C(T)$, and for any cardinality~$2$ subset $\E \subset \C(T)$ which is a partition of $\partial F$, if $\D \sqsubset \E$ then there exists $D \in \D$ and $E \in \E$ such that $D=E$. Under these conditions we must prove that there exists a vertex $v \in T$ such that $A = \Stab_T(v)$ and $\D=\D(v)$. Just as in the finite valence case, writing $\D = \{\C_+(\eta_i)\}_{i \in I}$ where the index set $I$ is now infinite, there is a component $\tau$ of $T - \union_{i \in I} \interior(\eta_i)$ that intersects each $\eta_i$ in its initial endpoint. Since $\D$ is $A$-invariant, the collection $\{\eta_i\}_{i \in I}$ is also $A$-invariant, and so $\tau$ is $A$-invariant. The set $\{\eta_i\}_{i \in I}$ is precisely the set of oriented natural edges not in $\tau$ but with initial vertex in $\tau$, for if $\eta$ is an oriented natural edge such that $\tau\intersect \eta$ is the initial vertex of $\eta$ then $\C_+(\eta) \subset \partial F - \partial A$, and if $\C_+(\eta) \not\in \D$ then $\C_+(\eta) \subset (\partial F - \partial A) - \union \D$, contradicting that $\D$ is a partition of $\partial F - \partial A$.
If $\tau$ is nondegenerate and of finite diameter then we obtain the same contradiction as in the case where $\D$ is finite. Suppose $\tau$ is nondegenerate and of infinite diameter. The action of the free factor $A$ on $T$ has a unique, minimal invariant subtree $T^A$, and so $T^A \subset \tau$. If $T^A$ is nondegenerate then for any edge $\eta \subset T^A$ we have $\D \sqsubset \C(\eta)$ but no $D \in \D$ equals any $C \in \C(\eta)$, a contradiction. The tree $T^A$ is therefore degenerate, $T^A = \{v\}$ where $v \in T$ is the unique vertex for which $\Stab_T(v)=A$. Any ray in $\tau$ therefore defines an element of $\partial F - \partial A$, but the element defined is not in $\union\D$, a contradiction. It follows that $\tau$ must be degenerate, $\tau=\{v\}$ and $\D=\D(v)$ for some natural vertex $v \in T$, and the proof of Lemma \ref{LemmaFSSimplices} is~complete.
\subparagraph{Remark.} For any free splitting $F \act S$, any self-conjugacy $f \colon S \to S$ restricts to the identity map on the vertex set of $S$, because $f$ maps each natural edge $\eta \subset S$ to itself preserving orientation. This is true because, as shown in the proof of Lemma~\ref{LemmaFSSimplices}, if $\eta \ne \eta' \subset S$ are natural edges then $\C(\eta) \ne \C(\eta')$, and $\C_-(\eta) \ne \C_+(\eta)$.
\section{Fold paths}
\label{SectionFoldPaths}
We define the class of fold paths between vertices of $\FS'(F)$, using a method pioneered by Stallings \cite{Stallings:folding} for factoring maps of graphs into products of folds. This method was extended to the category of group actions on trees by Bestvina and Feighn \cite{BestvinaFeighn:bounding}. We refer to the latter paper for some details, although these details are considerably simplified in the category of free splittings.
\subsection{Directions, gates, and foldable maps}
\label{SectionFoldableMaps}
First we set up some of the basic definitions which are used throughout the paper. We will also prove a tree-theoretic version of the First Derivative Test, Lemma~\ref{LemmaFDT}.
Given any graph $X$ and a vertex $v \in X$, the set of \emph{directions of $X$ at $v$},\index{direction} denoted $D_v X$, is defined to be the set of germs of oriented arcs in $X$ with initial vertex~$v$. Each direction at $v$ is uniquely represented by an oriented edgelet with initial vertex $v$. The union of the sets $D_v X$ over all vertices $v \in X$ is denoted $DX$. Given a subgraph $Y \subset X$, the subset of $DX$ represented by oriented edgelets $e \subset X \setminus Y$ having initial vertex in $Y$ is denoted $D_Y X$.
Given two free splittings $F \act S,T$ and a map $f \colon S \to T$, the \emph{derivative}\index{derivative} of $f$ is a partially defined map $df \colon DS \to DT$ whose domain is the set of directions of oriented edgelets $e$ on which $f$ is nonconstant, and whose value on the direction of $e$ is the direction of the oriented edgelet $f(e)$. Given a subgraph $W \subset S$, if $f$ is nonconstant on each edgelet representing a direction in the set $D_W S$ then we obtain by restriction a map $d_W f \colon D_W S \to DT$; as a special case, when $W=\{v\}$ is a vertex we obtain a map $d_v f \colon D_v S \to D_{f(v)} T$.
Suppose now that the map $f \colon S \to T$ is nonconstant on all edgelets of $S$, so $df \colon DS \to DT$ has full domain of definition. For each vertex $v \in S$ the set $D_v S$ partitions into \emph{gates}\index{gate} which are the nonempty subsets of the form $(d_v f)^\inv(\delta)$ for $\delta \in D_{f(v)} T$. Every gate is a finite set, indeed we have:
\begin{lemma}\label{LemmaGateBound}
For any free splittings $F \act S,T$, for any map $f \colon S \to T$ which is nonconstant on each edgelet of $S$, and for any vertex $v \in S$, the cardinality of each gate of $D_v S$ is $\le 2 \rank(F)$.
\end{lemma}
\begin{proof}
Let $e_1,\ldots,e_M \subset S$ be oriented edgelets with initial vertex $v$ representing a gate of $D_v S$. These oriented edgelets are all in distinct orbits under the action of $F$, for otherwise their common image in $T$ would have a nontrivial stabilizer. It follows that in the quotient graph of groups $S/F$, the quotients of $e_1,\ldots,e_M$ represent $M$ distinct directions at the quotient of $v$. It therefore suffices to bound the valence of each vertex in the quotient graph of groups of a free splitting. Without decreasing the valence at the quotient of $v$, one can blow up all other vertex orbits so that the only vertex orbit with nontrivial stabilizers is the orbit of $v$. Then, still without decreasing quotient valence, one can inductively collapse natural edges whose endpoints are in different vertex orbits. When this process stops, the quotient graph of groups is a rose with one natural vertex (possibly having nontrivial vertex group) and with $\le \rank(F)$ edges, whose natural vertex has valence $\le 2 \rank(F)$.
\end{proof}
\begin{definition}[Foldable maps and edgelets]
\label{DefinitionFoldableMaps}
A map $f \colon S \to T$ is \emph{foldable}\index{map!foldable}\index{foldable map} if it satisfies either of the following two equivalent statements:
\begin{description}
\item[Natural edge definition of foldable:] $f$ is injective on each natural edge of $S$ and $f$ has $\ge 3$ gates at each natural vertex of $S$.
\item[Edgelet definition of foldable:] $f$ is injective on every edgelet, $f$ has $\ge 2$ gates at every vertex, and $f$ has $\ge 3$ gates at every natural vertex.
\end{description}
We will without warning switch between these two definitions whenever it is convenient. (The two definitions are equivalent because a map which is injective on each edgelet is injective on a natural edge if and only if it has $2$ gates at each valence~$2$ vertex in the interior of that edge.) Notice that the restrictions on the number of gates are significant only at vertices of finite valence, because every gate is a finite set; for example, if every natural vertex of $S$ has nontrivial stabilizer then every map defined on $S$ which is injective on natural edges is foldable. Notice also that foldability of $f$ depends only on the natural cell structures on $S$ and $T$, not on the given simplicial structures; to put it more formally, foldability is an invariant of $f$ in the category of equivariant continuous functions between free splittings of $F$.
Given free splittings $F \act S,T$, a foldable map $f \colon S \to T$, and an edgelet $e \subset T$, an \emph{$e$-edgelet of $f$}\index{edgelet!of a foldable map} is an edgelet of $S$ that is mapped to $e$ by~$f$.
In Lemma~\ref{LemmaFoldableExistence} below we shall prove the existence of foldable maps in the appropriate context.
\end{definition}
\subparagraph{Remark.} In other treatments of Stallings folds we have not seen any analogue of our gate~$\ge 3$ condition on natural vertices. This condition is crucial to the diameter bound obtained in Lemma~\ref{LemmaBROneB}, as well as in the heart of the proof of the Main Theorem, particularly in the proof of Proposition~\ref{PropPushdownInToto}, Step 3.
\subparagraph{The First Derivative Test.} The first derivative test of calculus implies that if the derivative of a function has no zeroes then local extreme values occur only at endpoints of the domain.
\begin{lemma}[The First Derivative Test]\label{LemmaFDT}
Suppose that $f \colon S \to T$ is a foldable map of free splittings. Given a connected subgraph $W \subset S$ and a vertex $v \in W$, if $f(v)$ has valence~1 in the subgraph $f(W) \subset T$ then $v$ is a frontier point of~$W$.
\end{lemma}
\begin{proof} If $v$ is an interior point of $W$ then $D_v W = D_v S$, and since $f$ has $\ge 2$ gates at~$v$ it follows that $d_v f(D_v W)$ has cardinality $\ge 2$, implying that $f(v)$ has valence~$\ge 2$ in~$f(W)$.
\end{proof}
\subsection{Construction of foldable maps}
Given free splittings $F \act S,T$, a fold path from $S$ to $T$ will be defined by factoring a foldable map $S \mapsto T$. Although a foldable map does not always exist, one will exist after moving $S$ a distance at most~2 in $\FS'(F)$.
\begin{lemma}
\label{LemmaFoldableExistence}
For any free splittings $F \act S, T$ there exist free splittings $S',S''$ and a foldable map $S'' \mapsto T$ such that $S \expandsto S' \collapsesto S''$.
\end{lemma}
\begin{proof} Fix the free splitting $F \act T$. Given a free splitting $F \act R$, let $\M(R,T)$ denote the set of all equivariant continuous functions $f \colon R \to T$ that take each natural vertex of $R$ to a vertex of $T$ and whose restriction to each natural edge of $R$ is either injective or constant. It follows that $f$ is a map with respect to the \emph{pullback} simplicial structure on $R$ whose vertex set consists of all points that map to vertices of $T$ and that are not in the interior of a natural edge of $R$ that is collapsed by $f$. The edges of this simplicial structure on $R$ will be referred to as \emph{pullback edgelets of $f$}.
Choose any expansion $S \expandsto S'$ so that $F \act S'$ is properly discontinuous, which implies that the set $\M(S',T)$ is nonempty. Amongst all elements of $\M(S',T)$ choose $f \colon S' \to T$ to maximize the number of orbits of natural edges of $S'$ on which $f$ is constant. By collapsing each such natural edge we define a collapse map $S' \mapsto S''$ and an induced function which is an element of the set $\M(S'',T)$. By maximality of $f$ it follows that \emph{any} element of $\M(S'',T)$ is injective on each natural edge of $S''$, for otherwise by composing the collapse map $S' \mapsto S''$ with an element of $\M(S'',T)$ that collapses some natural edge of $S''$ we obtain an element of $\M(S',T)$ that collapses a larger number of natural edge orbits than $f$ does, a contradiction.
We find a foldable element of $\M(S'',T)$ by solving optimization problems. First we prove that if $g \in \M(S'',T)$ minimizes the number of orbits of pullback edgelets then $g$ has $\ge 2$ gates at each vertex of $S''$. Suppose there is a vertex $v \in S''$ at which $g$ has only~$1$ gate. Let $K$ be the valence of $v$; note that $K \ge 3$ because $g$ is injective on natural edges. Let $\eta_1,\ldots,\eta_K$ be the oriented natural edges of $S''$ with initial vertex~$v$. Let $e_1,\ldots,e_K$ be the initial pullback edgelets of $\eta_1,\ldots,\eta_K$, and let $w_1,\ldots,w_K$ be the terminal endpoints of $e_1,\ldots,e_K$, respectively. We have $g(e_1)=\cdots=g(e_K)=e$ for some oriented edge $e \subset T$ with initial vertex $g(v)$ and opposite vertex $w=g(w_1)=\cdots=g(w_K)$.

Consider first the case that $e_i \ne \eta_i$ for each $i$, and so we can isotope each restricted map $g \restrict \eta_i$ by pushing $g(v)$ across $e$ to $w$ by an isotopy supported in a neighborhood of $e_i$, and we can extend these isotopies to an equivariant homotopy of $g$, to produce an element of $\M(S'',T)$ that has $K$ fewer orbits of pullback edgelets than $g$ has, a contradiction.

Consider next the case that $e_i=\eta_i$ for certain values of $i=1,\ldots,K$. If $v,w_i$ are in distinct $F$-orbits for each such $i$ then we can equivariantly homotope $g$, pushing $g(v)$ across $e$ to $w$, so as to collapse each $e_i$ for which $e_i=\eta_i$, to produce an element of $\M(S'',T)$ that collapses each of the natural edges $\eta_i$ such that $e_i=\eta_i$, a contradiction. In the remaining case there exists some $i=1,\ldots,K$ such that $e_i=\eta_i$ and $w_i=\gamma(v)$ for some $\gamma \in F$, and it follows that $w=\gamma(g(v))$. The edges $e_i \subset S''$ and $e \subset T$ are therefore fundamental domains for the actions of $\gamma$ on its axes in $S''$, $T$, respectively. It follows that the direction of $\gamma^\inv(e_i)$ at $v$ maps to the direction of $\gamma^\inv(e)$ at $g(v)$, which is \emph{not equal to} the direction of $e$ at $g(v)$, contradicting that $g$ has a single gate at $v$.
Next we prove that among all $g \in \M(S'',T)$ that minimize the number of orbits of pullback edgelets, there is at least one which is foldable, having $\ge 3$ gates at each natural vertex. This is achieved, mostly, by solving another optimization problem. Define the \emph{edgelet vector} of $g$ to be the vector of positive integers $L_g$ indexed by the natural edge orbits of $S''$, whose entry $L_g(e)$ corresponding to a natural edge $e \subset S''$ is the number of pullback edgelets in $e$. Define $\Length(L_g)$ to be the sum of its entries, which equals the number of pullback edgelet orbits of $g$, a number which has already been minimized so as to guarantee~$\ge 2$ gates at each vertex. Define $\Energy(L_g)$ to be the sum of the squares of its entries. We have the inequality $\Energy(L_g) \le (\Length(L_g))^2$. Amongst all $g \in \M(S'',T)$ with minimal value of $\Length(L_g)$, choose $g$ so as to maximize $\Energy(L_g)$.
We claim that with energy maximized as above, one of the following holds:
\begin{enumerate}
\item \label{ItemNotHoop}
$g$ has $\ge 3$ gates at each natural vertex, and so $g$ is foldable.
\item \label{ItemHoop}
$S''$ has exactly one natural vertex orbit, $g$ has two gates at every natural vertex, and each natural edge of $S''$ has its two directions lying in distinct gate orbits.
\end{enumerate}
To prove this dichotomy, suppose that $g$ has exactly two gates at some natural vertex~$v$. The gates must have the same cardinality: otherwise, by doing a valence~2 homotopy, pushing $g(v)$ across one edge of $T$ in the image direction of the larger of the two gates at~$v$, one reduces the total number of pullback edgelets. Now consider $g_1,g_2 \in \M(S'',T)$ defined by the two possible valence~2 homotopies at $v$, pushing $g(v)$ across the two edges of $T$ in the two image directions of the two gates at $v$. Note that the average of the two vectors $L_{g_1}, L_{g_2}$ is the vector $L_g$. It follows that $L_g = L_{g_1} = L_{g_2}$, for otherwise, by convexity of energy, one of $\Energy(L_{g_1})$ or $\Energy(L_{g_2})$ would be larger than $\Energy(L_g)$. It also follows that $S''$ has exactly one natural vertex orbit, for otherwise $v$ would be connected across a natural edge $e$ to some natural vertex in a different orbit, implying that one of $L_{g_1}(e)$, $L_{g_2}(e)$ equals $L_g(e)+1$ and the other equals $L_g(e)-1$. It also follows that each natural edge $e$ has one end in the orbit of one gate at $v$ and opposite end in the orbit of the other gate at $v$, for otherwise one of $L_{g_1}(e)$, $L_{g_2}(e)$ would equal $L_g(e)+2$ and the other would equal $L_g(e)-2$. This shows that $g$ satisfies item~\pref{ItemHoop}.
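The convexity property of $\Energy$ used in this argument can be made explicit: for vectors $x,y$ with average $m = \frac{1}{2}(x+y)$, since $\Energy$ is the square of the Euclidean norm, the parallelogram law gives
$$\Energy(x) + \Energy(y) = 2\,\Energy(m) + \frac{1}{2}\,\Energy(x-y)$$
and so if $x \ne y$ then $\max\{\Energy(x),\Energy(y)\} > \Energy(m)$. Applied with $x = L_{g_1}$, $y = L_{g_2}$, $m = L_g$, this shows $L_{g_1} = L_{g_2} = L_g$ as claimed.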
\medskip
To finish up we show that if $g$ satisfies \pref{ItemHoop} then there exists $g' \in \M(S'',T)$ which satisfies~\pref{ItemNotHoop}. Item~\pref{ItemHoop} implies that there is an orientation of the natural edges of $S''$ such that at each natural vertex $v \in S''$, the directions with initial vertex $v$ form one gate of $g$ at $v$ denoted $D^+_v$, and the directions with terminal vertex $v$ form the other gate denoted~$D^-_v$.
Pick a natural vertex $v \in S''$. Let $\tau$ be the subtree of $S''$ consisting of the union of all oriented rays in $S''$ with initial vertex $v$. The restriction of $g$ to each such ray is injective and proper, and their initial directions all map to the same direction in $T$, so it follows that the subtree $g(\tau) \subset T$ has a valence~$1$ vertex at $g(v)$ and no other valence~$1$ vertex. Also, if we orient each edge of $g(\tau)$ to point away from the vertex $g(v)$ then the map $g \colon \tau \to g(\tau)$ preserves orientation. Furthermore $g(\tau)$ is not itself just a ray, for if it were then $T$ would be just a line, an impossibility for a free splitting of a free group of rank~$\ge 2$. Let $w \in g(\tau)$ be the vertex of $g(\tau)$ of valence~$\ge 3$ which is closest to $g(v)$. Define $g' \colon S'' \to T$ by mapping $v$ to $w$, extending equivariantly to the orbit of $v$, and extending equivariantly to an embedding on each edge of~$S''$.
We claim that $g'$ has one gate at $v$ corresponding to each direction of $g(\tau)$ at $w$, which implies that $g'$ is foldable. To see why, first note that the set $D^-_v$ is mapped by $d_v g'$ to the unique direction of the segment $[w,g(v)]$ at $w$. Next note that each direction in the set $D^+_v$ is mapped by $d_v g'$ to one of the directions of $T$ at $w$ distinct from the direction of $[w,g(v)]$; furthermore each such direction is in the image of $d_v g'$ because $g'$ maps $\tau$ onto $g(\tau) \setminus [w,g(v)]$ by an orientation preserving map.
This completes the proof of Lemma~\ref{LemmaFoldableExistence}.
\end{proof}
\subsection{Folds}
\label{SectionFoldFactorizations}
Given free splittings $F \act S,T$ and a foldable map $f \colon S \to T$, we say that $f$ is a \emph{fold}\index{map!fold}\index{fold}\index{fold map} if there exist oriented natural edges $\eta,\eta' \subset S$ with the same initial vertex $v$, and there exist nondegenerate initial segments $e \subset \eta$, $e' \subset \eta'$ which are subcomplexes of~$S$ with the same positive number of edgelets, such that if we let $\phi \colon e \to e'$ denote the unique orientation preserving simplicial isomorphism, then for all $x \ne x' \in S$ we have $f(x)=f(x')$ if and only if there exists $\gamma \in F$ such that (up to interchanging $x,x'$) $\gamma\cdot x \in e$ and $\phi(\gamma \cdot x)=\gamma \cdot x' \in e'$. We also say that the map $f$ \emph{folds the segments $e$ and~$e'$}.
The pair of segments $e,e'$ determines the free splitting $F \act T$ up to simplicial conjugacy, namely $F \act T$ is conjugate to the equivariant quotient complex of $S$ obtained by equivariantly identifying $e$ and $e'$ via $\phi \colon e \to e'$. In this context we shall say that the free splitting $T$ is determined by \emph{folding the segments $e,e'$}.
Letting $d,d' \in D_v S$ denote the initial directions of $e,e'$ respectively, we also say that $f$ \emph{folds the directions $d,d'$}, although $d,d'$ do not determine the segments $e,e'$ and they need not determine $T$ up to conjugacy. Notice that $d,d'$ are in different orbits under the action $\Stab_S(v) \act D_v S$ (equivalently under the action $F \act D S$), for otherwise the segment $f(e)=f(e') \subset T$ would have nontrivial stabilizer.
Folds are classified according to the properness of the inclusions $e \subset \eta$, $e' \subset \eta'$, as follows. If $e,e'$ are both proper initial segments of $\eta,\eta'$ then we say that $f$ is a \emph{partial} fold; otherwise $f$ is a \emph{full fold}.\index{fold!full} If $f$ is a full fold and exactly one of $e,e'$ is proper then we say that $f$ is a \emph{proper} full fold;\index{fold!full!proper} otherwise, when $e=\eta$ and $e'=\eta'$, we say that $f$ is an \emph{improper} full fold.\index{fold!full!improper} For later purposes we note that if $f$ is a full fold then every natural vertex of $T$ is the image of a natural vertex of $S$; and even when $f$ is a partial fold, every natural vertex of $T$ which is not in the orbit of the image of the terminal endpoints of the folded edges $e,e'$ is the image of a natural vertex of $S$.
In the terminology of \cite{BestvinaFeighn:bounding}, folds between free splittings can also be classified into two types as follows. If the opposite vertices $w,w'$ of $e,e'$ are in different $F$-orbits one gets a type IA fold;\index{fold!type IA} in this case the stabilizer of the vertex $W=f(w)=f(w')$ is the subgroup generated by the stabilizers of $w,w'$, which (if nontrivial) is a free factor whose rank is the sum of the ranks of the stabilizers of $w$ and $w'$. If $w,w'$ are in the same $F$-orbit then one gets a type IIIA fold,\index{fold!type IIIA} and the stabilizer of the vertex $W$ is the subgroup generated by the stabilizer of $w$ and an element $\gamma \in F$ such that $\gamma(w)=w'$, which is a free factor whose rank is $1$ plus the rank of the stabilizer of $w$. Notice that a type IIIA fold is only possible if $f$ is a partial fold or an improper full fold, because a natural and an unnatural vertex can never be in the same orbit. We refer to \cite{BestvinaFeighn:bounding} for an understanding of the map on quotient graphs of groups $S/F \to T/F$ which is induced by a fold $f \colon S \to T$.
The following lemma and its proof are well known in the narrower context of the first barycentric subdivision of the spine of outer space.
\begin{lemma}
\label{LemmaFoldDistance}
For any fold $f \colon S \to T$, the distance in $\FS'(F)$ from $S$ to $T$ equals $1$ or~$2$.
\end{lemma}
\begin{proof} Let $f$ fold oriented segments $e,e'$ with common initial endpoint $v$ and opposite endpoints $w,w'$. After possibly subdividing $S$ and $T$ so that $e,e'$ each contain $\ge 2$ edgelets, the map $f$ can be factored into two maps as $S \xrightarrow{g} U \xrightarrow{h} T$, where $g$ folds the initial edgelets $e_0 \subset e$, $e'_0 \subset e'$, and $h$ folds the $g$-images of the terminal segments $e_1 = e \setminus e_0$, $e'_1 = e' \setminus e'_0$. Letting $\hat e = g(e_0)=g(e'_0) \subset U$ and $\sigma_0 = F \cdot \hat e \subset U$, after resubdividing $S$ there is an expansion $S \expands U$ defined by a collapse map $U \xrightarrow{[\sigma_0]} S$. Also, letting $\sigma_1 = F \cdot (g(e_1) \union g(e'_1)) \subset U$, after resubdividing $T$ there is a collapse $U \collapsesto T$ defined by a collapse map $U \xrightarrow{[\sigma_1]} T$. It follows that $d(S,T) \le 2$ in $\FS'(F)$.
It remains to show that $d(S,T) \ne 0$, that is, $S,T$ are not conjugate free splittings. Since each fold map is foldable, the natural vertex $v$ has~$\ge 3$ gates with respect to $f$. It therefore has $\ge 3$ gates with respect to $g$, and so $g(v) \in U$ is natural. It follows that $\hat e$ is a natural edge of $U$, having one endpoint at $g(v)$ and opposite endpoint of valence~$3$ in $U$. The subgraph $\sigma_0 \subset U$ is therefore natural, and it follows from Lemma~\ref{LemmaFSSimplices} that $S$ is not conjugate to $U$. The free splittings $U,T$ may or may not be conjugate, depending on whether at least one of $g(e_1), g(e'_1) \subset U$ is a natural edge. If neither of $g(e_1)$, $g(e'_1)$ is natural then $T$ is conjugate to $U$, and so $T$ is not conjugate to $S$. If one or both of $g(e_1)$, $g(e'_1)$ is natural then (after resubdividing $T$) the collapse $U \collapsesto T$ may also be defined by collapsing the natural subgraph $\hat\sigma_1 \subset U$ which is the union of the $F$-orbits of whichever of $g(e_1)$, $g(e'_1)$ is natural; but $\sigma_0 \ne \hat\sigma_1$ and so by Lemma~\ref{LemmaFSSimplices} we conclude that $S,T$ are not conjugate.
\end{proof}
\subsection{Fold sequences and fold paths}
Consider free splittings $F \act S,T,U$ and a sequence of maps of the form $S \xrightarrow{h} U \xrightarrow{g} T$. Letting $f = g \composed h \colon S \to T$, we say that $h$ is a \emph{maximal fold factor of $f$} if the following hold: $h$ is a fold map that folds oriented initial segments $e,e' \subset S$ of oriented natural edges $\eta,\eta' \subset S$, respectively, and $e,e'$ are the maximal initial subsegments of $\eta,\eta'$ such that in $T$ we have $f(e)=f(e')$. Recall from the definition of a fold that $e,e'$ are edgelet paths with the same number of edgelets.
\subparagraph{Fold sequences.} Consider a sequence of free splittings and maps of the form $S_0 \xrightarrow{f_1} S_1 \xrightarrow{f_2}\cdots\xrightarrow{f_K} S_K$, $K \ge 0$. In this context we will always denote
$$f^i_j = f_j \composed \cdots \composed f_{i+1} \colon S_i \to S_j, \quad\text{for}\quad 0 \le i < j \le K.
$$
We say that this is a \emph{fold sequence}\index{fold sequence} if the following holds:
\begin{enumerate}
\item \label{ItemOuterFoldable}
$f^0_K \colon S_0 \to S_K$ is a foldable map.
\item \label{ItemConjOrMaxFold}
Each map $f_{i+1} \colon S_{i} \to S_{i+1}$ is a maximal fold factor of the map $f^i_K \colon S_{i} \to S_K$, for $0 \le i < K$.
\end{enumerate}
It follows from \pref{ItemOuterFoldable} and \pref{ItemConjOrMaxFold} that
\begin{enumeratecontinue}
\item \label{ItemEachFoldable}
$f^i_j \colon S_i \to S_j$ is a foldable map for each $0 \le i < j \le K$.
\end{enumeratecontinue}
To prove \pref{ItemEachFoldable}, starting from the base assumption \pref{ItemOuterFoldable}, and assuming by induction that $f^{i-1}_K = f^i_K \composed f_i$ is foldable, we prove that $f^i_K$ is foldable. By \pref{ItemConjOrMaxFold} the map $f_i$ is a maximal fold factor of $f^{i-1}_K$. The map $f^{i-1}_K$ is injective on each edgelet of $S_{i-1}$, and each edgelet of $S_i$ is the $f_i$ image of some edgelet of $S_{i-1}$, so $f^i_K$ is injective on each edgelet.

Consider a vertex $v \in S_i$ and a vertex $u \in S_{i-1}$ for which $f_i(u)=v$. The number of $f^i_K$-gates at $v$ is greater than or equal to the number of $f^{i-1}_K$-gates at $u$, which is $\ge 2$, and furthermore if $u$ is natural then this number is $\ge 3$. This covers all cases except for when $v$ is natural and each such $u$ has valence~$2$. Since $f_i$ is a maximal fold factor of $f^{i-1}_K$, this is only possible if $f_i$ is a partial fold that folds segments $e,e' \subset S_{i-1}$ such that if $w,w'$ denote the terminal endpoints of $e,e'$ then $v=f_i(w)=f_i(w')$. If $f_i$ is a type IA fold, that is if $w,w'$ are in different orbits, then $v$ has valence~$3$, and by maximality of the fold $f_i$ it follows that the three directions at $v$ are all in different gates with respect to $f^i_K$. If $f_i$ is a type IIIA fold, that is if $w,w'$ are in the same orbit, say $\gamma \cdot w = w'$ for a nontrivial $\gamma \in F$, then $\Stab_{S_i}(v)$ contains $\gamma$ and so is nontrivial, and hence $v$ has infinitely many gates with respect to~$f^i_K$. This proves by induction that each $f^i_K$ is foldable.

Next, to prove that $f^i_j$ is foldable, given a vertex $v \in S_i$ we simply note that the decomposition of $D_v S_i$ into $f^i_j$-gates is a refinement of the decomposition into $f^i_K$-gates, of which there are $\ge 2$, and $\ge 3$ if $v$ is natural. This completes the proof that \pref{ItemOuterFoldable} and \pref{ItemConjOrMaxFold} imply~\pref{ItemEachFoldable}.
In this proof we have shown the following fact which will be useful in Lemma~\ref{LemmaFoldSequenceConstruction} below when we construct fold sequences:
\begin{lemma}\label{LemmaMaxFactor} For any foldable map $S \xrightarrow{f} T$ and any factorization of $f$ into two maps of the form $S \xrightarrow{k} U \xrightarrow{g} T$, if $k$ is a maximal fold factor of~$f$ then the map $g \colon U \to T$ is also foldable.
\qed\end{lemma}
The implication of this lemma is \emph{false} if one allows $k$ to be a partial fold which is not a maximal fold factor of $f$, for in that case the map $g \colon U \to T$ will have only 2~gates at the valence~$3$ vertex which is the $k$-image of the terminal endpoints of oriented segments $e,e'$ that are folded by $k$.
\subparagraph{Fold paths.} A \emph{fold path}\index{fold path} in $\FS'(F)$ is any sequence of vertices represented by free splittings $F \act S_0,S_1,\ldots,S_K$ for which there exists a fold sequence $S_0 \mapsto S_1 \mapsto\cdots\mapsto S_K$; we also say that this fold path has \emph{$K$ steps}.
Strictly speaking a fold path need not be the sequence of vertices along an actual edge path in the simplicial complex $\FS'(F)$, because the size of the step from $S_{i-1}$ to $S_i$ is either $1$ or~$2$; see Lemma~\ref{LemmaFoldDistance}. If one so desires one can easily interpolate the gap between $S_{i-1}$ and $S_i$ by an edge path of length~$1$ or $2$, to get an actual edge path from $S_0$ to $S_K$.
We define two fold sequences to be \emph{equivalent}\index{fold sequence!equivalence} if they have the same length and there is a commutative diagram of the form
$$\xymatrix{
S_0 \ar[r] \ar[d] & S_1 \ar[r] \ar[d] & \cdots \ar[r] & S_{K-1} \ar[r] \ar[d] & S_K \ar[d] \\
S'_0 \ar[r] & S'_1 \ar[r] & \cdots \ar[r] & S'_{K-1} \ar[r] & S'_K
}$$
where the top and bottom rows are the two given fold sequences and each vertical arrow is a conjugacy. Note that the vertical arrows are \emph{not} required to be ``maps'' as we have defined them, in that they need not be simplicial. For example, if the bottom row is obtained by taking the $400^{\text{th}}$ barycentric subdivision of each 1-simplex in the top row then the two fold sequences are equivalent.
Equivalent fold sequences determine the same fold path, but the converse is false. A counterexample consisting of a $1$-step fold path is given at the end of this section.
\subparagraph{Construction of fold factorizations.} Having constructed many foldable maps in Lemma~\ref{LemmaFoldableExistence}, to construct many fold paths it suffices to factor each foldable map as a fold sequence.
Given free splittings $F \act S,T$ and a foldable map $S \xrightarrow{f} T$, a \emph{fold factorization}\index{fold factorization} of $f$ is any fold sequence $S_0 \mapsto S_1 \mapsto\cdots\mapsto S_K$ which factors $f$ as shown in the following commutative diagram:
$$\xymatrix{
S \ar@{=}[r] \ar@/^2pc/[rrrrr]^{f} & S_0 \ar[r]^{f_1} & S_1 \ar[r]^{f_2} & \cdots \ar[r]^{f_K} & S_K \ar@{=}[r] & T
}$$
A fold factorization of any foldable map can be constructed by an inductive process described in \cite{BestvinaFeighn:bounding}, with considerable simplification arising from the fact that all edgelet stabilizers are trivial in $T$. We give this simplified argument here.
\begin{lemma}
\label{LemmaFoldSequenceConstruction}
For any free splittings $F \act S,T$, every foldable map $f \colon S \to T$ has a fold factorization.
\end{lemma}
\begin{proof} If $f$ is a simplicial isomorphism then we are done, with a fold factorization of length~$K=0$. Otherwise, we use the following obvious but key fact:
\begin{description}
\item[Local to global principle:] Any simplicial map between trees which is locally injective is globally injective. If furthermore it is surjective then it is a simplicial isomorphism.
\end{description}
\noindent
For the inductive step we show that every foldable map $S \xrightarrow{f} T$ which is not a homeomorphism factors into maps as $S \xrightarrow{k} U \xrightarrow{g} T$ where $k$ is a maximal fold factor of~$f$. By the \emph{Local to global principle}, plus the fact that $F \act T$ is minimal, it follows that $f$ is surjective; since $f$ is not a homeomorphism, it is therefore not locally injective. We may thus find a vertex $v \in S$ and two directions $d,d' \in D_v S$ such that $d_v f(d)=d_v f(d')$. Let $\eta,\eta'$ be the oriented natural edges with initial vertex~$v$ and initial directions $d,d'$. Let $e \subset \eta$, $e' \subset \eta'$ be the maximal initial segments such that $f(e)=f(e')$. Noting that $e,e'$ are subcomplexes with the same number of edgelets, let $h \colon e \to e'$ be the unique orientation preserving simplicial homeomorphism. Define $k \colon S \to U$ to be the quotient map obtained by equivariantly identifying $e$ and $e'$, and let $g \colon U \to T$ be the induced map. As indicated in \cite{BestvinaFeighn:bounding}, $U$ is a tree and the induced action $F \act U$ is minimal. The map $k$ is simplicial by construction, from which it follows that $g$ is simplicial as well. The stabilizer of each edgelet of $U$ is trivial because it is contained in the stabilizer of its image in $T$ under $g$ which is trivial, and so $F \act U$ is a free splitting. By construction the map $k \colon S \to U$ is a maximal fold factor of the foldable map $f$.
To support the inductive step we must prove that $U$ has fewer edgelet orbits than~$S$, which follows from the fact that the initial edgelets of $e$ and $e'$ are in different orbits of the action $F \act S$, because they have the same image edgelet in $T$ and its stabilizer is trivial.
The fold factorization of $f=f^0_T \colon S=S_0 \to T$ may now be constructed as follows. Assuming $f^0_T$ is not locally injective, factor $f^0_T$ into maps as $S_0 \xrightarrow{f_1} S_1 \xrightarrow{f^1_T} T$ where $f_1$ is a maximal fold factor of $f^0_T$. The induced map $f^1_T$ is foldable by Lemma~\ref{LemmaMaxFactor}, and the number of edgelet orbits of $S_1$ is smaller than the number of edgelet orbits of $S_0$. The process therefore continues by induction on the number of edgelet orbits, stopping at $S=S_0 \xrightarrow{f_1} S_1 \xrightarrow{f_2}\cdots\xrightarrow{f_K} S_K \xrightarrow{f^K_T} T$ when $f^K_T$ is locally injective and therefore a simplicial conjugacy, and we identify $S_K=T$.
\end{proof}
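We note a bound that is implicit in this induction, though not needed in what follows: since each fold in the factorization strictly decreases the number of edgelet orbits, the length of any fold factorization produced by this process satisfies
$$K \ \le\ \#\{\text{edgelet orbits of $S$}\} - \#\{\text{edgelet orbits of $T$}\}.$$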
\paragraph{Remark.} The \emph{Local to global principle} may be used to construct fold factorizations with various special properties. In particular, if $\beta \subset S$ is a subtree on which $f$ is not locally injective then we may choose the folded edges $\eta,\eta'$ to lie in $\beta$. This is used in the proof of Lemma~\ref{LemmaBRNatural}.
\paragraph{Counterexample: inequivalent folds.} We describe two inequivalent folds $\ti f, \ti f' \colon S_0 \to S_1$ that determine the same $1$~step fold path $S_0,S_1$ in $\FS'(F)$. Both of the actions $F \act S_0,S_1$ are properly discontinuous. We first describe the quotient marked graphs $G_0 = S_0/F$, $G_1=S_1/F$ and the induced homotopy equivalences $f, f' \colon G_0 \to G_1$. The marked graph $G_0$ has a valence~$4$ vertex $v$ with the following incident directions: directed natural edges $a,b$ with initial vertex $v$, and a directed natural edge $c$ with initial and terminal vertex $v$; subject to this description, $G_0$ is then filled out to be a marked graph in an arbitrary manner. The marked graph $G_1$ is defined to have the same underlying unmarked graph as $G_0$. The homotopy equivalences $f,f' \colon G_0 \to G_1$ are defined so that $f(a)=ca$, $f'(b) = c^\inv b$, and $f,f'$ are the identity elsewhere. Clearly $f,f'$ are homotopic, by a homotopy which spins the $c$ loop once around itself and is stationary on $G_0 \setminus (a \union b \union c)$. The marking on $G_1$ is defined by pushing forward the marking on $G_0$ via either of $f,f'$, and so each of $f,f'$ preserves marking.

Consider the universal covering maps $S_i \mapsto G_i$, $i=0,1$. We may choose $F$-equivariant lifts $\ti f, \ti f' \colon S_0 \to S_1$ which are the two fold maps at issue. If they were equivalent then, since any self-conjugacy of $S_0$ or of $S_1$ fixes each vertex and each oriented natural edge (see the \emph{Remark} at the end of Section~\ref{SectionFreeSplittingComplex}), each direction in $D S_0$ would have the same image in $D S_1$ under $d \ti f$ and $d \ti f'$. However, fixing a lift $\ti v$ and lifts $\ti a, \ti b, \ti c$ of $a,b,c$ with initial vertex $\ti v$ and a lift $\ti c'$ of $c$ with terminal vertex $\ti v$, we have $d \ti f(\ti a) = d \ti f(\ti c)$ but $d \ti f'(\ti a) \ne d \ti f'(\ti c)$.
\section{The Masur-Minsky axioms}
\label{SectionMasurMinsky}
Our proof that $\FS(F)$ is hyperbolic uses the axioms introduced by Masur and Minsky \cite{MasurMinsky:complex1} for their proof that the curve complex of a finite type surface is hyperbolic. The axioms require existence of a family of paths which satisfy a strong projection property. For this purpose we shall use fold paths: Proposition~\ref{PropFoldContractions} stated at the end of this section says, roughly speaking, that fold paths in $\FS'(F)$ satisfy the Masur-Minsky axioms.
First we give an intuitive explanation of the content of Proposition~\ref{PropFoldContractions} by giving an outline of the Masur-Minsky axioms as they would apply to fold paths. The axioms require that a map be defined which is a kind of projection from $\FS'(F)$ to each fold path $S_0, S_1, \ldots, S_K$. To make things work the range of the projection is taken to be the parameter interval $[0,K]$ of the fold path, giving the projection map the form $\pi \colon \FS'(F) \to [0,K]$. When one projects two vertices of $\FS'(F)$ to two parameters $l \le k \in [0,K]$, one is interested in the ``diameter (of the subpath) between these two parameters'', which means the diameter of the set $\{S_l, S_{l+1}, \ldots, S_k\}$ in $\FS'(F)$. There are three axioms. The \emph{Coarse Retraction} bounds the diameter between each $k \in [0,K]$ and its projection~$\pi(S_k) \in [0,K]$. The \emph{Coarse Lipschitz} axiom bounds the diameter between the projections $\pi(T),\pi(T') \in [0,K]$ of two nearby vertices $T,T' \in \FS'(F)$. The \emph{Strong Contraction} axiom says roughly that, for each metric ball in $\FS'(F)$ that stays a bounded distance away from the fold path, if one takes the sub-ball having a certain proportion of the total radius, the diameter between the projections of any two vertices in the subball is bounded. All the bounds occurring in this discussion must be uniform, depending only on the rank of~$F$.
In fact rather than using fold paths themselves, we use fold sequences. As we have seen in the counterexample at the end of Section~\ref{SectionFoldPaths}, the same fold path $S_0,\ldots,S_K$ can be represented by inequivalent fold sequences, and the projection maps $\FS'(F) \to [0,K]$ of these two fold sequences may be different. This kind of situation is handled formally by expressing the Masur-Minsky axioms in terms of ``families'' of paths which allow a path to occur repeatedly in the family.
\bigskip
Given integers $i, j$ we adopt interval notation $[i,j]$ for the set of all integers between $i$ and $j$ inclusive, regardless of the order of $i,j$.
Consider a connected simplicial complex $X$ with the simplicial metric. A \emph{path} in $X$ is just a finite sequence of $0$-simplices $\gamma(0),\gamma(1),\ldots,\gamma(K)$, which we write in function notation as $\gamma \colon [0,K] \to X$. A \emph{family of paths} in $X$ is an indexed collection $\{\gamma_i\}_{i \in \I}$ of paths in $X$; we allow repetition in the family. A family of paths in $X$ is said to be \emph{almost transitive} if there exists a constant $A$ such that for any $0$-simplices $v,w$ there is a path $\gamma \colon [0,K] \to X$ in the family such that all of the distances $d(v,\gamma(0))$, $d(\gamma(0),\gamma(1))$, \ldots, $d(\gamma(K-1),\gamma(K))$, $d(\gamma(K),w)$ are $\le A$.
Given a path $\gamma \colon [0,K] \to X$ and a function $\pi \colon X \to [0,K]$, called the ``projection map'' to the path $\gamma$, and given constants $a,b,c > 0$, consider the following three axioms:
\begin{description}
\item[Coarse retraction:] For all $k \in [0,K]$ the set $\gamma[k,\pi(\gamma(k))]$ has diameter $\le c$.
\item[Coarse Lipschitz:] For all vertices $v,w \in X$, if $d(v,w) \le 1$ then the set $\gamma[\pi(v),\pi(w)]$ has diameter~$\le c$.
\item[Strong contraction:] For all vertices $v,w \in X$, if $d(v,\gamma[0,K]) \ge a$, and if $d(w,v) \le b \cdot d(v,\gamma[0,K])$, then the set $\gamma[\pi(v),\pi(w)]$ has diameter $\le c$.
\end{description}
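As a toy illustration of the axioms, not needed in the sequel, suppose that $X$ is a simplicial tree, $\gamma \colon [0,K] \to X$ is a geodesic edge path, and $\pi(v)$ is the parameter of the point of $\gamma[0,K]$ nearest to $v$, which is well-defined because $\gamma[0,K]$ is a convex subtree. If $\pi(v) \ne \pi(w)$ then the geodesic from $v$ to $w$ passes through both nearest points, so
$$d(v,w) = d\bigl(v,\gamma(\pi(v))\bigr) + d\bigl(\gamma(\pi(v)),\gamma(\pi(w))\bigr) + d\bigl(\gamma(\pi(w)),w\bigr) \ge d(v,\gamma[0,K]) + 1.$$
In particular, if $d(w,v) \le b \cdot d(v,\gamma[0,K])$ with $b<1$ then $\pi(w)=\pi(v)$, and one checks easily that all three axioms hold, say with $c=1$, any $a>0$, and any $0<b<1$.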
\begin{theorem}[\cite{MasurMinsky:complex1}, Theorem 2.3]
Given a connected simplicial complex $X$, if there exists an almost transitive family of paths $\{\gamma_i\}_{i \in \I}$ in $X$ and for each $i \in \I$ a projection map $\pi_i \colon X \to [0,K_i]$ to the path $\gamma_i \colon [0,K_i] \to X$ such that the \emph{Coarse retraction}, the \emph{Coarse Lipschitz}, and the \emph{Strong contraction} axioms all hold with uniform constants $a,b,c>0$ for all $i \in \I$, then $X$ is hyperbolic.
\end{theorem}
\subparagraph{Remarks.} Our notion of ``almost transitivity'' is not quite the same as ``coarse transitivity'' used in \cite{MasurMinsky:complex1}, which requires that the paths in the set be continuous and that there is a constant $D$ such that any two points at distance $\ge D$ are connected by a path in the set. However, the proof of equivalence of the two forms of the axioms, one with ``almost transitive'' and the other with ``coarse transitive'', is very simple, and is left to the reader. The set of fold paths in $\FS'(F)$ is almost transitive with constant $A = 2$: for any free splittings $S,T$, by moving $S$ a distance~$\le 2$ one obtains a foldable map to $T$ (Lemma~\ref{LemmaFoldableExistence}), which has a fold factorization (Section~\ref{SectionFoldFactorizations}), and consecutive free splittings in such a factorization have distance~$\le 2$ (Lemma~\ref{LemmaFoldDistance}).
The concept of a ``family of paths'' is left undefined in \cite{MasurMinsky:complex1} but the proof of the above theorem and the application to curve complexes given in \cite{MasurMinsky:complex1} clearly indicate that an indexed family with repetition is allowed. On top of that, given any indexed family satisfying the hypothesis of the theorem, if we removed repetition by kicking out all but one copy of each path then the resulting family would still satisfy the hypotheses of the theorem. In our situation, although we use fold paths in our application of the above theorem, we shall index them by (equivalence classes of) fold sequences; thus, we allow for the possibility that two inequivalent fold sequences representing the same fold path might have somewhat different projection maps.
\bigskip
Notice that the \emph{Strong contraction} axiom, unlike the \emph{Coarse Lipschitz} axiom, is not symmetric in the variables $v,w$. For our proof we shall need to extend the applicability of the \emph{Strong contraction} axiom by further desymmetrizing it:
\begin{description}
\item[Desymmetrized strong contraction:] For all vertices $v,w \in X$, if $\pi(w) \le \pi(v)$ in the interval $[0,K]$, if $d(v,\gamma[0,K]) \ge a$, and if $d(w,v) \le b \cdot d(v,\gamma[0,K])$, then the set $\gamma[\pi(v),\pi(w)]$ has diameter $\le c$.
\end{description}
\begin{lemma}\label{LemmaDesymmetrization}
For all constants $a,b,c > 0$ there exist constants $A,B > 0$ such that the \emph{desymmetrized strong contraction} axiom with constants $a$, $b$, and $c$ implies the \emph{strong contraction} axiom with constants $A$, $B$, and $C=c$.
\end{lemma}
\begin{proof} Set $A=4a$ and $B=\min\{1/4,3b/4\}$. We need only show that if $\pi(w) > \pi(v)$ in $[0,K]$, if $d(v,\gamma[0,K]) \ge A$ and if $d(w,v) \le B \cdot d(v,\gamma[0,K])$, then $d(w,\gamma[0,K]) \ge a$ and $d(v,w) \le b \cdot d(w,\gamma[0,K])$. We have
\begin{align*}
d(w,\gamma[0,K]) &\ge d(v,\gamma[0,K]) - d(w,v) \\
&\ge d(v,\gamma[0,K]) - \frac{1}{4} \cdot d(v,\gamma[0,K]) \\
&\ge \frac{3}{4} \cdot d(v,\gamma[0,K]) \ge 3a \ge a \\
\intertext{and}
d(v,w) &\le \frac{3}{4} \cdot b \cdot d(v,\gamma[0,K]) \\
&\le \frac{3}{4} \cdot b \cdot \frac{4}{3} d(w,\gamma[0,K]) = b \cdot d(w,\gamma[0,K])
\end{align*}
\end{proof}
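As a sanity check on the constants (an illustration only, not part of the proof), take $a=1$ and $b=1/2$: then $A = 4$, $B = \min\{1/4, 3/8\} = 1/4$, and the two displayed estimates become
$$d(w,\gamma[0,K]) \ge \frac{3}{4} \cdot d(v,\gamma[0,K]) \ge 3 \quad\text{and}\quad d(v,w) \le \frac{3}{8} \cdot d(v,\gamma[0,K]) \le \frac{1}{2} \cdot d(w,\gamma[0,K])
$$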
We now define the path family $\{\gamma_i\}_{i \in \I}$ in $\FS'(F)$ that we use to prove the Main Theorem. To each fold sequence we have associated a fold path, which clearly depends only on the equivalence class of that fold sequence; the index set is defined to be the set of equivalence classes of fold sequences.
To prove the Main Theorem, by combining the Masur--Minsky theorem, almost transitivity of fold paths, and Lemma~\ref{LemmaDesymmetrization}, it therefore suffices to prove:
\begin{proposition}\label{PropFoldContractions}
Associated to each fold sequence $S_0 \mapsto\cdots\mapsto S_K$ in $\FS'(F)$ there is a projection map $\pi \colon \FS'(F) \to [0,K]$, depending only on the equivalence class of the fold sequence, such that the \emph{Coarse retraction}, the \emph{Coarse Lipschitz}, and the \emph{Desymmetrized strong contraction} axioms all hold, with constants $a,b,c$ depending only on $\rank(F)$.
\end{proposition}
The next step in the proof of the Main Theorem will be taken with the formulation of Proposition~\ref{PropProjToFoldPath}, where the projection maps are defined.
\subparagraph{Remark.} Theorem 2.3 of \cite{MasurMinsky:complex1} contains an additional conclusion, which in our context says that fold paths may be reparameterized to become uniform quasigeodesics in $\FS'(F)$, although the reparameterization does not fall out explicitly from their proof. Our method of proof will actually yield an explicit quasigeodesic reparameterization of fold paths, in terms of the ``free splitting units'' introduced in Section~\ref{SectionFSU}. See Proposition~\ref{PropFoldPathQuasis} for the statement and proof regarding this reparameterization.
\section{Combing}
\label{SectionCombing}
In this section we define a combing method for fold sequences. Roughly speaking, given a fold sequence $S_0 \mapsto \cdots \mapsto S_K$ and a free splitting $T'$ which differs from $S_K$ by a single edge in $\FS'(F)$, we want a construction which combs backwards to produce a fold sequence $T_0 \mapsto\cdots\mapsto T_K=T'$ in which each $T_k$ differs from the corresponding $S_k$ by at most a single edge in $\FS'(F)$. We would like to give this construction in two cases, depending on whether the oriented edge from $S_K$ to $T'$ is a collapse $S_K \collapse T'$ or an expansion $S_K \expand T'$. In the case of a collapse $S_K \collapse T'$ there is indeed a process of ``combing by collapse'' which produces a fold sequence as stated; see Proposition~\ref{PropCBC}. In the case of an expansion $S_K \expand T'$, although there is a process of ``combing by expansion'', the sequence $T_0 \mapsto\cdots\mapsto T_K=T'$ produced need not be a fold sequence; instead it belongs to a broader class of map sequences that we refer to as ``foldable sequences''; see Proposition~\ref{PropCBE}. It is an important part of our theory that both combing processes are closed on the collection of foldable sequences; combing by collapse is closed as well on the smaller collection of fold sequences.
In Section~\ref{SectionCombingRectangles} we define the collection of foldable sequences on which combing will be defined, and we define \emph{combing rectangles} which are the commutative diagrams of foldable sequences and collapse maps that are used to describe combing; see Figure~\ref{FigureCombingRectangle}. We also prove Lemma~\ref{LemmaCombingProperties} which says that combing by collapse is closed on foldable sequences.
The two main combing processes --- combing by collapse, and combing by expansion --- are described in Section~\ref{SectionCombingConstructions}. In Section~\ref{SectionCombRectOps} we will also give some methods for constructing new combing rectangles by composing or decomposing old ones.
\bigskip
Also in Section~\ref{SectionCombingRectangles}, combing rectangles will be used to define the projection map from $\FS'(F)$ to each fold path $S_0 \mapsto\cdots\mapsto S_K$, and we will state Proposition~\ref{PropProjToFoldPath} which says that these projection maps satisfy the requirements of the Masur--Minsky axioms.
Combing rectangles will be important structures for the rest of the paper. Much of the geometric intuition behind our methods involves visualizing combing rectangles and other, more complicated diagrams of free splittings and maps as objects sitting in the complex $\FS'(F)$, and visualizing various methods for geometrically manipulating these objects. The technical details of the proof of the Main Theorem will involve a calculus of combing rectangles, which is based on the constructions of combing rectangles given in Sections~\ref{SectionCombingConstructions} and~\ref{SectionCombRectOps}.
\subsection{Combing rectangles and the projection onto fold paths}
\label{SectionCombingRectangles}
\paragraph{Foldable sequences.} Consider a sequence of free splittings and maps over $F$ of the form $S_0 \xrightarrow{f_1} S_1 \xrightarrow{f_2}\cdots\xrightarrow{f_K} S_K$, and recall the notation $f^i_j = f_{i+1}\composed\cdots\composed f_j \colon S_i \to S_j$ for each $0 \le i < j \le K$. This sequence is said to be a \emph{foldable sequence}\index{foldable sequence} over $F$ if for each $i=0,\ldots,K$ the map $f^i_K \colon S_i \to S_K$ is a foldable map. It follows that each of the maps $f^i_j \colon S_i \to S_j$ is a foldable map, $0 \le i < j \le K$, because for each vertex $v \in S_i$, the $f^i_j$-gate decomposition of $D_v S_i$ is a refinement of the $f^i_K$-gate decomposition.
\paragraph{Combing rectangles.} A \emph{combing rectangle}\index{combing rectangle} over $F$ is a commutative diagram of maps over $F$ having the form depicted in Figure~\ref{FigureCombingRectangle}, such that:
\begin{enumerate}
\item The top horizontal row is a foldable sequence.
\item Each vertical arrow $\pi_i \colon S_i \to T_i$ is a collapse map with collapsed subgraph $\sigma_i \subset S_i$ indicated in the notation.
\item For all $i=1,\ldots,K$ we have $\sigma_{i-1} = f^\inv_i(\sigma_i)$. Equivalently, for all $0 \le i < j \le K$ we have $\sigma_i = (f^i_j)^\inv(\sigma_j)$.
\end{enumerate}
\begin{figure}[h]
$$\xymatrix{
S_0 \ar[r]^{f_1} \ar[d]_{[\sigma_0]}^{\pi_0}
& \cdots \ar[r]^{f_{i-1}}
& S_{i-1} \ar[d]_{[\sigma_{i-1}]}^{\pi_{i-1}} \ar[r]^{f_i}
& S_i \ar[d]_{[\sigma_i]}^{\pi_i} \ar[r]^{f_{i+1}}
& \cdots \ar[r]^{f_K}
& S_K \ar[d]_{[\sigma_K]}^{\pi_K} \\
T_0 \ar[r]^{g_1}
& \cdots \ar[r]^{g_{i-1}} & T_{i-1} \ar[r]^{g_i}
& T_i \ar[r]^{g_{i+1}}
& \cdots \ar[r]^{g_K}
& T_K
}
$$
\caption{A combing rectangle. Horizontal sequences are foldable, the top by definition and the bottom by Lemma \ref{LemmaCombingProperties}. Vertical arrows are collapses and $\sigma_{i-1}=f_i^\inv(\sigma_i)$.}
\label{FigureCombingRectangle}
\end{figure}
As mentioned earlier, combing is not closed on the set of fold sequences. We will eventually prove that combing is closed on the set of all foldable sequences; the following proves this in part, by showing closure under ``combing by collapse''.
\begin{lemma} \label{LemmaCombingProperties} For any combing rectangle notated as in Figure~\ref{FigureCombingRectangle}, the bottom row is a foldable sequence.
\end{lemma}
We put off the proof of Lemma~\ref{LemmaCombingProperties} until after the definition of the projection map.
\paragraph{Projecting onto fold paths.} Given a free splitting~$F \act T$, a fold sequence $S_0 \mapsto\cdots\mapsto S_K$, and an integer $k \in [0,K]$, a \emph{projection diagram from $T$ to $S_0 \mapsto\cdots\mapsto S_K$ of depth $k$}\index{projection diagram} is a commutative diagram of free splittings and maps over $F$ of the form depicted in Figure~\ref{FigureProjDiagram}, such that each horizontal row is a foldable sequence, and the two rectangles shown are combing rectangles.
\begin{figure}[h]
$$\xymatrix{
T_0 \ar[r] \ar[d] & \cdots \ar[r] & T_{k} \ar[r] \ar[d] & T \\
S'_0 \ar[r] & \cdots \ar[r] & S'_{k} \\
S_0 \ar[r] \ar[u] & \cdots \ar[r] & S_{k} \ar[r] \ar[u] & \cdots \ar[r] & S_K \\
}$$
\caption{A projection diagram of depth $k$ from $T$ to $S_0\mapsto\cdots\mapsto S_K$.}
\label{FigureProjDiagram}
\end{figure}
The \emph{projection} $\pi(T) \in [0,\ldots,K]$ of $T$ to $S_0\mapsto\cdots\mapsto S_K$ is defined to be the maximum depth of any projection diagram from a free splitting conjugate to $T$ to a fold sequence equivalent to $S_0 \mapsto\cdots\mapsto S_K$, if such a diagram exists, and $\pi(T)=0$ otherwise. It is clear that this gives a well-defined function $\pi \colon \FS'(F) \to [0,\ldots,K]$ that depends only on the equivalence class of the fold sequence $S_0 \mapsto\cdots\mapsto S_K$.
One way to understand this definition is to think of $\FS'(F)$ as being Gromov hyperbolic and to think of fold paths as being quasigeodesic, all of which are true a posteriori assuming that Proposition~\ref{PropProjToFoldPath} is true. That being so, given a fold path $S_0 \mapsto\cdots\mapsto S_K$ and $T$ projecting to $\pi(T) \in [0,\ldots,K]$, by moving to some point $S'_0$ near $S_0$ we should obtain a fold path from $S'_0$ to $T$ having an initial segment that fellow travels the given fold path from $S_0$ to $S_{\pi(T)}$ and no farther. In defining the projection map as above, the intuition is that combing rectangles provide an adequate definition of fellow traveling. The technical details of the definition were crafted to fit what would work in our proofs, but also received some original motivation from results of \cite{MasurMinsky:complex1} which amount to a proof that for any finite type oriented surface $S$, splitting sequences of train tracks on $S$ define quasigeodesics in the curve complex of~$S$. In particular, Lemma~4.4 of that paper --- which can be regarded as a verification of the \emph{Coarse Lipschitz} axiom --- contains the statement ``$\beta \in PE(\sigma)$'', and if one works out the train track diagram for that statement one obtains a rather strong analogue of our projection diagram above.
The rest of the paper is devoted to the proof of the following, which immediately implies Proposition~\ref{PropFoldContractions} and therefore implies the Main Theorem:
\begin{proposition}\label{PropProjToFoldPath}
There exist $a,b,c>0$ depending only on $\rank(F)$ such that for any fold sequence $S_0 \mapsto\cdots\mapsto S_K$ in $\FS'(F)$, the projection map $\pi \colon \FS'(F) \to [0,\ldots,K]$ defined above satisfies the \emph{Coarse retraction}, \emph{Coarse Lipschitz}, and \emph{Desymmetrized strong contraction} axioms with constants $a,b,c$.
\end{proposition}
The \emph{Coarse retraction} axiom is proved in Proposition~\ref{PropCoarseRetract} and the other two axioms are proved in Section~\ref{SectionMainProof}.
\bigskip
We now turn to:
\begin{proof}[Proof of Lemma \ref{LemmaCombingProperties}.]
Following the notation of Figure~\ref{FigureCombingRectangle}, we must show that each map $g^i_K \colon T_i \to T_K$ is foldable. First note that $g^i_K$ is injective on each edgelet $e \subset T_i$, because $e = \pi_i(\ti e)$ for some edgelet $\ti e \subset S_i \setminus \sigma_i$, so $f^i_K(\ti e) \subset S_K \setminus \sigma_K$, so $\pi_K(f^i_K(\ti e)) = g^i_K(\pi_i(\ti e)) = g^i_K(e)$ is an edgelet of $T_K$.
Given a vertex $w \in T_i$, we must show that $g^i_K$ has $\ge 2$ gates at $w$, and that if $w$ is natural then $g^i_K$ has $\ge 3$ gates. Let $w' = g^i_K(w) \in T_K$. We have a subgraph $W' = \pi_K^\inv(w') \subset S_K$, and a subgraph $W = \pi_i^\inv(w) \subset S_i$ such that $f^i_K(W) \subset W'$. Note that each direction in $D_W S_i$ is based at a frontier vertex of $W$ and is represented by an edgelet of $S_i \setminus \sigma_i$, and similarly for $D_{W'} S_K$, and so these direction sets are in the domain of definition of the derivative maps $d \pi_i$, $d \pi_K$, respectively. We have a commutative diagram of derivatives
$$\xymatrix{
D_W S_i \ar[r]^{d f^i_K} \ar[d]_{d\pi_i} & D_{W'} S_K \ar[d]^{d\pi_K} \\
D_w T_i \ar[r]_{d_w g^i_K} & D_{w'} T_K
}$$
in which the vertical maps are bijections and so $d \pi_i$ induces a bijection between gates of $d_w g^i_K$ and point pre-images of the map in the top row. The valence of $w$ therefore equals the cardinality of the set $D_W S_i$, and the number of gates of $g^i_K$ at $w$ equals the cardinality of the image of the map in the top row. If $w$ has valence $\ge 2$ (resp.\ $\ge 3$) then we must prove that the image of the map in the top row has cardinality $\ge 2$ (resp.~$\ge 3$).
Suppose that $w$ is a valence~$2$ vertex contained in the interior of a natural edge $\eta \subset T_i$. The subgraph $W$ is either a point or a segment contained in the interior of a natural edge $\ti\eta \subset S_i$ such that $\pi_i(\ti\eta)=\eta$. Let $e_1,e_2 \subset \eta$ be the two oriented edgelets incident to $w$, representing the two directions of the set $D_w T_i$. Let $\ti e_1, \ti e_2 \subset \ti\eta \setminus W$ be the two oriented edgelets incident to the endpoints of $W$ representing the two elements of the set $D_W S_i$, indexed so that $\pi_i(\ti e_j)=e_j$, $j=1,2$. Since $f^i_K$ is injective on $\ti\eta$ it follows that $f^i_K(\ti e_1)$, $f^i_K(\ti e_2)$ are distinct edgelets of~$S_K$. Noting that $g^i_K(e_j) = g^i_K(\pi_i(\ti e_j)) = \pi_K(f^i_K(\ti e_j))$ for $j=1,2$, it follows that these are two distinct edgelets of $T_K$, and so $g^i_K$ has 2 gates at~$w$.
Suppose now that $w$ is a vertex of valence~$\ge 3$, so the set $D_W S_i$ has cardinality~$\ge 3$. If $W$ is a point then it has valence~$\ge 3$ and, since $f^i_K$ is foldable, there are $\ge 3$ gates of $f^i_K$ in $D_W S_i$; it follows that there are $\ge 3$ gates of $g^i_K$ in $D_w T_i$. If $W$ has infinite diameter then $D_W S_i$ is infinite and so $w$ has infinite valence, implying that $g^i_K$ has infinitely many gates at $w$. If $W$ does not contain a natural vertex of $S_i$ then it is a segment in the interior of a natural edge of $S_i$ implying that $w$ has valence~$2$, a contradiction.
We have reduced to the case that the graph $W$ has finite diameter, is not a point, and contains a natural vertex of $S_i$. The graph $f^i_K(W)$ also has finite diameter and is not a point, and so has $P \ge 2$ vertices of valence~$1$ (the cardinality $P$ may be countably infinite). Let $X \subset W$ be a set consisting of one vertex of $W$ in the preimage of each valence~$1$ vertex of $f^i_K(W)$. By the First Derivative Test, each $x \in X$ is a frontier vertex of $W$. Choosing a direction $\delta_x \in D_W S_i$ based at each $x \in X$, it follows that the directions $d f^i_K(\delta_x)$ are based at $P$ distinct points of $S_K$ and are therefore $P$ distinct directions in the set $D_{W'} S_K$. If $P \ge 3$ then we are done.
We have reduced further to the subcase that $P=2$, and so $f^i_K(W)$ is a segment with endpoints $u_1,u_2$. We have $X = \{x_1,x_2\}$ with $f^i_K(x_j)=u_j$. Consider a natural vertex $v \in S_i$ such that $v \in W$, and its image $v' = f^i_K(v) \in f^i_K(W)$. Since $f^i_K$ is foldable, there are $\ge 3$ gates at $v$ with respect to $f^i_K$. If $v' = u_j$ then at least one of the gates at $v$ maps to a direction at $u_j$ which is distinct from the direction $d f^i_K (\delta_{x_j})$ and from the unique direction of the segment $f^i_K(W)$ at $u_j$, and so we have found a third direction in the set $D_{W'} S_K$. If $v'$ is an interior point of the segment $f^i_K(W)$ then at least one of the gates at $v$ maps to a direction at $v'$ distinct from the two directions of the segment $f^i_K(W)$ at $v'$ and again we have found a third direction in $D_{W'} S_K$.
\end{proof}
\subsection{Combing by collapse and combing by expansion}
\label{SectionCombingConstructions}
In approaching the proof of Proposition~\ref{PropProjToFoldPath}, one immediately confronts the need for constructions of combing rectangles, in order to construct the projection diagrams needed to compute projection maps. This section and the next contain the constructions of combing rectangles that we use for this purpose.
Our first construction of combing rectangles shows how to comb a foldable sequence followed by a collapse map.
\begin{proposition}[Combing by collapse]\label{PropCBC}
For each foldable sequence $S_0 \xrightarrow{f_1} S_1 \xrightarrow{f_2} \cdots \xrightarrow{f_K} S_K$, and for each collapse $S_K \xrightarrow{[\sigma_K]} T'$, there exists a combing rectangle of the form shown in Figure~\ref{FigureCombingRectangle} such that $T_K=T'$.
\end{proposition}
\begin{proof} Define an equivariant subgraph $\sigma_i \subset S_i$ using the definition of a combing rectangle: starting with $\sigma_K \subset S_K$, by induction define $\sigma_i = f^\inv_{i+1}(\sigma_{i+1})$. Since $\sigma_K \subset S_K$ is a proper equivariant subgraph it follows by induction that each $\sigma_i \subset S_i$ is a proper equivariant subgraph, and so free splittings $F \act T_i$ with collapse maps $S_i \xrightarrow{[\sigma_i]} T_i$ and induced maps $g_{i} \colon T_{i-1} \to T_i$ are all defined, and the squares are all evidently commutative.
\end{proof}
We remark that the cheapness of the above proof is slightly offset by the modest expense of proving that the $T_i$ sequence is foldable, which was done back in Lemma~\ref{LemmaCombingProperties}.
\bigskip
Next we explain how to comb a foldable sequence followed by an expansion. In sharp contrast to the case of combing by collapse, both the construction of the combing rectangle and the proof that the resulting map sequence is foldable are very intricate in the case of combing by expansion.
\begin{proposition}[Combing by expansion]\label{PropCBE}
For each foldable sequence $S_0 \xrightarrow{f_1} S_1 \xrightarrow{f_2} \cdots \xrightarrow{f_K} S_K$, each expansion $S_K \expand T'$, and each collapse map $\pi' \colon T' \to S_K$, there exists a combing rectangle of the form
$$\xymatrix{
S_0 \ar[r]^{f_1}
& \cdots \ar[r]^{f_{i-1}}
& S_{i-1} \ar[r]^{f_i}
& S_i \ar[r]^{f_{i+1}}
& \cdots \ar[r]^{f_K}
& S_K \\
T_0 \ar[r]^{g_1} \ar[u]^{[\sigma_0]}_{\pi_0}
& \cdots \ar[r]^{g_{i-1}}
& T_{i-1} \ar[u]^{[\sigma_{i-1}]}_{\pi_{i-1}} \ar[r]^{g_i}
& T_i \ar[u]^{[\sigma_i]}_{\pi_i} \ar[r]^{g_{i+1}}
& \cdots \ar[r]^{g_K}
& T_K \ar[u]^{[\sigma_K]}_{\pi_K=\pi'} \ar@{=}[r]
& T'
}
$$
\end{proposition}
\textbf{Remark.} Implicit in the conclusion, via the definition of a combing rectangle, is that the sequence $T_0\xrightarrow{g_1}\cdots\xrightarrow{g_K} T_K$ is foldable.
\begin{proof} We will construct this combing rectangle in two steps. In Step~1 we produce a commutative diagram of free splittings and maps of the form
$$\xymatrix{
S_0 \ar[r]^{f_1}
& \cdots \ar[r]^{f_{i-1}}
& S_{i-1} \ar[r]^{f_i}
& S_i \ar[r]^{f_{i+1}}
& \cdots \ar[r]^{f_K}
& S_K \\
U_0 \ar[r]^{h_1} \ar[u]_{[\sigma'_0]}^{\pi'_0}
& \cdots \ar[r]^{h_{i-1}}
& U_{i-1} \ar[u]_{[\sigma'_{i-1}]}^{\pi'_{i-1}} \ar[r]^{h_i}
& U_i \ar[u]_{[\sigma'_i]}^{\pi'_i} \ar[r]^{h_{i+1}}
& \cdots \ar[r]^{h_K}
& U_K \ar[u]_{[\sigma'_K=\sigma']}^{\pi'_K} \ar@{=}[r]
& T'
}
$$
in which each $\pi'_i$ is a collapse and $h_i^\inv(\sigma'_i)=\sigma'_{i-1}$, but the $U$ row slightly fails to be foldable in that certain explicitly described natural vertices of $U_i$ are ``bad'' on account of having only $2$~gates with respect to~$h^{i}_K \colon U_i \to U_K$. One of these gates will always be a singleton, and so each ``bad natural vertex'' will be incident to a ``bad natural edge''. In Step~2 we will repair this problem by splitting each bad natural edge, to produce a commutative diagram of the form
$$\xymatrix{
U_0 \ar[r]^{h_1}
& \cdots \ar[r]^{h_{i-1}}
& U_{i-1} \ar[r]^{h_i}
& U_i \ar[r]^{h_{i+1}}
& \cdots \ar[r]^{h_K}
& U_K \ar@{=}[r]
& T' \\
T_0 \ar[r]^{g_1} \ar[u]^{\mu_0}
& \cdots \ar[r]^{g_{i-1}}
& T_{i-1} \ar[u]^{\mu_{i-1}} \ar[r]^{g_i}
& T_i \ar[u]^{\mu_i} \ar[r]^{g_{i+1}}
& \cdots \ar[r]^{g_K}
& T_K \ar@{=}[u]^{\mu_K} \ar@{=}[r]
& T'
}
$$
The $T$ row will be a foldable sequence. The $\mu_i$ maps are not collapses but instead are ``multifolds'' that invert the splitting process. The desired combing rectangle will be obtained by concatenating these two rectangles: the composition $\pi_i \colon T_i \xrightarrow{\mu_i} U_i \xrightarrow{\pi'_i} S_i$ will indeed be a collapse map, which collapses the subgraph $\sigma_i = \mu_i^\inv(\sigma'_i) \subset T_i$.
\subparagraph{Step 1.} The free splitting $F \act U_i$ is defined to be the minimal subtree of the pushout of $S_i$ and $T'$. Here are more details. As a set, the pushout of $S_i$ and $T'$ is
$$\wedge(S_i,T') = \{(x,y) \in S_i \cross T' \suchthat f^i_K(x)=\pi'(y) \}
$$
The action $F \act \wedge(S_i,T')$ is obtained by restricting the diagonal action $F \act S_i \cross T'$. The restrictions of the two projection maps are denoted
$$\pi'_i \colon \wedge(S_i,T') \to S_i \quad\text{and}\quad h^i_{T'} \colon \wedge(S_i,T') \to T'
$$
Both are clearly $F$-equivariant and we have $f^i_K \composed \pi'_i = \pi' \composed h^i_{T'} \colon \wedge(S_i,T') \to S_K$. As~a graph, the vertices and edgelets of the pushout are as follows. A vertex is a pair $(v,w) \in \wedge(S_i,T')$ such that $v$ is a vertex of $S_i$ and $w$ is a vertex of $T'$. Edgelets are of two types. First, a \emph{collapsed edgelet} is one of the form $v \cross e'$ where $v \in S_i$ is a vertex and $e' \subset \sigma' \subset T'$ is an edgelet such that $\pi'(e') = f^i_K(v)$; the barycentric coordinates on $e'$ induce those on $v \cross e'$ via the projection $h^i_{T'}$. Second, to each edgelet $e \subset S_i$ there corresponds a unique edgelet $e' \subset T'$ with the property that $f^i_K(e)=\pi'(e')$ (uniqueness follows since $\pi'$ is a collapse map), and there corresponds in turn an \emph{uncollapsed} edgelet $\ti e = \wedge(e,e') = \{(x,y) \in \wedge(S_i,T') \suchthat x \in e, y \in e'\}$ of $\wedge(S_i,T')$ with barycentric coordinates induced via the map $f^i_K \composed \pi'_i = \pi' \composed h^i_{T'}$ which takes $\ti e$ bijectively to the edgelet $f^i_K(e)=\pi'(e')$ of~$S_K$. The action of $F$ on $\wedge(S_i,T')$ and the projection maps $\pi'_i$, $h^i_{T'}$ are each simplicial. The simplicial complex $\wedge(S_i,T')$ is 1-dimensional by construction. It is furthermore a tree, in that removal of the interior of any edgelet disconnects it, because the simplicial map $\pi'_i \colon \wedge(S_i,T') \to S_i$ is injective over the interior of each edgelet of $S_i$, and for each vertex $x \in S_i$ the subcomplex $(\pi'_i)^\inv(x)$ is a tree (mapped by a simplicial isomorphism to the tree $(\pi')^\inv(f^i_K(x)) \subset T'$). The action $F \act S_i$ has no point fixed by every element of $F$, and so neither does the action $F \act \wedge(S_i,T')$; it follows that the $F$-tree $\wedge(S_i,T')$ contains a unique minimal $F$-invariant subtree which, by definition, is $U_i$. For each edgelet $e \subset \wedge(S_i,T')$, its stabilizer is contained in $\Stab_{S_i}(\pi'_i(e))$ if $e$ is uncollapsed and in $\Stab_{T'}(h^i_{T'}(e))$ if $e$ is collapsed, and in either case is trivial. This proves that $F \act U_i$ is a free splitting.
Here are some structural facts about the tree $U_i$. For each edgelet $e \subset S_i$, the edgelet $\ti e \subset \wedge(S_i,T')$ is the unique one mapped to $e$ via $\pi'_i$, and since $F \act S_i$ is minimal the map $\pi'_i \colon U_i \to S_i$ is surjective, which forces $\ti e$ to be contained in $U_i$. This also shows that $\pi'_i$ is a collapse map. The union of the collapsed edgelets of the pushout $\wedge(S_i,T')$ forms a subgraph $\Sigma_i \subset \wedge(S_i,T')$ with one component $\Sigma_{i,v} = (\pi'_i)^\inv(v)$ for each vertex $v \in S_i$ such that $(\pi')^\inv(f^i_K(v))$ is a component of~$\sigma'$; the map $h^i_{T'}$ restricts to a simplicial isomorphism between these components. The subgraph $\sigma'_i \subset U_i$ that is collapsed by $\pi'_i \colon U_i \to S_i$ is the union of those components of $\Sigma_i \intersect U_i$ that contain at least one edge. Each of these components has the form $\sigma'_{i,v} = \Sigma_{i,v} \intersect U_i$ when this intersection contains at least one edge; by convention we set $\sigma'_{i,v} = \emptyset$ otherwise. See below for a more detailed description of various features of $\sigma'_{i,v}$.
There is an induced map $h_i \colon \wedge(S_{i-1},T') \to \wedge(S_i,T')$ which is defined by the formula $h_i(x,y)=(f_i(x),y)$, which makes sense because for each $(x,y) \in \wedge(S_{i-1},T')$ we have $f^i_K(f_i(x)) = f^{i-1}_K(x)=\pi'(y)$. The commutativity equation $\pi'_i \composed h_i = f_i \composed \pi'_{i-1}$ is immediate. Since $U_i$ is the minimal subtree of $\wedge(S_i,T')$ it follows that $h_i(U_{i-1}) \supset U_i$, but we are not yet in a position to prove the opposite inclusion, not until we have established that the map $h^i_{T'} \colon U_i \to T'$ has $\ge 2$ gates at each vertex.
\subparagraph{Preparation for Step 2.} Here are some structural facts about the components of $\sigma'_i$. Consider a vertex $v \in S_i$ for which $\sigma'_{i,v} \ne \emptyset$ and so is a component of $\sigma'_i$. Given an oriented edgelet $e \subset S_i$ we abuse notation by writing $e \in D_v S_i$ to mean that $v$ is the initial vertex of~$e$. There is a function $\xi_{i,v} \colon D_v S_i \to U_i$ where for each $e \in D_v S_i$ we define $\xi_{i,v}(e) \in \sigma'_{i,v}$ to be the initial vertex of the corresponding oriented edgelet $\ti e \subset U_i$. Note that the set $\image(\xi_{i,v})$ is the topological frontier of the subtree $\sigma'_{i,v}$ in the tree $U_i$. By Lemma~\ref{LemmaCollapseProps}~\pref{ItemCollapseFrontierHull} it follows that $\sigma'_{i,v}$ is the convex hull of the set $\image(\xi_{i,v})$ in $U_i$. Notice also that the function $\xi_{i,v}$ is constant on each gate of $D_v S_i$ with respect to the map $f^i_K$, for if $e_1,e_2 \in D_v S_i$ are in the same gate then $f^i_K(e_1)=f^i_K(e_2)$ is a single edgelet in $S_K$ which lifts to a unique edgelet $e' \subset T'$ and we have
$$h^i_K(\ti e_1) = \wedge(f^i_K(e_1),e') = \wedge(f^i_K(e_2),e') = h^i_K(\ti e_2)
$$
and so the initial endpoints of $\ti e_1$ and $\ti e_2$ have the same image under $h^i_K$. But these initial endpoints are in the graph $\sigma'_{i,v}$ on which $h^i_K$ is injective, so these initial endpoints are equal. Letting $\Gamma_v S_i$ denote the set of gates of $f^i_K$ in $D_v S_i$, the map $\xi_{i,v}$ induces a map which we also denote $\xi_{i,v} \colon \Gamma_v S_i \to \sigma'_{i,v}$ whose image is also the frontier of $\sigma'_{i,v}$.
We now study the extent to which the maps $h^i_K \colon U_i \to U_K$ are foldable. Note first that we may identify $T'$ with the pushout $\wedge(S_K,T')$ and so we may identify $U_K = T'$ and $\sigma'_K = \sigma'$ up to simplicial conjugacy and we may identify $h^i_K=h^i_{T'}$, in particular the gates of $h^i_K$ and of $h^i_{T'}$ are therefore identical. We will show that $h^i_{T'}$ has $\ge 2$ gates at each vertex of $U_i$, so a vertex is either \emph{good} meaning it has valence~$\ge 3$ and~$\ge 3$ gates or valence~$2$ and~$2$ gates, or \emph{bad} meaning it has valence~$\ge 3$ but only~$2$ gates. We shall do this through a case analysis, going through various cases of good vertices and narrowing down to one remaining case which is bad. This will yield an explicit description of the bad vertices which will be used in describing the free splitting $F \act T_i$.
Fix a vertex $u = (v,w) \in U_i$, so if $\sigma'_{i,v} \ne \emptyset$ then $u \in \sigma'_{i,v}$. Denote $x = f^i_K(v) = \pi'(w)$.
Consider first the case that $\sigma'_{i,v} = \emptyset$; we shall show that $u$ is good. We have a commutative diagram of derivative maps
$$\xymatrix{
D_v S_i \ar[r]^{d_v f^i_K} & D_x S_K \\
D_u U_i \ar[u]^{d_u \pi'_i} \ar[r]^{d_u h^i_{T'}} & D_w T' \ar[u]_{d_w \pi'}
}$$
where the left arrow is a bijection, i.e.\ the valences of $u$ and $v$ are equal. Also, the set $\image(d_u h^i_{T'})$ is in the domain of definition of the right arrow, and the right arrow is an injection on its domain of definition. The numbers of gates at $u$ and at $v$ are therefore equal. Since $f^i_K$ is foldable it follows that $u$ is good.
Consider now the case that $\sigma'_{i,v} \ne \emptyset$. To simplify notation we denote $W=\sigma'_i$ and $W_v = \sigma'_{i,v}$. Each gate of $h^i_{T'}$ in $DU_i$ is contained either in $DW$ or its complement $D(U_i \setminus W) = D U_i - D W$, because $W=\sigma'_i = (h^i_{T'})^\inv(\sigma')$, implying that $h^i_{T'}$ never maps a direction of $W$ and a direction of $U_i \setminus W$ to the same direction of $T'$. Since $h^i_{T'}$ is injective on $W_v$, each direction in the set $D_u W_v$ constitutes an entire gate of $D_u U_i$. Gates at $u$ in the complement $D_u U_i - D_u W_v$ exist if and only if $u$ is a frontier vertex of $W_v$, which holds if and only if $u$ is in the image of $\xi_{i,v} \colon D_v S_i \to W_v$.
Consider the subcase that $v$ has valence~$2$ in $S_i$. The graph $W_v$ is then a segment contained in the interior of a natural edge of $U_i$. The vertex $u$ therefore has valence~$2$ in~$U_i$, with either $2$ directions in $W_v$ or one each in $W_v$ and in $U_i \setminus W_v$, and in either case these 2 directions are mapped by $h^i_{T'}$ to two different directions in $T'$ and so $u$ is good.
Consider the subcase that $v$ has valence~$\ge 3$ in $S_i$. If the valence of $u$ in $W_v$ plus the number of gates at $u$ in the complement of $W_v$ is $\ge 3$ then $h^i_{T'}$ has $\ge 3$ gates at $u$, so $u$ is good. If $u$ is an interior vertex of $W_v$ then $u$ has valence~$\ge 2$ in $W_v$ by minimality of $F \act U_i$; furthermore, the valences of $u$ in $W_v$ and in $U_i$ are equal and the number of gates of $h^i_{T'}$ at $u$ equals the valence, so $u$ is good. If $u$ is a frontier vertex of valence~$\ge 2$ in $W_v$ then $u$ has $\ge 1$ gate in the complement of $W_v$ and we considered this case already and showed that $u$ is good. If $u$ is a frontier vertex of valence~$1$ in $W_v$ and if $u$ has $\ge 2$ gates in the complement of $W_v$ then we have also considered this case already and showed that $u$ is good. If $u$ is a frontier vertex of valence~$1$ in $W_v$ and $u$ has exactly 1 direction in the complement of $W_v$ then $u$ has valence~$2$ in $U_i$ and $2$ gates, so $u$ is good.
The only case that remains, and the case that characterizes when $u$ is bad, is when $v$ has valence~$\ge 3$ in $S_i$, $u$ is a frontier vertex of $W_v$, $u$ has valence~$1$ in $W_v$, $u$ has exactly one gate in the complement of $W_v$, and that gate has cardinality~$\chi_u \ge 2$, called the \emph{external valence} of $u$. When in this case, let $\zeta_u$ be the unique natural edge of $U_i$ with endpoint $u$ and with direction at $u$ equal to the unique direction of $W_v$ at $u$. We call $\zeta_u$ the \emph{bad natural edge} incident to $u$. Let $z_u$ be the natural endpoint of $\zeta_u$ opposite~$u$.
We claim that for each bad natural vertex $u \in U_i$ we have $\zeta_u \subset W_v$; the only way this could fail is if $W_v$ is an edgelet path whose vertices apart from $u$ all have valence~$2$ in $U_i$, implying that $f^i_K$ has 2 gates at the natural vertex $v$, contradicting that $f^i_K$ is foldable. We claim also that $z_u$ is good; otherwise it would follow that $W_v = \zeta_u = \zeta_{z_u}$ which again would imply the contradiction that $f^i_K$ has 2 gates at~$v$.
The union of the bad natural edges of $U_i$ forms an equivariant natural subgraph denoted $Z_i = \union\zeta_u \subset U_i$. The natural edges of its complement $U_i \setminus Z_i$ are the \emph{good natural edges} of $U_i$, some of which may be contained in $W$, some in $U_i \setminus W$, and some in neither. The endpoints of a good natural edge need not be good. From the description of bad natural edges it follows that each component of $Z_i$ contains a unique good vertex $z$ and is the union of some number $m \ge 1$ of bad natural edges with endpoint $z$, forming a star graph with $m$ valence~$1$ vertices apart from $z$.
\subparagraph{Step 2.} Ignoring the simplicial structure for the moment, define the free splitting $F \act T_i$ to be the one obtained from $F \act U_i$ by collapsing the bad subgraph $Z_i \subset U_i$. Let $\rho_i \colon U_i \xrightarrow{[Z_i]} T_i$ be the collapse map. Note that $\rho_i$ restricts to an equivariant bijection from the good natural vertices of $U_i$ to the natural vertices of $T_i$, because $Z_i$ is a natural subgraph each of whose components contains exactly one good natural vertex. Also, $\rho_i$ induces a bijection from the good natural edges of $U_i$ --- those in $U_i \setminus Z_i$ --- to the natural edges of~$T_i$: denote this correspondence by $\ti\eta \leftrightarrow \eta$ for each good natural edge $\eta \subset T_i$, and note that $\rho_i$ maps $\ti\eta$ homeomorphically to $\eta$.
Define the map $\mu_i \colon T_i \to U_i$ as follows. The restriction of $\mu_i$ to the natural vertices of $T_i$ is the equivariant bijection onto the good natural vertices of $U_i$ that is obtained by inverting~$\rho_i$. The endpoints of each natural edge of $T_i$ map to distinct points of $U_i$, and so $\mu_i$ may be extended equivariantly and continuously to be an injection on each natural edge of $T_i$.
Define the simplicial structure on $T_i$ to be the unique one with respect to which $\mu_i$ is a simplicial map: its vertices are the inverse image under $\mu_i$ of the vertices of $U_i$; each of its edgelets maps via $\mu_i$ by simplicial isomorphism to an edgelet of~$U_i$.
Define the subgraph $\sigma_i \subset T_i$ to be $\mu_i^\inv(\sigma'_i)$; we will see below that $\pi'_i \composed \mu_i \colon T_i \to S_i$ is a collapse map which collapses the subgraph $\sigma_i$.
Knowing that $\mu_i$ is injective on each natural edge of $T_i$, we describe the image of each natural edge as follows. The notation $u \mapsto z_u$, which so far defines an equivariant function from the bad natural vertices of $U_i$ to the good natural vertices of $U_i$, extends to all natural vertices of $U_i$ by defining $z_u=u$ when $u$ is good. For each natural vertex $u \in U_i$ we have $\mu_i(\rho_i(u)) = z_u$: if $u=z_u$ is good this is because $\mu_i$ and $\rho_i$ are inverse bijections between good natural vertices of~$U_i$ and all natural vertices of $T_i$; if $u$ is bad then $u$ and $z_u$ are contained in the same component of $Z_i$ so $\rho_i(u)=\rho_i(z_u)$ and hence $\mu_i(\rho_i(u))=\mu_i(\rho_i(z_u))=z_u$. Given a natural edge $\eta \subset T_i$ with corresponding good natural edge $\ti\eta \subset U_i$, letting $u_1,u_2 \in U_i$ be the endpoints of $\ti\eta$ and letting $z_j=z_{u_j} \in U_i$ for $j=1,2$, it follows that $\mu_i(\eta) = \mu_i(\rho_i(\ti\eta))$ is the arc in $U_i$ connecting $z_1$ to $z_2$, which is just the union of $\ti\eta$ together with the bad natural edges incident to whichever of $u_1,u_2$ are bad.
From this description of $\mu_i$ we derive a few more properties of $\mu_i$, giving details about its behavior over good and bad natural edges of $U_i$, and its behavior on natural edges and natural vertices of $T_i$.
\begin{description}
\item[(a) $\mu_i$ over good natural edges of $U_i$:] the map $\mu_i$ is injective over the interior of each good natural edge $\ti\eta \subset U_i$, the closure of $\mu_i^\inv(\interior(\ti\eta))$ is an edgelet path contained in the corresponding natural edge $\eta \subset T_i$, and the restriction of $\mu_i$ to this edgelet path is a simplicial isomorphism onto $\ti\eta$.
\item[(b) $\mu_i$ over bad natural edges of $U_i$:] for each bad natural edge $\zeta_u \subset U_i$ oriented to have terminal point $u$ and initial point $z_u$, letting $\chi_u$ be the external valence of $u$, letting $\ti\eta_\ell \subset U_i$ ($\ell=1,\ldots,\chi_u$) be the oriented good natural edges with common initial point $u$, and letting $\eta_\ell = \rho_i(\ti\eta_\ell) \subset T_i$ be the corresponding oriented natural edges with common initial point $w=\rho_i(u)$, there exist initial segments $[w,w_\ell] \subset \eta_\ell$, $\ell = 1,\ldots,\chi_u$, such that $\mu_i$ maps each $[w,w_\ell]$ to $\zeta_u$ by a simplicial isomorphism and such that $\mu_i^\inv(\zeta_u) = \union_{\ell=1}^{\chi_u} [w,w_\ell] \subset \sigma_i$. Furthermore each $w_\ell$ is a valence~$1$ vertex of $\sigma_i$.
\end{description}
Intuitively (a) and (b) together say that $\mu_i$ is a ``partial multifold'', which for each of its gates identifies proper initial segments of the oriented natural edges representing that gate. Perhaps the only nonobvious part of (a) and (b) is the last sentence of (b). For each bad natural vertex $u \in U_i$, from (a) and the previous sentences of (b) it follows that $\mu_i^\inv(u) = \{w_1,\ldots,w_{\chi_u}\}$, and that for each $\ell=1,\ldots,\chi_u$ the vertex $w_\ell$ is contained in the interior of the natural edge $\eta_\ell$, one direction being in the segment $[w,w_\ell] \subset \sigma_i$ and the other direction being in the closure of $\mu_i^\inv(\interior(\ti\eta_\ell))$ which is in $T_i \setminus \sigma_i$, and so $w_\ell$ has valence~$1$ in $\sigma_i$.
\begin{description}
\item[(c) $\mu_i$ on natural edges of $T_i$:] The restriction of $\mu_i$ to each natural edge of $T_i$ is injective. Furthermore, an embedded edgelet path $\alpha \subset U_i$ is the $\mu_i$-image of some natural edge of $T_i$ if and only if the endpoints of $\alpha$ are good natural vertices of $U_i$, no interior point of $\alpha$ is a good natural vertex, and $h^i_K \restrict \alpha$ is injective.
\end{description}
Only the ``if'' part of~(c) is not obvious. Let $\alpha\subset U_i$ be an embedded edgelet path whose only good natural vertices are its endpoints, and suppose that $h^i_K \restrict \alpha$ is injective. If $\alpha$ contains no bad natural vertex then $\alpha = \ti\eta$ is a good natural edge with associated natural edge $\eta \subset T_i$ and $\alpha=\mu_i(\eta)$. If $u \in \alpha$ is a bad natural vertex then $u \in \interior(\alpha)$, and since $h^i_K \restrict \alpha$ is injective it follows that one direction of $\alpha$ at $u$ is the direction of the bad natural arc~$\zeta_u$, whose opposite good natural endpoint $z_u$ must be an endpoint of $\alpha$; the edgelet path $\alpha$ is therefore the concatenation of some natural edge $\ti\eta \subset U_i \setminus Z_i$ with any bad natural edges incident to the endpoints of $\ti\eta$, and it follows that $\alpha = \mu_i(\eta)$.
\begin{description}
\item[(d) $d\mu_i$ at natural vertices of $T_i$:] \qquad For each natural vertex $v \in T_i$, the map \break $d_v \mu_i \colon D_v T_i \to D_{\mu_i(v)} U_i$ is surjective.
\end{description}
To justify~(d), the vertex $\mu_i(v)$ is a good natural vertex of $U_i$. Consider a direction $d \in D_{\mu_i(v)} U_i$. If $d$ is the initial direction of some oriented good natural edge $\ti\eta \subset U_i$ corresponding to an oriented natural edge $\eta \subset T_i$, it follows that the initial vertex of $\eta$ equals $v$ and the initial direction of $\eta$ maps to $d$. If $d$ is the initial direction of some bad oriented natural edge $\zeta_u \subset U_i$ with opposite bad natural vertex $u$, let $\ti\eta$ be any of the good natural edges incident to $u$ oriented with initial vertex $u$, and let $\eta \subset T_i$ be the corresponding oriented natural edge, and it follows that the initial vertex of $\eta$ again equals $v$ and that the initial direction of $\eta$ maps to $d$.
We now prove that we have a collapse map $\pi_i = \pi'_i \composed \mu_i \colon T_i \xrightarrow{[\sigma_i]} S_i$, where $\sigma_i = \mu_i^\inv(\sigma'_i)$. Clearly an edgelet of $T_i$ is in $\sigma_i$ if and only if its image under $\mu_i$ is in $\sigma'_i$, if and only if its image under $\pi_i=\pi'_i\composed\mu_i$ is a point. Given an edgelet $e \subset S_i$, the collapse map $\pi'_i$ is injective over the interior of $e$, so there is a unique edgelet $e' \subset U_i$ mapped to $e$ by $\pi'_i$, and $e' \not\subset \sigma'_i$; it follows that $e' \not\subset Z_i$ and so by item (a) above the map $\mu_i$ is injective over the interior of~$e'$; therefore $\pi_i$ is injective over the interior of~$e$.
Putting off for the moment the issue of defining the maps $g_i \colon T_{i-1} \to T_i$, we \emph{define} the maps $g^i_K \colon T_i \to T_K$ as follows. First note that the map $\mu_K \colon T_K \to U_K$ is evidently a simplicial isomorphism, and so we may identify $T_K$ with $U_K$ and with $T'$. We now define $g^i_K$ to be the composition $T_i \xrightarrow{\mu_i} U_i \xrightarrow{h^i_K} U_K \xrightarrow{(\mu_K)^\inv} T_K$. The map $g^i_K$ is foldable, equivalently $h^i_K \composed \mu_i \colon T_i \to U_K$ is foldable, for the following reasons: by (c) the map $h^i_K$ is injective on natural edges of $T_i$; for each natural vertex $v \in T_i$, its image $\mu_i(v) \in U_i$ is a good natural vertex and so has $\ge 3$ gates with respect to $h^i_K$, and by (d) the derivative map $d_v \mu_i \colon D_v T_i \to D_{\mu_i(v)} U_i$ is surjective, which implies that $h^i_K \composed \mu_i$ has $\ge 3$ gates at $v$.
All that remains is to define a map $g_i \colon T_{i-1} \to T_i$ so that the commutativity equation $h_i \composed \mu_{i-1} = \mu_i \composed g_i$ holds, for by combining this with the equation $h^{i-1}_K = h_K \composed\cdots\composed h_i$ it immediately follows that $g^{i-1}_K = g_K \composed\cdots\composed g_{i}$ and so the map sequence $T_0 \xrightarrow{g_1}\cdots\xrightarrow{g_K} T_K$ is defined and is foldable.
Consider a natural vertex $v \in T_{i-1}$. Its image $\mu_{i-1}(v) \in U_{i-1}$ is a good natural vertex and so has $\ge 3$ gates with respect to $h^{i-1}_K$, implying that $h_i(\mu_{i-1}(v)) \in U_i$ has $\ge 3$ gates with respect to $h^i_K$ and so is a good natural vertex, and hence there is a unique natural vertex in $T_i$ that maps to $h_i(\mu_{i-1}(v))$ which we take to be $g_i(v)$. We have thus defined $g_i$ so as to satisfy the commutativity equation on each natural vertex $v \in T_{i-1}$.
Consider a natural edge $\eta \subset T_{i-1}$ with natural endpoints $v_0 \ne v_1$. Its image $\mu_{i-1}(\eta) \subset U_{i-1}$ is the arc with good natural endpoints $\mu_{i-1}(v_0) \ne \mu_{i-1}(v_1)$. By (c) above the map $h^{i-1}_K = h^i_K \composed h_i$ is injective on the arc $\mu_{i-1}(\eta)$, implying that $h_i$ is injective on $\mu_{i-1}(\eta)$ and that $h^i_K$ is injective on the arc $h_i(\mu_{i-1}(\eta)) \subset U_i$, the latter of which has good natural endpoints $h_i(\mu_{i-1}(v_0)) \ne h_i(\mu_{i-1}(v_1))$. Subdividing the arc $h_i(\mu_{i-1}(\eta))$ at all interior good natural vertices of $U_i$ we write it as a concatenation:
$$h_i(\mu_{i-1}(\eta)) = \alpha_1 * \cdots * \alpha_M
$$
Each of the arcs $\alpha_m$, $m=1,\ldots,M$, has good natural endpoints, no good natural interior points, and the map $h^i_K$ is injective on $\alpha_m$, and so by~(c) there is a unique natural edge $\hat\alpha_m \subset T_i$ mapped by $\mu_i$ to $\alpha_m$ by a simplicial isomorphism. Since every good natural vertex in $U_i$ has a unique natural pre-image in $T_i$, it follows that we may concatenate to obtain an arc $\hat\alpha_1 * \cdots * \hat\alpha_M \subset T_i$, and furthermore the restriction $\mu_i \restrict \hat\alpha_1 * \cdots * \hat\alpha_M$ is a simplicial isomorphism onto $h_i(\mu_{i-1}(\eta))$. Inverting this restriction we may then define
$$g_i \restrict \eta = (\mu_i \restrict \hat\alpha_1 * \cdots * \hat\alpha_M)^\inv \composed (h_i \composed \mu_{i-1}) \restrict \eta
$$
which is a simplicial isomorphism with image $\hat\alpha_1 * \cdots * \hat\alpha_M$. We have thus defined $g_i$ so as to satisfy the commutativity equation on each natural edge $\eta \subset T_{i-1}$.
This completes the proof of Proposition~\ref{PropCBE}.
\end{proof}
\subsection{Composition and decomposition of combing rectangles}
\label{SectionCombRectOps}
\begin{lemma}[Composition of combing rectangles]
\label{LemmaCombingComp}
\qquad \\
Given two combing rectangles of the form
$$\xymatrix{
S_0 \ar[r]^{f_1} \ar[d]^{\pi_0}
& \cdots \ar[r]^{f_i}
& S_i \ar[r]^{f_{i+1}} \ar[d]^{\pi_i}
& \cdots \ar[r]^{f_K}
& S_K \ar[d]^{\pi_K}
\\
T_0 \ar[r]^{g_1} \ar[d]^{\rho_0}
& \cdots \ar[r]^{g_i}
& T_i \ar[r]^{g_{i+1}} \ar[d]^{\rho_i}
& \cdots \ar[r]^{g_K}
& T_K \ar[d]^{\rho_K}
\\
U_0 \ar[r]^{h_1}
& \cdots \ar[r]^{h_i}
& U_i \ar[r]^{h_{i+1}}
& \cdots \ar[r]^{h_K}
& U_K
}$$
their composition, which is the commutative diagram
$$\xymatrix{
S_0 \ar[r]^{f_1} \ar[d]^{\rho_0 \composed \pi_0}
& \cdots \ar[r]^{f_i}
& S_i \ar[r]^{f_{i+1}} \ar[d]^{\rho_i \composed \pi_i}
& \cdots \ar[r]^{f_K}
& S_K \ar[d]^{\rho_K \composed \pi_K}
\\
U_0 \ar[r]^{h_1}
& \cdots \ar[r]^{h_i}
& U_i \ar[r]^{h_{i+1}}
& \cdots \ar[r]^{h_K}
& U_K
}$$
is a combing rectangle. The collapsed subgraph of $\rho_i \composed \pi_i$ is the union of the collapsed subgraph of $\pi_i$ with the inverse image under $\pi_i$ of the collapsed subgraph of $\rho_i$.
\end{lemma}
\begin{proof}
For each edgelet $e \subset U_i$, the map $\rho_{i}$ is injective over the interior of $e$, and so there is a unique edgelet $e' \subset T_i$ such that $\rho_{i}(e')=e$. The map $\pi_i$ is injective over the interior of $e'$, and it follows that $\rho_i \composed \pi_i$ is injective over the interior of $e$. This proves that $\rho_i \composed \pi_i$ is a collapse map and that the second diagram in the statement above is a combing rectangle.
Given an edgelet of $S_i$, clearly its image under $\rho_i \composed \pi_i$ is a vertex of $U_i$ if and only if its image under $\pi_i$ is a vertex of $T_i$ or an edgelet of $T_i$ whose image under $\rho_i$ is a vertex of $U_i$.
\end{proof}
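In symbols, and only as a restatement of the final assertion of Lemma~\ref{LemmaCombingComp} for ease of later reference: writing $\sigma_i$ for the subgraph collapsed by $\pi_i$ and $\tau_i$ for the subgraph collapsed by $\rho_i$, the composed collapse map is
$$\rho_i \composed \pi_i \colon S_i \xrightarrow{[\sigma_i \union \pi_i^\inv(\tau_i)]} U_i
$$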
\begin{lemma}[Decomposition of combing rectangles]
\label{LemmaCombingDecomp}
Given a combing rectangle of the form
$$\xymatrix{
S_0 \ar[r]^{f_1} \ar[d]^{\upsilon_0}_{[\sigma_0]}
& \cdots \ar[r]^{f_i}
& S_i \ar[r]^{f_{i+1}} \ar[d]^{\upsilon_i}_{[\sigma_i]}
& \cdots \ar[r]^{f_K}
& S_K \ar[d]^{\upsilon_K}_{[\sigma_K]}
\\
U_0 \ar[r]^{h_1}
& \cdots \ar[r]^{h_i}
& U_i \ar[r]^{h_{i+1}}
& \cdots \ar[r]^{h_K}
& U_K
}$$
and given equivariant subgraphs $\sigma'_i \subset \sigma_i$ ($i=0,\ldots,K$) having the property that $f_{i}^\inv(\sigma'_{i}) = \sigma'_{i-1}$ for each $i=1,\ldots,K$, there exist two combing rectangles of the form
$$\xymatrix{
S_0 \ar[r]^{f_1} \ar[d]^{\pi_0}_{[\sigma'_0]}
& \cdots \ar[r]^{f_i}
& S_i \ar[r]^{f_{i+1}} \ar[d]^{\pi_i}_{[\sigma'_i]}
& \cdots \ar[r]^{f_K}
& S_K \ar[d]^{\pi_K}_{[\sigma'_K]}
\\
T_0 \ar[r]^{g_1} \ar[d]^{\rho_0}
& \cdots \ar[r]^{g_i}
& T_i \ar[r]^{g_{i+1}} \ar[d]^{\rho_i}
& \cdots \ar[r]^{g_K}
& T_K \ar[d]^{\rho_K}
\\
U_0 \ar[r]^{h_1}
& \cdots \ar[r]^{h_i}
& U_i \ar[r]^{h_{i+1}}
& \cdots \ar[r]^{h_K}
& U_K
}$$
whose composition (as in Lemma~\ref{LemmaCombingComp}) is the given combing rectangle.
\end{lemma}
\begin{proof} Define the collapse map $\pi_i \colon S_i \xrightarrow{[\sigma'_i]} T_i$ to be the quotient map obtained by collapsing each component of $\sigma'_i$ to a point. Since $f_i^\inv(\sigma'_i)=\sigma'_{i-1}$, there exists a map $g_i \colon T_{i-1} \to T_i$ induced from $f_i \colon S_{i-1} \to S_i$ under the quotient, which makes the top rectangle with the $S$ row and the $T$ row commutative, and this rectangle is therefore a combing rectangle. By Lemma~\ref{LemmaCombingProperties}, the $T$ sequence is foldable. Define a subgraph $\tau_i = \pi_i(\sigma_i) \subset T_i$. We have $g_i^\inv(\tau_i) = g_i^\inv(\pi_i(\sigma_i)) = \pi_{i-1}(f_i^\inv(\sigma_i)) = \pi_{i-1}(\sigma_{i-1}) = \tau_{i-1}$, where the second equation is verified by a diagram chase using that the map $\pi_{i-1}$ is surjective, and that $\pi_i$ is injective over the interior of each edgelet of $T_i$. Clearly the collapse map $\upsilon_i \colon S_i \xrightarrow{[\sigma_i]} U_i$ factors as the composition of $\pi_i \colon S_i \xrightarrow{[\sigma'_i]} T_i$ and a collapse map $\rho_i \colon T_i \xrightarrow{[\tau_i]} U_i$, making the bottom rectangle with the $T$ row and the $U$ row commutative, and this rectangle is therefore a combing rectangle.
\end{proof}
\section{Free splitting units}
\label{SectionFSU}
In this section we study how to break a fold sequence into natural units called \emph{free splitting units}. Our story of free splitting units grew in the telling. The original concept was motivated by units along train track splitting paths that are implicit in the ``nested train track argument'' of \cite{MasurMinsky:complex1} and refinements of that argument in \cite{MMS:Splitting}. The details of the definition were tailored to fit the proofs of our two major results: our Main Theorem on hyperbolicity of the free splitting complex, via the arguments of Section~\ref{SectionPushingDownPeaks}; and Proposition~\ref{PropFoldPathQuasis} which says that free splitting units give a uniformly quasigeodesic parameterization of fold paths in $\FS'(F)$.
The main results of this section are Proposition~\ref{PropCoarseRetract}, which verifies the \emph{Coarse retraction} axiom of Masur and Minsky, and Lemma~\ref{LemmaUnitsLipschitz}, which says that free splitting units give a uniformly coarse Lipschitz parameterization of fold paths in $\FS'(F)$. Underlying Lemma~\ref{LemmaUnitsLipschitz} are Lemmas~\ref{LemmaBRNatural} and~\ref{LemmaBROneB}, which give two methods of finding diameter bounds along foldable sequences.
The diameter bounds, which are stated and proved in Section~\ref{SectionDiamBounds}, arise from finding ``invariant natural structures'' along the foldable sequence. The first diameter bound, Lemma~\ref{LemmaBRNatural}, occurs when each free splitting along the fold path decomposes equivariantly into a pair of natural subgraphs in a manner which is ``invariant'' with respect to the foldable maps (see Definition~\ref{DefBRDecompos}). The second diameter bound, Lemma~\ref{LemmaBROneB}, occurs when each free splitting has a particular orbit of natural edges which is ``almost invariant'' with respect to the foldable maps (see Definition~\ref{DefAIE}).
The combinatorial structures underlying the two diameter bounds are used to formulate the definition of free splitting units along a fold sequence (see Definitions~\ref{DefLessThanOneFSU} and~\ref{DefGeneralFSU}). The diameter bounds are not applied directly to the fold sequence itself, but instead to foldable sequences obtained by transforming the given fold sequence via an application of ``combing by collapse'' followed by an application of ``combing by expansion''. One can already see this kind of transformation in the ``nested train track argument'' of \cite{MasurMinsky:complex1}.
\subsection{Diameter bounds along foldable sequences}
\label{SectionDiamBounds}
In this section we describe a pair of techniques for finding upper bounds on the diameter of foldable sequences.
\paragraph{Diameter bounds from natural blue--red decompositions.} Consider a free splitting $F \act T$ and a nonempty, proper, $F$-invariant subgraph $\beta \subset T$ having no degenerate components. The conjugacy classes of nontrivial stabilizers of connected components of $\beta$ form a free factor system $\F(\beta)$, as one can see by forming the collapse map $T \xrightarrow{[\beta]} U$ and noting that $\F(\beta)$ is a subset of $\F(U)$. Passing further to the quotient graph of groups $X = U / F$, the image of $\beta$ under the composition $T \mapsto U \mapsto X$ is a subset $V_\beta$ of the vertex set of $X$. Let $C_1(\beta)$ be the number of $F$-orbits of components of $\beta$, equivalently the cardinality of $V_\beta$. Let $C_2(\beta)$ be the sum of the ranks of the components of $\F(\beta)$, equivalently the sum of the ranks of the subgroups labelling the vertices $V_\beta$ in the graph of groups $X$, and so we have $0 \le C_2(\beta) \le \rank(F)$. Defining the \emph{complexity} of~$\beta$ to be $C(\beta) \equiv C_1(\beta) + (\rank(F) - C_2(\beta))$, we have $C(\beta) \ge 1$. If furthermore $\beta$ is a natural subgraph of $T$ then $C_1(\beta) \le 3 \rank(F) - 3$, because the number of component orbits of $\beta$ is at most the number of natural edge orbits in $\beta$, and $3 \rank(F)-3$ is an upper bound for the number of natural edge orbits of any free splitting of $F$. Altogether this shows that the complexity of any nonempty, proper, natural, $F$-invariant subgraph $\beta \subset T$ satisfies
$$1 \le C(\beta) \le 4 \rank(F)-3
$$
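To illustrate the arithmetic of this complexity (a hypothetical example, not needed in the sequel): if $\rank(F)=3$ and $\beta$ has two orbits of components, one with stabilizer of rank~$1$ and one with trivial stabilizer, then $C_1(\beta)=2$ and $C_2(\beta)=1$, so
$$C(\beta) = 2 + (3 - 1) = 4,
$$
which indeed lies between $1$ and $4\rank(F)-3 = 9$.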
\begin{definition}[Invariant blue-red decompositions]\label{DefBRDecompos}
An \emph{invariant blue--red decomposition} for a foldable sequence $T_0 \xrightarrow{g_1} T_1 \xrightarrow{g_2}\cdots\xrightarrow{g_K} T_K$, also called an \emph{invariant decomposition} for short, is a decomposition $\beta_k \union \rho_k = T_k$ for each $k=0,\ldots,K$ such that for $0 \le i \le j \le K$ we have $(g^i_j)^\inv(\beta_j) = \beta_i$ and $(g^i_j)^\inv(\rho_j) = \rho_i$ (where in expressions like $(g^i_j)^\inv(\beta_j)$ we abuse notation by deleting degenerate components). Notice that any choice of final decomposition $\beta_K \union \rho_K = T_K$ determines a unique invariant decomposition by the equations $\beta_i = (g^i_K)^\inv(\beta_K)$ and $\rho_i = (g^i_K)^\inv(\rho_K)$. An invariant decomposition is \emph{natural} if the following two equivalent properties hold: $\beta_0,\rho_0$ are natural subgraphs of~$T_0$; equivalently, $\beta_k,\rho_k$ are natural subgraphs of $T_k$ for all $k=0,\ldots,K$. The nontrivial implication follows by observing that the image of each natural vertex under a foldable map is a natural vertex, and so the image of a natural subgraph is a natural subgraph.
\end{definition}
Because an invariant decomposition is determined by the final decomposition, a general invariant decomposition carries little information about the foldable sequence. The typical behavior is that the edgelets within a natural edge $e \subset T_i$ will alternate many times between red and blue, that is, the number of components of $e \intersect \beta_i$ and $e \intersect \rho_i$ will be very large. Exploiting the difference between this typical behavior and the contrasting special behavior of a natural invariant decomposition is at the heart of the proof of the Main Theorem, specifically in the proof of Proposition~\ref{PropPushdownInToto} Step~2.
Here is our first diameter bound:
\begin{lemma}\label{LemmaBRNatural}
Given a foldable sequence $T_0 \xrightarrow{g_1} T_1 \xrightarrow{g_2}\cdots\xrightarrow{g_K} T_K$ with an invariant natural decomposition $\beta_k \union \rho_k = T_k$, the following hold:
\begin{enumerate}
\item \label{ItemBRDecrease}
The complexity $C(\beta_k)$ is nonincreasing as a function of $k=0,\ldots,K$.
\item \label{ItemBRSubintervals}
The interval $0 \le k \le K$ subdivides into $\le 4 \rank(F)-3$ subintervals on each of which $C(\beta_k)$ is constant.
\item \label{ItemBRBound}
If $C(\beta_k)$ is constant on the subinterval $a \le k \le b$, where $0 \le a \le b \le K$, then
$$\diam\{T_a,\ldots,T_b\} \le 4
$$
\end{enumerate}
\end{lemma}
\textbf{Remark.} When $T_0 \xrightarrow{g_1} T_1 \xrightarrow{g_2}\cdots\xrightarrow{g_K} T_K$ is a \emph{fold} sequence, one obtains a diameter bound for the entire sequence as follows. Subdivide the interval $0,\ldots,K$ into $\le 4 \rank(F)-3$ subintervals on which $C(\beta_k)$ is constant. On each subinterval one has a diameter bound of $4$. At each of the $\le 4 \rank(F)-4$ fold maps where one subinterval transitions to another, one has an additional distance bound of $2$ coming from Lemma~\ref{LemmaFoldDistance}. Putting these together,
$$\diam\{T_0,\ldots,T_K\} \le 4 (4 \rank(F)-3) + 2 (4 \rank(F)-4) = 24 \rank(F) - 20
$$
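For instance, when $\rank(F)=2$ this computation gives $\diam\{T_0,\ldots,T_K\} \le 28$.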
However, the manner in which we actually apply Lemma~\ref{LemmaBRNatural} to fold sequences is via concepts of free splitting units in the next section; see Lemma~\ref{LemmaUnitsLipschitz}.
Before turning to the proof proper of Lemma~\ref{LemmaBRNatural}, we first state a sublemma about the behavior of complexity of invariant subforests under fold maps.
\begin{sublemma}\label{SublemmaBlueFoldBounds}
If $f \colon S \to T$ is a fold map of free splittings, if $\beta_T \subset T$ is a nonempty, proper, $F$-invariant subgraph, and if $\beta_S=f^\inv(\beta_T)$ (as usual ignoring degenerate components), then $C_1(\beta_S) \ge C_1(\beta_T)$, and $C_2(\beta_S) \le C_2(\beta_T)$, and so $C(\beta_S) \ge C(\beta_T)$. Furthermore, equality holds if and only if $f$ restricts to a bijection of component sets of $\beta_S$ and $\beta_T$.
\end{sublemma}
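The final implication of the sublemma is immediate from the definition of complexity: given the inequalities on $C_1$ and $C_2$,
$$C(\beta_S) - C(\beta_T) = \bigl(C_1(\beta_S) - C_1(\beta_T)\bigr) + \bigl(C_2(\beta_T) - C_2(\beta_S)\bigr) \ge 0
$$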
We delay the proof of this sublemma and meanwhile turn to:
\begin{proof}[Proof of Lemma \ref{LemmaBRNatural}] Item~\pref{ItemBRDecrease} follows from Sublemma~\ref{SublemmaBlueFoldBounds} by factoring each foldable map $g_k \colon T_{k-1} \to T_k$ into folds. Item~\pref{ItemBRSubintervals} follows from \pref{ItemBRDecrease} and the fact that $1 \le C(\beta_K) \le C(\beta_0) \le 4 \rank(F)-3$.
To prove \pref{ItemBRBound}, fixing $i,j$ with $a \le i < j \le b$, it suffices to prove that $d(T_i,T_j) \le 4$. By assumption of \pref{ItemBRBound}, $C(\beta_k)$ is constant for $i \le k \le j$. For each $i < k \le j$, factoring $g_k \colon T_{k-1} \to T_k$ into folds, applying \pref{ItemBRDecrease} to get constant complexity on the fold factorization, and applying Sublemma~\ref{SublemmaBlueFoldBounds} to each of those folds, it follows that $g_k$ induces a bijection between the component sets of $\beta_{k-1}$ and $\beta_k$. By composing, it follows that $g^i_j = g_j \composed \cdots \composed g_{i+1}$ induces a bijection between the component sets of $\beta_i$ and~$\beta_j$.
Now we may factor $g^i_j$ into a fold sequence of the form
$$T_i = U_0 \xrightarrow{h_1} \cdots \xrightarrow{h_P} U_P \xrightarrow{h_{P+1}}\cdots \xrightarrow{h_Q} U_Q = T_j
$$
by prioritizing folds of blue edge pairs over folds of red edge pairs up until $U_P$, when there are no more blue edge pairs to fold, with the result that if $0 < p \le P$ then the edge pair of $U_{p-1}$ folded by $h_p$ is blue, whereas if $P < q \le Q$ then the edge pair of $U_{q-1}$ folded by $h_q$ is red. To see that prioritizing in this manner is possible, if $g^i_j$ does not already restrict to a simplicial isomorphism from $\beta_i$ to $\beta_j$ then, using that $g^i_j$ induces a bijection of components of $\beta_i$ and $\beta_j$, together with the \emph{Local to global principle} (see the proof of Lemma~\ref{LemmaFoldSequenceConstruction} and the following \emph{Remark}), it follows that some pair of oriented natural edges $\eta_1,\eta_2 \subset \beta_i$ with the same initial vertex have images in $\beta_j$ with the same initial direction. We may therefore define the first fold $h_1$ to be a maximal fold factor of $g^i_j$ obtained by folding $\eta_1,\eta_2$, producing a factorization $T_i = U_0 \xrightarrow{h_1} U_1 \mapsto T_j$. Pushing the natural blue-red decomposition of $U_0$ forward (or equivalently pulling that of $T_j$ back), we obtain a natural blue-red decomposition of $U_1$, and the map $U_1 \mapsto T_j$ still induces a bijection of component sets of blue graphs. We may then continue by induction, stopping when the map $U_P \mapsto T_j$ restricts to a simplicial isomorphism of blue graphs. If the map $U_P \mapsto T_j$ is not already a simplicial isomorphism then one continues the fold factorization arbitrarily, with the result that all folds from $U_P$ to $T_j$ are red.
For $0 \le p \le P$, by collapsing all blue edges of $U_p$, we obtain a free splitting $X_p$ with a collapse map $U_p \mapsto X_p$. Also, for $P \le q \le Q$, by collapsing red edges of $U_q$ we obtain a free splitting $Y_q$ with a collapse map $U_q \to Y_q$.
We claim that up to equivalence $X_p$ is independent of $p=0,\ldots,P$ and $Y_q$ is independent of $q=P,\ldots,Q$. From this claim it follows that $T_i,T_j$ are connected in $\FS'(F)$ by a path of length~$\le 4$ as follows:
$$[T_i] = [U_0] \collapse [X_0] = [X_P] \expand [U_P] \collapse [Y_P] = [Y_Q] \expand [U_Q] = [T_j]
$$
which completes the proof.
We prove for each $p=1,\ldots,P$ that $X_{p-1},X_p$ are equivalent, and for $q=P+1,\ldots,Q$ that $Y_{q-1},Y_q$ are equivalent; the two cases are similar and we do just the first. Let $e_1,e_2$ be maximal oriented segments with the same initial vertex that are identified by the fold $U_{p-1} \mapsto U_p$. Recall that the fold map $U_{p-1} \mapsto U_p$ can be factored as $U_{p-1} \xrightarrow{q'} U' \xrightarrow{q''} U_p$ where $q'$ identifies proper initial segments of $e_1,e_2$ and $q''$ folds the remaining unidentified segments. Since $e_1,e_2$ are blue, by pushing forward the blue-red decomposition of $U_{p-1}$, or pulling back that of $U_p$, we obtain a blue-red decomposition of $U'$. Furthermore, there is a collapse map $U' \mapsto U_{p-1}$ which collapses the blue segment resulting from partially identifying $e_1,e_2$, and a collapse map $U' \mapsto U_p$ which collapses the remaining unidentified segments, also blue. By composition we obtain collapse maps $U' \mapsto U_{p-1} \to X_{p-1}$ and $U' \mapsto U_p \mapsto X_p$ each of which collapses the entire blue subgraph of $U'$. It follows that $X_{p-1}$ and $X_p$ are equivalent.
\end{proof}
\begin{proof}[Proof of Sublemma \ref{SublemmaBlueFoldBounds}] Let $e_1,e_2 \subset S$ be oriented natural edges with the same initial vertex that are folded by the map $f$. Let $\eta_1 \subset e_1$, $\eta_2 \subset e_2$ be maximal initial subsegments that are identified by $f$. Let $v_1 \in \eta_1$, $v_2 \in \eta_2$ be the terminal endpoints. Note that either $\eta_1 \union \eta_2 \subset \beta_S$ or $\eta_1 \union \eta_2 \subset S \setminus \beta_S$. If $\eta_1,\eta_2 \subset \beta_S$, or if $\eta_1,\eta_2 \subset S \setminus \beta_S$ and either $v_1 \not\in \beta_S$ or $v_2 \not\in \beta_S$, then all of the inequalities are equalities and $f$ induces a bijection of component sets.
We are reduced to the case that $\eta_1 \union \eta_2 \subset S \setminus \beta_S$ and $v_1,v_2 \in \beta_S$, and so $f$ does not induce a bijection of component sets, because the two components $\beta_{S,1}, \beta_{S,2}$ of $\beta_S$ containing $v_1,v_2$ are mapped to the one component $\beta_{T,0}$ of $\beta_T$ that contains $f(v_1)=f(v_2)$. We must prove that the inequalities $C_1(\beta_S) \ge C_1(\beta_T)$ and $C_2(\beta_S) \le C_2(\beta_T)$ both hold and that at least one of them is strict.
Let the fold map $f \colon S \to T$ be factored as $S \mapsto U \mapsto T$ where $S \mapsto U$ folds short initial segments of $\eta_1,\eta_2$, and $U \mapsto T$ folds the remaining segments, as in the proof of Lemma~\ref{LemmaFoldDistance}. Let $u_1,u_2 \in U$ be the images of $v_1,v_2$. In order to compare the complexities of $\beta_S \subset S$ and $\beta_T \subset T$ we shall move them both into $U$ where we can make the comparison directly.
Letting $\beta_{U} \subset U$ be the image of $\beta_S$, equivalently the preimage of $\beta_T$, the fold map $S \mapsto U$ clearly induces an equivariant bijection from the component set of $\beta_S$ to that of $\beta_{U}$, and so the values of $C_1$, $C_2$, and $C$ on $\beta_S$, $\beta_{U}$ are all equal. Letting $\beta^+_{U} = \beta_{U} \union F \cdot [u_1,u_2]$, the fold map $U \mapsto T$ induces an equivariant bijection from the component set of $\beta^+_{U}$ to that of $\beta_T$, and so the values of $C_1$, $C_2$, and $C$ on $\beta^+_{U}$, $\beta_T$ are equal. So now we must prove the inequalities $C_1(\beta_{U}) \ge C_1(\beta^+_{U})$ and $C_2(\beta_{U}) \le C_2(\beta^+_{U})$ and that at least one of them is strict.
Let $\beta_{U,1}$, $\beta_{U,2}$ be the images of $\beta_{S,1}$, $\beta_{S,2}$, respectively, under the fold map $S \mapsto U$. In the quotient graph of groups $U/F$, notice that $\beta^+_U / F$ is the union of $\beta_U / F$ with the segment obtained by projecting $[u_1,u_2]$; that segment is disjoint from $\beta_U / F$ except at its endpoints, one endpoint lying on $\beta_{U,1}/F$ and the other on $\beta_{U,2} / F$, and the stabilizer of the interior vertex of that segment is trivial. It follows that if $C_1(\beta_U) > C_1(\beta^+_U)$, that is, if $\beta_{U,1}$, $\beta_{U,2}$ are in different component orbits, then $C_1(\beta_U) = C_1(\beta^+_U)+1$ and $C_2(\beta_U) = C_2(\beta^+_U)$. On the other hand, if $C_1(\beta_U) = C_1(\beta^+_U)$, that is, if $\beta_{U,1}$ and $\beta_{U,2}$ are in the same component orbit, then $C_1(\beta_U) = C_1(\beta^+_U)$ and $C_2(\beta_U)+1 = C_2(\beta^+_U)$. In either case exactly one of the two inequalities is strict, which completes the proof.
\end{proof}
\paragraph{Diameter bounds from almost invariant edges.} Consider a foldable map $f \colon S \to T$ and a natural edge $e_T \subset T$. By ignoring unnatural vertices in $e_T$ and their pre-images in $S$ we may speak about $e_T$-edgelets in $S$; these are the closures of the components of $f^\inv(\interior(e_T))$, each of which is a subsegment of a natural edge of $S$. If $S$ contains a unique $e_T$-edgelet and if $e_S \subset S$ is the natural edge containing that edgelet then we say that the pair $e_S,e_T$ is an \emph{almost invariant edge} of the foldable map $f$.
\begin{definition}[Almost invariant edge] \label{DefAIE}
An \emph{almost invariant edge} for a foldable sequence $T_0 \xrightarrow{f_1} T_1 \xrightarrow{f_2}\cdots\xrightarrow{f_K} T_K$ is a sequence of natural edges $e_k \subset T_k$, $k=0,\ldots,K$, such that for $0 \le i < j \le K$ the edges $e_i \subset T_i$ and $e_j \subset T_j$ are an almost invariant edge for the foldable map $f^i_j \colon T_i \to T_j$. Note that an almost invariant edge exists for the whole foldable sequence if and only if one exists for the map $f^0_K \colon T_0 \to T_K$. To see why, observe that for any natural edge $e_K \subset T_K$, letting $m_k$ be the number of $e_K$-edgelets in $T_k$, the sequence $m_k$ is nonincreasing as a function of $k \in \{0,\ldots,K\}$. If there is a natural edge $e_0 \subset T_0$ so that $e_0,e_K$ is an almost invariant edge for the map $f^0_K$ then $m_0=1$, and so $m_k$ has constant value equal to~$1$. Letting $e_k \subset T_k$ be the unique natural edge containing an $e_K$-edgelet in $T_k$, it follows that $(e_k)_{0\le k \le K}$ is an almost invariant edge for the whole foldable sequence. This argument also shows that each almost invariant edge for a foldable sequence $T_0 \mapsto\cdots\mapsto T_K$ is determined by its last term $e_K \subset T_K$.
\end{definition}
Here is our second diameter bound:
\begin{lemma}\label{LemmaBROneB}
Given a foldable sequence $T_0 \mapsto \cdots \mapsto T_K$, the following are equivalent:
\begin{enumerate}
\item \label{ItemAIEexistsSome}
The foldable map $T_0 \mapsto T_K$ has an almost invariant edge.
\item \label{ItemAIEexistsAll}
The foldable sequence $T_0 \mapsto \cdots \mapsto T_K$ has an almost invariant edge.
\item \label{ItemOneEdgeExistsAll}
There exists a one-edge free splitting $R$ such that $d(T_k,R) \le 1$ for all $k=0,\ldots,K$.
\item \label{ItemOneEdgeExistsSome}
There exists a one-edge free splitting $R$ such that $d(T_0,R) \le 1$ and $d(T_K,R) \le 1$.
\end{enumerate}
Furthermore if these hold then $\diam\{T_0,\ldots,T_K\} \le 2$.
\end{lemma}
\begin{proof} The bound in the last sentence clearly follows from~\pref{ItemOneEdgeExistsAll}. We have seen that \pref{ItemAIEexistsSome}$\implies$\pref{ItemAIEexistsAll}, and clearly~\pref{ItemOneEdgeExistsAll}$\implies$~\pref{ItemOneEdgeExistsSome}.
\medskip
We next prove \pref{ItemAIEexistsAll}$\implies$\pref{ItemOneEdgeExistsAll}. Let $(e_k)_{k=0,\ldots,K}$ be an almost invariant edge. Let $\sigma_k \subset T_k$ be the complement of the orbit of the natural edge $e_k$. Define a collapse map $T_k \xrightarrow{[\sigma_k]} R_k$, so $R_k$ is a one-edge free splitting. It suffices to prove for each $k=1,\ldots,K$ that $[R_{k-1}]=[R_k]$. Letting $e'_{k-1} \subset e_{k-1}$ be the unique $e_k$-edgelet in $T_{k-1}$, letting $\sigma'_{k-1} \subset T_{k-1}$ be the complement of the orbit of $e'_{k-1}$, and defining a collapse map $T_{k-1} \xrightarrow{[\sigma'_{k-1}]} R'_{k-1}$, clearly the map $T_{k-1} \mapsto T_k$ induces an equivariant homeomorphism $R'_{k-1} \to R_k$, and so $[R'_{k-1}]=[R_k]$. Also, since $\sigma_{k-1}$ is the maximal natural subgraph of $\sigma'_{k-1}$, the identity map on $T_{k-1}$ induces a collapse map $R_{k-1} \to R'_{k-1}$ which is a bijection on natural vertices and which, on each natural edge of $R_{k-1}$, collapses an initial and/or terminal segment and is otherwise injective. It follows that the collapse map $R_{k-1} \mapsto R'_{k-1}$ is equivariantly homotopic to a conjugacy, and so $[R_{k-1}]=[R'_{k-1}]=[R_k]$.
\medskip
It remains to prove \pref{ItemOneEdgeExistsSome}$\implies$\pref{ItemAIEexistsSome}. After rewording, this says that if $f \colon S \to T$ is a foldable map of free splittings, and if there exists a one-edge free splitting $R$ such that $d(R,S), d(R,T) \le 1$, then $f \colon S \to T$ has an almost invariant edge. Fix an oriented natural edge $e_R \subset R$ with initial and terminal vertices $r_\pm$, and oriented natural edges $e_S \subset S$, $e_T \subset T$ with initial and terminal vertices $s_\pm$, $t_\pm$ respectively, so that there are collapse maps $S,T \mapsto R$ which collapse the complement of the orbits of $e_S,e_T$ and which take $e_S,e_T$ homeomorphically to $e_R$. We shall prove that $e_S,e_T$ is an almost invariant edge for $f \colon S \to T$.
There is a component decomposition $R \setminus e_R = R_- \disjunion R_+$ where $R_\pm$ contains the vertex $r_\pm$ and there are corresponding component decompositions $S \setminus e_S = S_- \disjunion S_+$, $T \setminus e_T = T_- \disjunion T_+$ so that $S_\pm$, $T_\pm$ are the inverse images of $R_\pm$, respectively, under the collapse maps $S,T \mapsto R$ (in general the ``$\pm$'' notation means ``$+$ or $-$, respectively''; for instance ``$S_\pm$ is the inverse image of $R_\pm$'' means ``$S_+$, $S_-$ is the inverse image of $R_+$, $R_-$, respectively''). Note that $R_\pm$, $S_\pm$, $T_\pm$ are natural subgraphs of $R,S,T$, respectively. Also, $r_\pm$ is the unique point on the topological frontier of $R_\pm$ in $R$, and similarly for $S_\pm$, $T_\pm$. Also, each vertex in each of these subgraphs has valence~$\ge 2$ within the subgraph: in, say, $R_-$ this is obvious for all interior vertices, and the frontier vertex $r_-$ is a natural vertex in $R$ having only one $R$-direction not in $R_-$, namely the direction of $e_R$.
It suffices to prove that $f(S_\pm) \subset T_\pm$, which immediately implies that $e_S,e_T$ is an almost invariant edge for $f \colon S \to T$. Assuming that either $f(S_-) \not\subset T_-$ or $f(S_+) \not\subset T_+$, we shall produce a contradiction. The arguments are similar in either case, so we shall assume that $f(S_-) \not\subset T_-$.
Given a free splitting $F \act U$ and a nontrivial $\gamma \in F$, let $\alpha_U(\gamma)$ denote either the axis of $\gamma$ in $U$ or the unique vertex of $U$ fixed by $\gamma$. Let $F_\pm$ denote the set of nontrivial $\gamma \in F$ such that $\alpha_R(\gamma) \subset R_\pm$.
Note that for each natural edge $e \subset R_\pm$ there exists $\gamma \in F_\pm$ whose axis under the action $F \act R$ contains $e$. It follows that
$$R_\pm = \bigcup_{\gamma \in F_\pm} \alpha_R(\gamma)
$$
Note also that
\begin{enumerate}
\item\label{ItemFpm}
$\displaystyle S_\pm = \bigcup_{\gamma \in F_\pm} \alpha_S(\gamma) \quad\text{and}\quad T_\pm = \bigcup_{\gamma \in F_\pm} \alpha_T(\gamma)$
\end{enumerate}
To prove this for $S_-$, say, note first that the collapse map $S \mapsto R$ takes $S_\pm$ to~$R_\pm$ and its restriction to $\alpha_S(\gamma)$ has image $\alpha_R(\gamma)$ for each $\gamma \in F$. If $\alpha_S(\gamma) \subset S_-$ then $\alpha_R(\gamma) \subset R_-$ and hence $\gamma \in F_-$, and since the axes contained in $S_-$ cover $S_-$ we get one inclusion $S_- \subset \cup_{\gamma \in F_-} \alpha_S(\gamma)$. For the other inclusion, if $\alpha_S(\gamma) \not\subset S_-$ then either $\alpha_S(\gamma)$ crosses $e_S$ and so $\alpha_R(\gamma)$ crosses $e_R$, or $\alpha_S(\gamma) \subset S_+$ and so $\alpha_R(\gamma) \subset R_+$, and in either case $\gamma \not\in F_-$.
Next we show:
\begin{enumeratecontinue}
\item\label{ItemCrossingBound}
There exists a finite number $A \ge 0$ such that $T_- \subset f(S_-) \subset N_A(T_-)$
\end{enumeratecontinue}
Applying the inclusion $f(\alpha_S(\gamma)) \supset \alpha_T(\gamma)$ to all $\gamma \in F_-$ and using \pref{ItemFpm} we obtain one inclusion $T_- \subset f(S_-)$. The opposite inclusion follows by applying the \emph{bounded cancellation lemma} to the map $f \colon S \to T$. The version of the lemma that we need comes from \BookZero, Lemma~3.1, and although the hypothesis there requires that $F \act S$ be a properly discontinuous action (called there a ``free simplicial tree''), the first paragraph of that proof works exactly as stated for a map like $f$ that factors as a fold sequence. The conclusion of that first paragraph is that there exists $A$, a \emph{bounded cancellation constant} for $f$, such that for any vertices $x,y \in S$, in the tree $T$ the set $f[x,y]$ is contained in the $A$ neighborhood of the segment $[f(x),f(y)]$. Applying this to our situation, we conclude that for any $\gamma \in F$ we have $f(\alpha_S(\gamma)) \subset N_A(\alpha_T(\gamma))$. Applying this to all $\gamma \in F_-$ and using~\pref{ItemFpm}, it follows that $f(S_-) \subset N_A(T_-)$, completing the proof of~\pref{ItemCrossingBound}.
We show that the only way for $f(S_-)$ to cross $e_T$ is to do so rather sharply:
\begin{enumeratecontinue}
\item\label{ItemThornProp} If $f(S_-) \not\subset T_-$ then $f(S_-) = T_- \union [t_-,f(s_-)]$. Recalling that $t_-$ is the unique frontier point of $T_-$, it follows that $T_- \intersect [t_-,f(s_-)]=\{t_-\}$.
\end{enumeratecontinue}
To see why, by \pref{ItemCrossingBound} the tree $f(S_-) \setminus T_-$ has finite diameter, by assumption of \pref{ItemThornProp} it is nondegenerate, and so it has at least two vertices of valence~$1$, at least one being distinct from $t_-$. The graph $f(S_-)$ therefore has at least one vertex of valence~$1$. But~$s_-$ is the unique frontier vertex of $S_-$ so by the First Derivative Test the point $f(s_-)$ is the unique valence~$1$ vertex of $f(S_-)$.
Combining this with $T_- \subset f(S_-)$, \pref{ItemThornProp} follows immediately.
But from \pref{ItemThornProp} we deduce that $f \colon S \to T$ has at most~$2$ gates at the natural vertex $s_-$, because all of the directions at $s_-$ distinct from the direction of $e_S$ are mapped by $f$ to a single direction at $f(s_-)$, namely, the direction of the segment $[f(s_-),t_-]$. This contradicts that a foldable map has at least~$3$ gates at every natural vertex.
\end{proof}
\subsection{Definitions and properties of free splitting units}
\label{SectionFSUDefsPropsApps}
Given a fold sequence $S_0 \xrightarrow{f_1} S_1 \xrightarrow{f_2} \cdots \xrightarrow{f_K} S_K$, we shall first define what it means for $S_i,S_j$ to ``differ by $<1$ free splitting unit'' for $i,j \in 0,\ldots,K$, and we prove an appropriate stability result for this definition. With this in hand, for any $i,j \in 0,\ldots,K$ we then define the number of free splitting units between $S_i$ and $S_j$. Lemma~\ref{LemmaUnitsLipschitz} proves that the free splitting parameterization along the fold sequence is a Lipschitz parameterization with respect to distance in $\FS'(F)$.
\begin{definition}[$<1$ free splitting unit] \label{DefLessThanOneFSU}
Given a fold sequence $S_0 \xrightarrow{f_1} \cdots \xrightarrow{f_K} S_K$ and $0 \le i < j \le K$, we say that \emph{$S_i,S_j$ differ by $<1$ free splitting unit} if there exists a commutative diagram of the form
$$\xymatrix{
T_i \ar[r] \ar[d]_{[\tau_i]}
& T_{i+1} \ar[r] \ar[d]_{[\tau_{i+1}]}
& \cdots \ar[r] & T_{j-1} \ar[r] \ar[d]^{[\tau_{j-1}]}
& T_j \ar[d]^{[\tau_j]}\\
S'_i \ar[r]
& S'_{i+1} \ar[r]
& \cdots \ar[r] & S'_{j-1} \ar[r]
& S'_j & \\
S_i \ar[r]_{f_{i+1}}\ar[u]^{[\sigma_i]}
& S_{i+1} \ar[r]_{f_{i+2}} \ar[u]^{[\sigma_{i+1}]}
& \cdots \ar[r]_{f_{j-1}} & S_{j-1} \ar[r]_{f_{j}} \ar[u]_{[\sigma_{j-1}]}
& S_j \ar[u]_{[\sigma_j]} \\
}$$
whose top and bottom rectangles are combing rectangles, so that the foldable sequence $T_i \mapsto\cdots\mapsto T_j$ on the top row has either an invariant natural blue-red decomposition of constant complexity or an almost invariant edge (by combining Lemmas~\ref{LemmaBRNatural} and~\ref{LemmaBROneB}, this holds if and only if the foldable map $T_i \mapsto T_j$ has either an invariant natural blue-red decomposition of constant complexity or an almost invariant edge). To complete the definition, we symmetrize the concept by requiring that $S_j,S_i$ differ by $<1$ free splitting unit if and only if $S_i,S_j$ differ by $<1$ free splitting unit.
\end{definition}
\medskip
The following is an immediate consequence of the definition, by restricting to the appropriate subdiagram of the above commutative diagram:
\begin{lemma}[Stability of free splitting units] Given a fold sequence $S_0 \mapsto\cdots\mapsto S_K$ and $0 \le i \le i' \le j' \le j \le K$, if $S_i,S_j$ differ by $<1$ free splitting unit then $S_{i'}$, $S_{j'}$ differ by $<1$ free splitting unit.
\qed\end{lemma}
Using these concepts we get a diameter bound as follows:
\begin{lemma}\label{LemmaFirstFSUBound}
Given a fold sequence $S_0 \mapsto\cdots\mapsto S_K$ and $0 \le i \le j \le K$, if $S_i,S_j$ differ by $<1$ free splitting unit then $\diam\{S_i,\ldots,S_j\} \le 8$.
\end{lemma}
\begin{proof} Consider the commutative diagram in the definition of $<1$ free splitting unit. Combining Lemmas~\ref{LemmaBRNatural} and~\ref{LemmaBROneB}, it follows that $\diam\{T_i,\ldots,T_j\} \le 4$. Since $d(S_k,T_k) \le 2$ for each $k$, we have $\diam\{S_i,\ldots,S_j\} \le 8$.
\end{proof}
\paragraph{The coarse retract axiom.} As an application of the concepts of free splitting units, particularly Lemma~\ref{LemmaBROneB}, we now prove that our definition for projecting $\FS'(S)$ onto fold paths satisfies the first of the three Masur-Minsky axioms:
\begin{proposition}
\label{PropCoarseRetract}
For any fold sequence $S_0 \mapsto \cdots \mapsto S_K$, the associated projection map $\pi \colon \FS'(F) \to [0,\ldots,K]$ satisfies the \emph{Coarse Retraction} axiom with the constant $c = 6$: for any $i=0,\ldots,K$ we have $i \le \pi(S_i)$ and the diameter of the set $\{S_i,\ldots,S_{\pi(S_i)}\}$ is $\le 6$. Furthermore, there is $<1$ free splitting unit between $S_i$ and $S_{\pi(S_i)}$.
\end{proposition}
\begin{proof} We start by noticing that a projection diagram from $S_i$ to $S_0 \mapsto\cdots\mapsto S_K$ of depth $i$ certainly exists, where all vertical arrows are conjugacies and all collapse graphs are empty; see Figure~\ref{FigureCoarseRetractCombRect1}.
\begin{figure}[h]
\centerline{\xymatrix{
S_0 \ar[r] \ar@{=}[d] & \cdots \ar[r] & S_i \ar@{=}[d] \\
S_0 \ar[r] & \cdots \ar[r] & S_i \\
S_0 \ar[r] \ar@{=}[u] & \cdots \ar[r] & S_i \ar[r] \ar@{=}[u] & \cdots \ar[r] & S_K \\
}}
\caption{A projection diagram from $S_i$ to $S_0 \mapsto\cdots\mapsto S_K$ of depth $i$.}
\label{FigureCoarseRetractCombRect1}
\end{figure}%
By definition, $\pi(S_i)$ is the largest integer in the set $[0,\ldots,K]$ such that (after rechoosing the free splitting $F \act S_i$ in its conjugacy class, and after rechoosing the fold sequence $S_0 \mapsto\cdots\mapsto S_K$ in its conjugacy class) a projection diagram from $S_i$ to $S_0 \mapsto\cdots\mapsto S_K$ of depth $\pi(S_i)$ exists. This largest integer therefore satisfies $i \le \pi(S_i)$ and yields a projection diagram as in Figure~\ref{FigureCoarseRetractCombRect2}.
\begin{figure}\centerline{\xymatrix{
T_0 \ar[r] \ar[d] & \cdots \ar[r] & T_i \ar[r] \ar[d] & \cdots \ar[r] & T_{\pi(S_i)} \ar[r] \ar[d] & S_i \\
S'_0 \ar[r] & \cdots \ar[r] & S'_i \ar[r] & \cdots \ar[r] & S'_{\pi(S_i)} \\
S_0 \ar[r] \ar[u] & \cdots \ar[r] & S_i \ar[r] \ar[u] & \cdots \ar[r] & S_{\pi(S_i)} \ar[r] \ar[u] & \cdots \ar[r] & S_K \\
}}
\caption{A maximal depth projection diagram from $S_i$ to $S_0 \mapsto\cdots\mapsto S_K$.}
\label{FigureCoarseRetractCombRect2}
\end{figure}
Let $e' \subset S'_i$ be any natural edge, and let $R$ be the one-edge free splitting obtained from $S'_i$ by collapsing the complement of the orbit of $e'$. Then we have collapse maps $T_i \mapsto S'_i \mapsto R$ and $S_i \mapsto S'_i \mapsto R$, proving that $d(T_i,R) \le 1$ and $d(S_i,R) \le 1$. Applying Lemma~\ref{LemmaBROneB}, the foldable sequence on the top row from $T_i$ to $S_i$ has an almost invariant edge, and by restriction there is an almost invariant edge from $T_i$ to $T_{\pi(S_i)}$. Also by Lemma~\ref{LemmaBROneB}, the set $\{T_i,\ldots,T_{\pi(S_i)}\}$ has diameter $\le 2$, and since $d(S_k,T_k) \le 2$ for each $k$ it follows that $\diam\{S_i,\ldots,S_{\pi(S_i)}\} \le 6$. And by Definition~\ref{DefLessThanOneFSU}, it follows that there is $<1$ free splitting unit between $S_i$ and $S_{\pi(S_i)}$.
\end{proof}
\begin{definition}[General count of free splitting units]
\label{DefGeneralFSU}
\index{free splitting unit}%
\hfill\break
Given a fold sequence $S_0 \mapsto \cdots \mapsto S_K$, for $0 \le i,j \le K$ we say that $S_i,S_j$ \emph{differ by $\ge 1$ free splitting unit} if they do not differ by $<1$ free splitting unit. Then, for $0 \le I \le J \le K$, the \emph{number of free splitting units between $S_I$ and $S_J$} is defined to be the maximum integer $\Upsilon \ge 0$ for which there exists a sequence of integers $I \le i(0) < \cdots < i(\Upsilon) \le J$ of length $\Upsilon+1$, parameterized by integers $0 \le u \le \Upsilon$, such that if $1 \le u \le \Upsilon$ then $S_{i(u-1)}, S_{i(u)}$ differ by $\ge 1$ free splitting unit. Notice that our definitions are consistent in that $\Upsilon=0$ if and only if, following the earlier definition, there is $<1$ free splitting unit between $S_I$ and $S_J$. Also, we symmetrize the definition by saying that the number of free splitting units between $S_J$ and $S_I$ equals the number between $S_I$ and $S_J$.
\end{definition}
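\subparagraph{Example.} As a purely hypothetical illustration of the count, suppose that along a fold sequence $S_0 \mapsto\cdots\mapsto S_7$ two terms $S_i,S_j$ happen to differ by $\ge 1$ free splitting unit if and only if $\abs{i-j} \ge 3$ (a rule consistent with \emph{Stability of free splitting units}). The number of free splitting units between $S_0$ and $S_7$ is then $\Upsilon=2$, realized for instance by the sequence $0 < 3 < 6$; no sequence $0 \le i(0) < i(1) < i(2) < i(3) \le 7$ can have $\ge 1$ free splitting unit between each consecutive pair, since consecutive entries would have to differ by $\ge 3$, forcing $i(3) \ge i(0)+9 > 7$.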
\subparagraph{Remark.} In counting the number of free splitting units between $S_i$ and $S_j$, although this number depends on the fold sequence $S_i \mapsto\cdots\mapsto S_j$ that connects $S_i$ to $S_j$, that fold sequence will always be clear by context and we suppress this dependence in our terminology. Notice that this number does \emph{not} depend on any other details of an ambient fold sequence of which $S_i \mapsto\cdots\mapsto S_j$ might be a subinterval. In particular, the number of free splitting units between $S_i$ and $S_j$ is unaffected if the ambient fold sequence is truncated by deleting an initial segment before $S_i$ and/or a terminal segment after $S_j$.
\medskip
Notice that with the notation as above, if $0 \le u \le v \le \Upsilon$ then the number of free splitting units between $S_{i(u)}$ and $S_{i(v)}$ equals $v-u$. To see why, first note that this number is $\ge v-u$ by construction. If it were $\ge v-u+1$ then one could alter the sequence $i(0) < \cdots < i(\Upsilon)$ by removing the entries $i(u), \ldots, i(v)$ and inserting an increasing sequence of $\ge v-u+2$ entries in the interval $[i(u),i(v)]$ which amongst themselves have $\ge 1$ free splitting unit between any consecutive two. By \emph{Stability of free splitting units}, each new entry would also differ by $\ge 1$ free splitting unit from each of the remaining entries outside of the interval $[i(u),i(v)]$. The new sequence would therefore still have $\ge 1$ free splitting unit between consecutive terms, but would have length $\ge \Upsilon+2$, contradicting the maximality of $\Upsilon$.
One can count free splitting units between $S_I$ and $S_J$ in several ways. For example, define the \emph{front greedy subsequence}\index{front greedy subsequence} from $I$ to $J$ to be the sequence $I=j(0) < j(1) < \cdots < j(\Upsilon') \le J$ obtained by induction as follows: assuming $j(u)$ is defined, and assuming $S_{j(u)}$ and $S_J$ differ by $\ge 1$ free splitting unit, let $j(u+1)$ be the least integer $>j(u)$ such that $S_{j(u)}$ and $S_{j(u+1)}$ differ by $\ge 1$ free splitting unit; the sequence stops when $S_{j(\Upsilon')}, S_J$ differ by $<1$ free splitting unit. We claim that $\Upsilon'$, the length of the front greedy subsequence, is equal to the number of free splitting units between $S_I$ and $S_J$. When $S_I,S_J$ differ by $<1$ free splitting unit the claim is immediate. In the case where $S_I,S_J$ differ by $\ge 1$ free splitting unit, clearly $\Upsilon' \ge 1$; then, noting by stability that $S_{j(u)}$, $S_{j(v)}$ differ by $\ge 1$ free splitting unit for $1 \le u < v \le \Upsilon'$, and using maximality of $\Upsilon$, it follows that $\Upsilon \ge \Upsilon'$. For the opposite inequality we argue by contradiction assuming that $\Upsilon \ge \Upsilon'+1$. Consider any subsequence $I \le i(0) < i(1) < \cdots < i(\Upsilon) \le J$ such that $S_{i(u-1)}, S_{i(u)}$ differ by $\ge 1$ free splitting unit for each $u=1,\ldots,\Upsilon$. By maximality of $\Upsilon$ it follows that between each of the pairs $S_I, S_{i(0)}$ and $S_{i(\Upsilon)}, S_J$ there is $<1$ free splitting unit. By stability it follows that between $S_I$ and $S_{i(1)}$ there is $\ge 1$ free splitting unit. By definition of $j(1)$ we have $j(1) \le i(1)$. By stability it follows that $S_{j(1)}$ and $S_{i(2)}$ differ by $\ge 1$ free splitting unit from which it follows that $j(2) \le i(2)$. Continuing by induction we see that $j(u) \le i(u)$ for $u=1,\ldots,\Upsilon'$. But since $j(\Upsilon') \le i(\Upsilon') < i(\Upsilon'+1) \le i(\Upsilon) \le J$ and since $S_{i(\Upsilon')}, S_{i(\Upsilon'+1)}$ differ by $\ge 1$ free splitting unit, it follows by stability that $S_{j(\Upsilon')}, S_J$ differ by $\ge 1$ free splitting unit, which contradicts the definition of $\Upsilon'$.
In a similar fashion one proves that the number of free splitting units is equal to the length of the \emph{back greedy subsequence}\index{back greedy subsequence} $I \le \ell(\Upsilon'') < \ell(\Upsilon''-1) < \cdots < \ell(1) < \ell(0) = J$, defined as follows: assuming by induction that $\ell(u)$ is defined and that $S_I$ and $S_{\ell(u)}$ differ by $\ge 1$ free splitting unit, $\ell(u+1)$ is the greatest integer $<\ell(u)$ such that $S_{\ell(u+1)}$ and $S_{\ell(u)}$ differ by $\ge 1$ free splitting unit; the sequence stops when $S_I$, $S_{\ell(\Upsilon'')}$ differ by $<1$ free splitting unit.
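Continuing the hypothetical illustration following Definition~\ref{DefGeneralFSU} (where $S_i,S_j$ differ by $\ge 1$ free splitting unit if and only if $\abs{i-j} \ge 3$), the front greedy subsequence from $0$ to $7$ is $j(0)=0 < j(1)=3 < j(2)=6$, stopping because $S_6,S_7$ differ by $<1$ free splitting unit, and the back greedy subsequence is $\ell(2)=1 < \ell(1)=4 < \ell(0)=7$; both have length $2$, in agreement with the count $\Upsilon=2$ computed there.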
The following result says that a fold path which is parameterized by free splitting units is a coarse Lipschitz path in $\FS'(F)$:
\begin{lemma}\label{LemmaUnitsLipschitz} For any fold path $S_0 \mapsto\cdots\mapsto S_K$ and any $0 \le I \le J \le K$, if the number of free splitting units between $S_I$ and $S_J$ equals $\Upsilon$ then the diameter in $\FS'(F)$ of the set $\{S_I,\ldots,S_J\}$ is $\le 10 \Upsilon + 8$.
\end{lemma}
\begin{proof} If $\Upsilon=0$, that is if $S_I,S_J$ differ by $<1$ free splitting unit, then by Lemma~\ref{LemmaFirstFSUBound} we have $\diam\{S_I,\ldots,S_J\} \le 8$.
If $\Upsilon \ge 1$, let $I=i(0) < \cdots < i(\Upsilon) \le J$ be the front greedy subsequence from $S_I$ to $S_J$. For $u=1,\ldots,\Upsilon$, the free splittings $S_{i(u-1)}$ and $S_{i(u)-1}$ differ by $<1$ free splitting unit, and so $\diam\{S_{i(u-1)},\ldots,S_{i(u)-1}\}\le 8$. By Lemma~\ref{LemmaFoldDistance} we have $d(S_{i(u)-1},S_{i(u)}) \le 2$ and so $\diam\{S_{i(u-1)}, \ldots, S_{i(u)}\} \le 10$. It follows in turn that $\diam\{S_I=S_{i(0)},\ldots,S_{i(\Upsilon)}\} \le 10 \Upsilon$. Since $S_{i(\Upsilon)},S_J$ differ by $<1$ free splitting unit we have $\diam\{S_{i(\Upsilon)},\ldots,S_J\} \le 8$, and putting it all together, $\diam \{S_I,\ldots,S_J\} \le 10\Upsilon + 8$.
\end{proof}
We also need the following lemma which gives a coarse triangle inequality for free splitting units within a fold path:
\begin{lemma}\label{LemmaFSUTriangleInequality}
Given a fold path $S_0 \mapsto\cdots\mapsto S_K$ and $i,j,k \in \{0,\ldots,K\}$, if $\Upsilon_1$ is the number of free splitting units between $S_i$ and $S_j$ and $\Upsilon_2$ is the number between $S_j$ and $S_k$ then the number $\Upsilon$ between $S_i$ and $S_k$ satisfies $\Upsilon \le \Upsilon_1 + \Upsilon_2 + 1$.
\end{lemma}
\begin{proof} In the case where $j$ is between $i$ and $k$, using \emph{symmetry of free splitting units} we may assume that $i \le j \le k$. Let $i=i(0)<\cdots<i(\Upsilon)\le k$ be the front greedy sequence from $S_i$ to $S_k$. Clearly the front greedy sequence from $S_i$ to $S_j$ is an initial segment, implying that $i(\Upsilon_1) \le j$ and $i(\Upsilon_1+1) > j$, and so we have a subsequence $S_{i(\Upsilon_1+1)},\ldots,S_{i(\Upsilon)}$ of $S_j,\ldots,S_k$ with the property that between any two adjacent elements of this subsequence there is $\ge 1$ free splitting unit.
By Definition~\ref{DefGeneralFSU} and the hypothesis on $\Upsilon_2$, the length of this subsequence is therefore $\le \Upsilon_2+1$, giving us $\Upsilon - \Upsilon_1 \le \Upsilon_2+1$.
In the case where $j > \max\{i,k\}$, again using symmetry we may assume $i \le k < j$. Let $i=i(0) < \cdots < i(\Upsilon_1) \le j$ be the front greedy subsequence between $S_i$ and $S_j$. Again the front greedy subsequence between $S_i$ and $S_k$ is an initial subsegment and so $\Upsilon \le \Upsilon_1 \le \Upsilon_1 + \Upsilon_2 + 1$.
In the case where $j < \min\{i,k\}$, using symmetry we assume $j < k \le i$, and we proceed similarly using the back greedy subsequence between $S_j$ and $S_i$.
\end{proof}
\section{Proof of the Main Theorem}
\label{SectionMainProof}
We begin with a quick sketch of the proof.
Consider a free splitting $T$, a fold sequence $S_0 \mapsto\cdots\mapsto S_K$, and a maximal depth projection diagram which defines the projection ${k_T} \in \{0,\ldots,K\}$ from $T$ to this fold sequence. The form of this projection diagram can be viewed in Section~\ref{SectionCombingRectangles}, Figure~\ref{FigureProjDiagram}, the top row of which is a foldable sequence $T_0 \mapsto\cdots\mapsto T_{k_T} \mapsto T$. We then apply Lemma~\ref{LemmaFoldSequenceConstruction} to factor the final foldable map $T_{k_T} \mapsto T$ as a fold sequence of the form $T_{k_T} \mapsto\cdots\mapsto T_L=T$, which we then paste into the foldable sequence on the top row of the projection diagram to get an ``augmented'' projection diagram. Figure~\ref{FigureMaxProjDiagram} shows the original, unaugmented projection diagram and the augmented version in the same picture. Note that the top row of the augmented projection diagram is the foldable sequence $T_0 \mapsto\cdots\mapsto T_{k_T} \mapsto\cdots\mapsto T_L=T$. See Section~\ref{SectionStatmentFSUContraction} for more details on augmented projection diagrams.
Consider also a geodesic in the \nb{1}skeleton of $\FS'(F)$ starting with $T$ and ending with some free splitting~$R$. This geodesic is a zig-zag path; suppose for concreteness that it starts with a collapse and ends with an expand, $T = T_L^0 \collapses T_L^1 \expands T_L^2 \collapses T_L^3 \expands T_L^4 \collapses \cdots \expands T_L^{D}=R$, and so $D = d(T,R)=d(T^0_L,T^D_L)$ is even. By combing the foldable sequence $T_0 \mapsto\cdots\mapsto T_{k_T} \mapsto\cdots\mapsto T_L=T$ across each collapse and expansion in this zig-zag path one at a time, we obtain ``The Big Diagram, Step 0'' depicted in Section~\ref{SectionProofFSUContraction}, Figure~\ref{FigureBigDiagram0}, which is built out of the projection diagram and an $L \cross D$ rectangle composed of $D$ combing rectangles. Note that the interior even terms along the zig-zag path, the free splittings $T_L^2, T_L^4, \ldots, T_L^{D-2}$, are ``peaks'' of the zig-zag. The big $L \cross D$ rectangle has the form of a corrugated aluminum roof in which the interior even horizontal rows are peaks of the corrugations.
Our technique can be described as ``pushing down the peaks''. In brief, we prove that if one backs up from $T_L$ to some earlier term in the fold path $T_{k_T}\mapsto\cdots\mapsto T_L$, moving back a certain fixed number of free splitting units, then the big diagram can be simplified by pushing the first corrugation peak down, reducing the number of corrugation peaks by~$1$, as shown in ``The Big Diagram, Step 1''. These ``back up --- pushdown'' arguments are found in Section~\ref{SectionPushingDownPeaks}. Therefore, if the number of free splitting units between $T_{k_T}$ and $T_L$ is greater than a certain multiple of the number of peaks in the zig-zag path from $T_L$ to $T^D_L$ then the number of corrugation peaks in the Big Diagram can be reduced to zero. With one final ``back up --- push down'' step that uses up some of the original projection diagram for $T_L$, one obtains a projection diagram from $R$ to $S_0 \mapsto\cdots\mapsto S_K$, from which one concludes that the projection of $R$ to $S_0\mapsto\cdots\mapsto S_K$ is not much further back (measured in free splitting units) than $S_{k_T}$ which is the projection of $T$.
The exact statement proved by these arguments is contained in Proposition~\ref{PropFSUContraction} which can be regarded as a reformulation of the \emph{Coarse Lipschitz} and \emph{Desymmetrized strong contraction} axioms in terms of free splitting units, and which quickly implies those axioms and the main theorem as shown in Section~\ref{SectionStatmentFSUContraction}. The proof of Proposition~\ref{PropFSUContraction} itself is carried out in Sections~\ref{SectionPushingDownPeaks} and~\ref{SectionProofFSUContraction}.
\subsection{Desymmetrized strong contraction reformulated and applied}
\label{SectionStatmentFSUContraction}
In Proposition~\ref{PropFSUContraction} we reformulate the \emph{Coarse Lipschitz} and \emph{Desymmetrized strong contraction} axioms as a joint statement expressed in terms of free splitting units. The proposition will be proved in later subsections of Section~\ref{SectionMainProof}.
After stating the proposition, we use it to finish off the proof of the main theorem. We also use it to prove Proposition~\ref{PropFoldPathQuasis} which describes precisely how to reparameterize fold paths in terms of free splitting units so as to obtain uniform quasigeodesics in~$\FS'(F)$.
To set up Proposition~\ref{PropFSUContraction}, consider any fold path $S_0 \mapsto \cdots \mapsto S_K$, any free splitting $F \act T$ and any projection diagram of maximal depth $\pi(T)=k_T \in [0,\ldots,K]$ as depicted in Figure~\ref{FigureMaxProjDiagram}. Applying Lemma~\ref{LemmaFoldSequenceConstruction}, we may factor the foldable map $f \colon T_{k_T} \to T$ as a fold sequence, and then replace $f$ with this factorization in the top line of the projection diagram, to obtain a sequence of maps
$$T_0 \xrightarrow{f_1} \cdots\xrightarrow{f_{k_T}} T_{k_T} \xrightarrow{f_{k_T+1}} \cdots \xrightarrow{f_L} T_L=T
$$
This sequence of maps is still foldable --- if $0 \le k \le k_T$ then $f^k_L$ is foldable by virtue of being a map in the original foldable sequence on the top line of the unaugmented projection diagram; and if $k_T < k \le L$ then $f^k_L$ is foldable by virtue of being a map in the newly inserted fold sequence (note that if one replaces any but the last map in a foldable sequence with a fold factorization, this trick does not work --- the resulting sequence need not be foldable). We therefore obtain the \emph{augmented projection diagram}\index{projection diagram!augmented} from $T$ to $S_0\mapsto\cdots\mapsto S_K$ of maximal depth, as depicted also in the Figure~\ref{FigureMaxProjDiagram}.
\begin{figure}[h]
$$\xymatrix{
T_0 \ar[r] \ar[d] & \cdots \ar[r] & T_{k_T} \ar[r] \ar[d] \ar@/^1pc/[rr]^f & \cdots \ar[r] & T_L = T \\
S'_0 \ar[r] & \cdots \ar[r] & S'_{k_T} \\
S_0 \ar[r] \ar[u] & \cdots \ar[r] & S_{k_T} \ar[r] \ar[u] & \cdots \ar[r] & S_K \\
}$$
\caption{An \emph{augmented} projection diagram from $T$ to $S_0\mapsto\cdots\mapsto S_K$ of maximal depth~$k_T$ (with the straight arrows from $T_{k_T}$ to $T$) is obtained from a maximal depth projection diagram (with the curved arrow from $T_{k_T}$ to $T$ labelled $f$) by inserting a fold sequence factorization of the foldable map $f \colon T_{k_T} \to T$. After this insertion the whole sequence $T_0 \mapsto\cdots\mapsto T_{k_T} \mapsto\cdots\mapsto T_L=T$ in the top row is still a foldable sequence.}
\label{FigureMaxProjDiagram}
\end{figure}
\begin{proposition}[Strong contraction in terms of free splitting units]
\label{PropFSUContraction}
\quad \\
Letting $b_1 = 4 \rank(F)-3$, the following holds. Consider a fold path $S_0 \mapsto \cdots \mapsto S_K$, a free splitting $F \act T$ with projection $\pi(T)=k_T \in [0,\ldots,K]$, and an augmented projection diagram of maximal depth $k_T$ as notated in Figure~\ref{FigureMaxProjDiagram}. Let $\Upsilon$ be the number of free splitting units between $T_{k_T}$ and $T_L=T$. If $F \act R$ is a free splitting such that $d(T,R) \le \max\left\{2 \lfloor \Upsilon / b_1 \rfloor,1\right\}$, and if the number of free splitting units between $S_0$ and $S_{k_T}$ is $\ge b_1$, then there exists $l \in [0,\pi(R)]$ such that the number of free splitting units between $S_l$ and $S_{k_T}$ is~$\le b_1$.
\end{proposition}
\subparagraph{Remark.} To put it more plainly, Proposition~\ref{PropFSUContraction} says that the projection of $R$ to the fold path $S_0 \mapsto\cdots\mapsto S_K$ is no farther to the left of the projection of $T$ than a bounded number of free splitting units, as long as $d(T,R)$ is at most some bounded proportion of the number~$\Upsilon$. One can think of the number $\Upsilon$ as being a stand-in for the distance from $T$ to the fold path $S_0 \mapsto\cdots\mapsto S_K$ (a posteriori one sees that $\Upsilon$ is indeed quasicomparable to that distance). Notice that the proposition does not apply if no projection diagram exists for $T$, nor if the number of free splitting units between $S_0$ and $S_{k_T}$ is too small; in either of these cases the projection of $T$ is close to $S_0$ in $\FS'(F)$. These special situations are handled in Case~1 of the proof of the Main Theorem.
Note that Proposition~\ref{PropFSUContraction} is trivially true when $\pi(R) \ge k_T$, by taking $l=k_T$. The real meat of the proposition is when $\pi(R) < k_T$.
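For a concrete instance of the hypotheses, if $\rank(F)=2$ then $b_1=5$; if there are $\Upsilon = 12$ free splitting units between $T_{k_T}$ and $T$, and $\ge 5$ free splitting units between $S_0$ and $S_{k_T}$, then the conclusion applies to every free splitting $R$ with $d(T,R) \le 2\lfloor 12/5 \rfloor = 4$. Each additional $5$ free splitting units between $T_{k_T}$ and $T$ enlarges the allowed distance $d(T,R)$ by~$2$.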
\bigskip
Proposition~\ref{PropFSUContraction} is proved in Sections~\ref{SectionPushingDownPeaks} and~\ref{SectionProofFSUContraction}. For the rest of Section~\ref{SectionStatmentFSUContraction} we shall apply Proposition~\ref{PropFSUContraction} to prove first the Main Theorem and then Proposition~\ref{PropFoldPathQuasis} regarding quasigeodesics in $\FS'(F)$.
\begin{proof}[Proof of the Main Theorem] \qquad As we showed earlier, Proposition~\ref{PropProjToFoldPath} implies Proposition~\ref{PropFoldContractions} which implies the Main Theorem. To prove Proposition~\ref{PropProjToFoldPath} we must prove that the projections to fold paths in $\FS'(F)$ satisfy the \emph{Coarse retraction}, \emph{Coarse Lipschitz}, and \emph{Desymmetrized strong contraction} axioms given in Section~\ref{SectionMasurMinsky}, with uniform constants depending only on $\rank(F)$. In Proposition~\ref{PropCoarseRetract} we already did this for the \emph{Coarse retraction} axiom. We turn to the other two axioms.
Fix the fold path $S_0 \mapsto\cdots\mapsto S_K$ and free splittings $F\act T,R$ with projections $\pi(T),\pi(R) \in [0,\ldots,K]$. For verifying both the \emph{Coarse Lipschitz} and \emph{Desymmetrized strong contraction} axioms we may assume that $\pi(R) \le \pi(T)$. We seek to bound the diameter in $\FS'(F)$ of the set $\{S_{\pi(R)},\ldots,S_{\pi(T)}\}$. If $\pi(T)=0$ then $\pi(R)=0$ and we are done. Otherwise, after rechoosing $T$ in its conjugacy class and rechoosing $S_0 \mapsto\cdots\mapsto S_K$ in its equivalence class, we may choose an augmented maximal depth projection diagram for $T$ and $S_0 \mapsto\cdots\mapsto S_K$ as notated in Figure~\ref{FigureMaxProjDiagram}. Let $\Upsilon$ be the number of free splitting units between $T_{k_T}$ and $T_L = T$.
Throughout the proof we denote the constants from Lemma~\ref{LemmaUnitsLipschitz} as
$$L=10, \quad C=8
$$
It follows that along any fold path, for any two terms of that path between which the number of free splitting units is at most
$$b_1 = 4 \rank(F)-3
$$
the diameter in $\FS'(F)$ of the segment between those two terms is at most
$$c = L b_1 + C = 40 \rank(F) - 22
$$
This is the value of $c$ that will be used in verifying the two axioms.
\subparagraph{Case 1:} Suppose that the number of free splitting units between $S_0$ and $S_{\pi(T)}$ is $<b_1$. Applying the inequality $0 \le \pi(R) \le \pi(T)$ together with \emph{Stability of free splitting units}, it follows that the number of free splitting units between $S_{\pi(R)}$ and $S_{\pi(T)}$ is~$< b_1$. By Lemma~\ref{LemmaUnitsLipschitz} the diameter of the set $\{S_{\pi(R)},\ldots,S_{\pi(T)}\}$ is~$\le c$, which is the common conclusion of the \emph{Coarse Lipschitz} and \emph{Desymmetrized strong contraction} axioms. In this case, those axioms are verified using any values of $a,b$.
\subparagraph{Case 2:} Suppose that the number of free splitting units between $S_0$ and $S_{\pi(T)}$ is $\ge b_1 > 0$.
We claim that the following statement holds:
\begin{itemize}
\item[$(*)$] If $d(T,R) \le \max\left\{2 \lfloor \Upsilon / b_1 \rfloor,1\right\}$ then the number of free splitting units between $S_{\pi(R)}$ and $S_{\pi(T)}$ is $\le b_1$, and so the diameter in $\FS'(F)$ of the set $\{S_{\pi(R)},\ldots,S_{\pi(T)}\}$ is~$\le c$.
\end{itemize}
To prove $(*)$, assume that $d(T,R) \le \max\left\{2 \lfloor \Upsilon / b_1 \rfloor,1\right\}$. Using the hypothesis of Case~2 we may apply Proposition~\ref{PropFSUContraction}, concluding that for some $l \in [0,\pi(R)]$ the number of free splitting units between $S_l$ and $S_{k_T}$ is $\le b_1$. Using \emph{Stability of free splitting units} it follows that the number of free splitting units between $S_{\pi(R)}$ and $S_{k_T}$ is $\le b_1$. Applying Lemma~\ref{LemmaUnitsLipschitz} we have $\diam\{S_{\pi(R)},\ldots,S_{\pi(T)}\} \le c$.
Since $(*)$ applies whenever $d(T,R) \le 1$, the \emph{Coarse Lipschitz} axiom follows immediately.
To prove \emph{Desymmetrized strong contraction} we shall produce constants $a,b > 0$ so that if $a \le d(T,\{S_0,\ldots,S_K\})$ and $d(T,R) \le b \,\cdot\, d(T,\{S_0,\ldots, S_K\})$ then $d(T,R) \le 2 \lfloor \Upsilon / b_1 \rfloor$, for then $(*)$ applies and so $\diam\{S_{\pi(R)},\ldots,S_{\pi(T)}\} \le c$.
Consider first the case that $\Upsilon < 2b_1$. By Lemma~\ref{LemmaUnitsLipschitz} we have $d(T_{k_T},T) < 2 b_1 L + C$ and so $d(T,S_0\mapsto\cdots\mapsto S_K) < 2 b_1 L + C +2$. By taking $a= 2 b_1 L + C + 2 = 80 \rank(F) - 50$ we may dispense with this case.
Consider next the case that $\Upsilon \ge 2 b_1$. It follows that $\Upsilon \ge 1$. We have $\Upsilon/b_1 \le 2 (\Upsilon / b_1 - 1)$ from which it follows that
$$\Upsilon / b_1 \le 2 \lfloor \Upsilon / b_1 \rfloor
$$
The number of free splitting units between $T_{k_T}$ and $T_L=T$ equals $\Upsilon$ and so by Lemma~\ref{LemmaUnitsLipschitz} we have $d(T,T_{k_T}) \le L \Upsilon + C$. It follows that $d(T,S_{k_T}) \le L \Upsilon+C+2$, which implies that $d(T,S_0\mapsto\cdots\mapsto S_K) \le L \Upsilon+C+2$. Let
\begin{align*}
b &= \frac{1}{80 \rank(F) - 60} = \frac{1}{b_1(L + C + 2)} \\
&\le \frac{1}{b_1(L + \frac{C+2}{\Upsilon})} = \frac{\Upsilon}{b_1(L \Upsilon+C+2)}
\end{align*}
where the inequality follows from $\Upsilon \ge 1$. We then have
$$b(L \Upsilon + C + 2) \le \Upsilon / b_1
$$
It follows that if $d(T,R) \le b \cdot d(T,S_0\mapsto\cdots\mapsto S_K)$ then $d(T,R) \le \Upsilon / b_1 \le 2 \lfloor \Upsilon / b_1 \rfloor$ and we are done, subject to proving Proposition~\ref{PropFSUContraction}.
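For concreteness, when $\rank(F)=2$ the constants produced by this argument are $b_1 = 5$, $c = 58$, $a = 110$, and $b = 1/100$.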
\end{proof}
\paragraph{Quasigeodesic reparameterization of fold paths.} We can also use these arguments to show how fold paths can be reparameterized, using free splitting units, to give a system of uniform quasigeodesics in $\FS'(F)$. Recall that each fold sequence $S_0 \mapsto\cdots\mapsto S_M$ can be interpolated by a continuous edge path in $\FS'(F)$: for each fold $S_{m-1} \mapsto S_m$, the vertices $S_{m-1},S_m$ are connected in $\FS'(F)$ by an edge path of length $2$, $1$, or $0$, by Lemma~\ref{LemmaFoldDistance}. Let $\Upsilon$ be the number of free splitting units from $S_0$ to~$S_M$. Choose any sequence $0 \le m_0 < m_1 < \cdots < m_\Upsilon \le M$ such that for $u=1,\ldots,\Upsilon$ there is $\ge 1$ free splitting unit between $S_{m_{u-1}}$ and~$S_{m_u}$. Notice that by \emph{Stability of Free Splitting Units}, the numbers of free splitting units between $S_0$ and $S_{m_1}$ and between $S_{m_{\Upsilon-1}}$ and $S_M$ are each $\ge 1$, and so we may rechoose the first and last terms of the sequence so that $0=m_0 < m_1 < \cdots < m_\Upsilon=M$. Choose a continuous parameterization of the interpolating edge path of the form $\gamma \colon [0,\Upsilon] \to \FS'(F)$ such that $S_{m_u} = \gamma(u)$. We call this a \emph{free splitting parameterization} of the fold sequence $S_0 \mapsto\cdots\mapsto S_M$.
We use Proposition~\ref{PropFSUContraction}, in particular some details of the preceding proof, in order to prove the following result:
\begin{proposition}\label{PropFoldPathQuasis}
There exist constants $k,c$ depending only on $\rank(F)$ such that any free splitting parameterization $\gamma \colon [0,\Upsilon] \to \FS'(F)$ of any fold path $S_0 \mapsto\cdots\mapsto S_M$ is a $k,c$ quasigeodesic in $\FS'(F)$, that is,
$$ \frac{1}{k} \abs{s-t} - c \le d(\gamma(s),\gamma(t)) \le k \abs{s-t}+c \quad\text{for all $s,t \in [0,\Upsilon]$.}
$$
\end{proposition}
\begin{proof} We continue with the constants $L=10$, $C=8$, $b_1=4 \rank(F)-3$ from the previous proof.
As shown in the remarks following Definition~\ref{DefGeneralFSU}, for each integer $u=1,\ldots,\Upsilon$ there is exactly~$1$ free splitting unit between $S_{m_{u-1}}$ and $S_{m_u}$. Applying Lemma~\ref{LemmaUnitsLipschitz} it follows that for each $u=1,\ldots,\Upsilon$ the set $\{S_{m_{u-1}},\ldots,S_{m_u}\}$ has diameter $\le L+C$. Combining this with the fact that the edge path interpolating each fold has length~$\le 2$ it follows that
$$(**) \qquad\qquad \diam(\gamma[u-1,u]) \le L+C+1 \quad\text{for each $u = 1,\ldots,\Upsilon$}
$$
Given $s,t \in [0,\Upsilon]$, if there is no integer in the interval $[s,t]$ then $d(\gamma(s),\gamma(t)) \le L+C+1$. Otherwise we take $u,v \in [s,t]$ to be the smallest integer $\ge s$ and the largest integer $\le t$, respectively, and we have
\begin{align*}
d(\gamma(s),\gamma(t)) &\le d(\gamma(u),\gamma(v)) + d(\gamma(s),\gamma(u)) + d(\gamma(t),\gamma(v)) \\ & \le (L+C+1) \abs{v-u} + 2(L+C+1) \\ &\le k \abs{s-t} + c
\end{align*}
using any $k \ge L+C+1=19$ and any $c \ge 2(L+C+1)=38$ (and we note that this inequality also holds in the previous case where there is no integer in $[s,t]$). This proves the second inequality of the proposition.
To prove the first inequality, we first prove it for integer values $u \le v$ in $[0,\Upsilon]$. Fix a geodesic edge path $\rho$ of length $D=d(\gamma(u),\gamma(v))$ connecting $\gamma(u)$ to $\gamma(v)$ in $\FS'(F)$. Project $\rho$ to the fold path $S_0\mapsto\cdots\mapsto S_M$. By the statement $(*)$ above, within this fold path there are $\le b_1$ free splitting units between the projections of any two consecutive vertices of $\rho$. By applying Lemma~\ref{LemmaFSUTriangleInequality}, the coarse triangle inequality for free splitting units, it follows that there are $\le D (b_1+1)$ free splitting units between $S_{\pi(\gamma(u))}$ and $S_{\pi(\gamma(v))}$, the projections of $\gamma(u)$ and~$\gamma(v)$, respectively. By Proposition~\ref{PropCoarseRetract}, where the \emph{Coarse retraction} axiom was proved, the number of free splitting units between $S_{m_u}=\gamma(u)$ and $S_{\pi(\gamma(u))}$, and between $S_{m_v} = \gamma(v)$ and $S_{\pi(\gamma(v))}$, are both~$<1$. By applying Lemma~\ref{LemmaFSUTriangleInequality} again, the number of free splitting units between $S_{m_u}$ and $S_{m_v}$ is $\le D(b_1+1)+2$, that is, $\abs{u-v} \le D(b_1+1)+2$.
For arbitrary $s<t$ in $[0,\Upsilon]$, letting $u \in [0,\Upsilon]$ be the largest integer $\le s$ and $v \in [0,\Upsilon]$ be the smallest integer $\ge t$, we have $\gamma(s) \in \gamma[u,u+1]$ and $\gamma(t) \in \gamma[v-1,v]$. By $(**)$ we therefore have $d(\gamma(s),\gamma(u))$, $d(\gamma(t),\gamma(v)) \le L+C+1=19$. It follows that:
\begin{align*}
\abs{s-t} &\le \abs{u-v} \\
&\le (b_1+1)\, d(\gamma(u),\gamma(v)) + 2 \\
\frac{1}{b_1+1} \abs{s-t} - \frac{2}{b_1+1} & \le d(\gamma(u),\gamma(v)) \\
&\le d(\gamma(s),\gamma(t)) + d(\gamma(s),\gamma(u)) + d(\gamma(t),\gamma(v)) \\
&\le d(\gamma(s),\gamma(t)) + 19 + 19 \\
\frac{1}{b_1+1} \abs{s-t} - \left( \frac{2}{b_1+1} + 38 \right) &\le d(\gamma(s),\gamma(t))
\end{align*}
This proves that the first inequality is true for any $\displaystyle k \ge b_1+1=4 \rank(F)-2$ and any \break $\displaystyle c \ge \frac{2}{b_1+1} + 38 = \frac{1}{2\rank(F)-1} + 38$.
Proposition~\ref{PropFoldPathQuasis} is therefore proved for $k = \max\{19,4\rank(F)-2\}$ and $c=39$.
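For instance, at $\rank(F) \le 5$ these values give $k = 19$, while for $\rank(F) \ge 6$ the rank term dominates and $k = 4\rank(F)-2$.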
\end{proof}
\subsection{Pushing down peaks}
\label{SectionPushingDownPeaks}
Recall that every geodesic in $\FS'(F)$ is a zig-zag edge path. On a zig-zag subpath of the form $T^{i-1} \expandsto T^i \collapsesto T^{i+1}$, where $T^i$ is the domain of two incident collapse maps $T^i \mapsto T^{i-1}$ and $T^i \mapsto T^{i+1}$, we say that $T^i$ is a \emph{peak}. If on the other hand $T^{i-1} \collapsesto T^i \expandsto T^{i+1}$ then $T^i$ is a \emph{valley}.
We start with a simplistic technique that can be used to shortcut a zig-zag path, and we work up to a technique, described in Proposition~\ref{PropPushdownInToto}, that will be central to the proof of the Main Theorem. In each case the intuition is to ``push down the peak'', thereby reducing length.
\paragraph{The peak of a W diagram.} A \emph{W diagram}\index{W diagram} or a \emph{W zig-zag} is a length~$4$ zig-zag path with a peak in the middle, sometimes depicted as in Figure~\ref{FigureWDiagram}.
\begin{figure}[h]
$$\xymatrix{
T^4 \ar[dr] && T^2 \ar[dl]_{[\beta]} \ar[dr]^{[\rho]} && T^0 \ar[dl] \\
& T^3 && T^1
}$$
\caption{A W diagram}
\label{FigureWDiagram}
\end{figure}
We think of $\beta,\rho$ as the ``blue'' and ``red'' subgraphs of $T^2$. In this generality, an edgelet of $T^2$ may be in either, or both, or neither of $\beta,\rho$. The subgraphs $\beta,\rho$ therefore do \emph{not} necessarily form a blue--red decomposition of $T^2$ as in Definition~\ref{DefBRDecompos}, which requires that $\beta,\rho$ have no edgelets in common and that their union is all of $T^2$; furthermore, even if $\beta,\rho$ did form a blue--red decomposition, they need not be a \emph{natural} one, which requires in addition that they both be natural subgraphs of $T^2$. Soon, though, we shall narrow down to a key special case where $\beta,\rho$ is indeed a natural blue--red decomposition.
Pushing down the peak is easy when $\beta\union\rho$ is a proper subgraph of $T^2$, for in that case the given W diagram extends to a commutative diagram of collapse maps as shown in the diagram in Figure~\ref{FigureSimplistic}.
\begin{figure}
$$\xymatrix{
T^4 \ar[dr] && T^2 \ar[dl]_{[\beta]} \ar[dr]^{[\rho]} \ar[dd]|{[\beta \union \rho]} && T^0 \ar[dl] \\
& T^3 \ar[dr]_{[\rho\setminus(\beta\intersect\rho)]} && T^1 \ar[dl]^{[\beta\setminus(\beta\intersect\rho)]} \\
&& T^h
}$$
\caption{A simplistic pushdown works if $\beta\union\rho \subset T^2$ is a proper subgraph.}
\label{FigureSimplistic}
\end{figure}
In that diagram, collapse of $\beta\union\rho \subset T^2$ produces $T^h$. The collapse map $T^2 \xrightarrow{[\rho]} T^1$ takes the edgelets of the subgraph $\beta\setminus(\beta\intersect\rho)\subset T^2$ bijectively to the edgelets of a subgraph of $T^1$ which by convention is also denoted $\beta\setminus(\beta\intersect\rho)$; collapse of this subgraph also produces~$T^h$. Similarly, collapse of $\rho\setminus(\beta\intersect\rho) \subset T^3$ produces $T^h$. Compositions of collapse maps being collapse maps, we obtain a length~2 zig-zag path $T^0 \rightarrow T^h \leftarrow T^4$ that cuts short the original length~4 zig-zag path --- we have successfully ``pushed down the peak''.
The same argument works on a length~$3$ zig-zag path --- which can be visualized by cutting off one of the terminal edges of a W zig-zag --- with the result that if the union of the two collapse graphs at the peak of the zig-zag forms a proper subgraph then there is a length~2 path with the same endpoints. We summarize as follows:
\begin{lemma}\label{LemmaRedBlueUnion}
Given a W zig-zag as notated in Figure~\ref{FigureWDiagram} or a length~3 zig-zag obtained from Figure~\ref{FigureWDiagram} by cutting off one of the terminal edges, if the path is geodesic then $T^2 = \beta \union \rho$.
\qed\end{lemma}
\paragraph{Normalizing a W diagram.} We shall also need to push down the peak of certain W diagrams in the situation where $T^2 = \beta \union \rho$. In this situation it is convenient to first alter the W diagram to ensure that $\beta \intersect \rho$ contains no edgelet of $T^2$, equivalently $\beta,\rho$ is a blue--red decomposition of~$T^2$ as in Definition~\ref{DefBRDecompos}. If $\beta\intersect\rho$ does contain an edgelet of $T^2$ then, since $\beta,\rho$ are proper subgraphs, the given W diagram is contained in a commutative diagram of collapse maps as shown in the diagram in Figure~\ref{FigureNormalization}, called a \emph{normalization diagram}.\index{normalization diagram}
\begin{figure}[h]
$$\xymatrix{
T^4 \ar[dddrr] &&&& T^2 \ar[dddll]_{[\beta]} \ar[dd]|{[\beta\intersect\rho]} \ar[dddrr]^{[\rho]} &&&& T^0 \ar[dddll] \\
&&&& &&&& \\
&& && T'{}^2 \ar[dll]|{[\beta\setminus(\beta\intersect\rho)]} \ar[drr]|{[\rho\setminus(\beta\intersect\rho)]} && \\
&& T^{3} &&&& T^{1}
}$$
\caption{A normalization diagram. The W zig-zag on the top of the diagram has the property that $T^2=\beta\union\rho$. The W zig-zag on the bottom of the diagram is normalized.}
\label{FigureNormalization}
\end{figure}
In this diagram, subgraphs of $T'{}^2$ are labelled by the same convention as described above.
Since $T^2 = \beta \union \rho$ it follows that the two subgraphs $\beta \setminus (\beta\intersect \rho)$ and $\rho \setminus (\beta \intersect \rho)$ of $T'{}^2$ partition the edgelets of $T'{}^2$.
Motivated by this observation, we say that a zig-zag path in $\FS'(F)$ is \emph{normalized} if at every free splitting $F \act T$ along the path that forms a peak, the two subgraphs of $T$ whose collapses define the vertices of the path incident to $T$ form a blue--red decomposition of $T$.
The argument we have just given shows that every geodesic zig-zag path in $\FS'(F)$ may be replaced by a normalized zig-zag path of the same length and with the same set of valleys.
\paragraph{Pushdown subgraphs and baseball diagrams.} We now turn to a more sophisticated technique for pushing down the peak of a W diagram. Consider a W diagram as notated in Figure~\ref{FigureWDiagram} and suppose that $\beta \union \rho = T^2$ is a blue--red decomposition. Consider also a subgraph $\kappa \subset T^2$ that satisfies the following:
\begin{description}
\item[$\kappa$ is a pushdown subgraph:] $\kappa$ is a proper, equivariant subgraph, and each natural edge of $T^2$ not contained in $\kappa$ contains at least one red and one blue edgelet of $T^2$ that are not contained in $\kappa$.
\end{description}
No requirement is imposed that a pushdown subgraph be a natural subgraph; the proof of Proposition~\ref{PropPushdownInToto} produces pushdown subgraphs which are not natural. Note that a pushdown subgraph can \emph{only} exist if $\beta \union \rho = T^2$ is not a natural blue--red decomposition: if the decomposition were natural then every natural edge of $T^2$ would be entirely blue or entirely red, while properness of $\kappa$ yields a natural edge not contained in $\kappa$, which would then have to contain edgelets of both colors.
Given a normalized W diagram and a pushdown subgraph $\kappa \subset T^2$, we may extend the W diagram to a larger commutative diagram of collapse maps called a \emph{baseball diagram},\index{baseball diagram} as shown in Figure~\ref{FigureBaseball}. Certain superscripts in this diagram represent various positions on a baseball diamond: $T^1$, $T^2$, $T^3$ represent $1^{\text{st}}$, $2^{\text{nd}}$ and $3^{\text{rd}}$ bases, $T^p$ the pitcher's mound, $T^{h1}$ and $T^{h3}$ the points halfway from home plate to $1^{\text{st}}$ and $3^{\text{rd}}$ bases.
\begin{figure}
$$\xymatrix{
T^4 \ar[ddrr] &&&& T^2 \ar[ddll]_{[\beta]} \ar[dd]^{[\kappa]} \ar[ddrr]^{[\rho]} &&&& T^0 \ar[ddll] \\
&&&& &&&& \\
&& T^3 \ar[dr] _{[\kappa \setminus (\kappa\intersect\beta)]} && T^p \ar[dl]_{[\beta\setminus(\kappa\intersect\beta)]}^{\gamma} \ar[dr]^{[\rho\setminus(\kappa\intersect\rho)]}_{\sigma} && T^1 \ar[dl]^{[\kappa\setminus(\kappa\intersect\rho)]} \\
&&& T^{h3} && T^{h1}
}$$
\caption{A baseball diagram}
\label{FigureBaseball}
\end{figure}
Collapsed subgraphs of the trees $T^1,T^p,T^3$ in this diagram are named following a convention similar to that used earlier. Because $\kappa$ is a pushdown subgraph, neither of the two subgraphs $\rho \setminus (\kappa \intersect \rho)$, $\beta \setminus (\kappa \intersect \beta) \subset T^p$ contains a natural edge of $T^p$. It follows that neither of the two collapse maps $\sigma \colon T^p \to T^{h1}$, $\gamma \colon T^p \to T^{h3}$ collapses an entire natural edge of $T^p$. Each of the maps $\sigma,\gamma$ therefore induces by restriction a bijection of natural vertex sets, takes each natural edge onto a natural edge inducing a bijection of natural edge sets, and is homotopic to a conjugacy relative to natural vertex sets. By restricting to natural vertex sets we therefore obtain a well-defined bijection $\gamma \composed \sigma^\inv$ from the natural vertex set of $T^{h1}$ to the natural vertex set of $T^{h3}$ which extends to a conjugacy $\xi \colon T^{h1} \mapsto T^{h3}$. Since collapses are transitive, we have again successfully ``pushed down the peak'', without even bothering to involve home plate as in the earlier scenario:
$$\xymatrix{
T^4 \ar[dr] &&&& T^0 \ar[dl] \\
& T^{h3} && T^{h1} \ar[ll]^{\xi}_{\approx}
}$$
We record this as:
\begin{lemma}[Pushing down peaks]\label{LemmaPushingDown}
Given a normalized W diagram notated as in Figure~\ref{FigureWDiagram}, and given a pushdown subgraph $\kappa \subset T^2$, there exists a baseball diagram notated as in Figure~\ref{FigureBaseball}, in which each map $\gamma \colon T^p \to T^{h3}$ and $\sigma \colon T^p \to T^{h1}$ induces by restriction a bijection of natural vertex sets and a bijection of natural edge sets, and is homotopic rel natural vertices to a conjugacy. By composition we therefore obtain a bijection $\gamma\sigma^\inv$ from the natural vertex set of $T^{h1}$ to the natural vertex set of $T^{h3}$ that extends to a conjugacy $\xi \colon T^{h1} \to T^{h3}$.
\qed\end{lemma}
We emphasize that the conjugacy in the conclusion of this lemma need not be a \emph{map}, i.e.\ it need not be simplicial. Nonsimplicial conjugacies resulting from Lemma~\ref{LemmaPushingDown} will proliferate in the proof of Proposition~\ref{PropFSUContraction} given in Section~\ref{SectionProofFSUContraction}, and that proof will have a certain step dedicated to patching up this problem.
\paragraph{Pushing down corrugation peaks.} One key strategy occurring in the proof of Proposition~\ref{PropFSUContraction} is to set up applications of Lemma~\ref{LemmaPushingDown} by finding pushdown subgraphs in peaks of normalized W diagrams. Of course this is impossible if the W diagram is geodesic. Nevertheless in Proposition~\ref{PropPushdownInToto} we will show that when combing a fold path across an arbitrary W diagram, even one which is geodesic, one can always locate enough pushdown subgraphs to carry out the pushdown process in a useful fashion, as long as the fold path is sufficiently long when measured in free splitting units.
Consider a fold sequence $T^0_0 \mapsto \cdots \mapsto T^0_J$. Consider also a zig-zag path $T^0_J \xrightarrow{} T^1_J \xleftarrow{[\rho_J]} T^2_J \xrightarrow{[\beta_J]} T^3_J \xleftarrow{} T^4_J$ in $\FS'(F)$, which may be regarded as a W diagram. We do not assume that this W diagram is a geodesic, nor even that it is normalized, but we do assume that $T^2_J = \beta_J \union \rho_J$. Consider finally a stack of four combing rectangles combined into one commutative diagram as shown in Figure~\ref{FigurePullBack1}, where the given fold sequence occurs as the $T^0$ row along the bottom of the diagram, and the W zig-zag occurs as the $T_J$ column along the right side (in such diagrams, in general we refer to rows by dropping subscripts, and to columns by dropping superscripts).
\begin{figure}[h]
$$\xymatrix{
T^4_0 \ar[r] \ar[d]
& \cdots \ar[r] & T^4_{I} \ar[r] \ar[d] & \cdots \ar[r]
& T^4_J \ar[d] \\
T^3_0 \ar[r]
& \cdots \ar[r] & T^3_{I} \ar[r] & \cdots \ar[r]
& T^3_J \\
T^2_0 \ar[r] \ar[u]^{[\beta_0]} \ar[d]_{[\rho_0]}
& \cdots \ar[r] & T^2_{I} \ar[r] \ar[u]^{[\beta_{I}]} \ar[d]_{[\rho_{I}]} & \cdots \ar[r]
& T^2_J \ar[u]^{[\beta_J]} \ar[d]_{[\rho_J]} \\
T^1_0 \ar[r]
& \cdots \ar[r] & T^1_I \ar[r] & \cdots \ar[r]
& T^1_J \\
T^0_0 \ar[r] \ar[u]
& \cdots \ar[r] & T^0_I \ar[r] \ar[u] & \cdots \ar[r]
& T^0_J \ar[u]
}$$
\caption{A diagram of four combing rectangles over $F$. The $T^0$ row along the bottom is assumed to be a fold sequence. In the $T_J$ column we assume that $T^2_J = \rho_J \union \beta_J$.}
\label{FigurePullBack1}
\end{figure}
Such a diagram can be constructed, for example, by starting with the bottom row and right side, and applying Propositions~\ref{PropCBC}, then~\ref{PropCBE}, then~\ref{PropCBC}, then~\ref{PropCBE}, in that order, to comb the given fold sequence along each of the four edges of the given zig-zag path. We will also encounter such diagrams constructed by other combing processes involving concatenation and deconcatenation of combing rectangles.
We can visualize Figure~\ref{FigurePullBack1} as a piece of corrugated metal. The $T^2$ row forms a peak of the corrugation which we wish to push down all at once, by parallel applications of Lemma~\ref{LemmaPushingDown}. Of course this is impossible in general, for instance when the $T_J$ column is a geodesic path in $\FS'(F)$.
We now describe a process which allows us to push down the corrugation peak along the $T^2$ row, at the expense of throwing away the portion of the diagram depicted in Figure~\ref{FigurePullBack1} to the right of the $T_I$ column. The next proposition says that this is always possible \emph{as long as} the bottom row has sufficiently many free splitting units between $T^0_I$ and $T^0_J$. As a consequence, the $T_j$ columns for $0 \le j \le I$ are \emph{not} geodesic paths in $\FS'(F)$ because $d(T^0_j,T^4_j) \le 2$, even when the $T_J$ column on the far right is geodesic. We thus obtain a key indicator of ``hyperbolic'' behavior: local curve shortening.
The following proposition introduces the constant $4 \rank(F)-3$ which is needed for the proof of Proposition~\ref{PropFSUContraction}.
\break
\begin{proposition}\label{PropPushdownInToto}
For any commutative diagram as in Figure~\ref{FigurePullBack1}, if the number of free splitting units between $T^0_I$ and $T^0_J$ is $\ge 4 \rank(F) - 3$ then there is a commutative diagram
$$\xymatrix{
T^4_0 \ar[r] \ar[d]
& \cdots \ar[r]
&T^4_I \ar[d] \\
T^{h3}_0 \ar[r] \ar@{=}[d]^{\xi_0}
& \cdots \ar[r]
&T^{h3}_I \ar@{=}[d]^{\xi_I} \\
T^{h1}_0 \ar[r]
& \cdots \ar[r]
&T^{h1}_I \\
T^0_0 \ar[r] \ar[u]
& \cdots \ar[r]
&T^0_I \ar[u]
}$$
such that the following hold: the top and bottom horizontal rows are the same foldable sequences as the top and bottom rows of Figure~\ref{FigurePullBack1} between the $T_0$ and $T_I$ columns; the $T^{h1}$ and $T^{h3}$ rows are foldable sequences; for each $j=0,\ldots,I$ the function $\xi_j$ is a (possibly nonsimplicial) conjugacy between $T^{h1}_j$ and~$T^{h3}_j$; and the top and bottom horizontal rectangles are combing rectangles obtained from the top and bottom combing rectangles of Figure~\ref{FigurePullBack1} between the $T_0$ and $T_I$ columns by application of \emph{Composition of combing rectangles} \ref{LemmaCombingComp}.
\end{proposition}
\begin{proof} There are three steps to the proof: normalization; pullback; and pushdown.
\smallskip
\textbf{Step 1: Normalization.} Knowing that $T^2_J = \beta_J \union \rho_J$, and knowing for each $j=0,\ldots,J$ that $\beta_j$, $\rho_j$ are the union of the edgelets mapped to $\beta_J$, $\rho_J$, respectively, under the foldable map $T^2_j \mapsto T^2_J$, it follows that $T^2_j = \beta_j \union \rho_j$. If the $T_J$ column is already normalized, that is if $\beta_J \union \rho_J = T^2_J$ is a blue--red decomposition, then the same is true of $\beta_j \union \rho_j = T^2_j$, and so each $T_j$ column is normalized and we pass directly to Step~2.
Otherwise, let us assume that $\beta_J$, $\rho_J$ have some edgelets in common. The union of these edgelets is a subgraph with nondegenerate components which by abuse of notation we denote $\beta_J \intersect \rho_J \subset T^2_J$. It follows that for each $j=0,\ldots,J$ the graphs $\beta_j,\rho_j$ have some edgelets in common, these being the edgelets that are mapped to $\beta_J \intersect \rho_J$ by the foldable map $T^2_j \mapsto T^2_J$; their union forms a subgraph $\beta_j \intersect \rho_j \subset T^2_j$. We may now carry out the normalization process depicted in Figure~\ref{FigureNormalization}, in parallel as $j$ varies from $0$ to $J$. The resulting normalization diagrams, commutative diagrams of collapse maps, are shown in Figure~\ref{FigureNormalizationWZigZag}.
\begin{figure}[h]
$$\xymatrix{
T^4_j \ar[dddrr] &&&& T^2_j \ar[dddll]_{[\beta_j]} \ar[dd]|{[\beta_j\intersect\rho_j]} \ar[dddrr]^{[\rho_j]} &&&& T^0_j \ar[dddll] \\
&&&& &&&& \\
&& && T'{}^2_j \ar[dll]|{\quad[\beta_j\setminus(\beta_j\intersect\rho_j)]} \ar[drr]|{[\rho_j\setminus(\beta_j\intersect\rho_j)]\quad} && \\
&& T^{3}_j &&&& T^{1}_j
}$$
\caption{Parallel normalization diagrams associated to the W zig-zags from $T^0_j$ to $T^4_j$ in Figure~\ref{FigurePullBack1}.}
\label{FigureNormalizationWZigZag}
\end{figure}
\break
We claim that for each of the seven arrows in Figure~\ref{FigureNormalizationWZigZag}, as $j$ varies from $0$ to $J$ we obtain a combing rectangle. One can visualize this statement as a description of a 3-dimensional commutative diagram where the normalization diagrams are lined up in parallel vertical planes, connected up by six foldable sequences (one for each of the six positions in the normalization diagram) and seven combing rectangles (one for each of the seven arrows). The claim is true by hypothesis for the four arrows on the top of the diagram. To obtain the combing rectangle with vertical arrows from $T^2_j$ to $T'{}^2_j$, since $\beta_j\intersect\rho_j$ is the inverse image of $\beta_J\intersect\rho_J$ under the foldable map $T^2_j \mapsto T^2_J$, by Proposition~\ref{PropCBC} the collapse maps $T^2_j \xrightarrow{[\beta_j\intersect\rho_j]} T'{}^2_j$ fit together in a combing rectangle as follows:
$$\xymatrix{
T^2_0 \ar[r] \ar[d]^{[\beta_0\intersect\rho_0]}
& \cdots \ar[r] & T{}^2_{I} \ar[r]\ar[d]^{[\beta_I\intersect\rho_I]} & \cdots \ar[r]
& T^2_J \ar[d]^{[\beta_J\intersect\rho_J]} \\
T'{}^2_0 \ar[r]
& \cdots \ar[r] & T'{}^2_{I} \ar[r] & \cdots \ar[r]
& T'{}^2_J \\
}$$
The two combing rectangles with vertical arrows from $T'{}^2_j$ to $T^1_j$ and from $T'{}^2_j$ to $T^3_j$, respectively, are obtained by two applications of Lemma~\ref{LemmaCombingDecomp} \emph{Decomposition of combing rectangles}, the first application using the $T^2_j$ to $T^1_j$ and the $T^2_j$ to $T'{}^2_j$ combing rectangles, and the second using the $T^2_j$ to $T^3_j$ and the $T^2_j$ to $T'{}^2_j$ combing rectangles. This proves the claim.
The outcome of the claim is a commutative diagram of the form shown in Figure~\ref{FigureNormalOutcome}, in which the top and bottom rectangles are the same combing rectangles as in Figure~\ref{FigurePullBack1}. By construction (see Figure~\ref{FigureNormalization}), the zig zag path on the right side of Figure~\ref{FigureNormalOutcome} is normalized, completing Step~1.
\begin{figure}
$$\xymatrix{
T^4_0 \ar[r] \ar[d]
& \cdots \ar[r] & T^4_{I} \ar[r] \ar[d] & \cdots \ar[r]
& T^4_J \ar[d] \\
T^3_0 \ar[r]
& \cdots \ar[r] & T^3_{I} \ar[r] & \cdots \ar[r]
& T^3_J \\
T'{}^2_0 \ar[r] \ar[u] \ar[d]
& \cdots \ar[r] & T'{}^2_{I} \ar[r] \ar[u] \ar[d] & \cdots \ar[r]
& T'{}^2_J \ar[u] \ar[d] \\
T^1_0 \ar[r]
& \cdots \ar[r] & T^1_I \ar[r] & \cdots \ar[r]
& T^1_J \\
T^0_0 \ar[r] \ar[u]
& \cdots \ar[r] & T^0_I \ar[r] \ar[u] & \cdots \ar[r]
& T^0_J \ar[u]
}$$
\caption{The outcome of normalizing Figure~\ref{FigurePullBack1}, using the parallel normalization diagrams of Figure~\ref{FigureNormalizationWZigZag}.}
\label{FigureNormalOutcome}
\end{figure}
\textbf{Step 2: Pullback.} This is the central argument where the concepts of free splitting units are used to their maximal effect.
Having carried out Step 1, we may now go back to Figure~\ref{FigurePullBack1} and assume that each $T_j$ column is a normalized W zig-zag. In other words, for each $j$ we have a blue--red decomposition $\beta_j \union \rho_j = T^2_j$.
Let $\Upsilon$ be the number of free splitting units along the bottom row of the diagram between $T^0_I$ and $T^0_J$, and choose a sequence $I \le i(0) < \cdots < i(\Upsilon) \le J$ so that for each $u=1,\ldots,\Upsilon$ there is $\ge 1$ free splitting unit between $T^0_{i(u-1)}$ and $T^0_{i(u)}$. By hypothesis we have $\Upsilon \ge 4 \rank(F)-3$.
We prove that the blue--red decomposition $\beta_I \union \rho_I = T^2_I$ is not natural. Arguing by contradiction, suppose that $\beta_I \union \rho_I = T^2_I$ is natural. By Definition~\ref{DefBRDecompos}, it follows that $\beta_i \union \rho_i = T^2_i$ is natural for $I \le i \le J$. By Lemma~\ref{LemmaBRNatural}, the interval $I \le i \le J$ breaks into no more than $4 \rank(F)-3$ subintervals on each of which the complexity of $\beta_i$ is constant. By Definition~\ref{DefLessThanOneFSU}, on each of these subintervals there is $<1$ free splitting unit, and so each of these subintervals contains at most one entry from the sequence $i(0) < \cdots < i(\Upsilon)$. It follows that $\Upsilon \le 4 \rank(F) - 4$, contradicting the hypothesis.
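To make the final count explicit (this is just the pigeonhole principle applied to the argument above): the $\Upsilon+1$ entries of the sequence $i(0) < \cdots < i(\Upsilon)$ lie in pairwise distinct subintervals, of which there are at most $4\rank(F)-3$, and so
$$\Upsilon + 1 \,\le\, 4 \rank(F) - 3 \quad\Longrightarrow\quad \Upsilon \,\le\, 4\rank(F)-4
$$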
\subparagraph{Remark.} The previous version of this paper contained an invalid argument, starting from the statement that $\beta_i \union \rho_i = T^2_i$ is natural for $I \le i \le J$. The erroneous statement, which incorrectly exploited $\beta_i,\rho_i$, said that if one expands $T^2_i$ by blowing up each vertex $v \in \beta_i \intersect \rho_i$, pulling the blue and red edges at $v$ apart to form two vertices connected by a gray edge, then the resulting tree with $F$-action is a free splitting. The error is that the inserted gray edges might have nontrivial stabilizers. Correcting this error led to a revamping of the theory of free splitting units presented in Section~\ref{SectionFSU}. In particular, the concept of an ``invariant, natural, blue--red decomposition'' in Definition~\ref{DefBRDecompos}, and the diameter bounds of Lemma~\ref{LemmaBRNatural}, are new to this version of the paper and were concocted to correctly exploit the subgraphs $\beta_i,\rho_i \subset T^2_i$.
\smallskip
\textbf{Step 3: Pushdown.} Having carried out Steps~1 and~2, we assume now that we have a commutative diagram as shown in Figure~\ref{FigurePullBack2}, in which each column is normalized and the blue--red decomposition $\beta_I \union \rho_I = T^2_I$ is not natural. It follows that $T^2_I$ has a natural edge $e$ which contains both red and blue edgelets. Using this, we shall produce the commutative diagram needed for the conclusion of Proposition~\ref{PropPushdownInToto}. The argument will be a somewhat more intricate version of the parallel normalization process used in Step~1, using parallel baseball diagrams instead.
\begin{figure}[h]
$$\xymatrix{
T^4_0 \ar[r] \ar[d]
& \cdots \ar[r]
& T^4_I \ar[d] \\
T^3_0 \ar[r]
& \cdots \ar[r]
& T^3_I \\
T^2_0 \ar[r] \ar[u]^{[\beta_0]} \ar[d]_{[\rho_0]}
& \cdots \ar[r]
& T^2_I \ar[u]^{[\beta_I]} \ar[d]_{[\rho_I]} \\
T^1_0 \ar[r]
& \cdots \ar[r]
& T^1_I \\
T^0_0 \ar[r] \ar[u]
& \cdots \ar[r]
& T^0_I \ar[u]
}$$
\caption{Each of the four horizontal rectangles is a combing rectangle. We assume that every column is a normalized W zig-zag and that the tree $T^2_I$ has an edge $e$ containing both red and blue edgelets.}
\label{FigurePullBack2}
\end{figure}
Define a proper $F$-equivariant natural subgraph $\kappa_I \subset T^2_I$ to be the complement of the orbit of $e$, and so every natural edge of $T^2_I$ not in $\kappa_I$ contains both a red and a blue edgelet. By decreasing induction on $j \in \{0,\ldots,I-1\}$ define an $F$-equivariant subgraph $\kappa_j \subset T^2_j$ to be the inverse image of $\kappa_{j+1}$ under the foldable map $T^2_j \mapsto T^2_{j+1}$ (ignoring degenerate components as usual); equivalently $\kappa_j$ is the inverse image of $\kappa_I$ under $T^2_j \mapsto T^2_I$. It follows that the subgraphs $\kappa_j \subset T^2_j$ are proper for all $j=0,\ldots,I$.
We claim that for $j=0,\ldots,I$ the graph $\kappa_j$ is a pushdown subgraph of $T^2_j$. To prove this, given a natural edge $\eta_j \subset T^2_j$ such that $\eta_j \not\subset \kappa_j$, we must find a red and a blue edgelet in $\eta_j$ neither of which is in $\kappa_j$. Foldable maps take natural vertices to natural vertices and natural edges to nondegenerate natural edge paths, so the image of $\eta_j$ under the foldable map $T^2_j \mapsto T^2_I$ is a nondegenerate natural edge path denoted $\eta_I \subset T^2_I$. Since $\eta_j \not\subset \kappa_j$, it follows that $\eta_I \not\subset \kappa_I$, and so $\eta_I$ contains a natural edge not in $\kappa_I$ which therefore has both a red and a blue edgelet. Since natural edges not in $\kappa_I$ have interior disjoint from $\kappa_I$ it follows that $\eta_I$ contains a red and a blue edgelet neither of which is in $\kappa_I$. By pulling back under the foldable map $T^2_j \mapsto T^2_I$ we obtain a red and a blue edgelet in $\eta_j$ neither of which is in $\kappa_j$.
We now apply Lemma~\ref{LemmaPushingDown} in parallel to each column $j$ of Figure~\ref{FigurePullBack2} for $j=0,\ldots,I$. The resulting baseball diagrams, commutative diagrams of collapse maps, are shown in Figure~\ref{FigureParameterizedBaseball} (compare Figure~\ref{FigureBaseball}). Lemma~\ref{LemmaPushingDown} also produces conjugacies $T^p_j \mapsto T^{h3}_j$ and $T^p_j \mapsto T^{h1}_j$ and hence conjugacies $T^{h1}_j \to T^{h3}_j$. What we are still missing, however, are the conclusions of Proposition~\ref{PropPushdownInToto} concerned with combing rectangles and commutativity.
\begin{figure}[h]
$$\xymatrix{
T^4_j \ar[ddrr] &&&& T^2_j \ar[ddll]_{[\beta_j]} \ar[dd]^{[\kappa_j]} \ar[ddrr]^{[\rho_j]} &&&& T^0_j \ar[ddll] \\
&&&& &&&& \\
&& T^3_j \ar[dr] && T^p_j \ar[dl]^{\gamma_j} \ar[dr]_{\sigma_j} && T^1_j \ar[dl] \\
&&& T^{h3}_j && T^{h1}_j
}$$
\caption{The baseball diagram associated to the W-diagram from $T^0_j$ to $T^4_j$.}
\label{FigureParameterizedBaseball}
\end{figure}
We claim that for each of the nine arrows in Figure~\ref{FigureParameterizedBaseball}, as $j$ varies from $0$ to $I$ we obtain a combing rectangle. As in Step~1, one visualizes this as a 3-dimensional commutative diagram by lining up the baseball diagrams in parallel vertical planes, connected up by eight foldable sequences (one for each of the eight positions in the baseball diagram) and nine combing rectangles (one for each of the nine arrows). The claim is true by hypothesis for the four arrows on the top of the diagram.
For the arrow from 2nd base to the pitcher's mound, since $\kappa_j$ is the inverse image of $\kappa_I$ under the foldable map $T^2_j \mapsto T^2_I$, by Proposition~\ref{PropCBC} the collapse maps $T^2_j \xrightarrow{[\kappa_j]} T^p_j$ fit together in a combing rectangle
$$\xymatrix{
T^2_0 \ar[r] \ar[d]_{[\kappa_0]} & \cdots\ar@{}[d]|{\text{I}} \ar[r] & T^2_I \ar[d]^{[\kappa_I]} \\
T^p_0 \ar[r] & \cdots \ar[r] & T^p_I \\
}$$
Notice that for each $j=0,\ldots,I$, the subgraph $\kappa_j \union \rho_j$ is proper, because any natural edge not in $\kappa_j$ contains a blue edgelet not in $\kappa_j$, which is also not in $\kappa_j \union \rho_j$. Similarly the subgraph $\kappa_j \union \beta_j$ is proper. By Proposition~\ref{PropCBC}, since $\kappa_j \union \rho_j$ is the inverse image of $\kappa_{j+1} \union \rho_{j+1}$, and since $\kappa_j \union \beta_j$ is the inverse image of $\kappa_{j+1} \union \beta_{j+1}$, we obtain combing rectangles
$$\xymatrix{
T^2_0 \ar[r] \ar[d]_{[\kappa_0 \union \beta_0]} & \cdots\ar@{}[d]|{\text{II}} \ar[r] & T^2_I \ar[d]^{[\kappa_I \union \beta_I ]}
& & &
T^2_0 \ar[r] \ar[d]_{[\kappa_0 \union \rho_0]} & \cdots\ar@{}[d]|{\text{III}} \ar[r] & T^2_I \ar[d]^{[\kappa_I \union \rho_I ]} \\
T^{h3}_0 \ar[r] & \cdots \ar[r] & T^{h3}_I
& & &
T^{h1}_0 \ar[r] & \cdots \ar[r] & T^{h1}_I
}$$
Rectangles II and III do not correspond to any of the nine arrows in the baseball diagram, but to invisible arrows going from 2nd base to the point halfway between 1st base and home plate and from 2nd base to the point halfway between 3rd base and home plate.
For the arrows going from the pitcher's mound to the points halfway between 1st and home and halfway between 3rd and home, apply Lemma~\ref{LemmaCombingDecomp} \emph{Decomposition of combing rectangles}, first to combing rectangles II and I and then to combing rectangles III and~I, to obtain combing rectangles
$$\xymatrix{
T^p_0 \ar[r] \ar[d]_{[\beta_0 \setminus (\kappa_0 \intersect \beta_0)]}^{\gamma_0}
& \cdots\ar@{}[d]|{\text{IV}} \ar[r] & T^p_I \ar[d]^{[\beta_I \setminus (\kappa_I \intersect \beta_I)]}_{\gamma_I}
& & & &
T^p_0 \ar[r] \ar[d]_{[\rho_0 \setminus (\kappa_0 \intersect \rho_0)]}^{\sigma_0}
& \cdots\ar@{}[d]|{\text{V}} \ar[r] & T^p_I \ar[d]^{[\rho_I \setminus (\kappa_I \intersect \rho_I)]}_{\sigma_I}
\\
T^{h3}_0 \ar[r] & \cdots \ar[r] & T^{h3}_I
& & & &
T^{h1}_0 \ar[r] & \cdots \ar[r] & T^{h1}_I
}$$
where we follow the same notation convention for subgraphs of $T^p_0$ as used in the original baseball diagram Figure~\ref{FigureBaseball}.
For the arrows going from 1st base and 3rd base to the points halfway home, applying Lemma~\ref{LemmaCombingDecomp} \emph{Decomposition of combing rectangles} to combing rectangle II and the 2nd base to 3rd base combing rectangle, and then to combing rectangle III and the 2nd base to 1st base combing rectangle, we obtain combing rectangles
$$\xymatrix{
T^3_0 \ar[r] \ar[d]_{[\kappa_0 \setminus (\kappa_0 \intersect \beta_0)]}
& \cdots\ar@{}[d]|{\text{VI}} \ar[r] & T^3_I \ar[d]^{[\kappa_I \setminus (\kappa_I \intersect \beta_I)]}
& & & &
T^1_0 \ar[r] \ar[d]_{[\kappa_0 \setminus (\kappa_0 \intersect \rho_0)]}
& \cdots\ar@{}[d]|{\text{VII}} \ar[r] & T^1_I \ar[d]^{[\kappa_I \setminus (\kappa_I \intersect \rho_I)]}
\\
T^{h3}_0 \ar[r] & \cdots \ar[r] & T^{h3}_I
& & & &
T^{h1}_0 \ar[r] & \cdots \ar[r] & T^{h1}_I
}$$
Applying Lemma~\ref{LemmaCombingComp} \emph{Composition of combing rectangles}, by composing the two combing rectangles corresponding to the arrows along the 1st base foul line in Figure~\ref{FigureParameterizedBaseball} we obtain the combing rectangle from the $T^0$ row to the $T^{h1}$ row needed for the conclusion of Proposition~\ref{PropPushdownInToto}. Similarly, by composing the two combing rectangles corresponding to the arrows along the 3rd base foul line we obtain the combing rectangle from the $T^4$ row to the $T^{h3}$~row.
To complete Step~3 and the proof of the proposition, it remains to construct the commutative diagram of conjugacy maps $\xi_j \colon T^{h1}_j \to T^{h3}_j$ in the conclusion of the proposition. For this purpose it suffices to replace combing rectangles IV and V by commutative diagrams of conjugacies of the form
$$\xymatrix{
T^p_0 \ar[r] \ar[d]^{\bar\gamma_0}
& \cdots\ar@{}[d]|{\overline{\text{IV}}} \ar[r] & T^p_I \ar[d]_{\bar\gamma_I}
& & & &
T^p_0 \ar[r] \ar[d]^{\bar\sigma_0}
& \cdots\ar@{}[d]|{\overline{\text{V}}} \ar[r] & T^p_I \ar[d]_{\bar\sigma_I}
\\
T^{h3}_0 \ar[r] & \cdots \ar[r] & T^{h3}_I
& & & &
T^{h1}_0 \ar[r] & \cdots \ar[r] & T^{h1}_I
}$$
for then defining $\xi_j = \bar\gamma_j \composed \bar\sigma_j^\inv \colon T^{h1}_j \to T^{h3}_j$ we will be done. While Lemma~\ref{LemmaPushingDown} produces conjugacies $T^{h1}_j \to T^{h3}_j$ for each $j=0,\ldots,I$, if that lemma is used crudely there is no guarantee that these conjugacies will form commutative diagrams as needed. With a little care in how Lemma~\ref{LemmaPushingDown} is applied we can get the needed guarantee. We construct diagram $\overline{\text{IV}}$ in detail, the construction of $\overline{\text{V}}$ being similar. The construction is by induction, starting from the $T_I$ column on the far right and moving leftward.
First apply Lemma~\ref{LemmaPushingDown} to produce a conjugacy $\bar\gamma_I \colon T^p_I \to T^{h3}_I$ so that the restrictions of $\gamma_I$ and $\bar\gamma_I$ to natural vertex sets are the same. Proceeding by decreasing induction on~$j$, suppose that for some $j$ we have produced all the conjugacies from column $T_j$ to $T_I$ in diagram $\overline{\text{IV}}$ making that portion of the diagram commute, and so that the restrictions to natural vertex sets of the conjugacies in diagrams IV and $\overline{\text{IV}}$ are the same from column $T_j$ to column $T_I$. We must choose the conjugacy $\bar \gamma_{j-1} \colon T^p_{j-1} \to T^{h3}_{j-1}$ so as to fill in a commutative diagram of $F$-equivariant functions
$$\xymatrix{
T^p_{j-1} \ar@{.>}[d]_{\bar\gamma_{j-1}} \ar[r]^{f_{j}} & T^{p}_{j} \ar[d]^{\bar\gamma_{j}} \\
T^{h3}_{j-1} \ar[r]^{g_j} & T^{h3}_{j}
}$$
where $f_j$, $g_j$ are the foldable maps in Rectangle IV, and where the restrictions of $\bar\gamma_{j-1}$ and $\gamma_{j-1}$ to natural vertex sets are the same. This tells us how to define $\bar\gamma_{j-1}$ on natural vertex sets. Consider a natural edge $\eta \subset T^p_{j-1}$. By Lemma~\ref{LemmaPushingDown} its image $\gamma_{j-1}(\eta) \subset T^{h3}_{j-1}$ is a natural edge whose endpoints are the $\bar\gamma_{j-1}$ images of the endpoints of $\eta$. The foldable map $f_j \colon T^p_{j-1} \mapsto T^{p}_j$ is injective on $\eta$, the conjugacy $\bar\gamma_{j}$ is injective on $f_j(\eta)$, and we have the following equation of subsets:
$$g_j(\gamma_{j-1}(\eta)) = \gamma_{j}(f_j(\eta)) = \bar\gamma_{j}(f_j(\eta))
$$
The foldable map $g_j$ is injective on the natural edge $\gamma_{j-1}(\eta)$, and therefore has a homeomorphic inverse $g_j^\inv \colon \bar\gamma_{j}(f_j(\eta)) \to \gamma_{j-1}(\eta)$, and so we can define
$$\bar\gamma_{j-1} \restrict \eta = (g_j^\inv \composed \bar\gamma_j \composed f_j) \restrict \eta
$$
This completes Step~3 and the proof of Proposition~\ref{PropPushdownInToto}.
\end{proof}
\subsection{Proof of Proposition \ref{PropFSUContraction}}
\label{SectionProofFSUContraction}
\subparagraph{Prologue.} Consider a fold sequence $S_0 \mapsto\cdots\mapsto S_K$ over $F$, a free splitting $F \act T$, and an augmented projection diagram of maximal depth $k_T=\pi(T)$ as notated in Figure~\ref{FigureMaxProjDiagram} of Section~\ref{SectionStatmentFSUContraction}, whose top row has the fold sequence $T_{k_T} \mapsto\cdots\mapsto T_L=T$ as a terminal segment. Let $\Upsilon$ be the number of free splitting units between $T_{k_T}$ and $T_L=T$. Using the constant $b_1 = 4 \rank(F)-3$ from Proposition~\ref{PropPushdownInToto}, we list every $b_1^{\text{th}}$ term of the back greedy subsequence of this fold sequence~as
$$k_T \le L_\Omega < L_{\Omega-1} < \cdots < L_1 < L_0=L
$$
where $\Omega = \lfloor \Upsilon / b_1 \rfloor$. Thus $L_\omega$ is the greatest integer $< L_{\omega-1}$ such that there are exactly $b_1$ free splitting units between $T_{L_\omega}$ and $T_{L_{\omega-1}}$, for each $\omega=1,\ldots,\Omega$. Emphasizing only those $T$'s with subscripts from the list $L_\Omega,\ldots,L_0$, and assigning them a superscript~$0$ for later purposes, we may write the augmented projection diagram in the form
$$\xymatrix{
T^0_0 \ar[r] \ar[d] & \cdots \ar[r] & T^0_{k_T} \ar[d] \ar[r] & \cdots \ar[r] & T^0_{L_\Omega} \ar[r] & T^0_{L_{\Omega-1}} \ar[r] & \cdots \ar[r] & T^0_{L_1} \ar[r] & T^0_{L_0} =T
\\
S'_0 \ar[r] & \cdots \ar[r] & S'_{k_T} \\
S_0 \ar[r] \ar[u] & \cdots \ar[r] & S_{k_T} \ar[r] \ar[u] & \cdots \ar[r] & S_K \\
}$$
where the foldable map $T^0_{k_T} \to T^0_{L_\Omega}$ may just be the identity map.
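For concreteness, here is a hypothetical instance of this bookkeeping (the numbers are illustrative only and not drawn from the argument): if $\rank(F)=2$ and $\Upsilon = 12$ then $b_1 = 4\rank(F)-3 = 5$ and
$$\Omega \,=\, \lfloor \Upsilon / b_1 \rfloor \,=\, \lfloor 12/5 \rfloor \,=\, 2
$$
so the list consists of the three terms $L_2 < L_1 < L_0 = L$, with exactly $b_1 = 5$ free splitting units between $T_{L_2}$ and $T_{L_1}$ and between $T_{L_1}$ and $T_{L_0}$.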
\smallskip
Consider also a vertex $R \in \FS'(F)$ and a geodesic path from $T$ to $R$ in~$\FS'(F)$. We shall assume here that $d(T,R) \ge 3$; the case that $d(T,R) \le 2$ will be considered in the epilogue. If the path from $T$ to $R$ starts with an expansion of $T$, prefix the path with an improper collapse. The result is a zig-zag path of the form
$$T=T^0_{L_0} \rightarrow T^1_{L_0} \leftarrow T^2_{L_0} \rightarrow T^3_{L_0} \cdots T^D_{L_0} = R
$$
where $D = d(T,R)$ or $d(T,R)+1$ and~$D \ge 3$. The peaks along this zig-zag are the even terms strictly between $0$ and $D$, the first such peak being $T^2_{L_0}$. For each peak along this path, applying Lemma~\ref{LemmaRedBlueUnion} together with the assumption that $d(T,R) \ge 3$ it follows that the peak is the union of its two collapse graphs. The number of peaks along this zig-zag path equals $\lfloor \frac{D-1}{2} \rfloor$ which equals $\frac{D-2}{2}$ if $D$ is even and $\frac{D-1}{2}$ if $D$ is odd.
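As a quick sanity check of this count (illustrative arithmetic only): if $D=6$ the peaks are $T^2_{L_0}, T^4_{L_0}$, while if $D=7$ the peaks are $T^2_{L_0}, T^4_{L_0}, T^6_{L_0}$, in agreement with
$$\left\lfloor \tfrac{6-1}{2} \right\rfloor = \tfrac{6-2}{2} = 2, \qquad \left\lfloor \tfrac{7-1}{2} \right\rfloor = \tfrac{7-1}{2} = 3
$$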
By combing the foldable sequence $T^0_0 \mapsto\cdots\mapsto T^0_{L_0}$ across each collapse or expansion of the zig-zag path $T^0_{L_0} \rightarrow T^1_{L_0} \leftarrow \cdots T^D_{L_0} = R$,
alternately applying \emph{Combing by Collapse} \ref{PropCBC} and \emph{Combing by Expansion} \ref{PropCBE}, and by stacking the resulting combing rectangles atop the augmented projection diagram, we obtain The Big Diagram, Step 0, shown in Figure~\ref{FigureBigDiagram0}.
\begin{figure}
$$\xymatrix{
T^D_0 \ar@{.}[d]\ar[r]&\cdots \ar[r]&T^D_{k_T} \ar@{.}[d] \ar[r] & \cdots \ar[r]& T^D_{L_\Omega} \ar[r]\ar@{.}[d] & \cdots \ar[r] & T^D_{L_1} \ar[r]\ar@{.}[d] & \cdots \ar[r] & T^D_{L_0}\ar@{.}[d] \ar@{=}[r] & R \\
T^4_0 \ar[r]\ar[d]&\cdots \ar[r]&T^4_{k_T} \ar[d] \ar[r] & \cdots \ar[r]& T^4_{L_\Omega} \ar[r]\ar[d] & \cdots \ar[r] & T^4_{L_1} \ar[r]\ar[d] & \cdots \ar[r] & T^4_{L_0}\ar[d] \\
T^3_0 \ar[r]&\cdots \ar[r]&T^3_{k_T} \ar[r] & \cdots \ar[r]& T^3_{L_\Omega} \ar[r] & \cdots \ar[r] & T^3_{L_1} \ar[r] & \cdots \ar[r] & T^3_{L_0} \\
T^2_0 \ar[r]\ar[d]^{[\rho_0]}\ar[u]_{[\beta_0]}&\cdots \ar[r]&T^2_{k_T} \ar[u]_{[\beta_{k_T}]} \ar[d]^{[\rho_{k_T}]} \ar[r] & \cdots \ar[r]& T^2_{L_\Omega} \ar[r]\ar[d]^{[\rho_{L_\Omega}]}\ar[u]_{[\beta_{L_\Omega}]} & \cdots \ar[r] & T^2_{L_1} \ar[r]\ar[d]^{[\rho_{L_1}]}\ar[u]_{[\beta_{L_1}]} & \cdots \ar[r] & T^2_{L_0}\ar[d]^{[\rho_{L_0}]}\ar[u]_{[\beta_{L_0}]} \\
T^1_0 \ar[r]&\cdots \ar[r]&T^1_{k_T} \ar[r] & \cdots \ar[r]& T^1_{L_\Omega} \ar[r] & \cdots \ar[r] & T^1_{L_1} \ar[r] & \cdots \ar[r] & T^1_{L_0} \\
T^0_0 \ar[r] \ar[d] \ar[u]& \cdots \ar[r] & T^0_{k_T} \ar[d] \ar[u] \ar[r] & \cdots \ar[r] & T^0_{L_\Omega} \ar[r]\ar[u] & \cdots \ar[r] & T^0_{L_1} \ar[r]\ar[u] & \cdots \ar[r] & T^0_{L_0} \ar[u] \ar@{=}[r] & T
\\
S'_0 \ar[r] & \cdots \ar[r] & S'_{k_T} \\
S_0 \ar[r] \ar[u] & \cdots \ar[r] & S_{k_T} \ar[r] \ar[u] & \cdots \ar[r] & S_K \\
}$$
\caption{The Big Diagram, Step 0. \hfill\break We emphasize the columns indexed by $L_\Omega, \ldots, L_1, L_0$. Each horizontal row is a foldable sequence, and the rectangle between any two rows is a combing rectangle. The bottom row is a fold sequence, and the $T^0$ row from $T^0_{k_T}$ to $T^0_{L_0}$ is a fold sequence. Each peak of the $T_{L_0}$ column is the union of its two collapse graphs. Rows in this and subsequent diagrams will be indicated by stripping off subscripts, for instance the ``$T^0$ row'' refers to the foldable sequence $T^0_0 \mapsto\cdots\mapsto T^0_{L_0}$; similarly, columns are indicated by stripping off superscripts. Since each peak of column $T_{L_0}$ between rows $T^0$ and $T^D$ is the union of its two collapse graphs, it follows that each peak of each column $T_j$ between rows $T^0$ and $T^D$ is the union of its two collapse graphs, because the two collapse graphs at a column~$j$ peak $T^{2i}_j$ are the pullbacks under the foldable map $T^{2i}_j \mapsto T^{2i}_{L_0}$ of the two collapse graphs at the corresponding column $L_0$ peak $T^{2i}_{L_0}$.}
\label{FigureBigDiagram0}
\end{figure}
Proposition~\ref{PropFSUContraction} will be proved by explicitly transforming the Big Diagram, Step 0 into a projection diagram from $R$ onto $S_0 \mapsto\cdots\mapsto S_K$ of an appropriate depth $l$ needed to verify the conclusions of the proposition. This transformation is primarily an induction that uses the pushdown tools of Section~\ref{SectionPushingDownPeaks}, followed by an epilogue which uses the pushdown tools one more time. As the proof progresses we will consider the truncated fold sequences $T^0_{k_T} \mapsto\cdots\mapsto T^0_{L_\omega}$ for increasing values of $\omega$, but such truncation will not affect measurements of free splitting units between $T^0_i$ and $T^0_j$ as long as $k_T \le i \le j \le L_\omega$ (see the remark following Definition~\ref{DefGeneralFSU}).
\subparagraph{Induction.} We explain in detail how to carry out the first step of the induction. Under our assumption that $d(T,R) \ge 3$, the $T_{L_0}$ column of the Big Diagram, Step 0 has a peak at $T^2_{L_0}$. Assuming furthermore that $\Upsilon \ge b_1$, equivalently $\Omega \ge 1$, then $L_1$ is defined and there are $\ge b_1 = 4 \rank(F)-3$ free splitting units between $T^0_{L_1}$ and $T^0_{L_0}$. We may therefore apply Proposition~\ref{PropPushdownInToto} to the portion of the diagram between the $T^0$ and $T^4$ rows as follows: trim away all portions of the Big Diagram, Step 0 that lie to the right of the $T_{L_1}$ column and below the $T^D$ row, and use the conclusion of Proposition~\ref{PropPushdownInToto} to replace the combing rectangles between the $T^0$ and $T^4$ rows, to get the Big Diagram, Step 0.1, shown in Figure~\ref{FigureBigDiagram0point1}.
\begin{figure}[h]
$$\xymatrix{
T^D_0 \ar@{.}[d]\ar[r]&\cdots \ar[r]&T^D_{k_T} \ar@{.}[d] \ar[r] & \cdots \ar[r]& T^D_{L_\Omega} \ar[r]\ar@{.}[d] & \cdots \ar[r] & T^D_{L_1} \ar[r] \ar@{.}[d] & \cdots \ar[r] & T^D_{L_0} \ar@{=}[r] & R \\
T^4_0 \ar[r]\ar[d]&\cdots \ar[r]&T^4_{k_T} \ar[d] \ar[r] & \cdots \ar[r]& T^4_{L_\Omega} \ar[r]\ar[d] & \cdots \ar[r] & T^4_{L_1} \ar[d] \\
T^{h3}_0 \ar@{=}[d]^{\xi_0} \ar[r] & \cdots\ar[r]& T^{h3}_{k_T} \ar@{=}[d]^{\xi_{k_T}} \ar[r] &\cdots\ar[r] & T^{h3}_{L_\Omega} \ar@{=}[d]^{\xi_{L_\Omega}}
\ar[r] &\cdots \ar[r] & T^{h3}_{L_1} \ar@{=}[d]^{\xi_{L_1}}
\\
T^{h1}_0 \ar[r] & \cdots\ar[r]& T^{h1}_{k_T} \ar[r] &\cdots \ar[r] & T^{h1}_{L_\Omega}
\ar[r] &\cdots \ar[r] & T^{h1}_{L_1}
\\
T^0_0 \ar[r] \ar[d] \ar[u]& \cdots \ar[r] & T^0_{k_T} \ar[d] \ar[u] \ar[r] & \cdots \ar[r] & T^0_{L_\Omega} \ar[r]\ar[u] & \cdots \ar[r] & T^0_{L_1} \ar[u] \\
S'_0 \ar[r] & \cdots \ar[r] & S'_{k_T} \\
S_0 \ar[r] \ar[u] & \cdots \ar[r] & S_{k_T} \ar[r] \ar[u] & \cdots \ar[r] & S_K \\
}$$
\caption{The Big Diagram, Step 0.1.}
\label{FigureBigDiagram0point1}
\end{figure}
The rectangles of the Big Diagram, Step 0.1 between the $T^0$ and $T^{h1}$ rows and between the $T^{h3}$ and $T^4$ rows are combing rectangles. Each $\xi_j \colon T^{h1}_j \to T^{h3}_j$ is a conjugacy, possibly nonsimplicial. Now we must pause to patch things up in order to make these conjugacies simplicial.
We claim that, by an operation of equivariant subdivision of simplicial structures and re-assignment of barycentric coordinates on edgelets, carried out over all free splittings in Big Diagram, Step 0.1, but \emph{without} changing any of the functions, we may assume that the conjugacies $\xi_i$ are indeed simplicial maps. Here are the details of this operation.
\break
Consider first the conjugacy $\xi_{L_1} \colon T^{h1}_{L_1} \to T^{h3}_{L_1}$. We may subdivide $T^{h1}_{L_1}$ at the pre-image of the vertex set of $T^{h3}_{L_1}$, and simultaneously subdivide $T^{h3}_{L_1}$ at the image of the vertex set of $T^{h1}_{L_1}$, to obtain new equivariant vertex sets on which $\xi_{L_1}$ is a bijection; it is also a bijection of edgelets, although it may not yet respect barycentric coordinates. We may then reassign the barycentric coordinates on one edgelet of $T^{h1}_{L_1}$ in each $F$-orbit, and move these coordinates around by the $F$-action, to obtain a new simplicial structure on~$T^{h1}_{L_1}$. We may then push these coordinates forward under the map $\xi_{L_1}$ to obtain new barycentric coordinates on the edgelets of $T^{h3}_{L_1}$. Having carried out these operations, the map $\xi_{L_1}$ is now a simplicial conjugacy.
Now we move left one step: by a similar subdivision/re-assignment on $T^{h1}_{L_1-1}$, pulling back vertices and barycentric coordinates under the foldable map $T^{h1}_{L_1-1} \mapsto T^{h1}_{L_1}$, we may assume that this map is simplicial. Similarly, by a subdivision/re-assignment on $T^{h3}_{L_1-1}$, we may assume that the foldable map $T^{h3}_{L_1-1} \mapsto T^{h3}_{L_1}$ is simplicial. We have now verified that in the commutative diagram
$$\xymatrix{
T^{h3}_{L_1-1} \ar[r] \ar@{=}[d]^{\xi_{L_1-1}} & T^{h3}_{L_1} \ar@{=}[d]^{\xi_{L_1}} \\
T^{h1}_{L_1-1} \ar[r] & T^{h1}_{L_1}
}$$
the top, bottom, and right sides are simplicial maps; by commutativity, the left side is therefore automatically simplicial.
Now we continue to move left: by similar subdivisions/re-assignments carried out one at a time on the trees in rows $T^{h1}$ and $T^{h3}$, moving to the left one at a time from the last map in each row, we may assume that these rows are simplicial; having done this, by commutativity each of the maps $\xi_j \colon T^{h1}_j \to T^{h3}_j$ is automatically a simplicial conjugacy. Now we move up: by similar subdivisions/re-assignments carried out one at a time on the trees in rows $T^4$, \ldots, $T^D$, starting with the collapse maps $T^4_j \mapsto T^{h3}_j$ and moving upward, we may assume that each vertical arrow above row $T^{h3}$ is simplicial; having done this, each of the horizontal arrows from row $T^{h3}$ upward and between columns $T_0$ and $T_{L_1}$ is automatically simplicial. Now, from $T^D_{L_1}$ we move to the right: by similar subdivisions/re-assignments we may assume that each of the maps $T^D_{L_1}\mapsto\cdots\mapsto T^D_{L_0}=R$ is simplicial. Finally, in a similar manner moving down from row $T^{h3}$ to row $S$, then moving right from $S_{k_T}$ to $S_K$, we have proved the claim.
Knowing now that we have \emph{simplicial} conjugacies $\xi_j \colon T^{h1}_j \to T^{h3}_j$, and using commutativity of the rectangle between rows $T^{h1}$ and $T^{h3}$, we may identify $T^{h1}_j$ and $T^{h3}_j$ via the map $\xi_j$, replacing these two rows by a single row as shown in \emph{The Big Diagram,~Step~1.}
In summary, when $d(T,R) \ge 3$ and $\Upsilon \ge b_1$, we have completed the first iteration of the induction argument: starting from the Big Diagram Step 0, by applying Proposition~\ref{PropPushdownInToto}, trimming away everything to the right of column $T_{L_1}$ and below row $T^D$, and replacing everything between rows $T^0$ and $T^4$, we get the Big Diagram Step 0.1, and then by subdividing and re-assigning barycentric coordinates we may assume that the conjugacies between rows $T^{h1}$ and $T^{h3}$ are simplicial. Identifying rows $T^{h1}$ and $T^{h3}$, we obtain the Big Diagram Step~1, shown in Figure~\ref{FigureBigDiagram1}. In the process we have decreased by~$2$ the lengths of all vertical zig-zag paths and the number of combing rectangles between the $T^0$ and $T^D$ rows. Observe that the conjugacy class of the free splitting $R$, and the equivalence class of the fold sequence $S_0 \mapsto\cdots\mapsto S_K$, have not been altered by these subdivision/re-assignment operations.
\begin{figure}[h]
$$\xymatrix{
T^D_0 \ar@{.}[d]\ar[r]&\cdots \ar[r]&T^D_{k_T} \ar@{.}[d] \ar[r] & \cdots \ar[r]& T^D_{L_\Omega} \ar[r]\ar@{.}[d] & \cdots \ar[r] & T^D_{L_1} \ar[r] \ar@{.}[d] & \cdots \ar[r] & T^D_{L_0} \ar@{=}[r] & R \\
T^4_0 \ar[r]\ar[d]&\cdots \ar[r]&T^4_{k_T} \ar[d] \ar[r] & \cdots \ar[r]& T^4_{L_\Omega} \ar[r]\ar[d] & \cdots \ar[r] & T^4_{L_1} \ar[d] \\
T^{h}_0 \ar[r] & \cdots\ar[r]& T^{h}_{k_T} \ar[r] &\cdots \ar[r] & T^{h}_{L_\Omega}
\ar[r] &\cdots \ar[r] & T^{h}_{L_1}
\\
T^0_0 \ar[r] \ar[d] \ar[u]& \cdots \ar[r] & T^0_{k_T} \ar[d] \ar[u] \ar[r] & \cdots \ar[r] & T^0_{L_\Omega} \ar[r]\ar[u] & \cdots \ar[r] & T^0_{L_1} \ar[u] \\
S'_0 \ar[r] & \cdots \ar[r] & S'_{k_T} \\
S_0 \ar[r] \ar[u] & \cdots \ar[r] & S_{k_T} \ar[r] \ar[u] & \cdots \ar[r] & S_K \\
}$$
\caption{The Big Diagram, Step 1}
\label{FigureBigDiagram1}
\end{figure}
To complete the inductive step there is one last thing to do, namely to verify that along the zig-zag path in column $T_{L_1}$ on the right side of the Big Diagram, Step 1, each peak is the union of its two collapse graphs. This is evident for each peak from $T^6_{L_1}$ upward, since the collapse maps and collapse graphs are unchanged at those peaks from the Big Diagram, Step 0. For the peak at $T^4_{L_1}$, one of the collapse graphs is unchanged from the Big Diagram, Step 0, namely that of the map $T^4_{L_1} \mapsto T^5_{L_1}$. For the collapse graph of the map $T^4_{L_1} \mapsto T^h_{L_1}$, we use the part of the conclusion of Proposition~\ref{PropPushdownInToto} which tells us that the combing rectangle in the Big Diagram Step 1 between the $T^4$ and $T^h$ rows is obtained by an application of \emph{Composition of combing rectangles}, Lemma~\ref{LemmaCombingComp}, using the combing rectangle in the Big Diagram Step 0 between the $T^4$ and $T^3$ rows and between the $T_0$ and $T_{L_1}$ columns. What Lemma~\ref{LemmaCombingComp} allows us to conclude is that the collapse graph of the Step 0 map $T^4_{L_1} \mapsto T^3_{L_1}$ is contained in the collapse graph of the Step 1 map $T^4_{L_1} \mapsto T^h_{L_1}$. The union of the two collapse graphs of $T^4_{L_1}$ in the Big Diagram, Step 1 is therefore still equal to $T^4_{L_1}$.
\smallskip
\textbf{Remark.} The reader may wonder why the initial normalization step was necessary in the proof of Proposition~\ref{PropPushdownInToto}: we could have started with a \emph{normalized} zig-zag geodesic on the right side of the Big Diagram, Step 0. This would imply that the $T_{L_1}$ column in that diagram is normalized at $T^4_{L_1}$. Nonetheless it is possible that the $T_{L_1}$ column in the Big Diagram, Step 1 is not normalized at $T^4_{L_1}$, because the collapse graph for $T^4_{L_1} \mapsto T^h_{L_1}$ may be strictly larger than the collapse graph for $T^4_{L_1} \mapsto T^3_{L_1}$. If so then the normalization step of Proposition~\ref{PropPushdownInToto} is inescapable in the next step of the induction.
\smallskip
We now describe the induction step in general. From the hypothesis we have $d(T,R) \le \max\{2\Omega,1\}$. If $d(T,R) \le 2$ then we refer to the Epilogue below. Otherwise, under the assumption $d(T,R) \ge 3$, we have $D \le d(T,R) + 1 \le 2 \Omega + 1$, and so we may repeat the above argument inductively a total of $\lfloor \frac{D-1}{2} \rfloor$ times, removing the corrugation peaks one at a time. For each $\omega = 2,\ldots,\lfloor \frac{D-1}{2} \rfloor$, at the $\omega$ step of the induction we do the following. At the start of the $\omega$ step we have the Big Diagram, Step~$\omega-1$, analogous to the Big Diagram Step~$1$ but with $L_{\omega-1}$ in place of $L_1$ and $T^{2\omega}$ in place of~$T^4$, and with a stack of $D-2\omega+2$ combing rectangles between the $T^0$ and $T^D$ rows. We trim away the portion of the diagram to the right of column $T_{L_\omega}$, on or above row $T^0$, and below row $T^D$. We replace the four combing rectangles between rows $T^0$ and $T^{2\omega+2}$ by two combing rectangles and a commutative diagram of conjugacies. We carry out a subdivision/re-assignment operation which allows us to assume that the conjugacies are simplicial. We then collapse the commutative diagram of conjugacies, identifying its two rows into a single row. We have now produced the Big Diagram, Step~$\omega$, with a stack of $D-2\omega$ combing rectangles between the $T^0$ and $T^D$ rows: we have decreased by~$2$ the lengths of all vertical zig-zag paths between the $T^0$ and $T^D$ rows and decreased by~$1$ the number of corrugation peaks. Finally we verify that each peak along column $T_{L_\omega}$ is still the union of its two collapse graphs.
At each stage of the induction, we have not altered the conjugacy class of $R$ nor the equivalence class of $S_0\mapsto\cdots\mapsto S_K$.
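As a consistency check on this bookkeeping (a summary of counts already stated above, not a new assertion): the stack between rows $T^0$ and $T^D$ consists of $D$ combing rectangles at Step~0 and $D - 2\omega$ combing rectangles at Step~$\omega$, so when the induction stops at $\omega = \lfloor \frac{D-1}{2} \rfloor$ there remain
$$D - 2\left\lfloor \tfrac{D-1}{2} \right\rfloor \;=\; \begin{cases} 2, & D \text{ even} \\ 1, & D \text{ odd} \end{cases}
$$
combing rectangles, matching the two cases of the Epilogue below.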
\subparagraph{Epilogue.} If $d(T,R) \ge 3$, when the induction process stops we have backed up to column~$T_{L_\omega}$ where $\omega = \lfloor \frac{D-1}{2} \rfloor$, and there are no remaining corrugation peaks above row~$T^0$. We obtain the Big Diagram, Step $\lfloor \frac{D-1}{2} \rfloor$, a not-so-big diagram that comes in two cases. The Case~1 diagram occurs when $D$ is even, and it has two combing rectangles between row $T^0$ and row $T^D$; see Figure~\ref{FigureNotSoBigCase1}. The Case~2 diagram occurs when $D$ is odd and has only one such combing rectangle; see Figure~\ref{FigureNotSoBigCase2}. In each of these diagrams, the conjugacy class of $R$ and the equivalence class of the fold sequence $S_0 \mapsto\cdots\mapsto S_K$ have not been changed from the initial setup in the Prologue.
\begin{figure}
$$\xymatrix{
T^D_0 \ar[r] \ar[d] & \cdots \ar[r] & T^D_{k_T} \ar[r] \ar[d] & \cdots \ar[r] & T^D_{L_\omega} \ar[r] \ar[d] & \cdots \ar[r] & T^D_{L_0} = R \\
T'_0 \ar[r] & \cdots \ar[r] & T'_{k_T} \ar[r] & \cdots \ar[r] & T'_{L_\omega} \\
T^0_0 \ar[r] \ar[d] \ar[u] & \cdots \ar[r] & T^0_{k_T} \ar[r] \ar[d] \ar[u] & \cdots \ar[r] & T^0_{L_\omega}\ar[u] \\
S'_0 \ar[r] & \cdots \ar[r] & S'_{k_T} \\
S_0 \ar[r] \ar[u] & \cdots \ar[r] & S_{k_T} \ar[r] \ar[u] & \cdots \ar[r] & S_K \\
}$$
\caption{Case 1: a collapse--expand from $T^0$ to $T^D$}
\label{FigureNotSoBigCase1}
\end{figure}
\begin{figure}
$$\xymatrix{
T^D_0 \ar[r] & \cdots \ar[r] & T^D_{k_T} \ar[r] & \cdots \ar[r] & T^D_{L_\omega} \ar[r] & \cdots \ar[r] & T^D_{L_0} = R \\
T^0_0 \ar[r] \ar[d] \ar[u] & \cdots \ar[r] & T^0_{k_T} \ar[r] \ar[d] \ar[u] & \cdots \ar[r] & T^0_{L_\omega}\ar[u] \\
S'_0 \ar[r] & \cdots \ar[r] & S'_{k_T} \\
S_0 \ar[r] \ar[u] & \cdots \ar[r] & S_{k_T} \ar[r] \ar[u] & \cdots \ar[r] & S_K \\
}$$
\caption{Case 2: a collapse from $T^0$ to $T^D$.}
\label{FigureNotSoBigCase2}
\end{figure}
If $d(T,R) \le 2$ then, starting from the augmented projection diagram depicted in the prologue, and depending on the nature of the geodesic from $T$ to $R$, we proceed as follows. If $d(T,R)=1$ and there is a collapse $T \collapses R$, we comb the $T^0$ row along this collapse to obtain the Case~2 diagram with $\omega=0$ and $T^D_{L_\omega}=T^D_{L_0}=R$. If $d(T,R)=1$ and there is an expansion $T \expandsto R$ then we prefix it with an improper collapse $T \collapsesto T$ to get a length~2 collapse--expand zig-zag $T \collapsesto T \expandsto R$, and we comb the $T^0$ row along this collapse--expand to obtain the Case~1 diagram with similar notation changes. If $d(T,R)=2$ and there is a collapse--expand from $T$ to $R$ then, combing the $T^0$ row along this collapse--expand, we produce the Case~1 diagram with similar notation changes. Finally, if $d(T,R)=2$ and there is an expand--collapse from $T$ to $R$, then combing the $T^0$ row along this expand--collapse, we obtain the Case~3 diagram in Figure~\ref{FigureNotSoBigCase3}.
\begin{figure}
$$\xymatrix{
T^2_0 \ar[r] & \cdots \ar[r] & T^2_{k_T} \ar[r] & \cdots \ar[r] & T^2_{L_0} = R \\
T'_0 \ar[r] \ar[u]\ar[d] & \cdots \ar[r] & T'_{k_T} \ar[r] \ar[u]\ar[d] & \cdots \ar[r] & T'_{L_0} \ar[u]\ar[d] \\
T^0_0 \ar[r]\ar[d] & \cdots \ar[r] & T^0_{k_T} \ar[r] \ar[d] & \cdots \ar[r] & T^0_{L_0} = T\\
S'_0 \ar[r] & \cdots \ar[r] & S'_{k_T} \\
S_0 \ar[r] \ar[u] & \cdots \ar[r] & S_{k_T} \ar[r] \ar[u] & \cdots \ar[r] & S_K \\
}$$
\caption{Case 3: an expand--collapse from $T^0$ to $T^2$.}
\label{FigureNotSoBigCase3}
\end{figure}
We now finish off Case~1; afterwards we shall reduce Cases~2 and~3 to Case~1. In the Case 1 diagram, trim off everything to the right of column~$T_{k_T}$, on or above row~$T^0$, and below row~$T^D$, to obtain the diagram shown in Figure~\ref{FigureCaseOneTrimmed}, which has a corrugation peak along the $T^0$ row. We must consider two subcases, depending on whether the peak $T^0_{k_T}$ of the W zig-zag in column $k_T$ is the union of its two collapse graphs $b_{k_T}$, $r_{k_T}$.
\begin{figure}[h]
$$\xymatrix{
T^D_0 \ar[r] \ar[d] & \cdots \ar[r] & T^D_{k_T} \ar[r] \ar[d] & \cdots \ar[r] & T^D_{L_0} = R \\
T'_0 \ar[r] & \cdots \ar[r] & T'_{k_T} \\
T^0_0 \ar[r] \ar[d]_{[r_0]} \ar[u]^{[b_0]} & \cdots \ar[r] & T^0_{k_T} \ar[d]^{[r_{k_T}]} \ar[u]_{[b_{k_T}]} \\
S'_0 \ar[r] & \cdots \ar[r] & S'_{k_T} \\
S_0 \ar[r] \ar[u] & \cdots \ar[r] & S_{k_T} \ar[r] \ar[u] & \cdots \ar[r] & S_K \\
}$$
\caption{The Case~1 diagram, trimmed down.}
\label{FigureCaseOneTrimmed}
\end{figure}
Suppose first that $T^0_{k_T} \ne b_{k_T} \union r_{k_T}$ in Figure~\ref{FigureCaseOneTrimmed}. For each $j=0,\ldots,k_T$, in the tree $T^0_j$ which is the peak of the W zig-zag in column $j$, the union of its two collapse graphs $b_j \union r_j$ is a proper subgraph, that subgraph being the inverse image of $b_{k_T} \union r_{k_T}$ under the foldable map $T^0_j \mapsto T^0_{k_T}$. We may therefore carry out the simplistic pushdown depicted in Figure~\ref{FigureSimplistic}, in parallel as $j$ varies from $0$ to $k_T$, resulting in a diagram of the form depicted in Figure~\ref{FigureCaseOnePushedDown}.
\begin{figure}
$$\xymatrix{
T^D_0 \ar[r] \ar[d] & \cdots \ar[r] & T^D_{k_T} \ar[r] \ar[d] & \cdots \ar[r] & T^D_{L_0} = R \\
T'_0 \ar[r] \ar[d] & \cdots \ar[r] & T'_{k_T} \ar[d] \\
T''_0 \ar[r] & \cdots \ar[r] & T''_{k_T} \\
S'_0 \ar[r] \ar[u] & \cdots \ar[r] & S'_{k_T} \ar[u] \\
S_0 \ar[r] \ar[u] & \cdots \ar[r] & S_{k_T} \ar[r] \ar[u] & \cdots \ar[r] & S_K \\
}$$
\caption{The result of a parallel simplistic pushdown on Figure \ref{FigureCaseOneTrimmed}, in the case when $T^0_{k_T} \ne b_{k_T} \union r_{k_T}$. Concatenating the upper two combing rectangles into a single one, and the same for the lower two, we obtain a projection diagram.}
\label{FigureCaseOnePushedDown}
\end{figure}
In Figure~\ref{FigureCaseOnePushedDown}, the $T''$ row is obtained by applying Proposition~\ref{PropCBC} \emph{Combing by collapse} using the collapse graphs $b_j \union r_j \subset T^0_j$, and the middle two combing rectangles are each obtained by an application of Lemma~\ref{LemmaCombingDecomp} \emph{Decomposition of combing rectangles}. By applications of Lemma~\ref{LemmaCombingComp} \emph{Composition of combing rectangles}, we may compose the lower two and the upper two combing rectangles of Figure~\ref{FigureCaseOnePushedDown} to produce a depth $k_T$ projection diagram from $R$ to $S_0 \mapsto\cdots\mapsto S_K$, and the proof of Proposition~\ref{PropFSUContraction} is complete in this case.
Suppose next that $T^0_{k_T} = b_{k_T} \union r_{k_T}$ in Figure~\ref{FigureCaseOneTrimmed}. From the hypothesis of Proposition~\ref{PropFSUContraction}, there are $\ge b_1 = 4 \rank(F)-3$ free splitting units along the bottom row of the diagram between $S_0$ and~$S_{k_T}$. Let $l \in \{0,\ldots,k_T\}$ be the largest integer such that there are $\ge b_1$ free splitting units between $S_l$ and $S_{k_T}$, from which it follows that there are exactly $b_1$ free splitting units between $S_l$ and $S_{k_T}$. We may now carry out one last iteration of the Induction. Applying Proposition~\ref{PropPushdownInToto}, remove all portions of the diagram in Figure~\ref{FigureCaseOneTrimmed} to the right of column $l$, above the $S$ row, and below the $T^D$ row, and replace the four combing rectangles by two combing rectangles and a commutative diagram of conjugacies. After an operation of subdivision and re-assignment of barycentric coordinates, we may assume that the conjugacies are all simplicial. After collapsing the commutative diagram of conjugacies, identifying its two rows to a single row, we obtain the diagram depicted in Figure~\ref{FigureOneLastTime}, in which the conjugacy class of the free splitting $R$ and the equivalence class of the fold sequence $S_0 \mapsto\cdots\mapsto S_K$ remain unchanged. This is the desired projection diagram from the free splitting $R$ to the fold sequence $S_0 \mapsto\cdots\mapsto S_K$ which completes the proof of Proposition~\ref{PropFSUContraction} in case~1.
\begin{figure}
$$\xymatrix{
T^D_0 \ar[r] \ar[d] & \cdots \ar[r] & T^D_{l} \ar[r] \ar[d] & \cdots \ar[r] & T^D_{L_0} = R \\
S^h_0 \ar[r] & \cdots \ar[r] & S^h_{l} \\
S_0 \ar[r] \ar[u] & \cdots \ar[r] & S_{l} \ar[r] \ar[u] & \cdots \ar[r] & S_K \\
}$$
\caption{The projection diagram resulting from one last iteration of the Induction carried out on Figure \ref{FigureCaseOneTrimmed}, in the case when $T^0_{k_T} = b_{k_T} \union r_{k_T}$.}
\label{FigureOneLastTime}
\end{figure}
\subparagraph{Remark.} As was remarked earlier regarding the Big Diagram, Step 1 depicted in Figure~\ref{FigureBigDiagram1}, in the context of case~1 depicted in Figure~\ref{FigureOneLastTime}, the initial normalization step in the proof of Proposition~\ref{PropPushdownInToto} cannot be avoided, because there is no guarantee that the $k_T$ column is normalized at $T^0_{k_T}$.
\bigskip
We reduce case~2 to case~1 by producing a case~1 diagram: just attach an improper combing rectangle to the top of the case 2 diagram, by defining the foldable sequence $T'_0 \mapsto\cdots\mapsto T'_{L_\omega}$ to equal the foldable sequence $T^D_0 \mapsto\cdots\mapsto T^D_{L_\omega}$, and defining for each $j=0,\ldots,L_\omega$ an improper collapse map $T^D_j \to T'_j$ which is just the identity map.
We also reduce case~3 to case~1. First trim away everything in the Case~3 diagram to the right of the $k_T$ column, on or above the $T^0$ row, and below the $T^2$ row. Next, apply Lemma~\ref{LemmaCombingComp}, \emph{Composition of combing rectangles}, to the two combing rectangles between the $S'$ row and the $T'$ row, concatenating them into a single combing rectangle. Finally, attach an improper combing rectangle to the top of the diagram as in case~2. The result is a case~1 diagram, completing the reduction.
\section{Introduction}
A smooth metric measure space is a triple $(M^n, g, e^{-f} d\mu_g)$, where $(M^n,g)$ is a complete $n$-dimensional Riemannian manifold, $f$ is a smooth real function on $M$, and $d\mu_g$ is the Riemannian volume density on $M$. The study of partial differential equations on smooth metric measure spaces has received considerable attention in the last few decades, and numerous results on Riemannian manifolds with Ricci curvature bounded from below were extended to smooth metric measure spaces (many even to non-smooth metric measure spaces) with $N$-Bakry-\'Emery Ricci curvature bounded from below. Recall that the $N$-Bakry-\'Emery Ricci curvature $\operatorname{Ric}^N_f$ of a smooth metric measure space, which is a natural generalization of the classical Ricci curvature for Riemannian manifolds, is defined for $N \in [n, \infty]$ by
\begin{equation*}
\operatorname{Ric}^N_f =\begin{cases} \operatorname{Ric}, & \text{ if } N=n \text{ and } f=\text{constant}, \\
-\infty, & \text{ if } N=n \text{ and } f\neq \text{constant},\\
\operatorname{Ric} +\nabla^2 f -\frac{\nabla f \otimes \nabla f}{N-n}, & \text{ if } N \in (n, \infty), \\
\operatorname{Ric}+\nabla^2 f, & \text{ if } N=\infty.
\end{cases}
\end{equation*}
Here $\operatorname{Ric}$ and $\nabla^2 f$ denote the Ricci curvature of $M$ and the Hessian of $f$, respectively.
We also write $\operatorname{Ric}_f := \operatorname{Ric}^{\infty}_f = \operatorname{Ric} + \nabla^2 f$ for simplicity.
Important examples of smooth metric measure spaces with $\operatorname{Ric}^N_f$ bounded from below include: Riemannian manifolds with Ricci curvature bounded from below (corresponding to $N=n$ and $f=\text{constant}$), Bakry-\'Emery manifolds (corresponding to $N=\infty$), gradient Ricci solitons (i.e. $\operatorname{Ric}_f = \lambda g$ for some $\lambda \in \mathbb{R}$), and quasi-Einstein metrics (i.e. $\operatorname{Ric}+\nabla^2 f -\frac{\nabla f \otimes \nabla f}{N-n} =\lambda g$ for some $\lambda \in \mathbb{R}$).
On a smooth metric measure space $(M^n ,g, e^{-f} d\mu_g)$, we are interested in the following quasi-linear isotropic operators $Q: TM \setminus\{0\} \times \text{Sym}^2T^*M \to \mathbb{R}$
\begin{equation}\label{Q def}
Q[p, X] := \operatorname{tr} \left( \left[\alpha(|p|)\frac{p \otimes p}{|p|^2}+\beta(|p|)\left(I_n-\frac{p \otimes p}{|p|^2} \right) \right] X \right)-\beta(|p|)\langle p, \nabla f \rangle,
\end{equation}
where $\alpha$ and $\beta$ are nonnegative continuous functions, $I_n$ is the $n\times n$ identity matrix, and $TM$, $T^*M$ and $\text{Sym}^2T^*M$ denote the tangent bundle, the cotangent bundle and the set of symmetric two tensors on $M$, respectively.
Throughout the paper, we assume that $Q: TM \setminus\{0\} \times \text{Sym}^2T^*M \to \mathbb{R}$ is a continuous function (allowed to be singular at $p=0$) and $Q$ is degenerate elliptic in the sense that $Q(p,X) \geq Q(p,Y)$ for all $p \in TM \setminus\{0\}$ and $X \geq Y \in \text{Sym}^2T^*M$. Since $Q[u]$ is not necessarily of divergence form, it is necessary to use the notion of viscosity solutions from \cite{CIL92} (see also Section 2) throughout the paper.
Note that the family of operators $Q[u]:=Q[\nabla u, \nabla^2 u]$ in \eqref{Q def} includes many extensively studied elliptic operators for suitable choices of $\alpha, \beta$ and $f$. For instance, $Q[u]$ covers:
\begin{enumerate}
\item the Laplacian \\
$\Delta u:=\text{div}(\nabla u)$ (with $\alpha =\beta =1$ and $f=\text{constant}$);
\item the $f$-Laplacian \\
$\Delta_f u :=\Delta u -\langle \nabla u, \nabla f \rangle$ (with $\alpha =\beta =1$);
\item the $p$-Laplacian \\
$\Delta_p u :=\text{div}(|\nabla u|^{p-2} \nabla u)$ with $1<p<\infty$ (with $\alpha =(p-1)|\nabla u|^{p-2}$, $\beta =|\nabla u|^{p-2}$ and $f=\text{constant}$);
\item the weighted $p$-Laplacian \\
$\Delta_{p,f}u :=\Delta_p u -|\nabla u|^{p-2} \langle \nabla u, \nabla f \rangle$ (with $\alpha =(p-1)|\nabla u|^{p-2}$, $\beta =|\nabla u|^{p-2}$);
\item the mean curvature operator \\
$Hu :=\sqrt{1+|\nabla u|^2}\text{div}\left( \frac{\nabla u}{\sqrt{1+|\nabla u|^2}}\right)$ (with $\alpha =\frac{1}{1+|\nabla u|^2}$, $\beta=1$ and $f =\text{constant}$),
\end{enumerate}
as well as some non-divergent or degenerate elliptic operators such as
\begin{enumerate}
\item[(6)] the normalized or game-theoretic $p$-Laplacian \\
$\Delta^N_p u:=\frac{1}{p}|\nabla u |^{2-p} \text{div}(|\nabla u|^{p-2} \nabla u)$ for $1<p<\infty$ (with $\alpha=(p-1)/p$ and $\beta=1/p$ and $f=\text{constant}$);
\item[(7)] the level set mean curvature operator or the $1$-Laplacian \\
$\Delta_1 u:= \Delta u -|\nabla u|^{-2} \nabla^2 u(\nabla u, \nabla u)$ (with $\alpha =0$ and $\beta=1$, $f=\text{constant}$);
\item[(8)] the $\infty$-Laplacian \\
$\Delta_\infty u:= |\nabla u|^{-2} \nabla^2 u(\nabla u, \nabla u)$ (with $\alpha=1$, $\beta=0$ and $f=\text{constant}$).
\end{enumerate}
Indeed, the second order part of $Q[u]$ can be written as a combination of the $1$-Laplacian and the $\infty$-Laplacian:
\begin{equation*}
Q[u]=\alpha(|\nabla u|) \Delta_{\infty} u + \beta(|\nabla u|) \Delta_{1} u - \beta(|\nabla u|) \langle \nabla u, \nabla f \rangle.
\end{equation*}
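As a quick check (a routine computation, not needed elsewhere in the paper), the $p$-Laplacian fits this decomposition: expanding the divergence gives
\begin{align*}
\Delta_p u = \text{div}(|\nabla u|^{p-2} \nabla u)
&= |\nabla u|^{p-2}\Delta u +(p-2)|\nabla u|^{p-4}\, \nabla^2 u(\nabla u, \nabla u) \\
&= (p-1)|\nabla u|^{p-2}\, \Delta_\infty u + |\nabla u|^{p-2}\, \Delta_1 u,
\end{align*}
which matches the coefficients $\alpha =(p-1)|\nabla u|^{p-2}$ and $\beta =|\nabla u|^{p-2}$ listed in example (3).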
The $1$-Laplacian $\Delta_1$ appears in the level set formulation of the mean curvature flow (see \cite{CGG} and \cite{ES91}) and has been investigated extensively ever since, while the $\infty$-Laplacian $\Delta_\infty$ plays an important role in the description of tug-of-war games (see for instance \cite{Evans07} and \cite{PSSW09}).
The normalized $p$-Laplacian $\Delta^N_p$ has recently received considerable attention for its important applications in image-processing \cite{Kawohl08} and in the description of tug-of-war games with noise \cite{PS08}.
The class of operators $Q[u]$ in \eqref{Q def} were considered in various recent works including \cite{Andrewssurvey15}\cite{AC09}\cite{AC13}\cite{AN12}\cite{AX19}\cite{Li16}\cite{LW17}\cite{LW19eigenvalue}\cite{LW19eigenvalue2}. The key feature is that this class of operators has corresponding one-dimensional operators, which are obtained by assuming that the solution depends only on one of the variables, as well as taking into account the effect of geometric data such as curvature and dimension.
This observation, together with the idea of comparing with solutions of one-dimensional equations, has led to numerous important results in the last decade. These include sharp gradient estimates via modulus of continuity estimates for quasi-linear equations on Euclidean domains \cite{AC09}, sharp modulus of continuity estimates for parabolic equations on Riemannian manifolds (assuming either $\partial M=\emptyset$ or $\partial M$ is convex and the Neumann boundary condition is imposed) and a simple proof of the optimal lower bound for the first nonzero eigenvalue of the Laplacian \cite{AC13}, and a proof of the Fundamental Gap Conjecture for convex domains in the Euclidean space \cite{AC11} (see also \cite{Ni13} for an elliptic proof and \cite{DSW18}\cite{HW17}\cite{SWW19} for the fundamental gaps of convex domains in the sphere).
The above-mentioned results are proved by using the two-point maximum principle together with comparisons with one-dimensional models.
We refer the reader to the wonderful survey \cite{Andrewssurvey15} for more discussion of modulus of continuity estimates and their applications, as well as further applications of the two-point maximum principle in geometric heat equations.
The purpose of the present paper is to investigate the quasi-linear operators defined in \eqref{Q def} with the Dirichlet boundary condition on smooth metric measure spaces, as well as to extend previous results for Riemannian manifolds in \cite{AC13}\cite{AX19}\cite{LW17}\cite{LW19eigenvalue} to the more general setting of smooth metric measure spaces.
Recall that a nonnegative function $\varphi$ is called a modulus of continuity for a function $u: M \to \mathbb{R}$ if
\begin{equation*}
u(y)-u(x) \leq 2 \varphi \left(\frac{d(x,y)}{2} \right)
\end{equation*}
for all $x, y \in M$, where $d$ is the distance induced by the Riemannian metric.
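For example, if $u$ is Lipschitz with Lipschitz constant $L$, then $\varphi(s)=Ls$ is a modulus of continuity for $u$, since $u(y)-u(x) \leq L\, d(x,y) = 2\varphi\left(\frac{d(x,y)}{2}\right)$.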
Our first result establishes sharp estimates on the modulus of continuity of solutions to the parabolic equation
\begin{equation}\label{parabolic pde}
u_t =Q[u]
\end{equation}
with Dirichlet boundary condition.
\begin{thm}\label{Thm MC}
Let $(M^n,g, e^{-f} d\mu_g)$ be a compact smooth metric measure space with smooth nonempty boundary $\partial M$ and diameter $D$.
Let $u:M\times [0,T) \to \mathbb{R}$ be a viscosity solution of \eqref{parabolic pde}
with Dirichlet boundary condition $u(x,t)=0$ for $x\in \partial M$ and $t\in [0,T)$. Let $\varphi:[0, D/2] \times \mathbb{R}_+ \to \mathbb{R}_+$ be a smooth function satisfying (i) $\varphi_0(s):=\varphi(s, 0)$ is a modulus of continuity of $u(x, 0)$ and $|u(x,0)|\le \varphi_0(d(x,\partial M))$; (ii) $\varphi'\geq 0$ and $\varphi''\le 0$ on $[0, D/2] \times \mathbb{R}_+$, $\varphi(0,t)=0$ for $t\ge 0$.
\begin{enumerate}
\item[(1)] Suppose that for $N \in [n, \infty)$, we have $\operatorname{Ric}^N_f \geq (N-1)\kappa$ and $H_f \geq (N-1) \Lambda$ for $\kappa, \Lambda \in \mathbb{R}$, and $\varphi$ satisfies $\varphi_t \geq \alpha(\varphi')\varphi'' -(N-1) T_{\kappa, \Lambda} \beta(\varphi')\varphi'$. Then $\varphi(s,t)$ is a modulus of continuity for $u(x,t)$ for each $t \in [0,T)$;
\item[(2)] Suppose that $\operatorname{Ric}_f \geq 0$, $H_f \geq 0$ and $\varphi$ satisfies $\varphi_t \geq \alpha(\varphi')\varphi''$. Then $\varphi(s,t)$ is a modulus of continuity for $u(x,t)$ for each $t \in [0,T)$.
\end{enumerate}
\end{thm}
Here $H_f$ denotes the $f$-mean curvature of $\partial M$ defined by
\begin{equation*}
H_f(x) =H(x) -\langle \nabla f(x), \nu(x) \rangle,
\end{equation*}
where $\nu(x)$ is the outward unit normal vector field at $x\in \partial M$ and $H(x)$ denotes the mean curvature at $x \in \partial M$, and the function $T_{\kappa,\Lambda}$ is defined for $\kappa, \Lambda \in \mathbb{R}$ by
$$T_{\kappa,\Lambda} (t):=- \frac{C'_{\kappa, \Lambda}(t)}{C_{\kappa, \Lambda}(t)}, $$
where $C_{\kappa, \Lambda}(t)$ is the unique solution of the initial value problem
\begin{equation}\label{C def}
\begin{cases}
\phi''+\kappa \phi =0, \\
\phi(0)=1, \phi'(0) =-\Lambda.
\end{cases}
\end{equation}
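Solving \eqref{C def} explicitly gives
\begin{equation*}
C_{\kappa, \Lambda}(t)=
\begin{cases}
\cos(\sqrt{\kappa}\, t)-\frac{\Lambda}{\sqrt{\kappa}} \sin(\sqrt{\kappa}\, t), & \kappa>0, \\
1-\Lambda t, & \kappa=0, \\
\cosh(\sqrt{-\kappa}\, t)-\frac{\Lambda}{\sqrt{-\kappa}} \sinh(\sqrt{-\kappa}\, t), & \kappa<0;
\end{cases}
\end{equation*}
in particular $T_{\kappa, 0}(t)=\sqrt{\kappa}\tan(\sqrt{\kappa}\, t)$ for $\kappa>0$ and $T_{0, \Lambda}(t)=\frac{\Lambda}{1-\Lambda t}$.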
Theorem \ref{Thm MC} complements the work of Andrews and Clutterbuck \cite{AC13} for Riemannian manifolds (closed, or with a convex boundary and Neumann boundary condition).
It seems that in the Neumann case one can only deal with convex boundaries as in \cite{AC13}, while in the Dirichlet case we are able to handle any lower bound on the $f$-mean curvature of the boundary. Theorem \ref{Thm MC} is sharp. In fact, by the exact same process as in \cite[Section 5]{AC13}, one can construct solutions of \eqref{parabolic pde} on the
$(\kappa, \Lambda)$-equational model space as in \cite[Theorems 1.6 and 1.7]{Sakurai19}, satisfying the conditions of Theorem \ref{Thm MC} and attaining equality in the conclusion.
The proof of Theorem \ref{Thm MC} relies on two comparison theorems, one for $d(x,\partial M)$ (see Theorem \ref{Thm comparison distance to boundary}), the other for $d(x,y)$ as a function on $M\times M$ (see Theorem \ref{Thm comparison distance}). Both comparison theorems are sharp, more general than the ones in the literature, and of independent interest.
The modulus of continuity estimates in
\cite{AC13} led to an easy proof of the optimal lower bound on the first nonzero (closed or Neumann) eigenvalue of the Laplacian or the $f$-Laplacian in terms of dimension, diameter, and the lower bound of Ricci curvature.
The sharp lower bound was previously proved by Zhong and Yang \cite{ZY84} in the nonnegative Ricci case, by Kr\"oger \cite{Kroger92} (see also Bakry-Qian \cite{BQ00} for an explicit statement) for a general Ricci lower bound using the gradient estimate method,
and independently by Chen and Wang \cite{CW94,CW95} using stochastic methods.
The gradient estimate method, dating back to the work of Li \cite{Li79} and Li and Yau \cite{LY80}, was used to prove sharp lower bounds for the first nonzero eigenvalue of the $p$-Laplacian
in \cite{Valtorta12} and \cite{NV14}, and for the weighted $p$-Laplacian in \cite{LW19eigenvalue, LW19eigenvalue2}. On the other hand, the modulus of continuity approach seems to work only for $1<p\leq 2$; see \cite[Section 8]{Andrewssurvey15} and \cite[Section 2]{LW19eigenvalue}.
It is natural to ask whether the modulus of continuity estimates in Theorem \ref{Thm MC}
lead to a lower bound for the first Dirichlet eigenvalue of the Laplacian or, more generally, the $p$-Laplacian.
It turns out that we can establish an optimal lower bound for the first Dirichlet eigenvalue of a large class of quasi-linear operators, but not as a consequence of Theorem \ref{Thm MC}.
Indeed, this can be achieved by proving estimates similar to those in Theorem \ref{Thm MC} but requiring one of the two variables to be contained in $\partial M$. We focus on the statement here and elaborate on the difference between the closed/Neumann and Dirichlet cases in Section 7.
To define eigenvalues, it is necessary to assume that $Q[u]$ is homogeneous of degree $\gamma >0$ in the sense that
\begin{equation}\label{def homogeneous}
Q[c u] =c^{\gamma} Q[u] \quad \text{ for all } c>0.
\end{equation}
The Dirichlet eigenvalue problem is then
\begin{equation}\label{eigen problem}
\begin{cases}
Q[u] =-\lambda |u|^{\gamma -1 } u, & \text{ in } M, \\
u=0, & \text{ on } \partial M.
\end{cases}
\end{equation}
Examples of homogeneous variational operators include the Laplacian, the $f$-Laplacian, the $p$-Laplacian and the weighted $p$-Laplacian, while examples of homogeneous non-variational operators include the normalized $p$-Laplacian $\Delta^N_p$ for $1<p<\infty$ and the operator $|Du|^{\gamma} \Delta^N_p$ for $\gamma > -1$ and $1<p<\infty$ (homogeneous of degree $\gamma+1$).
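For instance, the homogeneity degree of the last operator can be verified directly: $\Delta^N_p$ is positively homogeneous of degree one, so for $c>0$,
\begin{equation*}
|D(cu)|^{\gamma}\, \Delta^N_p (cu) = c^{\gamma} |Du|^{\gamma} \cdot c\, \Delta^N_p u = c^{\gamma+1} |Du|^{\gamma} \Delta^N_p u.
\end{equation*}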
For operators that are not variational, a new definition of the first Dirichlet eigenvalue (also called the principal eigenvalue) is needed.
Following Berestycki, Nirenberg, and Varadhan \cite{BNV94}, Birindelli and Demengel introduced in \cite{BD07, BD06} (where they actually deal with a wider class of operators) the first Dirichlet eigenvalue $\bar{\lambda}(Q)$ of the operator $Q[u]$, defined as
\begin{equation}\label{def principal eigenvalue}
\bar{\lambda}(Q) =\sup \{\lambda \in \mathbb{R} : \text{there exists a positive viscosity supersolution $u$ of } \eqref{eigen problem} \}.
\end{equation}
Calling it a first Dirichlet eigenvalue can be justified: they proved,
via Perron's method for viscosity solutions, that there exists a positive eigenfunction vanishing on the boundary associated with $\bar{\lambda}$. In other words, for $\lambda=\bar{\lambda}(Q)$, the eigenvalue problem \eqref{eigen problem} admits a positive viscosity solution. The simplicity of $\bar{\lambda}(Q)$ has been proved very recently for the normalized $p$-Laplacian in \cite{CFK20}, but is not known for general operators.
We establish the following optimal lower bound for $\bar{\lambda}(Q)$ in terms of geometric data of the underlying smooth metric measure space.
\begin{thm}\label{thm Dirichlet eigenvalue}
Let $(M^n, g, e^{-f}d\mu_g)$ be a compact smooth metric measure space with smooth nonempty boundary $\partial M$.
Let $Q[u]$ be defined in \eqref{Q def} and assume further that $Q[u]$ is homogeneous of degree $\gamma >0$ in the sense of \eqref{def homogeneous}.
Let $\bar{\lambda}(Q)$ be the first Dirichlet eigenvalue of $Q[u]$ defined as in \eqref{def principal eigenvalue}.
\begin{enumerate}
\item[(i)] Suppose $\operatorname{Ric}^N_f \geq (N-1)\kappa$ and $H_f \geq (N-1)\Lambda$ for $N \in [n,\infty)$ and $\kappa, \Lambda \in \mathbb{R}$. Then we have
\begin{equation*}
\bar{\lambda}(Q) \geq \lambda_1
\end{equation*}
where $\lambda_1$ is the first eigenvalue of the one-dimensional problem on $[0,R]$, with $R$ the inradius of $M$,
\begin{equation}\label{1D eq N finite}
\begin{cases}
\alpha (\varphi') \varphi'' -(N-1) T_{\kappa,\Lambda} \beta(\varphi') \varphi' =-\lambda |\varphi|^{\gamma-1}\varphi, \\
\varphi(0)=0, \varphi'(R)=0.
\end{cases}
\end{equation}
\item[(ii)] Suppose $\operatorname{Ric}_f \geq 0$ and $H_f \geq 0$. Then we have
\begin{equation*}
\bar{\lambda}(Q) \geq \mu_1
\end{equation*}
where $\mu_1$ is the first eigenvalue of the one-dimensional problem on $[0,R]$
\begin{equation}\label{1D eq N infinite}
\begin{cases}
\alpha (\varphi') \varphi'' =-\lambda |\varphi|^{\gamma-1}\varphi, \\
\varphi(0)=0, \varphi'(R)=0.
\end{cases}
\end{equation}
\end{enumerate}
\end{thm}
Theorem \ref{thm Dirichlet eigenvalue} covers the sharp lower bound of the first Dirichlet eigenvalue of the Laplacian proved by Li and Yau \cite{LY80} for $\kappa=\Lambda=0$ and by Kasue \cite{Kasue84} for general $\kappa, \Lambda \in \mathbb{R}$, and of the $p$-Laplacian and weighted $p$-Laplacian by Sakurai \cite{Sakurai19}.
Furthermore, the equality case is achieved if and only if $(M^n, g, e^{-f}d\mu_g)$ is a $(\kappa, \Lambda)$-equational model space.
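For orientation, consider the simplest case $Q=\Delta$ (that is, $\alpha=\beta=1$, $\gamma=1$, $f=\text{constant}$, $N=n$) with $\kappa=\Lambda=0$: the one-dimensional problem \eqref{1D eq N finite} becomes $\varphi''=-\lambda \varphi$ with $\varphi(0)=0$ and $\varphi'(R)=0$, whose first eigenvalue is $\lambda_1 =\frac{\pi^2}{4R^2}$, attained by $\varphi(s)=\sin\left(\frac{\pi s}{2R}\right)$; this is exactly the Li-Yau lower bound in terms of the inradius $R$.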
We will provide two proofs of Theorem \ref{thm Dirichlet eigenvalue} in Section 7: one uses the idea of modulus of continuity estimates by studying the sharp decay rate (see Theorem \ref{Thm Decay Intro}) for solutions of the parabolic equation \eqref{parabolic pde}, as in the closed or Neumann case, while the other uses the definition \eqref{def principal eigenvalue} together with Theorem \ref{Thm comparison distance to boundary}, which readily provides a positive viscosity supersolution to the eigenvalue problem \eqref{eigen problem}.
Finally, we extend the results in \cite{AC09}\cite{AC13}\cite{AX19}\cite{LW17}\cite{LW19eigenvalue} on Riemannian manifolds to the more general setting of smooth metric measure spaces.
These include sharp modulus of continuity for solutions of \eqref{parabolic pde} with empty boundary or convex boundary and Neumann boundary condition (see Theorem \ref{thm Neumann}),
and sharp height-dependent gradient bounds for parabolic equations (see Theorem \ref{thmh}) and for elliptic equations (see Theorems \ref{Thm1.3} and \ref{Thm3.1}).
The paper is organized as follows.
In Section 2, we recall the definition of viscosity solutions and the parabolic maximum principle for semi-continuous functions.
In Section 3, we prove a comparison theorem for the second derivatives of $d(x,y)$.
In Section 4, we extend the results in \cite{AC13} on Riemannian manifolds to smooth metric measure spaces.
In Section 5, we prove comparison theorems for the second derivatives of $d(x,\partial M)$.
The proof of Theorem \ref{Thm MC} will be given in Section 6.
Then two proofs of Theorem \ref{thm Dirichlet eigenvalue} are provided in Section 7.
In Sections 8 and 9, we prove sharp height-dependent gradient bounds for parabolic and elliptic equations on smooth metric measure spaces, respectively.
\section{Preliminaries on Viscosity Solutions}
It is necessary to work with viscosity solutions as the operator $Q[u]$ is not necessarily of divergence form. We refer the reader to
\cite{CIL92} for the general theory of viscosity solutions of non-singular operators on domains in the Euclidean space, to \cite{Giga06} for the necessary adaptations for singular operators and to \cite{I} for adaptations to equations on Riemannian manifolds.
For the convenience of the reader, we provide in this section the definition of viscosity solutions and the parabolic maximum principle for semi-continuous functions on manifolds.
\subsection{Definition of Viscosity Solutions}
Let $(M^n,g)$ be a Riemannian manifold.
We write $\text{USC}(M\times (0,T))$ for the set of all upper-semicontinuous functions from $M\times (0,T)$ to $\mathbb{R}$. Likewise, $\text{LSC}(M\times (0,T))$ contains all lower-semicontinuous functions from $M\times (0,T)$ to $\mathbb{R}$. For upper- and lower-semicontinuous functions, one can introduce the notion of super- and sub-jets respectively (see for instance \cite[Section 8]{CIL92}).
We first introduce the notion of parabolic semijets on manifolds. We write $z=(x,t)$ and $z_0=(x_0, t_0)$.
\begin{definition}
For a function $u\in \mbox{USC}(M\times (0,T))$,
we define the parabolic second order superjet of $u$ at a point $z_0\in M\times (0,T)$ by
\begin{align*}
\mathcal{P}^{2,+} u (z_0) &:=\{(\varphi_t(z_0), \nabla \varphi(z_0), \nabla^2\varphi(z_0)) :
\varphi \in C^{2,1}(M\times (0,T)), \\
& \mbox{ such that } u- \varphi \mbox{ attains a local maximum at } z_0\}.
\end{align*}
For $u\in \mbox{LSC}(M\times (0,T))$, the parabolic second order subjet of $u$ at $z_0\in M\times (0,T)$ is defined by
$$\mathcal{P}^{2,-} u (z_0):=-\mathcal{P}^{2,+} (-u) (z_0).$$
\end{definition}
We also define the closures of $\mathcal{P}^{2,+} u (z_0)$ and $\mathcal{P}^{2,-} u (z_0)$ by
\begin{align*}
\overline{\mathcal{P}}^{2,+}u(z_0)
&=\{(\tau,p,X)\in \mathbb{R} \times T_{x_0}M \times Sym^2(T^*_{x_0}M) |
\mbox{ there is a sequence } (z_j,\tau_j,p_j,X_j) \\
&\mbox{ such that } (\tau_j,p_j ,X_j)\in \mathcal{P}^{2,+}u(z_j) \\
&\mbox{ and } (z_j,u(z_j),\tau_j,p_j,X_j) \to (z_0,u(z_0),\tau, p ,X) \mbox{ as } j\to \infty \}; \\
\overline{\mathcal{P}}^{2,-}u(z_0)&=-\overline{\mathcal{P}}^{2,+}(-u)(z_0).
\end{align*}
\begin{definition}[Semicontinuous envelopes]
Let $X$ be a metric space and let $f$ be a function defined on a dense subset of $X$. We call the function $f^*: X \to \mathbb{R}$ defined by
\begin{equation*}
f^{*}(x) := \inf \{g(x) : g \in \text{USC}(X),\ g \geq f\}
\end{equation*}
the upper-semicontinuous envelope of $f$. Analogously,
\begin{equation*}
f_{*}(x) := \sup \{g(x) : g \in \text{LSC}(X),\ g \leq f\}
\end{equation*}
is the lower-semicontinuous envelope of $f$.
\end{definition}
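As a simple illustration, the indicator function $f =\mathbf{1}_{(0,1)}$ on $\mathbb{R}$ has $f^* =\mathbf{1}_{[0,1]}$ and $f_* =f$. For the operator $Q$, the envelopes only play a role at $p=0$, where $Q$ is allowed to be singular.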
Now we give the definition of a viscosity solution for the general equation
\begin{equation} \label{equ mfd}
u_t+F(x, t, u, \nabla u, \nabla^2 u)=0
\end{equation}
on $M$.
Assume $F: M \times[0,T] \times \mathbb{R} \times \left(TM\setminus\{0\}\right) \times \text{Sym}^2(T^*M) \to \mathbb{R}$ is continuous and proper, i.e.,
$$F(x,t,r,p,X) \leq F(x,t,s,p,Y) \mbox{ whenever } p\neq 0, r\leq s, Y \leq X.$$
\begin{definition}\label{def viscosity}
(i) A function $u \in \mbox{USC}(M\times(0,T))$ is a viscosity subsolution of \eqref{equ mfd}
if for all $z \in M\times(0,T)$ and $(\tau, p, X) \in \mathcal{P}^{2,+}u(z)$,
\begin{align*}
\tau +F_*(z, u(z), p, X) \leq 0.
\end{align*}
(ii) A function $u \in \mbox{LSC}(M\times(0,T))$ is a viscosity supersolution of \eqref{equ mfd}
if for all $z \in M\times(0,T)$ and $(\tau, p, X) \in \mathcal{P}^{2,-}u(z)$,
\begin{align*}
\tau +F^*(z, u(z) , p, X) \geq 0.
\end{align*}
(iii) A viscosity solution of \eqref{equ mfd} is defined to be a continuous function that is both a
viscosity subsolution and a viscosity supersolution of \eqref{equ mfd}.
\end{definition}
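To illustrate the role of the jets, consider (in the simpler elliptic setting) the function $u(x)=|x|$ on $\mathbb{R}$: no $C^2$ function can touch $u$ from above at the kink $x=0$, so the superjet of $u$ at $0$ is empty and the subsolution condition is vacuous there, while the subjet at $0$ contains $(p,X)$ for every $|p|<1$ and every $X\in\mathbb{R}$.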
\subsection{Maximum principle for viscosity solutions}
The main technical tool we use is the parabolic maximum principle for semicontinuous functions on manifolds, which is a restatement of \cite[Theorem 8.3]{CIL92} for Riemannian manifolds. One can also find it in \cite[Section 2.2]{I} or \cite[Theorem 3.8]{AFS1}.
\begin{thm}\label{max prin}
Let $M_1^{N_1}, \cdots, M_k^{N_k}$ be Riemannian manifolds, and $\Omega_i \subset M_i$ open subsets.
Let $u_i \in USC((0,T)\times \Omega_i)$, and let $\varphi$ be a function defined on $(0,T)\times \Omega_1 \times \cdots \times \Omega_k$ such that $\varphi$ is continuously differentiable in $t$ and twice continuously differentiable in
$(x_1, \cdots, x_k) \in \Omega_1 \times \cdots \times \Omega_k$.
Suppose that $\hat{t} \in (0,T), \hat{x}_i \in \Omega_i$ for $i=1, \cdots, k$ and the function
$$\omega(t, x_1, \cdots, x_k) :=u_1(t,x_1)+\cdots + u_k(t,x_k)-\varphi(t,x_1, \cdots , x_k) $$
attains a maximum at $(\hat{t},\hat{x}_1, \cdots, \hat{x}_k)$ on $(0,T)\times \Omega_1 \times \cdots \times \Omega_k$.
Assume further that there is an $r >0$ such that for every $\eta >0$ there is a $C>0$ such that for $i=1, \cdots, k$
\begin{align*}
& b_i \leq C \mbox{ whenever } (b_i,q_i,X_i) \in \overline{\mathcal{P}}^{2,+}u_i(t,x_i) , \\
& d(x_i, \hat{x}_i)+|t-\hat{t}| \leq r \mbox{ and } |u_i(t,x_i)|+|q_i| +\|X_i\| \leq \eta.
\end{align*}
Then for each $\lambda>0$, there are $b_i \in \mathbb{R}$ and $X_i \in Sym^2(T^*_{\hat{x}_i} M_i)$ such that
\begin{align*}
& (b_i,\nabla_{x_i}\varphi(\hat{t},\hat{x}_1, \cdots, \hat{x}_k),X_i) \in \overline{\mathcal{P}}^{2,+}u_i(\hat{t},\hat{x}_i),\\
& -\left(\frac 1 \lambda +\left\|S\right\| \right)I \leq
\begin{pmatrix}
X_1 & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & X_k
\end{pmatrix}
\leq S+\lambda S^2, \\
& b_1 + \cdots + b_k =\varphi_t(\hat{t},\hat{x}_1, \cdots, \hat{x}_k),
\end{align*}
where $S=\nabla^2\varphi(\hat{t},\hat{x}_1, \cdots, \hat{x}_k).$
\end{thm}
\section{Comparison theorems for the second derivatives of $d(x,y)$}
In this section, we prove a comparison theorem for the second derivatives of $d(x,y)$, which generalizes \cite[Theorem 3]{AC13} from Riemannian manifolds to smooth metric measure spaces.
Let $(x_0,y_0)\in M\times M\setminus\{(x,x) : x\in M\}$ and $d(x_0, y_0)=s_0$.
Let $\gamma_0:[0,s_0]\rightarrow M$ be a unit minimizing geodesic joining $x_0$ and $y_0$ with $\gamma_0(0)=x_0$ and $\gamma_0(s_0)=y_0$. We choose Fermi coordinates $\{e_i(s)\}$ along $\gamma_0$ with $e_n(s)=\gamma_0'(s)$. Note that the distance function $d(x,y)$ may not be smooth at $(x_0,y_0)$, so one cannot apply the maximum principle for semicontinuous functions on manifolds directly. To overcome this, we proceed as in \cite[pages 561-562]{LW17} and replace $d(x,y)$ by a smooth function $\rho(x,y)$, which is defined as follows:
\begin{definition}[Modified distance function]\label{def-rho}
Let $U(x_0,y_0)\subset M\times M\setminus\{(x,x) : x\in M\}$ be a neighborhood of $(x_0, y_0)$.
Define variation fields $V_i(s)$ along $\gamma_0(s)$ by
$V_i(s)=\eta(s) e_i(s)$ for $1\le i\le n-1$,
and $V_n(s)=e_n(s)$, where $\eta(s)$ is a smooth function to be chosen later. We then define a smooth function $\rho(x,y)$ in $U(x_0,y_0)$ to be the length of the curve $$\exp_{\gamma_0(s)}{\sum\limits_{i=1}^n\Big((1-\frac{s}{s_0})b_i(x)+\frac{s}{s_0}c_i(y)\Big)V_i(s)}$$
for $s\in [0,s_0]$, where $b_i(x)$ and $c_i(y)$ are defined so that
$$
x=\exp_{x_0}\left(\sum_{i=1}^n b_i(x) e_i(0)\right) \text{\quad and \quad } y=\exp_{y_0}\left(\sum_{i=1}^n c_i(y) e_i(s_0)\right).
$$
\end{definition}
For $\rho(x,y)$ defined above, the standard first and second variation formulas for arc-length yield the following.
\begin{lemma}[Variation formulas]
The first variation formula gives
\begin{equation}\label{1st}
\nabla \rho(x_0,y_0)=\big(-e_n(0),e_n(s_0)\big).
\end{equation}
The second variation formula gives
\begin{equation}\label{2nd1}
\nabla^2 \rho \Big(\big(e_n(0),\pm e_n(s_0)\big),\big(e_n(0),\pm e_n(s_0)\big)\Big)=0,
\end{equation}
and for $1\le i\le n-1$
\begin{equation}\label{2nd2}
\nabla^2 \rho \Big(\big(e_i(0),e_i(s_0)\big),\big(e_i(0),e_i(s_0)\big)\Big)=\int_0^{s_0}(\eta')^2-\eta^2 R\big(e_i,e_n,e_i,e_n\big) \, ds
\end{equation}
at $(x_0, y_0)$.
\end{lemma}
\begin{thm}\label{Thm comparison distance}
Let $(M^n, g, e^{-f} d\mu_g)$ be a compact smooth metric measure space with diameter $D$. Let $\varphi: [0, \frac{D}{2}] \to \mathbb{R}$ be a smooth function with $\varphi'\ge 0$, and define $v(x,y):=2\varphi\left(\frac{d(x,y)}{2}\right)$.
\begin{enumerate}
\item[(i)] Suppose $\operatorname{Ric}^N_f \geq (N-1)\kappa$ for $N \in [n,\infty)$. Then on the set $ (M\times M) \setminus \{(x,x) : x\in M\}$, the function $v(x,y)$ is a viscosity supersolution of
\begin{equation*}
L[\nabla^2 v, \nabla v] = 2 \Big(\alpha(\varphi') \varphi '' -(N-1)T_{\kappa,0} \beta(\varphi')\varphi' \Big)\big|_{ s=\frac{d(x,y)}{2}}.
\end{equation*}
\item[(ii)] Suppose $\operatorname{Ric}_f\geq \kappa$ for $\kappa \in \mathbb{R}$. Then on the set $ (M\times M) \setminus \{(x,x) : x\in M\}$, the function $v(x,y)$ is a viscosity supersolution of
\begin{equation*}
L[\nabla^2 v, \nabla v] = 2 \Big(\alpha(\varphi') \varphi '' -\kappa s \beta(\varphi')\varphi' \Big)\Big|_{s=\frac{d(x,y)}{2}}.
\end{equation*}
\end{enumerate}
Here the operator $L$ is defined by
\begin{align*}
L[B, w] =\inf & \left\{ \operatorname{tr}(AB)-\beta(|w|)\langle\nabla (f(x)+f(y)), w\rangle
: A \in \text{Sym}^2(T^*_{(x,y)}M\times M), \right. \\
& \text{ }\left. A \geq 0, A|_{T^*_x M} =a (w|_{T_x M}), A|_{T^*_y M} =a (w|_{T_y M}) \right\}
\end{align*}
for any $B \in \text{Sym}^2(T^*_{(x,y)}(M\times M))$ and $w \in T_{(x,y)}(M \times M)$, where $a(w)$ is defined by
$$a(w)(\xi,\xi)=\alpha(|w|)\frac{(w\cdot \xi)^2}{|w|^2}+\beta(|w|)\left(|\xi|^2-\frac{(w\cdot \xi)^2}{|w|^2}\right).$$
\end{thm}
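In other words, $a(w)$ has eigenvalue $\alpha(|w|)$ in the direction of $w$ and eigenvalue $\beta(|w|)$ on the orthogonal complement of $w$, so the constraints on $A$ force it to restrict, on each of $T_{x}M$ and $T_{y}M$, to the coefficient matrix appearing in \eqref{Q def}.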
\begin{proof}
The case $N=n$ has been proved in \cite{AC13}. The proof here is a slight modification of the proof of Theorem 3 in \cite{AC13} and we include it for the reader's convenience.
By approximation it suffices to consider the case where $\varphi'$ is strictly positive. For any $x_0,y_0 \in M$ with $x_0\neq y_0$, it suffices to show that for any smooth function $\psi(x,y)$ satisfying
$$
\psi(x, y)\le v(x,y)
$$
in a neighborhood $U(x_0,y_0)$ of $(x_0,y_0)$ with equality at $(x_0, y_0)$, it holds that
\begin{equation}\label{2.1}
L[\nabla^2 \psi,\nabla \psi](x_0, y_0)\le 2 \ \Big(\alpha(\varphi') \varphi '' -(N-1)T_{\kappa,0} \beta(\varphi')\varphi' \Big)\Big|_{s=\frac{s_0}{2}}
\end{equation}
for $n\le N<\infty$, and
\begin{equation}\label{2.2}
L[\nabla^2 \psi,\nabla \psi](x_0, y_0)\le 2 \Big(\alpha(\varphi') \varphi '' -\kappa s \beta(\varphi')\varphi' \Big)\Big|_{s=\frac{s_0}{2}}
\end{equation}
for $N=\infty$, where $s_0=d(x_0, y_0)$.
From the definition of $\rho(x,y)$, we see clearly that $d(x,y)\le \rho(x,y)$ and $d(x_0,y_0)=\rho(x_0,y_0)$. Since
$\varphi'>0$, we have that
\begin{equation}\label{2.6}
\psi(x,y)\le v(x,y) \le 2\varphi(\frac{\rho(x,y)}{2})
\end{equation}
in $U(x_0,y_0)$ with equality at $(x_0, y_0)$. Then the first derivative test for $\psi$ at $(x_0, y_0)$ yields
$$
\nabla_x \psi=- \varphi' e_n(0) \text{\quad and\quad} \nabla_y \psi= \varphi' e_n(s_0),
$$
where we used the first variation formula (\ref{1st}). Here and below the derivatives of $\varphi$ are all evaluated at $\frac{s_0}{2}$.
From the definition of $a$ we deduce
$$
a(\nabla \psi|_{T_{x_0}M})=\alpha(\varphi')e_n(0)\otimes e_n(0)+\beta(\varphi')\sum_{i=1}^{n-1}e_i(0)\otimes e_i(0)
$$
and
$$
a(\nabla \psi|_{T_{y_0}M})=\alpha(\varphi')e_n(s_0)\otimes e_n(s_0)+\beta(\varphi')\sum_{i=1}^{n-1}e_i(s_0)\otimes e_i(s_0).
$$
To prove inequalities (\ref{2.1}) and (\ref{2.2}), we choose $A$ as follows
$$
A=\alpha(\varphi')\big(e_n(0),-e_n(s_0)\big)\otimes \big(e_n(0),-e_n(s_0)\big)+\beta(\varphi')\sum_{i=1}^{n-1}\big(e_i(0), e_i(s_0)\big)\otimes\big (e_i(0),e_i(s_0)\big)
$$
which is clearly nonnegative, and agrees with $a$ on $T_{x_0}M$ and $T_{y_0}M$ as required. This choice gives
\begin{eqnarray}\label{2.7}
\operatorname{tr}(A\nabla^2\psi)
&=&\alpha(\varphi')\nabla^2\psi\big((e_n(0), -e_n(s_0)),(e_n(0), -e_n(s_0))\big)\nonumber\\
&&+\sum_{i=1}^{n-1}\beta(\varphi')\nabla^2\psi\big((e_i(0), e_i(s_0)),(e_i(0), e_i(s_0))\big).
\end{eqnarray}
Now we estimate the terms involving second derivatives of $\psi$. Recall from (\ref{2.6}) that $\psi(x,y)-2\varphi(\frac{\rho(x,y)}{2})$ attains a local maximum at $(x_0, y_0)$; the second derivative test at $(x_0, y_0)$ then yields
\begin{eqnarray}\label{2.9}
&&\nabla^2 \psi \big((e_n(0), -e_n(s_0)),(e_n(0), -e_n(s_0))\big)\nonumber\\
&\le& 2\nabla^2\varphi(\frac{\rho(x,y)}{2})\big((e_n(0), -e_n(s_0)),(e_n(0), -e_n(s_0))\big)\nonumber\\
&=&\frac{d^2}{dt^2}\Big|_{t=0}2\varphi(\frac{s_0}{2}-t)\nonumber\\
&=&2\varphi''(\frac{s_0}{2})
\end{eqnarray}
and for $1\le i\le n-1$
\begin{eqnarray}\label{2.8}
&&\nabla^2 \psi \big((e_i(0), e_i(s_0)),(e_i(0), e_i(s_0))\big)\nonumber\\
&\le& 2\nabla^2\varphi(\frac{\rho(x,y)}{2})\big((e_i(0), e_i(s_0)),(e_i(0), e_i(s_0))\big)\nonumber\\
&=&\varphi'(\frac{s_0}{2})\nabla^2 \rho\big((e_i(0),e_i(s_0)),(e_i(0),e_i(s_0))\big)\nonumber\\
&=&\varphi'(\frac{s_0}{2})\int_0^{s_0}(\eta')^2-\eta^2 R(e_i,e_n,e_i,e_n) \, ds
\end{eqnarray}
where we used the variation formulas (\ref{1st}), (\ref{2nd1}) and (\ref{2nd2}).
Substituting (\ref{2.9}) and (\ref{2.8}) into (\ref{2.7}), we obtain
\begin{equation*}
\operatorname{tr}(A\nabla^2\psi)\le 2 \alpha(\varphi')\varphi''+\beta(\varphi')\varphi'\int_0^{s_0}(n-1)(\eta')^2-\eta^2 \operatorname {Ric}(e_n,e_n) \, ds.
\end{equation*}
Therefore, using the definition of $L$ we have
\begin{eqnarray}\label{2.10}
&&L[\nabla^2 \psi, \nabla \psi](x_0, y_0)\nonumber\\
&\le& \operatorname{tr}(A\nabla^2\psi)-\beta(\varphi')\langle \nabla (f(x_0)+f(y_0)) , \nabla \psi\rangle\nonumber\\
&\le&2 \alpha(\varphi')\varphi''+\beta(\varphi')\varphi'\int_0^{s_0}(n-1)(\eta')^2-\eta^2 \operatorname {Ric}(e_n,e_n) \, ds\nonumber\\
&&-\beta(\varphi')\langle \nabla f(x_0) , \nabla_x \psi\rangle-\beta(\varphi')\langle \nabla f(y_0) , \nabla_y \psi\rangle.
\end{eqnarray}
Now we prove inequalities (\ref{2.1}) and (\ref{2.2}).\\
\textbf{Case 1.} $n<N<\infty$.
Choose $$\eta(s)=\frac{C_{\kappa, 0}(s-\frac{s_0}{2})}{C_{\kappa,0}(\frac{s_0}{2})}$$ in the definition of $\rho(x,y)$, and estimate using $\operatorname{Ric}_f^N\ge (N-1)\kappa$ that
\begin{eqnarray}\label{2.11}
&&\int_0^{s_0}(n-1)(\eta')^2 -\eta^2 \operatorname{Ric}(e_n, e_n) \,ds \nonumber\\
& \leq & (N-1) \int_0^{s_0} (\eta')^2\,ds -(N-n) \int_0^{s_0} (\eta')^2 \,ds-(N-1)\kappa \int_0^{s_0} \eta^2 \,ds \nonumber\\
&& + \int_0^{s_0} \eta^2 \nabla^2 f (e_n,e_n) \, ds -\frac{1}{N-n}\int_0^{s_0} \eta^2 \nabla f \otimes \nabla f (e_n, e_n)\, ds \nonumber\\
&=& (N-1) \int_0^{s_0} (\eta')^2\,ds -(N-n) \int_0^{s_0} (\eta')^2 \,ds-(N-1)\kappa \int_0^{s_0} \eta^2 \,ds \nonumber\\
&& + \left. \eta^2 (f\circ \gamma_0 )' \right|_0^{s_0} - 2 \int_0^{s_0} \eta \, \eta' (f \circ \gamma_0)' ds -\frac{1}{N-n} \int_0^{s_0} \eta^2 ((f\circ \gamma_0)' )^2 ds \nonumber\\
&\le& (N-1) \int_0^{s_0} (\eta')^2 \, ds-(N-1)\kappa \int_0^{s_0} \eta^2 \, ds\nonumber\\
&&+\langle \nabla f(y_0), e_n(s_0) \rangle-\langle \nabla f(x_0), e_n(0) \rangle,
\end{eqnarray}
where we used $(N-n)(\eta')^2+2\eta \eta' (f\circ \gamma_0)'+\frac{1}{N-n}\eta^2((f\circ \gamma_0)')^2\ge0$ in the last inequality. Since
\begin{eqnarray*}
\int_0^{s_0} (\eta')^2 -\kappa \eta^2 \, ds=\eta \eta'|_0^{s_0}-\int_0^{s_0} \left(\eta''\eta +\kappa \eta^2\right) ds
=-2T_{\kappa, 0}(\frac{s_0}{2}),
\end{eqnarray*}
then we deduce from (\ref{2.11}) that
\begin{eqnarray}\label{eta1}
&&\int_0^{s_0}(n-1)(\eta')^2 -\eta^2 \operatorname{Ric}(e_n, e_n) \, ds\nonumber\\
&\le& -2(N-1)T_{\kappa, 0}(\frac{s_0}{2})+\langle \nabla f(y_0), e_n(s_0) \rangle-\langle \nabla f(x_0), e_n(0) \rangle.
\end{eqnarray}
Combining (\ref{2.10}) and (\ref{eta1}), we obtain
\begin{eqnarray*}
L[\nabla^2 \psi,\nabla \psi](x_0, y_0)\le2\alpha(\varphi')\varphi''-2(N-1)T_{\kappa,0}\beta(\varphi')\varphi',
\end{eqnarray*}
where we used $\nabla_x \psi=- \varphi'(\frac{s_0}{2}) e_n(0)$ and $ \nabla_y \psi= \varphi' (\frac{s_0}{2})e_n(s_0)$ at $(x_0, y_0)$. This proves (\ref{2.1}).
\textbf{Case 2.} $ N=\infty$.
Choose $\eta\equiv 1$, and estimate using $\operatorname{Ric}+\nabla^2 f\ge \kappa$ that
\begin{eqnarray}\label{eta2}
&&\int_0^{s_0}(n-1)(\eta')^2-\eta^2 \operatorname {Ric}(e_n,e_n) \, ds\nonumber\\
&=&\int_0^{s_0}-\operatorname {Ric}(e_n,e_n) \, ds
\nonumber\\
&\le&\int_{0}^{s_0}-\kappa +(f\circ \gamma_0)''(s) \, ds\nonumber\\
&=&-\kappa s_0+\langle \nabla f(y_0), e_n(s_0) \rangle-\langle \nabla f(x_0), e_n(0) \rangle.
\end{eqnarray}
Substituting inequality (\ref{eta2}) into (\ref{2.10}), we get at $(x_0, y_0)$
\begin{eqnarray*}
L[\nabla^2 \psi,\nabla \psi]&\le&\operatorname{tr}(A\nabla^2\psi)-\beta(\varphi')\langle\nabla (f(x)+f(y)), \nabla \psi\rangle\\
&\le&2\alpha(\varphi')\varphi''-\kappa s_0 \beta(\varphi')\varphi'\\
&=&2 \Big(\alpha(\varphi') \varphi '' -\kappa s \beta(\varphi')\varphi' \Big)\Big|_{s=\frac{s_0}{2}},
\end{eqnarray*}
which proves (\ref{2.2}).
\end{proof}
\section{Modulus of Continuity Estimates for Neumann Boundary Condition}
The goal of this section is to extend the results in \cite{AC13} from Riemannian manifolds to smooth metric measure spaces, as well as from smooth solutions to viscosity solutions.
\begin{thm}\label{thm Neumann}
Let $(M^n, g, e^{-f}d\mu_g)$ be a compact smooth metric measure space with diameter $D$ (possibly with smooth strictly convex boundary). Let $u:M\times [0,T)\rightarrow \mathbb{R}$ be a viscosity solution of $u_t=Q[u]$
(with Neumann boundary conditions if $\partial M \neq \emptyset$), where $Q[u]$ is defined in \eqref{Q def}.
Assume $\operatorname{Ric}^N_f \geq (N-1) \kappa$ for $N\in [n,\infty)$ and $\kappa \in \mathbb{R}$, or $\operatorname{Ric}_f\geq \kappa$ for $\kappa \in \mathbb{R}$.
Let $\varphi:[0, D/2] \times \mathbb{R}_+ \to \mathbb{R}_+$ be a smooth function satisfying
\begin{enumerate}
\item $\varphi(s,0) =\varphi_0(s)$ for each $s \in [0,D/2]$;
\item $\varphi_t \geq \alpha(\varphi')\varphi'' -(N-1) T_{\kappa, 0} \beta(\varphi')\varphi'$ if $N\in[n, \infty)$, or \\
$\varphi_t \geq \alpha(\varphi')\varphi'' -\kappa s \beta(\varphi')\varphi'$ if $N= \infty$;
\item $\varphi'\geq 0$ on $[0, D/2] \times \mathbb{R}_+$;
\item $\varphi_0$ is a modulus of continuity of $u(x, 0)$.
\end{enumerate}
Then $\varphi(s,t)$ is a modulus of continuity for $u(x,t)$ for each $t \in [0,T)$, i.e.,
\begin{equation*}
u(y,t)-u(x,t) -2\varphi\left(\frac{d(x,y)}{2},t \right) \leq 0
\end{equation*}
for all $x,y \in M$ and $t\in [0,T)$.
\end{thm}
\begin{proof}[Proof of Theorem \ref{thm Neumann}]
On the product manifold $M \times M \times [0,T)$, define an evolving quantity $Z_{{\varepsilon}}(x,y,t)$ by
\begin{equation*}
Z_{{\varepsilon}}(x,y,t) =u(y,t) -u(x,t) -2\varphi \left(\frac{d(x,y)}{2},t \right) -\frac{{\varepsilon}}{T-t},
\end{equation*}
for any small positive ${\varepsilon}$.
Note that we have $Z_{{\varepsilon}}(x,y,0) <0$ since we assumed that $\varphi_0$ is a modulus of continuity for $u$ at $t = 0$. Also observe that $Z_{{\varepsilon}}(x,x,t) <0$ for all $x\in M$ and $t\in [0,T)$. Thus, if $Z_{{\varepsilon}}$ ever becomes positive, there exists a time $t_0 > 0$ and points $x_0 \neq y_0$ in
$M$ such that $Z_{{\varepsilon}}$ attains its maximum at $(x_0,y_0,t_0)$. Notice that the Neumann condition, the convexity of the boundary $\partial M$, and the positivity of $\varphi'$ guarantee that both $x_0$ and $y_0$ lie in the interior of $M$ if $(M^n, g)$ has non-empty boundary.
Since the distance function $d(x,y)$ may not be smooth, we replace $d(x,y)$ by the smooth function $\rho(x,y)$ defined in Definition \ref{def-rho}; the function
$$u(y,t)-u(x, t)-2\varphi\left(\frac{\rho(x,y)}{2},t\right)-\frac{{\varepsilon}}{T-t}$$
still has a local maximum at $(x_0,y_0,t_0)$, since $\varphi$ is nondecreasing in its first variable.
Now we can apply the parabolic version of the maximum principle for semicontinuous functions on manifolds (Theorem \ref{max prin}) to conclude that:
for each $\lambda >0$, there exist symmetric tensors $X, Y$ such that
\begin{eqnarray}\label{1stx}
(b_1, \nabla_y \psi (x_0, y_0, t_0), X) &\in \overline{\mathcal{P}}^{2,+} u(y_0,t_0),\\
(-b_2, - \nabla_x \psi (x_0, y_0, t_0), Y) &\in \overline{\mathcal{P}}^{2,-} u(x_0,t_0),
\end{eqnarray}
\begin{eqnarray}\label{1stt}
b_1+b_2= \psi_t (x_0, y_0, t_0)=2 \varphi_t(\frac{s_0}{2},t_0)+\frac{{\varepsilon}}{(T-t_0)^2},
\end{eqnarray}
and
\begin{eqnarray}\label{2ndx}
\begin{pmatrix}
X & 0 \\
0 & -Y
\end{pmatrix}
\leq S+\lambda S^2,
\end{eqnarray}
where $\psi(x,y,t)=2\varphi(\frac{\rho(x,y)}{2},t)+\frac{{\varepsilon}}{T-t}$, $S=\nabla^2 \psi(x_0,y_0,t_0)$, and $s_0=d(x_0,y_0)$.
Using the first derivative formula (\ref{1st}) of $\rho$, we have
\begin{equation}\label{der psi}
\nabla_x \psi (x_0, y_0, t_0)=-\varphi'(\frac{s_0}{2}, t_0)e_n(0)
\text{\quad and \quad}
\nabla_y \psi (x_0, y_0, t_0)=\varphi'(\frac{s_0}{2}, t_0)e_n(s_0).
\end{equation}
Since $u$ is both a subsolution and a supersolution of \eqref{parabolic pde}, (\ref{1stx}) yields
\begin{equation}\label{b1}
b_1\le \operatorname{tr}(A(\varphi')X)-\beta(\varphi')\varphi'\langle \nabla f(y_0), e_n(s_0)\rangle
\end{equation}
and
\begin{equation}\label{b2}
-b_2\ge \operatorname{tr}(A(\varphi')Y)-\beta(\varphi')\varphi'\langle \nabla f(x_0), e_n(0)\rangle
\end{equation}
where $A$ is a diagonal matrix defined by
$$
A=\left(
\begin{array}{cccc}
\beta(\varphi') & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots \\
0 & \cdots &\beta(\varphi') & 0 \\
0 & \cdots & 0 & \alpha(\varphi') \\
\end{array}
\right)
$$
and we have used equality (\ref{der psi}) and $\varphi'>0$.
Set
$$C=\left(
\begin{array}{cccc}
\beta(\varphi') & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots \\
0 & \cdots &\beta(\varphi') & 0 \\
0 & \cdots & 0 & -\alpha(\varphi') \\
\end{array}
\right),$$
and then $\left(
\begin{array}{cc}
A & C \\
C & A \\
\end{array}
\right)\ge 0$.
Substituting inequalities (\ref{b1}) and (\ref{b2}) into (\ref{1stt}), we have
\begin{eqnarray}\label{a5}
&&2\varphi_t(\frac{s_0}{2}, t_0)+\frac{{\varepsilon}}{(T-t_0)^2}\\
&=& b_1+b_2\nonumber\\
&\le& \operatorname{tr}\left[\left(
\begin{array}{cc}
A & C \\
C & A \\
\end{array}
\right)
\left(
\begin{array}{cc}
X & 0 \\
0 & -Y \\
\end{array}
\right)
\right] \nonumber\\
&& -\beta(\varphi')\varphi'\langle \nabla f(y_0), e_n(s_0)\rangle+\beta(\varphi')\varphi'\langle \nabla f(x_0), e_n(0)\rangle\nonumber\\
&\le&\operatorname{tr}\left[\left(
\begin{array}{cc}
A & C \\
C & A \\
\end{array}
\right)S
\right]
+\lambda\operatorname{tr}\left[\left(
\begin{array}{cc}
A & C \\
C & A \\
\end{array}
\right)S^2
\right]\nonumber\\
&&-\beta(\varphi')\varphi'\big(\langle \nabla f(y_0), e_n(s_0)\rangle-\langle \nabla f(x_0), e_n(0) \rangle\big),\nonumber
\end{eqnarray}
where we used inequality (\ref{2ndx}).
Direct calculation shows
\begin{eqnarray}\label{a4}
&&\operatorname{tr}\left[\left(
\begin{array}{cc}
A & C \\
C & A \\
\end{array}
\right)S
\right]\nonumber\\
&=&\alpha(\varphi')\nabla^2\psi\big((e_n(0), -e_n(s_0)),(e_n(0), -e_n(s_0))\big)\nonumber\\
&&+\sum_{i=1}^{n-1}\beta(\varphi')\nabla^2\psi\big((e_i(0), e_i(s_0)),(e_i(0), e_i(s_0))\big)\nonumber\\
&=&2\alpha(\varphi')\varphi''+\beta(\varphi')\varphi'\sum_{i=1}^{n-1}\nabla^2\rho\big((e_i(0), e_i(s_0)),(e_i(0), e_i(s_0))\big)\nonumber\\
&=&2\alpha(\varphi')\varphi''+\beta(\varphi')\varphi'\int_0^{s_0}(n-1)(\eta')^2-\eta^2 \operatorname {Ric}(\gamma_0',\gamma_0') \, ds
\end{eqnarray}
where we used the variation formulas (\ref{1st}), (\ref{2nd1}) and (\ref{2nd2}) for $\rho(x,y)$.
If $\operatorname{Ric}_f^N \ge(N-1)\kappa$, we choose $\eta=\frac{C_{\kappa, 0}(s-\frac{s_0}{2})}{C_{\kappa,0}(\frac{s_0}{2})}$ in (\ref{a4}), and using (\ref{eta1}) we obtain
\begin{eqnarray*}
\operatorname{tr}\left[\left(
\begin{array}{cc}
A & C \\
C & A \\
\end{array}
\right)S
\right]&\le& 2\alpha(\varphi')\varphi''-2(N-1)T_{\kappa, 0}(\frac{s_0}{2})\beta(\varphi')\varphi' \\
&& +\beta(\varphi')\varphi'(\langle \nabla f(y_0), e_n(s_0) \rangle-\langle \nabla f(x_0), e_n(0) \rangle),
\end{eqnarray*}
and then (\ref{a5}) yields
\begin{eqnarray}\label{a6}
&&2\varphi_t(\frac{s_0}{2},t_0)+\frac{{\varepsilon}}{(T-t_0)^2}\nonumber\\
&\le&2 \ \Big(\alpha(\varphi') \varphi '' -(N-1)T_{\kappa,0} \varphi'\beta(\varphi') \Big)\Big|_{s=\frac{s_0}{2},t=t_0}+\lambda\operatorname{tr}\left[\left(
\begin{array}{cc}
A & C \\
C & A \\
\end{array}
\right)S^2
\right].
\end{eqnarray}
If $\operatorname{Ric}_f\ge \kappa$, we choose $\eta\equiv 1$, and substituting (\ref{eta2}) into equality (\ref{a4}) we obtain
\begin{eqnarray*}
\operatorname{tr}\left[\left(
\begin{array}{cc}
A & C \\
C & A \\
\end{array}
\right)S
\right]
&\le& 2\alpha(\varphi')\varphi''-\kappa s_0\beta(\varphi')\varphi'\\
&&+\beta(\varphi')\varphi'(\langle \nabla f(y_0), e_n(s_0) \rangle-\langle \nabla f(x_0), e_n(0) \rangle),
\end{eqnarray*}
and then (\ref{a5}) yields
\begin{eqnarray}\label{a7}
&&2\varphi_t(\frac{s_0}{2},t_0)+\frac{{\varepsilon}}{(T-t_0)^2}\nonumber\\
&\le&2\ \Big(\alpha(\varphi') \varphi '' -\kappa s \beta(\varphi')\varphi' \Big)\Big|_{s=\frac{s_0}{2},t=t_0}+\lambda\operatorname{tr}\left[\left(
\begin{array}{cc}
A & C \\
C & A \\
\end{array}
\right)S^2
\right].
\end{eqnarray}
Since $\lambda$ is arbitrary, letting $\lambda\rightarrow 0$ in (\ref{a6}) and (\ref{a7}) yields a contradiction with assumption 2.
Therefore we conclude that $Z_{{\varepsilon}}(x,y,t)\le 0$ for $t\in [0, T)$. Letting ${\varepsilon}\rightarrow 0^+$, we finish the proof of Theorem \ref{thm Neumann}.
\end{proof}
As an application of Theorem \ref{thm Neumann}, we obtain an optimal lower bound on the first nonzero eigenvalue of the weighted $p$-Laplacian with $1<p\leq 2$ on a smooth metric measure space.
\begin{thm}\label{thm Neumann Eigenvalue}
Fix $1<p\leq 2$. Let $(M^n, g, e^{-f}d\mu_g)$ be a compact smooth metric measure space with diameter $D$ (possibly with smooth strictly convex boundary).
Let $\lambda_{1,p}$ be the first nonzero closed or Neumann eigenvalue of the weighted $p$-Laplacian.
\begin{enumerate}
\item If $\operatorname{Ric}^N_f \geq (N-1) \kappa$ for $N\in [n,\infty)$ and $\kappa \in \mathbb{R}$, then we have
\begin{equation*}
\lambda_{1,p} \geq \bar{\lambda}_{1,p}
\end{equation*}
where $\bar{\lambda}_{1,p}$ is the first nonzero Neumann eigenvalue of the one-dimensional problem
\begin{equation*}
(p-1)|\varphi'|^{p-2}\varphi'' -(N-1) T_{\kappa, 0} |\varphi'|^{p-2}\varphi' =-\lambda |\varphi|^{p-2}\varphi
\end{equation*}
on the interval $[-D/2, D/2]$.
\item If $N=\infty$ and $\operatorname{Ric}_f \geq \kappa$ for $\kappa \in \mathbb{R}$, then we have
\begin{equation*}
\lambda_{1,p} \geq \bar{\mu}_{1,p}
\end{equation*}
where $\bar{\mu}_{1,p}$ is the first nonzero Neumann eigenvalue of the one-dimensional problem
\begin{equation*}
(p-1)|\varphi'|^{p-2}\varphi'' -\kappa t |\varphi'|^{p-2}\varphi' =-\lambda |\varphi|^{p-2}\varphi
\end{equation*}
on the interval $[0, D]$.
\end{enumerate}
\end{thm}
\begin{proof}
The proof is a slight modification of the arguments in \cite[Section 8]{Andrewssurvey15} or \cite[Section 2]{LW19eigenvalue}, so we omit the details here.
\end{proof}
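For instance, when $p=2$, $f=\text{constant}$ and $\kappa=0$, the one-dimensional problem in (1) reduces to $\varphi''=-\lambda \varphi$ on $[-D/2, D/2]$ with Neumann conditions $\varphi'(\pm D/2)=0$, whose first nonzero eigenvalue is $\frac{\pi^2}{D^2}$; Theorem \ref{thm Neumann Eigenvalue} thus recovers the classical Zhong-Yang estimate $\lambda_{1,2} \geq \frac{\pi^2}{D^2}$.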
\begin{remark}
We emphasis that Theorem \ref{thm Neumann Eigenvalue} holds for $2<p<\infty$ as well, as shown by Koerber \cite{Koerber18} for the $\kappa=0$ case and by the second author \cite{Tu20} for general $\kappa \in \mathbb{R}$. We also emphasis that special cases of Theorem \ref{thm Neumann Eigenvalue} have been proved
in \cite{AC13}\cite{AN12}\cite{CW94}\cite{CW95}\cite{Kroger92}\cite{LW19eigenvalue}\cite{LW19eigenvalue2}\cite{NV14}\cite{Valtorta12}\cite{ZY84}.
\end{remark}
\section{Comparison theorems for the second derivatives of $d(x, \partial M)$}
In this section, we prove comparison theorems for the second derivatives of $d(x,\partial M)$. Let $R$ denote the inradius of $M$ defined by
$$R=\sup\{d(x, \partial M): x\in M \}.$$
\begin{thm}\label{Thm comparison distance to boundary}
Let $(M^n, g, e^{-f}d\mu_g)$ be a compact smooth metric measure space with smooth nonempty boundary $\partial M$.
Let $\varphi:[0, R ] \to \mathbb{R}_+$ be a smooth function with $\varphi' \geq 0$ and define $v(x) :=\varphi\left(d(x,\partial M)\right)$.
\begin{enumerate}
\item[(i)] Suppose that $\operatorname{Ric}^N_f \geq (N-1)\kappa$ and $H_f \geq (N-1)\Lambda$ for $\kappa, \Lambda \in \mathbb{R}$ and $N\in [n,\infty)$. Then $v(x)$ is a viscosity supersolution of \begin{equation*}
Q[v] =\left(\alpha (\varphi') \varphi'' -(N-1) T_{\kappa, \Lambda} \beta(\varphi') \varphi' \right) \big|_{d(x,\partial M)}.
\end{equation*}
\item[(ii)] Suppose that $\operatorname{Ric}_f \geq 0$ and $H_f \geq 0$. Then $v(x)$ is a viscosity supersolution of
\begin{equation*}
Q[v] =\left(\alpha (\varphi') \varphi'' \right) \big|_{d(x,\partial M)}.
\end{equation*}
\end{enumerate}
\end{thm}
Theorem \ref{Thm comparison distance to boundary} generalizes the comparison theorems for the Laplacian \cite{Kasue84} and for the weighted $p$-Laplacian \cite{Sakurai19} since our operator $Q$ is much more general.
The differential inequalities hold in the classical sense at points where $d(x, \partial M)$ is smooth, in the distributional sense if $Q$ is of divergence form and in the viscosity sense for general $Q$.
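For example, taking $\alpha=\beta=1$, $f=\text{constant}$, $N=n$ and $\varphi(s)=s$, part (i) reduces to the classical comparison
\begin{equation*}
\Delta\, d(\cdot, \partial M) \leq -(n-1)\, T_{\kappa, \Lambda}\big(d(\cdot, \partial M)\big)
\end{equation*}
in the viscosity sense, valid under $\operatorname{Ric} \geq (n-1)\kappa$ and $H \geq (n-1)\Lambda$, as in \cite{Kasue84}.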
\begin{proof}[Proof of Theorem \ref{Thm comparison distance to boundary}]
By approximation, it suffices to consider the case $\varphi' >0$ on $[0, R]$.
By definition of viscosity solutions (see Definition \ref{def viscosity}), it suffices to prove that for any smooth function $\psi$ touching $v$ from below at $x_0 \in M$, i.e.,
\begin{align*}
\psi(x) \leq v(x) \text{ on } M \text{ with }
\psi(x_0) = v(x_0),
\end{align*}
it holds that
\begin{equation*}
Q^*[\psi](x_0) \leq \left. \left[\alpha (\varphi') \varphi'' -(N-1) T_{\kappa, \Lambda} \beta(\varphi') \varphi' \right] \right|_{d(x_0,\partial M)},
\end{equation*}
where $Q^*$ is the upper-semicontinuous envelope of $Q$ (see Definition 2.2).
Since the function $d(x, \partial M)$ may not be smooth at $x_0$, we need to replace it by a smooth function $\bar{d}(x)$ defined in a neighborhood $U(x_0)$ of $x_0$ satisfying $\bar{d}(x) \geq d(x, \partial M)$ for $x \in U(x_0)$ and $\bar{d}(x_0)=d(x_0, \partial M)$.
The construction is standard (see e.g. \cite[pp. 73-74]{Wu79} or \cite[p. 1187]{AX19}); we state it below for the reader's convenience.
Since $M$ is compact, there exists $y_0 \in \partial M$ such that $d(x_0,y_0)=d(x_0, \partial M):=s_0$. Let $\gamma:[0,s_0] \to M$ be the unit speed length-minimizing geodesic with $\gamma(0)=x_0$ and $\gamma(s_0)=y_0$.
For any vector $X \in \exp^{-1}_{x_0}U(x_0)$, let $X(s) (s\in [0,s_0])$ be the vector field obtained by parallel translating $X$ along $\gamma$, and we decompose it as
\begin{equation*}
X(s) =a X^\perp (s) +b \gamma'(s),
\end{equation*}
where $a$ and $b$ are constants along $\gamma$ with $a^2+b^2 =|X(s)|^2$, and $X^\perp(s)$ is a unit parallel vector field along $\gamma$ orthogonal to $\gamma'(s)$. Define
\begin{equation*}
W(s)=a \, \eta(s) X^\perp(s) + b\left(1-\frac{s}{s_0} \right)\gamma'(s),
\end{equation*}
where $\eta:[0,s_0] \to \mathbb{R}_+$ is a $C^2$ function to be chosen later.
Next we define the $n$-parameter family of curves $\gamma_X :[0,s_0] \to M$ such that
\begin{enumerate}
\item $\gamma_0 =\gamma$;
\item $\gamma_X(0) =\exp_{x_0}(W(0))$ and $\gamma_X(s_0) \in \partial M$;
\item $W(s)$ is the variation field at $l=0$ induced by the one-parameter family of curves $l \to \gamma_{lX}(s)$ for $l \in [-l_0,l_0]$ and $s\in [0,s_0]$;
\item $\gamma_X$ depends smoothly on $X$.
\end{enumerate}
Finally let $\bar{d}(x)$ be the length of the curve $\gamma_X$, where $x=\exp_{x_0}(X) \in U(x_0)$.
Then we have
$\bar{d}(x) \geq d(x,\partial M)$ on $U(x_0)$, $\bar{d}(x_0)=d(x_0, \partial M)$. Recall the first and second variation formulas:
\begin{equation*}
\nabla \bar{d}(x_0) =-\gamma'(0)
\end{equation*}
and
\begin{equation*}
\nabla^2 \bar{d} (X,X)= -a^2 \eta(s_0)^2 \operatorname{II}(X^\perp(s_0), X^\perp(s_0)) +a^2 \int_0^{s_0} \left((\eta')^2 -\eta^2 R(X^\perp, \gamma',X^\perp, \gamma') \right) ds
\end{equation*}
where $\operatorname{II}$ denotes the second fundamental form of $\partial M$ at $y_0$. Then for an orthonormal frame $\{e_i(s)\}_{i=1}^n$ along $\gamma$ with $e_n(s) =\gamma'(s)$ we have
\begin{equation}\label{1st bard}
\nabla \bar{d}(x_0) =-e_n(0),
\end{equation}
\begin{equation}\label{2nd bardn}
\nabla^2 \bar{d} (e_n(0),e_n(0))=0,
\end{equation}
and for $1\le i \le n-1$
\begin{equation}\label{2nd bardi}
\nabla^2 \bar{d} (e_i(0),e_i(0))= - \eta(s_0)^2 \operatorname{II}(e_i(s_0), e_i(s_0)) + \int_0^{s_0} (\eta')^2 -\eta^2 R(e_i, e_n,e_i, e_n)\, ds.
\end{equation}
Since the function $\psi(x) -\varphi\left(d(x, \partial M)\right)$ attains its maximum at $x_0$ and $\varphi' >0$, it follows that the function $\psi(x) -\varphi(\bar{d}(x))$ attains a local maximum at $x_0$. The first and second derivative tests yield
$$ \nabla \psi (x_0) =-\varphi' e_n (0), \quad \psi_{nn}(x_0) \leq \varphi'',$$
and
$$
\psi_{ii}(x_0) \leq \varphi' \nabla^2\bar{d} \left(e_i(0), e_i(0)\right)
$$
for $1 \le i \leq n-1$, where we used (\ref{1st bard}) and (\ref{2nd bardn}).
Here and below the derivatives of $\varphi$ are all evaluated at $s_0=d(x_0, \partial M)$.
It then follows from (\ref{2nd bardi}) that
\begin{equation*}
\sum_{i=1}^{n-1} \nabla^2 \bar{d}\left(e_i(0), e_i(0)\right) =- \eta(s_0)^2 H(y_0) + \int_0^{s_0}(n-1)(\eta')^2 -\eta^2 \operatorname{Ric}(e_n, e_n) \, ds,
\end{equation*}
and then we have
\begin{eqnarray}\label{eq 3.1}
&& Q^*[\psi](x_0)=Q[\psi](x_0) \nonumber \\
&=&\alpha (\varphi')\psi_{nn} +\beta(\varphi') \sum_{i=1}^{n-1} \psi_{ii} + \beta(\varphi')\varphi' \langle \nabla f(x_0), e_n(0) \rangle \nonumber\\
&\leq& \alpha (\varphi')\varphi'' +\beta(\varphi')\varphi' \left( \int_0^{s_0} (n-1)(\eta')^2 -\eta^2 \operatorname{Ric}(e_n, e_n) \, ds \right)\nonumber\\
&&+\beta(\varphi')\varphi'\left( -\eta(s_0)^2 H(y_0) +\langle \nabla f(x_0), e_n(0) \rangle\right)
\end{eqnarray}
where $H$ denotes the mean curvature of $\partial M$.
We estimate using $\operatorname{Ric}^N_f \geq (N-1) \kappa$ that
\begin{eqnarray*}
&& \int_0^{s_0} \left((n-1)(\eta')^2 -\eta^2 \operatorname{Ric}(e_n, e_n) \right) ds \\
& \leq & (N-1) \int_0^{s_0} (\eta')^2\, ds -(N-n) \int_0^{s_0} (\eta')^2\, ds -(N-1)\kappa \int_0^{s_0} \eta^2 ds \\
&& + \int_0^{s_0} \eta^2 \nabla^2 f (\gamma',\gamma') ds -\frac{1}{N-n}\int_0^{s_0} \eta^2 \nabla f \otimes \nabla f (\gamma', \gamma')ds \\
&=& (N-1) \int_0^{s_0} (\eta')^2 \, ds-(N-n) \int_0^{s_0} (\eta')^2 \, ds-(N-1)\kappa \int_0^{s_0} \eta^2 ds \\
&& + \left. \eta^2 (f\circ \gamma )' \right|_0^{s_0} - 2 \int_0^{s_0} \eta \, \eta' (f \circ \gamma)' ds -\frac{1}{N-n} \int_0^{s_0} \eta^2 ((f\circ \gamma)' )^2 ds \\
&=& (N-1) \int_0^{s_0} (\eta')^2\, ds -(N-1)\kappa \int_0^{s_0} \eta^2 ds +\left. \eta^2 (f\circ \gamma )' \right|_0^{s_0} \\
&& -\int_0^{s_0} \frac{\eta^2}{N-n}\left((N-n)\frac{\eta'}{\eta} + (f\circ \gamma)' \right) ^2\, ds \\
& \leq & (N-1) \int_0^{s_0} (\eta')^2\, ds -(N-1)\kappa \int_0^{s_0} \eta^2 ds +\left. \eta^2 (f\circ \gamma )' \right|_0^{s_0}.
\end{eqnarray*}
Using $H_f \geq (N-1)\Lambda$ and choosing $\eta(s)=C_{\kappa, \Lambda}(s_0-s) /C_{\kappa, \Lambda}(s_0)$ with $C_{\kappa, \Lambda}$ defined in \eqref{C def}, we calculate that
\begin{eqnarray*}
&& -\eta(s_0)^2 H(y_0) +\langle \nabla f(x_0), e_n(0)\rangle+ \int_0^{s_0} (n-1)(\eta')^2 -\eta^2 \operatorname{Ric}(e_n, e_n) \, ds \\
& \leq & -\frac{1}{C^2_{\kappa, \Lambda}(s_0)}H(y_0) +(N-1) \int_0^{s_0} (\eta')^2 \, ds-(N-1)\kappa \int_0^{s_0} \eta^2 ds + \frac{1}{C^2_{\kappa, \Lambda}(s_0)} (f\circ \gamma )'(s_0) \\
&=& -\frac{1}{C^2_{\kappa, \Lambda}(s_0)}H_f(y_0)-(N-1)\kappa \int_0^{s_0} \eta^2 ds \\
&&+(N-1) \left( \eta(s_0)\eta'({s_0}) -\eta(0)\eta'(0) -\int_0^{s_0} \eta(s) \eta''(s) ds \right) \\
&\leq& -\frac{N-1}{C^2_{\kappa, \Lambda}(s_0)}\Lambda + (N-1)\eta'(s_0) \eta(s_0)-(N-1)\eta(0)\eta'(0)\\
&=&-(N-1) T_{\kappa, \Lambda}(s_0),
\end{eqnarray*}
where we used $C_{\kappa, \Lambda}'(0)=-\Lambda$.
Thus,
\begin{eqnarray*}
Q^*[\psi](x_0) \leq \left[\alpha (\varphi')\varphi'' -(N-1) T_{\kappa, \Lambda} \beta(\varphi')\varphi' \right]_{d(x_0, \partial M)}.
\end{eqnarray*}
For the $N=\infty$ case, we choose $\eta(s)=1$ in \eqref{eq 3.1} and compute that
\begin{eqnarray*}
Q^*[\psi](x_0) &\leq& \alpha (\varphi')\varphi'' +\beta(\varphi')\varphi' \left( \int_0^{s_0} -\operatorname{Ric}(e_n, e_n) \, ds - H(y_0) +\langle \nabla f(x_0), e_n(0) \rangle \right)\\
& \leq & \alpha (\varphi')\varphi'' + \beta(\varphi')\varphi' \left( \int_0^{s_0} (f \circ \gamma )''(s)\, ds -H_f(y_0) - (f \circ \gamma)'(s_0) +(f \circ \gamma)'(0) \right) \\
&\leq & \alpha (\varphi')\varphi'',
\end{eqnarray*}
where we have used $\operatorname{Ric} + \nabla^2 f \geq 0$ in the second inequality and $H_f \geq 0$ in the last inequality.
The proof is complete.
\end{proof}
\section{Modulus of Continuity Estimates for Dirichlet Boundary Condition}
To derive sharp estimates on the modulus of continuity of solutions to \eqref{parabolic pde} with the Dirichlet boundary condition, we fix one of the two variables in the modulus of continuity estimate in Theorem \ref{Thm MC} to lie in $\partial M$ and derive the following decay estimate.
\begin{thm}\label{Thm Decay Intro}
Let $(M^n,g, e^{-f} d\mu_g)$ and $u$ be the same as in Theorem \ref{Thm MC}.
Suppose that $M$ satisfies $\operatorname{Ric}^N_f \geq (N-1)\kappa$ for $N\in [n,\infty)$ and $\kappa \in \mathbb{R}$, and $\partial M$ satisfies $H_f \geq (N-1) \Lambda$ for $\Lambda \in \mathbb{R}$; or suppose that $\operatorname{Ric}_f \geq 0$ and $H_f \geq 0$ if $N = \infty$ (in this case we use the convention $T_{\kappa, \Lambda}=0$).
Let $\varphi:[0,R] \times \mathbb{R}_+ \to \mathbb{R}_+$ be a smooth function satisfying
\begin{enumerate}
\item $\varphi_t \geq \alpha(\varphi')\varphi'' -(N-1) T_{\kappa, \Lambda} \beta(\varphi')\varphi'$;
\item $\varphi'\geq 0$ on $[0,R] \times \mathbb{R}_+$, and $\varphi(0, t)=0$ for $t\ge 0$.
\end{enumerate}
Define $$Z(x,t) :=u(x,t) -\varphi\left(d(x,\partial M), t\right).$$
If $Z(x,0)\leq 0$ on $M$, then $Z(x,t) \leq 0$ on $M\times [0,T)$.
\end{thm}
\begin{proof}[Proof of Theorem \ref{Thm Decay Intro}]
By the same techniques as in the proofs of Theorem \ref{Thm comparison distance to boundary} and
Theorem \ref{thm Neumann}, it is easy to see that under the assumptions of Theorem \ref{Thm Decay Intro}, the function $\varphi\left(d(x,\partial M), t \right)$ is a viscosity supersolution of \eqref{parabolic pde}. The desired estimate then follows from the comparison principle for viscosity solutions, since it holds initially and on the boundary.
\end{proof}
We present the proof of Theorem \ref{Thm MC} now. The proof uses the comparison theorems for $d(x, \partial M)$ and $d(x,y)$ proved in Sections 3 and 4.
\begin{proof}[Proof of Theorem \ref{Thm MC}]
For small ${\varepsilon}>0$, consider the function $Z$ defined on $M \times M \times [0,T)$ by
\begin{equation*}
Z(x,y,t) =u(y,t) -u(x,t) -2\varphi \left(\frac{d(x,y)}{2}, t \right) -{\varepsilon}(1+t).
\end{equation*}
Since $\varphi_0$ is a modulus of continuity of $u(x,0)$, we have $Z(x,y,0) \leq -{\varepsilon}$. If $Z$ ever becomes positive, there must be a first time $t_0 >0$ and points $x_0, y_0 \in M$ such that $Z(x_0,y_0,t_0) =0$ and $Z(x,y,t) \leq 0$ for all $x, y \in M$ and $t \leq t_0$. Clearly $x_0 \neq y_0$ as $Z(x,x,t) \leq -{\varepsilon}$ for each $x\in M$ and $t\in [0,T)$. The Dirichlet boundary condition also rules out the possibility that both $x_0$ and $y_0$ lie on $\partial M$. So we have three possibilities.
\textbf{Case 1:} Both $x_0$ and $y_0$ lie in the interior of $M$.
In this case, the same argument as in the proof of \cite[Theorem 1]{AC13}, with the comparison theorem there replaced by Theorem \ref{Thm comparison distance}, leads to a contradiction. Hence this case cannot occur.
\textbf{Case 2:} $x_0 \in \partial M$ and $y_0$ lies in the interior of $M$.
In this case, we have
$$
u(y_0, t_0)-2\varphi(\frac{d(x_0,y_0)}{2}, t_0)-{\varepsilon}(1+t_0)=0.
$$
Using $\varphi'\ge 0$ and $\varphi''\le 0$, we estimate that
\begin{eqnarray}\label{con 1}
&&u(y_0, t_0)-\varphi(d(y_0,\partial M), t_0)\nonumber\\
&\ge& u(y_0, t_0)-2\varphi(\frac{d(y_0,\partial M)}{2}, t_0) \nonumber\\
&\ge& u(y_0, t_0)-2\varphi(\frac{d(y_0,x_0)}{2}, t_0)\nonumber\\
&=&{\varepsilon} (1+t_0).
\end{eqnarray}
Since $u(y,0) -\varphi\left(d(y,\partial M), 0 \right) \leq 0$,
Theorem \ref{Thm Decay Intro} gives
$$
u(y,t) -\varphi\left(d(y,\partial M), t \right)\le 0
$$
for all $y \in M$ and $t \in [0,T)$, which contradicts inequality \eqref{con 1} at $y=y_0$ and $t=t_0$. Therefore Case 2 cannot occur.
\textbf{Case 3:} $y_0 \in \partial M$ and $x_0$ lies in the interior of $M$.
A similar argument to the one in Case 2 rules out this possibility.
\end{proof}
\section{Two proofs of Theorem \ref{thm Dirichlet eigenvalue}}
We provide two proofs for Theorem \ref{thm Dirichlet eigenvalue} in this section.
\subsection{First proof via decay estimates for parabolic equations}
As in \cite{AC13}, the modulus of continuity estimates in Theorem \ref{Thm MC} lead to a lower bound for the first Dirichlet eigenvalue. However, they do not give the optimal lower bound.
Below we elaborate on the difference between the first Dirichlet eigenvalue and the first closed or Neumann eigenvalue of the Laplacian.
Recall that the idea of Andrews and Clutterbuck \cite{AC13} to detect the first nonzero eigenvalue (with either $\partial M =\emptyset$ or Neumann boundary condition) of the Laplacian via the modulus of continuity estimates is by knowing how quickly the solutions to the heat equation decay.
This is because we may solve
\begin{equation*}
\begin{cases}
u_t =\Delta u, &\\
u(x,0) =u_0(x), &
\end{cases}
\end{equation*}
by expanding $u_0 =\sum_{i=0}^\infty a_i \varphi_i$, where $\varphi_i$ are eigenfunctions of the Laplacian (with Neumann boundary condition if $\partial M \neq \emptyset$). Then the solution to the heat equation is given by $u(x,t) =\sum_{i=0}^\infty e^{-\lambda_i t} a_i \varphi_i$.
This does not converge to zero as $\lambda_0=0$, but the key idea is that $|u(x,t)-u(y,t)|$ does converge to zero, and in fact
$|u(x,t)-u(y,t)| \approx e^{-\lambda_1 t}$ as $t \to \infty$. Thus the main ingredient in \cite{AC13} is to establish the estimate
\begin{equation*}
|u(x,t)-u(y,t)| \leq C e^{-\bar{\lambda}_1 t}
\end{equation*}
for any solution to the heat equation, which is an easy consequence of the modulus of continuity estimates. Then taking $u(x,t)=e^{-\lambda_1 t} \varphi_1(x)$ leads to
\begin{equation*}
|\varphi_1(x) -\varphi_1(y)| \leq C e^{(\lambda_1 -\bar{\lambda}_1)t},
\end{equation*}
which implies $\lambda_1 \geq \bar{\lambda}_1$.
If the Dirichlet boundary condition is posed, however, the solution $u(x,t)$ to the heat equation does converge to zero, as there is no constant term in the expansion of the initial data in terms of eigenfunctions. Thus to detect the first eigenvalue, it suffices to prove that any solution to the heat equation decays like $|u(x,t)| \leq C e^{-\bar{\lambda}_1 t}$.
For this reason, sharp lower bounds for the first Dirichlet eigenvalue are given in terms of the inradius $R$, rather than the diameter $D$, together with other curvature data.
When $Q[u]$ is homogeneous of degree $\gamma >0$, we get sharp decay estimates by comparing with self-similar solutions.
\begin{prop}\label{Prop decay p-Laplacian}
Let $M$ and $u$ be the same as in Theorem \ref{Thm MC}. Assume $Q[u]$ is homogeneous of degree $\gamma>0$. Then we have the decay estimate
\begin{equation*}
u(x,t) \leq C e^{-t\lambda_1^{\frac{1}{\gamma-1}}}
\end{equation*}
where $C$ depends on $u(\cdot, 0)$, and $\lambda_1$ is the first eigenvalue of the one-dimensional problem \eqref{1D eq N finite}.
\end{prop}
\begin{proof}[Proof of Proposition \ref{Prop decay p-Laplacian}]
Let $\varphi$ be the eigenfunction associated to the eigenvalue $\lambda_1$. Since $\varphi$ has positive derivative at $s=0$ and is positive for all $s\in (0, R]$, there exists $C>0$ depending only on $u(\cdot, 0)$ such that $u(x,0) \leq C \varphi (d(x, \partial M))$ for all $x\in M$. It's easy to verify that the function $\psi(s,t)=C e^{-t\lambda_1^{\frac{1}{\gamma-1}}} \varphi(s)$ satisfies all the requirements in Theorem \ref{Thm Decay Intro}, and we derive that
\begin{equation*}
u(x,t) \leq C e^{-t \lambda_1^{\frac{1}{\gamma-1}}} \varphi(d(x,\partial M)) \leq C \sup \varphi \,e^{-t\lambda_1^{\frac{1}{\gamma-1}}}.
\end{equation*}
\end{proof}
We can now give the first proof of Theorem \ref{thm Dirichlet eigenvalue}.
\begin{proof}[First proof of Theorem \ref{thm Dirichlet eigenvalue}]
Let $\bar{\lambda}(Q)$ be the first Dirichlet eigenvalue of $Q[u]$ with eigenfunction $v(x)$, then $u(x,t)=e^{-t \bar{\lambda}(Q)^{\frac{1}{\gamma-1}}} v(x)$ satisfies \eqref{parabolic pde} on $M\times [0,\infty)$ with Dirichlet boundary condition.
By Proposition \ref{Prop decay p-Laplacian}, we have on $M\times [0,\infty)$,
$$e^{-t \bar{\lambda}(Q)^{\frac{1}{\gamma-1}}} v(x) \leq C e^{-t\lambda_1^{\frac{1}{\gamma-1}}}.$$
Letting $t\to \infty$ implies $\bar{\lambda}(Q) \geq \lambda_1$.
\end{proof}
\subsection{Second proof via comparison theorems for $d(x, \partial M)$}
Our second proof uses the new definition of the first Dirichlet eigenvalue given in \eqref{def principal eigenvalue} together with the comparison theorem for second derivatives of $d(x, \partial M)$.
\begin{proof}[Second proof of Theorem \ref{thm Dirichlet eigenvalue}]
We only deal with the case $N\in [n,\infty)$ here as the $N=\infty$ case is completely similar.
Let $\lambda_1$ be the first eigenvalue of the one-dimensional problem \eqref{1D eq N finite}, with $\varphi$ the corresponding eigenfunction. We must have $\varphi'>0$ on $[0,R)$ since $\varphi$ is the first eigenfunction. Then by Theorem \ref{Thm comparison distance to boundary}, the function $v(x)=\varphi\left(d(x,\partial M)\right)$ is a positive viscosity supersolution of $Q[u] = -\lambda_1 |u|^{\gamma -1} u$. It follows from the definition of $\bar{\lambda}(Q)$ in \eqref{def principal eigenvalue} that $\bar{\lambda}(Q) \geq \lambda_1$.
\end{proof}
\section{Gradient Estimates for Parabolic Equations}
In this section, we derive height-dependent gradient bounds for viscosity solutions of parabolic equations. Both the equations and the curvature conditions will be a bit more restrictive than in previous sections, but they are consistent with previous results in this direction obtained in \cite[Theorem 6]{Andrewssurvey15} for smooth solutions and \cite[Theorem 4.1]{LW17} for viscosity solutions.
We consider parabolic equations of the form
\begin{eqnarray}\label{paraeq}
\frac{\partial u}{\partial t}&=&\left[\alpha(|\nabla u|,u, t)\frac{\nabla_iu \nabla_ju}{|\nabla u|^2}
+ \beta(t)\left(\delta_{ij}-\frac{\nabla_iu \nabla_ju}{|\nabla u|^2}\right)\right]\nabla_i\nabla_ju\\ \nonumber
&&-\beta(t)\langle \nabla f,\nabla u\rangle+q(|\nabla u|, u, t),
\end{eqnarray}
where $\alpha$ and $\beta$ are nonnegative functions.
It's easy to see that \eqref{paraeq} covers the heat equation and the parabolic normalized $p$-Laplacian equation $u_t =\Delta^N_p u$.
\begin{theorem}\label{thmh}
Let $(M^n,g, e^{-f} d\mu_g)$ be a compact smooth metric measure space with diameter $D$ (possibly with smooth strictly convex boundary) and $\text{Ric}_f\ge \Upsilon$ for $\Upsilon \leq 0$.
Suppose $u: M\times [0,T)\rightarrow \mathbb{R}$ is a viscosity solution of \eqref{paraeq} (with Neumann boundary conditions if $\partial M \neq \emptyset$).
Let $\varphi: [0,D]\times [0,T)\rightarrow \mathbb{R}$ be a solution of
\begin{equation}\label{eqvpp}
\varphi_t=\alpha(\varphi', \varphi, t)\varphi''-\Upsilon s \beta(t) \varphi'+q(\varphi',\varphi,t)
\end{equation}
with $\varphi'>0$, such that the
range of $u(\cdot, 0)$ is contained in $[\varphi(0,0), \varphi(D,0)]$. Let $\Psi(s,t)$ be given by
inverting $\varphi$ for each $t$, and assume that for all $x$ and $y$ in $M$,
$$\Psi(u(y,0),0)-\Psi(u(x,0),0)-d(x,y)\le 0.$$
Then
$$
\Psi(u(y,t),t)-\Psi(u(x,t),t)-d(x,y)\le 0.
$$
for all $x,y\in M$ and $t\in[0,T)$.
\end{theorem}
By letting $y$ approach $x$, we get
\begin{corollary}
Let $(M^n,g, e^{-f} d\mu_g)$, $u$ and $\varphi$ be the same as in Theorem \ref{thmh}. Then
\begin{equation*}
|\nabla u(x, t)| \leq \varphi'\left(\Psi(u(x,t),t) \right)
\end{equation*}
for all $(x,t) \in M \times [0, T)$.
\end{corollary}
We begin with a lemma about the behavior of parabolic semijets when composed with an increasing function.
\begin{lemma}\label{Lemma}
Let $u$ be a continuous function. Let $\varphi:\mathbb{R} \times [0, T) \to \mathbb{R} $ be a $C^{2,1}$ function with $\varphi^{\prime} \geq 0$.
Let $\Psi:\mathbb{R} \times [0, T) \to \mathbb{R} $ be such that
$$\Psi(\varphi(u(y,t),t),t)=u(y,t)$$
and
$$\varphi (\Psi(u(y,t),t),t)=u(y,t).$$
(i) Suppose $(\tau, p, X) \in \mathcal{P}^{2,+}\Psi(u(y_0,t_0),t_0)$, then
$$(\varphi_t+\varphi^{\prime}\tau, \varphi^{\prime}p, \varphi^{\prime \prime}p \otimes p+ \varphi^{\prime} X ) \in \mathcal{P}^{2,+} u(y_0,t_0),$$
where all derivatives of $\varphi$ are evaluated at $(\Psi(u(y_0,t_0)), t_0)$.
(ii) Suppose $(\tau, p, X) \in \mathcal{P}^{2,-}\Psi(u(y_0,t_0),t_0)$, then
$$(\varphi_t+\varphi^{\prime}\tau, \varphi^{\prime}p, \varphi^{\prime \prime}p \otimes p+ \varphi^{\prime} X ) \in \mathcal{P}^{2,-} u(y_0,t_0),$$
where all derivatives of $\varphi$ are evaluated at $(\Psi(u(y_0,t_0)), t_0)$.
(iii) The same holds if one replaces the parabolic semijets by their closures.
\end{lemma}
\begin{proof}
See \cite[Lemma 4.1]{LW17}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thmh}]
The theorem is valid if we show that for any $\epsilon>0$,
\begin{equation}
Z^{\epsilon}(x,y,t):=\Psi(u(y,t),t)-\Psi(u(x,t),t)-d(x,y)-\frac{\epsilon}{T-t}\le 0.\label{Ze}
\end{equation}
To prove inequality (\ref{Ze}), it suffices to show that $Z^{\epsilon}$ cannot attain its maximum in $M\times M\times(0, T)$.
Assume by contradiction that there exist $t_0 \in (0,T)$, $x_0$ and $y_0$ in $M$ at which the function
$Z^{\epsilon}$
attains its maximum. Notice that the Neumann condition, the convexity of $\partial M$, and the positivity of $\varphi'$ guarantee that $x_0$ and $y_0$ lie in the interior of $M$ if $\partial M \neq \emptyset$.
Take $\rho(x, y)$ defined as in Definition \ref{def-rho} with $\eta=1$. Then the function
$$\Psi(u(y,t),t)-\Psi(u(x,t),t)-\rho(x,y)-\frac{\epsilon}{T-t}$$ has a local maximum at $(x_0,y_0,t_0)$.
Since $\epsilon >0$, we necessarily have $x_0 \neq y_0$.
By the parabolic maximum principle Theorem \ref{max prin} for semicontinuous functions on manifolds,
for any $\lambda >0$, there exist $X, Y$ satisfying
\begin{eqnarray*}
(b_1, \nabla_y \rho(x_0, y_0), X) &\in& \overline{\mathcal{P}}^{2,+} \Psi(u(y_0,t_0), t_0),\\
(-b_2, -\nabla_x \rho(x_0, y_0), Y) &\in& \overline{\mathcal{P}}^{2,-} \Psi(u(x_0,t_0), t_0),
\end{eqnarray*}
\begin{equation}\label{Dt}
b_1+b_2=\frac{\epsilon}{(T-t_0)^2},
\end{equation}
and
\begin{equation}\label{Hessian inequality for quasilinear}
-\left(\lambda^{-1}+\left\|S\right\| \right)I \leq
\begin{pmatrix}
X & 0 \\
0 & -Y
\end{pmatrix}
\leq S+\lambda S^2,
\end{equation}
where $S=\nabla^2 \rho(x_0,y_0)$.
By Lemma \ref{Lemma}, we have
\begin{equation*}
(b_1 \varphi^{\prime}(z_{y_0}, t_0)+\varphi_t(z_{y_0}, t_0), \varphi^{\prime}(z_{y_0}, t_0) e_n(s_0), \varphi^{\prime}(z_{y_0}, t_0)X+\varphi''(z_{y_0}, t_0)e_n(s_0)\otimes e_n(s_0))
\end{equation*}
and
\begin{equation*}
(-b_2 \varphi^{\prime}(z_{x_0}, t_0) +\varphi_t(z_{x_0}, t_0), -\varphi^{\prime} (z_{x_0}, t_0) e_n(0), \varphi^{\prime}(z_{x_0}, t_0) Y+\varphi''(z_{x_0}, t_0)e_n(0)\otimes e_n(0))
\end{equation*}
are in $\overline{\mathcal{P}}^{2,+} u(y_0,t_0)$
and $\overline{\mathcal{P}}^{2,-} u(x_0,t_0)$ respectively, where
$z_{x_0}=\Psi(u(x_0, t_0), t_0)$, $z_{y_0}=\Psi(u(y_0, t_0), t_0)$, and we used the first variation formula $\nabla \rho(x_0,y_0)=(-e_n(0), e_n(s_0))$.
Since $u$ is both a subsolution and a supersolution of (\ref{paraeq}), we have
\begin{eqnarray}\label{pb1}
&&b_1 \varphi^{\prime}(z_{y_0}, t_0)+\varphi_t(z_{y_0}, t_0)\nonumber\\
&\le& \operatorname{tr}\Big(\varphi^{\prime}(z_{y_0}, t_0) A_1 X+
\varphi''(z_{y_0}, t_0)A_1e_n(s_0)\otimes e_n(s_0)\Big)\nonumber\\&&
-\beta(t_0)\varphi'(z_{y_0},t_0)\langle \nabla f(y_0), e_n(s_0)\rangle+q(\varphi'(z_{y_0},t_0),\varphi(z_{y_0},t_0), t_0),
\end{eqnarray}
and
\begin{eqnarray}\label{pb2}
&&-b_2 \varphi^{\prime}(z_{x_0}, t_0) +\varphi_t (z_{x_0}, t_0)\nonumber\\
&\ge& \operatorname{tr}\Big( \varphi^{\prime}(z_{x_0}, t_0)A_2Y+\varphi''(z_{x_0}, t_0)A_2 e_n(0)\otimes e_n(0)\Big)\nonumber\\
&&-\beta(t_0)\varphi'(z_{x_0},t_0)\langle \nabla f(x_0), e_n(0)\rangle+q(\varphi'(z_{x_0},t_0),\varphi(z_{x_0},t_0), t_0),
\end{eqnarray}
where $$
A_1=\left(
\begin{array}{cccc}
\beta( t_0) & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots \\
0 & \cdots &\beta(t_0) & 0 \\
0 & \cdots & 0 & \alpha(|\varphi^{\prime}(z_{y_0}, t_0)|,\varphi(z_{y_0}, t_0), t_0) \\
\end{array}
\right),
$$
and
$$
A_2=\left(
\begin{array}{cccc}
\beta( t_0) & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots \\
0 & \cdots &\beta(t_0) & 0 \\
0 & \cdots & 0 & \alpha(|\varphi^{\prime}(z_{x_0}, t_0)|,\varphi(z_{x_0}, t_0), t_0) \\
\end{array}
\right).
$$
Set
$$C=\left(
\begin{array}{cccc}
\beta(t_0) & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots \\
0 & \cdots &\beta(t_0) & 0 \\
0 & \cdots & 0 & 0 \\
\end{array}
\right),$$
and a simple calculation shows $\left(
\begin{array}{cc}
A_1 & C \\
C & A_2 \\
\end{array}
\right)\ge 0$.
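One way to verify this: for any $(u,v) \in \mathbb{R}^n \times \mathbb{R}^n$, writing $\alpha_1$ and $\alpha_2$ for the $(n,n)$-entries of $A_1$ and $A_2$, the quadratic form evaluates to
\begin{eqnarray*}
(u,v)\left(
\begin{array}{cc}
A_1 & C \\
C & A_2 \\
\end{array}
\right)\left(
\begin{array}{c}
u \\
v \\
\end{array}
\right)=\beta(t_0)\sum_{i=1}^{n-1}(u_i+v_i)^2+\alpha_1 u_n^2+\alpha_2 v_n^2 \ge 0,
\end{eqnarray*}
since $\alpha$ and $\beta$ are nonnegative.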
Then we conclude from (\ref{Dt}), (\ref{pb1}) and (\ref{pb2}) that
\begin{eqnarray*}
&&\frac{\epsilon}{(T-t_0)^2}=b_1+b_2\\
&\le& \frac{\operatorname{tr}(\varphi^{\prime}(z_{y_0}, t_0)A_1X)+\operatorname{tr}(A_1 e_n(s_0)\otimes e_n(s_0))\varphi''(z_{y_0}, t_0)-
\varphi_t(z_{y_0}, t_0)}{\varphi^{\prime}(z_{y_0}, t_0)}\\
&&+\frac{\operatorname{tr}(-\varphi^{\prime}(z_{x_0}, t_0)A_2 Y)-\operatorname{tr}(A_2 e_n(0)\otimes e_n(0))\varphi''(z_{x_0}, t_0)+
\varphi_t(z_{x_0}, t_0)}{\varphi^{\prime}(z_{x_0}, t_0)}\\
&&+\frac{q(\varphi'(z_{y_0},t_0),\varphi(z_{y_0},t_0), t_0)}{\varphi^{\prime}(z_{y_0}, t_0)}-\frac{q(\varphi'(z_{x_0},t_0),\varphi(z_{x_0},t_0), t_0)}{\varphi^{\prime}(z_{x_0}, t_0)}\\
&&-\beta(t_0)\Big(\langle \nabla f(y_0), e_n(s_0)\rangle-\langle \nabla f(x_0), e_n(0)\rangle\Big)\\
&=& -\beta(t_0)\Big(\langle \nabla f(y_0), e_n(s_0)\rangle-\langle \nabla f(x_0), e_n(0)\rangle\Big)+\operatorname{tr}\left[\left(
\begin{array}{cc}
A_1& C \\
C & A_2 \\
\end{array}
\right)
\left(
\begin{array}{cc}
X & 0 \\
0 & -Y \\
\end{array}
\right)
\right]\\
&&+\frac{\varphi_t(z_{x_0}, t_0)-\alpha(\varphi'(z_{x_0},t_0),\varphi(z_{x_0}, t_0), t_0)\varphi''(z_{x_0},t_0)-q(\varphi'(z_{x_0},t_0),\varphi(z_{x_0},t_0),t_0)}{\varphi^{\prime}(z_{x_0}, t_0)}\\
&&-\frac{\varphi_t(z_{y_0}, t_0)-\alpha(\varphi'(z_{y_0},t_0),\varphi(z_{y_0}, t_0),t_0)\varphi''(z_{y_0},t_0)-q(\varphi'(z_{y_0},t_0),\varphi(z_{y_0},t_0),t_0)}{\varphi^{\prime}(z_{y_0}, t_0)}\\
&\le&-\beta(t_0)\Big(\langle \nabla f(y_0), e_n(s_0)\rangle-\langle \nabla f(x_0), e_n(0)\rangle\Big)+\beta(t_0)\Upsilon (z_{y_0}-z_{x_0})\\
&&+\operatorname{tr}\left[\left(
\begin{array}{cc}
A_1& C \\
C & A_2 \\
\end{array}
\right)S
\right]
+\lambda\operatorname{tr}\left[\left(
\begin{array}{cc}
A_1 & C \\
C & A_2 \\
\end{array}
\right)S^2
\right]
\end{eqnarray*}
where we have used inequality (\ref{Hessian inequality for quasilinear}) and equation (\ref{eqvpp}) for $\varphi$.
Direct calculation gives
\begin{eqnarray*}
\operatorname{tr}\left[\left(
\begin{array}{cc}
A_1& C \\
C & A_2 \\
\end{array}
\right)S
\right]&=&\beta(t_0)\sum_{i=1}^{n-1}\nabla^2 \rho\Big((e_i(0),e_i(s_0)),(e_i(0),e_i(s_0))\Big)\\
&&+\alpha(\varphi'(z_{y_0},t_0),\varphi(z_{y_0},t_0),t_0)\nabla^2 \rho\Big((0,e_n(s_0)),(0,e_n(s_0))\Big)\\
&&+\alpha(\varphi'(z_{x_0},t_0),\varphi(z_{x_0},t_0),t_0)\nabla^2 \rho\Big((e_n(0),0),(e_n(0),0)\Big)
\end{eqnarray*}
Since
$$
\nabla^2 \rho\Big((e_n(0),0),(e_n(0),0)\Big)=0,\quad \nabla^2 \rho\Big((0,e_n(s_0)),(0,e_n(s_0))\Big)=0
$$
and
\begin{eqnarray*}
&&\sum_{i=1}^{n-1}\nabla^2 \rho\Big((e_i(0),e_i(s_0)),(e_i(0),e_i(s_0))\Big)\\
&=&\int_0^{s_0}(n-1)(\eta')^2-\eta^2 \operatorname {Ric}(e_n,e_n) \, ds\\
&\le&-\Upsilon s_0+\Big(\langle \nabla f(y_0), e_n(s_0)\rangle-\langle \nabla f(x_0), e_n(0)\rangle\Big)
\end{eqnarray*}
where we used (\ref{2nd2}) and (\ref{eta2}).
Therefore we conclude
\begin{eqnarray*}
\frac{\epsilon}{(T-t_0)^2}&\le& \Upsilon\beta(t_0)(z_{y_0}-z_{x_0}-s_0)+\lambda\operatorname{tr}\left[\left(
\begin{array}{cc}
A_1 & C \\
C & A_2 \\
\end{array}
\right)S^2
\right]\\
&\le&\lambda\operatorname{tr}\left[\left(
\begin{array}{cc}
A_1 & C \\
C & A_2 \\
\end{array}
\right)S^2
\right],
\end{eqnarray*}
where we have used $\Upsilon\le 0$ and $z_{y_0}-z_{x_0}-s_0>0$, which holds by assumption.
Then we get a contradiction by letting $\lambda\rightarrow 0$. Therefore (\ref{Ze}) is true, hence completing the proof.
\end{proof}
\section{Gradient Estimates for Elliptic Equations}
We derive height-dependent gradient estimates for elliptic quasi-linear equations. For elliptic equations, we can deal with the slightly more general quasi-linear operator
\begin{equation}\label{eq1.1}
{\mathcal L}_f (u, \nabla u, \nabla^2 u)=0,
\end{equation}
where the operator ${\mathcal L}_f$ is defined by
\begin{eqnarray*}
{\mathcal L}_f (u, \nabla u, \nabla^2 u) &=& \left[\alpha(u,|\nabla u|)\frac{\nabla_i u \nabla_j u}{|\nabla u|^2}+\beta(u,|\nabla u|)\left(\delta_{ij}-\frac{\nabla_i u \nabla_j u}{|\nabla u|^2} \right) \right] \nabla_i\nabla_j u \\
&& -\beta(u,|\nabla u|)\langle \nabla u, \nabla f\rangle +b(u,|\nabla u|),
\end{eqnarray*}
where $\alpha$ and $\beta$ are nonnegative functions with $\beta(s,t)>0$ for $t>0$.
\begin{thm}\label{Thm1.3}
Let $(M^n, g,f)$ be a closed Bakry-Emery manifold with $\operatorname{Ric}+\nabla^2 f \geq \kappa g$ for some $\kappa \leq 0$. Let $u$ be a viscosity solution of the equation \eqref{eq1.1}. Let $\varphi: [a,b]\rightarrow [\inf u, \sup u]$ be a $C^2$ solution of
\begin{itemize}
\item[(i)] $\alpha(\varphi, \varphi')\varphi'' -\kappa\, t\, \beta(\varphi,\varphi')\varphi'+b(\varphi, \varphi') =0$ on $[a,b]$;
\item[(ii)] $\varphi(a) =\inf u$, \text{ } $\varphi(b)=\sup u$, \text{ } $\varphi' >0$ on $[a,b]$.
\end{itemize}
Let $\Psi$ be the inverse of $\varphi$. Then we have
\begin{equation*}
\Psi(u(y)) -\Psi(u(x)) -d(x,y) \leq 0,
\end{equation*}
for all $x,y \in M$.
\end{thm}
As an immediate corollary, by letting $y$ approach $x$, we get the following gradient estimate:
\begin{corollary}
Under the assumptions of Theorem \ref{Thm1.3}, we have
\begin{equation*}
|\nabla u(x)| \leq \varphi'(\Psi(u(x)))
\end{equation*}
for all $x\in M$.
\end{corollary}
\subsection{The case $\kappa \leq 0$}
\begin{proof}[Proof of Theorem \ref{Thm1.3}]
We argue by contradiction and suppose that
$$m := \max_{M\times M} \left\{ \Psi(u(y)) -\Psi(u(x)) -d(x,y) \right\} >0.$$
The positive maximum must be attained at some point $(x_0,y_0) \in M \times M$ with $x_0 \neq y_0$, since the function $\Psi(u(y)) -\Psi(u(x)) -d(x,y)$ is continuous and vanishes on the diagonal of $M\times M$.
We replace $d(x,y)$ by $\rho(x,y)$ to apply the maximum principle. From the definition of $\rho(x,y)$, we see that $d(x,y) \leq \rho(x,y)$ in $U(x_0,y_0)$ with equality at $(x_0,y_0)$.
Thus we have
\begin{equation*}
\Psi(u(y)) -\Psi(u(x)) -\rho(x,y) \leq m
\end{equation*}
on $U(x_0, y_0)$ and with equality at $(x_0,y_0)$.
Now we can apply the maximum principle for semicontinuous functions on manifolds to conclude that for any $\lambda >0$, there exist $X\in Sym^2(T^*_{x_0}M)$ and $Y\in Sym^2(T^*_{y_0}M)$ such that
\begin{align*}
(\nabla_y\rho(x_0,y_0), Y ) & \in \overline{J}^{2,+}\Psi(u(y_0)), \\
(-\nabla_x \rho(x_0,y_0), X) & \in \overline{J}^{2,-}\Psi(u(x_0)),
\end{align*}
and
\begin{equation}\label{eq2.1}
\begin{pmatrix}
X & 0 \\
0 & -Y
\end{pmatrix}
\leq S+\lambda S^2,
\end{equation}
where $S=\nabla^2 \rho(x_0,y_0)$.
The first variation formula of arc length implies
\begin{equation*}
\nabla_y \rho(x_0,y_0) =e_n(s_0) \text{ and } \nabla_x \rho(x_0,y_0) =-e_n(0).
\end{equation*}
By Lemma 8 in \cite{AX19}, we get
\begin{align*}
\left(\varphi'(z_{y_0})e_n(s_0), \varphi'(z_{y_0}) Y +\varphi''(z_{y_0}) e_n(s_0) \otimes e_n(s_0) \right) & \in \overline{J}^{2,+}u(y_0), \\
\left(\varphi'(z_{x_0})e_n(0), \varphi'(z_{x_0}) X +\varphi''(z_{x_0}) e_n(0) \otimes e_n(0) \right) & \in \overline{J}^{2,-}u(x_0),
\end{align*}
where $z_{y_0} = \Psi (u(y_0))$ and $z_{x_0} = \Psi (u(x_0))$.
The fact that $u$ is a viscosity solution of \eqref{eq1.1} implies
\begin{eqnarray*}
&&\operatorname{tr}(\varphi'(z_{y_0})A_2 Y +\varphi''(z_{y_0})A_2e_n(s_0) \otimes e_n(s_0) ) +b(\varphi(z_{y_0}), \varphi'(z_{y_0}))\\
&\ge&\beta(\varphi(z_{y_0}), \varphi'(z_{y_0}))\varphi'(z_{y_0})\langle e_n(s_0), \nabla f(y_0) \rangle
\end{eqnarray*}
and
\begin{eqnarray*}
&& \operatorname{tr}(\varphi'(z_{x_0})A_1 X +\varphi''(z_{x_0})A_1 e_n(0) \otimes e_n(0) ) +b(\varphi(z_{x_0}), \varphi'(z_{x_0})) \\
&\leq& -\beta(\varphi(z_{x_0}), \varphi'(z_{x_0}))\varphi'(z_{x_0})\langle e_n(0), \nabla f(x_0) \rangle ,
\end{eqnarray*}
where
\begin{equation*}
A_1=\left(
\begin{array}{cccc}
\beta(\varphi(z_{x_0}), \varphi'(z_{x_0})) & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots \\
0 & \cdots &\beta(\varphi(z_{x_0}), \varphi'(z_{x_0})) & 0 \\
0 & \cdots & 0 & \alpha(\varphi(z_{x_0}), \varphi'(z_{x_0})) \\
\end{array}
\right),
\end{equation*}
and
\begin{equation*}
A_2=\left(
\begin{array}{cccc}
\beta(\varphi(z_{y_0}), \varphi'(z_{y_0})) & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots \\
0 & \cdots &\beta(\varphi(z_{y_0}), \varphi'(z_{y_0})) & 0 \\
0 & \cdots & 0 & \alpha(\varphi(z_{y_0}), \varphi'(z_{y_0})) \\
\end{array}
\right).
\end{equation*}
Therefore,
\begin{eqnarray*}
&& \alpha(\varphi(z_{y_0}), \varphi'(z_{y_0})) \varphi''(z_{y_0}) +b(\varphi(z_{y_0}), \varphi'(z_{y_0})) + \varphi'(z_{y_0}) \operatorname{tr} \left(\begin{pmatrix}
0 & C \\
C & A_2
\end{pmatrix} \begin{pmatrix}
X & 0 \\
0 & -Y
\end{pmatrix} \right) \\
&\ge& \beta(\varphi(z_{y_0}), \varphi'(z_{y_0}))\varphi'(z_{y_0})\langle e_n(s_0), \nabla f(y_0) \rangle,
\end{eqnarray*}
where
\begin{equation*}
C=\left(
\begin{array}{cccc}
\beta(\varphi(z_{y_0}), \varphi'(z_{y_0})) & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots \\
0 & \cdots &\beta(\varphi(z_{y_0}), \varphi'(z_{y_0})) & 0 \\
0 & \cdots & 0 & 0 \\
\end{array}
\right).
\end{equation*}
Similarly,
\begin{eqnarray*}
&& \alpha(\varphi(z_{x_0}), \varphi'(z_{x_0})) \varphi''(z_{x_0}) +b(\varphi(z_{x_0}), \varphi'(z_{x_0})) + \varphi'(z_{x_0}) \operatorname{tr} \left(\begin{pmatrix}
A_1 & 0 \\
0 & 0
\end{pmatrix} \begin{pmatrix}
X & 0 \\
0 & -Y
\end{pmatrix} \right) \\
&\le & \beta(\varphi(z_{x_0}), \varphi'(z_{x_0}))\varphi'(z_{x_0})\langle e_n(0), \nabla f(x_0) \rangle.
\end{eqnarray*}
Combining the above two inequalities,
\begin{eqnarray*}
0 &\leq& \left.\frac{\alpha(\varphi,\varphi') \varphi'' +b(\varphi, \varphi')}{\beta(\varphi,\varphi') \varphi'} \right|_{z_{y_0}} - \left.\frac{\alpha(\varphi,\varphi') \varphi'' +b(\varphi,\varphi')}{\beta(\varphi,\varphi') \varphi'} \right|_{z_{x_0}} \\
&& -\langle e_n(s_0), \nabla f(y_0)\rangle + \langle e_n(0), \nabla f(x_0) \rangle \\
&& + \frac{1}{\beta(\varphi(z_{y_0}), \varphi'(z_{y_0}))} \operatorname{tr} \left(\begin{pmatrix}
0 & C \\
C & A_2
\end{pmatrix} \begin{pmatrix}
X & 0 \\
0 & -Y
\end{pmatrix} \right) \\
&& +\frac{1}{\beta(\varphi(z_{x_0}), \varphi'(z_{x_0}))}\operatorname{tr} \left(\begin{pmatrix}
A_1 & 0 \\
0 & 0
\end{pmatrix} \begin{pmatrix}
X & 0 \\
0 & -Y
\end{pmatrix} \right).
\end{eqnarray*}
Set
\begin{eqnarray*}
W &=& \frac{1}{\beta(\varphi(z_{y_0}),\varphi'(z_{y_0}))} \begin{pmatrix}
0 & C \\
C & A_2
\end{pmatrix} +\frac{1}{\beta(\varphi(z_{x_0}),\varphi'(z_{x_0}))} \begin{pmatrix}
A_1 & 0 \\
0 & 0
\end{pmatrix} \\
&=& \begin{pmatrix}
I_{n-1} & 0 & I_{n-1} & 0 \\
0 & \left.\frac{\alpha(\varphi,\varphi')}{\beta(\varphi,\varphi')}\right|_{z_{x_0}} & 0 & 0 \\
I_{n-1} & 0 & I_{n-1} & 0 \\
0 & 0 & 0 & \left.\frac{\alpha(\varphi,\varphi')}{\beta(\varphi,\varphi')}\right|_{z_{y_0}}
\end{pmatrix}
\end{eqnarray*}
It's easy to see that $W$ is a positive semi-definite matrix.
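Indeed, for any $(u,v) \in \mathbb{R}^n \times \mathbb{R}^n$, the quadratic form of $W$ evaluates to
\begin{eqnarray*}
\sum_{i=1}^{n-1}(u_i+v_i)^2 + \left.\frac{\alpha(\varphi,\varphi')}{\beta(\varphi,\varphi')}\right|_{z_{x_0}} u_n^2 + \left.\frac{\alpha(\varphi,\varphi')}{\beta(\varphi,\varphi')}\right|_{z_{y_0}} v_n^2,
\end{eqnarray*}
which is nonnegative since $\alpha \geq 0$ and $\beta > 0$.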
Using \eqref{eq2.1}, we obtain that
\begin{eqnarray*}
0 &\leq& \left.\frac{\alpha(\varphi,\varphi') \varphi'' +b(\varphi, \varphi')}{\beta(\varphi,\varphi') \varphi'} \right|_{z_{y_0}}
- \left.\frac{\alpha(\varphi,\varphi') \varphi'' +b(\varphi,\varphi')}{\beta(\varphi,\varphi') \varphi'} \right|_{z_{x_0}} \\
&& -\langle e_n(s_0), \nabla f(y_0)\rangle + \langle e_n(0), \nabla f(x_0) \rangle \\
&& + \operatorname{tr}(WS) +\lambda \operatorname{tr}(WS^2)
\end{eqnarray*}
and
\begin{eqnarray*}
\operatorname{tr}(WS) &=& \sum_{i=1}^{n-1} \nabla^2 \rho \left( (e_i(0),e_i(s_0)), (e_i(0),e_i(s_0)) \right) \\
&& + \left.\frac{\alpha(\varphi,\varphi') }{\beta(\varphi,\varphi') } \right|_{z_{x_0}} \nabla^2 \rho \left( (e_n(0), 0), (e_n(0),0) \right) \\
&& + \left.\frac{\alpha(\varphi,\varphi') }{\beta(\varphi,\varphi') } \right|_{z_{y_0}} \nabla^2 \rho \left( (0, e_n(s_0)), (0, e_n(s_0)) \right) \\
&=& - \int_0^{s_0} \operatorname{Ric}(e_n,e_n) ds
\end{eqnarray*}
where we used the variation formulas
\begin{eqnarray*}
\nabla^2 \rho \left( (e_n(0), 0), (e_n(0),0) \right) =0, \quad
\nabla^2 \rho \left( (0, e_n(s_0)), (0, e_n(s_0)) \right) =0.
\end{eqnarray*}
and (\ref{2nd2}) with $\eta(s)=1$.
Finally, we get
\begin{eqnarray*}
0 &\leq& \left.\frac{\alpha(\varphi,\varphi') \varphi'' +b(\varphi, \varphi')}{\beta(\varphi,\varphi') \varphi'} \right|^{z_{y_0}}_{z_{x_0}} +\lambda \operatorname{tr}(WS^2) \\
&& -\langle e_n(s_0), \nabla f(y_0)\rangle + \langle e_n(0), \nabla f(x_0) \rangle - \int_0^{s_0} \operatorname{Ric}(e_n,e_n) ds \\
&\leq & \kappa z_{y_0} -\kappa z_{x_0} +\lambda \operatorname{tr}(WS^2) -\kappa s_0 \\
&=& \kappa \left (\Psi(u(y_0)) -\Psi(u(x_0)) -d(x_0,y_0) \right) +\lambda \operatorname{tr}(WS^2) \\
&=& \kappa \, m +\lambda \operatorname{tr}(WS^2),
\end{eqnarray*}
where we used the curvature condition $\operatorname{Ric}+\nabla^2 f\ge \kappa g$ in the second inequality.
Since $\kappa <0$ and $m>0$, we get a contradiction by letting $\lambda \to 0$.
\end{proof}
\subsection{The case $\kappa >0$}
The argument given for $\kappa \leq 0$ in the previous section does not lead to a contradiction when $\kappa >0$. As in \cite{AX19}, we may not be able to show that any solution $\varphi$ to the one-dimensional equation is a barrier in the case $\kappa >0$. However, we prove that for some family of solutions to the one-dimensional equation, the property of being barriers can be extended smoothly in the family. Moreover, this phenomenon holds for any $\kappa \in \mathbb{R}$, regardless of its sign.
\begin{thm}\label{Thm3.1}
Let $(M^n, g,f)$ be a closed Bakry-Emery manifold with $\operatorname{Ric}+\nabla^2 f \geq \kappa g$ for some $\kappa > 0$. Let $u$ be a $C^3$ solution of equation \eqref{eq1.1}.
Assume $\alpha, \beta$ are $C^2$ functions.
Suppose $\varphi_c:[a_c,b_c] \to [\inf u, \sup u]$ is a family of $C^2$ solutions of the one-dimensional equation
\begin{equation}
\alpha(\varphi,\varphi') \varphi'' -\kappa \, t \, \beta(\varphi, \varphi') \varphi' +b(\varphi, \varphi') =0
\end{equation}
on $[a_c,b_c]$ which satisfies
\begin{itemize}
\item[(i)] $\varphi_c(a_c)=\inf u, \varphi_c(b_c)=\sup u, \varphi_c'>0 \text{ on } [a_c,b_c];$
\item[(ii)] $\varphi'_c$ is uniformly large for $c \gg c_u$;
\item[(iii)] $\varphi_c$ depends smoothly on $c \in (c_u, \infty)$.
\end{itemize}
Let $\Psi_c$ be the inverse of $\varphi_c$. Then we have
\begin{equation}\label{eq3.2}
\Psi_c(u(y)) -\Psi_c(u(x)) -d(x,y) \leq 0,
\end{equation}
for all $x,y \in M$ and $c\in (c_u, \infty)$.
\end{thm}
\begin{corollary}
Under the assumptions of Theorem \ref{Thm3.1}, we have
\begin{equation*}
|\nabla u(x)| \leq \varphi'_c \left(\Psi_c(u(x)) \right),
\end{equation*}
for all $x\in M$ and $c\in (c_u, \infty)$.
\end{corollary}
We prove the following lemma, which will be needed in the proof of Theorem \ref{Thm3.1}.
\begin{lemma}\label{lemma3.1}
Let $u$ be a $C^3$ solution of \eqref{eq1.1}. Let $x\neq y$ with $d(x,y) < \text{inj}(M)$, the injectivity radius of $M$. Let $\gamma_0: [0,s_0] \to M$ be the length-minimizing geodesic from $x$ to $y$, and choose Fermi coordinates as before. Let $z_y=\Psi(u(y))$ and $z_x =\Psi(u(x))$.
Then
\begin{equation*}
Z(x,y): =\Psi(u(y)) -\Psi(u(x)) -d(x,y)
\end{equation*}
satisfies
\begin{eqnarray*}
\mathcal{F}[Z]&:=& \left.\frac{\alpha(\varphi,\varphi')}{\beta(\varphi,\varphi')}\right|_{z_y} \nabla^2_{((0,e_n),(0,e_n))} Z + \left.\frac{\alpha(\varphi,\varphi')}{\beta(\varphi,\varphi')}\right|_{z_x} \nabla^2_{((e_n,0),(e_n,0))} Z \\
&& + \sum_{i=1}^{n-1} \nabla^2_{((e_i,e_i),(e_i,e_i))} Z \\
&=& \left.\frac{\alpha(\varphi,\varphi')\varphi'' +b(\varphi,\varphi')}{\beta(\varphi,\varphi')\varphi'}\right|^{z_x}_{z_y}
+ \nabla Z * \nabla Z +P * \nabla Z,
\end{eqnarray*}
where the coefficients of $\nabla Z * \nabla Z$ and $P *\nabla Z$ are $C^1$ functions.
\end{lemma}
\begin{proof}
This is a special case of Lemma 15 in \cite{AX19}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm3.1}]
We argue by contradiction and assume that \eqref{eq3.2} does not hold for some $c_0 >c_u$.
Let $\Delta = \{(x,x) : x\in M\}$ be the diagonal of $M\times M$ and consider a manifold $\hat{M}$ with boundary, which is a natural compactification of $(M\times M)\setminus \Delta$.
As a set, $\hat{M}$ is the disjoint union of $(M\times M)\setminus \Delta$ and the unit sphere bundle $SM:=\{(x,v) : x \in M, v \in T_xM, |v|=1 \}$. The manifold-with-boundary structure is defined by the atlas generated by all charts for $(M\times M)\setminus \Delta$, together with the charts $\hat{Y}$ from $SM\times (0,r)$ defined by taking a chart $Y$ for $SM$ and setting $\hat{Y}(z,s):= \left(\exp(sY(z)), \exp(-s Y(z)) \right)$.
For simplicity of notations, we write $\varphi =\varphi_c$ and $\Psi =\Psi_c$ in the rest of the proof.
Define the function $\hat{Z}$ on $\hat{M}$ by
\begin{equation*}
\hat{Z}(x,y) =\frac{Z(x,y)}{d(x,y)} \text{ for } (x,y) \in (M\times M )\setminus \Delta,
\end{equation*}
and
\begin{equation*}
\hat{Z}(x, v) =\frac{\nabla_v u(x)}{\varphi'(\Psi(u(x)))} -1 \text{ for } (x,v) \in SM.
\end{equation*}
It's easy to see that the function $\hat{Z}$ is continuous on $\hat{M}$.
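To see the continuity up to the boundary (a sketch, approaching along radial geodesics for simplicity): for $(x,v) \in SM$, set $y=\exp_x(tv)$, so that $d(x,y)=t$ for small $t>0$; then
\begin{eqnarray*}
\frac{Z(x,\exp_x(tv))}{d(x,\exp_x(tv))} = \frac{\Psi(u(\exp_x(tv))) -\Psi(u(x)) -t}{t}
\longrightarrow \frac{\nabla_v u(x)}{\varphi'(\Psi(u(x)))} -1 \quad \text{as } t\to 0,
\end{eqnarray*}
where we used $\Psi' = 1/(\varphi' \circ \Psi)$. This matches the boundary values of $\hat{Z}$ on $SM$.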
Assumption (ii) implies that $\hat{Z} \leq 0$ on $\hat{M}$ for all $c$ sufficiently large.
So let $c_1$ be the smallest number such that
$\hat{Z} \leq 0$ on $\hat{M}$ for all $c \geq c_1$, i.e.,
\begin{equation*}
c_1 = \inf \{ t > c_u : \hat{Z} \leq 0 \text{ on } \hat{M} \text{ for all } c \in (t, \infty) \}.
\end{equation*}
By continuity, we have $c_1 > c_0$, which we shall prove leads to a contradiction.
For $c=c_1$, there will be two cases.
\textbf{Case 1:} $\hat{Z}(x_0,y_0) =0 $ for some $x_0 \neq y_0$.
By Lemma \ref{lemma3.1}, we have at $(x_0,y_0)$,
\begin{eqnarray*}
\mathcal{F}[Z] +\nabla Z *\nabla Z +P*\nabla Z \geq \kappa (z_{y_0}-z_{x_0}) = \kappa \, d(x_0,y_0) >0.
\end{eqnarray*}
This contradicts the fact that $Z$ attains its maximum at $(x_0,y_0)$, thus ruling out Case 1.
\textbf{Case 2:} $Z(x,y) < 0$ for all $x\neq y \in M$ and $\hat{Z}(x_0, v_0) =0$ for some $(x_0,v_0) \in SM$.
In this case, by Lemma \ref{lemma3.1}, we have for $x\neq y$ close enough to each other,
\begin{eqnarray*}
\mathcal{F}[Z] +\nabla Z *\nabla Z +P*\nabla Z \geq \kappa (z_{y}-z_{x}) = \kappa \, d(x,y) + \kappa Z \geq \kappa Z.
\end{eqnarray*}
The Hopf maximum principle in \cite{Hill70} applies to this situation and yields at $(x_0,v_0)$,
\begin{eqnarray*}
0 > \nabla_{(0,v_0)} Z(x_0,x_0)
&=&\lim_{t\to 0} \frac{Z(x_0,\exp_{x_0}(tv_0)) -Z(x_0,x_0)}{t} \\
&=& \lim_{t\to 0} \frac{\Psi(u(\exp_{x_0}(tv_0))) -\Psi(u(x_0)) - d(x_0, \exp_{x_0}(tv_0))}{t} \\
&=& \frac{\nabla_{v_0}u(x_0)}{\varphi'(z_{x_0})} -1.
\end{eqnarray*}
This is a contradiction to $\hat{Z}(x_0,v_0) =\frac{\nabla_{v_0}u(x_0)}{\varphi'(z_{x_0})} -1 =0$.
Therefore, Case 2 is impossible.
\end{proof}
\section*{Acknowledgments} {The authors would like to thank Professor Lei Ni for his encouragement and interests in this work.}
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
Nonlinear diamond photonics provides an attractive technical basis for on-chip
photonic applications \cite{Hausmann:N:2014}, and has triggered numerous
research efforts in recent years.
Owing to the unique material properties of diamond
\cite{Rath:PSS:2015,Gaeta:NP:2019}, given by its large Kerr nonlinearity, wide
bandgap, high refractive index, negligible multi-photon loss, and transmission
window spanning from the ultrarviolet to the far-infrared, impressive
demonstrations of photonic devices with novel functionalities have emerged.
This includes, e.g., its use as a platform for quantum communication
\cite{Beveratos:PRL:2002}, and integrated high-$Q$ optical resonators
\cite{Hausmann:NL:2013,Hausmann:N:2014}, operating at new wavelengths compared
to existing chip-based nonlinear photonic devices for frequency comb generation
\cite{Kippenberg:S:2011,Gaeta:NP:2019}.
Diamond thus goes beyond its use in quantum optics applications and is becoming a
versatile material for optical devices.
A direct transfer of concepts from photonic crystal fibers and
silicon-based waveguides \cite{Ding:OE:2008,BlancoRedondo:NC:2014}, such as
pulse-compression schemes and soliton effects, to the diamond-based
platform seems possible.
Here, we consider the supercontinuum generation process
\cite{Ranka:OL:2000,Agrawal:BOOK:2019,Mitschke:BOOK:2016,Skryabin:RMP:2010}, a
paradigm of optical pulse propagation in fibers, which has revolutionized
optical coherence tomography \cite{Hartl:OL:2001}, and frequency metrology
\cite{Udem:N:2002}. In common silica-based photonic crystal fibers, this
process occurs on the lengthscale of several centimeters
\cite{Dudley:RMP:2009}, or even meters \cite{Ranka:OL:2000}.
We use the propagation properties of a diamond waveguide surrounded by silica
\cite{Hausmann:N:2014}, and demonstrate in terms of numerical simulations that
the supercontinuum generation process unfolds on a much shorter,
millimeter-length propagation scale.
In our analysis,
we investigate the
propagation dynamics of ultrashort optical pulses via the generalized
nonlinear Schrödinger equation \cite{Agrawal:BOOK:2019}, taking into account
higher-order dispersion, pulse self-steepening
\cite{deMartini:PR:1967,deOliveira:JOSAB:1992}, and the Raman effect
\cite{Gordon:OL:1986}.
This accounts for various processes that support the generation of widely
extended supercontinuum spectra, such as the modulation instability
\cite{Demircan:OC:2005}, soliton-fission
\cite{Husakou:PRL:2001,Demircan:APB:2007}, and self-frequency shift of Raman
solitons \cite{Gordon:OL:1986}.
The initial stage of the supercontinuum generation process allows us to identify a
pulse self-compression mechanism based solely on soliton effects
\cite{Mollenauer:OL:1983,Oliver:OL:2021}.
Exploiting this mechanism for higher-order soliton compression, we achieve
record-breaking pulse-compression factors, outperforming recent studies in
silicon-nitride waveguides \cite{Oliver:OL:2021}.
For instance, the compression of a hyperbolic-secant shaped input pulse of
$300\,\mathrm{fs}$, corresponding to a higher-order soliton of order $N=15$,
down to $5.4\,\mathrm{fs}$ is achieved on a propagation length of only
$6.33\,\mathrm{mm}$.
In this respect, diamond allows to consider comparatively high pulse
intensities enabling conditions that facilitate high-order soliton propagation
effects when pumping in the domain of anomalous dispersion.
The fabrication of diamond waveguides with cross-sections that allow to engineer
the required dispersion profiles, working at telecom wavelengths and exhibiting
the key-feature of a wide domain of anomalous dispersion, is technically
feasible \cite{Hausmann:N:2014,Feigel:OL:2017}.
In this regard, since silica-based fibers have clear limitations concerning
transparency and convenient dispersion profiles (as described in Sect.\
\ref{sec:methods} below),
working with diamond seems beneficial,
e.g., the ability to engineer unusual dispersion profiles with several
zero-dispersion points leads to the observation of new phenomena
\cite{Melchert:PRL:2019,Tam:PRA:2020,Tsoy:PRA:2007,Melchert:OL:2021,Willms:PRA:2022}.
In Sect.\ \ref{sec:methods} we introduce the numerical model for
nonlinear pulse propagation in more detail.
Section \ref{sec:results} contains the analysis of the supercontinuum
generation process and the pulse self-compression scheme in the considered
diamond waveguide.
Finally, we discuss our results and conclude in Sect.\ \ref{sec:discussion}.
\section{Methods}
\label{sec:methods}
For the numerical simulation and analysis of the nonlinear $z$-propagation
dynamics of ultrashort laser pulses we use the generalized nonlinear
Schrödinger equation (GNLS) \cite{Agrawal:BOOK:2019,Dudley:RMP:2009}
\begin{align}
\partial_z A = i \sum_{n\geq 2}& \frac{\beta_n}{n!}(i\partial_t)^n A + i \gamma \left( 1+i\omega_0^{-1}\partial_t\right) \notag \\
&\times \left[\,A(z,t)\int R(t^\prime) |A(z,t-t^\prime)|^2~{\rm{d}}t^\prime\right],
\label{eq:GNLS}
\end{align}
for a complex-valued field $A\equiv A(z,t)$ on a periodic time-domain of extent
$T$ with boundary condition $A(z,-T/2)=A(z,T/2)$.
In Eq.~(\ref{eq:GNLS}), $t$ is a retarded time measured in a reference frame
moving with the group velocity at $\omega_0$, where $\omega_0$ is a reference
frequency with units $\mathrm{rad/ps}$.
The real-valued coefficients $\beta_n$ specify the dispersion coefficients of
order $n$ with units $\mathrm{ps}^n/\mathrm{m}$, and $\gamma$ specifies the
nonlinear coefficient with units $\mathrm{W^{-1}/m}$.
\begin{figure}[t!]
\includegraphics[width=\linewidth]{fig01_OM.pdf}
\caption{Characteristics of the propagation constant of the considered diamond
waveguide, reproduced following Fig.~5(a) Ref.~\cite{Hausmann:N:2014}. (a)
Frequency dependence of the relative group delay (rGD). (b) Frequency
dependence of the group-velocity dispersion (GVD), with zero-dispersion points
are at $\lambda_{\mathrm{Z1}}\approx 843\,\mathrm{nm}$, and
$\lambda_{\rm{Z2}}\approx 2340\,\mathrm{nm}$.
Domains of normal dispersion are shaded gray.
Top axis in (a) indicates the detuning $\Omega$, related to the wavelength
through $\lambda = 2 \pi c/(\omega_0+\Omega)$ with speed of light $c$.
\label{fig:OM:01}}
\end{figure}
To model dispersive and nonlinear effects in diamond waveguides we use a
propagation constant $\beta(\Omega)=\sum_{n\geq 2} (\beta_n/n!) \Omega^n$,
where $\Omega=\omega-\omega_0$ defines an angular frequency detuning,
characterized by the relative group delay
$\beta_1(\Omega)=\partial_\Omega\,\beta(\Omega)$ shown in
Fig.~\ref{fig:OM:01}(a), and group-velocity dispersion
$\beta_2(\Omega)=\partial_\Omega^2\,\beta(\Omega)$ shown in
Fig.~\ref{fig:OM:01}(b).
This broadband anomalous dispersion profile characterizes a silica embedded
diamond waveguide with height $H=950\,\mathrm{nm}$ and width
$W=875\,\mathrm{nm}$, extracted from Ref.~\cite{Hausmann:N:2014}.
This waveguide device was designed for the telecom wavelength range and
exhibits a wide domain of anomalous dispersion, bounded by zero dispersion
points at $\lambda_{\mathrm{Z1}}\approx 843\,\mathrm{nm}$ and
$\lambda_{\rm{Z2}}\approx 2340\,\mathrm{nm}$, see Fig.~\ref{fig:OM:01}.
We further use $\gamma=9.6\,\mathrm{W^{-1}/m}$ \cite{Feigel:OL:2017}.
The Raman effect is included via the total response function
\begin{align}
R(t)=(1-f_{{R}})\,\delta(t) + f_{{R}}\,h_{{R}}(t), \label{eq:R}
\end{align}
where the first term defines the instantaneous Kerr response, and where the
second term specifies a generic two-parameter Raman response function
\cite{Blow:JQE:1989,Stolen:JOSAB:1989}
\begin{align}
h_{{R}}(t) = \frac{\tau_1^2 + \tau_2^2}{\tau_1 \tau_2^2}\,e^{-t/\tau_2}\,\sin(t/\tau_1)\,\Theta(t), \label{eq:hR_t}
\end{align}
with fractional contribution $f_{{R}}$, with the Heaviside step-function
$\Theta(t)$ ensuring causality.
To model the Raman effect in diamond waveguides we here use $f_{{R}}=0.20$, $\tau_1
= 4.0~\mathrm{fs}$, and $\tau_2=5.7~\mathrm{fs}$ \cite{Kardas:OE:2013}.
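As an aside, Eq.~(\ref{eq:hR_t}) is straightforward to sample numerically. The minimal Python sketch below (grid and function names are illustrative only, not part of any specific software package) also verifies the normalization $\int_0^{\infty} h_{{R}}(t)\,{\rm{d}}t=1$:
\begin{verbatim}
import numpy as np

f_R, tau1, tau2 = 0.20, 0.0040, 0.0057   # diamond parameters; times in ps

def h_R(t):
    # two-parameter Raman response of Eq. (hR_t); causal for t >= 0
    pre = (tau1**2 + tau2**2) / (tau1 * tau2**2)
    return np.where(t >= 0, pre * np.exp(-t / tau2) * np.sin(t / tau1), 0.0)

t = np.linspace(0.0, 0.1, 4001)          # 0 ... 100 fs grid
print(np.trapz(h_R(t), t))               # ~1.0 (normalization check)
\end{verbatim}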
Using a discrete sequence of angular frequency detunings $\Omega=
\omega-\omega_0 \in 2\pi T^{-1} \mathbb{Z}$, the
expressions
\begin{subequations}\label{eq:FT}
\begin{align}
&A_\Omega(z) = \frac{1}{T} \int_{-T/2}^{T/2} A(z,t)\,e^{i\Omega t}~{\rm{d}}t,\label{eq:FT_FT}\\
&A(z,t) = \sum_{\Omega} A_\Omega(z)\,e^{-i\Omega t}, \label{eq:FT_IFT}
\end{align}
\end{subequations}
specify forward [Eq.~(\ref{eq:FT_FT})], and inverse [Eq.~(\ref{eq:FT_IFT})]
Fourier transforms, relating the field envelopes $A(z,t)$ to the spectral
envelopes $A_\Omega(z)$.
The energy of the field $A$ can be written in the form $E(z)=\hbar \sum_\Omega
n_\Omega(z) \,(\omega_0+\Omega)$, where $\hbar$ is the reduced Planck constant,
and where the dimensionless quantity $n_{\Omega}(z) \equiv T |A_{\Omega}(z)|^2/[\hbar
(\omega_0 + \Omega)]$ specifies the number of photons with energy $\hbar
(\omega_0\!+\!\Omega)$. Consequently, the total number of photons is given by
\begin{align}
C_{\rm{Ph}}(z) = \frac{2\pi}{\hbar \Delta \Omega}\sum_\Omega \frac{|A_\Omega(z)|^2}{\omega_0 + \Omega}. \label{eq:CN}
\end{align}
Let us note that the GNLS (\ref{eq:GNLS}) conserves the total number of photons
$C_{\rm{Ph}}$, but does not conserve the energy $E$ due to the Raman
interaction and self-steepening \cite{Blow:JQE:1989}.
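On a discrete grid, Eq.~(\ref{eq:CN}) can be evaluated directly from the field samples; a minimal sketch using the conventions of Eqs.~(\ref{eq:FT}) reads as follows (variable names are illustrative; the constant phase originating from the choice of time origin drops out of $|A_\Omega|^2$):
\begin{verbatim}
import numpy as np

HBAR = 1.0546e-10   # reduced Planck constant in pJ ps

def photon_number(A_t, T, omega0):
    # A_t: field samples in sqrt(W) over one period T (ps); omega0 in rad/ps
    M = A_t.size
    dOm = 2.0 * np.pi / T                          # detuning grid spacing
    Om = 2.0 * np.pi * np.fft.fftfreq(M, d=T / M)  # detunings Omega (rad/ps)
    A_Om = np.fft.ifft(A_t)                        # Eq. (FT_FT) up to a phase
    return 2.0 * np.pi / (HBAR * dOm) \
        * np.sum(np.abs(A_Om)**2 / (omega0 + Om))
\end{verbatim}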
The numerical simulations in terms of the GNLS reported below are performed
using the variable stepsize ``conservation quantity error'' (CQE) method
\cite{Heidt:JLT:2009,Rieznik:IEEEPJ:2012,Melchert:CPC:2022,GNLStools:GitHub:2022},
with stepsize selection guided by $C_{\rm{Ph}}$.
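While the details of the CQE scheme are beyond our scope here, the structure of a single propagation step of Eq.~(\ref{eq:GNLS}) can be illustrated by the fixed-stepsize split-step sketch below (a simplification for illustration, not the adaptive CQE algorithm; \texttt{hR\_Om} is assumed to hold $\int h_{{R}}(t)e^{i\Omega t}\,{\rm{d}}t$ sampled on the detuning grid):
\begin{verbatim}
import numpy as np

def gnls_step(A, dz, Om, beta_Om, gamma, omega0, hR_Om, f_R):
    # symmetric split step: half linear, explicit nonlinear, half linear
    half = np.exp(0.5j * beta_Om * dz)         # e^{i beta(Omega) dz/2}
    A = np.fft.fft(half * np.fft.ifft(A))      # spectra as in Eq. (FT_FT)
    I = np.abs(A)**2
    RI = (1 - f_R) * I \
        + f_R * np.fft.fft(hR_Om * np.fft.ifft(I)).real   # R * |A|^2
    N_Om = 1j * gamma * (1 + Om / omega0) * np.fft.ifft(A * RI)
    A = A + dz * np.fft.fft(N_Om)              # Euler nonlinear substep
    return np.fft.fft(half * np.fft.ifft(A))
\end{verbatim}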
To assess time-frequency interrelations within the field $A$
at a selected propagation distance $z$, we use the spectrogram
\cite{Melchert:SFX:2019,Cohen:IEEE:1989}
\begin{equation}
P_{S}(t,\Omega) = \frac{1}{2 \pi} \left|\int_{-T/2}^{T/2} A(z,t^\prime)h(t^\prime-t) e^{-i \Omega t^\prime}~{\rm d}t^\prime\right|^2, \label{eq:PS}
\end{equation}
wherein $h(x)=\exp(-x^2/2\sigma^2)$ is a Gaussian window function with
root-mean-square width $\sigma$, which localizes $A$ in time.
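A direct, if slow, numerical evaluation of Eq.~(\ref{eq:PS}) on given time and detuning grids may look as follows (illustrative sketch; names are placeholders):
\begin{verbatim}
import numpy as np

def spectrogram(A, t, Om, sigma):
    # P_S(t, Omega) by direct quadrature of Eq. (PS)
    dt = t[1] - t[0]
    E = np.exp(-1j * np.outer(Om, t))          # kernel exp(-i Omega t')
    P = np.empty((Om.size, t.size))
    for j, tj in enumerate(t):                 # loop over window centers
        g = A * np.exp(-0.5 * ((t - tj) / sigma)**2)
        P[:, j] = np.abs(E @ (g * dt))**2 / (2.0 * np.pi)
    return P
\end{verbatim}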
\begin{figure*}[t!]
\includegraphics[width=\linewidth]{fig02_OM.pdf}
\caption{Supercontinuum generation processes. (a-c) Results for a silica-based
optical fiber. (a) Time-domain propagation dynamics. Inset shows zoom-in on the
soliton fission process. (b) Spectral-domain propagation dynamics. Vertical
dashed line indicates zero-dispersion point at $\lambda_{\rm{Z}}\approx
780\,\mathrm{nm}$. (c) Spectrogram at $z=12\,\mathrm{cm}$ using a rms-width
$\sigma=25\,\mathrm{fs}$ to localize the field.
(d-f) Same as (a-c) for a diamond waveguide. Vertical dashed lines in (e)
indicate zero-dispersion points at $\lambda_{Z1}\approx 843\,\mathrm{nm}$ and
$\lambda_{\rm{Z2}}\approx 2340\,\mathrm{nm}$.
\label{fig:OM:02}}
\end{figure*}
\section{Results}
\label{sec:results}
\paragraph{Supercontinuum generation}
Below we compare a supercontinuum generation process in a standard silica-based
optical fiber with properties detailed in Ref.~\cite{Dudley:RMP:2009}, to
supercontinuum generation in a silica surrounded diamond waveguide exhibiting
the dispersion properties detailed in Fig.~\ref{fig:OM:01}.
Results of numerical simulations using initial hyperbolic-secant pulses
$A_0(t)=\sqrt{P_0}\,{\mathrm{sech}}(t/t_0)$ with peak power $P_0$ and duration $t_0$
are shown in Fig.~\ref{fig:OM:02} (parameters are detailed below).
Starting from the spectrally narrow input pulse, the interplay of linear and
nonlinear effects inherent to Eq.~(\ref{eq:GNLS}) leads to an enormous spectral
broadening. This involves soliton fission, i.e.\ the successive breakup of the
initial pulse into fundamental solitons, see the insets of
Fig.~\ref{fig:OM:02}(a) and Fig.~\ref{fig:OM:02}(d),
for a silica fiber and a diamond waveguide, respectively.
The pulse breakup is accompanied by the generation of dispersive waves in the
domain of normal dispersion, extending the spectrum towards the blue side.
Due to the Raman effect these solitons experience a self-frequency shift
[Figs.~\ref{fig:OM:02}(b,e)], extending the red side of the spectrum and
resulting in a deceleration of the pulses in the time domain
[Figs.~\ref{fig:OM:02}(a,d)].
Under certain conditions, the ejected solitons form strong refractive index
barriers that cannot be surpassed by quasi group-velocity matched dispersive
waves in the domain of normal dispersion, resulting in reflection processes
that further extend the blue side of the spectrum
\cite{Dudley:RMP:2009,Driben:OE:2010,Demircan:PRL:2013,Demircan:OL:2014}.
Instances of such reflection processes are visible in the spectrograms in
Figs.~\ref{fig:OM:02}(c,f).
While both supercontinuum generation processes look very similar regarding the
structure of their underlying soliton fission processes, see insets of
Figs.~\ref{fig:OM:02}(a,d), and spectrum, see Figs.~\ref{fig:OM:02}(b,e), both
occur on very different energy scales and propagation distances.
Let us note that a fundamental soliton for the silica-based optical fiber with
$\beta_2= -0.011\,\mathrm{ps^2/m}$, $\gamma=0.1\,\mathrm{W^{-1}/m}$, and, say,
$t_0=0.1\,\mathrm{ps}$, would require a peak power $P_0\approx 11\,\mathrm{W}$
and yield a soliton period $z_{S}=(\pi/2) L_D \approx 1.4\,\mathrm{m}$
(dispersion length $L_D=t_0^2/|\beta_2|$). Such a fundamental soliton would have
energy $E=2.2\,\mathrm{pJ}$.
In contrast, a diamond waveguide with $\beta_2=-0.26\,\mathrm{ps^2/m}$,
$\gamma=9.6\,\mathrm{W^{-1}/m}$, and $t_0=0.1\,\mathrm{ps}$ requires only
$P_0=2.7\,\mathrm{W}$ and exhibits $z_S\approx 0.06\,\mathrm{m}$. In this
case, $E=0.54 \,\mathrm{pJ}$, i.e.\ the energy required for the fundamental
soliton is smaller by about a factor of four.
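These figures follow from the elementary soliton relations $P_0=|\beta_2|/(\gamma t_0^2)$, $z_S=(\pi/2)\,t_0^2/|\beta_2|$, and $E=2P_0t_0$; a few lines of Python reproduce them (the helper name is ours):
\begin{verbatim}
import math

def soliton_figures(beta2, gamma, t0):
    # beta2 in ps^2/m, gamma in 1/(W m), t0 in ps
    P0 = abs(beta2) / (gamma * t0**2)        # fundamental-soliton power (W)
    zS = 0.5 * math.pi * t0**2 / abs(beta2)  # soliton period (m)
    E = 2.0 * P0 * t0                        # sech-pulse energy (pJ)
    return P0, zS, E

print(soliton_figures(-0.011, 0.1, 0.1))  # silica: ~(11 W, 1.4 m, 2.2 pJ)
print(soliton_figures(-0.26, 9.6, 0.1))   # diamond: ~(2.7 W, 0.06 m, 0.54 pJ)
\end{verbatim}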
For the supercontinuum generation process shown in Fig.~\ref{fig:OM:02}, in
case of the silica-based optical fiber, the initial pulse had peak power
$P_0=10\,\mathrm{kW}$ and duration $t_0=28.4\,\mathrm{fs}$, injected at
$\omega_0=2.260\,\mathrm{rad/fs}$ ($\lambda_0=835\,\mathrm{nm}$), corresponding
to a soliton of order $N\approx 8.7$.
In case of the diamond waveguide, $P_0=1.66\,\mathrm{kW}$,
$t_0=20\,\mathrm{fs}$, and $\omega_0=1.82\,\mathrm{rad/fs}$ ($\lambda_0=1035\,\mathrm{nm}$), corresponding to a soliton of order $N=7$.
Let us point out that while the supercontinuum generation process in the silica
fiber develops on a lengthscale of $12\,\mathrm{cm}$, a similar dynamics in
case of the diamond waveguide unfolds on merely $6\,\mathrm{mm}$.
\begin{figure}[t!]
\includegraphics[width=\linewidth]{fig03_OM.pdf}
\caption{Pulse compression scheme based on soliton dynamics.
(a) Dependence of the soliton self-compression factor (SCF) $F_{\rm{SC}}$ on
the soliton order $N$.
(b) Dependence of the soliton self-compression length (SCL) $z_{\rm{SC}}$ on
$N$. The scaling laws governing the dashed lines in (a,b) are detailed in the
text. Blue dots in (a,b) are the results of numerical simulations in terms of
Eq.~(\ref{eq:GNLS}).
(c) Intensity profile at $z=z_{\rm{SC}}$ for soliton orders $N=10,\,15,\,20$,
centered on the pulse peak position $t_{\rm{peak}}$.
(d) Propagation dynamics in the time-domain for $N=15$, and, (e) propagation
dynamics in the spectral domain.
Horizontal lines in (d,e) indicate the optimal self-compression distance
$z_{\rm{SC}}=6.33\,\mathrm{mm}$. Vertical dashed lines in (e) indicate
zero-dispersion points.
(f) Variation of the pulse intensity upon propagation distance for soliton
orders $N=10,\,15,\,20$.
\label{fig:OM:03}}
\end{figure}
\paragraph{Self-compression scheme}
The initial stage of the above supercontinuum generation processes, which is
characterized by an enormous spectral broadening, allows us to identify a pulse
compression scheme, which, in the case of the diamond waveguide
[Figs.~\ref{fig:OM:02}(d-f)], proceeds on a propagation scale of less than a
millimeter.
Subsequently we discuss this initial self-compression of a higher-order
soliton, occurring in the time-domain, in more detail.
The narrowing of picosecond pulses in a silica-based single-mode optical fiber
in a domain of anomalous dispersion was demonstrated experimentally in
Ref.~\cite{Mollenauer:OL:1983}.
Therein, pulses with initial duration of $t_0\approx 4~\mathrm{ps}$
($7~\mathrm{ps}$ FWHM according to Ref.~\cite{Mollenauer:OL:1983}) were
compressed to about $1/27$ of their initial duration within a fiber of length
$320~\mathrm{m}$.
The underlying mechanism builds upon soliton effects: for a negative value of
group-velocity dispersion and for high pulse intensities, exceeding that of a
fundamental soliton, the resulting chirp across the pulse leads to pulse
narrowing upon propagation; after the pulse attains maximum compression,
soliton fission occurs [see Fig.~\ref{fig:OM:02}(d)]; the higher the initial
intensity, the smaller the propagation distance that is required to reach the
point of maximum compression.
A qualitative analysis of the propagation dynamics of solitons of high order
$N$ in terms of the common nonlinear Schrödinger equation (NLS), i.e.\
Eq.~(\ref{eq:GNLS}) with $f_R=0$ and nonzero $\beta_2<0$ only, resulted in
approximate scaling laws for the maximally attainable self-compression (SC)
factor \cite{Mollenauer:OL:1983,Oliver:OL:2021}
\begin{align}
F_{\rm{SC}} = 4.1 N, \label{eq:FSC}
\end{align}
and the optimal self-compression distance \cite{Mollenauer:OL:1983,Oliver:OL:2021}
\begin{align}
z_{\rm{SC}} = \frac{\pi L_D}{2N} \left(0.32+\frac{1.1}{N}\right),\label{eq:zSC}
\end{align}
where $L_D=t_0^2/|\beta_2|$ is the dispersion length of the initial pulse and
$z_{\rm{SC}}$ specifies the propagation distance at which the compression
factor $F_{\rm{SC}}$ is achieved.
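For the parameters considered below ($t_0=0.3\,\mathrm{ps}$, $\beta_2\approx -0.59\,\mathrm{ps^2/m}$, $N=15$), Eqs.~(\ref{eq:FSC}) and (\ref{eq:zSC}) are readily evaluated (illustrative helper):
\begin{verbatim}
import math

def compression_estimates(N, t0, beta2):
    # scaling laws of Eqs. (FSC), (zSC); t0 in ps, beta2 in ps^2/m
    LD = t0**2 / abs(beta2)                           # dispersion length (m)
    F_SC = 4.1 * N
    z_SC = 0.5 * math.pi * LD / N * (0.32 + 1.1 / N)  # optimal distance (m)
    return F_SC, z_SC

print(compression_estimates(15, 0.3, -0.59))  # ~(61.5, 6.3e-3 m)
\end{verbatim}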
Subsequently we transfer the concept of this soliton-effect pulse compression scheme
to diamond waveguides with dispersive and nonlinear properties detailed in
Sect.~\ref{sec:methods}.
Specifically, we consider the operating wavelength
$\lambda_0=1540\,\mathrm{nm}$ ($\omega_0 = 1.212\,\mathrm{rad/fs}$), at which
$\beta_2(\omega_0)\approx -0.59\,\mathrm{ps^2/m}$, and hyperbolic secant pulses
of duration $t_0=0.3\,\mathrm{ps}$.
Figure~\ref{fig:OM:03} summarizes the results of our numerical simulations for
soliton orders $N=10$, $15$, and, $20$.
In Figs.~\ref{fig:OM:03}(a,b) we compare the theoretical predictions of
Eqs.~(\ref{eq:FSC}),(\ref{eq:zSC}) with simulations performed in terms of the
full GNLS, given by Eq.~(\ref{eq:GNLS}). The good agreement with the above
approximate scaling laws does not come as a surprise:
during the initial propagation stage, i.e.\ well before soliton fission sets
in, the dynamics is well described by the common NLS.
In this regard, an important requirement is that the underlying group-velocity
dispersion exhibits a nearly flat anomalous dispersion profile, extending over
a wide wavelength range.
Problems concerning the compression limit \cite{Demircan:PTL:2006}, due to an
overlap of the spectrally broadened pulse with the domain of normal dispersion,
are thus also reduced.
For instance, at $N=15$, we find that the initial pulse compresses down to
$5.4\,\mathrm{fs}$, resulting in a self-compression factor $F_{\rm{SC}}\approx
56$ at the optimal self-compression distance $z_{\rm{SC}}= 6.33\,\mathrm{mm}$.
Both these values are obtained from pulse propagation simulations in terms of
the GNLS~(\ref{eq:GNLS}).
Note that for the largest compression factors, the peak intensity might reach
a $\mathrm{TW/cm^2}$ level, in which case higher-order effects such as
interband-transitions (multi-photon absorption) start to play an increasing
role.
A visual account of the achieved compression is given in
Fig.~\ref{fig:OM:03}(c), where the pulse intensity at $z_{\rm{SC}}$ is compared
to its initial trace for the above three choices of $N$.
Similar to Ref.~\cite{Mollenauer:OL:1983}, we observe that
for increasing $N$, an increasingly narrow central peak on top of a broad
pedestal emerges.
The propagation dynamics for the case $N=15$ is demonstrated in
Figs.~\ref{fig:OM:03}(d,e): in the propagation range up to $z_{\rm{SC}}$, the
pulse self-compression and increase in peak intensity
[Fig.~\ref{fig:OM:03}(d)], accompanied by spectral broadening
[Fig.~\ref{fig:OM:03}(e)], is clearly evident; immediately beyond
$z_{\rm{SC}}$, soliton fission sets in.
Let us emphasize that the considered diamond waveguides support the propagation
of ultrashort solitons at rather low pulse intensities. This is a special
property enabled by diamond waveguides.
Finally, let us note that upon approaching $z_{\rm{SC}}$, the peak intensity
increases at a strongly increasing rate, see Fig.~\ref{fig:OM:03}(f).
This requires an adequate adjustment of the device length or input power to
optimally exploit this compression scheme.
\section{Discussion and conclusions}
\label{sec:discussion}
In summary, we studied the nonlinear propagation dynamics of optical
pulses in diamond waveguides in terms of the generalized nonlinear Schrödinger
equation.
Specifically, we considered a waveguide device, designed for the telecom
wavelength range with a wide domain of anomalous dispersion
\cite{Hausmann:N:2014}.
We demonstrated that the supercontinuum generation process, which for
silica-based optical fibers usually occurs on the scale of centimeters or even
meters, occurs already on the scale of millimeters.
Owing to the strong optical nonlinearity of diamond and the large negative
value of the achievable waveguide group-velocity dispersion, this process not
only occurs on much shorter propagation scales, it also requires much lower
pulse energies.
Recognizing that the propagation dynamics prior to soliton-fission
is always characterized by pulse narrowing directly allows us to transfer
a simple and efficient pulse compression scheme to the diamond platform,
promising record-breaking compression factors on chip-size propagation
distances.
This compression scheme, which is solely based on soliton effects in a
domain of anomalous dispersion, has recently been studied experimentally within
SiN waveguides \cite{Oliver:OL:2021}.
Therein, pulses with initial duration of $1.2\,\mathrm{ps}$ and soliton order
$N\approx 19$ were compressed to about $1/18$ of their initial duration within
a low-loss, dispersion engineered waveguide of $40\,\mathrm{cm}$ length.
This experimentally achieved compression factor is much below the theoretical
prediction $F_{\rm{SC}}\approx 78$ obtained via Eq.~(\ref{eq:FSC}).
Also, earlier efforts to exploit this self-compression mechanism did not yield
compression factors larger than 11
\cite{Colman:NP:2010,BlancoRedondo:NC:2014,Choi:APL:2019}.
In our case, the compression factor can simply be increased by starting with an
initial pulse duration in the picosecond range and using longer propagation
distances.
The limitation of this compression scheme is given mainly by the extent of the
domain of anomalous dispersion, supporting undisturbed soliton propagation.
While the presented study has a focus on diamond waveguides with a simple
geometry, we expect that other waveguide devices fabricated on basis of
synthetic diamond, such as angle-etched \cite{Shams:OL:2019}, and fin-shaped
structures \cite{Grote:APL:2016}, behave in a qualitatively similar manner.
The reported findings are of fundamental interest to nonlinear optics, and
provide further insight into the complex propagation dynamics of ultrashort
pulses in diamond waveguides.
\section*{Acknowledgements}
\noindent Funding: This work was supported by the Deutsche
Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy within the
Cluster of Excellence PhoenixD (Photonics, Optics, and Engineering—Innovation
Across Disciplines) [EXC 2122, Project No.\ 390833453],
and
the European Regional Development Fund for the 'Hannover Alliance of Research
on Diamond (HARD)' (ZW7-85196513).
\bibliographystyle{elsarticle-num}
\section{Appendix:Derivation of Hydrodynamical Theory}
Consider a ring geometry.
The coordinate of the $n$-th electron is represented by a complex variable
$z_n=Le^{i\theta_n}$, where $\theta_n$ is the angle along the circle of
radius $L$. In these variables the Hamiltonian, Eq.~(\ref{Hamiltonian}), is given by
\begin{equation}\label{e1}
H=
\frac{1}{2}\sum_{j=1}^{2N}(z_j\partial_j)^2+\sum_{i \neq j}^{2N}\frac{\lambda(\lambda-1)}{|z_i-z_j|^2}
\,.
\end{equation}
The ground-state wave function of the Hamiltonian (\ref{e1}) can be found exactly:
\begin{equation}
\nolinebreak{\Psi_0=\left(\prod_{i=1}^{2N}z_i\right)^{-\lambda(2N-1)/2}|\Delta|^{\lambda-1}\Delta}\,,
\end{equation}
where
\begin{equation}
\Delta=\prod_{i<j}^{2N}(z_i-z_j)
\end{equation}
is the Vandermonde determinant.
Excited states are given by
\begin{equation}
\Psi_{\kappa}=\Psi_0 J_\kappa,
\end{equation}
where the Jack polynomials $J_\kappa$ are labeled by partitions $\kappa$.
The problem is thus reduced to the properties of the new bosonic Hamiltonian
\begin{equation}
H_{\rm B}=\Psi_0^{-1}H\Psi_0
\end{equation}
that acts in the Hilbert space of symmetric wave functions.
It is given by
\begin{equation}\label{e3}
H_{\rm B}=\sum_{i=1}^{2N} D_i^2+\lambda\sum_{i<j}^{2N}\frac{z_i+z_j}{z_i-z_j}(D_i-D_j)\, ,
\end{equation}
where $D_i=z_i\partial_i$.
Aiming at a second-quantized description, one
defines the so-called collective variables
\begin{eqnarray}&&
\nolinebreak{p(\theta)=\sum_{i=1}^{2N}\delta(\theta-\theta_i), \,\,\,
p_k=\int_0^{2\pi}d\theta e^{ik\theta}p(\theta)},
\nonumber \\&&
p(z)=\sum_{k=-\infty}^{\infty}z^{k-1}p_{-k} \, .
\end{eqnarray}
In terms of the collective variables, the bosonic Hamiltonian can be rewritten as~\cite{Awata}
\begin{eqnarray}&&
\label{e4}
\hspace{-0.5cm}H_{\rm B}=\!
\frac{1}{2}\!\!\!\sum_{m,n=-N}^Nmnp_{n+m}\frac{\partial^2}{\partial
p_n\partial p_m}\!+\!(1-\!\lambda)\!\sum_{n=-N}^N\!
n^2p_n\frac{\partial}{\partial p_n}\!+\nonumber \\&&
\hspace{-.5cm}\frac{\lambda}{2}\sum_{m=0}^{N-1}\sum_{n=1}^{N-m}(n+m)\bigg[p_np_m\frac{\partial}{\partial
p_{n+m}}+p_{-n}p_{-m}\frac{\partial}{\partial p_{-n-m}}\bigg].
\end{eqnarray}
So far the transformations have been exact.
Passing to the hydrodynamic limit ($N \to \infty$, $L \to \infty$,
$2N/L \to \bar{\rho}$), one arrives at Eq.~(\ref{e5}); here $x$ is a
coordinate along the circle ($x=\frac{L}{2\pi}\theta$), and the linear density
is $\rho(x)=\frac{2\pi}{L}\rho(\theta)$.
The modes
of the velocity operator are defined as
\begin{equation}
v_n=2\pi\left(-n\frac{\partial}{\partial p_{-n}}+\frac{1}{2}p_n\,\mathrm{sgn}(n)\right).
\end{equation}
It is easy to see that these definitions
are consistent with the standard commutation relation~\cite{Landau}
\begin{equation}
[v(x),\rho(y)]=-i\delta'(x-y)\, .
\end{equation}
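As a quick consistency check in mode space (a minimal sketch, using only the canonical rule $[\partial/\partial p_m,p_n]=\delta_{m,n}$ and the definition of $v_n$ above), one computes
\begin{equation}
[v_n,p_m]=-2\pi n\Bigl[\frac{\partial}{\partial p_{-n}},p_m\Bigr]=-2\pi n\,\delta_{m,-n}\,,
\end{equation}
which is the Fourier-mode form of this commutator, up to the normalization conventions adopted for $p_k$ and $v_n$.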
\section{Introduction}
\label{sec:Intro}
With the increased use of electric vehicles (EVs) over the past decade, a large network of EV charging stations has been installed on the electric grid.
Data collected from city-wide deployment of EV charging stations can be used for both academic and industrial purposes, e.g., through statistical analysis \cite{nasrinEV2016} and modeling. Such studies require reliable sessions data for understanding behaviors and exploring flexibility. The scarcity of reliable data has been discussed previously \cite{EVDSreview}, and its necessity for further research has been pointed out. Even when data exists, it may be protected by the confidentiality constraints of private collectors, and thus not freely available for academic or public use. The lack of widespread availability and accessibility of realistic EV charging sessions data poses a significant hurdle, impeding further research in the field.
We propose a Synthetic Data Generator (SDG) that can be used to generate samples of EV charging sessions. This implies temporal statistical modeling of arrivals, as well as modeling of departures and the associated electrical load (i.e., required energy). We define a parametric model that can be trained on a real-world dataset. This trained model can subsequently be used to generate realistic data samples of EV sessions.
In this paper, we contribute with:
\begin{itemize}[topsep=0pt]
\setlength\itemsep{0em}
\item An approach to model sessions data for EVs over a group of charging stations defined as the SDG (\secref{sec:SDG});
\item An outline of the model parameters, and discussion over benefits and drawbacks of different models;
\item Answers to our main research questions, being:
\begin{question}\label{q:SDGdefination}~Which parametric models can be used to describe sessions of EVs? What are the input parameters and latent variables for these models? \end{question}
\begin{question}\label{q:generate}~How can we generate synthetic samples of EV sessions data from these parametric models? \end{question}
\end{itemize}
\section{Synthetic Data Generator}
\label{sec:SDG}
Each EV session can be described using three parameters:
\begin{enumerate*}[(i)]
\item \textit{Arrival time},
\item \textit{Departure time}, and
\item \textit{Energy charged} (in kWh).
\end{enumerate*}
We can define three models for each of these as:
\begin{itemize}[topsep=0pt]
\item \textit{Arrivals} = $AM$(Horizon)
\item \textit{Connected times} = $MM_{c}$(Arrivals)
\item \textit{Energy} = $MM_{e}$(Arrivals)
\end{itemize}
Here $AM$ denotes the arrival model, $MM_{c}$ the mixture model for connection times, and $MM_{e}$ the mixture model for the energy charged. `Horizon' is the time horizon for which the data needs to be generated and is an input parameter. The departure time of an EV charging session is the sum of its arrival time and its connected time.
\subsection{Arrival Models}
\label{sec:arrivalmodel}
Arrivals of EVs in a group of charging stations can be considered as events on a continuous timescale. Supported by our large-scale dataset, we assume that the inter-arrival time of EVs
follows an exponential distribution,
characterized by a parameter $\lambda$ representing the arrival rate (EVs per time unit).
This means we can either model the times between successive events, or the number of events in a given time interval:
\begin{itemize}[topsep=0pt]
\item \textit{Exponential IAT distribution:} we generate the arrival of the next EV using $t_{i} = t_{i-1} + \Delta t$, where $t_{i}$ is the time of the $i^\textrm{th}$ arrival and $\Delta t$ is the time difference between the $i^\textrm{th}$ and $(i-1)^\textrm{th}$ arrivals. The probability density of $\Delta t$ (the inter-arrival time, IAT) is exponential, characterized by $\lambda = f(\textrm{month}, \textrm{daytype}, \textrm{time-of-day})$, where `daytype' is either weekday or weekend.
\item \textit{Poisson process:} we generate the number of arrivals in a given time slot (e.g., slots of 60 minutes). The number of arrivals $N_\textit{arr}$ in a given time slot can be generated using sampling from a Poisson distribution
with mean $\lambda \cdot T$ (with $T$ the duration of a timeslot).
\end{itemize}
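A minimal sketch of both generation strategies follows; it assumes a constant $\lambda$ within the generation window, and the function names are illustrative rather than part of a released implementation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def arrivals_iat(lam, horizon):
    """Arrival times via exponential inter-arrival times.

    lam     : arrival rate [EVs per hour]
    horizon : generation window [hours]
    """
    t, times = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam)  # dt ~ Exp(lam)
        if t >= horizon:
            return times
        times.append(t)

def arrivals_poisson(lam, slot_hours, n_slots):
    """Arrival counts per timeslot, N_arr ~ Poisson(lam * T)."""
    return rng.poisson(lam * slot_hours, size=n_slots)

print(len(arrivals_iat(lam=4.0, horizon=24.0)))   # ~96 arrivals expected
print(arrivals_poisson(lam=4.0, slot_hours=1.0, n_slots=24))
\end{verbatim}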
\noindent In \figref{fig:expdensity} we plot the density of inter-arrival times, where the original data refers to real-world data collected by ELaadNL. A fitted exponential distribution validates the assumption that inter-arrival times are exponentially distributed: Kolmogorov–Smirnov tests for all combinations of month, daytype and time-of-day slots do not reject the exponential fit ($p$-value $> 0.05$).
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth,height=6cm]{Resources/exponential-dist-excel.png}
\caption{Inter-arrival time probability density for real-world data collected by ELaadNL.}
\label{fig:expdensity}
\setlength{\belowcaptionskip}{0pt}
\end{figure}
We can model $\lambda$ (as a function of month, daytype and time-of-day) either by averaging over each timeslot (a discontinuous model) or by fitting a continuous function of time. When fitting continuous curves, capturing the peaks during the day becomes very important. Moreover, fitted curves could yield negative values of $\lambda$, so we need to impose lower and upper bounds on it.
Using the IAT approach, the time of the next EV arrival is determined relative to the time of the previous EV. In case $\lambda$ at a given time-of-day is very low, the next EV arrival may be generated very late (with a large $\Delta t$), thus skipping several consecutive timeslots. This is problematic if those skipped timeslots exhibit a much higher $\lambda$, and thus have a high probability of EV arrivals --- which would not be generated because of the very large $\Delta t$.
The second modeling approach, as a Poisson process, circumvents this problem: when certain timeslots have a low $\lambda$, we will likely generate 0 arrivals, but still proceed to the immediately following timeslot (with possibly high $\lambda$, to generate arrivals there). A remaining drawback is that the variance and mean of the Poisson distribution are equal. In case this assumption does not hold, we adopt a negative binomial distribution instead.
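A hedged sketch of this substitution, matching numpy's negative binomial $NB(n,p)$ (mean $n(1-p)/p$, variance $n(1-p)/p^2$) to a target mean $\mu$ and variance $\sigma^2$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def slot_counts(mu, var, n_slots):
    """Per-slot arrival counts with target mean mu and variance var.

    For var > mu (overdispersion), use NB(n, p) with p = mu/var and
    n = mu**2 / (var - mu); otherwise fall back to Poisson(mu).
    """
    if var <= mu:
        return rng.poisson(mu, size=n_slots)
    p = mu / var
    n = mu**2 / (var - mu)
    return rng.negative_binomial(n, p, size=n_slots)

print(slot_counts(mu=4.0, var=9.0, n_slots=24))
\end{verbatim}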
\subsection{Mixture Models for Connected Times and Energy}
\label{sec:departuremodel}
Aside from arrival events, EV departure events as well as the EV charging load (or total energy charged) need to be modeled.
Since departures are obviously conditional on the corresponding arrivals, we model the probability distribution of connected times ($t_\textit{depart} - t_\textit{arr}$) with Gaussian Mixture Models (GMMs). For the EV charging energy we also use GMMs.
As in the arrival generation models, we define and fit different models for each month and time-of-day.
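As an illustration of this step, the sketch below fits and samples such a mixture model with scikit-learn; the training array and the number of components are placeholders, not the settings of the actual SDG.
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Placeholder training data: observed connected times [hours],
# as a single-feature column vector.
connected_times = rng.gamma(shape=2.0, scale=3.0, size=(1000, 1))

# Fit the GMM; the component count could be selected per month and
# time-of-day, e.g. via BIC.
mm_c = GaussianMixture(n_components=3, random_state=0).fit(connected_times)

# Draw synthetic connected times; clip at zero, durations are positive.
samples, _ = mm_c.sample(10)
print(np.clip(samples, 0.0, None).ravel())
\end{verbatim}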
In summary, answering \qref{q:SDGdefination} (Which models can be used in SDG?), we propose
\begin{enumerate*}[(i)]
\item exponential IAT distribution or Poisson distribution for arrivals per timeslot ($\textit{AM}$),
\item GMMs for both the duration of EV sessions ($\textit{MM}_c$) and
\item their associated energy charged ($\textit{MM}_e$).
\end{enumerate*}
\section{Generating Samples}
\label{sec:training}
After fitting the SDG on real-world data, we have $AM$, $MM_{c}$ and $MM_{e}$ as parametric models for arrival times, connection times and energy required. For the synthetic generation of arrival times, the $\lambda$ values can be used to generate $\Delta t$ (and hence a series of arrivals). For connection times and required energy, samples drawn from the fitted PDFs serve as synthetically generated values. This answers \qref{q:generate}, on how we can generate samples.
Our SDG ($AM$, $MM_{c}$ and $MM_{e}$) can be saved as a separate file. These models will be supplied with the code (which we plan to make publicly available) to generate new synthetic data samples. These models do not include the actual real-world EV session data that the SDG was trained on. Hence, they can be shared without violating confidentiality constraints. Generated samples from these models can subsequently be used for flexibility analysis, load balancing and other research purposes.
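Putting the pieces together, a single synthetic session could be assembled as in the sketch below, where \texttt{am\_lambda}, \texttt{mm\_c} and \texttt{mm\_e} stand for the fitted models and are illustrative names only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)

def generate_session(t_prev, am_lambda, mm_c, mm_e):
    """One synthetic session: (arrival, departure, energy).

    t_prev    : arrival time of the previous EV [hours]
    am_lambda : arrival rate at the current time [EVs per hour]
    mm_c      : fitted mixture model for connected times [hours]
    mm_e      : fitted mixture model for energy charged [kWh]
    """
    arrival = t_prev + rng.exponential(1.0 / am_lambda)
    connected = max(float(mm_c.sample(1)[0][0, 0]), 0.0)
    energy = max(float(mm_e.sample(1)[0][0, 0]), 0.0)
    return arrival, arrival + connected, energy
\end{verbatim}
Only the fitted model objects need to be stored (e.g., serialized with \texttt{joblib}); no raw training data is shipped, in line with the confidentiality argument above.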
\section{Conclusion and Future Work}
\label{sec:future work}
Our paper summarized the modeling approach of EV charging sessions.
We adopted these models for training a synthetic data generator (SDG) with real-world data, that can thus generate synthetic samples of EV sessions data.
We plan to release the source code including the \emph{training} scripts, but also \emph{generation} code (including the settings thereof reflecting the trained model characteristics based on a large-scale real-world dataset) to produce synthetic EV session data reflecting real-world behavior. We believe this fits a strong need of researchers in both academic and industrial settings.
As future work, we aim to propose modeling methods for the time-varying arrival distribution model parameters.
Further, we will tackle the following challenges in depth:
\begin{enumerate*}[label={(\arabic*)}]
\item Studying the properties of the real-world data with the goal to define evaluation metrics for comparing real-world data with generated samples.
\item Correctly modeling peaks of arrivals during the day; study effective methods that avoid negative values in continuous $\lambda$ curves.
\end{enumerate*}
\begin{acks}
This research received funding from the Flemish Government under the "Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen" programme.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
ACM's consolidated article template, introduced in 2017, provides a
consistent \LaTeX\ style for use across ACM publications, and
incorporates accessibility and metadata-extraction functionality
necessary for future Digital Library endeavors. Numerous ACM and
SIG-specific \LaTeX\ templates have been examined, and their unique
features incorporated into this single new template.
If you are new to publishing with ACM, this document is a valuable
guide to the process of preparing your work for publication. If you
have published with ACM before, this document provides insight and
instruction into more recent changes to the article template.
The ``\verb|acmart|'' document class can be used to prepare articles
for any ACM publication --- conference or journal, and for any stage
of publication, from review to final ``camera-ready'' copy, to the
author's own version, with {\itshape very} few changes to the source.
\section{Template Overview}
As noted in the introduction, the ``\verb|acmart|'' document class can
be used to prepare many different kinds of documentation --- a
double-blind initial submission of a full-length technical paper, a
two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready''
journal article, a SIGCHI Extended Abstract, and more --- all by
selecting the appropriate {\itshape template style} and {\itshape
template parameters}.
This document will explain the major features of the document
class. For further information, the {\itshape \LaTeX\ User's Guide} is
available from
\url{https://www.acm.org/publications/proceedings-template}.
\subsection{Template Styles}
The primary parameter given to the ``\verb|acmart|'' document class is
the {\itshape template style} which corresponds to the kind of publication
or SIG publishing the work. This parameter is enclosed in square
brackets and is a part of the {\verb|documentclass|} command:
\begin{verbatim}
\documentclass[STYLE]{acmart}
\end{verbatim}
Journals use one of three template styles. All but three ACM journals
use the {\verb|acmsmall|} template style:
\begin{itemize}
\item {\verb|acmsmall|}: The default journal template style.
\item {\verb|acmlarge|}: Used by JOCCH and TAP.
\item {\verb|acmtog|}: Used by TOG.
\end{itemize}
The majority of conference proceedings documentation will use the {\verb|acmconf|} template style.
\begin{itemize}
\item {\verb|acmconf|}: The default proceedings template style.
\item{\verb|sigchi|}: Used for SIGCHI conference articles.
\item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles.
\item{\verb|sigplan|}: Used for SIGPLAN conference articles.
\end{itemize}
\subsection{Template Parameters}
In addition to specifying the {\itshape template style} to be used in
formatting your work, there are a number of {\itshape template parameters}
which modify some part of the applied template style. A complete list
of these parameters can be found in the {\itshape \LaTeX\ User's Guide.}
Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\verb|anonymous,review|}: Suitable for a ``double-blind''
conference submission. Anonymizes the work and includes line
numbers. Use with the \verb|\acmSubmissionID| command to print the
submission's unique ID on each page of the work.
\item{\verb|authorversion|}: Produces a version of the work suitable
for posting by the author.
\item{\verb|screen|}: Produces colored hyperlinks.
\end{itemize}
This document uses the following string as the first command in the
source file:
\begin{verbatim}
\documentclass[sigconf]{acmart}
\end{verbatim}
\section{Modifications}
Modifying the template --- including but not limited to: adjusting
margins, typeface sizes, line spacing, paragraph and list definitions,
and the use of the \verb|\vspace| command to manually adjust the
vertical spacing between elements of your work --- is not allowed.
{\bfseries Your document will be returned to you for revision if
modifications are discovered.}
\section{Typefaces}
The ``\verb|acmart|'' document class requires the use of the
``Libertine'' typeface family. Your \TeX\ installation should include
this set of packages. Please do not substitute other typefaces. The
``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used,
as they will override the built-in typeface families.
\section{Title Information}
The title of your work should use capital letters appropriately -
\url{https://capitalizemytitle.com/} has useful rules for
capitalization. Use the {\verb|title|} command to define the title of
your work. If your work has a subtitle, define it with the
{\verb|subtitle|} command. Do not insert line breaks in your title.
If your title is lengthy, you must define a short version to be used
in the page headers, to prevent overlapping text. The \verb|title|
command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}
\section{Authors and Affiliations}
Each author must be defined separately for accurate metadata
identification. Multiple authors may share one affiliation. Authors'
names should not be abbreviated; use full first names wherever
possible. Include authors' e-mail addresses whenever possible.
Grouping authors' names or e-mail addresses, or providing an ``e-mail
alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{Brooke Aster, David Mehldau}
\email{dave,judy,steve@university.edu}
\email{firstname.lastname@phillips.org}
\end{verbatim}
The \verb|authornote| and \verb|authornotemark| commands allow a note
to apply to multiple authors --- for example, if the first two authors
of an article contributed equally to the work.
If your author list is lengthy, you must define a shortened version of
the list of authors to be used in the page headers, to prevent
overlapping text. The following command should be placed just after
the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}
Omitting this command will force the use of a concatenated list of all
of the authors' names, which may result in overlapping text in the
page headers.
The article template's documentation, available at
\url{https://www.acm.org/publications/proceedings-template}, has a
complete explanation of these commands and tips for their effective
use.
\section{Rights Information}
Authors of any work published by ACM will need to complete a rights
form. Depending on the kind of work, and the rights management choice
made by the author, this may be copyright transfer, permission,
license, or an OA (open access) agreement.
Regardless of the rights management choice, the author will receive a
copy of the completed rights form once it has been submitted. This
form contains \LaTeX\ commands that must be copied into the source
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
CCS concepts and user-defined keywords are required for all short- and
full-length articles, and optional for two-page abstracts.
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three are
discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \&
Ewing, Inc. [Public domain], via Wikimedia
Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{The 1907 Franklin Model D roadster.}
\end{figure}
Your figures should contain a caption which describes the figure to
the reader. Figure captions go below the figure. Your figures should
{\bfseries also} include a description suitable for screen readers, to
assist the visually-challenged to better understand your work.
Figure captions are placed {\itshape below} the figure.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\newcommand{\uple}[1]{\text{\boldmath${#1}$}}
\def\stacksum#1#2{{\stackrel{{\scriptstyle #1}}
{{\scriptstyle #2}}}}
\def--{--}
\def\map#1#2#3#4{\begin{matrix}#1&\mapsto #2\\#3 &\mapsto #4\end{matrix}}
\newcommand{\uple{\alpha}}{\uple{\alpha}}
\newcommand{\uple{\beta}}{\uple{\beta}}
\newcommand{\uple{b}}{\uple{b}}
\newcommand{\uple{a}}{\uple{a}}
\newcommand{\uple{h}}{\uple{h}}
\newcommand{\uple{l}}{\uple{l}}
\newcommand{\bft}{\uple{t}}
\newcommand{\uple{x}}{\uple{x}}
\newcommand{\uple{y}}{\uple{y}}
\newcommand{\uple{m}}{\uple{m}}
\newcommand{\uple{n}}{\uple{n}}
\newcommand{C(\mathcal{F})}{C(\mathcal{F})}
\newcommand{\uple{I}}{\uple{I}}
\newcommand{\uple{J}}{\uple{J}}
\newcommand{\mathrm{e}_q}{\mathrm{e}_q}
\newcommand{\mathrm{Std}}{\mathrm{Std}}
\newcommand{\mathrm{Sym}}{\mathrm{Sym}}
\newcommand{\mathrm{sym}}{\mathrm{sym}}
\newcommand{\mathrm{arith}}{\mathrm{arith}}
\newcommand{\mathrm{Irr}}{\mathrm{Irr}}
\newcommand{\mathrm{geom}}{\mathrm{geom}}
\newcommand{G^{\mathrm{arith}}}{G^{\mathrm{arith}}}
\newcommand{G_n^{\mathrm{arith}}}{G_n^{\mathrm{arith}}}
\newcommand{G^{\mathrm{geom}}}{G^{\mathrm{geom}}}
\newcommand{G_{\mathcal{F},\mathrm{arith}}}{G_{\mathcal{F},\mathrm{arith}}}
\newcommand{G_{\mathcal{F},\mathrm{geom}}}{G_{\mathcal{F},\mathrm{geom}}}
\newcommand{\Garithd}[1]{G_{{#1},\mathrm{arith}}}
\newcommand{\Ggeomd}[1]{G_{{#1},\mathrm{geom}}}
\newcommand{K^{\mathrm{sep}}}{K^{\mathrm{sep}}}
\newcommand{K_x^{\mathrm{sep}}}{K_x^{\mathrm{sep}}}
\newcommand{\mathbf{0}}{\mathbf{0}}
\newcommand{\mathbf{R}}{\mathbf{R}}
\newcommand{\mathbf{S}}{\mathbf{S}}
\newcommand{\mathbf{F}}{\mathbf{F}}
\newcommand{\mathbf{K}}{\mathbf{K}}
\newcommand{\mathbf{M}}{\mathbf{M}}
\newcommand{\mathbf{N}}{\mathbf{N}}
\newcommand{\mathbf{C}}{\mathbf{C}}
\newcommand{\mathbf{S}}{\mathbf{S}}
\newcommand{\mathbf{C}^\times}{\mathbf{C}^\times}
\newcommand{\mathbf{N}}{\mathbf{N}}
\newcommand{\mathbf{A}}{\mathbf{A}}
\newcommand{\mathbf{G}_{m}}{\mathbf{G}_{m}}
\newcommand{\mathbf{G}_{m,{\mathbf{F}_q}}}{\mathbf{G}_{m,{\mathbf{F}_q}}}
\newcommand{\mathbf{B}}{\mathbf{B}}
\newcommand{\mathbf{D}}{\mathbf{D}}
\newcommand{\mathbf{Z}}{\mathbf{Z}}
\newcommand{\mathbf{P}}{\mathbf{P}}
\newcommand{\mathbf{R}}{\mathbf{R}}
\newcommand{\mathbf{G}}{\mathbf{G}}
\newcommand{q^{1/2}}{q^{1/2}}
\newcommand{q^{-1/2}}{q^{-1/2}}
\newcommand{\mathbf{H}}{\mathbf{H}}
\newcommand{\mathbf{Q}}{\mathbf{Q}}
\newcommand{\mathbf{Q}_{\ell}}{\mathbf{Q}_{\ell}}
\newcommand{\ov{\mathbf{Q}_{\ell}}}{\ov{\mathbf{Q}_{\ell}}}
\newcommand{\ov{{\mathbf{F}_q}}}{\ov{{\mathbf{F}_q}}}
\newcommand{{\mathbf{F}_p}}{{\mathbf{F}_p}}
\newcommand{{\mathbf{F}^\times_p}}{{\mathbf{F}^\times_p}}
\newcommand{{\mathbf{F}_q}}{{\mathbf{F}_q}}
\newcommand{{\mathbf{F}_{q^n}}}{{\mathbf{F}_{q^n}}}
\newcommand{{\mathbf{F}^\times_{q^n}}}{{\mathbf{F}^\times_{q^n}}}
\newcommand{{\mathbf{F}^\times_q}}{{\mathbf{F}^\times_q}}
\newcommand{{\mathbf{F}_{q^d}}}{{\mathbf{F}_{q^d}}}
\newcommand{{\mathbf{F}^\times_{q^d}}}{{\mathbf{F}^\times_{q^d}}}
\newcommand{\mathbf{F}}{\mathbf{F}}
\newcommand{\bar{\Ff}_p}{\bar{\mathbf{F}}_p}
\newcommand{\bar{\Ff}_q}{\bar{\mathbf{F}}_q}
\newcommand{\bar{\Qq}_{\ell}}{\bar{\mathbf{Q}}_{\ell}}
\newcommand{\mathbf{T}}{\mathbf{T}}
\newcommand{\mathbf{G}}{\mathbf{G}}
\newcommand{g^\natural}{g^\natural}
\newcommand{\boldsymbol{\mu}}{\boldsymbol{\mu}}
\newcommand{\mathcal{O}}{\mathcal{O}}
\newcommand{\mathcal{V}}{\mathcal{V}}
\newcommand{\mathcal{O}}{\mathcal{O}}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\mathcal{K}\ell}{\mathcal{K}\ell}
\newcommand{\mathcal{K}\ell}{\mathcal{K}\ell}
\newcommand{\overline{\mathbf{F}}}{\overline{\mathbf{F}}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\text{\boldmath$P$}}{\mathbf{P}}
\newcommand{\text{\boldmath$E$}}{\mathbf{E}}
\newcommand{\mathbf{V}}{\mathbf{V}}
\newcommand{\mathbf{1}}{\mathbf{1}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{g^{\sharp}}{g^{\sharp}}
\newcommand{y^{\sharp}}{y^{\sharp}}
\newcommand{\clconj}[1]{{{#1}}^{\sharp}}
\newcommand{\mods}[1]{\,(\mathrm{mod}\,{#1})}
\newcommand{\sli}[1]{\underline{{#1}}}
\newcommand{\ideal}[1]{\mathfrak{{#1}}}
\newcommand{\mathrm{Id}}{\mathrm{Id}}
\newcommand{\widehat}{\widehat}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathbf{G}}{\mathbf{G}}
\newcommand{\mathbf{B}}{\mathbf{B}}
\newcommand{\mathbf{D}}{\mathbf{D}}
\newcommand{\mathbf{G}^{opt}}{\mathbf{G}^{opt}}
\newcommand{\hautk}[2]{\mathbf{G}_{{#1},{#2}}}
\newcommand{\hautz}[2]{\mathbf{G}^{a}_{{#1},{#2}}}
\newcommand{\hauti}[3]{\mathbf{G}^{{#1}}_{{#2},{#3}}}
\DeclareMathOperator{\frob}{Fr}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\skl}[1]{\sheaf{K}^{({#1})}}
\newcommand{\hk}[1]{\sheaf{K}\ell_{{#1}}}
\newcommand{\mutw}[3]{\mu_{{#3},{#2}}}
\newcommand{\frtr}[3]{(\Tr{{#1}})({#2},{#3})}
\DeclareMathOperator{\hypk}{Kl}
\newcommand{\mathcal{M}}{\mathcal{M}}
\newcommand{\rightarrow}{\rightarrow}
\newcommand{\longrightarrow}{\longrightarrow}
\newcommand{\twoheadrightarrow}{\twoheadrightarrow}
\newcommand{\hookrightarrow}{\hookrightarrow}
\newcommand{\hookleftarrow}{\hookleftarrow}
\newcommand{\Longleftrightarrow}{\Longleftrightarrow}
\newcommand{\fleche}[1]{\stackrel{#1}{\longrightarrow}}
\newcommand{\flecheinj}[1]{\stackrel{#1}{\hookrightarrow}}
\newcommand{\flechecinj}[1]{\stackrel{#1}{\hookleftarrow}}
\newcommand{\barre}[1]{\overline{{#1}}}
\DeclareMathOperator{\Spec}{Spec}
\DeclareMathOperator{\Vol}{Vol}
\DeclareMathOperator{\proj}{Proj}
\DeclareMathOperator{\Card}{Card}
\DeclareMathOperator{\rank}{rank}
\DeclareMathOperator{\rk}{rk}
\DeclareMathOperator{\res}{Res}
\DeclareMathOperator{\reg}{reg}
\DeclareMathOperator{\ord}{ord}
\DeclareMathOperator{\cl}{Cl}
\DeclareMathOperator{\Div}{Div}
\DeclareMathOperator{\divg}{divg}
\DeclareMathOperator{\Pic}{Pic}
\DeclareMathOperator{\vol}{Vol}
\DeclareMathOperator{\Imag}{Im}
\DeclareMathOperator{\Reel}{Re}
\DeclareMathOperator{\syms}{Sym^{2}}
\DeclareMathOperator{\symk}{Sym}
\DeclareMathOperator{\li}{li}
\DeclareMathOperator{\Frob}{\mathrm{Frob}}
\DeclareMathOperator{\Fr}{\mathrm{Frob}}
\DeclareMathOperator{\Kl}{\mathrm{Kl}}
\DeclareMathOperator{\shKl}{\mathrm{Kl}}
\DeclareMathOperator{\ET}{\mathrm{ET}}
\DeclareMathOperator{\tr}{\mathrm{tr}}
\DeclareMathOperator{\nr}{\mathrm{Nr}}
\DeclareMathOperator{\Gal}{Gal}
\DeclareMathOperator{\Ind}{Ind}
\DeclareMathOperator{\Res}{Res}
\DeclareMathOperator{\supp}{supp}
\DeclareMathOperator{\im}{Im}
\DeclareMathOperator{\Tr}{Tr}
\DeclareMathOperator{\Hom}{Hom}
\DeclareMathOperator{\End}{End}
\DeclareMathOperator{\Aut}{Aut}
\DeclareMathOperator{\varia}{Var}
\DeclareMathOperator{\argu}{Arg}
\DeclareMathOperator{\spect}{Spec}
\DeclareMathOperator{\disc}{disc}
\DeclareMathOperator{\swan}{Swan}
\DeclareMathOperator{\Sing}{Sing}
\DeclareMathOperator{\Drop}{drop}
\DeclareMathOperator{\sw}{Swan}
\DeclareMathOperator{\bb}{B}
\DeclareMathOperator{\codim}{codim}
\DeclareMathOperator{\ft}{FT}
\DeclareMathOperator{\cond}{\mathbf{c}}
\DeclareMathOperator{\Ad}{Ad}
\DeclareMathOperator{\dual}{D}
\DeclareMathOperator{\nearb}{R\Psi}
\DeclareMathOperator{\van}{R\Phi}
\DeclareMathOperator{\class}{c\ell}
\newcommand{\varepsilon}{\varepsilon}
\renewcommand{\rho}{\varrho}
\DeclareMathOperator{\SL}{SL}
\DeclareMathOperator{\GL}{GL}
\DeclareMathOperator{\PGL}{PGL}
\DeclareMathOperator{\PGLd}{PGL_2}
\DeclareMathOperator{\rmT}{T}
\DeclareMathOperator{\rmB}{B}
\DeclareMathOperator{\rmG}{G}
\DeclareMathOperator{\rmN}{N}
\DeclareMathOperator{\rmU}{U}
\DeclareMathOperator{\PSL}{PSL}
\DeclareMathOperator{\Sp}{Sp}
\DeclareMathOperator{\GSp}{GSp}
\DeclareMathOperator{\SO}{SO}
\DeclareMathOperator{\Ort}{O}
\DeclareMathOperator{\SU}{SU}
\DeclareMathOperator{\Un}{U}
\DeclareMathOperator{\USp}{USp}
\newcommand{{\textstyle{\frac{1}{2}}}}{{\textstyle{\frac{1}{2}}}}
\newcommand{{\textstyle{\frac{1}{4}}}}{{\textstyle{\frac{1}{4}}}}
\newcommand{{\textstyle{\frac{3}{2}}}}{{\textstyle{\frac{3}{2}}}}
\newcommand{\avg}[1]{A[{#1}]}
\newcommand{\underline{O}}{\underline{O}}
\newcommand{O}{O}
\newcommand{\sheaf}[1]{\mathcal{{#1}}}
\newcommand{M}{M}
\newcommand{linearly disjoint}{linearly disjoint}
\newcommand{\sheafm}[1]{\tilde{\sheaf{{#1}}}_{\ell}}
\DeclareMathSymbol{\gena}{\mathord}{letters}{"3C}
\DeclareMathSymbol{\genb}{\mathord}{letters}{"3E}
\def\mathop{\sum \Bigl.^{\flat}}\limits{\mathop{\sum \Bigl.^{\flat}}\limits}
\def\mathop{\sum \Bigl.^{+}}\limits{\mathop{\sum \Bigl.^{+}}\limits}
\def\mathop{\sum \sum}\limits{\mathop{\sum \sum}\limits}
\def\mathop{\sum \sum \sum \sum}\limits{\mathop{\sum \sum \sum \sum}\limits}
\def\mathop{\sum\cdots \sum}\limits{\mathop{\sum\cdots \sum}\limits}
\def\mathop{\sum\bigl.^{\flat}}\limits{\mathop{\sum\bigl.^{\flat}}\limits}
\def\mathop{\sum \Bigl.^{*}}\limits{\mathop{\sum \Bigl.^{*}}\limits}
\def\mathop{\sum\sum \Bigl.^{*}}\limits{\mathop{\sum\sum \Bigl.^{*}}\limits}
\def\mathop{\sum\sum \Bigl.^{\sharp}}\limits{\mathop{\sum\sum \Bigl.^{**}}\limits}
\def\mathop{\sum\sum \Bigl.^{\sharp}}\limits{\mathop{\sum\sum \Bigl.^{\sharp}}\limits}
\def\mathop{\prod \Bigl.^{*}}\limits{\mathop{\prod \Bigl.^{*}}\limits}
\def\mathop{\sum \Bigl.^{h}}\limits{\mathop{\sum \Bigl.^{h}}\limits}
\def\frac{1}{2i\pi}\mathop{\int}\limits{\frac{1}{2i\pi}\mathop{\int}\limits}
\def\mathop{\oplus}\limits{\mathop{\oplus}\limits}
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}[section]
\newtheorem*{theorem*}{Theorem}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{exo}[theorem]{Exercise}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{problem}[theorem]{Problem}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{variant}[theorem]{Variant}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{comment}[theorem]{Comment}
\theoremstyle{remark}
\newtheorem*{convention}{Conventions}
\newtheorem*{ack}{Acknowledgement}
\newtheorem*{warning}{Warning}
\newtheorem{rem}[theorem]{Remark}
\newtheorem*{property}{Properties}
\theoremstyle{definition}
\newtheorem*{claim}{Claim}
\newtheorem{assumption}[theorem]{Assumption}
\newtheorem*{question}{Question}
\newtheorem{example}[theorem]{Example}
\newtheorem{remark}[theorem]{Remark}
\newtheorem*{application}{Application}
\newtheorem{xca}{Exercise}
\newcommand{\indic}[1]{[\underline{Hint}:\ {#1}]}
\newcommand{\abs}[1]{\lvert#1\rvert}
\newcommand{\blankbox}[2]{%
\parbox{\columnwidth}{\centering
\setlength{\fboxsep}{0pt}%
\fbox{\raisebox{0pt}[#2]{\hspace{#1}}}%
}%
}
\newcommand{w}{w}
\newcommand{\mathfrak{p}}{\mathfrak{p}}
\newcommand{$g$-equivalent}{$g$-equivalent}
\newcommand{$g$-equivalence}{$g$-equivalence}
\newcommand{G^g}{G^g}
\newcommand{\Psi}{\Psi}
\newcommand{\Upsilon}{\Upsilon}
\newcommand{(\sieve,\siftable)}{(\Psi,\Upsilon)}
\newenvironment{epigraph}
{\hfill\begin{minipage}{0.6\linewidth}\raggedleft\footnotesize}{\end{minipage}\bigskip\bigskip}
\newcommand{\mathcal{M}}{\mathcal{M}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathcal{P}}{\mathcal{P}}
\newcommand{\mathrm{P}}{\mathrm{P}}
\newcommand{\mathrm{L}}{\mathrm{L}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\mathcal{Q}}{\mathcal{Q}}
\newcommand{\mathcal{K}}{\mathcal{K}}
\newcommand{\mathcal{R}}{\mathcal{R}}
\newcommand{\mathcal{J}}{\mathcal{J}}
\newcommand{\mathcal{I}}{\mathcal{I}}
\newcommand{\mathcal{G}}{\mathcal{G}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\mathfrak{a}}{\mathfrak{a}}
\newcommand{\mathfrak{p}}{\mathfrak{p}}
\newcommand{\lambda_f}{\lambda_f}
\newcommand{\rho_f}{\rho_f}
\newcommand{\lambda_g}{\lambda_g}
\newcommand{\rho_g}{\rho_g}
\newcommand{\varphi}{\varphi}
\renewcommand{\geq}{\geqslant}
\renewcommand{\leq}{\leqslant}
\renewcommand{\Re}{\mathfrak{Re}\,}
\renewcommand{\Im}{\mathfrak{Im}\,}
\newcommand{\eqref}{\eqref}
\newcommand{\backslash}{\backslash}
\newcommand{\ov}[1]{\overline{#1}}
\newcommand{\norm}[1]{\|{#1}\|}
\newcommand{\peter}[1]{\langle{#1}\rangle}
\newcommand\sumsum{\mathop{\sum\sum}\limits}
\newcommand\sumsumsum{\mathop{\sum\sum\sum}\limits}
\newcommand\sumsumnd{\mathop{{\sum\sum}^{nd}}\limits}
\newcommand\delval{1/8}
\newcommand\delvaln{1/16}
\newcommand\finalexponent{1/24}
\newcommand\rpfree{1/144}
\begin{document}
\title{Periodic twists of $\GL_3$-automorphic forms}
\author{Emmanuel Kowalski}
\address{ETHZ, Switzerland }
\email{kowalski@math.ethz.ch}
\author{Yongxiao Lin}
\address{EPFL/MATH/TAN, Station 8, CH-1015 Lausanne, Switzerland }
\email{yongxiao.lin@epfl.ch}
\author{Philippe Michel}
\address{EPFL/MATH/TAN, Station 8, CH-1015 Lausanne, Switzerland }
\email{philippe.michel@epfl.ch}
\author{Will Sawin}
\address{Columbia University, USA }
\email{sawin@math.columbia.edu}
\date{\today,\ \thistime}
\subjclass[2010]{11F55,11M41,11L07, 11T23, 32N10}
\keywords{Automorphic forms on $\GL_3$, Fourier coefficients, Hecke
eigenvalues, discrete Fourier transform, trace functions,
subconvexity}
\begin{abstract}
We prove that sums of length about $q^{3/2}$ of Hecke eigenvalues of
automorphic forms on~$\SL_3(\mathbf{Z})$ do not correlate with $q$-periodic
functions with bounded Fourier transform. This generalizes the earlier
results of Munshi and Holowinsky--Nelson, corresponding to
multiplicative Dirichlet characters, and applies in particular to
trace functions of small conductor modulo primes.
\end{abstract}
\thanks{Y. L., Ph.\ M.\ and E.\ K.\ were partially supported by a
DFG-SNF lead agency program grant (grant number
200020L\_175755). W. S. was partially supported by the Clay Mathematics
Institute. \today\ \currenttime}
\maketitle
\section{Introduction}
Let $\varphi$ be a cusp form for $\SL_3(\mathbf{Z})$ which is an eigenfunction
of all Hecke operators.
For any prime number~$q$ and any primitive Dirichlet character~$\chi$
modulo~$q$, we can then define the twisted $L$-function
$L(\varphi\otimes\chi,s)$, which is an entire function satisfying a
functional equation relating $s$ to $1-s$.
In a recent breakthrough, Munshi~\cite{Munshi,Munshi1} solved the
subconvexity problem for these twisted $L$-functions
$L(\varphi\otimes \chi,s)$ in the conductor aspect:
\begin{theorem}[Munshi]\label{th-munshi}
Let~$s$ be a complex number such that $\Re s=1/2$. For any
prime~$q$, any primitive Dirichlet character~$\chi$ modulo~$q$, and
for any~$\varepsilon>0$, we have
\begin{equation}\label{eq:subconvex}
L(\varphi\otimes \chi,s)\ll q^{3/4-1/308+\varepsilon},
\end{equation}
where the implied constant depends on $\varphi$, $s$ and~$\varepsilon$.
\end{theorem}
This result was recently analyzed in depth by Holowinsky and
Nelson~\cite{HN}, who discovered a remarkable simplification (and
strengthening) of Munshi's ideas. They proved:
\begin{theorem}[Holowinsky--Nelson]\label{th-hn}
With notation and assumptions as in Theorem~\emph{\ref{th-munshi}},
we have
\begin{equation}\label{eq:hn}
L(\varphi\otimes \chi,s)\ll q^{3/4-1/36+\varepsilon}
\end{equation}
where the implied constant depends on $\varphi$, $s$ and~$\varepsilon$.
\end{theorem}
\begin{remark}
We mention further variants, simplifications and improvements, by
Aggarwal, Holowinsky, Lin and Sun~\cite{AHLS}, Holowinsky, Munshi
and Qi~\cite{HMQ}, Lin~\cite{Lin}, Sun and Zhao~\cite{SZ}.
\end{remark}
Let $(\lambda(m,n))$ denote the Hecke-eigenvalues of~$\varphi$. By
the approximate functional equation for the twisted $L$-functions, the
bound \eqref{eq:hn} is essentially equivalent to the bound
\begin{equation}\label{eq:sumbound}
\sum_{n\geq 1}\lambda(1,n)\chi(n)
V\Bigl(\frac{n}{q^{3/2}}\Bigr)\ll q^{3/2-\delta},
\end{equation}
for~$\delta<1/36$, where $V$ is any smooth compactly supported
function and the implied constant depends on~$\varphi$, $\delta$
and~$V$.
From the perspective of such sums, motivated by the previous work of
Fouvry, Kowalski and Michel~\cite{FKM1}, which relates to automorphic
forms on~$\GL_2$, it is natural to ask whether this
bound~\eqref{eq:sumbound} holds when $\chi$ is replaced by a more
general trace function $K:{\mathbf{F}_q}\to \mathbf{C}$. Our main result shows that this
is the case, and in fact extends the result to a much wider range of
$q$-periodic functions by obtaining estimates only in terms of the
size of the discrete Fourier transform modulo~$q$.
Precisely, for any function~$V$ with compact support on~$\mathbf{R}$, we set
\begin{equation}\label{defSKX}
S_{V}(K,X):=\sum_{n\geq 1}\lambda(1,n)K(n)V\Bigl(\frac{n}{X}\Bigr).
\end{equation}
We will assume that $V:\mathbf{R}\to \mathbf{C}$ satisfies the following conditions
for some parameter~$Z\geq 1$:
\begin{equation}\label{eq:Vprop}
\mathrm{supp}(V)\subset ]1,2[,\text{ and }V^{(i)}(x)\ll Z^i\text{
for all $i\geq 0$},
\end{equation}
where the implied constant depends only on~$i$.
For any integer~$q\geq 1$ and any $q$-periodic function
$K\colon \mathbf{Z}\to \mathbf{C}$, we denote by
\begin{equation}\label{eq:fourierK}
\widehat K(n)=\frac{1}{q^{1/2}}\sum_{x\in{\mathbf{F}_q}}K(x)e\Bigl(\frac{nx}{q}\Bigr),
\end{equation}
for~$n\in\mathbf{Z}$, its (unitarily normalized) discrete Fourier transform
modulo~$q$. We write~$\norm{\widehat{K}}_{\infty}$ for the maximum of
$|\widehat{K}(n)|$ for~$n\in\mathbf{Z}$. We then have the discrete Fourier
inversion formula
$$
K(x)=\frac{1}{q^{1/2}}\sum_{n\in{\mathbf{F}_q}} \widehat{K}(n)
e\Bigl(-\frac{nx}{q}\Bigr)
$$
for~$x\in\mathbf{Z}$.
Our main result is a general bound for \eqref{defSKX} which matches
precisely the bound of Holowinsky--Nelson \cite{HN} in the case of a
multiplicative character:
\begin{theorem}\label{thm:main}
Let $\varphi$ be an $\SL_3(\mathbf{Z})$-invariant cuspidal Hecke-eigenform
with Hecke eigenvalues $(\lambda(m,n))$. Let~$q$ be a prime number,
and $K\colon \mathbf{Z}\to\mathbf{C}$ be a $q$-periodic function.
Let $V$ be a smooth, compactly supported function satisfying
\eqref{eq:Vprop} for some $Z\geq 1$. Assume that
$$
Z^{2/3}q^{4/3}\leq X \leq Z^{-2}q^{2}.
$$
For any~$\varepsilon>0$, we have
\begin{equation}\label{eq:sumboundK}
S_V(K,X) \ll
\norm{\widehat{K}}_{\infty}Z^{10/9}q^{2/9+\varepsilon}X^{5/6},
\end{equation}
where the implied constant depends only on~$\varepsilon$, on~$\varphi$, and on
the implicit constants in~\emph{(\ref{eq:Vprop})}.
\end{theorem}
\begin{remark}
(1) Suppose that we vary~$q$ and apply this bound with functions~$K$
modulo~$q$ that have absolutely bounded Fourier transforms. Take
$X=q^{3/2}$. We then obtain the bound
$$
S_V(K,q^{3/2}) \ll Z^{10/9}q^{3/2-1/36+\varepsilon}
$$
for any~$\varepsilon>0$.
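Indeed, the exponent of~$q$ in~\eqref{eq:sumboundK} becomes
$$
\frac{2}{9}+\frac{5}{6}\cdot\frac{3}{2}=\frac{53}{36}=\frac{3}{2}-\frac{1}{36}.
$$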
\par
(2) For the bound \eqref{eq:sumboundK} to be non-trivial (i.e.,
assuming~$K$ to be absolutely bounded, better than $X$), it is
enough that
$$
X\geq Z^{20/3}q^{4/3+\delta}
$$
for some~$\delta>0$.
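Indeed, the bound~\eqref{eq:sumboundK} is then~$o(X)$ as soon as
$Z^{10/9}q^{2/9+\varepsilon}=o(X^{1/6})$, and raising this condition to
the sixth power gives the stated range.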
\par
(3) As in the paper~\cite{short-sums} of Fouvry, Kowalski, Michel,
Raju, Rivat and Soundararajan, where the main estimate is also
phrased in Fourier-theoretic terms only,\footnote{In~\cite{short-sums},
the size of~$K$ enters as well as that of its Fourier transform.}
the motivating examples of functions~$K$ satisfying
uniform bounds on their Fourier transforms are the trace functions
of suitable $\ell$-adic sheaves modulo~$q$. The simplest example
is~$K(n)=\chi(n)$, which recovers the bound of Munshi (up to the
value of the exponent) and Holowinsky--Nelson, since the values of
the Fourier transform are normalized Gauss sums, of modulus~$\leq
1$. We recall some other examples below in
Section~\ref{sec-examples}.
\end{remark}
We can deduce from Theorem~\ref{thm:main} a weak but non-trivial
bound for the first moment of the twisted central $L$-values, with an
additional twist by a discrete Mellin transform. We first recall the
definition
$$
\Kl_3(n)=\frac{1}{q}\sum_{\substack{x,y,z\in{\mathbf{F}^\times_q}\\xyz=n}}
e\Bigl(\frac{x+y+z}{q}\Bigr)
$$
for a hyper-Kloosterman sum with two variables modulo a prime~$q$.
\begin{corollary}\label{cor-average}
Let $\varphi$ be an $\SL_3(\mathbf{Z})$-invariant cuspidal Hecke-eigenform
with Hecke eigenvalues $(\lambda(m,n))$. Let~$q$ be a prime number
and let~$\chi\mapsto M(\chi)$ be a function of Dirichlet characters
modulo~$q$.
\par
Let~$K$ and~$L$ be the $q$-periodic functions defined
by~$K(0)=L(0)=0$ and
\begin{align*}
K(n)&=\frac{q^{1/2}}{q-1}\sum_{\chi\mods{q}}\chi(n)M(\chi)\\
L(n)&=\frac{1}{q^{1/2}} \sum_{x\in{\mathbf{F}_q}}K(x)\Kl_3(nx)
\end{align*}
for~$n$ coprime to~$q$. We then have
$$
\frac{1}{q-1} \sum_{\chi\mods{q}}M(\chi)L(\varphi\otimes\chi, 1/2) \ll
\Bigl(\norm{\widehat{K}}_{\infty}+\norm{\widehat{L}}_{\infty}\Bigr) q^{2/9
+ \varepsilon},
$$
for any~$\varepsilon>0$, where the implied constant depends
on~$\varphi$ and~$\varepsilon$.
\end{corollary}
A further natural application concerns the symmetric square lift, $\mathrm{sym}_2(\psi)$, of a
$\GL_2$-cusp form of level~$1$. Precisely, let $\psi$ be a cuspidal
Hecke-eigenform for~$\SL_2(\mathbf{Z})$ with Hecke eigenvalues
$(\lambda(n))_{n\geq 1}$. Theorem~\ref{thm:main} then implies the following:
\begin{corollary}\label{cor-gl2}
Let $K$ and $V$ be as above and assume that
$Z^{2/3}q^{4/3}\leq X \leq Z^{-2}q^{2}$. Then, for any~$\varepsilon>0$, we
have
$$
\sum_{n\geq 1}\lambda(n^2)K(n)V\Bigl(\frac{n}{X}\Bigr) \ll
\norm{\widehat{K}}_{\infty}Z^{10/9}q^{2/9+\varepsilon}X^{5/6}+
\norm{K}_{\infty}Z^{1/3}q^{2/3}X^{1/2+\varepsilon},
$$
where the implied constant depends only on~$\varepsilon$, on~$\psi$, and on
the implicit constants in~\emph{(\ref{eq:Vprop})}.
\end{corollary}
\begin{remark}\label{blomerremark} As pointed out to us by V. Blomer, when $K=\chi$ is a Dirichlet character a stronger bound should be available: for $\chi$ quadratic, one has (see \cite{Blomer}) the stronger subconvex bound for the central value
\begin{equation}\label{blomerbound}
L(\mathrm{sym}_2(\psi)\otimes\chi,s)\ll_s q^{3/4-1/8+o(1)},\ \Re s=1/2.
\end{equation}
This would amount to a bound of the shape
$$
\sum_{n\geq 1}\lambda(n^2)\chi(n)V\Bigl(\frac{n}{q^{3/2}}\Bigr) \ll_Z
q^{3/2-1/8+\varepsilon}.
$$
The bound \eqref{blomerbound} actually extends to any character $\chi\mods q$ by the same method, using the Petrow-Young variant of the Conrey-Iwaniec method \cite{CI,PY}. However, since this approach uses positivity of central values, it is not entirely clear yet whether this could be extended to general trace functions.
\end{remark}
From this corollary, one can easily derive an estimate for twists of
the arithmetic function~$\lambda(n)^2=|\lambda(n)|^2$, which is related
to~$\lambda(n^2)$ by the convolution identity
\begin{equation}\label{convol}
\lambda(n)^2=\sum_{ab=n}\lambda(a^2).
\end{equation}
However, in terms of $L$-functions, a straightforward estimate
concerns sums of length close to~$q^2$, and not~$q^{3/2}$ anymore (it
amounts, when~$K=\chi$, to a subconvexity estimate
for~$L(\psi\otimes \psi\otimes \chi,{\textstyle{\frac{1}{2}}})$, which results directly from the
factorization of this $L$-function of degree~$4$).
One can however recover a bound for sums of length about~$q^{3/2}$
with more work, and here we require that $K$ be a trace function (more
precisely, a \emph{non-exceptional} trace function, in the sense
of~\cite[p. 1686]{FKM2}).
\begin{corollary}\label{cor2-gl2}\label{RScor}
Let $V$ be as above. Let $K$ be the trace function of an
$\ell$-adic sheaf $\mathcal{F}$ modulo~$q$ which is a geometrically
irreducible middle-extension sheaf, pure of weight~$0$, on the
affine line over~$\mathbf{F}_q$. Assume that the sheaf $\mathcal{F}$ is not
geometrically isomorphic to the tensor product
$\mathcal{L}_\psi\otimes\mathcal{L}_\chi$ of an Artin-Schreier sheaf and a Kummer
sheaf.
\par
If $Z^{-4/3}q^{4/3+8\gamma/3} \leq X \leq Z^{-2}q^{2}$ for some~$\gamma>0$, then we have
$$
\sum_{n\geq 1}\lambda(n)^2K(n)V\Bigl(\frac{n}{X}\Bigr) \ll
X^{2/3+\varepsilon}q^{1/3}+Z^{5/6}X^{7/8+\varepsilon}q^{1/6}+X^{1+\varepsilon}q^{-\gamma}
$$
for any~$\varepsilon>0$, where the implied constant depends only
on~$\psi$, $\varepsilon$ and on the conductor~$\cond(\mathcal{F})$ of~$\mathcal{F}$.
\end{corollary}
\begin{remark}
(1) Suppose that~$Z$ is fixed. The estimate is then non-trivial as
long as $X\gg q^{4/3+\delta}$; for $X=q^{3/2}$, it saves a factor
$q^{1/48}$ over the trivial bound.
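Indeed, for $X=q^{3/2}$ and fixed~$Z$, the three terms of the bound
are
$$
q^{4/3+\varepsilon},\quad\quad
q^{21/16+1/6+\varepsilon}=q^{3/2-1/48+\varepsilon},\quad\quad
q^{3/2-\gamma+\varepsilon},
$$
and the assumption on~$X$ allows any fixed~$\gamma<1/16$.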
\par
(2) The assumption that~$\mathcal{F}$ is not exceptional means intuitively
that $K$ is not proportional to the product of an additive and a
multiplicative character modulo~$q$. We then have in particular
$$
\|K\|_{\infty}+\|\widehat{K}\|_{\infty}\ll 1
$$
where the implied constant depends only on the conductor of~$\mathcal{F}$.
\end{remark}
\begin{remark}\label{remcor17}
(1) The reader may wonder why this paper is much shorter
than~\cite{FKM1}, and (with the exception of Corollary \ref{RScor})
requires much less input from algebraic geometry in the case of
trace functions. One reason is that we are considering (essentially)
sums of length~$q^{3/2}$ whereas the coefficient functions~$K$ are
$q$-periodic. This means that periodicity properties of the
summand~$K(n)$ have a non-trivial effect, whereas they do not for
the sums of length about~$q$ which are considered in~\cite{FKM1} in
the context of~$\GL_2$.
\par
Moreover, observe that an analogue of Theorem~\ref{thm:main}, with
an estimate that depends (in terms of~$K$) only on the size of the
Fourier transform~$\widehat{K}$, is \emph{false} in the setting
of~\cite{FKM1}, i.e., for sums
$$
\sum_{n\geq 1}\lambda(n)K(n)V\Bigl(\frac{n}{X}\Bigr)
$$
with~$X$ of size about~$q$, where $\lambda(n)$ are the
Hecke-eigenvalues of a cusp form~$\psi$ for $\SL_2(\mathbf{Z})$ (as in
Corollary~\ref{cor-gl2}). Indeed, if we take $X=q$ and define~$K$ to
be the $q$-periodic function that coincides with the (real-valued)
function $n\mapsto \lambda(n)$ for $1\leq n\leq q$, then~$K$ has
discrete Fourier transform of size~$\ll \log q$ by the well-known
Wilton estimate (see, e.g.,~\cite[Th. 5.3]{iwaniec}, when~$\psi$ is
holomorphic), and yet
$$
\sum_{n\leq q}K(n)\lambda(n)=\sum_{n\leq q}|\lambda(n)|^2\asymp q
$$
by the Rankin--Selberg method.
\par
On the other hand, the same bound of Wilton combined with discrete
Fourier inversion implies quickly that if~$K$ is any $q$-periodic
function, then
$$
\sum_{n\leq q^{3/2}}\lambda(n) K(n)\ll
q^{1+1/4+\varepsilon}\norm{\widehat{K}}_{\infty}
$$
for any~$\varepsilon>0$. However, the natural length for applications
is~$q$ in the $\GL_2$ case.
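To see the bound~$q^{1+1/4+\varepsilon}\norm{\widehat{K}}_{\infty}$
above, one writes, by discrete Fourier inversion,
$$
\sum_{n\leq q^{3/2}}\lambda(n)K(n)=\frac{1}{\sqrt{q}}\sum_{0\leq h<q}
\widehat{K}(h)\sum_{n\leq q^{3/2}}\lambda(n)e\Bigl(\frac{nh}{q}\Bigr)
\ll \frac{1}{\sqrt{q}}\cdot q\cdot\norm{\widehat{K}}_{\infty}\cdot
q^{3/4}\log q,
$$
applying Wilton's estimate to each of the~$q$ inner sums.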
\par
(2) The most obvious function $K$ for which Theorem~\ref{thm:main}
gives trivial results is an additive character $K(n)=e(an/q)$ for
some integer~$a\in\mathbf{Z}$, since the Fourier transform takes one value
of size~$q^{1/2}$. However, a useful estimate also exists in this
case: Miller~\cite{Miller} has proved that
$$
\sum_{n\geq 1}\lambda(1,n)e(\alpha
n)V\Bigl(\frac{n}{X}\Bigr)\ll_{\varphi,Z} X^{3/4+\varepsilon}
$$
for~$X\geq 2$, any~$\alpha\in\mathbf{R}$ and any~$\varepsilon>0$, where the
implied constant is independent of~$\alpha$. This is the
generalization to~$\GL_3$ of the bound of Wilton mentioned in the
first remark.
\par
(3) Using either the functional equation for the $L$-functions
$L(\varphi\otimes\chi,s)$, or the Voronoi summation formula, one can
show that the estimate of Miller implies a bound of the shape
$$
S_V(\Kl_2(a\cdot;q),X)\ll_{\varphi,Z} (qX)^{\varepsilon}X^{1/4}q^{3/4}
$$
for any~$\varepsilon>0$, where
$$
\Kl_2(n;q)=\frac{1}{q^{1/2}}\sum_{x\in{\mathbf{F}^\times_q}}e_q(\ov x+nx)
$$
is a normalized Kloosterman sum. This bound is non-trivial as long
as $X\geq q$. Since~$\Kl_2$ is a trace function that is bounded
by~$2$ and has Fourier transform bounded by~$1$, this gives (in a
special case) a stronger bound than what follows from
Theorem~\ref{thm:main}.
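Indeed, one checks that $X^{1/4}q^{3/4}\leq q^{2/9}X^{5/6}$ as soon as
$X\geq q^{19/21}$, which holds throughout the range of
Theorem~\ref{thm:main}.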
\par
(4) Remark (2) suggests a direct approach by the discrete Fourier
inversion formula, which gives
$$
\sum_{n\leq X}\lambda(1,n)K(n)=\frac{1}{\sqrt{q}} \sum_{0\leq h<q}
\widehat{K}(h) \sum_{n\leq X}\lambda(1,n)e\Bigl(\frac{nh}{q}\Bigr).
$$
A non-trivial bound for~$X\approx q^{3/2}$ in terms
of~$\norm{\widehat{K}}_{\infty}$ would then follow from a bound
$$
\sum_{n\leq X}\lambda(1,n)e\Bigl(\frac{nh}{q}\Bigr) \ll X^{\alpha}
$$
for additive twists of the Fourier coefficients
where~$\alpha<2/3$.
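Indeed, combined with discrete Fourier inversion as above, such a
bound gives an estimate
$\ll q^{1/2}\norm{\widehat{K}}_{\infty}X^{\alpha}$, which for
$X\approx q^{3/2}$ is
$\approx\norm{\widehat{K}}_{\infty}X^{1/3+\alpha}$, and this is
non-trivial precisely when $\alpha<2/3$.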
\par
Unsurprisingly, in the case of~$\GL_2$, although we have the best
possible estimate of Wilton (with the analogue of~$\alpha$
being~$1/2$), the resulting estimate for a sum of length~$q$ is
trivial.
\end{remark}
The plan of the paper is as follows: we will explain the idea and
sketch the key steps of the proof in
Section~\ref{sec-principle}. Section~\ref{sec-examples} recalls the
most important examples of trace functions, for which~$K$ has small
Fourier transform and hence for which Theorem~\ref{thm:main} is
non-trivial. Section~\ref{sec-reminders} presents a key
Fourier-theoretic estimate and some reminders concerning automorphic
forms and the Voronoi summation formula for~$\GL_3$. Then the last
sections complete the proof of Theorem~\ref{thm:main} following the
outline presented previously, and explain how to deduce
Corollaries~\ref{cor-average}, \ref{cor-gl2} and~\ref{RScor} (the last
of which requires further ingredients).
\subsection*{Acknowledgements}
We are very grateful to R. Holowinsky and P. Nelson for sharing with
us and explaining their work \cite{HN} which has directly inspired the
present paper. We are also very grateful to V. Blomer for Remark \ref{blomerremark} and to the referees for their
careful reading of the manuscript, comments and suggestions and in particular for pointing out
a serious error in the first version of Corollary~\ref{RScor}.
\subsection*{Notation}
For any~$z\in\mathbf{C}$, we define $e(z)=\exp(2\pi i z)$. If~$q\geq 1$,
then we denote by $e_q(x)$ the additive character modulo~$q$ defined
by $e_q(x)=e(x/q)$ for~$x\in\mathbf{Z}$. We often identify a $q$-periodic
function defined on~$\mathbf{Z}$ with a function on~$\mathbf{Z}/q\mathbf{Z}$.
For any finite abelian group~$A$, we use the notation~$\widehat{f}$ for
the unitary Fourier transform defined on the character
group~$\widehat{A}$ of~$A$ by
$$
\widehat{f}(\psi)=\frac{1}{\sqrt{|A|}}
\sum_{x\in A}f(x)\psi(x).
$$
We have then the Plancherel formula~$\norm{f}_2=\norm{\widehat{f}}_2$,
where
$$
\|f\|_2^2=\sum_{x\in A}|f(x)|^2,\quad\quad \|\widehat{f}\|_2^2=\sum_{\psi\in
\widehat{A}}|\widehat{f}(\psi)|^2.
$$
For any integrable function on~$\mathbf{R}$, we denote its Fourier transform
by
$$
\widehat{V}(y)=\int_{\mathbf{R}}V(x)e(-xy)dx.
$$
We recall the Poisson summation formula when performed with a
$q$-periodic function~$K$ in addition to a smooth function~$V$ with
fast decay at infinity: for any~$X\geq 1$, we have
\begin{equation}\label{eq-poisson}
\sum_{n\in\mathbf{Z}}K(n)V\Bigl(\frac{n}{X}\Bigr)= \frac{X}{q^{1/2}} \sum_{h\in\mathbf{Z}}
\widehat{K}(h)\widehat{V}\Bigl(\frac{hX}{q}\Bigr).
\end{equation}
This follows directly from the usual Poisson formula and the
definition of $\widehat{K}$ after splitting the sum into congruence
classes modulo~$q$.
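In detail: writing $n=a+qm$ with $a$ running modulo~$q$, the Poisson
formula applied to the sum over~$m\in\mathbf{Z}$ gives
$$
\sum_{n\in\mathbf{Z}}K(n)V\Bigl(\frac{n}{X}\Bigr)
=\sum_{a\mods{q}}K(a)\sum_{m\in\mathbf{Z}}V\Bigl(\frac{a+qm}{X}\Bigr)
=\frac{X}{q}\sum_{h\in\mathbf{Z}}\widehat{V}\Bigl(\frac{hX}{q}\Bigr)
\sum_{a\mods{q}}K(a)e\Bigl(\frac{ah}{q}\Bigr),
$$
and the inner sum over~$a$ equals $q^{1/2}\widehat{K}(h)$
by~\eqref{eq:fourierK}.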
\section{Principle of the proof}\label{sec-principle}
We use a direct generalisation of the method of Holowinsky and
Nelson~\cite{HN} that led to Theorem~\ref{th-hn}. Although it was
motivated by Munshi's approach, based on the use of the Petersson
formula as a tool to express the delta symbol, there is no remaining
trace of this point of view; however, we refer to~\cite[App. B]{HN}
for a detailed and insightful description of the origin of this
streamlined method, starting from Munshi's.
\subsection{Amplification}\label{ssec-amplification}
The first step is to realize the $q$-periodic function~$K$ within a
one-parameter family of $q$-periodic functions. Precisely, let
$\widehat K$ be the Fourier transform of $K$ (see \eqref{eq:fourierK})
and define
\begin{equation}\label{whatKzdef}
\widehat K(z,h):= \begin{cases} \widehat K(z)e_q(-h\ov z)& q\nmid z\\
\widehat K(0)& q\mid z
\end{cases}
\end{equation}
for $(z,h)\in \mathbf{Z}^2$. Then put
\begin{equation}\label{eq-kn}
K(n,h)=\frac{1}{q^{1/2}}\sum_{z\in{\mathbf{F}^\times_q}}\widehat K(z,h)e_q(-nz).
\end{equation}
for $(n,h)\in\mathbf{Z}^2$. By the discrete Fourier inversion formula, we
have
\begin{equation}\label{eq-detector}
K(n,0)=K(n)-\frac{\widehat K(0)}{q^{1/2}}.
\end{equation}
More generally, for any probability measure $\varpi$ on~${\mathbf{F}^\times_q}$, the
average
$$
K_{\varpi}(n,h)=\sum_{l\in{\mathbf{F}^\times_q}}\varpi(l)K(n,\ov lh)
$$
satisfies $K_{\varpi}(n,0)=K(n)-\frac{\widehat K(0)}{q^{1/2}}$. It follows
that, for any parameter $H\geq 1$, we can express the sum $S_V(K,X)$
as the difference of double sums
$$
S_V(K,X)=\sum_{l\in{\mathbf{F}^\times_q}}\varpi(l)\sum_{|h|\leq H}S_{V}(K(\cdot,h\ov
l),X)- \sum_{l\in{\mathbf{F}^\times_q}}\varpi(l)\sum_{0<|h|\leq H}S_{V}(K(\cdot,h\ov
l),X),
$$
up to an error $\ll X/q^{1/2}$. We write this difference as
$$
S_V(K,X)=\mathcal{F}-\mathcal{O},
$$
say. One then needs to select a suitable probability measure~$\varpi$,
after which the two terms are handled by different methods. It
should be emphasized that no main term arises (which would have to be
canceled in the difference between the two terms).
\begin{remark}
The argument is reminiscent of the \emph{amplification method}, the
function $K(n)=K(n,0)$ being ``amplified'' (up to a small error)
within the family $(K(n,h))_{|h|\leq H}$.
\end{remark}
\subsection{Bounding $\mathcal{F}$}
As in~\cite{HN}, we consider a probability measure~$\varpi$
corresponding to a product structure: we average over pairs $(p,l)$ of
primes such that $p\sim P$ and $l\sim L$, and take $\varpi(x)$
proportional to the number of representations $x=\ov p l\mods q$,
where $p\sim P$ and $l\sim L$ are primes (their sizes being parameters
$1\leq P,L<q/2$ to be chosen later).
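Explicitly, denoting by $N(x)$ the number of pairs $(p,l)$ of primes
with $p\sim P$, $l\sim L$ and $\ov{p}l\equiv x\mods{q}$, this amounts
to taking $\varpi(x)=N(x)/\sum_{y\in{\mathbf{F}^\times_q}}N(y)$.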
The treatment of $\mathcal{F}$ is essentially the same as
in~\cite{HN, Lin}. By applying the Poisson summation formula to the
$h$-variable, with dual variable~$r$, we see the function
$$
(n,r,p,l)\mapsto \widehat K(-p\ov{l r})\lambda(1,n)e_q(np\ov{l r})
$$
appear. We then appeal to the classical ``reciprocity law'' for
additive exponentials, namely
$$
e_q(np\ov {l r})\approx e_{rl}(-np\ov q),
$$
trading the modulus $q$ for the modulus $rl$, which will be
significantly smaller than $q$. We then apply the Voronoi summation
formula for the cusp form $\varphi$ on the $n$-variable (this is the
only real automorphic input), which transforms the additive phase
$e_{rl}(-np\ov q)$ into Kloosterman sums of modulus $rl$. We then
obtain further cancellation by smoothing out the resulting variable
(dual to $n$) by using Cauchy--Schwarz and detecting cancellations on
averages of products of Kloosterman sums, where the product structure
of the averaging set is essential.
In this part of the argument, the coefficient function~$K$ plays very
little role, and we could more or less quote the
corresponding statements in~\cite{HN, Lin}, if the parameter~$Z$ were
fixed. Since we wish to keep track of its behavior (for the purpose of
flexibility for potential applications), we have to go through the
computations anew. This is done in detail in Section~\ref{sec:Fsum}.
\subsection{Bounding $\mathcal{O}$}
In the sum~$\mathcal{O}$, with the averaging performed in the same way as
for~$\mathcal{F}$, the key point is that the range of the $n$-variable in the
sum $S_V(K(\cdot,hp\ov l),X)$ is very long compared to $q$. We apply
Cauchy's inequality to smooth it, keeping the other variables $h,p,l$
in the inside, thus eliminating the automorphic coefficients
$\lambda(1,n)$ (for which we only require average bounds, which we
borrow from the Rankin--Selberg theory, our second important
automorphic input). This leads quickly to the problem of estimating
the sum
$$
\sumsum_{p_1,h_1,l_1,p_2,h_2,l_2}\sum_{n\sim X}K(n,h_1p_1\ov
l_1)\ov{K(n,h_2p_2\ov l_2)}.
$$
We apply the Poisson formula in the $n$-variable; since $X$ is
typically much larger than $q$, only the zero frequency in the dual
sum contributes. This yields a key sum of the shape
$$
\sumsum_{p_1,h_1,l_1,p_2,h_2,l_2}\sum_{u\in{\mathbf{F}^\times_q}}|\widehat
K(u)|^2e_q((h_1p_1\bar{l}_1-h_2p_2\bar{l}_2)u^{-1})=
\sumsum_{p_1,h_1,l_1,p_2,h_2,l_2}
K_2(h_1p_1\bar{l}_1-h_2p_2\bar{l}_2),
$$
say.
\par
When $K$ is a multiplicative character, as in the work of
Holowinsky--Nelson, the proof is essentially finished then, since
$\widehat K(u)$ is a normalized Gauss sum, with a constant modulus,
hence~$K_2$ is simply a Ramanujan sum, which we can evaluate
explicitly.
\par
In general, we obtain cancellation using a very general
Fourier-theoretic bound for general bilinear forms
$$
\sum_{m\in{\mathbf{F}_q}}\sum_{n\in{\mathbf{F}_q}}\alpha_m\beta_nK_2(m-n),
$$
which involves only $L^2$-norm bounds for the coefficients and
$L^{\infty}$-norm bounds for the Fourier transform of~$K_2$
(see~Proposition~\ref{willprop}). The latter, it turns out, is
essentially~$|\widehat{K}|^2$, and we can obtain a good estimate purely
in terms of~$\norm{\widehat{K}}_{\infty}$. This part of the argument is
performed in Section~\ref{sec:O}.
\section{Examples of trace functions}\label{sec-examples}
Theorem~\ref{thm:main} certainly applies to ``random'' $q$-periodic
functions $K\colon \mathbf{Z}\to\mathbf{C}$, for all reasonable meanings of the word
``random'', but the basic motivating examples in number theory are
often provided by \emph{trace functions}. Since there are by now a
number of surveys and discussions of important examples (see,
e.g.,~\cite[\S 10]{FKM1} or~\cite[\S 2.2]{short-sums} or~\cite{pisa}),
we only recall some of them for concreteness.
\begin{itemize}
\item If $r\geq 1$ is a fixed integer and $\chi_1$, \ldots, $\chi_r$
are distinct non-trivial Dirichlet characters modulo~$q$, of
order~$d_i\geq 2$, and if $f_1$, \ldots, $f_r$, $g$ are polynomials
in $\mathbf{Z}[X]$ such that either $\deg(g\mods q)\geq 2$, or one of
the~$f_i\mods q$ is not proportional to a $d_i$-th power
in~$\bar{\Ff}_q[X]$, then
$$
K(n)=\chi_1(f_1(n))\cdots \chi_r(f_r(n))e\Bigl(\frac{g(n)}{q}\Bigr)
$$
has Fourier transform of size bounded only in terms of~$r$ and the
degrees of the polynomials~$f_i$ and~$g$. (This is a consequence of
the Weil bounds for exponential sums in one variable).
\par
Moreover (as is relevant only for Corollary~\ref{RScor} in this
paper), $K$ is a trace function, and it is non-exceptional,
unless~$g$ is of degree~$1$, $r=1$ and~$f_1$ is of degree~$\leq 1$.
\item Let~$r\geq 2$. Define~$\Kl_r(0)=0$ and
$$
\Kl_r(n)=\frac{1}{q^{(r-1)/2}}
\sum_{\substack{x_1,\ldots,x_r\in {\mathbf{F}_q}\\
x_1\cdots x_r=n}}e\Bigl(\frac{x_1+\cdots+x_r}{q}\Bigr)
$$
for~$n\in{\mathbf{F}^\times_q}$ (these are hyper-Kloosterman sums). Then
$\norm{\widehat{\Kl}_r}_{\infty}\leq c_r$, where~$c_r$ depends only
on~$r$ (this depends on Deligne's general proof of the Riemann
Hypothesis over finite fields and on the construction and basic
properties of Kloosterman sheaves).
\par
For all~$r\geq 2$, the function~$\Kl_r$ is a trace function of a
non-exceptional sheaf.
\end{itemize}
We also mention one important principle: if~$K$ is the trace function
of a Fourier sheaf~$\mathcal{F}$ (in the sense of~\cite{ESDE}),
then~$\widehat{K}$ is also such a function for a sheaf~$\ft(\mathcal{F})$;
moreover, if~$\mathcal{F}$ has conductor~$c$ (in the sense of~\cite{FKM1}),
then $\ft(\mathcal{F})$ has conductor $\leq 10c^2$, and in
particular~$\norm{\widehat{K}}_{\infty}\leq 10c^2$.
Finally, one example that is not usually discussed explicitly
(formally, because it arises from a skyscraper sheaf) is
$K(n)=q^{1/2}\delta_{n\equiv a\mods{q}}$, the $L^2$-normalized delta
function at a point~$a\in\mathbf{Z}$. In this case, the Fourier transform is
an additive character, hence is bounded by one, and dividing
by~$q^{1/2}$, we obtain the bound
$$
\sum_{\substack{n\geq 1\\n\equiv a\mods{q}}}
\lambda(1,n)V\Bigl(\frac{n}{X}\Bigr) \ll
Z^{10/9}q^{-5/18+\varepsilon}X^{5/6},
$$
under the assumptions of Theorem~\ref{thm:main}; in particular,
if~$X=q^{3/2}$ and~$V$ satisfies~(\ref{eq:Vprop}) for~$Z=1$, we get
$$
\sum_{\substack{n\geq 1\\n\equiv a\mods{q}}}
\lambda(1,n)V\Bigl(\frac{n}{q^{3/2}}\Bigr) \ll q^{35/36+\varepsilon}
$$
for any~$\varepsilon>0$. Note that, under the generalized
Ramanujan--Petersson conjecture $\lambda(1,n)\ll n^{\varepsilon}$, we would
obtain the stronger bound $q^{1/2+\varepsilon}$ (and knowing the
approximation~$\lambda(1,n)\ll n^{\theta}$ for some $\theta<1/3$ would
be enough to get a non-trivial bound). We discuss this case in
further details in Remark \ref{lastremark}, in the context of
Corollary \ref{cor-average}.
\section{Preliminaries}\label{sec-reminders}
\subsection{A Fourier-theoretic estimate}
A key estimate in Section~\ref{sec:O} will arise from the following
general bound (special cases of which have appeared before, e.g. in
the case of multiplicative characters for problems concerning sums
over sumsets).
\begin{proposition}\label{willprop}
Let $A$ be a finite abelian group, with group operation denoted
additively. Let $\alpha$, $\beta$ and $K$ be functions from~$A$
to~$\mathbf{C}$. We have
$$
\Bigl|\sum_{m,n\in A}\alpha(m)\beta(n)K(m-n)\Bigr|\leq
|A|^{1/2}\|\widehat K\|_\infty \|\alpha\|_2 \|\beta \|_2.
$$
\end{proposition}
\begin{proof}
Using orthogonality of characters, we write
$$
\sum_{m,n\in A}\alpha(m)\beta(n) K(m-n)=
\sum_{m,n,h\in A}\alpha(m)\beta(n) K(h)\frac{1}{|A|}
\sum_{\psi\in\widehat{A}}\psi(h-(m-n)).
$$
Moving the sum over~$\psi$ to the outside, this is equal to
$$
|A|^{1/2}\sum_{\psi\in\widehat{A}}
\widehat{\alpha}(\psi^{-1})\widehat{\beta}(\psi)\widehat{K}(\psi),
$$
whose absolute value is
$$
\leq |A|^{1/2}\norm{\widehat{K}}_{\infty}
\sum_{\psi\in\widehat{A}}|\widehat{\alpha}(\psi^{-1})\widehat{\beta}(\psi)|
\leq |A|^{1/2}
\|\widehat K\|_\infty \|\alpha\|_2 \|\beta \|_2,
$$
by the Cauchy--Schwarz inequality and the discrete Plancherel
formula.
\end{proof}
\subsection{Background on $\GL_3$-cusp forms}
We refer to \cite[Chap. 6]{Goldfeld} for notations. Let $\varphi$ be a
cusp form on~$\GL_3$ with level~$1$ and with Langlands parameters
$\mu=(\mu_1,\mu_2,\mu_3)\in\mathbf{C}^3$. We denote by
$(\lambda(m,n))_{m,n\not=0}$ its Fourier--Whittaker coefficients, and
assume that
$\varphi$ is an eigenform of the Hecke operators $T_n$ and $T_n^*$,
normalized so that $\lambda(1,1)=1$. The eigenvalue of~$T_n$ is
then~$\lambda(1,n)$ for~$n\geq 1$.
Let $\theta_3=5/14$. The archimedean parameters and the Hecke
eigenvalues are bounded individually by
$$
|\Re(\mu_{i})|\leq \theta_3,\quad\quad |\lambda(1,p)|\leq 3p^{\theta_3}
$$
for any~$i$ and any prime number~$p$ (see~\cite{KimSar}).
Average estimates follow from the Rankin--Selberg method. We have
\begin{equation}
\label{eq:RS}
\sum_{1\leq n\leq X}|\lambda(1,n)|^2\ll X^{1+\varepsilon},
\end{equation}
and
\begin{equation}
\label{eq-RS2}
\sum_{1\leq m^2n\leq X}m|\lambda(m,n)|^2\ll X^{1+\varepsilon},
\end{equation}
for $X\geq 2$ and any $\varepsilon>0$, where the implied constant depends
only on $\varphi$ and $\varepsilon$. (See~\cite{Molteni} and~\cite[Lemma 2]{Munshi1}.)
The key analytic feature of $\GL_3$-cusp forms that we use (as in
previous works) is the Voronoi summation formula for~$\varphi$
(originally due to Miller--Schmid, and Goldfeld--Li
independently). Since our use of the ``archimedean'' part of the
formula is quite mild, we use the same compact formulation as
in~\cite[\S 2.3]{HN}, where references are given.
Let~$q\geq 1$ be an integer (not necessarily prime). For $n\in\mathbf{Z}$, we
denote
$$
\Kl_2(n;q)=\frac{1}{\sqrt{q}}
\sum_{x\in (\mathbf{Z}/q\mathbf{Z})^{\times}}
e\Bigl(\frac{nx+\bar{x}}{q}\Bigr)
$$
where~$\bar{x}$ is the inverse of~$x$ modulo~$q$.
\begin{lemma}[Voronoi summation formula]\label{Voronoi}
For $\sigma\in\{-1,1\}$, there exist functions $\mathcal{G}^{\sigma}$,
meromorphic on~$\mathbf{C}$, holomorphic for~$\Re(s)>\theta_3$, with
polynomial growth in vertical strips~$\Re(s)\geq \alpha$ for
any~$\alpha>\theta_3$, such that the following properties hold.
\par
Let $a$ and~$q\geq 1$ be coprime integers, let $X>0$, and let $V$ be
a smooth function on~$]0,+\infty[$ with compact support.
We have
$$
\sum_{n\geq 1}\lambda(1,n)e_q(an)V\Bigl(\frac{n}{X}\Bigr) = q^{3/2}
\sum_{\sigma\in\{-1,1\}} \sum_{n\geq 1} \sum_{m\mid q}
\frac{\lambda(n,m)}{nm^{3/2}} \Kl_2\Bigl(\sigma n\ov
a;\frac{q}{m}\Bigr) \mathcal{V}_{\sigma}\Bigl(\frac{m^2n}{q^3/X}\Bigr),
$$
where
$$
\mathcal{V}_{\sigma}(x)= \frac{1}{2\pi i}\int_{(1)}x^{-s}\mathcal{G}^{\sigma} (s+1)
\Bigl(\int_{0}^{+\infty}V(y)y^{-s}\frac{dy}{y}\Bigr)ds.
$$
\end{lemma}
Note that the functions~$\mathcal{G}^{\sigma}$ depend (only) on the
archimedean parameters of~$\varphi$. We record some properties of the
functions $\mathcal{V}_{\sigma}(x)$; for~$Z$ fixed they are already explained
in~\cite[\S 2.3]{HN}.
\begin{lemma}\label{bounds-for-V}
Let $\sigma\in\{-1,1\}$. For any $j\geq 0$, any $A\geq 1$ and
any~$\varepsilon>0$, we have
$$
x^j\mathcal{V}_{\sigma}^{(j)}(x)\ll \min\Bigl( Z^{j+1}x^{1-\theta_3-\varepsilon},
Z^{j+5/2+\varepsilon}\Bigl(\frac{Z^3}{x}\Bigr)^A\Bigr)
$$
for~$x>0$, where the implied constant depends on~$(j,A,\varepsilon)$.
Moreover, for $x\geq 1$, we have
$$
x^j\mathcal{V}_{\sigma}^{(j)}(x)\ll x^{2/3}\min(Z^j, x^{j/3})
$$
where the implied constant depends on~$j$.
\end{lemma}
\begin{proof}
The first inequality in the first bound follows by shifting the
contour in $\mathcal{V}_{\pm}(x)$ to $\Re s=\theta_3-1+\varepsilon$, while the
second one follows by shifting contour to the far right. The second
bound follows from \cite[Lemma 6]{Blomer}.
\end{proof}
In particular, we see from the lemma that the functions
$\mathcal{V}_{\sigma}(x)$ decay very rapidly as soon as
$x\geq X^{\delta}Z^{3}$ for some~$\delta>0$.
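Indeed, for such~$x$, the second bound of Lemma~\ref{bounds-for-V}
gives $x^j\mathcal{V}_{\sigma}^{(j)}(x)\ll_{j,A,\varepsilon}
Z^{j+5/2+\varepsilon}X^{-\delta A}$ for every~$A\geq 1$.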
\begin{remark}
The bound $x^j\mathcal{V}_{\sigma}^{(j)}(x)\ll Z^{j+1}x^{1-\theta_3-\varepsilon}$
can be replaced by $x^j\mathcal{V}_{\sigma}^{(j)}(x)\ll Z^{j}x^{1-\varepsilon}$,
under the Ramanujan--Selberg conjecture, i.e., if $\Re (\mu_i)=0$ for
all~$i$.
\end{remark}
\begin{remark}
Let~$N\geq 1$, and define a congruence
subgroup~$\Gamma_N\subset \SL_3(\mathbf{Z})$ by
$$
\Gamma_N=\Bigl\{\gamma\in\SL_3(\mathbf{Z})\,\mid\, \gamma\equiv\begin{pmatrix}
*&*&*\\ *&*&*\\0&0&*
\end{pmatrix}\mods N
\Bigr\}.
$$
Zhou~\cite{FZ} has established an explicit Voronoi summation
formula for $\GL_3$-cuspforms that are invariant under~$\Gamma_N$,
for additive twists by~$e_q(an)$ when either $(q,N)=1$ or $N\mid
q$. It should then be possible to use this formula to generalise
Theorem~\ref{thm:main} to such cuspforms by slight adaptations of
the argument below.
\end{remark}
\section{Amplification of the trace function}
We now begin the proof of Theorem~\ref{thm:main}. Let $q$ be a prime
number and $K$ a $q$-periodic function on $\mathbf{Z}$. Let $\widehat{K}$ be its
discrete Fourier transform~(\ref{eq:fourierK}), which is also a
$q$-periodic function on~$\mathbf{Z}$.
Let $P,L\geq 1$ be two parameters to be chosen later, with $2P<q$ and
$2L<q$. We define auxiliary sets
\begin{align*}
\mathrm{P}&:=\{p\in[P,2P[\,\mid\, p\equiv 1\mods{4}, \text{ prime}\}\\
\mathrm{L}&:=\{l\in[L,2L[\,\mid\, l\equiv 3\mods{4}, \text{ prime}\}.
\end{align*}
Note that these sets are disjoint. We denote
\begin{equation}\label{eq-H}
H=\frac{q^2L}{XP}.
\end{equation}
In the sequel, we assume that $H\geq 1$, that is
\begin{equation}\label{Hcond}
XP\leq q^2L.
\end{equation}
Let $W$ be a smooth function on $\mathbf{R}$ that satisfies \eqref{eq:Vprop}
with $Z=1$ and furthermore $\widehat W(0)=1$.
We now use the notation~$K(n,h)$ and~$\widehat{K}(z,h)$ and the basic
amplification idea discussed in Section~\ref{ssec-amplification}
(see~\eqref{whatKzdef} and~\eqref{eq-kn}). We define
\begin{align*}
\mathcal{F}
&=
\frac{1}{|\mathrm{P}||\mathrm{L}|}\sum_{p\in\mathrm{P}}\sum_{l\in\mathrm{L}}
\sum_{h\in \mathbf{Z}}S_{V}(K(\cdot,hp\ov l),X)
\widehat{W}\Bigl(\frac{h}{H}\Bigr)\\
&=\frac{1}{|\mathrm{P}||\mathrm{L}|}\sum_{p\in\mathrm{P}}\sum_{l\in\mathrm{L}}
\sum_{h\in\mathbf{Z}}\widehat{W}\Bigl(\frac{h}{H}\Bigr)
\sum_{n\geq 1}\lambda(1,n)K(n,hp\ov l)V\Bigl(\frac{n}{X}\Bigr).
\end{align*}
Separating the contribution of $h=0$ and applying~(\ref{eq-detector}),
we can write
\begin{equation}\label{an-identity}
\mathcal{F}=S_{V}(K,X)+\mathcal{O}+
O\Bigl(\frac{q^{\varepsilon}\norm{\widehat{K}}_{\infty}X}{q^{1/2}}\Bigr),
\end{equation}
for any~$\varepsilon>0$, where
\begin{equation}\label{eq-m2}
\mathcal{O}= \frac{1}{|\mathrm{P}||\mathrm{L}|}\sum_{p\in\mathrm{P}}\sum_{l\in\mathrm{L}}
\sum_{h\neq 0}\widehat{W}\Bigl(\frac{h}{H}\Bigr)
\sum_{n\geq 1}\lambda(1,n)K(n,hp\ov l)V\Bigl(\frac{n}{X}\Bigr).
\end{equation}
Indeed, the contribution of $h=0$ is
\begin{align*}
\frac{1}{|\mathrm{P}||\mathrm{L}|}\sum_{p\in\mathrm{P}}\sum_{l\in\mathrm{L}}
S_{V}(K(\cdot,0),X)\widehat{W}(0)
&=S_{V}(K,X)-
\frac{\widehat K(0)}{|\mathrm{P}||\mathrm{L}|q^{1/2}}
\sum_{p\in\mathrm{P}}\sum_{l\in\mathrm{L}}
\sum_{n\geq 1}\lambda(1,n)V\Bigl(\frac{n}{X}\Bigr)\\
&=S_{V}(K,X)+O\Bigl(\frac{\norm{\widehat{K}}_{\infty}X^{1+\varepsilon}}{q^{1/2}}\Bigr),
\end{align*}
for any $\varepsilon>0$, by \eqref{eq:RS}.
\section{Evaluation of $\mathcal{F}$}\label{sec:Fsum}
The evaluation of~$\mathcal{F}$ is close to the arguments of~\cite{HN}
and~\cite[\S 6]{Lin}. In fact, we could extract the desired bounds
from these sources (especially~\cite{Lin}) in the important special
case when the parameter~$Z$ is fixed as~$q$ varies. The reader who is
familiar with one of these references may therefore wish to skip the
proof of the next proposition on a first reading.
\begin{proposition}\label{proposition-for-F}
Let $\eta>0$. Assume that
\begin{equation}
\label{XZlower}
X/Z\geq q^{1+\eta}.
\end{equation}
and
\begin{equation}\label{boundsPL}
L\leq P^4.
\end{equation}
Then for any $\varepsilon>0$, we have
$$
\mathcal{F}\ll q^{\varepsilon}\norm{\widehat{K}}_{\infty}
\Bigl(\frac{Z^{2}X^{3/2}P}{qL^{1/2}}+Z^{3/2}X^{3/4}(qPL)^{1/4}\Bigr),
$$
where the implied constant depends on $\varphi$, $\varepsilon$ and $\eta$.
\end{proposition}
The remainder of this section is dedicated to the proof of this
proposition. We fix $\eta>0$ such that~(\ref{XZlower}) holds.
We apply the Poisson summation formula to the sum over $h$ in
$\mathcal{F}$, for each $(p,l)$. We obtain
$$
\sum_{h\in \mathbf{Z}}K(n,hp\ov
l)\widehat{W}\Bigl(\frac{h}{H}\Bigr)=\frac{H}{q^{1/2}}\sum_{(r,q)=1}\widehat
K(-p\ov l\ov r)e_q(np\ov l\ov r)W\Bigl(\frac{r}{R}\Bigr),
$$
where
\begin{equation}\label{eq-R}
R=q/H=\frac{XP}{qL}.
\end{equation}
Hence it follows that
$$
\mathcal{F}= \frac{q^{3/2}L}{XP|\mathrm{P}||\mathrm{L}|}
\sum_{p\in\mathrm{P}}\sum_{l\in\mathrm{L}} \sum_{(r,q)=1}\widehat K(-p\ov l\ov r)
\sum_{n\geq 1}e_q(n p\ov l\ov r) \lambda(1,n)V\Bigl(\frac{n}{X}\Bigr)
W\Bigl(\frac{r}{R}\Bigr).
$$
Since $l\leq 2L<q$, we have $(q,rl)=1$ in the sums. By reciprocity, we
have
$$
e_q(n p\ov l\ov r)=e_{rl}(-np\ov q)e_{qrl}(np)
$$
for $n\geq 1$.
\begin{remark}
Note that for $n\asymp X$, we have
$$
\frac{np}{qrl}\asymp \frac{XP}{q\cdot\frac{XP}{qL}\cdot L}=1,
$$
so that the additive character $e_{qrl}(np)$ does not oscillate.
\end{remark}
We define
$$
V_1(x)=e\Bigl(\frac{xXp}{qrl}\Bigr)V(x).
$$
We can then rephrase the above as
$$
\sum_{n\geq 1}\lambda(1,n)e_{rl}(-np\ov
q)e_{qrl}(np)V\Bigl(\frac{n}{X}\Bigr)= \sum_{n\geq
1}\lambda(1,n)e_{rl}(-np\ov q)V_1\Bigl(\frac{n}{X}\Bigr),
$$
and
$$
\mathcal{F}
=\frac{q^{3/2}L}{XP|\mathrm{P}||\mathrm{L}|}\sum_{p,l}\sum_{r\geq 1}
\widehat K(-p\ov l\ov r)W\Bigl(\frac{r}{R}\Bigr)
\sum_{n\geq 1}\lambda(1,n)e_{rl}(-np\ov q)V_1\Bigl(\frac{n}{X}\Bigr).
$$
Let $\mathcal{F}'$ be the contribution to the last expression of
those $(p,r,l)$ such that $(p,rl)=1$, and let $\mathcal{F}''$ be the
remaining contribution.
In the case $p\mid r$, we can apply the Voronoi formula with modulus
$rl/p$; estimating the resulting expression directly, one obtains an
estimate for the contribution $\mathcal{F}''$ to $\mathcal{F}$ that is bounded by
$$
\mathcal{F}''\ll \frac{\norm{\widehat{K}}_{\infty}Z^2X^{3/2+\varepsilon}}{qP}
$$
for any $\varepsilon>0$ (see~\cite[\S 6]{Lin} for a similar computation,
where this contribution is denoted $\mathcal{F}_1^{\sharp}$). Note that
since $L\leq P^4$ by~\eqref{boundsPL}, this bound is dominated by the
first term in Proposition~\ref{proposition-for-F}.
Now let $p$ be such that $(p,rl)=1$. By the Voronoi summation
formula (Lemma \ref{Voronoi}), we have
$$
\sum_{n\geq 1}\lambda(1,n)e_{rl}(-np\ov q)
V_1\Bigl(\frac{n}{X}\Bigr)=(rl)^{3/2} \sum_{\sigma\in\{-1,1\}}
\sum_{n\geq 1} \sum_{m\mid rl} \frac{\lambda(n,m)}{nm^{3/2}}
\Kl_2(\sigma\ov
pqn;rl/m)\mathcal{V}_{1,\sigma}
\Bigl(\frac{m^2n}{r^3l^3/X}\Bigr).
$$
Therefore $\mathcal{F}'=\mathcal{F}'_{1}+\mathcal{F}'_{-1}$, where
\begin{multline*}
\mathcal{F}'_{\sigma}
=\frac{q^{3/2}L}{XP|\mathrm{P}||\mathrm{L}|}\sum_{p\in\mathrm{P}}\sum_{l\in\mathrm{L}}
\sum_{r\geq 1}\widehat K(-p\ov l\ov
r)
W\Bigl(\frac{r}{R}\Bigr)(rl)^{3/2}
\\
\sum_{n\geq 1}\sum_{m\mid rl}
\frac{\lambda(n,m)}{nm^{3/2}}\Kl_2(\sigma \ov
pqn;rl/m)\mathcal{V}_{1,\sigma}\Bigl(\frac{m^2n}{r^3l^3/X}\Bigr).
\end{multline*}
We re-arrange the sums to get
\begin{multline*}
\mathcal{F}'_{\sigma} =\frac{(qRL)^{3/2}L}{XP|\mathrm{P}||\mathrm{L}|}
\sum_{r\geq 1}\Bigl(\frac{r}{R}\Bigr)^{3/2}
W\Bigl(\frac{r}{R}\Bigr) \sum_{n,m}
\frac{\lambda(n,m)}{\sqrt{nm}} \\
\sum_{p\in\mathrm{P}}\sum_{\substack{l\in\mathrm{L}\\m\mid rl}}
\frac{(l/L)^{3/2}}{\sqrt{n}m}\widehat K(-p\ov l\ov r)\Kl_2(\sigma \ov
pqn;rl/m)\mathcal{V}_{1,\sigma}\Bigl(\frac{m^2n}{r^3l^3/X}\Bigr).
\end{multline*}
Let~$\delta>0$ be a small parameter. For fixed $r$ and $l$, using the
bounds from Lemma \ref{bounds-for-V} with a suitably large value
of~$A$, the contribution to the sum over~$m$ and~$n$ of $(m,n)$
such that
\begin{equation}\label{eq-truncate}
m^2n\geq q^{\delta}\frac{Z^3(rl)^3}{X}\asymp \frac{q^{\delta}Z^3X^2P^3}{q^3}
\end{equation}
is $\ll \norm{\widehat{K}}_{\infty}q^{-10}$ (say).
\par
To handle the remaining part of the sum, we apply the Cauchy--Schwarz
inequality to the sum over $(m,n)$, and we obtain
\begin{equation}\label{eq-mcfsigma}
\mathcal{F}'_{\sigma} \ll \frac{(qRL)^{3/2}L}{XP|\mathrm{P}||\mathrm{L}|}
\Bigl(\sum_{r\sim R} \sum_{\substack{n,m\geq 1\\m^2n<
q^{\delta}Z^3X^2P^3/q^3}}
\frac{|\lambda(n,m)|^2}{nm}\Bigr)^{1/2}\mathcal{N}_{\sigma}^{1/2}
+\norm{\widehat{K}}_{\infty}q^{-1},
\end{equation}
where
\begin{multline*}
\mathcal{N}_{\sigma}= \sum_{r,m\geq 1} W\Bigl(\frac{r}{R}\Bigr)
\frac{1}{m^2}
\sumsum_{\substack{p_1,p_2,l_1,l_2\\p_i\in\mathrm{P},l_i\in\mathrm{L}\\m\mid
(rl_1,rl_2)}} \Bigl(\frac{l_1l_2}{L^2}\Bigr)^{3/2}
\widehat K(-p_1\ov l_1\ov r)\ov{\widehat K(- p_2\ov l_2\ov r)}\\
\times\sum_{n\geq 1}\frac{1}{n}\Kl_2(\sigma \ov
p_1qn;rl_1/m)\ov{\Kl_2(\sigma \ov p_2qn;rl_2/m)}
\mathcal{V}_{1,\sigma}\Bigl(\frac{m^2n}{r^3l_1^3/X}\Bigr)
\ov{\mathcal{V}_{1,\sigma}\Bigl(\frac{m^2n}{r^3l_2^3/X}\Bigr)}.
\end{multline*}
We will prove the bound
\begin{equation}\label{Nsigmabound}
\mathcal{N}_{\sigma}\ll
q^{\varepsilon}\norm{\widehat{K}}_{\infty}^2
\Bigl(
Z^4RPL
+\frac{Z^3R^{3/2}q^3L^3}{X^2P}\Bigr).
\end{equation}
for any~$\varepsilon>0$.
If we select~$\delta>0$ small enough in terms of~$\varepsilon$, then by
the Rankin--Selberg bound~(\ref{eq-RS2}), we deduce that
\begin{equation}\label{eq-fpsigma}
\mathcal{F}'_{\sigma} \ll \frac{q^{3/2+\varepsilon}Z^{\varepsilon}L^{5/2}R^2}
{XP|\mathrm{P}||\mathrm{L}|}\mathcal{N}_{\sigma}^{1/2}+
\norm{\widehat{K}}_{\infty}q^{-1}
\end{equation}
for any $\varepsilon>0$.
We conclude, using~(\ref{eq-mcfsigma}) and recalling
that~$R=XP/(qL)$, that
\begin{align*}
\mathcal{F}'_{\sigma}
&\ll
\frac{q^{\varepsilon}\norm{\widehat{K}}_{\infty}
R^2(qL)^{3/2}}{XP^2}\bigg(Z^4RPL+\frac{Z^3R^{3/2}q^3L^3}{X^2P}
\bigg)^{1/2}
\\
&\ll
q^{\varepsilon}\norm{\widehat{K}}_{\infty}
\Bigl(\frac{Z^{2}X^{3/2}P}{qL^{1/2}}+Z^{3/2}X^{3/4}(qPL)^{1/4}\Bigr)
\end{align*}
for any~$\varepsilon>0$. Assuming \eqref{Nsigmabound}, this concludes the proof of
Proposition~\ref{proposition-for-F}.
\subsection{Proof of \eqref{Nsigmabound}} We will now investigate the inner sum over~$n$
in~$\mathcal{N}_{\sigma}$, and then perform the remaining summations
(over $r$, $m$, $p_i$, $l_i$) essentially trivially. We let
$$
U=\frac{q^{\delta/2}Z^{3/2}XP^{3/2}}{q^{3/2}},
$$
so that the sum over~$m$ has been truncated to~$m\leq U$.
Let~$F$ be a smooth non-negative function on $\mathbf{R}$ which is supported
on $[1/2,3]$ and equal to $1$ on $[1,2]$. Let $Y\geq 1$ be a parameter
with
\begin{equation}\label{eq-boundy}
Y\leq \frac{q^{\delta}Z^3X^2P^3}{m^2q^3},
\end{equation}
and define
$$
\mathcal{W}_{Y}(x)=\frac{1}{x}
\mathcal{V}_{1,\sigma}\Bigl(\frac{m^2xY}{r^3l_1^3/X}\Bigr)
\ov{\mathcal{V}_{1,\sigma}\Bigl(\frac{m^2xY}{r^3l_2^3/X}\Bigr)} F(x).
$$
We study the sums
$$
\mathcal{P}_Y= \frac{1}{Y}\sum_{n\geq 1}\Kl_2(\ov
p_1qn;rl_1/m)\ov{\Kl_2(\ov p_2qn;rl_2/m)}
\mathcal{W}_{Y}\Bigl(\frac{n}{Y}\Bigr),
$$
and their combinations
\begin{equation}\label{eq-py}
\mathcal{N}_{Y,\sigma}= \sum_{r\geq 1}\sum_{1\leq m\leq U}
W\Bigl(\frac{r}{R}\Bigr)
\frac{1}{m^2}
\sumsum_{\substack{p_1,p_2,l_1,l_2\\p_i\in\mathrm{P},l_i\in\mathrm{L}\\m\mid
(rl_1,rl_2)}} \Bigl(\frac{l_1l_2}{L^2}\Bigr)^{3/2}
\widehat K(-p_1\ov l_1\ov r)\ov{\widehat K(- p_2\ov l_2\ov
r)}\,\mathcal{P}_Y.
\end{equation}
We will prove the following bound: for any $\varepsilon>0$, if $\delta$ is chosen small enough we have
\begin{equation}\label{NYbound}
\mathcal{N}_{Y,\sigma}\ll q^{\varepsilon}Z^4\norm{\widehat{K}}_{\infty}^2RPL+q^{\varepsilon}\norm{\widehat{K}}_{\infty}^2\frac{Z^3R^{3/2}q^3L^3}{X^2P}.
\end{equation}
Performing a dyadic partition of unity on the $n$-variable in $\mathcal{N}_{\sigma}$, we deduce \eqref{Nsigmabound}.
\subsection{Bounding $\mathcal{P}_Y$}
We apply the Poisson
summation formula~(\ref{eq-poisson}) with modulus $r[l_1,l_2]/m$, to
get
\begin{equation}\label{after-poisson}
\mathcal{P}_Y
=\frac{1}{r[l_1,l_2]/m}\sum_{n\in\mathbf{Z}}C(n,p_1,p_2, l_1,l_2,r, m)
\widehat{\mathcal{W}}_Y\Bigl(\frac{nY}{r[l_1,l_2]/m}\Bigr),
\end{equation}
where
\begin{eqnarray*}
C(n,p_1,p_2,l_1,l_2, r,m)=
\sum_{\beta \mods {r[l_1,l_2]/m}}
\Kl_2(\ov p_1q\beta;rl_1/m)
\ov{\Kl_2(\ov p_2q\beta;rl_2/m)} \, e_{r[l_1,l_2]/m}(\beta n),
\end{eqnarray*}
with~$\ov{p}_i$ denoting the inverse of~$p_i$ modulo~$rl_i/m$. We write
$$
\mathcal{P}_Y=\mathcal{P}_0+\mathcal{P}_1
$$
where
$$
\mathcal{P}_{0}=\frac{1}{r[l_1,l_2]/m} C(0,p_1,p_2,l_1,l_2,r,
m)\widehat{\mathcal{W}}_Y(0)
$$
is the contribution of the term~$n=0$ and~$\mathcal{P}_1$ is the
remainder in \eqref{after-poisson}. We show below that for any $\varepsilon>0$, if $\delta$ is chosen small enough we have
\begin{equation}\label{P0bound}
\mathcal{P}_{0}\ll \delta_\stacksum{l_1=l_2}{p_1=p_2}(qr)^{\varepsilon}Z^4+\delta_\stacksum{l_1=l_2}{p_1\not=p_2}
q^{\varepsilon}Z^4\frac{m}{rl}
\Bigl(\frac{rl}{m},p_2-p_1\Bigr)
\end{equation}
(where $l$ denotes the common value of~$l_1=l_2$) and that
\begin{equation}\label{P1bound}\mathcal{P}_{1}\ll q^{2\varepsilon}Z^3\Bigl(\frac{r[l_1,l_2]}{m}\Bigr)^{1/2}\frac{m^2q^3}{X^2P^3}.\end{equation}
Using \eqref{P0bound} in the sum~(\ref{eq-py}), we find that the contribution to~$\mathcal{N}_{Y,\sigma}$ of $\mathcal{P}_0$ is bounded by
\begin{gather}\nonumber
\sum_{r\geq 1} W\Bigl(\frac{r}{R}\Bigr) \sum_{1\leq m\leq U}
\frac{1}{m^2}
\sumsum_{\substack{p_1,p_2,l\\p_i\in\mathrm{P},l\in\mathrm{L}\\m\mid rl}}
\Bigl(\frac{l}{L}\Bigr)^{3} \widehat K(-p_1\ov l\ov r)\ov{\widehat K(-
p_2\ov l\ov r)}\,\mathcal{P}_0
\\ \nonumber
\ll q^{\varepsilon}Z^4\norm{\widehat{K}}_{\infty}^2 \sum_{r\asymp R}
\sum_{1\leq m\leq U}\frac{1}{m^2} \sum_{\substack{l\in\mathrm{L}\\m\mid
rl}} \Bigl(\sum_{p\in\mathrm{P}}1+
\frac{m}{rl}\sum_{\substack{p_1,p_2\in\mathrm{P}\\p_1\not=p_2}}\Bigl(\frac{rl}{m},p_2-p_1\Bigr)
\Bigr)\\ \nonumber
\ll q^{\varepsilon}Z^4\norm{\widehat{K}}_{\infty}^2 \Bigl(RPL+ \sum_{r\asymp
R} \frac{1}{r}\sum_{1\leq m\leq U}\frac{1}{m}
\sum_{\substack{l\in\mathrm{L}\\m\mid rl}}\frac{1}{l} \sum_{d\mid rl/m}\varphi(d)
\sum_{\substack{p_1,p_2\in\mathrm{P}\\p_1\equiv p_2\mods{d}}}1
\Bigr)\\
\ll q^{\varepsilon}Z^4\norm{\widehat{K}}_{\infty}^2 (RPL+P^2)\ll
q^{\varepsilon}Z^4\norm{\widehat{K}}_{\infty}^2RPL.\label{NYP0}
\end{gather}
Here $RPL=XP^2/q\gg P^2$, since $X$ satisfies \eqref{XZlower}.
Using \eqref{P1bound} we find that the contribution of~$\mathcal{P}_1$
to~$\mathcal{N}_{\sigma,Y}$ is bounded by
\begin{multline}\label{NYP1}
\ll q^{\varepsilon}Z^3\norm{\widehat{K}}_{\infty}^2\sum_{r\asymp R}
\sum_{1\leq m\leq U}
\frac{1}{m^2}
\sum_{\substack{p_1,p_2,l_1,l_2\\m\mid (rl_1,rl_2)}}
\Bigl(\frac{r[l_1,l_2]}{m}\Bigr)^{1/2}\frac{m^2q^3}{X^2P^3}\\
\ll q^{\varepsilon}Z^3\norm{\widehat{K}}_{\infty}^2
R\frac{q^3}{X^2P^3}(P^2L^2)(RL^2)^{1/2}
\ll q^{\varepsilon}\norm{\widehat{K}}_{\infty}^2\frac{Z^3R^{3/2}q^3L^3}{X^2P}.
\end{multline}
Combining \eqref{NYP0} and \eqref{NYP1} we obtain \eqref{NYbound}.
\subsection{Proof of \eqref{P0bound} and \eqref{P1bound}}
The next two lemmas evaluate the archimedean and non-archimedean Fourier transforms which occur in \eqref{after-poisson}:
\begin{lemma}\label{lm-trunc}
With notation as above, in particular~\emph{(\ref{eq-boundy})},
let~$j\geq 0$ be an integer and let~$\varepsilon>0$.
\par
\emph{(1)} We have
\begin{equation}\label{eq-wyzero}
\widehat{\mathcal{W}}_Y(0)\ll q^{\delta}Z^4.
\end{equation}
\par
\emph{(2)} We have
\begin{equation}\label{derivatives-of-F}
x^j\mathcal{W}^{(j)}_{Y}(x)\ll
\begin{cases}
Z^{2+j}\Bigl(\frac{m^2Yq^3}{X^2P^3}\Bigr)^{2-2\theta_3-\varepsilon}
& \quad \text{if } Y<\frac{X^2P^3}{m^2q^3}\\
\Bigl(\frac{m^2Yq^3}{X^2P^3}\Bigr)^{4/3+j/3} & \quad \text{if }
Y\geq \frac{X^2P^3}{m^2q^3},
\end{cases}
\end{equation}
where the implied constants depends on~$(\varphi,j,\varepsilon)$.
\par
\emph{(3)} If $1\leq |n|\leq q^{\delta}Z\frac{r[l_1,l_2]}{mY}$
then we have
$$
\widehat{\mathcal{W}}_{Y}\Bigl(\frac{nY}{r[l_1,l_2]/m}\Bigr)\ll
q^{\delta}Z^{2}\frac{m^2Yq^3}{X^2P^3}.
$$
\end{lemma}
\begin{proof}
Since~$F$ has support in $[1/2,3]$, part~(1) follows from the
bound~$\mathcal{V}_{\sigma}(x)\ll x^{2/3}$ for~$x\geq 1$ and the
fact that
$$
\frac{m^2xY}{r^3l_i^3/X}\asymp \frac{m^2Y}{X^2P^3/q^3}\ll
q^{\delta}Z^3.
$$
\par
Part~(2) is obtained using the estimates
\begin{gather*}
x^j\mathcal{V}_{\pm}^{(j)}(x)\ll Z^{j+1}x^{1-\theta_3-\varepsilon}\quad \text{
if }
0<x<1,\\
x^j\mathcal{V}_{\pm}^{(j)}(x)\ll x^{2/3+j/3}\quad \text{ if } x\geq 1
\end{gather*}
(see Lemma \ref{bounds-for-V}), noting again that
$\frac{(rl)^3}{X}\asymp \frac{X^2P^3}{q^3}$.
\par
From \eqref{derivatives-of-F}, for any $n$ such that
$1\leq |n|\leq q^{\delta}Z\frac{r[l_1,l_2]}{mY}$, we get the estimates
$$
\widehat{\mathcal{W}}_Y\Bigl(\frac{nY}{r[l_1,l_2]/m}\Bigr)\ll
\begin{cases}
Z^{2}\Bigl(\frac{m^2Yq^3}{X^2P^3}\Bigr)^{2-2\theta_3-\varepsilon},&
\quad
\mathrm{if}\, Y<\frac{X^2P^3}{m^2q^3}\\
\Bigl(\frac{m^2Yq^3}{X^2P^3}\Bigr)^{4/3},& \quad \mathrm{if}\,
Y\geq \frac{X^2P^3}{m^2q^3}
\end{cases}
$$
Since $m^2Yq^3/(X^2P^3)\ll q^{\delta}Z^3$, the second bound is also
$$
\ll q^{\delta/3}Z\frac{m^2Yq^3}{X^2P^3}.
$$
Together with the first bound, this implies part (3) of the lemma.
\end{proof}
\begin{lemma}\label{lm-delta}
Let $n\in\mathbf{Z}$, $p_1$, $p_2$ primes, $l_1$, $l_2$ primes, $r\geq 1$
and $m\geq 1$ be integers with $m\mid rl_i$.
\par
\emph{(1)} We have
$$
C(0,p_1,p_2,l_1,l_2,r,m)=0
$$
unless $l_1=l_2$.
\par
\emph{(2)} For $l$ prime with $m\mid rl$, we have
$$
|C(0,p_1,p_2,l,l,r,m)|\leq (rl/m,p_2-p_1)
. $$
\par
\emph{(3)} Let
$$
\Delta=q\frac{l_2^2p_2-l_1^2p_1}{(l_1,l_2)^2}.
$$
We have
$$
|C(n,p_1,p_2,l_1,l_2,r,m)|\leq
2^{O(\omega(r))}\Bigl(\frac{r[l_1,l_2]}{m}\Bigr)^{1/2}
\frac{(\Delta,n,rl_1/m,rl_2/m)}{(n,rl_1/m,rl_2/m)^{1/2}}.
$$
\par
\emph{(4)} Suppose that $\Delta=0$. If $(p_1,p_2)$ are
$\equiv 1\mods{4}$ and $(l_1,l_2)$ are $\equiv 3\mods{4}$, then
$p_1=p_2$ and $l_1=l_2$. For $p$ prime and $l$ prime with
$m\mid rl$, we have
$$
|C(n,p,p,l,l,r,m)|\leq 2^{O(\omega(r))}
\Bigl(\frac{rl}{m}\Bigr)^{1/2}\Bigl(n,\frac{rl}{m}\Bigr)^{1/2}.
$$
In particular, $C(0,p,p,l,l,r,m)\ll r^{\varepsilon}\frac{rl}{m}$ for
any~$\varepsilon>0$.
\end{lemma}
\begin{proof}
Part (1) follows by direct computation (the sum vanishes unless
$[l_1,l_2]=l_1$ and $[l_1,l_2]=l_2$). If $n=0$ and $l_1=l_2$, then
$$
|C(0,p_1,p_2,l,l,r,m)|=\bigg|\sum_{\substack{x\bmod rl/m\\(x,rl/m)=1}}
e\Bigl(\frac{( p_2-p_1)x}{rl/m}\Bigr)\bigg| = \bigg| \sum_{d|(rl/m, p_2-p_1)} d \mu\Bigl(\frac{rl}{md} \Bigr) \bigg| \leq (rl/m,p_2-p_1)
$$
by a classical bound for Ramanujan's sum, which proves (2). Finally, part (3) is a special case of~\cite[Lemma
A.2 (A.3)]{HN} (applied with
$(\xi,s_1,s_2)=(n,rl_1/m,rl_2/m)$, and
$(a_1,b_1,a_2,b_2)=(q,p_1,q,p_2)$ in the definition of~$\Delta$).
If $\Delta=0$, then necessarily $p_1=p_2$ and $l_1=l_2$, and we
obtain~(4) immediately.
\end{proof}
\subsubsection{Estimation of $\mathcal{P}_0$}
Note that~$\mathcal{P}_0=0$ unless $l_1=l_2$. If that is the case, we
denote~$l=l_1=l_2$. We then have two bounds for~$\mathcal{P}_{0}$. If
we have also~$p_1=p_2$, then the quantity~$\Delta$ of
Lemma~\ref{lm-delta} (3) is zero.
Since~$\widehat{\mathcal{W}}_Y(0)\ll q^{\varepsilon}Z^4$ for any~$\varepsilon>0$
(provided~$\delta>0$ is chosen small enough) by Lemma~\ref{lm-trunc}
(1), we obtain
$$
\mathcal{P}_0\ll q^{\varepsilon}Z^4\frac{m}{rl}|C(0,p_1,p_1,l,l,r,m)|\ll
(qr)^{\varepsilon}Z^4
$$
by the last part of Lemma~\ref{lm-delta} (4).
\par
On the other hand, if~$p_1\not=p_2$, we have~$\Delta\not=0$ hence
$$
\mathcal{P}_0\ll
q^{\varepsilon}Z^4\frac{m}{rl}
\Bigl(\frac{rl}{m},p_2-p_1\Bigr)
$$
by Lemma~\ref{lm-delta} (1) (which shows that the sum
$C(0,p_1,p_2,l_1,l_2,r,m)$ is zero unless~$l_1=l_2$) and (2).
\par
\subsubsection{Estimation of $\mathcal{P}_1$}
Using Lemma~\ref{lm-trunc} (2) for a
suitable value of~$j$, we obtain first
$$
\mathcal{P}_{1}= \frac{1}{r[l_1,l_2]/m}\sum_{1\leq |n|\leq
q^{\delta}Z\frac{r[l_1,l_2]}{mY}} C(n,p_1,p_2,l_1,l_2,r,m)
\widehat{\mathcal{W}}_Y\Bigl(\frac{nY}{r[l_1,l_2]/m}\Bigr) +O(q^{-1})
$$
if~$\delta$ is chosen small enough. Then, by
Lemma~\ref{lm-delta} and Lemma~\ref{lm-trunc} (3), we deduce that
\begin{align*}
\mathcal{P}_1
&\ll q^{\varepsilon+2\delta}\frac{1}{(r[l_1,l_2]/m)^{1/2}}
\sum_{1\leq |n|\leq q^{\delta}Z\frac{r[l_1,l_2]}{mY}}
\frac{(\Delta,n,rl_1/m,rl_2/m)}{(n,rl_1/m,rl_2/m)^{1/2}}
\frac{Z^{2}m^2Yq^3}{X^2P^3}
\\
&\ll q^{2\varepsilon}Z^3\Bigl(\frac{r[l_1,l_2]}{m}\Bigr)^{1/2}\frac{m^2q^3}{X^2P^3}
\end{align*}
if $\delta<\varepsilon/2$.
\section{Estimate of $\mathcal{O}$}\label{sec:O}
In this section, we bound the sum~$\mathcal{O}$ defined
in~(\ref{eq-m2}). Our goal is:
\begin{proposition}\label{corollary-for-O}
Let~$\eta>0$ be a parameter such that \eqref{XZlower} holds.
Let~$\varepsilon>0$. If~$\delta$ is a sufficiently small positive real
number and if $P,L,X$ satisfy
\begin{equation}\label{eqboundsPHLX}
XP\leq q^2L,\quad q^{1+\delta}L^2<X/8,\quad q^{\delta}PHL<q/8,
\end{equation}
then we have
\begin{equation}\label{final-bound-for-O}
\mathcal{O}\ll q^{\varepsilon}\norm{\widehat{K}}_{\infty}\frac{qX^{1/2}}{P},
\end{equation}
where the implied constant depends on~$\varphi$ and~$\varepsilon$.
\end{proposition}
We start by decomposing $\mathcal{O}$ into
$$
\mathcal{O}=\mathcal{O}_1+\mathcal{O}_2
$$
according to whether the prime $l$ divides $h$ or not, in other words
\begin{align*}
\mathcal{O}_1
&=\frac{1}{|\mathrm{P}||\mathrm{L}|}\sum_{p\in\mathrm{P}}\sum_{l\in\mathrm{L}}
\sum_{h\not=0}\widehat{W}\Bigl(\frac{hl}{H}\Bigr)
\sum_{n\geq 1}\lambda(1,n)K(n,hp)V\Bigl(\frac{n}{X}\Bigr)\\
\mathcal{O}_2
&=\frac{1}{|\mathrm{P}||\mathrm{L}|}
\sum_{p\in\mathrm{P}}\sum_{l\in\mathrm{L}}
\sum_\stacksum{h\neq 0}{(h,l)=1}
\widehat{W}\Bigl(\frac{h}{H}\Bigr)
\sum_{n\geq 1}\lambda(1,n)K(n,hp\ov l)V\Bigl(\frac{n}{X}\Bigr).
\end{align*}
Both of these sums will be handled in a similar way in the next two
subsections, beginning with the most difficult one.
\subsection{Bounding $\mathcal{O}_2$}
In the sum $\mathcal{O}_2$, we first use the bound
$$
\widehat{W}(x)\ll (1+|x|)^{-A}
$$
for any~$A\geq 1$ and~$x\in\mathbf{R}$, and
$$
\sum_{n\geq 1}\lambda(1,n)K(n,hp\ov l)V\Bigl(\frac{n}{X}\Bigr)\ll
X^{1+\varepsilon}q^{1/2}\norm{\widehat{K}}_{\infty}\ll
q^{5/2+2\varepsilon}\norm{\widehat{K}}_{\infty}
$$
for any~$\varepsilon>0$ (by~(\ref{eq:RS}) and discrete Fourier inversion) to
truncate the sum over~$h$ to~$|h|\leq q^{\delta}H$, for
some~$\delta>0$ that may be arbitrarily small.
Let~$T\geq 0$ be a smooth function with compact support such that
$T(x)=\norm{V}_{\infty}$ for $x\in [1/2,3]$ and such that~$T$
satisfies the derivative bounds in~\eqref{eq:Vprop} with a fixed value of~$Z$. We then
have~$|V|\leq T$.
In the sum~$\mathcal{O}_2$, we split the $h$-sum into $O(\log q)$ dyadic
sums. We then apply the Cauchy--Schwarz inequality to smooth the
$n$-variable, and we obtain
$$
\mathcal{O}_2\ll \frac{\log^3 q}{PL} \Bigl(\sum_{n\sim
X}|\lambda(1,n)|^2\Bigr)^{1/2} \mathop{\mathrm{Max}}\limits_{1\leq H'\leq
q^{\delta}H}\mathcal{R}_{H'}^{1/2}\ll \frac{X^{1/2}\log^3 q}{PL}
\mathop{\mathrm{Max}}\limits_{1\leq H'\leq q^{\delta}H}\mathcal{R}_{H'}^{1/2},
$$
by~(\ref{eq:RS}) again, where
$$
\mathcal{R}_{H'}= \sumsum_{p_1,h_1,l_1,p_2,h_2,l_2} \sum_{n\geq
1}K(n,h_1p_1\ov l_1)\ov{K(n,h_2p_2\ov l_2)}
\widehat{W}\Bigl(\frac{h_1}{H}\Bigr)
\ov{\widehat{W}\Bigl(\frac{h_2}{H}\Bigr)}
T\Bigl(\frac{n}{X}\Bigr),
$$
with the variables in the sums constrained by the conditions
$$
p_i\in\mathrm{P},\quad l_i\in\mathrm{L},\quad
H'<h_i\leq 2H',\quad (l_i,h_i)=1.
$$
For $x\in{\mathbf{F}_q}$, we define
\begin{equation}\label{nudef}
\nu(x)=
\sum_{\substack{
(p,h,l)\in \mathrm{P}\times[H',2H'[ \times\mathrm{L},\\
(h,l)=1\\
ph\ov l\equiv x\mods q}}
\widehat{W}\Bigl(\frac{h}{H}\Bigr)
\end{equation}
so that we have
\begin{equation}\label{eq:nsum}
\mathcal{R}_{H'}=\sumsum_{x_1,x_2\in{\mathbf{F}_q}}
\nu(x_1)\nu(x_2)\sum_{n\geq 1}K(n,x_1)\ov{K(n,x_2)}
T\Bigl(\frac{n}X\Bigr).
\end{equation}
We apply the Poisson summation formula~(\ref{eq-poisson}) for the sum
over~$n$. This results in the formula
$$
\sum_{n\geq 1}K(n,x_1)\ov{K(n,x_2)}T\Bigl(\frac{n}X\Bigr)
=\frac{X}{\sqrt{q}}
\sum_{h\in\mathbf{Z}} \Bigl(\frac{1}{\sqrt{q}}
\sum_{n\mods{q}}K(n,x_1)\ov{K(n,x_2)}e\Bigl(\frac{nh}{q}\Bigr)
\Bigr)
\widehat{T}\Bigl(\frac{hX}{q}\Bigr).
$$
Observe that for any~$h\in\mathbf{Z}$, we have
$$
\frac{1}{\sqrt{q}}
\sum_{n\mods{q}}K(n,x_1)\ov{K(n,x_2)}e\Bigl(\frac{nh}{q}\Bigr)
=\frac{1}{\sqrt{q}}\sum_{u\mods{q}} \widehat K(u,x_1)\ov{\widehat
K(u+h,x_2)}
$$
where $ \widehat K(u,x)$ is defined as in \eqref{whatKzdef}. In
particular, this quantity is bounded by
$q^{1/2}\norm{\widehat{K}}_{\infty}^2$.
Now, for all $h\not=0$ and all~$A\geq 1$, we have
$$
\widehat{T}\Bigl(\frac{hX}{q}\Bigr)\ll_A
\Bigl(\frac{qZ}{hX}\Bigr)^A\leq \Bigl(\frac{qZ}{X}\Bigr)^A\leq
q^{-A\eta},
$$
by~(\ref{XZlower}), where the implied constant depends on~$A$. Hence,
taking~$A$ large enough in terms of~$\eta$, the contribution of
all~$h\not=0$ to the sum over~$n$ is
$\ll \norm{\widehat{K}}_{\infty}^2q^{-5}$, and the total contribution
to~$\mathcal{R}_{H'}$ is (using very weak bounds on~$\nu(x)$)
$$
\ll \norm{\widehat{K}}_{\infty}^2 q^{-3}(PHL)^2\ll
\norm{\widehat{K}}_{\infty}^2q^{-1}
$$
by~(\ref{eqboundsPHLX}).
The remaining contribution to~$\mathcal{R}_{H'}$ from the
frequency~$h=0$ is
$$
\frac{X}{\sqrt{q}}\sumsum_{x_1,x_2\in{\mathbf{F}_q}}
\nu(x_1)\nu(x_2)\frac{1}{\sqrt{q}}\sum_{u\in{\mathbf{F}_q}} \widehat
K(u,x_1)\ov{\widehat K(u,x_2)}\widehat{T}(0).
$$
\begin{lemma}
For any~$(x_1,x_2)\in{\mathbf{F}_q}\times{\mathbf{F}_q}$, we have
$$
\frac{1}{\sqrt{q}}\sum_{u\in{\mathbf{F}_q}}
\widehat K(u,x_1)\ov{\widehat K(u,x_2)}=
L(x_1-x_2)
$$
where
$$
L(x)=\frac{1}{\sqrt{q}}\sum_{u\in{\mathbf{F}^\times_q}}|\widehat{K}(u)|^2e_q(-\bar{u}x)
+\frac{1}{\sqrt{q}}|\widehat{K}(0)|^2.
$$
Moreover, we have
$$
\widehat{L}(h)=|\widehat{K}(0)|^2\delta_{h\equiv 0\mods{q}}+
|\widehat{K}(\bar{h})|^2\delta_{h\not\equiv 0\mods{q}},
$$
and in particular~$|\widehat{L}(h)|\leq \norm{\widehat{K}}_{\infty}^2$ for
all~$h\in{\mathbf{F}_q}$.
\end{lemma}
\begin{proof}
The first formula is an immediate consequence of the
definition~(\ref{whatKzdef}), and the second results from a
straightforward computation.
\end{proof}
\begin{lemma}
We have
$$
\norm{\nu}_2^2=\sum_{x\in{\mathbf{F}_q}}\nu(x)^2\ll q^{\varepsilon+\delta}PHL
$$
for any~$\varepsilon>0$.
\end{lemma}
\begin{proof}
From the last condition in~(\ref{eqboundsPHLX}), we have the
implications
\begin{equation}\label{PHLbound}
h_2p_2\ov l_2\equiv h_1p_1\ov l_1\mods q\Longleftrightarrow
l_1h_2p_2\equiv l_2h_1p_1\mods q\Longleftrightarrow l_1h_2p_2=l_2h_1p_1.
\end{equation}
Therefore, if $(p_1,h_1,l_2)$ are given, the integer $N=l_2h_1p_1$ is
determined and $(p_2,h_2,l_1)$ runs over factorizations of $N$ into
three factors, so by the divisor bound the number of possibilities
for $(p_2,h_2,l_1)$ is $\ll q^{\varepsilon}$ for any~$\varepsilon>0$.
The bound
$$
\sum_{x\in{\mathbf{F}^\times_q}}\nu(x)^2\ll q^{\varepsilon}PH'L\leq q^{\varepsilon+\delta}PHL
$$
follows immediately.
\end{proof}
We can now combine these two lemmas with Proposition~\ref{willprop} to
deduce that
\begin{align*}
\frac{X}{\sqrt{q}}\Bigl|\sumsum_{x_1,x_2\in{\mathbf{F}_q}}
\nu(x_1)\nu(x_2)\frac{1}{\sqrt{q}}\sum_{u\in{\mathbf{F}_q}}
\widehat K(u,x_1)\ov{\widehat K(u,x_2)}\widehat{T}(0)\Bigr|
&\leq X\norm{\widehat{L}}_{\infty}\norm{\nu}_2^2|\widehat{T}(0)|\\
&\ll q^{\varepsilon}\norm{\widehat{K}}_{\infty}^2XPHL
\end{align*}
for any~$\varepsilon>0$, by taking~$\delta$ small enough in terms of~$\varepsilon$.
Hence we obtain
\begin{equation}\label{diagonal1total}
\mathcal{O}_2\ll q^{\varepsilon}\norm{\widehat{K}}_{\infty}X\Bigl(\frac{H}{PL}\Bigr)^{1/2}\ll
q^{1+\varepsilon}\norm{\widehat{K}}_{\infty}\frac{X^{1/2}}{P}.
\end{equation}
\subsection{Bounding $\mathcal{O}_1$ and end of the proof of Proposition~\ref{corollary-for-O}}
The treatment of $\mathcal{O}_1$ is similar to that of~$\mathcal{O}_2$, but simpler,
so we will be brief. We have
$$
\mathcal{O}_1=\frac{1}{|\mathrm{L}|}\sum_{l\in\mathrm{L}}\frac{1}{|\mathrm{P}|}
\sum_{p\in\mathrm{P}}\sum_{h\not=0} \widehat{W}\Bigl(\frac{h}{H/l}\Bigr)
\sum_{n\geq 1}\lambda(1,n)K(n,hp)V\Bigl(\frac{n}{X}\Bigr).
$$
We bound the sum over~$p$ for each individual~$l\in\mathrm{L}$, with
$h\ll H/l\asymp H/L$, by repeating the arguments of the previous
section with $H$ replaced by $H/l$ and $L$ replaced by $1$. We
obtain
\begin{equation}\label{O1bound}
\mathcal{O}_1\ll \norm{\widehat{K}}_{\infty}
q^{\varepsilon}X\Bigl(\frac{H}{PL}\Bigr)^{1/2}
\ll q^{1+\varepsilon}\norm{\widehat{K}}_{\infty}\frac{X^{1/2}}{P}
\end{equation}
for any~$\varepsilon>0$, as in the previous case.
Finally, since~$\mathcal{O}=\mathcal{O}_1+\mathcal{O}_2$, this bound combined
with~(\ref{diagonal1total}) implies Proposition~\ref{corollary-for-O}.
\section{End of the proof}
We can now finish the proof of our main theorem. We recall that $X,Z$
are such that
\begin{equation}
\label{eqcondZXq}
Z^{2/3}q^{4/3}\leq X \leq Z^{-2}q^{2}.
\end{equation}
In particular $Z\leq q^{1/4}$ and
$$X\geq Z^{2/3}q^{4/3}\geq Zq^{1+1/4}$$
therefore \eqref{XZlower} holds for~$\eta=1/4$.
Assuming that the conditions \eqref{eqboundsPHLX} hold,
combining~\eqref{an-identity}, Proposition~\ref{proposition-for-F} and
Proposition~\ref{corollary-for-O}, we deduce the estimate
$$
S_V(K,X) \ll
q^{\varepsilon}\norm{\widehat{K}}_{\infty}
\Biggl(\frac{Z^{2}X^{3/2}P}{qL^{1/2}}+Z^{3/2}X^{3/4}(qPL)^{1/4}
+\frac{qX^{1/2}}{P}\Biggr)
$$
for any~$\varepsilon>0$. When $L=Z^{2/3}XP/q^{5/3}$, the first two terms are
equal to $Z^{5/3}XP^{1/2}/q^{1/6}$. For $P=q^{7/9}/(X^{1/3}Z^{10/9})$,
they are also equal to the third term which is
$Z^{10/9}q^{2/9}X^{5/6}$. Moreover, the conditions \eqref{eqcondZXq}
and~$Z\leq q^{1/4}$ then imply, by simple computations, that
$$
1\leq P,\ 1\leq L,\ L\leq P^4,\ XP\leq q^2L
$$
(for instance, $X^3Z^{10}\leq Z^{10}(q^2/Z^2)^3=Z^4q^6\leq q^7$
gives~$P\geq 1$), and then we get
$$
q^{1+\delta}L^2<\frac{X}{8}
$$
for~$\delta=1/18$ provided~$q$ is large enough (since
$qL^2=q^{-7/9}Z^{-8/9}X^{4/3}\leq X(X^{1/3}q^{-7/9})\leq Xq^{-1/9}$
using~$X\leq q^2$). By~(\ref{eq-H}), this also implies
that~$q^{\delta}PHL<q/8$. Hence this choice of the parameters
satisfies~(\ref{eqboundsPHLX}). We finally conclude that
$$
S_V(K,X) \ll \norm{\widehat{K}}_{\infty}
Z^{10/9}q^{2/9+\varepsilon}X^{5/6}
$$
for any~$\varepsilon>0$.
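For completeness, here is the analogous verification that $L\geq 1$: with these choices of~$P$ and~$L$ we have
$$
L=\frac{Z^{2/3}XP}{q^{5/3}}=\frac{X^{2/3}}{Z^{4/9}q^{8/9}},
$$
so $L\geq 1$ is equivalent to $X\geq Z^{2/3}q^{4/3}$, which is exactly the lower bound in~\eqref{eqcondZXq}.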
\section{Applications}
In this section, we prove
Corollaries~\ref{cor-average} and~\ref{cor-gl2}.
\subsection{Proof of Corollary~\ref{cor-average}} Applying the approximate functional
equation for~$L(\varphi\otimes\chi,s)$ in balanced form, we
immediately express the first moment
$$
\frac{1}{q-1} \sum_{\chi\mods{q}}M(\chi)L(\varphi\otimes\chi, 1/2)
$$
in terms of the sums
$$
\frac{1}{\sqrt{q}}
\sum_{n\geq 1} \frac{\lambda(1,n)}{\sqrt{n}}
K(n)V\Bigl(\frac{n}{q^{3/2}}\Bigr)
$$
and
$$
\frac{1}{\sqrt{q}}\sum_{n\geq 1}
\frac{\overline{\lambda(1,n)}}{\sqrt{n}}
L(n)V\Bigl(\frac{n}{q^{3/2}}\Bigr),
$$
for suitable test functions satisfying~(\ref{eq:Vprop}) for~$Z=1$,
where
$$
K(n)=\frac{q^{1/2}}{q-1}\sum_{\chi\mods{q}}M(\chi){\chi(n)},\quad
L(n)=\frac{q^{1/2}}{q-1}\sum_{\chi\mods{q}}\tau(\chi)^3M(\chi)\ov{\chi(n)},
$$
in terms of the normalized Gauss sum~$\tau(\chi)$. An elementary
computation shows that this function~$L$ coincides with the function
in the statement of Corollary~\ref{cor-average}. Since moreover
the~$\overline{\lambda(1,n)}$ are the Hecke-eigenvalues of the dual
cusp form~$\widetilde{\varphi}$, the corollary follows from
Theorem~\ref{thm:main} applied to~$K$ and~$L$.
\begin{remark}\label{lastremark}
(1) If
$$
M(\chi)=\frac{1}{\sqrt{q}}\sum_{x\in{\mathbf{F}^\times_q}}K(x)\ov{\chi(x)}
$$
is the discrete Mellin transform of the trace function~$K$ of a
Fourier sheaf~$\mathcal{F}$ that is a middle-extension sheaf on~$\mathbf{G}_m$ of
weight~$0$, and if no sheaf of the
form~$[x\mapsto x^{-1}]^*\dual(\mathcal{K}\ell_3)$ is among the geometrically
irreducible components of~$\mathcal{F}$, then
both~$\norm{\widehat{K}}_{\infty}$ and~$\norm{\widehat{L}}_{\infty}$ are
bounded in terms of the conductor of~$\mathcal{F}$ only and we obtain
$$
\frac{1}{q-1} \sum_{\chi\mods{q}}M(\chi)L(\varphi\otimes\chi,
1/2)\ll q^{2/9+\varepsilon}
$$
for any~$\varepsilon>0$, where the implied constant depends only
on~$\varepsilon$,~$\varphi$ and the conductor of~$\mathcal{F}$.
\par
(2) Applying the approximate functional equation in a balanced form
may not always be the best move. For instance, consider the important
special case where $M(\chi)=1$. We are then evaluating the first
moment
\begin{equation}\label{eqfirstmoment}
\frac{1}{q-1}\sum_{\chi\mods q}L(\varphi\otimes\chi,1/2)
\end{equation}
of the central values of the twisted $L$-functions. Then we are
working with the functions
$$
K(n)=q^{1/2}\delta_{n\equiv 1\mods q},\quad L(n)=\Kl_3( n;q),
$$
whose Fourier transforms are bounded by absolute constants. Hence the
above leads to
$$
\frac{1}{q-1} \sum_{\chi\mods{q}}L(\varphi\otimes\chi, 1/2)
\ll q^{2/9 + \varepsilon}
$$
for any~$\varepsilon>0$, where the implied constant depends on~$\varphi$
and~$\varepsilon$.
\par
On the other hand, the approximate functional equation in unbalanced
form yields sums of the shape
$$
\sum_{n\equiv 1\mods q} \frac{\lambda(1,n)}{\sqrt{n}}
V\Bigl(\frac{n}{Yq^{3/2}}\Bigr)
\quad\text{ and }\quad
\frac{1}{\sqrt{q}}\sum_{n\geq 1}
\frac{\overline{\lambda(1,n)}}{\sqrt{n}}
\Kl_3(n;q)V\Bigl(\frac{nY}{q^{3/2}}\Bigr),
$$
for some parameter $Y>0$ at our disposal. Assuming the
Ramanujan--Petersson conjecture for~$\varphi$
and~$\widetilde{\varphi}$, and using Deligne's bound
$|\Kl_3(n;q)|\leq 3$ for $(n,q)=1$, we obtain the much stronger bound
$$
\frac{1}{q-1}\sum_{\chi\mods
q}L(\varphi\otimes\chi,1/2)=1+O\bigl((qY)^{\varepsilon}
\bigl({Y^{1/2}}/{q^{1/4}}+q^{1/4}/Y^{1/2}\bigr)\bigr)\ll
q^{\varepsilon}
for any~$\varepsilon>0$, on choosing $Y=q^{1/2}$.
\par
Note that, again under the Ramanujan--Petersson conjecture
for~$\varphi$ and its dual, we would obtain an \emph{asymptotic
formula} for the first moment \eqref{eqfirstmoment} \emph{provided}
we could obtain an estimate for $S_V(\Kl_3,X)$ with a power-saving in
terms of~$q$, when $X$ is a bit smaller than $q$. Results of this type
are however currently only known if $\varphi$ is an Eisenstein series
(starting from the work~\cite{FI} of Friedlander and Iwaniec for the
ternary divisor function; see also the papers of Fouvry, Kowalski and
Michel~\cite{FKMd3}, of Kowalski, Michel and Sawin~\cite{KMS} and of
Zacharias~\cite{Zac}).
This illustrates the importance of the problem of obtaining
non-trivial bounds for short sums in Theorem \ref{thm:main}. However,
we expect that much more refined properties of trace functions and
their associated sheaves will be necessary for such a purpose (as
indicated by Remark~\ref{remcor17}).
\end{remark}
\subsection{Proof of Corollary~\ref{cor-gl2}}
The symmetric square~$\varphi$ of~$\psi$ has Hecke eigenvalues
\begin{equation}\label{lambda(n2)}
\lambda(1,n)=\sum_{d^2\mid n}\lambda\Bigl(\frac{n^2}{d^4}\Bigr),
\end{equation}
and hence, by M\"obius inversion, we have
$$
\lambda(n^2)=\sum_{d^2\mid n}\mu(d)\lambda\Bigl(1,\frac{n}{d^2}\Bigr).
$$
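As a sanity check, at $n=p^2$ the first identity reads $\lambda(1,p^2)=\lambda(p^4)+\lambda(1)=\lambda(p^4)+1$, and the second indeed inverts it: $\lambda(p^4)=\mu(1)\lambda(1,p^2)+\mu(p)\lambda(1,1)=\lambda(1,p^2)-1$.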
We deduce that
$$
\sum_{n\geq 1}\lambda(n^2)K(n)V\Bigl(\frac{n}{X}\Bigr)= \sum_{d\geq
1}\mu(d) \sum_{n\geq
1}K(nd^2)\lambda(1,n)V\Bigl(\frac{nd^2}{X}\Bigr).
$$
For
$$
1\leq d\leq \frac{X^{1/2}}{Z^{1/3}q^{2/3}},
$$
we can apply Theorem~\ref{thm:main} to the sum over~$n$ and the
$q$-periodic function $L(n)=K(nd^2)$, with~$X$ replaced
by~$X/d^2$. Since~$q\nmid d$, we have
$\widehat{L}(x)=\widehat{K}(\bar{d}^2x)$ for any~$x\in \mathbf{Z}$, so
that~$\norm{\widehat{L}}_{\infty}=\norm{\widehat{K}}_{\infty}$, and we get
\begin{align*}
\sum_{d\leq X^{1/2}/(Z^{1/3}q^{2/3})}\mu(d) \sum_{n\geq
1}K(nd^2)\lambda(1,n)V\Bigl(\frac{nd^2}{X}\Bigr)
&\ll
\norm{\widehat{K}}_{\infty}Z^{10/9}q^{2/9+\varepsilon} \sum_{d\geq
1}\frac{X^{5/6}}{d^{5/3}}
\\
&\ll
\norm{\widehat{K}}_{\infty}Z^{10/9}q^{2/9+\varepsilon}X^{5/6}
\end{align*}
for any~$\varepsilon>0$.
\par
Since~$V$ has compact support in $[1/2,3]$, the sum over~$n$ is empty
if $d\geq \sqrt{3X}$. Since
$$
\sum_{n\geq 1}K(nd^2)\lambda(1,n)V\Bigl(\frac{nd^2}{X}\Bigr) \ll
\norm{K}_{\infty}\Bigl(\frac{X}{d^2}\Bigr)^{1+\varepsilon}
$$
for any~$\varepsilon>0$, by the Rankin--Selberg bound~(\ref{eq:RS}), we can
estimate the remaining part of the sum as follows:
\begin{align*}
\sum_{X^{1/2}/(Z^{1/3}q^{2/3})<d\leq \sqrt{3X}}\mu(d) \sum_{n\geq
1}K(nd^2)\lambda(1,n)V\Bigl(\frac{nd^2}{X}\Bigr)
&\ll
\norm{K}_{\infty}X^{1+\varepsilon}
\sum_{X^{1/2}/(Z^{1/3}q^{2/3})<d\leq
\sqrt{3X}}\frac{1}{d^{2+2\varepsilon}}
\\
&\ll
\norm{K}_{\infty}X^{1/2+\varepsilon}Z^{1/3}q^{2/3}
\end{align*}
for any~$\varepsilon>0$.
\begin{remark}
The additional dependency on~$\norm{K}_{\infty}$ seems to be
unavoidable in Corollary~\ref{cor-gl2}.
\end{remark}
\section{Proof of Corollary~\ref{RScor}}
The proof of Corollary~\ref{RScor} requires additional ingredients
besides Theorem~\ref{thm:main}. We will be somewhat brief in handling
these additional arguments (especially standard analytic arguments),
since similar computations have been performed in a number of other
papers (e.g.~\cite{FKM2}).
First, in terms of the
Hecke-eigenvalues $\lambda(m,n)$ of the symmetric square~$\psi$
of~$\varphi$, we have the identity
$$
\lambda(n)^2=\sum_{d^2bc=n}\mu(d)\lambda(1,c)
$$
(see~(\ref{lambda(n2)}) and~(\ref{convol})). Thus we have
$$
\sum_{n\geq 1}\lambda(n)^2K(n)V\Bigl(\frac{n}{X}\Bigr)=
\sum_{d,m,n\geq
1}\mu(d)\lambda(1,n)K(d^2mn)V\Bigl(\frac{d^2mn}{X}\Bigr).
$$
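As a sanity check, at $n=p$ the only terms in the sum are $d=1$ with $(b,c)\in\{(p,1),(1,p)\}$, so the identity reads $\lambda(p)^2=\lambda(1,1)+\lambda(1,p)=1+\lambda(p^2)$ (using $\lambda(1,p)=\lambda(p^2)$ from~(\ref{lambda(n2)})), which is the Hecke relation for~$\varphi$.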
\par
We bound the contribution of the integers~$n$ divisible by~$q$ using
the Kim--Sarnak bound~\cite{KimSar} for~$\lambda(1,n)$, which shows
that it is
$$
\ll
\|K\|_\infty X^{1+\varepsilon}q^{-1+7/32},
$$
for any~$\varepsilon>0$, and hence is negligible. We may therefore restrict
the sum on the right-hand side to integers such that $(dmn,q)=1$.
For a fixed value of $d\leq D$, coprime to~$q$, we consider the sum
\begin{equation}\label{eq-Td}
T_{d,X}=\sum_{m,n\geq
1}\lambda(1,n)K(d^2mn)V\Bigl(\frac{d^2mn}{X}\Bigr).
\end{equation}
We need to estimate the sum of~$T_{d,X}$ over~$d\geq 1$.
Let $D\geq 1$ be some small parameter to be fixed later. The
contribution of the integers $d> D$ is bounded trivially and is
$$
\sum_{d>D}T_{d,X}\ll \frac{\|K\|_\infty X^{1+\varepsilon}}{D}
$$
for any~$\varepsilon>0$.
We now fix $d\leq D$, coprime to~$q$. We handle the sum~$T_{d,X}$ by a
smooth dyadic partition of unity on the $m$-variable. This reduces the
problem to estimates of $O(\log X)$ sums of the form
\begin{equation}\label{DMsum}
S_{d,M}= \sum_{\substack{m,n\geq 1\\(mn,q)=1}}
\lambda(1,n)K(d^2mn)V\Bigl(\frac{d^2mn}{X}\Bigr)
W\Bigl(\frac{m}M\Bigr)
\end{equation}
where $W$ is smooth and compactly supported in $[1/2,5/2]$. We
set
$$
N=\frac{X}{d^2M},
$$
so that $n\sim N$ in the sum.
The estimate for~(\ref{DMsum}) splits into three cases, depending on the
size of~$M$.
\subsection{When $M$ is small}
We assume that $\frac{X}{d^2m}\geq \frac{X}{D^2M}\geq
Z^{2/3}q^{4/3}$, or in other words, that
\begin{equation}\label{condition-on-M}
D^2M\leq \frac{X}{Z^{2/3}q^{4/3}}.
\end{equation}
We can then apply Theorem~\ref{thm:main}, and we derive
\begin{align}\nonumber
S_{d,M}&\ll \|K\|_\infty q^{7/32-1}\frac{X^{1+\varepsilon}}{d^2}
+\|\widehat K\|_\infty Z^{10/9}q^{2/9+\varepsilon}
\sum_{m\sim M}\Bigl(\frac{X}{d^2m}\Bigr)^{5/6}\\
&\ll \|K\|_\infty X^{\varepsilon}q^{7/32-1}\frac{X}{d^2}
+\|\widehat K\|_\infty X^{\varepsilon}Z^{10/9}
\Bigl(\frac{X}{d^2}\Bigr)^{5/6}q^{2/9}{M^{1/6}}
\label{bound1}
\end{align}
for any~$\varepsilon>0$ (the first term corresponds to removing the
constraint $(n,q)=1$).
\subsection{When $M$ is in the Fourier range}
If $M\geq q^{1/2}$, then it is
beneficial to apply the Poisson summation formula to the
$m$-variable. As in the previous case, the cost of removing the
condition $(m,q)=1$ is $\ll \|K\|_\infty q^{7/32-1}X^{1+\varepsilon}/d^2$
for~$\varepsilon>0$. The Poisson summation formula implies that
$$
\sum_{m\geq
1}K(d^2mn)V\Bigl(\frac{d^2mn}{X}\Bigr)W\Bigl(\frac{m}M\Bigr)\ll
\|\widehat K\|_\infty \Bigl(\frac{M}{q^{1/2}}+q^{1/2}\Bigr)
$$
and therefore
\eqref{DMsum} is bounded by
\begin{equation}\label{bound2}
S_{d,M}\ll \|K\|_\infty X^{\varepsilon}q^{7/32-1}\frac{X}{d^2}
+\|\widehat K\|_\infty X^{\varepsilon}\frac{X}{d^2}
\Bigl(\frac{1}{q^{1/2}}+\frac{q^{1/2}}{M}\Bigr)
\end{equation}
for any~$\varepsilon>0$.
\subsection{When $M$ is large but not in Fourier range}
If~$M\leq q^{1/2}$, thinking of the prototypical case when
$X\sim q^{3/2}$ and $D$ is close to one, the $n$-sum is of length
close to $q$, so the natural move is to smooth the $n$-sum, and then
use the Poisson summation formula on the resulting sums.
Thus we apply the Cauchy--Schwarz inequality to \eqref{DMsum}, leaving
the $n$ variable outside, namely
\begin{equation}\label{eqCS}
|S_{d,M}|^2
\ll\sum_{n\sim X/d^2M}|\lambda(1,n)|^2\times
\sumsum_\stacksum{m_i\sim
M}{(m_i,q)=1}\sum_{n\geq 1}K(d^2m_1n)\ov{K(d^2m_2n)}
V\Bigl(\frac{d^2m_1n}X\Bigr)\ov{V}\Bigl(\frac{d^2m_2n}X\Bigr).
\end{equation}
Here, we have dropped the constraint $(n,q)=1$ on the right-hand side
by positivity, and replaced the expressions $W\Bigl(\frac{m_i}M\Bigr)$
by the summation conditions $ m_i\sim M$.
By the Poisson summation formula, we have
\begin{equation}\label{applypoisson}
\sum_{n\geq 1}
K(d^2m_1n)\ov{K(d^2m_2n)}
V\Bigl(\frac{d^2m_1n}X\Bigr)\ov{V}\Bigl(\frac{d^2m_2n}X\Bigr)
=\frac{N}{q^{1/2}}\sum_{h\in\mathbf{Z}}
\widehat{K}_{(2)}(h)\mathcal{W}\Bigl(\frac{h}{q/(X/d^2M)}\Bigr),
\end{equation}
where $\mathcal{W}(y)$ is a smooth function depending on $d,m_1,m_2$,
rapidly decaying as $y\rightarrow\infty$, and
$$
\widehat{K}_{(2)}(h)=\frac{1}{\sqrt{q}}\sum_{n\in\mathbf{F}_q}
K(d^2m_1n)\ov{K(d^2m_2n)}e\Bigl(\frac{nh}{q}\Bigr).
$$
\par
To go further, we use the assumption of Corollary~\ref{RScor} that $K$
is the trace function of a middle-extension $\ell$-adic sheaf~$\mathcal{F}$
that is not exceptional. Indeed, from \cite[Theorem 6.3]{FKM2}, we can
deduce that there exists a set $B\subset {\mathbf{F}^\times_q}$ such
that $|B|$ is bounded in terms of the conductor of~$\mathcal{F}$ only, and
such that whenever
\begin{equation}\label{eqdiag}
m_1/m_2\mods q\not\in B,
\end{equation}
then we have
$$
\|\widehat{K}_{(2)}\|_\infty\ll 1
$$
where the implied constant depends on the conductor of~$\mathcal{F}$ only.
Returning to \eqref{eqCS}, we apply the bound \eqref{applypoisson} to
the pairs $(m_1,m_2)$ which satisfy \eqref{eqdiag}, and apply
the trivial bound otherwise.
We see then that the contribution to the second factor of \eqref{eqCS}
of the ``diagonal'' pairs not satisfying \eqref{eqdiag} is bounded by
$$
\ll X^{\varepsilon}M\Bigl(\frac{M}q+1\Bigr)\frac{X/M}{d^2}
$$
for any~$\varepsilon>0$, while the contribution of the pairs $(m_1,m_2)$
satisfying \eqref{eqdiag} is bounded by
$$
\ll X^{\varepsilon}M^2\Bigl(\frac{X/M}{d^2q^{1/2}}+q^{1/2}\Bigr),
$$
for any~$\varepsilon>0$, where in both cases the implied constant depends
only on~$\varepsilon$ and on the conductor of~$\mathcal{F}$.
Collecting these bounds, we obtain from \eqref{eqCS} the bound
\begin{equation}\label{bound3}
S_{d,M} \ll
\frac{X^{1+\varepsilon}}{d^2}
\Bigl(\frac{1}{M^{1/2}}+\frac{1}{q^{1/4}}+q^{1/4}M^{1/2}
\frac{d}{X^{1/2}}\Bigr),
\end{equation}
for any~$\varepsilon>0$, where the implied constant depends only on~$\varepsilon$
and on the conductor of~$\mathcal{F}$.
\subsection{End of the proof}
Now we can combine the previous bounds. Let~$\eta>0$ and~$\delta$
with~$0<\delta<1/4$ be parameters to be determined later.
\subsubsection*{-- If $M\leq q^{2\delta}$,} we then apply the bound
\eqref{bound1} (and the dyadic decomposition of~$T_{d,X}$ in a
combination of sums~$S_{d,M}$) to derive
\begin{equation}\label{bound-c}
\sum_{d\leq D}T_{d,X}\ll X^{1+\varepsilon}q^{7/32-1}
+Z^{10/9}X^{5/6+\varepsilon}q^{2/9+\delta/3},
\end{equation}
under the condition that
\begin{equation}\label{condition-X}
X\geq Z^{2/3}D^2q^{4/3+2\delta}
\end{equation}
(see \eqref{condition-on-M}).
\subsubsection*{-- If $M\geq q^{1/2+\eta}$,} we apply the bound
\eqref{bound2} and sum over $d\leq D$, to find that
\begin{equation}\label{bound-a}
\sum_{d\leq D} T_{d,X}
\ll X^{1+\varepsilon}\Bigl(\frac{1}{q^{1/2}}+\frac{q^{1/2}}{M}\Bigr)
\ll X^{1+\varepsilon}q^{-\eta}
\end{equation}
in that case.
\subsubsection*{-- If $q^{2\delta}\leq M< q^{1/2+\eta}$,} we then
apply the bound \eqref{bound3} and sum over $d\leq D$, obtaining
\begin{equation}\label{bound-b}
\sum_{d\leq D}T_{d,X}\ll
X^{1+\varepsilon}\Bigl(\frac{1}{q^{\delta}}
+\frac{1}{q^{1/4}}+\frac{q^{1/2+\eta/2}}{X^{1/2}}\Bigr)
\ll X^{1+\varepsilon}\Bigl(q^{-\delta}+\frac{q^{1/2+\eta/2}}{X^{1/2}}\Bigr).
\end{equation}
This covers all of the ranges for $M$. We now choose $\eta, \delta>0$
such that the bound in \eqref{bound-a} is equal to the second term in
\eqref{bound-b}, and the first term in \eqref{bound-b} is consistent
with the second term in \eqref{bound-c}. That is, we choose
$q^{\eta}=(X/q)^{1/3}$ and
$q^{\delta}=\frac{X^{1/8}}{Z^{5/6}q^{1/6}}$. Therefore we have in all
cases the estimate
$$
\sum_{d\leq D}T_{d,X} \ll
X^{2/3+\varepsilon}q^{1/3}+Z^{5/6}X^{7/8+\varepsilon}q^{1/6}+X^{1+\varepsilon}q^{7/32-1},
$$
for any~$\varepsilon>0$, under the assumption that
$$
X\gg D^{8/3}q^{4/3}Z^{-4/3},
$$
and the implied constant depends only on~$\varepsilon$ and the conductor
of~$\mathcal{F}$.
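For the record, the choice of~$\eta$ comes from equating the bound~\eqref{bound-a} with the second term in~\eqref{bound-b}: $q^{-\eta}=q^{1/2+\eta/2}X^{-1/2}$ forces $X=q^{1+3\eta}$, i.e.\ $q^{\eta}=(X/q)^{1/3}$; the value of~$\delta$ follows similarly by equating $Xq^{-\delta}$ with the second term in~\eqref{bound-c}.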
Finally we combine this with the previously noted estimate
$$
\sum_{d>D}T_{d,X}\ll
\frac{\|K\|_\infty X^{1+\varepsilon}}{D}
$$
(recall that for a non-exceptional trace function, we
have~$\|\widehat{K}\|_{\infty}\ll 1$ where the implied constant depends
only on the conductor of~$\mathcal{F}$), to conclude that
$$
\sum_{n\geq 1}\lambda(n)^2K(n)V\Bigl(\frac{n}{X}\Bigr)\ll
X^{2/3+\varepsilon}q^{1/3}+Z^{5/6}X^{7/8+\varepsilon}q^{1/6}+X^{1+\varepsilon}/D.
$$
We take $D=q^{\gamma}$ for some small~$\gamma>0$, and then we have
$$
\sum_{n\geq 1}\lambda(n)^2K(n)V\Bigl(\frac{n}{X}\Bigr)\ll
X^{2/3+\varepsilon}q^{1/3}+Z^{5/6}X^{7/8+\varepsilon}q^{1/6}+X^{1+\varepsilon}q^{-\gamma},
$$
provided that
$$
X\gg q^{4/3+8\gamma/3}/Z^{4/3},
$$
where the implied constant depends only on~$\varepsilon$ and the conductor
of~$\mathcal{F}$.
This concludes the proof of Corollary~\ref{RScor}.
\begin{bibdiv}
\begin{biblist}
\bib{AHLS}{article}{
author={Aggarwal, K.},
author={Holowinsky, R.},
author={Lin, Y.},
author={Sun, Q.},
title={The Burgess bound via a trivial delta method},
note={\url{arXiv:1803.00542v1}},
date={2018},
}
\bib{Blomer}{article}{
author={Blomer, V.},
title={Subconvexity for twisted $L$-functions on ${\rm GL}(3)$},
journal={Amer. J. Math.},
volume={134},
date={2012},
number={5},
pages={1385--1421},
}
\bib{CI}{article}{
author={Conrey, J. B.},
author={Iwaniec, H.},
title={The cubic moment of central values of automorphic $L$-functions},
journal={Ann. of Math. (2)},
volume={151},
date={2000},
number={3},
pages={1175--1216},
doi={10.2307/121132},
}
\bib{FKM1}{article}{
author={Fouvry, {\'E}.},
author={Kowalski, E.},
author={Michel, Ph.},
title={Algebraic twists of modular forms and Hecke orbits},
journal={GAFA},
volume={25},
note={\url{arXiv:1207.0617}},
date={2015},
number={2},
pages={580-657},
}
\bib{FKMd3}{article}{
author={Fouvry, {\'E}.},
author={Kowalski, E.},
author={Michel, Ph.},
title={On the exponent of distribution of the ternary divisor function},
journal={Mathematika},
note={\url{arXiv:1304.3199}},
date={2015},
volume={61},
number={1},
pages={121-144},
}
\bib{FKM2}{article}{
author={Fouvry, \'E.},
author={Kowalski, E.},
author={Michel, Ph.},
title={Algebraic trace functions over the primes},
note={\url{arXiv:1211.6043}},
journal={Duke Math. Journal},
date={2014},
volume={163},
pages={1683-1736},
number={9},
}
\bib{pisa}{article}{
author={Fouvry, {\'E}.},
author={Kowalski, E.},
author={Michel, Ph.},
title={Trace functions over finite fields and their applications},
book={
series={Colloquio de Giorgi},
publisher={Springer},
},
date={2014},
}
\bib{short-sums}{article}{
author={Fouvry, {\'E}.},
author={Kowalski, E.},
author={Michel, Ph.},
author={Raju, C.},
author={Rivat, J.},
author={Soundararajan, K.},
title={On short sums of trace functions},
journal={Ann. Inst. Fourier},
date={2017},
volume={67},
pages={423--449},
}
\bib{FI}{article}{
author={Friedlander, J.B.},
author={Iwaniec, H.},
title={Incomplete Kloosterman sums and a divisor problem},
note={(with an appendix by
B. J. Birch and E. Bombieri)},
journal={Ann. of Math. (2)},
volume={121},
date={1985},
number={2},
pages={319--350},
}
\bib{Goldfeld}{book}{
author={Goldfeld, D.},
title={Automorphic forms and $L$-functions for the group ${\rm
GL}(n,\mathbf{R})$},
series={Cambridge Studies in Advanced Mathematics},
volume={99},
note={With an appendix by Kevin A. Broughan},
publisher={Cambridge University Press, Cambridge},
date={2006},
pages={xiv+493},
}
\bib{HMQ}{article}{
author={Holowinsky, R.},
author={Munshi, R.},
author={Qi, Z.},
title={Character sums of composite moduli and hybrid subconvexity},
conference={
title={Advances in the theory of automorphic forms and their
$L$-functions},
},
book={
series={Contemp. Math.},
volume={664},
publisher={Amer. Math. Soc., Providence, RI},
},
date={2016},
pages={135--148},
}
\bib{HN}{article}{
author={Holowinsky, R.},
author={Nelson, P.},
title={Subconvex bounds on $\GL(3)$ via degeneration to frequency zero},
journal={Math. Ann.},
volume={372},
date={2018},
number={1-2},
pages={299--319},
}
\bib{iwaniec}{book}{
author={Iwaniec, H.},
title={Topics in classical automorphic forms},
series={Graduate Studies in Mathematics},
volume={17},
publisher={American Mathematical Society, Providence, RI},
date={1997},
pages={xii+259},
}
\bib{ESDE}{book}{
author={Katz, N. M.},
title={Exponential sums and differential equations},
series={Annals of Mathematics Studies},
volume={124},
publisher={Princeton University Press},
address={Princeton, NJ},
date={1990},
}
\bib{KimSar}{article}{
author={Kim, Henry H.},
author={Sarnak, Peter},
title={Refined estimates towards the Ramanujan and Selberg conjectures},
note={Appendix to H. Kim, Functoriality for the exterior square of ${\rm GL}_4$ and the
symmetric fourth of ${\rm GL}_2$},
journal={J. Amer. Math. Soc.},
volume={16},
date={2003},
number={1},
pages={139--183},
}
\bib{KMS}{article}{
author={Kowalski, Emmanuel},
author={Michel, Ph.},
author={Sawin, Will},
title={Stratification and averaging for exponential sums: Bilinear forms with generalized Kloosterman sums},
journal={Annali della Scuola Normale Superiore di Pisa (to appear)},
note={\url{arXiv:1802.09849}},
date={2018},
}
\bib{Lin}{article}{
author={Lin, Y.},
title={Bounds for twists of $\GL(3)$ $L$-functions},
note={\url{arXiv:1802.05111}},
date={2018},
}
\bib{Miller}{article}{
author={Miller, S. D.},
title={Cancellation in additively twisted sums on ${\rm GL}(n)$},
journal={Amer. J. Math.},
volume={128},
date={2006},
number={3},
pages={699--729},
}
\bib{Molteni}{article}{
author={Molteni, Giuseppe},
title={Upper and lower bounds at $s=1$ for certain Dirichlet series with
Euler product},
journal={Duke Math. J.},
volume={111},
date={2002},
number={1},
pages={133--158},
}
\bib{Munshi}{article}{
author={Munshi, R.},
title={The circle method and bounds for $L$-functions---IV: Subconvexity
for twists of $\rm GL(3)$ $L$-functions},
journal={Ann. of Math. (2)},
volume={182},
date={2015},
number={2},
pages={617--672},
}
\bib{Munshi1}{article}{
author={Munshi, Ritabrata},
title={Twists of $\GL(3)$ $L$-functions},
note={\url{arXiv:1604.08000}},
date={2016},
}
\bib{PY}{article}{
author={Petrow, Ian},
author={Young, Matthew},
title={The Weyl bound for Dirichlet $L$-functions of cube-free conductor},
note={\url{arXiv:1811.02452}},
date={2018},
}
\bib{SZ}{article}{
author={Sun, Qingfeng},
author={Zhao, Rui},
title={Bounds for ${\rm GL}_3$ $L$-functions in depth aspect},
journal={Forum Math.},
volume={31},
date={2019},
number={2},
pages={303--318},
}
\bib{Zac}{article}{
author = {Zacharias, Rapha\"el},
title = {Simultaneous non-vanishing for Dirichlet $L$-functions},
journal = {Annales de l'Institut Fourier},
publisher = {Association des Annales de l'institut Fourier},
volume = {69},
number = {4},
year = {2019},
pages = {1459-1524},
}
\bib{FZ}{article}{
author={Zhou, Fan},
title={The Voronoi formula on $GL(3)$ with ramification},
note={\url{arXiv:1806.10786}},
date={2018},
}
\end{biblist}
\end{bibdiv}
\end{document}
| {
"redpajama_set_name": "RedPajamaArXiv"
} |
\section{Shape Priors for Object SLAM}
In this section, we explain in more detail the benefit of using learnt shape priors in object SLAM. We evaluate the related object-oriented SLAM methods with respect to three important properties: \textit{1. Can the system reconstruct previously unseen objects? 2. Is the reconstruction complete? 3. Does the reconstruction preserve detailed shape?}
The first line of works~\cite{SLAM++, Tateno2016When2.5D} relies on a pre-scanned CAD model database to perform online detection and registration. The reconstructed object shapes are complete and detailed, but the system cannot handle new objects. The second category of works~\cite{maskfusion, fusion++} leverages 2D segmentation masks, treats each segmented object as an individual instance and performs reconstruction on-the-fly. This reconstruction-by-segmentation strategy can reconstruct arbitrary unseen object shapes in great detail, but the reconstructed shape is incomplete due to partial observation and missing depth.
The last line of research represents objects as simple geometric shapes, such as ellipsoids~\cite{quadricslam} or cuboids~\cite{cubeslam}.
This kind of geometric representation can be applied to arbitrary unseen shapes and preserves some important properties like scale and orientation, but the level of detail in the shapes is lost.
With learnt shape priors, all three properties can be achieved at the same time. The generative property of the prior means it can generalize to unseen shapes. Object shapes decoded from latent codes are guaranteed to be detailed and complete, even from a single partial view. The compact representation also benefits the optimization process. Table \ref{tab:related_work} provides an overview of the related works under the different properties. Only NodeSLAM~\cite{node-slam} and DSP-SLAM achieve all three important properties. Unlike NodeSLAM, DSP-SLAM can also deal with large scale environments and monocular RGB inputs.
\section{Full Derivation of Jacobians}
As mentioned in the main paper, one of our contributions is the fast Gauss-Newton optimizer, which is crucial to achieve real-time performance. This section provides the full derivation of the Jacobians of each individual residual term.
\subsection{Jacobian of Surface Term Residual \label{sec:J_surf}}
As shown in Eq. 1 in the main paper, the surface term residual at pixel $\matr{u} \in \mathbf{\Omega}_s$ is simply the SDF value of the back-projected point under object coordinate:
\begin{equation}
e_s(\matr{u}) = G(\matr{T}_{oc} \pi^{-1}(\matr{u}, {\mathcal{D}}), \matr{z}) = G(^o\matr{x}, \matr{z})
\end{equation}
Its Jacobian with respect to object pose and shape $[\boldsymbol{\xi}_{oc};\matr{z}]$ is defined as:
\begin{equation} \label{eq:J_surf}
\matr{J}_s = \frac{\partial e_s(\matr{u})}{\partial [\boldsymbol{\xi}_{oc};\matr{z}]}
\end{equation}
where $\boldsymbol{\xi}_{oc} \in \mathbb{R}^7$ is the Cartesian representation (twist coordinate) of the corresponding element in Lie Algebra $\mathfrak{sim}(3)$. The Jacobian with respect to the shape code $\frac{\partial e_s(\matr{u})}{\partial \matr{z}}$ could be obtained directly via back-propagation. To derive the Jacobian with respect to object pose $\boldsymbol{\xi}_{oc}$, we first factorize it using chain rule:
\begin{table}[tbp]
\center
\setlength\tabcolsep{4pt}
\begin{tabular}{c||l|l|l||l|l}
Method & \specialcell[b]{Unseen\\Objects} & \specialcell[b]{Full\\Shape} & \specialcell[b]{Detailed\\Shape} & \specialcell[b]{RGB\\Input} & \specialcell[b]{Large\\Scene} \\ \hline \hline
\specialcell[b]{2.5D is not\\enough\cite{Tateno2016When2.5D}} & ~ & \checkmark & \checkmark & ~ & ~ \\ \hline
\specialcell[b]{SLAM++\\\cite{SLAM++}} & ~ & \checkmark & \checkmark & ~ & ~\\ \hline\hline
\specialcell[b]{Mask-\\Fusion \cite{maskfusion}} & \checkmark & ~ & \checkmark & ~ & ~ \\ \hline
\specialcell[b]{Fusion++\\\cite{fusion++}} & \checkmark & ~ & \checkmark & ~ & ~\\ \hline\hline
\specialcell[b]{Quadric-\\SLAM \cite{quadricslam}} & \checkmark & \checkmark & ~ & \checkmark &\checkmark \\ \hline
\specialcell[b]{Cube-\\SLAM \cite{cubeslam}} & \checkmark & \checkmark & ~ & \checkmark & \checkmark\\ \hline\hline
\specialcell[b]{Node-\\SLAM \cite{node-slam}} & \checkmark & \checkmark & \checkmark & ~ & ~ \\ \hline
\textbf{\specialcell[b]{DSP-\\SLAM}} & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark
\end{tabular}
\caption{Comparison of the properties of DSP-SLAM with respect
to other object-oriented SLAM systems.
\vspace{-1.0cm}}
\label{tab:related_work}
\end{table}
\begin{equation} \label{eq:J_surf_pose_1}
\frac{\partial e_s(\matr{u})}{\partial \boldsymbol{\xi}_{oc}} = \frac{\partial G(^o\matr{x}, \matr{z})}{\partial ^o\matr{x}} \frac{\partial ^o\matr{x}}{\partial \boldsymbol{\xi}_{oc}}
\end{equation}
Then the first term $\frac{\partial G(^o\matr{x}, \matr{z})}{\partial ^o\matr{x}}$ can also be obtained via back-propagation. The second Jacobian term could be computed by applying a left perturbation to the pose $\matr{T}_{oc}$:
\begin{align}
\frac{\partial ^o\matr{x}}{\partial \boldsymbol{\xi}_{oc}} & = \lim_{\delta \boldsymbol{\xi} \to \matr{0}} \frac{\exp\big(\delta \boldsymbol{\xi}^\wedge\big) \matr{T}_{oc} {^c\matr{x}} - \matr{T}_{oc} {^c\matr{x}}}{\delta \boldsymbol{\xi}} \\
& = \lim_{\delta \boldsymbol{\xi} \to \matr{0}} \frac{\big(\matr{I} + \delta \boldsymbol{\xi}^\wedge\big) \matr{T}_{oc} {^c\matr{x}} - \matr{T}_{oc} {^c\matr{x}}}{\delta \boldsymbol{\xi}} \\
& = \lim_{\delta \boldsymbol{\xi} \to \matr{0}} \frac{\delta \boldsymbol{\xi}^\wedge \matr{T}_{oc} {^c\matr{x}}}{\delta \boldsymbol{\xi}} = \lim_{\delta \boldsymbol{\xi} \to \matr{0}} \frac{\delta \boldsymbol{\xi}^\wedge {^o\matr{x}}}{\delta \boldsymbol{\xi}} \label{eq:J_surf_pose_2}
\end{align}
where $\exp(\cdot)$ is the exponential map from Lie Algebra $\mathfrak{sim}(3)$ to the corresponding Lie Group $\mathbf{Sim}(3)$, and $\cdot ^\wedge$ is the hat-operator that maps a twist coordinate in $\mathbb{R}^7$ to $\mathfrak{sim}(3)$:
\begin{equation} \label{eq:hat}
\boldsymbol{\xi}^ \wedge = \begin{bmatrix} \boldsymbol{\nu} \\ \boldsymbol{\phi} \\ s \end{bmatrix}^ \wedge = \begin{bmatrix} \boldsymbol{\phi}_{\times} + s\matr{I} & \boldsymbol{\nu} \\ \matr{0} & 0\end{bmatrix}
\end{equation}
$\boldsymbol{\nu} \in \mathbb{R}^3$, $\boldsymbol{\phi} \in \mathbb{R}^3$ and $s \in \mathbb{R}$ represent the translation, rotation and scale part of the twist coordinate respectively. $(\cdot)_\times$ maps from $\mathbb{R}^3$ to $\mathfrak{so}(3)$ (skew-symmetric matrices). With the equation above, the Jacobian term in Eq.~\ref{eq:J_surf_pose_2} can be computed as: \vspace{-0.3cm}
\begin{align}
\lim_{\delta \boldsymbol{\xi} \to \matr{0}} \frac{\delta \boldsymbol{\xi}^\wedge {^o\matr{x}}}{\delta \boldsymbol{\xi}} & = \lim_{\delta \boldsymbol{\xi} \to \matr{0}} \frac{\delta \boldsymbol{\phi}_\times {^o\matr{x}} + \delta s{^o\matr{x}} + \delta \boldsymbol{\nu}}{\delta \boldsymbol{\xi}} \\
& = \lim_{\delta \boldsymbol{\xi} \to \matr{0}} \frac{ \delta \boldsymbol{\nu} - {^o\matr{x}}_\times \delta \boldsymbol{\phi}+ \delta s{^o\matr{x}}}{\delta \boldsymbol{\xi}} \\
& = \begin{bmatrix} \matr{I} & -^o\matr{x}_{\times} & ^o\matr{x} \end{bmatrix} \label{eq:J_surf_pose_3}
\end{align}
The full Jacobian of the surface consistency term residual with respect to the object pose can be obtained by combining Eq. \ref{eq:J_surf_pose_1}, \ref{eq:J_surf_pose_2}, \ref{eq:J_surf_pose_3}. Note that we derive the Jacobian with a slight abuse of notation, omitting the homogeneous representation of points. For a more detailed treatment of Lie theory, please refer to~\cite{barfoot_2017} or \cite{sola2018_micro_lie}.
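As an illustration, the assembled pose Jacobian is only a few lines of NumPy. The following is a minimal sketch under our own (hypothetical) naming, not the released implementation; \texttt{dG\_dx} denotes the gradient $\partial G / \partial\,{^o\matr{x}}$ obtained via back-propagation:
\begin{verbatim}
import numpy as np

def skew(v):
    # Skew-symmetric matrix: skew(v) @ w == np.cross(v, w)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def d_point_d_xi(x_o):
    # 3x7 block [ I | -x_o^ | x_o ]: derivative of the transformed
    # point w.r.t. a left perturbation [nu, phi, s] of T_oc
    x_o = np.asarray(x_o, dtype=float)
    return np.hstack([np.eye(3), -skew(x_o), x_o.reshape(3, 1)])

def surface_jacobian_pose(dG_dx, x_o):
    # Chain rule: (1x3) SDF gradient times (3x7) point Jacobian
    return np.asarray(dG_dx, dtype=float) @ d_point_d_xi(x_o)
\end{verbatim}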
\subsection{Jacobian of Rendering Term Residual}
As stated in the main paper, the rendering term residual of pixel $\matr{u}$ is
\begin{equation}
e_r(\matr{u}) = d_\matr{u} - \hat{d}_{\matr{u}}
\end{equation}
To compute the Jacobian of the rendering terms $\matr{J}_{r}$, we can expand it using chain rule:
\begin{align}
\matr{J}_{r} & = \frac{\partial e_{r}}{\partial \left[\boldsymbol{\xi}_{oc}; \matr{z} \right]} \\
& = \sum_{k=1}^M \frac{\partial e_{r}}{\partial o_k} \frac{\partial o_k}{\partial s_k} \frac{\partial{G({^o\matr{x}_k}, \matr{z})}}{\partial \left[ \boldsymbol{\xi}_{oc}; \matr{z} \right]} \label{eq:J_render}
\end{align}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.48\textwidth]{figures/supp/rendering_values.png}
\caption{Demonstration of the depth rendering process. Only very few ray points contribute to the overall Jacobian term.
\vspace{-0.5cm}}
\label{fig:render_values}
\end{figure}
where $\{^o\matr{x}_k\}_{k=1}^M$ is the depth-ordered set of sampled points along the ray of back-projecting pixel $\matr{u}$. The third term $\frac{\partial{G( ^o\matr{x}_k, \matr{z})}}{\partial \left[ \boldsymbol{\xi}_{oc}; \matr{z} \right]}$ has exactly the same form as the surface term Jacobian in Eq.~\ref{eq:J_surf}, thus it can be computed following Sec.~\ref{sec:J_surf}. The second term $\frac{\partial o_k}{\partial s_k}$ is either a constant value or $0$, as the SDF-to-occupancy conversion is linear inside the cut-off threshold and constant elsewhere.
\begin{equation} \label{eq:conversion_grad}
\frac{\partial o_k}{\partial s_k} =
\begin{cases}
-\frac{1}{2\sigma} & |s_k| < \sigma\\
0 & \text{elsewhere}
\end{cases}
\end{equation}
To compute the first term, we expand the residual term using Eq. 3 and Eq. 4 in the main paper following~\cite{multi-view_supervision}: \vspace{-0.3cm}
\begin{equation*}
\begin{aligned}
e_r & = d_\matr{u} - \bigg( \sum_{i=1}^M d_i \ o_i \prod_{j=1}^{i-1} (1-o_j) + d_{M+1} \prod_{j=1}^M (1 - o_j) \bigg) \\
& = \sum_{i=1}^M \psi(i) \ o_i \prod_{j=1}^{i-1} (1-o_j) + \psi(M+1) \prod_{j=1}^M (1 - o_j) \\
& = \sum_{i=1}^{M+1} \psi(i) \prod_{j=1}^{i-1} (1-o_j) - \sum_{i=1}^M \psi(i) \prod_{j=1}^i (1-o_j) \\
& = \psi(1) + \sum_{i=1}^M (\psi(i+1) - \psi(i)) \prod_{j=1}^i (1-o_j)
\end{aligned}
\end{equation*}
where $\psi(i) = d_{\matr{u}} - d_i$ is the depth residual for each of the sampled points along the ray. Differentiating with respect to the sample point, we reach: \vspace{-0.2cm}
\begin{align}
\frac{\partial e_{r}}{\partial o_k}
& = \sum_{i=1}^M (\psi(i+1) - \psi(i)) \frac{\partial}{\partial o_k}\prod_{j=1}^i (1-o_j) \\
& = \sum_{i=k}^M (\psi(i) - \psi(i+1)) \prod_{j=1, j \neq k}^i (1-o_j) \\
& = \Delta d\sum_{i=k}^M \prod_{j=1, j \neq k}^i (1-o_j) \label{eq:dr_do}
\end{align}
As we are sampling uniformly along the ray, the coefficient $\psi(i) - \psi(i+1) = d_{i+1} - d_i$ reduces to $\Delta d$. To understand this expression, we can multiply both sides of Eq.~\ref{eq:dr_do} by $(1-o_k)$, which gives us:
\begin{equation}
\frac{\partial e_{r}}{\partial o_k}(1 - o_k) = \Delta d\sum_{i=k}^M t_i
\end{equation}
where $t_i = \prod_{j=1}^i (1 - o_j)$ represents the accumulated transmittance at point $^o\matr{x}_i$ along the ray. Before the ray hits the surface, $o_k$ is zero, so the factor $(1-o_k)$ has no effect and the Jacobian term equals the sum of the transmittances of all points from point $k$ onwards. After the ray enters the solid body, this Jacobian term becomes zero. This means that only points before the solid body contribute to the rendered depth value.
Now from Eq.~\ref{eq:conversion_grad} and Eq.~\ref{eq:dr_do}, we already know that many Jacobian terms that make up the final Jacobian in Eq.~\ref{eq:J_render} should be zero. Therefore, we could further speed up the optimization by not evaluating the third term $\frac{\partial{G( ^o\matr{x}_k, \matr{z})}}{\partial \left[ \boldsymbol{\xi}_{oc}; \matr{z} \right]}$ at those points. Figure~\ref{fig:render_values} is a demonstration of the rendering process and the sparse nature of the resulting Jacobians.
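To make this sparsity concrete, the following minimal NumPy sketch evaluates Eq.~\ref{eq:dr_do} directly for all samples of one ray (our own illustrative code, not the released implementation):
\begin{verbatim}
import numpy as np

def de_r_do(o, delta_d):
    # d e_r / d o_k = delta_d * sum_{i>=k} prod_{j<=i, j!=k} (1 - o_j)
    o = np.asarray(o, dtype=float)
    grad = np.zeros(len(o))
    for k in range(len(o)):
        one_minus = 1.0 - o
        one_minus[k] = 1.0          # exclude the factor j = k
        t = np.cumprod(one_minus)   # t[i] = prod_{j<=i, j!=k} (1 - o_j)
        grad[k] = delta_d * t[k:].sum()
    return grad
\end{verbatim}
Samples well behind the surface have vanishing accumulated transmittance, so their gradients are (near-)zero and the corresponding evaluations of the network Jacobian can be skipped, which is exactly the sparsity visualized in Fig.~\ref{fig:render_values}.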
\subsection{Jacobian of Prior Terms}
Based on the shape regularisation energy defined in Eq. 6 in the main paper, the residual of this energy term is simply the shape code vector itself:
\begin{equation} \label{eq:E_code}
e_c = \matr{z}
\end{equation}
Based on Eq.~\ref{eq:E_code}, we have
\begin{equation}
\frac{\partial e_c}{\partial \boldsymbol{\xi}_{oc}} = \matr{0}, \frac{\partial e_c}{\partial \matr{z}} = \matr{I}
\end{equation}
Optionally, we can also apply structural priors such as constraining the object orientation to be aligned with the ground plane, as in~\cite{wang2020directshape}. We can define the rotation prior residual as
\begin{align}
e_{rot} & = 1 - \matr{R}_{co}(0:2,1) \cdot \matr{n}_g \\
& = 1 - \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}\matr{R}_{oc}\matr{n}_g
\end{align}
\begin{figure}[tpb]
\centering
\includegraphics[width=0.48\linewidth]{figures/redwood/gt_together.png}
\includegraphics[width=0.48\linewidth]{figures/redwood/no_gt_together.png}
\caption{Results with GT (left) vs. Mask-RCNN (right) masks \vspace{-0.6cm}}
\label{fig:gt_mask}
\end{figure}
where $\matr{R} \in \mathbf{SO}(3)$ denotes the rotation part of the transformation matrix, and $(0:2, 1)$ represents the operation of getting the second column of the matrix. $\matr{n}_g$ is the normal direction to the ground plane under camera coordinate frame. The Jacobian with respect to pose can be easily obtained following Eq.~\ref{eq:J_surf_pose_3}:
\begin{align}
\frac{\partial e_{rot}}{\partial \boldsymbol{\xi}_{oc}} &= - \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}\frac{\partial \matr{R}_{oc}\matr{n}_g}{\partial \boldsymbol{\xi}_{oc}} \\
& = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} \matr{0} & (\matr{R}_{oc}\matr{n}_g)_{\times} & \matr{0} \end{bmatrix} \\
& = \begin{bmatrix} 0 & 0 & 0 & (\matr{R}_{oc}\matr{n}_g)_{\times}(1,0:2) & 0 \end{bmatrix}
\end{align}
where $(1, 0:2)$ represents the operation of taking the second row of the matrix. As this residual has no effect on object shape, we have:
\begin{equation}
\frac{\partial e_{rot}}{\partial \matr{z}} = \matr{0}
\end{equation}
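For illustration, a minimal NumPy sketch of this prior and its Jacobian (our own code; \texttt{skew} is the same helper as in the sketch of Sec.~\ref{sec:J_surf}):
\begin{verbatim}
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rotation_prior(R_oc, n_g):
    # e_rot = 1 - [0 1 0] R_oc n_g, and its 1x7 Jacobian w.r.t.
    # xi_oc = [nu, phi, s]; only the rotation block is non-zero.
    v = R_oc @ n_g
    e = 1.0 - v[1]
    J = np.zeros(7)
    J[3:6] = skew(v)[1]   # second row of (R_oc n_g)_x
    return e, J
\end{verbatim}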
\section{Experiment Details and Run-time Analysis}
We evaluate the run-time performance of our full SLAM system (stereo+LiDAR) on a Linux system with an Intel Core i9 9900K CPU at 3.60GHz and an NVIDIA GeForce RTX 2080 GPU with 8GB of memory. The 2D detection boxes and masks are obtained using MaskRCNN~\cite{maskrcnn},\footnote{\url{https://github.com/facebookresearch/detectron2}} throughout all the experiments. Initial object poses are inferred using the LiDAR-based 3D b-box detector SECOND~\cite{second}.\footnote{\url{https://github.com/traveller59/second.pytorch}} Our object reconstruction pipeline is implemented in Python with PyTorch. The back-end object-SLAM framework is implemented in C++ as an extension of the original ORB-SLAM2 implementation. The Python part is embedded into C++ using pybind11.
Table~\ref{tab:timing} lists the breakdown of the run-time performance of different major components. Note that all those components are performed only at key-frames.
For KITTI sequences, we adopted two design choices to achieve real-time performance. Firstly, we noticed that shape estimation did not improve much over time on KITTI, thus we perform full optimization only for new objects and only update poses for existing objects (as stated in the main paper). Secondly, we abort new-object optimization whenever necessary to ensure that a new key-frame is inserted in time. These choices allow our system to operate at $10$Hz on KITTI. In sequences such as Freiburg and Redwood-OS, it is beneficial to update the shape more frequently, with fewer GN iterations to guarantee real-time performance. Object meshes can be extracted optionally using marching cubes with an extra GPU, for visualization only.
\begin{table}[tbp]
\centering
\begin{tabular}{l|c}
\textbf{Component} & \textbf{Time (ms)} \\ \hline
Mask R-CNN Detector & 70 / frame \\
3D LiDAR Detector & 60 / frame \\
Pose-shape Optimization & 20$\times$10 / new object \\
Pose-only Optimization & 4$\times$5 / vis. object \\ \hline
\end{tabular}
\caption{Run-time analysis with system components \vspace{-0.4cm}}
\label{tab:timing}
\end{table}
\section{Ablation with GT masks}
We have shown promising reconstruction results on cars on both the KITTI and Freiburg datasets. Chairs have thin structures and a complex topology and are more challenging to reconstruct than cars. We noticed that the thin legs of the last chair in Fig.~10 in the main paper were not properly reconstructed. This is because our reconstruction makes use of Mask-RCNN segmentation masks, which are noisy and affected by shadows. This can result in background points being associated with the chair, leading to incorrect results. We conducted an ablation study using ground-truth masks as input. Fig.~\ref{fig:gt_mask} illustrates the upper bound on quality that could be achieved with clean GT masks.
\section{More Qualitative Results}
We show more qualitative reconstruction results in Fig.~\ref{fig:recon_results}. For each scene the RGB image is shown on the top and the reconstruction results are shown underneath. We also show an example of reconstructed map in Fig.~\ref{fig:map_result}.
\begin{figure*}[tpb]
\centering
\includegraphics[width=0.48\textwidth]{figures/supp/00-000117.png}
\includegraphics[width=0.48\textwidth]{figures/supp/00-001190.png}
\includegraphics[width=0.48\textwidth]{figures/supp/recon-00-000117.png}
\includegraphics[width=0.48\textwidth]{figures/supp/recon-00-001190.png}
\includegraphics[width=0.48\textwidth]{figures/supp/00-001848.png}
\includegraphics[width=0.48\textwidth]{figures/supp/00-003264.png}
\includegraphics[width=0.48\textwidth]{figures/supp/recon-00-001848.png}
\includegraphics[width=0.48\textwidth]{figures/supp/recon-00-003264.png}
\includegraphics[width=0.48\textwidth]{figures/supp/00-003516.png}
\includegraphics[width=0.48\textwidth]{figures/supp/07-000195.png}
\includegraphics[width=0.48\textwidth]{figures/supp/recon-00-003516.png}
\includegraphics[width=0.48\textwidth]{figures/supp/recon-07-000195.png}
\caption{More qualitative results on object reconstruction from a single view-point \vspace{-1cm}}
\label{fig:recon_results}
\end{figure*}
\begin{figure*}[tpb]
\centering
\includegraphics[width=0.90\textwidth]{figures/supp/kitti-07-map.png}
\caption{Reconstructed map of KITTI-07. Note the dense reconstruction of the objects and the shape variations. \vspace{-1cm}}
\label{fig:map_result}
\end{figure*}
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Simultaneous Localization and Mapping (SLAM) is the process of estimating the trajectory of a moving camera while reconstructing its surrounding environment. From a purely geometric perspective, SLAM is often regarded as a well-understood or even solved problem. Many state-of-the-art dense SLAM algorithms can achieve accurate trajectory estimation and create high-quality geometric reconstructions that can be used in obstacle avoidance or path planning for mobile robots. However, when it comes to more complex tasks that require scene understanding, geometry-only scene representations fall short of providing key semantic information.
Taking advantage of recent deep learning breakthroughs in semantic segmentation and object detection algorithms~\cite{maskrcnn, faster-rcnn, yolo}, semantic SLAM systems augment geometric low-level map primitives by fusing semantic labels into the 3D reconstruction~\cite{semantic-fusion, semantic-stereo-fusion,Chen2019SuMa++}. However, the resulting scene maps merely consist of a set of labelled 3D points, so reasoning about the scene at the level of objects to infer meaningful information such as the number of objects of each category, their size, shape or relative pose remains a challenging task. Better use of the semantic information is required in the form of an object-centric map representation that allows detailed shape estimation and meaningful instantiation of scene objects.
Our proposed approach forms part of a more recent family of \emph{object-aware} SLAM methods that reconstruct object-centric maps grouping all the low-level geometric primitives (voxels, points ...) that make up the same object into a single instance. Front-end camera tracking and back-end optimization are both performed at the level of object instances. While the first object-level methods, such as SLAM++~\cite{SLAM++}, mapped previously known object instances, more recent systems have taken advantage of instance level semantic segmentation masks \cite{maskrcnn} to achieve object level reconstruction for unknown objects \cite{fusion++} even in the presence of dynamic objects \cite{maskfusion, midfusion}.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.235\textwidth]{figures/08-2758.png}
\includegraphics[width=0.235\textwidth]{figures/recon-08-2758.png}
\caption{Qualitative shape and pose results on a stereo+LiDAR KITTI sequence. A very sparse set of LiDAR points was used to reconstruct each car. LiDAR points on the road are only shown for illustration.\vspace{-0.5cm}}
\label{fig:single-view}
\end{figure}
However, these early object level SLAM systems exhibit major drawbacks: They either require a pre-known database of object instances \cite{SLAM++}; or reconstruct objects from scratch without exploiting shape priors \cite{maskfusion, midfusion, fusion++}, which results in partial or incomplete object reconstructions. We improve this by exploiting the regularity of shapes within an object category in the form of learned shape priors, defined as a latent code $\matr{z}$ and a generative model $G(\matr{z})$ that decodes it into its full geometry. This brings us several advantages; object shapes decoded from latent codes are guaranteed to be detailed and complete, regardless of partial observations or changes in view-points, they provide a compact representation and they can be optimized using the gradients obtained through back-propagation.
Using ORB-SLAM2~\cite{orbslam2} as a sparse camera tracking and mapping backbone, DSP-SLAM takes the reconstructed 3D point-cloud as input and fits a latent code to each detected object instance, using a combination of 3D surface consistency and rendered depth losses. Foreground objects, background features and camera poses are further refined via bundle adjustment using a joint factor graph. We show DSP-SLAM operating in $3$ different modes: monocular, stereo, and stereo+LiDAR. The monocular and stereo systems use the respective ORB-SLAM2 modalities as the SLAM backbone and the reconstructed 3D point-clouds to reconstruct the detected objects. The stereo+LiDAR system uses stereo ORB-SLAM2 as the SLAM backbone but in addition it incorporates a sparse set of LiDAR measurements (as few as 50 per object) for object reconstruction and pose-only optimization.
\noindent\textbf{Contributions:} While DSP-SLAM is not the first approach to leverage shape priors for 3D reconstruction~\cite{frodo,node-slam} from image sequences, it innovates in various ways.
Firstly, unlike~\cite{frodo,node-slam}, our map not only represents objects, but also reconstructs the background as sparse feature points, optimizing them together in a joint factor graph, marrying the best properties of feature-based~\cite{orbslam2} (highly accurate camera tracking) and object-aware SLAM (high level semantic map). Secondly, although Node-SLAM~\cite{node-slam} also incorporates shape priors within a real-time SLAM system, it uses dense depth images for shape optimization, while DSP-SLAM can operate with RGB-only monocular streams and requires as few as $50$ 3D points per object to obtain accurate shape estimates. Finally, although both FroDO~\cite{frodo} and DSP-SLAM can operate in a monocular RGB setting, FroDO is a slow batch approach that requires all frames to be acquired in advance and associated with their camera poses, while
DSP-SLAM is an online, sequential method that can operate at $10$ frames per second.
In terms of object shape and pose estimation, we improve quantitative and qualitatively over auto-labelling~\cite{Zakharov2020Autolabeling3D}, a state-of-the-art prior-based object reconstruction method. Experiments on the KITTI odometry \cite{kitti_odom} dataset show that, with stereo+LiDAR input our joint bundle adjustment offers improvements in trajectory estimation over the feature-only stereo system ORB-SLAM2~\cite{orbslam2}, used as our backbone. Moreover, DSP-SLAM offers comparable tracking performance to state-of-the-art stereo~\cite{Wang2017StereoDSO}, LiDAR-only~\cite{Chen2019SuMa++} and dynamic~\cite{Bescos2018DynaSLAM} SLAM systems, while providing rich dense object reconstructions. DSP-SLAM also achieves promising qualitative reconstruction results with monocular input on Freiburg Cars \cite{freiburg-cars} and Redwood-OS \cite{choi2016redwood} dataset.
\section{\label{system-overview}System Overview}
DSP-SLAM is a sequential localisation and mapping method that reconstructs the complete detailed shape of detected objects while representing the background coarsely as a sparse set of feature points. Each object is represented as a compact and optimizable code vector $\matr{z}$. We employ DeepSDF \cite{deepsdf} as the shape embedding, which takes as input a shape code $\matr{z} \in \mathbb{R}^{64}$ and a 3D query location $\matr{x} \in \mathbb{R}^{3}$, and outputs the signed distance function (SDF) value $s = G(\matr{x}, \matr{z})$ at the given point. An overview of DSP-SLAM is shown in Fig.~\ref{fig:system}. DSP-SLAM runs at almost real time ($10$ frames per second) and can operate on different modalities (monocular, stereo, or stereo with LiDAR), depending on the available input data.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.115\textwidth]{figures/single_view_shape/000140.png} \hspace{-0.6em}
\includegraphics[width=0.115\textwidth]{figures/single_view_shape/000168.png} \hspace{-0.6em}
\includegraphics[width=0.115\textwidth]{figures/single_view_shape/000428.png} \hspace{-0.6em}
\includegraphics[width=0.115\textwidth]{figures/single_view_shape/000655.png} \hspace{-0.6em}
\includegraphics[width=0.115\textwidth]{figures/single_view_shape/140r.png} \hspace{-0.6em}
\includegraphics[width=0.1115\textwidth]{figures/single_view_shape/168r.png} \hspace{-0.6em}
\includegraphics[width=0.115\textwidth]{figures/single_view_shape/428r.png} \hspace{-0.6em}
\includegraphics[width=0.115\textwidth]{figures/single_view_shape/655r.png} \hspace{-0.6em}
\caption{Shape reconstruction: qualitative results. \vspace{-0.5cm}}
\label{fig:single-view-shape}
\end{figure}
\noindent\textbf{Sparse SLAM backbone:} ORB-SLAM2~\cite{orbslam2} is used as the tracking and mapping backbone, a feature-based SLAM framework that can operate on monocular or stereo sequences. While the tracking thread estimates camera pose at frame-rate from correspondences, the mapping thread builds a sparse map by reconstructing 3D landmarks.
\noindent\textbf{Detections:} We perform object detection at each key-frame, to jointly infer 2D bounding boxes and segmentation masks. In addition, an initial estimate of the object pose is obtained via 3D bounding box detection~\cite{smoke,second}.
\noindent\textbf{Data association:} New detections will either be associated to existing map objects, or instantiated as a \textit{new} object via object-level data association.
Each detected object instance $I$ consists of a 2D bounding box ${\mathcal{B}}$, a 2D mask ${\mathcal{M}}$, the depth observation of a sparse 3D point cloud ${\mathcal{D}}$, and the initial object pose $\matr{T}_{co, 0}$.
\noindent\textbf{Prior-based object reconstruction:} Newly instantiated objects will be reconstructed following the object reconstruction pipeline described in Sec.~\ref{object-recon}. DSP-SLAM takes the set of sparse 3D point observations ${\mathcal{D}}$, which can come from reconstructed SLAM points (in monocular and stereo modes) or LiDAR input (in stereo+LiDAR mode) and optimises the shape code and object pose to minimise surface consistency and depth rendering losses. Objects already present in the map will only have their 6-dof pose updated via pose-only optimization.
\noindent\textbf{Joint map optimisation:} A joint factor graph of point features (from SLAM), objects and camera poses is optimised via bundle adjustment to maintain a consistent map and incorporate loop closure. New objects are added as nodes to the joint factor graph and their relative pose estimates $\matr{T}_{co}$ as camera-object edges. Object-level data association and joint bundle adjustment will be discussed in Sec.~\ref{object-slam}.
\section{\label{object-recon}Object Reconstruction with Shape Priors}
We aim to estimate the full dense shape $\matr{z}$ and 7-DoF pose $\matr{T}_{co}$, represented as a homogeneous transformation matrix $\matr{T}_{co} = [s\matr{R}_{co}, \matr{t}_{co}; \matr{0}, 1] \in \matr{Sim}(3)$, for an object with detections $I = \{{\mathcal{B}}, {\mathcal{M}}, {\mathcal{D}}, \matr{T}_{co, 0}\}$. We formulate this as a joint optimization problem, which iteratively refines the shape code and object pose from an initial estimate. We propose two energy terms $E_{surf}$ and $E_{rend}$ and formulate a Gauss-Newton solver with analytical Jacobians.
\subsection{Surface Consistency Term}
This term measures the alignment between observed 3D points and the reconstructed object surface.
\begin{equation} \label{E_surf}
E_{surf} = \frac{1}{\left| \mathbf{\Omega}_s \right|}\sum_{\matr{u} \in \mathbf{\Omega}_s} G^2(\matr{T}_{oc} \pi^{-1}(\matr{u}, {\mathcal{D}}), \matr{z})
\end{equation}
where $\mathbf{\Omega}_s$ denotes the pixel coordinates of the set of sparse 3D points ${\mathcal{D}}$, which can come from reconstructed SLAM points (in monocular and stereo modes) or LiDAR input (in stereo+LiDAR mode). Ideally, the back-projected point at pixel $\matr{u}$ should perfectly align with the object surface, resulting in a zero SDF value and hence a zero error residual. In practice, we observed that the surface consistency term alone is not sufficient for correct shape and pose estimation in the case of partial observations. Fig.~\ref{fig:shape-ablation} illustrates a case where only points on the back and the right side of a car are observed (shown in green). Using the surface consistency term alone leads to incorrect shape estimation -- much larger than its actual size. To address this issue, we propose a rendering loss that provides point-to-point depth supervision and enforces silhouette consistency to penalize shapes that grow outside of the segmentation mask.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.48\textwidth]{figures/shape_ablation_success.png}
\caption{Illustration of the effectiveness of the rendering term in the presence of partial observations. \textbf{Left:} Detected object and partial surface point observations (green). \textbf{Middle:} Optimisation result with $E_{surf}$ only. The loss is minimised but the shape grows larger than its actual size. \textbf{Right:} Optimisation result with the rendering term. Enforcing the silhouette constraint results in the correct scale. \vspace{-0.6cm}}
\label{fig:shape-ablation}
\end{figure}
\subsection{Differentiable SDF Renderer}
Following \cite{multi-view_supervision, node-slam}, we build our SDF renderer via differentiable ray-tracing. For each pixel $\matr{u}$, we back-project a ray $^c\matr{x} = \matr{o} + d \matr{K}^{-1} \dot{\matr{u}}$ parameterized by the depth value $d$ under camera coordinate space, with $\matr{o}$ being the camera optical centre and $\matr{K}$ being camera intrinsic matrix. We sample $M$ discrete depth values $\{d_i\}$ along each ray within the range $[d_{min}, d_{max}]$, with $d_i = d_{min} + (i-1) \Delta d$, and $\Delta d = (d_{max} - d_{min}) / (M-1)$. The bounds of the depth range are determined by the current estimation of object translation and scale, and are re-computed at each iteration.
\noindent\textbf{Occupancy Probabilities}
The SDF value $s_i$ at each sampled point can be obtained by transforming the sampled points to the object coordinate frame and passing them through the DeepSDF decoder. The SDF value determines whether a given point is occupied by the object or belongs to free space. We apply a piecewise linear function to the predicted SDF values to obtain the occupancy probability $o_i$, defined in Eq.~\ref{sdf2occ}, where $\sigma$ represents the cut-off threshold which controls the smoothness of the transition. We fix $\sigma = 0.01$ throughout our experiments. \vspace{-0.4cm}
\begin{equation} \label{sdf2occ}
s_i = G(\matr{T}_{oc} {^c\matr{x}},\, \matr{z}) \quad \textrm{and} \quad
o_i =
\begin{cases}
1 & s_i < -\sigma\\
\frac{1}{2}-\frac{s_i}{2 \sigma} & \left|s_i\right| \leq \sigma \\
0 & s_i > \sigma
\end{cases}
\end{equation}
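The piecewise-linear map above is simply a clipped linear ramp; a minimal sketch, assuming SDF values in a NumPy array:
\begin{verbatim}
import numpy as np

def sdf_to_occupancy(s, sigma=0.01):
    # Occupancy: 1 inside the object (s < -sigma), 0 in free
    # space (s > sigma), linear across the transition band.
    return np.clip(0.5 - s / (2.0 * sigma), 0.0, 1.0)
\end{verbatim}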
\noindent\textbf{Event Probabilities} When tracing along the ray, the ray either terminates at one of the $M$ sampled points or escapes without hitting any of them. These $M+1$ event probabilities can be defined as:
\begin{eqnarray} \label{term_prob}
& \phi_i = o_i \prod_{j=1}^{i-1} (1- o_j), i = 1, \dots, M \nonumber \\
& \phi_{M + 1} = \prod_{j=1}^{M} (1- o_j)
\end{eqnarray}
\noindent\textbf{Rendered Depth and Rendering Term} With the probabilities defined above, the rendered depth value at each pixel $\matr{u}$ can be computed as the expected depth of the terminating point, as in Eq.~\ref{render_depth}. For consistency, we set $d_{M+1}$, the depth value associated with the escape probability, to a constant value $1.1 d_{max}$, as in \cite{node-slam}.
\begin{equation} \label{render_depth}
\hat{d}_{\matr{u}} = \sum_{i=1}^{M+1} \phi_i d_i
\end{equation}
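Putting the event probabilities and the expected depth together, a single ray can be rendered with one cumulative product; the following is an illustrative sketch under the definitions above, not our actual implementation:
\begin{verbatim}
import numpy as np

def render_ray_depth(o, d, d_max):
    # o: (M,) occupancy probabilities at the sampled depths
    # d: (M,) sampled depth values d_1..d_M (near to far)
    # free[i] = probability the ray passes samples 1..i
    free = np.concatenate([[1.0], np.cumprod(1.0 - o)])
    # termination probabilities phi_1..phi_M plus escape prob
    phi = np.concatenate([o * free[:-1], [free[-1]]])
    # the escape event is assigned the depth 1.1 * d_max
    d_all = np.concatenate([d, [1.1 * d_max]])
    return float(np.dot(phi, d_all))
\end{verbatim}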
Since the rendering is fully differentiable, it can be integrated into our optimization. Unlike~\cite{node-slam, multi-view_supervision}, we perform ray-tracing in continuous space and do not need to discretize the object model. The final rendering term is as follows:
\begin{equation}
E_{rend} = \frac{1}{\left| \mathbf{\Omega}_r \right|} \sum_{\matr{u} \in \mathbf{\Omega}_r} (d_\matr{u} - \hat{d}_{\matr{u}})^2
\end{equation}
where $\mathbf{\Omega}_r = \mathbf{\Omega}_s \cup \mathbf{\Omega}_b$ is the union of surface pixels and pixels not on the object surface but inside the 2D bounding box ${\mathcal{B}}$. Surface pixels $\mathbf{\Omega}_s$ are the same set of pixels used in Eq.~\ref{E_surf}, obtained by projecting the 3D reconstructed SLAM points onto the image masks as discussed in Sec.~\ref{system-overview}. The pixels in $\mathbf{\Omega}_b$ are assigned the same depth value as $d_{M+1} = 1.1 d_{max}$ and provide important silhouette supervision for our optimization, since they penalize renderings that lie outside the object boundary, enforcing empty space there. As the pixels in $\mathbf{\Omega}_b$ do not require a depth measurement, we sample them uniformly inside the 2D bounding box and filter out those inside the segmentation mask.
\subsection{Optimization details}
Our final energy is the weighted sum of the surface and rendering terms and a shape code regularization term:
\begin{equation}
E = \lambda_s E_{surf} + \lambda_r E_{rend} + \lambda_c \norm{\matr{z}}^2
\end{equation}
The hyperparameter values used for optimization, $\lambda_s = 100$, $\lambda_r = 2.5$ and $\lambda_c = 0.25$, were tuned such that the Hessian matrices of the energy terms are of the same order of magnitude. Since all terms are quadratic, we adopt a Gauss-Newton optimisation approach with analytical Jacobians (see the supplemental material for details), initialized from a zero shape code $\matr{z} = \matr{0}$. The initialisation for the object pose
$\matr{T}_{co, 0}$ is given by a LiDAR 3D detector~\cite{second} when LiDAR is available. In the monocular/stereo case, it is given by an image-based 3D detector~\cite{smoke} or by performing PCA on the sparse object point cloud.
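For illustration, a generic Gauss-Newton iteration of the kind used here can be sketched as follows; \texttt{residual} and \texttt{jacobian} are hypothetical callables stacking the weighted energy terms, and a real implementation would compose pose increments on $\matr{Sim}(3)$ rather than adding them:
\begin{verbatim}
import numpy as np

def gauss_newton(residual, jacobian, x0, n_iter=10):
    # Minimize 0.5 * ||r(x)||^2 via the normal equations.
    # residual(x) -> (N,) stacked, weighted residuals
    # jacobian(x) -> (N, D) analytical Jacobian
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx  # caution: pose block lives on a manifold
    return x
\end{verbatim}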
\section{\label{object-slam}Object SLAM}
As an object-based SLAM system, DSP-SLAM builds a joint factor graph of camera poses, 3D feature points and object locations and poses. As Fig.~\ref{system-overview} shows, the factor graph introduces object nodes and camera-object edges.
\subsection{\label{data-association}Object Data Association}
Data association between new detections and reconstructed objects is an important step in object-level SLAM. We aim to associate each detection $I$ to its \textit{nearest} object $o$ in the map, adopting different strategies depending on the input modality. When LiDAR input is available, we compare the distance between the 3D bounding box and the reconstructed object. When only stereo or monocular images are used as input, we count the number of matched feature points between the detection and the object. If multiple detections are associated with the same object, we keep the \textit{nearest} one and reject the others. Detections not associated with any existing object are initialised as new objects, and their shape and pose are optimised following Sec.~\ref{object-recon}. For stereo and monocular input modes, reconstruction only happens when enough surface points have been observed. For detections associated with existing objects, only the pose is refined by running pose-only optimization, and a new camera-object edge is added to the joint factor graph.
\begin{table*}
\renewcommand{\arraystretch}{1.0}
\centering
\begin{tabular}{c|c c c c|c c c c}
\hline
\multirow{2}{*}{Diff.} & \multicolumn{4}{c|}{Auto-labelling\cite{Zakharov2020Autolabeling3D}} & \multicolumn{4}{c}{Ours} \\
\cline{2-9}
& BEV@0.5 & 3D@0.5 & NS@0.5 & NS@1.0 & BEV@0.5 & 3D@0.5 & NS@0.5 & NS@1.0 \\
\hline
E & 80.70 & \textbf{63.96} & 86.52 & 94.31 & \textbf{83.31} & 62.58 & \textbf{88.01} & \textbf{96.86} \\
M & 63.36 & 44.79 & 64.44 & 85.24 & \textbf{75.28} & \textbf{47.76} & \textbf{76.15} & \textbf{89.97} \\
\hline
\end{tabular}
\caption{Quantitative comparison of object cuboid prediction quality with Auto-labelling on KITTI3D on Easy and Moderate samples. Results of Auto-labelling are taken from their paper. Best results are shown as bold numbers.
\vspace{-0.1cm}}
\label{tab:object_detection}
\end{table*}
\begin{figure*}[tbp]
\centering
\includegraphics[width=1.00\textwidth]{figures/labelling_ours/ours-labelling-cropped.png}
\caption{A qualitative comparison of shape reconstruction and pose estimation against Auto-labelling~\cite{Zakharov2020Autolabeling3D}. \textbf{Left:} input RGB image. \textbf{Middle:} result with DSP-SLAM. \textbf{Right:} result with Auto-labelling~\cite{Zakharov2020Autolabeling3D}.\vspace{-0.5cm}}
\label{fig:comparison-auto-labelling}
\end{figure*}
\subsection{\label{joint_ba}Joint Bundle Adjustment}
Our joint map consists of a set of camera poses $C = \{\matr{T}_{wc_i}\}_{i=1}^{M}$, object poses $O = \{\matr{T}_{wo_j}\}_{j=1}^{N}$ and map points $P = \{^w\matr{p}_k\}_{k=1}^{K}$. Our joint BA can be formulated as a non-linear least squares optimization problem:
\begin{eqnarray}
C^*, O^*, P^* &{} = {}& \mathop{\arg\min}_{\{C, O, P\}} \sum_{i, j} \norm{\matr{e}_{co}(\matr{T}_{wc_i}, \matr{T}_{wo_j})}_{\Sigma_{i,j}} \nonumber \\
&& {+}\:\sum_{i, k} \norm{\matr{e}_{cp}(\matr{T}_{wc_i}, ^w\matr{p}_k)}_{\Sigma_{i,k}}
\label{eq:ba}
\end{eqnarray}
where $\matr{e}_{co}$ and $\matr{e}_{cp}$ represent the residuals for camera-object and camera-point measurements and $\Sigma$ is the covariance matrix of the measurement residuals. Objects act as additional landmarks, which results in improved tracking performance, as shown in our evaluation on KITTI. The optimization is solved with Levenberg-Marquardt in g2o \cite{g2o}.
\noindent\textbf{Camera-Object Measurements:} Object-camera poses $\matr{T}_{co}$ are evaluated by minimising the surface alignment term in Eq. \ref{E_surf} while keeping the shape code and scale fixed. New pose observations serve as edges between camera pose $\matr{T}_{wc}$ and object pose $\matr{T}_{wo}$, and the residual is defined as:
$ \matr{e}_{co} = \log (\matr{T}^{-1}_{co} \cdot \matr{T}^{-1}_{wc} \cdot \matr{T}_{wo})$
where $\log$ is the logarithm mapping from $\matr{SE}(3)$ to $\mathfrak{se}(3)$. Poses in the factor graph are 6-DoF, as object scale is only optimised when first detected.
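As an illustration of this residual (not the actual g2o edge), the $\matr{SE}(3)$ logarithm can be sketched with NumPy and SciPy, assuming $4\times4$ homogeneous matrices:
\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_log(T):
    # Logarithm map SE(3) -> se(3), returned as (rho, omega).
    R, t = T[:3, :3], T[:3, 3]
    omega = Rotation.from_matrix(R).as_rotvec()
    theta = np.linalg.norm(omega)
    W = skew(omega)
    if theta < 1e-8:
        V_inv = np.eye(3) - 0.5 * W  # small-angle limit
    else:
        c = (1.0 - theta * np.sin(theta)
             / (2.0 * (1.0 - np.cos(theta)))) / theta**2
        V_inv = np.eye(3) - 0.5 * W + c * (W @ W)
    return np.concatenate([V_inv @ t, omega])

def e_co(T_co, T_wc, T_wo):
    # Zero exactly when T_wc @ T_co equals T_wo.
    T_err = np.linalg.inv(T_co) @ np.linalg.inv(T_wc) @ T_wo
    return se3_log(T_err)
\end{verbatim}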
\noindent\textbf{Camera-Point Measurements:} We use the standard formulation of re-projection error used in ORB-SLAM2~\cite{orbslam2}:
$ \matr{e}_{cp} = \pi(\matr{T}_{wc}^{-1} {^w\matr{p}}) - \matr{\Tilde{u}}$,
where $\matr{\Tilde{u}}$ is the measured pixel coordinate of map point $\matr{p}$. We follow a similar strategy as ORB-SLAM2 to tune $\Sigma_{ij}$ such that the two energy terms contribute roughly the same to the overall optimization.
\section{Related work}
\noindent\textbf{Object-aware SLAM:}
SLAM++ \cite{SLAM++} pioneered object-aware RGB-D SLAM, representing the scene at the level of objects using a joint pose-graph for camera and object poses. A database of pre-scanned objects was created in advance and object instances were detected and mapped using a pre-trained 3D detector, ICP losses and pose-graph optimization. In later work, Tateno \emph{et al.}~\cite{Tateno2016When2I} aligned object instances from a pre-trained database to volumetric maps while Stuckler \emph{et al.}~\cite{stuckler2012model} performed online exploration, learning object models on the fly and tracking them in real time. An important drawback of instance-based approaches is their inability to scale to a large number of objects and their need for object models to be known in advance. More recent object-aware RGB-D SLAM systems have dropped the requirement for known models and instead take advantage of state-of-the-art 2D instance-level semantic segmentation masks~\cite{maskrcnn} to obtain object-level scene graphs~\cite{semantic-fusion} and per-object reconstructions via depth fusion, even in the case of dynamic scenes~\cite{maskfusion,midfusion}.
Extensions of object-aware SLAM to monocular video input deal with the additional challenge of relying on RGB-only information~\cite{mono-object-slam, mono-object-sparse-slam, quadricslam, cubeslam, category-specific-models}, which results in the use of simplistic shape representations. In QuadricSLAM \cite{quadricslam}, objects are represented as ellipsoids and fit to monocular observations, while in CubeSLAM \cite{cubeslam}, cuboid proposals generated from single-view detections are optimized in a joint bundle adjustment optimization.
While the above SLAM systems represent an important step forward towards equipping robots with the capability of building semantically meaningful object-oriented maps, they fall short of exploiting semantic priors for object reconstruction. In this paper we take this direction of using a category-specific learnt shape prior and embed this within an object-aware SLAM system.
\noindent\textbf{3D Reconstruction with Shape Priors:}
The use of learnt compact shape embeddings as priors for 3D reconstruction has a long tradition in computer vision, from 3D morphable models for the reconstruction of faces or bodies~\cite{blanz1999morphable,loper2015smpl} to PCA models representing category-specific object shape priors~\cite{directshape}.
Other examples of the use of embedding spaces for single or multi-view shape reconstruction include GPLVMs \cite{dame2013dense, prisacariu2011shared, prisacariu2012simultaneous} or neural representations~\cite{hu2018dense-embedding, zhu2018object} such as a variational autoencoder~\cite{node-slam}, AtlasNet~\cite{AtlasNet,2019-lin} or DeepSDF~\cite{deepsdf,frodo,2019-xu,Zakharov2020Autolabeling3D}. DeepSDF \cite{deepsdf} provides a powerful implicit learnt shape model that encapsulates the variations in shape across an object category, in the form of an auto-decoder network that regresses the signed distance function (SDF) values of a given object, and has been used as a shape prior for single-view~\cite{2019-xu} and multi-view~\cite{frodo} reconstruction. Similarly to~\cite{Zakharov2020Autolabeling3D}, DSP-SLAM adopts DeepSDF as the shape prior and takes sparse LiDAR and images as input; however, \cite{Zakharov2020Autolabeling3D} operates on single frames and is not a SLAM method. DOPS~\cite{Najibi2020DOPSLT} is a single-pass 3D object detection architecture for LiDAR that estimates both 3D bounding boxes and shape.
Our approach is most closely related to those that build consistent multi-object maps over an entire sequence, such as FroDO~\cite{frodo} and Node-SLAM~\cite{node-slam}. Unlike FroDO~\cite{frodo}, ours is a sequential SLAM system and not a batch method. Unlike Node-SLAM \cite{node-slam}, in our system low-level point features and high-level objects are jointly optimized to bring the best of both worlds: accurate tracking and rich semantic shape information. DeepSLAM++~\cite{hu2019deep-slam++} leverages shape priors in a SLAM pipeline by selecting 3D shapes predicted by Pix3D~\cite{Sun_2018_CVPR}, but forward shape generation is often unstable and leads to poor results on real data.
\section{Experimental Results}
\begin{table*}
\centering
\setlength{\arrayrulewidth}{0.5mm}
\resizebox{1.00\textwidth}{!}{
\begin{tabular}{c c c c c c c c c c c c c}
\hline
\multirow{3}{*}{Approach} & \multicolumn{11}{c}{Sequence} & \multirow{3}{*}{Average} \\
& 00* & 01 & 02* & 03 & 04 & 05* & 06* & 07* & 08* & 09* & 10 & \\
& rpe/rre & rpe/rre & rpe/rre & rpe/rre & rpe/rre & rpe/rre & rpe/rre & rpe/rre & rpe/rre & rpe/rre & rpe/rre & \\
\hline
SuMa++ \cite{Chen2019SuMa++} & \textbf{0.64}/\textbf{0.22} & 1.60/0.46 & 1.00/0.37 & 0.67/0.46 & \textbf{0.37}/0.26 & 0.40/0.20 & 0.46/0.21 & \textbf{0.34}/\textbf{0.19} & 1.10/0.35 & \textbf{0.47}/\textbf{0.23} & \textbf{0.66}/0.28 & \textbf{0.70}/0.29 \\
Ours St+LiDAR (250pts) & 0.75/\textbf{0.22} & \textbf{1.49}/0.20 & \textbf{0.79}/\textbf{0.23} & \textbf{0.60}/\textbf{0.18} & 0.47/0.11 & \textbf{0.32}/\textbf{0.15} & 0.39/0.21 & 0.52/0.28 & \textbf{0.94}/\textbf{0.27} & 0.79/0.28 & 0.69/\textbf{0.26} & \textbf{0.70}/\textbf{0.22} \\
Ours St+LiDAR (50pts) & 0.80/0.24 & 1.50/\textbf{0.15} & 0.84/0.26 & 0.61/0.18 & 0.44/\textbf{0.10} & 0.32/0.16 & \textbf{0.35}/\textbf{0.15} & 0.57/0.24 & 1.03/0.30 & 0.78/0.27 & 0.67/0.30 & 0.72/\textbf{0.22} \\
\hline
ORB-SLAM2 \cite{orbslam2} & 0.70/0.25 & \textbf{1.38}/0.20 & 0.76/0.23 & 0.71/0.17 & 0.45/0.18 & \textbf{0.40}/\textbf{0.16} & 0.51/0.15 & \textbf{0.50}/\textbf{0.28} & 1.07/0.31 & \textbf{0.82}/0.25 & 0.58/0.28 & \textbf{0.72}/0.22 \\
St DSO \cite{Wang2017StereoDSO} & 0.84/0.26 & 1.43/\textbf{0.09} & 0.78/\textbf{0.21} & 0.92/\textbf{0.16} & 0.65/0.15 & 0.68/0.19 & 0.67/0.20 & 0.83/0.36 & \textbf{0.98}/\textbf{0.25} & 0.98/\textbf{0.18} & \textbf{0.49}/\textbf{0.18} & 0.84/\textbf{0.20} \\
St LSD-SLAM \cite{engel2015stereo-lsd} & \textbf{0.63}/0.26 & 2.36/0.36 & 0.79/0.23 & 1.01/0.28 & \textbf{0.38}/0.31 & 0.64/0.18 & 0.71/0.18 & 0.56/0.29 & 1.11/0.31 & 1.14/0.25 & 0.72/0.33 & 0.91/0.27 \\
DynaSLAM \cite{Bescos2018DynaSLAM} & 0.74/0.26 & 1.57/0.22 & 0.80/0.24 & 0.69/0.18 & 0.45/\textbf{0.09} & \textbf{0.40}/\textbf{0.16} & 0.50/0.17 & 0.52/0.29 & 1.05/0.32 & 0.93/0.29 & 0.67/0.32 & 0.76/0.23 \\
Ours St only & 0.71/\textbf{0.24} & 1.45/0.30 & \textbf{0.75}/0.23 & 0.73/0.19 & 0.47/0.11 & 0.57/0.23 & 0.57/0.22 & 0.51/0.29 & 1.02/0.32 & 0.87/0.26 & 0.65/0.31 & 0.75/0.25 \\
Ours St only (5Hz) & 0.71/0.26 & 1.43/0.23 & 0.78/0.24 & \textbf{0.67}/0.18 & 0.46/\textbf{0.09} & \textbf{0.40}/\textbf{0.16} & \textbf{0.47}/\textbf{0.14} & 0.52/0.29 & 0.99/0.31 & 0.90/0.28 & 0.63/0.31 & \textbf{0.72}/0.22 \\
\hline
\end{tabular}}
\caption{Comparison of camera tracking accuracy - average $t_{rel}$ [\%] and $r_{rel}$ [\si{\degree}/100m] against state-of-the-art stereo and LiDAR SLAM systems. Sequences marked with * contain loops. Note that Stereo-DSO is a purely visual odometry system, so their result is without loop closing. We keep it in the table for completeness. \vspace{-0.5cm}}
\label{tab:traj-lp}
\end{table*}
We perform a quantitative evaluation of our novel prior-based object reconstruction optimisation, using LiDAR input on the KITTI3D Dataset \cite{kitti3d}, comparing with auto-labelling~\cite{Zakharov2020Autolabeling3D}, the most related approach. In addition, we evaluate the camera trajectory errors of our full DSP-SLAM system on both stereo+LiDAR and stereo-only input on the KITTI Odometry~\cite{kitti_odom} benchmark, comparing with state-of-the-art approaches. We also provide qualitative results of our full SLAM system on pure monocular input on Freiburg Cars \cite{freiburg-cars} and Redwood-OS \cite{choi2016redwood} Chairs dataset.
\subsection{\label{det3d}3D Object Reconstruction}
We conduct a quantitative comparison of our object pose estimation on the KITTI3D benchmark against auto-labelling~\cite{Zakharov2020Autolabeling3D}, a recent approach to prior-based object shape and pose reconstruction from image and LiDAR inputs, which uses the same shape prior embedding (DeepSDF~\cite{deepsdf}) and a similar level of supervision (object masks and sparse depth from LiDAR measurements).
\noindent\textbf{Experimental Setting:} For a fair comparison, we evaluate our approach using a single image and LiDAR input and take the 2D segmentation masks and initial pose estimates from the auto-labelling code release~\cite{Zakharov2020Autolabeling3D} as initialization for our own optimization approach. We evaluate the results of pose estimation on the trainval split of KITTI3D, which consists of 7481 frames, using the same metrics proposed in~\cite{Zakharov2020Autolabeling3D}: BEV AP @ 0.50, 3D AP @ 0.50, and the distance threshold metric (NS) from the nuScenes dataset~\cite{caesar2020nuscenes}.
\noindent\textbf{Results:} We report quantitative results in Tab.~\ref{tab:object_detection}. Our method achieves better performance under almost all metrics, especially on harder samples. We also visualize a comparison of reconstructed shapes and poses in Fig.~\ref{fig:comparison-auto-labelling}. Auto-labelling~\cite{Zakharov2020Autolabeling3D} does not capture shape accurately for several vehicles: the first two cars on the left side are sedans, but auto-labelling~\cite{Zakharov2020Autolabeling3D} reconstructs them as ``beetle''-shaped. In addition, some of the cars on the right side are reconstructed with incorrect poses which do not align with the image. In contrast, DSP-SLAM obtains accurate shapes and poses.
\noindent\textbf{Timing Analysis:} To achieve close to real-time performance, we employ a Gauss-Newton solver with faster convergence than first-order methods during our optimization, leading to significant speed-ups.
Tab.~\ref{tab:timing} shows a run-time comparison between a first-order optimizer and our Gauss-Newton solver with analytical gradients. Our method is approximately one order of magnitude faster to complete a single iteration, and requires fewer iterations to converge.
\begin{table}[tbp]
\centering
\begin{tabular}{c|l|c|c}
Method & Energy Terms & msec. / iter & \# of iter \\ \hline
1st order & $E_{surf} + E_{rend}$ & 183 & 50 \\
1st order & $E_{surf}$ & 88 & 50 \\ \hline
Ours GN & $E_{surf} + E_{rend}$ & 20 & 10 \\
Ours GN & $E_{surf}$ & 4 & 10 \\ \hline
\end{tabular}
\caption{Speed comparison between first-order optimization and our Gauss-Newton method with analytical Jacobians\vspace{-0.5cm}}
\label{tab:timing}
\end{table}
\noindent\textbf{Ablation Study:} We conducted an ablation study for DSP-SLAM with stereo+LiDAR input to analyse the effect of the number of LiDAR points used for shape optimization on the reconstruction error. Fig.~\ref{fig:recon-num-pts} shows that there is no significant difference when reducing the number of LiDAR points from 250 to 50. The reconstruction quality starts to degrade when the number of points is further reduced to 10.
\subsection{\label{full_slam}KITTI Odometry Benchmark}
We evaluate the camera trajectory error for our full DSP-SLAM system on the KITTI odometry benchmark with both stereo+LiDAR and stereo-only input. We evaluate on the 11 training sequences and compare with state-of-the-art SLAM systems of different input modalities using relative translation error $t_{rel}$ (\%) and relative rotation error $r_{rel}$ (degree per 100m). Quantitative results are shown in Table~\ref{tab:traj-lp}.
\noindent\textbf{Stereo+LiDAR input:}
The upper part of Tab.~\ref{tab:traj-lp} shows trajectory errors of our system with stereo+LiDAR input. The results suggest our method achieves accuracy comparable to SuMa++, a state-of-the-art LiDAR-based semantic SLAM system~\cite{Chen2019SuMa++}. Note, however, that our method uses only very few LiDAR points (several hundred per frame) while SuMa++ uses the full LiDAR point cloud. The comparison between our stereo+LiDAR system and stereo ORB-SLAM2, which serves as our backbone, is also instructive. With our LiDAR-enhanced object reconstruction and joint BA, tracking accuracy improves on most sequences, especially 03, 05, 06 and 08, where an adequate number of static objects is observed throughout the sequence.
However, our system performs slightly worse on some sequences which contain only moving objects (01, 04) or long trajectory segments where no static objects are observed (02, 10).
The table also shows the effect on the camera trajectory error when using 250 vs 50 points for object reconstruction. The results suggest that the impact of reducing the number of points on camera tracking accuracy is minimal.
\noindent\textbf{Stereo-only input:} The lower part of Tab.~\ref{tab:traj-lp} contains the results of our stereo-only system. Our stereo-only system performs slightly worse than stereo ORB-SLAM2, which means that dense shape reconstruction and joint BA do not improve tracking accuracy with stereo-only input. We argue that the reason is two-fold. Firstly, 3D measurements based on stereo images are noisier than LiDAR-based measurements, giving rise to lower accuracy in object pose estimates.
Secondly, in the stereo-only case the surface points are obtained from the SLAM system, where the same features are measured repeatedly, rather than from independent (LiDAR) measurements.
We also noticed that, to meet timing constraints, we had been performing bundle adjustment less frequently than ORB-SLAM2. We re-ran DSP-SLAM at a slightly reduced frame rate (5Hz), performing BA after every key-frame (as ORB-SLAM2 does), and the average performance increased, matching ORB-SLAM2 at $0.72/0.22$.
A comparison with state-of-the-art stereo SLAM systems is also included in Tab.~\ref{tab:traj-lp}.
\begin{figure*}[htbp]
\centering
\includegraphics[width=1.00\textwidth]{figures/reconstruction_number_pts_single_row.png}
\caption{Object reconstruction results when using different numbers of LiDAR points per object (N=250, 50, 10). There is no noticeable difference when the number of points is reduced from 250 to 50. The reconstruction quality starts to degrade when further reducing to 10. The degraded parts are marked with a red circle.
\vspace{-0.1cm}}
\label{fig:recon-num-pts}
\end{figure*}
\subsection{Freiburg Cars \& Redwood-OS Dataset}
\begin{figure*}[tbph!]
\centering
\includegraphics[width=0.246\textwidth]{figures/freiburg/Frame-001.png}
\includegraphics[width=0.246\textwidth]{figures/freiburg/Frame-002.png}
\includegraphics[width=0.246\textwidth]{figures/freiburg/Frame-003.png}
\includegraphics[width=0.246\textwidth]{figures/freiburg/Frame-010.png}
\includegraphics[width=0.246\textwidth]{figures/freiburg/render_001.png}
\includegraphics[width=0.246\textwidth]{figures/freiburg/render_002.png}
\includegraphics[width=0.246\textwidth]{figures/freiburg/render_003.png}
\includegraphics[width=0.246\textwidth]{figures/freiburg/render_010.png}
\caption{Qualitative results on Freiburg Cars dataset
\vspace{-0.1cm}}
\label{fig:freiburg}
\end{figure*}
\begin{figure*}[htbp!]
\centering
\includegraphics[width=0.246\textwidth]{figures/redwood/Frame-01053.png}
\includegraphics[width=0.246\textwidth]{figures/redwood/Frame-02484.png}
\includegraphics[width=0.246\textwidth]{figures/redwood/Frame-09374.png}
\includegraphics[width=0.246\textwidth]{figures/redwood/Frame-09647.png}
\includegraphics[width=0.246\textwidth]{figures/redwood/01053-front-side.png}
\includegraphics[width=0.246\textwidth]{figures/redwood/02484-new-front-side.png}
\includegraphics[width=0.246\textwidth]{figures/redwood/09374-front-side.png}
\includegraphics[width=0.246\textwidth]{figures/redwood/09647-front-side.png}
\caption{Qualitative results on Redwood-OS Chairs dataset
\vspace{-0.4cm}
}
\label{fig:redwood}
\end{figure*}
Finally, we evaluate our SLAM system with monocular input on the Freiburg Cars dataset \cite{freiburg-cars} and the Redwood-OS Chairs dataset. Both datasets consist of object-centric sequences with the camera moving around the object. Demonstrations can be seen in Figs.~\ref{fig:freiburg} and~\ref{fig:redwood} and in the supplementary video.
\noindent\textbf{Experimental Setting:}
3D bounding boxes are estimated using PCA on the reconstructed surface points. Note that this approach cannot differentiate between the front and the back side of a car. To address this issue, we initialize with two flipped hypotheses and keep the orientation that yields the smaller loss.
\noindent\textbf{Results:} Fig.~\ref{fig:freiburg} provides qualitative reconstruction results on 4 Freiburg Cars sequences. Our system is capable of reconstructing dense, accurate and high-quality shapes for cars solely from monocular input at 10-20 fps. Fig.~\ref{fig:redwood} illustrates results on chairs from the Redwood-OS \cite{choi2016redwood} dataset. Reconstruction accuracy is slightly worse than on cars as chairs have more complex shape variations. Results are promising nonetheless -- our method still produces dense meshes that capture the overall object shape from monocular RGB-only sequences, in quasi-real time.
\section{Conclusions}
We have presented DSP-SLAM, a new object-aware real-time SLAM system that exploits deep shape priors for object reconstruction and produces a joint map of sparse point features for the background and dense shapes for detected objects. We show almost real-time performance on challenging real-world datasets such as KITTI (stereo and stereo+LiDAR), and even in a monocular setting on Freiburg Cars and Redwood-OS. Our quantitative comparisons with competing approaches on camera trajectory estimation and shape/pose reconstruction show comparable or superior performance to state-of-the-art methods.
\vspace{-0.1cm}
\section*{Acknowledgements}
\vspace{-0.1cm}
Research presented here has been supported by the UCL Centre for Doctoral Training in Foundational AI under UKRI grant number EP/S021566/1. We thank Wonbong Jang and Adam Sherwood for fruitful discussions.
\section{Introduction}
\label{sec::intro}
We observe our Universe primarily through thermal and non-thermal radiation as well as by means of atomic line transitions. The thermal component probes gas (including ionized and neutral components)
in thermal equilibrium, while the non-thermal emission is produced by particles such as leptons (e.g., electrons and positrons)
or hadrons (e.g., protons and ions of heavier elements) that are not in thermal equilibrium.
Thanks to multi-wavelength observations of various astrophysical objects such as supernova
remnants \citep[SNRs,][]{2012Helder}, active galactic nuclei \citep{2010Abdo},
gamma-ray bursts \citep{2009Abdo}, galaxy clusters \citep{Brunetti2014,2019vanWeeren}, and galaxies \citep{Beck2015}, observational insights
into the emission processes and the radiating particle
distributions in collisionless astrophysical plasmas are now possible.
Observed non-thermal emission in our Universe spans a vast range of energies, from radio to TeV gamma rays, and occurs in
various astrophysical objects spanning many length scales, from sub-AU to Mpc.
The observed non-thermal power-law emission indicates that these plasmas are far from thermal equilibrium
and suggests an efficient particle acceleration process giving rise to a population of cosmic rays
(CRs) composed of electrons, protons, and heavier ions \citep[][]{1987Blandford,Marcowith2016}. In our Galaxy, the acceleration of
particles near SNR shocks is thought to be the main mechanism for producing CRs up to particle energies of $10^{16}$eV \citep[e.g.,][]{Gaisser2016}.
The interaction of plasmas on both sides of the shock wave produces electromagnetic perturbations.
Protons and heavier ions, with speeds larger than a few times the shock speed, interact resonantly with these electromagnetic perturbations and, as a result, are transported diffusively in the vicinity of shock waves.
Because these magnetic perturbations are converging at the shock front, this leads to proton and ion acceleration through a process known as diffusive shock acceleration \citep[DSA,][]{Krymskii1977,Axford1977,Blandford1978,Bell1978a,Bell1978b}.
The level of magnetic fluctuations determines the acceleration efficiency of particles, and vice versa,
accelerated particles excite plasma instabilities \citep{Bell2004} which further amplify the magnetic field near shock fronts.
Observational evidence for electron acceleration at shocks is abundant. It includes the puzzling giant radio shocks observed at the outskirts of merging galaxy clusters.
These diffuse radio-emitting structures are often coincident with weak merger-driven shocks \citep{2019vanWeeren} and are interpreted as synchrotron-emitting regions powered by relativistic CR electrons that underwent the DSA process \citep{Ensslin1998,Pfrommer2008,Kang2012,Pinzke2013}.
Moreover, X-ray observations show that the non-thermal emission component at the edges of SNRs such as SN 1006 is due to accelerated CR electrons emitting synchrotron radiation \citep{Willingale+1996}.
The particular morphology of SN 1006 in radio and X-rays has raised the question about the impact of the orientation of the upstream magnetic field on the acceleration mechanism of CR electrons. Radio polarization observations suggest a large-scale magnetic field that is aligned along the north-east to south-west direction \citep{Reynoso2013}. Hence, the magnetic field seems to be aligned with the polar caps visible in radio \citep{Dyer2009}, X-rays \citep{Cassam-Chenaie2008} and TeV gamma rays \citep{Acero2010}, suggesting preferentially quasi-parallel acceleration \citep{Voelk2003}. Azimuthal variations of X-ray cutoff energies \citep{Rothenflug2004, Katsuda2010} and the radius ratio of the forward shock to the tangential discontinuity \citep{Cassam-Chenaie2008} reinforce this picture. Three-dimensional magneto-hydrodynamics (MHD) simulations support the interpretation of preferred quasi-parallel shock acceleration of electrons \citep{Bocchino2011,Schneiter2015,Winner+2020} and protons \citep{Winner+2020,Pais2020a,Pais2020b}.
However, we still lack a detailed understanding of how electrons are accelerated to high energies at these shocks: for electrons and protons traveling with the same non-relativistic speed, the kinetic energy of electrons is much smaller than that of protons owing to their substantially smaller mass. Hence, unlike protons, these electrons cannot directly scatter off the magnetic perturbations near shock waves, and DSA is not possible for electrons without some process(es) that can provide and sustain pre-acceleration of incoming upstream electrons.
This is known as the electron injection problem in shocks \citep{Amano+2010,Guo+2015}.
The importance of understanding the acceleration mechanism of electrons is also reinforced by their much higher radiative yield in comparison to that of protons at the same energy, because the Thomson cross section scales inversely with the particle mass squared.
In general, electron pre-acceleration proceeds in two steps: 1.\ a heating phase, in which electrons in the downstream region are heated to energies much higher than their initial kinetic energy, and 2.\ an acceleration phase, in which electrons scatter off electromagnetic waves with wavelengths shorter than the shock width, which further increases the electron energy and enables them to participate in the DSA process.
The nature of these waves, which accelerate electrons in step 2, depends on the relative angle, $\theta_{B}$, between the direction of the large-scale background magnetic field and the direction of the propagation of the shock \citep[see, e.g.,][]{Guo+2015}.
Moreover, recent global MHD modeling of the multi-wavelength emission from the supernova remnant SN 1006 shows that the efficiency of electron acceleration at quasi-parallel ($0^{\circ}<\theta_{B} < 45^{\circ}$) shocks has to be at least an order of magnitude higher in comparison to quasi-perpendicular ($45^{\circ}<\theta_{B} < 90^{\circ}$) shock configurations \citep{Winner+2020}.
In quasi-perpendicular shocks, electron pre-acceleration is thought to occur via shock drift acceleration (SDA) where electrons are
reflected and accelerated via a combination of the cross shock electric potential and the shock rest frame motional electric field \citep{Wu1984,Krauss+Wu1989}
and/or shock surfing acceleration (SSA) where electron acceleration occurs when they interact with electrostatic waves at the shock front \citep{Shimada+2000,Xu2020,KumarReville2021}.
Theoretically, the SDA mechanism is shown to be inefficient for accelerating electrons at planar shocks in the so-called scattering-free limit \citep{Ball+Melrose_2001}. However, in the presence of strong pitch-angle scattering, this mechanism is modified to become stochastic SDA, which could result in efficient acceleration \citep{Katou+2019}.
On the other hand, particle-in-cell (PIC) simulations have shown that the SSA mechanism, mediated by waves generated by the Buneman instability, is efficient in pre-accelerating electrons \citep[see, e.g.,][]{Bohdan2019}. However, these simulations used unrealistically low ion-to-electron mass ratios and (too) high {\alf} speeds, and it remains to be shown whether these results carry over to more realistic astrophysical conditions.
For quasi-parallel shocks, it is usually assumed that hot electrons generate whistler waves that lead to pitch-angle scattering and acceleration of these electrons. Efficient wave production in this scenario requires high values of {\alf}ic Mach number, $\mathcal{M}_{\rm A}$. However, when tested with PIC simulations, this scenario did not result in any significant electron acceleration \citep{Riquelme+2011,Niemiec+2012}.
Recently, \citet{sharp2} discovered a new instability (called the intermediate-scale instability) that drives comoving ion-cyclotron waves unstable in the vicinity of the shock front. This presents {\it a new mechanism for generating waves} that can scatter electrons and potentially enables an efficient DSA mechanism for electrons. Unlike the whistler-mediated case, this mechanism requires low values of $\mathcal{M}_{\rm A}$, which is the condition for the new instability to operate.
In this paper we test this mechanism using 1D3V PIC simulations (i.e., one spatial and three velocity-space dimensions) of parallel electron-ion shocks and show that it indeed leads to very efficient electron acceleration.
The paper is organized as follows.
In Section~\ref{sec::setup}, we present the setup for our simulations, and compute the linear growth rates for the expected unstable wavemodes at the shock front region.
The growth of these wavemodes is responsible for the formation of the shock and for creating non-thermal populations of electrons and ions.
In Section~\ref{sec:Bamp}, we present the evolution of density and magnetic field amplification in our simulations.
We study the impact of the intermediate-scale instability driven wavemodes on the acceleration of electrons in the downstream region of the shock in Section~\ref{sec:nth1}.
In Section~\ref{sec:thermal}, we discuss the heating of the shocked plasmas in the downstream and the shock front region. We also compare this heating to analytical heating models at shocks.
The fraction of energy that is channeled into non-thermal electrons and ions is quantified and its evolution is presented in Section~\ref{sec:nothermal}. We discuss our findings and their implications in Section~\ref{sec:dis}, and conclude the paper in Section~\ref{sec:concl}.
Throughout this paper, we assume the SI system of units.
\section{Non-relativistic shock simulations}
\label{sec::setup}
\begin{figure}
\includegraphics[width=8.8cm]{Plots/Fig01.pdf}
\caption{The formation of a strong parallel shock in the contact discontinuity rest frame (simulation frame).
Left-hand side: the reflection of an incident electron-ion plasma drifting to the left side of the simulation with speed $\varv_u$ leads to the formation of the shock.
Right-hand side: after shock formation, the shock speed relative to the upstream plasma is $\varv_{\rm sh} \sim 4 \varv_u/3$ (assuming a strong shock and adopting the Rankine-Hugoniot jump conditions), and the density jump at the shock front is on average $\sim 4 n_0$, where $n_0$ is the number density of the far upstream plasma.
That is, at the shock front the {\alf} speed of ions $\varv_{\rm A}$ is half of its far-upstream value, which means that the {\alf}ic Mach number, $\mathcal{M}_{\rm A}$, in this region is twice that in the far upstream.
\label{fig:ShockSketch}}
\end{figure}
Here we discuss the setup for our shock simulations and compute the scales and the growth rates for the expected unstable linear wavemodes at the shock front regions.
\subsection{Simulation setup}
We perform 1D3V particle-in-cell (PIC) simulations using the SHARP code \citep{sharp,resolution-paper,sharp2}, where the shock is formed by reflecting the upstream electron-ion plasma (initially moving towards the left, i.e., negative $\bs{\hat{\vec{x}}}$ direction) at $x=0$.
That is, the shock is formed by the interaction of two exact replicas of the same plasma moving in opposite directions, and the simulations are performed in the rest frame of the contact discontinuity. A sketch of the initial configuration and the resulting shock formation is shown in Figure~\ref{fig:ShockSketch}.
In all simulations, the upstream plasma skin-depth is resolved by 10 computation cells, and the CFL stability condition on the computational time step, $\Delta t$, is such that $c \Delta t = 0.45 ~ \Delta x$, where $c$ is the speed of light in vacuum.
We use 200 particles per species in each computational cell.
The boundary of the computational domain expands with time on the right-hand side to save computational cost while containing all energetic particles in the precursor of the shock.
The upstream plasma is initially moving with velocity $\varv_u = -0.1 c$, and both electrons (with mass $m_e$) and ions (with mass $m_i$) have the same physical temperature $T_e = T_p = 4 \times 10^{-8} m_i c^2/k_{\rm B}$, where $k_{\rm B}$ is Boltzmann's constant.
Initially, both species have the same uniform number density $n_i=n_e=n_0$.
For such shocks, the expected shock speed (in the rest-frame of upstream plasma) is
$\varv_{\rm sh} = 4 |\varv_u| /3 \sim 0.133 c$.
Therefore, the sonic Mach number $ \mathcal{M}_s = \varv_{\rm sh} / \varv_s \sim
365$, where the sonic speed is $\varv_s = \sqrt{\Gamma_{\rm ad} k_{\rm B} (T_e+T_p)/m_i
}$, and $\Gamma_{\rm ad} = 5/3$ is the adiabatic index. That is, all shock simulations presented here have a high sonic Mach number.
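These numbers are straightforward to verify; in units where $c = k_{\rm B} = m_i = 1$:
\begin{verbatim}
from math import sqrt

v_u  = 0.1                 # |v_u| in units of c
v_sh = 4.0 * v_u / 3.0     # ~ 0.133 c for a strong shock
T    = 4e-8                # T_e = T_p in m_i c^2 / k_B
v_s  = sqrt(5.0 / 3.0 * (T + T))  # ~ 3.65e-4 c
print(round(v_sh / v_s))   # sonic Mach number ~ 365
\end{verbatim}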
The initial (large-scale) background magnetic field $\vec{ B}_0 = B_0 \bs{\hat{\vec{x}}}$ is chosen to set the {\alf}ic Mach number $\mathcal{M}_{\rm A} = \varv_{\rm sh}/\varv_{\rm A}$, where $\varv_{\rm A} = B_0/\sqrt{ \mu_0 n_i m_i}$ is the {\alf} speed of ions.
It is important to note here that collisionless shocks found in the intracluster medium have $\mathcal{M}_s \sim 1$ to 3 \citep{Ryu+2003, Pfrommer+2006,Vazza+2009,Schaal+2016}, and for such low values of the sonic Mach number, the intermediate-scale growth rates are much larger in comparison to that at the gyro-scale \citep{sharp2}. This implies a stronger impact of the intermediate-scale unstable modes on the acceleration of electrons in the intracluster medium.
However, we leave a demonstration of this point with simulations to future work.
\subsection{Unstable linear wavemodes at the shock front}
\label{sec::theory}
\begin{figure}
\includegraphics[width=8.6cm]{Plots/Fig02.pdf}
\caption{Top: growth rates ($\Gamma$) normalized to the non-relativistic ion gyro-frequency, $\Omega_i$.
Bottom: the phase velocity ($\varv_{\rm ph}$) of the fastest growing parallel (unstable) electromagnetic wavemodes created by the penetration of upstream cold plasma drifting into the denser plasma at the shock front (in the simulation frame).
The solutions are shown for $\mathcal{M}_{\rm A} = 21.3$, with a realistic ion-to-electron mass ratio, $m_r=m_i/m_e$ (blue curves).
In the case of $\mathcal{M}_{\rm A} = 5.3$, solutions are shown for a realistic value of $m_r$ (red curves) and a reduced $m_r$ (black curves).
Only the case with $\mathcal{M}_{\rm A}=5.3$ and a realistic mass ratio is expected to excite the intermediate-scale instability~\citep{sharp2}; only there are the small-scale ion-cyclotron wavemodes comoving with the upstream plasma (drifting with velocity $\varv_u = -0.1 c$) amplified, i.e., they come from solutions of $D^{+}=0$ (see Equation~\ref{eq:disp}).
Small scale unstable wavemodes in the other (blue and black) cases are whistler waves, i.e., they derive from solutions of $D^{-}=0$ (see Equation~\ref{eq:disp}).
\label{fig:GrowthRate}}
\end{figure}
To study the impact of the intermediate-scale instability on the heating and the acceleration of electrons at parallel non-relativistic shocks, we present simulations that differ in their ability to excite the intermediate-scale instability, which manifests itself by exciting ion-cyclotron waves that are comoving with the upstream plasma~\citep{sharp2}.
The upstream drifting plasma drives wavemodes unstable at the shock front\footnote{In the linear dispersion calculation we present here, the background number density is assumed to be uniform.
Number density non-uniformity could change the growth rates if the non-uniformity scale is close to the wavelengths of the unstable modes.
However, studying this theoretically, even in the linear regime, is tedious \citep[see, e.g.,][]{sim_inho_18, th_inho_20}.}.
Assuming gyrotropic momentum distributions for various species, the
dispersion relation, in the simulation frame, for parallel (w.r.t.\ the background magnetic field $B_0$) propagating electromagnetic wavemodes with wavenumber $k$ and complex frequency
$\omega$ can be written as~\citep{Schlickeiser+2002}
\begin{eqnarray}
D^{\pm}
&=& 1- \frac{k^2c^2}{\omega ^2}
\nonumber \\
&&+
\sum_{s=1}^{4}
\frac{ \omega_s^2 }{ \gamma_s \omega ^2 }
\left[
\frac{ \omega -k \varv_{\parallel,s} }
{ k \varv_{\parallel,s} - \omega \pm \Omega_s}
-
\frac{ \varv_{\perp,s}^2 \left(k^2c^2-\omega ^2\right) /c^2 }
{2 \left(k \varv_{\parallel,s} -\omega \pm \Omega_s \right)^2}
\right] ~\hspace{-0.16cm} .
\label{eq:disp}
\end{eqnarray}
Here, $s=\{1,2\}$ denote the shocked electron and ion plasmas at the shock front, respectively, and $s=\{3,4\}$ denote the incoming (upstream) cold electron and ion plasmas, respectively. The drift speeds are $\varv_{\parallel,s} = \{ - \varv_{u}/3, - \varv_{u}/3, \varv_{u}, \varv_{u} \}$,
and the relativistic gyro-frequencies $\Omega_s = \{- m_r \Omega_0,
\Omega_0, -m_r \Omega_0, \Omega_0\}/\gamma_s$, where
$\Omega_0 = e B_0/m_i $ is the non-relativistic ion gyro-frequency, and
the Lorentz factor is $\gamma_s = \left(1 - (\varv_{\parallel,s}^2 + \varv_{\perp,s}^2)/c^2\right)^{-1/2}$.
Solutions to $D^{+}=0$ ($D^{-}=0$) are solutions for the left (right) handed polarized electromagnetic wavemodes. When solving for the maximum growth rates at some $k$, we solve the dispersion relation with both signs, and find the fastest growing mode regardless of its polarization.
For various species $s$, the perpendicular speeds are $\varv_{\perp,s} = \varv_s/\sqrt{2} ~ \forall s $,
the plasma frequencies are $\omega_s = \{ \sqrt{3 m_r} ~\omega_i,
\sqrt{3}~ \omega_i, \sqrt{m_r} ~\omega_i, \omega_i \}$, where $\omega^2_i= e^2 n_i/(m_i \epsilon_0)$ is the square of the ion plasma frequency in the far upstream of the shock, $e$ is the elementary charge, and $\epsilon_0$ is the permittivity of free space.
We present a simulation that can excite the intermediate-scale instability at the shock front region. It has an upstream {\alf}ic Mach number $\mathcal{M}_{\rm A} = 5.3$ and uses a realistic mass ratio $m_r = m_i/m_e=1836.$
That is, the shock front {\alf}ic Mach number
\begin{eqnarray}
\mathcal{M}^f_{\rm A} \sim 10.6 < \sqrt{m_r}/2 \sim 21.4 .
\end{eqnarray}
This condition is the necessary criterion for driving such comoving ion-cyclotron wavemodes \citep{sharp2}.
The solution of the dispersion relation (Equation~\ref{eq:disp}) for
this case is shown in Figure~\ref{fig:GrowthRate} (red curves).
The lower panel shows that the unstable wavemodes (at $kc/\omega_i \sim 5$) are comoving with the drifting upstream plasma ($\varv_{\rm ph} \sim -0.1 c$) for the simulation with $\mathcal{M}_{\rm A} = 5.3$ and $m_r=1836$ (red curve).
To demonstrate the importance of this instability for electron heating, we present two more simulations in which the condition for this instability is violated.
\begin{deluxetable}{ cccccc }
\tablewidth{8.7cm}
\tabletypesize{\footnotesize}
\tablecolumns{5}
\tablecaption{
Parameters of our electron-ion parallel shock simulations \vspace{-0.2cm}
\label{table:Sims}
}
\tablehead{ Name & $\varv_u/c$\tablenotemark{a} & $\mathcal{M}_{\rm A}$\tablenotemark{b}
& $\mathcal{M}_s$\tablenotemark{c}
& $m_i/m_e$
& Condition\tablenotemark{d}
}
\startdata
\rule{0pt}{12pt} Ma5Mr1836 & -0.1 & 5.3 & 365 & 1836 & \checkmark
\\
\rule{0pt}{12pt} Ma5Mr100 & -0.1 & 5.3 & 365 & 100 & $\times$
\\
\rule{0pt}{12pt} Ma21Mr1836 & -0.1 & 21.3 & 365 & 1836 & $\times$
\enddata
\tablenotetext{a}{Upstream plasma velocity in the contact-discontinuity rest frame.}
\tablenotetext{b}{Upstream {\alf}ic Mach number.}
\tablenotetext{c}{Upstream sonic Mach number.}
\tablenotetext{d}{This shows whether the condition ($\varv_{\rm sh}/\varv_{\rm A} < \sqrt{m_r}/2 $) for exciting the intermediate-scale instability at the shock front is satisfied.}
\end{deluxetable}
The first simulation has $\mathcal{M}_{\rm A} = 21.3$, with a realistic mass ratio, $m_r=1836$.
In this case, $\mathcal{M}_{\rm A}^f \sim 40.6 > \sqrt{m_r}/2 \sim 21.4$.
Indeed, the solutions of the dispersion relation for this case (blue curves in Figure~\ref{fig:GrowthRate}) show that sub ion-skin-depth unstable whistler wavemodes are not comoving with the upstream plasma and thus can be easily quenched.
For this simulation, we increase $\mathcal{M}_{\rm A}$ by decreasing $B_0$, i.e., by lowering $\varv_{\rm A}$.
The second simulation has $\mathcal{M}_{\rm A} = 5.3$, but with a reduced mass ratio, $m_r=100$. In this case, $\mathcal{M}_{\rm A}^f \sim 10.3 > \sqrt{m_r}/2 = 5 $.
Again, the solutions of the dispersion relation for this case (black curves in Figure~\ref{fig:GrowthRate}) show that the sub ion-skin-depth unstable whistler wavemodes are not comoving with the upstream plasma and thus can also be easily quenched.
For this simulation, we increase $T_e=T_i$ such that $T_i/m_i$, and thus the sonic speed $\varv_s$, remains unchanged.
Consequently, the sonic Mach number $\mathcal{M}_s$ is also unchanged.
A summary of the parameters of the three simulations presented here is given in Table~\ref{table:Sims}.
It is important to note that larger values of the number density at the shock front, i.e., $n > 4n_0$, increase $\mathcal{M}_{\rm A}^f$ in simulations. Hence, we have added a safety margin to our chosen values of $\mathcal{M}_{\rm A}^f$ that can in principle accommodate an additional factor of four enhancement in density over the MHD prediction without changing our conclusions about whether the intermediate-scale instability is excited in our simulations, as given in Table~\ref{table:Sims}.
\section{Shock formation and magnetic field amplification; self-driven scattering centers}
\label{sec:Bamp}
\begin{figure*}
\includegraphics[width=18.5cm]{Plots/Fig03.png}
\caption{Left: evolution of the number density normalized to the far upstream number density ($n_0$).
Right: evolution of $B_y$ normalized to the far upstream parallel magnetic field ($B_0$).
These panels show the evolution in the simulations Ma5Mr1836 (top), Ma5Mr100 (middle), and Ma21Mr1836 (bottom).
\label{fig:XT-evol}
}
\end{figure*}
The interaction of the drifting and reflected plasma results in instabilities on scales both longer and shorter than the ion-skin depth.
Such unstable modes slow down the reflected plasma and thus create an over-density ($n> 2 n_0$) behind the shock transition region\footnote{Here, $n$ is the number density of both ions and electrons together.}, where $n_0$ is the far upstream number density.
After the formation of the shock, the interaction of the incoming upstream plasma with the denser plasma behind the shock also forms an unstable plasma configuration.
This leads to particle scattering and a constant heating and acceleration of the upstream plasma as it is swept over by the shock.
The left-hand side of Figure~\ref{fig:XT-evol} shows the time evolution of the number density, $n$, normalized to $n_0$, in all the simulations.
The right-hand side shows the time evolution of $B_y$ normalized to the initial background (parallel) magnetic fields $B_0$ (the $B_z$ evolution is very similar to that of $B_y$, albeit shifted slightly in space).
Figure~\ref{fig:XT-evol} shows the formation of the shock via the excitation of unstable magnetic field wavemodes in various simulations.
After the formation of the shock, wavemodes on small and large scales are unstable, as seen in Figure~\ref{fig:GrowthRate}. These modes are continuously driven at the shock front region as the shock sweeps through the upstream plasma.
In Figure~\ref{fig:XT-evol}, the white lines that separate the shock front and the upstream regions indicate the location of the shock in the various simulations. While the exact position of the white line is determined visually, we note that its slope is approximately given by
\begin{eqnarray}
\frac{ \Delta t \Omega_i }{ \Delta x \omega_i/c } =
\frac{ \Omega_i/\omega_i }{ \varv_{\rm sh} /c }
= \frac{ 1}{\mathcal{M}_{\rm A} }.
\end{eqnarray}
That is, the slope for the Ma21Mr1836 simulation is smaller by a factor of $\sim 4$ in comparison to the other simulations, and thus the range in the $x$-direction is larger by the same factor.
We define the shock front (transition) to be a region that is 200 $c/\omega_i$ wide behind the location of the shock, i.e., it is the region between the two white lines in various panels of Figure~\ref{fig:XT-evol}.
As particles (especially ions) scatter back and forth across the shock front (transition) region, they are accelerated to higher energies, and thus some particles escape from this region toward the upstream plasma.
These escaping particles excite unstable wavemodes, which in turn scatter most of the counter-streaming energetic particles back toward the shock front region \citep{Bell1978a}.
This process generates the so called shock precursor region ahead of the shock in the upstream plasma as seen in the right-hand panels of Figure~\ref{fig:XT-evol}.
In the next sections, we closely look at the nature of these driven modes and their impact on particle acceleration and heating for both ions and electrons.
The overall density jump between the far upstream and far downstream region ($n/n_0$) is notably larger than expected from the MHD jump condition that predicts $n/n_0= 4$ (for a strong parallel shock). Instead, our simulations with $\mathcal{M}_{\rm A}=5.3$ show an overall density jump of $n/n_0 \sim 6$.
This has already been seen in a number of PIC simulations, as summarized by \citet{Bret2020}.
In hybrid-PIC simulations, where the jump in the density is not constrained by a very stiff electron equation of state,
\citet{Colby+2019} show that for a simulation with $\mathcal{M}_{\rm A}=20$ the density jump grows to $n/n_0 \gtrsim 5.5$.
\section{Impact of intermediate-scale instability driven wavemodes on acceleration}
\label{sec:nth1}
\begin{figure}
\includegraphics[width=8.6cm]{Plots/Fig04.pdf}
\caption{Top: downstream electron (solid) and ion (dashed) momentum spectra at $t ~\Omega_i= 260$ from the Ma5Mr1836 (red), Ma5Mr100 (black), and Ma21Mr1836 (blue) simulations.
The downstream is defined to be at a distance that is larger than 200 $c/\omega_i$ from the shock front; see Figure~\ref{fig:XT-evol}.
Bottom: downstream perpendicular magnetic energy in Fourier space at $t ~\Omega_i= 260$.
This shows that the co-moving (traveling towards the downstream) unstable waves that are driven at the shock front in the Ma5Mr1836 simulation generate a much higher level of small-scale magnetic fields. These increased magnetic fluctuations imply a much stronger scattering and thus, more efficient acceleration of electrons in comparison to the other simulations.
\label{fig:spectra300}}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=18.5cm]{Plots/Fig05.png}
\caption{Time evolution of the average perpendicular magnetic field power, $\langle \delta B^k_{\perp} \rangle$, in the downstream (left) and shock front (right) regions.
The dashed lines indicate the small-scale ($k c/\omega_i > 2$) time evolution while solid lines show the large-scale ($k c/\omega_i< 2$) time evolution.
Different simulations are indicated with different colors: Ma5Mr1836 in red, Ma5Mr100 in black, and Ma21Mr1836 in blue.
\label{fig:KLS}
}
\end{figure*}
The importance of the intermediate-scale instability on the particle acceleration process can be quantified by comparing simulations which differ in their ability to excite comoving ion-cyclotron modes via the intermediate-scale instability.
As seen in Figure~\ref{fig:GrowthRate}, all simulations are predicted to have unstable wavemodes on scales smaller than the ion skin depth at the shock front (transition) region.
However, only in the simulation Ma5Mr1836 are these modes comoving with the incoming flow (red, see the bottom panel of Figure~\ref{fig:GrowthRate}), and hence their mode power can be transferred to the downstream region where they can scatter electrons.
The top panel of Figure~\ref{fig:spectra300} shows the electron and proton spectra as a function of dimensionless velocity $u_s/c$. To obtain these spectra, we first transform to the frame in which the plasma is at rest before we compute the momentum spectra from which we estimate the plasma temperature as laid out in Appendix~\ref{app:temp}.
As shown in the top panel of Figure~\ref{fig:spectra300}, the simulation Ma5Mr1836 (where the intermediate-scale instability can grow) has a much higher efficiency in converting the incoming flow kinetic energy into non-thermal electron energy in the downstream region. In fact, it accelerates electrons about two orders of magnitude more efficiently than the simulation with a higher $\mathcal{M}_{\rm A}$ (blue), suggesting that a much more efficient acceleration process is at work in our low-$\mathcal{M}_{\rm A}$ model (red).
Moreover, the Ma5Mr100 simulation (black) has the same $\mathcal{M}_{\rm A}$ and $\mathcal{M}_{\rm s}$ as the simulation shown in red but uses a lower mass ratio: $m_r=100$.
As shown in the top panel of Figure~\ref{fig:spectra300}, this results in a much smaller electron acceleration efficiency in this simulation.
Additionally, the heating of electrons in the downstream region of the simulation with a reduced mass ratio (black) is much weaker than in the simulations with a realistic mass ratio (red and blue)\footnote{In Appendix~\ref{app:temp}, we show that the normalized temperature, $k_{\rm B} T_s/m_s c^2$, of the thermal part of the plasma population of species $s$ is determined by the value of $u/c$ at which $u^4 f(u)$ is maximized, see Equation~\eqref{Eq:TR}.}.
That is, the top panel of Figure~\ref{fig:spectra300} demonstrates that using a lower mass ratio not only artificially suppresses the electron acceleration efficiency but also leads to erroneous electron heating in the downstream region.
As we will now argue, the differences in the acceleration efficiency of electrons as well as in particle heating at shocks are a direct consequence of the differences in the nature of the unstable small-scale modes.
The bottom panel of Figure~\ref{fig:spectra300} shows that the small-scale power of the perpendicular magnetic field in the downstream region is much larger in the simulation Ma5Mr1836 (red) in comparison to the simulations Ma5Mr100 (black) and Ma20Mr1836 (blue).
In the latter two cases, the small-scale unstable modes are not comoving with the incoming flow, and thus can be easily quenched, which leads to a much smaller amount of small-scale perpendicular magnetic power in the downstream regions.
To demonstrate the growth of these small scale wavemodes, in Figure~\ref{fig:KLS} we plot the time evolution of the average perpendicular magnetic field for small (dashed) and large (solid) scale wave modes in various simulations. To this end, we take the power of the perpendicular magnetic field as shown in the bottom panel of Figure~\ref{fig:spectra300} and compute the average for larger scales, i.e., for $kc/\omega_i<2$ and for small scales, i.e., for $2<kc/\omega_i<20$.
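This band averaging can be sketched as follows (a schematic Python snippet with placeholder array names, not the analysis code used for Figure~\ref{fig:KLS}):
\begin{verbatim}
import numpy as np

# Band-average a perpendicular magnetic power spectrum dB2(k), with k given
# in units of omega_i/c, over the large-scale and small-scale bands above.
def band_averages(k, dB2):
    large = dB2[k < 2.0].mean()                  # k c/omega_i < 2
    small = dB2[(k > 2.0) & (k < 20.0)].mean()   # 2 < k c/omega_i < 20
    return large, small
\end{verbatim}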
The top panel of Figure~\ref{fig:GrowthRate} predicts that in the linear regime, both small and large scale magnetic perturbations grow with similar rates in all simulations at the shock transition region.
The right-hand panel of Figure~\ref{fig:KLS} demonstrates that this prediction is in excellent agreement with the growth of magnetic perturbations in the various simulations at the shock transition region.
On the other hand, the left-hand panel of Figure~\ref{fig:KLS} demonstrates the crucial importance of the difference in the nature of the grown small scale wavemodes.
As shown in the bottom panel of Figure~\ref{fig:GrowthRate}, only for the Ma5Mr1836 simulation (red) are the small-scale wavemodes comoving with the incoming flow. These modes are therefore not easily quenched in the shock transition region: the continued growth of the instability maintains a high level of small-scale fluctuations, which explains the much larger efficiency of electron acceleration in this simulation, see Figure~\ref{fig:spectra300}.
Moreover, much of this small-scale electromagnetic power is transferred to the downstream region, as can be inferred from the left-hand panel of Figure~\ref{fig:KLS}.
In the case of the simulations without the intermediate-scale instability (Ma5Mr100 and Ma20Mr1836), much of the small-scale power is quenched, and thus a much smaller fraction of it is transferred to the downstream region of the shock.
In the next sections, we show that the use of a lower mass ratio also leads to erroneous heating and acceleration efficiency in the shock front (transition) region, and that holds true throughout the evolution of the simulations (as seen in Figures~\ref{fig:Temp} and \ref{fig:Eff}).
\section{Heating of the shocked plasmas}
\label{sec:thermal}
\begin{figure*}
\includegraphics[width=18.5cm]{Plots/Fig06.pdf}
\caption{Top: shown is the time evolution of the downstream (left) and shock front (right) temperatures of ions (dashed) and electrons (solid) in the various simulations. The dotted red (black) line in the top left-hand panel indicates the predicted MHD ion temperature for $m_r = 1836 ~ (100)$ as given in Equation~\eqref{eq:kTiV}.
Bottom: evolution of the electron-to-ion temperature ratio of the plasma in the downstream (left) and shock front (right) regions in various simulations.
\label{fig:Temp}
}
\end{figure*}
All simulations presented here only have a parallel background magnetic field and are characterized by the same sonic Mach number $\mathcal{M}_s \sim 365$.
After shock formation, electrons and ions in the downstream and shock front regions are heated. Modeling the saturation temperatures for electrons and ions in such collisionless shocks is very important for understanding the observations of SNRs~\citep{Vink+2015}, clusters of galaxies~\citep{Russell+2012}, and the warm-hot intergalactic medium, which contains a significant fraction of the baryons in the universe~\citep{Bykov+2008}.
\subsection{Models for saturation temperatures}
The MHD model for strong parallel shocks predicts that all the upstream drifting kinetic energy is converted into thermal energy in the post-shock rest frame.
The expected temperature can be computed using the Rankine-Hugoniot jump conditions for a strong, parallel MHD shock in the regime of high sonic Mach numbers, $\mathcal{M}_s\gg1$ \citep[e.g.,][]{boyd}, so that the temperature of the shocked regions (downstream and shock front) is given by
\begin{align}
k_{\rm B} T = \frac{3}{16} \mu m_i \varv_\mathrm{sh}^2
= \frac{1}{3} \mu m_i \varv_u^2 = \frac{1}{600} m_r m_e c^2,
\label{eq:kTi}
\end{align}
where $\varv_\mathrm{sh}=4\varv_u /3$ is the upstream velocity in the shock frame, $\varv_u=0.1c$ is our adopted value for the upstream velocity in the simulation frame, $\Gamma_\mathrm{ad}=5/3$ is the adiabatic index appropriate for our case, and $\mu = 1/2$ is the mean molecular weight of an electron-proton fluid.
The MHD assumption implies that the ion and electron temperatures equilibrate on negligible timescales so that $T_e=T_i$ in this model.
That is, we expect the temperature in the shocked regions to depend on the adopted mass ratio as follows:
\begin{equation}
k_{\rm B} T_{i} = k_{\rm B} T_{e} = \left\{
\begin{aligned}
&3.06~~~ m_e c^2
&&\mbox{ for } m_r = 1836,\\
&0.166~ m_e c^2
&&\mbox{ for } m_r = 100.\\
\end{aligned}
\right.
\label{eq:kTiV}
\end{equation}
These values are shown with dotted red ($m_r=1836$) and black ($m_r=100$) lines in the top left-hand panel of Figure~\ref{fig:Temp}.
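For reference, these values follow from a one-line evaluation of Equation~\eqref{eq:kTi} (a sketch, not simulation output):
\begin{verbatim}
# k_B T = m_r m_e c^2 / 600 for v_u = 0.1 c, Gamma_ad = 5/3 and mu = 1/2.
for m_r in (1836, 100):
    print(f"m_r = {m_r}: k_B T = {m_r / 600.0:.3f} m_e c^2")
# -> 3.060 m_e c^2 (m_r = 1836) and 0.167 m_e c^2 (m_r = 100)
\end{verbatim}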
Because these shocks are collisionless, the MHD assumption of strong collisional relaxation is not fulfilled and the electrons may not equilibrate with the ions.
Therefore, another theoretical model for the saturation temperatures can be obtained by assuming that thermalization of the initial flow kinetic energy occurs separately for ions and electrons \citep{Vink+2015}.
This implies that the electron-to-ion temperature ratio is $T_e/T_i = m_e/m_i$, and for our simulations this predicts $k_{\rm B} T_e = 0.0016 ~ m_e c^2$ (where we used Equation~\ref{eq:kTi}).
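In code form, this separate-thermalization estimate reads (again only a sketch; each species is assumed to thermalize its own drift kinetic energy):
\begin{verbatim}
v_u, mu = 0.1, 0.5         # upstream velocity in units of c; mean mol. weight
kTe = mu * v_u**2 / 3.0    # electron analogue of Equation (eq:kTi)
print(f"k_B T_e = {kTe:.4f} m_e c^2")  # ~0.0017, i.e., T_e/T_i = m_e/m_i
\end{verbatim}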
\subsection{Saturation temperatures in simulations}
In Figure~\ref{fig:Temp} we show the evolution of the temperature in the downstream (left) and shock front (right) regions in various simulations.
The top panel of Figure~\ref{fig:Temp} shows that a significant fraction of the incoming flow kinetic energy is converted to thermal heating. However, unlike the prediction of MHD shocks, the electron and ion temperatures differ.
In all simulations, the ion temperatures in the downstream and shock front regions saturate slightly below the MHD prediction because a fraction of the incoming flow kinetic energy is channeled into magnetic energy (as seen in Section~\ref{sec:Bamp}) and non-thermal energy of cosmic ray ions and electrons (as we show in Section~\ref{sec:nothermal}).
For ions, we find $k_{\rm B}T_{i}\sim2.5~m_e c^2$ (for $m_r = 1836$, which is only 22.4\% lower than the MHD prediction) and $k_{\rm B}T_{i}\sim0.1~m_e c^2$ (for $m_r = 100$, which is 66\% lower than the MHD prediction).
On the other hand, for electrons, saturation temperatures have a larger mismatch with the MHD predicted temperatures.
While the mismatch of the post-shock temperature in the case of a realistic mass ratio can be approximately accounted for by efficient ion acceleration and energy retained in electromagnetic fluctuations, the offset in the case of a mass ratio of $m_r = 100$ is puzzling and cannot be explained by energy in non-thermal components, which only add up to 5\%. We conjecture that the missing plasma instabilities as a result of the artificially small mass ratio preclude exciting sufficient electromagnetic modes that provide the necessary scattering centers for dissipating all the kinetic energy of ions.
Figure~\ref{fig:Temp} also shows that assuming an independent evolution for ions and electrons is not correct, since the initial flow kinetic energy (mainly from the ions) excites magnetic field wavemodes which both heat and accelerate electrons to an energy density that, in total, is much larger than the initial flow kinetic energy of the electrons.
The top panel of Figure~\ref{fig:Temp} shows that the saturation temperature for ions (electrons) is lower (larger) than predicted from such models, and the bottom panels show that the prediction for the electron-to-ion temperature ratio of such models is inconsistent with all simulations.
In agreement with the prediction from MHD shocks, simulations with a realistic ion-to-electron mass ratio have saturation temperatures of ions and electrons that are independent of the {\alf}ic Mach number $\mathcal{M}_{\rm A}$. This is demonstrated by the excellent agreement between blue and red (dashed and solid) curves in Figure~\ref{fig:Temp} albeit with $T_e < T_i$.
The saturation temperature of electrons is the same in the downstream and shock front regions, which is also approximately true for ions. This means that all heating occurs in the shock front region and {\it no further} heating occurs in the downstream region for ions or electrons.
The lower panels of Figure~\ref{fig:Temp} show that, at saturation, $T_e/T_i = 0.4$ in the downstream region for the two simulations that use a realistic mass ratio and $T_e/T_i = 1.4$ for the simulation with $m_r=100$.
In comparison to simulations with a realistic mass ratio, the simulation that uses a low mass ratio of $m_r=100$ results in {\it incorrect} saturation temperatures for ions (by a factor of $\sim 25$) and electrons (by a factor of $\sim 7.4$) in both the downstream and shock front regions, as shown in the top panel of Figure~\ref{fig:Temp}.
This low saturation temperature for electrons leads to strong suppression in electron acceleration efficiency as we show in Section~\ref{sec:nothermal}.
Moreover, the use of a lower mass ratio results in a wrong electron-to-ion temperature ratio at saturation, i.e., $T_e>T_i$ as can be seen from the bottom panel of Figure~\ref{fig:Temp}.
Saturation temperatures where $T_e < T_i$ are observed for example in Balmer-dominated shocks~\citep{van-Adelsberg+2008}.
That is, $T_e/T_i>1$ found in the simulation with low ion-to-electron mass ratio disagrees with observed saturation temperature ratios.
Surprisingly, for the simulation with $m_r=100$, the ion and electron temperatures continue to increase from the shock front region to the downstream region. This indicates that further heating occurs in the downstream region of this simulation, in disagreement with the results from the simulations with a realistic mass ratio.
\section{Formation of non-thermal particles}
\label{sec:nothermal}
\begin{figure*}
\includegraphics[width=18.5cm]{Plots/Fig07.pdf}
\caption{Top: the time evolution of the acceleration efficiency (as defined in Equation~\ref{Eq:Eff-sh}) in the downstream (left) and shock front (right) region for ions (dashed) and electrons (solid) in various simulations.
Bottom: the time evolution of the fraction of non-thermal energy of electrons relative to that of ions ($K_{\rm ei}$) in the downstream (left) and shock front (right) regions in various simulations.
\label{fig:Eff}
}
\end{figure*}
As shown above, the upstream kinetic energy is channeled into magnetic energy (Section~\ref{sec:Bamp}), non-thermal energy (Section~\ref{sec:nth1}) and thermal energy (Section~\ref{sec:thermal}) in the downstream and shock front regions of the system.
Here we quantify the amount of energy carried by non-thermal ions and electrons in these regions in Figure~\ref{fig:Eff}.
In Appendix~\ref{app:temp}, we show that the fraction of particles with non-thermal energy can be quantified in various ways, and here we focus on how much energy density is carried by these particles when compared to the upstream kinetic energy density, i.e., using Equation~\eqref{Eq:Eff-sh}.
We define non-thermal particles to be particles with $u>5u_m$~\citep{Caprioli2014a,Xu2020}, where $u=\gamma \varv$ is the spatial part of the 4-velocity, $\gamma$ is the Lorentz factor, and $u_m$ is defined as the value of $u$ where $u^4 f(u)$ is maximized.
As an example, particles to the right of the blue vertical lines in Figure~\ref{fig:Thermal} are considered to form the non-thermal part of the momentum distribution.
This quantification is done for different regions as a function of time and was used to compute the time evolution of the efficiency shown in the top panel of Figure~\ref{fig:Eff}.
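A schematic implementation of this cut (the function is a placeholder, not our actual pipeline) could read:
\begin{verbatim}
import numpy as np

# Locate u_m as the maximum of u^4 f(u) from a histogram of rest-frame
# 4-velocities u (in units of c), and flag particles with u > 5 u_m.
def nonthermal_mask(u, bins=200):
    f, edges = np.histogram(u, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    u_m = centers[np.argmax(centers**4 * f)]
    return u > 5.0 * u_m, u_m
\end{verbatim}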
The top panel shows the time evolution of the efficiency $\epsilon_{\rm sh}$, i.e., the percentage of upstream kinetic energy density that is channeled into non-thermal electrons and ions in the downstream (left) and shock front (right) regions.
By contrast, the bottom panel shows the time evolution of the ratio of non-thermal energy of electrons to that of ions, $K_{\rm ei}$, in various simulations also in the downstream (left) and shock front (right) regions.
The non-thermal energy density of ions (dashed curves) is about 20--30\% of the upstream kinetic energy density, and this result is roughly independent of the ion-to-electron mass ratio $m_r$ at the shock front region.
The simulation with a higher $\mathcal{M}_{\rm A}$ has a slightly higher efficiency in producing non-thermal ions at the shock front (transition) region.
Moreover, in the downstream region, it contains a higher fraction of non-thermal ions indicating that this simulation is more efficient in scattering ions that are accelerated in the shock front (transition) region back to the downstream region.
On the other hand, the fraction that is channeled into non-thermal electrons is strongly dependent on both $\mathcal{M}_{\rm A}$ and $m_r$.
The top panel of Figure~\ref{fig:Eff} shows that, at the shock front, the simulation with $\mathcal{M}_{\rm A}=5.3$ and a realistic mass ratio efficiently accelerates electrons, leading to 0.05--0.1\% of the initial flow kinetic energy being converted into non-thermal electrons, while in the simulation with a higher $\mathcal{M}_{\rm A}$ and a realistic mass ratio, a smaller fraction (0.01--0.05\%) of the upstream kinetic energy is channeled into non-thermal electrons.
The simulation with a reduced mass ratio leads to about a 2--3 orders of magnitude reduction in the electron acceleration efficiency.
Clearly, the simulation where the intermediate-scale instability is allowed to grow at the shock front region (red) is much more efficient in confining the accelerated electrons by scattering them back to the downstream regions, leading to a much larger non-thermal electron energy in the downstream region in comparison to the simulation with a higher $\mathcal{M}_{\rm A}$ as seen in the left-hand panel of Figure~\ref{fig:Eff}.
Moreover, the simulation with a low mass ratio (black) leads to significant reduction in the energy channeled into non-thermal electrons in the downstream region.
As shown in the lower left-hand panel of Figure~\ref{fig:Eff}, in the downstream region, the simulation that allows for growth of comoving ion-cyclotron waves (red) has a 2--3 orders of magnitude larger non-thermal electron-to-ion energy ratio, $K_{\rm ei}$, in comparison to the simulations where the condition for the intermediate-scale instability is not satisfied.
\section{Discussion}
\label{sec:dis}
In this section, we discuss the impact of a finite plasma temperature on the growth of the intermediate-scale instability, followed by a comparison of our proposed electron pre-acceleration mechanism with the mechanism proposed by \citet{2015phrvl.114h5003p}. Finally, we discuss how our results can be used to possibly infer a lower limit on the amplitude of the magnetic field at shocks of supernova remnants.
\subsection{Impact of finite plasma beta}
The dispersion relation used in Section~\ref{sec::theory} to predict the unstable modes at the shock transition region assumes a cold background of electrons and ions.
Thus, it is of great importance to consider the impact of a finite plasma beta $\beta_s$ on the growth of the intermediate-scale instability, where $\beta_s \equiv 2 \varv_{{\rm th},s}^2/\varv_{{\rm A},s}^2$, the square of the thermal speed is $\varv_{{\rm th},s}^2 \equiv k_B T_s/m_s$, and $s=e,i$ for electrons and ions, respectively.
The dispersion relation studied in \citet{sharp2} and presented in Section~\ref{sec::theory} is computed assuming $\beta_i =\beta_e=0$, while the simulation presented by \citet{sharp2} has $\beta_e = \beta_i = 2$. Interestingly, the growth rates of the analytical derivation in the cold-background limit and of the simulation with finite $\beta$ values show excellent agreement, indicating that $\beta_e$ and $\beta_i$ have no impact on the maximum growth rate of the intermediate-scale instability.
\citet{sharp2} argue that thermal ion-cyclotron damping (at $k \varv_{\rm th,i} = (1$--$2) \Omega_i$) could at most impact one peak of the instability if parameters are fine-tuned, and thus it typically cannot impact the instability growth for typical astrophysical plasma conditions. Similarly, it is straightforward to show that this is also true for thermal electron-cyclotron damping because the separation between the two peaks of the intermediate-scale instability does not solely depend on the ion-to-electron mass ratio.
For the shock problem studied in the present paper, there are two very different regimes of the beta factor: a low-beta regime in the pre-shock region and a high-beta regime in the shock transition zone. In the pre-shock region, we have $\beta_e = \beta_i = 1.28\times10^{-4}$ ($\mathcal{M}_{\rm A}=5.3$) and $\sim 2 \times 10^{-3}$ ($\mathcal{M}_{\rm A} = 21.3$). The case of $\beta_e \gtrsim1$ would imply a much faster growth of the intermediate-scale instability in comparison to the growth rate of the gyro-scale instability because of the dependence of the growth rate on $\varv_\perp$ as given in equation (6) of \citet{sharp2}. Thus, we expect that in this case the instability will have an even larger impact on the mechanism of electron acceleration, but we postpone a detailed study of this to future work.
At the shock transition zone of the simulations with $m_r=1836$, the electrons are hot and the temperature is such that $k_{\rm B} T_e \sim m_e c^2$. Therefore, we obtain $\beta_e = 1.74$ for the $\mathcal{M}_{\rm A} = 5.3$ simulation and $\beta_e = 27.88$ for the $\mathcal{M}_{\rm A} = 21.3$ simulation, i.e., in both cases the electron beta is larger than one. However, only in the simulation with $\mathcal{M}_{\rm A} = 5.3$ is the relative drift such that the instability condition is fulfilled, thus allowing for the growth of the intermediate-scale instability. This enables resonant interactions between electrons and unstable modes and results in the much larger acceleration efficiency seen in this simulation.
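These transition-zone betas can be checked with a rough back-of-the-envelope evaluation (assuming $k_{\rm B} T_e = m_e c^2$ and the upstream Alfv\'en speed implied by $\mathcal{M}_{\rm A}$ with $\varv_{\rm sh} = 4\varv_u/3$):
\begin{verbatim}
m_r, v_sh = 1836.0, 4.0 * 0.1 / 3.0    # v_sh in units of c
for M_A in (5.3, 21.3):
    vA2_e = (v_sh / M_A)**2 * m_r      # electron Alfven speed squared, in c^2
    print(f"M_A = {M_A}: beta_e ~ {2.0 / vA2_e:.1f}")  # v_th,e^2 = c^2
# -> beta_e ~ 1.7 and ~27.8, close to the quoted values 1.74 and 27.88
\end{verbatim}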
\subsection{On the electron acceleration mechanism}
The proposed mechanism for pre-accelerating electrons works by exciting intermediate-scale waves at the shock transition region.
These sub-ion skin-depth (ion-cyclotron) waves are comoving with the incoming upstream plasma (see bottom panel of Figure~\ref{fig:GrowthRate}) and hence are coherently transported to the downstream region (see Figure~\ref{fig:KLS}).
In the downstream and shock transition regions, the hot electrons scatter off of these unstable waves and the waves reflected at the contact discontinuity (see top-left panel of Figure~\ref{fig:XT-evol}). This leads to the high acceleration efficiency shown in the red simulation (with $\mathcal{M}_{\rm A} = 5.3$) as seen in Figures~\ref{fig:spectra300} and \ref{fig:Eff}, possibly in a first-order Fermi type process.
The simulation with $\mathcal{M}_A=21.3$ (depicted in blue) shows excitation of sub-ion skin-depth (Whistler) waves at the shock front region. However, these modes are not transported to the downstream region (see Figure~\ref{fig:KLS}), which thus results in a much lower electron acceleration efficiency.
That is, this simulation shows that for a parallel shock geometry, electron pre-acceleration due to scattering with Whistler waves has a much lower efficiency in comparison to that in the red simulation (see Figure~\ref{fig:Eff}).
We emphasize that our proposed mechanism for electron pre-acceleration does not depend on ion acceleration.
This is clearly manifested by the large fraction of pre-accelerated electrons before we observe any significant ion acceleration in the simulation with $\mathcal{M}_{\rm A} = 5.3$, as shown in the top panel (red curves) of Figure~\ref{fig:spectra300}.
On the other hand, the mechanism proposed by~\citet{2015phrvl.114h5003p} for electron pre-acceleration relies on the amplification of Bell modes in the shock precursor driven by the propagation of accelerated ions; there, the combination of SDA and scattering off Bell modes results in electron pre-acceleration and injection into DSA cycles.
\subsection{Applications to supernova remnants}
\label{sec:SNR}
While we have clearly demonstrated the importance of the intermediate-scale instability for thermalizing and accelerating electrons in our PIC simulations, we will now turn to discuss the potential relevance of this instability in observations of astrophysical shocks. Perhaps the cleanest astrophysical object is the supernova remnant SN~1006, which enables testing our ideas of the prevailing plasma instabilities that are responsible for electron scattering and acceleration at the quasi-parallel shock conditions we encounter at the polar cap regions of that remnant shock. \citet{Winner+2020} perform three-dimensional MHD simulations of the Sedov--Taylor explosion and evolve the electron distribution function while accounting for magnetic obliquity-dependent electron acceleration. To this end, their method finds and characterizes the MHD shocks, injects a pre-defined electron power-law spectrum into the post-shock region \citep{Winner+2019}, and explores the effects of varying the magnetic amplification factor on the surface brightness maps as well as the multi-frequency spectrum of SN~1006.
Matching the radial emission profiles and the gamma-ray spectrum requires a volume-filling, turbulently amplified magnetic field of $B\approx35~\mu$G in the downstream region of the parallel shock and that the Bell-amplified magnetic field \citep{Bell2004} is starting to get damped in the further post-shock region \citep[see figure~2 of][]{Winner+2020}. The exact value of the Bell amplification factor $f_\mathrm{B}$ barely changes the multi-frequency spectrum so that we obtain a post-shock Alfv\'en velocity of $\varv_\mathrm{A}=B f_\mathrm{B}/\sqrt{\mu_0\mu m_p n}\approx200~f_\mathrm{B}~\mathrm{km~s}^{-1}$, where $\mu=1.25$ is the mean molecular weight of the warm interstellar medium, $m_p$ is the proton rest mass, and $n=0.12~\mathrm{cm}^{-3}$. While the Bell-amplified field is maximized in the immediate post-shock regime \citep{Caprioli2014b}, the turbulently amplified magnetic field keeps rising in the post-shock regime \citep{Ji+2016}, so that it is appropriate to set $f_\mathrm{B}=1$, while noting that the turbulently amplified field on its route to saturation replaces the Bell-amplified field as the latter is damped further downstream.
Adopting the lab frame shock velocity of SN~1006 at the current epoch, $\varv_s=3000~\mathrm{km~s}^{-1}$, we obtain a post-shock Alfv\'enic Mach number of
\begin{align}
\mathcal{M}_\mathrm{A}=\frac{\varv_s}{\varv_\mathrm{A}}=15.2<\frac{\sqrt{m_r}}{2},
\end{align}
which obeys the condition for exciting the intermediate-scale instability and thus should enable efficient electron acceleration at the polar cap regions of the parallel shocks of SN~1006 \citep[see figure~4 of][]{Winner+2020}.
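This estimate is easily reproduced numerically (SI units; the input values are those quoted above):
\begin{verbatim}
import numpy as np

mu0, m_p = 4e-7 * np.pi, 1.6726e-27    # vacuum permeability, proton mass
B, n, mu = 35e-10, 0.12e6, 1.25        # 35 muG, 0.12 cm^-3, warm ISM
v_A = B / np.sqrt(mu0 * mu * m_p * n)  # ~2e5 m/s, i.e., ~200 km/s
v_s = 3.0e6                            # 3000 km/s
print(f"M_A = {v_s / v_A:.1f} < sqrt(m_r)/2 = {np.sqrt(1836) / 2:.1f}")
\end{verbatim}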
In fact, efficient electron acceleration at parallel shocks enables us to put a lower limit on the post-shock magnetic field,
\begin{align}
\label{eq:Bmin}
B&>2 \varv_s \sqrt{\frac{\mu_0\rho}{m_r A}}=B_\mathrm{min}\\
&= 22.7~\left(\frac{\varv_s}{3000~\mathrm{km~s}^{-1}}\right)
\left(\frac{n}{A\, 0.1~\mathrm{cm}^{-3}}\right)^{1/2}\,\mu\mathrm{G},
\end{align}
where $m_r$ is the proton-to-electron mass ratio and $A$ is the atomic mass number of the element responsible for driving the intermediate-scale instability. Hence, if the plasma is composed of abundant ${}^{56}\mathrm{Fe}$ ions, the minimum post-shock magnetic field is lowered to $3~\mu$G for the same shock parameters.
For heavy ions to dominate the growth of the intermediate-scale instability, we require them to be very abundant because the growth rate, $\Gamma$, of the intermediate-scale instability depends on the number density of ions via $\Gamma \propto n_{\rm Fe}^{1/3}$.
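Evaluating Equation~\eqref{eq:Bmin} numerically for protons ($A=1$) and for the ${}^{56}\mathrm{Fe}$-driven case discussed above gives (a sketch in SI units):
\begin{verbatim}
import numpy as np

mu0, m_p = 4e-7 * np.pi, 1.6726e-27
v_s, n, mu, m_r = 3.0e6, 0.1e6, 1.25, 1836   # 3000 km/s, 0.1 cm^-3
rho = mu * m_p * n
for A in (1, 56):
    B_min = 2.0 * v_s * np.sqrt(mu0 * rho / (m_r * A))
    print(f"A = {A:2d}: B_min = {B_min * 1e10:.1f} muG")
# -> 22.7 muG for protons and ~3 muG for Fe-56
\end{verbatim}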
\section{Conclusions}
\label{sec:concl}
In collisionless shocks, electrons are heated to energies much larger than the kinetic energy of upstream electrons impinging on the shock. However, in non-relativistic shocks, their Larmor radii fall short of those of ions so that another acceleration mechanism is needed to boost their energies to the point where they can actively participate in the process of DSA.
Previously suggested mechanisms, which were based on driving whistler waves unstable by hot downstream or cold upstream electrons, require high values of the {\alf}ic Mach number and were shown to not work in PIC simulations \citep{Niemiec+2012}.
In this paper we consider a new mechanism for electron pre-acceleration in quasi-parallel, non-relativistic electron-ion shocks that is based on driving ion-cyclotron waves unstable by means of ions that drift with their upstream velocity through the shock transition zone. The corresponding intermediate-scale instability \citep{sharp2} only works for low values of the {\alf}ic Mach number at the shock front, $\mathcal{M}_{\rm A}^f < \sqrt{m_i/m_e}/2 \approx 21$, which is the condition for the instability to operate.
We present results from three 1D3V PIC simulations for parallel electron-ion shocks with sonic Mach number $\mathcal{M}_{\rm s} \sim 365$:
the first (red) simulation uses a realistic ion-to-electron mass ratio of $m_r=1836$ and provides favorable conditions for exciting the intermediate-scale instability.
The second (black) simulation employs identical physical initial conditions but has an artificially lower value for the ion-to-electron mass ratio, $m_r=100$, which violates the instability condition.
In the third (blue) simulation, the condition is also violated by using a higher value of $\mathcal{M}_{\rm A}$ and $m_r=1836$.
Highlight results include:
\begin{itemize}
\item Only the simulation that grows the intermediate-scale instability scatters hot electrons in the downstream region off of the driven ion-cyclotron waves. We demonstrate that this efficiently pre-accelerates electrons to energies that enable them to participate in DSA cycles and yields a non-thermal power-law distribution of electrons (see Figure~\ref{fig:spectra300}).
\item This effective electron acceleration comes about because the excited ion-cyclotron waves at the shock front are comoving with the upstream plasma and hence, survive advection into the downstream. In consequence, the amplitude of perpendicular magnetic field fluctuations, which are resonantly scattering hot electrons, is substantially increased in comparison to the other simulations that preclude instability growth (see Figures~\ref{fig:spectra300} and \ref{fig:KLS}).
\item The simulation with the higher value of $\mathcal{M}_{\rm A}$ shows a reduction, by more than 2 orders of magnitude, in the efficiency of electron acceleration (see Figure~\ref{fig:Eff}). However, the electrons in the downstream are heated to the same temperature as in the red simulation (see Figure~\ref{fig:Temp}).
\item The simulation with the lower mass ratio ($m_r=100$) results in much lower heating of electrons in the downstream region and thus an even lower electron acceleration efficiency (see Figures~\ref{fig:Eff} and \ref{fig:Temp}). We conclude that accurate PIC simulations of collisionless electron-ion shocks require realistic mass ratios.
\end{itemize}
These findings put the magnetic amplification processes at shocks into the limelight because of the strict instability criterion that favors low values of $\mathcal{M}_{\rm A}$ and as such, is able to relate efficient electron acceleration with a lower limit on the downstream magnetic field strength (see Equation~\ref{eq:Bmin}). Most importantly, this paper provides an important cornerstone in our understanding of diffusive shock acceleration of electrons, and potentially enables us to construct an analytical subgrid model for including electron acceleration into modern MHD models.
\section*{Acknowledgements}
We would like to thank Anatoly Spitkovsky for discussions on various aspects of the simulations. We also acknowledge comments by Damiano Caprioli and the anonymous referee.
M.S., R.L., T.T., and C.P. acknowledge support by the European Research Council under ERC-CoG grant CRAGSMAN-646955.
The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputer SuperMUC-NG at Leibniz Supercomputing Centre (www.lrz.de). This research was in part supported by the Munich Institute for Astro- and Particle Physics (MIAPP) which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC-2094 – 390783311.
\begin{appendix}
\section{Distribution of relativistic particles}
\label{app:temp}
Here we review the shape of the momentum distribution of charged particles in thermal equilibrium, and the impact of the existence of a non-thermal tail.
We discuss how it can be characterized in such cases, and different ways to quantify the fraction of density and energy in its non-thermal tail.
\subsection{Average thermal energy}
In thermal equilibrium, the rest-frame (R) isotropic momentum distribution of relativistic particles (with mass $m_s$) is given by the Maxwell-J{\"u}ttner distribution~\citep{Wright1975}
\begin{eqnarray}
\label{Eq:distribution}
f_{\rm th}(\vec{u}) = \frac{ n_R e^{-\gamma/\tilde{T}_R} }{4 \pi \tilde{T}_R K_2(1/\tilde{T}_R)} ,
\end{eqnarray}
where $K_2$ is the modified Bessel function of the second kind, the Lorentz-factor is $\gamma = \sqrt{1+ \vec{u} \cdot \vec{u}}$, and $\tilde{T}_R$ is the thermodynamical equilibrium temperature in the rest frame\footnote{
It is worth mentioning here that because the phase-space volume is invariant under Lorentz transformation, this distribution takes the same form in any other frame, e.g., in the laboratory frame \citep{Wright1975}.} normalized by $m_s c^2/k_{\rm B}$, all velocities are normalized with the speed of light $c$, and $n_R$ is the rest-frame number density of the particles.
Therefore, the average (thermal + rest-mass) energy per particle, normalized by $m_s c^2$, is
\begin{eqnarray}
\langle \bar{E} \rangle &=& \int d^3u \gamma \frac{f_{\rm th}(\vec{u})}{n_R} = 4 \pi \int_0^{\infty} du \frac{u^2 \gamma e^{-\gamma/\tilde{T}_R} }{4 \pi \tilde{T}_R K_2(1/\tilde{T}_R)}
\nonumber \\
&=&
\frac{1}{ \tilde{T}_R K_2(1/\tilde{T}_R)} \int_0^{\infty} du u^2 \gamma e^{-\gamma/\tilde{T}_R}
\nonumber \\
&=&
\frac{-1}{ \tilde{T}_R K_2(1/\tilde{T}_R)} \frac{d}{d(1/\tilde{T}_R)} \int_0^{\infty} du u^2 e^{-\gamma/\tilde{T}_R}
\nonumber \\
&=&
\frac{-1}{ \tilde{T}_R K_2(1/\tilde{T}_R)} \frac{d ( \tilde{T}_R K_2(1/\tilde{T}_R) )}{d(1/\tilde{T}_R)}
=3 \tilde{T}_R +\frac{K_1 \left(1/\tilde{T}_R \right)}{K_2\left(1/\tilde{T}_R \right)}.
~~~~~~~
\end{eqnarray}
That is, the average thermal energy per particle is
\begin{eqnarray}
\label{Eq:Eth}
\langle E_{\rm th} \rangle = \left( 3 \tilde{T}_R +\frac{K_1 \left(1/\tilde{T}_R \right)}{K_2\left(1/\tilde{T}_R \right)} - 1 \right) m_s c^2.
\end{eqnarray}
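Equation~\eqref{Eq:Eth} is straightforward to evaluate numerically, e.g., with the modified Bessel functions from SciPy (here $T$ stands for $\tilde{T}_R$):
\begin{verbatim}
from scipy.special import kn

def mean_thermal_energy(T):    # in units of m_s c^2
    return 3.0 * T + kn(1, 1.0 / T) / kn(2, 1.0 / T) - 1.0

for T in (0.01, 1.0):
    print(f"T = {T}: <E_th> = {mean_thermal_energy(T):.4f} m_s c^2")
# recovers the non-relativistic limit <E_th> -> (3/2) T for small T
\end{verbatim}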
\subsection{Equilibrium rest-frame temperature}
For a distribution of particle momenta, we can find $\tilde{T}_R$ (assuming that most low-energy particles are in thermal equilibrium) as follows:
\begin{eqnarray}
\label{Eq:du4fu}
\frac{d }{du} \left[ u^4 f_{\rm th} \right] = \frac{ n_R u^3 \left(4 \sqrt{u^2+1} \tilde{T}_R-u^2\right) }{4 \pi \tilde{T}_R^2 K_2(1/\tilde{T}_R) \sqrt{u^2+1}} e^{-\frac{\sqrt{u^2+1}}{\tilde{T}_R}} .
\end{eqnarray}
That is, using the value $u_m$ of $u$ for which $u^4 f(u)$ is maximal, we can determine $\tilde{T}_R$ via
\begin{eqnarray}
\label{Eq:TR}
\tilde{T}_R = \frac{ u_m^2 }{ 4 \sqrt{1+u_m^2}}.
\end{eqnarray}
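In code, Equation~\eqref{Eq:TR} is a one-liner:
\begin{verbatim}
import numpy as np

def T_R(u_m):   # normalized temperature from the peak u_m of u^4 f(u)
    return u_m**2 / (4.0 * np.sqrt(1.0 + u_m**2))
\end{verbatim}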
\subsection{Momentum distribution with non-thermal particles}
Here, we consider the full momentum distribution $f_{\rm full}$ that contains non-thermal and high-energy particles.
The main assumption adopted here is that {\it{all low energy particles are in thermal equilibrium}}.
That is, all non-thermal particles are such that $u \gg u_m$, which means that the value of $u$ at which the derivative in Equation~\eqref{Eq:du4fu} vanishes always lies outside the range in which the non-thermal part of the distribution is non-zero.
Therefore, the value of $u_m$, and hence $\tilde{T}_R$ as computed from Equation~\eqref{Eq:TR}, remains the same.
An example of such a case is shown in Figure~\ref{fig:Thermal}.
Assuming that the non-thermal part of the distribution follows a power law with a typical index of $-4$, the full distribution can be approximated as follows,
\begin{eqnarray}
\label{Eq:Ffull}
f_{\rm full}(u) = \frac{n_R}{1+\alpha}
\left[
\frac{e^{-\gamma/\tilde{T}_R}}{4 \pi \tilde{T}_R K_2 (1/\tilde{T}_R)}
+
\alpha
\frac{u_1 u_2 u^{-4}}{4 \pi (u_2 - u_1) }\theta_1 \theta_2
\right],
\end{eqnarray}
where $\alpha$ is the fractional number density of non-thermal particles, which is implicitly assumed to be small, i.e., $\alpha \ll 1$, $\theta_1 = \theta(u-u_1)$, $\theta_2 = \theta(u_2 -u)$, and $u_1$ and $u_2$ (with $u_2>u_1$) delimit the range in which the distribution takes a power-law form.
The distribution in Equation~\eqref{Eq:Ffull} is discontinuous at $u=u_1$. However, in reality the thermal and $u^{-4}$ parts of the distribution are connected with a suprathermal distribution (with a logarithmic slope steeper than $-4$) that starts at $u \sim 4 u_m$ to $ 5 u_m$ \citep{Caprioli2014a}.
Therefore, the suprathermal distribution does not change the relation between $u_m$ and $\tilde{T}_R$ (Equation~\ref{Eq:TR}).
The inclusion of the non-thermal part leads to a lower fraction of particles in thermal equilibrium and hence proportionately lowers the value of $E_{\rm th}$ at a given $\tilde{T}_R$, namely
\begin{eqnarray}
\label{Eq:Eth2}
\langle E_{\rm th} \rangle_{\rm full} \approx \frac{1}{1+\alpha} \left( 3 \tilde{T}_R +\frac{K_1 \left(1/\tilde{T}_R \right)}{K_2\left(1/\tilde{T}_R \right)} - 1 \right) m_s c^2.
\end{eqnarray}
The value of $\alpha$ can be determined by comparing the values of $f(u_m)$ and $f_{\rm full}(u_m)$: here $f(u_m)$ is computed from Equation~\eqref{Eq:distribution} at the value $u_m$ where $u^4 f_{\rm full}(u)$ is maximized, while $f_{\rm full}(u_m)$ is computed from the normalized histogram of the particles' normalized momenta ($u$) in the simulation.
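A hypothetical implementation of this estimate (the function name and inputs are placeholders) uses that the power-law part of Equation~\eqref{Eq:Ffull} vanishes at $u_m$, so that $f_{\rm full}(u_m) = f_{\rm th}(u_m)/(1+\alpha)$:
\begin{verbatim}
import numpy as np
from scipy.special import kn

def alpha_from_peak(f_full_at_um, u_m, n_R=1.0):
    T = u_m**2 / (4.0 * np.sqrt(1.0 + u_m**2))   # Equation (Eq:TR)
    gamma_m = np.sqrt(1.0 + u_m**2)
    f_th = n_R * np.exp(-gamma_m / T) / (4.0 * np.pi * T * kn(2, 1.0 / T))
    return f_th / f_full_at_um - 1.0
\end{verbatim}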
\begin{figure}
\includegraphics[width=8.6cm]{Plots/Fig08.pdf}
\caption{
Momentum distribution for ions (red dashed) and electrons (red solid) at $t~ \Omega_i=325$ for the Ma5Mr1836 simulation, where we first transform momenta of particles to the plasma rest frame before we compute the momentum spectra.
Solid black lines show the analytical Maxwell-J{\"u}ttner distribution scaled appropriately, $u^4 f(u)$ (as given by Equation~\ref{Eq:distribution}). The normalized temperature (given by Equation~\ref{Eq:TR}) is determined from the location of the peaks ($u_m^i$ for ions and $u_m^e$ for electrons).
Particles with $u>5u_m$ (to the right of the blue lines) are considered to be non-thermal.
\label{fig:Thermal}
}
\end{figure}
\subsection{Acceleration efficiency}
We note here that $\alpha = n_{\rm non-th}/n_{\rm th}$, where $n_{\rm th}$ is the number density of particles in thermal equilibrium, and $n_{\rm non-th}$ is the number density of non-thermal particles. Thus,
the rest-frame total number density is $n_R = n_{\rm th} + n_{\rm non-th}$.
That is, we can define the efficiency of acceleration as the fraction of non-thermal particles ($\epsilon_n$) measured in per cent via
\begin{eqnarray}
\label{Eq:EffN}
\epsilon_n \equiv \frac{n_{\rm non-th}}{n_R} \times 100 = \frac{\alpha}{1+\alpha} \times 100 .
\end{eqnarray}
We can also define the acceleration efficiency by the fractional energy carried by non-thermal particles ($\epsilon_E$) in per cent as
\begin{eqnarray}
\label{Eq:EffE}
\epsilon_E & \equiv &
\frac{ E_{\rm tot} - \langle E_{\rm th} \rangle_{\rm full} }{ E_{\rm tot} } \times 100
=
\frac{E_{\rm non-th}}{E_{\rm tot}} \times 100 ,
\end{eqnarray}
where $E_{\rm tot} = \langle \gamma -1 \rangle m_s c^2$ is the average kinetic energy per particle, with $\langle \gamma-1 \rangle$ taken over all thermal and non-thermal particles.
An equally important definition of the acceleration efficiency is how much of the upstream plasma kinetic energy is channeled into non-thermal energy. In this case, the efficiency of acceleration in per cent is defined as
\begin{eqnarray}
\label{Eq:Eff-sh}
\epsilon_{\rm sh}
& \equiv &
\frac{ E_{\rm non-th} }{
0.5 ~ (m_e + m_i) \varv_{\rm sh}^2 } \times 100 ,
\end{eqnarray}
where $\varv_{\rm sh}$ is the upstream plasma drifting speed in the shock rest-frame.
We compute the non-thermal energy, $E_{\rm non-th}$, as the average energy per particle carried by particles with $u>5u_m$ \citep[][]{Xu2020}. The vertical blue lines in Figure~\ref{fig:Thermal} show such cuts.
That is, particles with velocities $u>5u_m$ are assumed to form the non-thermal part of the distribution function.
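A schematic evaluation of these efficiency definitions from a sample of rest-frame 4-velocities might read as follows (illustrative only; velocities in units of $c$, masses and energies in units of $m_e$ and $m_e c^2$, and $\varv_{\rm sh}=4\varv_u/3$):
\begin{verbatim}
import numpy as np

def efficiencies(u, u_m, m_s=1.0, m_e=1.0, m_i=1836.0, v_sh=4.0 * 0.1 / 3.0):
    nonth = u > 5.0 * u_m
    gamma = np.sqrt(1.0 + u**2)
    eps_n = 100.0 * nonth.mean()                    # Equation (Eq:EffN)
    E_nonth = m_s * np.mean((gamma - 1.0) * nonth)  # per particle
    eps_sh = 100.0 * E_nonth / (0.5 * (m_e + m_i) * v_sh**2)  # (Eq:Eff-sh)
    return eps_n, eps_sh
\end{verbatim}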
\end{appendix}
\section*{Introduction}
The classification problem for group actions on operator algebras has a long history dating back to the hallmark results of Connes \cite{Connes73, Connes74, Connes75, Connes76, Connes77} about injective factors that employed ideas of a dynamical nature in a crucial way.
The subsequent pioneering work by Jones \cite{Jones80}, Ocneanu \cite{Ocneanu85} and others \cite{SutherlandTakesaki89, KawahigashiSutherlandTakesaki92, KatayamaSutherlandTakesaki98, Masuda13} completely unraveled the structure of countable amenable group actions on injective factors and paved the way for related further developments in subfactor theory \cite{Jones83, Popa94, Popa10, Masuda99}.
When the acting group is nonamenable, then the structure of its actions on the hyperfinite II$_1$-factor is already less manageable, as a theorem of Jones \cite{Jones83oc} and other subsequent stronger \emph{no-go theorems} such as \cite{Popa06, BrothierVaes15} later demonstrated.
Connes' seminal paper \cite{Connes76} was the first of many influential works in operator algebras to make use of a kind of touchstone object (in his case the hyperfinite II$_1$-factor $\mathcal{R}$) and to begin the classification approach by showing that every object to be classified absorbs this touchstone object.
More specifically, Connes' approach begins with a structural result asserting that every injective II$_1$-factor is \emph{McDuff} --- i.e., tensorially absorbs $\mathcal R$; see \cite{McDuff70} --- which is then used further to show that each such factor is isomorphic to $\mathcal R$.
In Ocneanu's pioneering work to classify outer actions of an arbitrary countable amenable group $G$ on $\mathcal{R}$, he likewise proves at an early point in the theory that each such action (even without assuming outerness) absorbs the trivial $G$-action on $\mathcal R$ tensorially, which he then exploits for his more sophisticated classification theorem.
Although one would generally need injectivity of a factor to arrive at a satisfactory classification theory about (outer) $G$-actions on it, this early part of Ocneanu's theory works in remarkable generality.
The precise statement and a (comparably) self-contained proof using the methods of this article, which we included for the reader's convenience, is Theorem~\ref{theorem:model-absorption}.
If one is concerned with C$^*$-algebraic considerations related to the \emph{equivariant Jiang--Su stability problem} (see \cite[Conjecture A]{Szabo21si}), the current methods always find a way to exploit Ocneanu's aforementioned theorem in one form or another, usually involving to some degree Matui--Sato's property (SI) technique \cite{MatuiSato12acta, MatuiSato12, MatuiSato14, Sato19}.
Looking at the state-of-the-art at present \cite{GardellaHirshbergVaccaro, Wouters21}, the key difficulties arise from pushing these methods to the case where a group action $G\curvearrowright A$ on a C$^*$-algebra induces a complicated $G$-action on the traces of $A$.
In particular, it is generally insufficient for such considerations to only understand $G$-actions on $\mathcal{R}$, but one rather needs to have control over $G$-actions on more general tracial von Neumann algebras.
This C$^*$-algebraically motivated line of investigation led us to ask the following question that is intrinsic to von Neumann algebras:
\begin{question}
Let $G$ be a countable amenable group and $M$ a separably acting finite von Neumann algebra with $M\cong M\bar{\otimes}\mathcal R$.
Is it true that every action $\alpha: G\curvearrowright M$ is cocycle conjugate to $\alpha\otimes\operatorname{id}_{\mathcal R}: G\curvearrowright M\bar{\otimes}\mathcal R$?
\end{question}
Although Ocneanu's original results confirm this in full generality when $M$ is a factor,\footnote{While Ocneanu's work \cite{Ocneanu85} only contains an explicit proof for so-called centrally free actions $\alpha$, his comment following \cite[Theorem 1.2]{Ocneanu85} suggests an alternative approach to avoid this assumption. In several papers, the more general version without central freeness is also attributed to Ocneanu.} it turned out to be not so straightforward to resolve this question, despite common folklore wisdom in the subject suggesting that the factor case ought to imply the general case.
Some classification results in the literature \cite{JonesTakesaki84, SutherlandTakesaki85} imply that the above has a positive answer when $M$ is injective, but relying on this has two drawbacks.
Firstly, the question we are trying to answer is by design much weaker than a hard classification result, so it would be desirable to have a proof not relying on such a powerful theorem, in particular when an assumption such as injectivity may not even be needed.
Secondly, there is a subtle gap in the proof of \cite[Lemma 4.2]{SutherlandTakesaki85}.
We are indebted to Stefaan Vaes for pointing this out to us in the context of the above question and for outlining a sketch of proof on how to correct this, which became a sort of blueprint for the main result of the fourth section.
In contemporary research by C$^*$-algebraists, the aforementioned results by Sutherland--Takesaki are still used to provide a partial answer to the above question, for example in \cite{GardellaHirshbergVaccaro}. In light of the previous discussion, the present article aims to give a self-contained and --- dare we say also relatively elementary --- approach to answer this question instead.
In fact we can treat it in greater generality than posed above, without restrictions on the type of $M$ and in the setting of amenable actions of arbitrary discrete groups.
The following can be viewed as our main result; see Theorem~\ref{thm:general-amenable-McDuff}.
\begin{theoremi} \label{theorem-A}
Let $G$ be a countable discrete group and $M$ a von Neumann algebra with separable predual such that $M \cong M\bar{\otimes} \mathcal{R}$.
Then every amenable action $\alpha\colon G \curvearrowright M$ is cocycle conjugate to $\alpha \otimes \mathrm{id}_\mathcal{R}\colon G\curvearrowright M\bar{\otimes} \mathcal{R}$.
\end{theoremi}
Along the way, our methodology employs dynamical variants of McDuff-type properties analogous to the theory of strongly self-absorbing C$^*$-dynamics \cite{Szabo18ssa}, which can and is treated in the more general setting of continuous actions of locally compact groups; see Definitions~\ref{def:strong-absorption} and \ref{def:ssa-action}.
\begin{defii}
Let $G$ be a second-countable locally compact group.
An action $\delta: G\curvearrowright\mathcal R$ is called \emph{strongly self-absorbing}, if there exists an isomorphism $\Phi: \mathcal R\to\mathcal R\bar{\otimes}\mathcal R$, a $(\delta\otimes\delta)$-cocycle $\mathbb{U}: G\to\mathcal{U}(\mathcal R\bar{\otimes}\mathcal R)$ and a sequence of unitaries $v_n\in\mathcal{U}(\mathcal R\bar{\otimes}\mathcal R)$ such that
\[
v_n(x\otimes 1_{\mathcal R})v_n^* \to \Phi(x) \quad\text{and}\quad v_n(\delta\otimes\delta)_g(v_n)^* \to \mathbb{U}_g
\]
in the strong operator topology for all $x\in\mathcal R$ and $g\in G$, the latter uniformly over compact subsets in $G$.
\end{defii}
For such actions we prove the following dynamical generalization of McDuff's famous theorem \cite{McDuff70}; see Theorem~\ref{thm:general-McDuff}.
\begin{theoremi} \label{theorem-C}
Let $G$ be a second-countable locally compact group.
Let $\alpha: G \curvearrowright M$ be an action on a von Neumann algebra with separable predual and let $\delta: G \curvearrowright \mathcal{R}$ be a strongly self-absorbing action on the hyperfinite II$_1$-factor.
Then $\alpha$ is cocycle conjugate to $\alpha\otimes\delta$ if and only if there exists a unital equivariant $*$-homomorphism $(\mathcal R,\delta) \to (M_{\omega,\alpha},\alpha_\omega)$, where the latter denotes the induced $G$-action on the asymptotic centralizer algebra of $M$.
\end{theoremi}
Our initial methodology inspired by the theory of C$^*$-dynamics is only well-suited to build all the aforementioned (and other related) theory in the setting of (actions on) semi-finite von Neumann algebras.
After the first preliminary section, the second and third section are dedicated to proving Theorem~\ref{theorem-C} in the special case of semi-finite von Neumann algebras.
The fourth section then builds on some of this theory, combined with the original ideas by Sutherland--Takesaki \cite{SutherlandTakesaki85} related to disintegrating a $G$-action to an action of its transformation groupoid induced on the center. This culminates in our main technical result of that section --- a kind of measurable local-to-global principle for absorbing a given strongly self-absorbing action, Theorem~\ref{thm:main_technical} --- which is then used to prove a stronger version of Theorem~\ref{theorem-A} for actions on semi-finite von Neumann algebras.
The general main results are then obtained in the fifth section with the help of Tomita--Takesaki theory.
It is in this step that it becomes obvious why we want to treat Theorem~\ref{theorem-C} beyond the case of discrete groups.
Namely, if $\alpha\colon G\curvearrowright M$ is an action as in Theorem~\ref{theorem-A} on a von Neumann algebra that is not semi-finite, we may consider the extended action $\tilde{\alpha}\colon G\curvearrowright\tilde{M}$ on the (semi-finite) continuous core.
However, in order to conclude that $\alpha$ absorbs $\delta$ with the help of Tomita--Takesaki theory, it is not sufficient to argue that $\tilde{\alpha}$ absorbs $\delta$, but one actually needs to verify this absorption for certain enlargements of these actions to continuous actions of $G\times\mathbb{R}$, which in any case requires Theorem~\ref{theorem-C} for non-discrete groups.
Fortunately this can all be arranged to work and we end the last section with the proofs of our main results.
\section{Preliminaries}
Throughout the paper, $\omega$ denotes a fixed free ultrafilter on $\mathbb{N}$ and $G$ denotes a second-countable, locally compact group.
Let $M$ be a von Neumann algebra with predual $M_*$. For $x \in M$ and $\phi \in M_*$ we define elements $x\phi, \phi x$ and $[x,\phi] \in M_*$ by $(x\phi)(y) = \phi(yx)$, $(\phi x)(y) = \phi(xy)$ for all $y \in M$ and $[x,\phi] = x\phi - \phi x$. Moreover, for $x \in M$ and $\phi \in M_*$ we set $\|x\|_{\phi} = \phi(x^*x)^{1/2}$ and $\|x\|_{\phi}^\# = \phi(x^*x + xx^*)^{1/2}$. When $\phi$ is a faithful normal state, the norms $\|\cdot\|_\phi$ and $\|\cdot\|_\phi^\#$ induce the strong and strong-$*$ topology on bounded sets, respectively.
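For instance, if $\tau$ is a faithful normal tracial state, then $\tau(xx^*) = \tau(x^*x)$ and hence $\|x\|_\tau^\# = \sqrt{2}\, \|x\|_\tau$ for all $x \in M$, so in the tracial case the two norms differ only by a constant.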
More generally, when $\phi$ is a normal weight on $M$, we define $\|x\|_{\phi}:= \phi(x^*x)$ for all $x$ contained in the left-ideal
\[\{x \in M \mid \phi(x^*x) < \infty\}.\]
Recall that $M$ is called $\sigma$-finite if it admits a faithful normal state.
\subsection{Ultrapowers of von Neumann algebras}
We start with a reminder on the Ocneanu ultrapower of a von Neumann algebra and related concepts.
This originates in \cite[Section 2]{Connes74} and \cite[Section 5]{Ocneanu85}, but the reader is also referred to \cite{AndoHaagerup14} for a thorough exposition on ultrapower constructions.
\begin{definition}
Let $M$ be a $\sigma$-finite von Neumann algebra. We define the subset $\mathcal{I}_\omega(M) \subset \ell^\infty(M)$ by
\begin{align*}
\mathcal{I}_\omega(M) &= \{(x_n)_{n \in \mathbb{N}} \in \ell^\infty(M) \mid x_n \rightarrow 0 \text{ $*$-strongly as } n \rightarrow \omega\}\\
&= \{(x_n)_{n \in \mathbb{N}} \in \ell^\infty(M) \mid \lim_{n \rightarrow \omega}\|x_n\|_{\phi}^\# =0 \text{ for some faithful normal state } \phi \text{ on } M\}.
\end{align*}
Denote \[\mathcal{N}_\omega(M)=\{(x_n)_{n \in \mathbb{N}} \in \ell^\infty(M) \mid (x_n)_{n \in \mathbb{N}} \mathcal{I}_\omega(M) \subset \mathcal{I}_\omega(M), \text{ and } \mathcal{I}_\omega(M)(x_n)_{n \in \mathbb{N}} \subset \mathcal{I}_\omega(M)\},\]
\[\mathcal{C}_\omega(M) =\{(x_n)_{n \in \mathbb{N}} \in \ell^\infty(M) \mid \lim_{n \rightarrow \omega}\|[x_n,\phi]\| = 0 \text{ for all } \phi \in M_*\}.\]
Then \[{\mathcal{I}_\omega(M) \subset \mathcal{C}_\omega(M) \subset \mathcal{N}_\omega(M)}.\]
The \emph{Ocneanu} ultrapower $M^\omega$ of $M$ is defined as
\[M^\omega := \mathcal{N}_\omega(M)/\mathcal{I}_\omega(M),\]
and the \emph{asymptotic centralizer} $M_\omega$ of $M$ is defined as
\[M_\omega := \mathcal{C}_\omega(M)/\mathcal{I}_\omega(M).\]
These are both von Neumann algebras.
Any faithful normal state $\phi$ on $M$ induces a faithful normal state $\phi^\omega$ on $M^\omega$ via the formula
\[\phi^\omega((x_n)_{n \in \mathbb{N}}) = \lim_{n \rightarrow \omega} \phi(x_n).\]
The restriction of $\phi^\omega$ to $M_\omega$ is a tracial state.
\end{definition}
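For the reader's convenience we verify the last claim, which follows directly from the definitions: if $(x_n)_{n \in \mathbb{N}}, (y_n)_{n \in \mathbb{N}} \in \mathcal{C}_\omega(M)$ represent $x,y \in M_\omega$, then
\[
|\phi^\omega(xy) - \phi^\omega(yx)| = \lim_{n \rightarrow \omega} |[x_n,\phi](y_n)| \leq \Big( \sup_{n \in \mathbb{N}} \|y_n\| \Big) \lim_{n \rightarrow \omega} \|[x_n,\phi]\| = 0.
\]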
\begin{remark}
Since the constant sequences are easily seen to be contained in $\mathcal{N}_\omega(M)$, one considers $M$ as a subalgebra of $M^\omega$.
If $\lim_{n \rightarrow \omega}\|[x_n,\phi]\| = 0$ for all $\phi \in M_*$, then $\lim_{n \rightarrow \omega}\|[x_n,y]\|_\phi^\# = 0$ for all $y \in M$ by \cite[Proposition 2.8]{Connes74}.
In this way we get a natural inclusion $M_\omega \subset M^\omega \cap M'$.
That same proposition also shows that in order to check whether a sequence $(x_n)_n$ in $\ell^\infty(M)$ satisfies $\lim_{n \rightarrow \omega}\|[x_n,\psi]\| = 0$ for all $\psi \in M_*$, it suffices to check if this is true for just a single faithful normal state $\phi$ and to check if $\lim_{n \rightarrow \omega}\|[x_n,y]\|^\#_\phi =0$ for all $y \in M$.
This shows that $M_\omega = M^\omega \cap M'$ whenever $M$ admits a faithful normal tracial state. The same is then true for all semi-finite von Neumann algebras with separable predual (for example by \cite[Lemma~2.8]{MasudaTomatsu16}).
\end{remark}
\begin{definition}
A continuous action $\alpha\colon G \curvearrowright M$ of a second-countable locally compact group on a von Neumann algebra is a homomorphism $G \to \mathrm{Aut}(M)$, $g \mapsto \alpha_g$ such that
\[
\lim_{g \rightarrow 1_G}\|\varphi \circ \alpha_g - \varphi\|=0 \text{ for all } \varphi \in M_*.
\]
By \cite[Proposition~X.1.2]{Takesaki03}, this is equivalent to the map being continuous for the point-weak-$*$ (or equivalently, point-weak, point-strong,$\hdots$) topology.
In most contexts we omit the word ``continuous'' as it will be implicitly understood that we consider some actions to be continuous.
In contrast, we will explicitly talk of an algebraic $G$-action when we are considering an action of $G$ viewed as a discrete group.
\end{definition}
Given an action $\alpha\colon G \curvearrowright M$, the induced algebraic actions $\alpha^\omega\colon G \rightarrow \mathrm{Aut}(M^\omega)$ and $\alpha_\omega \colon G \rightarrow \mathrm{Aut}(M_\omega)$ are usually not continuous.
The remainder of this subsection is devoted (for lack of a good literature reference) to fleshing out the construction of their `largest' von Neumann subalgebras where the action is sufficiently well-behaved for our needs, called the \emph{$(\alpha, \omega)$-equicontinuous parts} (see Definition \ref{def:equicontinuous_parts}).
These constructions are based on \cite[Section 3]{MasudaTomatsu16}, where the special case $G=\mathbb{R}$ is considered.
\begin{definition}
Let $M$ be a $\sigma$-finite von Neumann algebra with an action $\alpha\colon G \curvearrowright M$.
Fix a faithful normal state $\phi$ on $M$. An element $(x_n)_{n \in \mathbb{N}} \in \ell^\infty(M)$ is called \emph{$(\alpha, \omega)$-equicontinuous} if for every $\varepsilon>0$, there exists a set $W \in \omega$ and an open neighborhood $1_G\in U \subset G$ such that
\[
\sup_{n\in W} \sup_{g\in U} \|\alpha_g(x_n) - x_n\|_\phi^\# < \varepsilon .
\]
We denote the set of $(\alpha, \omega)$-equicontinuous sequences by $\mathcal{E}^\omega_\alpha$.
\end{definition}
\begin{remark}
The definition above does not depend on the faithful normal state chosen.
Whenever $\phi$ and $\psi$ are two faithful normal states on $M$, one has for every $\varepsilon>0$ some $\delta>0$ such that for all $x \in (M)_1$, $\|x\|_\phi^\# < \delta$ implies $\|x\|_\psi^\# < \varepsilon$.
\end{remark}
\begin{lemma}\label{lemma:epsilon_delta_sequences}
Let $M$ be a von Neumann algebra with faithful normal state $\phi$. For all $(x_n)_{n \in \mathbb{N}} \in \mathcal{N}_\omega(M)$ the following holds:
For any $\varepsilon >0$ and compact set $\Psi \subset M_*^+$ there exist $\delta >0$ and $W \in \omega$ such that if $y \in (M)_1$ and $\|y\|_\phi^\#< \delta$, then $\sup_{\psi\in \Psi} \|x_n y\|_{\psi}^\# <\varepsilon$ and $\sup_{\psi\in \Psi} \|y x_n\|_{\psi}^\# <\varepsilon$ for all $n \in W$.
\end{lemma}
\begin{proof} We prove this by contradiction.
Suppose that there exists $\varepsilon >0$ and a compact set $\Psi \subset M_*^+$ such that for any $k \in \mathbb{N}$ there exists a $y_k \in (M)_1$ with $\|y_k\|_\phi^\# < 1/k$ but the following set belongs to $\omega$:
\[
A_k := \Big\{ n \in\mathbb{N} \ \Big| \ \sup_{\psi \in \Psi}\|x_ny_k\|_\psi^\#\geq \varepsilon \text{ or } \sup_{\psi \in \Psi}\|y_k x_n\|_\psi^\# \geq \varepsilon \Big\}.
\]
Define $W_0 := \mathbb{N}$ and $W_k := A_1 \cap \hdots \cap A_k \cap [k, \infty)$ for $k \geq 1$.
These all belong to $\omega$.
For each $n \in \mathbb{N}$ define $k(n)$ as the unique number $k \geq 0$ with $n \in W_k \setminus W_{k+1}$. Put $z_n := y_{k(n)}$ if $k(n) \geq 1$, else put $z_n:=1_M$.
Note that for all $n \in W_m$ with $m \geq 1$ we have $k(n) \geq m$, and hence $\|z_n\|_\phi^\# = \|y_{k(n)}\|_\phi^\# <\frac{1}{k(n)} \leq \frac{1}{m}$.
Since $W_m \in \omega$ for every $m$, it follows that $(z_n)_{n\in \mathbb{N}} \in \mathcal{I}_\omega(M)$.
Since $(x_n)_{n \in \mathbb{N}} \in \mathcal{N}_\omega(M)$, it follows that also $(x_nz_n)_{n\in \mathbb{N}}$ and $(z_nx_n)_{n\in \mathbb{N}}$ belong to $\mathcal{I}_\omega(M)$.
Hence we get that for all $\psi \in \Psi$
\[\lim_{n \rightarrow \omega} \left(\|x_nz_n\|_\psi^\# + \|z_nx_n\|_\psi^\#\right) = 0.\]
Since $\Psi$ is compact, we also have
\[\lim_{n \rightarrow \omega} \sup_{\psi \in \Psi}\left(\|x_nz_n\|_\psi^\# + \|z_nx_n\|_\psi^\#\right) = 0.\]
This gives a contradiction, since our choice of $z_n$ implies that for all $n \in W_1$
\[\sup_{\psi \in \Psi}\left(\|x_nz_n\|_\psi^\# + \|z_nx_n\|_\psi^\#\right)\geq \varepsilon.\]
\end{proof}
\begin{lemma}
Let $M$ be a $\sigma$-finite von Neumann algebra with action $\alpha:G \curvearrowright M$.
For any two sequences $(x_n)_{n \in \mathbb{N}}, (y_n)_{n \in \mathbb{N}} \in \mathcal{E}_\alpha^\omega \cap \mathcal{N}_\omega(M)$ it follows that $(x_ny_n)_{n\in \mathbb{N}} \in \mathcal{E}_\alpha^\omega$.
\end{lemma}
\begin{proof}
Without loss of generality we may assume $\sup_{n \in \mathbb{N}} \|x_n\| \leq \frac{1}{2}$ and $\sup_{n \in \mathbb{N}} \|y_n\| \leq \frac{1}{2}$.
Fix a faithful normal state $\phi$ on $M$. Let $K \subset G$ be a compact neighbourhood of the neutral element.
Take $\varepsilon >0$ arbitrarily.
By Lemma \ref{lemma:epsilon_delta_sequences}, applied to $(x_n)_{n\in \mathbb{N}}$ with the compact set $\Psi = \{\phi \circ \alpha_g \mid g \in K\} \subset M_*^+$ and to $(y_n)_{n\in \mathbb{N}}$ with $\Psi = \{\phi\}$, there exist $\delta>0$ and $W_1 \in \omega$ such that for every $z \in (M)_1$ with $\|z\|_\phi^\# < \delta$ one has
\[
\sup_{g \in K}\|x_n z\|_{\phi \circ \alpha_g}^\#< \frac{\varepsilon}{2} \text{ and } \|zy_n\|_\phi^\# < \frac{\varepsilon}{2} \text{ for all } n \in W_1.
\]
Since $(x_n)_{n\in \mathbb{N}}$ and $(y_n)_{n\in \mathbb{N}}$ both belong to $\mathcal{E}_\alpha^\omega$, we can find an open $U \subset K$ containing the neutral element, and a $W_2 \in \omega$ such that
\[
\sup_{n \in W_2}\sup_{ g \in U}\|\alpha_g(x_n) - x_n\|_\phi^\# < \delta \text{, and}
\]
\[\sup_{n \in W_2}\sup_{ g \in U}
\|\alpha_{g^{-1}}(y_n) - y_n\|_\phi^\# < \delta.
\]
Then for $g \in U$ and $n \in W_1 \cap W_2$ we have
\begin{align*}
\|\alpha_g(x_n) \alpha_g(y_n) - x_ny_n\|_\phi^\# &\leq \|\alpha_g(x_n)(\alpha_g(y_n) - y_n)\|_\phi^\# + \|(\alpha_g(x_n) - x_n)y_n\|_\phi^\#\\
&= \|x_n(\alpha_{g^{-1}}(y_n) - y_n)\|_{\phi \circ \alpha_g}^\# + \|(\alpha_g(x_n) - x_n)y_n\|_\phi^\#\\
& < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon.
\end{align*}
This ends the proof.
\end{proof}
\begin{lemma}
Let $M$ be a $\sigma$-finite von Neumann algebra with an action $\alpha: G \curvearrowright M$.
Then:
\begin{enumerate}[label=\textup{(\arabic*)},leftmargin=*]
\item \label{lem:equicont-algebra:1}
Suppose $(x_n)_{n\in \mathbb{N}}, (y_n)_{n\in \mathbb{N}} \in \ell^\infty(M)$ satisfy $(x_n-y_n)_{n\in \mathbb{N}} \in \mathcal{I}_\omega(M)$.
Then $(x_n)_{n\in \mathbb{N}} \in \mathcal{E}^\omega_\alpha$ if and only if $(y_n)_{n\in \mathbb{N}} \in \mathcal{E}^\omega_\alpha$.
\item \label{lem:equicont-algebra:2}
$\mathcal{E}_\alpha^\omega \cap \mathcal{N}_\omega(M)$ is an $\alpha$-invariant C$^*$-subalgebra of $\ell^\infty(M)$.
\end{enumerate}
\end{lemma}
\begin{proof} Fix a faithful normal state $\phi$ on $M$.
We first prove \ref{lem:equicont-algebra:1}.
Let $\varepsilon>0$.
We can choose $W \in \omega$ and an open neighborhood $1_G \in U \subset G$ such that
\[
\sup_{n\in W} \sup_{g\in U} \|\alpha_g(x_n) - x_n\|_\phi^\# < \frac{\varepsilon}{2}.
\]
Without loss of generality we may assume that $K=\overline{U}$ is compact.
Consider $s_n := \sup_{g \in K} \|x_n - y_n\|_{\phi \circ \alpha_g}^\#$. Since $K$ is compact and ${(x_n-y_n)_{n\in \mathbb{N}} \in \mathcal{I}_\omega(M)}$, we have $\lim_{n \rightarrow \omega} s_n = 0$.
Hence, after possibly replacing $W$ by a smaller set in the ultrafilter, we can assume that $s_n < \varepsilon/4$ for all $n \in W$.
We may conclude for all $g \in U$ and $n \in W$ that
\begin{align*}
\|\alpha_g(y_n) - y_n\|_\phi^\# &\leq \|\alpha_g(y_n) - \alpha_g(x_n)\|_\phi^\# + \|\alpha_g(x_n)- x_n\|_\phi^\# + \|x_n - y_n\|_\phi^\#\\
&\leq 2s_n + \frac{\varepsilon}{2} < \varepsilon.
\end{align*}
Since $\varepsilon>0$ was arbitrary, $(y_n)_{n\in \mathbb{N}}$ belongs to $\mathcal{E}_\alpha^\omega$.
Let us prove \ref{lem:equicont-algebra:2}.
Clearly $\mathcal{E}_\alpha^\omega$ is a $*$-closed, norm-closed linear subspace of $\ell^\infty(M)$.
The previous lemma shows that $\mathcal{E}_\alpha^\omega \cap \mathcal{N}_\omega(M)$ is closed under multiplication.
To see that $\mathcal{E}_\alpha^\omega$ is $\alpha$-invariant, take $(x_n)_{n \in \mathbb{N}} \in \mathcal{E}_\alpha^\omega$ and $h \in G$. Take $\varepsilon >0$.
We can find an open neighborhood $1_G \in U \subset G$ and $W \in \omega$ such that one has
\[\sup_{n \in W}\sup_{g \in U}\|\alpha_g(x_n) - x_n\|_{\phi \circ \alpha_h}^\# < \varepsilon.\]
Then for all $g \in hUh^{-1}$ and $n \in W$ we observe
\[
\|\alpha_g(\alpha_h(x_n)) - \alpha_h(x_n)\|_\phi^\# = \|\alpha_{h^{-1}gh}(x_n) - x_n\|_{\phi \circ \alpha_h}^\# < \varepsilon.
\]
This shows that $(\alpha_h(x_n))_{n\in \mathbb{N}} \in \mathcal{E}_\alpha^\omega$.
\end{proof}
\begin{definition}\label{def:equicontinuous_parts}
Let $M$ be a $\sigma$-finite von Neumann algebra with an action $\alpha\colon G \curvearrowright M$.
We define ${M_\alpha^\omega := (\mathcal{E}_\alpha^\omega \cap \mathcal{N}_\omega(M))/\mathcal{I}_\omega}$ and ${M_{\omega, \alpha}:= M_\alpha^\omega \cap M_\omega}$.
We call them the \emph{$(\alpha,\omega)$-equicontinuous parts} of $M^\omega$ and $M_\omega$, respectively.
\end{definition}
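Note that, by the observation after the definition of $\mathcal{E}^\omega_\alpha$, constant sequences are $(\alpha,\omega)$-equicontinuous; since they also belong to $\mathcal{N}_\omega(M)$, we always have the inclusions
\[
M \subseteq M^\omega_\alpha \subseteq M^\omega \quad \text{and} \quad M_{\omega,\alpha} \subseteq M_\omega.
\]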
\begin{lemma}
Let $M$ be a $\sigma$-finite von Neumann algebra with an action $\alpha\colon G \curvearrowright M$.
Then $M_\alpha^\omega$ and $M_{\omega, \alpha}$ are von Neumann algebras.
\end{lemma}
\begin{proof}
We show that $M_\alpha^\omega$ is a von Neumann algebra by showing that its unit ball is closed with respect to the strong operator topology in $M^\omega$; since the unit ball is convex, its strong and strong-$*$ closures coincide, so it suffices to verify closedness with respect to the strong-$*$ operator topology.
Then it automatically follows that $M_{\omega, \alpha} = M_\alpha^\omega \cap M_\omega$ is also a von Neumann algebra.
Take a sequence $(X_k)_{k\in\mathbb{N}}$ in $(M_\alpha^\omega)_1$ that converges $*$-strongly to $X \in (M^\omega)_1$. Fix a faithful normal state $\phi$ on $M$ and a compact neighbourhood of the neutral element $K \subset G$.
Then the function $K \rightarrow (M^\omega)_*$ given by $g \mapsto \phi^\omega \circ \alpha_g^\omega$ is continuous (because $\phi^\omega \circ \alpha_g^\omega = (\phi \circ \alpha_g)^\omega$).
Hence, the set $\{\phi^\omega \circ \alpha_g^\omega\}_{g \in K}$ is compact and thus $\lim_{k \rightarrow \infty} \sup_{g \in K} \|X_k - X\|^\#_{\phi^\omega \circ \alpha_g^\omega} = 0$.
Fix $\varepsilon >0$.
Pick representing sequences $(x_k(n))_{n\in \mathbb{N}}$ and $(x(n))_{n\in \mathbb{N}}$ for the elements $X_k$ and $X$, respectively, such that $\|x_k(n)\| \leq 1$, $\|x(n)\| \leq 1$, for all $k,n \in \mathbb{N}$.
Then we can find $k_0 \in \mathbb{N}$ and $W_1 \in \omega$ such that
\[
\sup_{n\in W_1} \sup_{g\in K} \|x_{k_0}(n) - x(n)\|_{\phi\circ \alpha_g}^\# < \frac{\varepsilon}{3}.
\]
Since $(x_{k_0}(n))_{n\in \mathbb{N}} \in \mathcal{E}_\alpha^\omega$, we can find an open neighborhood $1_G \in U \subset K$ and $W_2 \in \omega$ such that
\[
\sup_{n\in W_2} \sup_{g\in U} \|\alpha_g(x_{k_0}(n)) - x_{k_0}(n)\|_\phi^\# < \frac{\varepsilon}{3}.
\]
Then for all $g \in U$ and $n \in W_1 \cap W_2$ it holds that
\begin{align*}
\|\alpha_g(x(n)) - x(n)\|_\phi^\# & \leq \|x(n) - x_{k_0}(n)\|_{\phi \circ \alpha_g}^\# + \|\alpha_g(x_{k_0}(n)) - x_{k_0}(n)\|_\phi^\# + \|x_{k_0}(n) - x(n)\|_\phi^\#\\
&< \frac{\varepsilon}{3}+\frac{\varepsilon}{3}+\frac{\varepsilon}{3} = \varepsilon.
\end{align*}
This shows that $(x(n))_{n \in \mathbb{N}} \in \mathcal{E}_\alpha^\omega$, or in other words $X \in M_\alpha^\omega$.
\end{proof}
\begin{lemma}
Let $M$ be a $\sigma$-finite von Neumann algebra with an action $\alpha\colon G \curvearrowright M$ of a second-countable locally compact group.
Then $\alpha^\omega$ restricted to $M_\alpha^\omega$ and $\alpha_\omega$ restricted to $M_{\omega,\alpha}$ are continuous $G$-actions.
\end{lemma}
\begin{proof}
Fix a faithful normal state $\phi$ on $M$. Since $\phi^\omega$ is faithful, $\{a\phi^\omega \mid a \in M_\alpha^\omega\}$ is dense in $(M_\alpha^\omega)_*$. For $a \in M_\alpha^\omega$ and $g \in G$ one has
\begin{align*}
\|(a\phi^\omega) \circ \alpha^\omega_g - a \phi^\omega \|_{(M_\alpha^\omega)_*} & \leq \|\alpha_{g^{-1}}^\omega(a)(\phi^\omega \circ \alpha_g^\omega - \phi^\omega)\|_{(M_\alpha^\omega)_*} + \|(\alpha_{g^{-1}}^\omega(a) - a)\phi^\omega\|_{(M_\alpha^\omega)_*}\\
& \leq \|a\| \, \|\phi \circ \alpha_g - \phi\|_{M_*} + \|\alpha_{g^{-1}}^\omega(a) - a\|_{\phi^\omega}.
\end{align*}
When $g \rightarrow 1_G$, this expression converges to zero because $\alpha$ is a continuous $G$-action and $a \in M_\alpha^\omega$.
This shows that $\alpha^\omega$ restricts to a genuine continuous $G$-action on $M_\alpha^\omega$, so the same is true for the restriction of $\alpha_\omega$ to $M_{\omega, \alpha}$.
\end{proof}
\begin{lemma} \label{lemma:lifting_invariance_compact_sets}
Let $M$ be a von Neumann algebra with a faithful normal state $\phi$ and an action $\alpha\colon G \curvearrowright M$.
Let $z \in M_\alpha^\omega$, $\varepsilon >0$, $K \subset G$ a compact set and suppose that $\| \alpha_g^\omega(z)-z\|_{\phi^\omega}^\# \leq \varepsilon$ for all $g \in K$.
If $(z_n)_{n\in \mathbb{N}}$ is any bounded sequence representing $z$, then
\[
\lim_{n \rightarrow \omega} \max_{g \in K} \| \alpha_g(z_n)-z_n\|_\phi^\# \leq \varepsilon.
\]
\end{lemma}
\begin{proof}
Let $\delta>0$.
Note that the bounded sequence $(z_n)_{n\in \mathbb{N}}$ is itself $(\alpha,\omega)$-equicontinuous, since it differs from an equicontinuous representative of $z$ by an element of $\mathcal{I}_\omega(M)$, and recall that equicontinuity does not depend on the chosen faithful normal state. Hence for each $g \in K$ there exist an open neighborhood $g \in U \subset G$ and $W_g \in \omega$ such that
\[\sup_{n \in W_g}\sup_{h \in U}\|\alpha_h(z_n) - \alpha_g(z_n)\|_\phi^\# < \delta.\]
Since this yields an open cover of the compact set $K$, we can find finitely many elements $g_1, \hdots, g_N \in K$ and an open covering $K \subset \cup_{j=1}^N U_j$ with $g_j \in U_j$ and some $W_1 \in \omega$ such that for $j=1, \hdots, N$ we have
\[\sup_{n \in W_1}\sup_{g \in U_j}\|\alpha_{g}(z_n) - \alpha_{g_j}(z_n)\|_\phi^\# < \delta.\]
Since $\max_{g \in K}\| \alpha_g^\omega(z)-z\|_{\phi^\omega}^\# \leq \varepsilon$, there exists $W_2 \in \omega$ such that for all $n \in W_2$ and $j=1, \hdots, N$
\[\|\alpha_{g_j}(z_n) - z_n\|_\phi^\# \leq \varepsilon+\delta.\]
Hence, for an arbitrary $g \in K$, there is some $j \in \{1, \hdots, N\}$ such that $g \in U_j$ and
\[\| \alpha_g(z_n)-z_n\|_\phi^\# \leq \|\alpha_g(z_n) - \alpha_{g_j}(z_n)\|_\phi^\# + \|\alpha_{g_j}(z_n) - z_n\|_\phi^\# \leq 2\delta+ \varepsilon\]
for all $n \in W_1 \cap W_2.$
Since $\delta$ was arbitrary, this proves the claim.
\end{proof}
\subsection{Cocycle morphisms}
\begin{definition}[cf.\ {\cite[Definition 1.10]{Szabo21cc}}]
Let $\alpha\colon G \curvearrowright M$ and $\beta\colon G \curvearrowright N$ be two actions of a second-countable locally compact group on von Neumann algebras.
A \emph{cocycle morphism} from $(M,\alpha)$ to $(N,\beta)$ is a pair $(\phi,\mathbbm{u})$, where $\phi\colon M \rightarrow N$ is a unital normal $*$-homomorphism and $\mathbbm{u}:G\rightarrow \mathcal{U}(N)$ is a continuous map (in the strong operator topology) such that for all $g,h \in G$ we have
\[
\mathrm{Ad}(\mathbbm{u}_g) \circ \beta_g \circ \phi = \phi \circ \alpha_g \quad\text{and}\quad \mathbbm{u}_g \beta_g(\mathbbm{u}_h) = \mathbbm{u}_{gh}.
\]
In the special case where $\mathbbm{u}$ is the trivial map, we identify $\phi$ with $(\phi,1)$ and call $\phi$ equivariant.
\end{definition}
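As a quick consistency check (spelled out for convenience), the two displayed conditions are compatible with the group law: for $g,h \in G$ one computes
\begin{align*}
\mathrm{Ad}(\mathbbm{u}_{gh}) \circ \beta_{gh} \circ \phi &= \mathrm{Ad}(\mathbbm{u}_g \beta_g(\mathbbm{u}_h)) \circ \beta_g \circ \beta_h \circ \phi
= \mathrm{Ad}(\mathbbm{u}_g) \circ \beta_g \circ \mathrm{Ad}(\mathbbm{u}_h) \circ \beta_h \circ \phi\\
&= \mathrm{Ad}(\mathbbm{u}_g) \circ \beta_g \circ \phi \circ \alpha_h = \phi \circ \alpha_{gh},
\end{align*}
so the intertwining condition for $gh$ follows from the conditions for $g$ and $h$.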
\begin{remark} \label{rem:cocycle-category}
As the arguments in \cite[Subsection 1.3]{Szabo21cc} show, the above endows the class of continuous $G$-actions on von Neumann algebras with a categorical structure, whereby the Hom-sets are given by cocycle morphisms.
The composition is given via
\[
(\psi,\mathbbm{v}) \circ (\phi,\mathbbm{u}) := (\psi \circ \phi, \psi(\mathbbm{u}) \mathbbm{v})
\]
for any pair of cocycle morphisms
\[
(M, \alpha) \overset{(\phi,\mathbbm{u})}{\longrightarrow} (N, \beta) \quad \text{ and } \quad (N, \beta) \overset{(\psi,\mathbbm{v})}{\longrightarrow} (L, \gamma).
\]
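It is routine to check that this composition is again a cocycle morphism; for instance, the cocycle identity for $\psi(\mathbbm{u})\mathbbm{v}$ follows from the relation $\mathrm{Ad}(\mathbbm{v}_g) \circ \gamma_g \circ \psi = \psi \circ \beta_g$:
\[
\psi(\mathbbm{u}_g)\mathbbm{v}_g\,\gamma_g\big(\psi(\mathbbm{u}_h)\mathbbm{v}_h\big)
= \psi(\mathbbm{u}_g)\big(\mathbbm{v}_g\gamma_g(\psi(\mathbbm{u}_h))\mathbbm{v}_g^*\big)\,\mathbbm{v}_g\gamma_g(\mathbbm{v}_h)
= \psi(\mathbbm{u}_g)\psi(\beta_g(\mathbbm{u}_h))\,\mathbbm{v}_{gh}
= \psi(\mathbbm{u}_{gh})\mathbbm{v}_{gh}.
\]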
We see furthermore that a cocycle morphism $(\phi, \mathbbm{u})\colon(M, \alpha) \rightarrow (N, \beta)$ is invertible if and only if $\phi$ is a $*$-isomorphism of von Neumann algebras, in which case we have $(\phi, \mathbbm{u})^{-1} = (\phi^{-1}, \phi^{-1}(\mathbbm{u})^*)$.
If this holds, we call $(\phi, \mathbbm{u})$ a \emph{cocycle conjugacy}.
We call two actions $\alpha$ and $\beta$ \emph{cocycle conjugate}, denoted as $\alpha \simeq_{\mathrm{cc}} \beta$, if there exists a cocycle conjugacy between them.
\end{remark}
\begin{example} \label{ex:inner-cc}
Let $\alpha\colon G \curvearrowright M$ be an action.
Then every unitary $v \in \mathcal{U}(M)$ gives rise to a cocycle conjugacy
\[
\big( \mathrm{Ad}(v), (v \alpha_g(v)^*)_{g \in G} \big) \colon (M, \alpha) \rightarrow (M, \alpha).\]
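Here the cocycle identity can be verified directly:
\[
v\alpha_g(v)^*\,\alpha_g\big(v\alpha_h(v)^*\big) = v\alpha_g(v)^*\,\alpha_g(v)\,\alpha_{gh}(v)^* = v\alpha_{gh}(v)^*.
\]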
We will also write this simply as $\mathrm{Ad}(v)$ when it is clear from context that we are talking about cocycle morphisms.
When $\beta\colon G \curvearrowright N$ is another action and ${(\phi, \mathbbm{u})\colon (M, \alpha) \rightarrow (N, \beta)}$ is a cocycle conjugacy, then
\[ (\phi,\mathbbm{u}) \circ \mathrm{Ad}(v) = \mathrm{Ad}(\phi(v)) \circ (\phi, \mathbbm{u}).\]
\end{example}
\begin{definition}
Let $\alpha:G \curvearrowright M$ and $\beta\colon G \curvearrowright N$ be two actions on finite von Neumann algebras $M$ and $N$.
Let $\tau_N$ be a faithful normal tracial state on $N$.
Let $(\phi,\mathbbm{u})$ and $(\psi,\mathbbm{v})$ be two cocycle morphisms from $(M,\alpha)$ to $(N,\beta)$. We say that $(\phi,\mathbbm{u})$ and $(\psi,\mathbbm{v})$ are \emph{approximately unitarily equivalent} if there exists a net of unitaries $w_\lambda \in \mathcal{U}(N)$ such that $\|w_\lambda\phi(x)w_\lambda^*-\psi(x)\|_{\tau_N} \to 0$ for all $x\in M$ and $\max_{g\in K} \| w_\lambda \mathbbm{u}_g \beta_g(w_\lambda)^* - \mathbbm{v}_g\|_{\tau_N} \rightarrow 0$ for every compact set $K\subseteq G$.
We denote the relation of approximately unitary equivalence by $\approx_{\mathrm{u}}$.
\end{definition}
\section{One-sided intertwining}
In this section we prove a version of \cite[Lemma~2.1]{Szabo18ssa} for group actions on semi-finite von Neumann algebras. First we prove the following intermediate lemma:
\begin{lemma}\label{lemma:point_strong_dense_subset}
Let $M, N$ be von Neumann algebras, and let $\tau_N$ be a faithful, normal, semi-finite trace on $N$.
Consider a sequence of $*$-homomorphisms $(\theta_n\colon M \rightarrow N)_{n \in \mathbb{N}}$ and a $*$-isomorphism $\theta:M \rightarrow N$ such that $\tau_N \circ \theta = \tau_N \circ \theta_n$ for all $n \in \mathbb{N}$.
Let $X \subset (M)_1$ be a dense subset in the strong operator topology that contains a sequence of projections $(p_n)_{n \in \mathbb{N}}$ converging strongly to $1_M$ with $\tau_N (\theta(p_n)) < \infty$.
If $\theta_n (x) \rightarrow \theta(x)$ strongly as $n \rightarrow \infty$ for every $x \in X$, then $\theta_n \rightarrow \theta$ in the point-strong topology as $n \rightarrow \infty$.
\end{lemma}
\begin{proof}
Take $y \in (M)_1$. Since the sequence $(\theta(p_n))_{n \in \mathbb{N}}$ converges strongly to $1_N$, it suffices to show that for all $k \in \mathbb{N}$
\[ (\theta(y)-\theta_n(y)) \theta(p_k) \rightarrow 0 \text{ strongly as } n \rightarrow \infty.\]
Fix $k \in \mathbb{N}$ and $a \in N$ such that $\tau_N(a^*a) < \infty$. Given $\varepsilon>0$, there exists $x \in X$ such that
\[\|\theta(x-y)\theta(p_k)\|_{\tau_N} < \frac{\varepsilon}{4\|a\|}.\]
Then there exists $n_0 \in \mathbb{N}$ such that for all $n \geq n_0$
\[\|(\theta(x p_k) - \theta_n ( x p_k))a\|_{\tau_N} < \frac{\varepsilon}{4} \text{ and } \|(\theta(p_k)-\theta_n(p_k))a\|_{\tau_N} < \frac{\varepsilon}{4}.\]
For all $n \geq n_0$ we then get that
\begin{align*}
\|(\theta(y) - \theta_n(y))\theta(p_k)a\|_{\tau_N} &\leq \|\theta(x-y)\theta(p_k)a\|_{\tau_N} + \|\theta_n(x-y)\theta_n(p_k)a\|_{\tau_N}\\
&\quad + \|(\theta(xp_k) - \theta_n(xp_k))a\|_{\tau_N} + \|\theta_n(y)(\theta(p_k) - \theta_n(p_k))a\|_{\tau_N}\\
&< 2\|a\| \|\theta(x-y)\theta(p_k)\|_{\tau_N} +\varepsilon/4 + \|(\theta(p_k)-\theta_n(p_k))a\|_{\tau_N} \\
&< \varepsilon.
\end{align*}
As $k$ and $a$ were arbitrary, this proves the claim.
\end{proof}
\begin{lemma}\label{lemma:one-sided_intertwining}
Let $M$ and $N$ be two von Neumann algebras with separable predual and faithful normal semi-finite traces $\tau_{M}$ and $\tau_{N}$, respectively.
Let $\alpha\colon G \curvearrowright M$ and $\beta\colon G \curvearrowright N$ be two actions.
Let $\rho\colon (M, \alpha) \rightarrow (N, \beta)$ be a unital equivariant normal $*$-homomorphism with $\tau_N \circ \rho = \tau_M$.
Suppose there exists a faithful normal state $\phi$ on $N$ and a sequence of unitaries $(w_n)_{n \in \mathbb{N}}$ in $\mathcal{U}(N)$ satisfying
\begin{enumerate}[leftmargin=*,label=$\bullet$]
\item $\mathrm{Ad}(w_n) \circ \rho \to \rho$ in the point-strong topology;
\item For all $y \in (N)_1$ there exists a sequence $(x_n)_{n \in \mathbb{N}} \subset (M)_1$ such that $y - w_n\rho(x_n)w_n^* \rightarrow 0$ in the strong operator topology;
\item $\max_{g \in K} \|\beta_g(w_n^*) - w_n^*\|_\phi \rightarrow 0$ for every compact subset $K \subseteq G$.
\end{enumerate}
Then $\rho(\mathcal{Z}(M))=\mathcal{Z}(N)$ and there exists a cocycle conjugacy $(\theta,\mathbbm{v})$ between $\alpha$ and $\beta$ with $\theta|_{\mathcal{Z}(M)}=\rho|_{\mathcal{Z}(M)}$.
In case $\tau_N$ is finite, the existence of such a sequence of unitaries for $\phi = \tau_N$ is equivalent to the condition that $\rho$ is approximately unitarily equivalent to a cocycle conjugacy.
\end{lemma}
\begin{proof}
We note right away that the first two conditions above can always be tested on self-adjoint elements, hence one can equivalently state them with the strong-$*$ topology.
Denote
\[\mathfrak{m} := \{x \in M \mid \tau_M(x^*x) < \infty\} \subset M.\]
We let $L^2(M,\tau_M)$ denote the GNS-Hilbert space of $M$ with respect to $\tau_M$.
Similarly, we use the notation $L^2(N, \tau_N)$.
Choose a countable subset $X = \{x_n\}_{n \in \mathbb{N}}$ in $(M)_1$ such that $X \cap \mathfrak{m}$ is $\|\cdot\|_{\tau_M}$-dense in $(\mathfrak{m})_1$.
Take a strongly dense sequence $\{y_n\}_{n \in \mathbb{N}}$ in $(N)_1$.
Choose an increasing sequence of compact subsets $K_n \subseteq G$ such that the union is all of $G$.
We are going to construct a map $\theta\colon M\to N$ via an inductive procedure.
For the first step, we choose $x_{1,1} \in (M)_1$ and $z_1 \in \mathcal{U}(N)$ such that
\begin{itemize}
\item $ \|z_1 \rho(x_1) z_1^* - \rho(x_1)\|^\#_\phi \leq 1/2$;
\item $\|y_1 - z_1\rho(x_{1,1})z_1^*\|_\phi \leq 1/2$;
\item $\max_{g \in K_1} \|\beta_g(z_1^*) - z_1^*\|_\phi \leq 1/2$.
\end{itemize}
Now assume that after the $n$-th step of the induction we have found $z_1, \hdots, z_n \in \mathcal{U}(N)$ and ${\{x_{l,j}\}_{j \leq l \leq n} \subset (M)_1}$ such that
\begin{enumerate}
\item $\|z_n \rho(x_j) z_n^* - \rho(x_j)\|^\#_{\phi \circ \mathrm{Ad}(z_1 \hdots z_{n-1})} \leq 2^{-n}$ for $j= 1, \hdots, n$; \label{eq:commutation_dense}
\item $\|z_n \rho(x_{l,j}) z_n^* - \rho(x_{l,j})\|^\#_{\phi \circ \mathrm{Ad}(z_1 \hdots z_{n-1})} \leq 2^{-n}$ for $l=1, \hdots, n-1$ and $j = 1, \hdots, l$; \label{eq:commutation_close_elements}
\item $ \| z_{n-1}^* \hdots z_1^* y_j z_1 \hdots z_{n-1} - z_n\rho(x_{n,j})z_n^*\|_{\phi \circ \mathrm{Ad}(z_1 \hdots z_{n-1})} \leq 2^{-n}$ for $j=1, \hdots, n$; \label{eq:close_elements}
\item
$\max_{g \in K_n} \|\beta_g(z_n^*) - z_n^*\|_{\phi \circ \mathrm{Ad}(z_1 \hdots z_{n-1})} \leq 2^{-n}$ and
$\max_{g \in K_n} \|\beta_g(z_n^*) - z_n^*\|_{\phi \circ \mathrm{Ad}(\beta_g(z_1 \hdots z_{n-1}))} \leq 2^{-n}$.
\label{eq:invariance}
\label{eq:approximate_fixedness}
\end{enumerate}
Then by our assumptions we can find $z_{n+1} \in \mathcal{U}(N)$ and $\{x_{n+1,j}\}_{j \leq n+1}\subset (M)_1$ such that
\begin{itemize}
\item $\|z_{n+1} \rho(x_j)z_{n+1}^* - \rho(x_j)\|^\#_{\phi \circ \mathrm{Ad}(z_1 \hdots z_n)} \leq 2^{-(n+1)}$ for $j= 1, \hdots, n+1$;
\item $\|z_{n+1} \rho(x_{l,j})z_{n+1}^* - \rho(x_{l,j})\|^\#_{\phi \circ \mathrm{Ad}(z_1 \hdots z_n)} \leq 2^{-(n+1)}$ for $l=1, \hdots, n$ and $j = 1, \hdots, l$;
\item $ \|z_{n}^* \hdots z_1^*y_jz_1 \hdots z_{n} - z_{n+1}\rho(x_{n+1,j})z_{n+1}^*\|_{\phi \circ \mathrm{Ad}(z_1 \hdots z_n)} \leq 2^{-(n+1)}$ for $j=1, \hdots, n+1$;
\item $\max_{g \in K_{n+1}} \|\beta_g(z_{n+1}^*)- z_{n+1}^*\|_{\phi \circ \mathrm{Ad}(z_1 \hdots z_n)} \leq 2^{-(n+1)}$ and
$\max_{g \in K_{n+1}} \|\beta_g(z_{n+1}^*)- z_{n+1}^*\|_{\phi \circ \mathrm{Ad}(\beta_g(z_1 \hdots z_n))} \leq 2^{-(n+1)}$.
\end{itemize}
We carry on inductively and obtain a sequence of unitaries $(z_n)_{n \in \mathbb{N}}$ in $\mathcal{U}(N)$ and a family $\{ x_{n,j} \}_{n\in\mathbb{N}, j\leq n}\subset (M)_1$.
For each $n \in \mathbb{N}$, we define $u_n=z_1 \hdots z_n$ and the normal $*$-homomorphism ${\theta_n\colon M \rightarrow N}$ by $\theta_n = \mathrm{Ad}(u_n) \circ \rho$.
For $n > m$ and $j=1, \hdots, m+1$ we get
\begin{align*}
\|\theta_n(x_j) - \theta_m(x_j)\|^\#_{\phi} &\leq
\sum_{k=m}^{n-1} \|\theta_{k+1}(x_j) - \theta_k(x_j)\|^\#_\phi\\
&= \sum_{k=m}^{n-1} \|z_{k+1}\rho(x_j)z_{k+1}^* - \rho(x_j)\|^\#_{\phi \circ \mathrm{Ad}(z_1 \hdots z_k)}\\
&\overset{\ref{eq:commutation_dense}}{\leq} \sum_{k=m}^{n-1} 2^{-k-1}.
\end{align*}
We see that for all $j\in \mathbb{N}$ the sequence $(\theta_n(x_j))_{n \in \mathbb{N}}$ is norm-bounded and Cauchy with respect to $\|\cdot\|_\phi^\#$.
This means that it converges to some element in $N$ in the strong-$*$-operator topology.
A similar calculation using \ref{eq:commutation_close_elements} shows that
for $n > m \geq l \geq j$
\begin{equation}\label{eq:convergence_close_elements}\|\theta_n(x_{l,j}) - \theta_m(x_{l,j})\|^\#_{\phi} < \sum_{k=m}^{n-1} 2^{-k-1}, \end{equation}
so the sequence $(\theta_n(x_{l,j}))_{n \in \mathbb{N}}$ also converges in the strong-$*$-operator topology for all $j \leq l$.
Since $\theta_n$ is a $*$-homomorphism for all $n \in \mathbb{N}$, we conclude that, restricted to the C$^*$-algebra $A\subset M$ generated by $\{x_n\}_{n \in \mathbb{N}} \cup \{x_{l,j}\}_{j \leq l}$, the sequence $(\theta_n)_{n \in \mathbb{N}}$ converges point-$*$-strongly to a $*$-homomorphism $\theta'\colon A \rightarrow N$.
Since $A$ contains a $\|\cdot\|_{\tau_M}$-dense subset of $\mathfrak{m}$, and clearly $\tau_N\circ\theta'=\tau_M|_A$, there is a unique isometry $T\colon L^2(M, \tau_M) \rightarrow L^2(N, \tau_N)$ induced from the formula $T[a]=[\theta'(a)]$ for all $a\in A\cap\mathfrak{m}$.
Then the normal $*$-homomorphism
\[\theta \colon M \rightarrow N \colon x \mapsto T x T^*\]
extends $\theta'$ and $\left(\theta_n \big \lvert_\mathfrak{m}\right)_{n \in \mathbb{N}}$ converges point-strongly to $\theta\lvert_\mathfrak{m}$.
We claim that $\theta$ is an isomorphism.
Clearly $\tau_N\circ\theta=\tau_M$ and so $\theta$ is injective.
By applying \ref{eq:close_elements} we find for all $m \geq j$ that
\begin{equation*}
\|\theta_m(x_{m,j}) - y_j\|_\phi = \|z_m\rho(x_{m,j})z_m^* - z_{m-1}^* \hdots z_1^* y_j z_1 \hdots z_{m-1}\|_{\phi \circ \mathrm{Ad}(z_1 \hdots z_{m-1})} < 2^{-m}.
\end{equation*}
Combining this with \eqref{eq:convergence_close_elements} for $l=m$ and $n \rightarrow \infty$ we find that
\[\|\theta(x_{m,j}) - y_j\|_\phi \leq \|\theta'(x_{m,j}) - \theta_m(x_{m,j})\|_\phi + \|\theta_m(x_{m,j}) - y_j\|_\phi \leq 2^{-m} + 2^{-m} = 2^{-m+1}.\]
Since the $y_j$ are strongly dense in the unit ball of $N$ and $\theta$ is normal, this implies surjectivity of $\theta$.
By Lemma~\ref{lemma:point_strong_dense_subset} it then follows that $\theta_n\to \theta$ point-strongly as $n\to\infty$.
Note that the first two conditions in the statement force $\rho(\mathcal{Z}(M))\subseteq \mathcal{Z}(N)$: for $z \in \mathcal{Z}(M)$ and $y \in (N)_1$, approximating $y$ strongly by elements $b_n = w_n\rho(x_n)w_n^*$ and using that $\rho(z) - w_n\rho(z)w_n^* \rightarrow 0$ $*$-strongly, one sees that $[\rho(z), b_n] \rightarrow 0$ weakly, whence $[\rho(z), y] = 0$. Since each $\theta_n$ is a unitary perturbation of $\rho$, it follows that $\rho|_{\mathcal{Z}(M)}=\theta_n|_{\mathcal{Z}(M)}\to\theta|_{\mathcal{Z}(M)}$ and in particular $\rho(\mathcal{Z}(M))=\theta(\mathcal{Z}(M))=\mathcal{Z}(N)$.
For $n > m$ and $g \in K_{m+1}$ we have
\begin{align*}
&\|z_1 \hdots z_n \beta_g(z_n^* \hdots z_1^*) - z_1 \hdots z_m \beta_g(z_m^*\hdots z_1^*)\|_{\phi}^\#\\
&\leq \sum_{k=m}^{n-1} \|z_1 \hdots z_k(z_{k+1}\beta_g(z_{k+1}^*) - 1) \beta_g(z_k^*\hdots z_1^*)\|_{\phi}^\#\\
&=\sum_{k=m}^{n-1} \big( \|\beta_g(z_{k+1}^*) - z_{k+1}^*\|^2_{\phi \circ \mathrm{Ad}(\beta_g(z_1 \hdots z_k))} + \|\beta_g(z_{k+1}^*) - z_{k+1}^*\|^2_{\phi \circ \mathrm{Ad}(z_1 \hdots z_k)} \big)^{1/2}\\
&\overset{\ref{eq:invariance}}{\leq} \sqrt{2} \sum_{k=m}^{n-1} 2^{-(k+1)}.
\end{align*}
From this calculation we see that for every $g \in G$ the sequence $(z_1 \hdots z_n \beta_g(z_n^*\hdots z_1^*))_{n \in \mathbb{N}}$ is Cauchy with respect to $\|\cdot\|^\#_\phi$, uniformly on compact subsets of $G$.
It follows that for every $g \in G$, the strong-$*$ limit $\mathbbm{v}_g = \lim_{n \rightarrow \infty} u_n \beta_g(u^*_n)$ exists in $\mathcal{U}(N)$ and that this convergence is uniform (w.r.t.\ $\|\cdot\|^\#_\phi$) on compact sets.
Since $\beta$ is point-strong continuous, this implies the continuity of the assignment $g \mapsto \mathbbm{v}_g$.
Moreover, for each $g \in G$ and $x \in M$ we have the equalities of limits with respect to the strong operator topology:
\begin{align*}
(\theta \circ \alpha_g) (x) &= \lim_{n \rightarrow \infty} (\mathrm{Ad}(u_n) \circ \rho \circ \alpha_g)(x)\\
&= \lim_{n \rightarrow \infty} (\mathrm{Ad}(u_n) \circ \beta_g \circ \rho) (x)\\
&= \lim_{n \rightarrow \infty} u_n \beta_g(u_n^*)\beta_g(u_n \rho(x) u_n^*) \beta_g(u_n) u_n^*\\
&= (\mathrm{Ad}(\mathbbm{v}_g) \circ \beta_g \circ \theta) (x).
\end{align*}
It follows that $(\theta,\mathbbm{v})$ is a cocycle conjugacy.
For the last part of the statement, assume that $\tau_N$ is finite.
Then our previous calculations show that in the above situation, $\rho$ is approximately unitarily equivalent to $\theta$.
Conversely, suppose $\rho$ is approximately unitarily equivalent to a cocycle conjugacy $(\theta,\mathbbm{v})$.
In particular, there exists a sequence $(u_n)_{n \in \mathbb{N}}$ in $\mathcal{U}(N)$ such that $\|u_n\rho(x)u_n^* - \theta(x)\|_{\tau_N} \rightarrow 0$ for all $x \in M$ and $\|u_n \beta_g(u_n^*) - \mathbbm{v}_g\|_{\tau_N} \rightarrow 0$ uniformly over compact subsets of $G$.
Choose a sequence $\{y_n\}_{n \in \mathbb{N}} \subset (N)_1$ that is strongly dense in $(N)_1$. For all $k,n \in \mathbb{N}$ define $x_{n,k} = \theta^{-1}(u_ny_ku_n^*)$. Then choose an increasing sequence $(m(n))_{n \in \mathbb{N}} \subset \mathbb{N}$ such that
\begin{equation*}
\lim_{n \rightarrow \infty} \| \theta(x_{n,k}) - u_{m(n)} \rho (x_{n,k}) u_{m(n)}^*\|_{\tau_N} = 0 \quad \text{for } k \in \mathbb{N}.\end{equation*}
Define $w_n := u_n^*u_{m(n)}$. One can check that these satisfy the assumptions in the lemma.
\end{proof}
\section{Strongly self-absorbing actions}
\begin{definition}[cf.\ {\cite[Definition 5.1]{Szabo21cc}}] \label{def:strong-absorption}
Let $\alpha:G \curvearrowright M$ and $\delta\colon G \curvearrowright N$ be two actions of a second-countable locally compact group on finite von Neumann algebras $M$ and $N$ with separable predual.
We say that $\alpha$ \emph{strongly absorbs} $\delta$ if the equivariant embedding
\[\mathrm{id}_M \otimes 1_N\colon (M, \alpha) \rightarrow (M \bar{\otimes} N, \alpha \otimes \delta)\]
is approximately unitarily equivalent to a cocycle conjugacy.
\end{definition}
\begin{definition} \label{def:ssa-action}
Let $\delta\colon G \curvearrowright \mathcal{R}$ be an action on the hyperfinite II$_1$-factor.
We say that $\delta$ is \emph{strongly self-absorbing}, if $\delta$ strongly absorbs $\delta$.
\end{definition}
\begin{definition}
Let $\alpha\colon G \curvearrowright \mathcal{R}$ be an action on the hyperfinite II$_1$-factor.
We say $\alpha$ has \emph{approximately inner half-flip} if the two equivariant embeddings
\[
\mathrm{id}_\mathcal{R} \otimes 1_\mathcal{R}, 1_\mathcal{R} \otimes \mathrm{id}_\mathcal{R}\colon (\mathcal{R}, \alpha) \rightarrow (\mathcal{R} \bar{\otimes} \mathcal{R}, \alpha \otimes \alpha)
\]
are approximately unitarily equivalent (as cocycle morphisms).
\end{definition}
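Let us unpack this definition into the form in which it will be used: since both embeddings carry the trivial cocycle, having approximately inner half-flip amounts (after possibly replacing the unitaries by their adjoints) to the existence of a net of unitaries $w_\lambda \in \mathcal{U}(\mathcal{R} \bar{\otimes} \mathcal{R})$ such that
\[
\|x \otimes 1_\mathcal{R} - w_\lambda(1_\mathcal{R} \otimes x)w_\lambda^*\|_{\tau \otimes \tau} \rightarrow 0
\quad\text{and}\quad
\max_{g \in K} \|w_\lambda - (\alpha \otimes \alpha)_g(w_\lambda)\|_{\tau \otimes \tau} \rightarrow 0
\]
for all $x \in \mathcal{R}$ and every compact $K \subseteq G$, where for the second condition we use that $\|w_\lambda(\alpha \otimes \alpha)_g(w_\lambda)^* - 1\|_{\tau \otimes \tau} = \|w_\lambda - (\alpha \otimes \alpha)_g(w_\lambda)\|_{\tau \otimes \tau}$ because $\tau \otimes \tau$ is a trace.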
\begin{remark}
It is well-known that any type II$_1$ von Neumann algebra $N$ with approximately inner half-flip in the above sense (with $G=\{1\}$) must be isomorphic to $\mathcal R$. Indeed, it is clear that $N$ must have trivial center. Then $N \cong \mathcal{R}$ follows from \cite[Theorem 5.1]{Connes76} under the stronger condition that the flip automorphism on $N\bar{\otimes} N$ is approximately inner, but the weaker condition is seen to be enough via Connes' theorem and the obvious modification of the proof of \cite[Proposition 2.8]{EffrosRosenberg78} that shows the semi-discreteness of $N$.
\end{remark}
\begin{example}\label{example:trivial_action_R}
For any second-countable locally compact group $G$, the trivial action $\mathrm{id}_\mathcal{R}\colon G \curvearrowright \mathcal{R}$ has approximately inner half-flip as a consequence of the flip automorphism on a tensor product of matrix algebras $M_n(\mathbb{C}) \otimes M_n(\mathbb{C})$ being inner.
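To make this concrete, recall that the tensor flip on $M_n(\mathbb{C}) \otimes M_n(\mathbb{C})$ is implemented by the self-adjoint unitary
\[
F_n = \sum_{i,j=1}^n e_{ij} \otimes e_{ji}, \qquad F_n(a \otimes b)F_n^* = b \otimes a,
\]
where $e_{ij}$ denote the standard matrix units. In particular $F_n(a \otimes 1)F_n^* = 1 \otimes a$ for all $a \in M_n(\mathbb{C})$; since the cocycle condition in the definition of $\approx_{\mathrm{u}}$ is vacuous for the trivial action, applying this along an increasing weakly dense family of matrix subalgebras of $\mathcal{R}$ yields the approximately inner half-flip.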
It is also seen to be a strongly self-absorbing action.
\end{example}
\begin{theorem} \label{theorem:sufficient_criterium_strong_absorption}
Let $\alpha\colon G \curvearrowright M$ be an action on a semi-finite von Neumann algebra with separable predual.
Suppose that $\delta\colon G \curvearrowright \mathcal{R}$ is an action with approximately inner half-flip such that there exists a unital equivariant $*$-homomorphism $(\mathcal{R}, \delta) \rightarrow (M_{\omega,\alpha}, \alpha_\omega)$.
Then there exists a cocycle conjugacy $(\theta,\mathbbm{v}): (M,\alpha) \to (M\bar{\otimes}\mathcal{R},\alpha \otimes \delta)$ with $\theta|_{\mathcal{Z}(M)}=\operatorname{id}_{\mathcal{Z}(M)}\otimes 1_{\mathcal R}$.
If $M$ is finite, then $\alpha$ strongly absorbs $\delta$.
\end{theorem}
\begin{proof}
Fix a faithful normal state $\phi$ on $M$.
Let $\pi\colon (\mathcal{R},\delta) \rightarrow (M_{\omega,\alpha},\alpha_{\omega})$ be a unital equivariant $*$-homomorphism.
This induces a map on the algebraic tensor product
\[
\mathcal{R} \odot M \rightarrow M^\omega_\alpha \text{ via } x \otimes m \mapsto \pi(x) m.
\]
Since for each $m \in M_+$ the map $x \mapsto \phi^\omega(\pi(x)m)$ defines a positive tracial functional on $\mathcal{R}$,
we see that it must be equal to some multiple of the unique tracial state $\tau$ on $\mathcal{R}$ and hence,
we get for each $x \in \mathcal{R}$ and $m \in M$ that
\[\phi^\omega(\pi(x) m) = \tau(x) \phi(m) = (\tau \otimes \phi)(x \otimes m).\]
So we see that the map sends the faithful normal state $\tau \otimes \phi$ to $\phi^\omega$ and hence,
it extends to a unital normal $*$-homomorphism $\mathcal{R} \bar{\otimes} M \rightarrow M^\omega_\alpha$,
which moreover is $(\delta \otimes \alpha)$-to-$\alpha^\omega$ equivariant.
In this way we get a unital equivariant normal $*$-homomorphism
\[(\mathcal{R} \bar{\otimes} \mathcal{R} \bar{\otimes}M, \delta \otimes \delta \otimes \alpha) \rightarrow (\mathcal{R} \bar{\otimes} M^\omega_\alpha, \delta \otimes \alpha^\omega),\]
given by $x_1 \otimes x_2 \otimes m \mapsto x_1 \otimes (\pi(x_2) m)$.
Composing with the canonical inclusion map $\iota\colon \mathcal{R} \bar{\otimes} M^\omega_\alpha \rightarrow (\mathcal{R} \bar{\otimes} M)^\omega_{\delta\otimes\alpha}$ we get a unital and equivariant normal $*$-homomorphism
\[\Phi\colon (\mathcal{R} \bar{\otimes} \mathcal{R} \bar{\otimes} M, \delta \otimes \delta \otimes \alpha) \rightarrow ((\mathcal{R} \bar{\otimes} M)^\omega_{\delta \otimes \alpha}, (\delta \otimes \alpha)^\omega)\]
such that
\[
\Phi(x \otimes 1_\mathcal{R} \otimes m) = x \otimes m \text{ for all } x \in \mathcal{R}, m \in M,
\]
and
\[
\Phi(1_\mathcal{R} \otimes \mathcal{R} \otimes M) \subset \iota(1_\mathcal{R} \otimes M^\omega_\alpha).
\]
Since $\delta$ has approximately inner half-flip,
we can choose a sequence of unitaries $(v_n)_{n \in \mathbb{N}}$ in $\mathcal{R} \bar{\otimes} \mathcal{R}$ such that
$\max_{g \in K}\|v_n - (\delta \otimes \delta)_g(v_n)\|_{\tau \otimes \tau} \rightarrow 0$ for all compact subsets $K \subseteq G$ and
${\|x \otimes 1_\mathcal{R} - v_n (1_\mathcal{R} \otimes x)v_n^*\|_{\tau \otimes \tau} \rightarrow 0}$ for all $x \in \mathcal{R}$.
Define $u_n := \Phi(v_n \otimes 1_M) \in (\mathcal{R} \bar{\otimes} M)^\omega_{\delta \otimes \alpha}$. This sequence of unitaries satisfies
\begin{itemize}
\item $[u_n, 1_\mathcal{R} \otimes m] = \Phi([v_n \otimes 1_M, 1_{\mathcal{R} \bar{\otimes} \mathcal{R}} \otimes m]) = 0$ for all $m \in M$;
\item $\Phi(1_\mathcal{R} \otimes x \otimes m) \in \iota(1_\mathcal{R} \otimes M^\omega_\alpha)$ and
\begin{align*}
\lim_{n \rightarrow \infty} u_n \Phi(1_\mathcal{R} \otimes x \otimes m) u_n^* &=\lim_{n \rightarrow \infty} \Phi((v_n \otimes 1_M)(1_\mathcal{R} \otimes x \otimes m)(v_n^* \otimes 1_M))\\
&=\Phi(x \otimes 1_\mathcal{R} \otimes m) \\&= x \otimes m
\end{align*}
where the limit is taken with respect to the strong operator topology;
\item $\displaystyle \max_{g \in K} \|u_n^* - (\delta \otimes \alpha)^\omega_g(u_n^*)\|_{(\tau \otimes \phi)^\omega} = \max_{g \in K} \|(v_n^* - (\delta \otimes \delta)_g(v_n^*)) \otimes 1_M\|_{\tau \otimes \tau \otimes \phi} \to 0$ for all compact $K \subseteq G$.
\end{itemize}
Each $u_n$ can be lifted to a sequence of unitaries $(z_n^{(k)})_{k\in \mathbb{N}}$ in $\mathcal{E}_{\delta \otimes \alpha}^\omega \cap \mathcal{N}_\omega(\mathcal{R} \bar{\otimes} M)$. Applying a diagonal sequence argument to the $(z_n^{(k)})_{k\in \mathbb{N}}$ and using Lemma~\ref{lemma:lifting_invariance_compact_sets}, we can obtain a sequence of unitaries $(w_n)_{n \in \mathbb{N}}$ in $\mathcal{R} \bar{\otimes} M$ such that
\begin{itemize}
\item $\mathrm{Ad}(w_n)(1_\mathcal{R} \otimes m) - 1_\mathcal{R} \otimes m \rightarrow 0$ strongly for all $m \in M$.
\item $\inf_{m \in (M)_1}\|x - w_n(1_\mathcal{R} \otimes m)w_n^*\|_{\tau \otimes \phi} \rightarrow 0$ for $x \in (\mathcal{R} \bar{\otimes}M)_1$.
\item $\max_{g \in K} \|w_n^* - (\delta \otimes \alpha)_g(w_n^*)\|_{\tau \otimes \phi} \rightarrow 0$ for every compact subset $K \subseteq G$.
\end{itemize}
We conclude that the map $1_\mathcal{R} \otimes \mathrm{id}_M\colon (M, \alpha) \rightarrow (\mathcal{R} \bar{\otimes} M, \delta \otimes \alpha)$ satisfies all the necessary conditions to apply Lemma \ref{lemma:one-sided_intertwining}. This completes the proof.
\end{proof}
\begin{theorem} \label{theorem:equivalence_ssa}
Let $\delta:G \curvearrowright \mathcal{R}$ be an action on the hyperfinite II$_1$-factor.
Then $\delta$ is strongly self-absorbing if and only if it has approximately inner half-flip and there exists a unital equivariant $*$-homomorphism $(\mathcal{R}, \delta) \rightarrow (\mathcal{R}_{\omega,\delta}, \delta_\omega)$.
\end{theorem}
\begin{proof}
The `if' direction follows immediately from Theorem~\ref{theorem:sufficient_criterium_strong_absorption}, applied to $(M, \alpha) = (\mathcal{R}, \delta)$.
To prove the other direction, we assume that $\delta$ is strongly self-absorbing and reproduce an argument analogous to \cite[Proposition 1.4]{TomsWinter07} and \cite[Proposition 5.5]{Szabo21cc}.
Denote the unique tracial state on $\mathcal{R}$ by $\tau$. Let $(\phi,\mathbbm{u})\colon (\mathcal{R}, \delta) \rightarrow (\mathcal{R} \bar{\otimes} \mathcal{R}, \delta \otimes \delta)$ be a cocycle conjugacy and $u_n \in \mathcal{U}(\mathcal{R} \bar{\otimes} \mathcal{R})$ a sequence of unitaries such that
\begin{equation}\label{eq:approx_coboundary} \lim_{n \rightarrow \infty} \max_{g \in K} \| u_n(\delta \otimes \delta)_g(u_n^*) - \mathbbm{u}_g\|_{\tau \otimes \tau} = 0 \text{ for every compact } K \subseteq G, \text{ and }\end{equation}
\[
\lim_{n \rightarrow \infty} \|\phi(x) - u_n(x \otimes 1) u_n^*\|_{\tau \otimes \tau} = 0 \text{ for all } x \in \mathcal{R}.
\]
Note that
\[\mathrm{Ad}(u_n^*) \circ \phi \circ \delta_g = \mathrm{Ad}(u_n^*\mathbbm{u}_g (\delta \otimes \delta)_g(u_n)) \circ (\delta \otimes \delta)_g \circ \mathrm{Ad}(u_n^*) \circ \phi.\]
As a consequence of \eqref{eq:approx_coboundary}, for every compact $K \subseteq G$ one has
\begin{equation}\label{eq:approximate_invariance_perturbation}\lim_{n \rightarrow \infty} \max_{g \in K} \sup_{x \in (\mathcal{R})_1} \|(\mathrm{Ad}(u_n^*) \circ \phi \circ \delta_g)(x) - ((\delta \otimes \delta)_g \circ \mathrm{Ad}(u_n^*) \circ \phi)(x)\|_{\tau \otimes \tau} = 0.\end{equation}
In particular, applying this to $x = \phi^{-1}(u_n)$ and using that $(\tau \otimes \tau) = \tau \circ \phi^{-1}$ yields
\[\lim_{n \rightarrow \infty} \max_{g \in K}\|(\mathrm{Ad}(\phi^{-1}(u_n^*)) \circ \delta_g) (\phi^{-1}(u_n)) -(\phi^{-1}\circ (\delta \otimes \delta)_g)(u_n)\|_\tau = 0.\]
Combining this with \eqref{eq:approx_coboundary} again and right-multiplying with the unitaries $\phi^{-1}(u_n)^*$, one gets
\begin{equation}\label{eq:approx_coboundary_inverse}
\lim_{n \rightarrow \infty} \max_{g \in K} \| \phi^{-1}(u_n^*)\, \delta_g(\phi^{-1}(u_n)) - \phi^{-1}(\mathbbm{u}^*_g)\|_{\tau} = 0 \text{ for every compact } K \subseteq G.
\end{equation}
First, we prove that $\delta$ has approximately inner half-flip.
Define the cocycle morphism $(\psi, \mathbbm{v}) := (\phi, \mathbbm{u})^{-1} \circ (1_\mathcal{R} \otimes \mathrm{id}_\mathcal{R})$. Note that
\begin{align*}
1_\mathcal{R} \otimes \mathrm{id}_\mathcal{R}
&= (\phi, \mathbbm{u}) \circ (\psi, \mathbbm{v})\\
&\approx_{\mathrm{u}} (\mathrm{id}_\mathcal{R} \otimes 1_\mathcal{R}) \circ (\psi, \mathbbm{v})\\
&= (\psi \otimes 1_\mathcal{R}, \mathbbm{v} \otimes 1).
\end{align*}
Applying the equivariant flip automorphism to both sides of this equivalence, we get that
\begin{equation}\label{eq:approx_equivalence_first_factor_embedding} \mathrm{id}_\mathcal{R} \otimes 1_\mathcal{R} \approx_\mathrm{u} (1_\mathcal{R} \otimes \psi, 1 \otimes \mathbbm{v}) .\end{equation}
We also get
\begin{align*}
(\psi \otimes 1_\mathcal{R}, \mathbbm{v} \otimes 1) &= (\phi^{-1} \otimes \mathrm{id}_\mathcal{R}, \phi^{-1}(\mathbbm{u})^* \otimes 1) \circ (1_\mathcal{R} \otimes \mathrm{id}_\mathcal{R} \otimes 1_\mathcal{R})\\
&\overset{\eqref{eq:approx_equivalence_first_factor_embedding}}{\approx_{\mathrm{u}}} (\phi^{-1} \otimes \mathrm{id}_\mathcal{R}, \phi^{-1}(\mathbbm{u})^* \otimes 1) \circ (1_\mathcal{R} \otimes 1_\mathcal{R} \otimes \psi, 1\otimes 1 \otimes \mathbbm{v})\\
&=(1_\mathcal{R} \otimes \psi, \phi^{-1}(\mathbbm{u})^* \otimes \mathbbm{v})\\
&\overset{\eqref{eq:approx_coboundary_inverse}}{\approx_{\mathrm{u}}} (1_\mathcal{R} \otimes \psi, 1 \otimes \mathbbm{v}).
\end{align*}
By transitivity we get that ${1_\mathcal{R} \otimes \mathrm{id}_\mathcal{R}\approx_\mathrm{u} \mathrm{id}_\mathcal{R} \otimes 1_\mathcal{R}}$.
Next we prove the existence of a unital equivariant $*$-homomorphism $(\mathcal{R}, \delta) \rightarrow (\mathcal{R}_{\omega,\delta}, \delta_\omega)$. Define the sequence of trace-preserving $*$-homomorphisms
\[\chi_n = \phi^{-1} \circ \mathrm{Ad}(u_n) \circ (1_\mathcal{R} \otimes \mathrm{id}_\mathcal{R}).\]
We conclude from \eqref{eq:approximate_invariance_perturbation} that for all $x \in \mathcal{R}$
\[\lim_{n \rightarrow \infty} \max_{g \in K} \|\delta_g(\chi_n(x)) - \chi_n(\delta_g(x))\|_{\tau} =0.\]
From this and the fact that all $\chi_n$ are trace-preserving it also follows that $(\chi_n(x))_{n \in \mathbb{N}}$ belongs to $\mathcal{E}^\omega_\delta$ for every $x \in \mathcal{R}$.
Moreover, for any $x,y \in \mathcal{R}$
\begin{align*}
\lim_{n \rightarrow \infty} \|[x, \chi_n(y)] \|_\tau
&= \lim_{n \rightarrow \infty}\|[\phi(x), u_n(1 \otimes y) u_n^*]\|_{\tau \otimes \tau}\\
&= \lim_{n \rightarrow \infty}\|u_n[x \otimes 1, 1 \otimes y] u_n^*\|_{\tau \otimes \tau}\\
&=0.
\end{align*}
So the $\chi_n$ induce a unital equivariant $*$-homomorphism $(\mathcal{R}, \delta) \rightarrow (\mathcal{R}_{\omega,\delta}, \delta_\omega)$.
\end{proof}
The following can be seen as a direct generalization of the famous McDuff theorem \cite{McDuff70} to actions on semi-finite von Neumann algebras.
\begin{corollary} \label{cor:equivalence_equivariant_McDuff}
Let $\alpha:G \curvearrowright M$ be an action on a semi-finite von Neumann algebra with separable predual and let $\delta:G \curvearrowright \mathcal{R}$ be a strongly self-absorbing action on the hyperfinite II$_1$-factor.
Then the following are equivalent:
\begin{enumerate}[leftmargin=*,label=\textup{(\arabic*)}]
\item There exists a cocycle conjugacy $(\theta,\mathbbm{v})\colon (M,\alpha) \to (M\bar{\otimes}\mathcal{R},\alpha \otimes \delta)$ with $\theta|_{\mathcal{Z}(M)}=\operatorname{id}_{\mathcal{Z}(M)}\otimes 1_{\mathcal R}$; \label{prop:McDuff:1}
\item $\alpha \simeq_{\mathrm{cc}} \alpha \otimes \delta$; \label{prop:McDuff:2}
\item There exists a unital equivariant $*$-homomorphism $(\mathcal{R},\delta) \rightarrow (M_{\omega,\alpha}, \alpha_\omega)$. \label{prop:McDuff:3}
\end{enumerate}
\end{corollary}
\begin{proof}
The implication \ref{prop:McDuff:1}$\Rightarrow$\ref{prop:McDuff:2} is tautological.
Since strong self-absorption implies approximately inner half-flip by Theorem~\ref{theorem:equivalence_ssa}, the implication \ref{prop:McDuff:3}$\Rightarrow$\ref{prop:McDuff:1} follows from Theorem~\ref{theorem:sufficient_criterium_strong_absorption}.
In order to prove \ref{prop:McDuff:2}$\Rightarrow$\ref{prop:McDuff:3}, it is enough to show that there exists a unital equivariant $*$-homomorphism $(\mathcal{R}, \delta) \rightarrow ((M \bar{\otimes} \mathcal{R})_{\omega,\alpha \otimes \delta}, (\alpha \otimes \delta)_\omega)$.
We know there exists a unital equivariant $*$-homomorphism $(\mathcal{R}, \delta) \rightarrow (\mathcal{R}_{\omega,\delta}, \delta_\omega)$ by Theorem~\ref{theorem:equivalence_ssa}.
Since the latter is unitally and equivariantly contained in $((M \bar{\otimes} \mathcal{R})_{\omega,\alpha \otimes \delta}, (\alpha \otimes \delta)_\omega)$, this finishes the proof.
\end{proof}
The following lemma is a straightforward application of the noncommutative Rokhlin Theorem of Masuda \cite[Theorem 4.8]{Masuda13}.\footnote{This Rokhlin Theorem is actually a variant of Ocneanu's noncommutative Rokhlin Theorem \cite[Theorem 6.1]{Ocneanu85}, and the proof of Masuda's version is essentially the same as Ocneanu's proof. While it is possible to deduce what we need from Ocneanu's Theorem, here we cite Masuda's version for convenience of the reader, as it is directly applicable and there is no need to deal with $\varepsilon$-paving families of $G$.}
\begin{lemma}\label{lem:approx-central-embeddings}
Let $\alpha\colon G \curvearrowright M$ be an action of a countable discrete group on a McDuff factor (i.e., $M \cong M \bar{\otimes} \mathcal{R}$) with separable predual.
Let $N\subseteq G$ be the normal subgroup consisting of all elements $g\in G$ such that $\alpha_{\omega,g} \in\operatorname{Aut}(M_\omega)$ is trivial.
Suppose that the quotient group $G_0=G/N$ is amenable with quotient map $\pi: G\to G_0$.
Let $\delta: G_0\curvearrowright\mathcal R$ be an action with induced $G$-action $\delta_\pi=\delta\circ\pi$.
Then there exists an equivariant unital $*$-homomorphism $(\mathcal R,\delta_\pi)\to (M_\omega,\alpha_\omega)$.
\end{lemma}
\begin{proof}
Consider the induced faithful action $\gamma: G_0\curvearrowright M_\omega$ via $\gamma_{gN}=\alpha_{\omega,g}$.
Then clearly the claim is equivalent to finding a $G_0$-equivariant unital $*$-homomorphism $(\mathcal R,\delta)\to (M_\omega,\gamma)$.
Let us introduce some notation.
Let $(x_n)_{n \in \mathbb{N}} \in \ell^\infty(M)$ be a sequence representing an element $X \in M_\omega$. Then we set $\tau_\omega(X) = \lim_{n \rightarrow \omega} x_n$, where the limit is taken in the $\sigma$-weak topology.
Since $X \in M_\omega$, this limit commutes with every element of $M$; as $M$ is a factor, it is therefore a scalar, and we regard $\tau_\omega(X)$ as an element of $\mathbb{C}$. For any $\phi \in M_*$ we have
\[\phi^\omega(X) = \lim_{n \rightarrow \omega}\phi(x_n) = \phi(\tau_\omega(X)) = \tau_\omega(X).\]
In particular, $\tau_\omega$ defines a normal faithful tracial state on $M_\omega$ and we denote $\|X\|_1 = \tau_\omega(|X|)$.
Since $M$ is McDuff we can find a unital $*$-homomorphism $\Phi: \mathcal{R} \rightarrow M_\omega$. Fix $\varepsilon >0$ and a symmetric finite subset $F \subset\joinrel\subset G_0$ containing the neutral element.
By \cite[Lemmas 5.6 and 5.7]{Ocneanu85} we are allowed to apply \cite[Theorem 4.8]{Masuda13} to the action $\gamma\colon G_0\curvearrowright M_\omega$.
So if $S \subset\joinrel\subset G_0$ is a finite $(F,\varepsilon)$-invariant subset, then there exists a partition of unity of projections $\{E_s\}_{s \in S} \subset M_\omega$ such that
\begin{align}
\sum_{s \in g^{-1}S \cap S} \|\gamma_{g}(E_s) - E_{gs}\|_1 &< 4\varepsilon^{1/2} \text{ for all } g \in F; \label{eq:r-lemma1}\\
\sum_{s \in S\setminus g^{-1}S} \|E_s\|_1 &< 3\varepsilon^{1/2} \text{ for all } g \in F;\label{eq:r-lemma2}\\
[E_s, \gamma_h(X)] &= 0 \text{ for all } s \in S,\ h\in G_0,\ X \in \Phi(\mathcal{R}).\label{eq:commutation_image}
\end{align}
Define
\[
\Psi: \mathcal{R} \rightarrow M_\omega \text{ via } \Psi(x) = \sum_{s \in S} \gamma_{s}(\Phi(\delta_{s}^{-1}(x))) E_s.
\]
This is a unital trace-preserving $*$-homomorphism because the projections $E_s$ form a partition of unity and because of condition \eqref{eq:commutation_image}.
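In more detail, for $x, y \in \mathcal{R}$, condition \eqref{eq:commutation_image} allows one to move each $E_s$ past the factors $\gamma_t(\Phi(\,\cdot\,))$, and since the $E_s$ are pairwise orthogonal one computes
\[
\Psi(x)\Psi(y) = \sum_{s,t \in S} \gamma_{s}(\Phi(\delta_{s}^{-1}(x)))\,\gamma_{t}(\Phi(\delta_{t}^{-1}(y)))\,E_sE_t = \sum_{s \in S} \gamma_{s}(\Phi(\delta_{s}^{-1}(xy)))\,E_s = \Psi(xy).
\]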
For $g \in F$ and $x \in \mathcal{R}$ we use conditions \eqref{eq:r-lemma1} and \eqref{eq:r-lemma2} to observe
\begin{align*}
\|\gamma_{g}(\Psi(x)) - \Psi(\delta_g(x))\|_1 &= \Big\| \sum_{s \in S} \gamma_{gs}(\Phi(\delta_s^{-1}(x))) \gamma_{g}(E_s) - \sum_{s \in g^{-1}S} \gamma_{gs}(\Phi(\delta_{s}^{-1}(x))) E_{gs} \Big\|_1
\\
&\leq \sum_{s \in S \cap g^{-1}S} \left\| \gamma_{gs}(\Phi(\delta_s^{-1}(x))) ( \gamma_{g}(E_s) - E_{gs}) \right\|_1\\
& \qquad + \sum_{s \in S\setminus g^{-1}S} \|\gamma_s(\Phi(\delta_s^{-1}(x))) E_s\|_1 + \sum_{s \in S \setminus gS} \|\gamma_{s}(\Phi(\delta_{g^{-1}s}^{-1}(x)))E_s\|_1 \\
&< 10\varepsilon^{1/2}\|x\|.
\end{align*}
Since we can do this for arbitrary $\varepsilon >0$ and $F \subset\joinrel\subset G_0$, the claim follows via a standard reindexing trick.
\end{proof}
The following result recovers a famous theorem due to Ocneanu \cite[Theorem 1.2 and following remark]{Ocneanu85}, as well as his uniqueness theorem for outer actions of amenable groups on $\mathcal R$.
We include a proof for the reader's benefit, as it is comparatively elementary given the methods established so far.
\begin{theorem} \label{theorem:model-absorption}
Let $G$ and $G_1$ be countable discrete groups with $G_1$ amenable.
Let $\delta: G_1\curvearrowright\mathcal R$ be an outer action and $\alpha\colon G \curvearrowright M$ an action on a semi-finite McDuff factor (i.e.\ $M \cong M \bar{\otimes} \mathcal{R}$) with separable predual.
Then:
\begin{enumerate}[label=\textup{(\roman*)},leftmargin=*]
\item $\delta$ is strongly self-absorbing and cocycle conjugate to any other outer action $G_1\curvearrowright\mathcal R$.\label{theorem:model-absorption:1}
\item Suppose $H\subseteq G$ is a normal subgroup containing all elements $g\in G$ such that $\alpha_{\omega,g}$ is trivial.
Suppose $G_1=G/H$ with quotient map $\pi: G\to G_1$.
Then $\alpha \simeq_{\mathrm{cc}} \alpha \otimes \delta_\pi$. \label{theorem:model-absorption:2}
\end{enumerate}
\end{theorem}
\begin{proof}
\ref{theorem:model-absorption:1}:
Let $\tau$ be the unique tracial state on $\mathcal R$, which we may use to define the 1-norm $\|\cdot\|_1=\tau(|\cdot|)$ on $\mathcal R$.
Set $\delta^{(2)}=\delta\otimes\delta: G_1\curvearrowright \mathcal R\bar{\otimes}\mathcal R=:\mathcal R^{(2)}$, which is also an outer action.
Since the flip automorphism $\sigma$ on $\mathcal R^{(2)}$ is known to be approximately inner, we may pick a unitary $U\in\mathcal{U}(\mathcal R^{(2)\omega})$ with $UxU^*=\sigma(x)$ for all $x\in\mathcal R^{(2)}$.
By \cite[Theorem 3.2]{Connes77}, the induced action $\delta^{(2)\omega}: G_1\curvearrowright\mathcal R^{(2)}_\omega$ is faithful.
We may hence argue exactly as in the proof of Lemma~\ref{lem:approx-central-embeddings} and apply Masuda's noncommutative Rokhlin lemma.
So let $F\subset\joinrel\subset G_1$ be a symmetric finite set and $\varepsilon>0$.
If $S \subset\joinrel\subset G_1$ is a finite $(F,\varepsilon)$-invariant subset, then there exists a partition of unity of projections $\{E_s\}_{s \in S} \subset \mathcal R^{(2)}_\omega$ such that
\begin{align}
\sum_{s \in g^{-1}S \cap S} \|\delta^{(2)\omega}_{g}(E_s) - E_{gs}\|_1 &< 4\varepsilon^{1/2} \text{ for all } g \in F; \label{eq:r-lemma1-}\\
\sum_{s \in S\setminus g^{-1}S} \|E_s\|_1 &< 3\varepsilon^{1/2} \text{ for all } g \in F;\label{eq:r-lemma2-}\\
[E_s, x] &= 0 \text{ for all } s \in S,\ x\in\{ \delta^{(2)\omega}_h(U)\}_{h\in G_1}.\label{eq:commutation_image-}
\end{align}
Define $W = \sum_{s \in S} \delta^{(2)\omega}_{s}(U) E_s$.
This is also a unitary in $\mathcal R^{(2)\omega}$ implementing the flip $\sigma$, because the projections $E_s$ form a partition of unity and because of condition \eqref{eq:commutation_image-}.
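Indeed, for $x \in \mathcal{R}^{(2)}$ one has
\[
WxW^* = \sum_{s \in S} \delta^{(2)\omega}_{s}(U)\,x\,\delta^{(2)\omega}_{s}(U)^*\,E_s = \sum_{s \in S} \sigma(x)\,E_s = \sigma(x),
\]
using that the $E_s$ commute with $\mathcal{R}^{(2)}$ and with each $\delta^{(2)\omega}_{s}(U)$ by \eqref{eq:commutation_image-}, and that each $\delta^{(2)\omega}_{s}(U)$ implements $\sigma$ as well, since $\sigma$ commutes with $\delta^{(2)}$.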
For $g \in F$ we use conditions \eqref{eq:r-lemma1-} and \eqref{eq:r-lemma2-} to observe
\begin{align*}
\|\delta^{(2)\omega}_{g}(W) - W\|_1 &= \Big\| \sum_{s \in S} \delta^{(2)\omega}_{gs}(U) \delta^{(2)\omega}_{g}(E_s) - \sum_{s \in g^{-1}S} \delta^{(2)\omega}_{gs}(U) E_{gs} \Big\|_1
\\
&\leq \sum_{s \in S \cap g^{-1}S} \left\| \delta^{(2)\omega}_{gs}(U) ( \delta^{(2)\omega}_{g}(E_s) - E_{gs}) \right\|_1\\ & \qquad + \sum_{s \in S\setminus g^{-1}S} \| \delta^{(2)\omega}_{s}(U) E_s\|_1 + \sum_{s \in S \setminus gS} \|\delta^{(2)\omega}_{s}(U) E_{s}\|_1 \\
&< 10\varepsilon^{1/2}.
\end{align*}
Since we can do this for arbitrary $\varepsilon >0$ and $F \subset\joinrel\subset G_1$, we can use a reindexing trick to obtain a unitary $W\in\mathcal{U}((\mathcal R^{(2)\omega})^{\delta^{(2)\omega}})$ with $WxW^*=\sigma(x)$ for all $x\in\mathcal R^{(2)}$.
In particular, $\delta$ has approximately inner half-flip.
If we apply Lemma~\ref{lem:approx-central-embeddings} for $G=G_1$, $N=\{1\}$ and $\delta$ in place of $\alpha$, it follows with Theorem~\ref{theorem:equivalence_ssa} that $\delta$ is strongly self-absorbing.
If $\gamma\colon G_1\curvearrowright\mathcal R$ is another outer action, then the same follows for $\gamma$.
By applying Lemma~\ref{lem:approx-central-embeddings} and Corollary~\ref{cor:equivalence_equivariant_McDuff} twice, we obtain that $\gamma$ and $\delta$ absorb each other, hence they are cocycle conjugate.
\ref{theorem:model-absorption:2}:
Define $N$ to be the subgroup of all elements $g\in G$ such that $\alpha_{\omega,g}$ is trivial, and set $G_0=G/N$ with quotient map $\pi^0: G\to G_0$.
By assumption we have $N\subseteq H$, hence $G_1$ can be viewed as a quotient of $G_0$ via a map $\pi^{0\to 1}: G_0\to G_1$.
Then $\pi=\pi^{0\to 1}\circ\pi^0$ and the action $\delta_{\pi^{0\to 1}}:=\delta\circ\pi^{0\to 1}$ is a $G_0$-action with $(\delta_{\pi^{0\to 1}})_{\pi^0}=\delta_\pi$.
By Lemma~\ref{lem:approx-central-embeddings}, it follows that there exists an equivariant unital $*$-homomorphism $(\mathcal R,\delta_\pi) \to (M_\omega,\alpha_\omega)$.
Since $\delta$ was strongly self-absorbing, so is $\delta_\pi$ as a $G$-action and the claim follows by Corollary~\ref{cor:equivalence_equivariant_McDuff}.
\end{proof}
\section{Actions of discrete amenable groupoids}
We begin by recalling the definition of a discrete measured groupoid. This concept dates back to \cite{Mackey63}.
\begin{definition}A discrete measured groupoid $\mathcal{G}$ is a groupoid in the usual sense that carries the following additional structure:
\begin{enumerate}[label=$\bullet$,leftmargin=*]
\item The groupoid $\mathcal{G}$ is a standard Borel space and the units $\mathcal{G}^{(0)} \subset \mathcal{G}$ form a Borel subset.
\item The source and target maps $s,t\colon \mathcal{G} \rightarrow \mathcal{G}^{(0)}$ are Borel and countable-to-one.
\item Define $\mathcal{G}^{(2)}:= \{(g,h) \in \mathcal{G} \times \mathcal{G} \mid s(g) = t(h)\}.$ The multiplication map $\mathcal{G}^{(2)} \rightarrow \mathcal{G}\colon (g,h) \mapsto gh$ and the inverse map $\mathcal{G}\rightarrow \mathcal{G}\colon g \mapsto g^{-1}$ are Borel.
\item $\mathcal{G}^{(0)}$ is equipped with a measure $\mu$ satisfying the following property.
Let $\mu_s$ and $\mu_t$ denote the $\sigma$-finite measures on $\mathcal{G}$ obtained by integrating the counting measure over $s,t\colon \mathcal{G}\rightarrow \mathcal{G}^{(0)}$, respectively. Then $\mu_s \sim \mu_t$.
\end{enumerate}
\end{definition}
\begin{example}
An important example of a discrete measured groupoid is the \emph{transformation groupoid} associated to a non-singular action $G \curvearrowright (X, \mu)$ of a countable, discrete group $G$ on a standard measure space $(X, \mu)$. In that case the unit space can be identified with $X$ and the measure $\mu$ satisfies the necessary requirements. We denote this transformation groupoid by $G \ltimes X$.
\end{example}
We assume the reader is familiar with the concept of amenability for discrete measured groupoids; see \cite[Definition 3.2.8]{AnantharamanRenault00}. In particular, recall that a groupoid $\mathcal{G}$ is amenable if and only if the associated equivalence relation
\[
\big\{ \big( s(g),t(g) \big) \mid g \in \mathcal{G} \big\}
\]
and almost all associated isotropy groups
\[\{g \in \mathcal{G} \mid s(g) = t(g) = x\} \quad \text{for } x \in \mathcal{G}^{(0)}\]
are amenable (see e.g.\ \cite[Corollary 5.3.33]{AnantharamanRenault00}).
In case of a non-singular action $G \curvearrowright (X, \mu)$, the associated transformation groupoid $G \ltimes X$ is amenable if and only if the action is amenable in the sense of Zimmer (\cite{Zimmer78, Zimmer77, Zimmer77b}).
\begin{remark}
In this paper we work with measurable fields of all kinds of separable structures, such as Polish spaces, Polish groups, von Neumann algebras with separable predual, and fields that can be derived from these.
For Polish groups the definition is explicitly given in \cite{Sutherland85}, while the other notions can be defined in an analogous way.
We only consider the measurable setting and hence will often implicitly discard sets of measure zero whenever needed.
This means all measurable fields, groupoids and isomorphisms between measure spaces are defined up to sets of measure zero.
Because of this, all statements should be interpreted as holding only almost everywhere whenever appropriate.
This also means that we can freely apply the von Neumann measurable selection theorem (see e.g.\ \cite[Theorem 18.1]{Kechris95}) to obtain measurable sections after deleting a suitable null set, and we will often omit the fine details related to such arguments.
\end{remark}
\begin{definition}
Let $\mathcal{G}$ be a discrete measured groupoid with unit space $(X,\mu)$. An \emph{action} $\alpha$ of $\mathcal{G}$ on a measurable field $(B_x)_{x \in X}$ of factors with separable predual is given by a measurable field of $*$-isomorphisms \[\mathcal{G} \ni g \mapsto \alpha_g\colon B_{s(g)} \rightarrow B_{t(g)},\]
satisfying $\alpha_g \circ \alpha_h = \alpha_{gh}$ for all $(g,h) \in \mathcal{G}^{(2)}$.
\end{definition}
\begin{definition}\label{def:cc_groupoid_actions}
Let $\mathcal{G}$ be a discrete measured groupoid with unit space $(X,\mu)$. Suppose that $\alpha$ and $\beta$ are actions of $\mathcal{G}$ on the measurable fields of factors with separable predual $(B_x)_{x \in X}$ and $(D_x)_{x \in X}$, respectively. The actions are said to be \emph{cocycle conjugate} if there exists a measurable field of $*$-isomorphisms $X \ni x \mapsto \theta_x\colon B_x \rightarrow D_x$ and a measurable field of unitaries $\mathcal{G} \ni g \mapsto w_g \in \mathcal{U}(D_{t(g)})$ satisfying
\begin{align*}
\theta_{t(g)} \circ \alpha_g \circ \theta_{s(g)}^{-1} &= \mathrm{Ad} w_g \circ \beta_g \text{ for all } g \in \mathcal{G}\\
w_g \beta_g(w_h) &= w_{gh} \text{ for all } (g,h) \in \mathcal{G}^{(2)}.
\end{align*}
\end{definition}
\begin{example} \label{ex:central_decomposition}
Let $B$ be a von Neumann algebra acting on a separable Hilbert space $\mathcal{H}$. Then we can centrally decompose $B$ as
\[ (B, \mathcal{H}) = \int_{X}^\oplus (B_x, \mathcal{H}_x)\, d\mu(x), \]
where $(X,\mu)$ is a standard probability space such that $L^\infty(X,\mu) \cong \mathcal{Z}(B)$ (see e.g.\ \cite[Theorem IV.8.21]{Takesaki02}).
In this way we get a measurable field of factors $(B_x)_{x \in X}$. When $B$ is of type I, II$_1$, II$_\infty$ or III, every $B_x$ has the same type by \cite[Corollary V.6.7]{Takesaki02}.
We claim that if $B \cong B \bar{\otimes} \mathcal{R}$, then every fibre $B_x$ is McDuff.
Pick a $*$-isomorphism $\Phi\colon B \rightarrow B \bar{\otimes} \mathcal{R}$.
Then there exists (see for example \cite[Theorem III.2.2.8]{Blackadar}) a unitary $U\colon \mathcal{H} \otimes \ell^2(\mathbb{N}) \rightarrow \mathcal{H} \otimes L^2(\mathcal{R}, \tau_\mathcal{R}) \otimes \ell^2(\mathbb{N})$ such that the amplification of $\Phi$ is spatial, i.e.\ $\Phi(b) \otimes 1 = U(b \otimes 1)U^*$ for all $b \in B$.
We have the decompositions
\[(B \otimes \mathbb{C}, \mathcal{H} \otimes \ell^2(\mathbb{N})) =\int_X^\oplus (B_x \otimes \mathbb{C}, \mathcal{H}_x \otimes \ell^2(\mathbb{N})) \, d \mu(x),\text{ and}\]
\[(B \bar{\otimes} \mathcal{R} \otimes \mathbb{C},\mathcal{H} \otimes L^2(\mathcal{R}, \tau_\mathcal{R}) \otimes \ell^2(\mathbb{N})) = \int_X^\oplus \left(B_x \bar{\otimes}\mathcal{R} \otimes \mathbb{C}, \mathcal{H}_x \otimes L^2(\mathcal{R}, \tau_\mathcal{R}) \otimes \ell^2(\mathbb{N})\right) \, d\mu(x).\]
As the amplification of $\Phi$ necessarily maps the diagonal algebras (i.e.\ the respective centers) to each other, we can use the fact that the disintegration is unique \cite[Theorem 8.23]{Takesaki02}. In particular, this means every $B_x$ is isomorphic to some $B_y \bar{\otimes} \mathcal{R}$ and hence, $B_x \cong B_x \bar{\otimes} \mathcal{R}$.
Now suppose $\alpha\colon G \curvearrowright B$ is an action of a countable discrete group.
Let ${\mathcal{G} = G \ltimes X}$ denote the transformation groupoid associated to the action on $(X, \mu)$ induced by $\alpha$.
Then $\alpha$ can be disintegrated as an action $\bar{\alpha}$ of $\mathcal{G}$ on the measurable field $(B_x)_{x \in X}$ (see e.g.\ \cite[Corollary X.3.12]{Takesaki02}\footnote{When the field of factors $(B_x)_{x \in X}$ is constant (for example when $B$ is injective type II$_1$ and all $B_x$ are $\mathcal{R}$), this construction dates back to \cite{SutherlandTakesaki89}.
There, the groupoid $\mathcal{G}$ and action $\bar{\alpha}$ are also called the \emph{ancillary groupoid} and \emph{ancillary action} associated to $\alpha$. }) such that given $b= \int_x^\oplus b_x\, d \mu(x)$, we have
\[\alpha_g(b)_{g \cdot x} = \bar{\alpha}_{(g,x)}(b_{x}) \text{ for } (g,x) \in \mathcal{G}.\]
Assume $\beta\colon G \curvearrowright D$ is another action on a separably acting von Neumann algebra $(D, \mathcal{K}) = \int_X^\oplus (D_x, \mathcal{K}_x)\, d\mu(x)$, and assume that $\beta$ induces the same action on $(X,\mu)$ as $\alpha$.
Let $\bar{\beta}$ denote its decomposition as an action of $\mathcal{G}$ on $(D_x)_{x \in X}$.
If $\bar{\alpha}$ and $\bar{\beta}$ are cocycle conjugate in the sense of Definition~\ref{def:cc_groupoid_actions}, then $\alpha$ and $\beta$ are cocycle conjugate as actions on von Neumann algebras.
Indeed, let $X \ni x \mapsto \theta_x\colon B_x \rightarrow D_x$ and $\mathcal{G} \ni (g,x) \mapsto w_{(g,x)} \in \mathcal{U}(D_{g \cdot x})$ denote the measurable fields of $*$-isomorphisms and unitaries realizing a cocycle conjugacy between $\bar{\alpha}$ and $\bar{\beta}$.
This gives rise to a $*$-isomorphism $\theta\colon B \rightarrow D$ given by $\theta(b)_x = \theta_x(b_x)$ for $b = \int_X^\oplus b_x\, \mathrm{d} \mu(x) \in B$, and for each $g \in G$ we get a unitary $\mathbbm{v}_g \in \mathcal{U}(D)$ by
$(\mathbbm{v}_g)_x = w_{(g, g^{-1}\cdot x)}$. The pair $(\theta, \mathbbm{v})$ is a cocycle conjugacy.
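To spell out one step, the cocycle identity for $\mathbbm{v}$ can be checked fibrewise: for $g,h \in G$ and almost every $x \in X$ one computes
\[
\big( \mathbbm{v}_g\, \beta_g(\mathbbm{v}_h) \big)_x
= w_{(g,\, g^{-1}\cdot x)}\, \bar{\beta}_{(g,\, g^{-1}\cdot x)}\big( w_{(h,\, h^{-1}g^{-1}\cdot x)} \big)
= w_{(gh,\, (gh)^{-1}\cdot x)}
= (\mathbbm{v}_{gh})_x,
\]
where the middle equality is the cocycle identity of Definition~\ref{def:cc_groupoid_actions} applied to the composable pair $\big( (g,\, g^{-1}\cdot x),\, (h,\, h^{-1}g^{-1}\cdot x) \big) \in \mathcal{G}^{(2)}$.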
Conversely, one can show that every cocycle conjugacy $(\theta, \mathbbm{v}):(B, \alpha) \rightarrow (D, \beta)$ with $\theta\big\lvert_{L^\infty(X)} = \mathrm{id}\big\lvert_{L^\infty(X)}$ gives rise to a cocycle conjugacy in the sense of Definition~\ref{def:cc_groupoid_actions}.
\end{example}
We will subsequently need the following lemma (albeit only for discrete groups) about strongly self-absorbing actions.
\begin{lemma} \label{lem:special-cc}
Let $G_j$, $j=1,2$, be two second-countable locally compact groups with a continuous group isomorphism $\phi: G_1\to G_2$.
Let $\delta^{(j)}\colon G_j\curvearrowright\mathcal R$, $j=1,2$, be two strongly self-absorbing actions and choose a cocycle conjugacy $(\Phi,\mathbb{U})\colon (\mathcal R,\delta^{(2)})\to (\mathcal R\bar{\otimes}\mathcal R,\delta^{(2)}\otimes\delta^{(2)})$ that is approximately unitarily equivalent to $\operatorname{id}_{\mathcal R}\otimes 1_{\mathcal R}$.
(Note that $\phi$ allows us to identify $(\Phi,\mathbb{U}\circ\phi)$ with a cocycle conjugacy between the $G_1$-action $\delta_\phi:=\delta^{(2)}\circ\phi$ and its tensor square.)
Let $\alpha^{(j)}\colon G_j\curvearrowright M_j$, $j=1,2$, be two actions on separably acting von Neumann algebras.
Given a cocycle conjugacy
\[
(\theta,\mathbbm{v})\colon (M_1,\alpha^{(1)})\to \big( M_2\bar{\otimes}\mathcal{R}, (\alpha^{(2)}\otimes\delta^{(2)})\circ\phi \big),
\]
and a conjugacy $\Delta\colon (\mathcal R, \delta^{(2)} \circ \phi)\to(\mathcal R,\delta^{(1)})$,
consider the cocycle conjugacy of $G_1$-actions
\[
(\Psi,\mathbb{V})= \big( (\theta, \mathbbm{v})^{-1} \otimes \Delta \big) \circ \big( \mathrm{id}_{M_2} \otimes (\Phi, \mathbbm{U}\circ\phi) \big) \circ (\theta, \mathbbm{v})
\]
between $(M_1,\alpha^{(1)})$ and $(M_1\bar{\otimes}\mathcal R,\alpha^{(1)}\otimes \delta^{(1)} )$.
Then there exists a sequence of unitaries $y_n\in\mathcal{U}(M_1\otimes\mathcal R)$ such that
\[
\operatorname{Ad}(y_n)\circ (\operatorname{id}_{M_1}\otimes 1_{\mathcal R})\to \Psi,\quad \operatorname{Ad}(y_n^*)\circ\Psi \to \operatorname{id}_{M_1}\otimes 1_{\mathcal R}
\]
point-strongly, and such that
\[
y_n(\alpha^{(1)}\otimes\delta^{(1)})_g(y_n)^* \to \mathbb{V}_g,\quad y_n^*\mathbb{V}_g(\alpha^{(1)}\otimes\delta^{(1)})_g(y_n)\to 1_{M_1\otimes\mathcal R}
\]
in the strong operator topology for all $g\in G_1$ and uniformly over compact sets.
\end{lemma}
\begin{proof}
By assumption, there is a sequence of unitaries $z_n\in\mathcal{U}(\mathcal{R}\bar{\otimes}\mathcal{R})$ such that
\begin{equation} \label{eq:main-tech:ssa1}
\|\Phi(x) - z_n(x\otimes 1_{\mathcal R})z_n^*\|_2\to 0\quad\text{for all } x\in\mathcal R
\end{equation}
and
\begin{equation} \label{eq:main-tech:ssa2}
\max_{h\in K} \|\mathbb{U}_h - z_n(\delta^{(2)}\otimes\delta^{(2)})_h(z_n)^*\|_2\to 0 \quad\text{for every compact } K\subseteq G_2.
\end{equation}
By definition, we have
\[
\Psi = (\theta^{-1}\otimes\Delta)\circ (\operatorname{id}_{M_2}\otimes\Phi)\circ\theta
\]
and
\[
\mathbb{V}_g = ({\theta}^{-1}\otimes\Delta)\Big( (\operatorname{id}_{M_2}\otimes\Phi)(\mathbbm{v}_g) \cdot (1_{M_2}\otimes \mathbb{U}_{\phi(g)})\cdot (\mathbbm{v}_g^* \otimes 1_{\mathcal R}) \Big),\quad g\in G_1.
\]
If we consider the sequence of unitaries
\[
y_n=(\theta^{-1}\otimes\Delta)(1_{M_2}\otimes z_n),
\]
then we can observe with \eqref{eq:main-tech:ssa1} that
\[
\operatorname{Ad}(y_n^*)\circ\Psi \to (\theta^{-1}\otimes\Delta)\circ (\operatorname{id}_{M_2}\otimes\operatorname{id}_{\mathcal R}\otimes 1_{\mathcal R})\circ\theta = \operatorname{id}_{M_1}\otimes 1_{\mathcal R}
\]
as well as
\[
\operatorname{Ad}(y_n)\circ(\operatorname{id}_{M_1}\otimes 1_{\mathcal R}) = \operatorname{Ad}(y_n)\circ(\theta^{-1}\otimes\Delta)\circ (\operatorname{id}_{M_2}\otimes\operatorname{id}_{\mathcal R}\otimes 1_{\mathcal R})\circ\theta \to \Psi
\]
point-strongly.
Moreover, given $g\in G_1$, the fact that $(\theta^{-1}, \big( \theta^{-1}(\mathbbm{v}^*_g) \big)_{g\in G_1} )$ is the inverse of $(\theta,\mathbbm{v})$ leads to the equation $\alpha^{(1)}_g\circ\theta^{-1}=\theta^{-1}\circ\operatorname{Ad}(\mathbbm{v}_g)\circ(\alpha^{(2)}\otimes\delta^{(2)})_{\phi(g)}$.
If we combine this with \eqref{eq:main-tech:ssa1} and \eqref{eq:main-tech:ssa2}, we can see that
\[
\begin{array}{cl}
\multicolumn{2}{l}{ y_n^*\mathbb{V}_g(\alpha^{(1)}\otimes\delta^{(1)})_g(y_n) } \\
=& y_n^*\mathbb{V}_g\cdot (\theta^{-1}\otimes\Delta)\big( \operatorname{Ad}(\mathbbm{v}_g\otimes 1_{\mathcal R})(1_{M_2}\otimes(\delta^{(2)}\otimes\delta^{(2)})_{\phi(g)}(z_n)) \big) \\
=& (\theta^{-1}\otimes\Delta)\Big( (1_{M_2}\otimes z_n)^*\cdot (\operatorname{id}_{M_2}\otimes\Phi)(\mathbbm{v}_g) \cdot (1_{M_2}\otimes \mathbb{U}_{\phi(g)}) \cdots \\
& \phantom{ (\theta^{-1}\otimes\operatorname{id}_{\mathcal{R}})\Big( }\cdots (1_{M_2}\otimes(\delta^{(2)}\otimes\delta^{(2)})_{\phi(g)}(z_n))\cdot (\mathbbm{v}_g^* \otimes 1_{\mathcal R}) \Big) \\
=& (\theta^{-1}\otimes\Delta)\Big( \underbrace{ \big( \operatorname{id}_{M_2}\otimes ( \operatorname{Ad}(z_n^*)\circ\Phi ) \big) (\mathbbm{v}_g) }_{\to \mathbbm{v}_g\otimes 1_{\mathcal R} }
\cdot \big( 1_{M_2}\otimes \underbrace{ z_n^*\mathbb{U}_{\phi(g)}(\delta^{(2)}\otimes\delta^{(2)})_{\phi(g)}(z_n) }_{\to 1_{\mathcal{R}\otimes\mathcal{R}} } \big)\cdot (\mathbbm{v}_g^* \otimes 1_{\mathcal R}) \Big) \\
\to & 1_{M_1\otimes\mathcal R}
\end{array}
\]
in the strong operator topology, uniformly over compact subsets.
Analogously, we observe the convergence
\[
\begin{array}{cl}
\multicolumn{2}{l}{ y_n(\alpha^{(1)}\otimes\delta^{(1)})_g(y_n)^* }\\
=& (\theta^{-1}\otimes\Delta)\Big( (1_{M_2}\otimes z_n) \cdot \operatorname{Ad}(\mathbbm{v}_g\otimes 1_{\mathcal R})(1_{M_2}\otimes(\delta^{(2)}\otimes\delta^{(2)})_{\phi(g)}(z_n^*) ) \Big) \\
=& (\theta^{-1}\otimes\Delta)\Big( \underbrace{ \operatorname{Ad}(1_{M_2}\otimes z_n)(\mathbbm{v}_g\otimes 1_{\mathcal R})}_{\to (\operatorname{id}_{M_2}\otimes\Phi)(\mathbbm{v}_g)}
\cdot \big( 1_{M_2}\otimes \underbrace{ z_n(\delta^{(2)}\otimes\delta^{(2)})_{\phi(g)}(z_n^*) }_{\to \mathbb{U}_{\phi(g)} } \big) \cdot (\mathbbm{v}_g^*\otimes 1_{\mathcal R}) \Big) \\
\to & \mathbb{V}_g
\end{array}
\]
uniformly over compact sets.
This finishes the proof.
\end{proof}
In the proof of the main technical result of this section we will make use of the following variant, due to Popa--Shlyakhtenko--Vaes, of the cohomology lemmas in \cite[Appendix]{JonesTakesaki84} and \cite[Theorem 5.5]{Sutherland85}.
\begin{lemma}[{\cite[Theorem 3.5]{PopaShlyakhtenkoVaes20}}] \label{lemma:cohomology_lemma}
Let $\mathcal{S}$ be an amenable countable nonsingular equivalence relation on the standard probability space $(X,\mu)$.
Let $(G_x \curvearrowright P_x)_{x \in X}$ be a measurable field of continuous actions of Polish groups on Polish spaces, on which $\mathcal{S}$ is acting by conjugacies: we have measurable fields of group isomorphisms ${\mathcal{S} \ni (x,y) \mapsto \gamma_{(x,y)}\colon G_y \rightarrow G_x}$ and homeomorphisms $\mathcal{S} \ni (x,y) \mapsto \beta_{(x,y)}\colon P_y \rightarrow P_x$ satisfying
\[
\gamma_{(x,y)} \circ \gamma_{(y,z)} = \gamma_{(x,z)}, \quad \beta_{(x,y)} \circ \beta_{(y,z)} = \beta_{(x,z)}, \quad \beta_{(x,y)} (g \cdot \pi) = \gamma_{(x,y)}(g) \cdot \beta_{(x,y)}(\pi)
\]
for all $(x,y), (y,z) \in \mathcal{S}$ and $g \in G_y, \pi \in P_y$.
Let $X \ni x \mapsto \sigma'(x) \in P_x$ be a measurable section. Assume that for all $(x,y) \in \mathcal{S}$, the element $\sigma'(x)$ belongs to the closure of ${G_x \cdot \beta_{(x,y)}(\sigma'(y))}$.
Then there exists a measurable family $\mathcal{S} \ni (x,y) \mapsto v(x,y) \in G_x$ and a section $X \ni x \mapsto \sigma(x) \in P_x$ satisfying:
\begin{itemize}
\item $v$ is a 1-cocycle: $v(x,y) \gamma_{(x,y)}(v(y,z)) = v(x,z)$ for all $(x,y), (y,z)\in \mathcal{S}$;
\item $v(x,y)\cdot \beta_{(x,y)}(\sigma(y)) = \sigma(x)$ for all $(x,y) \in \mathcal{S}$.
\end{itemize}
\end{lemma}
\begin{remark} \label{rem:Polish-spaces}
Before we state and prove our main technical result in detail, we would like to outline for what kind of input data the lemma above will be used.
In the situation considered below, the typical Polish space $P$ will be the space of cocycle conjugacies $(M,\alpha)\to (N,\beta)$, where $\alpha: H\curvearrowright M$ and $\beta: H\curvearrowright N$ are actions of a countable discrete group $H$ on separably acting von Neumann algebras.
Here we consider the topology defined by declaring that one has a convergence of nets $(\theta^{(\lambda)},\mathbbm{v}^{(\lambda)})\to(\theta,\mathbbm{v})$ if and only if $\mathbbm{v}^{(\lambda)}_g\to\mathbbm{v}_g$ in the strong operator topology for all $g\in H$, and $\|\varphi\circ\theta-\varphi\circ\theta^{(\lambda)}\|\to 0$ for all $\varphi\in N_*$.
This generalizes the well-known topology on the space of isomorphisms $M\to N$ that is usually called the ``$u$-topology''; cf.\ \cite{Haagerup75, Winslow98}.
The typical Polish group acting on this Polish space would be the unitary group $\mathcal{U}(N)$, which is equipped with the strong operator topology, where the action is defined via composition with the inner cocycle conjugacy as per Example~\ref{ex:inner-cc} and Remark~\ref{rem:cocycle-category}.
In other words, a unitary $w\in\mathcal{U}(N)$ moves the cocycle conjugacy $(\theta,\mathbbm{v})$ to $\operatorname{Ad}(w)\circ(\theta,\mathbbm{v}) = \big( \operatorname{Ad}(w)\circ\theta, (w\mathbbm{v}_g\beta_g(w)^*)_{g\in H} \big)$.
If we assume in addition that $M$ and $N$ are semi-finite, we may pick a faithful normal semi-finite tracial weight $\tau$ on $N$.
Assume that $(\Psi,\mathbb{V})\in P$ is a cocycle conjugacy.
Then it follows from \cite[Proposition 3.7]{Haagerup75} that on the space of all isomorphisms $\Psi': M\to N$ with $\tau\circ\Psi'=\tau\circ\Psi$, the $u$-topology coincides with the topology of point-strong convergence.
As a direct consequence, we may conclude the following.
If $(\Phi,\mathbb{U})\in P$ is another cocycle conjugacy and there exists a net of unitaries $w_\lambda\in\mathcal{U}(N)$ such that $w_\lambda\mathbb{V}_g\beta_g(w_\lambda)^*\to \mathbb{U}_g$ for all $g\in H$ in the strong operator topology and $\operatorname{Ad}(w_\lambda)\circ\Psi\to\Phi$ point-strongly, then $(\Phi,\mathbb{U})\in\overline{ \mathcal{U}(N)\cdot (\Psi,\mathbb{V}) }$.
\end{remark}
The following can be seen as the main technical result of this section, which we previously referred to as a kind of measurable local-to-global principle.
\begin{theorem} \label{thm:main_technical}
Let $G \curvearrowright (X,\mu)$ be an amenable action (in the sense of Zimmer) of a countable, discrete group on a standard probability space.
Let $\alpha$ be an action of ${\mathcal{G}:= G \ltimes X}$ on a measurable field of semi-finite factors with separable predual $(B_x)_{x \in X}$.
Denote by ${X \ni x \mapsto H_x}$ the measurable field of isotropy groups.
For any action ${\delta\colon G \curvearrowright \mathcal{R}}$ on the hyperfinite II$_1$-factor, we define a tensor product action $\alpha \otimes \delta$ of $\mathcal{G}$ on the field of factors $(B_x \bar{\otimes} \mathcal{R})_{x \in X}$ by $(\alpha \otimes \delta)_{(g,x)} = \alpha_{(g,x)} \otimes \delta_g$.
If $\delta$ is strongly self-absorbing, then the following are equivalent:
\begin{enumerate}[leftmargin=*,label=\textup{(\arabic*)}]
\item $\alpha \simeq_{\mathrm{cc}} \alpha \otimes \delta$; \label{prop:main_technical:1}
\item For every $x \in X$ we have $\alpha{|_{H_x}} \simeq_{\mathrm{cc}} (\alpha \otimes \delta){|_{H_x}}$ as actions of $H_x$ on $B_x$ and $B_x \bar{\otimes} \mathcal{R}$. \label{prop:main_technical:2}
\end{enumerate}
\end{theorem}
\begin{proof}
We note that by following the argument outlined in Example~\ref{ex:central_decomposition} and by applying Corollary~\ref{cor:equivalence_equivariant_McDuff}, we see that $\alpha \simeq_{\mathrm{cc}} \alpha \otimes \delta$ implies that one can find a cocycle conjugacy between $\alpha$ and $\alpha\otimes\delta$ that induces the identity map on $X$.
Hence \ref{prop:main_technical:1} implies \ref{prop:main_technical:2}.
In order to prove the other implication, assume \ref{prop:main_technical:2} holds.
To verify \ref{prop:main_technical:1}, we will show the existence of a measurable field of $*$-isomorphisms ${X \ni x \mapsto \theta_x \colon B_x \rightarrow B_x \bar{\otimes} \mathcal{R}}$ and unitaries ${\mathcal{G} \ni (g,x) \mapsto w_{(g,x)} \in \mathcal{U}(B_{g \cdot x} \bar{\otimes} \mathcal{R})}$ such that
\begin{align}
\theta_{g \cdot x} \circ \alpha_{(g,x)} \circ \theta_{x}^{-1} &= \mathrm{Ad} \big( w_{(g,x)} \big) \circ (\alpha \otimes \delta)_{(g,x)} \text{ for all } (g,x) \in \mathcal{G}\label{eq:required_cc}\\
w_{(g,h \cdot x)} (\alpha \otimes \delta)_{(g,h \cdot x)}(w_{(h,x)}) &= w_{(gh,x)} \text{ for all } g,h \in G, x \in X.\label{eq:required_cocycle}
\end{align}
For every $x \in X$, denote by $P_x$ the Polish space of cocycle conjugacies from $(B_x, \alpha|_{H_x})$ to $(B_x \bar{\otimes} \mathcal{R}, (\alpha \otimes \delta)|_{H_x})$ as per Remark~\ref{rem:Polish-spaces}.
In this way, we get a measurable field of Polish spaces $X \ni x \mapsto P_x$. Note that by assumption the sets $P_x$ are all non-empty and hence, there exists some measurable section
$ X \ni x \mapsto (\theta_x, \mathbbm{v}_x) \in P_x$.
Defining $w_{(g,x)} := \mathbbm{v}_{x}(g)$ for $g \in H_x$, we get that --- although $w$ is not defined on all of $\mathcal{G}$ yet --- the equations \eqref{eq:required_cc}--\eqref{eq:required_cocycle} are satisfied whenever they make sense.
In the rest of the proof we will show with the help of Lemma \ref{lemma:cohomology_lemma} that there exists a (potentially different) section for which there exists a well-defined map $w$ on all of $\mathcal{G}$ obeying conditions \eqref{eq:required_cc}--\eqref{eq:required_cocycle}.
Denote by $\mathcal{S}$ the countable non-singular orbit equivalence relation associated to $\mathcal{G}$, i.e.,
\[\mathcal{S}=\{(g \cdot x,x) \mid (g,x) \in \mathcal{G}\}.\]
As $G \curvearrowright (X,\mu)$ is amenable, the relation $\mathcal{S}$ is amenable and hence it follows by the Connes--Feldman--Weiss theorem \cite{ConnesFeldmanWeiss81} that after neglecting a set of measure zero, there exists a partition of $X$ into $\mathcal{S}$-invariant Borel subsets $X_0\sqcup X_1$ such that the restriction of $\mathcal{S}$ to $X_0$ has finite orbits and the restriction to $X_1$ is induced by a free non-singular action of $\mathbb{Z}$.
This implies that the map $q \colon \mathcal{G} \rightarrow \mathcal{S}\colon (g,x) \mapsto (g \cdot x,x)$ admits a measurable section, i.e., a measurable groupoid morphism $s \colon \mathcal{S} \rightarrow \mathcal{G}$ such that $q \circ s = \mathrm{id}_\mathcal{S}$.
Therefore, we can view $\mathcal{G}$ as the semi-direct product of the field of isotropy groups $(H_x)_{x \in X}$ with the equivalence relation $\mathcal{S}$, acting via the measurable field of group isomorphisms $\phi_{(x,y)}\colon H_y \rightarrow H_x$ given by $\phi_{(x,y)}(g)= s(x,y)\, g\, s(x,y)^{-1}$.
Note that $\phi_{(x,y)} \circ \phi_{(y,z)} = \phi_{(x,z)}$ for all $(x,y), (y,z) \in \mathcal{S}$.
This means that in order to define a measurable field $\mathcal{G} \ni (g,x) \mapsto w_{(g,x)} \in \mathcal{U}(B_{g \cdot x} \bar{\otimes} \mathcal{R})$ satisfying \eqref{eq:required_cocycle}, it suffices to find measurable families of unitaries
${v(x,y) \in \mathcal{U}(B_x \bar{\otimes} \mathcal{R})}$ for $(x,y) \in \mathcal{S}$ and
$\mathbbm{v}_x(g) \in \mathcal{U}(B_x \bar{\otimes} \mathcal{R})$ for $x \in X, g \in H_x$ such that
\begin{itemize}
\item $v$ is a cocycle for the action of $\mathcal{S}$ on the field of factors $(B_x\bar{\otimes} \mathcal{R})_{x \in X}$ induced by $s$, i.e.,
$v(x,y)\, (\alpha \otimes \delta)_{s(x,y)}(v(y,z))=v(x,z)$ for all $(x,y), (y,z) \in \mathcal{S}$;
\item for each $x \in X$, the family $(\mathbbm{v}_x(g))_{g \in H_x}$ defines a cocycle for the action $H_x \curvearrowright B_x \bar{\otimes} \mathcal{R}$;
\item $\mathbbm{v}_x(g) = v(x,y)\, (\alpha \otimes \delta)_{s(x,y)}\left(\mathbbm{v}_y\left({\phi_{(y,x)}(g)}\right)\right) (\alpha \otimes \delta)_g\left(v(x,y)^*\right) $ for all $(x,y) \in \mathcal{S}$ and $g \in H_x$.
\end{itemize}
If these conditions are met, then setting $w_{g s(x,y)} := \mathbbm{v}_x(g)(\alpha \otimes \delta)_g (v(x,y))$ for $(x,y) \in \mathcal{S}$ and $g \in H_x$ yields condition \eqref{eq:required_cocycle}.
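To illustrate this in a representative case, note that $w_{s(x,y)} = v(x,y)$ and $w_h = \mathbbm{v}_y(h)$ for $h \in H_y$, so the instance of \eqref{eq:required_cocycle} for the composable pair $(s(x,y), h)$ amounts to
\[
w_{s(x,y)}\, (\alpha\otimes\delta)_{s(x,y)}(w_h) = v(x,y)\, (\alpha\otimes\delta)_{s(x,y)}\big(\mathbbm{v}_y(h)\big),
\]
while, since $s(x,y)\,h = \phi_{(x,y)}(h)\, s(x,y)$,
\[
w_{s(x,y)h} = \mathbbm{v}_x\big(\phi_{(x,y)}(h)\big)\, (\alpha\otimes\delta)_{\phi_{(x,y)}(h)}\big(v(x,y)\big);
\]
the equality of these two expressions is precisely the third bullet point above with $g = \phi_{(x,y)}(h)$.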
Moreover, in order for a measurable field of $*$-isomorphisms ${X \ni x \mapsto \theta_x \colon B_x \rightarrow B_x \bar{\otimes} \mathcal{R}}$ to satisfy \eqref{eq:required_cc}, it then suffices to check that
\begin{itemize}
\item for each $x \in X$, the pair $(\theta_x, \mathbbm{v}_x)$ is a cocycle conjugacy in $P_x$;
\item for $(x,y) \in \mathcal{S}$ we have $\theta_x \circ \alpha_{s(x,y)} \circ \theta_y^{-1} = \mathrm{Ad}(v(x,y))\circ (\alpha \otimes \delta)_{s(x,y)}$.
\end{itemize}
We introduce some notation to rephrase this in terms of the terminology of Lemma~\ref{lemma:cohomology_lemma}. Consider the natural action $\mathcal{U}(B_x \bar{\otimes} \mathcal{R}) \curvearrowright P_x$ given by composing a cocycle conjugacy with the inner one given by $\mathrm{Ad}(u)$ for $u \in \mathcal{U}(B_x \bar{\otimes} \mathcal{R})$ as per Remark~\ref{rem:Polish-spaces}.
In this way, we get a measurable field $(\mathcal{U}(B_x \bar{\otimes} \mathcal{R}) \curvearrowright P_x)_{x \in X}$ of continuous actions of Polish groups on Polish spaces.
Let us convince ourselves that in the terminology of Lemma~\ref{lemma:cohomology_lemma}, the equivalence relation $\mathcal{S}$ acts by conjugacies on this field of actions.
Firstly, we have a measurable field of group isomorphisms
\[
{\mathcal{S} \ni (x,y) \mapsto \gamma_{(x,y)} = (\alpha \otimes \delta)_{s(x,y)}|_{\mathcal{U}(B_y \bar{\otimes} \mathcal{R})} \colon \mathcal{U}(B_y \bar{\otimes} \mathcal{R})\rightarrow \mathcal{U}(B_x \bar{\otimes} \mathcal{R})}
\]
such that $\gamma_{(x,y)} \circ \gamma_{(y,z)} = \gamma_{(x,z)}$ for all $(x,y), (y,z) \in \mathcal{S}$.
The latter formula holds because $\alpha\otimes\delta$ is a $\mathcal G$-action and $s\colon \mathcal{S}\to\mathcal{G}$ is a section.
Secondly, we have an action of $\mathcal{S}$ on $(P_x)_{x \in X}$ as follows.
Given $(x,y) \in \mathcal{S}$ and $(\theta, \mathbbm{v}) \in P_y$, we define $\beta_{(x,y)}(\theta, \mathbbm{v}) := (\tilde{\theta}, \tilde{\mathbbm{v}})$, where
\[
\tilde{\theta} = (\alpha \otimes \delta)_{s(x,y)} \circ \theta \circ \alpha_{s(y,x)} \quad\text{and}\quad
\tilde{\mathbbm{v}}(h) = (\alpha \otimes \delta)_{s(x,y)}(\mathbbm{v}({\phi_{(y,x)}(h)})) \text{ for } h \in H_x.
\]
This construction yields a well-defined cocycle conjugacy in $P_x$, and we get a well-defined map $\beta_{(x,y)}\colon P_y \rightarrow P_x$.
Together these maps combine to form a measurable field of homeomorphisms $\mathcal{S} \ni (x,y) \mapsto \beta_{(x,y)}\colon P_y \to P_x$ such that $\beta_{(x,y)} \circ \beta_{(y,z)} = \beta_{(x,z)}$ for all $(x,y), (y,z) \in \mathcal{S}$.
This formula holds once again because $\alpha\otimes\delta$ and $\alpha$ are $\mathcal G$-actions and $s\colon \mathcal{S}\to\mathcal{G}$ is a section.
Moreover, the maps $\beta$ and $\gamma$ are compatible with the measurable field of actions ${(\mathcal{U}(B_x \bar{\otimes} \mathcal{R}) \curvearrowright P_x)_{x \in X}}$ (as required for Lemma~\ref{lemma:cohomology_lemma}), since for any $(x,y) \in \mathcal{S}, u \in \mathcal{U}(B_y \bar{\otimes} \mathcal{R})$ and $(\theta,\mathbbm{v}) \in P_y$ we may simply compare definitions and observe
\[
\beta_{(x,y)}(u \cdot (\theta,\mathbbm{v})) = \gamma_{(x,y)}(u) \cdot \beta_{(x,y)}(\theta,\mathbbm{v}).
\]
Having introduced all this data, our previous discussion can be rephrased.
In order to complete the proof, it suffices to find a measurable section $X \ni x \mapsto \sigma(x) \in P_x$ and a measurable family $\mathcal{S} \ni (x,y) \mapsto v(x,y) \in \mathcal{U}(B_x \bar{\otimes} \mathcal{R})$ such that
\begin{itemize}
\item $v(x,y) \gamma_{(x,y)}(v(y,z)) = v(x,z)$ for all $(x,y), (y,z) \in \mathcal{S}$;
\item $v(x,y) \cdot \beta_{(x,y)}(\sigma(y)) = \sigma(x)$ for all $(x,y) \in \mathcal{S}$.
\end{itemize}
By Lemma~\ref{lemma:cohomology_lemma}, such maps exist if we can merely show that
there exists a measurable section $X \ni x \mapsto (\theta_x, \mathbbm{v}_x) \in P_x$ such that for all $(x,y) \in \mathcal{S}$, the element $(\theta_x, \mathbbm{v}_x)$ belongs to the closure of $\mathcal{U}(B_x \bar{\otimes}\mathcal{R}) \cdot \beta_{(x,y)}(\theta_y,\mathbbm{v}_y)$.
We claim that this is indeed the case.
Consider any measurable section $X \ni x \mapsto (\theta'_x, \mathbbm{v}'_x) \in P_x$.
As $\delta$ is strongly self-absorbing, we can fix a cocycle conjugacy $(\Phi, \mathbb{U})$ from $(\mathcal{R}, \delta)$ to $(\mathcal{R}\bar{\otimes} \mathcal{R}, \delta \otimes \delta)$ that is approximately unitarily equivalent to $\operatorname{id}_{\mathcal R}\otimes 1_{\mathcal R}$.
For each $x \in X$ we can define the map
\[
\Lambda_x\colon P_x \rightarrow P_x,\quad (\theta, \mathbbm{v}) \mapsto \big( (\theta, \mathbbm{v})^{-1} \otimes \mathrm{id}_\mathcal{R} \big) \circ \big( \mathrm{id}_{B_x} \otimes (\Phi, \mathbb{U}) \big) \circ (\theta, \mathbbm{v}).
\]
Then we get a new measurable section
\[
X \ni x \mapsto (\theta_x, \mathbbm{v}_x) := \Lambda_x(\theta'_x, \mathbbm{v}'_x) \in P_x.
\]
We claim that this section does the trick. Fix $(x,y) \in \mathcal{S}$.
If we denote $(\tilde{\theta}_x, \tilde{\mathbbm{v}}_x):= \beta_{(x,y)}(\theta_y, \mathbbm{v}_y)$, we need to convince ourselves that the cocycle conjugacy $(\theta_x, \mathbbm{v}_x)$ is in the closure of $\mathcal{U}(B_x \bar{\otimes}\mathcal{R})\cdot (\tilde{\theta}_x, \tilde{\mathbbm{v}}_x)$.
First of all, we observe that the construction of $(\theta_x, \mathbbm{v}_x)=\Lambda_x(\theta_x', \mathbbm{v}_x')$ can be seen as a special case of Lemma~\ref{lem:special-cc} for $M_1=M_2=B_x$, $G=H_x$, $\phi=\operatorname{id}_{H_x}$ and $\Delta=\operatorname{id}_{\mathcal R}$.
Hence we can find a sequence of unitaries $y_n\in\mathcal{U}(B_x\otimes\mathcal R)$ such that
\begin{equation} \label{eq:main-tech:yn}
y_n^*\theta_x(b)y_n \to b\otimes 1_{\mathcal R},\quad y_n^*\mathbbm{v}_x(h)(\alpha\otimes\delta)_h(y_n) \to 1_{B_x\otimes\mathcal R}
\end{equation}
in the strong operator topology for all $b\in B_x$ and $h\in H_x$.
Next, by our previous notation, the group isomorphism $\phi_{(x,y)}\colon H_y\to H_x$ is exactly the one for which the isomorphism of von Neumann algebras $\alpha_{s(x,y)}\colon B_y\to B_x$ can be viewed as a (genuine) conjugacy between the $H_x$-actions $(\alpha|_{H_y})_{\phi_{(y,x)}}$ and $\alpha|_{H_x}$.
Moreover if $s(x,y)=(k,y)$ for $k\in G$, then $(\alpha\otimes\delta)_{s(x,y)}=\alpha_{s(x,y)}\otimes\delta_k$ and $\delta_k$ can be seen as a conjugacy between the $H_x$-actions $(\delta|_{H_y})_{\phi_{(y,x)}}$ and $\delta|_{H_x}$.
By definition of $\beta_{(x,y)}$, we have
\[
\begin{array}{ccl}
\tilde{\theta}_x &=& (\alpha \otimes \delta)_{s(x,y)} \circ \theta_y \circ \alpha_{s(y,x)} \\
&=& (\alpha \otimes \delta)_{s(x,y)} \circ \big( {\theta_y'}^{-1} \otimes \mathrm{id}_\mathcal{R} \big) \circ \big( \mathrm{id}_{B_y} \otimes \Phi \big) \circ \theta_y' \circ \alpha_{s(y,x)} \\
&=& \big( ({\theta_y'}\circ\alpha_{s(y,x)})^{-1} \otimes \delta_k \big) \circ \big( \mathrm{id}_{B_y} \otimes \Phi \big) \circ (\theta_y' \circ \alpha_{s(y,x)})
\end{array}
\]
and for all $h\in H_x$ with $g=\phi_{(y,x)}(h)$, one has
\[
\begin{array}{ccl}
\tilde{\mathbbm{v}}_x(h)
&=& (\alpha \otimes \delta)_{s(x,y)}\Big( ({\theta_y'}^{-1}\otimes\operatorname{id}_{\mathcal R})\Big( (\operatorname{id}_{B_y}\otimes\Phi)(\mathbbm{v}_y'(g)) \cdot (1_{B_y}\otimes \mathbb{U}_g)\cdot (\mathbbm{v}_y'(g)^* \otimes1_{\mathcal R}) \Big) \Big) \\
&=& \big( ({\theta_y'}\circ\alpha_{s(y,x)})^{-1}\otimes\delta_k \big)\Big( (\operatorname{id}_{B_y}\otimes\Phi)(\mathbbm{v}_y'(g)) \cdot (1_{B_y}\otimes \mathbb{U}_g)\cdot (\mathbbm{v}_y'(g)^* \otimes1_{\mathcal R}) \Big)
\end{array}
\]
We conclude that Lemma~\ref{lem:special-cc} is applicable to the cocycle conjugacy $(\tilde{\theta}_x,\tilde{\mathbbm{v}}_x)$, where we insert $G_1=H_x$, $G_2=H_y$, $\phi=\phi_{(y,x)}$, $M_1=B_x$, $M_2=B_y$, $\Delta=\delta_k$ and the cocycle conjugacy $(\theta'_y\circ\alpha_{s(y,x)}, \mathbbm{v}'_y\circ\phi_{(y,x)})$ in place of $(\theta,\mathbbm{v})$.
This allows us to find a sequence of unitaries $w_n\in\mathcal{U}(B_x\otimes\mathcal R)$ satisfying
\begin{equation} \label{eq:main-tech:wn}
w_n(b\otimes 1_{\mathcal R})w_n^* \to \tilde{\theta}_x(b),\quad w_n(\alpha\otimes\delta)_h(w_n)^* \to \tilde{\mathbbm{v}}_x(h)
\end{equation}
in the strong operator topology for all $b\in B_x$ and $h\in H_x$.
If we consider both of the conditions \eqref{eq:main-tech:yn} and \eqref{eq:main-tech:wn} and keep in mind that $G$ is countable and $B_x$ is separable and semi-finite, with the help of Lemma \ref{lemma:point_strong_dense_subset} it is possible to find an increasing sequence of natural numbers $m_n$ such that the resulting sequence of unitaries $z_n=w_ny_{m_n}^*$ satisfies
\[
z_n\theta_x(b)z_n^* \to \tilde{\theta}_x(b),\quad z_n\mathbbm{v}_x(h)(\alpha\otimes\delta)_h(z_n)^* \to \tilde{\mathbbm{v}}_x(h)
\]
in the strong operator topology for all $b\in B_x$ and $h\in H_x$.
Then it follows from Remark~\ref{rem:Polish-spaces} that $(\theta_x, \mathbbm{v}_x)$ indeed belongs to the closure of $\mathcal{U}(B_x \bar{\otimes}\mathcal{R})\cdot (\tilde{\theta}_x, \tilde{\mathbbm{v}}_x)$.
This finishes the proof.
\end{proof}
\begin{definition}[see {\cite[Definition~3.4]{Delaroche79}}]
An action $\alpha \colon G \curvearrowright B$ of a countable discrete group on a von Neumann algebra is called \emph{amenable} if there exists an equivariant conditional expectation
\[
P\colon (\ell^\infty(G) \bar{\otimes} B, \tau \otimes \alpha) \rightarrow (B, \alpha),
\]
where $\tau$ denotes the left translation action $G \curvearrowright \ell^\infty(G)$.
\end{definition}
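As a minimal illustration (not needed in the sequel), assume that $G$ itself is amenable with a left-invariant mean $m$ on $\ell^\infty(G)$. Then the slice map
\[
P := m \,\bar{\otimes}\, \operatorname{id}_B, \qquad P(f \otimes b) = m(f)\, b,
\]
defines a (generally non-normal) equivariant conditional expectation, since
$P\big( (\tau\otimes\alpha)_g(f\otimes b) \big) = m(\tau_g f)\,\alpha_g(b) = m(f)\,\alpha_g(b) = \alpha_g\big( P(f\otimes b) \big)$.
In particular, every action of an amenable group is amenable.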
By \cite{Delaroche82}, an action $\alpha$ as above is amenable if and only if its restriction to $\mathcal{Z}(B)$ is amenable, which is equivalent to the action on the measure-theoretic spectrum of the center being amenable in the sense of Zimmer.
Recall that an automorphism $\alpha\in\operatorname{Aut}(M)$ of a separably acting von Neumann algebra is \emph{properly centrally non-trivial} if, for every non-zero projection $p\in\mathcal{Z}(M)$, the restriction of $\alpha_\omega$ to $pM_\omega$ is non-trivial.
The following result contains Theorem~\ref{theorem-A} for actions on semi-finite von Neumann algebras as a special case via $H=G$.
\begin{corollary} \label{cor:McDuff-passes}
Let $G$ be a countable discrete group and $B$ a semi-finite von Neumann algebra with separable predual such that $B \cong B \bar{\otimes} \mathcal{R}$.
Let $\alpha: G \curvearrowright B$ be an amenable action.
Suppose $H\subseteq G$ is a normal subgroup such that for every $g\in G\setminus H$, the automorphism $\alpha_g$ is properly centrally non-trivial.
Let $G_1=G/H$ with quotient map $\pi: G\to G_1$ and let $\delta: G_1\curvearrowright\mathcal R$ be a strongly self-absorbing action.
Then $\alpha \simeq_{\mathrm{cc}} \alpha \otimes \delta_\pi$.
\end{corollary}
\begin{proof}
Adopt the notation from Example~\ref{ex:central_decomposition} and Theorem~\ref{thm:main_technical}.
We identify $\alpha$ with an action of ${\mathcal{G}:= G \ltimes X}$ on a measurable field of semi-finite factors with separable predual $(B_x)_{x \in X}$.
Denote by ${X \ni x \mapsto H_x}$ the measurable field of isotropy groups.
Amenability of the action $\alpha$ implies that the action on $(X,\mu)$ is amenable in the sense of Zimmer,
which in turn implies amenability of the associated transformation groupoid.
In particular all isotropy groups $H_x$ are amenable.
By assumption on $H$ and \cite[Theorem 9.14]{MasudaTomatsu16}, it follows for every $g\in G\setminus H$ and $\mu$-almost all $x\in X$ that either $g\notin H_x$ or the automorphism $(\alpha|_{H_x})_g$ on the McDuff factor $B_x$ is centrally non-trivial.
In other words, after discarding a null set from $X$, we may assume for all $x\in X$ that for all $h\in H_x\setminus(H_x\cap H)$, the automorphism $(\alpha|_{H_x})_h$ on $B_x$ is centrally non-trivial.
By Theorem~\ref{theorem:model-absorption}, we get that $(\alpha|_{H_x})$ is cocycle conjugate to $(\alpha\otimes\delta_\pi)|_{H_x}$.
The claim then follows via Theorem~\ref{thm:main_technical}.
\end{proof}
\section{Actions on arbitrary von Neumann algebras}
In this section we shall generalize some of the main results we obtained so far, namely Corollaries~\ref{cor:equivalence_equivariant_McDuff} and \ref{cor:McDuff-passes}, to the context of group actions on not necessarily semi-finite von Neumann algebras.
This uses standard results in Tomita--Takesaki theory, which allow us to reduce the general case to the semi-finite case considered in the previous sections.
We will henceforth assume that the reader is familiar with the basics of Tomita--Takesaki theory as well as the theory of crossed products (for a thorough treatment the reader should consult the book \cite{Takesaki03}), although we are about to recall the specific points needed about the former for this section.
\begin{remark}[see {\cite[Chapters VIII, XII]{Takesaki03}}] \label{rem:tt-basics}
Let $M$ be a separably acting von Neumann algebra.
Given a faithful normal state $\varphi$ on $M$, we denote by $\sigma^\varphi: \mathbb{R}\curvearrowright M$ its associated \emph{modular flow}.
If $\psi$ is another faithful normal state on $M$, we denote by $(D\psi: D\varphi): \mathbb{R}\to\mathcal{U}(M)$ the associated \emph{Connes cocycle}, which is a $\sigma^\varphi$-cocycle satisfying $\operatorname{Ad}(D\psi: D\varphi)_t\circ\sigma^\varphi_t=\sigma^\psi_t$ for all $t\in\mathbb{R}$.
The crossed product von Neumann algebra $\tilde{M}=M\rtimes_{\sigma^\varphi}\mathbb{R}$ is called the \emph{continuous core} of $M$ and does not depend on the choice of $\varphi$ up to canonical isomorphism.
With slight abuse of notation, we will consider $M$ as a von Neumann subalgebra in $\tilde{M}$, and denote by $\lambda^{\sigma^\varphi}: \mathbb{R}\to\mathcal{U}(\tilde{M})$ the unitary representation implementing the modular flow on $M$.
The continuous core $\tilde{M}$ is always a semi-finite von Neumann algebra.
Given any automorphism $\alpha\in\operatorname{Aut}(M)$, there is an induced \emph{extended automorphism} $\tilde{\alpha}\in\operatorname{Aut}(\tilde{M})$ uniquely determined by
\[
\tilde{\alpha}|_M = \alpha \quad\text{and}\quad \tilde{\alpha}(\lambda^{\sigma^\varphi}_t)=(D \varphi\circ\alpha^{-1} : D\varphi)_t\cdot\lambda^{\sigma^\varphi}_t,\quad t\in\mathbb R.
\]
The assignment $\alpha\mapsto\tilde{\alpha}$ defines a continuous homomorphism of Polish groups.
Therefore, given more generally a continuous action $\alpha: G\curvearrowright M$ of a second-countable locally compact group, we may induce the \emph{extended action} $\tilde{\alpha}: G\curvearrowright\tilde{M}$.
Every extended automorphism on $\tilde{M}$ has the property that it commutes with the dual flow $\hat{\sigma}^\varphi: \mathbb{R}\curvearrowright\tilde{M}$.
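For the reader's convenience, let us record why the assignment $\alpha\mapsto\tilde{\alpha}$ is multiplicative on the canonical unitaries. Combining the chain rule (cf.\ \cite[Theorem VIII.3.7]{Takesaki03}) with the identity $\alpha\big( (D\psi : D\chi)_t \big) = (D\psi\circ\alpha^{-1} : D\chi\circ\alpha^{-1})_t$, we see for $\alpha,\beta\in\operatorname{Aut}(M)$ that
\[
\begin{array}{ccl}
\tilde{\alpha}\big( \tilde{\beta}(\lambda^{\sigma^\varphi}_t) \big)
&=& \alpha\big( (D\varphi\circ\beta^{-1} : D\varphi)_t \big)\, (D\varphi\circ\alpha^{-1} : D\varphi)_t\, \lambda^{\sigma^\varphi}_t \\
&=& (D\varphi\circ\beta^{-1}\circ\alpha^{-1} : D\varphi\circ\alpha^{-1})_t\, (D\varphi\circ\alpha^{-1} : D\varphi)_t\, \lambda^{\sigma^\varphi}_t \\
&=& (D\varphi\circ(\alpha\circ\beta)^{-1} : D\varphi)_t\, \lambda^{\sigma^\varphi}_t \ =\ \widetilde{\alpha\circ\beta}(\lambda^{\sigma^\varphi}_t).
\end{array}
\]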
\end{remark}
\begin{prop} \label{prop:cp-centralizer}
Let $G$ be a second-countable locally compact group.
Let $\alpha: G\curvearrowright M$ be an action on a separably acting von Neumann algebra.
Then the normal inclusion $M\subset M\rtimes_\alpha G$ induces (via componentwise application of representing sequences) a unital $*$-homomorphism
$(M_{\omega,\alpha})^{\alpha^\omega} \to (M\rtimes_\alpha G)_\omega$.
\end{prop}
\begin{proof}
Assume $M$ is represented faithfully on a Hilbert space $\mathcal{H}$, and consider the canonical inclusion $\pi\colon M \rtimes_\alpha G \rightarrow \mathcal{B}(L^2(G)) \bar{\otimes} M \subset \mathcal{B}(L^2(G, \mathcal{H}))$, which on $x \in M \subset M \rtimes_\alpha G$ is given by
\[ [\pi(x) \xi](g) = \alpha_g^{-1}(x)\xi(g), \quad \xi \in L^2(G, \mathcal{H}), g \in G.\]
If $(x_n)_{n \in \mathbb{N}}$ is any bounded sequence in $M$ representing an element $x \in (M_{\omega, \alpha})^{\alpha^\omega}$, then the invariance of $x$ will guarantee that
$\pi(x_n) - (1 \otimes x_n) \rightarrow 0$ in the strong-$*$ operator topology as $n \rightarrow \omega$. Since $(1 \otimes x_n)_{n \in \mathbb{N}}$ represents an element in $(\mathcal{B}(L^2(G)) \bar{\otimes} M)_\omega$, it follows that $(\pi(x_n))_{n \in \mathbb{N}}$ represents an element in $(\pi(M\rtimes_\alpha G))_\omega$, so $x \in (M\rtimes_\alpha G)_\omega$. This finishes the proof.
\end{proof}
\begin{prop} \label{prop:Takai-duality}
Let $\alpha: G \curvearrowright M$ be an action on a von Neumann algebra with separable predual.
Let $\varphi$ be a faithful normal state on $M$.
Let $\tilde{\alpha}: G\curvearrowright\tilde{M}=M\rtimes_{\sigma^\varphi}\mathbb R$ be the extended action on the continuous core as in Remark~\ref{rem:tt-basics}.
With some abuse of notation, denote by $\tilde{\alpha}$ also the induced action $G\curvearrowright\tilde{M}\rtimes_{\hat{\sigma}^\varphi}\mathbb R$ by acting trivially on the canonical unitary representation implementing $\hat{\sigma}^\varphi$.
Then under the Takesaki--Takai duality isomorphism $\tilde{M}\rtimes_{\hat{\sigma}^\varphi}\mathbb R \cong \mathcal B(L^2(\mathbb R))\bar{\otimes} M$, the action $\tilde{\alpha}$ is cocycle conjugate to $\operatorname{id}_{\mathcal{B}(L^2(\mathbb R))}\otimes\alpha$.
\end{prop}
\begin{proof}
Denote $\alpha'=\operatorname{id}_{\mathcal{B}(L^2(\mathbb R))}\otimes\alpha$.
We apply Takesaki--Takai duality \cite[Chapter X, Theorem 2.3(iii)]{Takesaki03} to understand the $G$-action $\tilde{\alpha}$.
If $M$ is represented faithfully on a Hilbert space $\mathcal H$, then the natural isomorphism
\[
\Theta: \tilde{M}\rtimes_{\hat{\sigma}^\varphi}\mathbb R = (M\rtimes_{\sigma^\varphi}\mathbb R)\rtimes_{\hat{\sigma}^\varphi}\mathbb R \to \mathcal{B}(L^2(\mathbb R)) \bar{\otimes} M\subseteq \mathcal{B}(L^2(\mathbb R,\mathcal H))
\]
has the following properties.
Let $\xi\in L^2(\mathbb R,\mathcal H)$.
For $x\in M$ and $s,t\in\mathbb R$ we have
\[
[\Theta(x)\xi ](s) = \sigma^\varphi_{-s}(x)\xi(s) \quad\text{and}\quad [\Theta(\lambda^{\sigma^\varphi}_t)\xi](s) = \xi(s-t).
\]
Furthermore, if the dual flow is given via the convention
$\hat{\sigma}^\varphi_t(\lambda_s^{\sigma^{\varphi}}) = e^{its}\lambda^{\sigma^{\varphi}}_s$, then we also have
\[
[\Theta(\lambda^{\hat{\sigma}^\varphi}_t)\xi](s)=e^{its}\xi(s).
\]
Then we obtain an $\alpha'$-cocycle $\mathbb{W}$ via
\[
[\mathbb{W}_g\xi](s) = (D\varphi: D\varphi\circ\alpha_g^{-1})_{-s}\xi(s),\quad \xi\in L^2(\mathbb R,\mathcal H).
\]
Given how $\tilde{\alpha}$ acts on the domain of $\Theta$, we can observe (using \cite[Corollary VIII.1.4]{Takesaki03}) for any $g\in G$, $x \in M$, $\xi \in L^2(\mathbb{R}, \mathcal{H})$ and $s \in \mathbb{R}$ that
\[
\begin{array}{ccl}
[\Theta(\tilde{\alpha}_g(x))\xi ](s) &=& \sigma^\varphi_{-s}(\alpha_g(x))\xi(s) \\
&=& \operatorname{Ad}(D\varphi: D\varphi\circ\alpha_g^{-1})_{-s}\Big( \sigma_{-s}^{\varphi\circ\alpha_g^{-1}}(\alpha_g(x)) \Big)\xi(s) \\
&=& \Big(\operatorname{Ad}(D\varphi: D\varphi\circ\alpha_g^{-1})_{-s}\circ\alpha_g\Big) \big( \sigma^\varphi_{-s}(x) \big)\xi(s) \\
&=& \Big[ \Big( \operatorname{Ad}(\mathbb{W}_g)\circ\alpha'_g\circ\Theta\Big)(x)\, \xi \Big](s).
\end{array}
\]
Moreover, using the cocycle identity and the chain rule for the Connes cocycle (\cite[Theorem VIII.3.7]{Takesaki03}), we can see for any $g \in G$, $\xi \in L^2(\mathbb{R},\mathcal{H})$ and $s,t \in \mathbb{R}$ that
\[
\begin{array}{ccl}
[\Theta(\tilde{\alpha}_g(\lambda^{\sigma^\varphi}_t))\xi](s) &=& [\Theta\big( (D \varphi\circ\alpha_g^{-1} : D\varphi)_t\cdot\lambda^{\sigma^\varphi}_t \big)\xi](s) \\
&=& \sigma^{\varphi}_{-s}(D \varphi\circ\alpha_g^{-1} : D\varphi)_t [ \Theta(\lambda^{\sigma^\varphi}_t)\xi ](s) \\
&=& \sigma^{\varphi}_{-s}(D \varphi\circ\alpha_g^{-1} : D\varphi)_t \xi(s-t) \\
&=& (D\varphi: D\varphi\circ\alpha_g^{-1})_{-s} (D\varphi: D\varphi\circ\alpha_g^{-1})_{t-s}^* \xi(s-t) \\
&=& (D\varphi: D\varphi\circ\alpha_g^{-1})_{-s} \Big[ \Theta(\lambda_t^{\sigma^\varphi}) \mathbb{W}_g^* \xi \Big](s) \\
&=& \Big[ \mathbb{W}_g \Theta(\lambda_t^{\sigma^\varphi}) \mathbb{W}_g^* \xi \Big](s) \\
&=& \Big[ \Big( \operatorname{Ad}(\mathbb{W}_g)\circ\alpha'_g\circ\Theta\Big)(\lambda_t^{\sigma^\varphi}) \xi \Big](s)
\end{array}
\]
Here we used that $\alpha'_g$ fixes the shift operator given by $\Theta(\lambda^{\sigma^\varphi}_t)$ for all $g\in G$ and $t\in\mathbb{R}$.
Lastly, it is trivial to see that $\alpha'$ fixes operators of the form $\Theta(\lambda^{\hat{\sigma}^\varphi}_t)$, which in turn also commute pointwise with the cocycle $\mathbb{W}$.
In conclusion, all of these observations culminate in the fact that the isomorphism $\Theta$ extends to the cocycle conjugacy $(\Theta,\mathbb{W})$ between $\tilde{\alpha}$ and $\alpha'$.
\end{proof}
The following represents our most general McDuff-type absorption theorem:
\begin{theorem} \label{thm:general-McDuff}
Let $G$ be a second-countable locally compact group.
Let $\alpha: G \curvearrowright M$ be an action on a von Neumann algebra with separable predual and let $\delta: G \curvearrowright \mathcal{R}$ be a strongly self-absorbing action on the hyperfinite II$_1$-factor.
Then $\alpha\simeq_{\mathrm{cc}}\alpha\otimes\delta$ if and only if there exists a unital equivariant $*$-homomorphism $(\mathcal R,\delta) \to (M_{\omega,\alpha},\alpha_\omega)$.
\end{theorem}
\begin{proof}
The ``only if'' part follows exactly as in the proof of Corollary~\ref{cor:equivalence_equivariant_McDuff}, so we need to show the ``if'' part.
Since Corollary~\ref{cor:equivalence_equivariant_McDuff} already covers the case when $M$ is finite, we may split off the largest finite direct summand of $M$ and assume without loss of generality that $M$ is properly infinite.
Let $\varphi$ be a faithful normal state on $M$.
As in Remark~\ref{rem:tt-basics}, we consider the (semi-finite) continuous core $\tilde{M}$ and the extended $G$-action $\tilde{\alpha}: G\curvearrowright\tilde{M}$.
Since the image of $\tilde{\alpha}$ commutes with the dual flow $\hat{\sigma}^{\varphi}$, we have a continuous action $\beta=\tilde{\alpha}\times\hat{\sigma}^\varphi: G\times\mathbb{R}\curvearrowright\tilde{M}$ via $\beta_{(g,t)}=\tilde{\alpha}_g\circ\hat{\sigma}^\varphi_t$ for all $g\in G$ and $t\in\mathbb{R}$.
Let us also consider the action $\delta^{\mathbb R}: G\times\mathbb{R}\curvearrowright\mathcal R$ given by $\delta^{\mathbb R}_{(g,t)}=\delta_g$ for all $g\in G$ and $t\in\mathbb{R}$, which is evidently also strongly self-absorbing.
We apply Proposition~\ref{prop:cp-centralizer} to $\mathbb{R}$ in place of $G$ and to the modular flow in place of $\alpha$.
In this context, recall that the ultrapower flow $(\sigma^\varphi)^\omega$ already acts continuously on the ultrapower $M^\omega$ (see \cite[Theorem 4.1]{AndoHaagerup14}), and its restriction to $M_\omega$ is trivial.
So $M_\omega=(M_{\omega,\sigma^\varphi})^{(\sigma^\varphi)^\omega}$ and Proposition~\ref{prop:cp-centralizer} implies that the inclusion $M\subset\tilde{M}$ induces an embedding $M_\omega\to \tilde{M}_\omega$.
Since by definition, one has $\tilde{\alpha}|_M=\alpha$ as $G$-actions, it is clear that bounded $(\alpha,\omega)$-equicontinuous sequences in $M$ become $(\tilde{\alpha},\omega)$-equicontinuous sequences in $\tilde{M}$.
Keeping in mind that the dual flow $\hat{\sigma}^\varphi$ acts trivially on $M$ by definition, the aforementioned inclusion therefore induces an equivariant unital $*$-homomorphism
\[
(M_{\omega,\alpha},\alpha_\omega) \to ( (\tilde{M}_{\omega,\beta})^{(\hat{\sigma}^\varphi)^\omega},\tilde{\alpha}_\omega ).
\]
If we compose this $*$-homomorphism with a given unital equivariant $*$-homomorphism $(\mathcal R,\delta)\to (M_{\omega,\alpha},\alpha_\omega)$, we can view the resulting map as a $(G\times\mathbb{R})$-equivariant unital $*$-homomorphism $(\mathcal R,\delta^{\mathbb R}) \to (\tilde{M}_{\omega,\beta},\beta_\omega)$.
Since $\tilde{M}$ is semi-finite, it follows from Corollary~\ref{cor:equivalence_equivariant_McDuff} that $\beta$ and $\beta\otimes\delta^{\mathbb R}$ are cocycle conjugate as $(G\times\mathbb R)$-actions, say via $(\Psi,\mathbb{V}): (\tilde{M},\beta)\to (\tilde{M}\bar{\otimes}\mathcal{R},\beta\otimes\delta^{\mathbb R})$.
Remembering $\beta=\tilde{\alpha}\times\hat{\sigma}^\varphi$, we consider the $\hat{\sigma}^\varphi\otimes \mathrm{id}_\mathcal{R}$-cocycle $\mathbbm{w}_t=\mathbb{V}_{(1_G,t)}$ and the $\tilde{\alpha}\otimes \delta$-cocycle $\mathbbm{v}_g=\mathbb{V}_{(g,0)}$.
The cocycle identity for $\mathbb{V}$ then implies the relation
\begin{equation} \label{eq:cocycle-interaction}
\mathbbm{w}_t(\hat{\sigma}^\varphi_t\otimes\operatorname{id}_{\mathcal R})(\mathbbm{v}_g)=\mathbbm{v}_g(\tilde{\alpha}\otimes\delta)_g(\mathbbm{w}_t)
\end{equation}
for all $g\in G$ and $t\in\mathbb R$.
The cocycle conjugacy of flows $(\Psi,\mathbbm{w})$ induces an isomorphism
\[
\Lambda:=(\Psi,\mathbbm{w})\rtimes\mathbb R: \tilde{M}\rtimes_{\hat{\sigma}^\varphi}\mathbb R \to (\tilde{M}\bar{\otimes}\mathcal R)\rtimes_{\hat{\sigma}^\varphi\otimes\operatorname{id}_{\mathcal R}}\mathbb R \cong \big(\tilde{M}\rtimes_{\hat{\sigma}^\varphi}\mathbb R\big)\otimes\mathcal R
\]
via
\[
\Lambda|_{\tilde{M}} = \Psi \quad\text{and}\quad \Lambda(\lambda^{\hat{\sigma}^\varphi}_t)=\mathbbm{w}_t(\lambda^{\hat{\sigma}^\varphi}_t\otimes 1_{\mathcal R}),\quad t\in\mathbb R.
\]
With slight abuse of notation (as in Proposition~\ref{prop:Takai-duality}), we also denote by $\tilde{\alpha}$ the obvious induced $G$-action on the crossed product $\tilde{M}\rtimes_{\hat{\sigma}^\varphi}\mathbb R$.
Using that $(\Psi,\mathbb{V})$ was a cocycle conjugacy, we observe for all $g\in G$ and $t\in\mathbb{R}$ that
\[
\operatorname{Ad}(\mathbbm{v}_g)\circ(\tilde{\alpha}\otimes\delta)_g\circ\Lambda|_{\tilde{M}} = \operatorname{Ad}(\mathbb{V}_{(g,0)})\circ(\beta\otimes\delta^{\mathbb R})_{(g,0)}\circ\Psi=\Psi\circ\beta_{(g,0)}=\Psi\circ\tilde{\alpha}_g
\]
and
\[
\begin{array}{ccl}
\big( \operatorname{Ad}(\mathbbm{v}_g)\circ(\tilde{\alpha}\otimes\delta)_g\circ\Lambda \big)(\lambda^{\hat{\sigma}^\varphi}_t)
&=& \mathbbm{v}_g \big( (\tilde{\alpha}\otimes\delta)_g(\mathbbm{w}_t) (\lambda^{\hat{\sigma}^\varphi}_t\otimes 1_{\mathcal R}) \big) \mathbbm{v}_g^* \\
&\stackrel{\eqref{eq:cocycle-interaction}}{=}& \mathbbm{w}_t(\hat{\sigma}^\varphi_t\otimes\operatorname{id}_{\mathcal R})(\mathbbm{v}_g) (\lambda^{\hat{\sigma}^\varphi}_t\otimes 1_{\mathcal R}) \mathbbm{v}_g^* \\
&=& \mathbbm{w}_t(\lambda^{\hat{\sigma}^\varphi}_t\otimes 1_{\mathcal R}) \ = \ \Lambda(\lambda^{\hat{\sigma}^\varphi}_t).
\end{array}
\]
In conclusion, the pair $(\Lambda,\mathbbm{v})$ defines a cocycle conjugacy between the $G$-actions $\tilde{\alpha}$ on $\tilde{M}\rtimes_{\hat{\sigma}^\varphi}\mathbb R$ and $\tilde{\alpha}\otimes\delta$ on $\big(\tilde{M}\rtimes_{\hat{\sigma}^\varphi}\mathbb R\big)\otimes\mathcal R$.
By Proposition~\ref{prop:Takai-duality}, the action $\tilde{\alpha}$ is cocycle conjugate to $\operatorname{id}_{\mathcal B(\ell^2(\mathbb{N}))}\otimes\alpha$.
Since we assumed $M$ to be properly infinite, it furthermore follows that $\operatorname{id}_{\mathcal B(\ell^2(\mathbb{N}))}\otimes\alpha$ is cocycle conjugate to $\alpha$.\footnote{Although this appears to be well-known, we could not find a good literature source for this precise claim and in this generality. We note, however, that the recent proof of the analogous C$^*$-algebraic statement \cite[Proposition 1.4]{GabeSzabo22kp} carries over in an obvious way to this setting.}
Combining all these cocycle conjugacies yields one between $\alpha$ and $\alpha\otimes\delta$.
This finishes the proof.
\end{proof}
The following consequence is our last main result, which generalizes Corollary~\ref{cor:McDuff-passes} to actions on arbitrary von Neumann algebras.
\begin{theorem} \label{thm:general-amenable-McDuff}
Let $G$ be a countable discrete group and $M$ a von Neumann algebra with separable predual such that $M \cong M\bar{\otimes} \mathcal{R}$.
Then for every amenable action $\alpha: G \curvearrowright M$, one has $\alpha \simeq_{\mathrm{cc}} \alpha \otimes \mathrm{id}_\mathcal{R}$.
\end{theorem}
\begin{proof}
Choose a faithful normal state $\varphi$ on $M$.
Recall that the induced faithful normal state $\varphi^\omega$ on $M^\omega$ restricts to a tracial state on $M_\omega$. We denote by $\|\cdot\|_2=\|\cdot\|_{\varphi^\omega}$ the induced tracial 2-norm on $M_\omega$.
Since we assumed $M$ to be McDuff, it follows that $M_\omega$ is locally McDuff in the following sense:
Given any $\|\cdot\|_2$-separable $*$-subalgebra $B\subset M_\omega$, there exists a unital $*$-homomorphism $\mathcal R\to M_\omega\cap B'$.
Now we choose $N_1=\mathcal{Z}(M)$ as a subalgebra of $M_\omega$.
We may then choose a unital $*$-homomorphism $\psi_1: \mathcal{R}\to M_\omega$, and define $N_2$ to be the $\|\cdot\|_2$-closed $*$-subalgebra generated by $N_1$ and the range of $\alpha_{\omega,g}\circ\psi_1$ for all $g\in G$.
After that, we may choose a unital $*$-homomorphism $\psi_2: \mathcal{R}\to M_\omega\cap N_2'$, and define $N_3$ to be the $\|\cdot\|_2$-closed $*$-subalgebra generated by $N_2$ and the range of $\alpha_{\omega,g}\circ\psi_2$ for all $g\in G$.
Carry on inductively like this and obtain an increasing sequence of $\alpha_{\omega}$-invariant separable von Neumann subalgebras $N_k\subseteq M_\omega$.
The $\|\cdot\|_2$-closure of the union $\bigcup_{k\geq 1} N_k$ is then a separably acting finite von Neumann subalgebra $N \subseteq M_\omega$, which is clearly McDuff and $\alpha_{\omega}$-invariant.
Furthermore $N$ contains the center of $M$ equivariantly, which implies that the action $\alpha_\omega$ is amenable on $N$.
By Corollary~\ref{cor:McDuff-passes} (with $H=G$), it follows that $\alpha_{\omega}|_N$ is cocycle conjugate to $(\alpha_\omega|_N)\otimes\operatorname{id}_{\mathcal R}$.
In particular we may find some unital $*$-homomorphism $\mathcal R\to (N_\omega)^{\alpha_\omega}$.
Applying a standard reindexation trick, we may use this to obtain a unital $*$-homomorphism $\mathcal R\to (M_\omega)^{\alpha_\omega}$, which finishes the proof by Theorem~\ref{thm:general-McDuff}.
\end{proof}
\addcontentsline{toc}{section}{References}
\bibliographystyle{gabor}
\section{Introduction}
\label{sec:intro}
Currently, the number of molecules detected in the interstellar medium (ISM) thanks to their rotational signatures far exceeds 200 \citep{McGuire_2018}. Among them more than 70 species belong to the class
of the so-called interstellar complex organic molecules (iCOMs), namely molecules containing at least one carbon atom and a total of more than 6 atoms \citep{COMs}.
Nitrogen-bearing iCOMs are particularly interesting because of their prebiotic character; indeed, they represent key intermediates toward the main building blocks of biomolecules, like amino acids and nucleobases. Within this class of iCOMs, six members of
the imine family have been detected so far in the ISM, namely methanimine (\ce{CH2NH}, \cite{godfrey73,Dickens_1997}), ethanimine (\ce{CH3CHNH}, \cite{loomis2013detection}), ketenimine (\ce{CH2CNH}, \cite{Lovas_2006}),
3-imino-1,2-propadienylidene (\ce{CCCNH}, \cite{kawaguchi1992detection}), C-cyanomethanimine (\ce{NCCHNH}, \cite{zaleski2013detection}; \cite{rivilla2019}), and --very recently-- Z-propargylimine (2-propyn-1-imine, \ce{HC\bond{=}C\bond{-}CH\bond{=}NH}, \cite{Bizzocchi2020}).
The main hypotheses on their formation mechanisms in astrophysical environments
involve either tautomerization of simple nitriles \citep{Lovas_2006} or their partial hydrogenation on dust-grain surfaces \citep{Theule2011,Krim2019}. However, for C-cyanomethanimine, a gas-phase formation route has been recently proposed, which involves addition of the
cyano radical (CN) to methanimine \citep{Vazart2015}. It is thus
quite natural to hypothesize that methanimine can play a role in the formation of other imines upon addition/elimination of reactive radicals already detected in the ISM, like \ce{CH3}, \ce{C2H} or \ce{OH}. Indeed, the reaction of the hydroxyl radical with methanimine has been proven to lead effectively to the formation of formamide in the gas phase \citep{Vazart2016,formamide-solis}.
The focus of the present letter is the possible formation pathway of propargylimine (PGIM), whose Z-isomer has been very recently identified in the quiescent G+0.693-0.027 molecular cloud with an estimated column density of \SI{0.24\pm0.02d14}{\per\square\centi\meter} \citep{Bizzocchi2020}.
In the same study, an upper limit of \num{1.8d-10} was retrieved for the fractional abundance (w.r.t. \ce{H2}) of the higher-energy E isomer, which was instead not observed; this corresponds to a column density $<$\SI{0.13d14}{\per\square\centi\meter}. After the spectroscopic characterization of this imine and its astronomical detection, \cite{Bizzocchi2020} put forward some speculations about
feasible formation routes based on the relative abundances of a number of possible precursors in the G+0.693-0.027 molecular cloud. However, despite the detection of \ce{CH2NH} \citep{Zeng2018}, methanimine was not taken into consideration, even though the authors reported, among others, a large fractional abundance for the ethynyl radical (\ce{C2H}, $^2\Sigma^+$), i.e. \num{3.91d-8}.
Based on these premises, we decided to perform a state-of-the-art quantum-chemical (QC) characterization of the stationary points on the doublet reactive \ce{C2H + CH2NH} potential energy surface (PES) followed by kinetic computations in the framework of a master equation model rooted in generalized transition state estimates of the elementary reaction rates.
From a theoretical point of view, the reactions between the ethynyl radical and several substrates have been recently investigated by state-of-the-art QC approaches \citep{Bowman20}, but addition/elimination reactions with unsaturated substrates have not yet been explored.
\section{Computational methodology}
\label{sec:compdet}
The starting point for the study of the formation pathway of PGIM is the identification of the potential reactants and the analysis of the corresponding reactive PES, which implies the accurate characterization of all stationary points from both a structural and energetic point of view. This first step then needs to be complemented by kinetic calculations.
In the derivation of a feasible reaction mechanism, one has to take into account the extreme conditions of the ISM: low temperatures (10-100 K) and low number densities (10-10$^7$ cm$^{-3}$). Translating number density into pressure, a number density of 10$^4$ cm$^{-3}$ corresponds to a pressure of 3.8$\times$10$^{-10}$ Pa ($\sim$3.8$\times$10$^{-15}$ atm).
\subsection{Reactive potential energy surface}
\label{pes:det}
We have followed the general computational strategy validated in several recent studies \citep{earthspace2020,Molecules,lupi:h2s,staa1652,Tonolo2020}, which involves the following steps:
\begin{enumerate}[i)]
\item The stationary points have been located and characterized using the double-hybrid B2PLYP functional \citep{doi:grimme2006}, combined with D3(BJ) corrections (to incorporate dispersion effects; \cite{D3,D3BJ}) and in conjunction with the jun-cc-pVTZ ``seasonal'' basis set \citep{papajak2009}.
\item Single-point energy calculations, at the B2PLYP-D3(BJ)/jun-cc-pVTZ geometries, have been performed by means of the so-called ``cheap'' composite scheme (ChS; \cite{cheap1,cheap2}), which starts from the coupled-cluster theory including single and double excitations augmented by a perturbative estimate of triples (CCSD(T); \cite{Pople89}) in conjunction with a triple-zeta basis set (cc-pVTZ; \cite{Dunning-JCP1989_cc-pVxZ}) and within the frozen-core (fc) approximation. To improve this level of theory, the ChS model considers the extrapolation to the complete basis set (CBS) limit and the effect of core-valence (CV) correlation using M{\o}ller-Plesset theory to second order (MP2; \cite{mp2}). Concerning the former contribution, the fc-MP2 energy is extrapolated to the CBS limit using the $n^{-3}$ expression \citep{Helgaker1997} in conjunction with the cc-pVTZ and cc-pVQZ basis sets. The CV correlation correction is, instead, the difference between the MP2 energy evaluated correlating all electrons and that computed within the fc approximation, both in conjunction with the cc-pCVTZ basis set \citep{cvbasis}. The resulting energy expression is summarized right after this list.
\item ChS energies have been combined with anharmonic zero-point energy (ZPE) corrections evaluated at the B2PLYP-D3(BJ)/jun-cc-pVTZ level within hybrid degeneracy-corrected second-order vibrational perturbation theory (HDCPT2; \cite{Bloino2012,chemRev2019}).
\end{enumerate}
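To make the composition of energies concrete, the following minimal Python sketch (our illustration; function and variable names are placeholders, and the input energies are not data from this work) assembles a ChS energy from its contributions, using the two-point $n^{-3}$ extrapolation for the MP2 correlation energy:
\begin{verbatim}
# Illustrative sketch (not from this work): assembling a "cheap" composite
# scheme (ChS) energy from its individual contributions, in hartree.
def mp2_cbs(e_tz, e_qz, x=3, y=4):
    # Two-point n^-3 extrapolation of the MP2 correlation energy
    # (Helgaker et al. 1997): E_n = E_CBS + A * n^-3.
    return (y**3 * e_qz - x**3 * e_tz) / (y**3 - x**3)

def chs_energy(e_ccsdt_tz,               # fc-CCSD(T)/cc-pVTZ total energy
               e_mp2_tz, e_mp2_qz,       # fc-MP2 correlation (TZ, QZ)
               e_mp2_ae_cv, e_mp2_fc_cv):  # all-electron / fc MP2, cc-pCVTZ
    d_cbs = mp2_cbs(e_mp2_tz, e_mp2_qz) - e_mp2_tz  # CBS correction
    d_cv = e_mp2_ae_cv - e_mp2_fc_cv                # core-valence correction
    return e_ccsdt_tz + d_cbs + d_cv
\end{verbatim}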
All calculations have been performed with the Gaussian software \citep{g16}.
\subsection{Kinetic models}
\label{Kin:Mod}
Global rate constants have been calculated using a master equation (ME) approach based on \emph{ab initio}
transition state theory (AITSTME), employing the MESS software as the master equation solver
\citep{georgievskii2013reformulation}. For elementary reactions involving a transition state, rate constants
have been computed using transition state theory (TST), while for barrier-less elementary reactions, they have
been evaluated by means of phase space theory (PST; \cite{pechukas1965detailed,chesnavich1986multiple}). The basic assumption of PST is that the interaction between the two reacting fragments is isotropic (following a $\frac{C_6}{R^6}$ power law) and does not affect the internal fragment motions \citep{doi:10.1021/cr050205w}. This approximation is generally valid for low-temperature phenomena, such as those occurring in the ISM. More precisely, in order to obtain the $C_6$ parameter for the PST calculations, we performed a scan of the \ce{HCC\bond{-}CH2NH} and \ce{HCC\bond{-}NHCH2} distances for the C- and N-end attack, respectively. Then, the corresponding minimum energy paths were fitted to the function $f(x)=f_0 - \frac{C_6}{x^6}$, thus obtaining a $C_6$ value of \SI{131.96}{\bohr\tothe{6}\hartree} for the former attack and of \SI{180.59}{\bohr\tothe{6}\hartree} for the latter.
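As an illustration of this step, a minimal fit of a scanned minimum energy path to $f(x)=f_0-C_6/x^6$ can be carried out as follows (the arrays shown are placeholders for the actual scan data, which are not reproduced here):
\begin{verbatim}
# Illustrative sketch: extracting C6 from a scanned minimum energy path.
# Distances in bohr, energies in hartree; the arrays below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def long_range(x, f0, c6):
    return f0 - c6 / x**6

r = np.array([6.0, 7.0, 8.0, 9.0, 10.0, 12.0])        # scan distances
e = np.array([-0.0029, -0.0011, -0.0005, -0.0003,
              -0.0002, -0.0001])                       # relative energies

(f0, c6), _ = curve_fit(long_range, r, e, p0=(0.0, 100.0))
print(f"C6 = {c6:.2f} bohr^6 hartree")
\end{verbatim}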
In all cases, tunnelling has been accounted for using the Eckart model \citep{eckart1930penetration}.
The rate constants of the overall reactions leading to the \ce{C3H3N} imine isomers (namely, the E- and Z-PGIM species and N-ethynyl-methanimine, N-EMIM) have been evaluated in the 20-500 K temperature range. To model their temperature dependence, the rate constants at different temperatures have been fitted to a three-parameter modified Arrhenius equation, namely the Arrhenius-Kooij expression \citep{kooij1893zersetzung,laidler1996glossary}:
\begin{equation}
\label{eq:kooij}
k(T)=A\left(\frac{T}{300}\right)^n\exp\left(-\frac{E}{RT}\right)
\end{equation}
where $A$, $n$, and $E$ are the fitting parameters and $R$ is the universal gas constant.
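A minimal sketch of such a fit in Python follows (our illustration; the rate-constant array is a placeholder, while the actual fitted parameters are those reported in Table \ref{tab:fitparameters}):
\begin{verbatim}
# Illustrative sketch: fitting computed rate constants k(T) to the
# Arrhenius-Kooij expression k(T) = A * (T/300)^n * exp(-E / (R*T)).
import numpy as np
from scipy.optimize import curve_fit

R = 8.31446e-3  # kJ mol^-1 K^-1

def kooij(T, A, n, E):
    return A * (T / 300.0)**n * np.exp(-E / (R * T))

T = np.linspace(20.0, 500.0, 25)          # temperature grid, K
k = kooij(T, 3.5e-10, 0.04, 0.09)         # placeholder "computed" data

(A, n, E), _ = curve_fit(kooij, T, k, p0=(1e-10, 0.0, 0.1))
rms = np.sqrt(np.mean((kooij(T, A, n, E) - k)**2))
\end{verbatim}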
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=1.0\textwidth]{figures/pes_chs_pgim.png}
\end{center}
\caption{Formation route of N-EMIM and the PGIM isomers: ChS energies augmented by anharmonic B2PLYP-D3(BJ) ZPE corrections.}
\label{fig:pespgimchs}
\end{figure}
\section{Results and discussion}
\subsection{Reactivity and energetics}
A recent re-investigation of the reaction channel starting from the attack of the cyano radical at the C-end of methanimine \citep{Puzzarini2020} has shown that, for all stationary points, the ChS model has a maximum absolute deviation of \SI{3}{\kilo\joule\per\mol} and an average absolute deviation of \SI{1.1}{\kilo\joule\per\mol} with respect to a reference composite scheme that is able to reach sub-kJ accuracy in energetics. These errors are much smaller than those arising from widely employed composite schemes (e.g. CBS-QB3, \cite{CBSQB3}, or G4, \cite{G4}) and fully sufficient for obtaining quantitative estimates of reaction rates and branching ratios \citep{earthspace2020,Molecules,lupi:h2s,staa1652,Tonolo2020}. On these grounds, we have performed a full characterization of the doublet PES for the addition-elimination reactions of both the CN and CCH radicals with methanimine at the ChS level.
As far as the reaction mechanism is concerned, hydrogen abstraction could be competitive with addition/elimination \citep{Bowman20}, but test computations showed that the former reaction channel is at least one order of magnitude slower than the latter one. As a consequence, only the addition/elimination reaction channel is analyzed in detail in the following.
The reaction mechanism proposed in the present paper for the formation of N-EMIM and the PGIM isomers is sketched in Figure \ref{fig:pespgimchs} and the relative energies of all the stationary points, with respect to reactants, are collected in Table \ref{tab:cchenergies} together with the corresponding results for the \ce{CH2NH + CN} reaction.
There are three possible initial adducts, corresponding to the attack of the ethynyl radical at the C or N end and at the $\pi$-system of the imine double bond. However, the cyclic adduct resulting from the third option (CYCLO-1) is significantly less stable and easily interconverts to one of the corresponding open-chain minima (1Z or 1N). For both the CN and CCH radicals, the intermediate obtained upon attack at the N moiety is slightly more stable, but the reaction channels originating from it are ruled by transition states significantly less stable (albeit always submerged) than those ruling the corresponding channels issuing from 1Z or 1E. It is worth noting that the PES for the \ce{CH2NH + CN} reaction is, in every detail, analogous to that for the \ce{C2H} radical.
\begin{table}[!htbp]
\centering
\caption{ChS relative electronic energies ($\Delta E_{el}$) and corresponding standard enthalpies at 0 K ($\Delta H^{\circ}_0$) for the stationary points of the \ce{CH2NH + X} reaction. Values in \si{\kilo\joule\per\mol}.}
\label{tab:cchenergies}
\resizebox{0.65\textwidth}{!}{%
\begin{tabular}{@{}ccccc@{}}
\toprule
& \multicolumn{2}{c}{X = C$_2$H} & \multicolumn{2}{c}{X = CN} \\
\cline{2-3}\cline{4-5}
& $\Delta E_{el}$ & $\Delta H^{\circ}_0$ & $\Delta E_{el}$ & $\Delta H^{\circ}_0$ \\ \midrule
\ce{CH2NH + X} & 0.00 & \multicolumn{1}{c|}{0.00} & 0.00 & 0.00 \\
1Z & -229.36 & \multicolumn{1}{c|}{-218.72} & -203.59 & -198.67 \\
TS-1E1Z & -217.65 & \multicolumn{1}{c|}{-205.90} & -192.39 & -183.24 \\
1E & -225.75 & \multicolumn{1}{c|}{-213.58} & -201.48 & -191.95 \\
TS-1Z2 & -96.36 & \multicolumn{1}{c|}{-93.12} & -63.55 & -62.81 \\
TS-1E2 & -88.80 & \multicolumn{1}{c|}{-86.01} & -57.47 & -57.18 \\
2 & -314.60 & \multicolumn{1}{c|}{-299.69} & -284.59 & -271.29 \\
TS-2PZ & -88.09 & \multicolumn{1}{c|}{-94.97} & -48.37 & -58.27 \\
TS-2PE & -84.99 & \multicolumn{1}{c|}{-92.15} & -46.22 & -56.40 \\
TS-1ZPZ & -77.81 & \multicolumn{1}{c|}{-83.90} & -40.62 & -49.89 \\
TS-1EPE & -73.86 & \multicolumn{1}{c|}{-80.46} & -37.97 & -47.70 \\
\ce{Z{-}IM + H} (PZ) & -99.21 & \multicolumn{1}{c|}{-110.00} & -60.89 & -74.73 \\
\ce{E{-}IM + H} (PE) & -95.40 & \multicolumn{1}{c|}{-106.61} & -58.55 & -72.78 \\
CYCLO-1 & -167.50 & \multicolumn{1}{c|}{-151.41} & -108.93 & -96.69 \\
TS-CY-C & -144.35 & \multicolumn{1}{c|}{-133.03} & -97.62 & -88.32 \\
TS-CY-N & -122.80 & \multicolumn{1}{c|}{-114.02} & -88.77 & -80.94 \\
1N & -233.04 & \multicolumn{1}{c|}{-223.67} & -208.44 & -201.32 \\
TS-1N2N & -53.16 & \multicolumn{1}{c|}{-53.04} & -25.66 & -27.15 \\
2N & -272.23 & \multicolumn{1}{c|}{-260.55} & -223.19 & -211.85 \\
TS-2NPN & -52.98 & \multicolumn{1}{c|}{-62.22} & -22.76 & -33.56 \\
TS-1NPN & -35.54 & \multicolumn{1}{c|}{-44.05} & -3.85 & -14.67 \\
\ce{N{-}IM + H} (PN) & -59.22 & \multicolumn{1}{c|}{-72.69} & -30.64 & -45.80 \\ \bottomrule
\end{tabular}%
}
\end{table}
Starting from the very stable 1Z (or 1E) pre-reactive complex, one might observe the loss of a hydrogen radical, leading directly to the Z (or E) isomer of PGIM. This step has an exit barrier of $\sim$135 (or $\sim$133) kJ mol$^{-1}$. On the other hand, considering the presence of the stabilizing \ce{C2H} moiety on the carbon atom, hydrogen migration might occur in order to localize the unpaired electron on this atom. This migration proceeds through the submerged transition state TS-1Z2 (TS-1E2 for E-PGIM), which lies 125.6 kJ mol$^{-1}$ above 1Z (127.6 kJ mol$^{-1}$ above 1E for the E-route), thus forming the most stable intermediate of the whole PES, namely 2, which lies nearly 300 kJ mol$^{-1}$ below the reactants. Next, the loss of hydrogen leads again to the Z (or E) form of PGIM through the submerged transition state TS-2PZ (TS-2PE), lying about 95 kJ mol$^{-1}$ (92 kJ mol$^{-1}$ for the E species) below the reactants (exit barriers of about 205 and 208 kJ mol$^{-1}$, respectively).
The comparison with the analogous reaction paths for the gas-phase production of C-cyanomethanimine \citep{Puzzarini2020} shows that the formation of PGIM is characterized by greater exothermicity (-108 vs. -60 kJ mol$^{-1}$ for the average of Z and E isomers) and lower exit barriers (126 vs. 140 kJ mol$^{-1}$ for the average of TS-1Z2 and TS-1E2 and 206 vs. 238 kJ mol$^{-1}$ for the average of TS-2PZ and TS-2PE). Furthermore, the stability of the pre-reactive complex 1Z or 1E (ruling the barrier-less entrance channel) and that of the intermediate 2 (involved in the two-step mechanism) are greater in the case of the addition of \ce{C2H} than for CN (-218.7 vs. -203.6 kJ mol$^{-1}$ for the average of 1Z,1E and -299.7 vs. -284.6 kJ mol$^{-1}$ for 2).
Moving to the attack at the N-end of methanimine, inspection of Figure~\ref{fig:pespgimchs} makes it evident that the two possible paths originating from the 1N pre-reactive complex are similar to those described above for the C-end attack, as already noted for the \ce{CH2NH + CN} reaction \citep{Vazart2015}. With the only exception of 1N, which lies lower in energy than 1Z and 1E, all intermediates and transition states of these paths are less stable than their C-end counterparts. The product itself, i.e. N-EMIM + H (PN), lies at higher energy: -72.7 kJ mol$^{-1}$, to be compared with -106.6 kJ mol$^{-1}$ for E-PGIM + H (PE) and -110.0 kJ mol$^{-1}$ for Z-PGIM + H (PZ).
Studies of the reactions of radical species with molecules containing a double bond have shown that the reactivity depends on the type of system. For the C=C bond, addition/elimination is barrierless and strongly favored over hydrogen elimination (e.g. \cite{jp301015b}), whereas for C=O bonds only H elimination is barrierless, with both the C- and O-addition/eliminations involving small barriers (e.g. \cite{1.1903945}). Preliminary computations for the addition of other radicals (e.g. CP, OH, and \ce{CH3}) to methanimine show that the mechanism described in the previous paragraphs for the reaction with \ce{C2H} or CN represents a quite general route to the formation of complex imines, although in a few cases (e.g. \ce{CH3}) some transition states are not submerged with respect to the reactants.
\subsection{Rate constants}
To conclusively confirm the effectiveness of the proposed mechanism, kinetic computations are required.
The product-specific rate constants as a function of temperature are shown in Figure \ref{fig:ratepgimchs} for the reaction of methanimine with \ce{C2H} and in Figure \ref{fig:ratecmimchs} for the reaction with CN, whereas the parameters of the Arrhenius-Kooij fits are given in Table \ref{tab:fitparameters}. These have been obtained by fitting the global rate constants computed in the 20-500 K range. In more detail, each figure consists of four panels: those on the left refer to the C-end attack (panels (a) and (c)), while those on the right refer to the N-end attack (panels (b) and (d)). In both figures, the upper panels show the temperature profiles of the rate constants for the formation of the ``C-isomers'' (namely, Z-/E-PGIM and Z-/E-C-cyanomethanimine, CMIM), while the lower panels refer to the formation of the ``N-isomers'' (namely, N-EMIM and N-cyanomethanimine, N-CMIM).
\begin{figure*}[!htbp]
\gridline{\fig{figures/attacco_c_pgim.pdf}{0.5\textwidth}{}
\fig{figures/attacco_n_pgim.pdf}{0.5\textwidth}{}
}
\gridline{\fig{figures/attacco_c_enim.pdf}{0.5\textwidth}{}
\fig{figures/attacco_n_enim.pdf}{0.5\textwidth}{}
}
\caption{Temperature dependence plots of the \ce{CH2NH + C2H} reaction rate constants.
\label{fig:ratepgimchs}}
\end{figure*}
\begin{figure*}[!htbp]
\gridline{\fig{figures/attacco_c_cmim.pdf}{0.5\textwidth} {}
\fig{figures/attacco_n_cmim.pdf}{0.5\textwidth} {}
}
\gridline{\fig{figures/attacco_c_ncmim.pdf}{0.5\textwidth}{}
\fig{figures/attacco_n_ncmim.pdf}{0.5\textwidth}{}
}
\caption{Temperature dependence plots of the \ce{CH2NH + CN} reaction rate constants.
\label{fig:ratecmimchs}}
\end{figure*}
Focusing on the C-end reaction paths, the prevalence of the Z-product is related to the slightly lower energy of the corresponding transition states compared to those leading to the E isomer. Back-dissociation into the reactants is negligible in the whole temperature range considered, whereas the overall rate constant for PGIM formation increases with temperature, also showing progressive deviations from the Arrhenius behavior. The overall rate constant, which is of the order of $2$--$3 \times 10^{-10}$ cm$^3$ molecule$^{-1}$ s$^{-1}$, is mainly ruled by the one-step mechanism leading to the products from the 1Z/1E pre-reactive complex through the TS-1ZPZ/TS-1EPE transition state. This is always the case for Z-PGIM, while for the E isomer the two-step mechanism appears to be rate-determining above $\sim$350 K. The derived branching ratio is of the order of 1.5, smaller than the observational result ($\ge$1.9), as already noted in the case of C-cyanomethanimine. However, as in the latter case, the isomer abundance ratio obtained from astronomical observations is affected by a large uncertainty (the fractional abundance of E being indirectly derived); it is, instead, close to the value obtained from a thermodynamic estimate based on the relative stability of the E and Z isomers.

The considerations above on the computed isomer ratio assume a similar destruction rate for both the E and Z species. In the case of C-cyanomethanimine, it has been suggested that the strong difference between the dipole moments of the E and Z forms (4.2 D and 1.4 D, respectively, from \cite{Vazart2015}) leads to significantly different destruction rates (see \cite{Rivilla20}). More specifically, \cite{Rivilla20} proposed a general rule-of-thumb for estimating the abundances of isomers based on their dipole moments, denoted the ``relative dipole principle''. According to this principle, for propargylimine, whose isomers have very similar dipole moments ($\sim$2 D, see \cite{Bizzocchi2020}), the assumption of similar destruction rates seems to be reliable. At the same time, different reaction rates with H radicals cannot be invoked since, for both cyanomethanimine and propargylimine, the corresponding reactions are ruled by non-submerged transition states. Further investigation of alternative mechanisms would surely be warranted, but is beyond the scope of the present letter. As already noted for the thermochemistry, the general kinetic features of the \ce{C2H} and CN additions to methanimine are very similar, thus giving further support to the plausibility and generality of the proposed mechanism. At 100 K, the overall rate constants for the Z and E species of PGIM (in cm$^3$ molecule$^{-1}$ s$^{-1}$) are \num{3.25d-10} and \num{2.22d-10}, respectively, to be compared to \num{2.73d-10} and \num{1.95d-10} for the two corresponding isomers of C-cyanomethanimine.
As far as the formation of the N-species is concerned, it is interesting to note that this process would be slightly favored over the formation of the C-species if the attacks at the two ends of the imino group were independent (as is actually the case for the CN addition to the \ce{CH3} or \ce{NH2} moiety of methylamine, see \cite{staa1652}), with a rate constant of $4$--$5 \times 10^{-10}$ cm$^3$ molecule$^{-1}$ s$^{-1}$. However, Figure \ref{fig:pespgimchs} shows that the two channels are connected by a low-lying cyclic intermediate. Under these circumstances (also valid for the attack of the CN radical), the formation of the C-products becomes faster by two orders of magnitude than that of the N-product, with the rate constant of the latter process slightly increasing with temperature.
To provide a graphical explanation of the behavior of the global rate constant with temperature, the contributions of some specific reaction channels are shown in Figure \ref{fig:ratecontributipgim}. These are the two barrier-less (C- and N-end) entrance channels, the one- and two-step processes leading to Z-/E-PGIM for the C-end attack, and the corresponding channel leading to N-EMIM for the attack at the N end of methanimine.
It is noted that, even though the entrance-channel flux for the N-end attack is larger than that for the C-end attack, the subsequent high barriers along the N-EMIM formation path slow down the flux, thus resulting in the preferential formation of E,Z-PGIM, whose paths present lower-lying barriers. In this picture, an important role is played by the TS-CY-N transition state linking 1N to the cyclic pre-reactive complex, CYCLO-1. In fact, this interconversion is the elementary step characterized by the lowest barrier on the N-end side of the overall \ce{CH2NH + C2H} reaction. Similar arguments also apply to the reaction involving CN.
\begin{table}[!htbp]
\centering
\caption{The Arrhenius-Kooij parameters for the \ce{CH2NH + X} reaction.}
\label{tab:fitparameters}
\resizebox{0.9\textwidth}{!}{%
\begin{tabular}{@{}lccc|ccc@{}}
\toprule
\multicolumn{1}{c}{} & \multicolumn{3}{c}{C-end attack} & \multicolumn{3}{c}{N-end attack} \\ \cmidrule(l){2-7}
X = \ce{C2H} & E & Z & N & E & Z & N \\ \cmidrule(l){2-7}
$A$ /cm$^3$ molecule$^{-1}$ s$^{-1}$ & \num{2.43d-10} & \num{3.51d-10} & \num{5.54d-12} & \num{2.66d-10} & \num{3.83d-10} & \num{1.26d-11} \\
$n$ & \num{7.58d-2} & \num{3.86d-2} & \num{6.33d-1} & \num{-6.10d-2} & \num{-9.22d-2} & \num{6.59d-1} \\
$E$ /\si{\kilo\joule\per\mol} & \num{6.74d-2} & \num{8.72d-2} & \num{-2.32d-1} & \num{1.62d-1} & \num{1.77d-1} & \num{-2.15d-1} \\
rms\textsuperscript{\emph{a}} & \num{4.37d-12} & \num{7.12d-12} & \num{1.15d-13} & \num{1.02d-11} & \num{1.54d-11} & \num{1.97d-13} \\ \midrule
X = \ce{CN} & E & Z & N & E & Z & N \\ \cmidrule(l){2-7}
$A$ /cm$^3$ molecule$^{-1}$ s$^{-1}$ & \num{1.75d-10} & \num{2.42d-10} & \num{3.12d-12} & \num{1.46d-10} & \num{2.03d-10} & \num{6.51d-12} \\
$n$ & \num{-3.20d-1} & \num{-3.40d-1} & \num{2.68d-2} & \num{-6.56d-1} & \num{-6.68d-1} & \num{-2.72d-1} \\
$E$ /\si{\kilo\joule\per\mol} & \num{2.17d-1} & \num{2.24d-1} & \num{1.03d-1} & \num{3.37d-1} & \num{3.39d-1} & \num{2.33d-1} \\
rms\textsuperscript{\emph{a}} & \num{1.09d-11} & \num{1.55d-11} & \num{8.39d-14} & \num{1.47d-11} & \num{2.07d-11} & \num{3.29d-13} \\ \bottomrule
\end{tabular}%
}
\textsuperscript{\emph{a}} rms stands for root-mean-square deviation of the fit.
\end{table}
It is noteworthy that the behavior discussed above for both types of radical attack on methanimine is specific to the low-pressure limit (see computational details). In fact, moving to a pressure of 1 atm (of limited astrophysical interest, but of potential relevance for planetary atmospheres), the formation of N-EMIM remains unfavorable with respect to E,Z-PGIM and, in general, all formation rate constants become smaller. This trend is due to the collisional stabilization of the entrance-channel wells (namely 1N and 1Z), which occurs at pressures as high as 1 atm, thus leading to an increase of the effective reaction barriers and a consequent decrease of the overall rate constant, which then shows a monotonic increase with temperature.
\begin{figure*}[!htbp]
\gridline{\fig{figures/entrance_pgim.pdf}{0.35\textwidth}{}
\fig{figures/attacco_c_contributi_pgim.pdf}{0.35\textwidth}{}
\fig{figures/attacco_n_contributi_enim.pdf}{0.35\textwidth}{}
}
\caption{Temperature dependence of the rate constants for the elementary steps of the overall \ce{CH2NH + C2H} reaction, namely barrier-less entrance (panel (a)), and one- or two-step paths leading to Z-/E-PGIM (panel (b)) and N-EMIM (panel (c)).
\label{fig:ratecontributipgim}}
\end{figure*}
A curved Arrhenius plot is obtained when the activation energy depends on the temperature, and this behavior is captured by the Arrhenius-Kooij formula (see Equation \ref{eq:kooij}) when this dependence is linear. The root-mean-square deviations reported in Table \ref{tab:fitparameters} demonstrate that the data for the \ce{C3H3N} imine isomers are indeed well fitted by the Arrhenius-Kooij expression. Within this model, $E$ represents the activation energy at 0 K, and the activation energy at a generic temperature $T$ is given by $E+nRT$. In the present case, the activation energy is always positive, with the exception of N-EMIM, as a result of both the capture rate constant and the subsequent energy barriers of the unimolecular steps. The $n$ parameter (which, multiplied by $R$, is the first derivative of the activation energy with respect to temperature) is always positive for the C-end attack, while it is negative for the PGIM isomers when the N-end attack takes place. Finally, the values of the pre-exponential factor $A$ are typical for this kind of reaction and rule the branching ratio between the Z- and E-PGIM isomers. Indeed, the ratio of the $A$ factors is 1.44 and the branching ratio ranges between 1.43 and 1.47 in the whole temperature range (20-500 K).
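For completeness, the quoted expression for the temperature-dependent activation energy follows directly from its standard definition applied to Equation \ref{eq:kooij}:
\begin{equation*}
E_a(T) = RT^2\,\frac{\partial \ln k(T)}{\partial T} = RT^2\left(\frac{n}{T}+\frac{E}{RT^2}\right) = E + nRT ,
\end{equation*}
so that the activation energy depends linearly on $T$ with slope $nR$, as stated above.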
\section{Concluding remarks}
In this letter, we have proposed a gas-phase formation route for the recently detected Z-PGIM molecule.
In analogy to the addition of the CN radical to methanimine leading to cyanomethanimine, addition of the isoelectronic ethynyl radical easily leads to PGIM through a similar reaction mechanism, which involves the formation of a stable pre-reactive complex and its successive evolution by means of submerged transition states.
Since the level of the QC and kinetic computations carried out gives strong support to the quantitative accuracy of our results, a search for PGIM isomers in other regions of the ISM where methanimine and the ethynyl radical have both been detected could be attempted to further validate the proposed reaction mechanism.
In a more general perspective, the results of our state-of-the-art computations provide convincing evidence of the feasibility of a general addition/elimination mechanism for the formation of complex imines, which starts from methanimine as a precursor and involves reactive radicals abundantly present in interstellar space.
\acknowledgments
This work has been supported by MIUR
(Grant Number 2017A4XRCA) and by the University of Bologna (RFO funds). The SMART@SNS Laboratory (http://smart.sns.it) is acknowledged for providing high-performance computing facilities. Support by the Italian Space Agency (ASI; `Life in Space' project, N. 2019-3-U.0) is also acknowledged.
\clearpage
\section{Introduction}
In quantum optics, unitary phase operators were introduced in the 1980's by Barnett and Pegg \cite{pb} to describe phase measurement and quantum phase-dependent effects. In the definition of two-mode unitary phase-difference operators \cite{q}, it is assumed that the total number of photons is conserved. This assumption is not valid in general, except in closed quantum optical systems such as two-mode Raman-type processes in high-Q cavities \cite{q}. However, for matter-waves of ultracold atoms in a double-well (DW) trap, the total number of atoms is conserved during the trap lifetime or the duration of any experimental measurement on the trapped atoms. It is then necessary to formulate the quantum phase of matter-waves with a fixed number of particles. It is therefore important to study quantum atom optics under the influence of unitary phase operators in matter-waves \cite{biswa,king}.
Ketterle's group \cite{k1} first experimentally observed the atom interferometry of two-component Bose-Einstein condensates (BECs) in a DW trap. They observed the relative phase between two condensates with matter-wave interference \cite{Cronin}. In this case, the DW is analogous to a coherent beam splitter. The same group \cite{k2} demonstrated another experimental technique to determine the relative phase of two condensates by scattering of light. The advantage of this technique is that neither coherent splitting of the BECs nor recombination of the matter-waves is required. Matter-wave interferometry has also been developed using magnetically generated DW traps on an atom chip \cite{Schumm}. There have been several experiments determining the spatial phase of the matter-wave interference by releasing two condensates from spatially separated potential wells \cite{k1}. In those experiments the phase is measured classically. Gross {\em et al}. first demonstrated experimentally quantum mechanical homodyne detection of the matter-wave phase \cite{Gross}. Recently, some experiments have shown that small numbers of particles (atomic bosons and fermions) can also be trapped using optical fields \cite{folling,Murman}.
Here we discuss the possibility of quantum phase measurement with matter-wave interferometry with a small number of bosonic atoms in a DW. In the experiment performed by Mandel's group in 1991 \cite{mandel}, two laser modes were employed in an interferometric homodyne detection scheme. One of the modes was treated classically with a large number of photons, and the other quantum mechanically with a variable average photon number. They measured the sine and cosine of the quantum phase-difference operator and plotted the results as a function of the average photon number in the second mode. Their results show that when the average photon numbers in both modes are small, classical and quantum mechanical phases differ significantly. However, if the average photon number in the second mode is increased, classical and average quantum phases tend to match. Here we discuss the possibility of a matter-wave counterpart of Mandel's experiment using ultracold bosonic atoms in a quasi-1D DW.
\section{Phase-operators: A brief review}
Here we consider Barnett-Pegg \cite{pb} type quantum phase operators for the matter-wave of a few bosons or fermions. Matter-wave phase operators were first introduced in 2013 \cite{biswa}. It has been shown \cite{biswa,king} that, for a low number of bosons or fermions, the unitary nature of the phase-difference operators is important. For a large number of photons or quanta, the non-unitary Carruthers-Nieto \cite{cn} phase-difference operators yield almost the same results as the Barnett-Pegg type unitary operators. Since, in the unitary regime, the phase operators are formulated by coupling the vacuum state with the highest number state in a finite-dimensional Fock space, the effects of the vacuum state become significant for low particle numbers. In the early 90's, Mandel's group \cite{mandel} experimentally determined the phase difference between two optical fields in both semi-classical and quantum cases. They made use of the sine and cosine phase-difference operators of Carruthers and Nieto \cite{cn} as well as the unitary operational phase-difference operators that they defined.
For material particles, the quantum phase operators associated with bosons and fermions have different characters. Unitary quantum phase operators for bosons are introduced in analogy with the quantum phase operator formalism for photons. It is difficult to define a quantum phase operator for fermions because more than one fermion cannot occupy a single quantum state simultaneously. A quantum state for fermions can be either filled (by one fermion) or empty (vacuum state). Therefore, the quantum phase-difference between two fermionic modes becomes well defined when the single-particle quantum states of the fermions are half-filled.
To clarify the canonically conjugate nature of the number- and phase-difference operators, one can introduce two commuting operators corresponding to the cosine and sine of the phase-difference. Both of them are canonically conjugate to the number-difference operator. These two phase operators plus the number-difference operator form a closed algebra \cite{biswa}.
In defining an appropriate quantum phase operator, a complication arises from the number operator of a harmonic oscillator, whose spectrum is bounded from below. Dirac \cite{dirac} first postulated the existence of a hermitian phase operator in his description of quantized electromagnetic fields. Susskind and Glogower \cite{sg} first showed that Dirac's phase operator was neither unitary nor hermitian. If one seeks to construct a unitary operator $U$ following Dirac's postulate, then $UU^\dagger = \hat I \neq U^\dagger U$; hence $U$ is not unitary. Thus Susskind and Glogower \cite{sg} concluded that no hermitian phase operator exists. Louisell \cite{Louisell} first introduced a periodic operator function corresponding to a phase variable conjugate to the angular momentum. Carruthers and Nieto \cite{cn} introduced phase-difference operators of a two-mode radiation field by using two non-unitary hermitian phase operators $C$ and $S$, which measure the cosine and sine of the phase of the fields. The two-mode phase-difference operators as defined by Carruthers and Nieto \cite{cn} are given by
\begin{eqnarray}
\hat C^{CN}_{12} = \hat C_1\hat C_2 + \hat S_1\hat S_2 \nonumber \\
\hat S^{CN}_{12} = \hat S_1\hat C_2 - \hat S_2\hat C_1
\end{eqnarray}
where
\begin{eqnarray}
\hat C_i=\frac{1}{2}[(\hat N_i+1)^{-\frac{1}{2}} \hat a_i+\hat a^\dagger_i(\hat N_i+1)^{-\frac{1}{2}}] \nonumber
\end{eqnarray}
\begin{eqnarray}
\hat S_i=\frac{1}{2i}[(\hat N_i+1)^{-\frac{1}{2}}\hat a_i-\hat a^\dagger_i(\hat N_i+1)^{-\frac{1}{2}}] \nonumber
\end{eqnarray}
are the phase operators corresponding to the cosine and sine, respectively, of the $i$-th mode, where $\hat a^\dagger_i$ ($\hat a_i$) denotes the creation (annihilation) operator for a boson
and $\hat{N}_i = \hat{a}_i^{\dagger} \hat{a}_i$. The explicit forms of the phase-difference operators can be written (with $i$=1 or 2) as
\begin{eqnarray}
\hat C^{CN}_{12} = \frac{1}{2}[(\hat N_1+1)^{-\frac{1}{2}}\hat a_1\hat a^\dagger_2(\hat N_2+1)^{-\frac{1}{2}}+ \hat a^\dagger_1(\hat N_1+1)^{-\frac{1}{2}}(\hat N_2+1)^{-\frac{1}{2}}\hat a_2]
\end{eqnarray}
\begin{eqnarray}
\hat S^{CN}_{12} = \frac{1}{2i}[(\hat N_1+1)^{-\frac{1}{2}}\hat a_1\hat a^\dagger_2(\hat N_2+1)^{-\frac{1}{2}}- \hat a^\dagger_1(\hat N_1+1)^{-\frac{1}{2}}(\hat N_2+1)^{-\frac{1}{2}}\hat a_2]
\end{eqnarray}
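These single-mode cosine and sine operators are hermitian but not unitary: one finds $\hat C_i^2+\hat S_i^2 = \hat I - \frac{1}{2}|0\rangle\langle 0|$ (up to the truncation edge in a finite Fock space). A quick numerical check in Python (our illustration) makes this explicit:
\begin{verbatim}
# Illustrative sketch: non-unitarity of the Carruthers-Nieto single-mode
# operators in a Fock space truncated at d levels.
import numpy as np

d = 8
a = np.diag(np.sqrt(np.arange(1, d)), 1)            # annihilation operator
Ninv = np.diag(1.0 / np.sqrt(np.arange(d) + 1.0))   # (N+1)^(-1/2)

C = (Ninv @ a + a.T @ Ninv) / 2
S = (Ninv @ a - a.T @ Ninv) / 2j

# C^2 + S^2 = I - |0><0|/2 (plus a truncation-edge defect): not the
# identity, so no unitary phase operator can be built from C and S alone.
print(np.real(np.diag(C @ C + S @ S)))
\end{verbatim}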
In interferometric experiments, only the phase difference between two fields matters, not the absolute phase of a field. According to the Barnett-Pegg formalism, the hermitian and unitary phase-difference operators corresponding to the cosine and sine of the phase have the following explicit form
\begin{eqnarray}
\hat C_{12} = \hat C^{CN}_{12} + \hat C^{(0)}_{12} \label{eq6} \\
\hat S_{12} = \hat S^{CN}_{12} + \hat S^{(0)}_{12}
\label{eq7}
\end{eqnarray}
where
\begin{eqnarray}
\hat C^{(0)}_{12} = \frac{1}{2}[|N,0\rangle \langle 0,N|+|0,N\rangle \langle N,0|] \nonumber \\
\hat S^{(0)}_{12} = \frac{1}{2i}[|N,0\rangle \langle0,N|-|0,N\rangle \langle N,0|] \nonumber
\end{eqnarray}
are the operators constructed by coupling the vacuum state of one mode with the highest Fock state of the other mode. $N = \langle \hat{N}_1 \rangle + \langle \hat{N}_2 \rangle$ is the total number of bosons, which is conserved. $ |N_1 , N-N_1\rangle$ represents a two-mode Fock state with $N_1$ and $(N-N_1)$ being the atom numbers in modes 1 and 2, respectively. The number difference, or population imbalance, between the two wells is $\hat W=\hat N_1-\hat N_2$. The commutation relations of the operators $\hat C_{12}$, $\hat S_{12}$ and $\hat W$ are as follows
\begin{equation}
[\hat {C_{12}}, \hat {W}]=2i(\hat S_{12}-(N+1)\hat S^{(0)}_{12})\nonumber
\end{equation}
\begin{equation}
[\hat {S_{12}}, \hat {W}]=-2i(\hat C_{12}-(N+1)\hat C^{(0)}_{12})\nonumber
\end{equation}
\begin{equation}
[\hat C_{12},\hat S_{12}]=0 \nonumber
\end{equation}
The first two of the above equations imply
\begin{eqnarray}
{\Delta C_{12}} \Delta W \ge \left | S_{12} - (N+1) S^{(0)}_{12} \right | \label{eq13} \\
{\Delta S_{12}} \Delta W \ge \left | C_{12} - (N+1) C^{(0)}_{12} \right |
\label{eq14}
\end{eqnarray}
Now, the standard quantum limit $\Delta_{\rm SQL}$ of the fluctuation in the number-difference or phase-difference quantities is given by \cite{king}
\begin{eqnarray}
\Delta_{{\rm SQL}} = \frac{1}{N}\sqrt{[S_{12} - (N+1) S^{(0)}_{12}]^2+[ C_{12} - (N+1) C^{(0)}_{12}]^2}
\end{eqnarray}
and the normalized squeezing parameters for both phase- and number-difference operators, respectively, by
\begin{equation}
\Sigma_p={\Delta E_{\phi}}^2- \Delta_{{\rm SQL}}
\end{equation}
and
\begin{equation}
\Sigma_w={\Delta W_n}^2- \Delta_{{\rm SQL}}
\end{equation}
where $\Delta E_{\phi} = \sqrt{({\Delta C_{12}})^2 + ({\Delta S_{12}})^2 } $ is the average phase fluctuation and $\hat W_n = \frac{\hat{W}}{N}$ is the normalized number-difference operator.
The system is squeezed in the number or phase variable when
$\Sigma_w$ or $\Sigma_p$, respectively, becomes negative.
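These operators are easy to construct and check numerically. In the $(N+1)$-dimensional two-mode sector spanned by $|N_1, N-N_1\rangle$, the Carruthers-Nieto terms reduce to a one-boson shift between the modes, and the Barnett-Pegg vacuum-coupling terms close this shift into a cyclic, unitary one. The following Python sketch (our illustration) verifies both the vanishing commutator $[\hat C_{12},\hat S_{12}]=0$ and the first commutation relation above:
\begin{verbatim}
# Illustrative sketch: unitary phase-difference operators for N bosons in
# two modes; basis |N1, N-N1>, labeled by k = N1 = 0, ..., N.
import numpy as np

N = 4
d = N + 1
E = np.roll(np.eye(d), -1, axis=0)   # cyclic shift k -> k-1 (unitary)

C12 = (E + E.conj().T) / 2           # cosine of the phase difference
S12 = (E - E.conj().T) / 2j          # sine of the phase difference

assert np.allclose(C12 @ S12, S12 @ C12)   # [C12, S12] = 0
assert np.allclose((C12 + 1j * S12) @ (C12 - 1j * S12), np.eye(d))

# Commutator with the number-difference operator W = N1 - N2:
W = np.diag(2 * np.arange(d) - N)
e0, eN = np.eye(d)[0], np.eye(d)[N]        # |0, N> and |N, 0>
S0 = (np.outer(eN, e0) - np.outer(e0, eN)) / 2j
assert np.allclose(C12 @ W - W @ C12, 2j * (S12 - (N + 1) * S0))
\end{verbatim}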
\section{The Model}
To build up the model, we consider a quasi-1D DW trap potential in which bosonic atoms are confined in the two sites of the DW. The DW has two quasi-degenerate energy eigenfunctions, and only the ground band is occupied by the bosons. The idea is to initialize a certain number of bosons in one of the two sites of the DW and let them evolve (tunnel) with time, so that the particle number in the other well ($N_2$), which was initially empty, oscillates with time. We take the quantum mechanical averages of $\hat N_{2}$ and $\hat S_{12}$ throughout the time up to which $N_1(t)=N_2(t)=N/2$.
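As a minimal numerical sketch of this protocol (our illustration), one can evolve the initial state $|N,0\rangle$ under the standard non-interacting two-mode tunneling Hamiltonian $\hat H = -J(\hat a_1^\dagger \hat a_2 + \hat a_2^\dagger \hat a_1)$ (an assumption consistent with the description above) and accumulate the averages of $\hat N_2$ and $\hat S_{12}$ up to the first time at which $\langle\hat N_2\rangle = N/2$, which occurs at $Jt=\pi/4$:
\begin{verbatim}
# Illustrative sketch (hbar = J = 1): N non-interacting bosons tunneling
# in a symmetric DW; basis |N1, N-N1>, labeled by k = N1 = 0, ..., N.
import numpy as np
from scipy.linalg import expm

N = 4
k = np.arange(N + 1)

# H = -J (a1^dag a2 + a2^dag a1): <k+1| a1^dag a2 |k> = sqrt((k+1)(N-k))
hop = np.sqrt((k[:-1] + 1) * (N - k[:-1]))
H = -(np.diag(hop, 1) + np.diag(hop, -1))

E = np.roll(np.eye(N + 1), -1, axis=0)   # cyclic shift (see Section 2)
S12 = (E - E.conj().T) / 2j
N2 = np.diag(N - k)                      # boson number in well 2

psi0 = np.zeros(N + 1); psi0[N] = 1.0    # all bosons initially in well 1
ts = np.linspace(0.0, np.pi / 4, 200)    # <N2> reaches N/2 at t = pi/4
psis = [expm(-1j * t * H) @ psi0 for t in ts]
avg_n2 = np.mean([np.vdot(p, N2 @ p).real for p in psis])
avg_s12 = np.mean([np.vdot(p, S12 @ p).real for p in psis])
\end{verbatim}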
\begin{figure}[h]
\centering
\hspace{-.1in}
\includegraphics[height=2in, width=2.5in]{DW1.eps}
\caption{\small Schematic of bosons in a quasi-1D DW trap, with $J$ the tunneling coefficient.}
\label{Figure:1}
\end{figure}
To detect the phase, we propose a scheme that uses the DW as the double slit of an interference experiment. By switching off the optical field, the bosons interfere as they fall under the influence of gravity. One can then detect the phase by absorption imaging of the interference pattern on a screen and analyzing the density profiles of the pattern.
\section{Results and discussions}
\begin{figure*}[h]
\centering
\begin{tabular}{@{}cccc@{}}
\hspace{-.1in}
\includegraphics[height=3in, width=2.5in, angle =270]{sine2.eps} &
\includegraphics[height=3in, width=2.5in, angle =270]{sine1.eps}\\
\end{tabular}
\caption{\small Calculated values of $\langle S_{12} \rangle$ as a function of the average number of bosons, for different total numbers of bosons.}
\label{Figure:2}
\end{figure*}
As the total number of bosons in our case is conserved, we calculate the quantum mechanical average of the sine phase-difference operator as a function of the number of bosons in the second well for different total numbers of particles. We consider a symmetric trap and non-interacting bosons. Although non-interacting bosons are an idealization, we assume the interaction to be very small, so that our case closely resembles this limit. To begin with, we initialize the system with all bosons in one well, and the number in the other well ($N_2$) then evolves with time. Throughout the evolution of $N_2$, up to half of the total population, we take the quantum mechanical average. We then plot $\langle \hat S_{12} \rangle$ against $\langle \hat N_2 \rangle$. Our results are similar to those obtained by Mandel's group. In their case, they changed the photon numbers in both modes, treating one mode classically and the other quantum mechanically. They also changed the ratio of the average photon numbers of the two modes in their experiment. In our case, by contrast, we only changed the total number of particles to mimic their experimental findings.
\section{Conclusions}
We have studied the sine of the quantum phase difference between the two sites of a DW for non-interacting bosons. The cosine operator can be studied in a similar way. We have shown that when the total number of bosons is increased, the results are in good agreement with Mandel's experimental results. It is worth investigating how the results are modified in the presence of interactions and a slight asymmetry of the trap. One can also calculate the fluctuations of the sine and cosine phase operators. Recently, phase fluctuations below the shot noise have been demonstrated experimentally with two-component BECs \cite{Burton}. The results we obtained suggest that when the particle number is small on either side of the well, unitary phase operators become important. This can be attributed to the effect of the vacuum term in the unitary phase operators. In the case of Josephson oscillations in BECs, the unitary quantum phase has not been studied so far. It may be possible to measure the quantum phase of such systems by using the homodyne detection method.
\section{Introduction}
Portrait images, showing mainly the face and upper body of people, are among the most common and important photographic depictions.
We look at them to emotionally connect with friends and family, we use them to best present ourselves in job applications and on social media, they remind us of memorable events with friends, and photographs of faces are omnipresent in advertising.
Nowadays, tools to computationally edit and post-process photographs are widely available and heavily used.
Professional and hobby photographers use them to bring out the best of portrait and social media photos, as well as of professional imagery used in advertising.
Photos are often post-processed with the purpose to change the mood and lighting, to create a specific artistic look and feel, or to correct image defects or composition errors that only become apparent after the photo has been taken.
Today, commercial software\footnote{For example: \url{www.adobe.com/Photoshop}} or recent research software \cite{Luan2017DeepPS,Gatys} offers a variety of ways to edit the color or tonal characteristics of photos.
Some tools even enable the change of visual style of photos to match certain color schemes \cite{Luan2017DeepPS,Shih14}, or to match a desired painterly and non-photo-realistic style \cite{Gatys,selim16}.
In many cases, however, edits to a portrait are needed that require more complex and high-level modifications, e.g., modifying head pose, smile, or scene illumination after capture.
Enabling such edits from a single photograph is an extremely challenging and underconstrained problem.
\new{This is because editing methods need to compute reliable estimates of 3D geometry of the person and lighting in the scene.}
Moreover, they need to photo-realistically synthesize modified images of the person and background in a perspectively correct parallax-respecting manner, while inpainting disoccluding regions.
For ease of use, editing methods should use semantically meaningful parameterizations, which for the rest of the paper means the following: Head pose, face expression and scene lighting should be expressed as clearly disentangled and intuitive variables akin to computer animation controls, such as coordinates and angles, blendshape weights, or environment map parameterizations.
Existing methods to edit human portrait imagery at best achieve parts of these goals.
Some model-based methods to realistically edit human expression \cite{thies2019,thies2016face} and head pose \cite{kim2018DeepVideo} fundamentally require video input and do not work on single images.
Other editing approaches are image-based and cannot be controlled by intuitive parametric controls~\cite{Geng2018WarpguidedGF,elor2017bringingPortraits,wang2018fewshotvid2vid,fewshot-neuraltalkingheads,Siarohin_2019_NeurIPS}, only enable editing of a single semantic parameter dimension, e.g., scene illumination \cite{Sun19,Meka:2019,Zhou_2019_ICCV}, or do not photo-realistically synthesize some important features such as hair~\cite{pagan}.
Recently, generative adversarial neural networks, such as StyleGAN~\cite{Karras2019cvpr}, were trained on community face image collections to learn a manifold of face images.
They can be sampled to generate impressive photo-realistic face portraits, even of people not existing in reality.
However, their learned parameterization entangles important face attributes
(most notably identity, head pose, facial expression, and illumination), which thus cannot be independently and meaningfully controlled in the output.
It therefore merely allows control on a coarse style-based level, e.g., to adapt or transfer face styles on certain frequency levels between images.
\new{To overcome this limitation, StyleRig~\cite{anoynomous} describes a neural network that maps the parameters of a 3D morphable face model (3DMM) \cite{Blanz1999} to a pretrained StyleGAN for face images.}
However, while their results show disentangled control of face images synthesized by a GAN, %
they do not allow for editing real portrait photos.
\new{On the other hand, some approaches have tried to embed real images in the StyleGAN latent space.
\citet{Abdal_2019_ICCV,abdal2019image2stylegan} demonstrate high-quality embedding results, which are used to perform edits such as style or expression transfer between two images, latent space interpolation for morphing, or image inpainting.
However, when these embeddings are used to edit the input images using StyleRig~\cite{anoynomous}, the visual quality is not preserved and the results often have artifacts.
High-quality parametric control of expression, pose or illumination on real images has not yet been shown to be feasible. }
\new{We therefore present the first method for embedding real portrait images in the StyleGAN latent space which allows for photo-realistic editing that combines all the following features:
It enables photo-real semantic editing of all these properties --- head pose, facial expression, and scene illumination, given only a single in-the-wild portrait photo as input, see Fig.~\ref{fig:teaser}.}
Edits are coherent in the entire scene and not limited to certain face areas.
Edits maintain perspectively correct parallax, photo-real occlusions and disocclusions, and illumination on the entire person, without warping artifacts in the unmodeled scene parts, such as hair.
The embedding is estimated based on a novel non-linear optimization problem formulation.
Semantic editing in parameter space is then achieved based on the pretrained neural network of~\citet{anoynomous}, which maps the control space of a 3D morphable face model to the latent space of StyleGAN.
These semantic edits are accessible through a simple user interface similar to established face animation control.
We make the following contributions:
\begin{itemize}
\item
\new{We propose a hierarchical optimization approach that embeds a portrait image in the latent space of StyleGAN while ensuring high-fidelity as well as editability.}
\item
\new{Moreover, in addition to editability of the head pose, facial expression and scene illumination, we introduce an energy that enforces preservation of the facial identity. }
\end{itemize}
\section{Related Work}
We define face editing as the process of changing the head pose, facial expression, or incident illumination in a portrait image or video.
Many recent editing techniques are learning-based.
We distinguish between person-specific techniques that require a large corpus of images (or a long video) of the person, few-shot techniques that only require a small number of images, and single-shot techniques that only require a single image as input.
Our Portrait Image Embedding (PIE) approach is part of the third category and enables intuitive editing of a portrait image by a set of semantic control parameters. In addition to these categories, we will also summarize existing works related to portrait relighting.
\subsection{Person-specific Video Editing Techniques}
There has been a lot of research on person-specific techniques~\cite{thies2016face,kim2018DeepVideo,Recycle-GAN,thies2019,Kim19NeuralDubbing,
Wiles18} that require a large training corpus of the target person as input.
These approaches can be classified into model-based~\cite{thies2016face,kim2018DeepVideo,thies2019,Kim19NeuralDubbing} and image-based~\cite{Recycle-GAN} techniques.
Model-based techniques employ a parametric face model to represent the head pose, facial expression, and incident scene illumination.
The semantic parameter space spanned by the model can be used to either perform intuitive edits or transfer parameters from a source to a target video.
On the other end of the spectrum are image-based techniques that can transfer parameters, but do not provide intuitive semantic control.
\paragraph{Model-based Video Editing Techniques}
Facial reenactment approaches \cite{thies2016face,thies2019} change the facial expressions in a target video to the expressions in a driving source video.
These approaches achieve impressive results, but require a video of the target person as input and do not enable editing of the head pose and incident illumination.
Kim \etal~\shortcite{kim2018DeepVideo} proposed the first full head reenactment approach that is able to edit the head pose as well as the facial expression.
A conditional deep generative model is leveraged as a neural rendering engine.
While these approaches \cite{thies2016face,kim2018DeepVideo,thies2019} produce exciting results, they do not preserve the speaking style of the target.
In Kim~\etal~\shortcite{Kim19NeuralDubbing}, an approach is proposed for editing the expressions of a target subject while maintaining his/her speaking style.
This is made possible by a novel style translation network that learns a cycle-consistent mapping in blendshape space.
In contrast to our approach, all these techniques require a long video of the target as input and cannot edit a single image of an arbitrary person.
\paragraph{Image-based Video Editing Techniques}
Image-based techniques enable to control a target face through a driving video.
The approach of Bansal~\etal~\shortcite{Recycle-GAN} allows them to modify the target video while maintaining the speaking style.
A novel recycle loss is defined in the spatio-temporal video domain.
This approach obtains high-quality results for expressions and pose transfer.
In contrast to our approach, image-based approaches do not provide intuitive control via a set of semantic control parameters and have to be trained in a person-specific manner.
Thus, they cannot be employed to edit a single given image.
\subsection{Few-shot Editing Techniques}
Few-shot editing techniques \cite{wang2018fewshotvid2vid,fewshot-neuraltalkingheads,Wiles18} require only a small set of images of the target person as input.
Given multiple frames showing a target person, X2Face~\cite{Wiles18} drives a frontalized face embedding by a regressed warp field that is estimated by an encoder-decoder network.
The approach can also drive faces based on audio.
Wang~\etal~\shortcite{wang2018fewshotvid2vid} presented a few-shot video editing approach and showed its application to driving a target face via a source video.
A novel network weight generation module is proposed that is based on an attention mechanism.
To animate faces, the network is trained to transfer image sketches to photo-realistic face images.
The network is trained on a large multi-identity training corpus and can be applied to new unseen still images.
Zakharov~\etal~\shortcite{fewshot-neuraltalkingheads} presented a few-shot technique for animating faces.
Their solution has three components:
1) a generator network that translates landmark positions to photo-realistic images,
2) an embedding network that learns an intermediate representation for conditioning the generator, and
3) a discriminator.
The network is trained on a large corpus of face images across multiple identities and generalizes to new identities at test time.
Impressive results are shown in animating images, including legacy photos and even paintings.
The learned models of few-shot techniques \cite{fewshot-neuraltalkingheads,wang2018fewshotvid2vid,Wiles18} can be improved by fine-tuning on a few example images of the target person, e.g., images taken at different view-points or at different time instances.
The learned models can also be applied directly to new still images without fine-tuning.
\subsection{Single-shot Editing Techniques}
Several works~\cite{elor2017bringingPortraits,Geng2018WarpguidedGF,pagan} exist for controlling the expression and head pose given a single image as input.
Nagano~\etal~\shortcite{pagan} presented \textit{paGAN}, an approach for creating personalized avatars from just a single image of a person.
However, the work does not synthesize photo-realistic hair.
The approach of Averbuch-Elor~\etal~\shortcite{elor2017bringingPortraits} brings portrait images to life by animating their expression and pose.
The target image is animated through a 2D warp that is computed from the movement in the source video.
The mouth interior is copied from the source and blended into the warped target image.
The approach of Geng~\etal~\shortcite{Geng2018WarpguidedGF}
employs deep generative models to synthesize more realistic facial detail and a higher quality mouth interior.
First, a dense spatial motion field is used to warp the target image.
Afterwards, the first network corrects the warped target image and synthesizes important skin detail.
Finally, the second network synthesizes the mouth interior, including realistic teeth.
Siarohin~\etal~\shortcite{Siarohin_2019_NeurIPS} proposed a method for animating a single image based on a driving sequence. By detecting keypoints in both the target image and the driving frames, the method uses a neural network to compute a dense warping field, specifying how to translate the driving frames into the target image. Based on this information a second network produces high-quality output frames. Since keypoint extraction is also learned during training, the method is applicable for any category of input, and in particular works for face and full body images.
While existing single-shot editing techniques can only be controlled via a driving video, our approach enables intuitive editing of the head pose, facial expression and incident illumination in a portrait image
through intuitive parametric control, as well as through a driving video.
\subsection{Portrait Relighting}
Relighting approaches modify the incident illumination on the face~\cite{Peers07,Shu17b,Zhou_2019_ICCV,Sun19,Meka:2019}.
Earlier works~\cite{Peers07,Shu17b} require an exemplar portrait image that has been taken under the target illumination conditions.
More recent techniques use deep generative models~\cite{Zhou_2019_ICCV,Sun19,Meka:2019} and can relight images based on an environment map.
Zhou~\etal~\shortcite{Zhou_2019_ICCV} train a relighting technique based on a large corpus of synthetic images.
Relighting is performed in the luminance channel, which simplifies the learning task.
Sun~\etal~\shortcite{Sun19} use light stage data to train their relighting approach.
At test time, the network produces high quality relighting results, even for in-the-wild images.
While training with light stage data leads to high-quality results, the scarcity of such data and the careful recording protocol required can limit its adoption.
Meka~\etal~\shortcite{Meka:2019} showed that the 4D reflectance field can be estimated from two color gradient images captured in a light stage.
This provides more movement flexibility for the subject during recording, and hence takes an important step towards capturing relightable video.
\new{ \subsection{Image Editing using StyleGAN}
Several recent methods have been proposed to edit StyleGAN generated images.}
\new{Most approaches linearly change the StyleGAN latent codes for editing~\cite{anoynomous,shen2020interpreting,hrknen2020ganspace}.
Non-linear editing has been shown in \citet{abdal2020styleflow}.
Image2StyleGAN~\cite{Abdal_2019_ICCV,abdal2019image2stylegan} is a popular approach for embedding real images into the StyleGAN latent space with very high fidelity.
InterFaceGAN~\cite{shen2020interpreting} and StyleFlow~\cite{abdal2020styleflow} demonstrate editing of real images using these embeddings.
Very recently, \citet{zhu2020indomain} introduce a domain-guided embedding method which allows for higher-quality editing, compared to Image2StyleGAN.
However, they do not demonstrate results at the highest resolution for StyleGAN.
In this paper, we design an embedding method which allows for high-quality portrait editing using StyleRig~\cite{anoynomous}.}
\section{Semantic Editing of Real Facial Images}
We present an approach that allows for semantic editing of real facial images.
The key of our approach is to embed a given facial image in the StyleGAN latent space \cite{Karras2019cvpr}, where we pay particular attention to finding a latent encoding that is \emph{suitable for editing the image}.
\new{This is crucial, since the parameter space of the StyleGAN architecture is generally under-constrained. For example, it has been shown that a StyleGAN trained on human faces is able to synthesize images showing completely different content with high fidelity, such as images of cat faces~\cite{Abdal_2019_ICCV}.}%
\new{Our goal is to compute embeddings that can be edited via their 3DMM parameters using StyleRig. }
\begin{figure*}
\includegraphics[width=\textwidth]{figures/pipeline-new.png}
\caption{
Given a portrait input image, we optimize for a StyleGAN embedding that allows us to faithfully reproduce the image (synthesis and facial recognition terms), to edit it based on semantic parameters such as head pose, expression, and scene illumination (edit and invariance terms), and to preserve the facial identity during editing (facial recognition term).
A novel hierarchical non-linear optimization strategy is used to compute the result.
StyleGAN generated images (image with edit parameters) are used to extract the edit parameters during optimization.
At ``test time'',~i.e. for performing portrait image editing, the image with edit parameters is not needed.
Note that the identity term is not visualized here.
Images from~\citet{Shih14}.
}
\label{fig:pipeline}
\end{figure*}
\paragraph{Problem Statement}
We will refer to the image that we want to make editable as $\mathbf{I}$ (without any subscripts or arguments), which we assume to be a given input.
Moreover, we will refer to the StyleGAN code that will make image $\mathbf{I}$ editable as $\mathbf{w}$, which is the desired output of our approach. As such, we will introduce an energy function $E(\mathbf{w})$, which is minimized by solving a numerical optimization problem. This energy function accounts for the high fidelity of the synthesized image based on $\mathbf{w}$ (explained in Sec.~\ref{sec:hfimsyn}),
for editing-suitability (described in Sec.~\ref{sec:faceediting}), as well as for consistent face identity before and after the edit (Sec.~\ref{sec:recog}).
We emphasize that our approach is based on non-linear optimization techniques, and does not perform any learning of network weights, which in turn means that we do not require any ground truth data of edited facial images.
In order to formulate the energy function we will make use of several existing neural networks, where all of them are pretrained and remain fixed throughout the optimization.
We will now introduce some technical notations, which will allow us to have an additional layer of abstraction and thereby facilitate a more comprehensive description of the main concepts.
\paragraph{Notation}
Throughout this paper we will use $\mathbf{w}$ exclusively to refer to the (unknown) StyleGAN embedding that we want to find, and we will use $\mathbf{v}$ (potentially with subscripts) to refer to general StyleGAN embeddings. We note that the StyleGAN embeddings $\mathbf{w}$ and $\mathbf{v}$ can have two different forms, where each form has a different dimension, which we will describe in detail in Sec.~\ref{sec:opt}.
StyleGAN can be understood as a function $\operatorname{stylegan}(\cdot)$ that maps a given latent code to a portrait image. To simplify notation, we will use the function notation $\mathbf{I}(\mathbf{v}) := \operatorname{stylegan}(\mathbf{v})$ in order to emphasize that we use the StyleGAN embedding $\mathbf{v}$ to generate the image $\mathbf{I}(\mathbf{v})$. Analogously, we overload $\mathbf{I}(\cdot)$, so that it can also take a 3DMM parameter $\theta$ as input. As such, $\mathbf{I}(\theta)$ refers to an image rendered using the face model that is parameterized by $\theta$ (Sec.~\ref{sec:fmodel}), where differentiable rendering is employed~\cite{tewari17MoFA}. Note that this rendered image is only defined on foreground face pixels as opposed to StyleGAN images.
We use the variable $\tau \in \{\phi, \beta, \gamma \}$ to indicate the user-defined facial semantic variable that is to be edited, which in our case can be the head pose $\phi$, the facial expression $\beta$, or the illumination $\gamma$.
Similarly, we use the complement notation~$\overline{\tau} \subset \{\phi, \rho, \alpha, \delta, \beta, \gamma\}$ to indicate all other variables, i.e., the ones that shall not be modified. With that, we use the notation $\theta^{\tau}$ (or $\theta^{\overline{\tau}}$) to refer to the extraction of the $\tau$-component (or $\overline{\tau}$-components) of $\theta$.
Since facial editing is implemented by modifying the $\tau$-component of the 3DMM parameter $\theta$, we write
$\theta' = [\theta_1^{\overline{\tau}}, \theta_2^{{\tau}}]$ to indicate that the respective $\tau$-component of $\theta_1$ is replaced by the corresponding component in $\theta_2$. For example, for $\tau = \beta$,
\begin{align}
\theta_1 &= (\phi_1,\rho_1,\alpha_1,\delta_1,\beta_1,\gamma_1)\,,\quad\text{and}\\
\theta_2 &= (\phi_2,\rho_2,\alpha_2,\delta_2,\beta_2,\gamma_2)\,,\quad\text{we have}\\
[\theta_1^{\overline{\tau}}, \theta_2^\tau] &= (\phi_1,\rho_1,\alpha_1,\delta_1,\beta_2,\gamma_1) \,.
\end{align}
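To make this notation concrete, the following minimal Python sketch illustrates how $[\theta_1^{\overline{\tau}}, \theta_2^\tau]$ can be realized when $\theta$ is stored as a flat vector; the slice layout of the $257$ parameters follows the model description in Sec.~\ref{sec:fmodel}, but the exact ordering is our assumption and not part of the actual implementation.
\begin{verbatim}
import numpy as np

# Assumed layout of the flat 257-dim parameter vector
# theta = (phi, rho, alpha, delta, beta, gamma).
SLICES = {
    "phi":   slice(0, 3),      # head rotation (Euler angles)
    "rho":   slice(3, 6),      # head translation
    "alpha": slice(6, 86),     # identity geometry (80)
    "delta": slice(86, 166),   # skin reflectance (80)
    "beta":  slice(166, 230),  # expression (64)
    "gamma": slice(230, 257),  # illumination, SH coefficients (27)
}

def combine(theta1, theta2, tau):
    """[theta1^{tau-bar}, theta2^{tau}]: copy theta1 and overwrite
    its tau-component with the corresponding component of theta2."""
    theta = theta1.copy()
    theta[SLICES[tau]] = theta2[SLICES[tau]]
    return theta

# Example: transfer only the expression (tau = beta).
theta1, theta2 = np.random.randn(257), np.random.randn(257)
edited = combine(theta1, theta2, "beta")
\end{verbatim}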
Moreover, we use the notation $\theta(\mathbf{v})$ to extract the 3DMM parameters from the StyleGAN embedding $\mathbf{v}$. In order to compute this, the embedding $\mathbf{v}$ is first used to synthesize the image $\mathbf{I}(\mathbf{v})$ (using StyleGAN),
followed by performing a 3D reconstruction based on the pretrained \emph{Model-based Face Autoencoder} (MoFA) network~\cite{tewari17MoFA}. Hence, for $\operatorname{MoFA}(\cdot)$ being the function that performs 3D reconstruction for a given image by estimating the 3DMM parameters, we define
\begin{align}
\theta(\mathbf{v}) = \operatorname{MoFA}(\mathbf{I}(\mathbf{v}))\,.
\end{align}
For any image $\mathbf{I'}$, we use the short-hand notation $\theta(\mathbf{I'}) = \operatorname{MoFA}(\mathbf{I'})$.
Similarly to the above, we will use %
$\theta^{\tau}(\mathbf{v})$ and $\theta^{\tau}(\mathbf{I}')$ to extract only the $\tau$-component from the 3DMM parameters. %
Whenever arguments of $\theta(\cdot)$ or $\mathbf{I}(\cdot)$ are fixed, i.e., the arguments are not a variable, we use the short-hand notations $\theta_{\mathbf{v}} = \theta(\mathbf{v})$, $\theta_{\mathbf{I}'} = \theta(\mathbf{I}')$, or $\mathbf{I}_{\mathbf{v}} = \mathbf{I}(\mathbf{v})$.
We summarize the most important parts of our notation in Table~\ref{table:notation}.
\begin{table}[]
\caption{Summary of notation.}
\begin{tabular}{@{}ll@{}}
\toprule
\textbf{Symbol} & \textbf{Meaning} \\ \midrule
$\mathbf{w}$ & StyleGAN embedding that we want to find \\
$\mathbf{v}$ & other StyleGAN embedding(s) \\
$\theta$ & 3DMM parameter \\
$\tau$ & component that is to be edited ($\tau \in \{\phi,\beta,\gamma\}$) \\
$\mathbf{I}$ & input image that we want to edit \\
$\mathbf{I}(\mathbf{v})$ &StyleGAN-synthesized image \\
$\mathbf{I}(\theta)$ & image of 3DMM rendering \\
$\theta^{\tau} $ & extraction of $\tau$-component of $\theta$ \\
$[\theta_1^{\overline{\tau}}, \theta_2^\tau]$ & combine $\overline{\tau}$-components in $\theta_1$ with $\tau$-component in $\theta_2$ \\
$\theta(\mathbf{v}), \theta_{\mathbf{v}}$ & 3D reconstruction of 3DMM parameters from $\mathbf{I}(\mathbf{v})$ \\
$\theta(\mathbf{I'}), \theta_{\mathbf{I'}}$ & 3D reconstruction of 3DMM parameters from $\mathbf{I}'$ \\
\bottomrule
\end{tabular}
\label{table:notation}
\end{table}
\paragraph{Objective function}
We solve for $\mathbf{w}$ by minimizing the energy function
\begin{align}
\label{eq:totalenergy}
E(\mathbf{w}) = E_{\text{syn}}(\mathbf{w}) + E_{\text{id}}(\mathbf{w}) + E_{\text{edit}}(\mathbf{w}) +
E_{\text{inv}}(\mathbf{w}) + E_{\text{recog}}(\mathbf{w}) \,.
\end{align}
$E_{\text{syn}}$ is a synthesis term enforcing the StyleGAN-synthesized image $\mathbf{I}(\mathbf{w})$ to be close to $\mathbf{I}$ (Sec.~\ref{sec:hfimsyn}). $E_{\text{id}}$, $E_{\text{edit}}$, $E_{\text{inv}}$ are face modification terms (Sec.~\ref{sec:faceediting}) enforcing edits to take place on the modified facial semantics while at the same time ensuring unmodified facial semantics to remain un-edited. $E_{\text{recog}}(\mathbf{w})$ is a face recognition term that will be introduced in Sec.~\ref{sec:recog}. A conceptual illustration of the energy function and the overall pipeline is shown in Fig.~\ref{fig:pipeline}. Next, we will discuss each term in more detail.
\subsection{High-Fidelity Image Synthesis}
\label{sec:hfimsyn}
Similarly to Image2StyleGAN~\cite{Abdal_2019_ICCV}, we use the following energy term that accounts for the StyleGAN-synthesized image $\mathbf{I}(\mathbf{w})$ being close to $\mathbf{I}$:
\begin{align}
E_{\text{syn}}(\mathbf{w}) & = \lambda_{\ell_2} \|{\mathbf{I}} - \mathbf{I}(\mathbf{w}) \|^2_2 + \lambda_{\text{p}} \| \mathbf{\Phi}({\mathbf{I}})-\mathbf{\Phi}(\mathbf{I}(\mathbf{w})) \|^2_2 \,.
\label{eq:energy_synth}
\end{align}
The first term in the energy $E_{\text{syn}}$ penalizes the discrepancy between $\mathbf{I}$ and the synthesized image in terms of the (squared) $\ell_2$-norm, whereas the second term penalizes discrepancies based on the \emph{perceptual loss}~\cite{Johnson2016Perceptual}.
The perceptual loss is estimated \new{on images downsampled by a factor of 4}, based on $\ell_2$-losses over VGG-16 layers \texttt{conv1\_1}, \texttt{conv1\_2}, \texttt{conv3\_2} and \texttt{conv4\_2} \cite{Simonyan15}. The notation $\mathbf{\Phi}(\cdot)$ refers to the function that downsamples a given input image and extracts features. The scalars $\lambda_{\ell_2}$ and $\lambda_{\text{p}}$ are the relative weights of both terms.
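For illustration, a minimal PyTorch-style sketch of this synthesis term is given below; \texttt{vgg\_features}, standing in for $\mathbf{\Phi}(\cdot)$, is an assumed helper that returns the listed VGG-16 feature maps.
\begin{verbatim}
import torch
import torch.nn.functional as F

def synthesis_energy(I, I_w, vgg_features, lam_l2=1e-6, lam_p=1e-6):
    """E_syn: squared l2 image term plus perceptual term.
    I, I_w: images of shape (1, 3, H, W); vgg_features is an
    assumed helper returning a list of VGG-16 feature maps."""
    e_l2 = ((I - I_w) ** 2).sum()
    # Perceptual loss on images downsampled by a factor of 4.
    down = lambda x: F.interpolate(x, scale_factor=0.25,
                                   mode="bilinear", align_corners=False)
    feats_I, feats_Iw = vgg_features(down(I)), vgg_features(down(I_w))
    e_p = sum(((f1 - f2) ** 2).sum()
              for f1, f2 in zip(feats_I, feats_Iw))
    return lam_l2 * e_l2 + lam_p * e_p
\end{verbatim}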
In principle, we could minimize the energy $E_{\text{syn}}$ in~\eqref{eq:energy_synth} in order to obtain the StyleGAN code $\mathbf{w}$, as done in~\citet{Abdal_2019_ICCV}, and perform editing operations on $\mathbf{w}$. A so-obtained code vector $\mathbf{w}$ allows the use of StyleGAN to obtain a highly accurate synthetic version of the input face, which is even capable of reconstructing backgrounds with high accuracy. However, such a $\mathbf{w}$ is sub-optimal for performing \emph{semantic face editing}, as we later demonstrate in Fig.~\ref{fig:ablative-loss}.
\subsection{Face Image Editing} \label{sec:faceediting}
\new{We augment the synthesis term with an editing energy that is based on the StyleRig framework~\cite{anoynomous}, which allows us to obtain more accurate semantic editing while preserving the non-edited attributes.}
Here, the StyleGAN embedding $\mathbf{w}$ that is to be determined should have the following three properties in order to be suitable for semantic editing:
\paragraph{Identity Property}
The identity property is phrased in terms of the $\ell_2$-norm of the difference of StyleGAN embeddings and is given by
\begin{align}
E_{\text{id}}(\mathbf{w}) & = \lambda_{\text{id}}\| \mathbf{w} - \operatorname{rignet}(\mathbf{w}, \theta^\tau(\mathbf{w})) \|_2^2 \,.
\end{align}
As such, whenever the RigNet is used to modify $\mathbf{w}$ with $\theta^\tau(\mathbf{w})$, i.e., a component of the 3DMM parameter extracted from $\mathbf{w}$, the embedding $\mathbf{w}$ should not be modified.
\paragraph{Edit Property}
Defining a suitable metric directly on 3DMM parameter vectors is difficult, since their components may be of significantly different scale and the relative relevance of the individual components is not easily determined. To get around this obstacle, we phrase the edit property in image space, \new{as in StyleRig~\cite{anoynomous}}.
As such, a facial edit is implicitly specified in image space via the StyleGAN embedding $\mathbf{v}$,
where
the $\tau$-component of the respective 3DMM parameters of $\mathbf{v}$, i.e. $\theta^{\tau}_{\mathbf{v}}$, specifies the edit operation.
The image-space version of the edit property reads
\begin{align}
\forall ~\mathbf{v}:\quad
\mathbf{I}_{\mathbf{v}} = \mathbf{I}( [\theta_{\mathbf{v}}^{\overline{\tau}},
\theta^{\tau}(\operatorname{rignet}(\mathbf{w}, \theta_{\mathbf{v}}^\tau)) ]) \,.
\end{align}
Note that exact equality cannot hold in practice, since the two images are from different domains (a real image and a mesh rendering).
We are therefore interested in minimizing the difference between these terms.
This equation is best fulfilled whenever the $\tau$-component of the edited 3DMM parameters $\theta^\tau(\operatorname{rignet}(\mathbf{w}, \theta_{\mathbf{v}}^\tau))$ is equal to $\theta_{\mathbf{v}}^\tau$, i.e. the edit has been successfully applied.
Since we cannot computationally evaluate all choices of $\mathbf{v}$, we sample StyleGAN embeddings $\mathbf{v}$ as done in~\citet{anoynomous}, and then use the expected value as the loss.
For integrating this property into our optimization framework we use a combination of a photometric term and a landmark term, which is defined as
\begin{align}
\ell(\mathbf{I}', \theta) = \lambda_{\text{ph}} \| \mathbf{I}' - \mathbf{I}(\theta)\|_{\text{\smiley{}}}^2 + \lambda_{\text{lm}} \| \mathcal{L}_{\mathbf{I}'} - \mathcal{L}(\theta)\|_F^2 \,.
\end{align}
The norm $\| \cdot \|_{\smiley{}}$ computes the $\ell_2$-norm of all \emph{foreground} pixels (the facial part of the image), whereas $\|\cdot\|_F$ is the Frobenius norm. By $\mathcal{L}_{\mathbf{I}'} \in \mathbb{R}^{66 \times 2}$ we denote the matrix of 2D facial landmarks extracted from the image $\mathbf{I}$ (based on~\citet{Saragih2011}), and $\mathcal{L}(\theta) \in \mathbb{R}^{66 \times 2}$ refers to the corresponding landmarks of the 3DMM after they have been projected onto the image plane.
With that, the edit property energy reads
\begin{align}
E_{\text{edit}}(\mathbf{w}) & =
\lambda_{\text{e}} \, \mathbb{E}_{\mathbf{v}}[\ell( \mathbf{I}_{\mathbf{v}},
[\theta_{\mathbf{v}}^{\overline{\tau}},
\theta^{\tau}(\operatorname{rignet}(\mathbf{w}, \theta_{\mathbf{v}}^\tau))])] \,.
\label{eq:edit}
\end{align}
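For illustration, a minimal sketch of the loss $\ell(\cdot)$ is given below; \texttt{render} (the differentiable 3DMM rendering $\mathbf{I}(\theta)$), \texttt{landmarks\_2d} (the projected landmarks $\mathcal{L}(\theta)$) and \texttt{lm\_of\_image} (the 2D landmark detector) are assumed helpers. The edit energy then amounts to averaging this loss over a batch of sampled embeddings $\mathbf{v}$.
\begin{verbatim}
import torch

def ell(I_prime, theta, mask, render, landmarks_2d, lm_of_image,
        lam_ph=0.001, lam_lm=0.2):
    """Photometric + landmark loss l(I', theta). `render` is the
    differentiable 3DMM rendering I(theta), `landmarks_2d(theta)`
    returns the projected 66x2 landmarks, and `lm_of_image` is the
    2D landmark detector; `mask` selects foreground face pixels."""
    I_theta = render(theta)
    e_ph = (((I_prime - I_theta) * mask) ** 2).sum()  # smiley-norm
    e_lm = ((lm_of_image(I_prime) - landmarks_2d(theta)) ** 2).sum()
    return lam_ph * e_ph + lam_lm * e_lm
\end{verbatim}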
\paragraph{Invariance Property}
Similarly to the edit property, we phrase the invariance property in image space as%
\begin{align}
\forall ~\mathbf{v}:\quad
\mathbf{I} = \mathbf{I}(
[\theta^{\overline{\tau}}(\operatorname{rignet}(\mathbf{w}, \theta^\tau_{\mathbf{v}})) , \theta^{\tau}_{\mathbf{I}}]) \,.
\end{align}
While the edit property imposes that the $\tau$-component of the edited 3DMM parameter $\theta^\tau(\operatorname{rignet}(\mathbf{w}, \theta^\tau_{\mathbf{v}}))$ is modified as desired, the invariance property takes care of all $\overline{\tau}$. It is fulfilled whenever it holds that $\theta^{\overline{\tau}}(\operatorname{rignet}(\mathbf{w}, \theta^\tau_{\mathbf{v}})) = \theta^{\overline{\tau}}_{\mathbf{I}}$, i.e. the components $\overline{\tau}$ that are not to be edited are maintained from the input image $\mathbf{I}$.
Analogously to the edit property, we base the respective energy on the combination of a photometric term and a landmark term as implemented by $\ell(\cdot)$, so that we obtain
\begin{align}
E_{\text{inv}}(\mathbf{w}) & = \lambda_{\text{inv}} \, \mathbb{E}_{\mathbf{v}}[\ell(\mathbf{I}, [\theta^{\overline{\tau}}(\operatorname{rignet}(\mathbf{w}, \theta^\tau_{\mathbf{v}})), \theta^{\tau}_{\mathbf{I}}])] \,.
\end{align}
\subsection{Face Recognition Consistency}\label{sec:recog}
\new{In addition to the synthesis and editing terms, we incorporate two face recognition consistency terms to preserve the facial integrity while editing.}
On the one hand, it is desirable that the synthesized image $\mathbf{I}(\mathbf{w})$ is recognized to depict the same person as shown in the given input image $\mathbf{I}$.
On the other hand, the edited image, $\operatorname{stylegan}(\operatorname{rignet}(\mathbf{w}, \theta^{\tau}_{\mathbf{v}}))$
should also depict the same person as shown in the input $\mathbf{I}$.
In order to do so, we use VGG-Face~\cite{Parkhi15} to extract \emph{face recognition features},
where we use the notation $\mathbf{\Psi}(\cdot)$ to refer to the function that extracts such features from a given input image. We define the recognition loss
\begin{align}
\ell_{\text{recog}}(\mathbf{I}', \mathbf{v}) = \|\mathbf{\Psi}(\mathbf{I}') - \mathbf{\Psi}(\mathbf{I}(\mathbf{v})) \|_F^2 \,,
\end{align}
which is then used to phrase the recognition energy term as
\begin{align}
\label{eq:recog}
E_{\text{recog}}(\mathbf{w}) = \lambda_{\text{r}_{\mathbf{w}}} \,\ell_{\text{recog}}(\mathbf{I}, \mathbf{w})
+ \lambda_{\text{r}_{\hat{\mathbf{w}}}} \,\mathbb{E}_{\mathbf{v}}[\ell_{\text{recog}}(\mathbf{I}, \operatorname{rignet}(\mathbf{w}, \theta^{\tau}_{\mathbf{v}}))] \,.
\end{align}
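The following sketch shows how these two terms could be assembled, with a Monte-Carlo average over sampled embeddings replacing the expectation; the callables \texttt{stylegan}, \texttt{rignet}, \texttt{psi} and \texttt{theta\_tau\_of} are assumed wrappers around the pretrained networks.
\begin{verbatim}
import torch

def recog_loss(I_prime, v, stylegan, psi):
    """l_recog: squared distance between VGG-Face recognition
    features of I' and of the image synthesized from v."""
    return ((psi(I_prime) - psi(stylegan(v))) ** 2).sum()

def recognition_energy(I, w, v_samples, rignet, theta_tau_of,
                       stylegan, psi, lam_w=0.1, lam_w_hat=0.1):
    """E_recog: consistency for the synthesized image and, in
    expectation over sampled v, for the edited image."""
    e_syn = recog_loss(I, w, stylegan, psi)
    e_edit = torch.stack([
        recog_loss(I, rignet(w, theta_tau_of(v)), stylegan, psi)
        for v in v_samples]).mean()
    return lam_w * e_syn + lam_w_hat * e_edit
\end{verbatim}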
\begin{figure*}
\includegraphics[width=\textwidth]{figures/pose-main-rev.jpg}
\vspace{-0.6cm}
\caption{
\new{Pose Editing.
Our approach can handle a large variety of head pose modifications including out-of-plane rotations in a realistic manner.
Image2StyleGAN~\cite{Abdal_2019_ICCV} embeddings often lead to artifacts when edited using StyleRig. Images from~\citet{shen2016deep}.} %
}
\label{fig:pos-main}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{figures/light-main-rev.jpg}
\vspace{-0.6cm}
\caption{
\new{Illumination Editing.
Our approach can realistically relight portrait images.
Each edited image corresponds to changing a different Spherical Harmonics coefficient, while all other coefficients are kept fixed.
The environment maps are visualized in the inset.
Image2StyleGAN~\cite{Abdal_2019_ICCV} embeddings often lead to artifacts when edited using StyleRig. Images from~\citet{shen2016deep}.}
}
\label{fig:light-main-ne}
\end{figure*}
\begin{figure}
\includegraphics[width=0.48\textwidth]{figures/exp-main-rev.jpg}
\vspace{-0.6cm}
\caption{
\new{Expression Editing.
Our approach can also be used to edit the facial expressions in a portrait image in a realistic manner.
We obtain more plausible results, compared to Image2StyleGAN~\cite{Abdal_2019_ICCV} embeddings. Images from~\citet{shen2016deep} and~\citet{Shih14}.}
}
\label{fig:exp-main}
\end{figure}
\subsection{Optimization}
\label{sec:opt}
Our energy function $E(\cdot)$ in \eqref{eq:totalenergy} depends on a range of highly non-linear functions, such as $\operatorname{stylegan}(\cdot)$, $\operatorname{MoFA}(\cdot)$, $\Phi(\cdot)$ and $\Psi(\cdot)$, which are implemented in terms of (pretrained) neural networks.
We implement our energy minimization within TensorFlow~\cite{tensorflow2015-whitepaper} using ADADELTA optimization~\cite{zeiler2012adadelta}. In each iteration we stochastically sample a different $\mathbf{v}$.
The optimization uses a hierarchical approach that we describe next.
\paragraph{Hierarchical Optimization}
StyleGAN is based on a hierarchy of latent spaces, where a stage-one embedding $Z$ with $|Z|=512$ is randomly sampled first. This is then fed into
a mapping network that produces $W$ as output, where $|W|=512$.
Subsequently, $W$ is extended to $W^+$, where $|W^+| =18 \times 512$, and used as input to $18$ network layers.
It has been shown that $W^+$ is the most expressive space for fitting to real images~\cite{Abdal_2019_ICCV}.
However, we found that a direct optimization over this space leads to lower-quality editing results with severe artifacts. This is because the optimized variable can be far from the prior distribution of StyleGAN. To address this, we first optimize for the embedding in the $W$-space, meaning that in the first stage of our optimization the variable $\mathbf{w}$ is understood as an embedding in the $W$-space.
We optimize in $W$-space for $2000$ iterations.
We then transfer the result to $W^+$-space, initialize the variable $\mathbf{w}$ respectively, and
continue optimizing in the $W^+$-space for another $1000$ iterations.
Optimizing in this hierarchical way allows us to represent the coarse version of the image in the $W$-space, which is less expressive and thereby closer to the prior distribution.
Finetuning on the $W^+$ space then allows us to fit the fine-scale details, while preserving editing quality.
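A simplified sketch of this two-stage scheme is shown below; the step sizes follow our implementation details, while the exponential step-size decay and the per-iteration sampling of $\mathbf{v}$ inside \texttt{energy} are omitted for brevity.
\begin{verbatim}
import torch

def hierarchical_embed(energy, num_layers=18,
                       steps_w=2000, steps_wplus=1000):
    """Two-stage optimization sketch: first in W (512-dim), then
    the result is tiled to W+ (18 x 512) and refined. `energy`
    evaluates E(w) for a W+ code; details are simplified."""
    w = torch.randn(512, requires_grad=True)      # stage 1: W space
    opt = torch.optim.Adadelta([w], lr=50.0)
    for _ in range(steps_w):
        opt.zero_grad()
        loss = energy(w.expand(num_layers, 512))  # broadcast W to W+
        loss.backward()
        opt.step()

    # Stage 2: transfer to W+ and refine each layer independently.
    w_plus = w.detach().repeat(num_layers, 1).requires_grad_(True)
    opt = torch.optim.Adadelta([w_plus], lr=10.0)
    for _ in range(steps_wplus):
        opt.zero_grad()
        loss = energy(w_plus)
        loss.backward()
        opt.step()
    return w_plus
\end{verbatim}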
\section{Rigging StyleGAN-generated images}
StyleGAN~\cite{Karras2019cvpr} can synthesize human faces at an unprecedented level of photorealism. However, its edits are defined in terms of three main facial levels (coarse, medium, and fine), with no semantic meaning attached to them.
StyleRig~\cite{anoynomous} attaches a semantic control for a StyleGAN embedding, allowing edits for head pose, illumination and expressions. The control is defined through a 3D Morphable Face Model (3DMM)~\cite{Blanz1999}.
\subsection{StyleRig in more detail}
\label{sec:fmodel}
Faces are represented by a 3DMM with $m = 257$ parameters
\begin{equation}
\label{eq:parameters}
\theta = (\phi,\rho,\alpha,\delta,\beta,\gamma) \in \mathbb{R}^{257} \,.
\end{equation}
Here, $(\phi,\rho)\in \mathbb{R}^{6}$ are the rotation and translation parameters of the head pose, where rotation is defined using Euler angles. The vector $\alpha \in \mathbb{R}^{80}$ represents the geometry of the facial identity, while $\beta \in \mathbb{R}^{64}$ are the expression parameters. Skin reflectance is defined by $\delta \in \mathbb{R}^{80}$ and the scene illumination by $\gamma \in \mathbb{R}^{27}$. The basis vectors of the geometry and reflectance models are learned from 200 facial 3D scans \cite{Blanz1999}. The expression model is learned from FaceWarehouse \cite{Cao2014b} and the Digital Emily project \cite{alexander2010digital}. Principal Components Analysis (PCA) is used to compress the original over-complete blendshapes to a subspace of 64 parameters. Faces are assumed to be Lambertian, where illumination is modeled using second-order spherical harmonics (SH) \cite{Ramamoorthi2001b}.
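As a concrete illustration of the illumination model, the following sketch shades a single Lambertian pixel under second-order SH lighting; the layout of the $27$ coefficients as $9$ per color channel, and the folding of the SH basis constants into the coefficients, are simplifying assumptions.
\begin{verbatim}
import numpy as np

def sh_basis(n):
    """Second-order (9-band) SH basis evaluated at unit normal n;
    constant factors are folded into the coefficients."""
    x, y, z = n
    return np.array([
        1.0, y, z, x,
        x * y, y * z, 3.0 * z**2 - 1.0, x * z, x**2 - y**2,
    ])

def shade(albedo, normal, gamma):
    """Lambertian radiance for one pixel: albedo is RGB reflectance,
    gamma is the 27-dim illumination (9 SH coefficients per
    channel, assumed layout)."""
    basis = sh_basis(normal)             # (9,)
    light = gamma.reshape(3, 9) @ basis  # per-channel irradiance
    return albedo * light                # (3,) RGB radiance
\end{verbatim}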
StyleRig~\cite{anoynomous} allows one to semantically edit synthetic StyleGAN images.
To this end, StyleRig trains a neural network, called \emph{RigNet}, which can be understood as a function $\operatorname{rignet}(\cdot,\cdot)$ that maps a pair of StyleGAN code $\mathbf{v}$ and subset of 3DMM parameters $\theta^{\tau}$ to a new StyleGAN code $\hat{\mathbf{v}}$, i.e. $\hat{\mathbf{v}} = \operatorname{rignet}(\mathbf{v},\theta^{\tau})$.
In practice, the 3DMM parameters are first transformed before being used in the network. Please refer to the supplemental document for details.
With that, $\mathbf{I}_{\hat{\mathbf{v}}}$ shows the face of $\mathbf{I}_{\mathbf{v}}$ modified according to $\theta^{\tau}$ (i.e. with edited head pose, scene lighting, or facial expression), \new{where $\mathbf{I}_{\mathbf{v}}$ is the StyleGAN image generated using the latent code $\mathbf{v}$}. Thus, editing a synthetic image $\mathbf{I}_{\mathbf{v}}$ amounts to modifying the component $\tau$ in the parameter $\theta$, and then obtaining the edited image as $\mathbf{I}_{\hat{\mathbf{v}}} = \mathbf{I}(\operatorname{rignet}(\mathbf{v},\theta^{\tau}))$.
Multiple RigNet models are trained, each to deal with just one mode of control (pose, expression, lighting). Although RigNet allows for editing of facial images, it has the major shortcoming that only \emph{synthetic} images can be manipulated, rather than real images. This is in contrast to this work, where we are able to perform semantic editing of \emph{real} images.
\new{Different from the original RigNet design where a differentiable face reconstruction network regresses the 3DMM parameters from a StyleGAN code, we use a model-based face autoencoder~\cite{tewari17MoFA} which takes an image as an input.
This change is necessary, as we initially do not have the StyleGAN code for the real image we want to edit.
}
\section{Results}
In the following, we demonstrate the high-quality results of our method, analyze its different components, as well as compare to several state-of-the-art approaches for portrait image editing.
\paragraph{Implementation Details}
We use the following weights for our energy terms: $\lambda_{\ell_2} = 10^{-6}$, $\lambda_{\text{p}} = 10^{-6}$, $\lambda_{\text{id}} = 1.0$, $\lambda_{\text{ph}} = 0.001$, $\lambda_{\text{lm}} = 0.2$, $\lambda_{\text{e}} = 10.0$, $\lambda_{\text{inv}} = 10.0$, $\lambda_{\text{r}_{\mathbf{w}}} = 0.1$, $\lambda_{\text{r}_{\hat{\mathbf{w}}}} = 0.1$.
We use a starting step size of $50$ when optimizing over embeddings in $W$ space, and $10$ in $W^+$ space.
The step size is then exponentially decayed by a factor of $0.1$ every $2000$ steps.
Optimization takes approximately $10$ minutes for $3000$ iterations per image on an NVIDIA V100 GPU.
Once the embedding is obtained, the portrait image can be edited at an interactive speed.
\new{\paragraph{Feedback}
\label{sec:feedback}
We noticed that a simple feedback loop allows us to get more accurate editing results.
We update the parameters used as input to RigNet in order to correct for the editing inaccuracies in the output.
Given target 3DMM parameters $\theta$, we first obtain the embedding for the edited image, $\operatorname{rignet}(\mathbf{w}, \theta^\tau)$.
We then estimate the 3DMM parameters from the edited embedding, $\theta_{\text{est}} = \theta(\operatorname{rignet}(\mathbf{w}, \theta^{\tau}))$. %
The final embedding is computed as $\operatorname{rignet}(\mathbf{w}, \theta_{\text{new}}^\tau)$ with $\theta_{\text{new}} = \theta + (\theta - \theta_{\text{est}})$.
}
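A minimal sketch of this feedback step is given below; \texttt{theta\_of} (MoFA applied to the synthesized image) and \texttt{extract} (selection of the $\tau$-component) are assumed helpers mirroring the notation above.
\begin{verbatim}
def feedback_edit(w, theta_target, tau, rignet, theta_of, extract):
    """One-step feedback correction for RigNet editing
    inaccuracies: over-shoot the input parameters by the
    observed error of a first editing pass."""
    v_edit = rignet(w, extract(theta_target, tau))   # first edit
    theta_est = theta_of(v_edit)                     # re-estimate
    theta_new = theta_target + (theta_target - theta_est)
    return rignet(w, extract(theta_new, tau))
\end{verbatim}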
\subsection{High-Fidelity Semantic Editing}
We evaluate our approach on a large variety of portrait images taken from~\citet{shen2016deep} and~\citet{Shih14}.
The images are preprocessed as in StyleGAN~\cite{Karras2019cvpr}.
Figs.~\ref{fig:pos-main},~\ref{fig:light-main-ne},~\ref{fig:exp-main}
show results of controlling the head pose, scene illumination, and facial expressions, respectively.
\new{The projections onto the StyleGAN space are detailed, preserving the facial identity.}
Our approach also produces photo-realistic edits.
Fig.~\ref{fig:pos-main} shows that our approach can handle a large variety of head pose modifications, including out-of-plane rotations.
It also automatically inpaints uncovered background regions in a photo-realistic manner.
Fig.~\ref{fig:light-main-ne} demonstrates our relighting results. Our approach can handle complex light material interactions, resulting in high photo-realism.
The relighting effects are not restricted to just the face region, with hair and even eyes being relit.
Our approach also allows for editing facial expressions, see Fig.~\ref{fig:exp-main}.
For smooth temporal editing results of portrait images, please refer to the supplementary video.
\begin{figure*}[t]
\includegraphics[width=\textwidth]{figures/ablative-loss-rev.jpg}
\vspace{-0.6cm}
\caption{
Ablative analysis of the different loss functions.
\emph{Modification} refers to the edit, invariance and identity terms simultaneously.
The left block shows results for editing the head pose and the right block shows results for editing scene illumination.
All losses are required to obtain high-fidelity edits. Images from~\citet{shen2016deep}.
}
\label{fig:ablative-loss}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{figures/ablative-Wjpg-rev.jpg}
\vspace{-0.6cm}
\caption{
Ablative analysis with and without hierarchical optimization.
The left block shows the results for pose editing and the right block for illumination editing.
Without our hierarchical optimization, the obtained embedding cannot be easily edited and artifacts appear in the modified images. Images from~\citet{shen2016deep}.
}
\label{fig:ablative-opt}
\end{figure*}
\begin{table*}[]
\caption{
We compare different settings quantitatively using several metrics for pose editing.
\new{All numbers are averaged over more than $2500$ pose editing results.
We measure the quality of the fit by comparing them to the input image using PSNR and SSIM metrics.
Editing error is measured as the angular difference between the desired and achieved face poses.
Recognition error measures the value of the facial recognition error for the edited images.
There is usually a trade-off between the quality and accuracy of editing, as lower recognition errors correspond to higher editing errors.
We also compare to Image2StyleGAN~\cite{Abdal_2019_ICCV} embeddings using these metrics.
While it achieves the highest quality fitting, the editing results do not preserve the facial identity well. }
}
\begin{tabular}{@{}lcccccc}
\cmidrule(r){1-7}
&
\multicolumn{1}{c}{synthesis} &
\begin{tabular}{@{}c@{}}synthesis + \\ recognition\end{tabular} &
\begin{tabular}{@{}c@{}}synthesis + \\ modification\end{tabular} &
\begin{tabular}[c]{@{}c@{}}all terms (PIE)\end{tabular} &
\begin{tabular}[c]{@{}c@{}}all terms (direct opt.)\end{tabular} &
Image2StyleGAN \\ \cmidrule(r){1-7}
PSNR (dB) $\uparrow$ / SSIM $\uparrow$ & 30.15 / 0.70 & 29.84 / 0.69 & 30.15 / 0.70 & 29.96 / 0.70 & 29.76 / 0.69 & \textbf{31.21} / \textbf{0.75} \\
Editing Error (rad) $\downarrow$ & 0.06 & 0.11 & \textbf{0.036} & 0.08 & 0.037 & 0.07 \\
Recognition Error $\downarrow$ & 95.76 & 43.64 & 90.10 & \textbf{42.82} & 51.65 & 275.40 \\ \bottomrule
\end{tabular}
\label{tab:ablative}
\end{table*}
\subsection{Ablation Studies}
Here, we evaluate the importance of the different proposed loss functions, and also evaluate the hierarchical optimization strategy. Please refer to the supplemental document for the evaluation of the feedback strategy.
\paragraph{Loss Functions}
Fig.~\ref{fig:ablative-loss} shows qualitative ablative analysis for the different loss functions.
We group the edit, invariance and identity terms as \emph{modification terms}.
Adding face recognition consistency without the modification terms leads to incorrect editing in some cases.
Adding the modification terms without face recognition consistency leads to the method being able to accurately change the specified semantic property, but the identity of the person in the image is not preserved.
Using all terms together leads to results with photorealistic edits with preservation of identity.
We do not evaluate the importance of the individual components of the modification terms, as it was already evaluated in \citet{anoynomous}.
\paragraph{Hierarchical Optimization}
Hierarchical optimization is an important component of our approach.
Fig.~\ref{fig:ablative-opt} shows results with and without this component.
Without hierarchical optimization, the method directly optimizes for $\mathbf{w} \in W^+$.
While this leads to high-quality fits, the obtained embedding can be far from the training distribution of StyleRig.
Thus, the quality of edits is poor.
For example in Fig.~\ref{fig:ablative-opt} (top), the StyleGAN network interprets the ears as background, which leads to undesirable distortions.
With hierarchical optimization, the results do not suffer from artifacts.
\paragraph{Quantitative Analysis} We also analyze the effect of different design choices quantitatively, see Tab.~\ref{tab:ablative}.
We look at three properties: the quality of reconstruction (measured using \new{PSNR and SSIM between the projected image and the input)}, the accuracy of edits \new{(measured as the angular distance between the desired and estimated head poses)}, and identity preservation under edits \new{(measured using the second term in Eq.~\ref{eq:recog})}.
The numbers reported are averaged over more than \new{$2500$} pose editing results.
We can see that removing the recognition term changes the identity of the face during editing, and removing the modification loss increases the editing and recognition error.
Hierarchical optimization also leads to better facial identity preservation, compared to direct optimization.
This is expected, since the results with direct optimization often have artifacts.
Note that the artifacts outside of the face region (hair, ears) would not increase the recognition errors significantly.
\new{The recognition term introduces a clear trade-off between the quality of identity preservation under edits and the accuracy of edits.
The modification terms allow for slight improvements in both identity preservation as well as the accuracy of the edits.
}
\subsection{Comparison to the State of the Art}
\begin{figure*}
\includegraphics[width=\textwidth]{figures/comparisons_new.jpg}
\vspace{-0.3cm}
\caption{
Comparison of head pose editing for self-reenactment (first two rows) and cross-identity reenactment (last two rows).
We compare our approach to Wiles~\etal~\shortcite{Wiles18}, Wang~\etal~\shortcite{wang2018vid2vid}, Siarohin~\etal~\shortcite{Siarohin_2019_NeurIPS} and Geng~\etal~\shortcite{Geng2018WarpguidedGF}.
The pose from the reference images is transferred to the input.
Our approach obtains higher quality head pose editing results, especially in the case of cross-identity transfer.
All approaches other than ours are incapable of \emph{disentangled} edits, i.e., they cannot transfer the pose without also changing the expressions.
The implementation of Geng~\etal~\shortcite{Geng2018WarpguidedGF} does not handle cross-identity reenactment.
Note that while the three competing approaches require a reference image in order to generate the results, we allow for explicit control over the pose parameters. Image from~\citet{shen2016deep}.
}
\label{fig:pose-comparison}
\end{figure*}
\begin{figure}
\includegraphics[width=0.49\textwidth]{figures/light-comparison-new.jpg}
\vspace{-0.6cm}
\caption{
Comparison of our relighting results with \citet{Zhou_2019_ICCV}.
The illumination in the reference image is transferred to the input.
Our results are more natural and achieve more accurate relighting.
We can edit colored illumination while \citet{Zhou_2019_ICCV} can only edit monochrome light.
In addition, we can also edit the head pose and facial expressions, while~\citet{Zhou_2019_ICCV} is trained only for relighting.
Images from~\citet{Shih14}.
}
\label{fig:light-comparison}
\end{figure}
\subsubsection{Image2StyleGAN}
\new{Image2StyleGAN~\cite{Abdal_2019_ICCV} also projects real images to the StyleGAN latent space, and is thus a closely related approach.
The source code of Image2StyleGAN was kindly provided by the authors.
We show editing results using Image2StyleGAN embeddings in Figs.~\ref{fig:teaser},~\ref{fig:pos-main},~\ref{fig:light-main-ne} and ~\ref{fig:exp-main}.
Since these embeddings are optimized only using the synthesis terms and without using hierarchical optimization, the results are often implausible, as is most evident when editing the head pose and scene illumination.
However, Image2StyleGAN projections are more detailed than ours.}
\new{We also quantitatively compare to Image2StyleGAN in Tab.~\ref{tab:ablative}.
Image2StyleGAN obtains the highest quality projections in terms of PSNR and SSIM.
When combined with StyleRig, it also leads to low editing errors.
However, the recognition errors are very high due to the artifacts in the results, as shown in the qualitative results.
}
\subsubsection{Other Approaches}
We \new{also} compare our approach to a number of related techniques, X2Face~\cite{Wiles18}, \citet{Geng2018WarpguidedGF} and \citet{Siarohin_2019_NeurIPS}.
We compare our relighting capabilities to the single-image relighting approach of~\citet{Zhou_2019_ICCV}.
The source codes of \new{these} techniques are publicly available.
For Geng~\etal~\shortcite{Geng2018WarpguidedGF}, we estimated the landmarks using the dlib tracker~\cite{dlib09} as suggested by the authors.
We also trained the few shot video-to-video translation method of~\citet{wang2018fewshotvid2vid} for portrait image editing. We trained on 700 videos from the FaceForensics dataset~\cite{roessler2019faceforensics++}. Landmarks were extracted using the dlib tracker as recommended by the authors.
The approaches of~\citet{Geng2018WarpguidedGF},~\citet{Wiles18},~\citet{wang2018fewshotvid2vid} and \citet{Siarohin_2019_NeurIPS} are trained on a video corpus.
\new{In contrast, our method does not use any direct supervision of the edited images.}
We compare to these methods in two different settings, self-reenactment and cross-identity reenactment.
\paragraph{Self-Reenactment}
For self-reenactment, we capture several images of a person in different poses.
We pick the first image and use the other images of the person as reference to edit the head pose.
We captured $9$ people in different poses, resulting in $31$ images in the test set.
Fig.~\ref{fig:pose-comparison} shows some qualitative results.
\citet{Geng2018WarpguidedGF} use a warp-guided algorithm. While this enables expression changes and in-plane head motion, out-of-plane motion cannot be handled as shown in Fig.~\ref{fig:pose-comparison}.
We also compare to X2Face~\cite{Wiles18}, which samples a learned embedded face in order to synthesize portrait images with different poses and expressions.
As such, it shares its limitations with~\citet{Geng2018WarpguidedGF} and produces artifacts for strong pose changes.
\begin{table}[]
\caption{Evaluation of pose edits: We measure landmark alignment errors for same-subject reenactment on 31 images, and facial recognition distances for cross-subject reenactment on 49 images.
Existing landmark detection~\cite{Saragih2009} and facial recognition~\cite{dlib09} often fail on images from competing methods, implying higher realism of our results. }
\begin{tabular}{@{}lll@{}}
\toprule
& \begin{tabular}[c]{@{}l@{}}Landmark Alignment \\ (number of images)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Recognition \\ (number of images)\end{tabular} \\ \midrule
\citet{Wiles18} & \textbf{10.9 (22)} & 0.52 (42) \\
\citet{wang2018fewshotvid2vid} & 28.19 (24) & 0.49 (45) \\
\citet{Siarohin_2019_NeurIPS} & 11.97 (31) & 0.51 (46) \\
Ours & 20.12 (31) & \textbf{0.40 (49)} \\ \bottomrule
\end{tabular}
\label{tab:pose}
\end{table}
Not all approaches share the same cropping method, which makes it difficult to quantitatively evaluate the results.
In addition, translation of the head during capture can lead to different illumination conditions.
Thus, instead of directly computing errors in the image space, we first detect $66$ facial landmarks~\cite{Saragih2009} on all results, as well as the reference images.
We then compute the landmark alignment error, which is the averaged $\ell_2$-distance between the landmarks after 2D Procrustes alignment (including scale).
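The following sketch shows one standard SVD-based way to compute this error for two $66 \times 2$ landmark matrices; the exact evaluation script may differ in details.
\begin{verbatim}
import numpy as np

def alignment_error(L_result, L_ref):
    """Mean l2 distance between two 66x2 landmark matrices after
    2D Procrustes alignment with scale (SVD-based sketch)."""
    A = L_result - L_result.mean(axis=0)
    B = L_ref - L_ref.mean(axis=0)
    U, S, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(U @ Vt))   # avoid reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt                       # optimal 2D rotation
    s = np.trace(D @ np.diag(S)) / (A ** 2).sum()  # optimal scale
    A_aligned = s * A @ R + L_ref.mean(axis=0)
    return np.linalg.norm(A_aligned - L_ref, axis=1).mean()
\end{verbatim}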
The implementation of~\citet{Geng2018WarpguidedGF} often fails to generate such large pose edits, so we do not consider this approach in the quantitative evaluation.
Due to artifacts, the landmark detector fails on $29$\% of the images for the approach of~\citet{Wiles18} and on $23$\% for~\citet{wang2018fewshotvid2vid}.
All our results, as well as those of~\citet{Siarohin_2019_NeurIPS} pass through the detector.
This can be considered a pseudo-metric of realism, since the landmark detector is trained on real portrait images, implying that our results are better than those of~\citet{Wiles18} and~\citet{wang2018fewshotvid2vid}, and on par with~\citet{Siarohin_2019_NeurIPS}.
Table~\ref{tab:pose} shows the errors for different methods.
The low errors for~\citet{Wiles18} are possibly due to the landmark detector failing in challenging cases.
We obtain only slightly worse results compared to~\citet{Siarohin_2019_NeurIPS}, even though our method does not have access to ground truth during training.
\citet{Siarohin_2019_NeurIPS} train on videos, allowing for supervised learning.
In addition, their edits are at a lower resolution of $256 \times 256$, compared to our image resolutions of $1024 \times 1024$.
\paragraph{Cross-identity Reenactment} We also compare to others in cross-identity reenactment, which is closer to our setting of semantically disentangled editing.
Here, the image being edited and the reference image have different identities.
Fig.~\ref{fig:pose-comparison} shows some qualitative results.
The implementation of~\citet{Geng2018WarpguidedGF} does not support this setting.
\citet{Wiles18} and~\citet{wang2018fewshotvid2vid} result in similar artifacts as discussed before.
Unlike the other approaches,~\citet{Siarohin_2019_NeurIPS} uses two driving images to edit the input image, taking the deformations between the two images as input.
In the case of self-reenactment, we provide the input image as the first driving image.
We do the same here, which leads to two driving images with different identities.
This significantly alters the facial identity in the output image.
We also quantitatively evaluate the extent of identity preservation for different methods using a facial recognition tool~\cite{dlib09}, see Table~\ref{tab:pose}.
All methods other than ours do not support semantically disentangled editing. As can be seen in Fig.~\ref{fig:pose-comparison} (bottom), other methods simultaneously change the expressions in addition to the head pose.
\paragraph{Interactive User Interface} While all existing approaches need one or more driving images for editing, we allow for explicit editing using intuitive controls.
We developed an interactive user interface to edit images, see supplemental video.
The user can change the head pose using a trackball mouse interface.
Spherical harmonic coefficients and blendshape coefficients are changed using keyboard controls.
All editing results run at around 5fps on a TITAN X Pascal GPU.
\paragraph{Relighting}
We compare our relighting results to the single-image relighting approach of~\citet{Zhou_2019_ICCV}, see Fig.~\ref{fig:light-comparison}.
Our approach allows for colored illumination changes, as shown in Fig.~\ref{fig:light-main-ne}.
Our approach produces higher-quality and more realistic output images.
We also quantitatively compare the relighting quality of these approaches in an illumination transfer setting, where the illumination in a reference image is transferred to a given input image.
Since we do not have ground truth data available, we compare the results using a network which predicts the illumination from the reference and the relighted results.
We use a model-based face autoencoder~\cite{tewari17MoFA}, trained on the VoxCeleb dataset~\cite{Chung18b}.
This network predicts $27$-dimensional spherical harmonics coefficients.
We compare the predictions using a scale-invariant $\ell_2$-loss.
We obtain a lower error ($0.34$) than \citet{Zhou_2019_ICCV} ($0.36$), indicating more accurate relighting.
The numbers are averaged over $100$ relighting results.
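One plausible form of this scale-invariant comparison is sketched below; since the exact variant is not specified here, the least-squares rescaling should be read as an assumption.
\begin{verbatim}
import numpy as np

def scale_invariant_l2(gamma_pred, gamma_ref):
    """Scale-invariant l2 distance between two 27-dim SH
    coefficient vectors: the prediction is first rescaled by the
    least-squares optimal scalar (assumed variant)."""
    s = gamma_pred @ gamma_ref / (gamma_pred @ gamma_pred + 1e-12)
    return np.linalg.norm(s * gamma_pred - gamma_ref) ** 2
\end{verbatim}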
While the method of~\citet{Zhou_2019_ICCV} is only trained for relighting, our method allows us to also edit the head pose and facial expressions.
\new{
\subsection{Generality of the embeddings}
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{figures/sequential.jpg}
\vspace{-0.6cm}
\caption{\new{PIE also allows for sequential editing.
We optimize for the StyleGAN embedding using the pose RigNet.
We can then use the edited pose results with the RigNets for other semantic components for sequential editing. Images from~\citet{shen2016deep}.}
}
\label{fig:sequential}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{figures/interfacegan.jpg}
\vspace{-0.6cm}
\caption{\new{Our embeddings obtain similar quality editing results with the InterFaceGAN~\cite{shen2020interpreting} editing approach. We also notice similar improvements over Image2StyleGAN~
\cite{Abdal_2019_ICCV} embeddings. Images from~\citet{shen2016deep}.
}}
\label{fig:interfacegan}
\end{figure}
\paragraph{Sequential Editing}
Our method also allows for sequential editing of the different semantic parameters, see Fig.~\ref{fig:sequential}.
Here, we optimize for the embedding using the pose RigNet network.
After editing the pose, we can use the new embedding as input to the illumination and expression RigNets.
Since all three versions of RigNet were trained on the same training data, this still produces plausible results.}
\new{
\paragraph{Other StyleGAN editing methods}
Our approach obtains a StyleGAN embedding which can be edited using StyleRig.
In order to test the generality of these embeddings, we attempt to edit them using InterFaceGAN~\cite{shen2020interpreting}, see Fig.~\ref{fig:interfacegan}.
Our improvements over Image2StyleGAN generalize to InterFaceGAN edits.
We better preserve the facial identity and produce fewer artifacts.
The editing results with InterFaceGAN are of a similar quality to those obtained using StyleRig.
However, InterFaceGAN cannot change the scene illumination.
}
\section{Limitations}
\label{sec:limitation}
\begin{figure}
\includegraphics[width=0.49\textwidth]{figures/limitations-rev.jpg}
\vspace{-0.6cm}
\caption{
Limitations: Large edits can lead to artifacts. High-frequency texture on the foreground or background is difficult to fit. Our method also cannot handle cluttered backgrounds or occlusions. Images from~\citet{shen2016deep}.
}
\label{fig:limitations}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{figures/large_edits.png}
\vspace{-0.6cm}
\caption{\new{Scatterplot of the editing (left) and recognition errors (right), with respect to the magnitude of the desired pose edits for over $2500$ pose editing results.
Larger edits lead to both higher editing and recognition errors.
}
}
\label{fig:large}
\end{figure}
Even though we have demonstrated a large variety of compelling portrait image editing results, there is still room for further improvement of our approach:
(1) At the moment, our approach has a limited expressivity, i.e., it does not allow the artifact-free exploration of the whole parameter space of the underlying 3D morphable face model.
For example, we cannot change the in-plane rotation of the face or arbitrarily change the lighting conditions.
The main limiting factor is the training corpus (FFHQ \cite{Karras2019cvpr}) that has been used to pretrain the StyleGAN-generator, since it does not contain such variations.
Due to the same reason, our approach is also not yet suitable for video-based facial reenactment, since the variety of facial expressions in the training corpus is severely limited.
This problem could be alleviated by pretraining the generator on a larger and less biased training corpus that covers all dimensions well.
(2) Our method only allows for independent control over the semantic parameters, which is important for editing applications.
\new{While sequential control is possible, simultaneous control is a more challenging problem.}
(3) Our approach does not provide explicit control over the synthesized background.
At the moment, the background changes during the edits and does not remain static as it should, since the network has learned correlations between the face and the background.
This could potentially be alleviated by learning an explicit foreground-background segmentation and having a consistency loss on the static background region.
(4) In challenging cases with large deformations, cluttered backgrounds or occlusions and high-frequency textures, our method can fail to faithfully fit to the input image and preserve editing properties at the same time, see Fig.~\ref{fig:limitations}.
In addition, 3D face reconstruction also often fails under occlusions which would lead to incorrect data for our approach.
\new{
(5) Larger edits generally correspond to worse results, and can often lead to artifacts, as shown in Fig.~\ref{fig:limitations}.
This can also be seen in Fig.~\ref{fig:large}, where larger pose edits correlate with higher editing and facial recognition errors.
}
(6) Similar to StyleGAN, our approach also sometimes shows droplet-like artifacts.
This could be alleviated by switching to a higher quality generator architecture, such as StyleGAN2 \cite{Karras2019stylegan2}, which has been shown to solve this problem.
\new{
(7) While we show results for people of different ethnicities, genders and ages, we did not extensively study the biases present in the method.
Some of the components used, such as the 3DMM are known to have racial biases~\cite{tewari2017self}.
}
(8) Our results are not guaranteed to be temporally consistent.
While we show temporal editing results (in the supplemental video), our results could be made even more consistent by employing a temporal network architecture and space-time versions of our losses.
Nevertheless, our approach, already now, enables the intuitive editing of portrait images at interactive frame rates.
\section{Conclusion}
\new{We have presented the first approach for embedding portrait photos in the latent space of StyleGAN, which allows for intuitive editing of the head pose, facial expression, and scene illumination}.
To this end, we devised a hierarchical optimization scheme that embeds a real portrait image in the latent space of a generative adversarial network, while ensuring the editability of the recovered latent code.
Semantic editing is achieved by mapping the control space of a 3D morphable face model to the latent space of the generator.
In addition, a novel identity preservation loss enables better preservation of the facial identity.
Our approach is a first step towards intuitive and interactive editing of portrait images using a semantic control space akin to computer animation controls.
In addition, our approach provides more insights into the inner workings of GANs, since it allows the intuitive and interactive exploration of the space of face images.
This can shed light on the biases the model has learned from the employed training corpus.
By using high-quality 3D face models, approaches such as StyleRig would produce better quality with more fine-grained control, and thus would further improve our results. Our paper brings the two different domains of 2D and 3D face models together, thus opening the road towards even more interesting edits.
\section{Introduction}
Sequence modeling is an important problem in speech recognition. In both conventional hybrid \cite{bourlard2012connectionist} and end-to-end style (e.g., attention-based encoder-decoder \cite{bahdanau2016end, chiu2018state} or neural transducer\cite{he2019rnnt}) architectures, a neural encoder is used to extract a sequence of high-level embeddings from an input feature vector sequence. A feed-forward neural network extracts embeddings from a fixed window of local features \cite{hinton2012deep}. Recurrent neural networks (RNNs), especially the long short-term memory (LSTM) \cite{hochreiter1997long}, improve the embedding extraction by exploiting both long-term and short-term temporal patterns \cite{sak2014long}. Recently, attention (or self-attention if there is only one input sequence) has emerged as an alternative technique for sequence modeling \cite{vaswani2017attention}. Different from RNNs, attention connects arbitrary pairs of positions in the input sequences directly. To forward (or backward) signals between two positions that are $n$ steps away in the input, it only needs one step to traverse the network, compared with $O(n)$ steps in RNNs. Built on top of the attention operation, the transformer model \cite{vaswani2017attention} leverages multi-head attention and interleaves with feed-forward layers. It has achieved great success in both natural language processing \cite{devlin2018bert, radford2018improving} and speech applications \cite{karita2019comparative, wang2019transformer}.
However, two significant issues make transformer-based models impractical for online speech recognition applications. First, it requires access to the entire utterance before it can start producing output; second, the computational cost and memory usage grow quadratically with respect to the input sequence length if an infinite left context is used. There are a few methods that can partially solve these issues. First, {\it time-restricted self-attention}~\cite{povey2018time} can be used, in which the computation of attention only uses the past input vectors and a limited length of future inputs (e.g. \cite{zhang2020transformer, moritz2020streaming}). However, since the receptive field grows linearly with the number of transformer layers, this usually incurs significant latency; it does not address the issue of quadratically growing cost either. Second, {\it block processing} is used in \cite{dong2019self}, which chunks the input utterances into segments, and self-attention is performed on each segment. In this way, the computation cost and memory usage do not grow quadratically. It is similar to context-sensitive-chunk BPTT in \cite{Chen2016chunk} and truncated BLSTM in \cite{mohamed2015deep}, which were successfully deployed to build online speech recognition systems based on BLSTM models. However, since the transformer cannot attend beyond the current segment, it is observed that this method yields significant accuracy degradation \cite{Tsunoo2019, dai2019transformer}. Third, a {\it recurrent connection}, in which embeddings from the previous segment are carried over to the current one, can be combined with block processing. This approach is similar to the idea proposed in latency controlled BLSTM (LC-BLSTM) \cite{zhang2016highway}. An example of this approach is transformer-XL \cite{dai2019transformer}, which can model very long dependencies on text data for language modeling. The works in \cite{Tsunoo2019, tian2019synchronous} have explored similar ideas for acoustic modeling.
Carrying over segment level information enables attention to access information beyond the current segment. A recurrent connection compresses the segment level information into a single memory slot. For a segment that is $k$ steps away, it takes $O(k)$ steps to retrieve the embedding extracted from that segment. Inspired by the neural Turing machine \cite{graves2014neural}, we propose a novel {\it augmented memory} transformer, which accumulates the segment level information into a memory bank with multiple memory slots. Attention is then performed over the memory bank, together with the embeddings from the current segment. In this way, all the information, regardless of whether it is in the current segment or $k$ segments away, is equally accessible. We applied this {\it augmented memory} transformer to a hybrid speech recognition architecture and performed an in-depth comparison with other methods on the widely used LibriSpeech benchmark \cite{panayotov2015librispeech}. Experimental results demonstrate that the proposed augmented memory transformer outperforms all the other methods by a large margin. Using our proposed method, we show that with similar look-ahead sizes, the augmented memory transformer improves over the widely used LC-BLSTM model by over 15\% relative. Though we only evaluate the proposed method in a hybrid speech recognition scenario, it is equally applicable to end-to-end style architectures.
The rest of this paper is organized as follows. In Section 2, we briefly review the self-attention and transformer-based acoustic model. We present the augmented memory transformer in Section 3. Section 4 demonstrates and analyzes the experimental results, followed by a summary in Section 5.
\section{Transformer-based acoustic models}
\label{sec:am}
We first give a brief introduction of self-attention that is the core of the transformer-based model. Then we describe the architecture of the transformer-based acoustic model from \cite{wang2019transformer}. The model in this paper extends its model architecture for online streaming speech recognition.
\subsection{Self-attention}
Given an input embedding sequence $\boldsymbol{X} = (\boldsymbol{x}_1, ..., \boldsymbol{x}_T)$ where $\boldsymbol{x}_t \in \mathbb{R}^{D}$,
self-attention projects the input to \emph{query}, \emph{key} and \emph{value} space using $\mathbf{W}_{\rm q}$, $\mathbf{W}_{\rm k}$ and $\mathbf{W}_{\rm v}$, respectively,
\begin{align}
\boldsymbol{Q}=\mathbf{W}_{\rm q}\boldsymbol{X},\ \ \ \boldsymbol{K}=\mathbf{W}_{\rm k}\boldsymbol{X},\ \ \
\boldsymbol{V}=\mathbf{W}_{\rm v}\boldsymbol{X}
\end{align}
where $\mathbf{W}_{\rm q}, \mathbf{W}_{\rm k}, \mathbf{W}_{\rm v}$ are learnable parameters.
Self-attention uses dot-product to get the attention distribution over \emph{query} and \emph{key}, i.e., for position $t$ in \emph{query}, a distribution $\boldsymbol{\alpha}_{t}$ is obtained by:
\begin{align}
\alpha_{t\tau} = \frac{
\exp(\beta \cdot\boldsymbol{Q}_t^{\sf T}\boldsymbol{K}_\tau )
}{
\sum_{\tau'} \exp(\beta \cdot\boldsymbol{Q}_t^{\sf T}\boldsymbol{K}_{\tau'} )}
\end{align}
where $\beta = \frac{1}{\sqrt{D}}$ is a scaling factor. Given $\boldsymbol{\alpha}_t$, the output embedding of self-attention is obtained via:
\begin{align}
\boldsymbol{z}_t = \sum_{\tau} \mathrm{Dropout}(\alpha_{t\tau}) \cdot \boldsymbol{V}_\tau.
\end{align}
In \cite{vaswani2017attention}, multi-head attention is introduced. Each attention head is applied individually to the input sequence. The outputs of all heads are concatenated and linearly transformed into the final output.
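For reference, a single-head version of this operation can be sketched as follows in PyTorch; note that the sketch uses a row-vector convention ($\boldsymbol{Q} = \boldsymbol{X}\mathbf{W}_{\rm q}$), whereas the equations above act on column vectors.
\begin{verbatim}
import math
import torch

def self_attention(X, Wq, Wk, Wv, dropout=None):
    """Single-head scaled dot-product self-attention over an
    input sequence X of shape (T, D); Wq, Wk, Wv are (D, D)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    beta = 1.0 / math.sqrt(X.shape[-1])            # scaling factor
    alpha = torch.softmax(beta * Q @ K.T, dim=-1)  # (T, T) weights
    if dropout is not None:
        alpha = dropout(alpha)
    return alpha @ V                               # (T, D) outputs
\end{verbatim}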
\subsection{Transformer-based acoustic model}
The transformer-based acoustic model \cite{wang2019transformer} is a deep stack of transformer layers on top of VGG blocks \cite{simonyan2014very}. Each transformer layer consists of multi-head self-attention followed by a position-wise feed-forward layer. Rather than using sinusoidal positional embeddings \cite{vaswani2017attention}, the transformer-based acoustic model \cite{wang2019transformer} uses VGG blocks to implicitly encode relative positional information \cite{mohamed2019transformers}. Layer normalization \cite{lei2016layer}, the iterated loss \cite{Andros2019}, residual connections, and dropout are applied to effectively train the deep stack of transformer layers. More model details can be found in \cite{wang2019transformer}.
\section{Augmented Memory Transformer}
The original transformer model generates outputs by attending to the whole input sequence, which is not suitable for streaming speech recognition. The proposed augmented memory transformer addresses this issue by combining two mechanisms.
First, similar to {\it block processing} \cite{dong2019self}, the whole utterance is segmented into segments padded with left and right context. The size of each segment limits the computation and memory consumption in each transformer layer.
Second, to carry over information across segments, an {\it augmented memory bank} is used. Each slot in the {\it augmented memory bank} is the embedding representation of an observed segment.
Figure \ref{fig:amtrf} illustrates one forward step on the $n$-th segment using augmented memory transformer.
An augmented memory bank (red) is introduced to the self-attention function.
The input sequence is first segmented into segments.
Each segment $\boldsymbol{C}_n=(\boldsymbol{x}_{nB+1}, ..., \boldsymbol{x}_{(n+1)B})$ contains $B$ input embedding vectors, where $B$ is referred to as the {\it segment length}.
The $n$-th segment is formed by patching the current segment with left context $\boldsymbol{L}_n$ (length $L$) and right context $\boldsymbol{R}_n$ (length $R$). An embedding vector $\boldsymbol{s}_n$,
referred to as the {\it summarization query} is then computed by pooling over $\boldsymbol{C}_n$.
Different pooling methods, e.g. average pooling, max pooling, and the linear combination, can be used.
This paper focuses on the average pooling.
In the self-attention with augmented memory, the \emph{query} is the projection from the concatenation of current segment with context frames and the summarization query. The \emph{key} and the \emph{value} are the projections from the concatenation of the augmented memory bank and the current segment with context frames. They are formalized as
\begin{align}
\boldsymbol{Q}&=\mathbf{W}_{\rm q}[\boldsymbol L_n, \boldsymbol{C}_n, \boldsymbol R_n , \boldsymbol{s}_n],\\
\boldsymbol{K}&=\mathbf{W}_{\rm k}[\boldsymbol{M}_n, \boldsymbol{L}_n, \boldsymbol{C}_n, \boldsymbol{R}_n], \label{eq:amfk} \\
\boldsymbol{V}&=\mathbf{W}_{\rm v}[\boldsymbol{M}_n, \boldsymbol{L}_n, \boldsymbol{C}_n, \boldsymbol{R}_n] \label{eq:amfv}
\end{align}
where $\boldsymbol{M}_n=(\boldsymbol{m}_1, ..., \boldsymbol{m}_{n-1})$ is the augmented memory bank. Note that $\boldsymbol{Q}$ has $(L+B+R+1)$ column vectors, where $\boldsymbol{q}_{-1}$ denotes the projection from $\boldsymbol{s}_n$. The attention output for $\boldsymbol{q}_{-1}$ is stored in the augmented memory bank as $\boldsymbol{m}_{n}$ for future forward steps, i.e.,
\begin{align}
\boldsymbol{m}_n = \sum_{\tau} \mathrm{Dropout}(\alpha_{(-1)\tau}) \cdot \boldsymbol{V}_\tau
\end{align}
where $\alpha_{(-1)\tau}$ is the attention weight for $\boldsymbol{q}_{-1}$. The attention output from $(\boldsymbol{q}_{1}, ..., \boldsymbol{q}_{L+B+R})$ is fed to the next layer; for the last transformer layer, only the center $B$ vectors are used as the transformer network's output.
The output for the whole utterance is the concatenation of the outputs from all segments.
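The following sketch summarizes one such forward step for a single attention head; the projection matrices attached to \texttt{layer} are placeholders, and dropout, multi-head attention and the feed-forward sub-layer are omitted.
\begin{verbatim}
import torch

def am_forward_step(layer, memory_bank, L_n, C_n, R_n):
    """One forward step of an augmented-memory layer on segment n.
    memory_bank is a list of stored memory slots (each (1, D));
    L_n, C_n, R_n are the left context, segment and right context."""
    s_n = C_n.mean(dim=0, keepdim=True)          # summarization query
    query_in = torch.cat([L_n, C_n, R_n, s_n], dim=0)
    kv_in = torch.cat(list(memory_bank) + [L_n, C_n, R_n], dim=0)
    Q = query_in @ layer.Wq
    K, V = kv_in @ layer.Wk, kv_in @ layer.Wv
    alpha = torch.softmax(Q @ K.T / K.shape[-1] ** 0.5, dim=-1)
    Z = alpha @ V
    memory_bank.append(Z[-1:])   # store m_n for future segments
    return Z[:-1]                # embeddings for L_n, C_n, R_n
\end{verbatim}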
The proposed method differs from existing models in several aspects.
Transformer-XL~\cite{dai2019transformer} incorporates
history information only from previous segment $\boldsymbol{C}_{n-1}$ via
\begin{equation}
\begin{split}
\boldsymbol{Q}&=\mathbf{W}_{\rm q}\boldsymbol{C}_n,\\
\boldsymbol{K}&=\mathbf{W}_{\rm k}[\boldsymbol{C}_{n-1},\boldsymbol{C}_n],\\
\boldsymbol{V}&=\mathbf{W}_{\rm v}[\boldsymbol{C}_{n-1},\boldsymbol{C}_n].
\end{split}
\end{equation}
Also note that, in transformer-XL, $\boldsymbol C_{n-1}$ is from the lower layer. This gives the upper layers an increasingly long receptive field. Our proposed augmented memory transformer explicitly holds the information from all previous segments (Eqs. \ref{eq:amfk} and \ref{eq:amfv}), and all layers have the same receptive field. Using a bank of memories to represent past segments is also explored in \cite{rae2019compressive}, primarily in language modeling tasks.
In \cite{povey2018time}, the {\it time-restricted transformer} restricts the attention to a context window in each transformer layer. This means the look-ahead length is linearly growing by the number of transformer layers. Our proposed method has a fixed look-ahead window, thus enable us to use many transformer layers without increasing look-ahead window size.
\begin{figure}[t!]
\includegraphics[width=3.2in]{amtrf-yyshi.png}
\caption{Illustration of one forward step for the augmented memory transformer on the $n$-th segment. }
\label{fig:amtrf}
\vspace{-1pt}
\end{figure}
\section{Experiments}
The proposed model was evaluated on the LibriSpeech ASR task, and two of our internal video ASR tasks, German and Russian.
Neural network models were trained and evaluated using an in-house extension of the PyTorch-based \emph{fairseq}~\cite{ott2019fairseq} toolkit.
In terms of latency,
this paper focuses on the algorithmic latency,
i.e.\ the size of the look-ahead window.
For a fair comparison, models with similar look-ahead windows were compared.
\subsection{LibriSpeech}
We first performed experiments on the LibriSpeech task \cite{panayotov2015librispeech}.
This dataset contains about 960 hours of read speech data for training, and 4 development and test sets (\texttt{\{dev, test\} - \{clean,other\}}) for evaluation, where \texttt{other} sets are more acoustically challenging.
The standard 4-gram language model (LM) with a 200K vocabulary was used for all first-pass decoding.
In all experiments,
80 dimensional log Mel-filter bank features with a 10ms frame-shift were used as input features.
The context- and position-dependent graphemes, i.e. \textit{chenones}~\cite{le2019senones}, were used as output labels.
\subsubsection{Experiment Setups}
\label{sec:libri_setup}
A GMM-HMM system was first trained following the standard Kaldi~\cite{Povey_ASRU2011} Librispeech recipe.
To speed up the training of neural networks,
the training data were segmented into utterances of at most 10 seconds\footnote{The training-data segmentation was obtained from the alignments of an initial LC-BLSTM model.
According to our studies,
shorter segments in training can improve both training throughput and decoding performance.};
speed perturbation \cite{ko2015audio} and \emph{SpecAugment} \cite{park2019specaugment} were performed on the training data.
In evaluation, no segmentation was performed on the test data.
This paper focuses on cross-entropy (CE) training for neural network models.
The proposed augmented memory transformer (AMTrf) was compared with streamable baselines including LC-BLSTM~\cite{zhang2016highway},
transformer-XL (Trf-XL)~\cite{dai2019transformer}
and the time-restricted transformer (TRTrf)~\cite{povey2018time}. Also, the non-streamable original transformer (Trf) was included to indicate the potential performance lower bound.
We started with investigating models of a small configuration with approximately 40M parameters.
The LC-BLSTM baseline consists of 5 layers with 400 nodes per layer and direction. A mixed frame rate \cite{peddinti2017low} is used, i.e.\ the output frames of the first layer are sub-sampled by a factor of 2 before being propagated to the second layer. The look-ahead window is set to 0.4 seconds, i.e.\ 40 frames; the chunk size of the LC-BLSTM is 1.5 seconds.
For transformers, the topology is 12 transformer layers with 512 input embedding dimensions, 8 attention heads, and 2048 feed-forward network (FFN) dimensions in each layer.
Following \cite{wang2019transformer}, two VGG blocks \cite{simonyan2014very} are introduced as lower layers before the stacked transformer layers\footnote{As studied in \cite{wang2019transformer}, VGG blocks are a best practice for input positional embedding in transformers.
In experiments using VGG blocks on LC-BLSTM, only insignificant gains were obtained.}.
Each VGG block consists of two consecutive 3-by-3 convolution layers, a ReLU activation function, and a max-pooling layer; the first VGG block includes 32 channels, the second 64; 2-by-2 max-pooling is used in each block, with stride 2 in the first and stride 1 in the second. The VGG blocks generate a 2560-D feature sequence at a 20ms frame rate.
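A sketch of this convolutional frontend is given below; the placement of a ReLU after each convolution and the exact padding are our assumptions (the text only fixes the kernel sizes, channel counts and pooling strides), so output shapes may differ slightly from the actual model.
\begin{verbatim}
import torch
import torch.nn as nn

def vgg_block(c_in, c_out, pool_stride):
    # two consecutive 3x3 convolutions with ReLU, then 2x2 max-pooling
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2, stride=pool_stride))

frontend = nn.Sequential(
    vgg_block(1, 32, pool_stride=2),    # halves the time axis: 10ms -> 20ms rate
    vgg_block(32, 64, pool_stride=1))   # keeps the 20ms frame rate

x = torch.randn(1, 1, 1000, 80)         # (batch, channel, frames, mel bins)
y = frontend(x)                         # (1, 64, T', F')
feats = y.transpose(1, 2).flatten(2)    # (1, T', 64*F'): the ~2560-D features
\end{verbatim}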
In training,
the Adam optimizer \cite{kingma2014adam} was used for all the models.
Dropout was used: 0.5 for LC-BLSTMs and 0.1 for transformers.
The LC-BLSTM baseline was optimized for at most 30 epochs on 16 Nvidia V100 GPUs.
The learning rate was initially $10^{-4}$ and was halved after each epoch in which the accuracy on the cross-validation data degraded.
Transformer models were optimized using a tri-stage learning-rate strategy:
8K updates with a learning rate increased linearly from $10^{-5}$ to a holding learning rate $3\times10^{-4}$, 100K updates with the holding learning rate, and further updates with the learning rate decreased exponentially.
32 GPUs were used in training one transformer model.
Transformer models were updated up to 70 epochs.
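The tri-stage schedule above can be expressed compactly as a function of the update step; the decay rate in the third stage is an assumed placeholder, since only ``decreased exponentially'' is specified.
\begin{verbatim}
import math

def tri_stage_lr(step, warmup=8000, hold=100000,
                 init_lr=1e-5, peak_lr=3e-4, decay=1e-5):
    """Learning rate at a given update step."""
    if step < warmup:                   # stage 1: linear warm-up
        return init_lr + (peak_lr - init_lr) * step / warmup
    if step < warmup + hold:            # stage 2: hold the peak rate
        return peak_lr
    return peak_lr * math.exp(-decay * (step - warmup - hold))  # stage 3
\end{verbatim}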
The large configuration, i.e. approximately 80M parameters, was then investigated.
The large LC-BLSTM baseline consists of 5 layers with 800 nodes in each layer each direction.
The large transformer consists of 24 layers. The layer setting is identical to that of the small configuration; also, the same VGG blocks are used.
The training schedule of LC-BLSTM and transformers followed a similar fashion as that in the small configuration.
For large transformers,
to alleviate the gradient vanishing issue, iterated loss~\cite{Andros2019} is applied.
The outputs of the 6/12/18-th transformer layers are non-linearly transformed (projected to a 256-dimensional space with a linear transformation followed by a ReLU activation function), and auxiliary CE losses are calculated separately. These additional losses are interpolated with the original loss using a 0.3 weight.
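A sketch of this auxiliary-loss construction follows; the auxiliary classifier on top of the projection and the exact interpolation convention are our assumptions.
\begin{verbatim}
import torch.nn as nn

class AuxHead(nn.Module):
    """Projection to a 256-D space (linear + ReLU) and an auxiliary classifier."""
    def __init__(self, d_model, n_targets, d_proj=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_proj), nn.ReLU(), nn.Linear(d_proj, n_targets))

    def forward(self, h):
        return self.net(h)

def iterated_loss(ce, main_logits, aux_logits, targets, w=0.3):
    # aux_logits: outputs of AuxHeads attached after the 6/12/18-th layers
    loss = ce(main_logits, targets)
    for logits in aux_logits:
        loss = loss + w * ce(logits, targets)
    return loss
\end{verbatim}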
In evaluation,
a fully-optimized, static 4-gram decoding graph built by Kaldi was used.
The results on test sets were obtained using the best epoch on the development set\footnote{A model that averaged the last 10 epochs was included as a model candidate.}.
Following \cite{luscher2019rwth},
the best checkpoints for \texttt{test-clean} and \texttt{test-other}
are selected on the respective development sets.
\subsubsection{Segment and Context Length}
We first investigate the effect of segment length and context size.
A key issue for the proposed model is the trade-off between latency and accuracy.
\begin{table}[htb]
\centering
\caption{Effect of segment, left and right context length on \emph{LibriSpeech}. Lengths are measured in frames, at a 10ms frame shift. }
\begin{tabular}{|ccc||cc|}
\hline
Left & Segment & Right & test-clean & test-other \\
\hline
\hline
0 & 64 & 0 & 10.7 & 13.9 \\
0 & 96 & 0 & 9.8 & 13.0 \\
0 & 128 & 0 & 7.7 & 10.4 \\
\hline
16 & 128 & 0 & 5.2 & 9.5 \\
32 & 128 & 0 & 3.6 & 8.5 \\
64 & 128 & 0 & 3.5 & 8.5 \\
\hline
0 & 128 & 16 & 5.5 & 9.3 \\
0 & 128 & 32 & 3.8 & 8.1 \\
\hline
32 & 128 & 32 & 3.3 & 8.0 \\
64 & 128 & 32 & 3.3 & 7.6 \\
\hline
--& $\infty$ & -- & 3.1 & 7.1 \\
\hline
\end{tabular}
\label{tab:context_libri}
\end{table}
The decoding performance is reported in Table~\ref{tab:context_libri}.
The first block shows the results without context.
By increasing the segment length from 64 to 128 frames, the word error rate (WER) decreased.
Next, various context settings were investigated with the segment length fixed to 128 frames.
The second and third blocks illustrate
the effect of left and right contexts, respectively.
Both left and right context contributed to alleviating the boundary effect.
Longer contexts further improved the recognition accuracy.
Finally, the effect of using both contexts was shown in the fourth block.
The left and right contexts showed some level of complementarity; thus, the performance further improved.
The $\infty$ system refers to a transformer-based acoustic model presented in \cite{wang2019transformer}, indicating the performance lower bound.
The setting with segment length 128 and left/right contexts of 64/32 frames was used in the following experiments.
It yields a look-ahead window of 32 frames, i.e. 0.32 seconds, which is comparable to that of the LC-BLSTM baseline.
\subsubsection{Limited Memory}
The second set of experiments investigated
the effect of limited memory size.
Instead of observing the complete augmented memory bank,
models in this section were trained and tested by observing a fixed number of the most recent memory vectors. Note that when the memory size equals 1, our method becomes almost identical to the encoder used in \cite{Tsunoo2019}.
These experiments were performed to investigate how much long-term history contributed to the final performance.
\vspace{-0.5em}
\begin{table}[htb]
\centering
\caption{Effect of limited memory size on \emph{LibriSpeech}.}
\begin{tabular}{|c||cc|}
\hline
MemSize & test-clean & test-other \\
\hline
\hline
0 & 3.2 & 8.1 \\
1 & 3.3 & 8.0 \\
3 & 3.2 & 7.9 \\
5 & 3.3 & 7.9 \\
$\infty$ & 3.3 & 7.6 \\
\hline
\end{tabular}
\label{tab:mem_libri}
\end{table}
Table~\ref{tab:mem_libri} reports the results using different memory sizes.
On the noisy set \texttt{test-other},
the performance improved consistently
from no memory (0) to unlimited memory ($\infty$)\footnote{The longest utterance in the LibriSpeech test sets is about 35 seconds. Thus, the $\infty$ system used at most 28 memory slots.}.
However, on the clean set \texttt{test-clean},
little improvement was obtained.
This observation indicates that the global information in the long-term memory
helps under more challenging acoustic conditions.
\subsubsection{Comparison with Other Streamable Models}
Table~\ref{tab:cmp_libri} compares the WERs of different models.
For a fair comparison on latency,
models with similar look-ahead windows are compared.
The first block compares
models with about 40M parameters.
The transformer-XL baseline used a segment length of 128, which is identical to that of the proposed model.
The "+look-ahead" row reports the extension of transformer-XL with right context\footnote{There is no context in the original design of transformer-XL. We applied a similar idea of right context (32 frames) to transformer-XL as in the proposed model. Thus, both models have a look-ahead window of 0.32 second.}.
The TRTrf baseline used a context of 3 in each layer, resulting in a look-ahead window of 0.72 second.
The proposed augmented memory transformer outperformed all the streamable baselines.
\begin{table}[htb]
\centering
\caption{Performance of different models on \emph{LibriSpeech}. ``Str'' stands for streamable, specifying if a model is a streamable one.}
\begin{tabular}{|c|c|l||cc|}
\hline
\#Param & Str & Model & test-clean & test-other \\
\hline
\hline
\multirow{6}{*}{$\simeq$40M} & \multirow{5}{*}{\cmark} & LC-BLSTM & 3.8 & 9.9 \\
& & Trf-XL & 4.2 & 10.7 \\
& & \ \ \ \ +look-ahead & 3.9 & 10.1 \\
& & TRTrf & 4.1 & 9.0 \\
& & AMTrf & \textbf{3.3} & \textbf{7.6} \\
\cline{2-5}
& \xmark & Trf & 3.1 & 7.1 \\
\hline
\hline
\multirow{6}{*}{$\simeq$80M}& \multirow{5}{*}{\cmark} & LC-BLSTM & 3.3 & 8.2 \\
& & Trf-XL& 3.5 & 8.3 \\
& & \ \ \ \ + look-ahead& 3.2 & 7.7 \\
& & AMTrf & 3.1 & 7.1 \\
& & \ \ \ \ +WAS & \textbf{2.8} & \textbf{6.7} \\
\cline{2-5}
& \xmark & Trf & 2.6 & 5.6 \\
\hline
\end{tabular}
\label{tab:cmp_libri}
\end{table}
Larger models with about 80M parameters are compared in the second block.
The augmented memory transformer shows gains consistent with those in the small configuration.
For further improvement,
the weak-attention suppression (WAS) \cite{yang2020weak} was applied on top of the proposed model, denoted by "+WAS".
Compared with the LC-BLSTM baseline,
the augmented memory transformer (with WAS) achieved 15\%-18\% relative error reduction on the two test sets.
At the time of writing, this is the best result that we are aware of on LibriSpeech for streamable models.
\subsection{Video ASR}
To evaluate the model under more challenging acoustic conditions, our in-house Russian and German video ASR datasets were used.
The videos in these datasets were originally shared publicly by users; only the audio part of the videos is used in our experiments. The data are completely de-identified; neither transcribers nor researchers have access to any user-identifiable information.
For the Russian task,
the training data consisted of 1.8K hours from 100K video clips.
14.6 hours of audio (790 video clips) were used as validation data.
Two test sets were used in evaluation:
the 11-hour \texttt{clean} (466 videos),
and 24-hour \texttt{noisy} (1.3K videos) sets.
For the German task,
the training data consisted of 3K hours of audio (135K videos).
The validation data was 14.5 hours (632 videos).
The test data were
the 25-hour \texttt{clean} (989 videos) and
24-hour \texttt{noisy} (1K videos) sets.
\vspace{-0.5em}
\begin{table}[htb]
\centering
\caption{Experiment results on our internal \emph{video ASR} tasks.}
\begin{tabular}{|c|l||cc|}
\hline
Language & Model & clean & noisy \\
\hline
\hline
\multirow{3}{*}{Russian} & LC-BLSTM & 19.8 & 24.4 \\
& AMTrf & \textbf{18.0} & \textbf{23.3} \\
\cline{2-4}
& Trf & 16.6 & 21.1 \\
\hline
\hline
\multirow{3}{*}{German} & LC-BLSTM & 19.6 & 19.5 \\
& AMTrf & \textbf{17.4} & \textbf{17.1} \\
\cline{2-4}
& Trf & 16.2 & 15.6 \\
\hline
\end{tabular}
\label{tab:video_asr}
\end{table}
\vspace{-0.5em}
The large network configuration, i.e. 80M-parameter models, was examined.
The training of all the models was performed in a similar fashion as presented in Section~\ref{sec:libri_setup} (large configuration).
Table~\ref{tab:video_asr} summarizes the decoding results.
On both languages,
the proposed model consistently outperformed the LC-BLSTM baseline, by 9--11\% on the clean test sets and 5--12\% on the noisy test sets. There is still an accuracy gap compared with the transformer, which has access to the whole utterance.
\vspace{-0.5em}
\section{Conclusions}
In this work, we proposed the augmented memory transformer for streaming transformer-based speech recognition.
It processes sequence data incrementally using short segments and an augmented memory, and thus has potential for latency-constrained tasks.
On LibriSpeech, the proposed model outperformed LC-BLSTM and all existing streamable transformer baselines. An initial study on the more challenging Russian and German video datasets led to similar conclusions.
In this paper, latency was measured algorithmically, i.e.\ by the look-ahead window size;
in future work, we will investigate the real latency and measure the throughput of this model.
The proposed method can also be applied to transformer transducers \cite{zhang2020transformer, yeh2019transformer} or transformer-based sequence-to-sequence models (e.g. \cite{mohamed2019transformers, karita2019comparative}).
\bibliographystyle{IEEEtran}
\section{Introduction}
Let~$\Sigma$ be a connected orientable $C^2$ hypersurface
(compact or non-compact) in~$\Real^d$, with $d \geq 2$,
equipped with the Riemannian metric~$g$ induced by the embedding.
The orientation is specified by a globally defined
unit normal vector field $n:\Sigma\to \Sphere^{d-1}$.
Given a small positive parameter~$\eps$,
we consider the tubular neighbourhood
\begin{equation}\label{layer.intro}
\Omega_\eps := \big\{x+\eps\,t\,n(x) \in \Real^d \ \big| \
(x,t) \in \Sigma \times (0,1) \big\}
\,.
\end{equation}
We always assume that the map $(x,t) \mapsto x+\eps\,t\,n(x)$
is injective on $\overline{\Sigma} \times [0,1]$;
in particular, we require that the principal curvatures of~$\Sigma$,
$\kappa_1,\dots,\kappa_{d-1}$, are bounded functions.
Let $-\Delta_\textit{DN}^{\Omega_\eps}$ be the Laplacian on~$\Omega_\eps$,
subject to Dirichlet and Neumann boundary conditions
on~$\Sigma$ and $\Sigma_\eps:=\Sigma+\eps\,n(\Sigma)$, respectively.
If the boundary~$\partial\Sigma$ is not empty,
we impose Dirichlet boundary conditions on the remaining
part of~$\partial\Omega_\eps$.
We arrange the eigenvalues below the essential spectrum
of $-\Delta_\textit{DN}^{\Omega_\eps}$
in an increasing order and repeat them according to multiplicity,
$
\lambda_1(\eps) \leq \lambda_2(\eps) \leq \lambda_3(\eps) \leq \dots
$,
with the convention that all eigenvalues are included
if the essential spectrum is empty.
In fact, we make the sequence always infinite by defining
$\lambda_n := \inf\sigma_\mathrm{ess}(-\Delta_\textit{DN}^{\Omega_\eps})$
for all $n>N$, if the number of eigenvalues below the essential spectrum
is a finite (possibly zero) natural number~$N$.
The objective of this paper is to show that
the $d$-dimensional differential operator $-\Delta_\textit{DN}^{\Omega_\eps}$
can be approximated in the limit as $\eps \to 0$
by the $(d-1)$-dimensional Schr\"odinger-type operator
\begin{equation}\label{op.comparison}
H_\eps := -\Delta_g + \frac{\kappa}{\eps}
\qquad \mbox{on} \qquad
\sii(\Sigma)
\,.
\end{equation}
Here~$-\Delta_g$ denotes the Laplace-Beltrami operator of~$\Sigma$,
subject to Dirichlet boundary conditions if~$\partial\Sigma$ is not empty,
and
$
\kappa := \kappa_1+\dots+\kappa_{d-1}
$
is $(d-1)$ times the mean curvature of~$\Sigma$.
Note that the sign of~$\kappa$ depends on the choice of orientation~$n$,
that is on the direction in which the parallel surface~$\Sigma_\eps$
is constructed with respect to~$\Sigma$,
\cf~Figure~\ref{Fig}.
We arrange the eigenvalues below the essential spectrum
of the operator~$H_\eps$ using the same conventions as above,
$
\mu_1(\eps) \leq \mu_2(\eps) \leq \mu_3(\eps) \leq \dots
$.
In this paper we establish the following spectral asymptotics:
\begin{Theorem}\label{Thm.main}
For all $n \geq 1$,
\begin{equation}\label{expansion}
\lambda_n(\eps)
= \left(\frac{\pi}{2\eps}\right)^2 + \mu_n(\eps) + \mathcal{O}(1)
\qquad \mbox{as} \qquad
\eps \to 0
\,.
\end{equation}
\end{Theorem}
This asymptotic expansion was proved previously
by the author for~$d=2$ in~\cite{K5}.
Moreover, some form of norm-resolvent convergence of
$-\Delta_\textit{DN}^{\Omega_\eps}$ to~$H_\eps$ was established
and the result~\eqref{expansion} for $d=3$ was announced there.
In the present paper we extend the validity of formula~\eqref{expansion}
to any dimension and provide some details of the variational proof
which were missing in~\cite{K5}.
Using known results about the strong-coupling/semiclassical asymptotics
of eigenvalues of the Schr\"odinger-type operator~\eqref{op.comparison},
one has, for all $n \geq 1$,
\begin{equation}\label{strong}
\mu_n(\eps)
= \frac{\inf\kappa}{\eps} + o(\eps^{-1})
\qquad\mbox{as}\qquad
\eps \to 0
\,.
\end{equation}
This result seems to be well known;
we refer to~\cite[App.~A]{FK1} for a proof in a general Euclidean case,
which extends to the present situation.
Combining~\eqref{expansion} with~\eqref{strong},
we see that the two leading terms
in the $\eps$-expansion of~$\lambda_n(\eps)$ are independent of~$n$.
Furthermore, the geometry of~$\Omega_\eps$ is seen
in these terms only \emph{locally},
through the minimal value of the mean curvature of~$\Sigma$.
In view of the leading role of the mean curvature~$\kappa$
in the surface element of~$\Sigma_\eps$, \cf~\eqref{h.fomula},
we see that the minimal values of the mean curvature on~$\Sigma$
correspond to points at which, roughly, the Neumann boundary has
``locally the largest area'' with respect to the opposite Dirichlet one;
see also~Figure~\ref{Fig}.
The results \eqref{expansion}--\eqref{strong} are thus consistent
with the physical intuition that ``Dirichlet conditions raise energies
and Neumann conditions lower energies''.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\textwidth]{fig.eps}
\vspace{-25ex}
\end{center}
\caption{The geometry of the tubular neighbourhood~$\Omega_\eps$
for $d=3$.}\label{Fig}
\end{figure}
The particular form of the thin-width expansions~\eqref{expansion}
has important
physical consequences for spectral properties of quantum waveguides
as explained in~\cite{K5}.
Let us also mention that the local character resembles
situations of Dirichlet tubes of variable radius
\cite{Friedlander-Solomyak_2007,Friedlander-Solomyak_2008a,
Borisov-Freitas_2009,Borisov-Freitas_2010,Lampart-Teufel-Wachsmuth}.
The case of Neumann or Dirichlet tubes of uniform radius
differs from the present situation in many respects.
Let us denote by $\{\lambda_n^N(\eps)\}_{n=1}^\infty$
and $\{\lambda_n^D(\eps)\}_{n=1}^\infty$
the set of eigenvalues below the essential spectrum
of the Neumann and Dirichlet Laplacian
on $\sii(\Omega_\eps)$, respectively,
with the same conventions as used above for $\{\lambda_n(\eps)\}_{n=1}^\infty$.
The case of the Neumann Laplacian is trivial in the sense that
its spectrum is known to converge to the spectrum
of the underlying manifold~$\Sigma$, \cf~\cite{Schatzman_1996}.
More precisely,
\begin{equation}\label{expansion.Neumann}
\lambda_n^N(\eps)
= 0 + \mu_n^N + o(1)
\qquad \mbox{as} \qquad
\eps \to 0
\,,
\end{equation}
where $\{\mu_n^N\}_{n=1}^\infty$ is the set of eigenvalues
below the essential spectrum (with the aforementioned conventions)
of the Laplace-Beltrami operator~$-\Delta_g$ on~$\sii(\Sigma)$,
subject to Neumann boundary conditions on~$\partial\Sigma$.
In order to consistently compare~\eqref{expansion.Neumann}
with~\eqref{expansion} (and~\eqref{expansion.Dirichlet} below),
we included into~\eqref{expansion.Neumann}
the vanishing lowest Neumann eigenvalue
of the transverse interval~$(0,\eps)$
and will refer to~$\mu_n^N$ as the ``second term''
in the expansion of~$\lambda_n^N(\eps)$.
In the Dirichlet case, we have~\cite{KRT}
\begin{equation}\label{expansion.Dirichlet}
\lambda_n^D(\eps)
= \left(\frac{\pi}{\eps}\right)^2 + \mu_n^D + \mathcal{O}(1)
\qquad \mbox{as} \qquad
\eps \to 0
\,,
\end{equation}
where $\{\mu_n^D\}_{n=1}^\infty$ is the set of eigenvalues
below the essential spectrum
(again with the aforementioned conventions)
of the Schr\"odinger-type operator
$-\Delta_g + V_\mathrm{eff}$ on~$\sii(\Sigma)$,
subject to Dirichlet boundary conditions on~$\partial\Sigma$.
Here~$V_\mathrm{eff}$ is a purely geometric, $\eps$-independent potential,
expressed solely in terms of the principal curvatures,
\begin{equation}\label{V.eff}
V_{\mathrm{eff}}
:=-\frac{\kappa_1^2+\dots+\kappa_{d-1}^2}{2}
+\frac{(\kappa_1+\dots+\kappa_{d-1})^2}{4}
\,.
\end{equation}
Summing up, contrary to Theorem~\ref{Thm.main},
in the purely Neumann or Dirichlet case
the second term in the asymptotic expansion of eigenvalues
is independent of~$\eps$
and determined by the \emph{global} geometry of~$\Sigma$.
In addition to this introductory part,
the paper consists of Section~\ref{Sec.Pre},
in which we collect some auxiliary material,
and Section~\ref{Sec.proof} devoted
to the proof of Theorem~\ref{Thm.main}.
\section{Preliminaries}\label{Sec.Pre}
We refer to~\cite{KRT} for a necessary geometric background
of tubes about hypersurfaces.
Using the Fermi ``coordinates''~$(x,t)$ that appear in~\eqref{layer.intro},
$\Omega_\eps$~can be identified with the Riemannian manifold $\Sigma\times(0,1)$
equipped with the metric~$G$ of the following block-diagonal structure
$G = G_{\mu\nu} \, dx^\mu dx^\nu + \eps^2 dt^2$.
Here the range of Greek indices is assumed to be $1,\dots,d-1$
and the Einstein summation convention is employed.
We shall not need the explicit formulae for the coefficients~$G_{\mu\nu}$,
just the bounds:
\begin{equation}\label{eq:metric_bound}
(1-C\eps)(g_{\mu\nu})\leq(G_{\mu\nu})\leq (1+C\eps)(g_{\mu\nu})
\,.
\end{equation}
(Of course, we implicitly assume that~$\eps$ is so small
that $1-C\eps$ is positive.)
Here and in the sequel, we adopt the convention that~$C, c$
and the constants involved in the ``big~$\mathcal{O}$'' notation
possibly depend on the supremum norm of the principal curvatures
$\kappa_1,\dots,\kappa_{d-1}$ and may vary from line to line.
On the other hand, we shall need the formula
for the determinant $|G| = \varepsilon^2 \, |g| \, h_\eps^2$,
where
\begin{equation}\label{h.fomula}
h_\eps(\cdot,t) := \prod_{\mu=1}^{d-1}(1-\varepsilon \, \kappa_{\mu} \, t)
= 1 - \eps \, \kappa \, t + \mathcal{O}(\eps^2)
\,.
\end{equation}
The volume element of~$\big(\Sigma\times(0,1),G\big)$
is thus given by $d\Omega_\eps = \eps \, h_\eps \, d\Sigma \wedge dt$,
where $d\Sigma = |g|^{1/2} dx^1 \wedge \dots \wedge dx^{d-1}$
is the surface element of $(\Sigma,g)$.
Using the above geometric preliminaries,
the Hilbert space $\sii(\Omega_\eps)$ can be identified with
$
\mathcal{H}_\eps :=
\sii\big(\Sigma\times(0,1),\eps \, h_\eps \, d\Sigma \wedge dt\big)
$.
The Laplacian $-\Delta_\textit{DN}^{\Omega_\eps}$
can be in turn identified with the self-adjoint operator
on $\mathcal{H}_\eps$ associated with the quadratic form
\begin{align*}
Q_\eps[\psi]
&:= \big\langle
\partial_{x^\mu}\psi,G^{\mu\nu}\partial_{x^\nu}\psi
\big\rangle_{\mathcal{H}_\eps}
+ \eps^{-2} \|\partial_t\psi\|_{\mathcal{H}_\eps}^2
\,,
\\
\psi \in \Dom(Q_\eps) &:=
\left\{
\psi \in W^{1,2}\big(\Sigma \times (0,1)\big) \ | \quad
\psi = 0 \quad \mbox{on} \quad
\partial\big(\Sigma \times (0,1)\big) \setminus \big(\Sigma\times\{1\}\big)
\right\}
\,.
\end{align*}
Here the boundary values of~$\psi$ are understood in the sense of traces.
Similarly, the operator~$H_\eps$ is associated with the form
\begin{align*}
q_\eps[\varphi]
&:= \big\langle
\partial_{x^\mu}\varphi,g^{\mu\nu}\partial_{x^\nu}\varphi
\rangle_{\sii(\Sigma)}
+ \eps^{-1} \langle
\varphi,\kappa\varphi
\rangle_{\sii(\Sigma)}
\,,
\\
\varphi \in \Dom(q_\eps) &:= W_0^{1,2}(\Sigma)
\,.
\end{align*}
The spectral numbers $\{\lambda_n(\eps)\}_{n=1}^\infty$ as defined above
can be fully characterised by the Rayleigh-Ritz variational
formula~\cite[Sec.~4.5]{Davies}
\begin{equation}\label{minimax}
\lambda_n(\eps) =
\inf_{\mathcal{L}_n}
\sup_{\psi \in \mathcal{L}_n}
\frac{Q_\eps[\psi]}{\ \|\psi\|_{\mathcal{H}_\eps}^2}
\,,
\end{equation}
where the infimum is taken over all $n$-dimensional subspaces
$\mathcal{L}_n\subset\Dom(Q_\eps)$.
An analogous formula holds for the spectral numbers
$\{\mu_n(\eps)\}_{n=1}^\infty$ of~$H_\eps$.
It follows from~\eqref{minimax} that the presence
of the multiplicative factor~$\eps$ in the weight of~$\mathcal{H}_\eps$
has no effect on the spectrum of~$-\Delta_\textit{DN}^{\Omega_\eps}$.
Our strategy to prove Theorem~\ref{Thm.main} will be to show
that the forms~$Q_\eps$ and~$q_\eps$ are close to each other
in a sense as $\eps \to 0$.
Since the forms act on different Hilbert spaces,
this requires a suitable identification of~$\mathcal{H}_\eps$ with~$\sii(\Sigma)$.
First, notice that it follows from~\eqref{h.fomula} that~$\mathcal{H}_\eps$
(up to the irrelevant factor~$\eps$)
approaches the $\eps$-independent Hilbert space
$
\mathfrak{H} :=
\sii\big(\Sigma\times(0,1), d\Sigma \wedge dt\big)
$.
For this Hilbert space, we use the orthogonal-sum decomposition
\begin{equation}\label{direct}
\mathfrak{H} = \mathfrak{H}_1 \oplus \mathfrak{H}_1^\bot
\,,
\end{equation}
where the subspace~$\mathfrak{H}_1$ consists of functions~$\psi_1$
such that
\begin{equation}\label{psi1}
\psi_1(x,t) = \varphi(x) \chi_1(t)
\qquad \mbox{with} \qquad
\varphi \in \sii(\Sigma)
\,, \quad
\chi_1(t):=\sqrt{2} \sin\left(\pi t/2\right)
\,.
\end{equation}
Notice that~$\chi_1$ is the first eigenfunction of
the Laplacian on $\sii((0,1))$,
subject to the Dirichlet and Neumann boundary condition
at~$0$ and~$1$, respectively.
This operator has eigenvalues $\{((2n-1)\,\pi/2)^2\}_{n=1}^\infty$,
where the lowest one is of course related
to the leading term in~\eqref{expansion}.
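For completeness, let us record the elementary computation behind this claim.
The Dirichlet condition at~$0$ forces $\chi(t) = \sin(\sqrt{\lambda}\,t)$
up to normalisation, and the Neumann condition at~$1$ then requires
\begin{equation*}
  \chi'(1) = \sqrt{\lambda}\,\cos\sqrt{\lambda} = 0
  \qquad\Longleftrightarrow\qquad
  \sqrt{\lambda} = \frac{(2n-1)\,\pi}{2}
  \,, \quad n \geq 1
  \,,
\end{equation*}
which yields the spectrum stated above; for $n=1$ one recovers~$\chi_1$
and the eigenvalue $(\pi/2)^2$.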
Since~$\chi_1$ is normalised, we clearly have
$
\|\psi_1\|_{\mathfrak{H}}=\|\varphi\|_{\sii(\Sigma)}
$.
Given any $\psi\in\mathfrak{H}$, we have the decomposition
\begin{equation}\label{psi.decomposition}
\psi = \psi_1 + \psi_\bot
\qquad\mbox{with}\qquad
\psi_1 \in \mathfrak{H}_1, \ \psi_\bot\in \mathfrak{H}_1^\bot
\,,
\end{equation}
where~$\psi_1$ has the form~\eqref{psi1}
with $\varphi(x):=\int_0^1 \psi(x,t) \chi_1(t) \;\! dt$.
Note that $\psi_1,\psi_\bot\in\Dom(Q_\eps)$ if $\psi\in\Dom(Q_\eps)$.
The inclusion $\psi_\bot\in\mathfrak{H}_1^\bot$ means that
\begin{equation}\label{orth.identity1}
\int_0^1 \psi_\bot(x,t) \, \chi_1(t) \, dt = 0
\qquad\mbox{for a.e.}\quad x \in \Sigma
\,.
\end{equation}
If in addition $\psi_\bot \in \Dom(Q_\eps)$,
then one can differentiate the last identity to get
\begin{equation}\label{orth.identity2}
\int_0^1 \partial_{x^\mu}\psi_\bot(x,t) \, \chi_1(t) \, dt = 0
\qquad\mbox{for a.e.}\quad x \in \Sigma
\,.
\end{equation}
Since $\mathcal{H}_\eps$ and~$\mathfrak{H}$ can be identified
as vector spaces for any fixed~$\eps>0$,
the decomposition~\eqref{direct}
can be equally used for each function~$\psi \in \mathcal{H}_\eps$.
In view of the isomorphism
$
\sii(\Sigma)\ni\varphi \mapsto \psi_1 \in \mathfrak{H}_1
$,
we may think of~$H_\eps$ as acting on~$\mathfrak{H}_1$ as well.
\section{Proof of Theorem~\ref{Thm.main}}\label{Sec.proof}
Expansion~\eqref{expansion} will follow as a consequence
of upper and lower bounds to~$\lambda_n(\eps)$
that have the same leading order terms in their asymptotics.
It is convenient to define the shifted form
$\tilde{Q}_\eps:=Q_\eps-\pi^2/(2\eps)^2$
and focus on the first non-trivial term~$\mu_n(\eps)$ in~\eqref{expansion}.
Let us decompose any $\psi \in \Dom(Q_\eps)$
according to~\eqref{psi.decomposition}.
A straightforward calculation employing an integration by parts yields
\begin{equation}\label{crucial}
\begin{aligned}
\|\partial_t\psi\|_{\mathcal{H}_\eps}^2
- \left(\frac{\pi}{2}\right)^2 \|\psi\|_{\mathcal{H}_\eps}^2
=\ & \|\partial_t\psi_\bot\|_{\mathcal{H}_\eps}^2
- \left(\frac{\pi}{2}\right)^2 \|\psi_\bot\|_{\mathcal{H}_\eps}^2
- 2 \eps \, \Re \int \overline{\varphi}\, \chi_1' \,
\psi_\bot \, \partial_t h_\eps
\\
\ & + \frac{\eps}{2} \int |\varphi|^2 \, \chi_1^2 \, \partial_t^2 h_\eps
- \eps \int_\Sigma |\varphi|^2 \, \partial_t h_\eps |_{t=1}
\,.
\end{aligned}
\end{equation}
Here and in the sequel,
$\int$ and $\int_\Sigma$
abbreviate the integrals over~$\Sigma\times(0,1)$ and $\Sigma$
with the integration measures~$d\Sigma \wedge dt$ and $d\Sigma$, respectively,
and we do not write the variables on which the integrated functions depend.
Using~\eqref{h.fomula} and recalling that~$\chi_1$ is normalised,
we easily verify
\begin{equation}\label{varphi.estimates}
\begin{aligned}
\left|
\frac{1}{\eps^2} \int |\varphi|^2 \, \chi_1^2 \, \partial_t^2 h_\eps
\right|
\leq C \int_\Sigma |\varphi|^2
\,,
\\
\left|
- \frac{1}{\eps^2} \int_\Sigma |\varphi|^2 \, \partial_t h_\eps |_{t=1}
- \eps^{-1} \big\langle
\varphi,\kappa\varphi
\rangle_{\sii(\Sigma)}
\right|
\leq C \int_\Sigma |\varphi|^2
\,,
\end{aligned}
\end{equation}
which reveals the source of the potential term of~\eqref{op.comparison}.
At the same time, using~\eqref{eq:metric_bound},
\begin{equation}\label{kinetic.estimates}
\begin{aligned}
\pm \, \eps^{-1} \big\langle
\partial_{x^\mu}\psi,G^{\mu\nu}\partial_{x^\nu}\psi
\big\rangle_{\mathcal{H}_\eps}
&\leq \pm (1 \pm C\eps) \,
\big\langle
\partial_{x^\mu}\psi,g^{\mu\nu}\partial_{x^\nu}\psi
\big\rangle_{\mathfrak{H}}
\,,
\\
\pm \, \eps^{-1} \|\psi\|_{\mathcal{H}_\eps}^2
&\leq \pm (1 \pm C\eps) \, \|\psi\|_{\mathfrak{H}}^2
\,.
\end{aligned}
\end{equation}
Here, by the normalisation of~$\chi_1$
and \eqref{orth.identity1}--\eqref{orth.identity2},
\begin{equation}\label{orth.identities}
\begin{aligned}
\big\langle
\partial_{x^\mu}\psi,g^{\mu\nu}\partial_{x^\nu}\psi
\big\rangle_{\mathfrak{H}}
&= \big\langle
\partial_{x^\mu}\varphi,g^{\mu\nu}\partial_{x^\nu}\varphi
\big\rangle_{\sii(\Sigma)}
+ \big\langle
\partial_{x^\mu}\psi_\bot,g^{\mu\nu}\partial_{x^\nu}\psi_\bot
\big\rangle_{\mathfrak{H}}
\,,
\\
\|\psi\|_{\mathfrak{H}}^2
&= \|\varphi\|_{\sii(\Sigma)}^2 + \|\psi_\bot\|_{\mathfrak{H}}^2
\,.
\end{aligned}
\end{equation}
\subsection{Upper bound}
Let us restrict the subspaces~$\mathcal{L}_n$ in the formula~\eqref{minimax}
to the decoupled functions~\eqref{psi1}, where $\varphi \in \Dom(q_\eps)$.
Using~\eqref{crucial}--\eqref{orth.identities} with $\psi_\bot=0$,
we get the upper bound
\begin{equation}\label{upper.pre}
\frac{\tilde{Q}_\eps[\psi_1]}{\ \|\psi_1\|_{\mathcal{H}_\eps}^2}
\leq \frac{(1+C\eps) \, q_\eps[\varphi]+C \, \|\varphi\|_{\sii(\Sigma)}^2}
{(1-C\eps) \, \|\varphi\|_{\sii(\Sigma)}^2}
\,,
\end{equation}
which yields
\begin{equation}\label{upper}
\lambda_n(\eps) - \left(\frac{\pi}{2\eps}\right)^2
\leq \frac{1+C\eps}{1-C\eps} \, \mu_n(\eps) + \frac{C}{1-C\eps}
\,.
\end{equation}
Observing that, for each~$n \geq 1$,
\begin{equation}\label{mu.bounds}
- \|\kappa\|_\infty \leq
\eps \, \nu_n - \|\kappa\|_\infty
\leq \eps \, \mu_n(\eps) \leq
\eps \, \nu_n + \|\kappa\|_\infty
\,,
\end{equation}
where~$\nu_n$ are the ``eigenvalues'' of~$-\Delta_g$ as defined by~\eqref{minimax},
we conclude from~\eqref{upper} the desired asymptotic upper bound
\begin{equation}\label{upper.final}
\lambda_n(\eps)
\leq \left(\frac{\pi}{2\eps}\right)^2 + \mu_n(\eps) + \mathcal{O}(1)
\qquad \mbox{as} \qquad \eps \to 0
\,.
\end{equation}
It is worth noticing that the constant~$C$ in~\eqref{upper}
does not depend on~$n$; a possible $n$-dependence of the constants
appearing in~$\mathcal{O}(1)$ enters only through
the upper bound of~\eqref{mu.bounds}.
\subsection{Lower bound}
As usual, lower bounds are more difficult to establish.
In our situation, we need to carefully
exploit the Hilbert-space decomposition~\eqref{direct}.
Since $\psi_\bot \in \mathfrak{H}_1^\bot$, we have
$
\int_0^1 |\partial_t \psi_\bot(x,t)|^2 \, dt
\geq \pi^2 \int_0^1 |\psi_\bot(x,t)|^2 \, dt
$
for a.e.\ $x \in \Sigma$. This Poincar\'e-type estimate
extends to~$\mathfrak{H}$ by Fubini's theorem.
Hence, using~\eqref{eq:metric_bound}
to estimate~$h_\eps$ in~$d\Omega_\eps$,
we get
\begin{equation*}
\begin{aligned}
\eps^{-2} \, \|\partial_t\psi_\bot\|_{\mathcal{H}_\eps}^2
- \left(\frac{\pi}{2\eps}\right)^2 \|\psi_\bot\|_{\mathcal{H}_\eps}^2
\geq
\eps \left[
\left(\frac{3\pi^2}{4\eps^2}\right)
- C \left(\frac{5\pi^2}{4\eps}\right)
\right]
\|\psi_\bot\|_\mathfrak{H}^2
\geq
\eps \, \frac{c}{\eps^2} \, \|\psi_\bot\|_\mathfrak{H}^2
\,,
\end{aligned}
\end{equation*}
where the second inequality holds with a positive constant~$c$
for all sufficiently small~$\eps$.
Using~\eqref{h.fomula} and the Young inequality,
the remaining mixed term on the right-hand side of~\eqref{crucial}
can be estimated as follows
\begin{equation*}
\begin{aligned}
\frac{1}{\eps^2} \left|
2 \, \Re \int \overline{\varphi}\, \chi_1' \,
\psi_\bot \, \partial_t h_\eps
\right|
& \leq \frac{C}{\eps} \,
2 \int |\varphi \, \chi_1' \,
\psi_\bot |
\leq
\frac{C^2}{\delta} \|\varphi \, \chi_1'\|_\mathfrak{H}^2
+ \frac{\delta}{\eps^2} \|\psi_\bot\|_\mathfrak{H}^2
\end{aligned}
\end{equation*}
with any positive~$\delta$.
Here
$
\|\varphi \, \chi_1'\|_\mathfrak{H}
= (\pi/2) \;\! \|\varphi\|_{\sii(\Sigma)}
$.
Choosing~$\delta$ sufficiently small
and using \eqref{crucial}--\eqref{orth.identities},
we thus get the lower bound
\begin{equation}\label{lower.pre}
\frac{\tilde{Q}_\eps[\psi]}{\ \|\psi\|_{\mathcal{H}_\eps}^2}
\geq \frac{(1-C\eps) \, q_\eps[\varphi]-C \, \|\varphi\|_{\sii(\Sigma)}^2
+c \, \eps^{-2} \;\! \|\psi_\bot\|_\mathfrak{H}^2}
{(1+C\eps) \, \big(\|\varphi\|_{\sii(\Sigma)}^2
+\|\psi_\bot\|_{\mathfrak{H}}^2\big)}
\,.
\end{equation}
Here the numerator is in fact the quadratic form of
an operator direct sum
$
T_\eps \oplus T_\eps^\bot
$
with respect to the decomposition~\eqref{direct},
where $T_\eps := (1-C\eps) \, H_\eps - C$
and $T_\eps^\bot := c \, \eps^{-2}$.
In view of~\eqref{mu.bounds}, the spectrum of~$T_\eps^\bot$
diverges faster as $\eps \to 0$ than that of~$T_\eps$.
This enables us to conclude from~\eqref{lower.pre}
with help of~\eqref{minimax} that for any $n \geq 1$
there exist $C,c$ such that for all $\eps \leq c$, we have
\begin{equation}\label{lower}
\lambda_n(\eps) - \left(\frac{\pi}{2\eps}\right)^2
\geq \frac{1-C\eps}{1+C\eps} \, \mu_n(\eps) - \frac{C}{1+C\eps}
\,.
\end{equation}
Using~\eqref{mu.bounds}, we conclude from~\eqref{lower}
the desired asymptotic lower bound
\begin{equation}\label{lower.final}
\lambda_n(\eps)
\geq \left(\frac{\pi}{2\eps}\right)^2 + \mu_n(\eps) + \mathcal{O}(1)
\qquad \mbox{as} \qquad \eps \to 0
\,.
\end{equation}
Combining~\eqref{lower.final} with~\eqref{upper.final},
we complete the proof of Theorem~\ref{Thm.main}.
\section{Introduction}
Simulations enable researchers of all fields to run virtual experiments that are too expensive or impossible to be carried out in the real world. In many contexts, high-fidelity models are indispensable to represent the simulated process accurately. These high-fidelity simulations typically come with the burden of large computational cost such that an application in real-time or an evaluation for many different parameters is impossible with the computational resources at hand. Model order reduction (MOR) techniques can be used to reduce the computational cost of evaluating the high-fidelity model by approximating it with a surrogate reduced-order model (ROM) \cite{LuminyBook2017}.
One class of high-fidelity models are systems of ordinary differential equations (ODEs) with a high order, i.e.\ a high dimension in the unknown variable. Such models typically arise from fine discretizations of time-dependent partial differential equations (PDEs). Since each point in the discretization requires one or multiple unknowns, fine discretizations with many discretization points yield a system of ODEs with a high order. In some cases, the ODE system takes the form of a finite-dimensional Hamiltonian system. Examples are linear elastic models \cite{Buchfink2018} or gyro systems \cite{Xu2005}.
Symplectic MOR \cite{Peng2016} allows to derive a ROM for high-dimensional Hamiltonian systems by lowering the order of the system while maintaining the Hamiltonian structure. Thus, it is also referred to as structure-preserving MOR for Hamiltonian systems \cite{Maboudi2017}. Technically speaking, a Petrov--Galerkin projection is used in combination with a symplectic reduced-order basis (ROB).
For a data-driven generation of the ROB, conventional methods, e.g.\ the Proper Orthogonal Decomposition (POD) \cite{LuminyBook2017}, are not suited since they do not necessarily compute a symplectic ROB. To this end, the referenced works introduce the Proper Symplectic Decomposition (PSD), which is a data-driven basis generation technique for symplectic ROBs. Due to the high nonlinearity of the optimization problem, an efficient solution strategy for the PSD is yet unknown. The existing PSD methods (Cotangent Lift, Complex SVD, a nonlinear programming approach \cite{Peng2016} and a greedy procedure introduced in \cite{Maboudi2017}) each restrict to a specific subset of symplectic ROBs from which they select optimal solutions which might be globally suboptimal.
The present paper classifies the existing symplectic basis generation techniques into two classes of methods which generate either orthonormal or non-orthonormal bases. To this end, we show that the existing basis generation techniques for symplectic bases almost exclusively restrict to orthonormal bases. Furthermore, we prove that the Complex SVD is the optimal solution of the PSD on the set of orthonormal, symplectic bases. During the proof, an alternative formulation of the Complex SVD for symplectic matrices is introduced. To leave the class of orthonormal, symplectic bases, we propose a new basis generation technique, namely the PSD SVD-like decomposition. It is based on an SVD-like decomposition of arbitrary matrices $\ensuremath{\bm{B}} \in \ensuremath{\mathbb{R}}^{n \times 2m}$ introduced in \cite{Xu2003}.
This paper is organized in the following way: \Cref{sec:MORAutoHamSys} is devoted to the structure-preserving MOR for autonomous and non-autonomous, parametric Hamiltonian systems and thus, introduces symplectic geometry, Hamiltonian systems and symplectic MOR successively. The data-driven generation of a symplectic ROB with PSD is discussed in \Cref{subsec:PSD}. The numerical results are presented and elaborated in \Cref{sec:NumRes} exemplified by a Lam\'e--Navier type elasticity model which we introduce at the beginning of that section together with a short comment on the software that is used for the experiments. The paper is summarized and concluded in \cref{sec:Conclusion}.
\section{Symplectic model reduction}
\label{sec:MORAutoHamSys}
Symplectic MOR for autonomous Hamiltonian systems is introduced in \cite{Peng2016}. We repeat the essentials for the sake of completeness and to provide a deeper understanding of the methods used. In the following, $\ensuremath{\bm{\mu}} \in \mathcal{P} \subset \ensuremath{\mathbb{R}}^p$ denotes a vector of $p \in \ensuremath{\mathbb{N}}$ parameters of the system from the parameter set $\mathcal{P}$. We might omit the explicit dependence on the parameter vector $\ensuremath{\bm{\mu}}$ if it is not relevant in the specific context.
\subsection{Symplectic geometry in finite dimensions}
\label{subsec:SymplGeo}
\begin{Definition}[Symplectic form over $\ensuremath{\mathbb{R}}$]
Let $\ensuremath{\mathbb{V}}$ be a finite-dimensional vector space over $\ensuremath{\mathbb{R}}$. We consider a skew-symmetric and non-degenerate bilinear form $\symplForm: \ensuremath{\mathbb{V}} \times \ensuremath{\mathbb{V}} \rightarrow \ensuremath{\mathbb{R}}$ , i.e.\ for all $\ensuremath{\bm{v}}_1,\ensuremath{\bm{v}}_2 \in \ensuremath{\mathbb{V}}$, it holds
\begin{align*}
&\symplFormb{\ensuremath{\bm{v}}_1}{\ensuremath{\bm{v}}_2} = -\symplFormb{\ensuremath{\bm{v}}_2}{\ensuremath{\bm{v}}_1}&
&\quad\text{and}\quad&
&\symplFormb{\ensuremath{\bm{v}}_2}{\ensuremath{\bm{v}}_3} = 0 \quad \forall \ensuremath{\bm{v}}_3 \in \ensuremath{\mathbb{V}}
\implies \ensuremath{\bm{v}}_3 = \ensuremath{\bm{0}}.
\end{align*}
The bilinear form $\symplForm$ is called symplectic form on $\ensuremath{\mathbb{V}}$ and the pair $(\ensuremath{\mathbb{V}}, \symplForm)$ is called symplectic vector space.
\end{Definition}
It can be shown that $\ensuremath{\mathbb{V}}$ is necessarily of even dimension \cite{daSilva2008}. Thus, $\ensuremath{\mathbb{V}}$ is isomorphic to $\ensuremath{\mathbb{R}}^{2n}$, which is why we restrict ourselves to $\ensuremath{\mathbb{V}} = \ensuremath{\mathbb{R}}^{2n}$ and write $\symplForm[2n]$ instead of $\symplForm$ in the following. In Hamiltonian theory, $\ensuremath{\mathbb{R}}^{2n}$ refers to the phase space which, in classical mechanics, consists of position states $\ensuremath{\bm{q}} = \rTsb{q_1, \dots, q_n} \in \ensuremath{\mathbb{R}}^n$ of the configuration space and momentum states $\ensuremath{\bm{p}} = \rTsb{p_1, \dots, p_n} \in \ensuremath{\mathbb{R}}^n$ which together form the state $\ensuremath{\bm{x}} = \rTsb{q_1, \dots, q_n, p_1, \dots, p_n} \in \ensuremath{\mathbb{R}}^{2n}$.
It is guaranteed \cite{daSilva2008} that there exists a basis $\left\{ \ensuremath{\bm{e}}_1, \dots, \ensuremath{\bm{e}}_n, \ensuremath{\bm{f}}_1, \dots, \ensuremath{\bm{f}}_n \right\} \subset \ensuremath{\mathbb{R}}^{2n}$ such that the symplectic form takes the canonical structure
\begin{align} \label{eq:CanonicalSymplForm}
&\symplFormb[2n]{\ensuremath{\bm{v}}_1}{\ensuremath{\bm{v}}_2} = \rT\ensuremath{\bm{v}}_1 \ensuremath{\Jtwo{n}} \ensuremath{\bm{v}}_2
\quad \forall \ensuremath{\bm{v}}_1, \ensuremath{\bm{v}}_2 \in \ensuremath{\mathbb{R}}^{2n}, \qquad&
&\ensuremath{\Jtwo{n}} :=
\begin{bmatrix}
\Z{n} & \I{n} \\
-\I{n} & \Z{n}
\end{bmatrix},
\end{align}
where $\I{n} \in \ensuremath{\mathbb{R}}^{n \times n}$ is the identity matrix, $\Z{n} \in \ensuremath{\mathbb{R}}^{n \times n}$ is the matrix of all zeros and $\ensuremath{\Jtwo{n}}$ is called Poisson matrix. Thus, we restrict to symplectic forms of the canonical structure in the following. For the Poisson matrix, it holds for any $\ensuremath{\bm{v}} \in \ensuremath{\mathbb{R}}^{2n}$
\begin{align}\label{eq:StructMat}
&\ensuremath{\Jtwo{n}} \ensuremath{\TJtwo{n}} = \I{2n},\quad&
&\ensuremath{\Jtwo{n}} \ensuremath{\Jtwo{n}} = \ensuremath{\TJtwo{n}} \ensuremath{\TJtwo{n}} = -\I{2n},\quad&
&\rT\ensuremath{\bm{v}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{v}} = 0.
\end{align}
These properties are intuitively understandable as the Poisson matrix is a $2n$-dimensional, $90^\circ$ rotation matrix and the matrix $-\I{2n}$ can be interpreted as a rotation by $180^{\circ}$ in this context.
\begin{Definition}[Symplectic map]
Let $A: \ensuremath{\mathbb{R}}^{2m} \rightarrow \ensuremath{\mathbb{R}}^{2n}$, $\ensuremath{\bm{y}} \mapsto \ensuremath{\bm{A}} \ensuremath{\bm{y}}$, $\ensuremath{\bm{A}} \in \ensuremath{\mathbb{R}}^{2n \times 2m}$ be a linear mapping for $n,m \in \ensuremath{\mathbb{N}}$ and $m\leq n$. We call $A$ a linear symplectic map and $\ensuremath{\bm{A}}$ a symplectic matrix with respect to $\symplForm[2n]$ and $\symplForm[2m]$ if
\begin{align} \label{eq:SymplMat}
\rT\ensuremath{\bm{A}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{A}} = \Jtwo{m}.
\end{align}
where $\symplForm[2m]$ is the canonical symplectic form on $\ensuremath{\mathbb{R}}^{2m}$ (and is equal to $\symplForm[2n]$ if $n=m$).
Let $\ensuremath{U} \subset \ensuremath{\mathbb{R}}^{2m}$ be an open set and $\ensuremath{\bm{g}}:\ensuremath{U} \rightarrow \ensuremath{\mathbb{R}}^{2n}$ a differentiable map on $\ensuremath{U}$. We call $\ensuremath{\bm{g}}$ a symplectic map if the Jacobian matrix $\ensuremath{\dd{\fy}}\ensuremath{\bm{g}}(\ensuremath{\bm{y}}) \in \ensuremath{\mathbb{R}}^{2n \times 2m}$ is a symplectic matrix for every $\ensuremath{\bm{y}} \in \ensuremath{U}$.
\end{Definition}
For a linear map, it is easy to check that the condition \cref{eq:SymplMat} is equivalent to the preservation of the symplectic form, i.e.\ for all $\ensuremath{\bm{v}}_1, \ensuremath{\bm{v}}_2 \in \ensuremath{\mathbb{R}}^{2m}$
\begin{align*}
\symplFormb[2n]{ \ensuremath{\bm{A}} \ensuremath{\bm{v}}_1 }{ \ensuremath{\bm{A}} \ensuremath{\bm{v}}_2 }
= \rT\ensuremath{\bm{v}}_1 \rT\ensuremath{\bm{A}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{A}} \ensuremath{\bm{v}}_2
= \rT\ensuremath{\bm{v}}_1 \Jtwo{m} \ensuremath{\bm{v}}_2
= \symplFormb[2m]{\ensuremath{\bm{v}}_1}{\ensuremath{\bm{v}}_2}.
\end{align*}
Now we give the definition of the so-called symplectic inverse which will be used in \Cref{subsec:SymplMOR}.
\begin{Definition}[Symplectic inverse]
For each symplectic matrix $\ensuremath{\bm{A}} \in \ensuremath{\mathbb{R}}^{2n \times 2m}$, we define the symplectic inverse
\begin{align}\label{eq:SymplInv}
\si\ensuremath{\bm{A}} = \TJtwo{m} \rT\ensuremath{\bm{A}} \ensuremath{\Jtwo{n}} \in \ensuremath{\mathbb{R}}^{2m \times 2n}.
\end{align}
\end{Definition}
The symplectic inverse $\si\ensuremath{\bm{A}}$ exists for every symplectic matrix and it holds the following inverse relation
\begin{align*}
\si{\ensuremath{\bm{A}}} \ensuremath{\bm{A}} = \TJtwo{m} \rT\ensuremath{\bm{A}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{A}} = \TJtwo{m} \Jtwo{m} = \I{2m}.
\end{align*}
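The defining property \cref{eq:SymplInv} is easily checked numerically. The following NumPy sketch builds a simple block-diagonal symplectic test matrix (an orthonormal basis lifted to positions and momenta, in the spirit of the Cotangent Lift mentioned in the introduction) and verifies $\si{\ensuremath{\bm{A}}} \ensuremath{\bm{A}} = \I{2m}$; this construction is only one convenient way to produce a symplectic matrix, not the only one.
\begin{verbatim}
import numpy as np

def J(n):                                   # canonical Poisson matrix J_{2n}
    Z, I = np.zeros((n, n)), np.eye(n)
    return np.block([[Z, I], [-I, Z]])

def sympl_inv(A, n, m):                     # A^+ = J_{2m}^T A^T J_{2n}
    return J(m).T @ A.T @ J(n)

rng = np.random.default_rng(0)
n, m = 50, 3
Phi = np.linalg.qr(rng.standard_normal((n, m)))[0]   # orthonormal n x m
A = np.block([[Phi, np.zeros((n, m))], [np.zeros((n, m)), Phi]])
assert np.allclose(A.T @ J(n) @ A, J(m))    # A is symplectic
assert np.allclose(sympl_inv(A, n, m) @ A, np.eye(2 * m))
\end{verbatim}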
\subsection{Finite-dimensional, autonomous Hamiltonian systems}
\label{subsec:AutoHam}
To begin with, we introduce the Hamiltonian system in a finite-dimensional, autonomous setting.
\begin{Definition}[Finite-dimensional, autonomous Hamiltonian system]
Let $\ensuremath{\mathcal{H}}:\ensuremath{\mathbb{R}}^{2n} \times \mathcal{P} \rightarrow \ensuremath{\mathbb{R}}$ be a scalar-valued function that we require to be continuously differentiable in the first argument and which we call Hamiltonian (function). Hamilton's equation is an initial value problem with the prescribed initial data $\ensuremath{t_{\mathrm{0}}} \in \ensuremath{\mathbb{R}}$, $\ensuremath{\fx_{\mathrm{0}}}(\ensuremath{\bm{\mu}}) \in \ensuremath{\mathbb{R}}^{2n}$ which describes the evolution of the solution $\ensuremath{\bm{x}}(t, \ensuremath{\bm{\mu}}) \in \ensuremath{\mathbb{R}}^{2n}$ for all $t \in \ftInterval, \; \ensuremath{\bm{\mu}} \in \mathcal{P}$ with
\begin{align}\label{eq:AutoHamEq}
\ensuremath{\dd{t}} \ensuremath{\bm{x}}(t, \ensuremath{\bm{\mu}}) =&\; \ensuremath{\Jtwo{n}} \ensuremath{\nabla_{\fx}} \ensuremath{\mathcal{H}}(\ensuremath{\bm{x}}(t, \ensuremath{\bm{\mu}}), \ensuremath{\bm{\mu}}) =: \ensuremath{\fX_{\Ham}}(\ensuremath{\bm{x}}(t, \ensuremath{\bm{\mu}}), \ensuremath{\bm{\mu}}), \quad&
\ensuremath{\bm{x}}(\ensuremath{t_{\mathrm{0}}}, \ensuremath{\bm{\mu}}) =&\; \ensuremath{\fx_{\mathrm{0}}}(\ensuremath{\bm{\mu}})
\end{align}
where $\ensuremath{\fX_{\Ham}}(\bullet, \ensuremath{\bm{\mu}})$ is called Hamiltonian vector field. The triple $(\ensuremath{\mathbb{V}}, \symplForm[2n], \ensuremath{\mathcal{H}})$ is referred to as Hamiltonian system. We denote the flow of a Hamiltonian system as the mapping $\flow: \ensuremath{\mathbb{R}}^{2n} \times \mathcal{P} \rightarrow \ensuremath{\mathbb{R}}^{2n}$ that evolves the initial state $\ensuremath{\fx_{\mathrm{0}}}(\ensuremath{\bm{\mu}}) \in \ensuremath{\mathbb{R}}^{2n}$ to the corresponding solution $\ensuremath{\bm{x}}(t, \ensuremath{\bm{\mu}};\; \ensuremath{t_{\mathrm{0}}}, \ensuremath{\fx_{\mathrm{0}}}(\ensuremath{\bm{\mu}}))$ of Hamilton's equation
\begin{align*}
\flow(\ensuremath{\fx_{\mathrm{0}}}, \ensuremath{\bm{\mu}}) := \ensuremath{\bm{x}}(t, \ensuremath{\bm{\mu}};\; \ensuremath{t_{\mathrm{0}}}, \ensuremath{\fx_{\mathrm{0}}}(\ensuremath{\bm{\mu}})),
\end{align*}
where $\ensuremath{\bm{x}}(t, \ensuremath{\bm{\mu}};\; \ensuremath{t_{\mathrm{0}}}, \ensuremath{\fx_{\mathrm{0}}}(\ensuremath{\bm{\mu}}))$ indicates that it is the solution with the initial data $\ensuremath{t_{\mathrm{0}}},\ensuremath{\fx_{\mathrm{0}}}(\ensuremath{\bm{\mu}})$.
\end{Definition}
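As a standard illustrative example (not specific to the referenced works), consider the harmonic oscillator Hamiltonian $\ensuremath{\mathcal{H}}(\ensuremath{\bm{x}}) = \frac{1}{2} \rT\ensuremath{\bm{x}} \ensuremath{\bm{x}}$. Then $\ensuremath{\nabla_{\fx}} \ensuremath{\mathcal{H}}(\ensuremath{\bm{x}}) = \ensuremath{\bm{x}}$ and Hamilton's equation \cref{eq:AutoHamEq} reads
\begin{align*}
\ensuremath{\dd{t}} \begin{bmatrix} \ensuremath{\bm{q}} \\ \ensuremath{\bm{p}} \end{bmatrix}
= \ensuremath{\Jtwo{n}} \begin{bmatrix} \ensuremath{\bm{q}} \\ \ensuremath{\bm{p}} \end{bmatrix}
= \begin{bmatrix} \ensuremath{\bm{p}} \\ -\ensuremath{\bm{q}} \end{bmatrix},
\end{align*}
i.e.\ the familiar equations $\ensuremath{\dd{t}}\ensuremath{\bm{q}} = \ensuremath{\bm{p}}$ and $\ensuremath{\dd{t}}\ensuremath{\bm{p}} = -\ensuremath{\bm{q}}$, whose flow is a rotation in phase space and thus both preserves $\ensuremath{\mathcal{H}}$ and is symplectic, in line with the two propositions below.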
The two characteristic properties of Hamiltonian systems are (a) the preservation of the Hamiltonian function and (b) the symplecticity of the flow.
\begin{Proposition}[Preservation of the Hamiltonian]
The flow of Hamilton's equation $\flow$ preserves the Hamiltonian function $\ensuremath{\mathcal{H}}$.
\end{Proposition}
\begin{proof}
We prove the assertion by showing that the time derivative of the Hamiltonian along the flow vanishes for any $\ensuremath{\bm{x}} \in \ensuremath{\mathbb{R}}^{2n}$ due to
\begin{align*}
\ensuremath{\dd{t}} \ensuremath{\mathcal{H}}(\flow(\ensuremath{\bm{x}}))
= \rTb{\ensuremath{\nabla_{\fx}}\ensuremath{\mathcal{H}}(\flow(\ensuremath{\bm{x}}))} \ensuremath{\dd{t}} \flow(\ensuremath{\bm{x}})
\stackrel{\cref{eq:AutoHamEq}}{=} \rTb{\ensuremath{\nabla_{\fx}}\ensuremath{\mathcal{H}}(\flow(\ensuremath{\bm{x}}))} \ensuremath{\Jtwo{n}} \ensuremath{\nabla_{\fx}}\ensuremath{\mathcal{H}}(\flow(\ensuremath{\bm{x}}))
\stackrel{\cref{eq:StructMat}}{=} 0.
\end{align*}
\end{proof}
\begin{Proposition}[Symplecticity of the flow]
Let the Hamiltonian function be twice continuously differentiable in the first argument. Then, the flow $\flow(\bullet, \ensuremath{\bm{\mu}}): \ensuremath{\mathbb{R}}^{2n} \rightarrow \ensuremath{\mathbb{R}}^{2n}$ of a Hamiltonian system is a symplectic map.
\end{Proposition}
\begin{proof}
See \cite[Chapter VI, Theorem 2.4]{Hairer2006}.
\end{proof}
\subsection{Symplectic model order reduction for autonomous Hamiltonian systems}
\label{subsec:SymplMOR}
The goal of MOR \cite{LuminyBook2017} is to reduce the order, i.e.\ the dimension, of high-dimensional systems. To this end, we approximate the high-dimensional state $\ensuremath{\bm{x}}(t) \in \ensuremath{\mathbb{R}}^{2n}$ with
\begin{align*}
\ensuremath{\bm{x}}(t, \ensuremath{\bm{\mu}}) \approx \ensuremath{\reconstruced{\fx}}(t, \ensuremath{\bm{\mu}}) = \ensuremath{\bm{V}} \ensuremath{\reduce{\fx}}(t, \ensuremath{\bm{\mu}}),\quad&
&\ensuremath{\mathcal{V}} = \colspanb{\ensuremath{\bm{V}}}
\end{align*}
with the reduced state $\ensuremath{\reduce{\fx}}(t) \in \ensuremath{\mathbb{R}}^{2k}$, the reduced-order basis (ROB) $\ensuremath{\bm{V}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$, the reconstructed state $\ensuremath{\reconstruced{\fx}}(t) \in \ensuremath{\mathcal{V}}$ and the reduced space $\ensuremath{\mathcal{V}} \subset \ensuremath{\mathbb{R}}^{2n}$. The restriction to even-dimensional spaces $\ensuremath{\mathbb{R}}^{2n}$ and $\ensuremath{\mathbb{R}}^{2k}$ is not necessary for MOR in general but is required for the symplectic MOR in the following. To achieve a computational advantage with MOR, the approximation should introduce a clear reduction of the order, i.e.\ $2k \ll 2n$.
For Petrov--Galerkin projection-based MOR techniques, the ROB $\ensuremath{\bm{V}}$ is accompanied by a projection matrix $\ensuremath{\bm{W}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$ which is chosen to be biorthogonal to $\ensuremath{\bm{V}}$, i.e. $\rT\ensuremath{\bm{W}} \ensuremath{\bm{V}} = \I{2k}$. The reduced-order model (ROM) is derived with the requirement that the residual $\ensuremath{\bm{r}}(t, \ensuremath{\bm{\mu}})$ vanishes in the space spanned by the columns of the projection matrix, i.e.\ in our case
\begin{align}\label{eq:Residual}
&\ensuremath{\bm{r}}(t, \ensuremath{\bm{\mu}}) = \ensuremath{\dd{t}}\ensuremath{\reconstruced{\fx}}(t, \ensuremath{\bm{\mu}}) - \ensuremath{\fX_{\Ham}}(\ensuremath{\reconstruced{\fx}}(t, \ensuremath{\bm{\mu}}), \ensuremath{\bm{\mu}}) \in \ensuremath{\mathbb{R}}^{2n},\quad&
&\rT\ensuremath{\bm{W}} \ensuremath{\bm{r}}(t, \ensuremath{\bm{\mu}}) = \Z{2k \times 1},
\end{align}
where $\Z{2k \times 1} \in \ensuremath{\mathbb{R}}^{2k}$ is the vector of all zeros. Due to the biorthogonality, this is equivalent to
\begin{align}\label{eq:ROM}
&\ensuremath{\dd{t}} \ensuremath{\reduce{\fx}}(t, \ensuremath{\bm{\mu}}) = \rT\ensuremath{\bm{W}} \ensuremath{\fX_{\Ham}}(\ensuremath{\reconstruced{\fx}}(t, \ensuremath{\bm{\mu}}), \ensuremath{\bm{\mu}}) = \rT\ensuremath{\bm{W}} \ensuremath{\Jtwo{n}} \ensuremath{\nabla_{\fx}} \ensuremath{\mathcal{H}}(\ensuremath{\reconstruced{\fx}}(t, \ensuremath{\bm{\mu}}), \ensuremath{\bm{\mu}}),\quad&
&\ensuremath{\reduce{\fx}}(\ensuremath{t_{\mathrm{0}}}, \ensuremath{\bm{\mu}}) = \rT\ensuremath{\bm{W}} \ensuremath{\fx_{\mathrm{0}}}(\ensuremath{\bm{\mu}}).
\end{align}
In the context of symplectic MOR, the ROB is chosen to be a symplectic matrix \cref{eq:SymplMat} which we call a symplectic ROB. Additionally, the transposed projection matrix is the symplectic inverse $\rT\ensuremath{\bm{W}} =\si\ensuremath{\bm{V}}$ and the projection in \cref{eq:ROM} is called a symplectic projection or symplectic Galerkin projection \cite{Peng2016}. The (possibly oblique) projection reads
\begin{align*}
\ensuremath{\bm{P}} = \ensuremath{\bm{V}} \invb{\rT\ensuremath{\bm{W}} \ensuremath{\bm{V}}} \rT\ensuremath{\bm{W}} = \ensuremath{\bm{V}} \invb{\si\ensuremath{\bm{V}} \ensuremath{\bm{V}}} \si\ensuremath{\bm{V}} = \ensuremath{\bm{V}} \si\ensuremath{\bm{V}}.
\end{align*}
In combination, this choice of $\ensuremath{\bm{V}}$ and $\ensuremath{\bm{W}}$ guarantees that the Hamiltonian structure is preserved by the reduction which is shown in the following proposition.
\begin{Proposition}[Reduced autonomous Hamiltonian system]\label{theo:MORAutoHamSys}
Let $\ensuremath{\bm{V}}$ be a symplectic ROB with the projection matrix $\rT\ensuremath{\bm{W}} = \si\ensuremath{\bm{V}}$. Then, the ROM \cref{eq:ROM} of a high-dimensional Hamiltonian system $(\ensuremath{\mathbb{R}}^{2n}, \symplForm[2n], \ensuremath{\mathcal{H}})$ is a Hamiltonian system $(\ensuremath{\mathbb{R}}^{2k}, \symplForm[2k], \ensuremath{\reduce{\Ham}})$ on $\ensuremath{\mathbb{R}}^{2k}$ with the canonical symplectic form $\symplForm[2k]$ and the reduced Hamiltonian function $\ensuremath{\reduce{\Ham}}(\ensuremath{\reduce{\fx}}, \ensuremath{\bm{\mu}}) = \ensuremath{\mathcal{H}}(\ensuremath{\bm{V}} \ensuremath{\reduce{\fx}}, \ensuremath{\bm{\mu}})$ for all $\ensuremath{\reduce{\fx}} \in \ensuremath{\mathbb{R}}^{2k}$.
\end{Proposition}
\begin{proof}
First, we remark that the symplectic inverse is a valid biorthogonal projection matrix since it fulfils $\rT\ensuremath{\bm{W}} \ensuremath{\bm{V}} = \si\ensuremath{\bm{V}} \ensuremath{\bm{V}} = \I{2k}$. To derive the Hamiltonian form of the ROM in \cref{eq:ROM}, we use the identity
\begin{align}\label{eq:RelationWV}
\rT\ensuremath{\bm{W}} \ensuremath{\Jtwo{n}}
= \si\ensuremath{\bm{V}} \ensuremath{\Jtwo{n}}
\stackrel{\eqref{eq:SymplInv}}{=} \ensuremath{\TJtwo{k}} \rT\ensuremath{\bm{V}} \ensuremath{\Jtwo{n}} \ensuremath{\Jtwo{n}}
= - \ensuremath{\TJtwo{k}} \rT\ensuremath{\bm{V}}
= \ensuremath{\Jtwo{k}} \rT\ensuremath{\bm{V}},
\end{align}
which makes use of the properties \cref{eq:StructMat} of the Poisson matrix. It follows with \cref{eq:AutoHamEq,eq:ROM,eq:RelationWV}
\begin{align*}
\ensuremath{\dd{t}} \ensuremath{\reduce{\fx}}(t)
= \rT\ensuremath{\bm{W}} \ensuremath{\Jtwo{n}} \ensuremath{\nabla_{\fx}} \ensuremath{\mathcal{H}}(\ensuremath{\reconstruced{\fx}}(t))
= \ensuremath{\Jtwo{k}} \rT\ensuremath{\bm{V}} \ensuremath{\nabla_{\fx}} \ensuremath{\mathcal{H}}(\ensuremath{\reconstruced{\fx}}(t))
= \ensuremath{\Jtwo{k}} \ensuremath{\nabla_{\fxr}} \ensuremath{\reduce{\Ham}}(\ensuremath{\reduce{\fx}}(t))
\end{align*}
where the last step follows from the chain rule. Thus, the evolution of the reduced state takes the form of Hamilton's equation and the resulting ROM equals the Hamiltonian system $(\ensuremath{\mathbb{R}}^{2k}, \symplForm[2k], \ensuremath{\reduce{\Ham}})$.
\end{proof}
\begin{Corollary}[Linear Hamiltonian system]\label{cor:QuadHam}
Hamilton's equation is a linear system in the case of a quadratic Hamiltonian $\ensuremath{\mathcal{H}}(\ensuremath{\bm{x}}, \ensuremath{\bm{\mu}}) = \txtfrac{1}{2} \; \rT\ensuremath{\bm{x}} \ensuremath{\bm{H}}(\ensuremath{\bm{\mu}}) \ensuremath{\bm{x}} + \rT\ensuremath{\bm{x}} \ensuremath{\bm{h}}(\ensuremath{\bm{\mu}})$ with $\ensuremath{\bm{H}}(\ensuremath{\bm{\mu}}) \in \ensuremath{\mathbb{R}}^{2n \times 2n}$ symmetric and $\ensuremath{\bm{h}}(\ensuremath{\bm{\mu}}) \in \ensuremath{\mathbb{R}}^{2n}$:
\begin{align}\label{eq:LinSys}
\ensuremath{\dd{t}} \ensuremath{\bm{x}}(t, \ensuremath{\bm{\mu}}) = \ensuremath{\bm{A}}(\ensuremath{\bm{\mu}}) \ensuremath{\bm{x}}(t,\ensuremath{\bm{\mu}}) + \ensuremath{\bm{b}}(\ensuremath{\bm{\mu}}),\quad&
&\ensuremath{\bm{A}}(\ensuremath{\bm{\mu}}) = \ensuremath{\Jtwo{n}} \ensuremath{\bm{H}}(\ensuremath{\bm{\mu}}),\quad&
&\ensuremath{\bm{b}}(\ensuremath{\bm{\mu}}) = \ensuremath{\Jtwo{n}} \ensuremath{\bm{h}}(\ensuremath{\bm{\mu}}).
\end{align}
The evolution of the reduced Hamiltonian system reads
\begin{align*}
\ensuremath{\dd{t}} \ensuremath{\reduce{\fx}}(t, \ensuremath{\bm{\mu}}) = \ensuremath{\reduce{\fA}}(\ensuremath{\bm{\mu}}) \ensuremath{\reduce{\fx}}(t,\ensuremath{\bm{\mu}}) + \ensuremath{\reduce{\fb}}(\ensuremath{\bm{\mu}}),\quad&
&\begin{split}
\ensuremath{\reduce{\fA}}(\ensuremath{\bm{\mu}}) =&\; \ensuremath{\Jtwo{k}} \ensuremath{\reduce{\fH}}(\ensuremath{\bm{\mu}}) \stackrel{\cref{eq:RelationWV}}{=} \rT\ensuremath{\bm{W}} \ensuremath{\bm{A}}(\ensuremath{\bm{\mu}}) \ensuremath{\bm{V}},\\
\ensuremath{\reduce{\fb}}(\ensuremath{\bm{\mu}}) =&\; \ensuremath{\Jtwo{k}} \ensuremath{\reduce{\fh}}(\ensuremath{\bm{\mu}}) \stackrel{\cref{eq:RelationWV}}{=} \rT\ensuremath{\bm{W}} \ensuremath{\bm{b}}(\ensuremath{\bm{\mu}}),
\end{split}&
&\begin{split}
\ensuremath{\reduce{\fH}}(\ensuremath{\bm{\mu}}) =&\; \rT\ensuremath{\bm{V}} \ensuremath{\bm{H}}(\ensuremath{\bm{\mu}}) \ensuremath{\bm{V}},\\
\ensuremath{\reduce{\fh}}(\ensuremath{\bm{\mu}}) =&\; \rT\ensuremath{\bm{V}} \ensuremath{\bm{h}}(\ensuremath{\bm{\mu}}).
\end{split}
\end{align*}
with the reduced Hamiltonian function $\ensuremath{\reduce{\Ham}}(\ensuremath{\reduce{\fx}}, \ensuremath{\bm{\mu}}) = \txtfrac{1}{2}\; \rT\ensuremath{\reduce{\fx}} \ensuremath{\reduce{\fH}}(\ensuremath{\bm{\mu}}) \ensuremath{\reduce{\fx}} + \rT\ensuremath{\reduce{\fx}} \ensuremath{\reduce{\fh}}(\ensuremath{\bm{\mu}})$.
\end{Corollary}
\begin{Remark}
We emphasise that the reduction of linear Hamiltonian systems follows the pattern of the classical projection-based MOR approaches \cite{Haasdonk2011b} to derive the reduced model with $\ensuremath{\reduce{\fA}} = \rT\ensuremath{\bm{W}} \ensuremath{\bm{A}} \ensuremath{\bm{V}}$ and $\ensuremath{\reduce{\fb}} = \rT\ensuremath{\bm{W}} \ensuremath{\bm{b}}$ which allows a straightforward implementation in existing frameworks.
\end{Remark}
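In our running Python sketch, the reduced operators of the linear case are assembled offline as follows; by \cref{eq:RelationWV}, $\rT\ensuremath{\bm{W}}$ never has to be formed explicitly.
\begin{verbatim}
# Sketch (continuing the helpers above): offline assembly of the
# reduced linear Hamiltonian system d/dt x_r = A_r x_r + b_r.
def reduce_linear_hamiltonian(V, H, h):
    J2k = poisson_matrix(V.shape[1] // 2)
    H_r = V.T @ H @ V              # reduced Hessian of the Hamiltonian
    h_r = V.T @ h
    return J2k @ H_r, J2k @ h_r    # A_r = W^T A V and b_r = W^T b
\end{verbatim}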
Since the ROM is a Hamiltonian system, it preserves its Hamiltonian. Thus, it can be shown that the error in the Hamiltonian $\ensuremath{e_{\ensuremath{\mathcal{H}}}}(t, \ensuremath{\bm{\mu}}) = \ensuremath{\mathcal{H}}(\ensuremath{\bm{x}}(t, \ensuremath{\bm{\mu}}), \ensuremath{\bm{\mu}}) - \ensuremath{\reduce{\Ham}}(\ensuremath{\reduce{\fx}}(t, \ensuremath{\bm{\mu}}), \ensuremath{\bm{\mu}})$ is constant over time \cite{Peng2016}. Furthermore, there are several results on the preservation of stability \cite[Theorem 18]{Maboudi2017}, \cite[Section 3.4.]{Peng2016} under certain assumptions on the Hamiltonian function.
\begin{Remark}[Offline/online decomposition]
A central concept in the field of MOR for parametric systems is the so-called offline/online decomposition. The idea is to split the procedure into a possibly costly offline phase and a cheap online phase, where the terms costly and cheap refer to the computational cost. In the offline phase, the ROM is constructed. In the online phase, the ROM is supposed to be evaluated fast. The ultimate goal is to avoid any computations that depend on the high dimension $2n$ in the online phase.
For a linear system, the offline/online decomposition can be achieved if $\ensuremath{\bm{A}}(\ensuremath{\bm{\mu}})$, $\ensuremath{\bm{b}}(\ensuremath{\bm{\mu}})$ and $\ensuremath{\fx_{\mathrm{0}}}(\ensuremath{\bm{\mu}})$ satisfy a parameter-separability condition \cite{Haasdonk2011b}.
For systems with non-linear parts, multiple approaches \cite{Barrault2004,Chaturantabut2009} exist to enable an offline/online decomposition by introducing an approximation of the non-linear terms. This allows an online-efficient MOR of non-linear systems. For symplectic MOR, the symplectic discrete empirical interpolation method (SDEIM) was introduced \cite[Section 5.2.]{Peng2016} to preserve the symplectic structure throughout the approximation of the non-linear terms.
\end{Remark}
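To illustrate the offline/online split for the linear case, assume the (hypothetical) parameter-separable form $\ensuremath{\bm{A}}(\ensuremath{\bm{\mu}}) = \sum_{q} \theta_q(\ensuremath{\bm{\mu}}) \ensuremath{\bm{A}}_q$ and $\ensuremath{\bm{b}}(\ensuremath{\bm{\mu}}) = \sum_{q} \vartheta_q(\ensuremath{\bm{\mu}}) \ensuremath{\bm{b}}_q$; the following sketch then precomputes all quantities that depend on $2n$ offline.
\begin{verbatim}
# Sketch of an offline/online split under an assumed
# parameter-separable form (function and variable names are ours).
def offline_components(V, W, A_terms, b_terms):
    """Precompute the reduced components (2k x 2k resp. 2k) offline."""
    return ([W.T @ Aq @ V for Aq in A_terms],
            [W.T @ bq for bq in b_terms])

def online_operators(mu, thetas_A, thetas_b, Ar_terms, br_terms):
    """Assemble A_r(mu), b_r(mu) from low-dimensional quantities only."""
    A_r = sum(th(mu) * Aq for th, Aq in zip(thetas_A, Ar_terms))
    b_r = sum(th(mu) * bq for th, bq in zip(thetas_b, br_terms))
    return A_r, b_r
\end{verbatim}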
\subsection{Finite-dimensional, non-autonomous Hamiltonian systems}
\label{subsec:NonAutoHam}
Non-autonomous Hamiltonian systems can be redirected to the case of autonomous systems if the Hamiltonian function is assumed to be differentiable with respect to time. To this end, the concept of the extended phase space is used. We briefly introduce the approach and explain the link to symplectic MOR.
\begin{Definition}[Finite-dimensional, non-autonomous Hamiltonian system]
Let $\ensuremath{\mathcal{H}}:\ensuremath{\mathbb{R}} \times \ensuremath{\mathbb{R}}^{2n} \times \mathcal{P} \rightarrow \ensuremath{\mathbb{R}}$ be a scalar-valued function that is continuously differentiable in the second argument. A non-autonomous (or time-dependent) Hamiltonian system $(\ensuremath{\mathbb{R}}^{2n}, \symplForm[2n], \ensuremath{\mathcal{H}})$ is of the form
\begin{align}\label{eq:NonAutoHamEq}
\ensuremath{\dd{t}} \ensuremath{\bm{x}}(t, \ensuremath{\bm{\mu}}) = \ensuremath{\Jtwo{n}} \ensuremath{\nabla_{\fx}} \ensuremath{\mathcal{H}}(t, \ensuremath{\bm{x}}(t, \ensuremath{\bm{\mu}}), \ensuremath{\bm{\mu}}).
\end{align}
We then call $\ensuremath{\mathcal{H}}(t, \ensuremath{\bm{x}})$ a time-dependent Hamiltonian function.
\end{Definition}
A problem occurs for non-autonomous Hamiltonian systems: the explicit time dependence of the Hamiltonian function introduces an additional variable, the time, and the carrier manifold becomes odd-dimensional. As mentioned in \cref{subsec:SymplGeo}, symplectic vector spaces are always even-dimensional, which is why a symplectic description is no longer possible. Different approaches are available to circumvent this issue.
As suggested in \cite[Section 4.3]{Maboudi2018}, we use the methodology of the so-called symplectic extended phase space \cite[Chap.\ VI, Sec.\ 10]{Lanczos1940} to redirect the non-autonomous system to an autonomous system. The formulation is based on the extended Hamiltonian function $\ensuremath{\extended{\Ham}}: \ensuremath{\mathbb{R}}^{2n+2} \rightarrow \ensuremath{\mathbb{R}}$ with
\begin{align}\label{eq:ExtHam}
&\ensuremath{\extended{\Ham}}(\ensuremath{\extended{\fx}}) = \ensuremath{\mathcal{H}}(\ensuremath{\extended{q}}, \ensuremath{\bm{x}}) + \ensuremath{\extended{p}},&
&\ensuremath{\extended{\fx}} = \rTb{\ensuremath{\extended{q}}\; \ensuremath{\bm{q}}\; \ensuremath{\extended{p}}\; \ensuremath{\bm{p}}} \in \ensuremath{\mathbb{R}}^{2n + 2},&
&\ensuremath{\bm{x}} = \rTb{\ensuremath{\bm{q}}\; \ensuremath{\bm{p}}} \in \ensuremath{\mathbb{R}}^{2n},&
&\ensuremath{\extended{q}},\ensuremath{\extended{p}} \in \ensuremath{\mathbb{R}}.
\end{align}
Technically, the time is added to the extended state $\ensuremath{\extended{\fx}}$ with $\ensuremath{\extended{q}} = t$ and the corresponding momentum $\ensuremath{\extended{p}} = -\ensuremath{\mathcal{H}}(t, \ensuremath{\bm{x}}(t))$ is chosen such that the extended system is an autonomous Hamiltonian system.
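Indeed, applying Hamilton's equation \cref{eq:AutoHamEq} to the extended Hamiltonian \cref{eq:ExtHam} on the canonically symplectic extended phase space yields
\begin{align*}
&\ensuremath{\dd{t}} \ensuremath{\extended{q}} = \frac{\partial \ensuremath{\extended{\Ham}}}{\partial \ensuremath{\extended{p}}} = 1,&
&\ensuremath{\dd{t}} \ensuremath{\bm{x}} = \ensuremath{\Jtwo{n}} \ensuremath{\nabla_{\fx}} \ensuremath{\mathcal{H}}(\ensuremath{\extended{q}}, \ensuremath{\bm{x}}),&
&\ensuremath{\dd{t}} \ensuremath{\extended{p}} = -\frac{\partial \ensuremath{\extended{\Ham}}}{\partial \ensuremath{\extended{q}}} = -\frac{\partial \ensuremath{\mathcal{H}}}{\partial t}(\ensuremath{\extended{q}}, \ensuremath{\bm{x}}),
\end{align*}
such that, with the initial conditions $\ensuremath{\extended{q}}(\ensuremath{t_{\mathrm{0}}}) = \ensuremath{t_{\mathrm{0}}}$ and $\ensuremath{\extended{p}}(\ensuremath{t_{\mathrm{0}}}) = -\ensuremath{\mathcal{H}}(\ensuremath{t_{\mathrm{0}}}, \ensuremath{\bm{x}}(\ensuremath{t_{\mathrm{0}}}))$, the first equation recovers $\ensuremath{\extended{q}}(t) = t$ and the second reproduces the original non-autonomous dynamics \cref{eq:NonAutoHamEq}.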
This procedure requires the time-dependent Hamiltonian function to be differentiable in the time variable. In the context of mechanical systems, it thus does not allow for the description of loads that are non-differentiable in time. This excludes, e.g., systems that model mechanical contact, since these typically require loads that are not differentiable in time.
\subsection{Symplectic model order reduction of non-autonomous Hamiltonian systems}
\label{subsec:SymplMORExtSys}
For the MOR of the, now autonomous, extended system, only the original phase space variable $\ensuremath{\bm{x}} \in \ensuremath{\mathbb{R}}^{2n}$ is reduced. The time and the corresponding conjugate momentum $\ensuremath{\extended{q}},\ensuremath{\extended{p}}$ are not reduced. To preserve the Hamiltonian structure, a symplectic ROB $\ensuremath{\bm{V}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$ is used for the reduction of $\ensuremath{\bm{x}} \in \ensuremath{\mathbb{R}}^{2n}$ analogous to the autonomous case. The result is a reduced extended system which again can be written as a non-autonomous Hamiltonian system $(\ensuremath{\mathbb{R}}^{2k}, \symplForm[2k], \ensuremath{\reduce{\Ham}})$ with the time-dependent Hamiltonian $\ensuremath{\reduce{\Ham}}(t,\ensuremath{\reduce{\fx}},\ensuremath{\bm{\mu}}) = \ensuremath{\mathcal{H}}(t,\ensuremath{\bm{V}} \ensuremath{\reduce{\fx}}, \ensuremath{\bm{\mu}})$ for all $(t, \ensuremath{\reduce{\fx}}) \in [\ensuremath{t_{\mathrm{0}}}, \ensuremath{t_{\mathrm{end}}}] \times \ensuremath{\mathbb{R}}^{2k}$.
An unpleasant side effect of the extended formulation is that the linear dependency on the additional state variable $\ensuremath{\extended{p}}$ (see \cref{eq:ExtHam}) implies that the Hamiltonian cannot have strict extrema. Thus, the stability results listed in \cite{Peng2016} and \cite{Maboudi2017} do not apply if there is a true time-dependence in the Hamiltonian $\ensuremath{\mathcal{H}}(t, \ensuremath{\bm{x}})$. Nevertheless, symplectic MOR in combination with a non-autonomous Hamiltonian system shows stable results in the numerical experiments.
Furthermore, it is important to note that only the extended Hamiltonian $\ensuremath{\extended{\Ham}}$ is preserved throughout the reduction. The time-dependent Hamiltonian $\ensuremath{\mathcal{H}}(t, \cdot)$ is not necessarily preserved throughout the reduction, i.e.\ $\ensuremath{\extended{\Ham}}(\ensuremath{\extended{\fx}}(t)) = \ensuremath{\reduce{\Hame}}(\ensuremath{\reduce{\fxe}}(t))$ but potentially $\ensuremath{\mathcal{H}}(t, \ensuremath{\bm{x}}(t)) \neq \ensuremath{\reduce{\Ham}}(t, \ensuremath{\reduce{\fx}}(t))$.
\section{Symplectic basis generation with the Proper Symplectic Decomposition (PSD)}
\label{subsec:PSD}
We still require a symplectic ROB for symplectic MOR. In the following, we pursue the approach of a ROB generated from a set of snapshots of the system. A snapshot is an element of the so-called solution manifold $\ensuremath{\mathcal{S}}$ which is approximated with a low-dimensional surrogate $\ensuremath{\widehat{\solManifold}_{\fV\fW}}$
\begin{align*}
&\ensuremath{\mathcal{S}} := \left\{ \ensuremath{\bm{x}}(t, \ensuremath{\bm{\mu}}) \,\big|\, t \in \ftInterval,\, \ensuremath{\bm{\mu}} \in \mathcal{P} \right\}
\subset \ensuremath{\mathbb{R}}^{2n},&
&\ensuremath{\widehat{\solManifold}_{\fV\fW}} := \left\{ \ensuremath{\bm{V}} \ensuremath{\reduce{\fx}}(t, \ensuremath{\bm{\mu}}) \,\big|\, t \in \ftInterval,\, \ensuremath{\bm{\mu}} \in \mathcal{P} \right\} \approx \ensuremath{\mathcal{S}}.
\end{align*}
In \cite{Peng2016}, the Proper Symplectic Decomposition (PSD) is proposed as a snapshot-based basis generation technique for symplectic ROBs. The idea is to derive the ROB from a minimization problem suggested in analogy to the well-established Proper Orthogonal Decomposition (POD, also known as Principal Component Analysis) \cite{LuminyBook2017}.
Classically, the POD chooses the ROB $\ensuremath{\fV_{\textrm{POD}}}$ to minimize the sum of the squared 2-norms $\tnorm{\bullet}$ of the $\ensuremath{n_{\mathrm{s}}} \in \ensuremath{\mathbb{N}}$ residuals $( \I{2n} - \ensuremath{\fV_{\textrm{POD}}} \rT\ensuremath{\fV_{\textrm{POD}}} ) \ensuremath{\fx^{\mathrm{s}}}_i$ of the orthogonal projections $\ensuremath{\fV_{\textrm{POD}}} \rT\ensuremath{\fV_{\textrm{POD}}} \ensuremath{\fx^{\mathrm{s}}}_i$ of the single snapshots $\ensuremath{\fx^{\mathrm{s}}}_i \in \ensuremath{\mathcal{S}}$, $1\leq i\leq \ensuremath{n_{\mathrm{s}}}$, under the constraint that the ROB $\ensuremath{\fV_{\textrm{POD}}}$ has orthonormal columns, i.e.
\begin{align}\label{eq:POD}
&\minimize{\ensuremath{\fV_{\textrm{POD}}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}}
\sum_{i=1}^{\ensuremath{n_{\mathrm{s}}}} \tnorm{\left( \I{2n} - \ensuremath{\fV_{\textrm{POD}}} \rT\ensuremath{\fV_{\textrm{POD}}} \right) \ensuremath{\fx^{\mathrm{s}}}_i}^2&
&\textrm{subject to} \quad \rT\ensuremath{\fV_{\textrm{POD}}} \ensuremath{\fV_{\textrm{POD}}} = \I{2k}.
\end{align}
In contrast, the PSD requires the ROB to be symplectic instead of orthogonal, which is expressed in the reformulated constraint. Furthermore, the orthogonal projection is replaced by the symplectic projection $\ensuremath{\bm{V}} \si\ensuremath{\bm{V}} \ensuremath{\fx^{\mathrm{s}}}_i$, which results in
\begin{align}\label{eq:PSDVectorBased}
&\minimize{\ensuremath{\bm{V}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}}
\sum_{i=1}^{\ensuremath{n_{\mathrm{s}}}} \tnorm{(\I{2n} - \ensuremath{\bm{V}} \si\ensuremath{\bm{V}}) \ensuremath{\fx^{\mathrm{s}}}_i}^2&
&\textrm{subject to} \quad \rT\ensuremath{\bm{V}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{V}} = \ensuremath{\Jtwo{k}}.
\end{align}
We summarize this in a more compact (matrix-based) formulation in the following definition.
\begin{Definition}[Proper Symplectic Decomposition (PSD)]
Given $\ensuremath{n_{\mathrm{s}}}$ snapshots $\ensuremath{\fx^{\mathrm{s}}}_1, \dots, \ensuremath{\fx^{\mathrm{s}}}_{\ensuremath{n_{\mathrm{s}}}} \in \ensuremath{\mathcal{S}}$, we denote the snapshot matrix as $\ensuremath{\fX_{\mathrm{s}}} = [\ensuremath{\fx^{\mathrm{s}}}_1, \dots, \ensuremath{\fx^{\mathrm{s}}}_{\ensuremath{n_{\mathrm{s}}}}] \in \ensuremath{\mathbb{R}}^{2n \times \ensuremath{n_{\mathrm{s}}}}$. Find a symplectic ROB $\ensuremath{\bm{V}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$ which minimizes
\begin{align} \label{eq:PSD}
&\minimize{\ensuremath{\bm{V}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}}
\Fnorm{(\I{2n} - \ensuremath{\bm{V}} \si\ensuremath{\bm{V}}) \ensuremath{\fX_{\mathrm{s}}}}^2&
&\textrm{subject to} \quad \rT\ensuremath{\bm{V}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{V}} = \ensuremath{\Jtwo{k}}.
\end{align}
We denote the minimization problem \cref{eq:PSD} in the following as $\texttt{PSD}(\ensuremath{\fX_{\mathrm{s}}})$, where $\ensuremath{\fX_{\mathrm{s}}}$ is the given snapshot matrix.
\end{Definition}
The constraint in \cref{eq:PSD} ensures that the ROB $\ensuremath{\bm{V}}$ is symplectic and thus guarantees the existence of the symplectic inverse $\si\ensuremath{\bm{V}}$. Furthermore, the matrix-based formulation \cref{eq:PSD} is equivalent to the vector-based formulation presented in \cref{eq:PSDVectorBased} due to the properties of the Frobenius norm $\Fnorm{\bullet}$.
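For a given symplectic ROB, the PSD objective is cheap to evaluate; in our running Python sketch:
\begin{verbatim}
# Sketch: evaluate the PSD objective for a given symplectic ROB V.
def psd_functional(V, Xs):
    """|| (I - V V^+) Xs ||_F^2 with the symplectic projection V V^+."""
    P = V @ symplectic_inverse(V)
    return np.linalg.norm(Xs - P @ Xs, 'fro') ** 2
\end{verbatim}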
\subsection{Symplectic, orthonormal basis generation}
\label{sec:SymplOrthonBasisGen}
The foremost problem of the PSD is that no explicit solution procedure is known so far due to the high nonlinearity and the possibly multiple local optima of the minimization problem. This is an essential difference to the POD, which allows one to compute a global minimum by solving an eigenvalue problem \cite{LuminyBook2017}.
Current solution procedures for the PSD restrict the search to a certain subset of symplectic matrices and derive an optimal solution within this subset, which might be suboptimal in the full class of symplectic matrices. In the following, we show that this subset almost exclusively consists of symplectic, orthonormal ROBs.
\begin{Definition}[Symplectic, orthonormal ROB]
We call a ROB $\ensuremath{\bm{V}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$ symplectic, orthonormal (also orthosymplectic, e.g.\ in \cite{Maboudi2017}) if it is symplectic w.r.t.\ $\symplForm[2n]$ and $\symplForm[2k]$ and is orthonormal, i.e.\ the matrix $\ensuremath{\bm{V}}$ has orthonormal columns
\begin{align*}
&\rT\ensuremath{\bm{V}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{V}} = \ensuremath{\Jtwo{k}}&
&\text{and}&
&\rT\ensuremath{\bm{V}} \ensuremath{\bm{V}} = \I{2k}.
\end{align*}
\end{Definition}
In the following, we show an alternative characterization of a symplectic and orthonormal ROB. To this end, \cref{lem:SymplOrthonROB} extends the results given e.g.\ in \cite{Paige1981} for square matrices $\ensuremath{\bm{Q}} \in \ensuremath{\mathbb{R}}^{2n \times 2n}$ to the case of rectangular matrices $\ensuremath{\bm{V}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$. This was also partially addressed in \cite[Lemma 4.3.]{Peng2016}.
\begin{Proposition}[Characterization of a symplectic matrix with orthonormal columns]\label{lem:SymplOrthonROB}
The following statements are equivalent for any matrix $\ensuremath{\bm{V}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$:
\begin{enumerate}[label=(\roman*)]
\item $\ensuremath{\bm{V}}$ is symplectic with orthonormal columns,
\item $\ensuremath{\bm{V}}$ is of the form
\begin{align}\label{eq:SymplOrthonROB}
&\ensuremath{\bm{V}} = \left[ \ensuremath{\bm{E}}\quad \ensuremath{\TJtwo{n}} \ensuremath{\bm{E}} \right] =: \ensuremath{\fV_{\fE}} \in \ensuremath{\mathbb{R}}^{2n \times 2k},&
&\ensuremath{\bm{E}} \in \ensuremath{\mathbb{R}}^{2n \times k},&
&\rT\ensuremath{\bm{E}} \ensuremath{\bm{E}} = \I{k},&
&\rT\ensuremath{\bm{E}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{E}} = \Z{k},
\end{align}
\item $\ensuremath{\bm{V}}$ is symplectic and it holds $\rT\ensuremath{\bm{V}} = \si\ensuremath{\bm{V}}$.
\end{enumerate}
\end{Proposition}
We remark that these matrices are characterized in \cite{Peng2016} to be elements in $\text{Sp}(2k, \ensuremath{\mathbb{R}}^{2n}) \cap V_k(\ensuremath{\mathbb{R}}^{2n})$ where $\text{Sp}(2k, \ensuremath{\mathbb{R}}^{2n})$ is the symplectic Stiefel manifold and $V_k(\ensuremath{\mathbb{R}}^{2n})$ is the Stiefel manifold.
\begin{proof}
``(i) $\implies$ (ii)'': Let $\ensuremath{\bm{V}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$ be a symplectic matrix with orthonormal columns. We rename the columns to $\ensuremath{\bm{V}} = [\ensuremath{\bm{E}} \quad \ensuremath{\bm{F}}]$ with $\ensuremath{\bm{E}} = [\ensuremath{\bm{e}}_1, \dots, \ensuremath{\bm{e}}_k]$ and $\ensuremath{\bm{F}} = [\ensuremath{\bm{f}}_1, \dots, \ensuremath{\bm{f}}_k]$. The symplecticity of the matrix written in terms of $\ensuremath{\bm{E}}$ and $\ensuremath{\bm{F}}$ reads
\begin{align}\label{eq:SymplROBForEF}
\rT\ensuremath{\bm{V}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{V}}
=
\begin{bmatrix}
\rT\ensuremath{\bm{E}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{E}} & \rT\ensuremath{\bm{E}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{F}}\\
\rT\ensuremath{\bm{F}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{E}} & \rT\ensuremath{\bm{F}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{F}}
\end{bmatrix}
=
\begin{bmatrix}
\Z{k} & \I{k}\\
-\I{k} & \Z{k}
\end{bmatrix}
\iff
\begin{split}
\rT\ensuremath{\bm{E}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{E}} = \rT \ensuremath{\bm{F}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{F}} &= \Z{k},\\
-\rT\ensuremath{\bm{F}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{E}} = \rT\ensuremath{\bm{E}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{F}} &= \I{k}.
\end{split}
\end{align}
Expressed in terms of the columns $\ensuremath{\bm{e}}_i,\ensuremath{\bm{f}}_i$ of the matrices $\ensuremath{\bm{E}},\, \ensuremath{\bm{F}}$, this condition reads for any $1\leq i,j\leq k$
\begin{align*}
&\rT\ensuremath{\bm{e}}_i \ensuremath{\Jtwo{n}} \ensuremath{\bm{e}}_j = 0, &
&\rT\ensuremath{\bm{e}}_i \ensuremath{\Jtwo{n}}\ensuremath{\bm{f}}_j = \delta_{ij}, &
&\rT\ensuremath{\bm{f}}_i \ensuremath{\Jtwo{n}} \ensuremath{\bm{e}}_j = -\delta_{ij}, &
&\rT\ensuremath{\bm{f}}_i \ensuremath{\Jtwo{n}}\ensuremath{\bm{f}}_j = 0,
\end{align*}
and the orthonormality of the columns of $\ensuremath{\bm{V}}$ implies
\begin{align*}
&\rT\ensuremath{\bm{e}}_i \ensuremath{\bm{e}}_j = \delta_{ij},&
&\rT\ensuremath{\bm{f}}_i \ensuremath{\bm{f}}_j = \delta_{ij}.
\end{align*}
For a fixed $i \in \{1, \dots, k\}$, it is easy to show with $\ensuremath{\TJtwo{n}} \ensuremath{\Jtwo{n}} = \I{2n}$ that $\ensuremath{\Jtwo{n}} \ensuremath{\bm{f}}_i$ is of unit length
\begin{align*}
1 = \delta_{ii} = \rT\ensuremath{\bm{f}}_i \ensuremath{\bm{f}}_i = \rT\ensuremath{\bm{f}}_i \ensuremath{\TJtwo{n}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{f}}_i = \tnorm{\ensuremath{\Jtwo{n}} \ensuremath{\bm{f}}_i}^2.
\end{align*}
Thus, $\ensuremath{\bm{e}}_i$ and $\ensuremath{\Jtwo{n}} \ensuremath{\bm{f}}_i$ are both unit vectors which fulfil $\rT\ensuremath{\bm{e}}_i \ensuremath{\Jtwo{n}} \ensuremath{\bm{f}}_i = \ip{\ensuremath{\bm{e}}_i}{\ensuremath{\Jtwo{n}}\ensuremath{\bm{f}}_i}_{\ensuremath{\mathbb{R}}^{2n}} = 1$. By the Cauchy--Schwarz inequality, it holds $\ip{\ensuremath{\bm{e}}_i}{\ensuremath{\Jtwo{n}}\ensuremath{\bm{f}}_i} = \norm{\ensuremath{\bm{e}}_i} \norm{\ensuremath{\Jtwo{n}}\ensuremath{\bm{f}}_i}$ if and only if the vectors are parallel. Thus, we infer $\ensuremath{\bm{e}}_i = \ensuremath{\Jtwo{n}} \ensuremath{\bm{f}}_i$, which is equivalent to $\ensuremath{\bm{f}}_i = \ensuremath{\TJtwo{n}} \ensuremath{\bm{e}}_i$. Since this holds for all $i \in \{1, \dots, k\}$, we conclude that $\ensuremath{\bm{V}}$ is of the form proposed in \cref{eq:SymplOrthonROB}.
``(ii) $\implies$ (iii)'':
Let $\ensuremath{\bm{V}}$ be of the form \cref{eq:SymplOrthonROB}. Direct calculation yields
\begin{align*}
\rT\ensuremath{\bm{V}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{V}}
= \begin{bmatrix} \rT\ensuremath{\bm{E}}\\ \rT\ensuremath{\bm{E}} \ensuremath{\Jtwo{n}} \end{bmatrix} \ensuremath{\Jtwo{n}} \begin{bmatrix} \ensuremath{\bm{E}} & \ensuremath{\TJtwo{n}}\ensuremath{\bm{E}} \end{bmatrix}
= \begin{bmatrix} \rT\ensuremath{\bm{E}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{E}} & \rT\ensuremath{\bm{E}}\fE\\ -\rT\ensuremath{\bm{E}} \ensuremath{\bm{E}} & \rT\ensuremath{\bm{E}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{E}} \end{bmatrix}
\stackrel{\eqref{eq:SymplOrthonROB}}{=} \begin{bmatrix} \Z{k} & \I{k}\\ -\I{k} & \Z{k} \end{bmatrix}
= \ensuremath{\Jtwo{k}}
\end{align*}
which shows that $\ensuremath{\bm{V}}$ is symplectic. Thus, the symplectic inverse $\si\ensuremath{\bm{V}}$ exists. The following calculation shows that it equals the transpose $\rT\ensuremath{\bm{V}}$
\begin{align*}
\si\ensuremath{\bm{V}}
= \ensuremath{\TJtwo{k}} \rT\ensuremath{\bm{V}} \ensuremath{\Jtwo{n}}
= \ensuremath{\TJtwo{k}} \begin{bmatrix} \rT\ensuremath{\bm{E}}\\ \rT\ensuremath{\bm{E}}\ensuremath{\Jtwo{n}} \end{bmatrix} \ensuremath{\Jtwo{n}}
= \begin{bmatrix} -\rT\ensuremath{\bm{E}}\ensuremath{\Jtwo{n}}\\ \rT\ensuremath{\bm{E}} \end{bmatrix} \ensuremath{\Jtwo{n}}
= \begin{bmatrix} -\rT\ensuremath{\bm{E}}\ensuremath{\Jtwo{n}}\Jtn\\ \rT\ensuremath{\bm{E}} \ensuremath{\Jtwo{n}} \end{bmatrix}
= \begin{bmatrix} \rT\ensuremath{\bm{E}}\\ \rT\ensuremath{\bm{E}}\ensuremath{\Jtwo{n}} \end{bmatrix}
= \rT\ensuremath{\bm{V}}.
\end{align*}
``(iii) $\implies$ (i)'': Let $\ensuremath{\bm{V}}$ be symplectic with $\rT \ensuremath{\bm{V}} = \si\ensuremath{\bm{V}}$. Then, we know that $\ensuremath{\bm{V}}$ has orthonormal columns since
\begin{align*}
\I{2k} = \si\ensuremath{\bm{V}} \ensuremath{\bm{V}} = \rT\ensuremath{\bm{V}} \ensuremath{\bm{V}}.
\end{align*}
\end{proof}
\Cref{lem:SymplOrthonROB} essentially limits a symplectic, orthonormal ROB $\ensuremath{\bm{V}}$ to be of the form \cref{eq:SymplOrthonROB}. Later in the current section, we see how to solve the PSD for ROBs of this type. In \Cref{sec:NonOrthPSD}, we are interested in dropping this requirement on the ROB $\ensuremath{\bm{V}}$ in order to explore further solution methods for the PSD.
As mentioned before, the current solution procedures for the PSD almost exclusively restrict to the class of symplectic, orthonormal ROBs introduced in \cref{lem:SymplOrthonROB}. This includes the Cotangent Lift \cite{Peng2016}, the Complex SVD \cite{Peng2016}, partly the non-linear programming algorithm from \cite{Peng2016} and the greedy procedure presented in \cite{Maboudi2017}. We briefly review these approaches in the following proposition.
\begin{Proposition}[Symplectic, orthonormal basis generation]\label{lem:OrtonSymplBasisGen}
The Cotangent Lift (CT), Complex SVD (cSVD) and the greedy procedure for symplectic basis generation all derive a symplectic and orthonormal ROB. The non-linear programming (NLP) approach yields a symplectic, orthonormal ROB if the coefficient matrix $\ensuremath{\bm{C}}$ in \cite[Algorithm 3]{Peng2016} is symplectic and has orthonormal columns, i.e.\ it is of the form $\ensuremath{\bm{C}}_{\ensuremath{\bm{G}}} = [\ensuremath{\bm{G}} \quad \ensuremath{\TJtwo{k}} \ensuremath{\bm{G}}]$. The methods can be rewritten with $\ensuremath{\fV_{\fE}} = [\ensuremath{\bm{E}} \quad \ensuremath{\TJtwo{n}}\ensuremath{\bm{E}}]$, where the different formulations of $\ensuremath{\bm{E}}$ read
\begin{align*}
&\ensuremath{\bm{E}}_{\textrm{CT}} = \begin{bmatrix} \CT{\ensuremath{\bm{\varPhi}}}\\ \Z{n \times k} \end{bmatrix}&
&\ensuremath{\bm{E}}_{\textrm{cSVD}} = \begin{bmatrix} \cSVD{\ensuremath{\bm{\varPhi}}}\\ \cSVD{\ensuremath{\bm{\Psi}}} \end{bmatrix},&
&\ensuremath{\bm{E}}_{\textrm{greedy}} = [\ensuremath{\bm{e}}_1, \dots, \ensuremath{\bm{e}}_k],&
&\ensuremath{\bm{E}}_{\textrm{NLP}} = \widetilde{\ensuremath{\fV_{\fE}}} \ensuremath{\bm{G}}
\end{align*}
where
\begin{enumerate}[label=(\roman*)]
\item $\CT{\ensuremath{\bm{\varPhi}}}, \cSVD{\ensuremath{\bm{\varPhi}}}, \cSVD{\ensuremath{\bm{\Psi}}} \in \ensuremath{\mathbb{R}}^{n \times k}$ are matrices that fulfil
\begin{align*}
&\rT{\CT{\ensuremath{\bm{\varPhi}}}}\CT{\ensuremath{\bm{\varPhi}}} = \I{k},&
&\rT{\cSVD{\ensuremath{\bm{\varPhi}}}} \cSVD{\ensuremath{\bm{\varPhi}}} + \rT{\cSVD{\ensuremath{\bm{\Psi}}}} \cSVD{\ensuremath{\bm{\Psi}}} = \I{k},&
&\rT{\cSVD{\ensuremath{\bm{\varPhi}}}} \cSVD{\ensuremath{\bm{\Psi}}} = \rT{\cSVD{\ensuremath{\bm{\Psi}}}} \cSVD{\ensuremath{\bm{\varPhi}}},
\end{align*}
which is technically equivalent to $\rT\ensuremath{\bm{E}} \ensuremath{\bm{E}} = \I{k}$ and $\rT\ensuremath{\bm{E}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{E}} = \Z{k}$ (see \cref{eq:SymplOrthonROB}) for $\ensuremath{\bm{E}}_{\textrm{CT}}$ and $\ensuremath{\bm{E}}_{\textrm{cSVD}}$,
\item $\ensuremath{\bm{e}}_1, \dots, \ensuremath{\bm{e}}_k \in \ensuremath{\mathbb{R}}^{2n}$ are the basis vectors selected by the greedy algorithm,
\item $\widetilde{\ensuremath{\fV_{\fE}}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$ is a ROB computed from CT or cSVD and $\ensuremath{\bm{G}} \in \ensuremath{\mathbb{R}}^{2k \times r}$, $r \leq k$, stems from the coefficient matrix $\ensuremath{\bm{C}}_{\ensuremath{\bm{G}}} = [\ensuremath{\bm{G}} \quad \ensuremath{\TJtwo{k}} \ensuremath{\bm{G}}]$ computed by the NLP algorithm.
\end{enumerate}
\end{Proposition}
\begin{proof}
All of the listed methods derive a symplectic ROB of the form $\ensuremath{\fV_{\fE}} = [\ensuremath{\bm{E}} \quad \ensuremath{\TJtwo{n}}\ensuremath{\bm{E}}]$ which satisfies \cref{eq:SymplOrthonROB}. By \cref{lem:SymplOrthonROB}, these ROBs are each a symplectic, orthonormal ROB.
\end{proof}
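As an illustration, the following sketch mimics the PSD Cotangent Lift as we understand it from \cite{Peng2016}: $\CT{\ensuremath{\bm{\varPhi}}}$ is computed as a POD basis of the horizontally stacked $\ensuremath{\bm{q}}$- and $\ensuremath{\bm{p}}$-snapshots (continuing our Python sketch; an illustration, not the reference implementation).
\begin{verbatim}
# Sketch of the PSD Cotangent Lift (our reading of [Peng2016]):
# E = [Phi; 0] with Phi a POD basis of [Qs, Ps]; V_E = [E, J^T E].
def psd_cotangent_lift(Xs, k):
    n = Xs.shape[0] // 2
    QP = np.hstack([Xs[:n, :], Xs[n:, :]])   # q- and p-snapshots
    Phi = np.linalg.svd(QP, full_matrices=False)[0][:, :k]
    E = np.vstack([Phi, np.zeros((n, k))])
    return np.hstack([E, poisson_matrix(n).T @ E])
\end{verbatim}
Note that the resulting ROB equals the block-diagonal matrix $\left[\begin{smallmatrix} \CT{\ensuremath{\bm{\varPhi}}} & \Z{} \\ \Z{} & \CT{\ensuremath{\bm{\varPhi}}} \end{smallmatrix}\right]$.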
In the following, we show that PSD Complex SVD is the solution of the PSD in the subset of symplectic, orthonormal ROBs. This was partly shown in \cite{Peng2016}, where, however, the final step was missing that, restricting to orthonormal, symplectic ROBs, a solution of $\texttt{PSD}([\ensuremath{\fX_{\mathrm{s}}} \quad -\ensuremath{\Jtwo{n}} \ensuremath{\fX_{\mathrm{s}}}])$ solves $\texttt{PSD}(\ensuremath{\fX_{\mathrm{s}}})$ and vice versa. This proves that PSD Complex SVD is not only near-optimal in this set but indeed optimal. Furthermore, the proof we give is an alternative to the original one and naturally motivates an alternative formulation of PSD Complex SVD which we call the POD of $\ensuremath{\fY_{\mathrm{s}}}$ in the following. To begin with, we reproduce the definition of PSD Complex SVD from \cite{Peng2016}.
\begin{Definition}[PSD Complex SVD]\label{def:ComplexSVD}
We define the complex snapshot matrix
\begin{align}\label{eq:ComplexSnapMat}
&\ensuremath{\fC_{\textrm{s}}} = [\ensuremath{\fq^{\textrm{s}}}_1 + \ensuremath{{\mathrm{i}}} \ensuremath{\fp^{\textrm{s}}}_1, \dots, \ensuremath{\fq^{\textrm{s}}}_{\ensuremath{n_{\mathrm{s}}}} + \ensuremath{{\mathrm{i}}} \ensuremath{\fp^{\textrm{s}}}_{\ensuremath{n_{\mathrm{s}}}}] \in \ensuremath{\mathbb{C}}^{n \times \ensuremath{n_{\mathrm{s}}}},&
&\ensuremath{\fx^{\mathrm{s}}}_j = \begin{bmatrix} \ensuremath{\fq^{\textrm{s}}}_j\\ \ensuremath{\fp^{\textrm{s}}}_j \end{bmatrix} \quad\text{for all } 1\leq j\leq \ensuremath{n_{\mathrm{s}}}
\end{align}
where $\ensuremath{{\mathrm{i}}}$ denotes the imaginary unit.
The PSD Complex SVD is a basis generation technique that requires the auxiliary complex matrix $\ensuremath{\fU_{\Cs}} \in \ensuremath{\mathbb{C}}^{n \times k}$ to fulfil
\begin{align}\label{eq:ComplexPOD}
&\minimize{\ensuremath{\fU_{\Cs}} \in \ensuremath{\mathbb{C}}^{n \times k}} \Fnorm{\ensuremath{\fC_{\textrm{s}}} - \ensuremath{\fU_{\Cs}} \cTb\ensuremath{\fU_{\Cs}} \ensuremath{\fC_{\textrm{s}}}}^2&
&\textrm{subject to} \quad \cTb\ensuremath{\fU_{\Cs}} \ensuremath{\fU_{\Cs}} = \I{k}
\end{align}
and builds the actual ROB $\ensuremath{\fV_{\fE}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$ with
\begin{align*}
&\ensuremath{\fV_{\fE}} = [\ensuremath{\bm{E}} \quad \ensuremath{\TJtwo{n}} \ensuremath{\bm{E}}],&
&\ensuremath{\bm{E}} = \begin{bmatrix} \Reb{\ensuremath{\fU_{\Cs}}}\\ \Imb{\ensuremath{\fU_{\Cs}}} \end{bmatrix}.
\end{align*}
The solution of \cref{eq:ComplexPOD} is known to be given by the $k$ dominant left-singular vectors of $\ensuremath{\fC_{\textrm{s}}}$, which can be computed explicitly with a complex version of the SVD.
\end{Definition}
We emphasize that we denote this basis generation procedure as PSD Complex SVD in the following to avoid confusion with the usual complex SVD algorithm.
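In our running Python sketch, the procedure reads as follows (an illustration, not the reference implementation).
\begin{verbatim}
# Sketch of PSD Complex SVD: ROB V_E = [E, J^T E] from the k dominant
# left-singular vectors of the complex snapshot matrix Cs.
def psd_complex_svd(Xs, k):
    n = Xs.shape[0] // 2
    Cs = Xs[:n, :] + 1j * Xs[n:, :]          # complex snapshot matrix
    U = np.linalg.svd(Cs, full_matrices=False)[0][:, :k]
    E = np.vstack([U.real, U.imag])
    return np.hstack([E, poisson_matrix(n).T @ E])
\end{verbatim}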
\begin{Proposition}[Minimizing PSD in the set of symplectic, orthonormal ROBs]\label{theo:PsdForOrthSymplROBs}
Given the snapshot matrix $\ensuremath{\fX_{\mathrm{s}}} \in \ensuremath{\mathbb{R}}^{2n \times \ensuremath{n_{\mathrm{s}}}}$, we augment it with ``rotated'' snapshots to $\ensuremath{\fY_{\mathrm{s}}} = [\ensuremath{\fX_{\mathrm{s}}} \quad \ensuremath{\Jtwo{n}} \ensuremath{\fX_{\mathrm{s}}}]$. We assume that $2k$ is such that we obtain a gap in the singular values of $\ensuremath{\fY_{\mathrm{s}}}$, i.e. $\sigma_{2k}(\ensuremath{\fY_{\mathrm{s}}}) > \sigma_{2k+1}(\ensuremath{\fY_{\mathrm{s}}})$. Then, minimizing the PSD in the set of symplectic, orthonormal ROBs is equivalent to the following minimization problem
\begin{align}\label{eq:PsdForOrthSymplROBs}
&\minimize{\ensuremath{\bm{V}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}} \Fnorm{(\I{2n} - \ensuremath{\bm{V}} \rT\ensuremath{\bm{V}}) \begin{bmatrix} \ensuremath{\fX_{\mathrm{s}}} & \ensuremath{\Jtwo{n}} \ensuremath{\fX_{\mathrm{s}}} \end{bmatrix}}^2&
&\textrm{subject to} \quad\rT\ensuremath{\bm{V}} \ensuremath{\bm{V}} = \I{2k}.
\end{align}
Clearly, this is equivalent to the POD \cref{eq:POD} applied to the snapshot matrix $\ensuremath{\fY_{\mathrm{s}}}$. We thus call this procedure the POD of $\ensuremath{\fY_{\mathrm{s}}}$ in the following. A minimizer can be derived with the SVD as is common for POD \cite{LuminyBook2017}.
\end{Proposition}
\begin{proof}
The proof proceeds in three steps: we show
\begin{enumerate}[label=(\roman*)]
\item that $(\ensuremath{\bm{u}},\ensuremath{\bm{v}})$ is a pair of left- and right-singular vectors of $\ensuremath{\fY_{\mathrm{s}}}$ to the singular value $\sigma$ if and only if $(\ensuremath{\TJtwo{n}} \ensuremath{\bm{u}}, \ensuremath{\TJtwo{\ns}} \ensuremath{\bm{v}})$ also is a pair of left- and right-singular vectors of $\ensuremath{\fY_{\mathrm{s}}}$ to the same singular value $\sigma$,
\item that a solution of the POD of $\ensuremath{\fY_{\mathrm{s}}}$ is a symplectic, orthonormal ROB, i.e.\ $\ensuremath{\bm{V}} = \ensuremath{\fV_{\fE}} = [\ensuremath{\bm{E}} \quad \ensuremath{\TJtwo{n}} \ensuremath{\bm{E}}]$,
\item that the POD of $\ensuremath{\fY_{\mathrm{s}}}$ is equivalent to the PSD for symplectic, orthonormal ROBs.
\end{enumerate}
We start with the first step (i). Let $(\ensuremath{\bm{u}},\ensuremath{\bm{v}})$ be a pair of left- and right-singular vectors of $\ensuremath{\fY_{\mathrm{s}}}$ to the singular value $\sigma$. We use that the left-singular (or right-singular) vectors of $\ensuremath{\fY_{\mathrm{s}}}$ are a set of orthonormal eigenvectors of $\ensuremath{\fY_{\mathrm{s}}}\rT\ensuremath{\fY_{\mathrm{s}}}$ (or $\rT\ensuremath{\fY_{\mathrm{s}}}\Ys$) to the eigenvalue $\sigma^2$. To begin with, we compute
\begin{align} \label{eq:rotation}
\begin{split}
\ensuremath{\TJtwo{n}} \ensuremath{\fY_{\mathrm{s}}} \rT\ensuremath{\fY_{\mathrm{s}}} \ensuremath{\Jtwo{n}}
=&\; \ensuremath{\TJtwo{n}} (\ensuremath{\fX_{\mathrm{s}}} \rT\ensuremath{\fX_{\mathrm{s}}} + \ensuremath{\Jtwo{n}}\ensuremath{\fX_{\mathrm{s}}}\rT\ensuremath{\fX_{\mathrm{s}}}\ensuremath{\TJtwo{n}}) \ensuremath{\Jtwo{n}}
= \ensuremath{\TJtwo{n}}\ensuremath{\fX_{\mathrm{s}}}\rT\ensuremath{\fX_{\mathrm{s}}}\ensuremath{\Jtwo{n}} + \ensuremath{\fX_{\mathrm{s}}}\rT\ensuremath{\fX_{\mathrm{s}}}
= \ensuremath{\fY_{\mathrm{s}}} \rT\ensuremath{\fY_{\mathrm{s}}},\\
\ensuremath{\TJtwo{\ns}} \rT\ensuremath{\fY_{\mathrm{s}}} \ensuremath{\fY_{\mathrm{s}}} \ensuremath{\Jtwo{\ns}}
=&\; \ensuremath{\TJtwo{\ns}} \begin{bmatrix} \rT\ensuremath{\fX_{\mathrm{s}}}\Xs & \rT\ensuremath{\fX_{\mathrm{s}}} \ensuremath{\Jtwo{n}}\ensuremath{\fX_{\mathrm{s}}}\\ \rT\ensuremath{\fX_{\mathrm{s}}}\ensuremath{\TJtwo{n}}\ensuremath{\fX_{\mathrm{s}}} & \rT\ensuremath{\fX_{\mathrm{s}}}\Xs \end{bmatrix} \ensuremath{\Jtwo{\ns}}
= \begin{bmatrix} \rT\ensuremath{\fX_{\mathrm{s}}}\Xs & -\rT\ensuremath{\fX_{\mathrm{s}}} \ensuremath{\TJtwo{n}}\ensuremath{\fX_{\mathrm{s}}}\\ -\rT\ensuremath{\fX_{\mathrm{s}}}\ensuremath{\Jtwo{n}}\ensuremath{\fX_{\mathrm{s}}} & \rT\ensuremath{\fX_{\mathrm{s}}}\Xs \end{bmatrix}
= \rT\ensuremath{\fY_{\mathrm{s}}} \ensuremath{\fY_{\mathrm{s}}}
\end{split}
\end{align}
where we use $\ensuremath{\TJtwo{\ns}} = - \ensuremath{\Jtwo{\ns}}$. Thus, we can reformulate the eigenvalue problems of $\ensuremath{\fY_{\mathrm{s}}}\rT\ensuremath{\fY_{\mathrm{s}}}$ and, respectively, $\rT\ensuremath{\fY_{\mathrm{s}}}\Ys$ as
\begin{align*}
\sigma^2 \ensuremath{\bm{u}}
=&\; \ensuremath{\fY_{\mathrm{s}}} \rT\ensuremath{\fY_{\mathrm{s}}} \ensuremath{\bm{u}}
= \ensuremath{\Jtwo{n}} \ensuremath{\TJtwo{n}} \ensuremath{\fY_{\mathrm{s}}} \rT\ensuremath{\fY_{\mathrm{s}}} \ensuremath{\Jtwo{n}} \ensuremath{\TJtwo{n}} \ensuremath{\bm{u}}&
&\stackrel{\ensuremath{\TJtwo{n}} \cdot \vert}{\iff}&
\sigma^2 \ensuremath{\TJtwo{n}}\ensuremath{\bm{u}}
=&\; \ensuremath{\TJtwo{n}} \ensuremath{\fY_{\mathrm{s}}} \rT\ensuremath{\fY_{\mathrm{s}}} \ensuremath{\Jtwo{n}} \ensuremath{\TJtwo{n}}\ensuremath{\bm{u}}
\stackrel{\cref{eq:rotation}}{=} \ensuremath{\fY_{\mathrm{s}}} \rT\ensuremath{\fY_{\mathrm{s}}} \ensuremath{\TJtwo{n}}\ensuremath{\bm{u}},\\
\sigma^2 \ensuremath{\bm{v}}
=&\; \rT\ensuremath{\fY_{\mathrm{s}}} \ensuremath{\fY_{\mathrm{s}}} \ensuremath{\bm{v}}
= \ensuremath{\Jtwo{\ns}} \ensuremath{\TJtwo{\ns}} \rT\ensuremath{\fY_{\mathrm{s}}} \ensuremath{\fY_{\mathrm{s}}} \ensuremath{\Jtwo{\ns}} \ensuremath{\TJtwo{\ns}} \ensuremath{\bm{v}}&
&\stackrel{\ensuremath{\TJtwo{\ns}} \cdot \vert}{\iff}&
\sigma^2 \ensuremath{\TJtwo{\ns}}\ensuremath{\bm{v}}
=&\; \ensuremath{\TJtwo{\ns}} \rT\ensuremath{\fY_{\mathrm{s}}} \ensuremath{\fY_{\mathrm{s}}} \ensuremath{\Jtwo{\ns}} \ensuremath{\TJtwo{\ns}}\ensuremath{\bm{v}}
\stackrel{\cref{eq:rotation}}{=} \rT\ensuremath{\fY_{\mathrm{s}}} \ensuremath{\fY_{\mathrm{s}}} \ensuremath{\TJtwo{\ns}}\ensuremath{\bm{v}}.
\end{align*}
Thus, $(\ensuremath{\TJtwo{n}}\ensuremath{\bm{u}}, \ensuremath{\TJtwo{\ns}}\ensuremath{\bm{v}})$ is necessarily another pair of left- and right-singular vectors of $\ensuremath{\fY_{\mathrm{s}}}$ with the same singular value $\sigma$. We infer that the left-singular vectors $\ensuremath{\bm{u}}_i$, $1\leq i\leq 2n$, ordered by the magnitude of the singular values in descending order, can be written as
\begin{align}\label{eq:PODOfYs}
\ensuremath{\bm{U}} = [\ensuremath{\bm{u}}_1 \quad \ensuremath{\TJtwo{n}} \ensuremath{\bm{u}}_1 \quad \ensuremath{\bm{u}}_2 \quad \ensuremath{\TJtwo{n}} \ensuremath{\bm{u}}_2 \quad \dots \quad \ensuremath{\bm{u}}_n \quad \ensuremath{\TJtwo{n}} \ensuremath{\bm{u}}_n] \in \ensuremath{\mathbb{R}}^{2n \times 2n}.
\end{align}
For the second step (ii), we remark that a solution of the POD is explicitly known to be any matrix which stacks in its columns $2k$ left-singular vectors of the snapshot matrix $\ensuremath{\fY_{\mathrm{s}}}$ associated with the largest singular values \cite{LuminyBook2017}. Due to the special structure \cref{eq:PODOfYs} of the singular vectors of the snapshot matrix $\ensuremath{\fY_{\mathrm{s}}}$, a minimizer of the POD of $\ensuremath{\fY_{\mathrm{s}}}$ necessarily adopts this structure. We are allowed to rearrange the order of the columns in this matrix and thus, the result of the POD of $\ensuremath{\fY_{\mathrm{s}}}$ can always be rearranged to the form
\begin{align*}
\ensuremath{\fV_{\fE}} = [\ensuremath{\bm{E}} \quad \ensuremath{\TJtwo{n}}\ensuremath{\bm{E}}],&
&\ensuremath{\bm{E}} = [\ensuremath{\bm{u}}_1 \quad \ensuremath{\bm{u}}_2 \quad \dots \quad \ensuremath{\bm{u}}_k],&
&\ensuremath{\TJtwo{n}}\ensuremath{\bm{E}} = [\ensuremath{\TJtwo{n}}\ensuremath{\bm{u}}_1 \quad \ensuremath{\TJtwo{n}}\ensuremath{\bm{u}}_2 \quad \dots \quad \ensuremath{\TJtwo{n}}\ensuremath{\bm{u}}_k].
\end{align*}
Note that it automatically holds that $\rT\ensuremath{\bm{E}} \ensuremath{\bm{E}} = \I{k}$ and $\rT\ensuremath{\bm{E}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{E}} = \Z{k}$ since, in both products, only left-singular vectors from the columns of the matrix $\ensuremath{\bm{U}}$ from \cref{eq:PODOfYs} are paired, and $\ensuremath{\bm{U}}$ is known to be orthogonal from the properties of the SVD. Thus, \cref{eq:SymplOrthonROB} holds and we infer from \cref{lem:SymplOrthonROB} that the POD of $\ensuremath{\fY_{\mathrm{s}}}$ indeed is solved by a symplectic, orthonormal ROB.
For the final step (iii), we define the orthogonal projection operators
\begin{align*}
&\ensuremath{\fP_{\ensuremath{\fV_{\fE}}}} = \ensuremath{\fV_{\fE}} \rTb\ensuremath{\fV_{\fE}} = \ensuremath{\bm{E}} \rT\ensuremath{\bm{E}} + \ensuremath{\TJtwo{n}}\ensuremath{\bm{E}} \rT\ensuremath{\bm{E}} \ensuremath{\Jtwo{n}},&
&\ensuremath{\fPVE^{\perp}} = \I{2n} - \ensuremath{\fP_{\ensuremath{\fV_{\fE}}}}.
\end{align*}
Both are idempotent and symmetric, thus $\rTb\ensuremath{\fPVE^{\perp}} \ensuremath{\fPVE^{\perp}} = \ensuremath{\fPVE^{\perp}} \ensuremath{\fPVE^{\perp}} = \ensuremath{\fPVE^{\perp}}$. Due to $\ensuremath{\Jtwo{n}}\ensuremath{\TJtwo{n}} = \I{2n}$, it further holds
\begin{align*}
\ensuremath{\Jtwo{n}} \rTb\ensuremath{\fPVE^{\perp}} \ensuremath{\fPVE^{\perp}} \ensuremath{\TJtwo{n}} = \ensuremath{\Jtwo{n}} \ensuremath{\fPVE^{\perp}} \ensuremath{\TJtwo{n}} = \ensuremath{\Jtwo{n}}\ensuremath{\TJtwo{n}} - \ensuremath{\Jtwo{n}} \ensuremath{\bm{E}} \rT\ensuremath{\bm{E}} \ensuremath{\TJtwo{n}} - \ensuremath{\Jtwo{n}} \ensuremath{\TJtwo{n}}\ensuremath{\bm{E}} \rT\ensuremath{\bm{E}} \ensuremath{\Jtwo{n}} \ensuremath{\TJtwo{n}} = \ensuremath{\fPVE^{\perp}} = \rTb\ensuremath{\fPVE^{\perp}} \ensuremath{\fPVE^{\perp}}.
\end{align*}
Thus, it follows
\begin{align*}
\Fnorm{\ensuremath{\fPVE^{\perp}} \ensuremath{\fX_{\mathrm{s}}}}^2
= \traceb{\rT\ensuremath{\fX_{\mathrm{s}}} \rTb\ensuremath{\fPVE^{\perp}} \ensuremath{\fPVE^{\perp}} \ensuremath{\fX_{\mathrm{s}}}}
= \traceb{\rT\ensuremath{\fX_{\mathrm{s}}} \ensuremath{\Jtwo{n}} \rTb\ensuremath{\fPVE^{\perp}} \ensuremath{\fPVE^{\perp}} \ensuremath{\TJtwo{n}} \ensuremath{\fX_{\mathrm{s}}}}
= \Fnorm{\ensuremath{\fPVE^{\perp}} \ensuremath{\TJtwo{n}} \ensuremath{\fX_{\mathrm{s}}}}^2
\end{align*}
and, since $\ensuremath{\TJtwo{n}} = -\ensuremath{\Jtwo{n}}$ leaves the Frobenius norm unchanged, with $\ensuremath{\fY_{\mathrm{s}}} = [\ensuremath{\fX_{\mathrm{s}}} \quad \ensuremath{\Jtwo{n}} \ensuremath{\fX_{\mathrm{s}}}]$
\begin{align*}
2 \Fnorm{\ensuremath{\fPVE^{\perp}} \ensuremath{\fX_{\mathrm{s}}}}^2 = \Fnorm{\ensuremath{\fPVE^{\perp}} \ensuremath{\fX_{\mathrm{s}}}}^2 + \Fnorm{\ensuremath{\fPVE^{\perp}} \ensuremath{\Jtwo{n}} \ensuremath{\fX_{\mathrm{s}}}}^2
= \Fnorm{\ensuremath{\fPVE^{\perp}} [\ensuremath{\fX_{\mathrm{s}}} \quad \ensuremath{\Jtwo{n}} \ensuremath{\fX_{\mathrm{s}}}]}^2
= \Fnorm{\ensuremath{\fPVE^{\perp}} \ensuremath{\fY_{\mathrm{s}}}}^2,
\end{align*}
where we use in the last step that for two matrices $\ensuremath{\bm{A}} \in \ensuremath{\mathbb{R}}^{2n \times u}$, $\ensuremath{\bm{B}} \in \ensuremath{\mathbb{R}}^{2n \times v}$ for $u,v \in \ensuremath{\mathbb{N}}$, it holds $\Fnorm{\ensuremath{\bm{A}}}^2 + \Fnorm{\ensuremath{\bm{B}}}^2 = \Fnorm{[\ensuremath{\bm{A}} \quad \ensuremath{\bm{B}}]}^2$ for the Frobenius norm $\Fnorm{\bullet}$.
Since minimizing a function $f: \ensuremath{\mathbb{R}}^{2n \times 2k} \rightarrow \ensuremath{\mathbb{R}}$ is equivalent to minimizing any positive multiple $c f$ with $c \in \ensuremath{\mathbb{R}_{>0}}$, minimizing $\Fnorm{\ensuremath{\fPVE^{\perp}} \ensuremath{\fX_{\mathrm{s}}}}^2$ is equivalent to minimizing $2\Fnorm{\ensuremath{\fPVE^{\perp}} \ensuremath{\fX_{\mathrm{s}}}}^2 = \Fnorm{\ensuremath{\fPVE^{\perp}} \ensuremath{\fY_{\mathrm{s}}}}^2$. Additionally, for a ROB of the form $\ensuremath{\fV_{\fE}} = [\ensuremath{\bm{E}} \quad \ensuremath{\TJtwo{n}} \ensuremath{\bm{E}}]$, the constraint of orthonormal columns is equivalent to the requirements in \cref{eq:SymplOrthonROB}. Thus, minimizing the PSD in the class of symplectic, orthonormal ROBs is equivalent to the POD of $\ensuremath{\fY_{\mathrm{s}}}$ \cref{eq:PsdForOrthSymplROBs}.
\end{proof}
\begin{Remark} \label{rem:EqivPSD}
We remark that in the same fashion as the proof of step (iii) in \cref{theo:PsdForOrthSymplROBs}, it can be shown that, restricting to symplectic, orthonormal ROBs, a solution of $\texttt{PSD}([\ensuremath{\fX_{\mathrm{s}}} \quad \ensuremath{\Jtwo{n}}\ensuremath{\fX_{\mathrm{s}}}])$ is a solution of $\texttt{PSD}(\ensuremath{\fX_{\mathrm{s}}})$ and vice versa, which is one detail that was missing in \cite{Peng2016} to show the optimality of PSD Complex SVD in the set of symplectic, orthonormal ROBs.
\end{Remark}
We next prove that PSD Complex SVD is equivalent to POD of $\ensuremath{\fY_{\mathrm{s}}}$ from \cref{eq:PsdForOrthSymplROBs} and thus, also minimizes the PSD in the set of symplectic, orthonormal bases. To this end, we repeat the optimality result from \cite{Peng2016} and extend it with the results of the present paper.
\begin{Proposition}[Optimality of PSD Complex SVD]
\label{theo:OptimalityCSVD}
Let $\mathbb{M}_2 \subset \ensuremath{\mathbb{R}}^{2n \times 2k}$ denote the set of symplectic bases with the structure $\ensuremath{\fV_{\fE}} = [\ensuremath{\bm{E}} \quad \ensuremath{\TJtwo{n}}\ensuremath{\bm{E}}]$. The PSD Complex SVD solves $\texttt{PSD}([\ensuremath{\fX_{\mathrm{s}}} \quad -\ensuremath{\Jtwo{n}}\ensuremath{\fX_{\mathrm{s}}}])$ in $\mathbb{M}_2$.
\end{Proposition}
\begin{proof}
See \cite[Theorem 4.5.]{Peng2016}.
\end{proof}
\begin{Proposition}[Equivalence of POD of $\ensuremath{\fY_{\mathrm{s}}}$ and PSD Complex SVD]\label{prop:EquivalenceComplexSVD}
PSD Complex SVD is equivalent to the POD of $\ensuremath{\fY_{\mathrm{s}}}$. Thus, PSD Complex SVD yields a minimizer of the PSD for symplectic, orthonormal ROBs.
\end{Proposition}
\begin{proof}
By \cref{theo:OptimalityCSVD}, PSD Complex SVD solves $\texttt{PSD}([\ensuremath{\fX_{\mathrm{s}}} \quad -\ensuremath{\Jtwo{n}}\ensuremath{\fX_{\mathrm{s}}}])$ in the set $\mathbb{M}_2$ of symplectic bases with the structure $\ensuremath{\fV_{\fE}} = [\ensuremath{\bm{E}} \quad \ensuremath{\TJtwo{n}}\ensuremath{\bm{E}}]$. For such a basis, \cref{eq:SymplROBForEF} holds with $\ensuremath{\bm{F}} = \ensuremath{\TJtwo{n}} \ensuremath{\bm{E}}$, which is equivalent to the conditions on $\ensuremath{\bm{E}}$ required in \cref{eq:SymplOrthonROB}. By \cref{lem:SymplOrthonROB}, we infer that $\mathbb{M}_2$ equals the set of symplectic, orthonormal bases.
Furthermore, we can show that, in the set $\mathbb{M}_2$, a solution of $\texttt{PSD}([\ensuremath{\fX_{\mathrm{s}}} \quad -\ensuremath{\Jtwo{n}}\ensuremath{\fX_{\mathrm{s}}}])$ is a solution of $\texttt{PSD}(\ensuremath{\fX_{\mathrm{s}}})$ and vice versa (see \cref{rem:EqivPSD}). Thus, PSD Complex SVD minimizes the PSD for the snapshot matrix $\ensuremath{\fX_{\mathrm{s}}}$ in the set of orthonormal, symplectic matrices and PSD Complex SVD and the POD of $\ensuremath{\fY_{\mathrm{s}}}$ solve the same minimization problem.
\end{proof}
We emphasize that the computation of a minimizer of \cref{eq:PsdForOrthSymplROBs} via PSD Complex SVD requires less memory storage than the computation via the POD of $\ensuremath{\fY_{\mathrm{s}}}$. The reason is that the complex formulation uses the complex snapshot matrix $\ensuremath{\fC_{\textrm{s}}} \in \ensuremath{\mathbb{C}}^{n \times \ensuremath{n_{\mathrm{s}}}}$, which corresponds to $2 \cdot n \cdot \ensuremath{n_{\mathrm{s}}}$ real floating-point numbers, while the POD of $\ensuremath{\fY_{\mathrm{s}}}$ artificially enlarges the snapshot matrix to $\ensuremath{\fY_{\mathrm{s}}} \in \ensuremath{\mathbb{R}}^{2n \times 2\ensuremath{n_{\mathrm{s}}}}$, which corresponds to $4 \cdot n \cdot \ensuremath{n_{\mathrm{s}}}$ floating-point numbers. Still, the POD of $\ensuremath{\fY_{\mathrm{s}}}$ might be computationally more efficient since it is a purely real formulation and thereby does not require complex arithmetic operations.
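A small numerical experiment (sizes chosen arbitrarily; continuing our Python sketch) illustrates \cref{prop:EquivalenceComplexSVD}: twice the PSD error of the PSD Complex SVD basis equals the neglected POD energy of $\ensuremath{\fY_{\mathrm{s}}}$.
\begin{verbatim}
# Consistency check: 2 * PSD error of the PSD Complex SVD basis
# = sum of the neglected squared singular values of Ys = [Xs, J Xs].
rng = np.random.default_rng(42)
n, ns, k = 40, 15, 4
Xs = rng.standard_normal((2 * n, ns))

V = psd_complex_svd(Xs, k)
Ys = np.hstack([Xs, poisson_matrix(n) @ Xs])
sv = np.linalg.svd(Ys, compute_uv=False)

print(np.isclose(2 * psd_functional(V, Xs), np.sum(sv[2 * k:] ** 2)))
\end{verbatim}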
\subsection{Symplectic, non-orthonormal basis generation}
\label{sec:NonOrthPSD}
In the next step, we want to give an idea of how to leave the class of symplectic, orthonormal ROBs. We call a basis generation technique symplectic, non-orthonormal if it is able to compute a symplectic, non-orthonormal basis.
In \cref{lem:OrtonSymplBasisGen}, we briefly showed that most existing symplectic basis generation techniques generate a symplectic, orthonormal ROB. The only exception is the NLP algorithm suggested in \cite{Peng2016}, which is able to compute a non-orthonormal, symplectic ROB. The algorithm is based on a given initial guess $\ensuremath{\bm{V}}_0 \in \ensuremath{\mathbb{R}}^{2n \times 2k}$ which is a symplectic ROB, e.g.\ computed with PSD Cotangent Lift or PSD Complex SVD. Nonlinear programming is used to leave the class of symplectic, orthonormal ROBs and derive an optimized symplectic ROB $\ensuremath{\bm{V}} = \ensuremath{\bm{V}}_0 \ensuremath{\bm{C}}$ with the symplectic coefficient matrix $\ensuremath{\bm{C}} \in \ensuremath{\mathbb{R}}^{2k \times 2r}$ for some $r \leq k$. Since this procedure searches for a solution spanned by the columns of $\ensuremath{\bm{V}}_0$, it is not suited to compute a global optimum of the PSD, which is what we are interested in within the scope of this paper.
In the following, we present a new basis generation technique based on an SVD-like decomposition for matrices $\ensuremath{\bm{B}} \in \ensuremath{\mathbb{R}}^{2n \times m}$ presented in \cite{Xu2003}, which we introduce next.
\begin{Proposition}[SVD-like decomposition \cite{Xu2003}]\label{lem:SVDlikeDecomp}
Any real matrix $\ensuremath{\bm{B}} \in \ensuremath{\mathbb{R}}^{2n \times m}$ can be decomposed as the product of a symplectic matrix $\ensuremath{\bm{S}} \in \ensuremath{\mathbb{R}}^{2n \times 2n}$, a sparse and potentially non-diagonal matrix $\ensuremath{\bm{D}} \in \ensuremath{\mathbb{R}}^{2n \times m}$ and an orthogonal matrix $\ensuremath{\bm{Q}} \in \ensuremath{\mathbb{R}}^{m \times m}$ with
\begin{align} \label{eq:SVDlikeDecomp}
&\ensuremath{\bm{B}} = \ensuremath{\bm{S}} \ensuremath{\bm{D}} \ensuremath{\bm{Q}},&
&\ensuremath{\bm{D}} =
\begin{blockarray}{
>{\hspace{\arraycolsep}\scriptstyle}c*{3}{>{\scriptstyle}c}<{\hspace{\arraycolsep}}
>{\scriptstyle}c
}
p & q & p & m-2p-q \\
\begin{block}{
(>{\hspace{\arraycolsep}}cccc<{\hspace{\arraycolsep}})
>{\scriptstyle}c
}
\ensuremath{\fSigma_{\mathrm{s}}} & \Z{} & \Z{} & \Z{} & p \\
\Z{} & \I{} & \Z{} & \Z{} & q \\
\Z{} & \Z{} & \Z{} & \Z{} & n-p-q \\
\Z{} & \Z{} & \ensuremath{\fSigma_{\mathrm{s}}} & \Z{} & p \\
\Z{} & \Z{} & \Z{} & \Z{} & q \\
\Z{} & \Z{} & \Z{} & \Z{} & n-p-q \\
\end{block}
\end{blockarray},
&\begin{split}
\ensuremath{\fSigma_{\mathrm{s}}} = \diag(\ensuremath{\sigma^{\mathrm{s}}}_1, \dots, \ensuremath{\sigma^{\mathrm{s}}}_p) \in \ensuremath{\mathbb{R}}^{p \times p},\\
\ensuremath{\sigma^{\mathrm{s}}}_i > 0 \quad\text{for } 1\leq i\leq p.
\end{split}
\end{align}
where $p,q \in \ensuremath{\mathbb{N}}$ with $\rank(\ensuremath{\bm{B}}) = 2p+q$, and where we indicate the block row and column dimensions in $\ensuremath{\bm{D}}$ by small letters. The diagonal entries $\ensuremath{\sigma^{\mathrm{s}}}_i$, $1 \leq i\leq p$, of the matrix $\ensuremath{\fSigma_{\mathrm{s}}}$ are related to the pairs of purely imaginary eigenvalues $\lambda_j(\ensuremath{\bm{M}}), \lambda_{p+j}(\ensuremath{\bm{M}}) \in \ensuremath{\mathbb{C}}$ of $\ensuremath{\bm{M}} = \rT\ensuremath{\bm{B}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{B}} \in \ensuremath{\mathbb{R}}^{m \times m}$ with
\begin{align*}
&\lambda_j(\ensuremath{\bm{M}}) = -(\ensuremath{\sigma^{\mathrm{s}}}_j)^2\ensuremath{{\mathrm{i}}},&
&\lambda_{p+j}(\ensuremath{\bm{M}}) = (\ensuremath{\sigma^{\mathrm{s}}}_j)^2\ensuremath{{\mathrm{i}}},&
&1 \leq j\leq p.
\end{align*}
\end{Proposition}
\begin{Remark}[Singular values]\label{rem:SingVal}
We call the diagonal entries $\ensuremath{\sigma^{\mathrm{s}}}_i$, $1 \leq i\leq p$, of the matrix $\ensuremath{\fSigma_{\mathrm{s}}}$ from \cref{lem:SVDlikeDecomp} in the following the symplectic singular values. The reason is the following analogy to the classical SVD.
The classical SVD decomposes $\ensuremath{\bm{B}} \in \ensuremath{\mathbb{R}}^{2n \times m}$ as $\ensuremath{\bm{B}} = \ensuremath{\bm{U}} \ensuremath{\bm{\Sigma}} \rT\ensuremath{\bm{V}}$ where $\ensuremath{\bm{U}} \in \ensuremath{\mathbb{R}}^{2n \times 2n}$, $\ensuremath{\bm{V}} \in \ensuremath{\mathbb{R}}^{m \times m}$ are each orthogonal matrices and $\ensuremath{\bm{\Sigma}} \in \ensuremath{\mathbb{R}}^{2n \times m}$ is a diagonal matrix with the singular values $\sigma_i$ on its diagonal $\diag(\ensuremath{\bm{\Sigma}}) = [\sigma_1, \dots, \sigma_r, 0, \dots, 0] \in \ensuremath{\mathbb{R}}^{\min(2n,m)}$, $r = \rank(\ensuremath{\bm{B}})$. The singular values are linked to the real eigenvalues of $\ensuremath{\bm{N}} = \rT\ensuremath{\bm{B}} \ensuremath{\bm{B}}$ with $\lambda_i(\ensuremath{\bm{N}}) = \sigma_i^2$. Furthermore, due to the orthogonality of $\ensuremath{\bm{U}}$ and $\ensuremath{\bm{V}}$, respectively, it holds for the SVD that $\rT\ensuremath{\bm{B}} \ensuremath{\bm{B}} = \ensuremath{\bm{V}} \rT\ensuremath{\bm{\Sigma}} \ensuremath{\bm{\Sigma}} \rT\ensuremath{\bm{V}}$ and $\ensuremath{\bm{B}} \rT\ensuremath{\bm{B}} = \ensuremath{\bm{U}} \ensuremath{\bm{\Sigma}} \rT\ensuremath{\bm{\Sigma}} \rT\ensuremath{\bm{U}}$.
A similar relation can be derived for an SVD-like decomposition from \cref{lem:SVDlikeDecomp}. Due to the structure of the decomposition \cref{eq:SVDlikeDecomp} and the symplecticity of $\ensuremath{\bm{S}}$, it holds
\begin{align} \label{eq:SymplSingVal}
\begin{split}
\rT\ensuremath{\bm{B}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{B}}
=&\; \rT\ensuremath{\bm{Q}} \rT\ensuremath{\bm{D}} \overbrace{\rT\ensuremath{\bm{S}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{S}}}^{=\ensuremath{\Jtwo{n}}} \ensuremath{\bm{D}} \ensuremath{\bm{Q}}\\
=&\; \rT\ensuremath{\bm{Q}} \rT\ensuremath{\bm{D}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{D}} \ensuremath{\bm{Q}},
\end{split}
&\rT\ensuremath{\bm{D}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{D}} =
\begin{blockarray}{
>{\hspace{\arraycolsep}\scriptstyle}c*{3}{>{\scriptstyle}c}<{\hspace{\arraycolsep}}
>{\scriptstyle}c
}
p & q & p & m-2p-q & \\
\begin{block}{
(>{\hspace{\arraycolsep}}cccc<{\hspace{\arraycolsep}})
>{\scriptstyle}c
}
\Z{} & \Z{} & \ensuremath{\fSigma_{\mathrm{s}}}^2 & \Z{} & p\\
\Z{} & \Z{} & \Z{} & \Z{} & q\\
-\ensuremath{\fSigma_{\mathrm{s}}}^2 & \Z{} & \Z{} & \Z{} & p\\
\Z{} & \Z{} & \Z{} & \Z{} & m-2p-q\\
\end{block}
\end{blockarray}.
\end{align}
This analogy is why we call the diagonal entries $\ensuremath{\sigma^{\mathrm{s}}}_i$, $1 \leq i\leq p$, of the matrix $\ensuremath{\fSigma_{\mathrm{s}}}$ symplectic singular values.
\end{Remark}
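While computing a full SVD-like decomposition requires the algorithms from \cite{Xu2003}, the symplectic singular values themselves can be read off the spectrum of $\ensuremath{\bm{M}} = \rT\ensuremath{\bm{B}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{B}}$, as the following continuation of our Python sketch shows.
\begin{verbatim}
# Sketch: symplectic singular values from the purely imaginary
# eigenvalue pairs +-i (sigma_j^s)^2 of the skew-symmetric M = B^T J B.
def symplectic_singular_values(B, tol=1e-10):
    M = B.T @ poisson_matrix(B.shape[0] // 2) @ B
    w = np.abs(np.linalg.eigvals(M).imag)
    w = np.sort(w[w > tol])[::-1]   # nonzero magnitudes come in pairs
    return np.sqrt(w[::2])          # one sigma_j^s per +- pair
\end{verbatim}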
The idea for the basis generation now is to select $k \in \ensuremath{\mathbb{N}}$ pairs of columns of $\ensuremath{\bm{S}}$ in order to compute a symplectic ROB. The selection should be based on the importance of these pairs, which we characterize in the following proposition by linking the Frobenius norm of a matrix to the symplectic singular values.
\begin{Proposition}\label{prop:FrobSymplSingVal}
Let $\ensuremath{\bm{B}} \in \ensuremath{\mathbb{R}}^{2n \times m}$ have an SVD-like decomposition $\ensuremath{\bm{B}} = \ensuremath{\bm{S}} \ensuremath{\bm{D}} \ensuremath{\bm{Q}}$ with $p,q \in \ensuremath{\mathbb{N}}$ from \cref{lem:SVDlikeDecomp}. The Frobenius norm of $\ensuremath{\bm{B}}$ can be rewritten as
\begin{align}\label{eq:FrobSymplSingVal}
&\Fnorm{\ensuremath{\bm{B}}}^2 = \sum_{i=1}^{p+q} (\ensuremath{w^\mathrm{s}}_i)^2,&
&\ensuremath{w^\mathrm{s}}_i = \begin{cases}
\ensuremath{\sigma^{\mathrm{s}}}_i \sqrt{\tnorm{\ensuremath{\bm{s}}_{i}}^2 + \tnorm{\ensuremath{\bm{s}}_{n+i}}^2}, & 1\leq i\leq p,\\
\tnorm{\ensuremath{\bm{s}}_{i}}, & p+1 \leq i\leq p+q
\end{cases}
\end{align}
where $\ensuremath{\bm{s}}_i \in \ensuremath{\mathbb{R}}^{2n}$ is the $i$-th column of $\ensuremath{\bm{S}}$ for $1\leq i\leq 2n$. In the following, we refer to the $\ensuremath{w^\mathrm{s}}_i$ as the weighted symplectic singular values.
\end{Proposition}
\begin{proof}
We insert the SVD-like decomposition $\ensuremath{\bm{B}} = \ensuremath{\bm{S}} \ensuremath{\bm{D}} \ensuremath{\bm{Q}}$ and use the orthogonality of $\ensuremath{\bm{Q}}$ to reformulate
\begin{align*}
\Fnorm{\ensuremath{\bm{B}}}^2
=&\; \Fnorm{\ensuremath{\bm{S}} \ensuremath{\bm{D}} \ensuremath{\bm{Q}}}^2
= \Fnorm{\ensuremath{\bm{S}} \ensuremath{\bm{D}}}^2
= \trace(\rT\ensuremath{\bm{D}} \rT\ensuremath{\bm{S}} \ensuremath{\bm{S}} \ensuremath{\bm{D}})
= \sum_{i=1}^{p} (\ensuremath{\sigma^{\mathrm{s}}}_i)^2 \rT\ensuremath{\bm{s}}_i \ensuremath{\bm{s}}_i + \sum_{i=1}^{p} (\ensuremath{\sigma^{\mathrm{s}}}_i)^2 \rT\ensuremath{\bm{s}}_{n+i} \ensuremath{\bm{s}}_{n+i} + \sum_{i=1}^q \rT\ensuremath{\bm{s}}_{p+i} \ensuremath{\bm{s}}_{p+i}\\
=&\; \sum_{i=1}^p (\ensuremath{\sigma^{\mathrm{s}}}_i)^2 \left( \tnorm{\ensuremath{\bm{s}}_{i}}^2 + \tnorm{\ensuremath{\bm{s}}_{n+i}}^2 \right)
+ \sum_{i=1}^q \tnorm{\ensuremath{\bm{s}}_{p+i}}^2
\end{align*}
which is equivalent to \cref{eq:FrobSymplSingVal}.
\end{proof}
The following \cref{prop:PSDDecay} shows that, with the symplectic projection used in the PSD, we can remove exactly those addends $\ensuremath{w^\mathrm{s}}_i$ in \cref{eq:FrobSymplSingVal} whose corresponding pairs of columns are included in the ROB. This will be our selection criterion in the new basis generation technique, which we denote by PSD SVD-like decomposition.
\begin{Definition}[PSD SVD-like decomposition]\label{def:PSDSVDlike}
We compute an SVD-like decomposition \cref{eq:SVDlikeDecomp} as $\ensuremath{\fX_{\mathrm{s}}} = \ensuremath{\bm{S}} \ensuremath{\bm{D}} \ensuremath{\bm{Q}}$ of the snapshot matrix $\ensuremath{\fX_{\mathrm{s}}} \in \ensuremath{\mathbb{R}}^{2n \times \ensuremath{n_{\mathrm{s}}}}$ and define $p,q \in \ensuremath{\mathbb{N}}$ as in \cref{lem:SVDlikeDecomp}. In order to compute a ROB $\ensuremath{\bm{V}}$ with $2k$ columns, find the $k$ indices $i \in \ensuremath{\mathcal{I}_{\textrm{PSD}}} = \{i_1, \dots, i_k\} \subset \{1, \dots, p+q\}$ with the largest contributions $\ensuremath{w^\mathrm{s}}_i$ to \cref{eq:FrobSymplSingVal}, i.e.
\begin{align} \label{eq:IdxPSDSVDlike}
\ensuremath{\mathcal{I}_{\textrm{PSD}}} = \argmax_{\substack{\mathcal{I} \subset \{1, \dots, p+q\}\\ \abs{\mathcal{I}} = k}}
\left( \sum_{i \in \mathcal{I}} \left( \ensuremath{w^\mathrm{s}}_i\right)^2 \right).
\end{align}
To construct the ROB, we choose the $k$ pairs of columns $\ensuremath{\bm{s}}_i \in \ensuremath{\mathbb{R}}^{2n}$ from $\ensuremath{\bm{S}}$ corresponding to the selected indices $\ensuremath{\mathcal{I}_{\textrm{PSD}}}$ such that
\begin{align*}
\ensuremath{\bm{V}} = [\ensuremath{\bm{s}}_{i_1}, \dots, \ensuremath{\bm{s}}_{i_k}, \ensuremath{\bm{s}}_{n+i_1}, \dots, \ensuremath{\bm{s}}_{n+i_k}] \in \ensuremath{\mathbb{R}}^{2n \times 2k}.
\end{align*}
\end{Definition}
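Given the factors $\ensuremath{\bm{S}}$ and $\ensuremath{\bm{D}}$ of an SVD-like decomposition, e.g.\ computed with the methods from \cite{Xu2003}, the selection step reads as follows in our running Python sketch (0-based indices; column $i$ is paired with column $n+i$).
\begin{verbatim}
# Sketch of the PSD SVD-like selection: weighted symplectic singular
# values and the ROB built from the k most important column pairs of S.
def weighted_sympl_sv(S, sigma_s, p, q):
    n = S.shape[0] // 2
    w = np.empty(p + q)
    for i in range(p):         # pairs weighted by sigma_i^s
        w[i] = sigma_s[i] * np.sqrt(np.linalg.norm(S[:, i]) ** 2
                                    + np.linalg.norm(S[:, n + i]) ** 2)
    for i in range(p, p + q):  # unweighted columns
        w[i] = np.linalg.norm(S[:, i])
    return w

def psd_svd_like_rob(S, w, k):
    n = S.shape[0] // 2
    idx = np.argsort(w)[::-1][:k]   # k largest contributions
    return np.hstack([S[:, idx], S[:, n + idx]])
\end{verbatim}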
The special choice of the ROB is motivated by the following theoretical result, which is analogous to the results known for the classical POD in the framework of orthogonal projections.
\begin{Proposition}[Projection error by neglected weighted symplectic singular values]\label{prop:PSDDecay}
Let $\ensuremath{\bm{V}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$ be a ROB constructed with the procedure described in \cref{def:PSDSVDlike} to the index set $\ensuremath{\mathcal{I}_{\textrm{PSD}}} \subset \{1, \dots, p+q\}$ with $p,q \in \ensuremath{\mathbb{N}}$ from \cref{lem:SVDlikeDecomp}. The PSD functional can be calculated by
\begin{align} \label{eq:PSDDecay}
\Fnorm{(\I{2n} - \ensuremath{\bm{V}}\si\ensuremath{\bm{V}}) \ensuremath{\fX_{\mathrm{s}}}}^2
= \sum_{i \in \{1, \dots, p+q\} \setminus \ensuremath{\mathcal{I}_{\textrm{PSD}}}} \left( \ensuremath{w^\mathrm{s}}_i \right)^2,
\end{align}
which is the cumulative sum of the squares of the neglected weighted symplectic singular values.
\end{Proposition}
\begin{proof}
Let $\ensuremath{\bm{V}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$ be a ROB constructed from an SVD-like decomposition $\ensuremath{\fX_{\mathrm{s}}} = \ensuremath{\bm{S}} \ensuremath{\bm{D}} \ensuremath{\bm{Q}}$ of the snapshot matrix $\ensuremath{\fX_{\mathrm{s}}} \in \ensuremath{\mathbb{R}}^{2n \times \ensuremath{n_{\mathrm{s}}}}$ with the procedure described in \cref{def:PSDSVDlike}. Let $p,q \in \ensuremath{\mathbb{N}}$ be defined as in \cref{lem:SVDlikeDecomp} and $\ensuremath{\mathcal{I}_{\textrm{PSD}}} = \{i_1, \dots, i_k\} \subset \{1, \dots, p+q\}$ be the set of indices selected with \cref{eq:IdxPSDSVDlike}.
For the proof, we introduce a slightly different notation of the ROB $\ensuremath{\bm{V}}$. The selection of the columns $\ensuremath{\bm{s}}_i$ of $\ensuremath{\bm{S}}$ is denoted with the selection matrix $\I{\ensuremath{\idxSetPSD^{2k}}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$ based on
\begin{align*}
&\left( \I{\ensuremath{\mathcal{I}_{\textrm{PSD}}}} \right)_{\alpha,\beta} =
\begin{cases}
1,& \alpha=i_\beta \in \ensuremath{\mathcal{I}_{\textrm{PSD}}}\\
0,& \text{else}
\end{cases}
&\text{for } \quad
\begin{split}
&1\leq \alpha \leq 2n,\\
&1\leq \beta \leq k,
\end{split}
&\I{\ensuremath{\idxSetPSD^{2k}}} = [\I{\ensuremath{\mathcal{I}_{\textrm{PSD}}}}, \; \ensuremath{\TJtwo{n}}\I{\ensuremath{\mathcal{I}_{\textrm{PSD}}}}],
\end{align*}
which allows us to write the ROB as the matrix--matrix product $\ensuremath{\bm{V}} = \ensuremath{\bm{S}} \I{\ensuremath{\idxSetPSD^{2k}}}$. Furthermore, we can select the neglected entries with $\I{2n} - \I{\ensuremath{\idxSetPSD^{2k}}} \rTb{\I{\ensuremath{\idxSetPSD^{2k}}}}$.
We insert the SVD-like decomposition and the representation of the ROB introduced in the previous paragraph in the PSD which reads
\begin{align*}
\Fnorm{(\I{2n} - \ensuremath{\bm{V}}\si\ensuremath{\bm{V}}) \ensuremath{\fX_{\mathrm{s}}}}^2
= \Fnorm{(\I{2n} - \ensuremath{\bm{S}} \I{\ensuremath{\idxSetPSD^{2k}}} \ensuremath{\TJtwo{k}} \rT{\I{\ensuremath{\idxSetPSD^{2k}}}} \rT\ensuremath{\bm{S}} \ensuremath{\Jtwo{n}}) \ensuremath{\bm{S}} \ensuremath{\bm{D}} \ensuremath{\bm{Q}}}^2
= \Big\lVert\ensuremath{\bm{S}} (\I{2n} - \I{\ensuremath{\idxSetPSD^{2k}}} \ensuremath{\TJtwo{k}} \rT{\I{\ensuremath{\idxSetPSD^{2k}}}} \overbrace{\rT\ensuremath{\bm{S}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{S}}}^{=\ensuremath{\Jtwo{n}}} ) \ensuremath{\bm{D}} \Big\rVert^2_{\mathrm{F}}
\end{align*}
where we use the orthogonality of $\ensuremath{\bm{Q}}$ and the symplecticity of $\ensuremath{\bm{S}}$ in the last step. We can reformulate the product of Poisson matrices and the selection matrix as
\begin{align*}
\ensuremath{\TJtwo{k}} \rT{\I{\ensuremath{\idxSetPSD^{2k}}}} \ensuremath{\Jtwo{n}}
=
\ensuremath{\TJtwo{k}}
\begin{bmatrix}
\rT{\I{\ensuremath{\mathcal{I}_{\textrm{PSD}}}}}\\
\rT{\I{\ensuremath{\mathcal{I}_{\textrm{PSD}}}}}\ensuremath{\Jtwo{n}}
\end{bmatrix}
\ensuremath{\Jtwo{n}}
=
\begin{bmatrix}
\Z{k} & -\I{k}\\
\I{k} & \Z{k}
\end{bmatrix}
\begin{bmatrix}
\rT{\I{\ensuremath{\mathcal{I}_{\textrm{PSD}}}}}\ensuremath{\Jtwo{n}}\\
-\rT{\I{\ensuremath{\mathcal{I}_{\textrm{PSD}}}}}
\end{bmatrix}
= \rT{\I{\ensuremath{\idxSetPSD^{2k}}}}.
\end{align*}
Thus, we can further reformulate the PSD as
\begin{align*}
\Fnorm{(\I{2n} - \ensuremath{\bm{V}}\si\ensuremath{\bm{V}}) \ensuremath{\fX_{\mathrm{s}}}}^2
= \Fnorm{\ensuremath{\bm{S}} \left( \I{2n} - \I{\ensuremath{\idxSetPSD^{2k}}} \rTb{\I{\ensuremath{\idxSetPSD^{2k}}}} \right) \ensuremath{\bm{D}}}^2
= \sum_{i \in \{1, \dots, p+q\} \setminus \ensuremath{\mathcal{I}_{\textrm{PSD}}}} \left( \ensuremath{w^\mathrm{s}}_i \right)^2
\end{align*}
where $\ensuremath{w^\mathrm{s}}_i$ are the weighted symplectic singular values from \cref{eq:FrobSymplSingVal}. In the last step, we use that the resulting diagonal matrix in the parentheses sets all rows of $\ensuremath{\bm{D}}$ with indices $i, n+i$ to zero for $i \in \ensuremath{\mathcal{I}_{\textrm{PSD}}}$. Thus, the last step can be concluded analogously to the proof of \cref{prop:FrobSymplSingVal}.
\end{proof}
A direct consequence of \cref{prop:PSDDecay} is that the decay of the PSD functional is proportional to the decay of the sum over the neglected weighted symplectic singular values $\ensuremath{w^\mathrm{s}}_i$ from \cref{eq:FrobSymplSingVal}. In the numerical example in \cref{subsubsec:NumExp:projErr}, we observe an exponential decrease of these quantities, which induces an exponential decay of the PSD functional.
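The identity \cref{eq:PSDDecay} is also easy to verify numerically; the following sketch (assuming the quantities from the previous listings) evaluates the symplectic projection error directly, with helpers for the Poisson matrix and the symplectic inverse $\si\ensuremath{\bm{V}} = \ensuremath{\TJtwo{k}} \rT\ensuremath{\bm{V}} \ensuremath{\Jtwo{n}}$.
\begin{verbatim}
def J2(m):
    """Canonical Poisson matrix J_{2m} = [[0, I], [-I, 0]]."""
    Z, I = np.zeros((m, m)), np.eye(m)
    return np.block([[Z, I], [-I, Z]])

def sympl_inverse(V, n, k):
    """Symplectic inverse V^+ = J_{2k}^T V^T J_{2n}."""
    return J2(k).T @ V.T @ J2(n)

def psd_functional(V, Xs, n, k):
    R = Xs - V @ (sympl_inverse(V, n, k) @ Xs)
    return np.linalg.norm(R, 'fro')**2

# neglected = np.setdiff1d(np.arange(p + q), idx)
# assert np.isclose(psd_functional(V, Xs, n, k), np.sum(w[neglected]**2))
\end{verbatim}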
\begin{Remark}[Computation of the SVD-like decomposition]\label{rem:CompSVDlikeDecomp}
To compute an SVD-like decomposition \cref{eq:SVDlikeDecomp} of $\ensuremath{\bm{B}}$, several approaches exist. The original paper \cite{Xu2003} derives a decomposition based on the product $\rT\ensuremath{\bm{B}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{B}}$, which is ill-suited for numerical computation since cancellation errors can arise. In \cite{Xu2005}, an implicit version is presented that does not require the computation of the full product $\rT\ensuremath{\bm{B}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{B}}$ but derives the decomposition implicitly by transforming $\ensuremath{\bm{B}}$. Furthermore, \cite{Agoujil2018} introduces an iterative approach which computes parts of an SVD-like decomposition with a block-power iterative method. In the present case, we use the implicit approach \cite{Xu2005}.
\end{Remark}
\subsection{Interplay of non-orthonormal and orthonormal ROBs}
We give further results on the interplay of non-orthonormal and orthonormal ROBs. The fundamental statement in the current section is the Orthogonal SR decomposition \cite{Bunse1986,Xu2003}.
\begin{Proposition}[Orthogonal SR decomposition] \label{prop:OrthoSRDecomp}
For each matrix $\ensuremath{\bm{B}} \in \ensuremath{\mathbb{R}}^{2n \times m}$ with $m \leq n$, there exists a symplectic, orthogonal matrix $\ensuremath{\bm{S}} \in \ensuremath{\mathbb{R}}^{2n \times 2n}$, an upper triangular matrix $\ensuremath{\bm{R}}_{11} \in \ensuremath{\mathbb{R}}^{m \times m}$ and a strictly upper triangular matrix $\ensuremath{\bm{R}}_{21} \in \ensuremath{\mathbb{R}}^{m \times m}$ such that
\begin{align*}
&\ensuremath{\bm{B}} = \ensuremath{\bm{S}} \begin{bmatrix} \ensuremath{\bm{R}}_{11}\\ \Z{(n-m) \times m}\\ \ensuremath{\bm{R}}_{21}\\ \Z{(n-m) \times m} \end{bmatrix}
= [\ensuremath{\bm{S}}_m \quad \ensuremath{\TJtwo{n}} \ensuremath{\bm{S}}_m] \begin{bmatrix} \ensuremath{\bm{R}}_{11}\\ \ensuremath{\bm{R}}_{21} \end{bmatrix},&
&\begin{split}
\ensuremath{\bm{S}}_i =&\; [\ensuremath{\bm{s}}_1, \dots, \ensuremath{\bm{s}}_i], \quad 1\leq i\leq n,\\
\ensuremath{\bm{S}} =&\; [\ensuremath{\bm{s}}_1, \dots, \ensuremath{\bm{s}}_n, \ensuremath{\TJtwo{n}} \ensuremath{\bm{s}}_1, \dots, \ensuremath{\TJtwo{n}} \ensuremath{\bm{s}}_n].
\end{split}
\end{align*}
\end{Proposition}
We remark that a similar result can be derived for the case $m > n$ \cite{Xu2003}, but we do not introduce it since it is not needed in the following.
\begin{proof}
Let $\ensuremath{\bm{B}} \in \ensuremath{\mathbb{R}}^{2n \times m}$ with $m \leq n$. We consider the QR decomposition
\begin{align*}
\ensuremath{\bm{B}} = \ensuremath{\bm{Q}} \begin{bmatrix} \ensuremath{\bm{R}}\\ \Z{(2n-m) \times m} \end{bmatrix}
\end{align*}
where $\ensuremath{\bm{Q}} \in \ensuremath{\mathbb{R}}^{2n \times 2n}$ is an orthogonal matrix and $\ensuremath{\bm{R}} \in \ensuremath{\mathbb{R}}^{2n \times m}$ is upper triangular. The original Orthogonal SR decomposition \cite[Corollary 4.5.]{Bunse1986} for the square matrix states that we can decompose $\ensuremath{\bm{Q}} \in \ensuremath{\mathbb{R}}^{2n \times 2n}$ into a symplectic, orthogonal matrix $\ensuremath{\bm{S}} \in \ensuremath{\mathbb{R}}^{2n \times 2n}$, an upper triangular matrix $\ensuremath{\widetilde{\fR}}_{11} \in \ensuremath{\mathbb{R}}^{n \times n}$, a strictly upper triangular matrix $\ensuremath{\widetilde{\fR}}_{21} \in \ensuremath{\mathbb{R}}^{n \times n}$ and two (possibly) full matrices $\ensuremath{\widetilde{\fR}}_{12}, \ensuremath{\widetilde{\fR}}_{22} \in \ensuremath{\mathbb{R}}^{n \times n}$ such that
\begin{align*}
&\ensuremath{\bm{Q}} = \ensuremath{\bm{S}} \begin{bmatrix} \ensuremath{\widetilde{\fR}}_{11} & \ensuremath{\widetilde{\fR}}_{12}\\ \ensuremath{\widetilde{\fR}}_{21} & \ensuremath{\widetilde{\fR}}_{22} \end{bmatrix}&
&\text{and thus}&
&\ensuremath{\bm{B}}
= \ensuremath{\bm{S}} \begin{bmatrix} \ensuremath{\widetilde{\fR}}_{11} & \ensuremath{\widetilde{\fR}}_{12}\\ \ensuremath{\widetilde{\fR}}_{21} & \ensuremath{\widetilde{\fR}}_{22} \end{bmatrix} \begin{bmatrix} \ensuremath{\bm{R}}\\ \Z{(2n-m) \times m} \end{bmatrix}
= \ensuremath{\bm{S}} \begin{bmatrix} \ensuremath{\widetilde{\fR}}_{11}\\ \ensuremath{\widetilde{\fR}}_{21}\end{bmatrix} \begin{bmatrix} \ensuremath{\bm{R}}\\ \Z{(n-m) \times m} \end{bmatrix}.
\end{align*}
Since $\ensuremath{\bm{R}}$ is upper triangular, it preserves the (strictly) upper triangular pattern in $\ensuremath{\widetilde{\fR}}_{11}$ and $\ensuremath{\widetilde{\fR}}_{21}$, and we obtain the (strictly) upper triangular matrices $\ensuremath{\bm{R}}_{11}, \ensuremath{\bm{R}}_{21} \in \ensuremath{\mathbb{R}}^{m \times m}$ from
\begin{align*}
\begin{bmatrix} \ensuremath{\bm{R}}_{11}\\ \Z{(n-m) \times m}\\ \ensuremath{\bm{R}}_{21}\\ \Z{(n-m) \times m} \end{bmatrix}
= \begin{bmatrix} \ensuremath{\widetilde{\fR}}_{11}\\ \ensuremath{\widetilde{\fR}}_{21}\end{bmatrix} \begin{bmatrix} \ensuremath{\bm{R}}\\ \Z{(n-m) \times m} \end{bmatrix}.
\end{align*}
\end{proof}
Based on the Orthogonal SR decomposition, the following two propositions prove bounds for the projection errors of PSD which allow an estimate of the quality of the respective method. In both cases, we require the basis size to satisfy $k \leq n$ or $2k \leq n$, respectively. This restriction is not limiting in the context of symplectic MOR since in all application cases $k \ll n$.
\begin{Proposition}\label{prop:EstimatePODoPSD}
Let $\ensuremath{\bm{V}} \in \ensuremath{\mathbb{R}}^{2n \times k}$ be a minimizer of POD with $k \leq n$ basis vectors and $\ensuremath{\fV_{\fE}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$ be a minimizer of the PSD in the class of orthonormal, symplectic matrices with $2k$ basis vectors. Then, the orthogonal projection errors of $\ensuremath{\fV_{\fE}}$ and $\ensuremath{\bm{V}}$ satisfy
\begin{align*}
\Fnorm{(\I{2n} - \ensuremath{\fV_{\fE}} \rT\ensuremath{\fV_{\fE}}) \ensuremath{\fX_{\mathrm{s}}}}^2 \leq \Fnorm{\left( \I{2n} - \ensuremath{\bm{V}} \rT\ensuremath{\bm{V}} \right) \ensuremath{\fX_{\mathrm{s}}}}^2.
\end{align*}
\end{Proposition}
\begin{proof}
The Orthogonal SR decomposition (see \cref{prop:OrthoSRDecomp}) guarantees that a symplectic, orthogonal matrix $\ensuremath{\bm{S}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$ and $\ensuremath{\bm{R}} \in \ensuremath{\mathbb{R}}^{2k \times k}$ exist with $\ensuremath{\bm{V}} = \ensuremath{\bm{S}} \ensuremath{\bm{R}}$. Since both matrices $\ensuremath{\bm{V}}$ and $\ensuremath{\bm{S}}$ are orthogonal and $\img(\ensuremath{\bm{V}}) \subset \img(\ensuremath{\bm{S}})$, we can show that $\ensuremath{\bm{S}}$ yields a lower projection error than $\ensuremath{\bm{V}}$ with
\begin{align*}
\Fnorm{\left( \I{2n} - \ensuremath{\bm{S}} \rT\ensuremath{\bm{S}} \right) \ensuremath{\fX_{\mathrm{s}}}}^2
=&\; \Fnorm{\left(\I{2n} - \ensuremath{\bm{S}} \rT\ensuremath{\bm{S}}\right) \left(\I{2n} - \ensuremath{\bm{V}} \rT\ensuremath{\bm{V}}\right) \ensuremath{\fX_{\mathrm{s}}}}^2
= \sum_{i=1}^{\ensuremath{n_{\mathrm{s}}}} \tnorm{\left(\I{2n} - \ensuremath{\bm{S}} \rT\ensuremath{\bm{S}}\right) \left(\I{2n} - \ensuremath{\bm{V}} \rT\ensuremath{\bm{V}}\right) \ensuremath{\fx^{\mathrm{s}}}_i}^2\\
\leq&\; \underbrace{\tnorm{\I{2n} - \ensuremath{\bm{S}} \rT\ensuremath{\bm{S}}}^2}_{\leq 1} \; \sum_{i=1}^{\ensuremath{n_{\mathrm{s}}}} \tnorm{\left(\I{2n} - \ensuremath{\bm{V}} \rT\ensuremath{\bm{V}}\right) \ensuremath{\fx^{\mathrm{s}}}_i}^2
\leq \Fnorm{\left(\I{2n} - \ensuremath{\bm{V}} \rT\ensuremath{\bm{V}}\right) \ensuremath{\fX_{\mathrm{s}}}}^2.
\end{align*}
Let $\ensuremath{\fV_{\fE}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$ be a minimizer of the PSD in the class of symplectic, orthonormal ROBs. By definition of $\ensuremath{\fV_{\fE}}$, it yields a lower projection error than $\ensuremath{\bm{S}}$. Since both ROBs are symplectic, orthonormal, we can exchange the symplectic inverse with the transposition (see \cref{lem:SymplOrthonROB}, (iii)). This proves the assertion with
\begin{align*}
\Fnorm{\left( \I{2n} - \ensuremath{\bm{V}} \rT\ensuremath{\bm{V}} \right) \ensuremath{\fX_{\mathrm{s}}}}^2
\geq \Fnorm{\left(\I{2n} - \ensuremath{\bm{S}} \rT\ensuremath{\bm{S}}\right) \ensuremath{\fX_{\mathrm{s}}}}^2
\geq \Fnorm{\left( \I{2n} - \ensuremath{\fV_{\fE}} \rT\ensuremath{\fV_{\fE}}\right) \ensuremath{\fX_{\mathrm{s}}}}^2.
\end{align*}
\end{proof}
\cref{prop:EstimatePODoPSD} proves that we require at most twice the number of basis vectors to generate a symplectic, orthonormal basis with an orthogonal projection error at least as small as the one of the classical POD. An analogous result can be derived in the framework of a symplectic projection which is proven in the following proposition.
\begin{Proposition} \label{prop:EstimateoPSDPSD}
Assume there exists a minimizer $\ensuremath{\bm{V}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$ of the general PSD for a basis size $2k \leq n$ with potentially non-orthonormal columns. Let $\ensuremath{\fV_{\fE}} \in \ensuremath{\mathbb{R}}^{2n \times 4k}$ be a minimizer of the PSD in the class of symplectic, orthogonal bases of size $4k$. Then, we know that the symplectic projection error of $\ensuremath{\fV_{\fE}}$ is less than or equal to the one of $\ensuremath{\bm{V}}$, i.e.\
\begin{align*}
\Fnorm{(\I{2n} - \ensuremath{\fV_{\fE}} \si\ensuremath{\fV_{\fE}}) \ensuremath{\fX_{\mathrm{s}}}}^2 \leq \Fnorm{(\I{2n} - \ensuremath{\bm{V}} \si\ensuremath{\bm{V}}) \ensuremath{\fX_{\mathrm{s}}}}^2.
\end{align*}
\end{Proposition}
\begin{proof}
Let $\ensuremath{\bm{V}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$ be a minimizer of PSD with $2k \leq n$. By \cref{prop:OrthoSRDecomp}, we can determine a symplectic, orthogonal matrix $\ensuremath{\bm{S}} \in \ensuremath{\mathbb{R}}^{2n \times 4k}$ and $\ensuremath{\bm{R}} \in \ensuremath{\mathbb{R}}^{4k \times 2k}$ with $\ensuremath{\bm{V}} = \ensuremath{\bm{S}} \ensuremath{\bm{R}}$. Similar to the proof of \cref{prop:EstimatePODoPSD}, we can bound the projection errors. We require the identity
\begin{align*}
(\I{2n} - \ensuremath{\bm{S}} \si\ensuremath{\bm{S}}) (\I{2n} - \ensuremath{\bm{V}} \si\ensuremath{\bm{V}})
= \I{2n} - \ensuremath{\bm{S}} \si\ensuremath{\bm{S}} - \ensuremath{\bm{V}} \si\ensuremath{\bm{V}} + \underbrace{\ensuremath{\bm{S}} \overbrace{\si\ensuremath{\bm{S}} \ensuremath{\bm{S}}}^{=\I{4k}} \ensuremath{\bm{R}}}_{=\ensuremath{\bm{V}}} \underbrace{\ensuremath{\TJtwo{k}} \rT\ensuremath{\bm{R}} \rT\ensuremath{\bm{S}} \ensuremath{\Jtwo{n}}}_{=\si\ensuremath{\bm{V}}}
= \I{2n} - \ensuremath{\bm{S}} \si\ensuremath{\bm{S}}.
\end{align*}
With this identity, we proceed analogously to the proof of \cref{prop:EstimatePODoPSD} and derive for a minimizer $\ensuremath{\fV_{\fE}} \in \ensuremath{\mathbb{R}}^{2n \times 4k}$ of PSD in the class of symplectic, orthonormal ROBs
\begin{align*}
\Fnorm{(\I{2n} - \ensuremath{\fV_{\fE}} \si\ensuremath{\fV_{\fE}}) \ensuremath{\fX_{\mathrm{s}}}}^2
\leq&\; \Fnorm{(\I{2n} - \ensuremath{\bm{S}} \si\ensuremath{\bm{S}}) \ensuremath{\fX_{\mathrm{s}}}}^2
= \Fnorm{(\I{2n} - \ensuremath{\bm{S}} \si\ensuremath{\bm{S}}) (\I{2n} - \ensuremath{\bm{V}} \si\ensuremath{\bm{V}}) \ensuremath{\fX_{\mathrm{s}}}}^2\\
\leq&\; \underbrace{\tnorm{(\I{2n} - \ensuremath{\bm{S}} \si\ensuremath{\bm{S}})}^2}_{\leq 1} \Fnorm{(\I{2n} - \ensuremath{\bm{V}} \si\ensuremath{\bm{V}}) \ensuremath{\fX_{\mathrm{s}}}}^2
\leq \Fnorm{(\I{2n} - \ensuremath{\bm{V}} \si\ensuremath{\bm{V}}) \ensuremath{\fX_{\mathrm{s}}}}^2.
\end{align*}
\end{proof}
\cref{prop:EstimateoPSDPSD} proves that we require at most twice the number of basis vectors to generate a symplectic, orthonormal basis with a symplectic projection error at least as small as the one of a (potentially non-orthonormal) minimizer of PSD.
\section{Numerical results}
\label{sec:NumRes}
The numerical experiments in the present paper are based on a two-dimensional plane strain linear elasticity model which is described by a Lam\'e--Navier equation
\begin{align*}
\ensuremath{\rho_{0}} \sodeldel{t^2}{\ensuremath{\bm{u}}(\ensuremath{\bm{\xi}}, t, \ensuremath{\bm{\mu}})} - \ensuremath{\mu_{\textrm{L}}} \, \Delta_{\ensuremath{\bm{\xi}}} \ensuremath{\bm{u}}(\ensuremath{\bm{\xi}}, t, \ensuremath{\bm{\mu}}) + (\ensuremath{\lambda_{\textrm{L}}} + \ensuremath{\mu_{\textrm{L}}}) \grad[\ensuremath{\bm{\xi}}] \left( \divb[\ensuremath{\bm{\xi}}]{\ensuremath{\bm{u}}(\ensuremath{\bm{\xi}}, t, \ensuremath{\bm{\mu}})} \right) = \ensuremath{\rho_{0}} \, \ensuremath{\bm{g}}(\ensuremath{\bm{\xi}}, t)
\end{align*}
for $\ensuremath{\bm{\xi}} \in \ensuremath{\varOmega} \subset \ensuremath{\mathbb{R}}^2$ and $t \in \ftInterval$ with the density $\ensuremath{\rho_{0}} \in \ensuremath{\mathbb{R}_{>0}}$, the Lam\'e constants $\ensuremath{\bm{\mu}} = (\ensuremath{\lambda_{\textrm{L}}}, \ensuremath{\mu_{\textrm{L}}}) \in \ensuremath{\mathbb{R}_{>0}}^2$, the external force $\ensuremath{\bm{g}}: \ensuremath{\varOmega} \times \ftInterval \rightarrow \ensuremath{\mathbb{R}}^2$, Dirichlet boundary conditions on $\ensuremath{\boundary_{\fu}} \subset \ensuremath{\varGamma} := \partial\varOmega$ and Neumann boundary conditions on $\ensuremath{\varGamma_{\ft}} \subset \ensuremath{\varGamma}$. We apply non-dimensionalization (e.g.\ \cite[Chapter 4.1]{Langtangen2016}), apply the Finite Element Method (FEM) with first-order Lagrangian elements on a triangular mesh and rewrite the system as a first-order system to arrive at the parametric linear system \cref{eq:LinSys} with
\begin{align}\label{eq:HamLinElast}
&\ensuremath{\bm{x}}(t, \ensuremath{\bm{\mu}}) = \begin{bmatrix} \ensuremath{\bm{q}}(t, \ensuremath{\bm{\mu}})\\ \ensuremath{\bm{p}}(t, \ensuremath{\bm{\mu}}) \end{bmatrix},&
&\ensuremath{\bm{H}}(\ensuremath{\bm{\mu}}) = \begin{bmatrix} \ensuremath{\bm{K}}(\ensuremath{\bm{\mu}}) & \Z{n}\\ \Z{n} & \inv\ensuremath{\bm{M}} \end{bmatrix},&
&\ensuremath{\bm{h}}(t) = \begin{bmatrix} -\ensuremath{\bm{f}}(t)\\ \Z{n \times 1} \end{bmatrix}
\end{align}
where $\ensuremath{\bm{q}}(t, \ensuremath{\bm{\mu}}) \in \ensuremath{\mathbb{R}}^n$ is the vector of displacement DOFs, $\ensuremath{\bm{p}}(t, \ensuremath{\bm{\mu}}) \in \ensuremath{\mathbb{R}}^n$ is the vector of linear momentum DOFs, $\ensuremath{\bm{K}}(\ensuremath{\bm{\mu}}) \in \ensuremath{\mathbb{R}}^{n \times n}$ is the stiffness matrix, $\inv\ensuremath{\bm{M}} \in \ensuremath{\mathbb{R}}^{n \times n}$ is the inverse of the mass matrix and $\ensuremath{\bm{f}}(t) \in \ensuremath{\mathbb{R}}^n$ is the vector of external forces.
We remark that a Hamiltonian formulation with the velocity DOFs $\ensuremath{\bm{v}}(t) = \dd{t} \ensuremath{\bm{q}}(t) \in \ensuremath{\mathbb{R}}^n$ instead of the linear momentum DOFs $\ensuremath{\bm{p}}(t)$ is possible if a non-canonical symplectic structure is used. Nevertheless, \cite[Remark 3.8.]{Peng2016} suggests switching to a formulation with a canonical symplectic structure for the MOR of Hamiltonian systems.
In order to solve the system \cref{eq:HamLinElast} numerically with a time-discrete approximation $\ensuremath{\bm{x}}_i(\ensuremath{\bm{\mu}}) \approx \ensuremath{\bm{x}}(t_i, \ensuremath{\bm{\mu}})$ for each of $\ensuremath{n_{\textrm{t}}} \in \ensuremath{\mathbb{N}}$ time steps $t_i \in \ftInterval$, $1\leq i\leq \ensuremath{n_{\textrm{t}}}$, a numerical integrator is required. The preservation of the symplectic structure in the time-discrete system requires a so-called symplectic integrator \cite{Hairer2006,Bhatt2017}. In the context of our work, the implicit midpoint scheme is used in all cases.
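For illustration, we sketch an implicit midpoint step in Python, assuming that the linear system \cref{eq:LinSys} has the form $\dd{t}\ensuremath{\bm{x}}(t) = \ensuremath{\Jtwo{n}} \left( \ensuremath{\bm{H}}(\ensuremath{\bm{\mu}}) \ensuremath{\bm{x}}(t) + \ensuremath{\bm{h}}(t) \right)$; this is a notational assumption for the sketch, not the implementation used in our experiments. For a time-invariant system matrix, the linear solve could be prefactorized once.
\begin{verbatim}
def implicit_midpoint(A, b, x0, t0, dt, n_steps):
    """Implicit midpoint rule for x' = A x + b(t), e.g. A = J2(n) @ H."""
    d = x0.size
    I = np.eye(d)
    L = I - 0.5 * dt * A      # constant matrix: factorize once in practice
    R = I + 0.5 * dt * A
    X = np.empty((n_steps + 1, d))
    X[0], t = x0, t0
    for i in range(n_steps):
        rhs = R @ X[i] + dt * b(t + 0.5 * dt)   # midpoint evaluation of b
        X[i + 1] = np.linalg.solve(L, rhs)
        t += dt
    return X
\end{verbatim}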
\begin{Remark}[Modified Hamiltonian]\label{rem:ModHam}
We remark that, even though the symplectic structure is preserved by symplectic integrators, the Hamiltonian may be modified in the time-discrete system compared to the original Hamiltonian. In the case of a quadratic Hamiltonian (see \cref{cor:QuadHam}) and a symplectic Runge-Kutta integrator, the modified Hamiltonian equals the original Hamiltonian since these integrators preserve quadratic first integrals. For further details, we refer to \cite[Chapter IX.]{Hairer2006} or \cite[Section 5.1.2 and 5.2]{Leimkuhler2005}.
\end{Remark}
The model parameters are the first and second Lam\'e constants with $\ensuremath{\bm{\mu}} = (\ensuremath{\lambda_{\textrm{L}}}, \ensuremath{\mu_{\textrm{L}}}) \in \mathcal{P} = [35 \cdot 10^{9}, 125 \cdot 10^{9}] \;\txtfrac{\usiN}{\usim^2} \times [35 \cdot 10^{9}, 83 \cdot 10^{9}] \;\txtfrac{\usiN}{\usim^2}$ which varies between cast iron and steel with approx.\ $12\%$ chromium \cite[App.\ E 1 Table 1]{Dubbel2014ChapE}. The density is set to $\ensuremath{\rho_{0}} = 7856 \;\txtfrac{\usikg}{\usim^3}$. The non-dimensionalization constants are set to $\nondim\ensuremath{\lambda_{\textrm{L}}} = \nondim\ensuremath{\mu_{\textrm{L}}} = 81 \cdot 10^{9} \;\txtfrac{\usiN}{\usim^2}$, $\nondim\xi = 1 \;\usim$, $\nondim{g} = 9.81\;\txtfrac{\usim}{\usis^2}$. The geometry is a simple cantilever beam clamped on the left side with a force applied to the right boundary. The time interval is chosen to be $t \in [\ensuremath{t_{\mathrm{0}}}, \ensuremath{t_{\mathrm{end}}}]$ with $\ensuremath{t_{\mathrm{0}}} = 0 \,\usis$ and $\ensuremath{t_{\mathrm{end}}} = 7.2 \cdot 10^{-2} \,\usis$ which is one oscillation of the beam. For the numerical integration $\ensuremath{n_{\textrm{t}}} = 151$ time steps are used.
\begin{figure}[h!]
\centering
\includegraphics[scale=1.0]{figures/geometry/beam.pdf}
\caption{An exaggerated illustration of the displacements $\ensuremath{\bm{q}}(t, \ensuremath{\bm{\mu}})$ of the non-autonomous beam model (a)~at the time with the maximum displacement (gray) and (b)~at the final time (blue).}
\end{figure}
The symplectic MOR techniques examined are PSD Complex SVD (\cref{def:ComplexSVD}), the greedy procedure \cite{Maboudi2017} and the newly introduced PSD SVD-like decomposition (\cref{def:PSDSVDlike}). MOR techniques that do not necessarily derive a symplectic ROB are called non-symplectic MOR techniques in the following. The non-symplectic MOR techniques investigated in the scope of our numerical results are the POD applied to the full state $\ensuremath{\bm{x}}(t, \ensuremath{\bm{\mu}})$ (POD full state) and a POD applied to the displacement $\ensuremath{\bm{q}}(t, \ensuremath{\bm{\mu}})$ and linear momentum states $\ensuremath{\bm{p}}(t, \ensuremath{\bm{\mu}})$ separately (POD separate states). We summarize the basis generation methods in \Cref{tab:basis_gen}, where $\texttt{SVD}(\bullet)$ and $\texttt{cSVD}(\bullet)$ denote the SVD and the complex SVD, respectively.
\begin{table}[htbp]
\centering
{\renewcommand{\arraystretch}{1.5}
\begin{tabular}{p{2.2cm}p{5.6cm}lcc}
\hline\noalign{\smallskip}
method & solution & solution procedure & ortho- & sympl. \\[-.6em]
&&&norm.\\
\hline\noalign{\smallskip}
POD full &
$\ensuremath{\bm{V}}_k = \ensuremath{\bm{U}}(:,1:k)$ & $\ensuremath{\bm{U}} = \texttt{SVD}(\ensuremath{\fX_{\mathrm{s}}})$ &
\ding{51} &
\ding{55} \\[1em]
POD separate &
\multirow{2}{*}{$\ensuremath{\bm{V}}_k = \begin{bmatrix} \ensuremath{\bm{U}}_{\ensuremath{\bm{p}}}(:,1:k)\\ \ensuremath{\bm{U}}_{\ensuremath{\bm{q}}}(:,1:k) \end{bmatrix}$} &
$\ensuremath{\bm{U}}_{\ensuremath{\bm{p}}} = \texttt{SVD} \left( [\ensuremath{\bm{p}}_1, \dots, \ensuremath{\bm{p}}_{\ensuremath{n_{\mathrm{s}}}}] \right)$ &
\ding{51} &
\ding{55} \\
&& $\ensuremath{\bm{U}}_{\ensuremath{\bm{q}}} = \texttt{SVD} \left( [\ensuremath{\bm{q}}_1, \dots, \ensuremath{\bm{q}}_{\ensuremath{n_{\mathrm{s}}}}] \right)$\\[1em]
PSD cSVD &
$\ensuremath{\bm{V}}_{2k} = [\ensuremath{\bm{E}}(:,1:k) \quad \ensuremath{\TJtwo{n}} \ensuremath{\bm{E}}(:,1:k)]$ &
$\ensuremath{\bm{E}} = \begin{bmatrix}\ensuremath{\bm{\varPhi}} \\ \ensuremath{\bm{\Psi}} \end{bmatrix}, \ensuremath{\bm{\varPhi}} +\ensuremath{{\mathrm{i}}}\ensuremath{\bm{\Psi}} = \texttt{cSVD}\left( \ensuremath{\fC_{\textrm{s}}} \right)$ &
\ding{51} &
\ding{51}\\
&& $\ensuremath{\fC_{\textrm{s}}} = [\ensuremath{\bm{p}}_1 + \ensuremath{{\mathrm{i}}} \ensuremath{\bm{q}}_1, \dots, \ensuremath{\bm{p}}_{\ensuremath{n_{\mathrm{s}}}} + \ensuremath{{\mathrm{i}}} \ensuremath{\bm{q}}_{\ensuremath{n_{\mathrm{s}}}}]$\\[1em]
PSD greedy &
$\ensuremath{\bm{V}}_{2k} = [\ensuremath{\bm{E}}(:,1:k) \quad \ensuremath{\TJtwo{n}} \ensuremath{\bm{E}}(:,1:k)]$ &
$\ensuremath{\bm{E}}$ from greedy algorithm &
\ding{51} &
\ding{51}\\[1em]
PSD SVD-like &
$\ensuremath{\bm{V}}_{2k} = [\ensuremath{\bm{s}}_{i_1}, \dots, \ensuremath{\bm{s}}_{i_k}, \ensuremath{\bm{s}}_{n+i_1}, \dots, \ensuremath{\bm{s}}_{n+i_k}]$ &
$\ensuremath{\bm{S}} = [\ensuremath{\bm{s}}_1, \dots, \ensuremath{\bm{s}}_{2n}]$ from \cref{eq:SVDlikeDecomp}, &
\ding{55} &
\ding{51}\\
&& $\ensuremath{\mathcal{I}_{\textrm{PSD}}} = \{i_1, \dots, i_k\}$ from \cref{eq:IdxPSDSVDlike}\\
\hline\noalign{\smallskip}
\end{tabular}
}
\caption{Summary of the basis generation methods used in the numerical experiments, where we use the MATLAB\textsuperscript{\textregistered} notation $\ensuremath{\bm{U}}(:,1:k)$ to denote the selection of the first $k$ columns of a matrix.}
\label{tab:basis_gen}
\end{table}
All presented experiments are generalization experiments, i.e.\ we choose $9$ different training parameter vectors $\ensuremath{\bm{\mu}} \in \mathcal{P}$ on a regular grid to generate the snapshots and evaluate the reduced models for $16$ random parameter vectors that are distinct from the $9$ training parameter vectors. Thus, the number of snapshots is $\ensuremath{n_{\mathrm{s}}} = 9 \cdot 151 = 1359$. The size $2k$ of the ROB $\ensuremath{\bm{V}}$ is varied in steps of $20$ with $2k \in \left\{ 20, 40, \dots, 280, 300 \right\}$.
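A sketch of this sampling follows, assuming a $3 \times 3$ regular grid over $\mathcal{P}$; the exact grid layout and the random seed are our assumptions, any regular grid with $9$ nodes works analogously.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)             # seed chosen arbitrarily
lam = np.linspace(35e9, 125e9, 3)          # first Lame constant [N/m^2]
mu  = np.linspace(35e9,  83e9, 3)          # second Lame constant [N/m^2]
train = [(l, m) for l in lam for m in mu]  # 9 training parameter vectors
test = np.column_stack([rng.uniform(35e9, 125e9, 16),
                        rng.uniform(35e9,  83e9, 16)])  # 16 test vectors
\end{verbatim}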
The software used for the numerical experiments is RBmatlab\footnote{https://www.morepas.org/software/rbmatlab/} which is an open-source library based on the proprietary software package MATLAB\textsuperscript{\textregistered} and contains several reduced simulation approaches. An add-on to RBmatlab is provided\footnote{https://doi.org/10.5281/zenodo.2578078} which includes all the additional code to reproduce the results of the present paper. The versions used in the present paper are RBmatlab 1.16.09 and MATLAB\textsuperscript{\textregistered} 2017a.
\subsection{Autonomous beam model}
In the first model, we load the beam on the free end (far right) with a constant force which induces an oscillation. Due to the constant force, the discretized system can be formulated as an autonomous Hamiltonian system. Thus, the Hamiltonian is constant and its preservation in the reduced models can be analysed. All other reduction results are very similar to the non-autonomous case and are thus presented exclusively for the non-autonomous case in \cref{sec:NonAutoBeam}.
\subsubsection{Preservation over time of the modified Hamiltonian in the reduced model}
In the following, we investigate the preservation of the Hamiltonian in our reduced models. With respect to \cref{rem:ModHam}, we mean the preservation over time of the modified Hamiltonian. Since the Hamiltonian is quadratic in our example and the implicit midpoint scheme is a symplectic Runge--Kutta integrator, the modified Hamiltonian equals the original one, which is why we speak of ``the Hamiltonian'' in the following.
We present in \cref{fig:NumRes:BeamAuto:Ex2:Hamiltonian} the count of the total $240$ simulations which show a preservation (over time) of the reduced Hamiltonian in the reduced model. The solution $\ensuremath{\reduce{\fx}}$ of a reduced simulation preserves the reduced Hamiltonian over time if $\txtfrac{\left( \ensuremath{\reduce{\Ham}}(\ensuremath{\reduce{\fx}}(t_i), \ensuremath{\bm{\mu}}) - \ensuremath{\reduce{\Ham}}(\ensuremath{\reduce{\fx}}(t_0), \ensuremath{\bm{\mu}}) \right)}{\ensuremath{\ensuremath{\mathcal{H}}_{\mathrm{rel}}}(\ensuremath{\bm{\mu}})} < 10^{-10}$ for all discrete times $t_i \in [\ensuremath{t_{\mathrm{0}}}, \ensuremath{t_{\mathrm{end}}}]$, $1\leq i\leq \ensuremath{n_{\textrm{t}}}$, where $\ensuremath{\ensuremath{\mathcal{H}}_{\mathrm{rel}}}(\ensuremath{\bm{\mu}}) > 0$ is a parameter-dependent normalization factor. The heat map shows that no simulation in the non-symplectic case preserves the Hamiltonian whereas all symplectic methods preserve it, as expected from theory.
In \cref{fig:NumRes:BeamAuto:Ex2:HamExample}, we exemplify the non-constant evolution of the reduced Hamiltonian for three non-symplectic bases generated by POD separate states with different basis sizes and one selected test parameter $(\lambda, \mu) \in \mathcal{P}$. It shows that in all three cases, the Hamiltonian starts to grow exponentially.
\begin{figure}[h]
\begin{minipage}{.47\textwidth}
\centering
\includegraphics[scale=1.0]{figures/results/beam_autonomous/Exp1/result.pdf}
\caption{Heat map which shows the preservation of the reduced Hamiltonian in the reduced model in $x$ of $y$ cases ($x/y$).}
\label{fig:NumRes:BeamAuto:Ex2:Hamiltonian}
\end{minipage}\hfill
\begin{minipage}{.47\textwidth}
\centering
\includegraphics[scale=1.0]{figures/results/beam_autonomous/Exp1/result2.pdf}
\caption{Evolution of the reduced Hamiltonian for POD separate states for a selected parameter $(\lambda, \mu) \in \mathcal{P}$.}
\label{fig:NumRes:BeamAuto:Ex2:HamExample}
\end{minipage}
\end{figure}
\subsection{Non-autonomous beam model}
\label{sec:NonAutoBeam}
The second model is similar to the first one. The only difference is that the free (right) end of the beam is loaded with a time-varying force. The force is chosen to act in phase with the beam. The time dependence of the force necessitates a non-autonomous formulation, which in the Hamiltonian framework requires a time-dependent Hamiltonian function as introduced in \cref{subsec:NonAutoHam}.
We use the model to investigate the quality of the reduction for the considered MOR techniques. To this end, we investigate the projection error, i.e.\ the error on the training data, the orthogonality and symplecticity of the ROB and the error in the reduced model for the test parameters.
\subsubsection{Projection error of the snapshots and singular values}
\label{subsubsec:NumExp:projErr}
The projection error is the error on the training data collected in the snapshot matrix $\ensuremath{\fX_{\mathrm{s}}}$, i.e.\
\begin{align*}
&\ensuremath{e_{l_{2}}}(2k) = \Fnorm{(\I{2n} - \ensuremath{\bm{V}} \rT\ensuremath{\bm{W}}) \ensuremath{\fX_{\mathrm{s}}}}^2,&
\begin{split}
\text{POD}: \rT\ensuremath{\bm{W}} =&\; \rT\ensuremath{\bm{V}},\\
\text{PSD}: \rT\ensuremath{\bm{W}} =&\; \si\ensuremath{\bm{V}} \; (= \rT\ensuremath{\bm{V}} \text{ for orthosymplectic ROBs, \cref{lem:SymplOrthonROB}}).
\end{split}
\end{align*}
It is a measure for the approximation quality of the ROB with respect to the training data. \cref{fig:NumRes:BeamNonAuto:Ex3:BasisProps} (left) shows this quantity for the considered MOR techniques and different ROB sizes $2k$. All basis generation techniques show an exponential decay. As expected from theory, POD full state minimizes the projection error among the orthonormal basis generation techniques (see \cref{tab:basis_gen}). PSD SVD-like decomposition shows a lower projection error than the other PSD methods for $2k \geq 80$ and yields a similar projection error for $2k \leq 60$. From this experiment alone, one might expect the full-state POD to yield decent or even the best results. The following experiments prove this expectation to be wrong.
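In code, the two variants differ only in the choice of $\rT\ensuremath{\bm{W}}$; a sketch reusing \texttt{sympl\_inverse} from the listing above:
\begin{verbatim}
def proj_error(V, Wt, Xs):
    """e_l2: squared Frobenius norm of the projection residual."""
    return np.linalg.norm(Xs - V @ (Wt @ Xs), 'fro')**2

# POD: proj_error(V, V.T, Xs)
# PSD: proj_error(V, sympl_inverse(V, n, k), Xs)
\end{verbatim}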
The decay of (a) the classical singular values $\sigma_i$, (b) the symplectic singular values $\ensuremath{\sigma^{\mathrm{s}}}_i$ (see \cref{rem:SingVal}) and (c) the weighted symplectic singular values $\ensuremath{w^\mathrm{s}}_i$ (see \cref{eq:FrobSymplSingVal}) sorted by the magnitude of the symplectic singular values is displayed in \cref{fig:NumRes:BeamNonAuto:Ex3:BasisProps} (right). All show an exponential decrease. The weighting introduced in \cref{eq:FrobSymplSingVal} for $\ensuremath{w^\mathrm{s}}_i$ does not influence the exponential decay rate of $\ensuremath{\sigma^{\mathrm{s}}}_i$. The decrease in the classical singular values is directly linked to the exponential decrease of the projection error of POD full state due to properties of the Frobenius norm (see \cite{LuminyBook2017}). A similar result was deduced in the scope of the present paper for PSD SVD-like decomposition and the PSD functional (see \cref{prop:PSDDecay}).
\begin{figure}[h]
\centering
\includegraphics[scale=1.0]{figures/results/beam_non_autonomous/Exp3/result.pdf}
\caption{Projection error (left) and decay of the singular values from \cref{rem:SingVal} and \cref{eq:FrobSymplSingVal} (right).}
\label{fig:NumRes:BeamNonAuto:Ex3:BasisProps}
\end{figure}
\subsubsection{Orthonormality and symplecticity of the bases}
To verify the orthonormality and the symplecticity numerically, we consider the two functions
\begin{align} \label{eq:ortho}
&\ensuremath{o_{\fV}}(2k) = \Fnorm{\rT\ensuremath{\bm{V}} \ensuremath{\bm{V}} - \I{2k}},&
&\ensuremath{s_{\fV}}(2k) = \Fnorm{\rT\ensuremath{\Jtwo{k}} \rT\ensuremath{\bm{V}} \ensuremath{\Jtwo{n}} \ensuremath{\bm{V}} - \I{2k}}&
\end{align}
which are zero (up to numerical precision) if and only if the basis is orthonormal or symplectic, respectively. In \cref{fig:NumRes:BeamAuto:Ex1:Ortho}, we show both values for the considered basis generation techniques and RB sizes.
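Both checks translate directly into code; a sketch reusing the helper \texttt{J2} from above:
\begin{verbatim}
def ortho_defect(V):
    """o_V(2k): deviation of V from orthonormal columns."""
    return np.linalg.norm(V.T @ V - np.eye(V.shape[1]), 'fro')

def sympl_defect(V, n, k):
    """s_V(2k): deviation of V from J-orthogonality (symplecticity)."""
    return np.linalg.norm(J2(k).T @ V.T @ J2(n) @ V - np.eye(2 * k), 'fro')
\end{verbatim}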
The orthonormality of the bases is in accordance with the theory. All procedures compute orthonormal bases except for PSD SVD-like decomposition. PSD greedy shows a minor loss of orthonormality, which is a known issue for the $\ensuremath{\Jtwo{n}}$-orthogonalization method used (modified symplectic Gram--Schmidt procedure with re-orthogonalization \cite{AlAidarous2011}). However, no major impact on the reduction results could be attributed to this deficiency in the scope of this paper.
Also the symplecticity (or $\ensuremath{\Jtwo{n}}$-orthogonality) of the bases behaves as expected. All PSD methods generate symplectic bases whereas the POD methods do not. A minor loss of symplecticity is recorded for PSD SVD-like decomposition, which is attributed to the computational method used to compute the SVD-like decomposition. Further research on algorithms for the computation of an SVD-like decomposition should improve this result. Nevertheless, no major impact on the reduction results could be attributed to this deficiency in the scope of this paper.
\begin{figure}[h]
\centering
\includegraphics[scale=1.0]{figures/results/beam_non_autonomous/Exp2/result.pdf}
\caption{The orthonormality (left) and the $\ensuremath{\Jtwo{n}}$-orthogonality (right) from \cref{eq:ortho}.}
\label{fig:NumRes:BeamAuto:Ex1:Ortho}
\end{figure}
\subsubsection{Relative error in the reduced model}
We investigate the error introduced by MOR in the reduced model. The error is measured in the relative $\infty$-norm $\infnorm{\bullet}$ in time and space
\begin{align}\label{eq:ErrMOR}
\ensuremath{\overline{\err}}(2k, \ensuremath{\bm{\mu}}) :=
\frac{\displaystyle\max_{i\in\left\{ 1,\dots,\ensuremath{n_{\textrm{t}}} \right\}} \infnorm{\ensuremath{\bm{x}}(t_i, \ensuremath{\bm{\mu}}) - \ensuremath{\bm{V}} \ensuremath{\reduce{\fx}}(t_i, \ensuremath{\bm{\mu}})}}
{\displaystyle\max_{i\in\left\{ 1,\dots,\ensuremath{n_{\textrm{t}}} \right\}} \infnorm{\ensuremath{\bm{x}}(t_i, \ensuremath{\bm{\mu}})}},
\end{align}
where $2k$ indicates the size of the ROB $\ensuremath{\bm{V}} \in \ensuremath{\mathbb{R}}^{2n \times 2k}$ and $\ensuremath{\bm{\mu}} \in \mathcal{P}$ is one of the test parameters. To display the results for all $16$ test parameters at once, we use box plots in \cref{fig:NumRes:BeamNonAuto:Ex4:RelErr}. The box represents the $25\%$-quartile, the median and the $75\%$-quartile. The whiskers indicate the range of data points which lie within $1.5$ times the interquartile range (IQR). The crosses show outliers. For the sake of a better overview, we truncated relative errors above $10^0 = 100\%$.
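With snapshots stored column-wise per time step, \cref{eq:ErrMOR} reduces to entry-wise maxima; a sketch:
\begin{verbatim}
def rel_err_inf(X, Xr, V):
    """Relative error (eq:ErrMOR); column i holds the state at time t_i."""
    num = np.abs(X - V @ Xr).max()   # max over time steps and components
    den = np.abs(X).max()
    return num / den
\end{verbatim}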
The experiments show that the non-symplectic MOR techniques exhibit a strongly non-monotonic behaviour for increasing basis size. For many basis sizes, there exists a parameter with crude approximation results that lie above $100\%$ relative error. The POD full state is unable to produce results with a relative error below $2\%$.
On the other hand, the symplectic MOR techniques show an exponentially decreasing relative error. Furthermore, the IQRs are much lower than for the non-symplectic methods. We stress that the logarithmic scale of the $y$ axis distorts the comparison of the IQRs, but only in favour of the non-symplectic methods. The low IQRs show that the symplectic MOR techniques derive a reliable reduced model that yields good results for any of the $16$ randomly chosen test parameters. Furthermore, none of the systems shows an error above $0.19\%$; for PSD SVD-like decomposition this bound is $0.018\%$, i.e.\ one order of magnitude lower.
In the set of the considered symplectic, orthogonal MOR techniques, PSD greedy shows the best result for most of the considered ROB sizes. This superior behaviour of PSD greedy in comparison to PSD complex SVD is unexpected since PSD greedy showed inferior results for the projection error in \cref{subsubsec:NumExp:projErr}. This was also observed in \cite{Maboudi2017}.
Within the set of investigated symplectic MOR techniques, PSD SVD-like decomposition shows the best results, followed by PSD greedy and PSD complex SVD. While the two orthonormal procedures show comparable results, PSD SVD-like decomposition shows an improvement in the relative error. Comparing the best result of either PSD greedy or PSD complex SVD with the worst result of PSD SVD-like decomposition over the $16$ test parameters for a fixed basis size (a comparison strongly in favour of the orthonormal basis generation techniques), the improvement of PSD SVD-like decomposition ranges from a factor of $3.3$ to $11.3$ with a mean of $6.7$.
\begin{figure}[h]
\centering
\includegraphics[scale=1.0]{figures/results/beam_non_autonomous/Exp4/result.pdf}
\caption{Relative error in the reduced model.}
\label{fig:NumRes:BeamNonAuto:Ex4:RelErr}
\end{figure}
\section{Summary and conclusions}
\label{sec:Conclusion}
We gave an overview of autonomous and non-autonomous Hamiltonian systems and the structure-preserving model order reduction (MOR) techniques for these kinds of systems \cite{Peng2016,Maboudi2017,Maboudi2018}. Furthermore, we classified the techniques in orthonormal and non-orthonormal procedures based on the capability to compute a symplectic, (non-)orthonormal reduced order basis (ROB). To this end, we introduced a characterization of rectangular, symplectic matrices with orthonormal columns. Based thereon, an alternative formulation of the PSD Complex SVD \cite{Peng2016} was derived which we used to prove the optimality with respect to the PSD functional in the set of orthonormal, symplectic ROBs. As a new method, we presented a symplectic, non-orthonormal basis generation procedure that is based on an SVD-like decomposition \cite{Xu2003}. First theoretical results show that the quality of approximation can be linked to a quantity we referred to as weighted symplectic singular values.
The numerical examples show advantages for the considered linear elasticity model for symplectic MOR if a symplectic integrator is used. We were able to reduce the error introduced by the reduction with the newly introduced non-orthonormal method.
We conclude that non-orthonormal methods are able to derive bases with a lower error for both the training and the test data. Yet, it is still unclear whether the newly introduced method computes the global optimum of the PSD functional. Further work should investigate if a global optimum of the PSD functional can be computed with an SVD-like decomposition.\\
\vspace{6pt}
\funding{This research was partly funded by the German Research Foundation (DFG) grant number HA5821/5-1 and within the GRK 2198/1.}
\acknowledgments{We thank the German Research Foundation (DFG) for funding this work. The authors thank Dominik Wittwar for inspiring discussions.}
\bibliographystyle{plain}
\section{Introduction}
Three blazars Markarian~421 (Punch et al.\ 1992), Markarian~501
(Quinn et al.\ 1996) and 1ES 2344+514 (Catanese et al.\ 1998) have
been detected at TeV energies. At GeV energies, the EGRET instrument
aboard the Compton Gamma-Ray Observatory found only upper limits for
Markarian~501 (Catanese et al. 1997) and 1ES 2344+514 (Thompson 1996).
Markarian~421 was detected by EGRET, but only very weakly
(Thompson et al. 1995). While blazar emission for X-ray selected objects
at lower energies (up to about 1-100 keV) is almost certainly
due to synchrotron emission from a beam of highly relativistic
electrons, the GeV/TeV emission forms a second component usually
attributed to inverse Compton scattering of relatively low energy
photons by the electron beam (see, e.g., Sikora, Begelman \&
Rees 1994) or perhaps to pion photoproduction by a proton component of
the beam (see, e.g., Mannheim 1993). The inverse Compton
models predict typical blazar gamma-ray energy cutoffs of 10 GeV to
about 30 TeV whereas proton beam models allow gamma-ray
energies exceeding 100 TeV. Spectrum measurements at the low energy
end (perhaps probing the energy threshold of the second component) and
the high energy end (perhaps showing an energy cutoff) are both important
for constraining models.
The TeV gamma-ray spectra of extra-galactic objects can be
modified by differential absorption due to photon-photon collisions
with inter-galactic IR radiation (Gould \& Schr\'eder 1967; Stecker, de
Jager \& Salamon 1992). Indeed, TeV observations of Markarian~421 and
Markarian~501 (Zweerink et al.\ 1997; Krennrich et al. 1997; Aharonian
et al.\ 1997) have been used to set upper limits on the density of intergalactic
IR radiation (Stanev \& Franceschini 1998; Biller et al.\ 1998).
These upper limits provide the best constraints on infrared densities
in the 0.02 - 0.2 eV regime and do not suffer from local galactic background
contributions as are present in direct measurements. At this time, no unambiguous
evidence has been found for an IR absorption spectral cutoff. In
order to infer the magnitude of intergalactic IR background radiation
from absorption effects on TeV spectra, it is necessary to have
a good model for intrinsic spectra or to have spectra from
several objects and assume that the intrinsic TeV spectra
of the objects are identical (or at least very similar), or to have
detections from many sources and assume that they are similar in a
statistical sense. For the redshift range (z = 0.031-0.044) of the
detected TeV blazars Markarian~421, Markarian~501 and 1ES~2344+514, a
recognizable cutoff (optical depth $\sim$ 1-5) is expected to occur
between 5-20 TeV (Stecker et al.\ 1997; Stecker et al. 1998).
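Quantitatively, absorption rescales an intrinsic spectrum by $e^{-\tau(E)}$. The following Python sketch illustrates this relation; \texttt{tau\_IR} is a placeholder for any intergalactic IR optical-depth model and is not a fit to data.
\begin{verbatim}
import numpy as np

def observed_spectrum(J_intrinsic, tau_IR, E):
    """Attenuated differential flux: J_obs(E) = J_int(E) * exp(-tau(E)).

    J_intrinsic : callable, intrinsic flux at energy E (TeV)
    tau_IR      : callable, optical depth of the IR background
    """
    return J_intrinsic(E) * np.exp(-tau_IR(E))
\end{verbatim}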
We have recently published a brief report giving a detailed spectrum
of Markarian~501 spanning the energy range 260 GeV to 10 TeV derived
from observations with the Whipple Observatory gamma-ray imaging
Cherenkov telescope (Samuelson et al.\ 1998). This spectrum was
derived from observations made during ``high'' states of the AGN which
give good statistical precision. Markarian~501 (Quinn et al.\ 1998)
is variable in the TeV energy range showing changes on a time scale of
several hours. The data were taken at both standard small zenith angles
(SZA) of less than 25 degrees and at large zenith angles (LZA) in the
range of 55 to 60 degrees. The SZA observations are sensitive to
relatively low energies ($\rm E < $ 5 TeV) and the LZA observations yield
better statistics at high energies.
The details of the analysis were not explained in the brief
report but are given here. In particular, the characteristics of
Cherenkov imaging telescopes relevant for spectral determination
change substantially from SZA to LZA observations. In addition,
uncertainties in the spectrum, e.g., due to corresponding
uncertainties in atmospheric absorption, are also given here.
Finally, we have used the flux from the Crab Nebula (which often serves as a
standard candle in TeV gamma-ray astronomy) to check spectra extracted from
LZA observations against our standard spectrum (Hillas et al.~1998a).
These recent results from Markarian~501 indicate a spectrum which is not
consistent with a simple power-law in which the flux, J, is proportional
to E$^{-\gamma}$, but is more accurately described by a three parameter
curve (parabolic in a plot of $\log(\hbox{J})$ vs.\ $\log(\hbox{E})$)
given by
$\hbox{J} \sim \hbox{E}^{-2.22\pm 0.04 \pm 0.05-(0.47\pm 0.07)\log_{10}(\hbox{E})}$
where the first set of errors is statistical and the second systematic
and E is in TeV. (Previously published spectra of Markarian~501
(Bradbury et al.\ 1997; Aharonian et al. 1997) covered a smaller
energy span and were consistent with a simple power-law.) In
principle, the curvature of the spectrum could arise either from the
physics of gamma-ray emission from AGN, from intrinsic absorption or
from absorption in the inter-galactic medium.
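For reference, the fitted form is simple to evaluate numerically; the normalization \texttt{J0} in the sketch below is arbitrary since only the spectral shape is quoted here.
\begin{verbatim}
import numpy as np

def curved_power_law(E_TeV, a=2.22, b=0.47, J0=1.0):
    """J ~ E^(-a - b*log10(E)); parabolic in log J vs. log E."""
    return J0 * E_TeV ** (-(a + b * np.log10(E_TeV)))
\end{verbatim}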
In addition to Markarian~501, the Whipple Observatory Gamma-ray
collaboration has a database of both SZA and LZA data for Markarian
421. Fortuitously, these objects have almost identical redshifts:
0.031 for Markarian 421 and 0.033 for Markarian 501. Hence any
differences in their TeV
spectra must be intrinsic to the AGN and not due to intergalactic
absorption assuming the intergalactic IR background radiation is
uniform. Like Markarian 501, Markarian 421 is also highly
variable, with changes observed on a time scale as short as 15 minutes
(Gaidos et al.\ 1996). We have previously published a TeV spectrum
for Markarian~421 based on a single SZA observation, a very high
flux detection lasting only 2 hours (Zweerink et al.~1997). The
published spectrum was consistent with a simple power-law i.e., no
curvature was required for an acceptable fit. However the spectrum
did not cover as large an energy range as that for Markarian~501, nor
did the Markarian~421 spectrum have comparable statistical precision
at the high energy end of the spectrum. Hence it was not possible
to draw firm conclusions from a comparison of the two spectra.
Here we present a new TeV spectrum of Markarian~421 based upon both SZA
and LZA data taken while the AGN was in a high state of emission. The
SZA data set used consists of two flaring detections. The first is the
same 2-hour detection (average flux of 7.4 Crab units) used to obtain
the previously published spectrum (Zweerink et al.~1997), but we have
taken additional care in the treatment of energy threshold effects in order
to obtain flux values at lower energies. The second detection (27 minutes on-source)
had an average flux of 2.8 Crab units and exhibited a remarkably short
rise and fall time of 15 minutes. It occurred 8 days after the first
detection. The LZA observations consisted of 1.5 hours (1995 June) of on-source
data at zenith angles of 55 to 60 degrees with an average flux of 3.3 Crab units.
The new spectrum is consistent with our previously published result,
but spans a larger energy range (comparable to that published for
Markarian~501) and has better statistics at high energies. The
spectrum appears less curved than the Markarian~501 spectrum.
We show that the spectra for the two equidistant AGN (Markarian~421
and Markarian~501) clearly differ, and this reflects intrinsic spectral
differences near the sites of gamma-ray production. Combined with
data for the two AGN at X-ray energies, these spectra constrain models
of physical processes in the jets (see Hillas et al.~1998b). The
differences also point toward a major difficulty in inferring
intergalactic background radiation intensities via TeV photon
attenuation.
\section{Observations}
The observations presented here were made with the Whipple
Observatory 10 m imaging Cherenkov telescope.
The camera consisted of 109 (until 1996 December 4) or 151 (after 1996 December 4)
photomultiplier tubes (PMTs) placed on a 1/4 degree hexagonal matrix. These
cameras covered fields of view of about 2.7 and 3.2 degrees
respectively. Pulses from each of the photomultiplier tubes were
amplified and sent to gated integrating ADCs (analog-to-digital converters)
and, in addition, those from the inner 91 tubes
were sent to discriminators with levels
corresponding to about 49 photoelectrons. When two of the
discriminators fired, a trigger was generated, and current pulses from
all photomultiplier tubes were integrated for 25 nanoseconds and
digitized (Cawley et al.~1990). The data were normally taken in an
on-off mode in which the source is tracked for typically 28 minutes, and
then the same range of elevation and azimuth angles is tracked for
another 28 minutes giving a background comparison region. The Crab
Nebula serves as a standard candle for TeV gamma-ray astronomy, and it
was observed using the same camera configurations and ranges of zenith
angles that were used for the blazar observations.
As described previously in Samuelson et al.~(1998), the Markarian 501
observations were made in the time interval from February 14 to June 8
of 1997 during a high state of the source (Protheroe et
al. 1997). During this period the camera had 151 pixels giving the
larger (3.2 degree) field of view. A total of 16 hours of SZA
on-source data were taken at 8 to 25 degrees and 5.1 hours of LZA
on-source data were taken at 55 to 60 degrees. These observations
showed that the flux for the 1997 observing season varied from about 0.2
to 4 times the flux from the Crab Nebula with an average value of 1.4
(c.f. Quinn et al. 1998). This is a factor of 7 larger than the
average flux from the 1996 observing season, which is the basis for
identifying it as flare data. We are somewhat arbitrarily defining
flare data as that which has a flux level substantially above the
average flux as measured for a given source in our observations.
The SZA data for Markarian 421 consist of two detections on 1996 May 7 and
May 15, measured with the 109-photomultiplier tube camera. The
first of these was during the highest flux TeV flare observed. It
consisted of a 2-hour observation in which the data rate increased
steadily giving a count rate at the end of the run of about 15~gammas/minute
at $\rm E > $ 350 GeV which is 10 times the rate from the Crab Nebula. The
average rate during the runs was 7.4~Crab units. During the flare the
AGN was observed continuously, and hence background comparison regions
were taken from the same range of elevation and azimuth angles but
from other nights. However, because of the
strength of the signal, the data are almost free of background after
selection of gamma-ray-like images. The results are insensitive to
the exact background runs used (Zweerink et al.~1997; Zweerink
1997). The second detection eight days later consisted of 27 minutes
of on-source observations with corresponding off-source data. The
average flux was 2.8 Crab units and the data show a remarkable peak
with a rise and fall time of only 15 minutes. The Crab Nebula
database was measured with the same camera during the 1995/96 season
and consists of 49 on/off pairs giving the Crab Nebula spectrum
reported by Hillas et al.~(1998a).
Observations of Markarian~421 at LZA were carried out in 1995 with the
109 pixel camera (Krennrich et al.~1995), and the detection of 5-8 TeV gamma rays was
reported earlier (Krennrich et al.~1997). For the spectral analysis
presented here, we used a subset of the data for which the range of zenith angles
is 55-60 degrees where we have adequate observations of the Crab
Nebula to obtain a spectrum. This allows us to use the Crab Nebula to
test the LZA analysis procedure and show that it is consistent with
spectra derived at SZA. In addition, we required that Markarian 421 was in a
flare state, which gave a total of three on-off pairs of data measured
on 1995 June 20, 29 and 30 with an average flux from the AGN of 3.3
Crab units, comparable to that of the May 15 flare (an average of 2.8
Crab units).
Data from observations of the Crab Nebula at zenith angles between 55
and 60 degrees were collected during 1995/1996/1997 using both the
109-pixel and 151-pixel cameras. A total of 17~on-off pairs (8 hours
on-source) were used for the derivation of the energy spectrum of
the Crab Nebula. The spectra derived from the two cameras were in agreement.
\section{Extraction of Spectra from SZA and LZA Data}
For gamma-rays with energies of a few hundred GeV incident near the
zenith, shower maximum ($shower$~$max$: the region along the
longitudinal development of the electromagnetic cascade with the
maximum number of electrons and positrons) occurs at about 7-9 km
above sea level. The Cherenkov light from such a shower forms a
pool of light about 200~m in diameter
at telescope altitude, 2.3~km for the Whipple Observatory. At large
zenith angles, $shower$~$max$ occurs farther away from the telescope,
increasing the area
over which the Cherenkov light is distributed. The lower light
density raises the telescope energy threshold, but for gamma rays with
sufficient energy to produce enough light for triggering, the
collection area is substantially larger (Sommers \& Elbert 1987).
Since $shower$~$max$ occurs at a high altitude for LZA showers, the
characteristic Cherenkov angles are smaller resulting in a smaller
Cherenkov light image nearer the center of the field of view of the
camera.
In this work we followed an established Whipple procedure in extracting
spectra, specifically Method I as described in detail in Mohanty et
al.~(1998). The following parts are required: (1) a method for
selecting gamma-ray initiated shower images from a background of
cosmic-ray initiated shower images based upon image shape and
orientation, (2) the effective telescope collection area for this selection
method, (3) a method to estimate the initial gamma-ray energy for each
event, and (4) the resolution function corresponding to this energy
estimate. The method for selecting gamma-ray events should be
relatively independent of the gamma-ray energy, E. The method of energy
estimation should give good resolution and be relatively free of bias.
These parts are described in the next two sections and, in the last
subsection, we show that spectra derived from Crab Nebula LZA
observations agree with our standard SZA results published earlier.
\subsection{Energy Threshold, Selection Criteria and Collection Area}
The Cherenkov imaging technique utilizes differences in focal plane
image shapes to differentiate cosmic-ray background from gamma rays.
The selection criteria (Mohanty et al. 1998) are based on image
shape through the $width$
and $length$ parameters, and on orientation through the $alpha$
parameter. Compared with cosmic-ray images gamma-ray images
are generally narrower, shorter and point toward the center
of the focal plane (see, e.g., Hillas~1985; Reynolds et al.~1993).
The effective collection area of an imaging atmospheric Cherenkov
telescope is limited not only by the dimension of the Cherenkov light
pool on the ground but also by the image parameter cuts which are applied
to increase the signal to noise ratio. The criteria are derived from the
parameter distributions of simulated gamma-ray showers as a
function of their total light intensity ($size$, hereafter)
in the photomultiplier camera. We set these criteria so that they keep
90 \% of gamma-ray images whose centroid is 0.5-1.1 degrees ($distance$)
from the center of the field of view for the 151-pixel
camera. This distance restriction improves the correlation of energy with
$size$. The cuts are scaled with $size$ so that the efficiency for
keeping gamma rays is approximately independent of energy.
The telescope is triggered when two of the inner 91 photomultiplier
tubes give pulses within a triggering gate of 30 ns
with 49 or more
photoelectrons. The trigger electronics (Cawley et al.~1990) is
difficult to model precisely. One of the problems is that the
photomultiplier tube pulses include both Cherenkov light ``signal''
and Poisson-fluctuating night-sky noise ``background'' which causes
the pulse shapes to vary. The pulses go both to a discriminator which
fires when the pulse voltage crosses a preset threshold and to an
integrating analog-to-digital converter which records the total
charge, $q$, in the pulse (cf. Mohanty~1995). Because of the variation
in pulse shape, there is no unique correspondence between pulse charge
and peak voltage effectively giving the discriminators a ``fuzzy''
edge having a width corresponding to 3.5 photoelectrons about a mean
trigger point of 49 photoelectrons. In addition, if the discriminator
levels are set very low, the background trigger rate for low-light
events can be sensitive to night-sky brightness.
We avoid these difficulties using software padding (Cawley 1993)
and by adding the software requirement that a
signal corresponding to at least $\rm D_{soft} = 80$ photoelectrons
is present in at least two pixels. This raises the
telescope energy threshold, but the collection area can be readily
calculated.
The resulting SZA (20 degrees) and LZA (55-60
degrees) telescope areas for events which pass both the triggering and
image-selection requirements are shown in Fig.~1 for the 151-pixel
camera. It is clear that only SZA measurements have sensitivity below
1~TeV whereas the LZA measurements have better sensitivity
beyond about 5~TeV. The LZA collection area shows a plateau between
about 3~TeV and 50~TeV. There is a SZA/LZA overlap region for cross
calibration between about 1 and 10~TeV.
One concern is whether we have properly extracted SZA spectra at the
lowest energy points in Fig.~1. The points at 260 GeV and 380 GeV have
significantly reduced collection area and hence might be unusually
sensitive to small details in the simulations. We have looked for
such sensitivities by varying (1) the telescope gain, (2) the
reflector optical resolution and (3) the background sky noise used in the
simulations. The result is that neither the calculated collection
area nor the extracted spectra change significantly if these
parameters are varied over physically reasonable values. (Indeed this
is a basis for arriving at systematic errors.) Furthermore, the
gamma-ray image parameter distributions extracted using on-off
histograms (see Mohanty et al., 1998) agree with simulations. Thus
the results appear to be robust.
\subsection{Energy Estimation and Resolution}
The accuracy of the energy reconstruction of gamma-ray primaries with
a single imaging Cherenkov telescope is limited by the following
effects: a) fluctuations in the first interactions which cause the height of
shower maximum to vary (hence the region where most of
the Cherenkov light is emitted varies, causing fluctuations in the
light density detected at the ground), b) the uncertainty in the shower core
distance to the telescope, and c) truncation of the shower images close to
the edge of the field of view.
All three effects occur in a similar fashion for SZA and LZA
observations. However, there are some differences: the central light
emitting region for a LZA shower appears geometrically smaller in the
camera, because of its larger distance from the instrument and the
smaller Cherenkov angles in the lower density atmosphere. Hence
a bigger fraction of the Cherenkov light image is contained in the
field of view. Also, the smaller Cherenkov angles for LZA observations shift the
center of gravity of images closer to the center of the field of
view. Therefore, truncation effects are less important for LZA data.
Following Mohanty et al.~(1998), we have found expressions for an energy
estimate, $\rm \widetilde E$ as a function of $\ln(size)$ and
$distance$ which are relatively free of bias and have good resolution. To a good
approximation, for fixed energy, E, the distribution of $\rm \widetilde
E$ is lognormal (see Mohanty et al.~1998) with a width independent of E.
It follows that the telescope resolution $\Delta\hbox{E}$ in standard
deviations is given by $\Delta\hbox{E}/\hbox{E} = \sigma$ where
$\sigma$ is about 0.34 for SZA and slightly better, about 0.29 for
LZA.
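As an illustration, a lognormal resolution function with fractional
width $\sigma$ independent of E can be written as
$$ P(\widetilde{\hbox{E}} \mid \hbox{E}) \propto
\frac{1}{\widetilde{\hbox{E}}}
\exp\left[-\frac{(\ln \widetilde{\hbox{E}} - \ln \hbox{E})^2}
{2\sigma^2}\right], $$
so that the SZA value $\sigma = 0.34$ corresponds to a one standard
deviation spread of roughly $-29\%$ to $+40\%$ in the estimated
energy. This parameterization is consistent with the description
above, though not necessarily the exact function used in the analysis.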
\subsection{Atmospheric Effects}
Since Cherenkov light from showers at LZA passes through substantially
more atmosphere than at SZA, the uncertainties in the atmospheric
model may have correspondingly larger effects. There are four
important extinction mechanisms. (1) The largest is Rayleigh
scattering for which the cross-sections are well known. Variations
arise because of changes in barometric pressure (typically a few
percent) changing the column density along the line of sight from the
telescope to shower maximum. (2) Ozone exists mainly at altitudes
well above shower maximum, but the
cross sections for UV absorption are very large and small
concentrations extend into lower regions causing some absorption.
Seasonal variations are of order 20-25\% at the Whipple Observatory
latitude and there are daily variations as well. (3) Absorption by
$\hbox{O}_2$ becomes important below about 250 nm and removes almost
all the light at shorter wavelengths. There are significant
uncertainties in the absorption cross sections, but these
uncertainties do not have a large effect because the absorption turns
on rapidly and essentially all the light below 250 nm is absorbed in
any case. (4) Aerosols exist mainly at low altitudes with a scale
height of roughly 1 km, and hence the observatory altitude of 2.3 km
diminishes their effects. One of their primary characteristics is
variability. If aerosol absorption were significant, one would expect
significant variability in the telescope cosmic-ray induced trigger
rate, whereas this is usually stable to a few percent.
In order to estimate the effects of atmospheric uncertainties, we made
some simple calculations with the atmospheric model used for the ARTEMIS
project (Urban et al.~1996) and a simple aerosol parameterization used
for the Fly's Eye experiment (Baltrusaitis et al.~1985). Assuming a
Cherenkov light spectrum with wavelength-dependent mirror reflectivity
and photocathode quantum efficiency folded in, the transmission for
light from $shower$~$max$ (for 5 TeV $\gamma$-rays) to the telescope was calculated
under various assumptions. The altitude of shower maximum for showers
initiated by gamma rays at the zenith is 9 km and 10 km for gamma rays
incident at zenith angles of 60 degrees. The results are given in
Table~1: row 1 corresponds to standard atmospheric conditions; row 2
has the Rayleigh scattering column density increased by 3\%
(due to barometric pressure changes); row 3 has
the aerosol concentration increased by a factor of 4; row 5 has the
$O_2$ cross sections increased by a factor of 4; and row 6 has the
ozone concentration increased by a factor of 4. As can be seen from
the table, atmospheric transmission from shower maximum at 60 degrees is
78 \% of that at the zenith. However, changes in transmission due to
fairly large increases in the various extinction mechanisms are on the
order of a few percent. This is small compared with the overall
uncertainties in telescope gain of about 15 \% (Mohanty et al. 1998).
\subsection{The Crab Nebula Spectrum from LZA Data}
As a check on extraction of spectra from LZA observations, we have
analyzed existing data for the Crab Nebula (1995-1997 with the 109
and the 151 pixel camera). In the angular range of 55-60 degrees, this consists
of 8 hours of on-source data with corresponding off-source runs. When
analyzed as described above, the resulting spectrum is shown in
Fig.~2. Also shown in the figure is the standard spectrum as given in
Hillas et al.~(1998a) derived there using SZA observations. It is
apparent that the two agree over the common energy range of about 1.1
to 10 TeV. A power-law fit to LZA data yields: $ \rm J(E) = 3.78 \:
\times \: 10^{-7} \: E^{-2.59 \: \pm \: 0.15
\: \pm \: 0.15} \: photons \: m^{-2} \: s^{-1} \: TeV^{-1} $.
The first set of uncertainties are statistical and the second are systematic
calculated as in Mohanty et al. (1998). Based on the Crab analysis
and on our simulations of the effect of variations in the atmospheric
model we are confident about the SZA and the LZA energy estimates.
\section{The Markarian 421 Spectrum}
We have reanalyzed the SZA flare data of 1996 May 7, using a
more careful treatment of the telescope threshold region. The new
spectrum is consistent with our previously published result, but now
includes two lower energy points extending down to 260 GeV instead of 560 GeV.
The SZA flare data of 1996 May 15, were analyzed in exactly the same
way. As pointed out previously, the threshold region is difficult to
model, and to avoid it in the previous analysis, we imposed a
secondary software trigger level by requiring that two of the
triggering tubes have signals corresponding to at least 50
photoelectrons ($\rm D_{soft}$ = 49) and that the $size$ was at least
400 photoelectrons.
In the present analysis, we impose no direct limitation on the signal
$size$, but instead require that $\rm D_{soft} = 80$. This also
avoids the troublesome region, but with less cost in energy threshold.
We have studied the effect on varying $\rm D_{soft}$ and found that
the flux values in the spectrum are stable above a value of about 70.
As $\rm D_{soft}$ is increased above 100, the flux values do not
change within statistical errors, but these errors become
significantly larger. We have also investigated another trigger
configuration in which at least three tubes were required to have $\rm
D_{soft}$ greater than 80. This again led
to the same flux values within errors.
Finally, we reanalyzed the 1994/1995 Crab database and found that it is
well fit over the energy range 0.3 to 10 TeV with a simple power-law
consistent with the previous result given by Hillas et al.~(1998a).
The spectral flux values derived from the intense flare of Markarian~421
at SZA are shown as stars in Fig.~3. The data are fit by:
$ \rm J(E) = 2.2 \:
\times \: 10^{-6} \: E^{-2.54 \: \pm \: 0.04
\: \pm \: 0.1} \: photons \: \: m^{-2} \: s^{-1} \: TeV^{-1} $
giving a $\chi^2$ of 22.8 for 8 degrees of freedom (probability
0.005). This fit is marginal, perhaps indicating some curvature. The
May 15, 1996 data are shown in the same figure as boxes and are fit
by: $ \rm J(E) = 1.0 \:
\times \: 10^{-6} \: E^{-2.45 \: \pm \: 0.10
\: \pm \: 0.1} \: photons \: m^{-2} \: s^{-1} \: TeV^{-1} $
giving a $\chi^2$ of 3.2 for 7 degrees of freedom. The LZA energy
spectrum covers 1.5-10.4 TeV and is shown as open circles in the same
figure. The LZA points can be fitted by a power-law of the form: $ \rm
J(E) = 7.53 \: \times \: 10^{-7} \: E^{-2.52 \: \pm \: 0.18
\: \pm \: 0.15} \: photons \: \: m^{-2} \: s^{-1} \: TeV^{-1} $
giving a $\chi^2$ of 4.9 for 4 degrees of freedom.
Since all the spectral shapes are consistent we combine them in order to
reduce the statistical uncertainties.
In combining the two SZA data sets and the LZA data shown in Fig.~4,
the normalizations of the May 15 SZA data and the LZA were treated as
free parameters thus fixing the absolute normalization to the May 7
flare. The resulting fit is: $ \rm J(E) \propto \: E^{-2.54 \:
\pm \: 0.03 \: \pm \: 0.10} \: photons \: m^{-2} \: s^{-1} \: TeV^{-1} $
giving a $\chi^2$ of 31.5 for 21 degrees of freedom and a chance
probability of 0.07. Thus, the energy spectrum of Markarian~421
between 260 GeV and 10 TeV during flaring activity is
consistent with a single power-law. A curved fit for Markarian 421 yields:
$$ \hbox{J(E)} =(2.4\pm 0.1\pm0.3)\times
10^{-6} (\frac{\hbox{E}}{1
\hbox{TeV}})^ {-2.47\pm 0.04 \pm 0.05-(0.28\pm
0.09)\log_{10}(\rm E)}$$ $\rm photons \: m^{-2} \: s^{-1} \: TeV^{-1} $
with a $\chi^2$ value of 21.5 for 20 degrees of freedom giving a chance
probability of 0.4.
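For a spectrum of the curved form $\rm J(E) \propto
E^{-a-b\log_{10}(E)}$ the local spectral index is energy dependent,
$$ -\frac{d \ln \hbox{J}}{d \ln \hbox{E}} = a + 2b\log_{10}(\hbox{E}), $$
so, purely as an illustration of the (statistically marginal)
curvature, the fitted values $a = 2.47$ and $b = 0.28$ correspond to a
local index steepening from about 2.47 at 1 TeV to about 3.0 at 10 TeV.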
\section{The Markarian 501 Spectrum}
The Markarian 501 spectrum was analyzed in a similar way. The
results for the 5.1 hours of LZA observations are shown together with
the LZA spectrum for the Crab Nebula in Fig.~5. The flux level of
Markarian~501 was on average $\approx$ 2 Crab units during these
observations and the spectral slope is
similar to that for the Crab spectrum. The spectrum extends up to 10
TeV and can be fit between 1.1-10.4 TeV with a power-law yielding
$\chi^{2} = 14.7$ for 5 degrees of freedom (chance probability of
0.015):
$ \rm J(E) = 7.53 \: \times \: 10^{-7} \: E^{-2.67 \: \pm \: 0.09 \:
\pm \: 0.15} \: photons \: m^{-2} \: s^{-1} \: TeV^{-1} $.
The errors on the spectral index are given by a statistical
uncertainty of $\pm \: 0.09$, and a systematic uncertainty of $\pm \:
0.15$. The slightly high value of $\chi^2$ hints at curvature in the
spectrum.
The LZA data (5.1 hours) can be combined with the SZA data (15 hours)
treating the normalization of the former as a free parameter as
described in the last section. This yields the spectrum given
previously in Samuelson et al.~(1998) which is shown in Fig.~6. Fitting
these data with a simple power-law,
$ \rm J(E) = 6.9 \: \times \: 10^{-7} \: E^{-2.41 \: \pm \: 0.025
\: } \: photons \: m^{-2} \: s^{-1} \: TeV^{-1} $,
gives $\chi^2 = 59.7$ for 15 degrees of freedom with a chance
probability of $2.5 \times 10^{-7}$. Including a curvature term
yields
$ \rm J(E) = (8.6 \pm 0.3 \pm 0.7) \: \times \: 10^{-7} \: E^{-2.22
\: \pm \: 0.04 \pm \: 0.05 \: -(0.47 \pm 0.07)log_{10}(E) } \: photons
\: m^{-2} \: s^{-1} \: TeV^{-1} $,
giving $\chi^2 = 18$ for 14 degrees of freedom with a chance
probability of 0.2. As shown in Fig.~6, the Markarian 501 spectrum
is clearly curved. Spectral variability is not likely to account for
the curvature, since the superposition of two different power-laws would
result in a concave spectrum rather than a convex shape.
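The same illustrative calculation as for Markarian 421 makes the
curvature quantitative: with $a = 2.22$ and $b = 0.47$ the local
spectral index $a + 2b\log_{10}(\hbox{E})$ hardens to about 1.7 at 300
GeV and steepens to about 3.2 at 10 TeV, a considerably larger
variation than for Markarian 421.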
\section{Discussion: Markarian 421 vs. Markarian~501}
The spectra derived from LZA and SZA data for Markarian 421 and Markarian 501
are shown in Fig.~7. It is apparent from the figure that they
differ; a $\chi^2$ test places the chance probability that they arise
from the same parent distribution at $ \rm 4 \times 10^{-3} $. We
conclude that the energy spectra of Markarian 421 and Markarian 501
during flaring activity are different.
Although Markarian 421 and Markarian 501 are at almost the same redshift, they
do differ in their X-ray spectrum. Observations of Markarian 421 by
the ASCA X-ray satellite experiment, although not contemporaneous with
the data presented here, indicate an energy break in the synchrotron
spectrum of 1.6~-~2.2~keV (Takahashi et al. 1996). In contrast,
X-ray observations of
Markarian 501 by BeppoSAX taken in April 1997, coinciding with long
term flaring activity in TeV gamma rays (February to August 1997),
showed that its synchrotron power can peak at hard X-ray energies,
near 100~keV (Pian et al.~1997). In addition, Markarian~501 has been
detected by the OSSE instrument aboard the Compton Gamma-ray
Observatory at energies of 50-470~keV (Catanese et al. 1997), showing
clearly, for the first time, that synchrotron emission from an AGN can
peak above 100 keV.
At GeV energies, Markarian~421 is seen by EGRET (Lin et al. 1992),
albeit weakly, whereas Markarian~501 is not (Catanese et al. 1997). Thus, in
terms of the synchrotron-inverse Compton models for which the GeV
emission is from the inverse-Compton mechanism, it would appear that
both the synchrotron peak and the inverse-Compton peak are shifted to
higher energies leaving the EGRET GeV energy sensitivity range in the
gap between them for Markarian~501. As shown in Fig.~7, in the energy
range 260 GeV~-~10~TeV, the spectrum of Markarian~501 is harder at lower
energies and shows more curvature than Markarian~421. In fact the latter is
consistent with a straight line (i.e., pure power-law). This is
also consistent with the peak inverse-Compton power occurring at higher
energies for Markarian~501, nearer the range covering our
measurements. We see no obvious contradiction of our results with a
synchrotron-inverse Compton picture for the origin of the TeV
radiation.
In order to probe inter-galactic IR radiation via attenuation of TeV
gamma rays, it is first necessary to know the intrinsic energy spectra
of AGN. Spectral features such as the curvature of Markarian 501
cannot be ascribed a priori to this attenuation mechanism. This is
clear because Markarian 421 and Markarian 501 have almost
identical redshifts yet different spectra. The differences in
their spectra can perhaps be explained in the context of the
synchrotron-inverse Compton picture alluded to above (Hillas et al.
1998b). A proof of detection of the IR background radiation through
TeV photon absorption requires a detailed study of the spectrum of TeV
blazars and their spectral variability. However, the IR limits
(Biller et al. 1998) that allow for uncertainties in spectral shape
are unchanged by this work.
In summary, we have shown that the TeV spectra of Markarian 421 and
Markarian 501 differ significantly, the latter showing more curvature
and a harder spectral slope below 2 TeV.
Since the redshifts are almost identical, this difference can only be
attributed to physics intrinsic to the objects themselves, and it is
not inconsistent with a synchrotron-inverse Compton picture.
\acknowledgments
We acknowledge the technical assistance of K. Harris and E. Roache.
This research is supported by grants from the U.S. Department of Energy
and by NASA, by PPARC in the UK and by Forbairt in Ireland.
\vfill\eject
\section*{Introduction}
Hochschild cohomology was introduced by Hochschild in \cite{h} in order to study extensions of associative algebras over a field and to characterize the separability of this class of algebras. In the same paper (written while he was a draftee serving in the army) he defined for any associative algebra $A$ the cup product of cochains with coefficients in $A$. From Hochschild's definition it follows easily that the cup product on the cochains descends to one on the Hochschild cohomology $H^\bullet(A, A)$. Almost twenty years later Gerstenhaber proved in \cite{g1} that at the cohomology level the cup product is graded commutative. He also defined a Lie product whose properties, when combined with those of the cup product, determine on $H^\bullet(A, A)$ a rich algebraic structure which is now called a Gerstenhaber algebra (or $G$-algebra). $G$-algebra structures appear in other contexts of which we mention here the exterior algebra of a Lie algebra, the differential forms on a Poisson manifold, and the Hochschild cohomology of presheaves of algebras. In this paper we show that on the secondary Hochschild cohomology we can define a cup product and a Lie product, which naturally extend those on the Hochschild cohomology.
Consider a $B$-algebra $A$ determined by the $k$-algebra homomorphism $\varepsilon:B\to A$. The secondary Hochschild cohomology $H^\bullet((A,B,\varepsilon);A)$ was introduced in \cite{sta} in order to study the $B$-algebra structures on $A[[t]]$. It was proved there that a $B$-algebra structure on $A[[t]]$ is determined by a family of products $m_{\alpha}:A[[t]]\otimes A[[t]]\to A[[t]]$ that must satisfy a generalized associativity condition. For $a$, $b\in A$ and $\alpha\in B$ we have $m_{\alpha}(a\otimes b)=\varepsilon(\alpha)ab+c_1(a\otimes b\otimes \alpha)t+...$. Just like in the case of deformations of algebras, $c_1$ is a 2-cocycle that gives the deformation $mod \;t^2$. Its class $c_1\in H^2((A,B,\varepsilon);A)$ is determined by the isomorphism class of the $B$-algebra $A[[t]]$. Moreover, if we assume that $m_{\alpha}$ is associative $mod\; t^{n+1}$ then the obstruction to extend it to an associative product $mod\;t^{n+2}$ is the vanishing of the element $c_1\circ c_n+c_2\circ c_{n-1}+...+c_n\circ c_1$ in $H^3((A,B,\varepsilon);A)$.
The paper is organized in five sections. In the first section we define the secondary Hochschild cohomology. In the second we introduce the cup and Lie products for the secondary cohomology and then prove some of their properties. In the third section we discuss the connection between extensions of $B$-algebras $0\to M\to X\to A\to 0$ with $M^2=0$ and $H^2((A,B,\varepsilon);M)$. In the fourth we give a Hodge type decomposition, in characteristic 0, for the secondary cohomology, one that is consistent with the Hodge decomposition of the usual Hochschild cohomology. Finally, in the fifth section we investigate the (cup and bracket preserving) natural map $\Phi:H^n((A,B,\varepsilon);A)\to H^n(A,A)$. More precisely, we present examples which show that in general $\Phi$ is neither surjective nor injective. Our examples deal with subalgebras of the ring of polynomials. We show that requiring $\Phi_2$ to be injective is equivalent to the Jacobian problem stated in \cite{W}, a question first posed by Ott-Heinrich Keller in 1939.
\section{Preliminaries}
\subsection{Hochschild Cohomology of an algebra $A$}
In this paper $k$ is a field, $\otimes=\otimes_k$, and all $k$-algebras have a multiplicative unit. We recall from \cite{g2}, \cite{gs} and \cite{lo} the definition of the Hochschild cohomology.
Suppose that $A$ is an associative $k$-algebra (not necessarily commutative), and $M$ is an $A$-bimodule. Define $C^n(A, M)=Hom_k(A^{\otimes n}, M)$ and $\delta_n:C^n(A,M)\to C^{n+1}(A,M)$ determined by:
\begin{center}$\displaystyle\delta_n(f)(a_1\otimes a_2\otimes ...\otimes a_{n+1})=a_1f(a_2\otimes ...\otimes a_{n+1})+\sum_{i=1}^n(-1)^{i}f(a_1\otimes ...\otimes a_{i}a_{i+1}\otimes ...\otimes a_{n+1})+
(-1)^{n+1}f(a_1\otimes ...\otimes a_{n})a_{n+1}.$
\end{center}
One can show that $\delta_{n+1}\delta_n=0$.
The homology of this complex is denoted by $H^n(A, M)$ and is called the Hochschild cohomology of $A$ with coefficients in $M$.
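For instance, in low degrees the differential reads $\delta_0(m)(a)=am-ma$ for $m\in M=C^0(A,M)$ and $$\delta_1(f)(a\otimes b)=af(b)-f(ab)+f(a)b,$$ so $H^0(A,M)$ is the set of invariants $M^A$ and the $1$-cocycles are exactly the $k$-derivations from $A$ to $M$.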
\subsection{Secondary Cohomology of a Triple $(A,B,\varepsilon)$}
We recall from \cite{sta} the definition of the secondary Hochschild cohomology.
Let $A$ be an associative $k$-algebra, $B$ a commutative $k$-algebra, $\varepsilon:B\to A$ a morphism of $k$-algebras such that $\varepsilon(B)\subset {\mathcal Z}(A)$, and $M$ an $A$-bimodule. We assume that for every $\alpha\in B$ and $m\in M$ we have $\varepsilon(\alpha)m=m\varepsilon(\alpha)$. Let $$C^n((A,B,\varepsilon);M)=Hom_k(A^{\otimes n}\otimes B^{\otimes \frac{n(n-1)}{2}},M).$$
We want to define
$$\delta^{\varepsilon}_n:C^n((A,B,\varepsilon);M)\to C^{n+1}((A,B,\varepsilon);M).$$
It is convenient to think about an element $T\in A^{\otimes n}\otimes B^{\otimes \frac{n(n-1)}{2}}$ using the following matrix representation:
$$
T={\otimes}\left(
\begin{array}{cccccccc}
a_{1} & b_{1,2} &...&b_{1,n-2}&b_{1,n-1}&b_{1,n}\\
1 & a_{2} &...&b_{2,n-2} &b_{2,n-1}&b_{2,n}\\
. &. &...&.&.&.\\
1& 1 &...&1&a_{n-1}&b_{n-1,n}\\
1& 1 &...&1&1&a_{n}\\
\end{array}
\right),
$$
where $a_i\in A$, $b_{i,j}\in B$ and $1\in k$. Notice that we do not have exactly the same notation as in \cite{sta}; the difference here is that all the indices are shifted by one.
For $T\in A^{\otimes (m+n-1)}\otimes B^{\otimes \frac{(m+n-1)(m+n-2)}{2}}$ and for all $0\leq i\leq m-1$ we denote by $T_{i+n}^{i}$ the following ``sub-tensor matrix''
\begin{eqnarray*}
T_{i+n}^i=
\displaystyle\otimes
\left(
\begin{array}{ccc}
a_{i+1}& ... & b_{i+1,i+n}\\
.& ... & .\\
1&...&a_{i+n}\\
\end{array}
\right).
\end{eqnarray*}
One should notice that unless $i=0$ it does not make sense to talk about $T^i_{i+n}$ as a tensor but only as a sub-tensor of $T$. Clearly we have $T=T^0_n$.
For a tensor matrix $T\in A^{\otimes n}\otimes B^{\otimes \frac{n(n-1)}{2}}$ and positive integers $l,i,$ and $k$ such that $1\leq l\leq i\leq k\leq n-1$ we consider the sub-tensor matrix
$$M_{i,i+1}^{l,k}=\displaystyle\otimes
\left(
\begin{array}{cccccccc}
a_l& b_{l,2} & ...&b_{l,i}b_{l,i+1}& ...& b_{l,k}& b_{l,k+1}\\
1 & a_{l+1} &...&b_{l+1,i}b_{l+1,i+1}& ...& b_{l+1,k}&b_{l+1,k+1}\\
. & . &...&...&...&.&.\\
1 & 1 &... &\varepsilon(b_{i,i+1})a_{i}a_{i+1}&...&b_{i,k}b_{i+1,k}&b_{i,k+1}b_{i+1,k+1}\\
. & . &...&...&...&.&.\\
1 & 1 &...&...&...&a_{k}&b_{k,k+1}\\
1 & 1& ...&...&...&1&a_{k+1}\\
\end{array}
\right).$$
With the above notations we define
$$\delta^{\varepsilon}_n:C^n((A,B,\varepsilon);M)\to C^{n+1}((A,B,\varepsilon);M),$$
\begin{eqnarray*}
&&\delta^{\varepsilon}_n(f)(T_{n+1}^0)=a_1\varepsilon(b_{1,2}b_{1,3}...b_{1,n+1})f(T^1_{n+1})-
f(M^{1,n}_{1,2})+ f(M^{1,n}_{2,3})+\\&&...+
(-1)^{i}f(M^{1,n}_{i,i+1})+...+(-1)^{n-1}f(M^{1,n}_{n-1,n})+(-1)^{n}f(M^{1,n}_{n,n+1})+\\
&&(-1)^{n+1}f(T^0_{n})a_{n+1}\varepsilon(b_{1,n+1}b_{2,n+1}...b_{n,n+1}).
\end{eqnarray*}
\begin{proposition} (\cite{sta}) $(C^n((A,B,\varepsilon);M),\delta_n^{\varepsilon})$ is a complex (i.e. $\delta_{n+1}^{\varepsilon}\delta_n^{\varepsilon}=0$). We denote its homology by $H^n((A,B,\varepsilon);M)$ and we call it the secondary Hochschild cohomology of the triple $(A,B,\varepsilon)$ with coefficients in $M$.
\end{proposition}
\begin{example} When $B=k$ and $\varepsilon:k\to A$ we have that $H^n((A,k,\varepsilon);M)$ is the usual Hochschild cohomology.
\end{example}
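For later use we record the differential in low degrees. For $u\in C^1((A,B,\varepsilon);M)=Hom_k(A,M)$ we have
$$\delta^{\varepsilon}_1(u)(a\otimes b\otimes \alpha)=a\varepsilon(\alpha)u(b)-u(ab\varepsilon(\alpha))+u(a)b\varepsilon(\alpha),$$
while for $\sigma\in C^2((A,B,\varepsilon);M)=Hom_k(A\otimes A\otimes B,M)$ we have
\begin{eqnarray*}
&&\delta^{\varepsilon}_2(\sigma)\left(\displaystyle\otimes
\left(
\begin{array}{ccc}
a& \alpha&\beta\\
1 & b&\gamma\\
1 & 1&c\\
\end{array}
\right)\right)=a\varepsilon(\alpha\beta)\sigma\left(\displaystyle\otimes
\left(
\begin{array}{cc}
b& \gamma\\
1 & c\\
\end{array}
\right)\right)-\sigma\left(\displaystyle\otimes
\left(
\begin{array}{cc}
ab\varepsilon(\alpha)& \beta\gamma\\
1 & c\\
\end{array}
\right)\right)+\\
&&\sigma\left(\displaystyle\otimes
\left(
\begin{array}{cc}
a& \alpha\beta\\
1 & bc\varepsilon(\gamma)\\
\end{array}
\right)\right)-\sigma\left(\displaystyle\otimes
\left(
\begin{array}{cc}
a& \alpha\\
1 & b\\
\end{array}
\right)\right)c\varepsilon(\beta\gamma).
\end{eqnarray*}
These formulas will be used repeatedly below.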
\subsection{Pre-Lie systems}
We recall from \cite{g1} the definition of a pre-Lie system.
\begin{definition} A pre-Lie system is a family of pairs $\{V_n,\circ_i\}$, where $V_n$ are $k$-vector spaces for all $n\in \mathbb{Z}$ and $\circ_i=\circ_i(m,n):V_m\otimes V_n\to V_{m+n}$ are $k$-linear maps for all $0\leq i\leq m$. Moreover the following identities hold
\begin{eqnarray*}(f^m\circ_i g^n)\circ_j h^p=(f^m\circ_j h^p)\circ_{i+p}g^n \; \; if \; \; 0\leq j\leq i-1,\\
(f^m\circ_i g^n)\circ_j h^p=f^m\circ_i(g^n\circ_{j-i} h^p) \; \; if \; \; i\leq j\leq i+n.
\end{eqnarray*}
\end{definition}
Given a pre-Lie system $\{V_n,\circ_i\}$ then for all $m\geq 0$ one can define
$\circ:V_m\otimes V_n\to V_{m+n}$
$$f^m\circ g^n=\sum_{i=0}^m(-1)^{ni}f^m\circ_i g^n.$$
The following result was proved in \cite{g1}.
\begin{theorem} Let $\{V_n,\circ_i\}$ be a pre-Lie system. Define $A=\oplus_{n}V_n$ and $[.,.]:A\otimes A\to A$, where $[f^m,g^n]=f^m\circ g^n-(-1)^{mn}g^n\circ f^m$. Then $(A,[.,.])$ is a graded Lie algebra. \label{theorem1}
\end{theorem}
\section{Cup Product and Bracket Product}
\subsection{Cup Product} This section follows closely the results from \cite{g1}. For $f\in C^m((A,B,\varepsilon);A)$ and $g\in C^n((A,B,\varepsilon);A)$ we define
\begin{eqnarray*}
&f\smile g\left(\displaystyle\otimes
\left(
\begin{array}{cccccccc}
a_{1}& b_{1,2} & ...&b_{1,m+n-1}&b_{1,m+n}\\
1 & a_{2} &...&b_{2,m+n-1}&b_{2,m+n}\\
. & . &...&.&.\\
1& 1& ...&a_{m+n-1}&b_{m+n-1,m+n}\\
1 & 1& ...&1&a_{m+n}\\
\end{array}
\right)\right)=\\
&f\left(\displaystyle\otimes
\left(
\begin{array}{cccccccc}
a_{1}& b_{1,2} & ...&b_{1,m-1}&b_{1,m}\\
1 & a_{2} &...&b_{2,m-1}&b_{2,m}\\
. & . &...&.&.\\
1& 1 &...&a_{m-1}&b_{m-1,m}\\
1 & 1 &...&1&a_{m}\\
\end{array}
\right)\right) \prod\limits_{\substack{m+1\leq j\leq m+n\\1\leq i\leq m}}\varepsilon(b_{i,j})\\
&g\left(\displaystyle\otimes
\left(
\begin{array}{cccccccc}
a_{m+1}& b_{m+1,m+2} & ...&b_{m+1,m+n-1}&b_{m+1,m+n}\\
1 & a_{m+2} &...&b_{m+2,m+n-1}&b_{m+2,m+n}\\
. & . &...&.&.\\
1& 1 &...&a_{m+n-1}&b_{m+n-1,m+n}\\
1 & 1 &...&1&a_{m+n}\\
\end{array}
\right)\right).
\end{eqnarray*}
Using the notations introduced earlier we have the equivalent formula
$$(f\smile g)(T^0_{m+n})=f(T^{0}_m)g(T^m_{m+n})\prod\limits_{\substack{m+1\leq j\leq m+n\\1\leq i\leq m}}\varepsilon(b_{i,j}).$$
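In the lowest nontrivial degrees this recovers the expected formulas: for $f,g\in C^1((A,B,\varepsilon);A)$ we have $(f\smile g)(a\otimes b\otimes \alpha)=f(a)g(b)\varepsilon(\alpha)$, and when $B=k$ the above product reduces to the classical cup product $(f\smile g)(a_1\otimes ...\otimes a_{m+n})=f(a_1\otimes ...\otimes a_{m})g(a_{m+1}\otimes ...\otimes a_{m+n})$ of Hochschild cochains.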
One can easily check that $$\smile:C^m((A,B,\varepsilon);A)\otimes C^n((A,B,\varepsilon);A)\to C^{m+n}((A,B,\varepsilon);A)$$
induces a graded associative algebra structure on $C^*((A,B,\varepsilon);A)$. Moreover, the cup product satisfies the identity
\begin{eqnarray}
\delta^{\varepsilon}_{m+n}(f\smile g)=\delta^{\varepsilon}_m(f)\smile g+(-1)^mf\smile \delta^{\varepsilon}_n(g).
\label{delta1}
\end{eqnarray}
To prove this let $f\in C^m((A,B,\varepsilon);A)$ and $g\in C^n((A,B,\varepsilon);A).$ Then we have
\begin{center}$\delta^{\varepsilon}_{m+n}(f\smile g)(T^0_{m+n+1})=\displaystyle a_1\prod_{i=2}^{m+n+1}\varepsilon(b_{1,i})(f\smile g)(T_{m+n+1}^1)-$\end{center}
\begin{center} $-(f\smile g)(M_{1,2}^{1,m+n})+\dots+(-1)^i(f\smile g)(M_{i, i+1}^{1,m+n})+\dots$\end{center}
\begin{center} $+(-1)^{m+n}(f\smile g)(M_{m+n, m+n+1}^{1,m+n})+$\end{center}
\begin{center} $+(-1)^{m+n+1}(f\smile g)(T_{m+n}^0)a_{m+n+1}\displaystyle\prod_{i=1}^{m+n}\varepsilon(b_{i,m+n+1})= a_1\prod_{i=2}^{m+n+1}\varepsilon(b_{1,i})f(T_{m+1}^1)\prod\limits_{\substack{2\leq i\leq m+1\\m+2\leq j\leq m+n+1}}\varepsilon(b_{i,j})g(T_{m+n+1}^{m+1})-$\end{center}
\begin{center} $-f(M_{1,2}^{1,m})\displaystyle\prod\limits_{\substack{1\leq i\leq m+1\\m+2\leq j\leq m+n+1}}\varepsilon(b_{i,j})g(T_{m+n+1}^{m+1})+\dots+(-1)^if(M_{i,i+1}^{1,m})\prod\limits_{\substack{1\leq i\leq m+1\\m+2\leq j\leq m+n+1}}\varepsilon(b_{i,j})g(T_{m+n+1}^{m+1})+\dots+\displaystyle(-1)^mf(M_{m,m+1}^{1,m})\prod\limits_{\substack{1\leq i\leq m+1\\m+2\leq j\leq m+n+1}}\varepsilon(b_{i,j})g(T_{m+n+1}^{m+1})+(-1)^{m+1}f(T_m^0)g(M_{m+1,m+2}^{m+1,m+n})\prod\limits_{\substack{1\leq i\leq m\\m+1\leq j\leq m+n+1}}\varepsilon(b_{i,j})+(-1)^{m+2}f(T_m^0)g(M_{m+2,m+3}^{m+1,m+n})\prod\limits_{\substack{1\leq i\leq m\\m+1\leq j\leq m+n+1}}\varepsilon(b_{i,j})+\dots
+(-1)^{m+n}f(T_m^0)g(M_{m+n,m+n+1}^{m+1,m+n})\prod\limits_{\substack{1\leq i\leq m\\m+1\leq j\leq m+n+1}}\varepsilon(b_{i,j})+(-1)^{m+n+1}f(T_m^0)g(T_{m+n}^m)a_{m+n+1}\prod\limits_{\substack{1\leq i\leq m\\m+1\leq j\leq m+n}}\varepsilon(b_{i,j})\prod_{i=1}^{m+n}\varepsilon(b_{i,m+n+1}).$ \end{center}
On the other hand we have
\begin{center} $\left(\delta^{\varepsilon}_m(f)\smile g+(-1)^mf\smile \delta^{\varepsilon}_n(g)\right)(T^0_{m+n+1})=$\end{center}
\begin{center}$\displaystyle\delta^{\varepsilon}_m(f)(T_{m+1}^0)g(T_{m+n+1}^{m+1})\prod\limits_{\substack{1\leq i\leq m+1\\m+2\leq j\leq m+n+1}}\varepsilon(b_{i,j})+(-1)^mf(T_m^0)\delta^{\varepsilon}_n(g)(T_{m+n+1}^m)\prod\limits_{\substack{1\leq i\leq m\\m+1\leq j\leq m+n+1}}\varepsilon(b_{i,j})=a_1\prod_{i=2}^{m+1}\varepsilon(b_{1,i})f(T_{m+1}^1)g(T_{m+n+1}^{m+1})\prod\limits_{\substack{1\leq i\leq m+1\\m+2\leq j\leq m+n+1}}\varepsilon(b_{i,j})-\displaystyle-f(M_{1,2}^{1,m})g(T_{m+n+1}^{m+1})\prod\limits_{\substack{1\leq i\leq m+1\\m+2\leq j\leq m+n+1}}\varepsilon(b_{i,j})+\dots+(-1)^if(M_{i,i+1}^{1,m})g(T_{m+n+1}^{m+1})\prod\limits_{\substack{1\leq i\leq m+1\\m+2\leq j\leq m+n+1}}\varepsilon(b_{i,j})+\dots+(-1)^{m}f(M_{m,m+1}^{1,m})g(T_{m+n+1}^{m+1})\prod\limits_{\substack{1\leq i\leq m+1\\m+2\leq j\leq m+n+1}}\varepsilon(b_{i,j})+(-1)^{m+1}f(T_{m}^0)a_{m+1}\displaystyle\prod_{i=1}^m\varepsilon(b_{i,m+1})g(T_{m+n+1}^{m+1})\prod\limits_{\substack{1\leq i\leq m+1\\m+2\leq j\leq m+n+1}}\varepsilon(b_{i,j})+$\end{center}
\begin{center} $\displaystyle(-1)^{m}f(T_m^0)a_{m+1}\prod_{i=m+2}^{m+n+1}\varepsilon(b_{m+1,i})g(T_{m+n+1}^{m+1})\prod\limits_{\substack{1\leq i\leq m\\m+1\leq j\leq m+n+1}}\varepsilon(b_{i,j})+(-1)^{m+1}f(T_m^0)\prod\limits_{\substack{1\leq i\leq m\\m+1\leq j\leq m+n+1}}\varepsilon(b_{i,j})g(M_{m+1,m+2}^{m+1,m+n})+\dots+(-1)^{m}f(T_m^0)\prod\limits_{\substack{1\leq i\leq m\\m+1\leq j\leq m+n+1}}\varepsilon(b_{i,j})
(-1)^{n}g(M_{m+n,m+n+1}^{m+1,m+n})+(-1)^{m}f(T_m^0)(-1)^{n+1}g(T_{m+n}^m)a_{m+n+1}\prod_{i=m+1}^{m+n}\varepsilon(b_{i,m+n+1})
\prod\limits_{\substack{1\leq i\leq m\\m+1\leq j\leq m+n+1}}\varepsilon(b_{i,j}).$\end{center}
One should note that the $(m+1)^{th}$ term in the expansion of $\delta^{\varepsilon}_m(f)\smile g$ and the first term in that of
$(-1)^mf\smile \delta^{\varepsilon}_n(g)$ cancel each other. It is easy to see now that the terms in the expansion of $\delta^{\varepsilon}_{m+n}(f\smile g)$ are equal to the remaining terms in that of $\delta^{\varepsilon}_m(f)\smile g+(-1)^mf\smile \delta^{\varepsilon}_n(g)$ in the order in which they appear.
Therefore we have the following proposition.
\begin{proposition} The cup product defines a structure of graded associative algebra on the secondary Hochschild cohomology $H^{*}((A,B,\varepsilon);A)$.
$$\smile:H^{*}((A,B,\varepsilon);A)\otimes H^{*}((A,B,\varepsilon);A)\to H^{*}((A,B,\varepsilon);A).$$
\end{proposition}
\begin{proof}
It follows from (\ref{delta1}).
\end{proof}
\subsection{Bracket Product}
Next we want to define a pre-Lie system. Take $V_m=C^{m+1}((A,B,\varepsilon);A)$ for all $m\geq -1$, and $V_m=0$ for all $m<-1$. Notice that the $m$-cochains have degree $m-1$. Because of this shift it is more convenient to give the definition of $\circ_i:V_{m-1}\otimes V_{n-1}\to V_{m+n-2}$. For $m$ and $n\geq 0$ and $0\leq i\leq m-1$ we define
$$\circ_i:C^m((A,B,\varepsilon);A)\otimes C^n((A,B,\varepsilon);A)\to C^{m+n-1}((A,B,\varepsilon);A).$$
If $f^m\in V_{m-1}=C^m((A,B,\varepsilon);A)$, $g^n\in V_{n-1}=C^n((A,B,\varepsilon);A)$ and $0\leq i\leq m-1$ then
\begin{eqnarray*}
&&f^m\circ_i g^n\left(\displaystyle
\otimes
\left(
\begin{array}{cccccccc}
a_{1}& b_{1,2} & ...&b_{1,m+n-2}&b_{1,m+n-1}\\
1 & a_{2} &...&b_{2,m+n-2}&b_{2,m+n-1}\\
. & . &...&.&.\\
1& 1& ...&a_{m+n-2}&b_{m+n-2,m+n-1}\\
1 & 1& ...&1&a_{m+n-1}\\
\end{array}
\right)\right)=
\end{eqnarray*}
\begin{eqnarray*}
&&f^m\left(\displaystyle
\otimes
\left(
\begin{array}{ccccccccc}
a_{1}& ...&b_{1,i}&\prod\limits_{i< j\leq n+i}b_{1,j}&b_{1,n+i+1}&...&b_{1,m+n-1}\\
1 & ...&b_{2,i}&\prod\limits_{i< j\leq n+i}b_{2,j}&b_{2,n+i+1}&...&b_{2,m+n-1}\\
. & ...&.&.&.&...&.\\
1 & ...&a_{i}&\prod\limits_{i< j\leq n+i}b_{i,j}&b_{i,n+i+1}&...&b_{i,m+n-1}\\
1 & ...&1&g^n\left(T_{n+i}^i\right)&\prod\limits_{i< j\leq n+i}b_{j,n+i}&...&\prod\limits_{i< j\leq n+i}b_{j,m+n-1}\\
1 & ...&1&1&a_{n+i+1}&...&b_{n+i+1,m+n-1}\\
. & ... &.&.&.&...&.\\
1& ...&1&1&1&...&b_{m+n-2,m+n-1}\\
1 & ...&1&1&1&...&a_{m+n-1}\\
\end{array}
\right)\right).
\end{eqnarray*}
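In low degrees these operations take a familiar form. For $f,g\in C^1((A,B,\varepsilon);A)$ we have $(f\circ_0 g)(a)=f(g(a))$, the composition of $k$-linear maps, while for $f\in C^2((A,B,\varepsilon);A)$ and $g\in C^1((A,B,\varepsilon);A)$ we have
$$(f\circ_0 g)(a\otimes b\otimes \alpha)=f(g(a)\otimes b\otimes \alpha)\; \; \mathrm{and} \; \;(f\circ_1 g)(a\otimes b\otimes \alpha)=f(a\otimes g(b)\otimes \alpha).$$
In particular, for two $1$-cochains the bracket defined below is the usual commutator $f\circ_0 g-g\circ_0 f$.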
One can check that for all $0\leq i\leq m-1$ and $0\leq j\leq i-1$ we have
\begin{eqnarray*}
&&(f^m\circ_i g^n)\circ_j h^p\left(\displaystyle
\otimes
\left(
\begin{array}{cccccccc}
a_{1}& ...&b_{1,m+n+p-2}\\
. &...&.\\
1 & ...&a_{m+n+p-2}\\
\end{array}
\right)\right)=\\
&&\\
&&(f^m\circ_i g^n)\left(\displaystyle
\otimes
\left(
\begin{array}{ccccccccc}
a_{1}& ...&\prod\limits_{j< k\leq j+p}b_{1,k}&...&b_{1,m+n+p-2}\\
. &...&.&...&.\\
1 &...&h\left(T_{j+p}^j\right)&...&\prod\limits_{j< k\leq j+p}b_{k,m+n+p-2}\\
. &...&.&...&.\\
1 & ...&1&...&a_{m+n+p-2}\\
\end{array}
\right)\right)
=\\
&&\\
&&f^m\left(\displaystyle
\otimes
\left(
\begin{array}{ccccccccc}
a_{1}& ...&\prod\limits_{j< k\leq j+p}b_{1,k}&...&\prod\limits_{i+p-1< l\leq i+p+n-1}b_{1,l}&...&b_{1,m+n+p-2}\\
. &...&.&...&.&...&.\\
1 &...&h^p\left(T_{j+p}^j\right)&...&\prod\limits_{\substack{j< k\leq n+j\\
i+p-1<l\leq i+p+n-1}}b_{k,l}&...&\prod\limits_{j< k\leq j+p}b_{k,m+n+p-2}\\
. &...&.&...&.&...&.\\
1 &...&1&...&g^n\left(T_{i+p+n-1}^{i+p-1}\right)&...&\prod\limits_{i+p-1< l\leq i+p+n-1}b_{l,m+n+p-2}\\
. &...&.&...&.&...&.\\
1 & ...&1&...&1&...&a_{m+n+p-2}\\
\end{array}
\right)\right)
=\\
&&\\
&&(f^m\circ_j h^p)\left(\displaystyle
\otimes
\left(
\begin{array}{ccccccccc}
a_{1}& ...&\prod\limits_{i+p-1< l\leq i+p+n-1}b_{1,l}&...&b_{1,m+n+p-2}\\
. &...&.&...&.\\
1 &...&g\left(T_{i+p+n-1}^{i+p-1}\right)&...&\prod\limits_{i+p-1< l\leq i+p+n-1}b_{l,m+n+p-2}\\
. &...&.&...&.\\
1 & ...&1&...&a_{m+n+p-2}\\
\end{array}
\right)\right)
=\\
&&\\
&&(f^m\circ_j h^p)\circ_{i+p-1}g^n \left(\displaystyle
\otimes
\left(
\begin{array}{cccccccc}
a_{1}& ...&b_{1,m+n+p-2}\\
. &...&.\\
1 & ...&a_{m+n+p-2}\\
\end{array}
\right)\right).
\end{eqnarray*}
A similar computation shows that for all $i\leq j\leq i+n-1$ we have
\begin{eqnarray*}
&&(f^m\circ_i g^n)\circ_j h^p\left(\displaystyle
\otimes
\left(
\begin{array}{cccccccc}
a_{1} & ...&b_{1,m+n+p-2}\\
. &...&.\\
1 & ...&a_{m+n+p-2}\\
\end{array}
\right)\right)=\\
&&f^m\circ_i(g^n\circ_{j-i} h^p)\left(\displaystyle
\otimes
\left(
\begin{array}{cccccccc}
a_{1}& ...&b_{1,m+n+p-2}\\
. &...&.\\
1 & ...&a_{m+n+p-2}\\
\end{array}
\right)\right).
\end{eqnarray*}
Since there is a shift between the degree and the index of the cohomology group, we will write an explicit formula for the pre-Lie algebra structure. Let $f^m\in C^m((A,B,\varepsilon); A)$ and $g^n\in C^n((A,B,\varepsilon); A).$ We define
$$f^m\circ g^n=\sum_{i=0}^{m-1}(-1)^{(n-1)i}f^m\circ_i g^n.$$ The above considerations imply that we have the following theorem.
\begin{theorem} $\left(H^*((A,B,\varepsilon); A),[.,.]\right)$ is a graded Lie algebra, where
$$[f^m,g^n]=f^m\circ g^n-(-1)^{(m-1)(n-1)}g^n\circ f^m$$
and the degree of $f^m\in H^m((A,B,\varepsilon); A)$ is $m-1$.
\end{theorem}
\begin{proof}
It follows from Theorem \ref{theorem1} and the above computations.
\end{proof}
Define $\pi:A\otimes A\otimes B\to A$ determined by $$\pi(a\otimes b\otimes \alpha)=ab\varepsilon(\alpha).$$
It is easy to see that $\delta^{\varepsilon}_1(id_A)=\pi$: indeed, $\delta^{\varepsilon}_1(id_A)(a\otimes b\otimes \alpha)=a\varepsilon(\alpha)b-ab\varepsilon(\alpha)+ab\varepsilon(\alpha)=ab\varepsilon(\alpha)$, since $\varepsilon(B)\subset {\mathcal Z}(A)$. Thus $\pi$ is a coboundary (of degree $1$). One can also show that
\begin{eqnarray}
f^m\smile g^n=(\pi \circ_0 f^m)\circ_m g^n, \label{equation1}
\end{eqnarray}
and
\begin{eqnarray}
\delta^{\varepsilon}_m(f^m)=[f^m,-\pi]=(-1)^{m-1}[\pi,f^m].\label{equation2}
\end{eqnarray}
At this point the proof of Theorem 3 from \cite{g1} can be used (and we won't reproduce it here) to get the following result:
\begin{theorem} For $f^m\in C^m((A,B,\varepsilon); A)$ and $g^n\in C^n((A,B,\varepsilon); A)$ we have
\begin{eqnarray*}
&f^m\circ \delta^{\varepsilon}(g^n)-\delta^{\varepsilon}(f^m\circ g^n)+(-1)^{n-1}\delta^{\varepsilon}(f^m)\circ g^n=&\\
&(-1)^{n-1}(g^n\smile f^m-(-1)^{mn}f^m\smile g^n).&
\end{eqnarray*}
\end{theorem}
As a simple consequence we obtain:
\begin{corollary} If $f^m\in H^m((A,B,\varepsilon); A)$ and $g^n\in H^n((A,B,\varepsilon); A)$ we have
$$f^m\smile g^n=(-1)^{mn}g^n\smile f^m.$$
\end{corollary}
\section{Extensions of $B$-algebras}
Suppose that $X$ is a $B$-algebra with $\varepsilon_X:B\to X$ and that there exists a surjective morphism of $B$-algebras $\pi:X\to A$ such that $ker(\pi)^2=0$. Let $M=ker(\pi)$. We require that the $B$-algebra structure induced on $A$ by the map $\pi\circ\varepsilon_X$ coincides with that defined by the map $\varepsilon$. Consider $s:A\to X$ a $k$-linear map such that $\pi s=id_A$.
Then $M$ is an $A$-bimodule with the multiplication given by
$$am=s(a)m,\; \; \; \; ma=ms(a),$$
for all $m\in M$ and $a\in A$. One can notice that this action does not depend on the choice of the section $s$. Moreover, for all $\alpha \in B$ and all $m\in M$ we have
$$\varepsilon(\alpha)m=m\varepsilon(\alpha)=\varepsilon_X(\alpha)m.$$
As a $k$-vector space we obviously have that $X=s(A)\oplus M$ (that is $s(A)+M=X$ and $s(A)\cap M=0$).
Because of Proposition 2.1 from \cite{sta}, we know that a $B$-algebra structure on $X$ is the same as an associative family of products $m_{\alpha,X}:X\otimes X\to X$ where $m_{\alpha,X}(x\otimes y)=\varepsilon_X(\alpha)xy$. Since $\pi:X\to A$ is a morphism of $B$-algebras we must have that
$$\pi(m_{\alpha,X}((s(a)+m)\otimes (s(b)+n)))=m_{\alpha}(\pi(s(a)+m)\otimes \pi(s(b)+n))=\varepsilon(\alpha)ab.$$
Using this and the linearity of the product we get
\begin{eqnarray*}
&&m_{\alpha,X}((s(a)+m)\otimes (s(b)+n))=\\
&&\varepsilon_X(\alpha)(s(a)s(b)+s(a)n+ms(b))=\\
&&s(\varepsilon(\alpha)ab)+\varepsilon(\alpha)an+mb\varepsilon(\alpha) +\varepsilon_X(\alpha)s(a)s(b)-s(\varepsilon(\alpha)ab).
\end{eqnarray*}
One can see that the $k$-linear map $c_s:A\otimes A\otimes B\to M$ defined by
$$c_s(a\otimes b\otimes \alpha)=\varepsilon_X(\alpha)s(a)s(b)-s(\varepsilon(\alpha)ab)$$
is a 2-cocycle. Moreover, if $t:A\to X$ is another section for $\pi$ then
$$\delta^{\varepsilon}_1(s-t)=c_s-c_t.$$
To summarize, we have the following result.
\begin{lemma} Let $X$ be a $B$-algebra and $\pi:X\to A$ a surjective morphism of $B$-algebras such that $M^2=0$ (where $M=ker(\pi)$). Then $\widehat{c_s}\in H^2((A,B,\varepsilon); M)$ does not depend on the choice of the section $s$. We will denote this element by $c_{X,\pi}$.
\label{lemma4}
\end{lemma}
Next we prove that $c_{X,\pi}$ depends only on the isomorphism class of the extension $0\to M\to X\stackrel{\pi}{\rightarrow} A\to 0$.
\begin{proposition}\label{equiv}
Let $X_1$ and $X_2$ be two $B$-algebras, $\pi_i:X_i\to A$ surjective morphisms of $B$-algebras such that $(ker(\pi_i))^2=0$. Moreover assume that there exists an isomorphism of $B$-algebras $F:X_1\to X_2$ such that $\pi_2\circ F=\pi_1$. Under the identification $M_2=\ker(\pi_2)=F(M_1)$ we have that $c_{X_2,\pi_2}=F^*(c_{X_1,\pi_1})\in H^2((A,B,\varepsilon); M_2)$.
\end{proposition}
\begin{proof}
The proof follows from Lemma \ref{lemma4} and the fact that if $s:A\to X_1$ is a section for $\pi_1$ then $Fs:A\to X_2$ is a section for $\pi_2$.
\end{proof}
In addition, for any $A$-bimodule $M$ such that $\varepsilon(\alpha)m=m\varepsilon(\alpha)$ and for any cocycle $c\in C^2((A, B,\varepsilon); M)$
we can define a $B$-algebra $X$ and a surjective morphism of $B$-algebras $\pi:X\rightarrow A$ such that $M=ker(\pi)$, $M^2=0$ and $\pi\circ\varepsilon_X=\varepsilon$. To see this we use Proposition 2.1 from \cite{sta} to define a family of products $m_{\alpha,X}:X\otimes X\to X$ as follows. First, we take $X=A\oplus M$, as a $k$-vector space. Second, we define $$m_{\alpha, X}((a+m)\otimes(b+n))=\varepsilon(\alpha)ab+\varepsilon(\alpha)an+mb\varepsilon(\alpha)+c(a\otimes b\otimes \alpha).$$
One can check without any difficulty that $(X, m_{1, X})$ is a $k$-algebra, with unit $1_X=1_A-c(1_A\otimes 1_A\otimes 1_B),$ and that for all $\alpha, \beta\in B$ and $q\in k$ we have $m_{\alpha+\beta, X}=m_{\alpha, X}+m_{\beta, X}$ and $m_{q\alpha, X}=qm_{\alpha, X}.$ The third condition of Proposition 2.1, $m_{\beta\gamma, X}(m_{\alpha, X}\otimes id)=m_{\alpha\beta, X}(id\otimes m_{\gamma, X})$ is equivalent to $c$ being a cocycle and it is satisfied, so $X$ is a $B$-algebra. We have $\varepsilon_X:B\to X$ defined by
$$\varepsilon_X(\alpha)=\varepsilon(\alpha)-2\varepsilon(\alpha)c(1_A \otimes 1_A\otimes 1_B)+c(1_A\otimes 1_A\otimes \alpha).$$
Third, it is clear that the canonical projection $\pi: X\rightarrow A$ is a surjective morphism of $k$-algebras such that $ker(\pi)=M$, $M^2=0$, and that $\pi\circ\varepsilon_X(\alpha)=\varepsilon(\alpha)$. To see that $\pi$ is a morphism of $B$-algebras note that for all $\alpha \in B, a\in A$, and $m\in M$ we have
\begin{eqnarray*}
&&\pi(\alpha(a+m))=\pi(m_{1, X}(\varepsilon_X(\alpha)\otimes(a+m)))=\\
&=&\pi( m_{1, X}(\varepsilon(\alpha)-2\varepsilon(\alpha)c(1_A\otimes 1_A\otimes 1_B)+c(1_A\otimes 1_A\otimes\alpha))\otimes(a+m)))\\ &=&\varepsilon(\alpha) a\\
&=&\alpha\pi(a+m).
\end{eqnarray*}
Finally, we show that the construction of $X$ depends only on the cohomology class of the cocycle $c\in C^2((A, B, \varepsilon); M)$. For this let $c_1, c_2$ be two cocycles in $C^2((A, B, \varepsilon); M)$ such that $c_1-c_2=\delta_1^\varepsilon f$, where $f:A\rightarrow M$ is $k$-linear. Denote by $X_1$ and $X_2$ the $B$-algebras defined by the cocycles $c_1$ and $c_2$, by $m_{\alpha, X_1}$ and $m_{\alpha, X_2}$ their corresponding families of products, and by $\pi_1$ and $\pi_2$ the canonical projections of $X_1$ and $X_2$ onto $A$. Note that by construction $X_1=X_2=A\oplus M$ as $k$-vector spaces. Then the map $F: X_1\rightarrow X_2$, defined by the formula $F(a+m)=a+m+f(a)$ is an isomorphism of $B$-algebras such that $\pi_2\circ F=\pi_1$. It is easy to see that $F$ is an isomorphism of $k$-algebras such that $\pi_2\circ F=\pi_1$, so we will only prove that $F$ is $B$-linear. Indeed, for $\alpha\in B, a\in A$ and $m\in M$ we have
\begin{eqnarray*}
&&F(\alpha (a+m))=F(m_{1, X_1}(\varepsilon_{X_1}(\alpha)\otimes (a+m)))\\
&&=F(m_{1, X_1}(\varepsilon(\alpha)-2\varepsilon(\alpha)c_1(1\otimes 1\otimes 1)+c_1(1\otimes 1\otimes\alpha))\otimes(a+m)))\\
&&=F(\varepsilon(\alpha)a+\varepsilon(\alpha)m-2\varepsilon(\alpha)c_1(1\otimes 1\otimes 1)a+c_1(1\otimes 1\otimes\alpha)a+c_1(\varepsilon(\alpha)\otimes a\otimes 1))\\
&&=\varepsilon(\alpha)a+\varepsilon(\alpha)m-2\varepsilon(\alpha)c_1(1\otimes 1\otimes 1)a+c_1(1\otimes 1\otimes\alpha)a+c_1(\varepsilon(\alpha)\otimes a\otimes 1)\\
&&+f(\varepsilon(\alpha)a).
\end{eqnarray*}
On the other hand we have
\begin{eqnarray*}
&&\alpha F(a+m)=m_{1,X_2}(\varepsilon_{X_2}(\alpha)\otimes(a+m+f(a)))\\
&&=m_{1, X_2}((\varepsilon(\alpha)-2\varepsilon(\alpha)c_2(1\otimes 1\otimes 1)+c_2(1\otimes 1\otimes\alpha))\otimes (a+m+f(a)))\\
&&=\varepsilon(\alpha)a+\varepsilon(\alpha)m+\varepsilon(\alpha)f(a)-2\varepsilon(\alpha)c_2(1\otimes 1\otimes 1)a+c_2(1\otimes 1\otimes\alpha)a\\&&+c_2(\varepsilon(\alpha)\otimes a\otimes 1).
\end{eqnarray*}
Thus we get $$F(\alpha (a+m))-\alpha F(a+m)=2\varepsilon(\alpha)(c_2(1\otimes 1\otimes 1)-c_1(1\otimes 1\otimes 1))a+$$
$$+(c_1(1\otimes 1\otimes\alpha)-c_2(1\otimes 1\otimes\alpha))a+(c_1(\varepsilon(\alpha)\otimes a\otimes 1)-c_2(\varepsilon(\alpha)\otimes a\otimes 1))-\varepsilon(\alpha)f(a)+f(\varepsilon(\alpha)a). $$
Since $c_1-c_2=\delta_1^\varepsilon f$ we have the following identities $$c_2(1\otimes 1\otimes 1)-c_1(1\otimes 1\otimes 1)=-f(1)$$
$$c_1(1\otimes 1\otimes\alpha)-c_2(1\otimes 1\otimes\alpha)=2\varepsilon(\alpha)f(1)-f(\varepsilon(\alpha))$$
$$c_1(\varepsilon(\alpha)\otimes a\otimes 1)-c_2(\varepsilon(\alpha)\otimes a\otimes 1)=\varepsilon(\alpha)f(a)-f(\varepsilon(\alpha)a)+f(\varepsilon(\alpha))a.$$
Therefore we obtain that $F(\alpha (a+m))-\alpha F(a+m)=0$, so $F$ is an isomorphism of $B$-algebras such that $\pi_2\circ F=\pi_1$.
Assume now that we have an extension given by the following data: a morphism of $k$-algebras $\varepsilon_X:B\to X$; a surjective morphism of $B$-algebras $\pi:X\to A$ such that $ker(\pi)^2=0$, $M=ker(\pi)$; $\pi\circ\varepsilon_X=\varepsilon$; and a $k$-linear map $s:A\to X$ such that $\pi s=id_A$. If we consider the cocycle $c_s\in C^2((A, B,\varepsilon); M)$ defined earlier and then we consider the extension associated to this cocycle then it is not hard to see that we obtain an extension equivalent to the initial one. Similarly, given an $A$-bimodule $M$ such that $\varepsilon(a)m=m\varepsilon(a)$ and a cocycle $c\in C^2((A, B, \varepsilon); M)$ we construct the extension associated to $c$. If we now take the cocycle $c_s$ determined by a section $s:A\to X$ with $\pi s=id_A$ then we have that $c_s-c=\delta^\varepsilon_1 u$, where $u:A\to M$ is the $k$-linear map induced by $s$ on $M$. Indeed, we have that $c_s(a\otimes b\otimes\alpha)= c(\varepsilon(\alpha)\otimes ab\otimes 1)+c(1\otimes 1\otimes\alpha)ab-2\varepsilon(\alpha)c(1\otimes 1\otimes 1)ab+\varepsilon(\alpha)c(a\otimes b\otimes 1)+ \delta u(a\otimes b\otimes\alpha)$ for all $a, b\in A$ and $\alpha\in B$. The key observation here is that the cocycle condition implies that $c(a\otimes b\otimes\alpha)=c(\varepsilon(\alpha)\otimes ab\otimes 1)+c(1\otimes 1\otimes\alpha)ab-2\varepsilon(\alpha)c(1\otimes 1\otimes 1)ab+\varepsilon(\alpha)c(a\otimes b\otimes 1).$
The above considerations allow us to conclude that $H^2((A, B, \varepsilon); M)$ can be naturally identified with the equivalence classes of extensions of $B$-algebras of $A$ by $M$, for any $A$-bimodule $M$ such that $\varepsilon(\alpha)m=m\varepsilon(\alpha)$.
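In particular, the class of the zero cocycle corresponds to the split extension: taking $c=0$ in the construction above gives $X=A\oplus M$ with $m_{\alpha, X}((a+m)\otimes(b+n))=\varepsilon(\alpha)ab+\varepsilon(\alpha)an+mb\varepsilon(\alpha)$ and $\varepsilon_X=\varepsilon$, for which the inclusion $A\to X$ is a $B$-algebra section of $\pi$. Conversely, if $\pi$ admits a section which is a morphism of $B$-algebras then the associated cocycle $c_s$ vanishes, so an extension is split if and only if its class in $H^2((A,B,\varepsilon);M)$ is zero.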
\section{A Hodge Type Decomposition of the Secondary Cohomology}
In this section we will assume that $A$ is commutative, $k$ is a field of characteristic 0, and $M$ is a symmetric $A$-bimodule (i.e. $am=ma$ for all $a\in A$ and $m\in M$). We denote by $kS_n$ the group algebra of the group of permutations of $n$ objects. Under these conditions Barr proved in \cite{B} that $kS_n$ operates on the
$n$-cochains, $C^n(A, M)$, of the complex defining the Hochschild cohomology of $A$ with coefficients in $M$ and that there is a non-central idempotent $e_n\in\mathbb{Q}S_n$ such that $\delta_n(e_nf)=e_{n+1}(\delta_n f)$. This implies that the Hochschild
complex is a direct sum of two sub-complexes, corresponding to $e_n$ and $1-e_n$. Barr's ideas were extended in \cite{gs2} by Gerstenhaber and Schack
who showed that $\mathbb{Q}S_n$ contains $n$ mutually orthogonal idempotents $e_n(1), e_n(2), \dots, e_n(n)$ which sum to the identity and with the property that for each cochain $f\in C^n(A, M)$ we have $\delta_n(e_n(k)f)=e_{n+1}(k)(\delta _nf).$ From this it follows that the Hochschild cohomology $H^n(A, M)$ has a Hodge type decomposition into a direct sum of $n$ summands. Barr's original idempotent $e_n$ is $e_n(1)$ and the idempotents and the decomposition are labeled BGS (Barr-Gerstenhaber-Schack). The action of $S_n$ on the $n$-cochains $C^n(A, M)$ is given by $$(\pi f)(a_1\otimes a_2\otimes\dots\otimes a_n)=(f\pi^{-1})(a_1\otimes a_2\otimes\dots\otimes a_n)=f(a_{\pi(1)}\otimes a_{\pi(2)}\otimes\dots\otimes a_{\pi(n)}).$$
It is not hard to see that $S_n$ acts on the $n$-cochains of the secondary cohomology. Indeed, for $\pi\in S_n$ and $f\in C^n((A, B, \varepsilon); M)$ we define the left action of $S_n$ by setting
\begin{center} $(\pi f)\left(\displaystyle\otimes
\left(
\begin{array}{cccccccc}
a_{1}& b_{1,2} & ...&b_{1,n}\\
1 & a_{2} & ...&b_{2,n}\\
. & . &...&.\\
1 & 1& ...&a_{n}\\
\end{array}
\right)\right)=f\left(\displaystyle\otimes
\left(
\begin{array}{cccccccc}
a_{\pi(1)}& b_{\pi(1, 2)} & ...&b_{\pi(1, n)}\\
1 & a_{\pi(2)} & ...&b_{\pi(2, n)}\\
. & . &...&.\\
1& 1& ...&a_{\pi(n)}\\
\end{array}
\right)\right),$ \end{center}
where, for each $1\leq i < j\leq n,$ the element $b_{\pi(i, j)}$ is equal to $b_{\pi(i), \pi(j)}$ if $\pi(i)<\pi(j)$ and equal to $b_{\pi(j),\pi(i)}$ if $\pi(j)<\pi(i)$. Similarly, one defines the right action of $S_n$ on $C^n((A, B, \varepsilon); M)$ by using $\pi^{-1}$. It is important to note that the order
of the elements $a_{\pi(1)}, a_{\pi(2)}, \dots, a_{\pi(n)}$ on the diagonal of the above tensor matrix determines completely the positions of $b_{\pi(i, j)}$.
We want to show that for $f\in C^n((A, B, \varepsilon); M)$ we have that $\delta^{\varepsilon}_n(e_n(k)f)=e_{n+1}(k)(\delta^{\varepsilon}_n f)$. This will imply that the secondary cohomology $H^\bullet((A, B, \varepsilon); M)$ has a Hodge type decomposition. For this we use that the BGS idempotents $e_n(1), e_n(2), \dots, e_n(n)$ are polynomials, with rational coefficients, of the total shuffle operator.
Following Barr \cite{B}, for $0<r<n$ and $\pi\in S_n$ we say that $\pi$ is a pure shuffle of $r$ through $n-r$ if $\pi(1)<\dots<\pi(r)$ and $\pi(r+1)<\dots <\pi(n)$. Then the $r^{th}$ shuffle operator is $s_{r, n-r}=\sum\limits_{\substack{\mathrm{pure} \\ \mathrm{shuffles}}}(-1)^\pi\pi$, where $(-1)^\pi$ is the sign of $\pi$. The total shuffle operator is defined by $s_n=\sum\limits_{1\leq r\leq n-1}s_{r, n-r}$ and satisfies $\delta_n(s_nf)=s_{n+1}(\delta_n f)$, for all $f\in C^n(A, M)$. Moreover, Gerstenhaber and Schack showed in \cite{gs2} that the minimal polynomial of $s_n$ over $\mathbb{Q}$ is $\mu_n(x)=\prod\limits_{1\leq i\leq n}[x-(2^i-2)]$. They defined \begin{center} $e_n(k)=\prod\limits_{\substack{1\leq i\leq n\\ i\neq k}}(\lambda_k-\lambda_i)^{-1}\prod\limits_{\substack{1\leq i\leq n\\i\neq k}}(s_n-\lambda_i),$ where $\lambda_i=2^i-2$.\end{center}
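For example, when $n=2$ the only pure shuffles are the identity and the transposition $(12)$, so $s_2=1-(12)$ and $\mu_2(x)=x(x-2)$; the idempotents specialize to $e_2(1)=\frac{1}{2}(1+(12))$ and $e_2(2)=\frac{1}{2}(1-(12))$, the symmetrization and antisymmetrization operators on $2$-cochains.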
We want to justify that for every $f\in C^n((A, B, \varepsilon); M)$ we have $$\left(\delta_n^\varepsilon(s_n f)-s_{n+1}(\delta_n^\varepsilon f)\right)\left(\otimes\left(\begin{array}{cccccccc}
a_{1}& b_{1,2} & ...&b_{1,n+1}\\
1 & a_{2} & ...&b_{2,n+1}\\
. & . &...&.\\
1 & 1& ...&a_{n+1}\\
\end{array}
\right)\right)=0,$$ for $a_1, a_2, \dots, a_{n+1}\in A$ and $b_{ij}\in B$, $1\leq i<j\leq n+1$. The expansion of the left side shows that the
identity holds for $b_{i,j}=1$, a direct consequence of $\delta_n(s_n \bar{f})-s_{n+1}(\delta_n \bar{f})=0$ (where $\bar{f}$ is obtained from $f$ by taking $b_{i,j}=1$). This means that the diagonals of the tensor sub-matrices of types $T^1_{n+1}, T^0_n,$ and $M_{i, i+1}^{1,n}$ in the expansion of $\delta_n^\varepsilon(s_n f)-s_{n+1}(\delta_n^\varepsilon f)$ appear in identical pairs and with opposite signs. But, as a consequence of the way we defined the action of $S_n$ on the secondary cochains and of the definition of $\delta_n^\varepsilon$, the order
of the elements $a_{\pi(1)}, a_{\pi(2)}, \dots, a_{\pi(n+1)}$ and of the products $a_{\pi(i)}a_{\pi(j)}\varepsilon(b_{\pi(i,j)})$ on the diagonal of the above tensor matrices determines completely the positions of all $b_{\pi(i, j)}$ and their products in $T^1_{n+1}, T^0_n,$ and $M_{i, i+1}^{1,n}$.
This implies that $\delta_n^\varepsilon(s_n f)=s_{n+1}(\delta_n^\varepsilon f)$.
In addition, because $\mu_n(s_n)=0$, we have the identity \begin{center} $\delta_n^\varepsilon(\mu_n (s_n) f)=\prod\limits_{\substack{1\leq i\leq n}}(s_{n+1}-\lambda_i)(\delta_n^\varepsilon f)=0,$\end{center} so we get that
\begin{center} $\delta_n^\varepsilon(e_n(k)f)=\prod\limits_{\substack{1\leq i\leq n\\i\neq k}}(\lambda_k-\lambda_i)^{-1}\prod\limits_{\substack{1\leq i\leq n\\i\neq k}}(s_{n+1}-\lambda_i)(\delta_n^\varepsilon f)=$\end{center}
\begin{center} $=\prod\limits_{\substack{1\leq i\leq n+1\\ i\neq k}}(\lambda_k-\lambda_i)^{-1}\prod\limits_{\substack{1\leq i\leq n\\ i\neq k}}(s_{n+1}-\lambda_i)
(\lambda_k-\lambda_{n+1}+s_{n+1}-\lambda_k)(\delta_n^\varepsilon f)=e_{n+1}(k)(\delta_n^\varepsilon f).$ \end{center}
Adopting the notations from \cite{gs2}, each idempotent $e_n(k)$ determines a submodule
of $C^n((A, B, \varepsilon); M)$, namely $$C^{k, n-k}((A, B, \varepsilon); M)=e_n(k)C^n((A, B, \varepsilon); M).$$
By setting $e_n(k)=0$ if $k>n$, $e_n(0)=0$ if $n\neq 0$, and $e_0(0)=1$ we have that the complex defining the secondary cohomology
decomposes as \begin{center} $C^\bullet ((A, B, \varepsilon); M)=\coprod\limits_{k\geq 0}C^{k, \bullet-k}((A, B, \varepsilon); M)=\coprod\limits_{k\geq 0}e_n(k)C^\bullet((A, B, \varepsilon); M).$ \end{center}
Denoting by $H^{k, \bullet-k}((A, B, \varepsilon); M)$ the homology of the complex $C^{k, \bullet-k}((A, B, \varepsilon); M)$ we have the following
\begin{theorem} If $\varepsilon: B\rightarrow A$ is a morphism of commutative $k$-algebras, $\mathbb{Q}\subset k$, and $M$ is a symmetric
$A$-bimodule then \begin{center} $H^\bullet((A, B, \varepsilon); M)=\coprod\limits_{k\geq 0}H^{k, \bullet-k}((A, B, \varepsilon); M)$. \end{center}
\end{theorem}
\section{Some Examples}
It was noticed in \cite{sta} that there exists a natural morphism $$\Phi_n:H^n((A,B,\varepsilon);M)\to H^n(A,M),$$
induced by the inclusion $i_n:A^{\otimes n}\to A^{\otimes n}\otimes B^{\otimes \frac{n(n-1)}{2}}$,
$$i_n(a_1\otimes ...\otimes a_n)=\displaystyle\otimes
\left(
\begin{array}{cccccccc}
a_{1}& 1 & ...&1&1\\
1 & a_{2} &...&1&1\\
. & . &...&.&.\\
1& 1& ...&a_{n-1}&1\\
1 & 1& ...&1&a_{n}\\
\end{array}
\right)
$$
In this section we will see that in general $\Phi_n$ is neither onto nor one to one.
First, notice that if $u:A\to M$ is $k$-linear such that $\delta^{\varepsilon}_1(u)=0$ then we must have that $a\varepsilon(\alpha)u(b)-u(ab\varepsilon(\alpha))+u(a)b\varepsilon(\alpha)=0$. This implies that $\Phi_1(u)$ is a derivation that is $B$-linear. Since, in general, not all $k$-derivations of $A$ are $B$-linear we get that $\Phi_1$ is not necessarily onto. We have the following result
\begin{proposition}
$$H^0((A,B,\varepsilon); M)= M^A,$$ $$H^1((A,B,\varepsilon); M)= Der_B(A,M)/Inn(A,M).$$
\end{proposition}
\begin{proof} Straightforward computation.
\end{proof}
\begin{proposition} Consider the map $\Phi_2: H^2((A,B,\varepsilon); M)\to H^2(A,M)$. If on $M$ we consider the $B$-bimodule structure induced by $\varepsilon$, then there exists an isomorphism
$$\chi: \frac{Der_k(B,M)}{\varepsilon^*(Der_k(A,M))}\to ker(\Phi_2)$$
determined by $\chi(u)(a\otimes b\otimes \alpha)=au(\alpha)b$.
\label{prop6}
\end{proposition}
\begin{proof}
Let $\sigma\in Z^2((A,B,\varepsilon); M)$ such that $\Phi_2(\widehat{\sigma})=0\in H^2(A,M)$. This means that there exists a $k$-linear map $u:A\to M$ such that $$\sigma(a\otimes b\otimes 1)=\delta_1(u)(a\otimes b)=au(b)-u(ab)+u(a)b.$$
We consider the element $\tau\in Z^2((A,B,\varepsilon); M)$, $\tau=\sigma-\delta_1^{\varepsilon}(u).$ Obviously we have that $\widehat{\sigma}=\widehat{\tau}\in H^2((A,B,\varepsilon); M)$, and $\tau\left(\displaystyle\otimes
\left(
\begin{array}{ccc}
a& 1\\
1 & b\\
\end{array}
\right)\right)=0.$
Since $\tau\in Z^2((A,B,\varepsilon); M)$, we have
\begin{eqnarray*}
&a\varepsilon(\alpha\beta)\tau\left(\displaystyle\otimes
\left(
\begin{array}{ccc}
b& \gamma\\
1 & c\\
\end{array}
\right)\right)-\tau\left(\displaystyle\otimes
\left(
\begin{array}{ccc}
ab\varepsilon(\alpha)& \beta\gamma\\
1 & c\\
\end{array}
\right)\right)
+\tau\left(\displaystyle\otimes
\left(
\begin{array}{ccc}
a& \alpha\beta\\
1 & bc\varepsilon(\gamma)\\
\end{array}
\right)\right)&\\
&-\tau\left(\displaystyle\otimes
\left(
\begin{array}{ccc}
a& \alpha\\
1 & b\\
\end{array}
\right)\right)c\varepsilon(\beta\gamma)=0.&
\end{eqnarray*}
When $\alpha=\beta=1$ we have:
$$a\tau\left(\displaystyle\otimes
\left(
\begin{array}{ccc}
b& \gamma\\
1 & c\\
\end{array}
\right)\right)=\tau\left(\displaystyle\otimes
\left(
\begin{array}{ccc}
ab& \gamma\\
1 & c\\
\end{array}
\right)\right),$$
and similarly when $\beta=\gamma=1$
$$\tau\left(\displaystyle\otimes
\left(
\begin{array}{ccc}
a& \alpha\\
1 & bc\\
\end{array}
\right)\right)=\tau\left(\displaystyle\otimes
\left(
\begin{array}{ccc}
a& \alpha\\
1 & b\\
\end{array}
\right)\right)c.$$
If we define $v:B\to M$ by $v(\alpha)=\tau\left(\displaystyle\otimes
\left(
\begin{array}{ccc}
1& \alpha\\
1 & 1\\
\end{array}
\right)\right)$ then we get:
$$\tau\left(\displaystyle\otimes
\left(
\begin{array}{ccc}
a& \alpha\\
1 & b\\
\end{array}
\right)\right)=a\tau\left(\displaystyle\otimes
\left(
\begin{array}{ccc}
1& \alpha\\
1 & 1\\
\end{array}
\right)\right)b=av(\alpha)b.$$
We will denote the $2$-cocycle $\tau$ by $\sigma_{v}$.
One can easily check that $v(\alpha\beta)=\varepsilon(\alpha)v(\beta)+v(\alpha)\varepsilon(\beta)$ (i.e. $v\in Der_k(B,M)$).
If $\sigma_v=\delta_1^{\varepsilon}(w)$ for some $w:A\to M$, then we must have
\begin{eqnarray}
av(\alpha)b=a\varepsilon(\alpha)w(b)-w(a\varepsilon(\alpha)b)+w(a)\varepsilon(\alpha)b. \label{w1}
\end{eqnarray}
For $\alpha=1$ we get that $w(ab)=aw(b)+w(a)b$ and so $w\in Der_k(A,M)$. If in equation (\ref{w1}) we take $a=b=1$ then we have $$v(\alpha)=w(\varepsilon(\alpha)),$$
which concludes our proof.
\end{proof}
Next, we want to show that $\Phi_2$ need not be one to one. For this let $A=M=k[X],$ $f(X)\in k[X]$, $B=k[f]$, and let $\varepsilon:B\to A$, $\varepsilon(f)=f(X)$.
For $q(X)\in k[X]$ we consider $\sigma_{q(X)}:A\otimes A\otimes B\to A$ defined by $$\sigma_{q(X)}(P(X)\otimes Q(X)\otimes \alpha(f(X)))=q(X)P(X)Q(X)\alpha'(f(X)).$$
One can see that $\delta^{\varepsilon}_2(\sigma_{q(X)})=0.$ Since $H^2(A, A)=0$ we have that $H^2((A, B, \varepsilon); M)=ker(\Phi_2)$, so every $\hat{\sigma}\in H^2((A, B, \varepsilon); M)$ is of the form $\hat{\sigma}(a\otimes b\otimes\alpha)=av(\alpha)b$, for $v\in Der_k(B, M)$. With this remark we can prove the following result:
\begin{proposition}
Let $\widehat{\sigma} \in
H^2((A, B,\varepsilon);M)$. Then there exists $q(X)\in k[X]$ such that $\widehat{\sigma}=\widehat{\sigma_{q(X)}}$. Moreover, if $p(X)$, $q(X)\in k[X]$ then $\widehat{\sigma_{q(X)}}= \widehat{\sigma_{p(X)}}\in H^2((A, B,\varepsilon);M)$ if and only if $\widehat{p(X)}=\widehat{q(X)}\in k[X]/<f'(X)>$.
\label{prop4}
\end{proposition}
\begin{proof} On $M=k[X]$ we have the $k[f]$-bimodule structure determined by $f\cdot P(X)=f(X)P(X)$. Let $u\in Der_k(B,M)$ and take $q(X)=u(f)$. Then $u(\Lambda(f))=\Lambda'(f(X))q(X)$.
Let $t\in Der_k(A,M)$, and take $t(X)=r(X)\in k[X]$. We have that $t(P(X))=P'(X)r(X)$ and so $t(\varepsilon(\Lambda(f)))=t(\Lambda(f(X)))=\Lambda'(f(X))f'(X)r(X)$. Now the result follows directly from Proposition \ref{prop6}.
\end{proof}
\begin{remark}If $f(X)\in k[X]$ has the property that the ideal generated by $f'(X)$ is not trivial then the map $\Phi_2$ is not one to one. Take for example $n\geq 2$ and $f(X)=X^n$ such that the characteristic of $k$ does not divide $n$. Then we have that $dim_k(H^2((k[X],k[X^n],\varepsilon);k[X]))= n-1$.
\end{remark}
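To make the dimension count in the previous remark explicit (a direct combination of Propositions \ref{prop6} and \ref{prop4}), notice that
\begin{center}
$ker(\Phi_2)\cong Der_k(B,M)/\varepsilon^*(Der_k(A,M))\cong k[X]/<f'(X)>.$
\end{center}
For $f(X)=X^n$ we have $<f'(X)>=<nX^{n-1}>=<X^{n-1}>$ whenever the characteristic of $k$ does not divide $n$, and the quotient $k[X]/<X^{n-1}>$ has basis $\widehat{1}, \widehat{X},\dots, \widehat{X^{n-2}}$. Since $H^2(A,A)=0$, the kernel of $\Phi_2$ is all of $H^2((k[X],k[X^n],\varepsilon);k[X])$, which gives the dimension $n-1$ claimed above.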
\begin{remark} Using the results from \cite{sta}, one can notice that the element $\widehat{\sigma_{p(X)}}\in H^2((A,B,\varepsilon);M)$ corresponds to the $B$-algebra structure on $A[[t]]$ defined by the morphism $\varepsilon_t: k[f(X)]\to k[X][[t]]$ where $\varepsilon_t(f(X))=f(X)+tp(X)$.
\end{remark}
More generally, consider $A=M=k[X,Y]$.
Let $f(X,Y)$ and $g(X,Y)\in A=k[X,Y]$, take $B=k[f,g]$ and define $\varepsilon:k[f,g]\to k[X,Y]$ determined by $\varepsilon(f)=f(X,Y)$ and $\varepsilon(g)=g(X,Y)$. For any $a(X,Y)$ and $b(X,Y)\in k[X,Y]$ we can define $\sigma_{a,b}:A\otimes A\otimes B\to A$ by
\begin{eqnarray*}
&\sigma_{a,b}(P(X,Y)\otimes Q(X,Y)\otimes \Lambda(f,g))=&\\
&P(X,Y)Q(X,Y)(\frac{\partial \Lambda}{\partial f}(f(X,Y),g(X,Y))a(X,Y)+\frac{\partial\Lambda}{\partial g}(f(X,Y),g(X,Y))b(X,Y))&
\end{eqnarray*}
for all $P(X,Y)$, $Q(X,Y)\in k[X,Y]$ and $\Lambda(f,g)\in k[f,g]$.
\begin{proposition}
Let $\widehat{\sigma} \in ker(\Phi_2:
H^2((A, B,\varepsilon);M)\to H^2(A,M))$. Then there exist $a(X,Y)$, $b(X,Y)\in k[X,Y]$ such that $\widehat{\sigma}=\widehat{\sigma_{a,b}}$. Moreover, $\widehat{\sigma_{a,b}}=\widehat{\sigma_{c,d}}\in H^2((A,B,\varepsilon);A)$ if and only if there exist $v(X,Y)$ and $w(X,Y)\in k[X,Y]$ such that
\begin{eqnarray*}
\left(
\begin{array}{cccccccc}
a(X,Y)-c(X,Y)\\
b(X,Y)-d(X,Y)
\end{array}
\right)=
\left(
\begin{array}{cccccccc}
\frac{\partial f}{\partial X}(X,Y)& \frac{\partial f}{\partial Y}(X,Y)\\
\frac{\partial g}{\partial X}(X,Y)&\frac{\partial g}{\partial Y}(X,Y)
\end{array}
\right)
\left(
\begin{array}{cccccccc}
v(X,Y)\\
w(X,Y)
\end{array}
\right)
\end{eqnarray*}
\label{prop5}
\end{proposition}
\begin{proof} The proof is similar to that of Proposition \ref{prop4}. On $M=k[X,Y]$ we have the $k[f,g]$-bimodule structure determined by $f\cdot P(X,Y)=f(X,Y)P(X,Y)$ and $g\cdot P(X,Y)=g(X,Y)P(X,Y)$. Let $u\in Der_k(B,M)$ and take $a(X,Y)=u(f)$ and $b(X,Y)=u(g)$, then
\begin{eqnarray*}
&u(\Lambda(f,g))=\frac{\partial \Lambda}{\partial f}(f(X,Y),g(X,Y))a(X,Y)+\frac{\partial\Lambda}{\partial g}(f(X,Y),g(X,Y))b(X,Y).&
\end{eqnarray*}
Let $t\in Der_k(A,M)$, and take $t(X)=v(X,Y)$ and $t(Y)=w(X,Y)\in k[X,Y]$. We have that $t(P(X,Y))= \frac{\partial P}{\partial X}(X,Y)v(X,Y)+\frac{\partial P}{\partial Y}(X,Y)w(X,Y)$ and so
\begin{eqnarray*}
&t(\varepsilon(f))=t(f(X,Y))= \frac{\partial f}{\partial X}(X,Y)v(X,Y)+\frac{\partial f}{\partial Y}(X,Y)w(X,Y),&\\
&t(\varepsilon(g))=t(g(X,Y))= \frac{\partial g}{\partial X}(X,Y)v(X,Y)+\frac{\partial g}{\partial Y}(X,Y)w(X,Y).&
\end{eqnarray*}
Now the result follows directly from Proposition \ref{prop6}.
\end{proof}
\begin{remark}
\label{Jac} A similar statement can be proved if we take $A=k[X_1,...,X_n]$, $B=k[f_1,...,f_n]$ and $\varepsilon(f_i)=f_i(X_1,...X_n)\in k[X_1,...,X_n]$.
\end{remark}
\begin{remark} In Proposition \ref{prop5} we proved that the subspace $ker(\Phi_2)$ of $H^2((A,B,\varepsilon); A)$ is isomorphic to $(k[X,Y]\oplus k[X,Y])/Image(J(f,g))$, where $J(f,g):k[X,Y]\oplus k[X,Y]\to k[X,Y]\oplus k[X,Y]$ is determined by the Jacobian matrix associated to the pair $(f(X,Y),g(X,Y))$.
When $k$ is a field with $char(k)=p$, $f(X,Y)=X+X^p$ and $g(X,Y)=Y+Y^p$ then one can see that $Image(J(f,g))=k[X,Y]\oplus k[X,Y]$ and $\varepsilon$ is not onto. Thus it is possible to have $ker(\Phi_2)=0$ without the map $\varepsilon$ being surjective. However, when $char(k)=0$ we can give the following reformulation, for polynomials in two variables, of the Jacobian problem stated in \cite{W} ($n$ variables if we consider Remark \ref{Jac}).
\end{remark}
\begin{conjecture} Let $k$ be a field, $char(k)=0$. Take $A=k[X,Y]$, $B=k[f,g]$, $\varepsilon(f)=f(X,Y)$ and $\varepsilon(g)=g(X,Y)$. If $\Phi_2 :H^2((A,B,\varepsilon);A)\to H^2(A,A)$ is one to one, then $\varepsilon$ is surjective.
\end{conjecture}
\begin{remark} Notice that from Proposition \ref{prop6} we have an exact sequence:
$$H^1(A,M)\stackrel{\varepsilon^*}{\rightarrow}H^1(B,M)\stackrel{\chi}{\rightarrow} H^2((A,B,\varepsilon);M)\stackrel{\Phi_2}{\rightarrow}H^2(A,M).$$
It is reasonable to believe that this can be extended to a long exact sequence. Also, one can ask if the secondary cohomology can be seen as a derived functor (Ext functor) in an appropriate category. We are planning to investigate these problems in a follow-up paper.
\end{remark}
\bibliographystyle{amsalpha}
\section{Introduction}
Modern recommender systems (RSs) are a core component of many online services. An RS analyzes users' behavior and provides them with personalized recommendations for products or services that meet their needs. For example, Amazon recommends products to users based on their shopping histories; an online newspaper recommends articles to users based on what they have read.
Generally, an RS can be classified into two categories: Content-based approach and collaborative filtering-based (CF-based) approach. The content-based approach creates a description for each item and builds a profile for each user's preferences. In other words, the content-based approach recommends items that are similar to items for which the user has expressed interest in the past. In contrast, the CF-based approach relies on the past behavior of each user, without requiring any information about the items that the users have consumed. An advantage of the CF-based approach is that it does not require collection of item contents or analysis. In this work, we focus on the CF-based approach.
Input data for CF-based methods are the user-item interaction matrix, in which each entry is the feedback of a user to an item. The feedback can be explicit (e.g., rating scores/stars, like/dislike) or implicit (e.g., click, view, purchase). Early work mainly focused on explicit feedback data such as SVD++ \cite{koren2008factorization}, timeSVD \cite{journals/cacm/Koren10}, or probabilistic matrix factorization \cite{salakhutdinov2008a}. One advantage of explicit feedback is that it is easy to interpret because it directly expresses the preferences of users for items. However, explicit feedback is not always available and is extremely scarce, as few users provide explicit feedback.
Implicit feedback, in contrast, is generated in abundance while users interact with the system. However, interpreting the implicit feedback is difficult, because it does not directly express users' opinions about items. For example, a user's click on an item does not mean that he or she likes it; rather, the user may click and then find that he or she does not like the item. On the other hand, even though a user does not interact with an item, this does not imply that the user dislikes it; it may be because the user does not know that the item exists.
Hu et al. proposed the weighted matrix factorization (WMF) \cite{hu2008collaborative}, a special case of the matrix factorization technique targeted to implicit datasets. The model maps each user and item into a low-dimensional vector in a shared latent space, which encodes all the information that describes the user's preference or the item's characteristics. Locations of users and items in the space show their relationships. If two items are close together in the space, they are considered to be similar. On the other hand, if a user and an item are close in the space, that user is considered to like that item.
Detecting the relationships between items is crucial to the performance of the RS. We consider two kinds of relationships, the global relationship and a local one. The former indicates the global structure that relates simultaneously to most or all items, and is captured from the overall information encompassed in all user--item interactions. The latter, in contrast, indicates the relationships between a small set of closely related items \cite{koren2008factorization,DBLP_journals_sigkdd_BellK07}. Detecting the local relationship will benefit the RS in recommending correlated items. One example of correlated items in the movie domain is the three volumes of the film series ``Lord of the Rings.'' Usually, a user who watches one of them will watch the others. The detection of local relationships gives the system the ability to capture such correlations and recommend one of these volumes when it knows that the user has watched the others. However, while WMF as well as other MF-based algorithms are strong at capturing the global relationships, they are poor at detecting the local relationships \cite{koren2008factorization,DBLP_journals_sigkdd_BellK07}.
In this work, we propose a model that can capture both global and local relationships between items. The idea is to extract the relationships between items that frequently occur in the context of each other, and embed these relationships into the factorization model of WMF \cite{hu2008collaborative,pan:icdm08}. The ``context'' can be the items in a user's interaction list (i.e., the items that the user interacts with), or the items in a transaction. Two items are assumed to be similar if they often appear in a context with each other, and their representations should be located close to each other in the space. The proposed model identifies such relationships and reflects them into WMF. This was inspired by word-embedding techniques in natural language processing that represent words by vectors that can capture the relationships between each word and its surrounding words \cite{conf/nips/MikolovSCCD13,mikolov2013efficient,levy2014neural,le2014distributed}.
In detail, we build an item--item matrix containing the context information and embed information from this matrix into the factorization model. The embedding is performed by factorizing the user--item matrix and the item--item matrix simultaneously. In the model, the role of the item--item matrix factorization is to adjust the latent vectors of items to reflect item--item relationships.
The rest of this paper is organized as follows. In Sect. 2, we present the background knowledge related to this work. Section 3 presents the details of our idea and how we add item embedding to the original factorization model. In Sect. 4, we explain our empirical study and the experimental results. After reviewing some related work in Sect. 5, we discuss the results of this work and show some directions for future work in Sect. 6.
\section{Preliminary}
\label{preliminary}
\subsection{Weighted Matrix Factorization}
Suppose we have $N$ users and $M$ items. For each user $u$ and item $i$, we denote by $r_{ui}$ the number of times user $u$ has interacted with item $i$. We assume that user $u$ likes item $i$ if he or she has interacted with item $i$ at least once. For user $u$ and item $i$, we define a reference value $p_{ui}$ indicating whether user $u$ likes item $i$ (i.e., $p_{ui}=1$ if $r_{ui}>0$ and $p_{ui}=0$ otherwise), and a confidence level $c_{ui}$ to represent how confident we are about the value of $p_{ui}$. Following \cite{hu2008collaborative}, we define $c_{ui}$ as:
\begin{equation}
\label{eq:confidence_level}
c_{ui}=1+\alpha r_{ui},
\end{equation}
where $\alpha$ is a positive number.
Weighted matrix factorization (WMF) \cite{hu2008collaborative,pan:icdm08}, is a factorization model to learn the latent representations of all users and items in the dataset. The objective function of the model is:
\begin{equation}
\label{wmf_equation}
\mathcal{L}(X,Y)=\sum_{u,i}c_{ui}(p_{ui}-\mathbf{x}_u^\top\mathbf{y}_i)^2+\lambda\left(\sum_u||\mathbf{x}_u||^2_F+\sum_i||\mathbf{y}_i||^2_F\right),
\end{equation}
where $X\in\mathbb{R}^{d\times N}$ and $Y\in\mathbb{R}^{d\times M}$ are matrices with columns $\mathbf{x}_u$ and $\mathbf{y}_i$ that are the latent vectors of users and items, respectively; $||.||_F$ is the Frobenius norm of a vector. This optimization problem can be efficiently solved using the Alternating Least Square (ALS) method as described in \cite{hu2008collaborative}.
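As a concrete illustration of one ALS sweep for the objective in Eq. (\ref{wmf_equation}), consider the following sketch (our own illustrative code, not the implementation of \cite{hu2008collaborative}; all names are ours, the matrices are kept dense for readability, and practical implementations exploit the sparsity of the data):
\begin{verbatim}
import numpy as np

def wmf_als_sweep(R, X, Y, alpha=40.0, lam=0.01):
    # R: (N x M) counts r_ui; X: (d x N), Y: (d x M) latent vectors.
    d = X.shape[0]
    P = (R > 0).astype(float)    # preferences p_ui
    C = 1.0 + alpha * R          # confidence levels c_ui
    reg = lam * np.eye(d)
    # x_u = (sum_i c_ui y_i y_i^T + lam I)^{-1} sum_i c_ui p_ui y_i
    for u in range(X.shape[1]):
        cu = C[u]
        X[:, u] = np.linalg.solve((Y * cu) @ Y.T + reg, Y @ (cu * P[u]))
    # and symmetrically for the item vectors y_i
    for i in range(Y.shape[1]):
        ci = C[:, i]
        Y[:, i] = np.linalg.solve((X * ci) @ X.T + reg, X @ (ci * P[:, i]))
    return X, Y
\end{verbatim}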
\subsection{Word Embedding}
\label{sub:word_embedding}
Word embedding models \cite{bengio2003neural,conf/nips/MikolovSCCD13,mikolov2013efficient,le2014distributed} have gained success in many natural language processing tasks. Their goal is to find vector representations of words that can capture their relationship with their context words (i.e., the surrounding words in a sentence or paragraph).
Given a corpus and a word $w$, a context word $c$ of $w$ is a word that occurs within a specific-size window around $w$ (context window) in the corpus. Let $\mathcal{D}$ denote the set of all word--context pairs, i.e., $\mathcal{D} = \{(w,c)|w\in V_W,c\in V_C\}$, where $V_W$ and $V_C$ are the set of words and set of context words, respectively. Word embedding models represent a word $w\in V_W$ and a context word $c\in V_C$ by vectors $\mathbf{w}\in\mathbb{R}^d$ and $\mathbf{c}\in \mathbb{R}^d$, respectively, where $d$ is the embedding's dimensionality.
Mikolov et al. proposed an efficient model for learning word vectors \cite{conf/nips/MikolovSCCD13}, which is performed by maximizing the log-likelihood function for every word-context pair $(w, c)\in\mathcal{D}$:
\begin{equation}
\label{eq:log_sgns_equation}
\log \sigma(\mathbf{w}^\top\mathbf{c})+ k\,\mathbb{E}_{c_N \sim P_D}\left[\log\sigma(-\mathbf{w}^\top\mathbf{c}_N)\right],
\end{equation}
where $\sigma(.)$ is the sigmoid function: $\sigma(x)=1/(1+\exp(-x))$, $P_D$ is a distribution for sampling false context words (hence, negative sampling) and $k$ is a hyper-parameter specifying the number of negative samples. This model is called Skip-gram negative sampling (SGNS) \cite{conf/nips/MikolovSCCD13}. Based on this model, Mikolov et al. released a well-known open source package named word2vec\footnote{https://code.google.com/archive/p/word2vec/}.
Levy et al. \cite{levy2014neural} showed that the optimal solutions $\mathbf{w}^*, \mathbf{c}^*$ of Eq. (\ref{eq:log_sgns_equation}) satisfy:
\begin{equation}
\mathbf{w}^{*\top}\mathbf{c}^*=\text{PMI}(w,c)-\log k,
\end{equation}
where $\text{PMI}(w, c)$ is the \textit{pointwise mutual information} between word $w$ and context word $c$. The symbol $k$, again, is the number of negative samples.
The PMI \cite{church90} of a word-context pair $(w, c)$ is a measure that quantifies the association between a word $w$ and a context word $c$. It is defined as:
\begin{equation}
\text{PMI}(w, c)=\log\frac{P(w,c)}{P(w)P(c)},
\end{equation}
where $P(w, c)$ is the probability that $c$ appears in the context of $w$; $P(w)$ and $P(c)$ are the probabilities that word $w$ and context word $c$ appear in the corpus, respectively. Empirically, PMI can be estimated using the actual number of observations in a corpus:
\begin{equation}
\text{PMI}(w, c)=\log\left(\frac{\#(w, c)|\mathcal{D}|}{\#(w)\#(c)}\right),
\end{equation}
where $\mathcal{|D|}$ is the size of $\mathcal{D}$; $\#(w,c)$ is the number of times the pair $(w,c)$ appears in $\mathcal{D}$; and $\#(w)=\sum_c\#(w,c)$ and $\#(c)=\sum_w\#(w,c)$ are the numbers of times $w$ and $c$ appear in $\mathcal{D}$, respectively.
Levy et al. \cite{levy2014neural} then proposed a word embedding model by factorizing the matrix $S$, which has elements $S_{wc}$ that are defined in Eq. (\ref{eq:sppmimatrix}). This matrix is called the shifted positive pointwise mutual information matrix (SPPMI matrix).
\begin{equation}
\label{eq:sppmimatrix}
S_{wc} = \max\{\text{PMI}(w,c)-\log k, 0\}.
\end{equation}
In other words, the SPPMI matrix $S$ is obtained by shifting the PMI matrix by $\log k$ and then replacing all negative values with zeroes (hence, shifted positive pointwise mutual information).
\section{Co-occurrence-based Item Embedding for Collaborative Filtering}
\label{methodology}
\subsection{Co-occurrence-based Item Embedding}
By considering each item as a word, we aim to extract the relationships between items in the same way as word embedding techniques do. Our motivation is that the representation of an item is governed not only by the users who interact with it but also by the other items that appear in its context. In this work, we define ``context'' as the items occurring in the interaction list of a user (i.e., the items that the user interacts with). However, other definitions of context can also be used without any problems. We argue that if items co-occur frequently in the interaction lists of some users, they are similar, and their latent vectors should be close in the latent space.
Inspired by the work of Levy et al. \cite{levy2014neural}, which we present in Sect. \ref{sub:word_embedding}, we construct an SPPMI matrix of items based on co-occurrences and embed it into the factorization model.
\subsubsection{Constructing the SPPMI matrix for items.} We now show how to construct the SPPMI matrix for items according to their co-occurrences.
Let $\mathcal{D}=\{(i, j)|i, j\in I_u, i\neq j, u\in U\}$, where $I_u$ is the set of items with which user $u$ has interacted. We use $\#(i, j)$ to denote the number of times the item pair $(i, j)$ appears in $\mathcal{D}$ and $\#(i)=\sum_j\#(i,j)$ to denote the number of times item $i$ appears in $\mathcal{D}$.
For example, if we have three users $u_1, u_2$, and $u_3$ whose interaction lists are $I_1=\{1, 2, 4\}$, $I_2=\{2, 3\}$, and $I_3=\{1, 2, 3\}$, respectively, we will have:
\begin{itemize}
\item $\mathcal{D}=\{(1,2), (1,4), (2,4), (2,3), (1,2), (1,3), (2,3)\}$
\item $\#(1, 2)=2, \#(1,3)=1, \#(1,4)=1, \#(2,3)=2, \#(2,4)=1$
\item $\#(1)=4, \#(2)=5, \#(3)=3, \#(4)=2$.
\end{itemize}
The item--item matrix $S$ has elements:
\begin{equation}
\label{eq:sij}
s_{ij} = \log\left(\frac{\#(i,j)|\mathcal{D}|}{\#(i)\#(j)}\right)-\log k,
\end{equation}
where $\log\left(\frac{\#(i,j)|\mathcal{D}|}{\#(i)\#(j)}\right)$ is the pointwise mutual information of pair $(i, j)$, as mentioned above, and $k$ is a positive integer corresponding to the number of negative samples in the SGNS model \cite{conf/nips/MikolovSCCD13}. In our experiments, we set $k=1$.
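The construction of $S$ is easy to implement; the following sketch (our own code, with our naming) reproduces the counts of the worked example above, evaluates the PMI of Eq. (\ref{eq:sij}) for every co-occurring pair, and applies the truncation at zero from Eq. (\ref{eq:sppmimatrix}):
\begin{verbatim}
import numpy as np
from collections import Counter
from itertools import combinations

def item_sppmi(interaction_lists, n_items, k=1):
    # Multiset D of unordered item pairs co-occurring in a user's list.
    pair_counts = Counter()
    for items in interaction_lists:
        pair_counts.update(combinations(sorted(items), 2))
    size_D = sum(pair_counts.values())      # |D|
    item_counts = Counter()                 # #(i) = sum_j #(i, j)
    for (i, j), c in pair_counts.items():
        item_counts[i] += c
        item_counts[j] += c
    S = np.zeros((n_items, n_items))
    for (i, j), c in pair_counts.items():
        pmi = np.log(c * size_D / (item_counts[i] * item_counts[j]))
        S[i, j] = S[j, i] = max(pmi - np.log(k), 0.0)
    return S

# The three users from the example, with items 1..4 re-indexed to 0..3:
S = item_sppmi([[0, 1, 3], [1, 2], [0, 1, 2]], n_items=4)
\end{verbatim}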
Because $S$ defined above is symmetric, instead of factorizing $S$ into two different matrices as in \cite{levy2014neural}, we factorize it into the product of a matrix and its transpose. In more detail, we factorize $S$ directly in terms of the latent vectors of items:
\begin{equation}
\label{eq:sppmi_item_factorization}
S=Y^\top Y
\end{equation}
In this way, $S$ can also be viewed as a similarity matrix between items, where element $s_{ij}$ indicates the similarity between item $i$ and item $j$.
\subsection{Co-occurrence-based Item Embedded Matrix Factorization (CEMF)}
We can now show how to incorporate the co-occurrence information of items into the factorization model. The SPPMI matrix will be factorized to obtain the latent vectors of items. The learned latent factor vectors of items should minimize the objective function:
\begin{equation}
\label{eq:sppmi_factorization}
\sum_{i,j:s_{ij}>0}\left(s_{ij}-\mathbf{y}_i^\top\mathbf{y}_j\right)^2.
\end{equation}
Combining with the original objective function in Eq. (\ref{wmf_equation}), we obtain the overall objective function:
\begin{multline}
\label{eq:overal_loss}
\mathcal{L}(X, Y)=\sum_{u,i}c_{ui}\left(p_{ui}-\mathbf{x}_u^\top\mathbf{y}_i\right)^2+\sum_{\substack{i\\j>i\\ s_{i,j}>0}}\left(s_{ij}-\mathbf{y}_i^\top\mathbf{y}_j\right)^2\\
+\lambda\left(\sum_u||\mathbf{x}_u||^2_F+\sum_i||\mathbf{y}_i||^2_F\right).
\end{multline}
\subsubsection{Learning method.} This function is not jointly convex in $\mathbf{x}_u$ and $\mathbf{y}_i$, but it is convex in each of them when the other is kept fixed. Therefore, it can be solved using the Alternating Least Square method, similar to the method described in \cite{hu2008collaborative}.
For each user $u$, at each iteration, we calculate the partial derivative of $\mathcal{L}$ with respect to $\mathbf{x}_u$ while fixing other entries. By setting this derivative to zero, $\frac{\partial \mathcal{L}}{\partial \mathbf{x}_u}=0$, we obtain the update rule for $\mathbf{x}_u$:
\begin{align}
\label{eq:xu}
\mathbf{x}_u={}&\left(\sum_i c_{ui}\mathbf{y}_i\mathbf{y}^\top_i+\lambda\mathbf{I}_d\right)^{-1}\left(\sum_i c_{ui}\mathbf{y}_ip_{ui}\right).
\end{align}
Similarly, for each item $i$, we calculate the partial derivative of $\mathcal{L}$ with respect to $\mathbf{y}_i$ while fixing other entries, and set the derivative to zero. We obtain the update rule for $\mathbf{y}_i$:
\begin{equation}
\label{eq:yi}
\begin{aligned}
\mathbf{y}_i={}&\left(\sum_u c_{ui}\mathbf{x}_u\mathbf{x}^\top_u+\sum_{j:s_{i,j}>0} \mathbf{y}_j\mathbf{y}^\top_j + \lambda\mathbf{I}_d\right)^{-1}\\
&\left(\sum_u c_{ui}p_{ui}\mathbf{x}_u+\sum_{j:s_{ij}>0}s_{ij}\mathbf{y}_j\right),
\end{aligned}
\end{equation}
where $\mathbf{I}_d\in\mathbb{R}^{d\times d}$ is the identity matrix (i.e., the matrix with ones on the main diagonal and zeros elsewhere).
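For concreteness, the item update of Eq. (\ref{eq:yi}) can be sketched as follows (again our own illustrative code; the user update of Eq. (\ref{eq:xu}) is the same as in WMF, and a production implementation would exploit the sparsity of $P$, $C$, and $S$):
\begin{verbatim}
import numpy as np

def cemf_item_update(P, C, S, X, Y, lam=0.01):
    # P, C: (N x M) preference/confidence matrices; S: (M x M) SPPMI.
    # X: (d x N) user vectors; Y: (d x M) item vectors, updated in place.
    d = X.shape[0]
    reg = lam * np.eye(d)
    for i in range(Y.shape[1]):
        ci = C[:, i]
        mask = S[i] > 0                  # items j with s_ij > 0
        Yj = Y[:, mask]
        A = (X * ci) @ X.T + Yj @ Yj.T + reg
        b = X @ (ci * P[:, i]) + Yj @ S[i, mask]
        Y[:, i] = np.linalg.solve(A, b)
    return Y
\end{verbatim}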
\subsubsection{Computational complexity.} For user vectors, as analyzed in \cite{hu2008collaborative}, the complexity for updating $N$ users in an iteration is $\mathcal{O}(d^2|\mathcal{R}|+d^3N)$, where $|\mathcal{R}|$ is the number of nonzero entries of the preference matrix $P$. Since $|\mathcal{R}|\gg N$, if $d$ is small, this complexity is linear in the size of the input matrix. For item vector updating, we can easily show that the running time for updating $M$ items in an iteration is $\mathcal{O}(d^2(|\mathcal{R}|+M|\mathcal{S}|)+d^3M)$, where $|\mathcal{S}|$ is the number of nonzero entries of matrix $S$. For systems in which the number of items is not very large, this complexity is not a big problem. However, the computations become significantly more expensive for systems with very large numbers of items. Improving the computational complexity of updating item vectors will be part of our future work.
\section{Empirical Study}
\label{experiment}
In this section, we study the performance of CEMF. We compare CEMF with two competing methods for implicit feedback data: WMF \cite{hu2008collaborative,pan:icdm08} and CoFactor \cite{confrecsysLiangACB16}. Across three real-world datasets, CEMF outperformed these competing methods for almost all metrics.
\subsection{Datasets, Metrics, Competing Methods, and Parameter Setting}
\subsubsection{Datasets.} We studied datasets from different domains: movies, music, and location, with varying sizes from small to large. The datasets are:
\begin{itemize}
\item \textit{MovieLens-20M (ML-20M)} \cite{journals/tiis/HarperK16}: a dataset of users' movie ratings collected from MovieLens, an online film service. It contains 20 million ratings in the range 1--5 of 27,000 movies by 138,000 users. We binarized the ratings by thresholding at 4 or above. The dataset is available at GroupLens\footnote{https://grouplens.org/datasets/movielens/20m/}.
\item \textit{TasteProfile}: a dataset of counts of song plays by users collected by Echo Nest\footnote{http://the.echonest.com/}. After removing songs that were listened to by fewer than 50 users, and users who listened to fewer than 20 songs, we binarized play counts and used them as implicit feedback data.
\item \textit{Online Retail Dataset (OnlineRetail)} \cite{Chen2012}: a dataset of online retail transactions provided at the UCI Machine Learning Repository\footnote{https://archive.ics.uci.edu/ml/datasets/Online+Retail}. It contains all the transactions from December 1, 2010 to December 9, 2011 for a UK-based online retailer.
\end{itemize}
For each user, we selected 20\% of interactions as ground truth for testing. The remaining portions from each user were divided into two parts: 90\% for a training set and 10\% for validation. The statistical information of the training set of each dataset is summarized in Table \ref{training_stats}.
\begin{table}[ht]
\begin{center}
\setlength\tabcolsep{6pt}
\renewcommand{\arraystretch}{1.0}
\caption{Statistical information of the datasets after preprocessing}
\label{training_stats}
\begin{tabular}{cccc}
\hline\noalign{\smallskip}
& ML-20M & TasteProfile & OnlineRetail\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\# of users & 138,493 & 629,113 & 3,704 \\
\# of items & 26,308 & 98,486 & 3,643\\
\# of interactions & 18M & 35.5M & 235K \\
Sparsity (\%) & 99.5 & 99.94 & 98.25 \\
Sparsity of SPPMI matrix (\%)& 75.42 & 76.34 & 66.24\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Evaluation metrics.}
The performance of the learned model was assessed by comparing the recommendation list with the ground-truth items of each user. We used Recall@$n$ and Precision@$n$ as the measures for evaluating the performance.
Recall@$n$ and Precision@$n$ are usually used as metrics in information retrieval. The former metric indicates the percentage of relevant items that are recommended to the users, while the latter indicates the percentage of relevant items in the recommendation lists. They are formulated as:
\begin{equation}
\label{eq:metric}
\begin{aligned}
\text{Recall}@n &= \frac{1}{N}\sum_{u=1}^N\frac{|S_u(n) \cap V_u|}{|V_u|}\\
\text{Precision}@n &= \frac{1}{N}\sum_{u=1}^N\frac{|S_u(n) \cap V_u|}{n}\\
\end{aligned}
\end{equation}
where $S_u(n)$ is the list of top-$n$ items recommended to user $u$ by the system and $V_u$ is the list of ground-truth items of user $u$.
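Both metrics are straightforward to compute from the recommendation lists; a minimal sketch of Eq. (\ref{eq:metric}) in code (our own, with hypothetical variable names) is:
\begin{verbatim}
import numpy as np

def recall_precision_at_n(ranked, ground_truth, n):
    # ranked[u]: items sorted by predicted score for user u (S_u);
    # ground_truth[u]: held-out items of user u (V_u).
    recalls, precisions = [], []
    for u, truth in ground_truth.items():
        hits = len(set(ranked[u][:n]) & set(truth))
        recalls.append(hits / len(truth))
        precisions.append(hits / n)
    return np.mean(recalls), np.mean(precisions)
\end{verbatim}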
\subsubsection{Competing methods.}
We compared CEMF with the following competing methods.
\begin{itemize}
\item \textit{CoFactor} \cite{confrecsysLiangACB16}: factorizes user--item and item--item matrices simultaneously as we do, where the item--item co-occurrence matrix is factorized into two matrices.
\item \textit{WMF} \cite{hu2008collaborative}: a weighted matrix factorization model for implicit feedback datasets.
\end{itemize}
\subsubsection{Parameters.}
\begin{itemize}
\item \textit{Number of factors} $d$: we learn the model with the number of factors running from small to large values: $d=$ \{10, 20, 30, 40, 50, 60, 70, 80, 90, 100\}.
\item \textit{Regularization term}: we set the regularization parameter for the Frobenius norm of user and item vectors as $\lambda=0.01$.
\item \textit{Confidence matrix}: we set $c_{ui}=1+\alpha r_{ui}$ $(\alpha >0)$. We changed the value of $\alpha$ and chose the one that gave the best performance.
\end{itemize}
\subsection{Results}
We evaluated CEMF by considering its overall performance and its performance for different groups of users. Results for Precision@$n$ and Recall@$n$ show that our method outperformed the competing methods.
\subsubsection{Overall performance.} Overall prediction performance with respect to Precision and Recall is shown in Table \ref{precision} and Table \ref{recall}, respectively. These are the results for $d=30$; larger values of $d$ produce higher accuracy, but the differences in performance between the methods do not change much. The results show that CEMF improves the performance for the three datasets over almost all metrics, except for some metrics with $n>20$ for \textit{TasteProfile}. If we use only small values of $n$, say $n=5$ or $n=10$, CEMF outperforms all competing methods over the three datasets.
\begin{table}[]
\centering
\setlength\tabcolsep{6pt}
\renewcommand{\arraystretch}{1.1}
\caption{Precision@$n$ of WMF, CoFactor, and CEMF over three datasets}
\label{precision}
\begin{tabular}{|l|lccccc|}
\hline
Dataset & Model & Pre@5 & Pre@10 & Pre@20 & Pre@50 & Pre@100 \\
\hline
\multirow{3}{5em}{ML-20M} & WMF & 0.2176 & 0.1818 & 0.1443 & 0.0974 & 0.0677 \\
& CoFactor & 0.2249 & 0.1835 & 0.1416 & 0.0926 & 0.0635 \\
& CEMF & \textbf{0.2369} & \textbf{0.1952} & \textbf{0.1523} & \textbf{0.1007} & \textbf{0.0690} \\
\hline\hline
\multirow{3}{5em}{TasteProfile} & WMF & 0.1152 & 0.0950 & 0.0755 & \textbf{0.0525} & \textbf{0.0378}\\
& CoFactor & 0.1076 & 0.0886 & 0.0701 & 0.0487 & 0.0353\\
& CEMF & \textbf{0.1181} & \textbf{0.0966} & \textbf{0.0760} & 0.0523 & 0.0373\\
\hline\hline
\multirow{3}{5em}{OnlineRetail} & WMF & 0.0870 & 0.0713 & 0.0582 & 0.0406 & 0.0294\\
& CoFactor & 0.0927 & 0.0728 & 0.0552 & 0.0381 & 0.0273\\
& CEMF & \textbf{0.0959} & \textbf{0.0779} & \textbf{0.0619} & \textbf{0.0425} & \textbf{0.0302}\\
\hline
\end{tabular}
\end{table}
\begin{table}[ht]
\centering
\setlength\tabcolsep{3pt}
\renewcommand{\arraystretch}{1.1}
\caption{Recall@$n$ of WMF, CoFactor, and CEMF over three datasets}
\label{recall}
\begin{tabular}{|l|lccccc|}
\hline
Dataset & Model & Recall@5 & Recall@10 & Recall@20 & Recall@50 & Recall@100 \\
\hline
\multirow{3}{6em}{ML-20M} & WMF & 0.2366 & 0.2601 & 0.3233 & 0.4553 & 0.5788 \\
& CoFactor & 0.2420 & 0.2550 & 0.3022 & 0.4101 & 0.5194 \\
& CEMF & \textbf{0.2563} & \textbf{0.2750} & \textbf{0.3331} & \textbf{0.4605} & \textbf{0.5806} \\
\hline\hline
\multirow{3}{6em}{TasteProfile} & WMF & 0.1187 & 0.1148 & \textbf{0.1377} & \textbf{0.2129} & \textbf{0.2960}\\
& CoFactor & 0.1106 & 0.1060 & 0.1256 & 0.1947 & 0.2741\\
& CEMF & \textbf{0.1215} & \textbf{0.1159} & 0.1369 & 0.2092 & 0.2891\\
\hline\hline
\multirow{3}{6em}{OnlineRetail} & WMF & 0.1142 & 0.1463 & 0.2136 & 0.3428 & 0.4638 \\
& CoFactor & 0.1160 & 0.1384 & 0.1891 & 0.3020 & 0.4159 \\
& CEMF & \textbf{0.1232} & \textbf{0.1550} & \textbf{0.2191} & \textbf{0.3466} & \textbf{0.4676}\\
\hline
\end{tabular}
\end{table}
\subsubsection{Performance for different groups of users.} We divided the users into groups based on the number of items they had interacted with so far, and evaluated the performance for each group. There were three groups in our experiments:
\begin{itemize}
\item \textit{low}: users who had interacted with fewer than 20 items
\item \textit{medium}: users who had interacted with $20\sim 100$ items
\item \textit{high}: users who had interacted with more than 100 items.
\end{itemize}
The Precision@$n$ and Recall@$n$ for these groups are presented in Fig. \ref{fig:group_based}. The results show that CEMF outperforms the competing methods for almost all groups of users. For users with small numbers of interactions, CEMF is slightly better than WMF and much better than CoFactor. For users with many items in their interaction lists, CEMF shows much better performance than WMF and better than CoFactor.
In a system, we usually have both users with few interactions and users with many interactions; therefore, CEMF is a more effective choice than either WMF or CoFactor.
\begin{figure}[htp]
\centering
\includegraphics[width=.46\textwidth]{ml20_pre10_by_group}\hfill
\includegraphics[width=.46\textwidth]{ml20_recall10_by_group}\hfill
\caption{Precision@10 and Recall@10 for different groups of users with the ML-20M dataset}
\label{fig:group_based}
\end{figure}
\section{Related Work}
\label{related_work}
Standard techniques for implicit feedback data include weighted matrix factorization \cite{hu2008collaborative,pan:icdm08}, a special case of matrix factorization in which the weights are defined from the interaction counts, reflecting how confident we are about the preference of a user for an item. Gopalan et al. \cite{journals/corr/GopalanHB13} introduced a Poisson distribution-based factorization model that factorizes the user--item matrix.
The common point of these methods for matrix factorization is that they assume that the user--item interactions are independent; thus, they cannot capture the relationships between strongly related items in the latent representations.
Collective matrix factorization (CMF) \cite{SinghG_kdd08} proposes a framework for factorizing multiple related matrices simultaneously, to exploit information from multiple sources. This approach can incorporate the side information (e.g., genre information of items) into the latent factor model.
In \cite{conf/kdd/ZhengDMZ13}, the authors present a factorization-based method that uses item--item similarity to predict drug--target interactions. While this model uses the item--item similarity from additional sources as side information, we do not require side information in this work. Instead, we exploit the co-occurrence information that is drawn from the interaction matrix.
The CoFactor \cite{confrecsysLiangACB16} model is based on CMF \cite{SinghG_kdd08}. It factorizes the user--item and item--item matrices at the same time in a shared latent space. The main difference between our method and CoFactor is how we factorize the item--item co-occurrence matrix. Instead of representing each item by two latent vectors as in \cite{confrecsysLiangACB16}, where it is difficult to interpret the second one, we represent each item by a single latent vector.
\section{Discussion and Future Work}
\label{conclusion}
We have examined the effect of co-occurrence information on the performance of recommendation systems. We proposed a method that combines the strengths of two approaches: collaborative filtering by MF, and item embedding based on the context of items in the interaction lists of users. Our goal is a latent factor model that reflects the strong associations of closely related items in their latent vectors. Our proposed method improved the recommendation performance on top-$n$ recommendation for three real-world datasets.
We plan to explore several ways of extending or improving this work. The first direction is to consider different definitions of ``context items''. One approach is to define context items as items that co-occur in the same transactions as the given items. In this way, we can extract relationships between items that frequently appear together in transactions and can recommend the next item given the current one, or recommend a set of items.
The second direction we are planning to pursue is to reduce the computational complexity of the current algorithm. As we mentioned in Sect. \ref{methodology}, the computational complexity for updating item vectors is $\mathcal{O}(d^2(|\mathcal{R}|+M|\mathcal{S}|)+d^3M)$, which becomes significantly more expensive for systems with large numbers of items. We hope to develop a new algorithm that can improve this complexity. An online learning algorithm, which updates user and item vectors when new data are collected without retraining the model from the beginning, is also in our plans for improving this work.
\subsubsection*{Acknowledgments.} This work was supported by a JSPS Grant-in-Aid for Scientific Research (B) (15H02789).
\bibliographystyle{unsrt}
\section{Introduction \label{introduction}}
There has been important progress in the mathematical study of mean field spin
glasses over the last $10$ years. By results of Guerra \cite{guerra} and
Talagrand \cite{TalagrandParisi}, the free energy of the
Sherrington-Kirkpatrick model is known to be given by the formula predicted by
Parisi \cite{parisi}. Furthermore, the description of the \textit{high}
temperature is remarkably accurate, see \cite{talagrand} and references
therein. On the other hand, results for the Gibbs measure at \textit{low}
temperature are more scarce and are restricted to models with a simpler
structure, like Derrida's generalized random energy model, the GREM,
\cite{BovierKurkova} and \cite{DerridaGREM}, the nonhierarchical GREMs
\cite{bokis_two} and the $p$-spin model with large $p$ \cite{talagrand}. To
put on rigorous ground the full Parisi picture remains a major challenge, and
even more so in view of its alleged universality, at least for mean-field models.
We introduce here a model which hopefully sheds some new light on the issue.
In this paper we derive the free energy, which can be analyzed by large
deviation techniques. The limiting free energy turns out to be given by a
Gibbs variational formula which can be linked to a Parisi-type formula
by a duality principle, so that it becomes evident why an infimum appears in
the latter. This duality also gives an interesting interpretation of the
Parisi order parameter in terms of the sequence of inverse of temperatures
associated to the extremal measures from the Gibbs variational principle.
In a forthcoming paper, we will give a full description of the Gibbs measure
in the thermodynamic limit in terms of the Ruelle cascades.
\section{A Perceptron version of the GREM\label{Sect_perceptron}}
Let $\left\{ X_{\alpha,i}\right\} _{\alpha\in\Sigma_{N},1\leq i\leq N}$ be
random variables which take values in a Polish space $S$ equipped with the
Borel $\sigma$-field $\mathcal{S},$ and defined on a probability space
$\left( \Omega,\mathcal{F},\mathbb{P}\right) .$ We write $\mathcal{M}%
_{1}^{+}\left( S\right) $ for the set of probability measures on $\left(
S,\mathcal{S}\right) ,$ which itself is a Polish space. $\Sigma_{N}$ is
exponential in size, typically $\left\vert \Sigma_{N}\right\vert =2^{N}.$ It
is assumed that all $X_{\alpha,i}$ have the same distribution $\mu$, and that
for any fixed $\alpha\in\Sigma_{N},$ the collection $\left\{ X_{\alpha
,i}\right\} _{1\leq i\leq N}$ is independent. It is however not assumed that
they are independent for different $\alpha.$ The perceptron Hamiltonian is
defined by%
\begin{equation}
-H_{N,\omega}\left( \alpha\right) \overset{\mathrm{def}}{=}\sum_{i=1}%
^{N}\phi\left( X_{\alpha,i}\left( \omega\right) \right)
,\label{general_perceptron}%
\end{equation}
where $\phi:S\rightarrow\mathbb{R}$ is a measurable function. One may allow
that the index set for $i$ is rather $\left\{ 1,\ldots,\left[ aN\right]
\right\} $ with $a$ some positive real number, but for convenience, we always
stick to $a=1$ here. The case which is best investigated (see \cite{talagrand}%
) takes for $\alpha$ spin sequences: $\alpha=\left( \sigma_{1},\ldots
,\sigma_{N}\right) \in\left\{ -1,1\right\} ^{N},$ $S=\mathbb{R},$ and the
$X_{\alpha,i}$ are centered Gaussians with%
\begin{equation}
\mathbb{E}\left( X_{\alpha,i}X_{\alpha^{\prime},i^{\prime}}\right)
=\delta_{i,i^{\prime}}\frac{1}{N}\sum_{j=1}^{N}\sigma_{j}\sigma_{j}^{\prime
}.\label{SK_perceptron}%
\end{equation}
This is closely related to the SK-model, and is actually considerably more
difficult. The model has been investigated by Talagrand \cite{talagrand}, but
a full Parisi formula for the free energy is lacking.
The Hamiltonian (\ref{general_perceptron}) can be written in terms of the
empirical measure%
\begin{equation}
L_{N,\alpha}\overset{\mathrm{def}}{=}\frac{1}{N}\sum_{i=1}^{N}\delta
_{X_{\mathbf{\sigma},i}}\label{empirical_Def}%
\end{equation}
i.e.%
\[
-H_{N,\omega}\left( \alpha\right) =N\int\phi\left( x\right) L_{N,\alpha
}\left( dx\right) .
\]
The quenched free energy is the almost sure limit of%
\[
\frac{1}{N}\log\sum_{\alpha}\exp\left[ -H_{N,\omega}\left( \alpha\right)
\right] ,
\]
and it appears natural to ask if this free energy can be obtained by a
quenched Sanov type large deviation principle for $L_{N,\alpha}$ in the
following form:
\begin{definition}
We say that $\left\{ L_{N}\right\} $ satisfies a \textbf{quenched large
deviation principle} (in short QLDP) with good rate function $J:\mathcal{M}%
_{1}^{+}\left( S\right) \rightarrow\left[ 0,\infty\right] ,$
provided the level sets of $J$ are compact, and for any weakly continuous
bounded map $\Phi:\mathcal{M}_{1}^{+}\left( S\right) \rightarrow\mathbb{R},
$ one has%
\[
\lim_{N\rightarrow\infty}\frac{1}{N}\log\sum_{\alpha\in\Sigma_{N}}\exp\left[
N\Phi\left( L_{N,\alpha}\right) \right] =\log2+\sup_{\nu\in\mathcal{M}%
_{1}^{+}\left( S\right) }\left[ \Phi\left( \nu\right) -J\left(
\nu\right) \right] ,\ \mathbb{P}\mathrm{-a.s.}%
\]
\end{definition}
The annealed version of such a QLDP is just Sanov's theorem:%
\begin{align*}
\lim_{N\rightarrow\infty}\frac{1}{N}\log\sum_{\alpha}\mathbb{E}\exp\left[
N\Phi\left( L_{N,\alpha}\right) \right] & =\log2+\lim_{N\rightarrow\infty
}\frac{1}{N}\log\mathbb{E}\exp\left[ N\Phi\left( L_{N,\alpha}\right)
\right] \\
& =\log2+\sup_{\nu}\left( \Phi\left( \nu\right) -H\left( \nu%
\vert
\mu\right) \right)
\end{align*}
where $H\left( \nu%
\vert
\mu\right) $ is the usual relative entropy of $\nu$ with respect to $\mu,$
the latter being the distribution of the $X_{\alpha,i}:$%
\[
H\left( \nu%
\vert
\mu\right) \overset{\mathrm{def}}{=}\left\{
\begin{array}
[c]{cc}%
\int\log\frac{d\nu}{d\mu}\ d\nu & \mathrm{if\ }\nu\ll\mu\\
\infty & \mathrm{otherwise}%
\end{array}
\right. .
\]
There is no reason to believe that $H\left( \nu%
\vert
\mu\right) =J\left( \nu\right) .$
\begin{conjecture}
The empirical measures $\left\{ L_{N,\alpha}\right\} $ with
(\ref{SK_perceptron}) satisfy a QLDP.
\end{conjecture}
We don't know how this conjecture could be proved, nor do we have a clear
picture of what $J$ should be in this case. The only support we have for the
conjecture is that it is true in a perceptron version of the GREM, a model we
are now going to describe.
For $n\in{\mathbb{N}}$, $\alpha=(\alpha_{1},\dots,\alpha_{n})$ with
$1\leq\alpha_{k}\leq2^{\gamma_{k}N}$, $\sum_{k}\gamma_{k}=1$, and $1\leq i\leq
N$, let
\[
X_{\alpha,i}=\left( X_{\alpha_{1},i}^{1},X_{\alpha_{1},\alpha_{2},i}%
^{2},\dots,X_{\alpha_{1},\alpha_{2},\dots,\alpha_{n},i}^{n}\right)
\]
where the $X^{j}$ are independent, taking values in some Polish Space
$(S,{\mathcal{S}})$ with distribution $\mu_{j}$. For notational convenience,
we assume that the $\gamma_{i}N$ are all integers. Put%
\[
\Gamma_{j}\overset{\mathrm{def}}{=}\sum_{k=1}^{j}\gamma_{k}.
\]
We assume that all the variables in the bracket are independent. The
$X_{\alpha,i}$ take values in $S^{n}.$ The distribution is%
\[
\mu\overset{\mathrm{def}}{=}\mu_{1}\otimes\cdots\otimes\mu_{n}%
\]
The empirical measure $L_{N,\alpha}$ is defined by (\ref{empirical_Def}) which
is a random element in ${\mathcal{M}}_{1}^{+}(S^{n})$. $n$ is fixed in all we
are doing.
Given a measure $\nu\in{\mathcal{M}}_{1}^{+}(S^{n})$, and $1\leq j\leq n,$ we
write $\nu^{(j)}$ for its marginal on the first $j$ coordinates. We define
subsets $\mathcal{R}_{j}$ of ${\mathcal{M}}_{1}^{+}(S^{n})$, $1\leq j\leq n$
by%
\[
\mathcal{R}_{j}\overset{\mathrm{def}}{=}\left\{ \nu\in{\mathcal{M}}_{1}%
^{+}(S^{n}):H\left( \nu^{(j)}\mid\mu^{(j)}\right) \leq\Gamma_{j}%
\log2\right\} .
\]
We will also consider the sets%
\[
\mathcal{R}_{j}^{=}\overset{\mathrm{def}}{=}\left\{ \nu\in{\mathcal{M}}%
_{1}^{+}(S^{n}):H\left( \nu^{(j)}\mid\mu^{(j)}\right) =\Gamma_{j}%
\log2\right\} .
\]
For $\nu\in{\mathcal{M}}_{1}^{+}(S^{n})$ let%
\[
J\left( \nu\right) =\left\{
\begin{array}
[c]{cc}%
H(\nu\mid\mu) & \mathrm{if\ }\nu\in\bigcap\nolimits_{j=1}^{n}\mathcal{R}_{j}\\
\infty & \mathrm{otherwise}%
\end{array}
\right. .
\]
It is evident that $J$ is convex and has compact level sets.
Our first main result is:
\begin{theorem}
\label{Th_GREM_perceptron} $\left\{ L_{N,\alpha}\right\} $ satisfies a QLDP
with rate function $J.$
\end{theorem}
For the rest of this section, we will focus on linear functionals, $\Phi
(\nu)=\int\phi(x)\nu({d}x)$, for a bounded continuous function $\phi
:S^{n}\rightarrow\mathbb{R}.$ For a probability measure $\nu$ on $S^{n}$, we
set%
\[
\operatorname{Gibbs}(\phi,\nu)\overset{\mathrm{def}}{=}\int\phi(x)\nu
(dx)-H(\nu\mid\mu),
\]
and define the Legendre transform of $J$ by%
\[
J^{\ast}\left( \phi\right) \overset{\mathrm{def}}{=}\sup_{\nu}\left[
\int\phi(x)\nu(dx)-J\left( \nu\right) \right] =\sup\left\{
\operatorname{Gibbs}(\phi,\nu):\nu\in\bigcap\nolimits_{j=1}^{n}\mathcal{R}%
_{j}\right\} .
\]
As a corollary of Theorem
\ref{Th_GREM_perceptron} we have
\begin{corollary}
\label{Cor_GREMperceptron_linear}Assume that $\phi:S^{n}\rightarrow{\mathbb{R}}$ is
is bounded and continuous.%
\[
\lim_{N\rightarrow\infty}\frac{1}{N}\log\sum_{\alpha}\exp\left[
\sum\nolimits_{i=1}^{N}\phi\left( X_{\alpha,i}\right) \right] =J^{\ast
}\left( \phi\right) +\log2,\ \mathrm{a.s.}%
\]
\end{corollary}
We next discuss a dual representation of $J^{\ast}\left( \phi\right) $.
Essentially, this arises from investigating which measures solve the
variational problem. Remark that without the restrictions $\nu\in
\bigcap\nolimits_{j=1}^{n}\mathcal{R}_{j},$ we would simply get%
\[
d\nu=\frac{\mathrm{e}^{\phi}d\mu}{\int\mathrm{e}^{\phi}d\mu}%
\]
as the maximizer.
Let $\Delta$ be the set of sequences $\mathbf{m}=\left( m_{1},\ldots
,m_{n}\right) $ with $0<m_{1}\leq m_{2}\leq\cdots\leq m_{n}\leq1.$ For
$\mathbf{m}\in\Delta,$ and $\phi:S^{n}\rightarrow\mathbb{R}$ bounded, we
define recursively functions $\phi_{j},~0\leq j\leq n,\ \phi_{j}%
:S^{j}\rightarrow\mathbb{R},$ by%
\begin{equation}
\phi_{n}\overset{\mathrm{def}}{=}\phi,\label{Def_phin}%
\end{equation}%
\begin{equation}
\phi_{j-1}\left( x_{1},\ldots,x_{j-1}\right) \overset{\mathrm{def}}{=}%
\frac{1}{m_{j}}\log\int\operatorname{exp}\left[ {m}_{j}\phi_{j}\left(
x_{1},\dots,x_{j-1},x_{j}\right) \right] \mu_{j}\left( dx_{j}\right)
.\label{Def_phij}%
\end{equation}
$\phi_{0}$ is just a real number, which we denote by $\phi_{0}\left(
\mathbf{m}\right) .$
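For instance, unwinding the recursion for $n=2$ gives%
\[
\phi_{0}\left( \mathbf{m}\right) =\frac{1}{m_{1}}\log\int\left(
\int\operatorname{exp}\left[ m_{2}\phi\left( x_{1},x_{2}\right) \right]
\mu_{2}\left( dx_{2}\right) \right) ^{m_{1}/m_{2}}\mu_{1}\left(
dx_{1}\right) ,
\]
so the last coordinate is integrated out at the larger parameter $m_{2}$
first, and the result is then averaged at the smaller parameter $m_{1}$.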
Remark that if some of the $m_{i}$ agree, say $m_{k}=m_{k+1}=\cdots=m_{l},$
$k<l,$ then $\phi_{k-1}$ is obtained from $\phi_{l}$ by%
\[
\phi_{k-1}\left( x_{1},\ldots,x_{k-1}\right) =\frac{1}{m_{k}}\log
\int\operatorname{exp}\left[ {m}_{k}\phi_{l}\left( x_{1},\dots,x_{k-1}%
,x_{k},\ldots,x_{l}\right) \right] \prod\limits_{j=k}^{l}\mu_{j}\left(
dx_{j}\right) .
\]
In particular, if all the $m_{i}$ are $1,$ then%
\[
\phi_{0}=\log\int\exp\left[ \phi\right] d\mu.
\]
This latter case corresponds to the \textquotedblleft replica
symmetric\textquotedblright\ situation. Put%
\begin{equation}
\operatorname{Parisi}\left( \mathbf{m},\phi\right) \overset{\mathrm{def}}%
{=}\sum\nolimits_{i=1}^{n}{\frac{\gamma_{i}\log2}{m_{i}}}+\phi_{0}\left(
\mathbf{m}\right) -\log2\label{Parisi}%
\end{equation}
\begin{theorem}
\label{Th_Parisi_formula}Assume that $\phi:S^{n}\rightarrow{\mathbb{R}}$ is
bounded and continuous. Then%
\begin{equation}
J^{\ast}\left( \phi\right) =\inf_{\mathbf{m}\in\Delta}\operatorname{Parisi}%
\left( \mathbf{m},\phi\right) .\label{Variationformula3}%
\end{equation}
\end{theorem}
The expression for $J^{\ast}\left( \phi\right) $ in this theorem is very
similar to the Parisi formula for the SK-model. Essentially the only
difference is the first summand which in the SK-case is a quadratic
expression. In our case (in contrast to the still open situation in the
SK-model), we can prove that the infimum is uniquely attained, as we will
discuss below.
The derivation of the theorem from Corollary \ref{Cor_GREMperceptron_linear}
is done by identifying first the possible maximizers in the variational
formula for $J^{\ast}\left( \phi\right) $. They belong to a family of
distributions, parametrized by $\mathbf{m}.$ The maximizer inside this family
is then obtained by minimizing $\mathbf{m}$ according to
(\ref{Variationformula3}), and one then identifies the two expressions. The
procedure is quite standard in large deviation situations.
Two conventions: $C$ stands for a generic positive constant, not necessarily
the same at different occurrences. If there are inequalities stated between
expressions containing $N,$ it is tacitly assumed that they are valid,
possibly only for large enough $N.$
\section{Proofs}
\subsection{The Gibbs variational principle: Proof of Theorem
\ref{Th_GREM_perceptron}}
If $A\in{\mathcal{S}}$, we put $H(A\mid\mu)\overset{\mathrm{def}}%
{=}\operatorname{inf}_{\nu\in A}H(\nu\mid\mu)$. If $S$ is a Polish Space, and
${\mathcal{S}}$ its Borel $\sigma$-field, then it is well known that
$\nu\rightarrow H(\nu\mid\mu)$ is lower semicontinuous in the weak topology.
This follows from the representation
\begin{equation}
H(\nu\mid\mu)=\sup_{u\in{\mathcal{U}}}\left[ \int u\,d\nu-\log\int
\mathrm{e}^{u}d\mu\right] ,\label{sup_representation}%
\end{equation}
where ${\mathcal{U}}$ is the set of bounded continuous functions
$S\rightarrow{\mathbb{R}}$.
For $(S,{\mathcal{S}}),(S^{\prime},{\mathcal{S}}^{\prime})$ two Polish Spaces,
and $\nu\in{\mathcal{M}}_{1}^{+}(S\times S^{\prime})$. If $\mu\in{\mathcal{M}%
}_{1}^{+}(S)$, $\mu^{\prime}\in{\mathcal{M}}_{1}^{+}(S^{\prime})$ we have,
\begin{equation}
H\left( \nu\mid\mu\otimes\mu^{\prime}\right) =H\left( \nu^{(1)}\mid
\mu\right) +H\left( \nu\mid\nu^{(1)}\otimes\mu^{\prime}\right)
,\label{Entropy_Add}%
\end{equation}
where $\nu^{(1)}$ is the first marginal of $\nu$ on $S$.
\begin{lemma}
\label{lower_semicontinuity} $H(\nu\mid\nu^{(1)}\otimes\mu^{\prime})$ is a
lower semicontinuous function of $\nu$ in the weak topology.
\end{lemma}
\begin{proof}
Applying (\ref{sup_representation}) with $\nu^{(1)}\otimes\mu^{\prime}$ in place of $\mu$ gives
\[
H(\nu\mid\nu^{(1)}\otimes\mu^{\prime})=\sup_{u\in{\mathcal{U}}}\left[ \int
ud\nu-\log\int\mathrm{e}^{u}d\left( \nu^{(1)}\otimes\mu^{\prime}\right)
\right] ,
\]
where ${\mathcal{U}}$ denotes the set of bounded continuous functions $S\times
S^{\prime}\rightarrow{\mathbb{R}}$. For any fixed $u\in{\mathcal{U}}$, both
functions $\nu\rightarrow\int u\,d\nu$ and $\nu\rightarrow\log\int
\mathrm{e}^{u}d\left( \nu^{(1)}\otimes\mu^{\prime}\right) $ are continuous,
and from this the desired semicontinuity property follows.
\end{proof}
We will need the following \textquotedblleft relative\textquotedblright%
\ version of Sanov's theorem. Consider three independent sequences of i.i.d.
random variables $(X_{i}),(Y_{i}),(Z_{i})$, taking values in three Polish
spaces $S,S^{\prime},S^{\prime\prime},$ and with laws $\mu,\mu^{\prime}%
,\mu^{\prime\prime}$. We consider the empirical processes
\[
L_{N}\overset{\mathrm{def}}{=}{\frac{1}{N}}\sum_{i=1}^{N}\delta_{(X_{i}%
,Y_{i})},\ R_{N}\overset{\mathrm{def}}{=}{\frac{1}{N}}\sum_{i=1}^{N}%
\delta_{\left( X_{i},Z_{i}\right) }.
\]
The pair $(L_{N},R_{N})$ takes values in ${\mathcal{M}}_{1}^{+}(S\times
S^{\prime})\times{{\mathcal{M}}}_{1}^{+}(S\times S^{\prime\prime}).$
\begin{lemma}
\label{ldp_empirical_measure_couple} The sequence $(L_{N},R_{N})$ satisfies an
LDP with rate function
\[
J(\nu,\theta)=%
\begin{cases}
H\left( \nu^{(1)}\mid\mu\right) +H\left( \nu\mid\nu^{(1)}\otimes\mu
^{\prime}\right) +H\left( \theta\mid\theta^{(1)}\otimes\mu^{\prime\prime
}\right) , & \mathrm{if}\;\nu^{(1)}=\theta^{(1)}\\
\infty & \mathrm{otherwise}.
\end{cases}
\]
\end{lemma}
\begin{proof}
We apply the Sanov theorem to the empirical measure
\[
M_{N}={\frac{1}{N}}\sum_{i=1}^{N}\delta_{(X_{i},Y_{i},Z_{i})}\in{{\mathcal{M}%
}}_{1}^{+}(S\times S^{\prime}\times S^{\prime\prime}).
\]
We use the two natural projections $p:S\times S^{\prime}\times S^{\prime
\prime}\rightarrow S\times S^{\prime}$ and $q:S\times S^{\prime}\times
S^{\prime\prime}\rightarrow S\times S^{\prime\prime}$. Then $(L_{N}%
,R_{N})=M_{N}(p,q)^{-1}$, and by continuous projection, we get that
$(L_{N},R_{N})$ satisfies a good LDP with rate function
\[
J^{\prime}(\nu,\theta)=\operatorname{inf}\left\{ H(\rho\mid\mu\otimes
\mu^{\prime}\otimes\mu^{\prime\prime}):\rho p^{-1}=\nu,\rho q^{-1}%
=\theta\right\} .
\]
It only remains to identify this rate function with the function $J$ given above.
Clearly $J^{\prime}(\nu,\theta)=\infty$ if $\nu^{(1)}\neq\theta^{(1)}$.
Therefore, assume $\nu^{(1)}=\theta^{(1)}$. If we define $\hat{\rho}\left(
\nu,\theta\right) \in\mathcal{M}_{1}^{+}\left( S\times S^{\prime}\times
S^{\prime\prime}\right) $ to have marginal $\nu^{(1)}=\theta^{(1)}$ on $S$,
and the conditional distribution on $S^{\prime}\times S^{\prime\prime}$ given
the first projection is the product of the conditional distributions of $\nu$
and $\theta$, then applying twice (\ref{Entropy_Add}), we get%
\[
H(\hat{\rho}\mid\mu\otimes\mu^{\prime}\otimes\mu^{\prime\prime})=H\left(
\nu^{(1)}\mid\mu\right) +H\left( \nu\mid\nu^{(1)}\otimes\mu^{\prime}\right)
+H\left( \theta\mid\theta^{(1)}\otimes\mu^{\prime\prime}\right) ,
\]
and therefore $J\geq J^{\prime}$.
To prove the other inequality, consider any $\rho$ satisfying $\rho p^{-1}%
=\nu,\rho q^{-1}=\theta$. We want to show that $J(\nu,\theta)\leq H\left(
\rho\mid\mu\otimes\mu^{\prime}\otimes\mu^{\prime\prime}\right) $. For that,
we can assume that the right hand side is finite. Then%
\[
H\left( \rho\mid\mu\otimes\mu^{\prime}\otimes\mu^{\prime\prime}\right)
=H\left( \rho\mid\hat{\rho}\left( \nu,\theta\right) \right) +\int
d\rho\log\frac{d\hat{\rho}\left( \nu,\theta\right) }{d\left( \mu\otimes
\mu^{\prime}\otimes\mu^{\prime\prime}\right) }.
\]
The first summand is $\geq0,$ and the second equals%
\[
\int d\hat{\rho}\left( \nu,\theta\right) \log\frac{d\hat{\rho}\left(
\nu,\theta\right) }{d\left( \mu\otimes\mu^{\prime}\otimes\mu^{\prime\prime
}\right) }=J(\nu,\theta).
\]
So, we have proved that%
\[
J(\nu,\theta)\leq H\left( \rho\mid\mu\otimes\mu^{\prime}\otimes\mu
^{\prime\prime}\right) ,
\]
for any $\rho$ satisfying $\rho p^{-1}=\nu,\rho q^{-1}=\theta.$
\end{proof}
We now step back to the setting of Theorem \ref{Th_GREM_perceptron}: For
$j=1,\dots,n,$ we have sequences $\left\{ X_{\alpha_{1},\dots,\alpha_{j}%
,i}^{j}\right\} $ of independent random variables with distribution $\mu_{j}%
$. We emphasize that henceforth $\mu=\mu_{1}\otimes\cdots\otimes\mu_{n}$ and
$\mu^{(j)}$ will denote the marginal on the first $j$ components. Moreover,
for $\alpha=(\alpha_{1},\dots,\alpha_{n})$, we write $\alpha^{(j)}=(\alpha
_{1},\dots,\alpha_{j})$ and set
\[
L_{N,\alpha^{(j)}}^{(j)}={\frac{1}{N}}\sum_{i=1}^{N}\delta_{\left(
X_{\alpha_{1},i}^{1},X_{\alpha_{1},\alpha_{2},i}^{2},\dots,X_{\alpha_{1}%
,\dots,\alpha_{j},i}^{j}\right) },
\]
for $j\leq n$, which is the marginal of $L_{N,\alpha}$ on $S^{j}$. With the
notation
\begin{align*}
& X_{\alpha,i}^{(j)}\overset{\mathrm{def}}{=}\left( X_{\alpha_{1},i}%
^{1},\dots,X_{\alpha_{1},\dots,\alpha_{j},i}^{j}\right) ,\\
& \hat{X}_{\alpha,i}^{(j)}\overset{\mathrm{def}}{=}\left( X_{\alpha_{1}%
,\dots,\alpha_{j+1},i}^{j+1},\dots,X_{\alpha_{1},\dots,\alpha_{n},i}%
^{n}\right) ,
\end{align*}
we can write
\begin{equation}
L_{N,\alpha}\overset{\mathrm{def}}{=}{\frac{1}{N}}\sum_{i=1}^{N}%
\delta_{\left( X_{\alpha,i}^{(j)},\hat{X}_{\alpha,i}^{(j)}\right)
}.\label{splitting_empirical}%
\end{equation}
For $A\subset{{\mathcal{M}}}_{1}^{+}(S^{n})$ we put $M_{N}(A)\overset
{\mathrm{def}}{=}\#\left\{ \alpha:L_{N,\alpha}\in A\right\} $.
\begin{lemma}
\label{very_many} Assume $\nu\in{\mathcal{M}}_{1}^{+}(S^{n})$ satisfies
$H(\nu\mid\mu)<\infty$, and let $V$ be an open neighborhood of $\nu$, and
$\varepsilon>0$. Then there exists an open neighborhood $U$ of $\nu$,
$U\subset V$, and $\delta>0$ such that
\[
{\mathbb{P}}\Big [M_{N}(U)\geq\operatorname{exp}\left[ N\left( \log
2-H(\nu\mid\mu)+\varepsilon\right) \right] \Big ]\leq\mathrm{e}^{-\delta N}.
\]
\end{lemma}
\begin{proof}
If $B_{r}(\nu)$ denotes the open $r$-ball around $\nu$ in one of the standard
metrics, e.g. the Prohorov metric, then by the semicontinuity property of the
relative entropy, one has%
\[
H(B_{r}(\nu)\mid\mu)\uparrow H(\nu\mid\mu)
\]
as $r\downarrow0.$ We can choose a sequence $r_{k}>0,r_{k}\downarrow0$ with
$H(B_{r_{k}}(\nu)\mid\mu)=H(\operatorname{cl}\left( B_{r_{k}}(\nu)\right)
\mid\mu)\uparrow H(\nu\mid\mu)$. Given $\varepsilon>0,$ and $V,$ we can find
$k$ such that%
\[
H(B_{r_{k}}(\nu)\mid\mu)=H(\operatorname{cl}\left( B_{r_{k}}(\nu)\right)
\mid\mu)\geq H(\nu\mid\mu)-\varepsilon/4
\]
and $B_{r_{k}}(\nu)\subset V.$ By Sanov's theorem we therefore get
\[
{\mathbb{P}}\Big [L_{N,\alpha}\in B_{r_{k}}(\nu)\Big ]\leq\operatorname{exp}%
\left[ N(-H(\nu\mid\mu)+\varepsilon/2)\right] ,
\]
and therefore%
\[
{{\mathbb{E}}}\Big [M_{N}\left( B_{r_{k}}(\nu)\right) \Big ]\leq
\operatorname{exp}\left[ N(\log2-H(\nu\mid\mu)+\varepsilon/2)\right] .
\]
By the Markov inequality, the claim follows by taking $\delta=\varepsilon/3.$
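For completeness, the Markov step reads%
\[
{\mathbb{P}}\Big [M_{N}\left( B_{r_{k}}(\nu)\right) \geq\operatorname{exp}%
\left[ N\left( \log2-H(\nu\mid\mu)+\varepsilon\right) \right] \Big ]\leq
\frac{{{\mathbb{E}}}\big [M_{N}\left( B_{r_{k}}(\nu)\right) \big ]}%
{\operatorname{exp}\left[ N\left( \log2-H(\nu\mid\mu)+\varepsilon\right)
\right] }\leq\mathrm{e}^{-N\varepsilon/2},
\]
so the lemma holds with $U=B_{r_{k}}(\nu)$ and any $\delta\leq\varepsilon/2$,
in particular $\delta=\varepsilon/3$.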
\end{proof}
\begin{lemma}
\label{still_existent} Assume $\nu\in{\mathcal{M}}_{1}^{+}(S^{n})$ satisfies
$H\left( \nu^{(j)}\mid\mu^{(j)}\right) >\Gamma_{j}\log2$ for some $j\leq n$,
and let $V$ be an open neighborhood of $\nu$. Then there is an open
neighborhood $U$ of $\nu$, $U\subset V$ and $\delta>0$ such that
\[
{\mathbb{P}}\big [M_{N}(U)\neq0\big ]\leq\mathrm{e}^{-\delta N}%
\]
for large enough $N$.
\end{lemma}
\begin{proof}
As in the previous lemma, we choose a neighborhood $U^{\prime}$ of $\nu^{(j)}$
in $S^{j}$ such that $H(\operatorname{cl}\left( U^{\prime}\right) \mid
\mu^{(j)})=H(U^{\prime}\mid\mu^{(j)})>\Gamma_{j}\log2+\eta,$ for some
$\eta>0.$ Then we put
\[
U\overset{\mathrm{def}}{=}\left\{ \nu\in{\mathcal{M}}_{1}^{+}(S^{n}):\nu\in
V,\nu^{(j)}\in U^{\prime}\right\} .
\]
If $L_{N,\alpha}\in U$ then $L_{N,\alpha}^{(j)}\in U^{\prime}$,
\begin{align*}
{\mathbb{P}}\left[ \exists\alpha:L_{N,\alpha}\in U\right] & \leq
{\mathbb{P}}\left[ \exists\alpha:L_{N,\alpha}^{(j)}\in U^{\prime}\right] \\
& \leq2^{\Gamma_{j}N}{\mathbb{P}}\left[ L_{N,\alpha}^{(j)}\in U^{\prime
}\right] \\
& \leq2^{\Gamma_{j}N}\operatorname{exp}\left[ -NH\left( \operatorname{cl}%
\left( U^{\prime}\right) \mid\mu^{(j)}\right) +N\eta/2\right] \\
& \leq2^{\Gamma_{j}N}\operatorname{exp}\left[ -N\Gamma_{j}\log2-N\eta
/2\right] =\mathrm{e}^{-N\eta/2}.
\end{align*}
This proves the claim.
\end{proof}
\begin{lemma}
\label{second_mom} Assume that $\nu\in{\mathcal{M}}_{1}^{+}(S^{n})$ satisfies
$H\left( \nu^{(j)}\mid\mu^{(j)}\right) <\Gamma_{j}\log2$ for all $j$, and
let $V$ be an open neighborhood of $\nu$, and $\varepsilon>0$. Then there
exists an open neighborhood $U$ of $\nu$, $U\subset V$, and a $\delta>0$ such
that
\[
{\mathbb{P}}\Big [M_{N}(U)\leq\operatorname{exp}\left[ N\left( \log
2-H(\nu\mid\mu)-\varepsilon\right) \right] \Big ]\leq\mathrm{e}^{-\delta N}.
\]
\end{lemma}
\begin{proof}
We claim that we can find $U$ as required, and some $\delta>0,$ such that
\begin{equation}
{\operatorname{var}}\left[ M_{N}(U)\right] \leq\mathrm{e}^{-2N\delta
}\left\{ {{\mathbb{E}}}\left[ M_{N}(U)\right] \right\} ^{2}%
\label{less_evident_sanov}%
\end{equation}
From this estimate, we easily get the claim: From Sanov's theorem, we have for
any $\chi>0$%
\begin{equation}
\mathbb{E}M_{N}(U)=2^{N}\mathbb{P}\left( L_{N,\alpha}\in U\right) \geq
\exp\left[ N\left( \log2-H(\nu\mid\mu)-\chi\right) \right]
.\label{Est_Exp_below}%
\end{equation}
Using this, we get by taking $\chi=\varepsilon/2$%
\begin{align*}
& {\mathbb{P}}\left( M_{N}(U)\leq\mathrm{e}^{N\left( \log2-H(\nu\mid
\mu)-\varepsilon\right) }\right) \\
& ={\mathbb{P}}\left( M_{N}(U)-\mathbb{E}M_{N}(U)\leq\mathrm{e}%
^{-N\varepsilon/2}\mathrm{e}^{N\left( \log2-H(\nu\mid\mu)-\varepsilon
/2\right) }-\mathbb{E}M_{N}(U)\right) \\
& \leq{\mathbb{P}}\left( M_{N}(U)-\mathbb{E}M_{N}(U)\leq\left(
\mathrm{e}^{-N\varepsilon/2}-1\right) \mathbb{E}M_{N}(U)\right) \\
& \leq{\mathbb{P}}\left( M_{N}(U)-\mathbb{E}M_{N}(U)\leq-\frac{1}%
{2}\mathbb{E}M_{N}(U)\right) \\
& \leq{\mathbb{P}}\left( \left\vert M_{N}(U)-\mathbb{E}M_{N}(U)\right\vert
\geq\frac{1}{2}\mathbb{E}M_{N}(U)\right) \\
& \leq4\frac{{\operatorname{var}}\left[ M_{N}(U)\right] }{\left\{
\mathbb{E}M_{N}(U)\right\} ^{2}}\leq4\mathrm{e}^{-2N\delta}\leq
\mathrm{e}^{-\delta N}.
\end{align*}
So it remains to prove (\ref{less_evident_sanov}). We first claim that for any
$j$%
\begin{align}
& \lim_{r\rightarrow0}\operatorname{inf}_{\rho,\theta\in{\operatorname{cl}%
}B_{r}(\nu):\rho^{(j)}=\theta^{(j)}}\left\{ H(\rho\mid\mu)+H\left(
\theta\mid\theta^{(j)}\otimes\hat{\mu}^{(j)}\right) \right\}
\label{lim_balls}\\
& =H(\nu\mid\mu)+H\left( \nu\mid\nu^{(j)}\otimes\hat{\mu}^{(j)}\right)
,\nonumber
\end{align}
where $\hat{\mu}^{(j)}\overset{\mathrm{def}}{=}\mu_{j+1}\otimes\cdots
\otimes\mu_{n}$. The inequality $\leq$ is evident by taking $\rho=\theta=\nu$,
and the opposite follows from the semicontinuity properties: One gets that for
a sequence $(\rho_{n},\theta_{n})$ with $\rho_{n}^{(j)}=\theta_{n}^{(j)}$ and
$\rho_{n},\theta_{n}\rightarrow\nu$, we have
\begin{align*}
\liminf_{n\rightarrow\infty}H\left( \rho_{n}\mid\mu\right) & \geq H(\nu
\mid\mu),\\
\liminf_{n\rightarrow\infty}H\left( \theta_{n}\mid\theta_{n}^{(j)}\otimes
\hat{\mu}^{(j)}\right) & \geq H\left( \nu\mid\nu^{(j)}\otimes\hat{\mu
}^{(j)}\right) ,
\end{align*}
the first inequality by the standard semi-continuity, and the second by Lemma
\ref{lower_semicontinuity}. This proves (\ref{lim_balls}).
Choose $\eta>0$ such that $H\left( \nu^{(j)}\mid\mu^{(j)}\right) <\Gamma
_{j}\log2-\eta$, for all $1\leq j\leq n$. By (\ref{lim_balls}) we may choose
$r$ small enough such that ${\operatorname{cl}}B_{r}(\nu)\subset V,$ and for
all $1\leq j\leq n$,
\begin{align*}
& \operatorname{inf}_{\rho,\theta\in{\operatorname{cl}}B_{r}(\nu):\rho
^{(j)}=\theta^{(j)}}\left\{ H(\rho\mid\mu)+H\left( \theta\mid\theta
^{(j)}\otimes\hat{\mu}^{(j)}\right) \right\} \\
& \geq H(\nu\mid\mu)+H\left( \nu\mid\nu^{(j)}\otimes\hat{\mu}^{(j)}\right)
-\eta/2\\
& =2H(\nu\mid\mu)-H\left( \nu^{(j)}\mid\mu^{(j)}\right) -\eta/2\\
& \geq2H(\nu\mid\mu)-\Gamma_{j}\log2+{\eta/2}.
\end{align*}
For two indices $\alpha,\alpha^{\prime}$ we write $q(\alpha,\alpha^{\prime
})\overset{\mathrm{def}}{=}\max\left\{ j:\alpha^{(j)}=\alpha^{\prime
(j)}\right\} $ with $\max\emptyset\overset{\mathrm{def}}{=}0$. Then
\begin{align*}
{{\mathbb{E}}}{M_{N}^{2}(U)} & =\sum_{j=0}^{n}\sum_{\alpha,\alpha^{\prime
}:q(\alpha,\alpha^{\prime})=j}{\mathbb{P}}\left[ L_{N,\alpha}\in
U,L_{N,\alpha^{\prime}}\in U\right] \\
& =\sum_{\alpha,\alpha^{\prime}:q(\alpha,\alpha^{\prime})=0}{\mathbb{P}%
}\left[ L_{N,\alpha}\in U\right] {\mathbb{P}}\left[ L_{N,\alpha^{\prime}%
}\in U\right] \\
& +\sum_{j=1}^{n}\sum_{\alpha,\alpha^{\prime}:q(\alpha,\alpha^{\prime}%
)=j}{\mathbb{P}}\left[ L_{N,\alpha}\in U,L_{N,\alpha^{\prime}}\in U\right] \\
& \leq{{\mathbb{E}}}[M_{N}({\operatorname{cl}}U)]^{2}+\\
& +\sum_{j=1}^{n}\sum_{\alpha,\alpha^{\prime}:q(\alpha,\alpha^{\prime}%
)=j}{\mathbb{P}}\left[ L_{N,\alpha}\in{\operatorname{cl}}U,L_{N,\alpha
^{\prime}}\in{\operatorname{cl}}U\right] .
\end{align*}
We write the empirical measure in the form (\ref{splitting_empirical}), and
use Lemma \ref{ldp_empirical_measure_couple}. For any $1\leq j\leq n$ we have%
\begin{align*}
& \sum_{\alpha,\alpha^{\prime}:q(\alpha,\alpha^{\prime})=j}{\mathbb{P}}\left[
L_{N,\alpha}\in\operatorname{cl}U,L_{N,\alpha^{\prime}}\in\operatorname{cl}%
U\right] \\
& =2^{\Gamma_{j}N}2^{(1-\Gamma_{j})N}\left( 2^{(1-\Gamma_{j})N}-1\right)
{\mathbb{P}}\left[ L_{N,\alpha}\in\operatorname{cl}U,L_{N,\alpha^{\prime}}%
\in\operatorname{cl}U\right] ,
\end{align*}
where on the right hand side $\alpha,\alpha^{\prime}$ is an arbitrary pair
with $q(\alpha,\alpha^{\prime})=j$. Using Lemma
\ref{ldp_empirical_measure_couple} we have
\begin{align*}
& {\mathbb{P}}\left[ L_{N,\alpha}\in\operatorname{cl}U,\;L_{N,\alpha^{\prime}}%
\in\operatorname{cl}U\right] \\
& \leq\operatorname{exp}\Bigg [-N\operatorname{inf}_{\rho,\theta
\in\operatorname{cl}U,\rho^{(j)}=\theta^{(j)}}\Big \{H\left( \rho^{(j)}%
\mid\mu^{(j)}\right) +\\
& +H\left( \rho\mid\rho^{(j)}\otimes\hat{\mu}^{(j)}\right) +H\left(
\theta\mid\theta^{(j)}\otimes\hat{\mu}^{(j)}\right) \Big \}+{\frac{N\eta}{4}%
}\Bigg ]\\
& =\operatorname{exp}\left[ -N\operatorname{inf}_{\rho,\theta\in
\operatorname{cl}U,\rho^{(j)}=\theta^{(j)}}\left\{ H(\rho\mid\mu)+H\left(
\theta\mid\theta^{(j)}\otimes\hat{\mu}^{(j)}\right) \right\} +{\frac{N\eta
}{4}}\right] \\
& \leq2^{\Gamma_{j}N}\operatorname{exp}\left[ -2NH(\nu\mid\mu)-{\frac{N\eta
}{4}}\right] ,
\end{align*}
and thus
\[
\sum_{\alpha,\alpha^{\prime}:q(\alpha,\alpha^{\prime})=j}{\mathbb{P}}\left[
L_{N,\alpha}\in\operatorname{cl}U,\;L_{N,\alpha^{\prime}}\in\operatorname{cl}U\right]
\leq2^{2N}\operatorname{exp}\left[ -2NH(\nu\mid\mu)-{\frac{N\eta}{4}}\right]
.
\]
Combining, we obtain by taking $\chi=\eta/16$ in (\ref{Est_Exp_below})%
\[
\operatorname{var}\left[ M_{N}(U)\right] \leq2^{2N}\operatorname{exp}\left[
-2NH(\nu\mid\mu)-{\frac{N\eta}{4}}\right] \leq\mathrm{e}^{-N\eta
/8}{{\mathbb{E}}}[M_{N}(U)]^{2},
\]
which proves our claim.
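Indeed, taking $\chi=\eta/16$ in (\ref{Est_Exp_below}) gives
${{\mathbb{E}}}[M_{N}(U)]^{2}\geq\operatorname{exp}\left[ 2N\log2-2NH(\nu
\mid\mu)-N\eta/8\right] $, so that%
\[
2^{2N}\operatorname{exp}\left[ -2NH(\nu\mid\mu)-{\frac{N\eta}{4}}\right]
=\mathrm{e}^{-N\eta/8}\operatorname{exp}\left[ 2N\log2-2NH(\nu\mid\mu
)-{\frac{N\eta}{8}}\right] \leq\mathrm{e}^{-N\eta/8}{{\mathbb{E}}}[M_{N}%
(U)]^{2},
\]
which is the exponent comparison used in the last display.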
\end{proof}
\begin{proof}
[Proof of Theorem \ref{Th_GREM_perceptron}]We set
\[
{\mathcal{G}}\overset{\mathrm{def}}{=}\left\{ \nu\in{\mathcal{M}}_{1}%
^{+}(S^{n}):H\left( \nu^{(j)}\mid\mu^{(j)}\right) \leq\Gamma_{j}%
\log2,\ j=1,\dots,n\right\} ,
\]
which is a compact set.
\textit{Step 1.} We first prove the lower bound. By compactness of
${\mathcal{G}}$ and the semicontinuity of $H$ there exists $\nu_{0}%
\in{\mathcal{G}}$ such that
\[
\sup_{\nu\in{\mathcal{G}}}\left\{ \Phi(\nu)-H(\nu\mid\mu)\right\} =\Phi
(\nu_{0})-H(\nu_{0}\mid\mu).
\]
We set $\nu_{\lambda}\overset{\mathrm{def}}{=}(1-\lambda)\nu_{0}+\lambda\mu$
for $0<\lambda<1$. By convexity of $H(\nu\mid\mu)$ in $\nu$ we see that
$H\left( \nu_{\lambda}^{(j)}\mid\mu^{(j)}\right) <\Gamma_{j}\log2$ for all
$1\leq j\leq n$. Furthermore $\nu_{\lambda}\rightarrow\nu_{0}$ weakly as
$\lambda\rightarrow0$, and $\Phi(\nu_{\lambda})\rightarrow\Phi(\nu_{0})$,
$H(\nu_{\lambda}\mid\mu)\rightarrow H(\nu_{0}\mid\mu)$.
Given $\varepsilon>0$ we choose $\lambda>0$ such that
\[
\Phi(\nu_{\lambda})-H(\nu_{\lambda}\mid\mu)\geq\Phi(\nu_{0})-H(\nu_{0}\mid
\mu)-\varepsilon.
\]
By the continuity of $\Phi$ and Lemma \ref{second_mom} we find a neighborhood
$U$ of $\nu_{\lambda}$, and $\delta>0$ such that
\[
\Phi(\theta)-\Phi(\nu_{\lambda})\leq\varepsilon,\ \theta\in U,
\]
and
\[
{\mathbb{P}}\left[ M_{N}(U)\leq2^{N}\operatorname{exp}\left[ -NH(\nu
_{\lambda}\mid\mu)-N\varepsilon\right] \right] \leq\mathrm{e}^{-\delta N},
\]
Then, with probability greater than $1-\mathrm{e}^{-\delta N}$,%
\begin{align*}
Z_{N} & =2^{-N}\sum_{\alpha}\operatorname{exp}\left[ N\Phi(L_{N,\alpha
})\right] \\
& \geq2^{-N}\sum_{\alpha:L_{N,\alpha}\in U}\operatorname{exp}\left[
N\Phi(L_{N,\alpha})\right] \\
& \geq\operatorname{exp}\left[ N\Phi(\nu_{\lambda})-N\varepsilon\right]
\operatorname{exp}\left[ -NH(\nu_{\lambda}\mid\mu)-N\varepsilon\right] \\
& \geq\operatorname{exp}\left[ N\sup_{\nu\in{\mathcal{G}}}\left\{ \Phi
(\nu)-H(\nu\mid\mu)\right\} -3N\varepsilon\right] .
\end{align*}
By Borel-Cantelli, we therefore get, as $\varepsilon$ is arbitrary,
\[
\liminf_{N\rightarrow\infty}{\frac{1}{N}}\log Z_{N}\geq\sup_{\nu
\in{\mathcal{G}}}\left\{ \Phi(\nu)-H(\nu\mid\mu)\right\}
\]
almost surely.
\textit{Step 2.} We prove the upper bound. Let again $\varepsilon>0$ and set
\[
\overline{{\mathcal{G}}}\overset{\mathrm{def}}{=}\{\nu:H(\nu\mid\mu)\leq
\log2\}.
\]
If $\nu\in{\mathcal{G}}$ we choose $r_{\nu}>0$ such that $\left\vert
\Phi(\theta)-\Phi(\nu)\right\vert \leq\varepsilon$, $\theta\in B_{r_{\nu}}%
(\nu)$ and
\[
{\mathbb{P}}\left[ M_{N}(B_{r_{\nu}}(\nu))\geq2^{N}\operatorname{exp}\left[
-NH(\nu\mid\mu)+N\varepsilon\right] \right] \leq\mathrm{e}^{-N\delta_{\nu}},
\]
for some $\delta_{\nu}>0$ and large enough $N$ (using Lemma \ref{very_many}).
If $\nu\in\overline{{\mathcal{G}}}\setminus{\mathcal{G}}$ we choose $r_{\nu}$
such that $\left\vert \Phi(\theta)-\Phi(\nu)\right\vert \leq\varepsilon$,
$\theta\in B_{r_{\nu}}(\nu)$, and
\begin{equation}
{\mathbb{P}}\left[ M_{N}(B_{r_{\nu}}(\nu))\neq0\right] \leq\mathrm{e}%
^{-N\delta_{\nu}},\label{still_existent_two}%
\end{equation}
again for large enough $N$ (and by Lemma \ref{still_existent}). As
$\overline{{\mathcal{G}}}$ is compact, we can cover it by a finite union of
such balls, i.e.
\[
\overline{{\mathcal{G}}}\subset U\overset{\mathrm{def}}{=}\bigcup_{j=1}%
^{m}B_{r_{j}}(\nu_{j}),
\]
where $r_{j}\overset{\mathrm{def}}{=}r_{\nu_{j}}$. We also set $\delta
\overset{\mathrm{def}}{=}\min_{j}\delta_{\nu_{j}}$. We then estimate
\begin{equation}
Z_{N}\leq2^{-N}\sum_{l=1}^{m}\sum_{\alpha:L_{N,\alpha}\in B_{r_{l}}(\nu_{l}%
)}\operatorname{exp}\left[ N\Phi(L_{N,\alpha})\right] +2^{-N}\sum
_{\alpha:L_{N,\alpha}\notin U}\operatorname{exp}\left[ N\Phi(L_{N,\alpha
})\right] .\label{upper_bound_abstract}%
\end{equation}
We first claim that almost surely the second summand vanishes provided $N$ is
large enough, i.e. that there is no $\alpha$ with $L_{N,\alpha}\notin U$. By
Sanov's theorem, we have
\[
\limsup_{N\rightarrow\infty}{\frac{1}{N}}\log{\mathbb{P}}\left[ L_{N,\alpha
}\notin U\right] \leq-\operatorname{inf}_{\nu\notin U}H(\nu\mid\mu)<-\log2.
\]
Therefore, almost surely, there is no $\alpha$ with $L_{N,\alpha}\notin U$,
and therefore the second summand in (\ref{upper_bound_abstract}) vanishes for
large enough $N$, almost surely. The same applies to those summands in the
first part for which $\nu_{l}\notin{\mathcal{G}}$, using
(\ref{still_existent_two}). We therefore have, almost surely, for large enough
$N$,
\begin{align*}
Z_{N} & \leq2^{-N}\sum_{l:\nu_{l}\in{\mathcal{G}}}\sum_{\alpha:L_{N,\alpha
}\in B_{r_{l}}(\nu_{l})}\operatorname{exp}\left[ N\Phi(L_{N,\alpha})\right]
\\
& \leq\mathrm{e}^{N\varepsilon}\sum_{l:\nu_{l}\in{\mathcal{G}}}%
\operatorname{exp}\left[ N\Phi(\nu_{l})\right] M_{N}(B_{r_{l}}(\nu_{l}))\\
& \leq\mathrm{e}^{2N\varepsilon}\sum_{l:\nu_{l}\in{\mathcal{G}}}%
\operatorname{exp}\left[ N\Phi(\nu_{l})\right] \operatorname{exp}\left[
-NH(\nu_{l}\mid\mu)\right] \\
& \leq\mathrm{e}^{2N\varepsilon}m\operatorname{exp}\left[ N\sup_{\nu
\in{\mathcal{G}}}\left\{ \Phi(\nu)-H(\nu\mid\mu)\right\} \right] .
\end{align*}
As $\varepsilon$ is arbitrary, we get
\[
\limsup_{N\rightarrow\infty}{\frac{1}{N}}\log Z_{N}\leq\sup_{\nu
\in{\mathcal{G}}}\left[ \Phi(\nu)-H(\nu\mid\mu)\right] .
\]
This finishes the proof of Theorem \ref{Th_GREM_perceptron}.
\end{proof}
\subsection{The dual representation. Proof of the Theorem
\ref{Th_Parisi_formula}}
We define a family $\mathcal{G}\left( \phi\right) =\left\{ G_{\phi
,\mathbf{m}}\right\} $ of probability distributions on $S^{n}$ which depend
on the parameter $\mathbf{m}=\left( m_{1},\ldots,m_{n}\right) \in\Delta.$
The probability measure $G=G_{\phi,\mathbf{m}}$ is described by a
\textquotedblleft starting\textquotedblright\ measure $\gamma$ on $S,$ and for
$2\leq j\leq n$ Markov kernels $K_{j}$ from $S^{j-1}$ to $S,$ so that $G$ is
the semi-direct product%
\[
G=\gamma\otimes K_{2}\otimes\cdots\otimes K_{n}.
\]%
\[
\gamma\left( dx\right) \overset{\mathrm{def}}{=}\frac{\exp\left[ m_{1}%
\phi_{1}\left( x\right) \right] \mu_{1}\left( dx\right) }{\exp\left[
m_{1}\phi_{0}\right] },
\]%
\[
K_{j}\left( \mathbf{x}^{\left( j-1\right) },dx_{j}\right) \overset
{\mathrm{def}}{=}\frac{\exp\left[ m_{j}\phi_{j}\left( \mathbf{x}^{\left(
j\right) }\right) \right] \mu_{j}\left( dx_{j}\right) }{\exp\left[
m_{j}\phi_{j-1}\left( \mathbf{x}^{\left( j-1\right) }\right) \right] },
\]
where we write $\mathbf{x}^{\left( j\right) }\overset{\mathrm{def}}%
{=}\left( x_{1},\ldots,x_{j}\right) .$ Remember the definition of the
function $\phi_{j}:S^{j}\rightarrow\mathbb{R}$ in (\ref{Def_phin}),
(\ref{Def_phij}). It should be remarked that these objects are defined for all
$\mathbf{m}\in\mathbb{R}^{n}$, and not just for $\mathbf{m}\in\Delta.$ We also
write%
\[
G^{\left( j\right) }\overset{\mathrm{def}}{=}\gamma\otimes K_{2}%
\otimes\cdots\otimes K_{j}%
\]
which is the marginal of $G$ on $S^{j}.$ In order to emphasize the dependence
on $\mathbf{m},$ we occasionally will write $\phi_{j,\mathbf{m}}%
,\ \gamma_{\mathbf{m}},\ K_{j,\mathbf{m}}$ etc.
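Note that each $K_{j}$ is indeed a Markov kernel: by the defining recursion
$\mathrm{e}^{m_{j}\phi_{j-1}}=\int\mathrm{e}^{m_{j}\phi_{j}}d\mu_{j}$ from
(\ref{Def_phij}), one has%
\[
\int K_{j}\left( \mathbf{x}^{\left( j-1\right) },dx_{j}\right) =\frac
{\int\exp\left[ m_{j}\phi_{j}\left( \mathbf{x}^{\left( j\right) }\right)
\right] \mu_{j}\left( dx_{j}\right) }{\exp\left[ m_{j}\phi_{j-1}\left(
\mathbf{x}^{\left( j-1\right) }\right) \right] }=1,
\]
and similarly $\gamma$ is a probability measure since $\mathrm{e}^{m_{1}%
\phi_{0}}=\int\mathrm{e}^{m_{1}\phi_{1}}d\mu_{1}$.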
We remark that by a simple computation%
\begin{gather}
\int H\left( K_{j}\left( \mathbf{x}^{\left( j-1\right) },\cdot\right)
\mid\mu_{j}\right) G^{\left( j-1\right) }\left( d\mathbf{x}^{\left(
j-1\right) }\right) \label{PAR1}\\
=m_{j}\left[ \int\phi_{j}dG^{\left( j\right) }-\int\phi_{j-1}dG^{\left(
j-1\right) }\right] .\nonumber
\end{gather}
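To see (\ref{PAR1}), note that $\log\frac{dK_{j}\left( \mathbf{x}^{\left(
j-1\right) },\cdot\right) }{d\mu_{j}}=m_{j}\phi_{j}\left( \mathbf{x}%
^{\left( j\right) }\right) -m_{j}\phi_{j-1}\left( \mathbf{x}^{\left(
j-1\right) }\right) $, so%
\[
H\left( K_{j}\left( \mathbf{x}^{\left( j-1\right) },\cdot\right) \mid
\mu_{j}\right) =m_{j}\int\phi_{j}\left( \mathbf{x}^{\left( j\right)
}\right) K_{j}\left( \mathbf{x}^{\left( j-1\right) },dx_{j}\right)
-m_{j}\phi_{j-1}\left( \mathbf{x}^{\left( j-1\right) }\right) ,
\]
and integrating against $G^{\left( j-1\right) }$ yields (\ref{PAR1}).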
$\phi_{j},\ldots,\phi_{n}$ do not depend on $m_{j},$ but $\phi_{0},\ldots
,\phi_{j-1}$ do. Differentiating the equation%
\[
\mathrm{e}^{m_{r+1}\phi_{r}}=\int\mathrm{e}^{m_{r+1}\phi_{r+1}}d\mu_{r+1}%
\]
with respect to $m_{j},$ we get for $0\leq r\leq j-2$%
\begin{equation}
\frac{\partial\phi_{r}\left( \mathbf{x}^{\left( r\right) }\right)
}{\partial m_{j}}=\int\frac{\partial\phi_{r+1}\left( \mathbf{x}^{\left(
r\right) },x_{r+1}\right) }{\partial m_{j}}K_{r+1}\left( d\mathbf{x}%
^{\left( r\right) },x_{r+1}\right) ,\label{PAR2}%
\end{equation}
and for $r=j-1$%
\[
\phi_{j-1}\mathrm{e}^{m_{j}\phi_{j}}+m_{j}\frac{\partial\phi_{j-1}}{\partial
m_{j}}\mathrm{e}^{m_{j}\phi_{j}}=\int\phi_{j}\mathrm{e}^{m_{j}\phi_{j}}%
d\mu_{j},
\]
i.e.%
\[
\frac{\partial\phi_{j-1}}{\partial m_{j}}\left( \mathbf{x}^{\left( j\right)
}\right) =\frac{1}{m_{j}}\left[ \int\phi_{j}\left( \mathbf{x}^{\left(
j-1\right) },x_{j}\right) K_{j}\left( \mathbf{x}^{\left( j-1\right)
},dx_{j}\right) -\phi_{j-1}\left( \mathbf{x}^{\left( j-1\right) }\right)
\right] .
\]
Combining that with (\ref{PAR1}), (\ref{PAR2}) we get%
\begin{align}
\frac{\partial\phi_{0,\mathbf{m}}}{\partial m_{j}} & =\frac{1}{m_{j}}\left[
\int\phi_{j}dG^{\left( j\right) }-\int\phi_{j-1}dG^{\left( j-1\right)
}\right] \label{PAR3}\\
& =\frac{1}{m_{j}^{2}}\int H\left( K_{j}\left( \mathbf{x}^{\left(
j-1\right) },\cdot\right) \mid\mu_{j}\right) G^{\left( j-1\right)
}\left( d\mathbf{x}^{\left( j-1\right) }\right) .\nonumber
\end{align}
Theorem \ref{Th_Parisi_formula} is immediate from the following result:
\begin{proposition}
\label{Th_Gibbs_maximizer}Assume that $\phi:S^{n}\rightarrow{\mathbb{R}}$ is
bounded and continuous. Then there is a unique measure $\nu$ maximizing
$\operatorname{Gibbs}\left( \nu,\phi\right) $ under the constraint $\nu
\in\bigcap\nolimits_{j=1}^{n}\mathcal{R}_{j}.$ This measure is of the form
$\nu=G_{\phi,\mathbf{m}}$ where $\mathbf{m}$ is the unique element in $\Delta$
minimizing (\ref{Variationformula3}). For this $\mathbf{m},$ we have%
\begin{equation}
\operatorname{Gibbs}\left( G,\phi\right) =\operatorname{Parisi}\left(
\phi,\mathbf{m}\right) .\label{Gibbs=Parisi}%
\end{equation}
\end{proposition}
\begin{proof}
From strict convexity of the relative entropy, and the fact that
$\bigcap\nolimits_{j=1}^{n}\mathcal{R}_{j}$ is compact and convex, it follows
that there is a unique maximizer $\nu$ of $\operatorname{Gibbs}\left(
\nu,\phi\right) $ under this constraint.
Also, a straightforward application of H\"{o}lder's inequality shows that
$\operatorname{Parisi}\left( \phi,\mathbf{m}\right) $ is a strictly convex
function in the variables $1/m_{j}.$ Therefore, it follows that there is a
uniquely attained minimum of $\operatorname{Parisi}\left( \phi,\mathbf{m}%
\right) $ as a function of $\mathbf{m}\in\Delta.$ This minimizing
$\mathbf{m}=\left( m_{1},\ldots,m_{n}\right) $ can be split into
subblocks of equal values: There is a number $K,\ 0\leq K\leq n,$ and indices
$0<j_{1}<j_{2}<\cdots<j_{K}\leq n$ such that%
\begin{align*}
0 & <m_{1}=\cdots=m_{j_{1}}<m_{j_{1}+1}=\cdots=m_{j_{2}}\\
& <m_{j_{2}+1}=\cdots=m_{j_{3}}<\cdots<m_{j_{K-1}+1}=\cdots=m_{j_{K}}\\
& <m_{j_{K}+1}=\cdots=m_{n}=1.
\end{align*}
$K=0$ just means that all $m_{i}=1.$ If $j_{K}=n,$ then all $m_{i}$ are $<1.$
We write $G=G_{\phi,\mathbf{m}}.$
From (\ref{PAR3}), we immediately have%
\begin{equation}
\frac{\partial\operatorname{Parisi}\left( \phi,\mathbf{m}\right) }{\partial
m_{j}}=\frac{1}{m_{j}^{2}}\left[ \int H\left( K_{j}\left( \mathbf{x}%
^{\left( j-1\right) },\cdot\right) \mid\mu_{j}\right) G^{\left(
j-1\right) }\left( d\mathbf{x}^{\left( j-1\right) }\right) -\gamma
_{j}\log2\right] .\label{PAR_Derivative}%
\end{equation}
Set $d_{j}\overset{\mathrm{def}}{=}\int H\left( K_{j}\left( \mathbf{x}%
^{\left( j-1\right) },\cdot\right) \mid\mu_{j}\right) G_{\mathbf{m}%
}^{\left( j-1\right) }\left( d\mathbf{x}^{\left( j-1\right) }\right)
.$ We use (\ref{PAR_Derivative}) and the minimality of $\operatorname{Parisi}%
\left( \phi,\mathbf{\cdot}\right) $ at $\mathbf{m}.$ We can perturb
$\mathbf{m}$ by moving a whole block $m_{j_{r}+1}=\cdots=m_{j_{r+1}}$ up and
down locally, without leaving $\Delta,$ provided it is not the possibly
present block of values $1.$ This leads to%
\[
\sum_{i=j_{r}+1}^{j_{r+1}}d_{i}=\log2\sum_{i=j_{r}+1}^{j_{r+1}}\gamma_{i}.
\]
Furthermore, we can always move the first part of a block, say $m_{j_{r}+1}%
=\cdots=m_{k},\ k\leq j_{r+1},$ locally down, without leaving $\Delta,$ so that
we get%
\[
\sum_{i=j_{r}+1}^{k}d_{i}\leq\log2\sum_{i=j_{r}+1}^{k}\gamma_{i}.
\]
These two observations imply%
\begin{equation}
G\in\bigcap_{j=1}^{n}\mathcal{R}_{j}\cap\bigcap_{r=1}^{K}\mathcal{R}_{j_{r}%
}^{=}.\label{PAR4}%
\end{equation}
We next prove%
\begin{equation}
\operatorname{Gibbs}\left( \nu,\phi\right) \leq\operatorname{Gibbs}\left(
G,\phi\right) \label{PAR5}%
\end{equation}
for any $\nu\in\bigcap_{j=1}^{n}\mathcal{R}_{j}.$
We first prove the case $n=1.$ If $m<1,$ then%
\[
H\left( G\mid\mu\right) =\log2\geq H\left( \nu\mid\mu\right)
\]
by (\ref{PAR4}) and the assumption $\nu\in\mathcal{R}_{1}.$ Therefore, in any
case%
\begin{align*}
\operatorname{Gibbs}\left( G,\phi\right) -\operatorname{Gibbs}\left(
\nu,\phi\right) & \geq\int\phi dG-\frac{1}{m}H\left( G\mid\mu\right) \\
& -\left[ \int\phi d\nu-\frac{1}{m}H\left( \nu\mid\mu\right) \right] \\
& =\frac{1}{m}H\left( \nu\mid G\right) \geq0
\end{align*}
The general case follows by a slight extension of the above argument. Put%
\[
D_{k}\overset{\mathrm{def}}{=}\int\phi_{k}dG^{\left( k\right) }-\frac
{1}{m_{k+1}}H\left( G^{\left( k\right) }\mid\mu^{\left( k\right)
}\right) -\int\phi_{k}d\nu^{\left( k\right) }+\frac{1}{m_{k+1}}H\left(
\nu^{\left( k\right) }\mid\mu^{\left( k\right) }\right) ,
\]
$D_{0}\overset{\mathrm{def}}{=}0,D_{n}=\operatorname{Gibbs}\left(
G,\phi\right) -\operatorname{Gibbs}\left( \nu,\phi\right) .$ We prove
$D_{k-1}\leq D_{k}$ for all $k,$ so that the claim follows. Remark that as
above in the $n=1$ case, if $m_{k}<m_{k+1},$ then $H\left( G^{\left(
k\right) }\mid\mu^{\left( k\right) }\right) =\Gamma_{k}\log2,$ and
therefore, in any case%
\begin{align*}
D_{k} & \geq\int\phi_{k}dG^{\left( k\right) }-\frac{1}{m_{k}}H\left(
G^{\left( k\right) }\mid\mu^{\left( k\right) }\right) -\int\phi_{k}%
d\nu^{\left( k\right) }+\frac{1}{m_{k}}H\left( \nu^{\left( k\right) }%
\mid\mu^{\left( k\right) }\right) \\
& =\int\phi_{k-1}dG^{\left( k-1\right) }-\frac{1}{m_{k}}H\left( G^{\left(
k-1\right) }\mid\mu^{\left( k-1\right) }\right) -\int\phi_{k}d\nu^{\left(
k\right) }+\frac{1}{m_{k}}H\left( \nu^{\left( k\right) }\mid\mu^{\left(
k\right) }\right) .
\end{align*}
As%
\begin{align*}
& H\left( \nu^{\left( k\right) }\mid\mu^{\left( k\right) }\right)
-m_{k}\int\phi_{k}d\nu^{\left( k\right) }+m_{k}\int\phi_{k-1}d\nu^{\left(
k-1\right) }\\
& =H\left( \nu^{\left( k-1\right) }\mid\mu^{\left( k-1\right) }\right)
+\int\log\frac{\nu^{\left( k\right) }\left( dx_{k}\mid\mathbf{x}^{\left(
k-1\right) }\right) \mathrm{e}^{m_{k}\phi_{k-1}\left( \mathbf{x}^{\left(
k-1\right) }\right) }}{\mu_{k}\left( dx_{k}\right) \mathrm{e}^{m_{k}%
\phi_{k}\left( \mathbf{x}^{\left( k\right) }\right) }}\nu^{\left(
k\right) }\left( d\mathbf{x}^{\left( k\right) }\right) \\
& \geq H\left( \nu^{\left( k-1\right) }\mid\mu^{\left( k-1\right)
}\right) ,
\end{align*}
(\ref{PAR5}) is proved.
(\ref{PAR4}) and (\ref{PAR5}) identify $G=G_{\phi,\mathbf{m}}$ as the unique
maximizer of $\operatorname{Gibbs}\left( \cdot,\phi\right) $ under the constraint
$\bigcap\nolimits_{j=1}^{n}\mathcal{R}_{j}.$
The identification (\ref{Gibbs=Parisi}) comes by a straightforward computation.
\end{proof}
\section{Introduction}
Since the discovery of cosmic rays (CR) a century ago, instrumental capabilities have steadily
improved. A large variety of types of experiments (balloon- or satellite-borne, flown on a shuttle,
installed on the international space station, or ground-based experiments) and techniques have been
used (nuclear emulsions, drift chambers, Cerenkov counters, spectrometers...) to refine our knowledge
of the CR composition and spectrum.
The presence of heavy ($Z<30$) \citepads{1948PhRv...74.1818F,1948PhRv...74..213F} and extremely heavy
($Z\ge30$) elements \citepads{1967RSPSA.301...39F,1969PhRvL..23..338B} in the cosmic radiation were among
the main discoveries related to Galactic CR nuclei, which culminated with the discovery of a few $Z>90$
events \citepads{1970RSPSA.318....1F,1971PhRvL..26..463O,1971PhRvD...3..815P}. Isotopes and in particular
the radioactive CR clocks were identified, with more or less difficulties due to their decreasing
abundance and mass separation with increasing atomic number: $^{10}$Be \citepads{1973Ap&SS..24...17W},
$^{36}$Cl \citepads{1981ApJ...246.1014Y}, $^{26}$Al \citepads{1982ApJ...252..386W}, and $^{54}$Mn
\citepads{1979ICRC....1..430W}. CR leptons were identified in the early 60's, with the first measurement
of electrons|also called negatrons at that time| \citepads{1961PhRvL...6..125E,1961PhRvL...6..193M}, and
of the positron fraction
\citepads{1964PhRvL..12....3D,1965JGR....70.2713H,1965ICRC....1..331A,1965ICRC....1..335D}. Anti-protons
are $\sim 10^{-4}$ times less abundant than protons and were only observed in the late 70's
\citepads{1979ICRC....1..330B,1979PhRvL..43.1196G}. Anti-deuterons, which are expected to be yet another
factor $\sim10^{-4}$ below \citepads{1997PhLB..409..313C}, are still to be detected: the best limit is
given by the {\sf BESS} balloon \citepads{2005PhRvL..95h1101F} (for limits on anti-helium, see
\citetads{2012PhRvL.108m1301A}), and is still three orders of magnitude above what is required to reach
the expected astrophysical production. This level could be within reach of the AMS-02 detector on the
International Space Station \citepads{2008arXiv0801.3243A,2008ICRC....4..765C} and/or the {\sf GAPS}
balloon-borne experiment \citepads{2012NIMPA.682...90A}. Note that other milestones in CR studies are the
discovery of the $\gamma$-ray diffuse emissions reported first by \citetads{1972ApJ...177..341K} (and
studied by the contemporary {\sf Fermi-LAT} instrument \citepads{2012ApJ...750....3A}), and the first
evidence of high-energy CRs from extensive air showers \citepads{1939RvMP...11..288A} (currently studied
at the {\sf Pierre Auger Observatory}, \citepads[e.g.,][]{2010PhLB..685..239A}).
Interestingly, even the oldest CR measurements are mostly not outdated yet. Indeed, many instruments
are designed to focus on specific CR species: not all instruments have isotopic resolution
capabilities, nor have all species been measured repeatedly. In the last twenty years, efforts have
also been devoted more to measuring the CR composition at higher energy than to refining the low-energy
data; this situation is currently changing with the {\sf AMS-02} experiment installed on the
International Space Station since May 2011. Some old experiments are also useful when one wishes to
inspect a possible charge-sign dependence (22-year cycle) of the Solar modulation effect as a function
of the Sun polarity, as first proposed by \citetads{1996ApJ...464..507C} and further studied in
\citepads{2002ApJ...568..216C}. For all these reasons, we believe it is worth providing an archival
database of CR measurements to the community.
CR data are the backbone of Galactic CR propagation studies
\citepads[e.g.,][]{2001ApJ...547..264J,2001ApJ...555..585M,2007ARNPS..57..285S,2008JCAP...10..018E}. In
the last twenty years, anti-protons and positron fraction measurements have also become a strong probe
for dark matter indirect searches \citepads{2011ARA&A..49..155P,2012CRPhy..13..740L}. A database would
therefore be useful to any researcher in these fields, but also to CR experimentalists who wish to
compare their data to previously published ones. Another independent effort to provide a CR database
was presented in \citetads{2009arXiv0907.0565S}. We present here a contextualised and more complete
version of the data\footnote{The data were gathered independently of the data presented in the
\citetads{2009arXiv0907.0565S} database.}, along with many user-friendly interfaces and tools to use them.
The paper is organised as follows: Sect.~\ref{sec:db_content} describes the database content;
Sect.~\ref{sec:website} describes the website and available tools. We conclude and comment on possible
improvements of the database in Sect.~\ref{sec:concl}. Appendix~\ref{app:rules} provides the rules used
by the database to combine CR quantities from a given experiment (e.g., to form quantity A/B from A and
B fluxes); App.~\ref{app:bibtex} gives a summary list of all the experiments/references contained in
this first release.
\section{Content of the database}
\label{sec:db_content}
In this section, we first describe the information gathered in the database, and the data themselves.
We then present how this information is organised in a \textsf{MySQL} framework.
\subsection{Definitions}
\label{sec:definition}
CR data are connected to experiments, analyses, and publications. The first step for creating the
database is to define what an experiment is. Then, the need to define a sub-experiment arises because
i) an experiment may consist of several detectors, or ii) an instrument may have flown several times,
or over distinct periods. Data from a sub-experiment often involve several CR species, the analyses of
which are published in one or several papers. For the sake of clarity, the following
keywords/definitions are used in the database:
\begin{description}
\item[\bf Experiment] Name associated with the instrument ({\em CREAM}, {\em AMS}). To identify
unnamed balloons, we use the syntax {\em Balloon (YYYY)}, and a further distinction is made if a
balloon was flown several times: a comma-separated list of years {\em Balloon (1966,1967)} is used
if the data were analysed and published for each flight; a plus-separated list {\em Balloon
(1967+1968)} is used if the data resulted from the combined analysis of the flights\footnote{The
naming convention chosen for the experiment and the sub-experiment ensures the many unnamed balloons
flown before the 90's to be uniquely defined.}.
\item[\bf Sub-experiment] Sub-detector name or experiment name concatenated with the flight number
and data taking period {\em (YYYY/MM)}, with start and stop dates separated by a hyphen for
durations over a month\footnote{Some of the start and stop dates for balloon flights are not given
in the publication and were taken from the {\sc StratoCat} database of stratospheric balloons
launched worldwide since 1947 (\url{http://stratocat.com.ar/globos/indexe.html}). For long-lived
instruments, we do not include in the database the excluded time periods (within the start and stop
dates) based on the analysis quality criteria (solar flares, high solar activity, instrument
stability\dots) because they are never given in the publication.}: {\em Balloon (1972/07)}, {\em
CREAM-I (2004/12-2005/01)}, {\em CREAM-II (2005/12-2006/01)}, {\em Ulysses-HET
(1990/10-1997/12)}.
\item[\bf Cosmic-ray quantity] Combination (sum, ratio, etc.) of measured CR species\footnote{A CR
must be a stable species with respect to the confinement time in the Galaxy, i.e. with an effective
lifetime $\gtrsim$~kyr (note that the electronic capture decay mode is suppressed because CR nuclei
are fully stripped of $e^-$ above $\sim 0.1$ GeV/n).}. It can be an elemental (e.g., C), isotopic
(e.g., $^1$H), or leptonic (e.g., $e^-$) flux, or any ratio of these quantities such as B/C,
$^{10}$Be/Be, $e^+/(e^-+e^+)$, etc. The keyword {\em SubFe} is used for the group $Z=21-23$, but no
other charge group is defined for now.
\item[\bf Energy axis] Detectors often measure the CR total energy $E_{\rm tot}$ or rigidity ${\cal
R}= pc/Ze$ ($p$ is the momentum, $Z$ the charge, $c$ the speed of light, and $e$ the electron
charge). Data are also very often presented as a function of the kinetic energy $E_{\rm k}=
E_{\rm tot} - m$ ($m$ is the CR mass) or the kinetic energy per nucleon $E_{\rm k/n}= E_{\rm
k}/A$ ($A$ is the mass number). In the database, we allow four representations of the energy unit
and axis: {\em [GeV]} for $E_{\rm tot}$, {\em [GV]} for ${\cal R}$, {\em [GeV]} for $E_{\rm k}$, and
{\em [GeV/n]} for $E_{\rm k/n}$.
\item[\bf Publication] Refereed or non-refereed reference (journal or conference proceedings)
providing CR quantity data from (sub-)experiments. A publication is usually attached to a single
(sub-)experiment and it contains different CR measurements, but there are a few exceptions. Over
time, some of these publications may be superseded by newer analyses: a specific entry of the
database allows one to keep track of deprecated analyses and references.
\item[\bf Data] CR quantity measurement and uncertainties at one or several energy bins.
Sect.~\ref{sec:data} gives a complete description of a data entry.
\end{description}
The fact that combinations of CR quantities are themselves CR quantities introduces a subtlety in the
choice of how to handle the database. One could be tempted to fill the database with all useful
combinations of data (e.g., the often used B/C ratio) from published quantities (e.g., B and C fluxes).
However, the number of combinations that can be formed is large (for $Z<30$, as many elements and about
a hundred isotopes can be combined), and the procedure to combine the errors on the measurements is not
always sound. For these reasons, we decided to fill the database with the published quantities only. We
leave the task of extracting the most complete dataset (for a given CR quantity) to the \tabtext{Data
Extraction} interface (Sect.~\ref{sec:data_extraction}), which combines the data found directly in the
database, and those obtained by looking for all combinations of data leading to this quantity (see
Sect.~\ref{sec:data_extraction}, and also App.~\ref{app:rules} for a discussion of the priority rules
and criteria to decide how and when to form new quantities and evaluate their uncertainties).
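As a purely illustrative sketch of such a combination (and not the database's actual rules, which are
those of App.~\ref{app:rules}; all names and numbers below are hypothetical), a ratio can be formed from
two fluxes sharing the same energy grid, with relative errors added in quadrature:
\begin{verbatim}
def combine_ratio(flux_num, flux_den):
    """flux_* are lists of (E_mean, value, stat_err) on a common energy grid."""
    out = []
    for (e1, v1, s1), (e2, v2, s2) in zip(flux_num, flux_den):
        assert e1 == e2, "energy bins must match exactly"
        r = v1 / v2
        # relative errors added in quadrature (assumes independent measurements)
        err = r * ((s1 / v1) ** 2 + (s2 / v2) ** 2) ** 0.5
        out.append((e1, r, err))
    return out

# hypothetical B and C fluxes at 1 and 2 GeV/n
boron  = [(1.0, 2.1e-2, 1.5e-3), (2.0, 9.0e-3, 8.0e-4)]
carbon = [(1.0, 7.0e-2, 3.0e-3), (2.0, 3.2e-2, 1.5e-3)]
print(combine_ratio(boron, carbon))   # B/C with combined errors
\end{verbatim}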
\subsection{Data description and units}
\label{sec:data}
The structure of a CR data entry (energy, energy range, measurement and uncertainties) for any measured
quantity is as follows:
\begin{description}
\item[$\mathbf{\langle E\rangle}$] `Central' energy given in the publication (unit is {\em [GeV]} if
the energy axis is $E_{\rm tot}$ or $E_{\rm k}$, {\em [GV]} for ${\cal R}$, and {\em [GeV/n]} for
$E_{\rm k/n}$). If only the bin range (see below) is given in the publication, the geometric mean
$\langle E\rangle=\sqrt{E_{\rm min}E_{\rm max}}$ is used.
\item[\bf Bin range] Energy range (same unit as $\langle E\rangle$). If only $\langle E\rangle$ is
given in the publication, $E_{\rm min}=E_{\rm max}=\langle E\rangle$.
\item[\bf Value] Measured CR quantity in unit of [$(\langle E \rangle\,{\rm m}^2\,{\rm s}\,{\rm
sr})^{-1}$] if this is a flux, or unit-less if this is a ratio. The data correspond to
top-of-atmosphere (TOA) quantities, i.e. modulated by the Sun's activity.
\item[\bf Stat Err] Statistical error (same unit as {\em Value}).
\item[\bf Syst Err] Systematic error (same unit as {\em Value}); set to $0$ if not given in
publication.
\item[$\boldsymbol\phi_{\rm \bf FF}$]{\bf [MV]} Solar modulation parameter\footnote{The Solar
modulation parameter is not a direct product of an experiment analysis, the flight period of the
instrument is. This parameter is a useful ingredient for GCR studies, and is generally estimated
from the measured TOA fluxes and an assumption about the interstellar flux.} in the force-field (FF)
approximation \citepads{1968ApJ...154.1011G,1973Ap&SS..25..387G} as given in the publication (or from
Solar modulation dedicated analyses, e.g.,
\citetads{2002SoPh..207..389U,2005JGRA..11012108U,2011JGRA..11602104U}).
\end{description}
In the database, these values must be filled for each data point from the published data. Whenever
available, we used the values given in the publication tables. However, most publications do not provide tables,
and the data had to be retrieved from the plots (using the {\sc DataThief III}
software\footnote{\url{http://datathief.org}}).
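As an illustration of how $\phi_{\rm FF}$ enters, the sketch below applies the standard force-field
mapping between an interstellar (IS) flux and a TOA flux (Gleeson \& Axford, cited above), shown here
for protons; the IS spectrum is a toy assumption for the example, not a database product:
\begin{verbatim}
M_P = 0.938  # proton rest mass [GeV]

def toa_flux(j_is, e_kin, phi_mv, z=1, a=1, m=M_P):
    """Force-field TOA flux at kinetic energy per nucleon e_kin [GeV/n],
    given an IS spectrum j_is and a modulation parameter phi_mv [MV]."""
    shift = abs(z) / a * phi_mv * 1e-3   # mean energy loss [GeV/n]
    e_is = e_kin + shift                 # IS kinetic energy per nucleon
    # flux reduced by the ratio of the squared momenta (per nucleon)
    return j_is(e_is) * e_kin * (e_kin + 2*m) / (e_is * (e_is + 2*m))

j_is = lambda e: 1.0e4 * (e + 0.5)**-2.7   # toy IS spectrum, illustrative only
print(toa_flux(j_is, e_kin=1.0, phi_mv=550.0))
\end{verbatim}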
\subsection{Database structure description}
\label{sec:structure}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=1.02\columnwidth]{figures/mysql_paper.jpg}
\caption{Tables and keys of the database \textsf{MySQL} structure (see text for details).}
\label{fig:mysql}
\end{center}
\end{figure}
The database engine is {\sf MySQL5}, hosted at the Laboratoire de Physique Subatomique et de Cosmologie (LPSC) on a
backed up server. The structure and keys are shown in Fig.~\ref{fig:mysql} (keys in {\bf exp}, {\bf
subexp}, and {\bf publi} were discussed in Sect.~\ref{sec:definition}, those in {\bf data} in
Sect.~\ref{sec:data}). Each entry in a table is associated with a unique identifier. These identifiers
are used to link elements from one table to another (for example, several sub-experiments can be linked
to a single experiment). For completeness, all the tables of Fig.~\ref{fig:mysql} are briefly described
below:
\begin{description}
\item {\bf exp} Name, type, web site (if available), and flight date.
\item {\bf subexp} Link to the experiment it belongs to, name, description of the apparatus, flight
details (launch place and the number of flights for balloons), flight dates, distance to the Sun
[AU], and Solar modulation level [MV].
\item {\bf publi} Bibliographic reference, web link, publication year,
\textsc{Bib}\TeX\footnote{\url{http://www.bibtex.org}} entry (taken from the Astrophysics
Data System ADS\footnote{\url{http://cdsads.u-strasbg.fr}}), and link to other publications
(if more recent analyses exist).
\item {\bf subexp\_publi} Bridge table linking entries from {\bf publi} to one or several entries of
{\bf subexp}.
\item {\bf element} Name, mass, atomic number, charge, etc. for CR quantities (isotopes, elements,
$\bar{p}$, $e^{-}$, and $e^{+}$).
\item {\bf data} Type (flux or ratio of {\bf element}), energy axis, energy, bin range, value,
statistic and systematic errors.
\item {\bf user} Contact details of administrators (persons authorised to change and validate
submitted data).
\item {\bf validation} Contact details of persons submitting new data (see
Sect.~\ref{sec:add_data_tab} for the \tabtext{New Data} interface), validation date, and identity of
the person (user) who validated the data.
\end{description}
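As an aid to reading Fig.~\ref{fig:mysql}, the following is a hypothetical, heavily simplified rendering
of the table linkage ({\sf SQLite} is used for illustration only; the production engine is {\sf MySQL5}
and the real tables carry more fields than shown):
\begin{verbatim}
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE exp    (id INTEGER PRIMARY KEY, name TEXT, type TEXT);
CREATE TABLE subexp (id INTEGER PRIMARY KEY,
                     exp_id INTEGER REFERENCES exp(id),
                     name TEXT, dates TEXT);
CREATE TABLE publi  (id INTEGER PRIMARY KEY, ads_ref TEXT);
-- bridge table: one publication may cover several sub-experiments
CREATE TABLE subexp_publi (subexp_id INTEGER REFERENCES subexp(id),
                           publi_id  INTEGER REFERENCES publi(id));
CREATE TABLE data   (id INTEGER PRIMARY KEY,
                     subexp_id INTEGER REFERENCES subexp(id),
                     quantity TEXT, e_mean REAL, value REAL,
                     stat_err REAL, syst_err REAL);
""")
\end{verbatim}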
\section{Website, interfaces, and example plots}
\label{sec:website}
The CR database website \url{http://lpsc.in2p3.fr/cosmic-rays-db} is hosted by the LPSC
laboratory website, and is based on a {\sf LAMP} solution\footnote{The acronym {\sf LAMP} refers to a
stack of free open-source software: {\sf Linux} operating system, {\sf Apache HTTP} server, {\sf MySQL}
database software, and {\sf PHP}.}. Authentication uses the {\tt https} protocol to ensure a good level
of confidentiality (only administrators own credentials to access protected areas). All web pages are
written using the {\sf PHP} (Hypertext PreProcessor) language, with a global structure made in {\sf
AJAX} (Asynchronous {\sf JavaScript} and {\sf XML}). The third-party libraries {\sf jquery},
{\sf jquery-ui}, {\sf jquery.cluetip}, and {\sf table-sorter} are also used.
The website is based on tabs, in which the user is guided by \boxtext{HELP} boxes (identified by
{\em question mark} icons). We give below a brief description of the implemented tabs:
\begin{description}
\item \tab{Welcome} Quick description and organisation of the database, log of the latest changes,
and link to download the database content formatted for the
{\sf USINE}\footnote{\url{http://lpsc.in2p3.fr/usine}} propagation code.
\item \tab{Experiments/Data} List of available data sorted by experiment names or dates. A list of
experiment acronyms is given.
\item \tab{Data extraction} Main interface to retrieve data in {\sf ASCII} files, {\sf
ROOT}\footnote{\url{http://root.cern.ch}} macros and plots, and \textsc{Bib}\TeX\ references for the
selection.
\item \tab{Admin} Shown for authenticated users only: internal checks of the database content,
validation of submitted data.
\item \tab{Links} Standard useful (here GCR-related) web links.
\item \tab{New Data} Interface to submit new data which will appear in the database after validation by
authorised users.
\end{description}
As underlined previously, native data (i.e. data directly from publications) are listed and accessed
from \tabtext{Experiments/Data} (Sect.~\ref{sec:exp_data_tab}). In the \tabtext{Data extraction} tab
(Sect.~\ref{sec:data_extraction}), native data and matching combinations of native data are combined to
provide the most complete list of data found for user-selected quantities/criteria. Adding new data
is possible from the \tabtext{New data} tab (Sect.~\ref{sec:add_data_tab}).
\subsection{Data access from \tab{Experiments/Data} tab}
\label{sec:exp_data_tab}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=1\columnwidth]{figures/snapshot_expdata.jpg}
\caption{Snapshot of the \tabtext{Experiments/Data} tab content (Sect.~\ref{sec:exp_data_tab}). The
\boxtext{HELP} box is activated by clicking on the {\em question mark} icon, and a picture of the
instrumental setup (not shown) pops up when clicking on the {\em magnifying glass} icon. The
\boxtext{Instrument description} box appears for a mouse-over action on the sub-experiment name. A
click on \clicktext{data} pops up a new window with the data entries and a summary of all the
(sub-)experiment/publication information (not shown).}
\label{fig:exp_data}
\end{center}
\end{figure}
Figure~\ref{fig:exp_data} shows a snapshot of the \tab{Experiments/Data} tab (and some enabled actions within this tab). The
list of published data is ordered by experiment name or date. For each experiment, the list
of sub-experiments is shown and sorted by start time\footnote{We refer the reader to
Sect.~\ref{sec:definition} for the definition of what is meant by experiment, sub-experiment,
publication, etc., and to Sect.~\ref{sec:structure} for the structure of the data in the \textsf{MySQL}
frame.}. The publication references related to this sub-experiment are then listed along with the
quantities measured (older analyses/publications of the same data are indicated). The most useful
actions/pop-up informations available for the user are:
\begin{itemize}
\item experiment description (name, type, official web page);
\item sub-experiment description (name, data periods\footnote{Full details of the flight dates are
given clicking on \clicktext{data}. The start and stop time format is {\em YYYY/MM/DD-hh:mm:ss}. A
new line is used for each flight.}, instrument description [{\em mouse-over} name], experimental
setup picture [{\em magnifying glass} icon]);
\item data for each publication (from a sub-experiment). A click on \clicktext{data} (see
Fig.~\ref{fig:exp_data}) pops up a window (not shown in the examples): its upper half summarises all
the information on the sub-experiment (contained in the database) and gives the ADS link of the
reference; the lower half shows the data (see Sect.~\ref{sec:data} for their format): ratios are
sorted first and fluxes second (one can jump directly to the data of interest clicking on one of the
{\em Quantities} suggested in the upper half panel).
\end{itemize}
By default, CR data for nuclei and anti-nuclei are given as a function of kinetic energy per nucleon,
whereas leptons are given as a function of the kinetic energy. Whenever the energy axis is rigidity, the
flag \textcolor{red}{\small[Rigidity]} is added after the data.
\subsection{Selection and tools from \tab{Data extraction} tab} \label{sec:data_extraction}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\columnwidth]{figures/snapshot_extraction.jpg}
\caption{Snapshot of the \tabtext{Data extraction} tab interface. In the upper panel, a CR quantity is
chosen by means of selection boxes (auto completion enabled). In the lower panel, more search criteria
are possible (energy range, list of experiments or sub-experiments, time period, etc.). A tick box
allows one to add to the search the data points obtained from combinations of `native' data (see
App.~\ref{app:rules}). Clicking on the \button{Extract selection} box pops up the results, as
shown in the example Fig.~\ref{fig:extraction_results}.}
\label{fig:extraction}
\end{center}
\end{figure}
Figure~\ref{fig:extraction} shows a snapshot of the selection interface within the tab. A mandatory step
is the quantity selection ({\sf Flux or ratio selection}), for which a few predefined choices are
proposed. For a ratio, both the numerator and denominator selection boxes must be filled (auto
completion is enabled). The other optional selection criteria ({\sf Refine search criteria}) are:
\begin{itemize}
\item \button{Energy axis}: to be selected among {\sf EKN, EK, R} or {\sf Etot}\footnote{As already
said, most data are published in EKN for nuclei and anti-nuclei, and EK for leptons (very few data
are published on several energy axes). No conversion is proposed from one energy axis to another
because this operation is impossible for most combination of CR quantities (for instance, elements
contain several isotopes of different $A$, $Z$, and $m$ that do not translate in a unique value on a
new energy axis).};
\item \button{Flux rescaling}: multiplies the flux values and errors by $\langle E\rangle^{a}$
(useful for presentation purpose);
\item \button{Energy range}: restricts the energy range allowed;
\item \button{(Sub-)Experiment names}: list of comma-separated names (partial names allowed, e.g.,
{\em CREAM,BESS});
\item \button{Time interval}: selects only experiments falling into the selected period (format is
{\em YYYY/MM}).
\item \button{Show also data from combinations}: tick box to include in the search the data points
obtained from combinations of `native' data (see App.~\ref{app:rules}).
\end{itemize}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\columnwidth]{figures/snapshot_extraction_result.jpg}
\caption{Snapshot of the result of the \tabtext{Data extraction} operation. This pop-up window appears
after the selection step shown in Fig.~\ref{fig:extraction} is completed. Buttons, links, and tables
give access to raw data and plots, see text for details.}
\label{fig:extraction_results}
\end{center}
\end{figure}
Hitting the \button{Extract Selection} button pops up a new window with the data extracted from the user
selection. This is shown in Fig.~\ref{fig:extraction_results}, organised in three panels (click on
\clicktext{hide}/\clicktext{show} to collapse/expand each panel):
\begin{enumerate}
\item {\sf Plots and exports for the selection}: \button{Get ROOT Macro}, \button{Data Files},
\button{Plot}, and \button{USINE File} buttons return i) a {\sf ROOT} executable {\sf C++} file
{\tt database\_plot.C} to re-generate and/or modify the plot\footnote{Based on the {\sf ROOT}
library {\url{http://root.cern.ch}}. To execute, type {\tt root database\_plot.C} (the data are
hard-coded). The errors displayed correspond to the quadratic sum of statistical and systematic
uncertainties.}; ii) a tar-ball {\tt database\_plot.tar.gz} of {\sf ASCII} files containing the
data (one file per sub-experiment); iii) a high-resolution image {\tt database\_plot.png} of the
plot; iv) a {\sf USINE}-compliant file {\tt database\_plot.USINE} (i.e. that can be used as an
input of the {\sf USINE} propagation code).
\item {\sf List of experiments found for the selection}: summary of the data sorted by
(sub-)experiment (name, publication, number of data, etc.). The \button{Get Bibtex} and
\button{Latex cite} buttons provide respectively a \textsc{Bib}\TeX\ file ({\sf bibtex.bib}) to be
included in the references, and the text to cite this selection in the \LaTeX\
document\footnote{These files are useful to quickly prepare scientific manuscripts based on
\LaTeX\ and \textsc{Bib}\TeX. App.~\ref{app:bibtex} and the references of this paper were prepared
with the full list of references retrieved from the \button{Get Bibtex} and \button{Latex cite}
buttons in the \tabtext{Welcome} tab.}. As for the \tabtext{Experiments/Data} tab
(Sect.~\ref{sec:exp_data_tab}), links to the experiment website and the ADS publication are
provided.
\item {\sf Data for the selection}: data in a table (see Sect.~\ref{sec:data} for the content
description) sorted by experiment name or energy. An asterisk denotes the data obtained by
combinations of native data of the database (see App.~\ref{app:rules}).
\end{enumerate}
We remind the reader that, with the default search criteria (i.e., none), all analyses of a CR quantity by the same
instrument show up in the result as long as they correspond to different data taking (or analysed)
periods (i.e., different sub-experiments). Most of the time, these data are independent, but in a few
cases, the analysed periods overlap. It happens, for instance, for the {\em Voyager} data (launched in
1977 and still taking data). In that case, it is up to the user to decide which data sets are relevant
for her/his analysis, and exclude it using the \button{(Sub-)Experiment names} or \button{Time interval}
selection box (see Fig.~\ref{fig:extraction}).
\subsection{New data from the \tab{New Data} tab}
\label{sec:add_data_tab}
This tab allows anyone to interactively enter new data. This is an essential part of the database as it
provides the community with the possibility to contribute to the completion of the database (either by
adding data from new instruments, or adding missing data from older experiments).
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.55\columnwidth]{figures/snapshot_newdata_id.jpg}\\
\vspace{0.1cm}
\includegraphics[width=\columnwidth]{figures/snapshot_newdata_fill.jpg}\\
\vspace{0.1cm}
\includegraphics[width=\columnwidth]{figures/snapshot_newdata.jpg}
\caption{Snapshots of the user interfaces in the \tabtext{Add data} tab. The first stage for adding new
data is the submitter identification (upper panel). The ordered 4-steps submission process comes next
(middle panel). The user has to either select among existing entries, or insert a new one (this pops-up
a window with fields to fill), except for the fourth step that concerns the CR data. The latter must be
filled one CR quantity at a time (and must belong to the list of quantities declared at the publication
step): the bottom panel shows the state of the panel once this step is reached, with the energy axis to
select, the data file to upload, and the \button{Graphical check before submission}. Note that at all
steps, {\em HELP} buttons exist for most of the fields to fill.}
\label{fig:add_data}
\end{center}
\end{figure}
Submitting new data consists of two parts: as shown in Fig.~\ref{fig:add_data} (top panel), the first
part concerns the submitter identification (contact details). The second part (same figure, middle and
bottom panel) is data submission. Four steps must be passed in order. For the first three steps
(experiment, sub-experiment, publication), the submitter is left with the choice of selecting her/his
entry among those already in the database, or to add a new entry (\button{Insert new} button): the
latter action pops up a new window in which the submitter is guided (\boxtext{HELP} boxes are
provided for each item) to fill in the necessary information, which matches the keys of the database structure described
in Sect.~\ref{sec:structure} (see also Fig.~\ref{fig:mysql}). Each time a new entry is submitted, the
submitted element becomes available for further submissions, though it does not appear yet in the
database (i.e., in \tabtext{Experiments/Data} and \tabtext{Extract data} tabs).
Once the three previous steps are completed, the last action is the submission of the CR data (see
bottom panel of Fig.~\ref{fig:add_data}). A template file for the required format (and units) of the data
is provided (see Sect.~\ref{sec:data}). Only one CR quantity at a time (with as many energy bins as
desired) can be submitted. Before the final submission of the data, the \button{Data graphical check}
button pops up a summary of the uploaded file, along with a plot of the submitted data for a last visual
inspection. At this stage, the submission process can still be cancelled if any mistake is spotted.
For each submitted entry (experiment, sub-experiment, publication, data), an email is sent to the
administrators of the database content\footnote{For now, the only authorised persons are the
developers of the database. Any person wishing to get involved in further developments of the database
is welcome to contact us.}. Validation tools from the \tabtext{Admin} tab are then used (format check,
completeness, etc.) to authorise the addition in the database (pending validation, the data are inserted
in the database with a `not validated' flag, and do not appear on the web site).
\section{Conclusions and future improvements}
\label{sec:concl}
We have developed a database of charged CRs (\url{http://lpsc.in2p3.fr/cosmic-rays-db}) that
includes $e^-$, $e^+$, $\bar{p}$, and nuclides data up to $Z=30$ for energies below a few TeV/n. Each CR
data entry is linked to a description of the instrument that measured it (flight dates, picture of the
experiment setup, techniques used, etc.) and to the ADS reference in which it was published: this first
release contains more than 200 experiments and 200 publications. The data can be extracted according to
a selection on the CR quantity, the energy range, the experiments and the epoch of measurement:
{\sf ASCII} files, {\sc ROOT} macros, plots, and \textsc{Bib}\TeX\ for the corresponding publications
are then readily available.
The possibility to add new data by means of a user-friendly interface enables the database to be a
collaborative tool for the CR community, provided enough people take an interest, use it, and help
expand it. New data will be added as new results become available (we encourage experimentalists to
submit their data once they become public). The database could also be extended to a larger energy
domain (data from ground-based detectors at $\gtrsim$~TeV/n) or to heavier species ($Z>30$). Time
series, e.g., low-energy proton, helium and electron data with a monthly or finer time resolution from
long-lived high-precision instruments (e.g., {\sf AMS-02}) could also be very interesting inputs for
Solar modulation studies.
We welcome any help to further develop the database. Comments, questions, suggestions, and
corrections\footnote{Despite our best efforts, many data published in CR conferences (e.g., ICRC) are
probably missing, and many typos and errors probably remain to be corrected.} can be addressed to
\href{mailto:crdatabase@lpsc.in2p3.fr}{crdatabase@lpsc.in2p3.fr}.
\begin{acknowledgements}
We thank our colleagues B. Coste, F. Donato, and A. Putze who contributed to the collection of some of
the data gathered in the first release of this database. We thank A. Putze for useful comments on the
paper. This work is part of the {\sf USINE} project (CR propagation code). It has been financially
supported by the {\sf PNHE}.
\end{acknowledgements}
\section{Introduction}
\IEEEPARstart{E}{stimating} source directions of arrival (DOAs) using sensor arrays plays an important role in the field of array signal processing. For uniform linear arrays (ULA), it is widely known that traditional subspace-based methods, such as MUSIC, can resolve up to $N - 1$ uncorrelated sources with $N$ sensors~\cite{schmidt_multiple_1986, stoica_music_1989, huang_exact_1993}. However, for sparse linear arrays, such as minimum redundancy arrays (MRA) \cite{moffet_minimum-redundancy_1968}, it is possible to construct an augmented covariance matrix by exploiting the coarray structure. We can then apply MUSIC to the augmented covariance matrix, and up to $\scriptO(N^2)$ sources can be resolved with only $N$ sensors \cite{moffet_minimum-redundancy_1968}.
Recently, the development of co-prime arrays \cite{pal_coprime_2011, tan_sparse_2014,tan_direction_2014,qin_generalized_2015} and nested arrays \cite{pal_nested_2010, han_wideband_2013, han_nested_2014} has generated renewed interest in sparse linear arrays, and the performance of these arrays remains to be investigated.
The performance of the MUSIC estimator and its variants (e.g., root-MUSIC \cite{friedlander_root-music_1993, pesavento_unitary_2000}) was thoroughly analyzed by Stoica et al. in \cite{stoica_music_1989}, \cite{stoica_music_1990} and \cite{stoica_performance_1990}. The same authors also derived the asymptotic MSE expression of the MUSIC estimator, and rigorously studied its statistical efficiency. In \cite{li_performance_1993}, Li et al. derived a unified MSE expression for common subspace-based estimators (e.g., MUSIC and ESPRIT~\cite{roy_esprit-estimation_1989}) via first-order perturbation analysis. However, these results are based on the physical array model and make use of the statistical properties of the original sample covariance matrix, which cannot be applied when the coarray model is utilized. In \cite{gorokhov_unified_1996}, Gorokhov et al. first derived a general MSE expression for the MUSIC algorithm applied to matrix-valued transforms of the sample covariance matrix. While this expression is applicable to coarray-based MUSIC, its explicit form is rather complicated, making it difficult to conduct analytical performance studies. Therefore, a simpler and more revealing MSE expression is desired.
In this paper, we first review the coarray signal model commonly used for sparse linear arrays. We investigate two common approaches to constructing the augmented sample covariance matrix, namely, the direct augmentation approach (DAA)~\cite{abramovich_detection-estimation_2001, liu_remarks_2015} and the spatial smoothing approach~\cite{pal_nested_2010}. We show that MUSIC yields the same asymptotic estimation error for both approaches. We are then able to derive an explicit MSE expression that is applicable to both approaches. Our MSE expression has a simpler form, which may facilitate the performance analysis of coarray-based MUSIC algorithms. We observe that the MSE of coarray-based MUSIC depends on both the physical array geometry and the coarray geometry. We show that, when there are more sources than the number of sensors, the MSE does not drop to zero even if the SNR approaches infinity, which agrees with the experimental results in previous studies. Next, we derive the CRB of DOAs that is applicable to sparse linear arrays. We notice that when there are more sources than the number of sensors, the CRB is strictly nonzero as the SNR goes to infinity, which is consistent with our observation on the MSE expression.
It should be mentioned that during the review process of this paper, Liu et al.\ and Koochakzadeh et al.\ also independently derived the CRB for sparse linear arrays in \cite{koochakzadeh_cram_2016,liu_cramerrao}. In this paper, we provide a more rigorous proof of the CRB's limiting properties in high SNR regions. We also include various statistical efficiency analyses by utilizing our results on the MSE, which are not present in \cite{koochakzadeh_cram_2016,liu_cramerrao}.
Finally, we verify our analytical MSE expression and analyze the statistical efficiency of different sparse linear arrays via numerical simulations. We observe good agreement between the empirical MSE and the analytical MSE, as well as complex efficiency patterns of coarray-based MUSIC.
Throughout this paper, we make use of the following notations. Given a matrix $\boldA$, we use $\boldA^T$, $\boldA^H$, and $\boldA^*$ to denote the transpose, the Hermitian transpose, and the conjugate of $\boldA$, respectively. We use $A_{ij}$ to denote the $(i,j)$-th element of $\boldA$, and $\bolda_i$ to denote the $i$-th column of $\boldA$. If $\boldA$ is full column rank, we define its pseudo inverse as $\boldA^\dagger = (\boldA^H \boldA)^{-1} \boldA^H$. We also define the projection matrix onto the null space of $\boldA^H$ as $\projp{\boldA} = \boldI - \boldA\boldA^\dagger$. Let $\boldA = [\bolda_1\,\bolda_2\,\ldots\,\bolda_N] \in \doubleC^{M \times N}$, and we define the vectorization operation as $\vecm(\boldA) = [\bolda_1^T\,\bolda_2^T\,\ldots\,\bolda_N^T]^T$, and $\matm_{M, N}(\cdot)$ as its inverse operation. We use $\otimes$ and $\odot$ to denote the Kronecker product and the Khatri-Rao product (i.e., the column-wise Kronecker product), respectively. We denote by $\Real(\boldA)$ and $\Imag(\boldA)$ the real and the imaginary parts of $\boldA$. If $\boldA$ is a square matrix, we denote its trace by $\trace(\boldA)$. In addition, we use $\boldT_M$ to denote an $M \times M$ permutation matrix whose anti-diagonal elements are one, and whose remaining elements are zero. We say a complex vector $\boldz \in \doubleC^M$ is \emph{conjugate symmetric} if $\boldT_M \boldz = \boldz^*$. We also use $\bolde_i$ to denote the $i$-th standard basis vector in Euclidean space. For instance, $\boldA \bolde_i$ yields the $i$-th column of $\boldA$, and $\bolde_i^T \boldA$ yields the $i$-th row of $\boldA$.
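To fix these conventions concretely, the following short NumPy snippet (ours, purely illustrative) shows how $\vecm(\cdot)$, $\otimes$, and $\odot$ map to library calls; the matrices are arbitrary examples.
\begin{verbatim}
import numpy as np
from scipy.linalg import khatri_rao

A = np.arange(6).reshape(2, 3)   # a 2 x 3 example matrix
B = np.ones((2, 3))
vec_A = A.flatten(order='F')     # vec(A): stack the columns of A
kron_AB = np.kron(A, B)          # Kronecker product of A and B
kr_AB = khatri_rao(A, B)         # Khatri-Rao (column-wise Kronecker) product
\end{verbatim}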
\section{The Coarray Signal Model}
We consider a sparse linear array consisting of $M$ sensors whose locations are given by $\scriptD = \{d_1, d_2, \ldots, d_M\}$. Each sensor location $d_i$ is chosen to be an integer multiple of the smallest distance between any two sensors, denoted by $d_0$. Therefore we can also represent the sensor locations using the integer set $\bar{\scriptD} = \{\bar{d}_1, \bar{d}_2, \ldots, \bar{d}_M\}$, where $\bar{d}_i = d_i/d_0$ for $i = 1,2,\ldots,M$. Without loss of generality, we assume that the first sensor is placed at the origin. We consider $K$ narrow-band sources located at DOAs $\theta_1, \theta_2, \ldots, \theta_K$
impinging on the array from the far field. Denoting $\lambda$ as the wavelength of the carrier frequency, we can express the steering vector for the $k$-th source as
\begin{equation}
\label{eq:steering-vector-basic}
\bolda(\theta_k) = \begin{bmatrix}
1 & e^{j\bar{d}_2\phi_k} & \cdots & e^{j\bar{d}_M\phi_k}
\end{bmatrix}^T,
\end{equation}
where $\phi_k = (2\pi d_0 \sin\theta_k)/\lambda$.
Hence the received signal vectors are given by
\begin{equation}
\label{eq:recv-basic}
\boldy(t) = \boldA(\theta) \boldx(t) + \boldn(t), \quad t = 1,2,\ldots,N,
\end{equation}
where $\boldA = [\bolda(\theta_1)\,\bolda(\theta_2)\,\ldots\,\bolda(\theta_K)]$ denotes the array steering matrix, $\boldx(t)$ denotes the source signal vector, $\boldn(t)$ denotes additive noise, and $N$ denotes the number of snapshots. In the following discussion, we make the following assumptions:
\begin{enumerate}[label=\textbf{A\arabic*}]
\item
\label{ass:a1-uc-source}
The source signals follow the unconditional model \cite{stoica_performance_1990} and are uncorrelated white circularly-symmetric Gaussian.
\item
\label{ass:a2-distinct-doa}
The source DOAs are distinct (i.e., $\theta_k \neq \theta_l\ \forall k \neq l$).
\item
\label{ass:a3-gaussian-noise}
The additive noise is white circularly-symmetric Gaussian and uncorrelated from the sources.
\item
\label{ass:a4-uc-snapshot}
There is no temporal correlation between snapshots.
\end{enumerate}
Under \ref{ass:a1-uc-source}--\ref{ass:a4-uc-snapshot}, the covariance matrix of the received signal vectors is given by
\begin{equation}
\label{eq:cov-baisc}
\boldR = \boldA \boldP \boldA^H + \noisevar \boldI,
\end{equation}
where $\boldP = \diagm(p_1, p_2, \ldots, p_K)$ denotes the source covariance matrix, and $\noisevar$ denotes the variance of the additive noise. By vectorizing $\boldR$, we can obtain the following coarray model:
\begin{equation}
\label{eq:coarray-full-model}
\boldr = \Ad \boldp + \noisevar \boldi,
\end{equation}
where $\Ad = \boldA^* \odot \boldA$, $\boldp = [p_1, p_2, \ldots, p_K]^T$, and $\boldi = \vecm(\boldI)$.
It has been shown in \cite{pal_nested_2010} that $\Ad$ corresponds to the steering matrix of the coarray whose sensor locations are given by $\scriptD_\mathrm{co} = \{d_m - d_n|1 \leq m,n \leq M\}$. By carefully selecting rows of $(\boldA^* \odot \boldA)$, we can construct a new steering matrix representing a virtual ULA with enhanced degrees of freedom. Because $\scriptD_\mathrm{co}$ is symmetric, this virtual ULA is centered at the origin. The sensor locations of the virtual ULA are given by $[-\Mv+1, -\Mv+2, \ldots, 0, \ldots, \Mv-1]d_0$, where $\Mv$ is defined such that $2\Mv-1$ is the size of the virtual ULA. Fig.~\ref{fig:array} provides an illustrative example of the relationship between the physical array and the corresponding virtual ULA. The observation vector of the virtual ULA is given by
\begin{equation}
\label{eq:coarray-ula-model}
\boldz = \boldF\boldr
= \boldA_\mathrm{c} \boldp + \noisevar \boldF \boldi,
\end{equation}
where $\boldF$ is the coarray selection matrix, whose detailed definition is provided in Appendix~\ref{app:f-def}, and $\boldA_\mathrm{c}$ represents the steering matrix of the virtual array. The virtual ULA can be divided into $\Mv$ overlapping uniform subarrays of size $\Mv$. The output of the $i$-th subarray is given by $\boldz_i = \boldGamma_i \boldz$ for $i = 1,2,\ldots,\Mv$, where $\boldGamma_i = [\boldzero_{\Mv \times (i-1)}\,\boldI_{\Mv \times \Mv}\,\boldzero_{\Mv \times (\Mv-i)}]$ represents the selection matrix for the $i$-th subarray.
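To make the coarray construction concrete, the following NumPy sketch (ours, not part of the original derivation) builds the covariance matrix of the co-prime array of Fig.~\ref{fig:array} for two arbitrary example sources, performs the redundancy averaging that $\boldF$ in \eqref{eq:coarray-ula-model} implements, and extracts the $\Mv$ subarray outputs $\boldz_i$. All names and parameter values are our illustrative choices.
\begin{verbatim}
import numpy as np

d_bar = np.array([0, 2, 3, 4, 6, 9])  # co-prime array of Fig. 1 (units of d_0)
M, Mv = len(d_bar), 8                 # central virtual ULA has 2*Mv - 1 sensors

phi = np.array([0.3, -0.5])           # phi_k = 2*pi*d_0*sin(theta_k)/lambda
p, sigma2 = np.array([1.0, 1.0]), 0.1 # source powers and noise variance
A = np.exp(1j * np.outer(d_bar, phi)) # steering matrix
R = A @ np.diag(p) @ A.conj().T + sigma2 * np.eye(M)  # covariance matrix

# Redundancy averaging: z[m] averages all R[i, j] whose lag
# d_bar[i] - d_bar[j] equals m - Mv; this is exactly what F implements
diffs = d_bar[:, None] - d_bar[None, :]
z = np.array([R[diffs == l].mean() for l in range(-Mv + 1, Mv)])

# Output of the i-th size-Mv subarray, z_i = Gamma_i z, for i = 1, ..., Mv
z_sub = [z[i - 1:i - 1 + Mv] for i in range(1, Mv + 1)]
\end{verbatim}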
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.8]
\draw (0,3) node[left] {\small (a)};
\draw[-latex, line width=0.3mm] (0,3) -- (10,3);
\draw (5,2.9) -- (5,3.1);
\foreach \x [evaluate=\x as \xscaled using \x/2+5] in
{-9,-8,-7,-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,6,7,8,9}
\draw (\xscaled, 3) -- (\xscaled, 3.15);
\foreach \x [evaluate=\x as \xscaled using \x/2+5] in {0,2,3,4,6,9}
\draw[fill=black] (\xscaled,3) circle (0.08);
\draw[decorate, decoration={brace}] (5,3.2) -- (5.5,3.2)
node[midway, above] {\small $d_0$};
\draw (0,2) node[left] {\small (b)};
\draw[-latex, line width=0.3mm] (0,2) -- (10,2);
\draw (5,1.9) -- (5,2.1);
\foreach \x [evaluate=\x as \xscaled using \x/2+5] in
{-9,-8,-7,-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,6,7,8,9}
\draw (\xscaled, 2) -- (\xscaled, 2.15);
\foreach \x [evaluate=\x as \xscaled using \x/2+5] in
{-9,-7,-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,6,7,9}
\draw[fill=white] (\xscaled,2) circle (0.08);
\draw (0,1) node[left] {\small (c)};
\draw[-latex, line width=0.3mm] (0,1) -- (10,1);
\draw (5,0.9) -- (5,1.1);
\foreach \x [evaluate=\x as \xscaled using \x/2+5] in
{-9,-8,-7,-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,6,7,8,9}
\draw (\xscaled, 1) -- (\xscaled, 1.15);
\foreach \x [evaluate=\x as \xscaled using \x/2+5] in
{-7,-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,6,7}
\draw[fill=white] (\xscaled,1) circle (0.08);
\foreach \x [evaluate=\x as \xscaled using \x/2+5] in
{-7,-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,6,7}
\draw[dotted] (\xscaled, 1.1) -- (\xscaled, 1.9);
\draw (1.5, 1.7) node {\small $-M_\mathrm{v}d_0$};
\draw (8.5, 1.7) node {\small $M_\mathrm{v}d_0$};
\draw[decorate, decoration={brace}]
(1.5, 2.2) -- (8.5, 2.2)
node[midway, above, yshift=0.1cm] {\small ULA of $2M_\mathrm{v}-1$ sensors};
\draw[decorate, decoration={brace, mirror}]
(1.5, 0.8) -- (5, 0.8)
node[midway, below, yshift=-0.1cm] {\small 1\textsuperscript{st} subarray of size $M_\mathrm{v}$};
\end{tikzpicture}
\caption{A co-prime array with sensors located at $[0, 2, 3, 4, 6, 9]\lambda/2$ and its coarray: (a) physical array; (b) coarray; (c) central ULA part of the coarray. }
\label{fig:array}
\end{figure}
Given the outputs of the $\Mv$ subarrays, the augmented covariance matrix of the virtual array $\Rv$ is commonly constructed via one of the following methods~\cite{pal_nested_2010,liu_remarks_2015}:
\begin{subequations}
\begin{align}
\label{eq:def-rv1}
\Rvone &= [\boldz_{\Mv}\,\boldz_{\Mv-1}\,\cdots\,\boldz_1], \\
\label{eq:def-rv2}
\Rvtwo &= \frac{1}{\Mv} \sum_{i=1}^{\Mv} \boldz_i \boldz_i^H,
\end{align}
\end{subequations}
where method~\eqref{eq:def-rv1} corresponds to DAA, while method~\eqref{eq:def-rv2} corresponds to the spatial smoothing approach.
Following the results in \cite{pal_nested_2010} and \cite{liu_remarks_2015}, $\Rvone$ and $\Rvtwo$ are related via the following equality:
\begin{equation}
\label{eq:rv1-rv2-relation}
\Rvtwo
= \frac{1}{\Mv}\Rvone^2
= \frac{1}{\Mv}(\Av\boldP\Av^H + \noisevar\boldI)^2,
\end{equation}
where $\Av$ corresponds to the steering matrix of a ULA whose sensors are located at $[0, 1, \ldots, \Mv-1]d_0$. If we design a sparse linear array such that $\Mv > M$, we immediately gain enhanced degrees of freedom by applying MUSIC to either $\Rvone$ or $\Rvtwo$ instead of $\boldR$ in \eqref{eq:cov-baisc}. For example, in Fig.~\ref{fig:array}, we have a co-prime array with $\Mv = 8 > 6 = M$. Because MUSIC is applicable only when the number of sources is less than the number of sensors, we assume that $K < \Mv$ throughout the paper. This assumption, combined with \ref{ass:a2-distinct-doa}, ensures that $\Av$ is full column rank.
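Continuing the sketch above (same variables), the snippet below constructs $\Rvone$ and $\Rvtwo$ as in \eqref{eq:def-rv1} and \eqref{eq:def-rv2} and numerically confirms the identity \eqref{eq:rv1-rv2-relation}.
\begin{verbatim}
Z = np.stack(z_sub, axis=1)        # columns z_1, ..., z_Mv
Rv1 = Z[:, ::-1]                   # direct augmentation: [z_Mv, ..., z_1]
Rv2 = (Z @ Z.conj().T) / Mv        # spatial smoothing: (1/Mv) sum z_i z_i^H

# Check Rv2 = Rv1^2 / Mv; it holds exactly here since R is the exact covariance
assert np.allclose(Rv2, Rv1 @ Rv1 / Mv)
\end{verbatim}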
It should be noted that the elements in \eqref{eq:def-rv1} are obtained via linear operations on the elements in $\boldR$, and those in \eqref{eq:def-rv2} are obtained via quadratic operations. Therefore the statistical properties of $\Rvone$ and $\Rvtwo$ are different from that of $\boldR$. Consequently, traditional performance analysis for the MUSIC algorithm based on $\boldR$ cannot be applied to the coarray-based MUSIC. For brevity, we use the term direct augmentation based MUSIC (DA-MUSIC), and the term spatial smoothing based MUSIC (SS-MUSIC) to denote the MUSIC algorithm applied to $\Rvone$ and $\Rvtwo$, respectively. In the following section, we will derive a unified analytical MSE expression for both DA-MUSIC and SS-MUSIC.
\section{The MSE of Coarray-Based MUSIC}
In practice, the true covariance matrix $\boldR$ is unobtainable, and its maximum-likelihood estimate $\hat{\boldR} = \frac{1}{N}\sum_{t=1}^N \boldy(t) \boldy(t)^H$ is used. Therefore $\boldz$, $\Rvone$, and $\Rvtwo$ are also replaced with their estimated versions $\hat{\boldz}$, $\Rvoneh$, and $\Rvtwoh$. Due to the estimation error $\Delta \boldR = \hat{\boldR} - \boldR$, the estimated noise eigenvectors will deviate from the true ones, leading to DOA estimation errors.
In general, the eigenvectors of a perturbed matrix are not well-determined \cite{stewart_error_1973}. For instance, in the very low SNR scenario, $\Delta \boldR$ may cause a subspace swap, and the estimated noise eigenvectors will deviate drastically from the true ones \cite{hawkes_performance_2001}. Nevertheless, as shown in \cite{li_performance_1993, gorokhov_unified_1996} and \cite{swindlehurst_performance_1992}, given enough samples and sufficient SNR, it is possible to obtain the closed-form expressions for DOA estimation errors via first-order analysis. Following similar ideas, we are able to derive the closed-form error expression for DA-MUSIC and SS-MUSIC, as stated in Theorem~\ref{thm:same-doa-err}.
\begin{theorem}
\label{thm:same-doa-err}
Let $\hat{\theta}_k^{(1)}$ and $\hat{\theta}_k^{(2)}$ denote the estimated values of the $k$-th DOA by DA-MUSIC and SS-MUSIC, respectively. Let $\Delta \boldr = \vecm(\hat{\boldR} - \boldR)$. Assume the signal subspace and the noise subspace are well-separated, so that $\Delta \boldr$ does not cause a subspace swap. Then
\begin{equation}
\label{eq:same-doa-err}
\hat{\theta}_k^{(1)} - \theta_k
\doteq \hat{\theta}_k^{(2)} - \theta_k
\doteq -(\gamma_k p_k)^{-1} \Real(\boldxi_k^T \Delta\boldr),
\end{equation}
where $\doteq$ denotes asymptotic equality, and
\begin{subequations}
\begin{align}
\label{eq:xi-k-def}
\boldxi_k &= \boldF^T \boldGamma^T (\boldbeta_k \otimes \boldalpha_k), \\
\label{eq:alpha-k-def}
\boldalpha_k^T &= -\bolde_k^T \Av^\dagger, \\
\label{eq:beta-k-def}
\boldbeta_k &= \projp{\Av} \Davk, \\
\label{eq:gamma-k-def}
\gamma_k &= \DavkH \projp{\Av} \Davk, \\
\label{eq:gamma-mat-def}
\boldGamma &= [\boldGamma_{\Mv}^T\,\boldGamma_{\Mv-1}^T\,\cdots\boldGamma_1^T]^T, \\
\label{eq:dav-def}
\Davk &= \frac{\partial\avk}{\partial\theta_k}.
\end{align}
\end{subequations}
\end{theorem}
\begin{proof}
See Appendix~\ref{app:thm-err-expression}.
\end{proof}
Theorem~\ref{thm:same-doa-err} can be reinforced by Proposition~\ref{prop:alpha-beta-xi-nonzero}. $\boldbeta_k \neq 0$ ensures that $\gamma_k^{-1}$ exists and \eqref{eq:same-doa-err} is well-defined, while $\boldxi_k \neq 0$ ensures that \eqref{eq:same-doa-err} depends on $\Delta \boldr$ and cannot be trivially zero.
\begin{proposition}
\label{prop:alpha-beta-xi-nonzero}
$\boldbeta_k, \boldxi_k \neq \boldzero$ for $k = 1,2,\ldots,K$.
\end{proposition}
\begin{proof}
We first show that $\boldbeta_k \neq \boldzero$ by contradiction. Assume $\boldbeta_k = \boldzero$. Then $\projp{\Av} \boldD \avk = \boldzero$, where $\boldD = \diagm(0,1,\ldots,\Mv-1)$. This implies that $\boldD \avk$ lies in the column space of $\Av$. Let $\boldh = e^{-j\phi_k} \boldD \avk$. We immediately obtain that $[\Av \ \boldh]$ is not full column rank. We now add $\Mv - K - 1$ distinct DOAs in $(-\pi/2, \pi/2)$ that are different from $\theta_1, \ldots, \theta_K$, and construct an extended steering matrix $\Avb$ of the $\Mv - 1$ distinct DOAs, $\theta_1, \ldots, \theta_{\Mv-1}$. Let $\boldB = [\Avb \ \boldh]$. It follows that $\boldB$ is also not full column rank. Because $\boldB$ is a square matrix, it is also not full row rank. Therefore there exists some non-zero $\boldc \in \doubleC^\Mv$ such that $\boldc^T \boldB = \boldzero$. Let $t_l = e^{j\phi_l}$ for $l = 1,2,\ldots,\Mv-1$, where $\phi_l = (2 \pi d_0 \sin\theta_l) / \lambda$. We can express $\boldB$ as
\begin{equation*}
\begin{bmatrix}
1 & 1 & \cdots & 1 & 0 \\
t_1 & t_2 & \cdots & t_{\Mv-1} & 1 \\
t_1^2 & t_2^2 & \cdots & t_{\Mv-1}^2 & 2t_k \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
t_1^{\Mv-1} & t_2^{\Mv-1} &
\cdots & t_{\Mv-1}^{\Mv-1} & (\Mv-1)t_k^{\Mv-2} \\
\end{bmatrix}.
\end{equation*}
We define the complex polynomial $f(x) = \sum_{l=1}^{\Mv} c_l x^{l-1}$. It can be observed that $\boldc^T \boldB = \boldzero$ is equivalent to $f(t_l) = 0$ for $l = 1,2,\ldots,\Mv-1$, and $f'(t_k) = 0$. By construction, $\theta_l$ are distinct, so $t_l$ are $\Mv - 1$ different roots of $f(x)$. Because $\boldc \neq \boldzero$, $f(x)$ is not a constant-zero polynomial, and has at most $\Mv - 1$ roots. Therefore each root $t_l$ has a multiplicity of at most one. However, $f'(t_k) = 0$ implies that $t_k$ has a multiplicity of at least two, which contradicts the previous conclusion and completes the proof of $\boldbeta_k \neq \boldzero$.
We now show that $\boldxi_k \neq \boldzero$. By the definition of $\boldF$ in Appendix~\ref{app:f-def}, each row of $\boldF$ has at least one non-zero element, and each column of $\boldF$ has at most one non-zero element. Hence $\boldF^T \boldx = \boldzero$ for some $\boldx \in \doubleC^{2\Mv - 1}$ if and only if $\boldx = \boldzero$. It suffices to show that $\boldGamma^T (\boldbeta_k \otimes \boldalpha_k) \neq \boldzero$. By the definition of $\boldGamma$, we can rewrite $\boldGamma^T (\boldbeta_k \otimes \boldalpha_k)$ as $\tilde{\boldB}_k \boldalpha_k$, where
\begin{equation*}
\tilde{\boldB}_k =
\begin{bmatrix}
\beta_{k\Mv}
& 0
& \cdots
& 0 \\
\beta_{k(\Mv-1)}
& \beta_{k\Mv}
& \cdots
& 0 \\
\vdots
& \vdots
& \ddots
& \vdots \\
\beta_{k1}
& \beta_{k2}
& \cdots
& \beta_{k\Mv} \\
0
& \beta_{k1}
& \cdots
& \beta_{k(\Mv-1)} \\
\vdots
& \vdots
& \ddots
& \vdots \\
0
& 0
& \cdots
& \beta_{k1}
\end{bmatrix},
\end{equation*}
and $\beta_{kl}$ is the $l$-th element of $\boldbeta_k$. Because $\boldbeta_k \neq \boldzero$ and $K < \Mv$, $\tilde{\boldB}_k$ is full column rank. By the definition of the pseudo inverse, we know that $\boldalpha_k \neq \boldzero$. Therefore $\tilde{\boldB}_k \boldalpha_k \neq \boldzero$, which completes the proof of $\boldxi_k \neq \boldzero$.
\end{proof}
One important implication of Theorem~\ref{thm:same-doa-err} is that DA-MUSIC and SS-MUSIC share the same first-order error expression, despite the fact that $\Rvone$ is constructed from the second-order statistics, while $\Rvtwo$ is constructed from the fourth-order statistics. Theorem~\ref{thm:same-doa-err} enables a unified analysis of the MSEs of DA-MUSIC and SS-MUSIC, which we present in Theorem~\ref{thm:MSE-MUSIC}.
\begin{theorem}
\label{thm:MSE-MUSIC}
Under the same assumptions as in Theorem~\ref{thm:same-doa-err}, the asymptotic second-order statistics of the DOA estimation errors by DA-MUSIC and SS-MUSIC share the same form:
\begin{equation}
\label{eq:MSE-MUSIC-thm}
\doubleE[(\hat{\theta}_{k_1} - \theta_{k_1})
(\hat{\theta}_{k_2} - \theta_{k_2})]
\doteq \frac{
\Real[\boldxi_{k_1}^H (\boldR \otimes \boldR^T) \boldxi_{k_2}]
}{N p_{k_1} p_{k_2} \gamma_{k_1}\gamma_{k_2}}.
\end{equation}
\end{theorem}
\begin{proof}
See Appendix~\ref{app:thm-mse-music}.
\end{proof}
By Theorem~\ref{thm:MSE-MUSIC}, it is straightforward to write the unified asymptotic MSE expression as
\begin{equation}
\label{eq:doa-mse}
\epsilon(\theta_k)
= \frac{\boldxi_k^H (\boldR \otimes \boldR^T) \boldxi_k}
{N p_k^2 \gamma_k^2}.
\end{equation}
Therefore the MSE\footnote{For brevity, when we use the acronym ``MSE'' in the following discussion, we refer to the asymptotic MSE, $\epsilon(\theta_k)$, unless explicitly stated.} depends on both the physical array geometry and the coarray geometry. The physical array geometry is captured by $\boldA$, which appears in $\boldR \otimes \boldR^T$. The coarray geometry is captured by $\Av$, which appears in $\boldxi_k$ and $\gamma_k$. Therefore, even if two arrays share the same coarray geometry, they may not share the same MSE because their physical array geometry may be different.
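As a numerical sanity check on \eqref{eq:doa-mse}, the following sketch (ours, reusing the variables of the earlier snippets) evaluates $\boldxi_k$, $\gamma_k$, and $\epsilon(\theta_k)$ directly from the definitions in Theorem~\ref{thm:same-doa-err}. The helpers \texttt{build\_F} and \texttt{build\_Gamma} are our names for the constructions of Definition~\ref{def:F} and \eqref{eq:gamma-mat-def}, and the number of snapshots $N$ is an arbitrary choice.
\begin{verbatim}
def build_F(d_bar, Mv):
    # Coarray selection matrix F of Definition 1 (Appendix A)
    M = len(d_bar)
    diffs = d_bar[:, None] - d_bar[None, :]
    F = np.zeros((2 * Mv - 1, M * M))
    for m in range(2 * Mv - 1):
        # vec() stacks columns, so entry (p, q) of R sits at index p + q*M
        idx = [p + q * M for p in range(M) for q in range(M)
               if diffs[p, q] == m - (Mv - 1)]
        if idx:
            F[m, idx] = 1.0 / len(idx)
    return F

def build_Gamma(Mv):
    # Gamma = [Gamma_Mv^T  Gamma_{Mv-1}^T  ...  Gamma_1^T]^T
    I = np.eye(2 * Mv - 1)
    return np.vstack([I[i:i + Mv, :] for i in range(Mv - 1, -1, -1)])

theta = np.arcsin(phi / np.pi)       # DOAs in radians, since d_0 = lambda/2
lv = np.arange(Mv)
Av = np.exp(1j * np.outer(lv, phi))  # virtual-ULA steering matrix
dAv = 1j * np.outer(lv, np.pi * np.cos(theta)) * Av  # d a_v / d theta

F, Gamma = build_F(d_bar, Mv), build_Gamma(Mv)
Av_pinv = np.linalg.pinv(Av)
Proj = np.eye(Mv) - Av @ Av_pinv     # projection onto the noise subspace

N, mse = 1000, []                    # number of snapshots (our choice)
for k in range(len(phi)):
    alpha = -Av_pinv[k, :]                     # alpha_k
    beta = Proj @ dAv[:, k]                    # beta_k
    gamma = (dAv[:, k].conj() @ beta).real     # gamma_k
    xi = F.T @ Gamma.T @ np.kron(beta, alpha)  # xi_k
    mse.append((xi.conj() @ np.kron(R, R.T) @ xi).real
               / (N * p[k] ** 2 * gamma ** 2))
\end{verbatim}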
It can be easily observed from \eqref{eq:doa-mse} that $\epsilon(\theta_k) \to 0$ as $N \to \infty$. However, because $p_k$ appears in both the denominator and numerator in \eqref{eq:doa-mse}, it is not obvious how the MSE varies with respect to the source power $p_k$ and noise power $\noisevar$. Let $\bar{p}_k = p_k / \noisevar$ denote the signal-to-noise ratio of the $k$-th source. Let $\bar{\boldP} = \diagm(\bar{p}_1, \bar{p}_2, \ldots, \bar{p}_K)$, and $\bar{\boldR} = \boldA \bar{\boldP} \boldA^H + \boldI$. We can then rewrite $\epsilon(\theta_k)$ as
\begin{equation}
\label{eq:doa-mse-normalized}
\epsilon(\theta_k)
= \frac{
\boldxi_k^H (\bar{\boldR} \otimes \bar{\boldR}^T) \boldxi_k
}{N \bar{p}_k^2 \gamma_k^2}.
\end{equation}
Hence the MSE depends on the SNRs instead of the absolute values of $p_k$ or $\noisevar$. To provide an intuitive understanding of how the SNR affects the MSE, we consider the case when all sources have the same power. In this case, we show in Corollary~\ref{corr:mse-desc} that the MSE asymptotically decreases as the SNR increases.
\begin{corollary}
\label{corr:mse-desc}
Assume all sources have the same power $p$. Let $\bar{p} = p/\noisevar$ denote the common SNR. Given sufficiently large $N$, the MSE $\epsilon(\theta_k)$ decreases monotonically as $\bar{p}$ increases, and
\begin{equation}\label{eq:mse-infty-snr}
\lim_{\bar{p} \to \infty}
\epsilon(\theta_k)
= \frac{1}{N\gamma_k^2}
\| \boldxi_k^H (\boldA \otimes \boldA^*) \|_2^2.
\end{equation}
\end{corollary}
\begin{proof}
The limiting expression can be derived straightforwardly from \eqref{eq:doa-mse-normalized}. For monotonicity, without loss of generality, let $p = 1$, so $\bar{p} = 1/\noisevar$. Because $f(x) = 1/x$ is monotonically decreasing on $(0, \infty)$, it suffices to show that $\epsilon(\theta_k)$ increases monotonically as $\noisevar$ increases. Assume $0 < s_1 < s_2$, and we have
\begin{equation*}
\left.\epsilon(\theta_k)\right|_{\noisevar = s_2} -
\left.\epsilon(\theta_k)\right|_{\noisevar = s_1}
= \frac{1}{N\gamma_k^2}
\boldxi_k^H \boldQ \boldxi_k,
\end{equation*}
where $\boldQ = (s_2 - s_1)[(\boldA\boldA^H) \otimes \boldI + \boldI \otimes (\boldA\boldA^H) + (s_2 + s_1)\boldI]$. Because $\boldA\boldA^H$ is positive semidefinite, both $(\boldA\boldA^H) \otimes \boldI$ and $\boldI \otimes (\boldA\boldA^H)$ are positive semidefinite. Combined with our assumption that $0 < s_1 < s_2$, we conclude that $\boldQ$ is positive definite. By Proposition~\ref{prop:alpha-beta-xi-nonzero} we know that $\boldxi_k \neq \boldzero$. Therefore $\boldxi_k^H \boldQ \boldxi_k$ is strictly greater than zero, which implies the MSE monotonically increases as $\noisevar$ increases.
\end{proof}
Because both DA-MUSIC and SS-MUSIC also work in cases where the number of sources exceeds the number of sensors, we are particularly interested in their limiting performance in such cases. As shown in Corollary~\ref{corr:mse-g-zero}, when $K \geq M$, the corresponding MSE is strictly greater than zero, even as the SNR approaches infinity. This corollary explains the ``saturation'' behavior of SS-MUSIC in the high SNR region observed in \cite{qin_generalized_2015} and \cite{pal_nested_2010}.
Another interesting implication of Corollary~\ref{corr:mse-g-zero} is that when $2 \leq K < M$, the limiting MSE is not necessarily zero. Recall that in \cite{stoica_music_1989}, it was shown that the MSE of the traditional MUSIC algorithm converges to zero as the SNR approaches infinity. It follows that both DA-MUSIC and SS-MUSIC can be outperformed by traditional MUSIC in high SNR regions when $2 \leq K < M$. Therefore, we suggest using DA-MUSIC or SS-MUSIC only when $K \geq M$.
\begin{corollary}
\label{corr:mse-g-zero}
Following the same assumptions as in Corollary~\ref{corr:mse-desc},
\begin{enumerate}
\item When $K = 1$, $\lim_{\bar{p} \to \infty} \epsilon(\theta_k) = 0$;
\item When $2 \leq K < M$, $\lim_{\bar{p} \to \infty} \epsilon(\theta_k) \geq 0$;
\item When $K \geq M$, $\lim_{\bar{p} \to \infty} \epsilon(\theta_k) > 0$.
\end{enumerate}
\end{corollary}
\begin{proof}
The right-hand side of \eqref{eq:mse-infty-snr} can be expanded into
\begin{equation*}
\frac{1}{N\gamma_k^2}
\sum_{m=1}^K \sum_{n=1}^K \|\boldxi_k^H [\bolda(\theta_m)\otimes\bolda^*(\theta_n)]\|_2^2.
\end{equation*}
By the definition of $\boldF$, $\boldF [\bolda(\theta_m)\otimes\bolda^*(\theta_m)]$ becomes
\begin{equation*}
[e^{j(\Mv-1)\phi_m}, e^{j(\Mv-2)\phi_m}, \ldots, e^{-j(\Mv-1)\phi_m}]^T.
\end{equation*}
Hence $\boldGamma \boldF [\bolda(\theta_m)\otimes\bolda^*(\theta_m)] = \bolda_{\mathrm{v}}(\theta_m) \otimes \bolda_{\mathrm{v}}^*(\theta_m)$. Observe that
\begin{equation*}
\begin{aligned}
\boldxi_k^H [\bolda(\theta_m)\otimes\bolda^*(\theta_m)]
=& (\boldbeta_k \otimes \boldalpha_k)^H
(\bolda_{\mathrm{v}}(\theta_m) \otimes \bolda_{\mathrm{v}}^*(\theta_m)) \\
=& (\boldbeta_k^H \bolda_{\mathrm{v}}(\theta_m))
(\boldalpha_k^H \bolda_{\mathrm{v}}^*(\theta_m)) \\
=& (\dot{\bolda}_{\mathrm{v}}^H(\theta_k) \projp{\Av}
\bolda_{\mathrm{v}}(\theta_m))
(\boldalpha_k^H \bolda_{\mathrm{v}}^*(\theta_m)) \\
=& 0.
\end{aligned}
\end{equation*}
We can reduce the right-hand side of \eqref{eq:mse-infty-snr} into
\begin{equation*}
\frac{1}{N\gamma_k^2}
\sum_{\substack{1\leq m,n \leq K \\ m\neq n}} \|\boldxi_k^H [\bolda(\theta_m)\otimes\bolda^*(\theta_n)]\|_2^2.
\end{equation*}
Therefore when $K = 1$, the limiting expression is exactly zero. When $2 \leq K < M$, the limiting expression is not necessarily zero because when $m \neq n$, $\boldxi_k^H [\bolda(\theta_m)\otimes\bolda^*(\theta_n)]$ is not necessarily zero.
When $K \geq M$, $\boldA$ is full row rank. Hence $\boldA \otimes \boldA^*$ is also full row rank. By Proposition~\ref{prop:alpha-beta-xi-nonzero} we know that $\boldxi_k \neq 0$, which implies that $\epsilon(\theta_k)$ is strictly greater than zero.
\end{proof}
\section{The Cram\'{e}r-Rao Bound}
The CRB for the unconditional model \eqref{eq:recv-basic} has been well studied in \cite{stoica_performance_1990}, but only for the case when the number of sources is less than the number of sensors and no prior knowledge of $\boldP$ is given. For the coarray model, the number of sources can exceed the number of sensors, and $\boldP$ is assumed to be diagonal. Therefore, the CRB derived in \cite{stoica_performance_1990} cannot be directly applied. Following \cite[Appendix 15C]{kay_fundamentals_1993}, we derive an alternative CRB for the signal model \eqref{eq:recv-basic}, under assumptions \ref{ass:a1-uc-source}--\ref{ass:a4-uc-snapshot}.
For the signal model \eqref{eq:recv-basic}, the parameter vector is defined by
\begin{equation}
\boldeta = [\theta_1, \ldots, \theta_K, p_1, \ldots, p_K, \noisevar]^T,
\end{equation}
and the $(m,n)$-th element of the Fisher information matrix (FIM) is given by \cite{kay_fundamentals_1993, stoica_performance_1990}
\begin{equation}
\label{eq:FIM-element}
\FIM_{mn} = N\trace\Bigg[
\frac{\partial{\boldR}}{\partial{\eta_m}}
\boldR^{-1}
\frac{\partial{\boldR}}{\partial{\eta_n}}
\boldR^{-1}
\Bigg].
\end{equation}
Observe that $\trace(\boldA\boldB) = \vecm(\boldA^T)^T \vecm(\boldB)$, and that $\vecm(\boldA\boldX\boldB) = (\boldB^T \otimes \boldA)\vecm(\boldX)$. We can rewrite \eqref{eq:FIM-element} as
\begin{equation*}
\begin{aligned}
\FIM_{mn}
&= N
\Bigg[\frac{\partial{\boldr}}{\partial{\eta_m}}\Bigg]^H
(\boldR^T \otimes \boldR)^{-1}
\frac{\partial{\boldr}}{\partial{\eta_n}}.
\end{aligned}
\end{equation*}
Denote the derivatives of $\boldr$ with respect to $\boldeta$ as
\begin{equation}
\label{eq:pr-peta-def}
\frac{\partial{\boldr}}{\partial{\boldeta}}
= \Bigg[
\frac{\partial{\boldr}}{\partial{\theta_1}}\,
\cdots\,
\frac{\partial{\boldr}}{\partial{\theta_K}}\,
\frac{\partial{\boldr}}{\partial{p_1}}\,
\cdots\,
\frac{\partial{\boldr}}{\partial{p_K}}\,
\frac{\partial{\boldr}}{\partial{\noisevar}}
\Bigg].
\end{equation}
The FIM can be compactly expressed by
\begin{equation}
\label{eq:fim-unpartitioned}
\FIM =
\Bigg[\frac{\partial{\boldr}}{\partial{\boldeta}}\Bigg]^H
(\boldR^T \otimes \boldR)^{-1}
\frac{\partial{\boldr}}{\partial{\boldeta}}.
\end{equation}
According to \eqref{eq:coarray-full-model}, we can compute the derivatives in \eqref{eq:pr-peta-def} and obtain
\begin{equation}
\label{eq:pr-peta-final}
\frac{\partial{\boldr}}{\partial{\boldeta}}
= \begin{bmatrix}
\DAd\boldP & \Ad & \boldi
\end{bmatrix},
\end{equation}
where $\DAd = \dot{\boldA}^* \odot \boldA + \boldA^* \odot \dot{\boldA}$, $\Ad$ and $\boldi$ follow the same definitions as in \eqref{eq:coarray-full-model}, and
\begin{equation*}
\dot{\boldA} =
\Bigg[
\frac{\partial \bolda(\theta_1)}{\partial \theta_1} \,
\frac{\partial \bolda(\theta_2)}{\partial \theta_2} \,
\cdots \,
\frac{\partial \bolda(\theta_K)}{\partial \theta_K}
\Bigg].
\end{equation*}
Note that \eqref{eq:pr-peta-final} can be partitioned into two parts, specifically, the part corresponding to DOAs and the part corresponding to the source and noise powers. We can also partition the FIM. Because $\boldR$ is positive definite, $(\boldR^T \otimes \boldR)^{-1}$ is also positive definite, and its square root $(\boldR^T \otimes \boldR)^{-1/2}$ also exists. Let
\begin{equation*}
\boldM_{\boldtheta} = (\boldR^T \otimes \boldR)^{-1/2}
\DAd\boldP,
\end{equation*}
\begin{equation*}
\boldM_{\bolds} = (\boldR^T \otimes \boldR)^{-1/2}
\big[ \Ad\, \boldi \big].
\end{equation*}
We can write the partitioned FIM as
\begin{equation*}
\FIM = N\begin{bmatrix}
\boldM_{\boldtheta}^H \boldM_{\boldtheta} &
\boldM_{\boldtheta}^H \boldM_{\bolds} \\
\boldM_{\bolds}^H \boldM_{\boldtheta} &
\boldM_{\bolds}^H \boldM_{\bolds}
\end{bmatrix}.
\end{equation*}
The CRB matrix for the DOAs is then obtained by block-wise inversion:
\begin{equation}
\label{eq:crb-final}
\CRB_{\boldtheta} = \frac{1}{N}
(\boldM_{\boldtheta}^H
\projp{\boldM_{\bolds}}
\boldM_{\boldtheta})^{-1},
\end{equation}
where $\projp{\boldM_{\bolds}} = \boldI - \boldM_{\bolds} (\boldM_{\bolds}^H \boldM_{\bolds})^{-1} \boldM_{\bolds}^H$. It is worth noting that, unlike the classical CRB for the unconditional model introduced in \cite[Remark 1]{stoica_performance_1990}, expression \eqref{eq:crb-final} is applicable even if the number of sources exceeds the number of sensors.
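Numerically, \eqref{eq:crb-final} can be evaluated along the same lines. The sketch below (ours, reusing variables from the earlier snippets) forms $\boldM_{\boldtheta}$ and $\boldM_{\bolds}$ and applies the block-wise inversion; SciPy supplies the Khatri-Rao product and the inverse matrix square root.
\begin{verbatim}
import scipy.linalg as sla

dA = 1j * np.outer(d_bar, np.pi * np.cos(theta)) * A  # d a / d theta
Ad = sla.khatri_rao(A.conj(), A)                      # A^* (Khatri-Rao) A
DAd = sla.khatri_rao(dA.conj(), A) + sla.khatri_rao(A.conj(), dA)
i_vec = np.eye(M).flatten(order='F')                  # vec(I)

W_inv_sqrt = sla.fractional_matrix_power(np.kron(R.T, R), -0.5)
M_theta = W_inv_sqrt @ (DAd @ np.diag(p))
M_s = W_inv_sqrt @ np.column_stack([Ad, i_vec])

P_perp = np.eye(M * M) - M_s @ np.linalg.pinv(M_s)
# The FIM is real symmetric, so we drop the numerical imaginary residue
CRB = np.linalg.inv((M_theta.conj().T @ P_perp @ M_theta).real) / N
\end{verbatim}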
\begin{remark}
Similar to \eqref{eq:doa-mse}, $\CRB_{\boldtheta}$ depends on the SNRs instead of the absolute
values of $p_k$ or $\noisevar$. Let $\bar{p}_k = p_k / \noisevar$, and $\bar{\boldP} = \diagm(\bar{p}_1, \bar{p}_2, \ldots, \bar{p}_K)$. We have
\begin{align}
\label{eq:m-theta-normalized}
\boldM_{\boldtheta} &= (\bar{\boldR}^T \otimes \bar{\boldR})^{-1/2}
\DAd\bar{\boldP}, \\
\label{eq:m-s-normalized}
\boldM_{\bolds} &= \noisevarinv(\bar{\boldR}^T \otimes \bar{\boldR})^{-1/2}
\big[ \Ad\, \boldi \big].
\end{align}
Substituting \eqref{eq:m-theta-normalized} and \eqref{eq:m-s-normalized} into \eqref{eq:crb-final}, the term $\noisevar$ gets canceled, and the resulting $\CRB_{\boldtheta}$ depends on the ratios $\bar{p}_k$ instead of absolute values of $p_k$ or $\noisevar$.
\end{remark}
\begin{remark}
The invertibility of the FIM depends on the coarray structure. In the noisy case, $(\boldR^T \otimes \boldR)^{-1}$ is always full rank, so the FIM is invertible if and only if $\partial\boldr/\partial\boldeta$ is full column rank. By \eqref{eq:pr-peta-final} we know that the rank of $\partial\boldr/\partial\boldeta$ is closely related to $\Ad$, the coarray steering matrix. Therefore $\CRB_{\boldtheta}$ is not valid for an arbitrary number of sources, because $\Ad$ may not be full column rank when too many sources are present.
\end{remark}
\begin{proposition}
\label{prop:crb-snr-infty}
Assume all sources have the same power $p$, and $\partial\boldr / \partial\boldeta$ is full column rank. Let $\bar{p} = p / \noisevar$.
\begin{enumerate}[label=(\arabic*)]
\item If $K < M$, and $\lim_{\bar{p} \to \infty} \CRB_{\boldtheta}$ exists, it is zero under mild conditions.
\item If $K \geq M$, and $\lim_{\bar{p} \to \infty} \CRB_{\boldtheta}$ exists, it is positive definite.
\end{enumerate}
\end{proposition}
\begin{proof}
See Appendix~\ref{app:crb-snr-infty}.
\end{proof}
While infinite SNR is unachievable in practice, Proposition~\ref{prop:crb-snr-infty} has useful theoretical implications. When $K < M$, the limiting MSE \eqref{eq:mse-infty-snr} in Corollary~\ref{corr:mse-desc} is not necessarily zero. However, Proposition~\ref{prop:crb-snr-infty} reveals that the CRB may approach zero as the SNR goes to infinity. This observation implies that both DA-MUSIC and SS-MUSIC may have poor statistical efficiency in high SNR regions. When $K \geq M$, Proposition~\ref{prop:crb-snr-infty} implies that the CRB of each DOA converges to a positive constant, which is consistent with Corollary~\ref{corr:mse-g-zero}.
\section{Numerical Analysis}
\label{sec:numerical-results}
In this section, we numerically analyze DA-MUSIC and SS-MUSIC by utilizing \eqref{eq:doa-mse} and \eqref{eq:crb-final}. We first verify the MSE expression \eqref{eq:MSE-MUSIC-thm} introduced in Theorem~\ref{thm:MSE-MUSIC} through Monte Carlo simulations.
We then examine the application of \eqref{eq:same-doa-err} in predicting the resolvability of two closely placed sources, and analyze the asymptotic efficiency of both estimators from various aspects.
Finally, we investigate how the number of sensors affects the asymptotic MSE.
In all experiments, we define the signal-to-noise ratio (SNR) as
\begin{equation*}
\mathrm{SNR} = 10\log_{10}\frac{\min_{k=1,2,\ldots,K} p_k}{\noisevar},
\end{equation*}
where $K$ is the number of sources.
Throughout Sections~\ref{subsec:verification}, \ref{subsec:res} and \ref{subsec:eff}, we consider three types of linear arrays with the following sensor configurations:
\begin{itemize}
\item Co-prime Array~\cite{pal_coprime_2011}: $[0,3,5,6,9,10,12,15,20,25]\lambda/2$
\item Nested Array~\cite{pal_nested_2010}: $[1,2,3,4,5,10,15,20,25,30]\lambda/2$
\item MRA~\cite{ishiguro_minimum_1980}: $[0,1,4,10,16,22,28,30,33,35]\lambda/2$
\end{itemize}
All three arrays share the same number of sensors, but different apertures.
\subsection{Numerical Verification}\label{subsec:verification}
We first verify \eqref{eq:doa-mse} via numerical simulations. We consider 11 sources with equal power, evenly placed between $-67.50^\circ$ and $56.25^\circ$; the number of sources exceeds the number of sensors. We evaluate the difference between the analytical MSE and the empirical MSE under different combinations of SNR and number of snapshots. The analytical MSE is defined by
\begin{equation*}
\mathrm{MSE}_\mathrm{an}
= \frac{1}{K}\sum_{k=1}^K \epsilon(\theta_k),
\end{equation*}
and the empirical MSE is defined by
\begin{equation*}
\mathrm{MSE}_\mathrm{em}
= \frac{1}{KL}\sum_{l=1}^L\sum_{k=1}^K
\big(\hat{\theta}_k^{(l)} - \theta_k^{(l)}\big)^2,
\end{equation*}
where $\theta_k^{(l)}$ is the $k$-th DOA in the $l$-th trial, and $\hat{\theta}_k^{(l)}$ is the corresponding estimate.
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\linewidth]{journal_figures/acc_all_10000.png}
\caption{$|\MSEan - \MSEem|/\MSEem$ for different types of arrays under different numbers of snapshots and different SNRs.}
\label{fig:acc_all}
\end{figure}
Fig.~\ref{fig:acc_all} illustrates the relative errors between $\MSEan$ and $\MSEem$ obtained from 10,000 trials under various scenarios. It can be observed that $\MSEem$ and $\MSEan$ agree very well given enough snapshots and a sufficiently high SNR. It should be noted that at 0dB SNR, \eqref{eq:same-doa-err} is quite accurate when 250 snapshots are available. In addition, there is no significant difference between the relative errors obtained from DA-MUSIC and those from SS-MUSIC. These observations are consistent with our assumptions, and verify Theorem~\ref{thm:same-doa-err} and Theorem~\ref{thm:MSE-MUSIC}.
We observe that in some of the low SNR regions, $|\MSEan - \MSEem|/\MSEem$ appears to be smaller even if the number of snapshots is limited. In such regions, $\MSEem$ actually ``saturates'', and $\MSEan$ happens to be close to the saturated value. Therefore, this observation does not imply that \eqref{eq:doa-mse} is valid in such regions.
\subsection{Prediction of Resolvability}\label{subsec:res}
One direct application of Theorem~\ref{thm:MSE-MUSIC} is predicting the resolvability of two closely located sources. We consider two sources with equal power, located at $\theta_1 = 30^\circ - \Delta\theta/2$, and $\theta_2 = 30^\circ + \Delta\theta/2$, where $\Delta\theta$ varies from $0.3^\circ$ to $3.0^\circ$. We say the two sources are correctly resolved if the MUSIC algorithm is able to identify two sources, and the two estimated DOAs satisfy $|\hat{\theta}_i - \theta_i| < \Delta\theta/2$, for $i \in \{1,2\}$. The probability of resolution is computed from 500 trials. For all trials, the number of snapshots is fixed at 500, the SNR is set to 0dB, and SS-MUSIC is used.
For illustration purposes, we analytically predict the resolvability of the two sources via the following simple criterion:
\begin{equation}
\label{eq:res-criterion}
\epsilon(\theta_1) + \epsilon(\theta_2)
\underset{\mathrm{Resolvable}}{\overset{\mathrm{Unresolvable}}{\gtreqless}} \Delta\theta.
\end{equation}
Readers are directed to \cite{liu_statistical_2007} for a more comprehensive criterion.
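In the notation of the earlier sketches, \eqref{eq:res-criterion} amounts to a one-line check (mirroring the criterion exactly as stated above):
\begin{verbatim}
dtheta = abs(theta[1] - theta[0])          # separation in radians
resolvable = (mse[0] + mse[1]) < dtheta    # predicted resolvability
\end{verbatim}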
Fig.~\ref{fig:res} illustrates the resolution performance of the three arrays under different $\Delta\theta$, as well as the thresholds predicted by \eqref{eq:res-criterion}. The MRA shows the best resolution performance among the three arrays, which can be explained by the fact that the MRA has the largest aperture. The co-prime array, with the smallest aperture, shows the worst resolution performance. Despite the differences in resolution performance, the probability of resolution of each array drops to nearly zero at the predicted threshold. This confirms that \eqref{eq:doa-mse} provides a convenient way of predicting the resolvability of two close sources.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.6]{journal_figures/resolution_new}
\caption{Probability of resolution vs. source separation, obtained from 500 trials. The number of snapshots is fixed at 500, and the SNR is set to 0dB.}
\label{fig:res}
\end{figure}
\subsection{Asymptotic Efficiency Study}\label{subsec:eff}
In this section, we utilize \eqref{eq:doa-mse} and \eqref{eq:crb-final} to study the asymptotic statistical efficiency of DA-MUSIC and SS-MUSIC under different array geometries and parameter settings. We define their average efficiency as
\begin{equation}
\label{eq:avg-eff}
\kappa = \frac{\trace{\CRB_{\boldtheta}}}{\sum_{k=1}^K \epsilon(\theta_k)}.
\end{equation}
For efficient estimators we expect $\kappa = 1$, while for inefficient estimators we expect $0 \leq \kappa < 1$.
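In the toy setting of the earlier sketches, \eqref{eq:avg-eff} is again a one-line computation:
\begin{verbatim}
kappa = np.trace(CRB) / sum(mse)   # expect 0 <= kappa <= 1
\end{verbatim}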
We first compare the $\kappa$ value under different SNRs for the three different arrays. We consider three cases: $K=1$, $K=6$, and $K = 12$. The $K$ sources are located at $\{-60^\circ + [120(k-1)/(K-1)]^\circ|k = 1,2,\ldots, K\}$, and all sources have the same power. As shown in Fig.~\subref*{fig:eff-1}, when only one source is present, $\kappa$ increases as the SNR increases for all three arrays. However, none of the arrays leads to efficient DOA estimation. Interestingly, despite being the least efficient geometry in the low SNR region, the co-prime array achieves higher efficiency than the nested array in the high SNR region.
When $K = 6$, we can observe in Fig.~\subref*{fig:eff-6} that $\kappa$ decreases to zero as the SNR increases. This rather surprising behavior suggests that both DA-MUSIC and SS-MUSIC are not statistically efficient methods for DOA estimation when the number of sources is greater than one and less than the number of sensors. This is consistent with the implication of Proposition~\ref{prop:crb-snr-infty} when $K < M$.
When $K = 12$, the number of sources exceeds the number of sensors. We can observe in Fig.~\subref*{fig:eff-12} that $\kappa$ also decreases as the SNR increases. However, unlike the case when $K = 6$, $\kappa$ converges to a positive value instead of zero.
The above observations imply that DA-MUSIC and SS-MUSIC achieve higher degrees of freedom at the cost of decreased statistical efficiency. When statistical efficiency is a concern and the number of sources is less than the number of sensors, one might consider applying MUSIC directly to the sample estimate of the covariance matrix $\boldR$ defined in \eqref{eq:cov-baisc} \cite{vaidyanathan_direct-music_2012}.
\begin{figure}[ht]
\centering
\subfloat[]{%
\includegraphics[scale=0.54]{journal_figures/eff_1}
\label{fig:eff-1}
}
\subfloat[]{%
\includegraphics[scale=0.54]{journal_figures/eff_6}
\label{fig:eff-6}
}
\subfloat[]{%
\includegraphics[scale=0.54]{journal_figures/eff_12}
\label{fig:eff-12}
}
\caption{Average efficiency vs. SNR: (a) $K=1$, (b) $K=6$, (c) $K=12$.}
\end{figure}
Next, we analyze how $\kappa$ is affected by the angular separation. Two sources located at $-\Delta\theta$ and $\Delta\theta$ are considered. We compute the $\kappa$ values under different choices of $\Delta\theta$ for all three arrays. For reference, we also include the empirical results obtained from 1000 trials. To satisfy the asymptotic assumption, the number of snapshots is fixed at 1000 for each trial.
As shown in Fig.~\subref*{fig:eff-sep-mra}--\subref*{fig:eff-sep-coprime}, the overall statistical efficiency decreases as the SNR increases from 0dB to 10dB for all three arrays, which is consistent with our previous observation in Fig.~\subref*{fig:eff-6}. We can also observe that the relationship between $\kappa$ and the normalized angular separation $\Delta\theta/\pi$ is rather complex, as opposed to that of the traditional MUSIC algorithm (cf.\ \cite{stoica_music_1989}). The statistical efficiency of DA-MUSIC and SS-MUSIC is highly dependent on the array geometry and the angular separation.
\begin{figure}[ht]
\centering
\subfloat[]{%
\includegraphics[scale=0.55]{journal_figures/eff_sep_mra}
\label{fig:eff-sep-mra}
}
\subfloat[]{%
\includegraphics[scale=0.55]{journal_figures/eff_sep_nested}
\label{fig:eff-sep-nested}
}
\subfloat[]{%
\includegraphics[scale=0.55]{journal_figures/eff_sep_coprime}
\label{fig:eff-sep-coprime}
}
\caption{Average efficiency vs. angular separation: (a) MRA, (b) nested array, (c) co-prime array. The solid lines and dashed lines are analytical values obtained from \eqref{eq:avg-eff}. The circles and crosses are empirical results averaged over 1000 trials.}
\end{figure}
\subsection{MSE vs. Number of Sensors}\label{subsec:mse-n-sensor}
In this section, we investigate how the number of sensors affects the asymptotic MSE, $\epsilon(\theta_k)$. We consider three types of sparse linear arrays: co-prime arrays, nested arrays, and MRAs. In this experiment, the co-prime arrays are generated by co-prime pairs $(q, q+1)$ for $q = 2,3,\ldots, 12$. The nested arrays are generated by parameter pairs $(q+1, q)$ for $q = 2,3,\ldots,12$. The MRAs are constructed according to \cite{ishiguro_minimum_1980}. We consider two cases: the single-source case where $K = 1$, and the underdetermined case where $K = M$. For the former case, we placed the only source at $0^\circ$. For the latter case, we placed the sources uniformly between $-60^\circ$ and $60^\circ$. We set $\SNR = 0\dB$ and $N = 1000$. The empirical MSEs were obtained from 500 trials. SS-MUSIC was used in all the trials.
\begin{figure}[h]
\centering
\subfloat[]{%
\includegraphics[scale=0.56]{journal_figures/mse_n_sensor_k1}
\label{fig:mse-n-sensor-k1}
}
\subfloat[]{%
\includegraphics[scale=0.56]{journal_figures/mse_n_sensor_km}
\label{fig:mse-n-sensor-km}
}
\caption{MSE vs. $M$: (a) $K=1$, (b) $K=M$. The solid lines are analytical results. The ``$+$'', ``$\circ$'', and ``$\diamond$'' markers denote empirical results obtained from 500 trials. The dashed lines are trend lines used for comparison.}
\end{figure}
In Fig.~\subref*{fig:mse-n-sensor-k1}, we observe that when $K = 1$, the MSE decreases at a rate of approximately $\scriptO(M^{-4.5})$ for all three arrays. In Fig.~\subref*{fig:mse-n-sensor-km}, we observe that when $K = M$, the MSE only decreases at a rate of approximately $\scriptO(M^{-3.5})$. In both cases, the MRAs and the nested arrays achieve lower MSEs than the co-prime arrays. Another interesting observation is that for all three arrays, the MSE decreases faster than $\scriptO(M^{-3})$. Recall that for an $M$-sensor ULA, the asymptotic MSE of traditional MUSIC decreases at a rate of $\scriptO(M^{-3})$ as $M \to \infty$ \cite{stoica_music_1989}. This observation suggests that, given the same number of sensors, these sparse linear arrays can achieve higher estimation accuracy than ULAs when the number of sensors is large.
\section{Conclusion}
In this paper, we reviewed the coarray signal model and derived the asymptotic MSE expression for two coarray-based MUSIC algorithms, namely DA-MUSIC and SS-MUSIC. We theoretically proved that the two MUSIC algorithms share the same asymptotic MSE expression. Our analytical MSE expression is more revealing and can be applied to various types of sparse linear arrays, such as co-prime arrays, nested arrays, and MRAs. In addition, our MSE expression is also valid when the number of sources exceeds the number of sensors. We also derived the CRB for sparse linear arrays, and analyzed the statistical efficiency of typical sparse linear arrays. Our results will benefit future research on the performance analysis and optimal design of sparse linear arrays. Throughout our derivations, we assumed that the array is perfectly calibrated. In the future, it will be interesting to extend the results in this paper to cases when model errors are present.
Additionally, we will further investigate how the number of sensors affects the MSE and the CRB for sparse linear arrays, as well as the possibility of deriving closed-form expressions in the case of a large number of sensors.
\appendices
\section{Definition and Properties of the coarray selection matrix}
\label{app:f-def}
According to \eqref{eq:cov-baisc},
\begin{equation*}
R_{mn} = \sum_{k=1}^K p_k \exp[
j(\bar{d}_m - \bar{d}_n)\phi_k
] + \delta_{mn}\noisevar,
\end{equation*}
where $\delta_{mn}$ denotes Kronecker's delta. This equation implies that the $(m,n)$-th element of $\boldR$ is associated with the difference $(\bar{d}_m - \bar{d}_n)$. To capture this property, we introduce the difference matrix $\boldDelta$ such that $\Delta_{mn} = \bar{d}_m - \bar{d}_n$. We also define the weight function $\omega: \doubleZ \mapsto \doubleZ$ as (see \cite{pal_nested_2010} for details)
\begin{equation*}
\omega(l) = |\{(m,n)|\Delta_{mn} = l\} |,
\end{equation*}
where $|\scriptA|$ denotes the cardinality of the set $\scriptA$. Intuitively, $\omega(l)$ counts the number of all possible pairs of $(\bar{d}_m, \bar{d}_n)$ such that $\bar{d}_m - \bar{d}_n = l$. Clearly, $\omega(l) = \omega(-l)$.
\begin{definition}
\label{def:F}
The coarray selection matrix $\boldF$ is a $(2\Mv-1) \times M^2$ matrix satisfying
\begin{equation}
\label{eq:f-def}
F_{m,p + (q-1)M} = \begin{cases}
\frac{1}{\omega(m - \Mv)} &,
\Delta_{pq} = m - \Mv,
\\
0 &, \mathrm{otherwise},
\end{cases}
\end{equation}
for $m = 1,2,\ldots,2\Mv-1, p = 1,2,\ldots,M, q=1,2,\ldots,M$.
\end{definition}
To better illustrate the construction of $\boldF$, we consider a toy array whose sensor locations are given by $\{0, d_0, 4d_0\}$. The corresponding difference matrix of this array is
\begin{equation*}
\boldDelta = \begin{bmatrix}
0 & -1 & -4 \\
1 & 0 & -3 \\
4 & 3 & 0
\end{bmatrix}.
\end{equation*}
The ULA part of the difference coarray consists of three sensors located at $-d_0$, $0$, and $d_0$. The weight function satisfies $\omega(-1) = \omega(1) = 1$, and $\omega(0) = 3$, so $\Mv = 2$. We can write the coarray selection matrix as
\begin{equation*}
\boldF = \begin{bmatrix}
0 & 0 & 0 &
1 & 0 & 0 &
0 & 0 & 0 \\
\frac{1}{3} & 0 & 0 &
0 & \frac{1}{3} & 0 &
0 & 0 & \frac{1}{3} \\
0 & 1 & 0 &
0 & 0 & 0 &
0 & 0 & 0
\end{bmatrix}.
\end{equation*}
If we pre-multiply the vectorized covariance matrix $\boldr$ by $\boldF$, we obtain the observation vector of the virtual ULA (defined in \eqref{eq:coarray-ula-model}):
\begin{equation*}
\boldz = \begin{bmatrix}
z_1 \\
z_2 \\
z_3 \\
\end{bmatrix}
=
\begin{bmatrix}
R_{12} \\
\frac{1}{3}(R_{11} + R_{22} + R_{33}) \\
R_{21}
\end{bmatrix}.
\end{equation*}
It can be seen that $z_m$ is obtained by averaging all the elements in $\boldR$ that correspond to the difference $m - \Mv$, for $m = 1,2,\ldots,2\Mv-1$.
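For reference, the hypothetical helper \texttt{build\_F} from the earlier sketch reproduces both the weight function and the matrix $\boldF$ of this toy example:
\begin{verbatim}
d_toy = np.array([0, 1, 4])
diffs = d_toy[:, None] - d_toy[None, :]
omega = {l: int((diffs == l).sum()) for l in (-1, 0, 1)}  # {-1:1, 0:3, 1:1}
F_toy = build_F(d_toy, Mv=2)
print(F_toy)   # matches the 3 x 9 matrix displayed above
\end{verbatim}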
Based on Definition~\ref{def:F}, we now derive several useful properties of $\boldF$.
\begin{lemma}
\label{lem:f-special-sym}
$F_{m,p+(q-1)M} = F_{2\Mv-m, q+(p-1)M}$ for $m = 1,2,\ldots,2\Mv-1, p = 1,2,\ldots,M, q=1,2,\ldots,M$.
\end{lemma}
\begin{proof}
If $F_{m,p+(q-1)M} = 0$, then $\Delta_{pq} \neq m - \Mv$. Because $\Delta_{qp} = -\Delta_{pq}$, $\Delta_{qp} \neq -(m - \Mv)$. Hence $(2\Mv-m)-\Mv = -(m - \Mv) \neq \Delta_{qp}$, which implies that $F_{2\Mv-m, q+(p-1)M}$ is also zero.
If $F_{m,p+(q-1)M} \neq 0$, then $\Delta_{pq} = m - \Mv$ and $F_{m,p+(q-1)M} = 1/\omega(m - \Mv)$. Note that $(2\Mv-m)-\Mv = -(m - \Mv) = -\Delta_{pq} = \Delta_{qp}$. We thus have $F_{2\Mv-m, q+(p-1)M} = 1/\omega(-(m - \Mv)) = 1/\omega(m - \Mv) = F_{m,p+(q-1)M}$.
\end{proof}
\begin{lemma}
\label{lem:fr-conj-sym}
Let $\boldR \in \doubleC^{M \times M}$ be Hermitian symmetric. Then $\boldz = \boldF \vecm(\boldR)$ is conjugate symmetric.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:f-special-sym} and $\boldR = \boldR^H$,
\begin{equation*}
\begin{split}
z_m &= \sum_{p=1}^{M} \sum_{q=1}^{M} F_{m, p+(q-1)M} R_{pq} \\
&= \sum_{q=1}^{M} \sum_{p=1}^{M} F_{2\Mv - m, q+(p-1)M} R_{qp}^* \\
&= z_{2\Mv - m}^*.
\end{split}
\end{equation*}
\end{proof}
\begin{lemma}
\label{lem:ftz-hermitian}
Let $\boldz \in \doubleC^{2\Mv-1}$ be conjugate symmetric. Then $\matm_{M,M}(\boldF^T \boldz)$ is Hermitian symmetric.
\end{lemma}
\begin{proof}
Let $\boldH = \matm_{M,M}(\boldF^T \boldz)$. Then
\begin{equation}
H_{pq} = \sum_{m=1}^{2\Mv-1} z_m F_{m,p+(q-1)M}.
\end{equation}
We know that $\boldz$ is conjugate symmetric, so $z_{m} = z_{2\Mv-m}^*$. Therefore, by Lemma~\ref{lem:f-special-sym}
\begin{equation}
\begin{split}
H_{pq} &= \sum_{m=1}^{2\Mv-1} z_{2\Mv-m}^* F_{2\Mv-m,q+(p-1)M} \\
&= \Bigg[\sum_{m'=1}^{2\Mv-1} z_{m'} F_{m', q+(p-1)M}\Bigg]^* \\
&= H_{qp}^*.
\end{split}
\end{equation}
\end{proof}
\section{Proof of Theorem~\ref{thm:same-doa-err}}
\label{app:thm-err-expression}
We first derive the first-order expression of DA-MUSIC. Denote the eigendecomposition of $\Rvone$ by
\begin{equation*}
\Rvone = \Es \boldLambda_\mathrm{s1} \Es^H +
\En \boldLambda_\mathrm{n1} \En^H,
\end{equation*}
where $\Es$ and $\En$ are the eigenvectors of the signal subspace and the noise subspace, respectively, and $\Lsone, \Lnone$ are the corresponding eigenvalues. Specifically, we have $\boldLambda_\mathrm{n1} = \noisevar\boldI$.
Let $\Rvonet = \Rvone + \dRvone$, $\Enonet = \En + \dEnone$, and $\Lnonet = \Lnone + \dLnone$ be the perturbed versions of $\Rvone$, $\En$, and $\Lnone$. The following equality holds:
\begin{equation*}
(\Rvone + \dRvone)(\En + \dEnone) = (\En + \dEnone)(\Lnone + \dLnone).
\end{equation*}
If the perturbation is small, we can omit high-order terms and obtain~\cite{swindlehurst_performance_1992,li_performance_1993,stewart_error_1973}
\begin{equation}
\label{eq:AvH-delta-En-1}
\Av^H \dEnone \doteq -\boldP^{-1} \Av^\dagger \dRvone \En.
\end{equation}
Because $\boldP$ is diagonal, for a specific $\theta_k$, we have
\begin{equation}
\label{eq:ah-en-expression}
\bolda^H(\theta_k) \dEnone \doteq
-p_k^{-1} \bolde_k^T \Av^\dagger \dRvone \En,
\end{equation}
where $\bolde_k$ is the $k$-th column of the identity matrix $\boldI_{K\times K}$. Based on the conclusion in Appendix B of \cite{stoica_music_1989}, under sufficiently small perturbations, the error expression of DA-MUSIC for the $k$-th DOA is given by
\begin{equation}
\begin{aligned}
\label{eq:doa-err-expression-stoica}
\hat{\theta}_k^{(1)} - \theta_k
\doteq -\frac{\Real[\avkH \dEnone \En^H \Davk]}{\DavkH \En \En^H \Davk},
\end{aligned}
\end{equation}
where $\Davk = \partial\avk / \partial\theta_k$.
Substituting \eqref{eq:ah-en-expression} into \eqref{eq:doa-err-expression-stoica} gives
\begin{equation}
\label{eq:doa-err-expression}
\hat{\theta}_k^{(1)} - \theta_k
\doteq -\frac{
\Real[\bolde_k^T \Av^\dagger \dRvone \En \En^H \Davk]
}{
p_k \DavkH \En \En^H \Davk
}.
\end{equation}
Because $\vecm(\boldA\boldX\boldB) = (\boldB^T \otimes \boldA) \vecm(\boldX)$ and $\En \En^H = \projp{\Av}$, we can use the notations introduced in \eqref{eq:alpha-k-def}--\eqref{eq:gamma-k-def} to express \eqref{eq:doa-err-expression} as
\begin{equation}
\label{eq:doa-err-simple-1}
\hat{\theta}_k^{(1)} - \theta_k
\doteq
-(\gamma_k p_k)^{-1} \Real[(\boldbeta_k \otimes \boldalpha_k)^T \drvone],
\end{equation}
where $\drvone = \vecm(\dRvone)$.
Note that $\Rvonet$ is constructed from $\hat{\boldR}$. It follows that $\dRvone$ actually depends on $\Delta\boldR$, which is the perturbation part of the covariance matrix $\boldR$. By the definition of $\Rvone$,
\begin{equation*}
\drvone = \vecm(\begin{bmatrix}
\boldGamma_{\Mv}\Delta\boldz &
\cdots &
\boldGamma_2\Delta\boldz &
\boldGamma_1\Delta\boldz
\end{bmatrix})
= \boldGamma\boldF\Delta\boldr,
\end{equation*}
where $\boldGamma = [\boldGamma_{\Mv}^T\,\boldGamma_{\Mv-1}^T\,\cdots\boldGamma_1^T]^T$ and $\Delta\boldr = \vecm(\Delta\boldR)$.
Let $\boldxi_k = \boldF^T \boldGamma^T (\boldbeta_k \otimes \boldalpha_k)$. We can now express \eqref{eq:doa-err-simple-1} in terms of $\Delta\boldr$ as
\begin{equation}
\label{eq:doa-err-simple-2}
\hat{\theta}_k^{(1)} - \theta_k
\doteq -(\gamma_k p_k)^{-1} \Real(\boldxi_k^T \Delta \boldr),
\end{equation}
which completes the first part of the proof.
We next consider the first-order error expression of SS-MUSIC. From \eqref{eq:rv1-rv2-relation} we know that $\Rvtwo$ shares the same eigenvectors as $\Rvone$. Hence the eigendecomposition of $\Rvtwo$ can be expressed by
\begin{equation*}
\Rvtwo = \Es \boldLambda_\mathrm{s2} \Es^H +
\En \boldLambda_\mathrm{n2} \En^H,
\end{equation*}
where $\Lstwo$ and $\Lntwo$ are the eigenvalues of the signal subspace and noise subspace. Specifically, we have $\boldLambda_\mathrm{n2} = \noisevarsq/\Mv \boldI$.
Note that $\Rvtwo = (\Av\boldP\AvH + \noisevar\boldI)^2/\Mv$. Following a similar approach to the one we used to obtain \eqref{eq:AvH-delta-En-1}, we get
\begin{equation*}
\AvH \dEntwo \doteq -\Mv \boldP^{-1}
(\boldP \AvH \Av + 2\noisevar\boldI)^{-1} \Av^\dagger \dRvtwo \En,
\end{equation*}
where $\dEntwo$ is the perturbation of the noise eigenvectors produced by $\dRvtwo$. After omitting high-order terms, $\dRvtwo$ is given by
\begin{equation*}
\dRvtwo \doteq
\frac{1}{\Mv} \sum_{k=1}^{\Mv}
(\boldz_k \Delta\boldz_k^H + \Delta\boldz_k \boldz_k^H).
\end{equation*}
According to~\cite{pal_nested_2010}, each subarray observation vector $\boldz_k$ can be expressed by
\begin{equation}
\boldz_k = \Av \boldPsi^{\Mv-k} \boldp + \noisevar \boldi_{\Mv-k+1},
\end{equation}
for $k = 1,2,\ldots,\Mv$, where $\boldi_l$ is a vector of length $\Mv$ whose elements are zero except for the $l$-th element being one, and
\begin{equation*}
\boldPsi = \diagm(e^{-j\phi_1}, e^{-j\phi_2}, \ldots, e^{-j\phi_K}).
\end{equation*}
Observe that
\begin{equation*}
\sum_{k=1}^{\Mv} \noisevar \boldi_{\Mv-k+1} \Delta\boldz_k^H
= \noisevar \dRvone^H,
\end{equation*}
and
\begin{equation*}
\begin{split}
&\sum_{k=1}^{\Mv} \Av \boldPsi^{\Mv-k}\boldp \Delta\boldz_k^H \\
=&\Av \boldP \begin{bmatrix}
e^{-j(\Mv-1)\phi_1} & e^{-j(\Mv-2)\phi_1} & \cdots & 1 \\
e^{-j(\Mv-1)\phi_2} & e^{-j(\Mv-2)\phi_2} & \cdots & 1 \\
\vdots & \vdots & \ddots & \vdots \\
e^{-j(\Mv-1)\phi_K} & e^{-j(\Mv-2)\phi_K} & \cdots & 1
\end{bmatrix}
\begin{bmatrix}
\Delta\boldz_1^H \\
\Delta\boldz_2^H \\
\vdots \\
\Delta\boldz_{\Mv}^H
\end{bmatrix} \\
=&\Av \boldP (\boldT_{\Mv} \Av)^H \boldT_{\Mv} \dRvoneH \\
=&\Av \boldP \AvH \dRvoneH,
\end{split}
\end{equation*}
where $\boldT_{\Mv}$ is an $\Mv \times \Mv$ permutation matrix whose anti-diagonal elements are one, and whose remaining elements are zero. Because $\Delta\boldR = \Delta\boldR^H$, by Lemma~\ref{lem:fr-conj-sym} we know that $\Delta\boldz$ is conjugate symmetric. According to the definition of $\Rvone$, it is straightforward to show that $\dRvone = \dRvone^H$ also holds. Hence
\begin{equation*}
\dRvtwo \doteq \frac{1}{\Mv}
[(\Av\boldP\AvH + 2\noisevar\boldI)\dRvone +
\dRvone\Av\boldP\AvH].
\end{equation*}
Substituting $\dRvtwo$ into the expression of $\AvH \dEntwo$, and utilizing the property that $\AvH\En = \boldzero$,
we can express $\AvH \dEntwo$ as
\begin{equation*}
-\boldP^{-1}
(\boldP \AvH \Av + 2\noisevar\boldI)^{-1}
\Av^\dagger
(\Av\boldP\AvH + 2\noisevar\boldI)\dRvone \En.
\end{equation*}
Observe that
\begin{equation*}
\begin{aligned}
\Av^\dagger(\Av\boldP\AvH + 2\noisevar\boldI)
=&(\AvH\Av)^{-1}\AvH(\Av\boldP\AvH + 2\noisevar\boldI) \\
=&[\boldP\AvH + 2\noisevar(\AvH\Av)^{-1}\AvH] \\
=&(\boldP\AvH\Av + 2\noisevar\boldI)\Av^\dagger.
\end{aligned}
\end{equation*}
Hence the term $(\boldP\AvH\Av + 2\noisevar\boldI)$ gets canceled and we obtain
\begin{equation}
\AvH \dEntwo \doteq -\boldP^{-1} \Av^\dagger \dRvone \En,
\end{equation}
which coincides with the first-order error expression of $\AvH \dEnone$.
\section{Proof of Theorem~\ref{thm:MSE-MUSIC}}
\label{app:thm-mse-music}
Before proceeding to the main proof, we introduce the following definition.
\begin{definition}
\label{def:cab}
Let $\boldA = [\bolda_1\,\bolda_2\,\ldots\bolda_N] \in \doubleR^{N \times N}$, and $\boldB = [\boldb_1\,\boldb_2\,\ldots\boldb_N] \in \doubleR^{N \times N}$. The structured matrix $\boldC_{\boldA\boldB} \in \doubleR^{N^2 \times N^2}$ is defined as
\begin{equation*}
\boldC_{\boldA\boldB} =
\begin{bmatrix}
\bolda_1 \boldb_1^T &
\bolda_2 \boldb_1^T &
\ldots &
\bolda_N \boldb_1^T \\
\bolda_1 \boldb_2^T &
\bolda_2 \boldb_2^T &
\ldots &
\bolda_N \boldb_2^T \\
\vdots & \vdots & \ddots & \vdots \\
\bolda_1 \boldb_N^T &
\bolda_2 \boldb_N^T &
\ldots &
\bolda_N \boldb_N^T \\
\end{bmatrix}.
\end{equation*}
\end{definition}
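For concreteness, the following minimal sketch (in Python with \texttt{numpy}; the helper name \texttt{C} is ours) builds $\boldC_{\boldA\boldB}$ directly from this definition; it is reused in the numerical checks below:
\begin{verbatim}
import numpy as np

def C(A, B):
    # Structured matrix of the definition above:
    # block (m, n) equals a_n b_m^T.
    N = A.shape[0]
    out = np.zeros((N * N, N * N))
    for m in range(N):      # block-row index
        for n in range(N):  # block-column index
            out[m*N:(m+1)*N, n*N:(n+1)*N] = np.outer(A[:, n], B[:, m])
    return out
\end{verbatim}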
We now start deriving the explicit MSE expression. According to \eqref{eq:doa-err-simple-2},
\begin{equation}
\label{eq:mse-expression-orig}
\begin{aligned}
&\doubleE[(\hat{\theta}_{k_1} - \theta_{k_1})
(\hat{\theta}_{k_2} - \theta_{k_2})] \\
\doteq&(\gamma_{k_1} p_{k_1})^{-1} (\gamma_{k_2} p_{k_2})^{-1}
\doubleE[\Real(\boldxi_{k_1}^T \Delta \boldr)
\Real(\boldxi_{k_2}^T \Delta \boldr)] \\
=&(\gamma_{k_1} p_{k_1})^{-1} (\gamma_{k_2} p_{k_2})^{-1}
\big\{
\Real(\boldxi_{k_1})^T
\doubleE[\Real(\Delta \boldr) \Real(\Delta \boldr)^T]
\Real(\boldxi_{k_2}) \\
&+ \Imag(\boldxi_{k_1})^T
\doubleE[\Imag(\Delta \boldr) \Imag(\Delta \boldr)^T]
\Imag(\boldxi_{k_2}) \\
&- \Real(\boldxi_{k_1})^T
\doubleE[\Real(\Delta \boldr) \Imag(\Delta \boldr)^T]
\Imag(\boldxi_{k_2}) \\
&- \Real(\boldxi_{k_2})^T
\doubleE[\Real(\Delta \boldr) \Imag(\Delta \boldr)^T]
\Imag(\boldxi_{k_1})
\big\},
\end{aligned}
\end{equation}
where we used the property that $\Real(\boldA\boldB) = \Real(\boldA)\Real(\boldB) - \Imag(\boldA)\Imag(\boldB)$ for two complex matrices $\boldA$ and $\boldB$ with proper dimensions.
To obtain the closed-form expression for \eqref{eq:mse-expression-orig}, we need to compute the four expectations. It should be noted that in the case of finite snapshots, $\Delta\boldr$ does not follow a circularly-symmetric complex Gaussian distribution. Therefore we cannot directly use the properties of the circularly-symmetric complex Gaussian distribution to evaluate the expectations. For brevity, we demonstrate the computation of only the first expectation in \eqref{eq:mse-expression-orig}. The computation of the remaining three expectations follows the same idea.
Let $\boldr_i$ denote the $i$-th column of $\boldR$ in \eqref{eq:cov-baisc}. Its estimate, $\hat{\boldr}_i$, is given by $\frac{1}{N}\sum_{t=1}^N \boldy(t) y_i^*(t)$, where $y_i(t)$ is the $i$-th element of $\boldy(t)$. Because $\doubleE[\hat{\boldr}_i] = \boldr_i$,
\begin{equation}
\label{eq:e-re-re-orig}
\begin{split}
&\doubleE[\Real(\Delta \boldr_i) \Real(\Delta \boldr_l)^T] \\
=& \doubleE[\Real(\hat{\boldr}_i) \Real(\hat{\boldr}_l)^T]
- \Real(\boldr_i) \Real(\boldr_l)^T.
\end{split}
\end{equation}
The second term in \eqref{eq:e-re-re-orig} is deterministic, and the first term in \eqref{eq:e-re-re-orig} can be expanded into
\begingroup
\allowdisplaybreaks
\begin{align}
\label{eq:e-re-re}
& \frac{1}{N^2} \doubleE\Bigg[
\Real\Big(\sum_{s=1}^N \boldy(s)y_i^*(s)\Big)
\Real\Big(\sum_{t=1}^N \boldy(t)y_l^*(t)\Big)^T
\Bigg] \nonumber \\
=& \frac{1}{N^2} \doubleE\Bigg[
\sum_{s=1}^N \sum_{t=1}^N
\Real(\boldy(s)y_i^*(s))
\Real(\boldy(t)y_l^*(t))^T
\Bigg] \nonumber \\
=& \frac{1}{N^2} \sum_{s=1}^N \sum_{t=1}^N \doubleE\Big\{
\big[\Real(\boldy(s))\Real(y_i^*(s))
- \Imag(\boldy(s))\Imag(y_i^*(s))\big]
\nonumber \\
&\quad
\big[\Real(\boldy(t))^T\Real(y_l^*(t))
- \Imag(\boldy(t))^T\Imag(y_l^*(t))\big]
\Big\} \nonumber \\
=& \frac{1}{N^2} \sum_{s=1}^N \sum_{t=1}^N \Big\{
\doubleE[
\Real(\boldy(s))\Real(y_i(s))
\Real(\boldy(t))^T\Real(y_l(t))] \nonumber \\
&+ \doubleE[
\Real(\boldy(s))\Real(y_i(s))
\Imag(\boldy(t))^T\Imag(y_l(t))] \nonumber \\
&+ \doubleE[
\Imag(\boldy(s))\Imag(y_i(s))
\Real(\boldy(t))^T\Real(y_l(t))] \nonumber \\
&+ \doubleE[
\Imag(\boldy(s))\Imag(y_i(s))
\Imag(\boldy(t))^T\Imag(y_l(t))]
\Big\}.
\end{align}
\endgroup
We first consider the partial sum of the cases when $s \neq t$. By Assumption~\ref{ass:a4-uc-snapshot}, $\boldy(s)$ and $\boldy(t)$ are uncorrelated Gaussian random vectors.
Recall that for $\boldx \sim \scriptC\scriptN(\boldzero, \boldSigma)$,
\begin{equation*}
\begin{aligned}
\doubleE[\Real(\boldx)\Real(\boldx)^T]
= \frac{1}{2}\Real(\boldSigma) &,\
\doubleE[\Real(\boldx)\Imag(\boldx)^T]
= -\frac{1}{2}\Imag(\boldSigma) \\
\doubleE[\Imag(\boldx)\Real(\boldx)^T]
= \frac{1}{2}\Imag(\boldSigma) &,\
\doubleE[\Imag(\boldx)\Imag(\boldx)^T]
= \frac{1}{2}\Real(\boldSigma).
\end{aligned}
\end{equation*}
We have
\begin{equation*}
\begin{split}
&\doubleE[\Real(\boldy(s))\Real(y_i(s))
\Real(\boldy(t))^T\Real(y_l(t))] \\
=& \doubleE[\Real(\boldy(s))\Real(y_i(s))]
\doubleE[\Real(\boldy(t))^T\Real(y_l(t))] \\
=& \frac{1}{4} \Real(\boldr_i) \Real(\boldr_l)^T.
\end{split}
\end{equation*}
Similarly, we can obtain that when $s \neq t$,
\begin{equation}
\begin{aligned}
\doubleE[\Real(\boldy(s))\Real(y_i(s))
\Imag(\boldy(t))^T\Imag(y_l(t))]
&= \frac{1}{4} \Real(\boldr_i) \Real(\boldr_l)^T, \\
\doubleE[\Imag(\boldy(s))\Imag(y_i(s))
\Real(\boldy(t))^T\Real(y_l(t))]
&= \frac{1}{4} \Real(\boldr_i) \Real(\boldr_l)^T, \\
\doubleE[\Imag(\boldy(s))\Imag(y_i(s))
\Imag(\boldy(t))^T\Imag(y_l(t))]
&= \frac{1}{4} \Real(\boldr_i) \Real(\boldr_l)^T. \\
\end{aligned}
\end{equation}
Therefore the partial sum of the cases when $s \neq t$ is given by $(1-1/N) \Real(\boldr_i) \Real(\boldr_l)^T$.
We now consider the partial sum of the cases when $s = t$. We first consider the first expectation inside the double summation in \eqref{eq:e-re-re}. Recall that for $\boldx \sim \scriptN(\boldzero, \boldSigma)$, $\doubleE[x_i x_l x_p x_q] = \sigma_{il}\sigma_{pq} + \sigma_{ip}\sigma_{lq} + \sigma_{iq}\sigma_{lp}$.
We can express the $(m,n)$-th element of the matrix $\doubleE[\Real(\boldy(t))\Real(y_i(t))\Real(\boldy(t))^T\Real(y_l(t))]$ as
\begin{align*}
&\doubleE[\Real(y_m(t))\Real(y_i(t))
\Real(y_n(t))\Real(y_l(t))] \nonumber \\
=&\doubleE[\Real(y_m(t))\Real(y_i(t))
\Real(y_l(t))\Real(y_n(t))] \nonumber \\
=&\doubleE[\Real(y_m(t))\Real(y_i(t))]
\doubleE[\Real(y_l(t))\Real(y_n(t))] \\
&+ \doubleE[\Real(y_m(t))\Real(y_l(t))]
\doubleE[\Real(y_i(t))\Real(y_n(t))] \nonumber \\
&+ \doubleE[\Real(y_m(t))\Real(y_n(t))]
\doubleE[\Real(y_i(t))\Real(y_l(t))] \nonumber \\
=& \frac{1}{4}[\Real(R_{mi})\Real(R_{ln})
+ \Real(R_{ml})\Real(R_{in})
+ \Real(R_{mn})\Real(R_{il})]. \nonumber
\end{align*}
Hence
\begin{equation*}
\begin{split}
&\doubleE[\Real(\boldy(t))\Real(y_i(t))
\Real(\boldy(t))^T\Real(y_l(t))] \\
=& \frac{1}{4}[\Real(\boldr_i)\Real(\boldr_l)^T
+ \Real(\boldr_l)\Real(\boldr_i)^T
+ \Real(\boldR)\Real(R_{il})].
\end{split}
\end{equation*}
Similarly, we obtain that
\begin{equation*}
\begin{split}
&\doubleE[\Imag(\boldy(t))\Imag(y_i(t))
\Imag(\boldy(t))^T\Imag(y_l(t))] \\
=& \frac{1}{4}[\Real(\boldr_i)\Real(\boldr_l)^T
+ \Real(\boldr_l)\Real(\boldr_i)^T
+ \Real(\boldR)\Real(R_{il})],
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
&\doubleE[\Real(\boldy(t))\Real(y_i(t))
\Imag(\boldy(t))^T\Imag(y_l(t))] \\
=& \doubleE[\Imag(\boldy(t))\Imag(y_i(t))
\Real(\boldy(t))^T\Real(y_l(t))] \\
=& \frac{1}{4}[\Real(\boldr_i)\Real(\boldr_l)^T
- \Imag(\boldr_l)\Imag(\boldr_i)^T
+ \Imag(\boldR)\Imag(R_{il})].
\end{split}
\end{equation*}
Therefore the partial sum of the cases when $s = t$ is given by
\begin{equation*}
\frac{1}{N}\Real(\boldr_i)\Real(\boldr_l)^T + \frac{1}{2N}[\Real(\boldR)\Real(R_{il}) + \Imag(\boldR)\Imag(R_{il}) + \Real(\boldr_l)\Real(\boldr_i)^T - \Imag(\boldr_l)\Imag(\boldr_i)^T].
\end{equation*}
Combined with the previous partial sum of the cases when $s \neq t$, we obtain that
\begin{equation}
\begin{split}
&\doubleE[\Real(\Delta \boldr_i) \Real(\Delta \boldr_l)^T] \\
=&\frac{1}{2N}[\Real(\boldR)\Real(R_{il})
+ \Imag(\boldR)\Imag(R_{il}) \\
&+ \Real(\boldr_l)\Real(\boldr_i)^T
- \Imag(\boldr_l)\Imag(\boldr_i)^T ].
\end{split}
\end{equation}
Therefore
\begin{equation}
\label{eq:e-re-re-middle}
\begin{split}
&\doubleE[\Real(\Delta \boldr) \Real(\Delta \boldr)^T] \\
=& \frac{1}{2N}[\Real(\boldR) \otimes \Real(\boldR)
+ \Imag(\boldR) \otimes \Imag(\boldR) \\
&\quad+ \boldC_{\Real(\boldR)\Real(\boldR)}
- \boldC_{\Imag(\boldR)\Imag(\boldR)}],
\end{split}
\end{equation}
which completes the computation of first expectation in \eqref{eq:mse-expression-orig}. Utilizing the same technique, we obtain that
\begin{equation}
\label{eq:e-im-im-middle}
\begin{split}
&\doubleE[\Imag(\Delta \boldr) \Imag(\Delta \boldr)^T] \\
=& \frac{1}{2N}[\Real(\boldR) \otimes \Real(\boldR)
+ \Imag(\boldR) \otimes \Imag(\boldR) \\
&\quad+ \boldC_{\Imag(\boldR)\Imag(\boldR)}
- \boldC_{\Real(\boldR)\Real(\boldR)}],
\end{split}
\end{equation}
and
\begin{equation}
\label{eq:e-re-im-middle}
\begin{split}
&\doubleE[\Real(\Delta \boldr) \Imag(\Delta \boldr)^T] \\
=& \frac{1}{2N}[\Imag(\boldR) \otimes \Real(\boldR)
- \Real(\boldR) \otimes \Imag(\boldR) \\
&\quad+ \boldC_{\Real(\boldR)\Imag(\boldR)}
+ \boldC_{\Imag(\boldR)\Real(\boldR)}].
\end{split}
\end{equation}
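Before proceeding, we remark that \eqref{eq:e-re-re-middle}--\eqref{eq:e-re-im-middle} can be validated by Monte Carlo simulation. The following sketch (assuming \texttt{numpy} and the \texttt{C} helper introduced after Definition~\ref{def:cab}; the covariance matrix and sample sizes are arbitrary) checks \eqref{eq:e-re-re-middle}:
\begin{verbatim}
rng = np.random.default_rng(1)
M, N, trials = 3, 20, 100000
G = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = G @ G.conj().T + M * np.eye(M)   # a valid covariance matrix
Lc = np.linalg.cholesky(R)
vec = lambda A: A.flatten(order='F')

acc = np.zeros((M * M, M * M))
for _ in range(trials):
    # N snapshots drawn from CN(0, R)
    y = Lc @ (rng.standard_normal((M, N))
              + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
    dr = vec(y @ y.conj().T / N - R)  # Delta r for this realization
    acc += np.outer(dr.real, dr.real)

ref = (np.kron(R.real, R.real) + np.kron(R.imag, R.imag)
       + C(R.real, R.real) - C(R.imag, R.imag)) / (2 * N)
# acc/trials approaches ref as the number of trials grows
print(np.max(np.abs(acc / trials - ref)))
\end{verbatim}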
Substituting \eqref{eq:e-re-re-middle}--\eqref{eq:e-re-im-middle} into \eqref{eq:mse-expression-orig} gives a closed-form MSE expression. However, this expression is too complicated for analytical study. In the following steps, we make use of the properties of $\boldxi_k$ to simplify the MSE expression.
\begin{lemma}
\label{lem:aka-caa}
Let $\boldX, \boldY, \boldA, \boldB \in \doubleR^{N \times N}$ satisfying $\boldX^T = (-1)^{n_x}\boldX$, $\boldA^T = (-1)^{n_a}\boldA$, and $\boldB^T = (-1)^{n_b}\boldB$, where $n_x, n_a, n_b \in \{0,1\}$. Then
\begin{equation*}
\vecm(\boldX)^T (\boldA \otimes \boldB) \vecm(\boldY)
= (-1)^{n_x+n_b}\vecm(\boldX)^T \boldC_{\boldA\boldB} \vecm(\boldY),
\end{equation*}
\begin{equation*}
\vecm(\boldX)^T (\boldB \otimes \boldA) \vecm(\boldY)
= (-1)^{n_x+n_a}\vecm(\boldX)^T \boldC_{\boldB\boldA} \vecm(\boldY).
\end{equation*}
\end{lemma}
\begin{proof}
By Definition~\ref{def:cab},
\begingroup
\allowdisplaybreaks
\begin{align*}
&\vecm(\boldX)^T \boldC_{\boldA \boldB} \vecm(\boldY) \\
=& \sum_{m=1}^N \sum_{n=1}^N
\boldx_m^T \bolda_n \boldb_m^T \boldy_n \\
=& \sum_{m=1}^N \sum_{n=1}^N
\Big( \sum_{p=1}^N A_{pn} X_{pm} \Big)
\Big( \sum_{q=1}^N B_{qm} Y_{qn} \Big) \\
=& \sum_{m=1}^N \sum_{n=1}^N \sum_{p=1}^N \sum_{q=1}^N
A_{pn} X_{pm} B_{qm} Y_{qn} \\
=& (-1)^{n_x+n_b}
\sum_{p=1}^N \sum_{n=1}^N \sum_{m=1}^N \sum_{q=1}^N
(X_{mp} B_{mq} Y_{qn}) A_{pn} \\
=& (-1)^{n_x+n_b}
\sum_{p=1}^N \sum_{n=1}^N
\boldx_p^T A_{pn} \boldB \boldy_n \\
=& (-1)^{n_x+n_b}
\vecm(\boldX)^T (\boldA \otimes \boldB)
\vecm(\boldY).
\end{align*}
\endgroup
The proof of the second equality follows the same idea.
\end{proof}
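The sign pattern of Lemma~\ref{lem:aka-caa} can likewise be confirmed numerically. A minimal sketch (again assuming \texttt{numpy} and the \texttt{C} helper above) for the case $n_x = 0$, $n_a = 0$, $n_b = 1$ reads:
\begin{verbatim}
rng = np.random.default_rng(2)
N = 4
vec = lambda M: M.flatten(order='F')
X = rng.standard_normal((N, N)); X = X + X.T  # n_x = 0 (symmetric)
A = rng.standard_normal((N, N)); A = A + A.T  # n_a = 0 (symmetric)
B = rng.standard_normal((N, N)); B = B - B.T  # n_b = 1 (antisymmetric)
Y = rng.standard_normal((N, N))               # Y is unconstrained

lhs = vec(X) @ np.kron(A, B) @ vec(Y)
rhs = (-1) ** (0 + 1) * vec(X) @ C(A, B) @ vec(Y)
assert np.isclose(lhs, rhs)
\end{verbatim}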
\begin{lemma}
\label{lem:flip-pn}
$\TMv \projp{\Av} \TMv = (\projp{\Av})^*$.
\end{lemma}
\begin{proof}
Since $\projp{\Av} = \boldI - \Av (\Av^H \Av)^{-1} \Av^H$, it suffices to show that $\TMv \Av (\Av^H \Av)^{-1} \Av^H \TMv = (\Av (\Av^H \Av)^{-1} \Av^H)^*$. Because $\Av$ is the steering matrix of a ULA with $\Mv$ sensors, it is straightforward to show that $\TMv \Av = (\Av \boldPhi)^*$, where $\boldPhi = \diagm(e^{-j(\Mv-1)\phi_1}, e^{-j(\Mv-1)\phi_2}, \ldots, e^{-j(\Mv-1)\phi_K})$.
Because $\TMv\TMv = \boldI$ and $\TMv^H = \TMv$,
\begin{equation*}
\begin{split}
&\TMv \Av (\Av^H \Av)^{-1} \Av^H \TMv \\
=& \TMv \Av (\Av^H \TMv^H \TMv \Av)^{-1}
\Av^H \TMv^H \\
=& (\Av \boldPhi)^* ((\Av \boldPhi)^T (\Av \boldPhi)^*)^{-1}
(\Av \boldPhi)^T \\
=& (\Av (\Av^H \Av)^{-1} \Av^H)^*.
\end{split}
\end{equation*}
\end{proof}
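Lemma~\ref{lem:flip-pn} is also straightforward to verify numerically for a ULA. A minimal sketch (assuming \texttt{numpy}; the phases $\phi_k$ below are arbitrary) reads:
\begin{verbatim}
import numpy as np

Mv = 8
phi = np.array([0.4, -1.1, 2.0])                # arbitrary phases
Av = np.exp(1j * np.outer(np.arange(Mv), phi))  # ULA steering matrix
T = np.eye(Mv)[::-1]                            # anti-diagonal permutation
P = np.eye(Mv) - Av @ np.linalg.solve(Av.conj().T @ Av, Av.conj().T)
assert np.allclose(T @ P @ T, P.conj())         # T P T = P^*
\end{verbatim}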
\begin{lemma}
\label{lem:xi-k-symmetry}
Let $\boldXi_k = \matm_{M, M}(\boldxi_k)$. Then $\boldXi_k^H = \boldXi_k$ for $k = 1,2,\ldots, K$.
\end{lemma}
\begin{proof}
Note that $\boldxi_k = \boldF^T \boldGamma^T (\boldbeta_k \otimes \boldalpha_k)$. We first prove that $\boldbeta_k \otimes \boldalpha_k$ is conjugate symmetric, or that $(\TMv \otimes \TMv)(\boldbeta_k \otimes \boldalpha_k) = (\boldbeta_k \otimes \boldalpha_k)^*$. Similar to the proof of Lemma~\ref{lem:flip-pn}, we utilize the properties that $\TMv \Av = (\Av \boldPhi)^*$ and that $\TMv \avk = (\avk e^{-j(\Mv - 1)\phi_k})^*$ to show that
\begin{equation}
\label{eq:flip-av}
\TMv (\Av^\dagger)^H \bolde_k \avkH \TMv
= [(\Av^\dagger)^H \bolde_k \avkH]^*.
\end{equation}
Observe that $\Davk = j\dot{\phi}_k \boldD \avk$, where $\dot{\phi}_k = (2\pi d_0\cos\theta_k) / \lambda$ and $\boldD = \diagm(0,1,\ldots,\Mv-1)$. We have
\begin{equation*}
\begin{split}
&(\TMv \otimes \TMv)(\boldbeta_k \otimes \boldalpha_k)
= (\boldbeta_k \otimes \boldalpha_k)^* \\
\iff& \TMv \boldalpha_k \boldbeta_k^T \TMv
= (\boldalpha_k \boldbeta_k^T)^* \\
\iff& \TMv[(\Av^\dagger)^H \bolde_k \avkH \boldD \projp{\Av}]^* \TMv \\
&= -(\Av^\dagger)^H \bolde_k \avkH \boldD \projp{\Av}.
\end{split}
\end{equation*}
Since $\boldD = \TMv \TMv \boldD \TMv \TMv$, combining with Lemma~\ref{lem:flip-pn} and \eqref{eq:flip-av}, it suffices to show that
\begin{equation}
\label{eq:flip-akb-requirement}
\begin{split}
&(\Av^\dagger)^H \bolde_k \avkH \TMv \boldD
\TMv \projp{\Av} \\
&= - (\Av^\dagger)^H \bolde_k \avkH \boldD \projp{\Av}.
\end{split}
\end{equation}
Observe that $\TMv \boldD \TMv + \boldD = (\Mv-1)\boldI$. We have
\begin{equation*}
\projp{\Av}(\TMv \boldD \TMv + \boldD) \avk = \boldzero,
\end{equation*}
or equivalently
\begin{equation}
\label{eq:flip-akb-final}
\avkH \TMv \boldD \TMv \projp{\Av} =
- \avkH \boldD \projp{\Av}.
\end{equation}
Pre-multiplying both sides of \eqref{eq:flip-akb-final} with $(\Av^\dagger)^H \bolde_k$ leads to \eqref{eq:flip-akb-requirement}, which completes the proof that $\boldbeta_k \otimes \boldalpha_k$ is conjugate symmetric.
According to the definition of $\boldGamma$ in \eqref{eq:gamma-mat-def}, it is straightforward to show that $\boldGamma^T (\boldbeta_k \otimes \boldalpha_k)$ is also conjugate symmetric. Combined with Lemma~\ref{lem:ftz-hermitian} in Appendix~\ref{app:f-def}, we conclude that $\matm_{M,M}(\boldF^T \boldGamma^T (\boldbeta_k \otimes \boldalpha_k))$ is Hermitian symmetric, or that $\boldXi_k = \boldXi_k^H$.
\end{proof}
Given Lemmas~\ref{lem:aka-caa}--\ref{lem:xi-k-symmetry}, we are able to continue the simplification. We first consider the term $\Real(\boldxi_{k_1})^T \doubleE[\Real(\Delta \boldr) \Real(\Delta \boldr)^T] \Real(\boldxi_{k_2})$ in \eqref{eq:mse-expression-orig}. Let $\boldXi_{k_1} = \matm_{M,M}(\boldxi_{k_1})$, and $\boldXi_{k_2} = \matm_{M,M}(\boldxi_{k_2})$. By Lemma~\ref{lem:xi-k-symmetry}, we have $\boldXi_{k_1} = \boldXi_{k_1}^H$, and $\boldXi_{k_2} = \boldXi_{k_2}^H$. Observe that $\Real(\boldR)^T = \Real(\boldR)$, and that $\Imag(\boldR)^T = -\Imag(\boldR)$. By Lemma~\ref{lem:aka-caa} we immediately obtain the following equalities:
\begin{equation*}
\begin{split}
&\Real(\boldxi_{k_1})^T
(\Real(\boldR) \otimes \Real(\boldR))
\Real(\boldxi_{k_2}) \\
=&
\Real(\boldxi_{k_1})^T
\boldC_{\Real(\boldR)\Real(\boldR)}
\Real(\boldxi_{k_2}),
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
&\Real(\boldxi_{k_1})^T
(\Imag(\boldR) \otimes \Imag(\boldR))
\Real(\boldxi_{k_2}) \\
=&
-\Real(\boldxi_{k_1})^T
\boldC_{\Imag(\boldR)\Imag(\boldR)}
\Real(\boldxi_{k_2}).
\end{split}
\end{equation*}
Therefore $\Real(\boldxi_{k_1})^T \doubleE[\Real(\Delta \boldr) \Real(\Delta \boldr)^T] \Real(\boldxi_{k_2})$ can be compactly expressed as
\begin{equation}
\label{eq:r1t-err-r2-final}
\begin{aligned}
&\Real(\boldxi_{k_1})^T
\doubleE[\Real(\Delta \boldr) \Real(\Delta \boldr)^T]
\Real(\boldxi_{k_2}) \\
=& \frac{1}{N}
\Real(\boldxi_{k_1})^T[
\Real(\boldR) \otimes \Real(\boldR)
+ \Imag(\boldR) \otimes \Imag(\boldR)
]\Real(\boldxi_{k_2}) \\
=& \frac{1}{N}
\Real(\boldxi_{k_1})^T
\Real(\boldR^T \otimes \boldR)
\Real(\boldxi_{k_2}),
\end{aligned}
\end{equation}
where we make use of the properties that $\boldR^T = \boldR^*$, and $\Real(\boldR^* \otimes \boldR) = \Real(\boldR) \otimes \Real(\boldR) + \Imag(\boldR) \otimes \Imag(\boldR)$. Similarly, we can obtain that
\begin{equation}
\label{eq:i1t-err-i2-final}
\begin{split}
&\Imag(\boldxi_{k_1})^T
\doubleE[\Imag(\Delta \boldr) \Imag(\Delta \boldr)^T]
\Imag(\boldxi_{k_2}) \\
=& \frac{1}{N}
\Imag(\boldxi_{k_1})^T
\Real(\boldR^T \otimes \boldR)
\Imag(\boldxi_{k_2}),
\end{split}
\end{equation}
\begin{equation}
\label{eq:r1t-eri-i2-final}
\begin{split}
&\Real(\boldxi_{k_1})^T
\doubleE[\Real(\Delta \boldr) \Imag(\Delta \boldr)^T]
\Imag(\boldxi_{k_2}) \\
=& -\frac{1}{N}
\Real(\boldxi_{k_1})^T
\Imag(\boldR^T \otimes \boldR)
\Imag(\boldxi_{k_2}),
\end{split}
\end{equation}
\begin{equation}
\label{eq:r2t-eri-i1-final}
\begin{split}
&\Real(\boldxi_{k_2})^T
\doubleE[\Real(\Delta \boldr) \Imag(\Delta \boldr)^T]
\Imag(\boldxi_{k_1}) \\
=& -\frac{1}{N}
\Real(\boldxi_{k_2})^T
\Imag(\boldR^T \otimes \boldR)
\Imag(\boldxi_{k_1}).
\end{split}
\end{equation}
Substituting \eqref{eq:r1t-err-r2-final}--\eqref{eq:r2t-eri-i1-final} into \eqref{eq:mse-expression-orig} completes the proof.
\section{Proof of Proposition~\ref{prop:crb-snr-infty}}
\label{app:crb-snr-infty}
Without loss of generality, let $p = 1$ and $\noisevar \to 0$. For brevity, we denote $\boldR^T \otimes \boldR$ by $\boldW$. We first consider the case when $K < M$.
Denote the eigendecomposition of $\boldR^{-1}$ by $\Es \Ls^{-1} \Es^H + \noisevarinv \En \En^H$. We have
\begin{equation*}
\boldW^{-1} = \noisevarp{-4} \boldK_1 + \noisevarp{-2} \boldK_2 + \boldK_3,
\end{equation*}
where
\begin{align*}
\boldK_1 &= \En^*\En^T \otimes \En\En^H, \\
\boldK_2 &= \Es^*\Ls^{-1}\Es^T \otimes \En\En^H + \En^*\En^T \otimes \Es\Ls^{-1}\Es^H, \\
\boldK_3 &= \Es^*\Ls^{-1}\Es^T \otimes \Es\Ls^{-1}\Es^H.
\end{align*}
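This decomposition follows from the Kronecker-product identity $(\boldR^T \otimes \boldR)^{-1} = (\boldR^{-1})^T \otimes \boldR^{-1}$: substituting
\begin{equation*}
(\boldR^{-1})^T = \Es^* \Ls^{-1} \Es^T + \noisevarinv \En^* \En^T,
\end{equation*}
expanding the Kronecker product of the two sums, and grouping the resulting terms by powers of $\noisevar$ yields exactly $\noisevarp{-4} \boldK_1 + \noisevarp{-2} \boldK_2 + \boldK_3$.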
Recall that $\boldA^H \En = \boldzero$. We have
\begingroup
\allowdisplaybreaks
\begin{align}
\boldK_1 \DAd &= (\En^*\En^T \otimes \En\En^H)
(\dot{\boldA}^* \odot \boldA + \boldA^* \odot \dot{\boldA}) \nonumber\\
&= \En^*\En^T \dot{\boldA}^* \odot \En\En^H \boldA
+ \En^*\En^T \boldA^* \odot \En\En^H \dot{\boldA} \nonumber\\
&= \boldzero.
\end{align}
\endgroup
Therefore
\begin{equation}
\boldM_{\boldtheta}^H \boldM_{\boldtheta}
= \DAd^H \boldW^{-1} \DAd
= \noisevarinv \DAd^H (\boldK_2 + \noisevar \boldK_3) \DAd.
\end{equation}
Similar to $\boldW^{-1}$, we denote $\boldW^{-\frac{1}{2}} = \noisevarp{-2} \boldK_1 + \noisevarp{-1} \boldK_4 + \boldK_5$, where
\begin{align*}
\boldK_4 &= \Es^*\Ls^{-\frac{1}{2}}\Es^T \otimes
\En\En^H + \En^*\En^T \otimes \Es\Ls^{-\frac{1}{2}}\Es^H, \\
\boldK_5 &= \Es^*\Ls^{-\frac{1}{2}}\Es^T \otimes \Es\Ls^{-\frac{1}{2}}\Es^H.
\end{align*}
Therefore
\begin{align*}
&\boldM_{\boldtheta}^H \proj{\boldM_{\bolds}} \boldM_{\boldtheta} \\
=& \DAd^H \boldW^{-\frac{1}{2}} \proj{\boldM_{\bolds}} \boldW^{-\frac{1}{2}} \DAd \\
=& \noisevarp{-2} \DAd^H (\noisevarsqrt \boldK_5 + \boldK_4) \proj{\boldM_{\bolds}}
(\noisevarsqrt \boldK_5 + \boldK_4) \DAd,
\end{align*}
where $\proj{\boldM_{\bolds}} = \boldM_{\bolds} \boldM_{\bolds}^\dagger$. We can then express the CRB as
\begin{equation}
\label{eq:crb-q1-q2-q3}
\CRB_{\boldtheta}
= \noisevar (\boldQ_1 + \noisevarsqrt \boldQ_2 + \noisevar \boldQ_3)^{-1},
\end{equation}
where
\begin{align*}
\boldQ_1 &= \DAd^H (\boldK_2 - \boldK_4 \proj{\boldM_{\bolds}} \boldK_4) \DAd, \\
\boldQ_2 &= - \DAd^H (\boldK_4 \proj{\boldM_{\bolds}} \boldK_5 +
\boldK_5 \proj{\boldM_{\bolds}} \boldK_4) \DAd, \\
\boldQ_3 &= \DAd^H (\boldK_3 - \boldK_5 \proj{\boldM_{\bolds}} \boldK_5) \DAd.
\end{align*}
When $\noisevar = 0$, $\boldR$ reduces to $\boldA\boldA^H$. Observe that the eigendecomposition of $\boldR$ always exists for $\noisevar \geq 0$. We use $\boldK_1^\star$--$\boldK_5^\star$ to denote the corresponding $\boldK_1$--$\boldK_5$ when $\noisevar \to 0$.
\begin{lemma}
\label{lem:proj-ms-exist}
Let $K < M$. Assume $\partial\boldr / \partial\boldeta$ is full column rank. Then $\lim_{\noisevar \to 0^+} \proj{\boldM_{\bolds}}$ exists.
\end{lemma}
\begin{proof}
Because $\boldA^H \En = \boldzero$,
\begin{align*}
\boldK_2 \Ad
=& (\Es^*\Ls^{-1}\Es^T \otimes \En\En^H)(\boldA^* \odot \boldA) \\
&+ (\En^*\En^T \otimes \Es\Ls^{-1}\Es^H)(\boldA^* \odot \boldA) \\
=& \Es^*\Ls^{-1}\Es^T \boldA^* \odot \En\En^H \boldA \\
&+ \En^*\En^T \boldA^* \odot \Es\Ls^{-1}\Es^H \boldA \\
=& \boldzero.
\end{align*}
Similarly, we can show that $\boldK_4 \Ad = \boldzero$, $\boldi^H \boldK_2 \boldi = \boldi^H \boldK_4 \boldi = 0$, and $\boldi^H \boldK_1 \boldi = \mathrm{rank}(\En) = M-K$. Hence
\begin{equation*}
\boldM_{\bolds}^H \boldM_{\bolds}
= \begin{bmatrix}
\Ad^H \boldK_3 \Ad & \Ad^H \boldK_3 \boldi \\
\boldi^H \boldK_3 \Ad & \boldi^H \boldW^{-1} \boldi
\end{bmatrix}.
\end{equation*}
Because $\partial\boldr / \partial\boldeta$ is full column rank, $\boldM_{\bolds}^H \boldM_{\bolds}$ is full rank and positive definite. Therefore the Schur complements exist, and we can invert $\boldM_{\bolds}^H \boldM_{\bolds}$ block-wise. Let $\boldV = \Ad^H \boldK_3 \Ad$ and $v = \boldi^H \boldW^{-1} \boldi$. After tedious but straightforward computation, we obtain
\begin{align*}
\proj{\boldM_{\bolds}}
=& \boldK_5 \Ad \boldS^{-1} \Ad^H \boldK_5 \\
&- s^{-1} \boldK_5 \Ad \boldV^{-1} \Ad^H \boldK_3 \boldi \boldi^H
(\boldK_5 + \noisevarinv \boldK_1) \\
&- v^{-1} (\boldK_5 + \noisevarinv \boldK_1)
\boldi \boldi^H \boldK_3 \Ad \boldS^{-1} \Ad^H \boldK_5 \\
&+ s^{-1} (\boldK_5 + \noisevarinv \boldK_1) \boldi \boldi^H
(\boldK_5 + \noisevarinv \boldK_1),
\end{align*}
where $\boldS$ and $s$ are Schur complements given by
\begin{align*}
\boldS &= \boldV - v^{-1} \Ad^H \boldK_3 \boldi \boldi^H \boldK_3 \Ad, \\
s &= v - \boldi^H \boldK_3 \Ad \boldV^{-1} \Ad^H \boldK_3 \boldi.
\end{align*}
Observe that
\begin{equation*}
v
= \boldi^H \boldW^{-1} \boldi
= \noisevarp{-4}(M-K) + \boldi^H \boldK_3 \boldi.
\end{equation*}
We know that both $v^{-1}$ and $s^{-1}$ decrease at the rate of $\noisevarp{4}$. As $\noisevar \to 0$, we have
\begin{align*}
&\boldS \to \Ad^H \boldK_3^\star \Ad, \\
&s^{-1} (\boldK_5 + \noisevarinv \boldK_1) \to \boldzero, \\
&v^{-1} (\boldK_5 + \noisevarinv \boldK_1) \to \boldzero, \\
&s^{-1} (\boldK_5 + \noisevarinv \boldK_1) \boldi \boldi^H
(\boldK_5 + \noisevarinv \boldK_1) \to \frac{\boldK_1^\star \boldi \boldi^H \boldK_1^\star}{M-K}.
\end{align*}
We now show that $\Ad^H \boldK_3^\star \Ad$ is nonsingular. Denote the eigendecomposition of $\boldA\boldA^H$ by $\Es^\star \Ls^\star (\Es^\star)^H$. Recall that for matrices with proper dimensions, $(\boldA \odot \boldB)^H (\boldC \odot \boldD) = (\boldA^H \boldC) \circ (\boldB^H \boldD)$, where $\circ$ denotes the Hadamard product. We can expand $\Ad^H \boldK_3^\star \Ad$ into
\begin{align*}
[\boldA^H \Es^\star (\Ls^\star)^{-1} (\Es^\star)^H \boldA]^*
\circ [\boldA^H \Es^\star (\Ls^\star)^{-1} (\Es^\star)^H \boldA].
\end{align*}
Note that $\boldA\boldA^H \Es^\star (\Ls^\star)^{-1} (\Es^\star)^H \boldA = \Es^\star (\Es^\star)^H \boldA = \boldA$, and that $\boldA$ is full column rank when $K < M$. We thus have $\boldA^H \Es^\star (\Ls^\star)^{-1} (\Es^\star)^H \boldA = \boldI$. Therefore $\Ad^H \boldK_3^\star \Ad = \boldI$, which is nonsingular.
Combining the above results, we obtain that when $\noisevar \to 0$,
\begin{equation*}
\proj{\boldM_{\bolds}} \to
\boldK_5^\star \Ad \Ad^H \boldK_5^\star
+ \frac{\boldK_1^\star \boldi \boldi^H \boldK_1^\star}{M-K}.
\end{equation*}
\end{proof}
For sufficiently small $\noisevar > 0$, it is easy to show that $\boldK_1$--$\boldK_5$ are bounded in the sense of Frobenius norm (i.e., $\| \boldK_i \|_F \leq C$ for some $C > 0$, for $i \in \{1,2,3,4,5\}$). Because $\partial\boldr / \partial\boldeta$ is full rank, $\boldM_{\bolds}$ is also full rank for any $\noisevar > 0$, which implies that $\proj{\boldM_{\bolds}}$ is well-defined for any $\noisevar > 0$. Observe that $\proj{\boldM_{\bolds}}$ is positive semidefinite, and that $\trace(\proj{\boldM_{\bolds}}) = \mathrm{rank}(\boldM_{\bolds})$. We know that $\proj{\boldM_{\bolds}}$ is bounded for any $\noisevar > 0$. Therefore $\boldQ_2$ and $\boldQ_3$ are also bounded for sufficiently small $\noisevar$, which implies that $\noisevarsqrt \boldQ_2 + \noisevar \boldQ_3 \to \boldzero$ as $\noisevar \to 0$.
By Lemma~\ref{lem:proj-ms-exist}, we know that $\boldQ_1 \to \boldQ_1^\star$ as $\noisevar \to 0$, where
\begin{equation*}
\boldQ_1^\star
= \DAd^H (\boldK_2^\star - \boldK_4^\star \proj{\boldM_{\bolds}}^\star \boldK_4^\star) \DAd,
\end{equation*}
and $\proj{\boldM_{\bolds}}^\star = \lim_{\noisevar \to 0^+} \proj{\boldM_{\bolds}}$ as derived in Lemma~\ref{lem:proj-ms-exist}. Assume $\boldQ_1^\star$ is nonsingular\footnote{The condition when $\boldQ_1^\star$ is singular is difficult to obtain analytically. In numerical simulations, we have verified that it remains nonsingular for various parameter settings.}. By \eqref{eq:crb-q1-q2-q3} we immediately obtain that $\CRB_{\boldtheta} \to \boldzero$ as $\noisevar \to 0$.
When $K \geq M$, $\boldR$ is full rank regardless of the choice of $\noisevar$. Hence $(\boldR^T \otimes \boldR)^{-1}$ is always full rank. Because $\partial\boldr/\partial\boldeta$ is full column rank, the FIM is positive definite, which implies its Schur complements are also positive definite. Therefore $\CRB_{\boldtheta}$ is positive definite.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
\par
Intermediate energy heavy-ion collisions produce a rich amount of
information on correlations and fluctuations, and eventually on
the dynamics and interactions among the nucleons. The breaking of
nuclei, i.e., multifragmentation, is one of the rare phenomena
that has attracted major attention in recent years \cite{bege}.
The physics behind multifragmentation is so complicated that many
different theoretical approaches have been developed
\cite{bege,aich1,qmd1,dorso}. Since no theoretical model simulates
fragments, one needs afterburners to identify clusters. Since
correlations and fluctuations are the main features of the
molecular dynamics model, the quantum molecular dynamics (QMD)
model is very successful in explaining the phenomena of
multifragmentation. Once the phase space is accessible, one
generally clusterizes it with a simple spatial correlation method
in which nucleons are bound in a fragment if they lie within a
distance of 4 fm of each other. This method is known as the
minimum spanning tree (MST) method \cite{jsingh}. At the same
time, fragments formed in the MST method can be highly unstable
(especially in central collisions), since the nucleons may not be
well bound and the fragment can therefore decay after a while. In
order to filter out such unstable fragments, we impose another cut
in terms of the relative momentum of the nucleons. This method,
dubbed the minimum spanning tree with momentum cut (MSTP) method,
was discussed by Puri \emph{et al.} \cite{kumar1}. In our recent
work, we studied the role of the momentum cut on the fragment
structure \cite{rajni}. We also studied the role of the colliding
geometry on the fragmentation when the momentum cut is imposed. No
study exists in the literature on the role of the momentum cut on
various fragment properties like the rapidity distribution, the
p$_{t}$ spectra, and E$_{rat}$. So in the present paper, we plan
to study the role of the momentum cut on these fragment properties
and to investigate how they vary with impact parameter. The
present study is carried out within the framework of the QMD model
\cite{aich1,qmd1}, which is described in the following section.
\section{The Formalism}
\subsection{Quantum Molecular dynamics (QMD) model}
\par
We describe the time evolution of a heavy-ion reaction within the
framework of the Quantum Molecular Dynamics (QMD) model
\cite{aich1,qmd1}, which is based on a molecular dynamics picture.
This model has been successful in explaining collective flow
\cite{sood2}, elliptic flow \cite{kumar3}, multifragmentation
\cite{dhawan} as well as dense and hot matter \cite{fuchs}. Here
each nucleon is represented by a coherent state of the form
\begin{equation}
\phi_{\alpha}(x_1,t)=\left({\frac {2}{L \pi}}\right)^{\frac
{3}{4}} e^{-\frac{(x_1-x_{\alpha }(t))^2}{L}}
e^{ip_{\alpha}(x_1-x_{\alpha})} e^{-\frac {i p_{\alpha}^2 t}{2m}}.
\label {e1}
\end{equation}
Thus, the wave function has two time-dependent parameters
$x_{\alpha}$ and $p_{\alpha}$. The total n-body wave function is
assumed to be a direct product of coherent states:
\begin{equation}
\phi=\phi_{\alpha}
(x_1,x_{\alpha},p_{\alpha},t)\phi_{\beta}(x_2,x_{\beta},
p_{\beta},t)...., \label {e2}
\end{equation}
where antisymmetrization is neglected. One should, however, keep
in mind that the Pauli principle, which is very important at
low incident energies, has been taken into account. The initial
values of the parameters are chosen in such a way that the ensemble
of ($A_T$+$A_P$) nucleons gives a proper density distribution as well
as a proper momentum distribution of the projectile and target
nuclei. The time evolution of the system is calculated using the
generalized variational principle. We start out from the action
\begin{equation}
S=\int_{t_1}^{t_2} {\cal {L}} [\phi,\phi^{*}] d\tau, \label {e3}
\end{equation}
with the Lagrange functional
\begin{equation}
{\cal {L}} =\left(\phi\left|i\hbar \frac
{d}{dt}-H\right|\phi\right), \label {e4}
\end{equation}
where the total time derivative includes the derivatives with
respect to the parameters. The time evolution is obtained by the
requirement that the action is stationary under the allowed
variation of the wave function
\begin{equation}
\delta S=\delta \int_{t_1}^{t_2} {\cal {L}} [\phi ,\phi^{*}] dt=0.
\label{e5}
\end{equation}
If the true solution of the Schr\"odinger equation is contained in
the restricted set of wave function
$\phi_{\alpha}\left({x_{1},x_{\alpha},p_{\alpha}}\right),$ this
variation of the action gives the exact solution of the
Schr\"odinger equation. If the parameter space is too restricted,
we obtain that wave function in the restricted parameter space
which comes close to the solution of the Schr\"odinger equation.
Performing the variation with the test wave function (2), we
obtain for each parameter $\lambda$ an Euler--Lagrange equation:
\begin{equation}
\frac{d}{dt} \frac{\partial {\cal {L}}}{\partial {\dot
{\lambda}}}-\frac{\partial \cal {L}} {\partial \lambda}=0.
\label{e6}
\end{equation}
For each coherent state and a Hamiltonian of the form, \\
$H=\sum_{\alpha}
\left[T_{\alpha}+{\frac{1}{2}}\sum_{\alpha\beta}V_{\alpha\beta}\right]$,
the Lagrangian and the Euler-Lagrange function can be easily
calculated
\begin{equation}
{\cal {L}} = \sum_{\alpha}{\dot {\bf x}_{\alpha}} {\bf
p}_{\alpha}-\sum_{\beta} \langle{V_{\alpha
\beta}}\rangle-\frac{3}{2Lm}, \label{e7}
\end{equation}
\begin{equation}
{\dot {\bf x}_{\alpha}}=\frac{{\bf
p}_\alpha}{m}+\nabla_{p_{\alpha}}\sum_{\beta} \langle{V_{\alpha
\beta}}\rangle, \label {e8}
\end{equation}
\begin{equation}
{\dot {\bf p}_{\alpha}}=-\nabla_{{\bf x}_{\alpha}}\sum_{\beta}
\langle{V_{\alpha \beta}}\rangle. \label {e9}
\end{equation}
Thus, the variational approach has reduced the n-body
Schr\"odinger equation to a set of 6n-different equations for the
parameters which can be solved numerically. If one inspects the
formalism carefully, one finds that the interaction potential
which is actually the Br\"{u}ckner G-matrix can be divided into
two parts: (i) a real part and (ii) an imaginary part. The real
part of the potential acts like a potential whereas imaginary part
is proportional to the cross section.
In the present model, interaction potential comprises of the
following terms:
\begin{equation}
V_{\alpha\beta} = V_{loc}^{2} + V_{loc}^{3} + V_{Coul} + V_{Yuk}
\label {e10}
\end {equation}
$V_{loc}$ is the Skyrme force, whereas $V_{Coul}$ and $V_{Yuk}$
denote, respectively, the Coulomb and Yukawa potentials. The
Yukawa term takes care of the surface effects, which also play a
role in low-energy processes like fusion and cluster radioactivity
\cite{puri}. The expectation value of these potentials is
calculated as
\begin{eqnarray}
V^2_{loc}& =& \int f_{\alpha} ({\bf p}_{\alpha}, {\bf r}_{\alpha},
t) f_{\beta}({\bf p}_{\beta}, {\bf r}_{\beta}, t)V_I ^{(2)}({\bf
r}_{\alpha}, {\bf r}_{\beta})
\nonumber\\
& & \times {d^{3} {\bf r}_{\alpha} d^{3} {\bf r}_{\beta}
d^{3}{\bf p}_{\alpha} d^{3}{\bf p}_{\beta},}
\end{eqnarray}
\begin{eqnarray}
V^3_{loc}& =& \int f_{\alpha} ({\bf p}_{\alpha}, {\bf
r}_{\alpha}, t) f_{\beta}({\bf p}_{\beta}, {\bf r}_{\beta},t)
f_{\gamma} ({\bf p}_{\gamma}, {\bf r}_{\gamma}, t)
\nonumber\\
& & \times V_I^{(3)} ({\bf r}_{\alpha},{\bf r}_{\beta},{\bf
r}_{\gamma}) d^{3} {\bf r}_{\alpha} d^{3} {\bf r}_{\beta} d^{3}
{\bf r}_{\gamma}
\nonumber\\
& & \times d^{3} {\bf p}_{\alpha}d^{3} {\bf p}_{\beta} d^{3} {\bf
p}_{\gamma}.
\end{eqnarray}
where $f_{\alpha}({\bf p}_{\alpha}, {\bf r}_{\alpha}, t)$ is the
Wigner density which corresponds to the wave functions (eq. 2). If
we deal with the local Skyrme force only, we get
\begin{equation}
V^{Skyrme} = \sum_{{\alpha}=1}^{A_T+A_P}
\left[\frac {A}{2} \sum_{{\beta}\ne {\alpha}} \left(\frac
{\tilde{\rho}_{\alpha \beta}}{\rho_0}\right) + \frac
{B}{C+1}\sum_{{\beta}\ne {\alpha}} \left(\frac {\tilde
{\rho}_{\alpha \beta}} {\rho_0}\right)^C\right].
\end{equation}
Here A, B and C are the Skyrme parameters, which are defined
according to the ground state properties of a nucleus. Different
values of C lead to different equations of state; a larger value
of C (= 380 MeV) is often dubbed the stiff equation of state. The
finite-range Yukawa ($V_{Yuk}$) and effective Coulomb
($V_{Coul}$) potentials read as:
\begin{equation}
V_{Yuk} = \sum_{j, i\neq j} t_{3}
\frac{\exp\{-|\textbf{r}_{\textbf{i}}-\textbf{r}_{\textbf{j}}|/\mu\}}{|\textbf{r}_{\textbf{i}}-\textbf{r}_{\textbf{j}}|/\mu},
\end{equation}
\begin{equation}
V_{Coul} = \sum_{j, i\neq
j}\frac{Z_{eff}^{2}e^{2}}{|\textbf{r}_{\textbf{i}}-\textbf{r}_{\textbf{j}}|}.
\end{equation}
\par
The Yukawa interaction (with $t_{3}$ = -6.66 MeV and $\mu$ = 1.5
fm) is essential for the surface effects. Relativistic effects do
not play a role at the low incident energies of present interest
\cite{lehm}.
\par
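As an illustration, for a momentum-independent potential (so that $\nabla_{p_{\alpha}}\sum_{\beta} \langle V_{\alpha \beta}\rangle = 0$), equations (8) and (9) can be propagated with a simple symplectic scheme. The following minimal Python sketch (our own illustration, not part of the standard QMD codes) uses the Yukawa pair potential of equation (14) with the parameter values quoted above:
\begin{verbatim}
import numpy as np

T3, MU = -6.66, 1.5   # Yukawa parameters (MeV, fm), see text

def yukawa_forces(x):
    # f_i = -grad_i sum_j V_Yuk(|r_i - r_j|),
    # with V(r) = t3 exp(-r/mu) / (r/mu)
    n, f = len(x), np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = x[i] - x[j]
            r = np.linalg.norm(d)
            dV = T3 * np.exp(-r / MU) * (-1.0 / r - MU / r**2)
            f[i] -= dV * d / r
    return f

def propagate(x, p, m, dt, steps):
    # Symplectic-Euler integration of eqs. (8)-(9)
    for _ in range(steps):
        p = p + dt * yukawa_forces(x)
        x = x + dt * p / m
    return x, p
\end{verbatim}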
The phase space of nucleons is stored at several time steps. The
QMD model does not give any information about the fragments
observed at the final stage of the reaction. In order to construct
the fragments, one needs
clusterization algorithms. We shall concentrate here on the MST
and MSTP methods.
\par
According to MST method
\cite{jsingh}, two nucleons are allowed to share the same fragment
if their centroids are closer than a distance $r_{min}$,
\begin{equation}
|\textbf{r}_{\textbf{i}}-\textbf{r}_{\textbf{j}}| \leq r_{min},
\end{equation}
where $\textbf{r}_{\textbf{i}}$ and $\textbf{r}_{\textbf{j}}$ are
the spatial positions of the two nucleons and r$_{min}$ is taken
to be 4 fm.
\par
For the MSTP method, we impose an additional cut in
momentum space, i.e., we allow only those nucleons to form a
fragment which, in addition to equation (16), also satisfy
\begin{eqnarray}
|\textbf{p}_{\textbf{i}}-\textbf{p}_{\textbf{j}}| \leq p_{min},
\end{eqnarray}
where p$_{min}$ = 150 MeV/c.
\par
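For illustration, the MST/MSTP clusterization just described can be sketched in a few lines of Python (a minimal union-find implementation of equations (16) and (17), with positions in fm and momenta in MeV/c; setting \texttt{p\_min} to infinity recovers the plain MST method):
\begin{verbatim}
import numpy as np

def clusterize(r, p, r_min=4.0, p_min=150.0):
    # Bind nucleons i, j if |r_i - r_j| <= r_min and,
    # for MSTP, additionally |p_i - p_j| <= p_min.
    n = len(r)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if (np.linalg.norm(r[i] - r[j]) <= r_min and
                    np.linalg.norm(p[i] - p[j]) <= p_min):
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
\end{verbatim}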
\section{Results and Discussion}
We simulated the reactions of $^{12}$C+$^{12}$C and
$^{40}$Ca+$^{40}$Ca at 100 MeV/nucleon at central and peripheral
colliding geometries, i.e., at $\hat{b}$ = 0.0 and 0.8,
respectively. We use a soft equation of state with the standard
energy-dependent Cugnon cross section.
\par
\begin{figure}[!t]
\centering
\vskip 1cm
\includegraphics[angle=0,width=12cm]{fig1.eps}
\vskip -0cm \caption{ The rapidity distribution for free nucleons and LCPs for the reaction of $^{12}$C+$^{12}$C at incident energy of
100 MeV/nucleon at central (left panel) and peripheral (right) geometries with MST and MSTP methods. }\label{fig1}
\end{figure}
\begin{figure}[!t]
\centering \vskip 1cm
\includegraphics[angle=0,width=12cm]{fig2.eps}
\vskip -0cm \caption{Same as Fig. 1 but for the reaction of
$^{40}$Ca+$^{40}$Ca.}\label{fig2}
\end{figure}
\begin{figure}[!t]
\centering \vskip 1cm
\includegraphics[angle=0,width=12cm]{fig3.eps}
\vskip -0cm \caption{The $\frac{dN}{p_{t}dp_{t}}$
(1/(MeV/c)$^{2}$) as a function of transverse momentum p$_{t}$ for
the free nucleons and LCPs for the reaction of $^{12}$C+$^{12}$C
at central (left panel) and peripheral (right) geometries with MST
and MSTP methods. Lines have same meaning as in Fig.
1.}\label{fig3}
\end{figure}
\begin{figure}[!t] \centering
\vskip 1cm
\includegraphics[angle=0,width=12cm]{fig4.eps}
\vskip -0cm \caption{ Same as Fig. 3 but for the reaction of
$^{40}$Ca+$^{40}$Ca.}\label{fig5}
\end{figure}
\begin{figure}[!t]
\centering
\vskip 1cm
\includegraphics[angle=0,width=12cm]{fig5.eps}
\vskip -0cm \caption{ The time evolution of E$_{rat}$ for free nucleons and LCPs for the reaction of
$^{12}$C+$^{12}$C.}\label{fig6}
\end{figure}
\begin{figure}[!t]
\centering
\vskip 1cm
\includegraphics[angle=0,width=12cm]{fig6.eps}
\vskip -0cm \caption{ Same as Fig. 5 but for the reaction of
$^{40}$Ca+$^{40}$Ca.}\label{fig6}
\end{figure}
\par
In figure 1, we display the rapidity distribution of free nucleons
and LCPs for the reaction of $^{12}$C+$^{12}$C at an incident
energy of 100 MeV/nucleon at central (left panel) and peripheral
(right panel) colliding geometries. The solid and dashed lines
indicate the calculations of the MST and MSTP methods,
respectively. From the figure, we see that there is a quantitative
difference between the results of the MST and MSTP methods, though
qualitatively both methods give a similar behaviour of the
rapidity distribution of nucleons and fragments.
\par
For central collisions (left panel), we see that the peak of the
dN/dY plot is more pronounced for the MSTP method, thus indicating
an enhanced production of free nucleons in the MSTP method as
compared to the MST method. This is due to the fact that in the
MST method we have a single big fragment, because no restriction
is imposed on the relative momentum of the nucleons forming the
fragments. The production of LCPs is larger with the MST method
compared to the MSTP method, which is supported by Refs.
\cite{kumar1,rajni}. At peripheral collisions, the behaviour of
the rapidity plots of free nucleons is similar to the central
case, whereas the trend reverses for the LCPs: for peripheral
collisions, we have a greater production with the MSTP method.
\par
In figure 2, we display the rapidity distribution of free
nucleons, LCPs and IMFs for the reaction of $^{40}$Ca+$^{40}$Ca.
Left (right) panels display the results for b/b$_{max}$ = 0.0 (0.8).
We find a similar behaviour of the free nucleons and LCPs as
reported for the reaction of $^{12}$C+$^{12}$C. The IMFs also
follow a similar trend to the LCPs, i.e., we have more (less)
production of IMFs with the MST method at central (peripheral)
collisions.
\par
In figures 3 and 4, we display dN/p$_{t}$dp$_{t}$ versus p$_{t}$
for the reactions of $^{12}$C+$^{12}$C and $^{40}$Ca+$^{40}$Ca,
respectively. We see that the dN/p$_{t}$dp$_{t}$ spectra follow a
similar behaviour for both the MST and MSTP methods. We have a
higher peak in the spectra of free nucleons with the MST method at
both colliding geometries. The difference between the MST and MSTP
methods in the spectra of the LCPs is less significant. A similar
behaviour is also observed for the reaction of $^{40}$Ca+$^{40}$Ca.
\par
In figure 5, we display the time evolution of E$_{rat}$ of free
nucleons and LCPs for the reaction of $^{12}$C+$^{12}$C at
central (left panel) and peripheral (right panel) colliding
geometries. We find a significant difference between the MST and
MSTP methods for both free nucleons and LCPs. The difference is
larger for central collisions as compared to peripheral ones.
\par
In figure 6, we display the time evolution of E$_{rat}$ of free
nucleons, LCPs and IMFs for the reaction of $^{40}$Ca+$^{40}$Ca
at central (left panel) and peripheral (right panel) collisions.
The solid (dashed) lines represent the results of the MST (MSTP)
method. From the figure, we find a significant difference in
E$_{rat}$ between the MST and MSTP methods, as in the case of the
reaction of $^{12}$C+$^{12}$C. We also find that the difference
between MST and MSTP reduces at peripheral colliding geometries.
\section{Summary}
Using the quantum molecular dynamics model, we studied the role of
momentum correlations in the properties of fragments. This was
achieved by imposing a cut in momentum space during the process of
clusterization. We find that this cut yields significant
differences in the fragment properties at all colliding
geometries.
\section{Acknowledgement}
This work was done under the supervision of Dr. Rajeev K. Puri,
Department of Physics, Panjab University, Chandigarh, India. This
work has been supported by a grant from the Council of Scientific
and Industrial Research (CSIR), Govt. of India.
\section{Introduction}
\label{sec:motivation}
The great success of the finite element method (FEM) can be attributed to its solid theoretical rooting in the fields of variational calculus, and functional analysis \cite{strangfix,hughes2012finite}.
It is widely used for numerically solving partial differential equations (PDEs) in Computer-Aided Engineering (CAE) systems.
Most FEM computations consist of two phases; a) a transformation of the PDE to a discrete form by mapping onto the finite-dimensional approximation space, b) solving the resulting system of linear or nonlinear algebraic equations \cite{FEM}.
Commonly, the FEM implementation is made by computing local integral subroutines by elements, defining local element matrices that are subsequently integrated and assembled in the global system of matrix equations.
A current hot topic regarding numerical FEM approximations is the IsoGeometric Analysis FEM (IGA-FEM) \cite{Hughes}.
It integrates the geometrical modeling of CAD systems with engineering computations of CAE systems.
IGA-FEM computations share the same structure as the traditional FEM.
However, the main difference is that IGA-FEM employs B-splines basis functions for spanning the approximation space \cite{SubroutinePackageForCalculating}.
In several scenarios, mainly when dealing with time dependency or nonlinearity, it is well known that FEM can give rise to computationally expensive problems.
For instance, one of the standard techniques for numerically solving time-dependent PDEs is to perform a finite difference method (FDM) in time, coupled with a FEM discretization in space.
In several scenarios, this implies assembling multiple FEM matrices at every time step.
Indeed, for nonlinear PDEs, it may be required to integrate and assemble FEM matrices at each iteration step of the nonlinear solver, in particular if an implicit method in time is employed \cite{isotumor3d, Puzyrev, CHICCS2016}.
Furthermore, the cost of the assembling grows with the space dimension \cite{cost}.
Therefore, the cost associated with the integration in assembling FEM matrices is critical in terms of computation time.
Traditionally, the integration procedure is performed in parallel, element-by-element, yielding a level of concurrent operations in which the data are independent.
However, in \cite{parallel_integration} a methodology is proposed based on adding two levels of parallelism within each element that distinguish the independent operations.
The goal was to reduce the computational time in the integration procedure using the modern parallel architectures of a GPU \cite{CUDA}.
The paper aims to compare the practical concurrent implementation performance of the classical integration method and sum factorization with different parallelization schemes.
To do that, we apply the methodology presented in \cite{parallel_integration} to sum factorization.
For this, we start by briefly describing the principal concepts involved in the case study.
\subsection{Architecture}
\label{sec:architecture}
State-of-the-art supercomputers are designed as multi-level hierarchical hybrid systems \cite{cyfronet,stampede,Summit}.
A representative architecture is shown in Figure~\ref{fig:node}.
They consist of classical nodes (servers) communicating over a network (specialized solutions, such as Infiniband) through a Message Passing Interface (MPI).
Inside every server (node), multiple multi-core CPUs partially share RAM.
Furthermore, these systems have massively parallel co-processors such as GPUs.
GPUs have dedicated memory, with a hierarchical memory organization \cite{CUDAmemory}, which is not shared with the CPU.
Concurrent algorithms dedicated to such systems are crucial for efficient hardware utilization and reduced carbon trace (green computing).
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\textwidth]{architecture.pdf}
\caption{Architecture of a single node of a modern supercomputer.
Figure based on Summit \cite{Summit} supercomputer.}
\label{fig:node}
\end{figure}
\subsection{Sum factorization}
\label{sec:sumfact_intro}
Sum factorization (see, e.g.~\cite{REV1}) was first introduced in \cite{sumfactbook}.
It was initially employed for the standard higher-order finite element method \cite{REV0}.
However, currently it is preferred the technique of choice for efficient formation of local element matrices in hp-finite elements \cite{VOS20105161, 10.1137/11082539X, karniadakis2005spectral, eibner2005fast} and IGA with higher-order B-splines \cite{REV1,ANTOLIN2015817, BRESSAN2019437}.
In essence, sum factorization is a reordering of the computations in such a way as to exploit the underlying tensor product of the test and trial spaces involved.
When employed, the cost of integration is reduced from ${\cal O}(m^3n^3q^3)$ to ${\cal O}(m^3n^3q+m^2n^2q+mnq^3)$, where $m$ denotes the number of test functions over an element in each direction, $n$ is the number of trial functions over the element, and $q$ is the number of quadrature points over the element in each spatial direction.
In practice, this implies that, for a given polynomial degree $p$, the total reduction is from ${\cal O}(p^9)$ to ${\cal O}(p^7)$ when considering Gaussian quadrature, and down to ${\cal O}(p^6)$ for weighted quadrature.
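To make the reordering concrete, the following sketch (in Python with \texttt{numpy}; the helper names are ours) forms the local tensor of a 3D mass-type term by contracting one spatial direction at a time, and checks the result against the direct evaluation:
\begin{verbatim}
import numpy as np

def local_matrix_sumfact(B, w, J):
    # B: (n, q) 1D basis values at quadrature points (same per direction)
    # w: (q,) 1D quadrature weights; J: (q, q, q) Jacobian at the points
    W  = np.einsum('a,b,c,abc->abc', w, w, w, J)        # fold weights in
    C1 = np.einsum('ic,jc,abc->ijab', B, B, W)          # contract z
    C2 = np.einsum('ib,jb,klab->ijkla', B, B, C1)       # contract y
    return np.einsum('ia,ja,klmna->ijklmn', B, B, C2)   # contract x

n, q = 4, 5
rng = np.random.default_rng(0)
B, w, J = rng.random((n, q)), rng.random(q), rng.random((q, q, q))
direct = np.einsum('ia,ja,kb,lb,mc,nc,a,b,c,abc->ijklmn',
                   B, B, B, B, B, B, w, w, w, J)
assert np.allclose(direct, local_matrix_sumfact(B, w, J))
\end{verbatim}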
\subsection{Trace theory}
\label{sec:tracetheory}
There exist multiple methods used in the formal verification of concurrent computations.
One of the most popular is the \textit{Trace Theory} \cite{TraceTheory}.
Other methods include \textit{Petri Net} \cite{PetriNet}, \textit{Process Calculi} \cite{ProcessCalculi} and \textit{Actor Model} \cite{ActorModel}.
Trace Theory delivers the Foata Normal Form (FNF) \cite{BookOfTraces} and Diekert dependency graphs, which help characterize the processing in a single element and simplify the parallel implementation on massively parallel machines, such as GPUs.
Finally, it forms a basis for near-optimal scheduling; it simplifies concurrent implementation on a GPU while providing a theoretical framework for verifying the correctness of such a parallel algorithm.
\subsection{Structure of the article}
The rest of the article is organized as follows.
First, in Section \ref{sec:model} we describe the model problem, together with its discretization in time and space, used for the benchmarks.
Next, in Section~\ref{sec:integration} we discuss the integration algorithms and apply trace theory to create a concurrent algorithm performing sum factorization.
In Section~\ref{sec:numres} we consider several numerical experiments to show and discuss the
performance of the integration methodologies.
Finally, we conclude the paper in Section \ref{sec:conclussions}.
\section{Model problem and IGA-discrete variational formulation}
\label{sec:model}
\subsection{Model problem}
\label{sec:formulation}
In the spirit of presenting the proposed methodology in a simple setting (i.e., the extension of \cite{parallel_integration}
to the concurrent sum factorization algorithm),
we will consider the following heat-transfer model problem:
\begin{equation}
\left\{
\begin{aligned}
\DPart{u}{t} &=
\Delta u
&\qquad&\text{in }\Omega\times[0, T]\\
\nabla u \cdot \hat{\Vect{n}} &= 0
&\qquad&\text{on }\partial\,\Omega\times[0,T] \\
u &= u_0 \text{, at } t=0 &\qquad&\text{in }\Omega
\end{aligned}
\right.
\label{eq:heat}
\end{equation}
where $\Omega = (0 , 1)^3 \subset \mathbb{R}^3$ denotes the spatial domain, $\hat{\Vect{n}}$ denotes the normal vector to the domain boundary $\partial \Omega$, $T>0$~is the length of the time interval, and $u_0$ is a given initial state.
\subsection{Discretization in time}
To obtain a fully-discrete formulation of problem~\eqref{eq:heat}, we start by considering its corresponding continuous weak formulation in space, given as follows:
Find~$u \in \mathcal{C}^1\left(\left(0, T\right), H^1\left(\Omega\right)\right)$ such that $u=u_0$ at $t=0$ and, for each~$t \in \left(0, T\right)$, it holds:
\begin{equation}
\label{eq:weak_space}
\int_\Omega \DPart{u}{t} v \, dx =
- \int_\Omega \nabla u \cdot \nabla v \, dx, \, \qquad \forall \, v \in H^1\left(\Omega\right).
\end{equation}
For simplicity, we consider a discrete-in-time version of problem~\eqref{eq:weak_space} by employing the forward Euler method.
This is, denoting by $u_n$ the approximation of $u$ at time $t = n\Delta_t$, with $n=0,\dots,N$ where $\Delta_t = T/N$ denotes a fixed time step for a given integer $N>0$, we obtain $u_{n+1} \in H^1(\Omega)$ as the solution of the following variational problem:
\begin{equation}
\label{eq:Euler}
\int_\Omega u_{n+1} v \, dx =
\int_\Omega u_n v \, dx - \Delta_t \int_\Omega \nabla u_n \cdot \nabla v \, dx, \qquad \forall \, v \in H^1\left(\Omega\right).
\end{equation}
\subsection{B-splines basis functions}
\label{sec:splines}
A B-spline is a convenient function for representing polynomial splines (see e.g.~\cite{boor,shumaker}).
B-splines are characterized by the polynomial degree inside the respective elements of the finite element mesh and by their regularity at the interfaces between them.
For simplicity, in this work we will consider 3D tensor-product B-spline basis functions of the same polynomial degree and regularity at the interior faces of the tensor mesh.
However, the methodology can be easily extended to more general B-spline basis functions.
Consider a partitioning of $\overline{\omega} = [ 0 , 1 ]$ into $K$ uniform elements $[\widehat{x}_{k-1},\widehat{x}_k]$, with
$$ \widehat{x}_0 = 0 < \widehat{x}_1 < \dots < \widehat{x}_{k-1} < \widehat{x}_{k} < \dots < \widehat{x}_{K} = 1.$$
For a given $p>0$, the B-spline basis functions being piece-wise polynomials of degree $p$, with $C^{p-1}$ regularity at the interior knots $\{\widehat{x}_k\}_{k=1}^{K-1}$, are defined trough the following knot vector:
\begin{equation}\label{eq:knot_vector}
\Xi = \{ \gamma_i \}^{K+2p}_{i=0} :=
\{ \underbrace{0,\dots,0}_{p+1} , \dots , \widehat{x}_{k-1}, \widehat{x}_k, \dots, \underbrace{1,\dots,1}_{p+1} \}.
\end{equation}
More precisely, the $i$-th B-spline basis function, with $0\leq i\leq K + p - 1$, is constructed using the Cox--de--Boor recursive formulae \cite{SubroutinePackageForCalculating}
\begin{equation}
B_{i;0}(\xi):=\left\{\begin{array}{ll} 1, & \mathrm {if} \quad \gamma_{i}\leq \xi <\gamma_{i+1} , \\ 0, & \mathrm {otherwise} ,
\end{array}\right.
\label{eq:cox1}
\end{equation}
\begin{equation}
B_{i;q}(\xi):={\frac {\xi-\gamma_{i}}{\gamma_{i+q}-\gamma_{i}}}B_{i;q-1}(\xi)+{\frac {\gamma_{i+q+1}-\xi}{\gamma_{i+q+1}-\gamma_{i+1}}}B_{i+1;q-1}(\xi), \text{ for } 1\leq q \leq p,
\label{eq:cox2}
\end{equation}
where $B_{i,q}(\xi)$ denotes the value of the $i$-th B-spline function of degree $q$ at the point $\xi$.
In formula (\ref{eq:cox2}), the limit case $0/0$ is defined as $0$.
We notice that the Cox--de--Boor recursive formulae \eqref{eq:cox1} and \eqref{eq:cox2} define a total of $K+p$ 1D B-splines basis functions.\\
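As an illustration, the recursion \eqref{eq:cox1}--\eqref{eq:cox2} can be implemented in a few lines. The following is a minimal Python sketch (our own, not optimized), returning the values of all $K+p$ basis functions at a point $\xi \in [0,1)$:
\begin{verbatim}
import numpy as np

def bspline_basis(knots, p, xi):
    # Cox-de Boor recursion; the 0/0 case is taken as 0.
    g = np.asarray(knots, dtype=float)
    B = np.array([1.0 if g[i] <= xi < g[i + 1] else 0.0
                  for i in range(len(g) - 1)])  # degree 0
    for q in range(1, p + 1):
        nxt = np.zeros(len(g) - q - 1)
        for i in range(len(nxt)):
            a = (xi - g[i]) / (g[i + q] - g[i]) if g[i + q] > g[i] else 0.0
            b = ((g[i + q + 1] - xi) / (g[i + q + 1] - g[i + 1])
                 if g[i + q + 1] > g[i + 1] else 0.0)
            nxt[i] = a * B[i] + b * B[i + 1]
        B = nxt
    return B

# Example: K = 4 uniform elements, p = 2, giving K + p = 6 functions
knots = [0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1]
vals = bspline_basis(knots, 2, 0.3)
assert np.isclose(vals.sum(), 1.0)  # partition of unity
\end{verbatim}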
We define 3D basis functions as tensor products of 1D B-spline basis functions that, for simplicity, we construct considering the same number of element partitions in all three spatial directions.\\
For $x = (x_1,x_2,x_3) \in \Omega$, we will denote by
\begin{equation}\label{eq:basis_construction}
B_{\delta;p}(x) = B_{k;p}(x_1) B_{l;p}(x_2) B_{m;p}(x_3), \text{ with } \delta = (k,l,m) \in \mathcal{K},
\end{equation}
the evaluation in $x$ of a generic 3D B-spline basis function, where
\begin{equation}
\mathcal{K} = \{0,1,\dots,K+p-1\}^3.
\end{equation}
Finally, we will denote by
\begin{equation}\label{eq:B_spline_space}
\mathcal{B}_{\mathcal{K};p}:=\text{span}\left\{B_{\delta;p}, \text{ with } \delta \in \mathcal{K}\right\} \subset H^1(\Omega)
\end{equation}
the space generated by the 3D--tensor B--spline basis functions of degree $p$ and global regularity $p-1$.
\subsection{Fully-discrete variational formulation}
For a given polynomial degree $p$, the fully--discrete formulation of problem~\eqref{eq:heat} is obtained from \eqref{eq:Euler} by considering the $H^1$--conforming space $\mathcal{B}_{\mathcal{K};p}$ as the approximation space for the discrete solution $U_{n+1} \approx u_{n+1}$.
This is, given $U_{n} \in \mathcal{B}_{\mathcal{K};p}$, we obtain $U_{n+1} \in \mathcal{B}_{\mathcal{K};p}$ as the solution of the following discrete variational formulation problem:
\begin{equation}\label{eq:discrete_variational_formulation}
\textrm{Find } U_{n+1} \in \mathcal{B}_{\mathcal{K};p}, \textrm{ such that } a\left(U_{n+1},B_{\delta;p}\right)=l_n\left(B_{\delta;p}\right), \forall \,\delta \in \mathcal{K},
\end{equation}
where
\begin{equation}
a\left(U_{n+1},B_{\delta;p}\right)=\int_{\Omega} U_{n+1} B_{\delta;p}\, dx,
\end{equation}
\begin{equation}
l_n\left(B_{\delta;p}\right)=\int_\Omega U_{n} B_{\delta;p}\, dx - \Delta t \int_{\Omega} \nabla U_{n} \cdot \nabla B_{\delta;p} \, dx,
\end{equation}
and $U_{0}$ corresponds to the classical $L^2$-projection of the initial state $u_0$ in the B-spline space $\mathcal{B}_{\mathcal{K};p}$.\\
As a consequence of the finite number of basis functions for the discrete space $\mathcal{B}_{\mathcal{K};p}$, we can assume that the wanted discrete solution is written as:
\begin{equation}
U_{n+1}(x) = \sum_{\beta \in \mathcal{K}} \mu_\beta B_{\beta;p}(x).
\end{equation}
Therefore, after considering an appropriate ordering for the basis functions that here we will consider implicit for the sake of simplicity, problem~\eqref{eq:discrete_variational_formulation} can be equivalently written in matrix form as:
\begin{equation}\label{eq:matrix_form}
\text{Find } \mu \in \mathbb{R}^{(K+p)^3}, \text{ such that } A \mu = L,
\end{equation}
with the right-hand side $L_\delta = l_n \left(B_{\delta;p}\right)$, and the Gram matrix
\begin{equation}\label{eq:mass_term}
A_{\delta,\beta} = a\left(B_{\beta;p}, B_{\delta;p}\right).
\end{equation}
\section{Integration algorithms}
\label{sec:integration}
\subsection{Element-by-element integration}
For the sake of simplicity, here we will focus on the integration and assembling of the Gram matrix $A$.
The standard integration strategy consists of assembling the linear system \eqref{eq:matrix_form} element-by-element.
To exemplify the procedure, we assume that the domain $\Omega$ is decomposed into a set of $K^3$ cubic elements
\begin{equation}\label{eq:Edelta}
E_{\gamma} = (\widehat{x}_i,\widehat{x}_{i+1}) \times (\widehat{x}_j,\widehat{x}_{j+1}) \times (\widehat{x}_k,\widehat{x}_{k+1}),
\end{equation}
where $\widehat{x}_j = j/K$ (cf.~Section~\ref{sec:splines}), and $\gamma = (i,j,k) \in \{0, 1, \ldots, K-1\}^3$.
Denoting by $\delta = (h,i,j)$, and by $\beta = (k,l,m)$, the matrix element $A_{\delta,\beta}$ (see \eqref{eq:mass_term}) is computed as the sum
\begin{equation}
A_{\delta,\beta} = \sum_{\gamma \in \{0, 1, \ldots, K-1\}^3} A_{\delta,\beta}^{\gamma},
\end{equation}
where $A_{\delta, \beta}^{\gamma}$ is given in terms of the 1D B-spline basis functions as (see \eqref{eq:basis_construction}):
\begin{equation}
A_{\delta, \beta}^{\gamma} = \int_{E_{\gamma}}
B_{h;\, p}(x_1) \, B_{i;\, p}(x_2) \, B_{j;\, p}(x_3) \, B_{k;\, p}(x_1) \, B_{l;\, p}(x_2) \, B_{m;\, p}(x_3) \, dx.
\label{eq:88}
\end{equation}
Let us consider an exact quadrature rule with the set of weights and nodes
$\{\omega^n = (\omega^{n_1},\omega^{n_2}, \omega^{n_3})$,
$x^n = (x^{n_1}, x^{n_2}, x^{n_3}) \in E_{\gamma}\}$, with
$n_1=1,\dots,P_1$,
$n_2=1,\dots,P_2$,
$n_3=1,\dots,P_3$,
$n=1,\ldots,P$,
where $P = P_1 P_2 P_3$ depends on the quadrature rule and the polynomial order $p$. Then, the matrix element \eqref{eq:88} is computed as:
\begin{equation}
\displaystyle A_{\delta, \beta}^{\gamma} = \sum_{n_1=1}^{P_1} \sum_{n_2=1}^{P_2} \sum_{n_3=1}^{P_3} \omega^{n_1} \omega^{n_2} \omega^{n_3} \, \Pi(x^n) \, J(x^n),
\label{eq:10}
\end{equation}
where $\Pi(x^n) = B_{h;\, p}(x^{n_1}) \, B_{i;\, p}(x^{n_2}) \, B_{j;\, p}(x^{n_3}) \,
B_{k;\, p}(x^{n_1}) \, B_{l;\, p}(x^{n_2}) \, B_{m;\, p}(x^{n_3}) $ and $J(x^n)$ corresponds to the Jacobian of the particular element evaluated at $x^n$.
\begin{rem}[Element-by-element pre-computations]
We notice that~\eqref{eq:10} can be efficiently calculated by first pre-computing, over each element, only the integrals of the B-spline products with non-empty support. Therefore, for a given $\alpha=(i,j,k)$, it will be helpful to introduce the set of multi-indices
\begin{equation}
\mathcal{K}^\Delta_{\alpha} = \{ (z,r,s): z \in \{i, \dots, i+p\}, r \in \{j, \dots, j+p\}, s \in \{k, \dots, k+p\}\}
\end{equation}
corresponding to the indices of the $(p+1)^3$ basis functions with non-empty support on the $\alpha$-element.
\end{rem}
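For illustration, the set $\mathcal{K}^\Delta_{\alpha}$ can be enumerated directly; a short Python sketch (our own naming):
\begin{lstlisting}[language=Python, frame=single]
def local_support_indices(alpha, p):
    # Multi-indices of the (p+1)^3 basis functions with non-empty
    # support on the element indexed by alpha = (i, j, k).
    i, j, k = alpha
    return [(z, r, s)
            for z in range(i, i + p + 1)
            for r in range(j, j + p + 1)
            for s in range(k, k + p + 1)]
\end{lstlisting}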
\subsection{Algorithm descriptions and computational cost}
\label{sec:algorithm}
In this section, we describe the two algorithms to be compared in subsequent sections: the classical integration algorithm and the sum factorization algorithm.
In the {\bf classical integration} algorithm, local contributions to the left-hand-side Gram matrix are accumulated as a sum over quadrature points, as shown in equation (\ref{eq:10}) and described in Algorithm~\ref{algorithm1}.
The associated computational cost is known to scale as ${\cal O}(p^9)$ with respect to the polynomial degree $p$ \cite{HIEMSTRA2019234}.
\begin{algorithm}
\For{element $E \in \Omega $}{
\For{test function $B_{i,x}$}{
\For{trial function $B_{j,x}$}{
\For{test function $B_{i,y}$}{
\For{trial function $B_{j,y}$}{
\For{test function $B_{i,z}$}{
\For{trial function $B_{j,z}$}
{
\For{quadrature point $(\xi, w)$ in $E$}{
$A_{\delta,\beta} \gets A_{\delta,\beta} + B_{i,x}(\xi_x)B_{j,x}(\xi_x) \, B_{i,y}(\xi_y) B_{j,y}(\xi_y) \, B_{i,z}(\xi_z) B_{j,z}(\xi_z) \, J(\xi) \, w$\;
}
}}}
}}}
}
\caption{Classical integration algorithm}
\label{algorithm1}
\end{algorithm}
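For reference, a direct transcription of Algorithm~\ref{algorithm1} into Python (illustrative only; the array layout and names are assumptions on our part) makes the ${\cal O}(p^9)$ scaling visible: six nested loops over local test and trial functions enclose three quadrature loops.
\begin{lstlisting}[language=Python, frame=single]
import numpy as np

def local_gram_classical(Bx, By, Bz, w, J):
    # Bx, By, Bz: (p+1, P_d) values of the 1D basis functions at the
    # quadrature points of each direction; w: three 1D weight arrays;
    # J: (P_1, P_2, P_3) Jacobian values on the element.
    n_loc = Bx.shape[0]   # p + 1 functions per direction
    idx = lambda a, b, c: (a * n_loc + b) * n_loc + c
    A = np.zeros((n_loc**3, n_loc**3))
    for i1 in range(n_loc):
     for j1 in range(n_loc):
      for i2 in range(n_loc):
       for j2 in range(n_loc):
        for i3 in range(n_loc):
         for j3 in range(n_loc):
          acc = 0.0
          for n1 in range(len(w[0])):
           for n2 in range(len(w[1])):
            for n3 in range(len(w[2])):
             acc += (w[0][n1] * w[1][n2] * w[2][n3]
                     * Bx[i1, n1] * Bx[j1, n1]
                     * By[i2, n2] * By[j2, n2]
                     * Bz[i3, n3] * Bz[j3, n3]
                     * J[n1, n2, n3])
          A[idx(i1, i2, i3), idx(j1, j2, j3)] = acc
    return A
\end{lstlisting}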
The {\bf sum factorization} algorithm, in turn, reorganizes the integration terms of equation \eqref{eq:10} to reduce the computational cost of the summation with respect to the polynomial degree $p$.
In practice, equation \eqref{eq:10} is rewritten as:
\begin{equation}
A_{\delta, \beta} = \sum_{n_3=1}^{P_3} \omega^{n_3} \, B_{j;\, p}(x^{n_3}) \, B_{m;\, p}(x^{n_3}) \,
\textcolor{red}{C(i_2,i_3,j_2,j_3,k_1)},
\label{eq:finalsum}
\end{equation}
where the buffer~$C$ is given by
\begin{equation}
\textcolor{red}{C(i_2,i_3,j_2,j_3,k_1)} =
\sum_{n_2=1}^{P_2} \omega^{n_2} \, B_{i;\, p}(x^{n_2}) \, B_{l;\, p}(x^{n_2})
\textcolor{blue}{\underbrace{\sum_{n_1=1}^{P_1} \omega^{n_1} \, B_{h;\, p}(x^{n_1}) \, B_{k;\, p}(x^{n_1}) \, J(x^n)}_{D(i_3,j_3,k_1,k_2)}}.
\label{eq:bufferC}
\end{equation}
The full procedure is described in Algorithm~\ref{algorithm2}, where we can observe three distinct groups of loops.
As a consequence, the computational cost associated with sum factorization is ${\cal O}(p^7)$ \cite{HIEMSTRA2019234}.
\begin{algorithm}
\For{test function $B_{i,z}$ - ($i_3$)}{
\For{trial function $B_{j,z}$ - ($j_3$)}{
\For{quadrature point $(\xi_x, w_x)$ in $E$ - ($k_1$)}{
\For{quadrature point $(\xi_y, w_y)$ in $E$ - ($k_2$)}{
\For{quadrature point $(\xi_z, w_z)$ in $E$ - ($k_3$)}{
$D(i_3,j_3,k_1,k_2) \gets D(i_3,j_3,k_1,k_2) + B_{i,z}(\xi_z) B_{j,z}(\xi_z) \, w_z \, J(\xi)$\;
}
}}
}}
\For{test function $B_{i,y}$ - ($i_2$)}{
\For{trial function $B_{j,y}$ - ($j_2$)}{
\For{test function $B_{i,z}$ - ($i_3$)}{
\For{trial function $B_{j,z}$ - ($j_3$)}{
\For{quadrature point $(\xi_x, w_x)$ in $E$ - ($k_1$)}{
\For{quadrature point $(\xi_y, w_y)$ in $E$ - ($k_2$)}{
$C(i_2,i_3,j_2,j_3,k_1) \gets C(i_2,i_3,j_2,j_3,k_1) + B_{i,y}(\xi_y) B_{j,y}(\xi_y) \, D(i_3,j_3,k_1,k_2) \, w_y$\;
}}
}}
}}
\For{test function $B_{i,x}$ - ($i_1$)}{
\For{trial function $B_{j,x}$ - ($j_1$)}{
\For{test function $B_{i,y}$ - ($i_2$)}{
\For{trial function $B_{j,y}$ - ($j_2$)}{
\For{test function $B_{i,z}$ - ($i_3$)}{
\For{trial function $B_{j,z}$ - ($j_3$)}{
\For{quadrature point $(\xi_x, w_x)$ in $E$ - ($k_1$)}{
$A(i_1,j_1) \gets A(i_1,j_1) + B_{i,x}(\xi_x)B_{j,x}(\xi_x) \, C(i_2,i_3,j_2,j_3,k_1) \, w_x$\; \smallskip
}
}}
}}
}}
\caption{Sum factorization algorithm}
\label{algorithm2}
\end{algorithm}
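The same local matrix can be obtained with three tensor contractions mirroring the three loop groups of Algorithm~\ref{algorithm2}. A compact sketch using \texttt{numpy.einsum} (our notation; the intermediate arrays play the roles of the buffers $D$ and $C$ above):
\begin{lstlisting}[language=Python, frame=single]
import numpy as np

def local_gram_sum_factorized(Bx, By, Bz, w, J):
    # Same inputs as the classical sketch. Each contraction removes
    # one quadrature direction, lowering the dominant cost to O(p^7).
    n_loc = Bx.shape[0]
    # D[i3,j3,n1,n2] = sum_n3 w3 * Bz[i3,n3] * Bz[j3,n3] * J[n1,n2,n3]
    D = np.einsum('c,ic,jc,abc->ijab', w[2], Bz, Bz, J)
    # C[i2,j2,i3,j3,n1] = sum_n2 w2 * By[i2,n2] * By[j2,n2] * D[i3,j3,n1,n2]
    C = np.einsum('b,kb,lb,ijab->klija', w[1], By, By, D)
    # A[i1,i2,i3,j1,j2,j3] = sum_n1 w1 * Bx[i1,n1] * Bx[j1,n1] * C[...]
    A = np.einsum('a,ma,na,klija->mkinlj', w[0], Bx, Bx, C)
    return A.reshape(n_loc**3, n_loc**3)
\end{lstlisting}
Up to round-off, this returns the same matrix as the classical sketch, which provides a convenient correctness check.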
\subsection{Concurrency model for sum factorization}
\label{sec:concurencymodel}
Several methods can be used to formally verify concurrent computations by constructing a concurrency model.
In~\cite{parallel_integration}, a concurrency model based on the \textit{Trace Theory}, introduced by Diekert and Mazurkiewicz \cite{TraceTheory}, is discussed.
It contains four levels of concurrency:
\begin{enumerate}
\item Concurrent computations on parts of the mesh.
\item Concurrent computations on single elements.
\item Concurrent computations of single entries in an element matrix.
\item Concurrent computations of Cox--de--Boor formulae and 3D B-spline functions evaluation.
\end{enumerate}
Using the same methodology, we will discuss the last two levels of concurrency for the sum factorization algorithm.
\noindent
The alphabet of tasks for the integration of B-spline basis functions over a given element consists of the following nine tasks:
\begin{enumerate}
\item
$t_{\alpha;d}^{0;r;n}$
- computational task evaluating a 1D basis function with subscript $r$ and order $0$, over the element $E_{\alpha}$ at the coordinate of quadrature point $x_d^n$.
Task $t_{\alpha;d}^{0;r;n}$ refers to formula \eqref{eq:cox1}.
Namely, it computes the function $B_{r;0}(x_{d}^{n})$ over the element $E_{\alpha}$.
\item $t_{\alpha;d}^{p;r;n}$, ($p > 0$)
- computational task evaluating a 1D basis function with subscript $r$ and order $p$, over the element $E_{\alpha}$ at the coordinate of quadrature point $x_d^n$.
Task $t_{\alpha;d}^{p;r;x}$ refers to formula \eqref{eq:cox2}.
It contains a series of sums, subtractions, multiplications, and divisions, using output from tasks $t_{\alpha;d}^{p-1;r;n}$ and $t_{\alpha;d}^{p-1;r+1;n}$.
Namely, it computes the function $B_{r;p}(x_{d}^{n})$ over the element $E_{\alpha}$.
\item $s_{\alpha}^{p;n}$
- computational task evaluating the Jacobian value $J(x^n)$ over the element $E_\alpha$.
Namely, it computes $J(x^n)$ according to formula (\ref{eq:10}).
\item $t_{\alpha;1}^{p;\beta,\gamma;n}$
- computational task evaluating the value of the product of two 1D basis functions $B^1_{\beta;p}$, $B^1_{\gamma;p}$, and the Jacobian value $J(x^n)$, over the element $E_{\alpha}$, at the quadrature point $x^n$.
Task $t_{\alpha;1}^{p;\beta,\gamma;n}$ consists of a multiplication of output from tasks $t_{\alpha;1}^{p;{a};n}$, $t_{\alpha;1}^{p;{b};n}$, and $s_{\alpha}^{p;n}$.
Namely, it computes buffer $K_{\beta,\gamma;p}(x^n) = B_{a;p}(x_1^n)B_{b;p}(x_1^n)J(x^n)$ according to formula (\ref{eq:bufferC}).
\item $s_{\alpha;1}^{p;\beta,\gamma;n}$
- computational task evaluating the sum of $K_{\beta,\gamma;p}(x^n)$ along $x_1$.
Task $s_{\alpha;1}^{p;\beta,\gamma;x}$ consists of a sum of outputs from tasks $t_{\alpha;1}^{p;\beta,\gamma;n}$.
Namely, it computes the buffer $C_{\beta,\gamma;p}(x^n) = \sum\limits_{n=1}^{P_1} K_{\beta,\gamma;p}(x^n)$ according to formula (\ref{eq:bufferC}).
\item $t_{\alpha;2}^{p;\beta,\gamma;n}$
- computational task evaluating the product of two 1D basis functions $B_{\beta;p}$, $B_{\gamma;p}$, multiplied by the previous buffer value $C_{\beta,\gamma;p}$, over the element $E_{\alpha}$, at the quadrature point $x^n_2$.
Task $t_{\alpha;2}^{p;\beta,\gamma;x}$ consists of a multiplication of output from tasks $t_{\alpha;2}^{p;{a};n}$, $t_{\alpha;2}^{p;{b};n}$, and $s_{\alpha;1}^{p;\beta,\gamma;n}$.
Namely, it computes the buffer $F_{\beta,\gamma;p}(x^n) = B_{a;p}(x_2^n)B_{b;p}(x_2^n) \, C_{\beta,\gamma;p}(x^n)$ according to formula (\ref{eq:bufferC}).
\item $s_{\alpha;2}^{p;\beta,\gamma;n}$
- computational task evaluating the sum of $F_{\beta,\gamma;p}(x^n)$ along $x_2$.
Task $s_{\alpha;2}^{p;\beta,\gamma;x}$ consists of a sum of outputs from tasks $t_{\alpha;2}^{p;\beta,\gamma;n}$.
Namely, it computes the buffer $D_{\beta,\gamma;p}(x^n) = \sum\limits_{n=1}^{P_2} F_{\beta,\gamma;p}(x^n)$ according to formula (\ref{eq:bufferC}).
\item $t_{\alpha;3}^{p;\beta,\gamma;n}$
- computational task evaluating the product of two 1D basis functions $B_{\beta;p}$, $B_{\gamma;p}$, multiplied by the previous buffer value $D_{\beta,\gamma;p}$, over element $E_{\alpha}$, at the quadrature point $x^n_3$.
Task $t_{\alpha;3}^{p;\beta,\gamma;x}$ consists of a multiplication of outputs from tasks $t_{\alpha;3}^{p;{a};n}$, $t_{\alpha;3}^{p;{b};n}$, and $s_{\alpha;2}^{p;\beta,\gamma;n}$.
Namely, it computes $H_{\beta,\gamma}^\alpha = B_{a;p}(x_3^n)B_{b;p}(x_3^n) \, D_{\beta,\gamma;p}(x^n)$ according to formula (\ref{eq:finalsum}).
\item $s_{\alpha;3}^{p;\beta,\gamma;n}$
- computational task evaluating the sum of $H_{\beta,\gamma;p}(x^n)$ along $x_3$.
Task $s_{\alpha;3}^{p;\beta,\gamma;x}$ consists of a sum of outputs from tasks $t_{\alpha;3}^{p;\beta,\gamma;n}$.
Namely, it computes the buffer $A_{\beta,\gamma} = \sum\limits_{n=1}^{P_3} H_{\beta,\gamma;p}(x^n)$ according to formula (\ref{eq:finalsum}).
\end{enumerate}
Summarizing, each task has two, three, or four superscripts, arranged in two or three groups separated by semicolons.
The first group $p$ determines the B-spline order.
The second (optional) group of multi-indices $\beta$, $\gamma$ or index $r$ determines the B-spline function indices.
The third (optional) group $n$ determines the quadrature point $x^n$ at which the functions are evaluated.
Additionally, tasks have one or two subscripts.
The first one, the index $\alpha$, determines the element over which we perform computations.
The second (optional) determines the direction along the $x$, $y$, or $z$ axis.
It is important to recall that a particular task cannot be performed until the completion of the tasks whose output it requires.
\subsubsection{Set of dependencies}
\label{sec:setdepen}
In this section, we define the alphabet of tasks $\Sigma$ and the set of dependencies between them denoted by $D$.
For this, we start by setting the variables:
\begin{align}
& n \in \{1,2,\dots,P\}, \nonumber \\
& r \in \{0,1,\dots,p \}, \nonumber \\
& d \in \{1,2,3\}, \nonumber \\
& f \in \{ k, k+1, \dots, k+p \}, \nonumber \\
& g \in \{ l, l+1, \dots, l+p \}, \nonumber \\
& h \in \{ m, m+1, \dots, m+p \}, \nonumber \\
& \alpha = (k,l,m) \in \{0,\dots,K-1\}^3, \nonumber \\
& \beta = (a,b,c) \in \mathcal{K}^\Delta_\alpha, \nonumber \\
& \gamma \in \mathcal{K}^\Delta_\alpha.
\label{eq:ranges}
\end{align}
We also define $I_\alpha$ as the function mapping a multi-index $(a,b,c)$ to the corresponding entry index in the local element matrix of $E_\alpha$.
That is,
\begin{equation}
I_\alpha : \mathcal{K}_\alpha^\Delta \rightarrow \{0,1,\dots,(p+1)^3-1 \}.
\label{eq:index_function}
\end{equation}
We define the alphabet of tasks as:
\begin{equation}
\Sigma = \left\{ t_{\alpha;1}^{r;f;n} , t_{\alpha;2}^{r;g;n}, t_{\alpha;3}^{r;h;n},
t_{\alpha}^{p;\beta;n}, s^{p;n}_{\alpha} \right\}
\cup \left\{
t_{\alpha;d}^{p;\beta,\gamma;n} , s_{\alpha;d}^{p;\beta,\gamma}; \, I_\alpha(\beta) \geq I_\alpha(\gamma) \right\},
\label{eq:alphabet}
\end{equation}
and the set of dependencies between tasks from the alphabet $\Sigma$ as:
\begin{align}
D = & \, J^+ \cup (J^+)^{-1} \cup I_\Sigma, \label{eq:formulaD}
\end{align}
where
\begin{align}
J = & \, J_1 \cup J_2 \cup J_3 \cup J_4,
\label{eq:formulaJ}
\end{align}
with
\begin{align*}
J_1 = & \, \Big\{ ( t_{\alpha;1}^{r-1;f;n} , t_{\alpha;1}^{r;f;n} ) , ( t_{\alpha;1}^{r-1;f+1;n} , t_{\alpha;1}^{r;f;n} ) , ( t_{\alpha;2}^{r-1;g;n} , t_{\alpha;2}^{r;g;n} ) , \nonumber\\
& ( t_{\alpha;2}^{r-1;g+1;n} , t_{\alpha;2}^{r;g;n} ), ( t_{\alpha;3}^{r-1;h;n} , t_{\alpha;3}^{r;h;n} ) , ( t_{\alpha;3}^{r-1;h+1;n} , t_{\alpha;3}^{r;h;n} )\Big\}, \\%\label{eq:formulaJ1} \\
J_2 = & \left\{ ( t_{\alpha;1}^{p;\alpha;n} , t_{\alpha;1}^{p;\beta,\gamma;n}) , ( t_{\alpha;2}^{p;\alpha;n} , t_{\alpha;2}^{p;\beta,\gamma;n}) , ( t_{\alpha;3}^{p;\alpha;n} , t_{\alpha;3}^{p;\beta,\gamma;n}) \right\}, \\%\label{eq:formulaJ2}\\
J_3 = & \left\{ (t_{\alpha;1}^{p;\beta,\gamma;n}, s_{\alpha;1}^{p;\beta,\gamma;n}), (t_{\alpha;2}^{p;\beta,\gamma;n}, s_{\alpha;2}^{p;\beta,\gamma;n}) , (t_{\alpha;3}^{p;\beta,\gamma;n}, s_{\alpha;3}^{p;\beta,\gamma;n}) \right\}, \\%\label{eq:formulaJ3}\\
J_4 = & \left\{ (s_{\alpha}^{p;n}, t_{\alpha;1}^{p;\beta,\gamma;n}) , (s_{\alpha;1}^{p;\beta,\gamma;n}, t_{\alpha;2}^{p;\beta,\gamma;n}) , (s_{\alpha;2}^{p;\beta,\gamma;n}, t_{\alpha;3}^{p;\beta,\gamma;n}) \right\}.
\end{align*}
The primitives described above define the trace monoid for the problems under consideration.
The relation $J$ defined in equation~\eqref{eq:formulaJ} provides the edges of the Diekert dependency graph \cite{TraceTheory}, which will be drawn later within this model in Figures \ref{fig:n_part1}--\ref{fig:n_part7}.
After building the primitives of the trace monoid, that is, the alphabet of tasks (\ref{eq:alphabet}) and the dependency relation (\ref{eq:formulaD}),
we define the pseudo-code computing the value of the integral (\ref{eq:88}), presented in Tables \ref{tab:algorithm1}--\ref{tab:algorithm3}, which we have split into three parts to facilitate reading.
The dependencies in this record of the algorithm determine only the sequence of operations in one string representing the desired trace.
The alphabet of tasks $\Sigma$ (\ref{eq:alphabet}), the dependency relation $D$ (\ref{eq:formulaD}), and the trace defined by the pseudo-code (Tables \ref{tab:algorithm1}--\ref{tab:algorithm3}) allow us to compute the Diekert dependency graph, which is convenient for the correct and effective scheduling of tasks in a heterogeneous computing environment.
\lstset
{
basicstyle=\fontsize{9}{11}\selectfont\ttfamily,
numbers=left,
stepnumber=1,
showstringspaces=false,
tabsize=1,
breaklines=true,
breakatwhitespace=false,
}
\begin{table}[htp!]
\centering
\begin{tabular}{c}
\noindent\begin{minipage}{1.05\linewidth}
\begin{lstlisting}[escapeinside={(*}{*)}, frame = single, framexleftmargin=20pt]
(*\textbf{BEGIN}*)
//loop over elements
(*\textbf{FOREACH}*) (*$\{\alpha::=(k,l,m)\} \in \mathcal{K}^\Delta$*)
//compute local element matrix
element_matrix = zeros(*$\left((p+1)^3,(p+1)^3\right)$*)
local_matrix = zeros(*$\left((p+1)^3,(p+1)^3,P\right)$*)
local_C_matrix = zeros((*$P_y,P_z,p+1,p+1,P_x$*))
element_C_matrix = zeros((*$P_y,P_z,p+1,p+1$*))
local_D_matrix = zeros((*$P_z,p+1,p+1,p+1,p+1,P_x$*))
element_D_matrix = zeros((*$P_z,p+1,p+1,p+1,p+1$*))
//loop over quadrature points
(*\textbf{FOR}*) (*$n_x$*)=1,(*$P_x$*)
1D_matrix = zeros((*$p+1$*))
//compute 1D functions
(*\textbf{FOR}*) (*$r$*)=0,(*$p$*)
(*$t_{\alpha;1}^{p;k;n}$*): 1D_matrix((*$r$*)) = compute recursive (* $\left( B_{k+r;p}(x^n_1) \right)$ *)
(*\textbf{ENDFOR}*)
(*\textbf{FOR}*) (*$n_y$*)=1,(*$P_y$*)
(*\textbf{FOR}*) (*$n_z$*)=1,(*$P_z$*)
(*$s_{\alpha}^{p;n}$*): (*$c=J(x^n)$*)
//compute product of two functions
(*\textbf{FOREACH}*) (*$ \beta = (a,:,:) \in \mathcal{K}^\Delta_\alpha $*)
(*$i$*) = index_in_local_matrix((*$a,:,:$*))
(*\textbf{FOREACH}*) (*$ \gamma = (d,:,:) \in \mathcal{K}^\Delta_\alpha$*)
(*$j$*) = index_in_local_matrix((*$d,:,:$*))
(*$t_{\alpha;1}^{p;\beta,\gamma;n}$*): local_C_matrix((*$n_y,n_z,i,j,n_x$*)) =
= 1D_matrix((*$a$*)) * 1D_matrix((*$d$*)) * c
(*\textbf{ENDFOR}*)
(*\textbf{ENDFOR}*)
(*\textbf{ENDFOR}*)
(*\textbf{ENDFOR}*)
(*\textbf{ENDFOR}*)
//sum local components from each quadrature point
(*\textbf{FOREACH}*) (*$ \beta = (a,:,:) \in \mathcal{K}^\Delta_\alpha $*)
(*$i$*) = index_in_local_matrix((*$a,:,:$*))
(*\textbf{FOREACH}*) (*$ \gamma = (d,:,:) \in \mathcal{K}^\Delta_\alpha$*)
(*$j$*) = index_in_local_matrix((*$d,:,:$*))
(*$s_{\alpha;1}^{p;\beta,\gamma}$*): element_C_matrix((*$n_y,n_z,i,j$*)) =
= reduction(local_C_matrix((*$n_y,n_z,i,j,:$*)),+)
(*\textbf{ENDFOR}*)
(*\textbf{ENDFOR}*)
\end{lstlisting}
\end{minipage}
\end{tabular}
\caption{The algorithm generating sample string of tasks representing sum factorization for the Gram matrix.
Part 1.}
\label{tab:algorithm1}
\end{table}
\begin{table}[htp!]
\centering
\begin{tabular}{c}
\noindent\begin{minipage}{1.05\linewidth}
\begin{lstlisting}[escapeinside={(*}{*)}, frame = single, framexleftmargin=20pt]
//loop over quadrature points
(*\textbf{FOR}*) (*$n_y$*)=1,(*$P_y$*)
1D_matrix = zeros((*$p+1$*))
//compute 1D functions
(*\textbf{FOR}*) (*$r$*)=0,(*$p$*)
(*$t_{\alpha;2}^{p;k;n}$*): 1D_matrix((*$r$*)) = compute recursive (* $\left( B_{k+r;p}(x^n_2) \right)$ *)
(*\textbf{ENDFOR}*)
(*\textbf{FOR}*) (*$n_z$*)=1,(*$P_z$*)
//compute product of two functions
(*\textbf{FOREACH}*) (*$ \beta = (a,b,:) \in \mathcal{K}^\Delta_\alpha $*)
(*$[i_1,i_2]$*) = index_in_local_matrix((*$a,b,:$*))
(*\textbf{FOREACH}*) (*$ \gamma = (d,e,:) \in \mathcal{K}^\Delta_\alpha$*)
(*$[j_1,j_2]$*) = index_in_local_matrix((*$d,e,:$*))
(*$t_{\alpha;2}^{p;\beta,\gamma;n}$*): local_D_matrix((*$n_z,i_1,i_2,j_1,j_2,n_y$*)) =
= 1D_matrix((*$b$*)) * 1D_matrix((*$e$*)) *
* element_C_matrix((*$n_y,n_z,i_1,j_1$*))
(*\textbf{ENDFOR}*)
(*\textbf{ENDFOR}*)
(*\textbf{ENDFOR}*)
(*\textbf{ENDFOR}*)
//sum local components from each quadrature point
(*\textbf{FOREACH}*) (*$ \beta = (a,b,:) \in \mathcal{K}^\Delta_\alpha $*)
(*$[i_1,i_2]$*) = index_in_local_matrix((*$a,b,:$*))
(*\textbf{FOREACH}*) (*$ \gamma = (d,e,:) \in \mathcal{K}^\Delta_\alpha$*)
(*$[j_1,j_2]$*) = index_in_local_matrix((*$d,e,:$*))
(*$s_{\alpha;2}^{p;\beta,\gamma}$*): element_D_matrix((*$n_z,i_1,i_2,j_1,j_2$*)) =
= reduction(local_D_matrix((*$n_z,i_1,i_2,j_1,j_2,:$*)),+)
(*\textbf{ENDFOR}*)
(*\textbf{ENDFOR}*)
\end{lstlisting}
\end{minipage}
\end{tabular}
\caption{The algorithm generating sample string of tasks representing sum factorization for the Gram matrix.
Part 2.}
\label{tab:algorithm2}
\end{table}
\begin{table}[htp!]
\centering
\begin{tabular}{c}
\noindent\begin{minipage}{1.05\linewidth}
\begin{lstlisting}[escapeinside={(*}{*)}, frame = single, framexleftmargin=20pt]
//loop over quadrature points
(*\textbf{FOR}*) (*$n_z$*)=1,(*$P_z$*)
1D_matrix = zeros((*$p+1$*))
//compute 1D functions
(*\textbf{FOR}*) (*$r$*)=0,(*$p$*)
(*$t_{\alpha;3}^{p;k;n}$*): 1D_matrix((*$r$*)) = compute recursive (* $\left( B_{k+r;p}(x^n_3) \right)$ *)
(*\textbf{ENDFOR}*)
//compute product of two functions
(*\textbf{FOREACH}*) (*$ \beta = (a,b,c) \in \mathcal{K}^\Delta_\alpha $*)
(*$[i;i_1,i_2,i_3]$*) = index_in_local_matrix((*$a,b,c$*))
(*\textbf{FOREACH}*) (*$ \gamma = (d,e,f) \in \mathcal{K}^\Delta_\alpha$*)
(*$[j;j_1,j_2,j_3]$*) = index_in_local_matrix((*$d,e,f$*))
(*$t_{\alpha;3}^{p;\beta,\gamma;n}$*): local_matrix((*$i,j,n_z$*)) =
= 1D_matrix((*$c$*)) * 1D_matrix((*$f$*)) *
* element_D_matrix((*$n_z,i_1,i_2,j_1,j_2$*))
(*\textbf{ENDFOR}*)
(*\textbf{ENDFOR}*)
(*\textbf{ENDFOR}*)
//sum local components from each quadrature point
(*\textbf{FOREACH}*) (*$ \beta = (a,b,c) \in \mathcal{K}^\Delta_\alpha $*)
(*$[i;i_1,i_2,i_3]$*) = index_in_local_matrix((*$a,b,c$*))
(*\textbf{FOREACH}*) (*$ \gamma = (d,e,f) \in \mathcal{K}^\Delta_\alpha$*)
(*$[j;j_1,j_2,j_3]$*) = index_in_local_matrix((*$d,e,f$*))
(*$s_{\alpha;3}^{p;\beta,\gamma}$*): element_matrix((*$i,j$*)) =
= reduction(local_matrix((*$i,j,n_z$*)),+)
(*\textbf{ENDFOR}*)
(*\textbf{ENDFOR}*)
//insert local matrices into global ones
insert_local_element_2_global(element_matrix,(*$\alpha$*))
(*\textbf{ENDFOR}*)
(*\textbf{END}*)
\end{lstlisting}
\end{minipage}
\end{tabular}
\caption{The algorithm generating sample string of tasks representing sum factorization for the Gram matrix.
Part 3.}
\label{tab:algorithm3}
\end{table}
\subsection{Application of trace theory to sum factorization}
\label{sec:porder}
This section describes the methodology for creating the Diekert Dependency Graph (DG) and the Foata Normal Form (FNF), applied to the sum factorization integration method of $p$-order B-spline basis functions.
DG presents all computational tasks performed in computation and dependencies between them.
Within the DG and FNF, we can distinguish Foata classes, which facilitate the practical implementation of concurrent computations.
For a given polynomial degree $p$, there are $(p+1)^3$ basis functions with non-empty support over each cubic element $E_\alpha$, with $\alpha \in \mathcal{K}^\Delta$.
Therefore, for every $E_\alpha$, we need to construct a Gram element matrix of size $(p+1)^3\times(p+1)^3$, according to equation \eqref{eq:88}.
However, due to the symmetry of the Gram matrix, it is not necessary to compute the full element matrix.
Indeed, we only need to compute $\left((p+1)^3 + (p+1)^6\right)/2$ matrix entries; for instance, for $p=2$ the local matrix has $27 \times 27 = 729$ entries, of which only $378$ are distinct.
To exemplify the cost associated with the computation of a single entry in the Gram matrix, let us assume that a quadrature of $P=P_xP_yP_z$ points per element is employed.
Let us also denote by $x^1,x^2,\dots,x^P$ the corresponding quadrature points, where $x^n = (x_1^n, x_2^n, x_3^n)$, for $n=1,\dots,P$.
In the procedure, for each quadrature point we start by computing $(p+1)$ 1D functions in each direction ($3p+3$ functions in total) employing the Cox--de--Boor formulae (classes $0,1,\dots,p$ in \mbox{Figure \ref{fig:n_part1}}).
This completes all tasks of type $t_{\alpha;d}^{0;r;n}$, $t_{\alpha;d}^{1;r;n}$, up to $t_{\alpha;d}^{p;r;n}$ (see Table \ref{table66}).
Within the class $p$, we include one extra task computing $s_{\alpha}^{p;n}$.
The class $p+1$ (\mbox{Figure \ref{fig:n_part3}}) completes all tasks of the type $t_{\alpha;1}^{p;\beta,\gamma;n}$ (see Table \ref{table66}).
The concurrently computed components can then be summed to evaluate the scalar products of the 1D basis functions over the element $E_{\alpha}$, $(k,l,m)=\alpha \in \mathcal{K}^\Delta$, which completes all tasks of the type $s_{\alpha;1}^{p;\beta,\gamma}$ (class $p+2$; see Table \ref{table66}).
Next, we construct two pairs of classes, $p+3$ and $p+4$ (Figures \ref{fig:n_part4}, \ref{fig:n_part5}), and $p+5$ and $p+6$ (Figures \ref{fig:n_part6}, \ref{fig:n_part7}), in a similar manner to classes $p+1$ and $p+2$.
Finally, we present all tasks in Tables \ref{table66} and \ref{table67}.
\begin{table}[ht!]
\centering
\begin{tabular}{|m{1.2cm}|m{7cm}|m{3.5cm}|}
\hline
$ s_{\alpha;3}^{p;\beta} $ & $ B_{\beta;p}(x^n) = B_{m;p}(x_{3}^{n}) \, B_{c;p}(x_{3}^{n})$ & \vtop{\hbox{}\hbox{}\hbox{$n \in \{1,2,\dots,P\}$,}\hbox{$\beta=\{k,l,m\} \in \mathcal{K}^\Delta_\alpha$} \hbox{$\gamma=\{a,b,c\} \in \mathcal{K}^\Delta_\alpha$}\hbox{$ I(\beta) \geq I(\gamma)$ }} \\
\hline
$ t_{\alpha;3}^{p;\beta;n} $ & $ B_{\beta;p}(x^n) = B_{m;p}(x_{3}^{n}) \, B_{c;p}(x_{3}^{n})$ & \vtop{\hbox{}\hbox{}\hbox{$n \in \{1,2,\dots,P\}$,}\hbox{$\beta=\{k,l,m\} \in \mathcal{K}^\Delta_\alpha$} \hbox{$\gamma=\{a,b,c\} \in \mathcal{K}^\Delta_\alpha$}\hbox{$ I(\beta) \geq I(\gamma)$ }} \\
\hline
$ s_{\alpha;2}^{p;\beta} $ & $ B_{\beta;p}(x^n) = B_{l;p}(x_{2}^{n}) \, B_{b;p}(x_{2}^{n})$ & \vtop{\hbox{}\hbox{}\hbox{$n \in \{1,2,\dots,P\}$,}\hbox{$\beta=\{k,l,m\} \in \mathcal{K}^\Delta_\alpha$} \hbox{$\gamma=\{a,b,c\} \in \mathcal{K}^\Delta_\alpha$}\hbox{$ I(\beta) \geq I(\gamma)$ }} \\
\hline
$ t_{\alpha;2}^{p;\beta;n} $ & $ B_{\beta;p}(x^n) = B_{l;p}(x_{2}^{n}) \, B_{b;p}(x_{2}^{n})$ & \vtop{\hbox{}\hbox{}\hbox{$n \in \{1,2,\dots,P\}$,}\hbox{$\beta=\{k,l,m\} \in \mathcal{K}^\Delta_\alpha$} \hbox{$\gamma=\{a,b,c\} \in \mathcal{K}^\Delta_\alpha$}\hbox{$ I(\beta) \geq I(\gamma)$ }} \\
\hline
\end{tabular}
\caption{Computational tasks for performing computations of sum factorization algorithm of 3D order $p$ basis functions over element $E_{\alpha}$, $(k,l,m)=\alpha \in \mathcal{K}^\Delta$.
Part 1.}
\label{table66}
\end{table}
\begin{table}[ht!]
\centering
\begin{tabular}{|m{1.2cm}|m{7cm}|m{3.5cm}|}
\hline
$ s_{\alpha;1}^{p;\beta} $ & $ B_{\beta;p}(x^n) = B_{k;p}(x_{1}^{n}) \, B_{a;p}(x_{1}^{n})$ & \vtop{\hbox{}\hbox{}\hbox{$n \in \{1,2,\dots,P\}$,}\hbox{$\beta=\{k,l,m\} \in \mathcal{K}^\Delta_\alpha$} \hbox{$\gamma=\{a,b,c\} \in \mathcal{K}^\Delta_\alpha$}\hbox{$ I(\beta) \geq I(\gamma)$ }} \\
\hline
$ t_{\alpha;1}^{p;\beta;n} $ & $ B_{\beta;p}(x^n) = B_{k;p}(x_{1}^{n}) \, B_{a;p}(x_{1}^{n})$ & \vtop{\hbox{}\hbox{}\hbox{$n \in \{1,2,\dots,P\}$,}\hbox{$\beta=\{k,l,m\} \in \mathcal{K}^\Delta_\alpha$} \hbox{$\gamma=\{a,b,c\} \in \mathcal{K}^\Delta_\alpha$}\hbox{$ I(\beta) \geq I(\gamma)$ }} \\
\hline
$ t_{\alpha;d}^{p;r;n} $ & $ B_{r;p}(x_{d}^{n}) $ & \vtop{\hbox{}\hbox{}\hbox{$d \in \{1,2,3\}$,}\hbox{$n \in \{1,2,\dots,P\}$ }}\\
\hline
\vdots&\vdots&\vdots\\
\hline
$ t_{\alpha;d}^{1;r;n} $ & $ B_{r;1}(x_{d}^{n}) $ & \vtop{\hbox{}\hbox{}\hbox{$d \in \{1,2,3\}$,}\hbox{$n \in \{1,2,\dots,P\}$ }}\\
\hline
$ t_{\alpha;d}^{0;r;n} $ & $ B_{r;0}(x_{d}^{n}) $ & \vtop{\hbox{}\hbox{}\hbox{$d \in \{1,2,3\}$,}\hbox{$n \in \{1,2,\dots,P\}$ }}\\
\hline
\end{tabular}
\caption{Computational tasks for performing computations of sum factorization algorithm of 3D order $p$ basis functions over element $E_{\alpha}$, $(k,l,m)=\alpha \in \mathcal{K}^\Delta$.
Part 2.}
\label{table67}
\end{table}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.95\textwidth]{Fig2.pdf}
\caption{Relationships between classes $0$ to $p$ for $p$-order functions.
Tasks belonging to one class correspond to going through one iteration of the Cox--de--Boor recursion formulae (\ref{eq:cox1}, \ref{eq:cox2}) for each of the three dimensions of the model.
The dimensions are differentiated by color.}
\label{fig:n_part1}
\end{figure}
\begin{figure}[htp!]
\centering
\includegraphics[width=1.0\textwidth]{Fig3.pdf}
\caption{Relationships between classes $p$ and $p+1$ for $p$-order functions.
Each task in class $p+1$ corresponds to the dot product of two 1D B-spline functions, so it depends on the two tasks in class $p$.
To maintain the transparency of the chart, the relationships between the second and third classes are marked with a border type.}
\label{fig:n_part2}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=1.0\textwidth]{Fig4.pdf}
\caption{Relationships between classes $p+1$ and $p+2$ for $p$-order functions.
Each task from class $p+2$ corresponds to the approximation of the function value using Gaussian quadrature; therefore it depends on $P$ tasks from class $p+1$.
The task $s$ depends on all tasks $t$ with regard to its distribution in subsequent sheets $1, 2, \dots, P$.}
\label{fig:n_part3}
\end{figure}
\begin{figure}[htp!]
\centering
\includegraphics[width=1.0\textwidth]{Fig5.pdf}
\caption{Relationships between classes $p$, $p+2$, and $p+3$ for $p$-order functions.
Each task in class $p+3$ corresponds to the dot product of two 1D B-spline functions and a sum, so it depends on two tasks in class $p$ and some tasks from class $p+2$.
To maintain the transparency of the chart, the relationships between the second and third classes are marked with a border type.}
\label{fig:n_part4}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=1.0\textwidth]{Fig6.pdf}
\caption{Relationships between classes $p+3$ and $p+4$ for $p$-order functions.
Each task from class $p+4$ corresponds to the approximation of the function value using Gaussian quadrature; therefore it depends on $P$ tasks from class $p+3$.
The task $s$ depends on all tasks $t$ with regard to its distribution in subsequent sheets $1, 2, \dots, P$.}
\label{fig:n_part5}
\end{figure}
\begin{figure}[htp!]
\centering
\includegraphics[width=1.0\textwidth]{Fig7.pdf}
\caption{Relationships between classes $p$, $p+4$, and $p+5$ for $p$-order functions.
Each task in class $p+5$ corresponds to the dot product of two 1D B-spline functions and a sum, so it depends on the two tasks in class $p$ and some tasks from class $p+4$.
To maintain the transparency of the chart, the relationships between the second and third classes are marked with a border type.}
\label{fig:n_part6}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=1.0\textwidth]{Fig8.pdf}
\caption{Relationships between classes $p+5$ and $p+6$ for $p$-order functions.
Each task from class $p+6$ corresponds to the approximation of the function value using Gaussian quadrature; therefore it depends on $P$ tasks from class $p+5$.
The task $s$ depends on all tasks $t$ with regard to its distribution in subsequent sheets $1, 2, \dots, P$.}
\label{fig:n_part7}
\end{figure}
\subsection{Scheduling algorithm}
\label{sec:scheduling_algorithm}
To obtain a similar scheduling quality to the classical algorithm on massively parallel shared-memory machines, we employ the Foata-Normal-Form (FNF) \cite{traces}.
The Diekert dependency graphs (see Section \ref{sec:porder}) show the consecutive Foata classes for each considered case of sum factorization.
Within a given Foata class, tasks can be executed in any order.
Completion of the entire previous Foata class is a sufficient condition to begin the computation of the next one.
The proposed strategy ensures no deadlocks, high-quality scheduling, and no need for intra-class synchronization.
Based on Figures \ref{fig:n_part1}, \ref{fig:n_part2} and \ref{fig:n_part3}, we can describe a general procedure for creating subsequent Foata classes, containing the following tasks:
\begin{itemize}
\item Class $m$, where $m \in \{0,\dots,p-1\}$
\begin{equation}
\{ t_{\alpha;d}^{m;r;n} ;
d\in \{1,2,3\}, \, n \in \{ 1,2,\dots,P\} \},
\label{classm}
\end{equation}
\item Class $p$
\begin{equation}
\{ t_{\alpha;d}^{p;r;n} , s_{\alpha}^{p,n} ;
d\in \{1,2,3\}, \, n \in \{ 1,2,\dots,P\} \},
\label{classp}
\end{equation}
\item Class $p+1$
\begin{equation}
\{ t_{\alpha;1}^{p;\beta,\gamma;n};
(\beta,\gamma) \in \mathcal{K}^\Delta_\alpha \times \mathcal{K}^\Delta_\alpha, \, I(\beta) \geq I(\gamma), \, n \in \{1,2,\dots,P\} \},
\label{classp1}
\end{equation}
\item Class $p+2$
\begin{equation}
\{ s_{\alpha;1}^{p;\beta,\gamma};
(\beta,\gamma) \in \mathcal{K}^\Delta_\alpha \times \mathcal{K}^\Delta_\alpha, \, I(\beta) \geq I(\gamma), \, n \in \{1,2,\dots,P\} \},
\label{classp2}
\end{equation}
\item Class $p+3$
\begin{equation}
\{ t_{\alpha;2}^{p;\beta,\gamma;n};
(\beta,\gamma) \in \mathcal{K}^\Delta_\alpha \times \mathcal{K}^\Delta_\alpha, \, I(\beta) \geq I(\gamma), \, n \in \{1,2,\dots,P\} \},
\label{classp3}
\end{equation}
\item Class $p+4$
\begin{equation}
\{ s_{\alpha;2}^{p;\beta,\gamma};
(\beta,\gamma) \in \mathcal{K}^\Delta_\alpha \times \mathcal{K}^\Delta_\alpha, \, I(\beta) \geq I(\gamma), \, n \in \{1,2,\dots,P\} \},
\label{classp4}
\end{equation}
\item Class $p+5$
\begin{equation}
\{ t_{\alpha;3}^{p;\beta,\gamma;n};
(\beta,\gamma) \in \mathcal{K}^\Delta_\alpha \times \mathcal{K}^\Delta_\alpha, \, I(\beta) \geq I(\gamma), \, n \in \{1,2,\dots,P\} \},
\label{classp5}
\end{equation}
\item Class $p+6$
\begin{equation}
\{ s_{\alpha;3}^{p;\beta,\gamma};
(\beta,\gamma) \in \mathcal{K}^\Delta_\alpha \times \mathcal{K}^\Delta_\alpha, \, I(\beta) \geq I(\gamma), \, n \in \{1,2,\dots,P\} \},
\label{classp6}
\end{equation}
\end{itemize}
The first Foata classes (\ref{classm}, \ref{classp}) are responsible for evaluating the values of the 1D $p$-order basis functions over the element $E_\alpha$ at the Gaussian quadrature points, using the recursive Cox--de--Boor formulae (\ref{eq:cox1}, \ref{eq:cox2}), as well as the Jacobian (\ref{classp}).
These are followed by Foata classes of two kinds:
\begin{enumerate}
\item Computational tasks (\ref{classp1}, \ref{classp3}, \ref{classp5}) evaluating the products of two 1D $p$-order basis functions (and, from class $p+3$ on, the current buffer) over the element $E_\alpha$ at the Gaussian quadrature points,
\item computational tasks (\ref{classp2}, \ref{classp4}, \ref{classp6}) summing these partial products over the quadrature points into buffers.
\end{enumerate}
All the tasks mentioned above are performed on a homogeneous architecture.
Thus, we can expect near-identical execution time for each of them inside a particular Foata class.
Consequently, all tasks from the particular Foata class can be effectively scheduled as a common bag.
Over each element $E_\alpha$ we repeat the same procedure of invoking tasks using parameters associated with this element.
We invoke Foata classes starting from the Foata class 0, and each time wait for all tasks to be completed before invoking the next Foata class.
Using the simplified FNF-based scheduling method proposed above results, despite the lack of a theoretical proof, in near-optimal performance in practical applications while maintaining a relatively simple implementation.
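A minimal sketch of this class-by-class execution model (illustrative Python, with threads standing in for the shared-memory workers; the task callables and their data wiring are assumptions on our part):
\begin{lstlisting}[language=Python, frame=single]
from concurrent.futures import ThreadPoolExecutor

def run_foata_schedule(foata_classes, n_workers=12):
    # foata_classes: list of task bags, ordered class 0, 1, ..., p+6.
    # Tasks inside one bag are independent and may run in any order;
    # waiting on all futures acts as the barrier between classes.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for bag in foata_classes:
            futures = [pool.submit(task) for task in bag]
            for f in futures:
                f.result()  # next class starts only after completion
\end{lstlisting}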
\section{Numerical results}
\label{sec:numres}
Now, we compare the computational performance of parallel integration using the classical algorithm and sum factorization.
In both cases, the implementation was done in Fortran 2003, using OpenMP for loop parallelization.
The measurements concern the execution time of the sequential integration algorithm executed on a single CPU core and of the concurrent integration algorithm run on a shared-memory CPU with 12 cores.
Computations were performed on a Banach Linux workstation equipped with an AMD Ryzen 9 3900X processor and 64GB RAM.
It is worth noting that the CPU, despite having a 3.8 GHz base clock speed and a 4.6 GHz boost, ran at a constant 4.0 GHz in the multi-threaded (12 cores) workload and at 4.1 GHz in the single-threaded (1 core) workload.
The computations were performed using code compiled with ifort with the -O2 optimization level.
In Sections \ref{sec:inside_element}, \ref{sec:over_element}, and \ref{sec:amdahl} we present the experimental results.
In Section \ref{sec:discussion} we discuss obtained results.
\subsection{Inside element scalability}
\label{sec:inside_element}
We first performed computations with parallelization inside each element and sequential looping over elements.
In this case, we consider a mesh of $20^3$ elements.
The comparison of the scalability for different polynomial orders is presented in Figures \ref{fig:scal_class} and \ref{fig:scal_sum}.
Figures \ref{fig:spd_class} and \ref{fig:spd_sum} represent speedup.
Finally, in Figures \ref{fig:eff_class} and \ref{fig:eff_sum} we present the efficiency for the classical integration algorithm and sum factorization, respectively.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.95\textwidth]{scal_class.pdf}
\caption{Strong scaling time for classical integration algorithm.
Computations performed on $20^3$ elements mesh, different polynomial orders.
Parallelism inside element.}
\label{fig:scal_class}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.95\textwidth]{scal_sum.pdf}
\caption{Strong scaling time for sum factorization algorithm.
Computations performed on $20^3$ elements mesh, different polynomial orders.
Parallelism inside element.}
\label{fig:scal_sum}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.95\textwidth]{spd_class.pdf}
\caption{Strong scaling speedup for classical integration algorithm.
Computations performed on $20^3$ elements mesh, different polynomial orders.
Parallelism inside element.}
\label{fig:spd_class}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.95\textwidth]{spd_sum.pdf}
\caption{Strong scaling speedup for sum factorization algorithm.
Computations performed on $20^3$ elements mesh, different polynomial orders.
Parallelism inside element.}
\label{fig:spd_sum}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.95\textwidth]{eff_class.pdf}
\caption{Strong scaling efficiency for classical integration algorithm.
Computations performed on $20^3$ elements mesh, different polynomial orders.
Parallelism inside element.}
\label{fig:eff_class}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.95\textwidth]{eff_sum.pdf}
\caption{Strong scaling efficiency for sum factorization algorithm.
Computations performed on $20^3$ elements mesh, different polynomial orders.
Parallelism inside element.}
\label{fig:eff_sum}
\end{figure}
\newpage
\subsection{Over element scalability}
\label{sec:over_element}
As a second experiment, we performed computations with sequential computations inside each element and parallel looping over elements.
In this case, we used a mesh of $30^3$ elements.
The comparison of scaling for different polynomial orders is presented in Figures \ref{fig:scal_class_elem} and \ref{fig:scal_sum_elem}.
Figures \ref{fig:spd_class_elem} and \ref{fig:spd_sum_elem} represent the speedup.
Finally, in Figures \ref{fig:eff_class_elem} and \ref{fig:eff_sum_elem} we present the efficiency for the classical integration algorithm and sum factorization respectively.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.95\textwidth]{scal_class_elem.pdf}
\caption{Strong scaling time for classical integration algorithm.
Computations performed on $30^3$ elements mesh, different polynomial orders.
Parallelism over all elements.}
\label{fig:scal_class_elem}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.95\textwidth]{scal_sum_elem.pdf}
\caption{Strong scaling time for sum factorization algorithm.
Computations performed on $30^3$ elements mesh, different polynomial orders.
Parallelism over all elements.}
\label{fig:scal_sum_elem}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.95\textwidth]{spd_class_elem.pdf}
\caption{Strong scaling speedup for classical integration algorithm.
Computations performed on $30^3$ elements mesh, different polynomial orders.
Parallelism over all elements.}
\label{fig:spd_class_elem}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.95\textwidth]{spd_sum_elem.pdf}
\caption{Strong scaling speedup for sum factorization algorithm.
Computations performed on $30^3$ elements mesh, different polynomial orders.
Parallelism over all elements.}
\label{fig:spd_sum_elem}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.95\textwidth]{eff_class_elem.pdf}
\caption{Strong scaling efficiency for classical integration algorithm.
Computations performed on $30^3$ elements mesh, different polynomial orders.
Parallelism over all elements.}
\label{fig:eff_class_elem}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.95\textwidth]{eff_sum_elem.pdf}
\caption{Strong scaling efficiency for sum factorization algorithm.
Computations performed on $30^3$ elements mesh, different polynomial orders.
Parallelism over all elements.}
\label{fig:eff_sum_elem}
\end{figure}
\newpage
\subsection{Speedup limits}
\label{sec:amdahl}
As a final experiment, we estimate the maximum speedup for both parallelization schemes (see Sections \ref{sec:inside_element} and \ref{sec:over_element}), as well as their combination.
When considering integration inside a single element, the problem size is fixed regardless of the mesh size.
Amdahl's law is appropriate for this kind of scenario.
Therefore, to find the fraction $\mathcal{P}$ of the algorithm that benefits from parallel speedup, we invoke Amdahl's equation:
\begin{equation}
\mathcal{S}(\nu) = \frac{1}{ (1-\mathcal{P}) + \frac{\mathcal{P}}{\nu} }
\label{eq:amdahl}
\end{equation}
where $\mathcal{P}$ denotes the fraction of the algorithm that benefits from the parallel speedup,
$\nu$ is the number of threads, and $\mathcal{S}(\nu)$ is the measured speedup when using $\nu$ threads.
From the previous equation, we can derive the value of $\mathcal{P}$ and the speedup limit, which are explicitly given by:
\begin{equation}
\mathcal{P} = \frac{ \frac{\nu}{\mathcal{S}(\nu)} -\nu }{ 1 - \nu},
\label{eq:p-compute}
\end{equation}
and
\begin{equation}
\mathcal{S}(\infty) = \lim\limits_{\nu \rightarrow \infty} \frac{1}{ (1-\mathcal{P}) + \frac{\mathcal{P}}{\nu}} = \frac{1}{1-\mathcal{P}},
\label{eq:limit}
\end{equation}
respectively.
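As a worked example (a minimal sketch; the measured values are taken from the tables below), the inside-element speedup of the classical method for $p=5$ yields:
\begin{lstlisting}[language=Python, frame=single]
def amdahl(nu, S_nu):
    # Parallel fraction P from the measured speedup S(nu) on nu
    # threads, and the asymptotic limit S(inf) = 1 / (1 - P).
    P = (nu / S_nu - nu) / (1 - nu)
    return P, 1.0 / (1.0 - P)

P, S_inf = amdahl(12, 6.53)  # classical method, p = 5, inside element
# P ~ 0.92 and S_inf ~ 13.1, matching the corresponding table row.
\end{lstlisting}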
For different values of $p$, we consider the maximum experimental speedup observed from the numerical results for both methods.
Next, using equations (\ref{eq:p-compute}) and (\ref{eq:limit}), we compute the fraction of the algorithm that benefits from the parallel speedup and the theoretical maximum speedup.
Finally, we estimated the combined maximum speedup by assuming two layers of parallelism.
That is, one layer representing the scheme of Section~\ref{sec:inside_element}, and another representing the scheme of Section~\ref{sec:over_element}.
The results for the classical integration algorithm are presented in Table~\ref{tab:speedup_classic}, while the results for sum factorization in Table~\ref{tab:speedup_sumfact}.
\begin{table}[ht!]
\centering
\begin{tabular}{c||c|c|c|c||c|c|c|c||c}
$p$ &
$\nu_i$ & $\mathcal{S}_{i}(\nu)$ & $\mathcal{P}_{i}$ & $\mathcal{S}_{i}(\infty)$ &
$\nu_e$ & $\mathcal{S}_{e}(\nu)$ & $\mathcal{P}_{e}$ & $\mathcal{S}_{e}(\infty)$ &
$\mathcal{S}_{c}(\infty)$\\
\hline
1 & 8 & 1.38 & 0.31 & 1.46 & 3 & 2.5 & 0.9 & 10.00 & 14.59 \\
2 & 8 & 2.53 & 0.69 & 3.24 & 6 & 5.4 & 0.98 & 45.00 & 145.69 \\
3 & 12 & 3.85 & 0.81 & 5.20 & 9 & 7.8 & 0.98 & 52.00 & 270.21 \\
4 & 11 & 5.27 & 0.89 & 9.20 & 10 & 7.8 & 0.97 & 31.91 & 293.47 \\
5 & 12 & 6.53 & 0.92 & 13.13 & 12 & 11.29 & 0.99 & 174.92 & 2296.93 \\
6 & 12 & 7.15 & 0.94 & 16.22 & 12 & 11.15 & 0.99 & 144.29 & 2339.94 \\
7 & 12 & 7.11 & 0.94 & 15.99 & 12 & 10.75 & 0.99 & 94.60 & 1513.02 \\
8 & 12 & 8.12 & 0.96 & 23.02 & 12 & 10.88 & 0.99 & 106.86 & 2459.92 \\
9 & 12 & 8.24 & 0.96 & 24.11 & 12 & 10.44 & 0.99 & 73.62 & 1774.60 \\
\end{tabular}
\caption{Classical integration method.
Subscript $i$ stands for ``inside element'', $e$ for ``over all elements'', and $c$ for ``combined''.}
\label{tab:speedup_classic}
\end{table}
\begin{table}[ht!]
\centering
\begin{tabular}{c||c|c|c|c||c|c|c|c||c}
$p$ &
$\nu_i$ & $\mathcal{S}_{i}(\nu)$ & $\mathcal{P}_{i}$ & $\mathcal{S}_{i}(\infty)$ &
$\nu_e$ & $\mathcal{S}_{e}(\nu)$ & $\mathcal{P}_{e}$ & $S_{e}(\infty)$ &
$\mathcal{S}_{c}(\infty)$\\
\hline
1 & 1 & 1 & 0 & 1 & 2 & 1.5 & 0.67 & 3 & 3.00 \\
2 & 1 & 1 & 0 & 1 & 4 & 2.9 & 0.87 & 7.91 & 7.91 \\
3 & 1 & 1 & 0 & 1 & 4 & 2.9 & 0.87 & 7.91 & 7.91 \\
4 & 12 & 1.07 & 0.07 & 1.08 & 4 & 3.3 & 0.93 & 14.14 & 15.23 \\
5 & 10 & 1.11 & 0.11 & 1.12 & 4 & 3.5 & 0.95 & 21 & 23.60 \\
6 & 10 & 1.17 & 0.16 & 1.19 & 4 & 3.5 & 0.95 & 21 & 25.04 \\
7 & 12 & 1.77 & 0.47 & 1.9 & 4 & 3.2 & 0.92 & 12 & 22.84 \\
8 & 10 & 1.26 & 0.23 & 1.3 & 4 & 3.4 & 0.94 & 17 & 22.06 \\
9 & 11 & 1.36 & 0.29 & 1.41 & 4 & 3.5 & 0.95 & 21 & 29.63 \\
\end{tabular}
\caption{Sum factorization.
Subscript $i$ stands for ``inside element'', $e$ for ``over all elements'', and $c$ for ``combined''.}
\label{tab:speedup_sumfact}
\end{table}
\newpage
\subsection{Discussion of the numerical results}
\label{sec:discussion}
From Figures \ref{fig:spd_class} and \ref{fig:spd_class_elem} we can observe an outstanding speedup for the classical method in both parallelization scenarios.
Furthermore, Figures \ref{fig:eff_class} and \ref{fig:eff_class_elem} show a high efficiency of hardware utilization.
The figures also show increased parallel performance (speedup and efficiency) for higher polynomial orders $p$ of the B-spline basis functions.
Figures \ref{fig:scal_sum} and \ref{fig:spd_sum} present an unexpected behavior of sum factorization with parallel loops inside elements.
Parallel loops over all elements, presented in Figure \ref{fig:spd_sum_elem}, scale as expected only up to 4 cores; above four cores, the speedup remains at a constant level.
This corresponds with the low efficiency in multicore applications that can be seen in Figures \ref{fig:eff_sum} and \ref{fig:eff_sum_elem}.
From Tables \ref{tab:speedup_classic} and \ref{tab:speedup_sumfact}, we can observe that the theoretical maximum speedup for the classical method behaves similarly to the results presented in \cite{parallel_integration}.
In the Diekert graphs (Figures \ref{fig:n_part1}-\ref{fig:n_part7}), it can be observed that sum factorization requires many more memory synchronizations than the classical method.
We also compare computational times for the classical integration and the sum factorization in several scenarios.
We focused on $p=9$ since, theoretically, it should be the best scenario for sum factorization.
We take into consideration three scenarios for a $30^3$ mesh size:
1) single-core CPU execution,
2) shared-memory CPU computations,
3) (multiple) GPU execution.
Classical integration on a single core takes 9931.758 seconds,
the 12-core OpenMP implementation takes 951 seconds,
and the estimated GPU implementation should take 4.596 seconds.
Sum factorization on a single core takes 403.586 seconds,
the four-core OpenMP implementation takes 118.296 seconds,
and the estimated GPU implementation should take 13.62 seconds.
\section{Conclusions}
\label{sec:conclussions}
In terms of computational performance, we discussed and compared two standard methods used for integration in IGA-FEM: the classical integration method and sum factorization.
For the comparison, we considered several scenarios of performing a shared memory layer of computations on hybrid memory clusters.
First, we consider a single-core implementation as the baseline.
Then, we measure the experimental performance of two ways of parallelizing the integration in shared memory using OpenMP: parallel loops over elements and parallel loops inside elements.
In the final scenario, we estimate performance on massively parallel shared-memory machines, such as GPU, by combining maximum scalability estimates (see Section \ref{sec:amdahl}).
As expected, when assigned to a specific computational node, the sum factorization method performs better than the classical integration method.
From the numerical results with polynomial degree $p=9$, the worst-case scenario among the considered experiments, we can observe that the classical method is approximately 70 times slower than the sum factorization method in both scenarios of parallel integration in shared memory.
Even when comparing single-core sum factorization against the classical integration method parallelized on 12 CPU cores, sum factorization is still the clear winner.
When considering parallelized loops inside the elements, we observe very efficient parallelization for the classical integration method.
However, sum factorization does not parallelize as expected.
Indeed, we observe an evident loss in performance when considering more than one core.
Additionally, when considering the standard loops over elements, we observe performance gain for sum factorization only up to 4 cores in a shared memory (see Figures \ref{fig:spd_sum}, \ref{fig:spd_sum_elem}).
Finally, based on the previous work \cite{parallel_integration}, we can estimate the performance when both parallelization layers are combined on massively parallel machines, such as GPUs.
In such a case, the classical integration method parallelizes outstandingly, resulting in faster execution than sum factorization.
In other words, numerical results show that the classical integration method running on a GPU can be faster than sum factorization by one or two orders of magnitude.
A possible explanation for this small performance gain of sum factorization, or its absence in some cases, is that the method is limited by memory synchronization and memory access.
Despite its higher computational cost with respect to sum factorization, the classical method requires fewer data dependencies and synchronizations.
However, when considering machines with a low number of cores, sum factorization is the method of choice over the classical one.
In such a case, the best parallelization strategy we observe is to use 4 CPU cores in shared memory.
\vspace{0.5cm}
\noindent{\bf{Acknowledgments}}
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 777778 (MATHROCKS).
The work of SR has also been partially supported by the Chilean grant ANID Fondecyt No 3210009.
\bibliographystyle{elsarticle-num}
\section{Appendix}
\label{sec:appendix}
\subsection{Complete proof of Proposition \ref{proposition:sre2sfa}}
\label{sec:proof:sre2sfa}
\begin{proposition*}
For every symbolic regular expression $R$ there exists a symbolic finite automaton $M$ such that $\mathcal{L}(R) = \mathcal{L}(M)$.
\end{proposition*}
\begin{proof}
Except for the first case, the induction hypothesis for the other three cases is that the proposition holds for the sub-expressions of the initial expression.
\begin{figure}
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=0.95\textwidth]{sre2sfa_base.pdf}
\caption{Base case of a single predicate. $R = \psi$.}
\label{fig:sre2sfa:base}
\end{subfigure}
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=0.95\textwidth]{sre2sfa_or.pdf}
\caption{OR. $R = R_{1} + R_{2}$.}
\label{fig:sre2sfa:or}
\end{subfigure}\\
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=0.95\textwidth]{sre2sfa_seq.pdf}
\caption{Concatenation. $R = R_{1} \cdot R_{2}$.}
\label{fig:sre2sfa:seq}
\end{subfigure}
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=0.95\textwidth]{sre2sfa_iter.pdf}
\caption{Iteration. $R^{'} = R^{*}$.}
\label{fig:sre2sfa:iter}
\end{subfigure}
\caption{The four cases for constructing a SFA from a SRE}
\label{fig:sre2sfa}
\end{figure}
\textbf{Case where $R = \psi, \psi \in \Psi$.}\\
We construct a SFA as in Figure \ref{fig:sre2sfa:base}.
If $w \in \mathcal{L}(R)$, then $w$ is a single character and $w \in \llbracket \psi \rrbracket$, i.e., $\psi$ evaluates to \textsf{\footnotesize TRUE}\ for $w$.
Thus, upon seeing $w$, the SFA of Figure \ref{fig:sre2sfa:base} moves from $q^{s}$ to $q^{f}$ and since $q^{f}$ is a final state, then $w$ is accepted by this SFA.
Conversely, if a string $w$ is accepted by this SFA then it must again be a single character and $\psi$ must evaluate to \textsf{\footnotesize TRUE}\ since the SFA moved to its final state through the transition equipped with $\psi$. Thus, $w \in \llbracket \psi \rrbracket$ and $w \in \mathcal{L}(R)$.
\textbf{Case where $R = R_{1} + R_{2}$.}\\
We construct a SFA as in Figure \ref{fig:sre2sfa:or}.
If $w \in \mathcal{L}(R)$, then either $w \in \mathcal{L}(R_{1})$ or $w \in \mathcal{L}(R_{2})$ (or both).
Without loss of generality, assume $w \in \mathcal{L}(R_{1})$.
From the induction hypothesis, it also holds that $w \in \mathcal{L}(M_{R_{1}})$.
Thus, from Figure \ref{fig:sre2sfa:or}, upon reading $w$, $M_{R_{1}}$ will have reached $q_{1}^{f}$.
Therefore, $M_{R}$ will have reached $q^{f}$ through the $\epsilon$-transition connecting $q_{1}^{f}$ to $q^{f}$, and thus $w$ is accepted by $M_{R}$.
Conversely, if $w \in \mathcal{L}(M_{R})$, then the SFA $M_{R}$ of Figure \ref{fig:sre2sfa:or} must have reached $q^{f}$ and therefore also $q_{1}^{f}$ or $q_{2}^{f}$ (or both).
Assume it has reached $q_{1}^{f}$.
Then $w \in \mathcal{L}(M_{R_{1}})$ and, from the induction hypothesis $w \in \mathcal{L}(R_{1})$.
Similarly, if it has reached $q_{2}^{f}$, then $w \in \mathcal{L}(R_{2})$.
Therefore, $w \in \mathcal{L}(R_{1}) \cup \mathcal{L}(R_{2}) = \mathcal{L}(R)$.
\textbf{Case where $R = R_{1} \cdot R_{2}$.}\\
We construct a SFA as in Figure \ref{fig:sre2sfa:seq}.
If $w \in \mathcal{L}(R)$, then $w \in \mathcal{L}(R_{1}) \cdot \mathcal{L}(R_{2})$ or $w=w_{1} \cdot w_{2}$ such that $w_{1} \in \mathcal{L}(R_{1})$ and $w_{2} \in \mathcal{L}(R_{2})$.
Therefore, from the induction hypothesis, upon reading $w_{1}$, $M_{R}$ will have reached $q_{1}^{f}$ and $q_{2}^{s}$.
Upon reading the rest of $w$ ($w_{2}$), again from the induction hypothesis, $M_{R}$ will have reached $q_{2}^{f}$.
As a result, $w \in \mathcal{L}(M_{R})$.
Conversely, if $w \in \mathcal{L}(M_{R})$, $M_{R}$ will have reached $q_{2}^{f}$ upon reading $w$ and therefore will have also passed through $q_{1}^{f}$ upon reading a prefix $w_{1}$ of $w$.
Thus, $w = w_{1} \cdot w_{2}$ with $w_{1} \in \mathcal{L}(M_{R_{1}})$ and $w_{2} \in \mathcal{L}(M_{R_{2}})$.
From the induction hypothesis, it also holds that $w_{1} \in \mathcal{L}(R_{1})$ and $w_{2} \in \mathcal{L}(R_{2})$ and therefore that $w \in \mathcal{L}(R)$.
\textbf{Case where $R^{'} = R^{*}$.}\\
We construct a SFA as in Figure \ref{fig:sre2sfa:iter}.
If $w \in \mathcal{L}(R^{'})$, then $w \in (\mathcal{L}(R))^{*}$ or, equivalently, $w=w_{1} \cdot w_{2} \cdot \cdots \cdot w_{k}$ such that $w_{i} \in \mathcal{L}(R)$ for all $w_{i}$.
From the induction hypothesis and Figure \ref{fig:sre2sfa:iter}, upon reading $w_{1}$, $M_{R^{'}}$ will have reached $q_{1}^{f}$ and $q_{1}^{s}$.
Therefore, the same will be true after reading $w_{2}$ and all other $w_{i}$, including $w_{k}$.
Thus, $w \in \mathcal{L}(M_{R^{'}})$.
Note that if $w=\epsilon$, the $\epsilon$-transition from $q^{s}$ to $q^{f}$ ensures that $w \in \mathcal{L}(M_{R^{'}})$.
Conversely, assume $w \in \mathcal{L}(M_{R^{'}})$.
If $w=\epsilon$, then by the definition of the $^{*}$ operator, $w \in (\mathcal{L(R)})^{*}$.
In every other case, $M_{R^{'}}$ must have reached $q_{1}^{f}$ and must have passed through $q_{1}^{s}$.
Therefore, $w$ may be written as $w=w_{1} \cdot w_{2}$ where $w_{2} \in \mathcal{L}(M_{R})$
and, for $w_{1}$, upon reading it, $M_{R^{'}}$ must have reached $q_{1}^{s}$.
There are two cases then: either $w_{1}=\epsilon$ and $q_{1}^{s}$ was reached from $q^{s}$ or $w_{1} \neq \epsilon$ and $q_{1}^{s}$ was reached from $q_{1}^{f}$.
In the former case, $w = \epsilon \cdot w_{2} = w_{2}$ and thus $w \in (\mathcal{L}(R))^{*}$.
In the latter case, we can apply a similar reasoning recursively to $w_{1}$ in order to split it to sub-strings $w_{i}$ such that $w_{i} \in \mathcal{L}(R)$.
Therefore, $w \in (\mathcal{L}(R))^{*}$ and $w \in \mathcal{L}(R^{'})$.
\end{proof}
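The constructions used in the proof translate directly into code. The following Python sketch is our own illustration (predicates are callables; a guard of \texttt{None} marks an $\epsilon$-transition), together with the standard $\epsilon$-closure acceptance check:
\begin{lstlisting}[language=Python, frame=single]
import itertools

_ids = itertools.count()

class SFA:
    def __init__(self, start, final, trans):
        self.start, self.final, self.trans = start, final, trans

def atom(psi):                      # R = psi
    s, f = next(_ids), next(_ids)
    return SFA(s, f, [(s, psi, f)])

def union(m1, m2):                  # R = R1 + R2
    s, f = next(_ids), next(_ids)
    return SFA(s, f, m1.trans + m2.trans + [
        (s, None, m1.start), (s, None, m2.start),
        (m1.final, None, f), (m2.final, None, f)])

def concat(m1, m2):                 # R = R1 . R2
    return SFA(m1.start, m2.final,
               m1.trans + m2.trans + [(m1.final, None, m2.start)])

def star(m):                        # R' = R*
    s, f = next(_ids), next(_ids)
    return SFA(s, f, m.trans + [
        (s, None, m.start), (m.final, None, f),
        (s, None, f), (m.final, None, m.start)])

def closure(states, trans):
    todo, seen = list(states), set(states)
    while todo:
        q = todo.pop()
        for src, g, dst in trans:
            if src == q and g is None and dst not in seen:
                seen.add(dst)
                todo.append(dst)
    return seen

def accepts(m, word):
    cur = closure({m.start}, m.trans)
    for t in word:
        step = {dst for src, g, dst in m.trans
                if src in cur and g is not None and g(t)}
        cur = closure(step, m.trans)
    return m.final in cur

# Example over a numeric domain: R_s = top* . (x > 5), cf. the
# streaming construction of the next subsection.
top, psi = (lambda x: True), (lambda x: x > 5)
R_s = concat(star(atom(top)), atom(psi))
assert accepts(R_s, [1, 2, 7]) and not accepts(R_s, [1, 2, 3])
\end{lstlisting}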
\subsection{Proof of Proposition \ref{proposition:streamingsre}}
\label{sec:proof:streamingsre}
\begin{proposition*}
If $S=t_{1},t_{2},\cdots$ is a stream of domain elements from an effective Boolean algebra $\mathcal{A} = (\mathcal{D}$, $\Psi$, $\llbracket \_ \rrbracket$, $\bot$, $\top$, $\vee$, $\wedge$, $\neg$), where $t_{i} \in \mathcal{D}$, and $R$ is a symbolic regular expression over the same algebra,
then, for every $S_{m..k}$, $S_{m..k} \in \mathcal{L}(R)$ iff $S_{1..k} \in \mathcal{L}(R_{s})$ (and $S_{1..k} \in \mathcal{L}(M_{R_{s}})$).
\end{proposition*}
\begin{proof}
First, assume that $S_{m..k} \in \mathcal{L}(R)$ for some $m, 1 \leq m \leq k$
(we set $S_{1..0} = \epsilon$).
Then, for $S_{1..k} = S_{1..(m-1)} \cdot S_{m..k}$, $S_{1..(m-1)} \in \mathcal{L}(\top^{*})$,
since $\top^{*}$ accepts every string (sub-stream),
including $\epsilon$.
We know that $S_{m..k} \in \mathcal{L}(R)$, thus $S_{1..k} \in \mathcal{L}(\top^{*}) \cdot \mathcal{L}(R) = \mathcal{L}(\top^{*} \cdot R) = \mathcal{L}(R_{s})$.
Conversely, assume that $S_{1..k} \in \mathcal{L}(R_{s})$.
Therefore, $S_{1..k} \in \mathcal{L}(\top^{*} \cdot R) = \mathcal{L}(\top^{*}) \cdot \mathcal{L}(R)$.
As a result, $S_{1..k}$ may be split as $S_{1..k} = S_{1..(m-1)} \cdot S_{m..k}$ such that $S_{1..(m-1)} \in \mathcal{L}(\top^{*})$ and $S_{m..k} \in \mathcal{L}(R)$.
Note that $S_{1..(m-1)} = \epsilon$ is also possible, in which case the result still holds, since $\epsilon \in \mathcal{L}(\top^{*})$.
\end{proof}
\subsection{Proof of Theorem \ref{theorem:finals}}
\label{sec:proof:finals}
\begin{theorem*}
Let $\boldsymbol{\Pi}$ be the transition probability matrix of a homogeneous Markov chain $Y_{t}$ in the form of Equation \eqref{eq:matrix}
and $\boldsymbol{\xi}_{init}$ its initial state distribution.
The probability for the time index $n$ when the system first enters the set of states $F$,
starting from a state in $F$,
can be obtained from
\begin{equation*}
P(Y_{n} \in F, Y_{n-1} \notin F, \ldots, Y_{2} \notin F, Y_{1} \in F \mid \boldsymbol{\xi_{init}}) =
\begin{cases}
\boldsymbol{\xi_{F}}^{T} \boldsymbol{F} \boldsymbol{1} & \quad \text{if } n=2 \\
\boldsymbol{\xi_{F}}^{T} \boldsymbol{F_{N}} \boldsymbol{N}^{n-2}(\boldsymbol{I}-\boldsymbol{N})\boldsymbol{1} & \quad \text{otherwise} \\
\end{cases}
\end{equation*}
where $\boldsymbol{\xi_{F}}$ is the vector consisting of the elements of $\boldsymbol{\xi_{init}}$ corresponding to the states of $F$.
\end{theorem*}
\begin{proof}
\textbf{Case where $n=2$.}\\
In this case,
we are in a state $i \in F$ and we take a transition that leads us back to $F$ again.
Therefore, $P(Y_{2} \in F, Y_{1}=i \in F \mid \boldsymbol{\xi_{init}}) = \boldsymbol{\xi}(i) \sum_{j \in F} \pi_{ij}$,
i.e.,
we first take the probability of starting in $i$ and multiply it by the sum of all transitions from $i$ that lead us back to $F$.
This result holds for a given state $i \in F$.
If we start in any state of $F$,
$P(Y_{2} \in F, Y_{1} \in F \mid \boldsymbol{\xi_{init}}) = \sum_{i \in F} \boldsymbol{\xi}(i) \sum_{j \in F} \pi_{ij}$.
In matrix notation, this is equivalent to
$P(Y_{2} \in F, Y_{1} \in F \mid \boldsymbol{\xi_{init}}) = \boldsymbol{\xi_{F}}^{T} \boldsymbol{F} \boldsymbol{1}$.
\textbf{Case where $n>2$.}\\
In this case,
we must necessarily first take a transition from $i \in F$ to $j \in N$,
then, for multiple transitions we remain in $N$ and we finally take a last transition from $N$ to $F$.
We can write
\begin{equation}
\label{eq:broken}
\begin{aligned}
P(Y_{n} \in F, Y_{n-1} \notin F,...,Y_{1} \in F \mid \boldsymbol{\xi_{init}} ) = & P(Y_{n} \in F, Y_{n-1} \notin F,...,Y_{2} \notin F \mid \boldsymbol{\xi^{'}_{N}} ) \\
= & P(Y_{n-1} \in F, Y_{n-2} \notin F,...,Y_{1} \notin F \mid \boldsymbol{\xi^{'}_{N}} )
\end{aligned}
\end{equation}
where $\boldsymbol{\xi^{'}_{N}}$ is the state distribution (on states of $N$) after having taken the first transition from $F$ to $N$.
This is given by $\boldsymbol{\xi^{'}_{N}} = \boldsymbol{\xi_{F}}^{T} \boldsymbol{F_{N}}$.
By using this as the initial state distribution in Eq. \ref{eq:wtd:non-finals} and shifting the time index by one (so that it runs from $1$ to $n-1$),
as in Eq. \ref{eq:broken},
we get
\begin{equation*}
P(Y_{n} \in F, Y_{n-1} \notin F,...,Y_{1} \in F \mid \boldsymbol{\xi_{init}}) = \boldsymbol{\xi_{F}}^{T} \boldsymbol{F_{N}} \boldsymbol{N}^{n-2}(\boldsymbol{I}-\boldsymbol{N})\boldsymbol{1}
\end{equation*}
\end{proof}
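To make the two cases concrete, the following Python/NumPy sketch (ours, purely for illustration; the toy chain and its block decomposition are hypothetical) evaluates the theorem's formula directly:
\begin{verbatim}
import numpy as np

# Hypothetical 3-state chain: state 0 in F, states 1 and 2 in N.
F = np.array([[0.2]])              # F -> F block
F_N = np.array([[0.5, 0.3]])       # F -> N block
N = np.array([[0.6, 0.2],          # N -> N block
              [0.1, 0.7]])
xi_F = np.array([1.0])             # initial mass on the state of F

def first_entrance(n):
    # P(Y_n in F, Y_{n-1} not in F, ..., Y_2 not in F, Y_1 in F)
    if n == 2:
        return float(xi_F @ F @ np.ones(F.shape[0]))
    I = np.eye(N.shape[0])
    Npow = np.linalg.matrix_power(N, n - 2)
    return float(xi_F @ F_N @ Npow @ (I - N) @ np.ones(N.shape[0]))

# First-entrance probabilities for n = 2..5 under the theorem's formula.
print([round(first_entrance(n), 4) for n in range(2, 6)])
\end{verbatim}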
\subsection{Proof of Correctness for Algorithm \ref{algorithm:interval}}
\label{sec:proof:interval}
\begin{proposition*}
Let $P$ be a waiting-time distribution with horizon $h$ and let $\theta_{fc} < 1.0$ be a confidence threshold.
Algorithm \ref{algorithm:interval} correctly finds the smallest interval whose probability exceeds $\theta_{fc}$.
\end{proposition*}
\begin{proof}[Proof of correctness for Algorithm \ref{algorithm:interval}]
\input{figs_intervals}
We only need to prove the loop invariant.
Assume that after the $k^{th}$ iteration of the outer while loop $i=i_{k}$ and $j=j_{k}$ and that after the $(k+1)^{th}$ iteration $i=i_{k+1}$ and $j=j_{k+1}$.
If the invariant holds after the $k^{th}$ iteration,
then all intervals with $e \leq j_{k}$ have been checked and we know that $(s,e)$ is the best interval up to $j_{k}$.
It can be shown that,
during the $(k+1)^{th}$ iteration,
the intervals up to $j_{k+1}$ that are not explicitly checked are intervals which cannot possibly exceed $\theta_{fc}$ or cannot be better than the currently held best interval $(s,e)$.
There are three such sets of unchecked intervals (see also Figure \ref{fig:intervals}):
\begin{itemize}
\item All intervals $(i',j')$ such that $i' < i_{k}$ and $j_{k} \leq j' \leq j_{k+1}$,
i.e., we ignore all intervals that start before $i_{k}$.
Even if these intervals exceed $\theta_{fc}$, they cannot possibly be smaller than $(s,e)$, since we know that $(s,e)=(i_{k},j_{k})$ or that $(s,e)$ is even smaller than $(i_{k},j_{k})$.
\item All intervals $(i',j')$ such that $i' > i_{k+1}$ and $j_{k} \leq j' \leq j_{k+1}$,
i.e., we ignore all intervals that start after $i_{k+1}$.
These intervals cannot possibly exceed $\theta_{fc}$, since $(i_{k+1}+1,j_{k+1})$ is below $\theta_{fc}$ and all these intervals are sub-intervals of $(i_{k+1}+1,j_{k+1})$.
\item We are thus left with intervals $(i',j')$ such that $i_{k} \leq i' \leq i_{k+1}$ and $j_{k} \leq j' \leq j_{k+1}$.
Out of all the intervals that can be constructed by combining these $i'$ and $j'$,
the algorithm checks the intervals $(i'=i_{k},j')$ and $(i',j'=j_{k+1})$.
The intervals that are thus left unchecked are the intervals $(i', j')$ such that $i_{k} < i' \leq i_{k+1}$ and $j_{k} \leq j' < j_{k+1}$. The question is whether it is possible for such an interval to exceed $\theta_{fc}$. The answer is negative. Assume that there is such an interval $(i',j')$.
If this were the case, then the algorithm, during its expansion phase, would have stopped at $j'$,
because $(i_{k},j')$ would exceed $\theta_{fc}$. Therefore, these intervals cannot exceed $\theta_{fc}$.
\end{itemize}
A similar reasoning allows us to show that the loop invariant holds after the first iteration.
It thus holds after every iteration.
\end{proof}
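The expansion and shrinking phases that the proof refers to can be sketched as follows (Python; this is a minimal re-implementation of the idea behind Algorithm \ref{algorithm:interval}, not the system's actual code):
\begin{verbatim}
def smallest_interval(P, theta):
    """Return the smallest (i, j) with sum(P[i..j]) > theta, or None.
    P holds the h points of a waiting-time distribution (0-based here)."""
    best = None                        # shortest qualifying interval so far
    i, s = 0, 0.0                      # left pointer, running sum of P[i..j]
    for j in range(len(P)):            # expansion phase: extend right end
        s += P[j]
        while s - P[i] > theta:        # shrinking phase: drop left points
            s -= P[i]                  # while the sum still exceeds theta
            i += 1
        if s > theta and (best is None or j - i < best[1] - best[0]):
            best = (i, j)
    return best

# Example: the smallest interval exceeding 0.5 is (2, 3), with mass 0.6.
print(smallest_interval([0.1, 0.2, 0.3, 0.3, 0.1], 0.5))
\end{verbatim}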
\subsection{Proofs of Complexity Results}
\subsubsection{Proof of Proposition \ref{proposition:complexity1}}
\label{sec:proof:complexity1}
\begin{proposition*}[Step 1 in Figure \ref{fig:vmmflow}]
Let $S_{1..k}$ be a stream and $m$ the maximum depth of the Counter Suffix Tree $T$ to be constructed from $S_{1..k}$.
The complexity of constructing $T$ is $O(m(k-m))$.
\end{proposition*}
\begin{proof}
There are three operations that affect the cost:
incrementing the counter of a node by $1$, with constant cost $i$;
inserting a new node, with constant cost $n$;
visiting an existing node, with constant cost $v$.
We assume that $n > v$.
For every $S_{l-m+1..l}$, $m \leq l \leq k$ of length $m$,
there will be $m$ increment operations and $m$ nodes will be ``touched'',
i.e.,
either visited if already existing or created.
Therefore, the total number of increment operations is $(k-m+1) m = km - m^{2} +m = m(k-m)+m$.
The same result applies for the number of node ``touches''.
It is always true that $m < k$ and typically $m \ll k$.
Therefore, the cost of increments is $O(m(k-m))$ and the cost of visits/creations is also $O(m(k-m))$.
Thus, the total cost is $O(m(k-m)) + O(m(k-m)) = O(m(k-m))$.
In fact,
the worst case is when all $S_{l-m+1..l}$ are different and have no common suffixes.
In this case,
there are no visits to existing nodes,
but only insertions,
which are more expensive than visits.
Their cost would again be $O(nm(k-m))=O(m(k-m))$, ignoring the constant $n$.
\end{proof}
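The counting behavior that the proof analyzes can be sketched as follows (Python; the dictionary-based tree is our own simplification of a Counter Suffix Tree):
\begin{verbatim}
def build_cst(stream, m):
    """For each window S[l-m+1..l], walk its symbols from the most recent
    backwards, touching (visiting or creating) one node per symbol and
    incrementing its counter: m operations for each of the k-m+1 windows."""
    root = {}                              # node: symbol -> [count, children]
    for l in range(m, len(stream) + 1):
        node = root
        for sym in reversed(stream[l - m:l]):
            if sym not in node:
                node[sym] = [0, {}]        # insertion (cost n)
            node[sym][0] += 1              # increment (cost i)
            node = node[sym][1]            # visit (cost v)
    return root

cst = build_cst("abaab", m=2)   # windows: ab, ba, aa, ab
\end{verbatim}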
\subsubsection{Proof of Proposition \ref{proposition:complexity3}}
\label{sec:proof:complexity3}
\begin{proposition*}[Step 3a in Figure \ref{fig:vmmflow}]
Let $T$ be a $\mathit{PST}$\ of maximum depth $m$, learned with the $t$ minterms of a $\mathit{DSFA}$\ $M_{R}$.
The complexity of constructing a $\mathit{PSA}$\ $M_{S}$ from $T$ is $O(t^{m+1} \cdot m)$.
\end{proposition*}
\begin{proof}
We assume that the cost of creating new states and transitions for $M_{S}$ is constant.
In the worst case,
all possible suffixes of length $m$ have to be added to $T$ as leaves.
$T$ will thus have $t^{m}$ leaves.
The main idea of the algorithm for converting a $\mathit{PST}$\ $T$ to a $\mathit{PSA}$\ $M_{S}$ is to use the leaves
of $T$ as states of $M_{S}$ and for every symbol (minterm) $\sigma$ find the next state/leaf and set the transition probability to be equal to the probability of $\sigma$ from the source leaf.
If we assume that the cost of accessing a leaf is constant
(e.g., by keeping separate pointers to the leaves),
the cost for constructing $M_{S}$ is dominated by the cost of constructing the $t^{m}$ states of $M_{S}$ and the $t$ transitions from each such state.
For each transition,
finding the next state requires traversing a path of length $m$ in $T$.
The total cost is thus $O(t^{m} \cdot t \cdot m)$ = $O(t^{m+1} \cdot m)$.
\end{proof}
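The next-state computation that dominates the cost can be sketched as follows (Python; a simplification which assumes that the leaf set is suffix-complete, i.e., every sufficiently long string has exactly one leaf of $T$ as a suffix):
\begin{verbatim}
def next_state(leaf, sigma, leaves, m):
    """PSA transition: the next state is the unique leaf of T that is a
    suffix of leaf + sigma; the loop walks at most m candidate suffixes,
    which is where the O(m) factor per transition comes from."""
    candidate = (leaf + (sigma,))[-m:]
    while candidate not in leaves:   # terminates: the leaf set is
        candidate = candidate[1:]    # assumed suffix-complete
    return candidate

# Hypothetical suffix-complete leaf set over minterms {a, b}, with m = 2.
leaves = {('a', 'a'), ('b', 'a'), ('b',)}
print(next_state(('a', 'a'), 'b', leaves, 2))   # -> ('b',)
print(next_state(('b',), 'a', leaves, 2))       # -> ('b', 'a')
\end{verbatim}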
\subsubsection{Proof of Proposition \ref{proposition:complexity4}}
\label{sec:proof:complexity4}
\begin{proposition*}[Step 4 in Figure \ref{fig:vmmflow}]
Let $M_{R}$ be a $\mathit{DSFA}$\ with $t$ minterms and $M_{S}$ a $\mathit{PSA}$\ learned with the minterms of $M_{R}$.
The complexity of constructing an embedding $M$ of $M_{S}$ in $M_{R}$ with Algorithm \ref{algorithm:merging} is $O(t \cdot \lvert M_{R}.Q \times M_{S}.Q\rvert)$.
\end{proposition*}
\begin{proof}
We assume that the cost of constructing new states and transitions for $M$ is constant.
We also assume that the cost of finding a given state in both $M_{R}$ and $M_{S}$ is constant,
e.g.,
by using a linked data structure for representing the automaton with a hash table on its states (or an array),
and the cost of finding the next state from a given state is also constant.
In the worst case,
even with the incremental Algorithm \ref{algorithm:merging},
we would need to create the full Cartesian product $M_{R}.Q \times M_{S}.Q$ to get the states of $M$.
For each of these states,
we would need to find the states of $M_{R}$ and $M_{S}$ from which it will be composed and to create $t$ outgoing transitions.
Therefore,
the complexity of creating $M$ would be
$O(t \cdot \lvert M_{R}.Q \times M_{S}.Q\rvert)$.
\end{proof}
\subsubsection{Proof of Proposition \ref{proposition:complexity5}}
\label{sec:proof:complexity5}
\begin{proposition*}[Step 5 in Figure \ref{fig:vmmflow}]
Let $M$ be the embedding of a $\mathit{PSA}$\ $M_{S}$ in a $\mathit{DSFA}$\ $M_{R}$.
The complexity of estimating the waiting-time distribution for a state of $M$ and a horizon of length $h$ using Theorem \ref{theorem:non-finals} is $O((h-1) k^{2.37})$, where $k$ is the dimension of the square matrix $\boldsymbol{N}$.
\end{proposition*}
\begin{proof}
We want to use Equation \ref{eq:wtd:non-finals} to estimate the distribution of a state.
The equation is repeated below:
\begin{equation*}
P(Y_{n} \in F, Y_{n-1} \notin F,...,Y_{1} \notin F \mid \boldsymbol{\xi_{init}}) =
\boldsymbol{\xi_{N}}^{T}\boldsymbol{N}^{n-1}(\boldsymbol{I}-\boldsymbol{N})\boldsymbol{1}
\end{equation*}
We want to estimate the distribution for the $h$ points of the horizon,
i.e.,
for $n=2$, $n=3$ up to $n=h+1$.
For $n=2$,
we have
\begin{equation*}
P(Y_{2} \in F, Y_{1} \notin F \mid \boldsymbol{\xi_{init}}) =
\boldsymbol{\xi_{N}}^{T}\boldsymbol{N}(\boldsymbol{I}-\boldsymbol{N})\boldsymbol{1}
\end{equation*}
For $n=3$,
we have
\begin{equation*}
P(Y_{3} \in F, Y_{2} \notin F, Y_{1} \notin F \mid \boldsymbol{\xi_{init}}) =
\boldsymbol{\xi_{N}}^{T}\boldsymbol{N}^{2}(\boldsymbol{I}-\boldsymbol{N})\boldsymbol{1}
\end{equation*}
In general,
for $n=i$,
we can use the power of $\boldsymbol{N}$ that we have estimated in the previous step for $n=i-1$, $\boldsymbol{N}^{i-2}$, in order to estimate the next power $\boldsymbol{N}^{i-1}$ via a multiplication by $\boldsymbol{N}$ so as to avoid estimating this power from scratch.
Then $\boldsymbol{N}^{i-1}$ can be multiplied by $(\boldsymbol{I}-\boldsymbol{N})\boldsymbol{1}$,
which remains fixed for all $i$ and can thus be calculated only once.
The cost of estimating $(\boldsymbol{I}-\boldsymbol{N})$ is $k^{2}$ due to the $k^{2}$ subtractions.
Multiplying the matrix $(\boldsymbol{I}-\boldsymbol{N})$ by the vector $\boldsymbol{1}$ results in a new vector with $k$ elements.
Each of these elements requires $k$ multiplications and $k - 1$ additions or $2k - 1$ operations.
Thus, the estimation of $(\boldsymbol{I}-\boldsymbol{N})\boldsymbol{1}$ has a cost of $k^{2} + k(2k - 1) = 3 k^{2} - k$.
Now, for $n=i$, estimating the power $\boldsymbol{N}^{i-1}$ from $\boldsymbol{N}^{i-2}$ has a cost of $k^{2.37}$ using an efficient multiplication algorithm such as the Coppersmith–Winograd algorithm \cite{DBLP:journals/jsc/CoppersmithW90} or the improvement proposed by Stothers \cite{stothers2010complexity}.
Additionally, $\boldsymbol{N}^{i-1}$ must then be multiplied by the vector $(\boldsymbol{I}-\boldsymbol{N})\boldsymbol{1}$,
with a cost of $2k^{2}-k$,
resulting in a new vector with $k$ elements.
This vector must then be multiplied by $\boldsymbol{\xi_{N}}^{T}$ to produce the final probability value with a cost of $2k-1$ for the $k$ multiplications and the $k-1$ additions.
We thus have a fixed initial cost of $3k^{2} - k$ and then for every iteration $i$ a cost of
$k^{2.37} I_{\{i>2\}} + 2k^{2}-k + 2k-1 = k^{2.37} I_{\{i>2\}} + 2k^{2} + k - 1$,
where $I_{\{i>2\}}$ is an indicator function ($1$ for $i>2$ and $0$ otherwise).
Note that the cost $k^{2.37}$ is not included for $i=2$ because in this case we do not need to raise $\boldsymbol{N}$ to a power.
The total cost would thus be:
\begin{equation*}
\begin{aligned}
3k^{2} - k + & k^{2.37} \cdot 0 + 2k^{2} + k - 1 & \text{for } n=2 \\
+ & k^{2.37} \cdot 1 + 2k^{2} + k - 1 & \text{for } n=3 \\
& \cdots & \\
+ & k^{2.37} \cdot 1 + 2k^{2} + k - 1 & \text{for } n=h+1 \\
= & (h-1)k^{2.37} + (2h+3)k^{2} + (h-1)k -h & = O((h-1)k^{2.37})
\end{aligned}
\end{equation*}
\end{proof}
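The incremental computation of the powers of $\boldsymbol{N}$ can be sketched as follows (Python/NumPy; our own illustration of the scheme, with a hypothetical toy matrix):
\begin{verbatim}
import numpy as np

def waiting_time_distribution(xi_N, N, h):
    """Evaluate xi_N^T N^{n-1} (I - N) 1 for n = 2, ..., h+1, reusing the
    previous power of N at every step (one matrix product per point)."""
    I = np.eye(N.shape[0])
    v = (I - N) @ np.ones(N.shape[0])   # (I - N) 1: fixed, computed once
    power = N.copy()                    # N^{n-1}, starting at N for n = 2
    dist = []
    for _ in range(2, h + 2):
        dist.append(float(xi_N @ power @ v))
        power = power @ N               # incremental step: N^{n-1} -> N^n
    return dist

N = np.array([[0.6, 0.2], [0.1, 0.7]])      # hypothetical N block
print(waiting_time_distribution(np.array([0.5, 0.5]), N, h=5))
\end{verbatim}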
\subsubsection{Proof of Proposition \ref{proposition:complexity6}}
\label{sec:proof:complexity6}
\begin{proposition*}[Step 6 in Figure \ref{fig:vmmflow}]
For a waiting-time distribution with a horizon of length $h$,
the complexity of finding the smallest interval that exceeds a confidence threshold $\theta_{fc}$ with Algorithm \ref{algorithm:interval} is $O(h)$.
\end{proposition*}
\begin{proof}
Indexes $i$ and $j$ of Algorithm \ref{algorithm:interval} scan the distribution only once.
The cost for $j$ is the cost of $h$ points of the distribution that need to be accessed plus
$h-1$ additions.
Similarly, the cost for $i$ is the cost of (at most) $h$ accessed points plus the cost of (at most) $h-1$ subtractions.
Thus the total cost is $O(h)$.
\end{proof}
\subsubsection{Proof of Proposition \ref{proposition:complexity3prime}}
\label{sec:proof:complexity3prime}
\begin{proposition*}[Step 3b in Figure \ref{fig:vmmflow}]
Let $T$ be a $\mathit{PST}$\ of maximum depth $m$, learned with the $t$ minterms of a $\mathit{DSFA}$\ $M_{R}$.
The complexity of estimating the waiting-time distribution for a state of $M_{R}$ and a horizon of length $h$ directly from $T$ is $O((m+3) \frac{t - t^{h+1}}{1 - t})$.
\end{proposition*}
\begin{proof}
After every new event arrival,
we first have to construct the tree of future states,
as shown in Figure \ref{fig:future}.
In the worst case,
no paths can be pruned and the tree has to be expanded until level $h$.
The total number of nodes that have to be created is thus given by a geometric progression:
$t + t^{2} + \cdots + t^{h}=\sum_{i=1}^{h}t^{i}=\frac{t - t^{h+1}}{1-t}$.
Assuming that it takes constant time to create a new node,
this formula gives the cost of creating the nodes of the trees.
Another cost concerns the time required to find the proper leaf of the $\mathit{PST}$\ $T$ before the creation of each new node.
In the worst case,
all leaves will be at level $m$.
The cost of each search will thus be $m$.
The total search cost for all nodes will be
$mt + mt^{2} + \cdots + mt^{h}=\sum_{i=1}^{h}mt^{i}=m\frac{t - t^{h+1}}{1-t}$.
The total cost (node creation and search) for constructing the tree is
$\frac{t - t^{h+1}}{1-t} + m\frac{t - t^{h+1}}{1-t} = (m+1)\frac{t - t^{h+1}}{1-t}$.
With the tree of future states at hand,
we can now estimate the waiting-time distribution.
In the worst case,
the tree will be fully expanded and we will have to access all its paths until level $h$.
We will first have to visit the $t$ nodes of level 1,
then the $t^{2}$ nodes of level 2, etc.
The access cost will thus be
$t + t^{2} + \cdots + t^{h}=\sum_{i=1}^{h}t^{i}=\frac{t - t^{h+1}}{1-t}$.
We also need to take into account the cost of estimating the probability of each node.
For each node,
one multiplication is required,
assuming that we store partial products and do not have to traverse the whole path to a node to estimate its probability.
As a result,
the number of multiplications will also be $\frac{t - t^{h+1}}{1-t}$.
The total cost (path traversal and multiplications) will thus be $2\frac{t - t^{h+1}}{1-t}$,
where we ignore the cost of summing the probabilities of final states,
assuming it is constant.
By adding the cost of constructing the tree ($(m+1)\frac{t - t^{h+1}}{1-t}$) and the cost of estimating the distribution ($2\frac{t - t^{h+1}}{1-t}$),
we get a complexity of $O((m+3)\frac{t - t^{h+1}}{1-t})$.
\end{proof}
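The direct estimation can be sketched as follows (Python; the callbacks \texttt{prob}, \texttt{delta} and \texttt{is\_final} are hypothetical stand-ins for the $\mathit{PST}$\ next-symbol lookup, the $\mathit{DSFA}$\ transition function and the final-state test):
\begin{verbatim}
def waiting_time_from_pst(history, state, prob, delta, is_final,
                          minterms, h, cutoff=0.0):
    """Expand the tree of future states up to level h and accumulate, per
    level, the probability of reaching a final state for the first time."""
    dist = [0.0] * (h + 1)
    def expand(hist, q, p, level):
        if level > h or p <= cutoff:        # prune deep or improbable paths
            return
        for sigma in minterms:
            p2, q2 = p * prob(hist, sigma), delta(q, sigma)
            if is_final(q2):
                dist[level] += p2           # first completion at this level
            else:
                expand(hist + (sigma,), q2, p2, level + 1)
    expand(history, state, 1.0, 1)
    return dist[1:]                         # P(completion at level 1..h)
\end{verbatim}
The \texttt{cutoff} argument corresponds to the pruning of improbable paths considered in the worst-case analysis above (worst case: nothing is pruned).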
\section{Complexity Analysis}
\label{sec:complexity}
Figure \ref{fig:vmmflow} depicts the steps required for estimating forecasts,
along with the input required for each of them.
The first step (box $1$) takes as input the minterms of a $\mathit{DSFA}$, the maximum order $m$ of dependencies to be captured and a training stream.
Its output is a $\mathit{CST}$\ of maximum depth $m$ (Section \ref{sec:prob_empirical}).
In the next step (box $2$),
the $\mathit{CST}$\ is converted to a $\mathit{PST}$,
using an approximation parameter $\alpha$ and a parameter $n$ for the maximum number of states for the $\mathit{PSA}$\ to be constructed subsequently (Section \ref{sec:pst}).
For the third step,
we have two options:
we can either use the $\mathit{PST}$\ to directly estimate the waiting-time distributions (box $3b$, Section \ref{sec:no-mc})
or we can convert the $\mathit{PST}$\ to a $\mathit{PSA}$,
by using the leaves of the $\mathit{PST}$\ as states of the $\mathit{PSA}$\ (box $3a$, Section \ref{sec:pst}).
If we follow the first path,
we can then move on directly to the last step of estimating the actual forecasts,
using the confidence threshold $\theta_{fc}$ provided by the user (box $6$).
If we follow the alternative path,
the $\mathit{PSA}$\ is merged with the initial $\mathit{DSFA}$\ to create the embedding of the $\mathit{PSA}$\ in the $\mathit{DSFA}$\ (box $4$, Section \ref{sec:embed}).
From the embedding we can calculate the waiting-time distributions (box $5$),
which can be used to derive the forecasts (box $6$).
The learning algorithm of step $2$,
as presented in \cite{DBLP:journals/ml/RonST96},
is polynomial in $m$, $n$, $\frac{1}{\alpha}$ and the size of the alphabet (number of minterms in our case).
Below,
we give complexity results for the remaining steps.
\input{figs_vmmflow}
\begin{proposition}[Step 1 in Figure \ref{fig:vmmflow}]
\label{proposition:complexity1}
Let $S_{1..k}$ be a stream and $m$ the maximum depth of the Counter Suffix Tree $T$ to be constructed from $S_{1..k}$.
The complexity of constructing $T$ is $O(m(k-m))$.
\end{proposition}
\begin{proof}
See Appendix, Section \ref{sec:proof:complexity1}.
\end{proof}
\begin{proposition}[Step 3a in Figure \ref{fig:vmmflow}]
\label{proposition:complexity3}
Let $T$ be a $\mathit{PST}$\ of maximum depth $m$, learned with the $t$ minterms of a $\mathit{DSFA}$\ $M_{R}$.
The complexity of constructing a $\mathit{PSA}$\ $M_{S}$ from $T$ is $O(t^{m+1} \cdot m)$.
\end{proposition}
\begin{proof}
See Appendix, Section \ref{sec:proof:complexity3}.
\end{proof}
\begin{proposition}[Step 3b in Figure \ref{fig:vmmflow}]
\label{proposition:complexity3prime}
Let $T$ be a $\mathit{PST}$\ of maximum depth $m$, learned with the $t$ minterms of a $\mathit{DSFA}$\ $M_{R}$.
The complexity of estimating the waiting-time distribution for a state of $M_{R}$ and a horizon of length $h$ directly from $T$ is $O((m+3) \frac{t - t^{h+1}}{1 - t})$.
\end{proposition}
\begin{proof}
See Appendix, Section \ref{sec:proof:complexity3prime}.
\end{proof}
\begin{proposition}[Step 4 in Figure \ref{fig:vmmflow}]
\label{proposition:complexity4}
Let $M_{R}$ be a $\mathit{DSFA}$\ with $t$ minterms and $M_{S}$ a $\mathit{PSA}$\ learned with the minterms of $M_{R}$.
The complexity of constructing an embedding $M$ of $M_{S}$ in $M_{R}$ with Algorithm \ref{algorithm:merging} is $O(t \cdot \lvert M_{R}.Q \times M_{S}.Q\rvert)$.
\end{proposition}
\begin{proof}
See Appendix, Section \ref{sec:proof:complexity4}.
\end{proof}
Notice that the cost of learning a $\mathit{PSA}$\ might be exponential in $m$.
In the worst case,
all permutations of the $t$ minterms of length $m$ need to be added to the $\mathit{PST}$\ and the $\mathit{PSA}$.
This may happen if the statistical properties of the training stream are such that all these permutations are deemed as important.
In this case,
the final embedding in the $\mathit{DSFA}$\ $M_{R}$ might have up to $t^{m} \cdot \lvert M_{R}.Q \rvert$ states.
This is also the upper bound of the number of states of the automaton that would be created using the method of \cite{DBLP:conf/debs/AlevizosAP17},
where every state of an initial automaton is split into at most $t^{m}$ sub-states,
regardless of the properties of the stream.
Thus,
in the worst case,
our approach would create an automaton of size similar to an automaton encoding a full-order Markov chain.
Our approach provides an advantage when the statistical properties of the training stream allow us to retain only some of the dependencies of order up to $m$.
\begin{proposition}[Step 5 in Figure \ref{fig:vmmflow}]
\label{proposition:complexity5}
Let $M$ be the embedding of a $\mathit{PSA}$\ $M_{S}$ in a $\mathit{DSFA}$\ $M_{R}$.
The complexity of estimating the waiting-time distribution for a state of $M$ and a horizon of length $h$ using Theorem \ref{theorem:non-finals} is $O((h-1) k^{2.37})$, where $k$ is the dimension of the square matrix $\boldsymbol{N}$. A similar result may be obtained for Theorem \ref{theorem:finals}.
\end{proposition}
\begin{proof}
See Appendix, Section \ref{sec:proof:complexity5}.
\end{proof}
\begin{proposition}[Step 6 in Figure \ref{fig:vmmflow}]
\label{proposition:complexity6}
For a waiting-time distribution with a horizon of length $h$,
the complexity of finding the smallest interval that exceeds a confidence threshold $\theta_{fc}$ with Algorithm \ref{algorithm:interval} is $O(h)$.
\end{proposition}
\begin{proof}
See Appendix, Section \ref{sec:proof:complexity6}.
\end{proof}
The complexity of the last step ($6$),
when the forecasts are ``classification'' decisions about whether a CE will occur within the next $w$ input events,
is $O(w)$.
In order to make such a positive or negative decision,
we can simply sum the probabilities of the first $w$ points of a waiting-time distribution and compare this sum to the given threshold $\theta_{fc}$.
If this sum exceeds the given threshold,
then the decision is positive.
The cost of the summation is $O(w)$.
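In code, such a decision reduces to a few lines (a sketch):
\begin{verbatim}
def classify(dist, w, theta_fc):
    """Positive iff the CE is forecast to occur within the next w events:
    sum the first w points of the waiting-time distribution and compare."""
    return sum(dist[:w]) > theta_fc
\end{verbatim}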
\section{Empirical Evaluation}
\label{sec:experiments}
\input{experiments_setting}
\input{experiments_cards}
\input{experiments_maritime}
\subsection{Credit Card Fraud Management}
\label{sec:cards}
The first dataset used in our experiments is a synthetic one,
inspired by the domain of credit card fraud management \cite{DBLP:conf/debs/ArtikisKCBMSFP17}.
We start with a synthetically generated dataset in order to investigate how our method performs under conditions that are controlled and produce results more readily interpretable.
The data generator was developed in collaboration with Feedzai\footnote{\url{https://feedzai.com}},
our partner in the SPEEDD project\footnote{\url{http://speedd-project.eu}}.
In this dataset,
each event is supposed to be a credit card transaction,
accompanied by several arguments,
such as the time of the transaction, the card ID, the amount of money spent, the country where the transaction took place, etc.
In the real world,
a very small proportion of such transactions are fraudulent and the goal of a CER system would be to detect,
with very low latency,
fraud instances.
To do so,
a set of fraud patterns must be provided to the engine.
For typical cases of such patterns in a simplified form,
see \cite{DBLP:conf/debs/ArtikisKCBMSFP17}.
In our experiments,
we use one such pattern,
consisting of a sequence of consecutive transactions,
where the amount spent at each transaction is greater than that of the previous transaction.
Such a trend of steadily increasing amounts constitutes a typical fraud pattern.
The goal in our forecasting experiments is to predict if and when such a pattern will be completed,
even before it is detected by the engine (if in fact a fraud instance occurs),
so as to possibly provide a wider margin for action to an analyst.
We generated a dataset consisting of 1,000,000 transactions in total from 100 different cards.
About $20\%$ of the transactions are fraudulent.
Not all of these instances of fraud belong to the pattern of increasing amounts.
We actually inject seven different types of known fraudulent patterns in the dataset,
including, for instance, a decreasing trend.
Each fraudulent sequence for the increasing trend consists of eight consecutive transactions with increasing amounts,
where the amount is increased each time by $100$ monetary units or more.
We additionally inject sequences of transactions with increasing amounts,
which are not fraudulent.
In those cases,
we randomly interrupt the sequence before it reaches the eighth transaction.
In the legitimate sequences
the amount is increased each time by $0$ or more units.
With this setting,
we want to test the effect of long-term dependencies on the quality of the forecasts.
For example,
a sequence of six transactions with increasing amounts,
where all increases are $100$ or more units is very likely to lead to a fraud detection.
On the other hand,
a sequence of just two transactions with the same characteristics,
could still possibly lead to a detection,
but with a significantly reduced probability.
We thus expect that models with deeper memories will perform better.
We used $75\%$ of the dataset for training and the rest for testing.
No k-fold cross validation is performed,
since each fold would have exactly the same statistical properties.
Formally, the symbolic regular expression that we use to capture the pattern of an increasing trend in the amount spent is the following:
\begin{equation}
\label{exp:amount}
\begin{aligned}
R := \ & \ (\mathit{amountDiff > 0}) \cdot (\mathit{amountDiff > 0}) \cdot (\mathit{amountDiff > 0}) \cdot (\mathit{amountDiff > 0}) \cdot \\
\ &\ (\mathit{amountDiff > 0}) \cdot (\mathit{amountDiff > 0}) \cdot (\mathit{amountDiff > 0})
\end{aligned}
\end{equation}
$\mathit{amountDiff}$ is an extra attribute
(besides the card ID, the amount spent, the transaction country and the other standard attributes)
with which we enrich each event and is equal to the difference between the amount spent by the current transaction and that spent by the immediately previous transaction from the same card.
The expression consists of seven terminal sub-expressions,
in order to capture eight consecutive events.
The first terminal sub-expression captures an increasing amount between the first two events in a fraudulent pattern.
If we attempted to perform forecasting based solely on Pattern \eqref{exp:amount},
then the minterms that would be created would be based only on the predicate $\mathit{amountDiff}>0$:
namely, the predicate itself, along with its negation $\neg (\mathit{amountDiff}>0)$.
As expected,
such an approach does not yield good results,
as the language is not expressive enough to differentiate between fraudulent and legitimate transaction sequences.
In order to address this lack of informative (for forecasting purposes) predicates,
we have added a mechanism to our system that allows us to incorporate extra predicates when building a probabilistic model,
without affecting the semantics of the initial expression (exactly the same matches are detected).
We do this by using any such extra predicates during the construction of the minterms.
For example,
if $\mathit{country}=\mathit{MA}$ is such an extra predicate that we would like included,
then we would construct the following minterms for Pattern \eqref{exp:amount}:
a) $m_{1} = (\mathit{amountDiff > 0}) \wedge (\mathit{country}=\mathit{MA})$;
b) $m_{2} = (\mathit{amountDiff > 0}) \wedge \neg (\mathit{country}=\mathit{MA})$;
c) $m_{3} = \neg (\mathit{amountDiff > 0}) \wedge (\mathit{country}=\mathit{MA})$;
d) $m_{4} = \neg (\mathit{amountDiff > 0}) \wedge \neg (\mathit{country}=\mathit{MA})$.
We can then use these enhanced minterms as guards on the automaton transitions in a way that does not affect the semantics of the expression.
For example,
if an initial transition has the guard $\mathit{amountDiff > 0}$,
then we can split it into two derived transitions,
one for $m_{1}$ and one for $m_{2}$.
The derived transitions would be triggered exactly when the initial one is triggered,
the only difference being that the derived transitions also have information about the country.
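For illustration, the enumeration of minterms from a set of predicates (the original ones plus any extra ones) can be sketched as follows (Python; the predicate names are ours, and unsatisfiable conjunctions would have to be filtered out separately):
\begin{verbatim}
from itertools import product

def minterms(predicates):
    """All 2^n conjunctions of the n predicates and their negations,
    each represented as a Boolean function over an event."""
    terms = []
    for signs in product([True, False], repeat=len(predicates)):
        def term(event, signs=signs):
            return all(p(event) == s for p, s in zip(predicates, signs))
        terms.append(term)
    return terms

# Hypothetical predicates over a transaction event:
amount_diff_pos = lambda e: e["amountDiff"] > 0
country_ma = lambda e: e["country"] == "MA"
ms = minterms([amount_diff_pos, country_ma])   # m1, ..., m4 as in the text
\end{verbatim}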
For our experiments and for Pattern \eqref{exp:amount},
if we include the extra predicate
$\mathit{amountDiff}>100$,
we expect the model to be able to differentiate between sequences involving genuine transactions
(where the difference in the amount can by any value above $0$)
and fraudulent sequences
(where the difference in the amount is always above $100$ units).
We now present results for SDE forecasting.
As already mentioned in Section \ref{sec:test-models},
for this type of experiments we do not use the automaton created by Pattern \eqref{exp:amount}.
We instead use only its minterms which will constitute our ``alphabet''.
In our case,
there are four minterms:
a) $\mathit{amountDiff}>0 \wedge \mathit{amountDiff}>100$;
b) $\mathit{amountDiff}>0 \wedge \neg (\mathit{amountDiff}>100)$;
c) $\neg (\mathit{amountDiff}>0) \wedge \mathit{amountDiff}>100$;
d) $\neg (\mathit{amountDiff}>0) \wedge \neg (\mathit{amountDiff}>100)$.
Thus, the goal is to predict,
as the stream is consumed,
which one of these minterms will be satisfied.
Notice that, for every possible event,
exactly one minterm is satisfied
(the third one, $\neg (\mathit{amountDiff}>0) \wedge \mathit{amountDiff}>100$, is actually unsatisfiable).
We use $75\%$ of the original dataset
(which amounts to 750,000 transactions)
for training and the rest (250,000 transactions) for testing.
We do not employ cross-validation,
as the dataset is synthetic and the statistical properties of its folds would not differ.
\begin{figure}[t]
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=0.95\textwidth]{cards_sde_IncLogLossScore.pdf}
\caption{Average log-loss.}
\label{fig:exp-feedzai-sde-logloss}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=0.95\textwidth]{cards_sde_IncLogLossStates.pdf}
\caption{Number of states/nodes.}
\label{fig:exp-feedzai-sde-states}
\end{subfigure}
\caption{Results for SDE forecasting from the domain of credit card fraud management. Fx stands for a Full-order Markov Model of order $x$. Vx stands for a Variable-order Markov Model (a prediction suffix tree) of maximum order $x$.}
\label{fig:exp-feedzai-sde}
\end{figure}
\input{figs_exp-feedzai-classification-roc}
Figure \ref{fig:exp-feedzai-sde-logloss} shows the average log-loss obtained for various models and orders $m$
and Figure \ref{fig:exp-feedzai-sde-states} shows the number of states for the full-order models or nodes for the variable-order models,
which are prediction suffix trees.
The best result is achieved with a variable-order Markov model of maximum order 7.
The full-order Markov models are slightly better than their equivalent (same order) variable-order models.
This is an expected behavior,
since the variable-order models are essentially approximations of the full-order ones.
We increase the order of the full-order models until $m=3$,
in which case the Markov chain has $\approx\ 750$ states.
We avoid increasing the order any further,
because Markov chains with more than 1000 states become significantly difficult to manage in terms of memory usage
(and in terms of the computational cost of estimating the waiting-time distributions for the experiments of CE forecasting that follow).
Note that a Markov chain with $1000$ states would require a transition matrix with $1000^{2}$ entries.
On the contrary,
we can increase the maximum order of the variable-order model until we find the best one,
i.e., the order after which the average log-loss starts increasing again.
The size of the prediction suffix tree can be allowed to increase to more than $1000$ nodes,
since we are not required to build a transition matrix.
\input{figs_exp-feedzai-classification-performance}
We now move on to the classification experiments.
Figure \ref{fig:exp-feedzai-classification-roc} shows the ROC curves of the variable-order model that directly uses a $\mathit{PST}$.
We show results for two different ``expected'' distance ranges:
$\mathit{distance} \in [0.2,0.4]$ in Figure \ref{fig:exp-feedzai-classification-roc1}
and $\mathit{distance} \in [0.4,0.6]$ in Figure \ref{fig:exp-feedzai-classification-roc2}.
The ideal operating point in the ROC is the top-left corner and thus,
the closer to that point the curve is, the better.
Thus, the first observation is that by increasing the maximum order we obtain better results.
Notice, however, that in Figure \ref{fig:exp-feedzai-classification-roc2},
where the distance is greater and the forecasting problem is harder,
increasing the order from 6 to 7 yields only a marginal improvement.
Figure \ref{fig:exp-feedzai-classification-aucroc} displays ROC results for different distances and all models,
in terms of the Area Under the ROC Curve (AUC),
which is a measure of the models' classification accuracy.
The first observation is that the MEAN and HMM methods consistently underperform,
compared to the Markov models.
Focusing on the Markov models,
as expected,
the task becomes more challenging and the ROC scores decrease,
as the distance increases.
It is also evident that higher orders lead to better results.
The advantage of increasing the order becomes less pronounced
(or even non-existent)
as the distance increases.
The variable-order models that use an embedding are only able to go as far as $m=4$,
due to increasing memory requirements,
whereas the tree-based versions can go up to $m=7$
(and possibly even further, but we did not try to extend the order beyond this point).
The embedding ($\mathit{PSA}$) can indeed help achieve better scores than full-order models by reaching higher orders,
but this is especially true for the tree-based models, which bypass the embedding.
We can thus conclude that full-order models perform well up to the highest order that we can achieve with them.
$\mathit{PSA}$\ models can reach roughly the same levels,
as they are also practically restricted.
The performance of $\mathit{PST}$\ models is similar to that of the other models for the same order,
but the fact that they can use higher orders allows them to finally obtain superior performance.
We show performance results in Figure \ref{fig:exp-feedzai-classification-performance},
in terms of computation and memory efficiency.
Figure \ref{fig:exp-feedzai-classification-throughput} displays throughput results.
We can observe the trade-off between the high forecasting accuracy of the tree-based high-order models and the performance penalty that these models incur.
The models based on $\mathit{PST}$\ have a throughput figure that is almost half that of the full-order models and the embedding-based variable-order ones.
In order to emit a forecast,
the tree-based models
need to traverse a tree after every new event arrives at the system,
as described in Section \ref{sec:no-mc}.
The automata-based full- and variable-order models,
on the contrary,
only need to evaluate the minterms on the outgoing transitions of their current state and simply jump to the next state.
It would be possible to improve the throughput of the tree-based models,
by using caching techniques,
so that we can reuse some of the previously estimated forecasts,
but we reserve such optimizations for future work.
By far the worst throughput, however,
is observed for the HMM models.
The reason is that the waiting-time distributions and forecasts are always estimated online,
as explained in Section \ref{sec:test-models}.
Figure \ref{fig:exp-feedzai-classification-training} shows training times as a stacked bar plot.
For each model,
the total training time is broken down into 4 different components,
each corresponding to a different phase of the forecast building process.
\emph{modelTime} is the time required to actually construct the model from the training dataset.
\emph{wtTime} is the time required to estimate the waiting-time distributions,
once the model has been constructed.
\emph{inTime} measures the time required to estimate the forecast of each waiting-time distribution.
Finally, \emph{extraTime} measures the time required to determinize the automaton of our initial pattern.
For the full-order Markov models,
it also includes the time required to convert the deterministic automaton into its equivalent, disambiguated automaton.
We observe that the tree-based models exhibit significantly higher times than the rest,
for high orders.
The other models have similar training times,
almost always below $5$ seconds.
Thus, if we need high accuracy,
we again have to pay a price in terms of training time.
Even in the case of high-order tree-based models though,
the training time is almost half a minute for a training dataset composed of 750,000 transactions,
which allows us to be confident that training could be performed online.
Figure \ref{fig:exp-feedzai-classification-states} shows the memory footprint of the models in terms of the size of their basic data structures.
For automata-based methods,
we show the number of states,
whereas for the tree-based methods we show the number of nodes.
We see that variable-order models,
especially the tree-based ones,
are significantly more compact than the full-order ones,
for the same order.
We also observe that the tree-based methods,
for the same order,
are much more compact (fewer nodes) than the ones based on the embedding (more states).
This allows us to increase the order up to $7$ with the tree-based approach,
but only up to $4$ with the embedding.
\subsection{Maritime Situational Awareness}
\label{sec:maritime}
The second dataset that we used in our experiments is a real--world dataset coming from the field of maritime monitoring.
It is composed of a set of trajectories from ships sailing at sea,
emitting AIS (Automatic Identification System) messages that relay information about their position, heading, speed, etc.,
as described in the running example of Section \ref{sec:example}.
These trajectories can be analyzed,
using the techniques of Complex Event Recognition,
in order to detect interesting patterns in the behavior of vessels \cite{DBLP:journals/geoinformatica/PatroumpasAAVPT17}.
The dataset that we used is publicly available, contains AIS kinematic messages from vessels sailing in the Atlantic Ocean around the port of Brest, France, and spans a period from 1 October 2015 to 31 March 2016 \cite{ray_cyril_2018_1167595}.
We used a derivative dataset that contains clean and compressed trajectories,
consisting only of critical points \cite{patroumpas_2018_2563256}.
Critical points are the important points of a trajectory that indicate a significant change in the behavior of a vessel.
Using critical points,
one can reconstruct quite accurately the original trajectory \cite{DBLP:journals/geoinformatica/PatroumpasAAVPT17}.
We further processed the dataset by interpolating between the critical points in order to produce trajectories where two consecutive points have a temporal distance of exactly 60 seconds.
The reason for this pre-processing step is that AIS messages typically arrive at unspecified time intervals.
These intervals can exhibit a very wide variation,
depending on many factors (e.g., human operators may turn on/off the AIS equipment),
without any clear pattern that could be encoded by our probabilistic model.
Consequently, our system performs this interpolation as a first step.
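A minimal sketch of this resampling step (Python/NumPy; the attribute layout is ours, and the linear interpolation of coordinates is only an approximation, which suffices for nearby points):
\begin{verbatim}
import numpy as np

def resample(timestamps, lons, lats, step=60):
    """Linearly interpolate a trajectory (parallel arrays, sorted by time)
    so that consecutive points are exactly `step` seconds apart."""
    t = np.arange(timestamps[0], timestamps[-1] + 1, step)
    return t, np.interp(t, timestamps, lons), np.interp(t, timestamps, lats)
\end{verbatim}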
\input{figs_port-traffic}
\begin{figure}[t]
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=0.95\textwidth]{maritime_sde_PortLogLossScore.pdf}
\caption{Average log-loss.}
\label{fig:exp-maritime-sde-logloss}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=0.95\textwidth]{maritime_sde_PortLogLossStates.pdf}
\caption{Number of states/nodes.}
\label{fig:exp-maritime-sde-states}
\end{subfigure}
\caption{Results for SDE forecasting from the domain of maritime monitoring. Fx stands for a Full-order Markov Model of order $x$. Vx stands for a Variable-order Markov Model (a prediction suffix tree) of maximum order $x$.}
\label{fig:exp-maritime-sde}
\end{figure}
The pattern that we used in the experiments is a movement pattern in which a vessel approaches the main port of Brest.
The goal is to forecast when a vessel will enter the port.
This way,
port traffic management may be optimized,
in order to reduce the carbon emissions of vessels waiting to enter the port.
The symbolic regular expression for this pattern is the following:
\begin{equation}
\label{exp:port}
\begin{aligned}
R := \ & \ (\neg \mathit{InsidePort(Brest)})^{*} \cdot (\neg \mathit{InsidePort(Brest)}) \cdot \\
\ &\ (\neg \mathit{InsidePort(Brest)}) \cdot (\mathit{InsidePort(Brest)})
\end{aligned}
\end{equation}
The intention is to detect the entrance of a vessel in the port of Brest.
The predicate $\mathit{InsidePort(Brest)}$ evaluates to \textsf{\footnotesize TRUE}\ whenever a vessel has a distance of less than 5 km from the port of Brest (see Figure \ref{fig:port-traffic}).
In fact,
the predicate is generic and takes as arguments the longitude and latitude of any point,
but we show here a simplified version,
using the port of Brest,
for reasons of readability.
The pattern defines the entrance to the port as a sequence of at least 3 consecutive events,
only the last of which satisfies the $\mathit{InsidePort(Brest)}$ predicate.
In order to detect an entrance,
we must first ensure that the previous event(s) indicated that the vessel was outside the port.
For this reason,
we require that,
before the last event,
there must have occurred at least 2 events where the vessel was outside the port.
We require 2 or more such events to have occurred (instead of just one),
in order to avoid detecting ``noisy'' entrances.
\input{figs_exp-maritime-classification-roc}
In addition to the $\mathit{InsidePort(Brest)}$ predicate,
we included 5 extra ones providing information about the distance of a vessel from a port when it is outside the port.
Each of these predicates evaluates to \textsf{\footnotesize TRUE}\ when a vessel lies within a specified range of distances from the port.
The first returns \textsf{\footnotesize TRUE}\ when a vessel has a distance between 5 and 6 km from the port,
the second when the distance is between 6 and 7 km, and the other three extend similarly, in 1 km increments, up to 10 km.
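These predicates can be sketched as follows (Python; the haversine distance and the approximate port coordinates are our own stand-ins for the system's actual distance computation):
\begin{verbatim}
from math import radians, sin, cos, asin, sqrt

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in km between two (lon, lat) points."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

BREST = (-4.49, 48.39)   # approximate (lon, lat) of the port of Brest

def inside_port(e):      # the InsidePort(Brest) predicate
    return haversine_km(e["lon"], e["lat"], *BREST) < 5.0

def ring(low, high):     # TRUE when low <= distance < high (in km)
    return lambda e: low <= haversine_km(e["lon"], e["lat"], *BREST) < high

extra_predicates = [ring(low, low + 1) for low in range(5, 10)]
\end{verbatim}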
We investigated the sensitivity of our models to the presence of various extra predicates in the recognition pattern.
For all experimental results that follow,
we always present average values over 4 folds of cross-validation.
We start by analyzing the trajectories of a single vessel and then move to multiple, selected vessels.
There are two issues that we tried to address by separating our experiments into single-vessel and multiple-vessel ones.
First, we wanted to have enough data for training.
For this reason,
we only retained vessels for which we can detect a significant number of matches for Pattern \eqref{exp:port}.
Second, our system can work in two modes:
a) it can build a separate model for each monitored object and use this collection of models for personalized forecasting;
b) it can build a global model out of all the monitored objects.
We thus wanted to examine whether building a global model from multiple vessels could produce results equally good as those obtained for a single vessel with sufficient training data.
We first used Pattern \eqref{exp:port} to perform recognition on the whole dataset in order to find the number of matches detected for each vessel.
The vessel with the most matches was then isolated and we retained only the events emitted from this vessel.
In total, we detected 368 matches for this vessel and the number of SDEs corresponding to it is $\approx$ 30,000.
Figure \ref{fig:port-traffic} shows the isolated trajectories for this vessel,
seemingly following a standard route between various ports around the Brest area.
\input{figs_exp-maritime-classification-performance}
Figure \ref{fig:exp-maritime-sde} shows results for SDE forecasting.
The best average log-loss is achieved with a full-order Markov model, with $m=3$, and is $\approx 0.51$.
For the best hyper-parameter values out of those that we tested for the variable-order model,
with $m=10$,
we can achieve an average log-loss that is $\approx 0.57$.
Contrary to the case of credit card data,
increasing the order of the variable-order model does not allow us to achieve a better log-loss score than the best one achieved with a full-order model.
However, as we will show,
this does not imply that the same is true for CE forecasting.
Using the vessel of Figure \ref{fig:port-traffic},
we obtained the results shown in Figures \ref{fig:exp-maritime-classification-roc} and \ref{fig:exp-maritime-classification-performance}.
Since the original $\mathit{DSFA}$\ is smaller in this case
(one start and one final state plus two intermediate states),
we have fewer distance ranges
(e.g., there are no states in the range $[0.4,0.6]$).
Thus, we use only two distance ranges: $[0,0.5]$ and $[0.5,1]$.
We observe the importance of being able to increase the order of our models for distances smaller than $50\%$.
For distances greater than $50\%$,
the area under the curve is $\approx$ 0.5 for all models.
This implies that they cannot effectively differentiate between positives and negatives.
Their forecasts are either all positive,
where we have $\mathit{Recall}=100\%$ and $\mathit{Specificity}=0\%$,
or all negative,
where we have $\mathit{Recall}=0\%$ and $\mathit{Specificity}=100\%$
(see Figure \ref{fig:exp-maritime-classification-roc2}).
Notice that the full-order Markov models can now only go up to $m=2$,
since the existence of multiple extra predicates makes it prohibitive to increase the order any further.
Achieving higher accuracy with higher-order models comes at a computational cost,
as shown in Figure \ref{fig:exp-maritime-classification-performance}.
The results are similar to those in the credit card experiments.
The training time for variable-order models tends to increase as we increase the order,
but is always less than 8 seconds.
The effect on throughput is again significant for the tree-based variable-order models.
Throughput figures are also lower here compared to the credit card fraud experiments,
since the predicates that we need to evaluate for every new input event
(like $\mathit{InsidePort(Brest)}$) involve more complex calculations
(the $\mathit{amountDiff}>0$ predicate is a simple comparison).
As a next step,
we wanted to investigate the effect of the optimization technique mentioned at the end of Section \ref{sec:no-mc} on the accuracy and performance of our system.
The optimization prunes future paths whose probability is below a given cutoff threshold.
We re-run the experiments described above for distances between $0\%$ and $50\%$ for various values of the cutoff threshold,
starting from $0.0001$ up to $0.2$.
Figure \ref{fig:exp-maritime-classification-cutoff} shows the relevant results.
We observe that the accuracy is affected only for high values of the cutoff threshold, above $0.1$
(Figure \ref{fig:exp-maritime-classification-cutoff-auc}).
We can also see that throughput remains essentially unaffected (Figure \ref{fig:exp-maritime-classification-cutoff-throughput}).
This result is expected,
since the cutoff threshold is only used in the estimation of the waiting-time distributions.
Throughput reflects the online performance of our system,
after the waiting-time distributions have been estimated,
and is thus not affected by the choice of the cutoff threshold.
However,
the training time is indeed significantly affected
(Figure \ref{fig:exp-maritime-classification-cutoff-training}).
As expected,
the result of increasing the value of the cutoff threshold is a reduction of the training time,
as fewer paths are retained.
Beyond a certain point though,
further increases of the cutoff threshold affect the accuracy of the system.
Therefore, the cutoff threshold should be below $0.01$ so as not to compromise the accuracy of our forecasts.
\input{figs_exp-maritime-classification-cutoff}
We additionally investigated the sensitivity of our approach to the extra predicates that are used.
Figure \ref{fig:exp-maritime-classification-aucroc-distance3} shows results when the extra 5 predicates referring to the distance of a vessel from the port are modified so that each ``ring'' around the port has a width of 3 km,
instead of 1 km.
With these extra features,
increasing the order does indeed make an important difference,
but only when the order becomes high (5 and beyond),
which is possible only by using the tree-based variable-order models.
Moreover, the best score achieved is still lower than the best score achieved with ``rings'' of 1 km (Figure \ref{fig:exp-maritime-classification-aucroc}).
``Rings'' of 1 km are thus more appropriate as predictive features.
We also wanted to investigate whether other information about the vessel's movement could affect forecasting.
In particular,
we kept the 1 km ``rings'' and we also added a predicate to check whether the heading of a vessel points towards the port.
More precisely, we used the vessel's speed and heading to project its location 1 hour ahead in the future and then checked whether this projected segment and the circle around the port intersect.
The intuition for adding this feature is that the knowledge of whether a vessel is heading towards the port has predictive value.
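A sketch of this projection test (Python; our own reconstruction under a planar approximation, reusing the hypothetical \texttt{BREST} coordinates of the earlier sketch and assuming speed in knots and heading in degrees clockwise from north):
\begin{verbatim}
from math import radians, sin, cos

def heads_to_port(e, port=BREST, radius_km=5.0, horizon_h=1.0):
    """TRUE iff the segment from the vessel's position to its dead-reckoned
    position horizon_h hours ahead intersects the circle around the port."""
    lat0 = radians(port[1])
    to_xy = lambda lon, lat: ((lon - port[0]) * 111.32 * cos(lat0),
                              (lat - port[1]) * 110.57)
    x1, y1 = to_xy(e["lon"], e["lat"])
    d = e["speed"] * 1.852 * horizon_h          # knots -> km travelled
    h = radians(e["heading"])
    x2, y2 = x1 + d * sin(h), y1 + d * cos(h)   # dead-reckoned endpoint
    # Closest point of the segment (x1,y1)-(x2,y2) to the port (origin):
    dx, dy = x2 - x1, y2 - y1
    t = 0.0 if dx == dy == 0 else max(0.0, min(1.0,
        -(x1 * dx + y1 * dy) / (dx * dx + dy * dy)))
    px, py = x1 + t * dx, y1 + t * dy
    return px * px + py * py < radius_km ** 2
\end{verbatim}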
As shown in Figure \ref{fig:exp-maritime-classification-aucroc-distanceheading},
this additional information led to higher scores even with low orders for the full-order models
(compare to Figure \ref{fig:exp-maritime-classification-aucroc}).
The heading feature is indeed important.
On the other hand, the high-order models that did not use this feature seemed to be able to compensate for the missing information about the vessel's heading by going into higher orders.
A plateau is thus reached,
which cannot be ``broken'' with the heading information.
Notice that here we can only go up to $m=1$ for full-order models.
The inclusion of the heading predicate leads to an increase of the number of states beyond $1200$.
\input{figs_exp-maritime-classification-auc-features}
Finally, we also tested our method when more than one vessel need to be monitored.
Instead of isolating the single vessel with the most matches,
we isolated all vessels which had more than 100 matches.
There are in total 9 such vessels in the dataset.
The resulting dataset has $\approx$ 222,000 events.
Out of the 9 retained vessels,
we constructed a global probabilistic model and produced forecasts.
An alternative option would be to build a single model for each vessel,
but in this scenario we wanted to test the robustness of our approach when a global model is built from multiple entities.
Figure \ref{fig:exp-maritime-classification-aucroc-multi} presents the corresponding results.
Interestingly, the scores of the global model remain very close to the scores of the experiments for the single vessel with the most matches (Figure \ref{fig:exp-maritime-classification-aucroc}).
This is an indication of the ability of the global model to capture the peculiarities of individual vessels.
\subsection{Models Tested}
\label{sec:test-models}
In the experiments that we present,
we evaluated the variable-order Markov model that we have presented in this paper in its two versions:
the memory efficient one that bypasses the construction of a Markov chain and makes direct use of the $\mathit{PST}$\ learned from a stream (Section \ref{sec:no-mc}) and the computationally efficient one that constructs a $\mathit{PSA}$\ (Section \ref{sec:embed}).
We compared these against four other models inspired by the relevant literature.
The first, described in \cite{DBLP:conf/debs/AlevizosAP17,DBLP:conf/lpar/AlevizosAP18},
is the most similar in its general outline to our proposed method.
It is a previous version of our system presented in this paper and is also based on automata and Markov chains.
The main difference is that it attempts to construct full-order Markov models of order $m$
and is thus typically restricted to low values for $m$.
The second model is presented in \cite{DBLP:conf/debs/MuthusamyLJ10},
where automata and Markov chains are used once again.
However, the automata are directly mapped to Markov chains and no attempt is made to ensure that the Markov chain is of a certain order.
Thus, in the worst case,
this model essentially makes the assumption that SDEs are i.i.d. and $m=0$.
As a third alternative,
we evaluated a model that is based on Hidden Markov Models (HMM),
similar to the work presented in \cite{DBLP:conf/colcom/PandeyNC11}.
That work uses the Esper event processing engine \cite{esper} and attempts to model a business process as a HMM.
For our purposes,
we use a HMM to describe the behavior of an automaton,
constructed from a given symbolic regular expression.
The observation variable of the HMM corresponds to the states of the automaton.
Thus, the set of possible values of the observation variable is the set of automaton states.
An observation sequence of length $l$ for the HMM consists of the sequence of $l$ states visited by the automaton after consuming $l$ SDEs.
The SDEs (symbols) are used as the values of the hidden variable, i.e., the last $l$ symbols consumed are the last $l$ values of the hidden variable.
Therefore, this HMM always has $l$ hidden states,
whose values are taken from the SDEs,
connected to $l$ observations,
whose values are taken from the automaton states.
We can train such a HMM with the Baum-Welch algorithm,
using the automaton to generate a training observation sequence from the original training stream.
We can then use this learned HMM to produce forecasts on a test dataset.
We produce forecasts in an online manner as follows:
as the stream is consumed,
we use a buffer to store the last $l$ states visited by the pattern automaton.
After every new event,
we ``unroll'' the HMM using the contents of the buffer as the observation sequence
and the transition and emission matrices learned during the training phase.
We can then use the forward algorithm to estimate the probability of all possible future observation sequences (up to some length),
which, in our case, correspond to future states visited by the automaton.
Knowing the probability of every future sequence of states allows us to estimate the waiting-time distribution for the current state of the automaton and thus build a forecast,
as already described.
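The core scoring step is the standard forward algorithm; a minimal sketch for one candidate observation sequence (Python/NumPy; $A$, $B$ and $\pi$ denote the learned transition, emission and initial matrices):
\begin{verbatim}
import numpy as np

def forward_prob(obs, A, B, pi):
    """P(observation sequence | HMM) via the forward algorithm.
    A: (n_hidden, n_hidden) transitions, B: (n_hidden, n_obs) emissions,
    pi: (n_hidden,) initial distribution, obs: observation indexes."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())
\end{verbatim}
Summing such probabilities over the future sequences whose last observed automaton state is final (and whose earlier ones are not) yields the points of the waiting-time distribution.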
Note that,
contrary to the previous approaches,
the estimation of the waiting-time distribution via a HMM must be performed online.
We cannot pre-compute the waiting-time distributions and store the forecasts in a look-up table,
due to the possibly large number of entries.
For example,
assume that $l=5$ and the size of the ``alphabet'' (SDE symbols) of our automaton is $10$.
For each state of the automaton,
we would have to pre-compute $10^{5}$ entries.
In other words,
as with Markov chains,
we still have a problem of combinatorial explosion.
We try to ``avoid'' this problem by estimating the waiting-time distributions online.
Our last model is inspired by the work presented in \cite{DBLP:journals/is/AalstSS11}.
This method comes from the process mining community and has not been previously applied to CEF.
However,
due to its simplicity,
we use it here as a baseline method.
We again use a training dataset to learn the model.
In the training phase,
every time the pattern automaton reaches a certain state $q$,
we simply count how long (how many transitions) we have to wait until it reaches a final state.
After the training dataset has been consumed,
we end up with a set of such ``waiting times'' for every state.
The forecast to be produced by each state is then estimated simply by calculating the average ``waiting time''.
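A minimal sketch of this baseline (Python; we assume that each training run is the sequence of states visited by the automaton, ending in a final state):
\begin{verbatim}
from collections import defaultdict

def train_baseline(runs):
    """For every state visited in a run, record the number of transitions
    until the run's final state; forecast with the per-state average."""
    waits = defaultdict(list)
    for run in runs:
        for idx, q in enumerate(run[:-1]):
            waits[q].append(len(run) - 1 - idx)
    return {q: sum(ws) / len(ws) for q, ws in waits.items()}
\end{verbatim}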
As far as the Markov models are concerned,
we try to increase their order to the highest possible value,
in order to determine if and how high-order values offer an advantage.
We have empirically discovered that our system can efficiently handle automata and Markov chains that have up to about $1200$ states.
Beyond this point,
it becomes almost prohibitive (with our hardware) to create and handle transition matrices with more than $1200^{2}$ elements.
We have thus set this number as an upper bound and increased the order of a model until this number is reached.
This restriction is applied both to full-order models and variable-order models that use a $\mathit{PSA}$\ and an embedding,
since in both of these cases we need to construct a Markov chain.
For the variable-order models that make direct use of a $\mathit{PST}$,
no Markov chain is constructed.
We thus increase their order until their performance scores stabilize,
or until the order reaches a value so high that it makes little sense to continue testing.
\subsection{Hardware and Software Settings}
\label{sec:settings}
All experiments were run on a 64-bit Debian 10 (buster) machine with Intel Core i7-8700 CPU @ 3.20GHz × 12 processors and 16 GB of memory.
Our framework was implemented in Scala 2.12.10.
We used Java 1.8,
with the default values for the heap size.
For the HMM models,
we relied on the Smile machine learning library \cite{smile}.
All other models were developed by us.
No attempt at parallelization was made.
\section{Introduction}
The avalanche of streaming data in the last decade has sparked an interest in technologies processing high-velocity data streams.
Complex Event Recognition (CER) is one such technology that has enjoyed increased popularity
\cite{DBLP:journals/csur/CugolaM12,DBLP:journals/vldb/GiatrakosAADG20}.
The main goal of a CER system is to detect interesting activity patterns occurring within a stream of events,
coming from sensors or other devices.
Complex Events must be detected with minimal latency.
As a result,
a significant body of work has been devoted to computational optimization issues.
Less attention has been paid to forecasting event patterns \cite{DBLP:journals/vldb/GiatrakosAADG20},
despite the fact that forecasting has attracted considerable attention in various related research areas,
such as time-series forecasting \cite{montgomery2015introduction},
sequence prediction \cite{DBLP:journals/jair/BegleiterEY04,buhlmann1999variable,DBLP:journals/ml/RonST96,DBLP:journals/tcom/ClearyW84,DBLP:journals/tit/WillemsST95},
temporal mining \cite{DBLP:conf/icdm/VilaltaM02,DBLP:conf/kdd/LaxmanTW08,DBLP:journals/eswa/ZhouCG15,DBLP:journals/vldb/ChoWYZC11}
and process mining \cite{DBLP:journals/tsc/Marquez-Chamorro18}.
The need for Complex Event Forecasting (CEF) has been acknowledged though,
as evidenced by several conceptual proposals \cite{DBLP:conf/bci/FulopBTDVF12,DBLP:conf/edoc/ChristKK16,DBLP:journals/tasm/ArtikisBBWEFGHLPSS14,DBLP:conf/debs/EngelE11}.
Consider, for example, the domain of credit card fraud management \cite{DBLP:conf/debs/ArtikisKCBMSFP17},
where the detection of suspicious activity patterns of credit cards must occur with minimal latency that is in the order of a few milliseconds.
The decision margin is extremely narrow.
Being able to forecast that a certain sequence of transactions is very likely to be a fraudulent pattern provides wider margins both for decision and for action.
For example, a processing system might decide to devote more resources and higher priority to those suspicious patterns to ensure that the latency requirement will be satisfied.
The field of moving object monitoring (for ships at sea, aircraft in the air or vehicles on the ground) provides yet another example where CEF could be a crucial functionality \cite{DBLP:conf/edbt/VourosVSDPGTPAA18}.
Collision avoidance is obviously of paramount importance for this domain.
A monitoring system with the ability to infer that two (or more) moving objects are on a collision course and forecast that they will indeed collide if no action is taken would provide significant help to the relevant authorities.
CEF could play an important role even in in-silico biology,
where computationally demanding simulations of biological systems are often executed to determine the properties of these systems and their response to treatments \cite{ozik2019learning}.
These simulations are typically run on supercomputers and are evaluated afterwards to determine which of them seem promising enough from a therapeutic point of view.
A system that could monitor these simulations as they run, forecast which of them will turn out to be non-pertinent and decide to terminate them at an early stage, could thus save valuable computational resources and significantly speed-up the execution of such in-silico experiments.
Note that these are domains with different characteristics.
For example, some of them have a strong geospatial component (monitoring of moving entities),
whereas in others this component is minimal (in-silico biology).
Domain-specific solutions (e.g., trajectory prediction for moving objects) cannot thus be universally applied.
We need a more general framework.
Towards this direction,
we present a formal framework for CEF,
along with an implementation and extensive experimental results on real and synthetic data from diverse application domains.
Our framework allows a user to define a pattern for a complex event,
e.g., a pattern for fraudulent credit card transactions or for two moving objects moving in close proximity and towards each other.
It then constructs a probabilistic model for such a pattern in order to forecast,
on the basis of an event stream,
if and when a complex event is expected to occur.
We use the formalism of symbolic automata \cite{DBLP:conf/cav/DAntoniV17} to encode a pattern and that of prediction suffix trees \cite{DBLP:journals/ml/RonST96,DBLP:conf/nips/RonST93} to learn a probabilistic model for the pattern.
We formally show how symbolic automata can be combined with prediction suffix trees to perform CEF.
Prediction suffix trees fall under the class of the so-called variable-order Markov models,
i.e., Markov models whose order (how deep into the past they can look for dependencies) can be increased beyond what is computationally possible with full-order models.
They can do this by avoiding a full enumeration of every possible dependency and focusing only on ``meaningful'' dependencies.
Our empirical analysis shows the advantage of being able to use high-order models over related non-Markov methods for CEF and methods based on low-order Markov models (or Hidden Markov Models).
The price we have to pay for this increased accuracy is a decrease in throughput,
which still however remains high (typically tens of thousands of events per second).
The training time is also increased,
but still remains within the same order of magnitude.
This fact allows us to be confident that training could also be performed online.
Our contributions may be summarized as follows:
\begin{itemize}
\item We present a CEF framework that is both formal and easy to use.
It is often the case that CER frameworks lack clear semantics,
which in turn leads to confusion about how patterns should be written and which operators are allowed \cite{DBLP:journals/vldb/GiatrakosAADG20}.
This problem is exacerbated in CEF,
where a formalism for defining the patterns to be forecast may be lacking completely.
Our framework is formal, compositional and as easy to use as writing regular expressions.
The only basic requirement is that the user declaratively define a pattern and provide a training dataset.
\item Our framework can uncover deep probabilistic dependencies in a stream by using a variable-order Markov model.
By being able to look deeper into the past, we achieve higher accuracy scores compared to other state-of-the-art solutions for CEF, as shown in our extensive empirical analysis.
\item Our framework can perform various types of forecasting and thus subsumes previous methods that restrict themselves to one type of forecasting.
It can perform both simple event forecasting (i.e., predicting what the next input event might be) and Complex Event forecasting (events defined through a pattern).
As we explain later, moving from simple event to Complex Event forecasting is not trivial.
Using simple event forecasting to project in the future the most probable sequence of input events and then attempt to detect Complex Events on this future sequence yields sub-optimal results.
A system that can perform simple event forecasting cannot thus be assumed to perform CEF as well.
\item We also discuss the issue of how the forecasts of a CEF system may be evaluated with respect to their quality.
Previous methods have used metrics borrowed from time-series forecasting (e.g., the root mean square error) or typical machine learning tasks (e.g., precision).
We propose a more comprehensive set of metrics that takes into account the idiosyncrasies of CEF.
Besides accuracy itself, the usefulness of forecasts is also judged by their ``earliness''.
We discuss how the notion of earliness may be quantified.
\end{itemize}
\subsection{Running Example}
\label{sec:example}
We now present the general approach of CER/CEF systems,
along with an example that we will use throughout the rest of the paper to make our presentation more accessible.
The input to a CER system consists of two main components:
a stream of events, also called simple derived events (SDEs);
and a set of patterns that define relations among the SDEs.
Instances of pattern satisfaction are called Complex Events (CEs).
The output of the system is another stream, composed of the detected CEs.
Typically, CEs must be detected with very low latency,
which, in certain cases, may even be in the order of a few milliseconds \cite{luckham_power_2001,DBLP:books/daglib/0024062,hedtstuck_complex_2017}.
\begin{table*}[!ht]
\centering
\caption{Example event stream from the maritime domain.}
\begin{tabular}{cccccccc}
\toprule
Navigational status & fishing & fishing & fishing & under way & under way & under way & ... \\
\midrule
vessel id & 78986 & 78986 & 78986 & 78986 & 78986 & 78986 & ... \\
\midrule
speed & 2 & 1 & 3 & 22 & 19 & 27 & ... \\
\midrule
timestamp & 1 & 2 & 3 & 4 & 5 & 6 & ... \\
\bottomrule
\end{tabular}
\label{table:stream}
\end{table*}
As an example, consider the scenario of a system receiving an input stream consisting of events emitted from vessels sailing at sea.
These events may contain information regarding the status of a vessel,
e.g., its location, speed and heading.
This is indeed a real-world scenario and the emitted messages are called AIS (Automatic Identification System) messages.
Besides information about a vessel's kinematic behavior,
each such message may contain additional information about the vessel's status (e.g., whether it is fishing),
along with a timestamp and a unique vessel identifier.
Table \ref{table:stream} shows a possible stream of AIS messages,
including \emph{speed} and \emph{timestamp} information.
A maritime expert may be interested to detect several activity patterns for the monitored vessels,
such as sudden changes in the kinematic behavior of a vessel (e.g., sudden accelerations),
sailing in protected (e.g., NATURA) areas, etc.
The typical workflow consists of the analyst first writing these patterns in some (usually) declarative language,
which are then used by a computational model applied on the stream of SDEs to detect CEs.
\subsection{Structure of the Paper}
The rest of the paper is structured as follows.
We start by presenting in Section \ref{sec:related} the relevant literature on CEF.
Since work on CEF has been limited thus far,
we also briefly mention forecasting ideas from some other related fields that can provide inspiration to CEF.
Subsequently, in Section \ref{sec:symbolic} we discuss the formalism of symbolic automata and how it can be adapted to perform recognition on real-time event streams.
Section \ref{sec:prob} shows how we can create a probabilistic model for a symbolic automaton by using prediction suffix trees,
while Section \ref{sec:complexity} presents a detailed complexity analysis.
We then discuss how we can quantify the quality of forecasts in Section \ref{sec:metrics}.
We finally demonstrate the efficacy of our framework in Section \ref{sec:experiments},
by showing experimental results on two application domains.
We conclude with Section \ref{sec:outro},
discussing some possible directions for future work.
The paper assumes a basic familiarity with automata theory, logic and Markov chains.
In Table \ref{table:notation} we have gathered the notation that we use throughout the paper,
along with a brief description of every symbol.
\input{notation_table}
\section{Measuring the Quality of Forecasts}
\label{sec:metrics}
As described in Section \ref{sec:forecasts},
there are various types of forecasts that could be produced from the waiting-time distributions of an automaton.
In this section,
we discuss in more detail these different forecasting tasks and how the quality of the produced forecasts can be quantified in each case.
We distinguish three different types of forecasting tasks:
a) SDE forecasting, where our goal is to forecast the next SDE in the input stream;
b) regression CE forecasting, where our goal is to forecast when a CE will occur (either \emph{REGRESSION-ARGMAX} or \emph{REGRESSION-INTERVAL});
c) classification CE forecasting, where our goal is to forecast whether or not a CE will occur within a short future window (\emph{CLASSIFICATION-NEXTW}).
\subsection{SDE Forecasting}
Although our main focus is on forecasting occurrences of CEs,
we can start with a simpler task,
targeting SDEs.
Not only does this allow us to establish a baseline with some more easily interpretable results,
but it also enables us to show the differences between SDE and CE forecasting.
As we will show,
CE forecasting is more challenging than SDE forecasting,
in terms of the feasibility of looking deep into the past.
Another reason for running experiments for SDE forecasting is to find the best values for the hyperparameters used for learning a prediction suffix tree.
Since it is much faster to run this type of experiments,
compared to CE forecasting experiments,
we can use a hypergrid of hyperparameter values and for each hypergrid point we run SDE forecasting.
In this type of experiments,
our goal is to investigate how well our proposed framework can forecast the next SDE to appear in the stream,
given that we know the last $m$ SDEs.
This task is the equivalent of \emph{next symbol prediction} in the terminology of the compression community \cite{DBLP:journals/jair/BegleiterEY04}.
As explained in Section \ref{sec:vmm},
the metric that we use to estimate the quality of a predictor $\hat{P}$ is the average log-loss with respect to a test sequence $S_{1..k}=t_{1},t_{2},\cdots,t_{k}$,
given by
$l(\hat{P},S_{1..k}) = - \frac{1}{k} \sum_{i=1}^{k} \log \hat{P}(t_{i} \mid t_{1} \cdots t_{i-1})$.
The lower the average log-loss, the better the predictor is assumed to be.
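As a simple illustration,
assuming the learned model exposes the conditional probabilities $\hat{P}(t_{i} \mid t_{1} \cdots t_{i-1})$ as a sequence of numbers,
the metric reduces to:
\begin{verbatim}
// Average log-loss over a test sequence; probs(i) holds the model's
// estimate of P(t_i | t_1 ... t_{i-1}).
def avgLogLoss(probs: Seq[Double]): Double =
  -probs.map(math.log).sum / probs.length
\end{verbatim}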
We remind the reader that the ``symbols'' we try to predict in these experiments are essentially the minterms of our $\mathit{DSFA}$.
In other words,
we do not try to predict exactly what the next SDE will be,
but which minterm the next SDE will satisfy.
For example,
if we have the minterms of Table \ref{table:minterms_simplified},
then our task is to predict whether the next SDE will satisfy $\psi_{A}$ (i.e., the speed of the vessel will be below 5 knots),
$\psi_{B}$ (i.e., the speed will be above 20 knots) or
$\neg \psi_{A} \wedge \neg \psi_{B}$ (i.e., the speed will be between 5 and 20 knots).
\subsection{Regression CE Forecasting}
\label{sec:regression}
After SDE forecasting,
we move on to regression CE forecasting.
Our goal in this task is to forecast when a CE will occur.
We call them \emph{regression} experiments due to the fact that the forecasts are ``continuous'' values,
in the form of forecast intervals/points.
This is in contrast to the next forecasting task,
where each forecast is a binary value indicating whether a CE will occur or not,
which is why we call it a \emph{classification} task.
One important difference between SDE and CE forecasting (both regression and classification) is that,
in SDE forecasting, a forecast is emitted after every new SDE is consumed.
On the other hand, in CE forecasting,
emitting a forecast after every new SDE is feasible in principle,
but not very useful and can also produce results that are misleading.
By their very nature,
CEs are relatively rare within a stream of input SDEs.
As a result,
if we emit a forecast after every new SDE,
some of these forecasts (possibly even the vast majority) will have a significant temporal distance from the CE to which they refer.
As an example,
consider a pattern from the maritime domain which detects the entrance of a vessel in the port of Tangiers.
We can also try to use this pattern for forecasting,
with the goal of predicting when the vessel will arrive at the port of Tangiers.
However, the majority of the vessel's messages may lie in areas so distant from the port (e.g., in the Pacific ocean) that it would be practically useless to emit forecasts when the vessel is in these areas.
Moreover, if we do emit forecasts from these distant areas,
the scores and metrics that we use to evaluate the quality of the forecasts will be dominated by these, necessarily low-quality, distant forecasts.
For these reasons,
before running a regression experiment,
we must first go through a preprocessing step.
We must find the timepoints within a stream where it is ``meaningful'' to emit forecasts.
We call these timepoints the \emph{checkpoints} of the stream.
To do so,
we must first perform recognition on the stream to find the timepoints where CEs are detected.
We then set a required distance $d$ that we want to separate a forecast from its CE,
in the sense that we require each forecast to be emitted $d$ events before the CE.
After finding all the CEs in a stream and setting a value for $d$,
we set as checkpoints all the SDEs which occur $d$ events before the CEs.
This typically means that we end up with as many checkpoints as CEs for a given value of $d$
(unless the distance between two consecutive CEs is smaller than $d$,
in which case no checkpoint for the second CE exists).
We can then show results for various values of $d$,
starting from the smallest possible value of $1$
(i.e., emitting forecasts from the immediately previous SDE before the CE).
At each checkpoint,
a forecast interval is produced,
as per Section \ref{sec:forecasts}.
Some of the metrics we can use to assess the quality of the forecasts assume that forecasts are in the form of points.
Such point metrics are the following:
the Root Mean Squared Error (RMSE) and
the Mean Absolute Error (MAE)
(the latter is less sensitive than RMSE to outliers).
If we denote by $C$ the set of all checkpoints,
by $y_{c}$ the actual distance (in number of events) between a checkpoint $c$ and its CE (which is always $d$)
and by $\hat{y}_{c}$ our forecast,
then the definitions for RMSE and MAE are as follows:
\begin{equation}
\mathit{RMSE} = \sqrt{ \frac{1}{\lvert C \rvert} \sum_{c \in C}{ {\lvert \hat{y}_{c} - y_{c} \rvert}}^{2} }
\end{equation}
\begin{equation}
\label{eq:mae}
\mathit{MAE} = \frac{1}{\lvert C \rvert} \sum_{c \in C}{\lvert \hat{y}_{c} - y_{c} \rvert}
\end{equation}
When these metrics are used,
we need to impose an extra constraint on the forecasts,
requiring that the maximum spread of each forecast is $0$,
i.e., we produce point (instead of interval) forecasts.
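For illustration,
the two point metrics can be computed as below,
where \texttt{yHat} and \texttt{y} hold the forecast and actual distances over all checkpoints (names are ours, not part of the system's API):
\begin{verbatim}
def rmse(yHat: Seq[Double], y: Seq[Double]): Double =
  math.sqrt(yHat.zip(y).map { case (a, b) => (a - b) * (a - b) }.sum /
    y.length)
def mae(yHat: Seq[Double], y: Seq[Double]): Double =
  yHat.zip(y).map { case (a, b) => math.abs(a - b) }.sum / y.length
\end{verbatim}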
Besides these point metrics,
we can also use an interval metric,
the so-called \emph{negatively oriented interval score} \cite{gneiting2007strictly}.
If $\hat{y}_{c}=(l_{c},u_{c})$ is an interval forecast produced with confidence $b=1-a$ and $y_{c}$ the actual observation (distance),
then the negatively oriented interval score (NOIS) for this forecast is given by:
\begin{equation}
\label{eq:nois}
\mathit{NOIS}_{c} = (u_{c}-l_{c}) + \frac{2}{a}(l_{c}-y_{c})I_{y_{c}<l_{c}} + \frac{2}{a}(y_{c}-u_{c})I_{y_{c}>u_{c}}
\end{equation}
We can then estimate the average negatively oriented interval score (ANOIS) as follows:
\begin{equation}
\mathit{ANOIS} = \frac{1}{\lvert C \rvert} \sum_{c \in C}{\mathit{NOIS}_{c}}
\end{equation}
The best possible value for ANOIS is $0$ and is achieved only when all forecasts are point forecasts
(so that $u_{c}-l_{c}$ is always $0$) and all of them are also correct (so that the last two terms in Eq. \ref{eq:nois} are always $0$).
In every other case,
a forecast is penalized if its interval is long
(so that focused intervals are promoted),
regardless of whether it is correct.
If it is indeed correct,
no other penalty is added.
If it is not correct,
then an extra penalty is added,
which is essentially the deviation of the forecast from the actual observation,
weighted by a factor that grows with the confidence of the forecast.
For example, if the confidence is $100\%$,
then $b=1.0$, $a=0.0$ and the extra penalty,
according to Eq. \ref{eq:nois},
grows to infinity.
Incorrect forecasts produced with high confidence are thus severely penalized.
Note that if we emit only point forecasts ($\hat{y}_{c}=l_{c}=u_{c}$),
then NOIS and ANOIS could be written as follows:
\begin{equation}
\label{eq:nois0}
\mathit{NOIS}_{c} = \frac{2}{a} \lvert \hat{y}_{c}-y_{c} \rvert
\end{equation}
\begin{equation}
\mathit{ANOIS} = \frac{1}{\lvert C \rvert} \sum_{c \in C}{ \frac{2}{a}\lvert \hat{y}_{c}-y_{c} \rvert }
\end{equation}
ANOIS then becomes a weighted version of MAE.
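The score of a single interval forecast may be sketched as follows
(a direct transcription of Eq. \eqref{eq:nois}; argument names are illustrative):
\begin{verbatim}
// NOIS of one forecast (l, u) produced with confidence b = 1 - a,
// against the actual observation y.
def nois(l: Double, u: Double, y: Double, a: Double): Double =
  (u - l) +
    (if (y < l) (2 / a) * (l - y) else 0.0) +
    (if (y > u) (2 / a) * (y - u) else 0.0)
\end{verbatim}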
\subsection{Classification CE Forecasting}
The last forecasting task is the most challenging.
In contrast to regression experiments,
where we emit forecasts only at a specified distance before each CE,
in classification experiments we emit forecasts regardless of whether a CE occurs or not.
The goal is to predict the occurrence of CEs within a short future window or provide a ``negative'' forecast if our model predicts that no CE is likely to occur within this window.
In practice,
this task could be the first one performed at every new event arrival.
If a positive forecast is emitted,
then the regression task could also be performed in order to pinpoint more accurately when the CE will occur.
One issue with classification experiments is that it is not so straightforward to establish checkpoints in the stream.
In regression experiments,
CEs provide a natural point of reference.
In classification experiments,
we do not have such reference points,
since we are also required to predict the absence of CEs.
As a result,
instead of using the stream to find checkpoints,
we can use the structure of the automaton itself.
We may not know the actual distance to a CE,
but the automaton can provide us with an ``expected'' or ``possible'' distance,
as follows.
For an automaton that is in a final state,
it can be said that the distance to a CE is $0$.
More conveniently,
we can say that the ``process'' which it describes has been completed or, equivalently, that there remains $0\%$ of the process until completion.
For an automaton that is in a non-final state but separated from a final state by $1$ transition,
it can be said that the ``expected'' distance is $1$.
We use the term ``expected'' because we are not interested in whether the automaton will actually take the transition to a final state.
We want to establish checkpoints both for the presence and the absence of CEs.
When the automaton fails to take the transition to a final state (and we thus have an absence of a CE),
this ``expected'' distance is not an actual distance,
but a ``possible'' one that failed to materialize.
We also note that there might exist other walks from this non-final state to a final one whose length could be greater than $1$
(in fact, there might exist walks with ``infinite length'', in case of loops).
In order to estimate the ``expected'' distance of a non-final state,
we only use the shortest walk to a final state.
After estimating the expected distances of all states,
we can then express them as percentages by dividing them by the greatest among them.
A $0\%$ distance will thus refer to final states,
whereas a $100\%$ distance to the state(s) that are the most distant to a final state,
i.e.,
the automaton has to take the most transitions to reach a final state.
These are the start states.
We can then determine our checkpoints by specifying the states in which the automaton is permitted to emit forecasts,
according to their ``expected'' distance.
For example,
we may establish checkpoints by allowing only states with a distance between $40\%$ and $60\%$ to emit forecasts.
The intuition here is that,
by increasing the allowed distance,
we make the forecasting task more difficult.
Another option for measuring the distance of a state to a possible future CE would be to use the waiting-time distribution of the state and set its expectation as the distance.
However, this assumes that we have gone through the training phase first and learned the distributions.
For this reason, we avoid using this way to estimate distances.
The evaluation task itself consists of the following steps.
At the arrival of every new input event,
we first check whether the distance of the new automaton state falls within the range of allowed distances,
as explained above.
If the new state is allowed to emit a forecast,
we use its waiting-time distribution to produce the forecast.
Two parameters are taken into account:
the length of the future window $w$ within which we want to know whether a CE will occur and the confidence threshold $\theta_{fc}$.
If the probability of the first $w$ points of the distribution exceeds the threshold $\theta_{fc}$,
we emit a positive forecast, essentially affirming that a CE will occur within the next $w$ events;
otherwise, we emit a negative forecast, essentially rejecting the hypothesis that a CE will occur.
We thus obtain a binary classification task.
As a consequence,
we can make use of standard classification measures,
like precision and recall.
Each forecast is evaluated:
a) as a \emph{true positive} (TP) if the forecast is positive and the CE does indeed occur within the next $w$ events from the forecast;
b) as a \emph{false positive} (FP) if the forecast is positive and the CE does not occur;
c) as a \emph{true negative} (TN) if the forecast is negative and the CE does not occur and
d) as a \emph{false negative} (FN) if the forecast is negative and the CE does occur.
Precision is then defined as $\mathit{Precision}=\frac{TP}{TP + FP}$ and recall (also called sensitivity or true positive rate) as $\mathit{Recall}=\frac{TP}{TP + FN}$.
As already mentioned,
CEs are relatively rare in a stream.
It is thus important for a forecasting engine to be as specific as possible in identifying the true negatives.
For this reason,
besides precision and recall,
we also use \emph{specificity} (also called true negative rate),
defined as $\mathit{Specificity}=\frac{TN}{TN + FP}$.
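These three measures are straightforward to compute from the four counters of the evaluation,
e.g.:
\begin{verbatim}
// Classification metrics from the evaluation counters.
case class Counts(tp: Int, fp: Int, tn: Int, fn: Int) {
  def precision: Double   = tp.toDouble / (tp + fp)
  def recall: Double      = tp.toDouble / (tp + fn) // sensitivity, TPR
  def specificity: Double = tn.toDouble / (tn + fp) // TNR
}
\end{verbatim}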
A classification experiment is performed as follows.
For various values of the ``expected'' distance and the confidence threshold $\theta_{fc}$,
we estimate precision, recall and specificity on a test dataset.
For a given distance,
$\theta_{fc}$ acts as a cut-off parameter.
For each value of $\theta_{fc}$,
we estimate the recall (sensitivity) and specificity scores and we plot these scores as a ROC curve.
For each distance,
we then estimate the area under curve (AUC) for the ROC curves.
The higher the AUC value,
the better the model is assumed to be.
The setting described above is the most suitable for evaluation purposes,
but might not be the most appropriate when such a system is actually deployed.
For deployment purposes,
another option would be to simply set a best, fixed confidence threshold
(e.g., by selecting, after evaluation, the threshold with the highest F1-score or Matthews correlation coefficient)
and emit only positive forecasts,
regardless of their distance.
Forecasts with low probabilities (i.e., negative forecasts) will thus be ignored/suppressed.
This is justified by the fact that a user would typically be more interested in positive forecasts.
For evaluation purposes,
this would not be an appropriate experimental setting,
but it would suffice for deployment purposes,
where we would then be focused on fine-tuning the confidence threshold.
In this paper,
we focus on evaluating our system and thus do not discuss further any deployment solution.
\section{Summary \& Future Work}
\label{sec:outro}
We have presented a framework for Complex Event Forecasting (CEF),
based on a variable-order Markov model.
It allows us to delve deeper into the past and capture long-term dependencies,
not feasible with full-order models.
Our comprehensive evaluation on two application domains has illustrated the advantages of being able to use such high-order models.
Namely, the use of higher-order modeling allows us to achieve higher accuracy than what is possible with full-order models or other state-of-the-art solutions.
We have described two alternative ways in which variable-order models may be used,
depending on the imposed requirements.
One option is to use a highly efficient but less accurate model,
when online performance is a top priority.
We also provide an option that achieves high accuracy scores,
but with a performance cost.
Another important feature of our proposed framework is that it requires minimal intervention by the user.
A given Complex Event pattern is declaratively defined and subsequently automatically translated to an automaton and then to a Markov model,
without requiring domain knowledge that should guide the modeling process.
Still, the user needs to set up the model and there seems to be room for further automation.
In particular,
the user needs to set the maximum order allowed by the probabilistic model.
Additionally, we have started investigating ways to handle concept drift by continuously training and updating the probabilistic model of a pattern.
With respect to the expressive power of our framework,
there is one functionality that we do not currently support and whose incorporation would require us to move to a more advanced automaton model.
This is the functionality of applying $n$-ary (with $n>1$) predicates to two or more sub-expressions of an expression,
instead of only unary predicates,
as is allowed in symbolic automata.
Note that $n$ refers to the number of ``terminal symbols'' / events that a predicate may reference.
Each transition predicate of a symbolic automaton may refer only to one event,
the last event consumed.
It cannot apply a predicate to the last event and other earlier events.
As an example,
consider the pattern $R := x \cdot y\ \textsf{\footnotesize WHERE}\ y.\mathit{speed} > x.\mathit{speed}$,
detecting an increase in the speed of a vessel,
where we now need to use the variables $x$ and $y$.
Such patterns cannot be captured with $\mathit{SFA}$,
since they would require a memory structure to store some of the past events of a stream,
as is possible with extended symbolic automata \cite{DBLP:journals/fmsd/DAntoniV15}.
We intend to present in future work an automaton model which can support patterns with memory,
suitable for CER.
Some results towards this direction may be found in \cite{DBLP:journals/corr/abs-1804-09999}.
Finally, our framework could also be used for a task that is not directly related to Complex Event Forecasting.
Since predictive modeling and compression are two sides of the same coin,
our framework could be used for pattern-driven lossless stream compression,
in order to minimize the communication cost,
which is a severe bottleneck for geo-distributed CER \cite{DBLP:journals/vldb/GiatrakosAADG20}.
The probabilistic model that we construct with our approach could be pushed down to the event sources,
such as the vessels in the maritime domain,
in order to compress each individual stream and then these compressed streams could be transmitted to a centralized CER engine to perform recognition.
\section{Building a Probabilistic Model}
\label{sec:prob}
The main idea behind our forecasting method is the following:
Given a pattern $R$ in the form of a $\mathit{SRE}$, we first construct a $\mathit{sSFA}$\ as described in the previous section.
For event recognition, this would already be enough,
but in order to perform event forecasting, we translate the $\mathit{sSFA}$\ to an equivalent deterministic $\mathit{SFA}$\ ($\mathit{DSFA}$).
This $\mathit{DSFA}$\ can then be used to learn a probabilistic model,
typically a Markov chain,
that encodes dependencies among the events in an input stream.
Note that a non-deterministic automaton cannot be directly converted to a Markov chain,
since from each state we might be able to move to multiple other target states with a given event.
Therefore, we first determinize the automaton.
The probabilistic model is learned from a portion of the input stream which acts as a training dataset and it is then used to derive forecasts about the expected occurrence of the CE encoded by the automaton.
The issue that we address in this paper is how to build a model which retains long-term dependencies that are useful for forecasting.
Figure \ref{fig:flow} depicts all the required steps in order to produce forecasts for a given pattern.
We have already described steps 1 and 2.
In Section \ref{sec:dsfa} we describe step 3.
In Sections \ref{sec:vmm} - \ref{sec:embed} we present step 4, our proposed method for constructing a probabilistic model for a pattern,
based on prediction suffix trees.
Steps 5 and 6 are described in Section \ref{sec:forecasts}.
After learning a model,
we first need to estimate the so-called \emph{waiting-time distributions} for each state of our automaton.
Roughly speaking, these distributions let us know the probability of reaching a final state from any other automaton state in $k$ events from now.
These distributions are then used to estimate forecasts,
which generally have the form of an interval within which a CE has a high probability of occurring.
Finally, Section \ref{sec:no-mc} discusses an optimization that allows us to bypass the explicit construction of the Markov chain
and Section \ref{sec:complexity} presents a full complexity analysis.
\input{figs_flow}
\input{prob_prelim}
\input{prob_vmm}
\input{prob_pst}
\input{prob_forecasts}
\input{prob_empirical}
\subsection{Estimation of Empirical Probabilities}
\label{sec:prob_empirical}
We have thus far described how an embedding of a $\mathit{PSA}$\ $M_{S}$ in a $\mathit{DSFA}$\ $M_{R}$ can be constructed
and how we can estimate the forecasts for this embedding.
We have also presented how this can be done directly via a $\mathit{PST}$,
without going through a $\mathit{PSA}$.
However,
before learning the $\mathit{PST}$,
as described in Section \ref{sec:pst},
we first need to estimate the empirical probabilities for the various symbols.
We describe here this extra initial step.
In \cite{DBLP:journals/ml/RonST96},
it is assumed that,
before learning a $\mathit{PST}$,
the empirical probabilities of symbols given various contexts are available.
The suggestion in \cite{DBLP:journals/ml/RonST96} is that these empirical probabilities can be calculated either by repeatedly scanning the training stream or by using a more time-efficient algorithm that keeps pointers to all occurrences of a given context in the stream.
We opt for a variant of the latter choice.
First, note that the empirical probabilities of the strings ($s$) and the expected next symbols ($\sigma$) observed in a stream are given by the following formulas \cite{DBLP:journals/ml/RonST96}:
\begin{equation}
\label{eq:emprob1}
\hat{P}(s) = \frac{1}{k-m}\sum_{j=m}^{k-1}\chi_{j}(s)
\end{equation}
\begin{equation}
\label{eq:emprob2}
\hat{P}(\sigma \mid s) = \frac{\sum_{j=m}^{k-1}\chi_{j+1}(s \cdot \sigma)}{\sum_{j=m}^{k-1}\chi_{j}(s)}
\end{equation}
where $k$ is the length of the training stream $S_{1..k}$,
$m$ is the maximum length of the strings ($s$) that will be considered
and
\begin{equation}
\label{eq:counters}
\chi_{j}(s) =
\begin{cases}
1 & \quad \text{if } S_{(j - \lvert s \rvert + 1) \cdots j} = s \\
0 & \quad \text{otherwise} \\
\end{cases}
\end{equation}
In other words,
we need to count the number of occurrences of the various candidate strings $s$ in $S_{1..k}$.
The numerators and denominators in Eq. \eqref{eq:emprob1} and \eqref{eq:emprob2} are essentially counters for the various strings.
In order to keep track of these counters,
we can use a tree data structure which allows to scan the training stream only once.
We call this structure a \emph{Counter Suffix Tree} ($\mathit{CST}$).
Each node in a $\mathit{CST}$\ is a tuple $(\sigma,c)$ where $\sigma$ is a symbol from the alphabet (or $\epsilon$ only for the root node) and $c$ a counter.
For each level $d$ of the tree
(we use $d$ for the level, since $k$ already denotes the length of the training stream),
it always holds that
$\mathit{SumOfCountersAtD} \leq \mathit{ParentCounter}$
and
$\mathit{SumOfCountersAtD} \geq \mathit{ParentCounter} - (d-1)$.
By following a path from the root to a node,
we get a string $s=\sigma_{0} \cdot \sigma_{1} \cdots \sigma_{n}$,
where $\sigma_{0} = \epsilon$ corresponds to the root node.
The property maintained while a $\mathit{CST}$\ is built from a stream $S_{1..k}$ is that the counter of the node $\sigma_{n}$ reached via $s$
gives us the number of occurrences of the string $\sigma_{n} \cdot \sigma_{n-1} \cdots \sigma_{1}$ (the reversed version of $s$) in $S_{1..k}$.
As an example,
see Figure \ref{fig:cst},
which depicts the $\mathit{CST}$\ of maximum depth 2 for the stream $S=aaabaabaaa$.
If we want to retrieve the number of occurrences of the string $b \cdot a$ in $S$,
we follow the left child $(a,7)$ of the root and then the right child of this.
We thus reach $(b,2)$ and indeed $b \cdot a$ occurs twice in $S$.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{cst.pdf}
\caption{Example of a Counter Suffix Tree with $m=2$ and $S=aaabaabaaa$.}
\label{fig:cst}
\end{figure}
A $\mathit{CST}$\ can be incrementally constructed by maintaining a buffer of size $m$ that always holds the last $m$ symbols of $S$.
The contents of the buffer are fed into the $\mathit{CST}$\ after the arrival of a new symbol.
The update algorithm follows a path through the $\mathit{CST}$\ according to the whole string provided by the buffer.
For every node that already exists,
its counter is incremented by 1.
If a node does not exist,
it is created and its counter is set to 1.
At any point,
having been updated with the training stream,
the $\mathit{CST}$\ can be used to retrieve the necessary counters
and estimate the empirical probabilities of Equations \eqref{eq:emprob1} and \eqref{eq:emprob2} that are subsequently used in the $\mathit{PST}$\ construction.
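A minimal sketch of the $\mathit{CST}$\ and its update routine is given below;
we use single characters as symbols for illustration
(in our setting the symbols are minterms):
\begin{verbatim}
// Counter Suffix Tree: strings are stored reversed, so the path
// a -> b from the root counts occurrences of "b . a" in the stream.
import scala.collection.mutable
class CSTNode(val symbol: Char) {
  var count: Int = 0
  val children: mutable.Map[Char, CSTNode] = mutable.Map.empty
}
// buffer holds the last m symbols of the stream, oldest first
def update(root: CSTNode, buffer: Seq[Char]): Unit = {
  var node = root
  for (sigma <- buffer.reverse) { // newest symbol first
    node = node.children.getOrElseUpdate(sigma, new CSTNode(sigma))
    node.count += 1 // counts suffixes of every length 1..m
  }
}
\end{verbatim}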
\subsection{Emitting Forecasts}
\label{sec:forecasts}
Our ultimate goal is to use the statistical properties of a stream,
as encoded in a $\mathit{PST}$\ or a $\mathit{PSA}$,
in order to infer when a Complex Event (CE) with a given Symbolic Regular Expression ($\mathit{SRE}$) $R$ will be detected.
Equivalently, we are interested in inferring when the $\mathit{SFA}$\ of $R$ will reach one of its final states.
To achieve this goal, we work as follows.
We start with a $\mathit{SRE}$\ $R$ and a training stream $S$.
We first use $R$ to construct an equivalent $\mathit{sSFA}$\ and then determinize this $\mathit{sSFA}$\ into a $\mathit{DSFA}$\ $M_{R}$.
$M_{R}$ can be used to perform recognition on any given stream,
but cannot be used for probabilistic inference.
Next, we use the minterms of $M_{R}$
(acting as ``symbols'', see Lemma \ref{lemma:isomorphism})
and the training stream $S$ to learn a $\mathit{PST}$\ $T$ and (if required) a $\mathit{PSA}$\ $M_{S}$ which encode the statistical properties of $S$.
These probabilistic models do not yet have any knowledge of the structure of $R$ (they only know its minterms),
are not acceptors (the $\mathit{PSA}$\ does not have any final states) and cannot be used for recognition.
We therefore need to combine the learned probabilistic model ($T$ or $M_{S}$) with the automaton used for recognition ($M_{R}$).
At this point,
there is a trade-off between memory and computation efficiency.
If the online performance of our system is critical and we are not willing to make significant sacrifices in terms of computation efficiency,
then we should combine the recognition automaton $M_{R}$ with the $\mathit{PSA}$\ $M_{S}$.
Using the $\mathit{PSA}$, we can have a very efficient solution with minimal overhead on throughput.
The downside of this approach is its memory footprint, which may limit the order of the model.
Although we may increase the order beyond what is possible with full-order models,
we may still not achieve the desired values,
due to the significant memory requirements.
Hence,
if high accuracy and thus high order values are necessary,
then we should combine the recognition automaton $M_{R}$ directly with the $\mathit{PST}$\ $T$,
bypassing the construction of the $\mathit{PSA}$.
In practice, prediction suffix trees often turn out to be more compact and memory efficient than probabilistic suffix automata,
but trees need to be constantly traversed from root to leaves, whereas an automaton simply needs to find the triggered transition and immediately jump to the next state.
In the remainder of this Section,
we present these two alternatives.
\subsubsection{Using a Probabilistic Suffix Automaton ($\mathit{PSA}$)}
\label{sec:embed}
We can combine a recognition automaton $M_{R}$ and a $\mathit{PSA}$\ $M_{S}$ into a single automaton $M$ that has the power of both and can be used for recognizing and for forecasting occurrences of CEs of the expression $R$.
We call $M$ the \emph{embedding} of $M_{S}$ in $M_{R}$.
The reason for merging the two automata is that we need to know at every point in time the state of $M_{R}$ in order to estimate which future paths might actually lead to a final state (and thus a complex event).
If only SDE forecasting was required,
this merging would not be necessary.
We could use $M_{R}$ for recognition and then $M_{S}$ for SDE forecasting.
In our case,
we need information about the structure of the pattern automaton and its current state to determine if and when it might reach a final state.
The formal definition of an embedding is given below,
where,
in order to simplify notation,
we use Lemma \ref{lemma:isomorphism} and represent $\mathit{DSFA}$\ as classical deterministic automata.
\begin{definition}[Embedding of a $\mathit{PSA}$\ in a $\mathit{DSFA}$]
Let $M_{R}$ be a $\mathit{DSFA}$\ (actually its mapping to a classical automaton) and $M_{S}$ a $\mathit{PSA}$\ with the same alphabet.
An embedding of $M_{S}$ in $M_{R}$ is a tuple $M=(Q,Q^{s},Q^{f},\Sigma,\Delta,\Gamma,\pi)$, where:
\begin{itemize}
\item $Q$ is a finite set of states;
\item $Q^{s} \subseteq Q$ is the set of initial states;
\item $Q^{f} \subseteq Q$ is the set of final states;
\item $\Sigma$ is a finite alphabet;
\item $\Delta: Q \times \Sigma \rightarrow Q$ is the transition function;
\item $\Gamma: Q \times \Sigma \rightarrow [0,1]$ is the next symbol probability function;
\item $\pi: Q \rightarrow [0,1]$ is the initial probability distribution.
\end{itemize}
The language $\mathcal{L}(M)$ of $M$ is defined, as usual, as the set of strings that lead $M$ to a final state.
The following conditions must hold,
in order for $M$ to be an embedding of $M_{S}$ in $M_{R}$:
\begin{itemize}
\item $\Sigma = M_{R}.\Sigma = M_{S}.\Sigma$;
\item $\mathcal{L}(M) = \mathcal{L}(M_{R})$;
\item For every string/stream $S_{1..k}$, $P_{M}(S_{1..k}) = P_{M_{S}}(S_{1..k})$,
where $P_{M}$ denotes the probability of a string calculated by $M$ (through $\Gamma$) and $P_{M_{S}}$ the probability calculated by $M_{S}$ (through $\gamma$).
\end{itemize}
\end{definition}
The first condition ensures that all automata have the same alphabet.
The second ensures that $M$ is equivalent to $M_{R}$ by having the same language.
The third ensures that $M$ is also equivalent to $M_{S}$,
since both automata return the same probability for every string.
It can be shown that such an equivalent embedding can indeed be constructed for every $\mathit{DSFA}$\ and $\mathit{PSA}$.
\begin{theorem}
\label{theorem:embedding}
For every $\mathit{DSFA}$\ $M_{R}$ and $\mathit{PSA}$\ $M_{S}$ constructed using the minterms of $M_{R}$,
there exists an embedding of $M_{S}$ in $M_{R}$.
\end{theorem}
\begin{proof}
Construct an embedding in the following straightforward manner:
First let its states be the Cartesian product $M_{R}.Q \times M_{S}.Q$,
i.e., for every $q \in Q$, $q = (r,s)$ and $r \in M_{R}.Q$, $s \in M_{S}.Q$.
Set the initial states of $M$ as follows:
for every $q = (r,s)$ such that $r = M_{R}.q^{s}$, set $q \in Q^{s}$.
Similarly, for the final states,
for every $q = (r,s)$ such that $r \in M_{R}.Q^{f}$, set $q \in Q^{f}$.
Then let the transitions of $M$ be defined as follows:
A transition $\delta((r,s), \sigma) = (r^{'},s^{'})$ is added to $M$
if there exists a transition $\delta_{R}(r, \sigma) = r^{'}$ in $M_{R}$
and a transition $\tau(s, \sigma) = s^{'}$ in $M_{S}$.
Let also $\Gamma$ be defined as follows: $\Gamma((r,s), \sigma) = \gamma(s,\sigma)$.
Finally, for the initial state distribution, we set:
\[ \pi((r,s)) =
\begin{cases}
M_{S}.\pi(s) & \quad \text{if } r = M_{R}.q^{s} \\
0 & \quad \text{otherwise} \\
\end{cases}
\]
Proving that $\mathcal{L}(M) = \mathcal{L}(M_{R})$ is done with induction on the length of strings.
The inductive hypothesis is that, for strings $S_{1..k} = t_{1} \cdots t_{k}$ of length $k$,
if $q=(r,s)$ is the state reached by $M$ and $q_{R}$ the state reached by $M_{R}$,
then $r=q_{R}$.
Note that both $M_{R}$ and $M$ are deterministic and complete automata and thus only one state is reached for every string (only one run exists).
If a new element $t_{k+1}$ is read, $M$ will move to a new state $q^{'}=(r^{'},s^{'})$ and $M_{R}$ to $q_{R}^{'}$.
From the construction of the transitions of $M$, we see that $r^{'} = q_{R}^{'}$.
Thus, the induction hypothesis holds for $S_{1..k+1}$ as well.
It also holds for $k=0$,
since,
for every $q = (r,s) \in Q^{s}$, $r = M_{R}.q^{s}$.
Therefore, it holds for all $k$.
As a result, if $M$ reaches a final state $(r,s)$,
$r$ is reached by $M_{R}$.
Since $r \in M_{R}.Q^{f}$, $M_{R}$ also reaches a final state.
For proving probabilistic equivalence,
first note that the probability of a string given by a predictor $P$ is
$P(S_{1..k})=\prod_{i=1}^{k}P(t_{i} \mid t_{1} \dots t_{i-1})$.
Assume now that a $\mathit{PSA}$\ $M_{S}$ reads a string $S_{1..k}$ and follows a run
$\varrho = [l,q_{l}] \overset{t_{l}}{\rightarrow} [l+1,q_{l+1}] \overset{t_{l+1}}{\rightarrow} \cdots \overset{t_{k}}{\rightarrow} [k+1,q_{k+1}]$.
We define a run in a manner similar to that for runs of a $\mathit{DSFA}$.
The difference is that a run of a $\mathit{PSA}$\ may begin at an index $l>1$,
since it may have to wait for $l$ symbols before it can find a state $q_{l}$ whose label is equal to $S_{1..l}$.
We also treat the $\mathit{PSA}$\ as a reader (not a generator) of strings for which we need to calculate their probability.
The probability of $S_{1..k}$ is then given by
$P_{M_{S}}(S_{1..k}) = M_{S}.\pi(q_{l}) \cdot \prod_{i=l}^{k} M_{S}.\gamma(q_{i},t_{i})$.
Similarly, for the embedding $M$,
assume it follows the run
$\varrho^{'} = [l,q^{'}_{l}] \overset{t_{l}}{\rightarrow} [l+1,q^{'}_{l+1}] \overset{t_{l+1}}{\rightarrow} \cdots \overset{t_{k}}{\rightarrow} [k+1,q^{'}_{k+1}]$.
Then,
$P_{M}(S_{1..k}) = M.\pi(q^{'}_{l}) \cdot \prod_{i=l}^{k} M.\Gamma(q^{'}_{i},t_{i})$.
Now note that $M$ has the same initial state distribution as $M_{S}$,
i.e.,
the number of the initial states of $M$ is equal to the number of states of $M_{S}$ and they have the same distribution.
With an inductive proof, as above, we can prove that whenever $M$ reaches a state $q=(r,s)$ and $M_{S}$ reaches $q_{S}$,
$s = q_{S}$.
As a result, for the initial states of $M$ and $M_{S}$,
$M.\pi(q^{'}_{l}) = M_{S}.\pi(q_{l})$.
From the construction of the embedding,
we also know that $M_{S}.\gamma(s,\sigma) = M.\Gamma(q,\sigma)$ for every $\sigma \in \Sigma$.
Therefore,
$M_{S}.\gamma(q_{i},t_{i}) = M.\Gamma(q^{'}_{i},t_{i})$ for every $i$ and
$P_{M}(S_{1..k}) = P_{M_{S}}(S_{1..k})$.
\end{proof}
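To fix ideas,
the straightforward (non-incremental) construction of the proof may be sketched as follows,
with integer state identifiers and names of our own choosing:
\begin{verbatim}
type Sym = Int
case class DSFA(states: Set[Int], start: Int, finals: Set[Int],
                delta: Map[(Int, Sym), Int])
case class PSA(states: Set[Int], tau: Map[(Int, Sym), Int],
               gamma: Map[(Int, Sym), Double])
// States of the embedding are pairs (r, s); transitions advance both
// components on the same symbol; Gamma copies the PSA probabilities.
def embed(mr: DSFA, ms: PSA, sigma: Set[Sym]) = {
  val q = for (r <- mr.states; s <- ms.states) yield (r, s)
  val delta = (for ((r, s) <- q; a <- sigma
                    if mr.delta.contains((r, a)) && ms.tau.contains((s, a)))
    yield ((r, s), a) -> ((mr.delta((r, a)), ms.tau((s, a))))).toMap
  val gamma = (for ((r, s) <- q; a <- sigma if ms.gamma.contains((s, a)))
    yield ((r, s), a) -> ms.gamma((s, a))).toMap
  val starts = q.filter { case (r, _) => r == mr.start }
  val finals = q.filter { case (r, _) => mr.finals.contains(r) }
  (q, starts, finals, delta, gamma)
}
\end{verbatim}
As the example below shows,
this direct product may contain inaccessible states,
which the incremental construction of Algorithm \ref{algorithm:merging} avoids.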
\input{figs_dsfapsa}
As an example,
consider the $\mathit{DSFA}$\ $M_{R}$ of Figure \ref{fig:dsfaab}
for the expression $R = a \cdot b$ with $\Sigma = \{a,b\}$.
We present it as a classical automaton,
but we remind readers that symbols in $\Sigma$ correspond to minterms.
Figure \ref{fig:pstab1} depicts a possible $\mathit{PST}$\ $T$ that could be learned from a training stream composed of symbols from $\Sigma$.
Figure \ref{fig:psaab1} shows the $\mathit{PSA}$\ $M_{S}$ constructed from $T$.
Figure \ref{fig:merged} shows the embedding $M$ of $M_{S}$ in $M_{R}$ that would be created,
following the construction procedure of the proof of Theorem \ref{theorem:embedding}.
Notice, however, that this embedding has some redundant states and transitions;
namely the states indicated with red that have no incoming transitions and are thus inaccessible.
The reason is that some states of $M_{R}$ in Figure \ref{fig:dsfaab} have a ``memory'' imposed on them by the structure of the automaton itself.
For example, state 2 of $M_{R}$ has only a single incoming transition with $b$ as its symbol.
Therefore, there is no point in merging this state with all the states of $M_{S}$,
but only with state $b$.
If we follow a straightforward construction,
as described above,
the result will be the automaton depicted in Figure \ref{fig:merged},
including the redundant red states.
To avoid the inclusion of such states,
we can merge $M_{R}$ and $M_{S}$ in an incremental fashion
(see Algorithm \ref{algorithm:merging}).
The resulting automaton would then consist only of the black states and transitions of Figure \ref{fig:merged}.
In a streaming setting,
we would thus have to wait at the beginning of the stream for some input events to arrive before deciding the start state with which to begin.
For example,
if $b$ were the first input event,
we would then begin with the bottom left state $(0,b)$.
On the other hand,
if $a$ were the first input event,
we would have to wait for yet another event.
If another $a$ arrived as the second event,
we would begin with the top left state $(0,aa)$.
In general,
if $m$ is our maximum order,
we would need to wait for at most $m$ input events before deciding.
\input{algorithms_merging}
After constructing an embedding $M$ from a $\mathit{DSFA}$\ $M_{R}$ and a $\mathit{PSA}$\ $M_{S}$,
we can use $M$ to perform forecasting on a test stream.
Since $M$ is equivalent to $M_{R}$,
it can also consume a stream and detect the same instances of the expression $R$ as $M_{R}$ would detect.
However,
our goal is to use $M$ to forecast the detection of an instance of $R$.
More precisely,
we want to estimate the number of transitions from any state in which $M$ might be until it reaches for the first time one of its final states.
Towards this goal,
we can use the theory of Markov chains.
Let $N$ denote the set of non-final states of $M$ and $F$ the set of its final states.
We can organize the transition matrix of $M$ in the following way
(we use bold symbols to refer to matrices and vectors and normal ones to refer to scalars or sets):
\begin{equation}
\label{eq:matrix}
\boldsymbol{\Pi} =
\begin{pmatrix}
\boldsymbol{N} & \boldsymbol{N_{F}} \\
\boldsymbol{F_{N}} & \boldsymbol{F}
\end{pmatrix}
\end{equation}
where $\boldsymbol{N}$ is the sub-matrix containing the probabilities of transitions from non-final to non-final states,
$\boldsymbol{F}$ the probabilities from final to final states,
$\boldsymbol{F_{N}}$ the probabilities from final to non-final states
and $\boldsymbol{N_{F}}$ the probabilities from non-final to final states.
By partitioning the states of a Markov chain into two sets,
such as $N$ and $F$,
the following theorem can be used to estimate the probability of reaching a state in $F$ starting from a state in $N$:
\begin{theorem}[\cite{fu2003distribution}]
\label{theorem:non-finals}
Let $\boldsymbol{\Pi}$ be the transition probability matrix of a homogeneous Markov chain $Y_{t}$ in the form of Equation \eqref{eq:matrix}
and $\boldsymbol{\xi}_{init}$ its initial state distribution.
The probability for the time index $n$ when the system first enters the set of states $F$,
starting from a state in $N$,
can be obtained from
\begin{equation}
\label{eq:wtd:non-finals}
P(Y_{n} \in F, Y_{n-1} \in N, \cdots, Y_{2} \in N, Y_{1} \in N \mid \boldsymbol{\xi_{init}}) =
\boldsymbol{\xi_{N}}^{T}\boldsymbol{N}^{n-1}(\boldsymbol{I}-\boldsymbol{N})\boldsymbol{1}
\end{equation}
where $\boldsymbol{\xi_{N}}$ is the vector consisting of the elements of $\boldsymbol{\xi_{init}}$ corresponding to the states of $N$.
\end{theorem}
In our case,
the sets $N$ and $F$ have the meaning of being the non-final and final states of $M$.
The above theorem then gives us the desired probability of reaching a final state.
However, notice that this theorem assumes that we start in a non-final state ($Y_{1} \notin F$).
A similar result can be given if we assume that we start in a final state.
\begin{theorem}
\label{theorem:finals}
Let $\boldsymbol{\Pi}$ be the transition probability matrix of a homogeneous Markov chain $Y_{t}$ in the form of Equation \eqref{eq:matrix}
and $\boldsymbol{\xi}_{init}$ its initial state distribution.
The probability for the time index $n$ when the system first enters the set of states $F$,
starting from a state in $F$,
can be obtained from
\begin{equation}
\label{eq:wtd:finals}
P(Y_{n} \in F, Y_{n-1} \in N, \cdots, Y_{2} \in N, Y_{1} \in F \mid \boldsymbol{\xi_{init}}) =
\begin{cases}
\boldsymbol{\xi_{F}}^{T} \boldsymbol{F} \boldsymbol{1} & \quad \text{if } n=2 \\
\boldsymbol{\xi_{F}}^{T} \boldsymbol{F_{N}} \boldsymbol{N}^{n-2}(\boldsymbol{I}-\boldsymbol{N})\boldsymbol{1} & \quad \text{otherwise} \\
\end{cases}
\end{equation}
where $\boldsymbol{\xi_{F}}$ is the vector consisting of the elements of $\boldsymbol{\xi_{init}}$ corresponding to the states of $F$.
\end{theorem}
\begin{proof}
The proof may be found in the Appendix, Section \ref{sec:proof:finals}.
\end{proof}
Note that the above formulas do not use $\boldsymbol{N_{F}}$,
as it is not needed when dealing with probability distributions.
As the sum of the probabilities is equal to $1$,
we can derive $\boldsymbol{N_{F}}$ from $\boldsymbol{N}$.
This is the role of the term $(\boldsymbol{I}-\boldsymbol{N})\boldsymbol{1}$ in the formulas,
which is equal to $\boldsymbol{N_{F}}$ when there is only a single final state and equal to the row sums of $\boldsymbol{N_{F}}$ when there are multiple final states,
i.e., each element of the resulting vector is the probability of reaching any of the final states from a given non-final state.
Using Theorems \ref{theorem:non-finals} and \ref{theorem:finals},
we can calculate the so-called waiting-time distributions for any state $q$ of the automaton,
i.e.,
the distribution of the index $n$,
given by the waiting-time variable
$W_{q}=\inf\{n: Y_{0},Y_{1},\ldots,Y_{n},\ Y_{0}=q,\ q \in Q \backslash F,\ Y_{n} \in F\}$.
Theorems \ref{theorem:non-finals} and \ref{theorem:finals} provide a way to calculate the probability of reaching a final state,
given an initial state distribution $\boldsymbol{\xi_{init}}$.
In our case,
as the automaton is moving through its various states,
$\boldsymbol{\xi_{init}}$ takes a special form.
At any point in time,
the automaton is (with certainty) in a specific state $q$.
In that state,
$\boldsymbol{\xi_{init}}$ is a vector of $0$,
except for the element corresponding to the current state of the automaton,
which is equal to $1$.
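Computationally,
since $\boldsymbol{\xi_{init}}$ is such a one-hot vector,
the distribution can be obtained with repeated vector-matrix products,
as in the following sketch (naive dense algebra, for illustration only):
\begin{verbatim}
// out(k) = xi^T N^k (I - N) 1, i.e., the probability of first
// reaching a final state after exactly k+1 transitions, for k < h.
def waitingTimes(N: Array[Array[Double]], xi: Array[Double],
                 h: Int): Array[Double] = {
  val n = N.length
  def stepRight(v: Array[Double]): Array[Double] = // v := v^T N
    Array.tabulate(n)(j => (0 until n).map(i => v(i) * N(i)(j)).sum)
  val out = new Array[Double](h)
  var v = xi
  for (k <- 0 until h) {
    val vN = stepRight(v)
    out(k) = v.sum - vN.sum // v^T (I - N) 1
    v = vN
  }
  out
}
\end{verbatim}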
Figure \ref{fig:wtdfas} shows an example of an automaton
(its exact nature is not important,
as long as it can also be described as a Markov chain),
along with the waiting-time distributions for its non-final states.
For this example,
if the automaton is in state 2,
then the probability of reaching the final state 4 for the first time in 2 transitions is $\approx 50\%$.
However, it is $0\%$ for 3 transitions,
as the automaton has no path of length 3 from state 2 to state 4.
\input{figs_wtdfas}
We can use the waiting-time distributions to produce various kinds of forecasts.
In the simplest case,
we can select the future point with the highest probability and return this point as a forecast.
We call this type of forecasting \emph{REGRESSION-ARGMAX}.
Alternatively,
we may want to know how likely it is that a CE will occur within the next $w$ input events.
For this,
we can sum the probabilities of the first $w$ points of a distribution
and if this sum exceeds a given threshold
we emit a ``positive'' forecast (meaning that a CE is indeed expected to occur);
otherwise a ``negative'' (no CE is expected) forecast is emitted.
We call this type of forecasting \emph{CLASSIFICATION-NEXTW}.
These kinds of forecasts are easy to compute.
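For instance,
both of these forecast types can be read directly off a waiting-time distribution,
as in the following sketch (the distribution values are made up for illustration):
\begin{verbatim}
# wtd[n-1] = P(first entry into a final state after exactly n transitions)
wtd = [0.10, 0.50, 0.00, 0.25, 0.10, 0.05]

# REGRESSION-ARGMAX: the future point with the highest probability.
argmax_forecast = max(range(1, len(wtd) + 1), key=lambda n: wtd[n - 1])

# CLASSIFICATION-NEXTW: will a CE occur within the next w input events?
def classification_nextw(wtd, w, threshold):
    return 'positive' if sum(wtd[:w]) >= threshold else 'negative'

print(argmax_forecast)                    # 2
print(classification_nextw(wtd, 2, 0.5))  # positive (0.10 + 0.50 >= 0.5)
\end{verbatim}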
There is another kind of useful forecast,
which is, however, more computationally demanding.
Given that we are in a state $q$,
we may want to forecast whether the automaton,
with confidence at least $\theta_{fc}$,
will have reached its final state(s) in $n$ transitions,
where $n$ belongs to a future interval $I=[\mathit{start},\mathit{end}]$.
The confidence threshold $\theta_{fc}$ is a parameter set by the user.
The forecasting objective is to select the shortest possible interval $I$ that satisfies $\theta_{fc}$.
Figure \ref{fig:wt1} shows the forecast interval produced for state 1 of the automaton of Figure \ref{fig:dfaabbb},
with $\theta_{fc} = 50\%$.
We call this third type of forecasting \emph{REGRESSION-INTERVAL}.
We have implemented all of the above types of forecasting.
\input{algorithms_interval}
A naive way to estimate the forecast interval from a waiting-time distribution whose domain is $[1,h]$
(we call $h$, the maximum index of the distribution, its \emph{horizon})
is to first enumerate all possible intervals $(\mathit{start},\mathit{end})$,
such that $1 \leq \mathit{start},\mathit{end} \leq h$ and $\mathit{start} \leq \mathit{end}$,
and then calculate each interval's probability by summing the probabilities of all of its points.
The complexity of such an exhaustive algorithm is $O(h^{3})$.
To prove this,
first note that the algorithm would have to check 1 interval of length $h$, 2 intervals of length $h-1$, etc., and $h$ intervals of length 1.
Assuming that the cost of estimating the probability of an interval is proportional to its length $l$
($l$ points need to be retrieved and $l-1$ additions be performed),
the total cost would thus be:
\begin{equation*}
\begin{aligned}
1 \cdot h + 2 \cdot (h-1) + 3 \cdot (h-2) + \cdots + h \cdot 1 = & \sum_{i=1}^{h}i(h - (i-1)) \\
= & \sum_{i=1}^{h}(ih - i^{2} + i) \\
= & h\sum_{i=1}^{h}i - \sum_{i=1}^{h}i^{2} + \sum_{i=1}^{h}i \\
= & h \frac{h(h+1)}{2} - \frac{h(h+1)(2h+1)}{6} + \frac{h(h+1)}{2} \\
= & \frac{h(h+1)}{6}\left(3(h+1) - (2h+1)\right) \\
= & \frac{1}{6}h(h+1)(h+2) \\
= & O(h^{3})
\end{aligned}
\end{equation*}
Note that this is just the cost of estimating the probabilities of the intervals,
ignoring the costs of actually creating them first and then searching for the best one,
after the step of probability estimation.
We can find the best forecast interval with a more efficient algorithm that has a complexity linear in $h$
(see Algorithm \ref{algorithm:interval}).
We keep two pointers $i,j$, both initially set to the first index of the distribution.
We then repeatedly move $i,j$ in the following manner:
We first move $j$ to the right by incrementing it by 1 until $P(i,j)$ exceeds $\theta_{fc}$,
where each $P(i,j)$ is estimated incrementally by repeatedly adding $P(j)$ to an accumulator.
We then move $i$ to the right by $1$ until $P(i,j)$ drops below $\theta_{fc}$,
where $P(i,j)$ is estimated by incremental subtractions.
If the new interval $(i,j)$ is smaller than the smallest interval exceeding $\theta_{fc}$ thus far,
we discard the old smallest interval and keep this new one.
This wave-like movement of $i,j$ stops when $j=h$.
This algorithm is more efficient (linear in $h$, see Proposition \ref{proposition:complexity6} in Section \ref{sec:complexity}) because it avoids examining intervals that cannot possibly exceed $\theta_{fc}$.
The proof for the algorithm's correctness is presented in the Appendix, Section \ref{sec:proof:interval}.
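The following is a compact re-implementation of this idea
(our own sketch, not the exact pseudocode of Algorithm \ref{algorithm:interval}),
written as a standard single-pass sliding window,
which is valid because all probabilities are non-negative:
\begin{verbatim}
def forecast_interval(wtd, theta):
    """Shortest interval (start, end), 1-based and inclusive, whose
    summed probability reaches theta; a single O(h) pass over the
    distribution, where wtd[n-1] = P(W = n). None if no interval
    qualifies."""
    best = None
    p = 0.0                         # running probability of window [i, j]
    i = 1
    for j in range(1, len(wtd) + 1):
        p += wtd[j - 1]             # incremental addition
        while p >= theta:           # shrink from the left while possible
            if best is None or j - i < best[1] - best[0]:
                best = (i, j)
            p -= wtd[i - 1]         # incremental subtraction
            i += 1
    return best

print(forecast_interval([0.10, 0.50, 0.00, 0.25, 0.10, 0.05], 0.5))
# (2, 2)
\end{verbatim}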
Note that the domain of a waiting-time distribution is not composed of timepoints and thus a forecast does not explicitly refer to time.
Each value of the index $n$ on the $x$ axis essentially refers to the number of transitions that the automaton needs to take before reaching a final state, or, equivalently, to the number of future input events to be consumed.
If we were required to output forecasts referring to time,
we would need to convert these basic event-related forecasts to time-related ones.
If input events arrive at regular time intervals,
then this conversion is a straightforward multiplication of the forecast by the time interval.
However, in the general case where the intervals between input events are not regular and fixed,
we would need to build another probabilistic model describing the time that elapses between events and use this model to convert event-related to time-related forecasts.
Building such a time model might not always be possible or might be prohibitively expensive.
In this paper we decided to focus on the number of steps for two reasons:
a) Sometimes it might not be desirable to give time-related forecasts.
Event-related forecasts might be more suitable,
as is the case, for example, in the domain of credit card fraud management,
where we need to know whether or not the next transaction(s) will be fraudulent.
We examine this use case in Section \ref{sec:cards}.
b) Time-related forecasts might be very difficult (or almost impossible) to produce if the underlying process exhibits a high degree of randomness.
For example,
this is the case in the maritime domain,
where the intervals between vessel position (AIS) messages are wildly random and depend on many (even human-related) factors,
e.g., the crew of a vessel simply forgetting to switch on the AIS equipment.
In such cases,
it might be preferable to perform some form of sampling or interpolation on the original stream of input events in order to derive another stream similar to the original one but with regular intervals.
This is the approach we follow in our experiments in the maritime domain (Section \ref{sec:maritime}).
For these reasons, we initially focused on event-related forecasts.
This, however, does not exclude the option of using event-related forecasts as a first step in order to subsequently produce time-related ones,
whenever this is possible.
For example,
a simple solution would be to try to model the time elapsed between events via a Poisson process.
We intend to pursue this line of work in the future.
\subsubsection{Using a Prediction Suffix Tree ($\mathit{PST}$)}
\label{sec:no-mc}
The reason for constructing an embedding of the $\mathit{PSA}$\ $M_{S}$ learned from the data into the automaton $M_{R}$ used for recognition,
as described in the previous section,
is that the embedding is based on a variable-order model which, on average, consists of far fewer states than a full-order model.
There is, however, one specific step in the process of creating an embedding that may act as a bottleneck and prevent us from increasing the order to desired values:
the step of converting a $\mathit{PST}$\ to a $\mathit{PSA}$.
The number of nodes of a $\mathit{PST}$\ is often orders of magnitude smaller than the number of states of the $\mathit{PSA}$\ constructed from that $\mathit{PST}$.
Motivated by this observation,
we devised a way to estimate the required waiting-time distributions without actually constructing the embedding.
Instead, we make direct use of the $\mathit{PST}$,
which is more memory efficient.
Thus, given a $\mathit{DSFA}$\ $M_{R}$ and its $\mathit{PST}$\ $T$,
we can estimate the probability for $M_{R}$ to reach for the first time one of its final states
in the following manner.
As the system processes events from the input stream,
besides feeding them to $M_{R}$,
it also stores them in a buffer that holds the $m$ most recent events,
where $m$ is equal to the maximum order of the $\mathit{PST}$\ $T$.
After updating the buffer with a new event,
the system traverses $T$ according to the contents of the buffer and arrives at a leaf $l$ of $T$.
The probability of any future sequence of events can be estimated with the use of the probability distribution at $l$.
In other words,
if $S_{1..k}=\cdots,t_{k-1},t_{k}$ is the stream seen thus far,
then the next symbol probability for $t_{k+1}$,
i.e., $P(t_{k+1} \mid t_{k-m+1},\cdots,t_{k})$,
can be directly retrieved from the distribution of the leaf $l$.
If we want to look further into the future,
e.g., into $t_{k+2}$,
we can repeat the same process as necessary.
Namely, if we fix $t_{k+1}$,
then the probability for $t_{k+2}$,
$P(t_{k+2} \mid t_{k-m+2},\cdots,t_{k+1})$,
can be retrieved from $T$,
by retrieving the leaf $l^{'}$ reached with $t_{k+1},\cdots,t_{k-m+2}$.
In this manner, we can estimate the probability of any future sequence of events.
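A minimal sketch of this lookup is shown below;
the data structure is our own simplification,
the leaf and inner-node distributions follow the order-2 $\mathit{PST}$\ of Figure \ref{fig:pstab1},
and the root's distribution is made up.
\begin{verbatim}
# Minimal PST sketch (ours). Each node stores a next-symbol distribution;
# children are indexed by the next-older symbol of the context.
class PSTNode:
    def __init__(self, dist, children=None):
        self.dist = dist                    # e.g. {'a': 0.75, 'b': 0.25}
        self.children = children or {}      # symbol -> PSTNode

def next_symbol_dist(root, buffer):
    """Walk from the most recent symbol backwards; return the
    distribution of the deepest node matching the buffer."""
    node = root
    for sym in reversed(buffer):            # buffer: the m latest events
        if sym not in node.children:
            break
        node = node.children[sym]
    return node.dist

# The order-2 tree of Figure pstab1 (root distribution made up):
T = PSTNode({'a': 0.6, 'b': 0.4}, {
    'a': PSTNode({'a': 0.7, 'b': 0.3}, {
        'a': PSTNode({'a': 0.75, 'b': 0.25}),   # context aa
        'b': PSTNode({'a': 0.1,  'b': 0.9})}),  # context ba
    'b': PSTNode({'a': 0.5, 'b': 0.5})})        # context b

print(next_symbol_dist(T, ['b', 'a']))   # leaf ba: {'a': 0.1, 'b': 0.9}
\end{verbatim}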
Consequently,
we can also estimate the probability of any future sequence of states of the $\mathit{DSFA}$\ $M_{R}$,
since we can simply feed these future event sequences to $M_{R}$ and let it perform ``forward'' recognition with these projected events.
In other words,
we can let $M_{R}$ ``generate'' a sequence of future states,
based on the sequence of projected events,
in order to determine when $M_{R}$ will reach a final state.
Finally, since we can estimate the probability for any future sequence of states of $M_{R}$,
we can use the definition of the waiting-time variable
(${W_{q}=\inf\{n: Y_{0},Y_{1},\ldots,Y_{n},\ Y_{0}=q,\ q \in Q \backslash F,\ Y_{n} \in F\}}$)
to calculate the waiting-time distributions.
Figure \ref{fig:nomc} shows an example of this process for the automaton $M_{R}$ of Figure \ref{fig:dsfaab}.
Figure \ref{fig:pstab} displays an example $\mathit{PST}$\ $T$ learned with the minterms/symbols of $M_{R}$.
\begin{figure}
\centering
\begin{subfigure}[t]{0.75\textwidth}
\includegraphics[width=0.99\textwidth]{pstab.pdf}
\caption{The $\mathit{PST}$\ $T$ for the automaton $M_{R}$ of Figure \ref{fig:dsfaab}.}
\label{fig:pstab}
\end{subfigure}\\
\begin{subfigure}[t]{0.7\textwidth}
\includegraphics[width=0.99\textwidth]{future.pdf}
\caption{Future paths followed by $M_{R}$ and $T$ starting from state $1$ of $M_{R}$ and node $aa$ of $T$. Purple nodes correspond to the only path of length $k=2$ that leads to a final state. Pink nodes are pruned. Nodes with double borders correspond to final states of $M_{R}$.}
\label{fig:future}
\end{subfigure}
\caption{Example of estimating a waiting-time distribution without a Markov chain.}
\label{fig:nomc}
\end{figure}
One remark should be made at this point in order to showcase how an attempt to convert $T$ to a $\mathit{PSA}$\ could lead to a blow-up in the number of states.
The basic step in such a conversion is to take the leaves of $T$ and use them as states for the $\mathit{PSA}$.
If this were sufficient,
the resulting $\mathit{PSA}$\ would never have more states than the $\mathit{PST}$\ has nodes.
As this example shows,
this is not the case.
Imagine that the states of the $\mathit{PSA}$\ are just the leaves of $T$ and that we are in the right-most state/node,
$b,(0.5,0.5)$.
What will happen if an $a$ event arrives?
We would be unable to find a proper next state.
The state $aa,(0.75,0.25)$ is obviously not the correct one,
whereas states $aba,(0.9,0.1)$ and $bba,(0.1,0.9)$ are both ``correct'',
in the sense that $ba$ is a suffix of both $aba$ and $bba$.
In order to overcome this ambiguity regarding the correct next state,
we would have to first expand node $b,(0.5,0.5)$ of $T$ and then use the children of this node as states of the $\mathit{PSA}$.
In this simple example,
this expansion of a single problematic node would not have serious consequences.
But for deep trees and large alphabets,
the number of states generated by such expansions is far greater than the number of the original leaves.
For this reason,
the size of the $\mathit{PSA}$\ can be far greater than that of the original, unexpanded $\mathit{PST}$.
Figure \ref{fig:future} illustrates how we can estimate the probability for any future sequence of states of the $\mathit{DSFA}$\ $M_{R}$,
using the distributions of the $\mathit{PST}$\ $T$.
Let us assume that,
after consuming the last event,
$M_{R}$ is in state $1$ and $T$ has reached its left-most node, $aa,(0.75,0.25)$.
This is shown as the left-most node also in Figure \ref{fig:future}.
Each node in this figure has two elements:
the first one is the state of $M_{R}$ and the second the node of $T$,
starting with $\{1,aa\}$ as our current ``configuration''.
Each node has two outgoing edges, one for $a$ and one for $b$,
indicating what might happen next and with what probability.
For example,
from the left-most node of Figure \ref{fig:future},
we know that,
according to $T$,
we might see $a$ with probability $0.75$ and $b$ with probability $0.25$.
If we do encounter $b$,
then $M_{R}$ will move to state 2 and $T$ will reach leaf $b,(0.5,0.5)$.
This is shown in Figure \ref{fig:future} as the white node $\{2,b\}$.
This node has a double border to indicate that $M_{R}$ has reached a final state.
In a similar manner,
we can keep expanding this tree into the future
and use it to estimate the waiting-time distribution for its node $\{1,aa\}$.
In order to estimate the probability of reaching a final state for the first time in $k$ transitions,
we first find all the paths of length $k$ which start from the original node
and end in a final state without including another final state.
In our example of Figure \ref{fig:future},
if $k=1$,
then the path from $\{1,aa\}$ to $\{2,b\}$ is such a path and its probability is $0.25$.
Thus, $P(W_{\{1,aa\}}=1)=0.25$.
For $k=2$,
the path with the purple nodes leads to a final state after 2 transitions.
Its probability is $0.75 \cdot 0.25 = 0.1875$,
i.e., the product of the probabilities on the path edges.
Thus, $P(W_{\{1,aa\}}=2)=0.1875$.
If there were more such alternative paths,
we would have to add their probabilities.
Note that the tree-like structure of Figure \ref{fig:future} is not an actual data structure that we need to construct and maintain.
It is only a graphical illustration of the required computation steps.
The actual computation is performed recursively on demand.
At each recursive call,
a new frontier of virtual future nodes at level $k$ is generated.
We thus do not maintain all the nodes of this tree in memory,
but only access the $\mathit{PST}$\ $T$,
which is typically much more compact than a $\mathit{PSA}$.
Despite this fact though,
the size of the frontier after each recursive call grows exponentially as we try to look deeper into the future.
This cost can be significantly reduced by employing the following optimizations.
First, note in Figure \ref{fig:future},
that the paths starting from the two $\{2,b\}$ nodes are pink.
This indicates that these paths do not actually need to be generated,
as they start from a final state.
We are only interested in the first time $M_{R}$ reaches a final state and not in the second, third, etc.
As a result,
paths with more than one final state are not useful.
With this optimization,
we can still do an exact estimation of the waiting-time distribution.
Another useful optimization is to prune paths that we know will have a very low probability,
even if they are necessary for an exact estimation of the distributions.
The intuition is that such paths will not contribute significantly to the probabilities of our waiting-time distribution,
even if we do expand them.
We can prune such paths,
accepting the risk that we will have an approximate estimation of the waiting-time distribution.
This pruning can be done without generating the paths in their totality.
As soon as a partial path has a low probability,
we can stop its expansion,
since any deeper paths will have even lower probabilities.
We have found this optimization to be very efficient while having negligible impact on the distribution for a wide range of cut-off thresholds.
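As a rough sketch of the recursive computation with both optimizations
(reusing \texttt{next\_symbol\_dist} from the earlier sketch;
\texttt{delta}, the set of final states and the cut-off value are supplied by the caller and are hypothetical):
\begin{verbatim}
def waiting_time_dist(delta, finals, state, buffer, root, horizon,
                      cutoff=1e-4):
    """Approximate P(W = n), n = 1..horizon, by expanding future paths.
    delta(state, symbol) -> next DSFA state; finals: set of final states.
    A path stops as soon as it reaches a final state (first optimization)
    or its probability drops below cutoff (second optimization)."""
    dist = [0.0] * (horizon + 1)

    def expand(state, buffer, prob, n):
        if n > horizon or prob < cutoff:     # prune deep/improbable paths
            return
        for sym, p in next_symbol_dist(root, buffer).items():
            nxt = delta(state, sym)
            if nxt in finals:
                dist[n] += prob * p          # first hit: stop expanding
            else:
                expand(nxt, buffer[1:] + [sym], prob * p, n + 1)

    expand(state, buffer, 1.0, 1)
    return dist[1:]
\end{verbatim}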
\subsection{Deterministic Symbolic Automata}
\label{sec:dsfa}
The definition of $\mathit{DSFA}$\ is similar to that of classical deterministic automata.
Intuitively, we require that, for every state and every tuple/character,
the $\mathit{SFA}$\ can move to at most one next state upon reading that tuple/character.
We note though that it is not enough to require that all outgoing transitions from a state have different predicates as guards.
Symbolic automata differ from classical ones in one important respect.
For the latter,
if we start from a given state and we have two outgoing transitions with different labels,
then it is not possible for both of these transitions to be triggered simultaneously (i.e., by the same character).
For symbolic automata,
on the other hand,
two predicates may be different but still both evaluate to \textsf{\footnotesize TRUE}\ for the same tuple and thus two transitions with different predicates may both be triggered with the same tuple.
Therefore, the formal definition for $\mathit{DSFA}$\ must take this into account:
\begin{definition}[Deterministic SFA \cite{DBLP:conf/cav/DAntoniV17}]
A $\mathit{SFA}$\ $M$ is deterministic if, for all transitions $(q,\psi_{1},q_{1}),(q,\psi_{2},q_{2}) \in M.\Delta$,
if $q_{1} \neq q_{2}$ then $\llbracket \psi_{1} \wedge \psi_{2} \rrbracket = \emptyset$.
\end{definition}
Using this definition for $\mathit{DSFA}$, it can be proven that $\mathit{SFA}$\ are indeed closed under determinization \cite{DBLP:conf/cav/DAntoniV17}.
The determinization process first needs to create the \emph{minterms} of the predicates of a $\mathit{SFA}$\ $M$,
i.e., the set of maximal satisfiable Boolean combinations of such predicates,
denoted by $\mathit{Minterms}(\mathit{Predicates}(M))$,
and then use these minterms as guards for the $\mathit{DSFA}$\ \cite{DBLP:conf/cav/DAntoniV17}.
There are two factors that can lead to a combinatorial explosion of the number of states of the resulting $\mathit{DSFA}$:
first, the fact that the powerset of the states of the original $\mathit{SFA}$\ must be constructed (similarly to classical automata);
second, the fact that the number of minterms (and, thus, outgoing transitions from each $\mathit{DSFA}$\ state) is an exponential function of the number of the original $\mathit{SFA}$\ predicates.
In order to mitigate this doubly exponential cost,
we follow two simple optimization techniques.
As is typically done with classical automata as well,
instead of constructing the powerset of states of the $\mathit{SFA}$\ and then adding transitions,
we construct the states of the $\mathit{DSFA}$\ incrementally, starting from its initial state,
without adding states that will be inaccessible in the final $\mathit{DSFA}$.
We can also reduce the number of minterms by taking advantage of some previous knowledge about some of the predicates that we might have.
In cases where we know that some of the predicates are mutually exclusive,
i.e.,
at most one of them can evaluate to \textsf{\footnotesize TRUE},
then we can both discard some minterms and simplify some others.
For example,
if we have two predicates,
$\psi_{A} := \mathit{speed} < 5$ and $\psi_{B} := \mathit{speed} >20$,
then we also know that $\psi_{A}$ and $\psi_{B}$ are mutually exclusive.
As a result,
we can simplify the minterms,
as shown in Table \ref{table:minterms_simplified}.
\begin{table*}[t]
\centering
\caption{The set of simplified minterms for the predicates $\psi_{A} := \mathit{speed} < 5$ and $\psi_{B} := \mathit{speed} > 20$.}
\begin{tabular}{ccc}
\toprule
Original & Simplified & Reason\\
\midrule
$\psi_{A} \wedge \psi_{B}$ & discard & unsatisfiable \\
\midrule
$\psi_{A} \wedge \neg \psi_{B}$ & $\psi_{A}$ & $\psi_{A} \vDash \neg \psi_{B}$ \\
\midrule
$\neg \psi_{A} \wedge \psi_{B}$ & $\psi_{B}$ & $\psi_{B} \vDash \neg \psi_{A}$ \\
\midrule
$\neg \psi_{A} \wedge \neg \psi_{B}$ & $\neg \psi_{A} \wedge \neg \psi_{B}$ & for events whose speed is between 5 and 20 \\
\bottomrule
\end{tabular}
\label{table:minterms_simplified}
\end{table*}
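To make the construction concrete,
here is a sketch of minterm generation for predicates over a single numeric attribute.
Satisfiability is checked against a handful of sample points chosen around the predicate boundaries,
which suffices for such interval predicates;
a full implementation would use an SMT solver instead.
\begin{verbatim}
from itertools import product

# Predicates over a single numeric attribute, as Python callables.
predicates = {
    'psi_A': lambda speed: speed < 5,
    'psi_B': lambda speed: speed > 20,
}
# Sample points around the predicate boundaries; for these interval
# predicates they witness every satisfiable combination.
samples = [0, 5, 10, 20, 25]

def minterms(predicates, samples):
    """All satisfiable Boolean combinations of the predicates."""
    result = []
    for signs in product([True, False], repeat=len(predicates)):
        conj = list(zip(predicates.items(), signs))
        if any(all(p(x) == s for (_, p), s in conj) for x in samples):
            label = ' & '.join(name if s else '~' + name
                               for (name, _), s in conj)
            result.append(label)
    return result

print(minterms(predicates, samples))
# ['psi_A & ~psi_B', '~psi_A & psi_B', '~psi_A & ~psi_B']
\end{verbatim}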
Before moving to the discussion about how a $\mathit{DSFA}$\ can be converted to a Markov chain,
we present a useful lemma.
We will show that a $\mathit{DSFA}$\ always has an equivalent (through an isomorphism) deterministic classical automaton.
This result is important for two reasons:
a) it allows us to use methods developed for classical automata without having to always prove that they are indeed applicable to symbolic automata as well, and
b) it will help us in simplifying our notation,
since we can use the standard notation of symbols instead of predicates.
First note that $\mathit{Minterms}(\mathit{Predicates}(M))$ induces a finite set of equivalence classes on the (possibly infinite) set of domain elements of $M$ \cite{DBLP:conf/cav/DAntoniV17}.
For example, if $\mathit{Predicates}(M)=\{\psi_{1},\psi_{2}\}$,
then $\mathit{Minterms}(\mathit{Predicates}(M))=\{\psi_{1} \wedge \psi_{2}, \psi_{1} \wedge \neg \psi_{2}, \neg \psi_{1} \wedge \psi_{2}, \neg \psi_{1} \wedge \neg \psi_{2}\}$, and we can map each domain element, which, in our case, is a tuple, to exactly one of these 4 minterms:
the one that evaluates to \textsf{\footnotesize TRUE}\ when applied to the element.
Similarly, the set of minterms induces a set of equivalence classes on the set of strings (event streams in our case).
For example, if $S{=}t_{1},\cdots,t_{k}$ is an event stream,
then it could be mapped to $S'{=}a,\cdots,b$,
with $a$ corresponding to $\psi_{1} \wedge \neg \psi_{2}$ if $\psi_{1}(t_{1}) \wedge \neg \psi_{2}(t_{1}) = \textsf{\footnotesize TRUE}$, $b$ to $\psi_{1} \wedge \psi_{2}$, etc.
\begin{definition}[Stream induced by the minterms of a $\mathit{DSFA}$]
If $S$ is a stream from the domain elements of the algebra of a $\mathit{DSFA}$\ $M$ and $N=\mathit{Minterms}(\mathit{Predicates}(M))$,
then the stream $S'$ induced by applying $N$ on $S$ is the equivalence class of $S$ induced by $N$.
\end{definition}
We can now prove the lemma,
which states that for every $\mathit{DSFA}$\ there exists an equivalent classical deterministic automaton.
\begin{lemma}
\label{lemma:isomorphism}
For every deterministic symbolic finite automaton ($\mathit{DSFA}$) $M_{s}$ there exists a deterministic classical finite automaton ($\mathit{DFA}$) $M_{c}$ such that $\mathcal{L}(M_{c})$ is the set of strings induced by applying $N=\mathit{Minterms}(\mathit{Predicates}(M_{s}))$ to $\mathcal{L}(M_{s})$.
\end{lemma}
\begin{proof}
From an algebraic point of view,
the set $N=\mathit{Minterms}(\mathit{Predicates}(M))$ may be treated as a generator of the monoid $N^{*}$,
with concatenation as the operation.
If the cardinality of $N$ is $k$,
then we can always find a set $\Sigma=\{a_{1},\cdots,a_{k}\}$ of $k$ distinct symbols and then a morphism (in fact, an isomorphism) $\phi: N^{*} \rightarrow \Sigma^{*}$ that maps each minterm to exactly one, unique $a_{i}$.
A classical $\mathit{DFA}$\ $M_{c}$ can then be constructed by relabeling the $\mathit{DSFA}$\ $M_{s}$ under $\phi$,
i.e.,
by copying/renaming the states and transitions of the original $\mathit{DSFA}$\ $M_{s}$ and by replacing the label of each transition of $M_{s}$ by the image of this label under $\phi$.
Then, the behavior of $M_{c}$ (the language it accepts) is the image under $\phi$ of the behavior of $M_{s}$ \cite{DBLP:books/daglib/0023547}.
Or, equivalently, the language of $M_{c}$ is the set of strings induced by applying $N=\mathit{Minterms}(\mathit{Predicates}(M_{s}))$ to $\mathcal{L}(M_{s})$.
\end{proof}
A direct consequence drawn from the proof of the above lemma is that,
for every run
$\varrho = [1,q_{1}] \overset{\delta_{1}}{\rightarrow} [2,q_{2}] \overset{\delta_{2}}{\rightarrow} \cdots \overset{\delta_{k}}{\rightarrow} [k+1,q_{k+1}]$
followed by a $\mathit{DSFA}$\ $M_{s}$ by consuming a symbolic string (stream of tuples) $S$,
the run that the equivalent $\mathit{DFA}$\ $M_{c}$ follows by consuming the induced string $S'$ is also
$\varrho' = [1,q_{1}] \overset{\delta_{1}}{\rightarrow} [2,q_{2}] \overset{\delta_{2}}{\rightarrow} \cdots \overset{\delta_{k}}{\rightarrow} [k+1,q_{k+1}]$,
i.e., $M_{c}$ follows the same copied/renamed states and the same copied/relabeled transitions.
This direct relationship between $\mathit{DSFA}$\ and classical $\mathit{DFA}$\ allows us to transfer techniques developed for classical $\mathit{DFA}$\ to the study of $\mathit{DSFA}$.
Moreover, we can simplify our notation by employing the terminology of symbols/characters and strings/words that is typical for classical automata.
Henceforth,
we will be using symbols and strings as in classical theories of automata and strings (simple lowercase letters to denote symbols),
but the reader should bear in mind that,
in our case,
each symbol always corresponds to a predicate and,
more precisely,
to a minterm of a $\mathit{DSFA}$.
For example, the symbol $a$ may actually refer to the minterm $(\mathit{speed} < 5) \wedge \neg (\mathit{speed} >20)$,
the symbol $b$ to $\neg (\mathit{speed} < 5) \wedge (\mathit{speed} >20)$, etc.
\subsection{Prediction Suffix Trees}
\label{sec:pst}
We use Prediction Suffix Trees ($\mathit{PST}$),
as described in \cite{DBLP:journals/ml/RonST96,DBLP:conf/nips/RonST93},
as our VMM of choice.
The reason is that,
once a $\mathit{PST}$\ has been learned,
it can be readily converted to a probabilistic automaton.
More precisely,
we learn a probabilistic suffix automaton ($\mathit{PSA}$),
whose states correspond to contexts of variable length.
The outgoing transitions from each state of the $\mathit{PSA}$\ encode the conditional distribution of seeing a symbol given the context of that state.
As we will show,
this probabilistic automaton (or the tree itself)
can then be combined with a symbolic automaton in a way that allows us to infer when a CE is expected to occur.
The formal definition of a PST is the following:
\begin{definition}[Prediction Suffix Tree \cite{DBLP:journals/ml/RonST96}]
Let $\Sigma$ be an alphabet.
A PST $T$ over $\Sigma$ is a tree
whose edges are labeled by symbols $\sigma \in \Sigma$ and each internal node has exactly one edge for every $\sigma \in \Sigma$
(hence, the degree is $|\Sigma|$).
Each node is labeled by a pair $(s,\gamma_{s})$,
where $s$ is the string associated with the walk starting from that node and ending at the root,
and $\gamma_{s}: \Sigma \rightarrow [0,1]$ is the next symbol probability function related with $s$.
For every string $s$ labeling a node, $\sum_{\sigma \in \Sigma} \gamma_{s}(\sigma) = 1$.
The depth of the tree is its order $m$.
\end{definition}
Figure \ref{fig:pstab1} shows an example of a $\mathit{PST}$\ of order $m=2$.
According to this tree,
if the last symbol that we have encountered in a stream is $a$
and we ignore any other symbols that may have preceded it,
then the probability of the next input symbol being again $a$ is $0.7$.
However, we can obtain a better estimate of the next symbol probability by extending the context and looking one more symbol deeper into the past.
Thus, if the last two symbols encountered are $b,a$,
then the probability of seeing $a$ again is very different ($0.1$).
On the other hand,
if the last symbol encountered is $b$,
the next symbol probability distribution is $(0.5,0.5)$ and,
since the node $b,(0.5,0.5)$ has not been expanded,
this implies that its children would have the same distribution if they had been created.
Therefore, the past does not affect the prediction and will not be used.
Note that a $\mathit{PST}$\ whose leaves are all of equal depth $m$ corresponds to a full-order Markov model of order $m$,
as its paths from the root to the leaves correspond to every possible context of length $m$.
Our goal is to incrementally learn a $\mathit{PST}$\ $\hat{T}$ by adding new nodes only when it is necessary
and then use $\hat{T}$ to construct a $\mathit{PSA}$\ $\hat{M}$ that will approximate the actual $\mathit{PSA}$\ $M$ that has generated the training data.
Assuming that we have derived an initial predictor $\hat{P}$
(as described in more detail in Section \ref{sec:prob_empirical}),
the learning algorithm in \cite{DBLP:journals/ml/RonST96} starts with a tree having only a single node,
corresponding to the empty string $\epsilon$.
Then,
it decides whether to add a new context/node $s$ by checking two conditions:
\begin{itemize}
\item First, there must exist $\sigma \in \Sigma$ such that $\hat{P}(\sigma \mid s) > \theta_{1}$, i.e., $\sigma$ must appear ``often enough'' after the suffix $s$;
\item Second, $\frac{\hat{P}(\sigma \mid s)}{\hat{P}(\sigma \mid \mathit{suffix}(s))} > \theta_{2}$ (or $\frac{\hat{P}(\sigma \mid s)}{\hat{P}(\sigma \mid \mathit{suffix}(s))} < \frac{1}{\theta_{2}}$) must hold,
i.e., it is ``meaningful enough'' to expand to $s$ because there is a significant difference in the conditional probability of $\sigma$ given $s$ with respect to the same probability given the shorter context $\mathit{suffix}(s)$,
where $\mathit{suffix}(s)$ is the longest suffix of $s$ that is different from $s$.
\end{itemize}
The thresholds $\theta_{1}$ and $\theta_{2}$ depend, among others,
on parameters $\alpha$, $n$ and $m$,
$\alpha$ being an approximation parameter,
measuring how close we want the estimated $\mathit{PSA}$\ $\hat{M}$ to be compared to the actual $\mathit{PSA}$\ $M$,
$n$ denoting the maximum number of states that we allow $\hat{M}$ to have
and $m$ denoting the maximum order/length of the dependencies we want to capture.
For example,
consider node $a$ in Figure \ref{fig:pstab1} and assume that we are at a stage of the learning process where we have not yet added its children, $aa$ and $ba$.
We now want to check whether it is meaningful to add $ba$ as a node.
Assuming that the first condition is satisfied,
we can then check the ratio $\frac{\hat{P}(\sigma \mid s)}{\hat{P}(\sigma \mid \mathit{suffix}(s))} = \frac{\hat{P}(a \mid ba)}{\hat{P}(a \mid a)} = \frac{0.1}{0.7} \approx 0.14$.
If $\theta_{2} = 1.05$, then $\frac{1}{\theta_{2}} \approx 0.95$ and the condition is satisfied,
leading to the addition of node $ba$ to the tree.
For more details, see \cite{DBLP:journals/ml/RonST96}.
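The two checks can be summarized in a few lines of Python
(our sketch; \texttt{P\_hat} is a hypothetical estimator of the conditional probabilities and the value of $\theta_{1}$ is made up):
\begin{verbatim}
def should_expand(P_hat, s, suffix_s, sigma_set, theta1, theta2):
    """Decide whether context s should be added to the tree.
    P_hat(sigma, context) is an estimated conditional probability;
    suffix_s is s with its oldest symbol dropped."""
    for sigma in sigma_set:
        p_long, p_short = P_hat(sigma, s), P_hat(sigma, suffix_s)
        if p_long <= theta1:          # sigma not frequent enough after s
            continue
        ratio = p_long / p_short
        if ratio > theta2 or ratio < 1 / theta2:
            return True               # expanding to s is meaningful
    return False

# The worked example: P_hat(a | ba) = 0.1, P_hat(a | a) = 0.7.
P_hat = lambda sigma, ctx: {('a', 'ba'): 0.1, ('a', 'a'): 0.7,
                            ('b', 'ba'): 0.9, ('b', 'a'): 0.3}[(sigma, ctx)]
print(should_expand(P_hat, 'ba', 'a', ['a', 'b'], theta1=0.05, theta2=1.05))
# True (0.1/0.7 = 0.14 < 1/1.05 = 0.95)
\end{verbatim}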
\input{figs_pstpsa}
Once a $\mathit{PST}$\ $\hat{T}$ has been learned,
we can convert it to a $\mathit{PSA}$\ $\hat{M}$.
The definition for $\mathit{PSA}$\ is the following:
\begin{definition}[Probabilistic Suffix Automaton \cite{DBLP:journals/ml/RonST96}]
\label{def:psa}
A Probabilistic Suffix Automaton $M$ is a tuple $(Q,\Sigma,\tau,\gamma,\pi)$, where:
\begin{itemize}
\item $Q$ is a finite set of states;
\item $\Sigma$ is a finite alphabet;
\item $\tau: Q \times \Sigma \rightarrow Q$ is the transition function;
\item $\gamma: Q \times \Sigma \rightarrow [0,1]$ is the next symbol probability function;
\item $\pi: Q \rightarrow [0,1]$ is the initial probability distribution over the starting states;
\end{itemize}
The following conditions must hold:
\begin{itemize}
\item For every $q \in Q$, it must hold that $\sum_{\sigma \in \Sigma} \gamma(q,\sigma) = 1$ and $\sum_{q \in Q} \pi(q) = 1$;
\item Each $q \in Q$ is labeled by a string $s \in \Sigma^{*}$ and the set of labels is suffix free,
i.e., no label $s$ is a suffix of another label $s'$;
\item For every two states $q_{1},q_{2} \in Q$ and for every symbol $\sigma \in \Sigma$,
if $\tau(q_{1},\sigma)=q_{2}$ and $q_{1}$ is labeled by $s_{1}$,
then $q_{2}$ is labeled by $s_{2}$, such that $s_{2}$ is a suffix of $s_{1} \cdot \sigma$;
\item For every $s$ labeling some state $q$,
and every symbol $\sigma$ for which $\gamma(q,\sigma) > 0$,
there exists a label which is a suffix of $s \cdot \sigma$;
\item Finally, the graph of $M$ is strongly connected.
\end{itemize}
\end{definition}
Note that a $\mathit{PSA}$\ is a Markov chain.
$\tau$ and $\gamma$ can be combined into a single function,
ignoring the symbols,
and this function,
together with the first condition of Definition \ref{def:psa},
would define the transition matrix of a Markov chain.
The last condition about $M$ being strongly connected also ensures that the Markov chain is
composed of a single recurrent class of states.
Figure \ref{fig:psaab1} shows an example of a $\mathit{PSA}$,
the one that we construct from the $\mathit{PST}$\ of Figure \ref{fig:pstab1},
using the leaves of the tree as automaton states.
A full-order $\mathit{PSA}$\ for $m=2$ would require a total of 4 states,
given that we have two symbols.
If we use the $\mathit{PST}$\ of Figure \ref{fig:pstab1},
we can construct the $\mathit{PSA}$\ of Figure \ref{fig:psaab1} which has 3 states.
State $b$ does not need to be expanded to states $bb$ and $ab$,
since the tree tells us that such an expansion is not statistically meaningful.
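Combining $\tau$ and $\gamma$ into a transition matrix,
as described above,
amounts to a few lines;
the following sketch (ours) does so for this 3-state $\mathit{PSA}$:
\begin{verbatim}
import numpy as np

def psa_to_markov(states, tau, gamma, alphabet):
    """Combine tau and gamma into the transition matrix of a Markov
    chain over the PSA states, ignoring the symbols themselves."""
    idx = {q: i for i, q in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for q in states:
        for sigma in alphabet:
            P[idx[q], idx[tau(q, sigma)]] += gamma(q, sigma)
    return P        # every row sums to 1, by the first condition above

# The 3-state PSA of Figure psaab1:
states = ['aa', 'ba', 'b']
tau = lambda q, s: {'a': {'aa': 'aa', 'ba': 'aa', 'b': 'ba'},
                    'b': {'aa': 'b',  'ba': 'b',  'b': 'b'}}[s][q]
gamma = lambda q, s: {'aa': {'a': 0.75, 'b': 0.25},
                      'ba': {'a': 0.1,  'b': 0.9},
                      'b':  {'a': 0.5,  'b': 0.5}}[q][s]
print(psa_to_markov(states, tau, gamma, ['a', 'b']))
\end{verbatim}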
Using a $\mathit{PSA}$,
we can process a stream of symbols and at every point be able to provide an estimate about the next symbols that will be encountered along with their probabilities.
The state of the $\mathit{PSA}$\ at every moment corresponds to a suffix of the stream.
For example,
according to the $\mathit{PSA}$\ of Figure \ref{fig:psaab1},
if the last symbol consumed from the stream is $b$,
then the $\mathit{PSA}$\ would be in state $b$ and the probability of the next symbol being $a$ would be $0.5$.
If the last symbol in the stream is $a$,
we would need to expand this suffix to look at one more symbol in the past.
If the last two symbols are $aa$,
then the $\mathit{PSA}$\ would be in state $aa$ and the probability of the next symbol being $a$ again would be $0.75$.
Note that a $\mathit{PSA}$\ does not act as an acceptor (there are no final states),
but can act as a generator of strings.
It can use $\pi$, its initial distribution on states, to select an initial state and generate its label as a first string
and then continuously use $\gamma$ to generate a symbol, move to a next state and repeat the same process.
At every time, the label of its state is always a suffix of the string generated thus far.
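A generator loop is equally short
(reusing \texttt{states}, \texttt{tau} and \texttt{gamma} from the previous sketch;
the initial distribution \texttt{pi} is made up):
\begin{verbatim}
import random

def generate(states, pi, tau, gamma, alphabet, length):
    """Use the PSA as a generator: draw a start state from pi, emit its
    label, then repeatedly draw the next symbol from gamma and move."""
    q = random.choices(states, weights=[pi[s] for s in states])[0]
    out = list(q)                    # the label of the starting state
    while len(out) < length:
        sigma = random.choices(alphabet,
                               weights=[gamma(q, s) for s in alphabet])[0]
        out.append(sigma)
        q = tau(q, sigma)            # label of q stays a suffix of out
    return ''.join(out)

print(generate(states, {'aa': 0.4, 'ba': 0.2, 'b': 0.4},
               tau, gamma, ['a', 'b'], 10))
\end{verbatim}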
A $\mathit{PSA}$\ may also be used to read a string or stream of symbols.
In this mode,
the state of the $\mathit{PSA}$\ at every moment corresponds again to a suffix of the stream and the $\mathit{PSA}$\ can be used to calculate the probability of seeing any given string in the future,
given the label of its current state.
Our intention is to use this derived $\mathit{PSA}$\ to process streams of symbols,
so that,
while consuming a stream $S_{1..k}$,
we can know what its meaningful suffix is and use that suffix for any inferences.
However, there is a subtle technical issue about the convertibility of a $\mathit{PST}$\ to a $\mathit{PSA}$.
Not every $\mathit{PST}$\ can be converted to a $\mathit{PSA}$\ (but every $\mathit{PST}$\ can be converted to an automaton from the larger class of so-called probabilistic automata).
This is achievable under a certain condition.
If this condition does not hold,
then the $\mathit{PST}$\ can be converted to an automaton that is composed of a $\mathit{PSA}$\ as usual,
with the addition of some extra states.
These states, viewed as states of a Markov chain, are transient.
This means that the automaton will move through these states for some transitions,
but it will finally end into the states of the $\mathit{PSA}$,
stay in that class and never return to any of the transient states.
In fact, if the automaton starts in any of the transient states,
then it will enter the single, recurrent class of the $\mathit{PSA}$\ in at most $m$ transitions.
Given the fact that in our work we deal with streams of infinite length,
it is certain that,
while reading a stream,
the automaton will have entered the $\mathit{PSA}$\ after at most $m$ symbols.
Thus, instead of checking this condition,
we prefer to simply construct only the $\mathit{PSA}$\ and wait (for at most $m$ symbols)
until the first $k \leq m$ symbols of a stream have been consumed and are equal to a label of the $\mathit{PSA}$.
At this point, we set the current state of the $\mathit{PSA}$\ to the state with that label and start processing.
The above discussion seems to suggest that a $\mathit{PSA}$\ is constructed from the leaves of a $\mathit{PST}$.
Thus, it should be expected that the number of states of a $\mathit{PSA}$\ should always be smaller than the total number of nodes of its $\mathit{PST}$.
However, this is not true in the general case.
In fact, in some cases the $\mathit{PST}$\ nodes might be significantly fewer than the $\mathit{PSA}$\ states.
The reason is that a $\mathit{PST}$,
as is produced by the learning algorithm described previously,
might not be sufficient to construct a $\mathit{PSA}$.
To remedy this situation,
we need to expand the original $\mathit{PST}$\ $\hat{T}$ by adding more nodes in order to get a suitable $\mathit{PST}$\ $\hat{T}'$ and then construct the $\mathit{PSA}$\ from $\hat{T}'$.
The leaves of $\hat{T}'$ (and thus the states of the $\mathit{PSA}$) could be significantly more than the leaves of $\hat{T}$.
This issue is further discussed in Section \ref{sec:no-mc}.
\subsection{Variable-order Markov Models}
\label{sec:vmm}
Assuming that we have a deterministic automaton,
the next question is how we can build a probabilistic model that captures the statistical properties of the streams to be processed by this automaton.
With such a model,
we could then make inferences about the automaton's expected behavior as it reads event streams.
One approach would be to map each state of the automaton to a state of a Markov chain,
then apply the automaton on a training stream of symbols,
count the number of transitions from each state to every other target state and use these counts to calculate the transition probabilities.
This is the approach followed in \cite{DBLP:conf/debs/MuthusamyLJ10}.
However, there is an important issue with the way in which this approach models transition probabilities.
Namely, a probability is attached to the transition between two states,
say state 1 and state 2,
ignoring the way in which state 1 has been reached, i.e.,
failing to capture the sequence of symbols.
For example, in Figure \ref{fig:dfasr},
state $0$ can be reached after observing symbol $b$ or symbol $c$.
The outgoing transition probabilities do not distinguish between the two cases.
Instead, they just capture the probability of $a$ given that the previous symbol was $b$ or $c$.
This introduces ambiguity and if there are many such states in the automaton,
we may end up with a Markov chain that is first-order (with respect to its states),
but nevertheless provides no memory of the stream itself.
It may be unable to capture first-order (or higher-order) dependencies in the stream of events.
In the worst case (if every state can be reached with any symbol), such a Markov chain may essentially assume that the stream is composed of i.i.d.\ events.
\input{figs_dfasr}
An alternative approach,
followed in \cite{DBLP:conf/lpar/AlevizosAP18,DBLP:conf/debs/AlevizosAP17},
is to first set a maximum order $m$ that we need to capture
and then iteratively split each state of the original automaton into as many states as required so that each new state can remember the past $m$ symbols that have led to it.
The new automaton that results from this splitting process is equivalent to the original,
in the sense that they recognize the same language,
but can always remember the last $m$ symbols of the stream.
With this approach,
it is indeed possible to guarantee that $m$-order dependencies can be captured.
As expected though, higher values of $m$ can quickly lead to an exponential growth of the number of states and the approach may be practical only for low values of $m$.
We propose the use of a variable-order Markov model (VMM) to mitigate the high cost of increasing the order $m$
\cite{DBLP:journals/jair/BegleiterEY04,buhlmann1999variable,DBLP:journals/ml/RonST96,DBLP:conf/nips/RonST93,DBLP:journals/tcom/ClearyW84,DBLP:journals/tit/WillemsST95}.
This allows us to increase $m$ to values not possible with the previous approaches and thus capture longer-term dependencies,
which can lead to a better accuracy.
An alternative would be to use hidden Markov models (HMMs) \cite{rabiner1989tutorial},
which are generally more expressive than bounded-order (either full or variable) Markov models.
However, HMMs often require large training datasets \cite{DBLP:journals/jair/BegleiterEY04,DBLP:journals/ml/AbeW92}.
Another problem is that it is not always obvious how a domain can be modeled through HMMs and a deep understanding of the domain may be required \cite{DBLP:journals/jair/BegleiterEY04}.
The relation between an automaton and the observed state of a HMM is not straightforward
and it is not evident how a HMM would capture an automaton's behavior.
Different Markov models of variable order have been proposed in the literature
(see \cite{DBLP:journals/jair/BegleiterEY04} for a nice comparative study).
The general approach of such models is as follows:
let $\Sigma$ denote an alphabet,
$\sigma \in \Sigma$ a symbol from that alphabet and $s \in \Sigma^{m}$ a string of length $m$ of symbols from that alphabet.
The aim is to derive a predictor $\hat{P}$ from the training data such that the average log-loss on a test sequence $S_{1..k}$ is minimized.
The loss is
given by
$l(\hat{P},S_{1..k}) = - \frac{1}{k} \sum_{i=1}^{k} \log \hat{P}(t_{i} \mid t_{1} \cdots t_{i-1})$.
Minimizing the log-loss is equivalent to maximizing the likelihood $\hat{P}(S_{1..k})=\prod_{i=1}^{k}\hat{P}(t_{i} \mid t_{1} \dots t_{i-1})$.
The average log-loss may also be viewed as a measure of the average compression rate achieved on the test sequence \cite{DBLP:journals/jair/BegleiterEY04}.
The mean (or expected) log-loss ($-\boldsymbol{E}_{P}\{\log \hat{P}(S_{1..k}) \}$) is minimized if the derived predictor $\hat{P}$ is indeed the actual distribution $P$ of the source emitting sequences.
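For instance,
the average log-loss of any predictor can be computed directly from this definition,
as in the following sketch with a toy, context-ignoring predictor (all numbers made up):
\begin{verbatim}
import math

def avg_log_loss(P_hat, stream):
    """-(1/k) * sum_i log P_hat(t_i | t_1 ... t_{i-1})."""
    k = len(stream)
    return -sum(math.log(P_hat(stream[:i], stream[i]))
                for i in range(k)) / k

# A toy i.i.d. predictor that ignores the context entirely:
P_hat = lambda context, symbol: {'a': 0.7, 'b': 0.3}[symbol]
print(avg_log_loss(P_hat, 'aabab'))   # ~ 0.696
\end{verbatim}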
For full-order Markov models,
the predictor $\hat{P}$ is derived through the estimation of conditional distributions $\hat{P}(\sigma \mid s)$,
with $m$ constant and equal to the assumed order of the Markov model.
Variable-order Markov Models (VMMs), on the other hand, relax the assumption of $m$ being fixed.
The length of the ``context'' $s$ (as is usually called) may vary,
up to a \emph{maximum} order $m$,
according to the statistics of the training dataset.
By looking deeper into the past only when it is statistically meaningful,
VMMs can capture both short- and long-term dependencies.
\section{Related Work}
\label{sec:related}
There are multiple ways to define the task of forecasting over time-evolving data streams.
Before proceeding with the presentation of previous work on forecasting,
we first begin with a terminological clarification.
It is often the case that the terms ``forecasting'' and ``prediction'' are used interchangeably as equivalent terms.
For reasons of clarity,
we opt for the term of ``forecasting'' to describe our work,
since there does exist a conceptual difference between forecasting and prediction, as the latter term is understood in machine learning.
In machine learning,
the goal is to ``predict'' the output of a function on previously unseen input data.
The input data need not necessarily have a temporal dimension and the term ``prediction'' refers to the output of the learned function on a new data point.
For this reason we avoid using the term ``prediction''.
Instead, we choose the term ``forecasting'' to define the task of predicting the temporally future output of some function or the occurrence of an event.
Time is thus a crucial component for forecasting.
Moreover, an important challenge stems from the fact that,
from the (current) timepoint where a forecast is produced until the (future) timepoint for which we try to make a forecast,
no data is available.
A forecasting system must (implicitly or explicitly) fill in this data gap in order to produce a forecast.
In what follows,
we present previous work on CEF,
as defined above,
in order of increasing relevance to CER.
Since work on CEF has been limited thus far,
we start by briefly mentioning some forecasting ideas from other fields and discuss how CEF differs from these research areas.
\emph{Time-series forecasting.}
Time-series forecasting is an area with some similarities to CEF and a significant history of contributions \cite{montgomery2015introduction}.
However, it is not possible to directly apply techniques from time-series forecasting to CEF.
Time-series forecasting typically focuses on streams of (mostly) real-valued variables and the goal is to forecast relatively simple patterns.
On the contrary, in CEF we are also interested in categorical values,
related through complex patterns and involving multiple variables.
Another limitation of time-series forecasting methods is that they do not provide a language with which we can define complex patterns,
but simply try to forecast the next value(s) from the input stream/series.
In CER, the equivalent task would be to forecast the next input event(s) (SDEs).
This task in itself is not very useful for CER though,
since the majority of SDE instances should be ignored and do not contribute to the detection of CEs
(see the discussion on selection policies in Section \ref{sec:symbolic}).
For example,
if we want to determine whether a ship is following a set of pre-determined waypoints at sea,
we are only interested in the messages where the ship ``touches'' each waypoint.
All other intermediate messages are to be discarded and should not constitute part of the match.
CEs are more like ``anomalies'' and their number is typically orders of magnitude lower than the number of SDEs.
One could possibly try to leverage techniques from SDE forecasting to perform CE forecasting.
At every timepoint, we could try to estimate the most probable sequence of future SDEs,
then perform recognition on this future stream of SDEs and check whether any future CEs are detected.
We have experimentally observed that such an approach yields sub-optimal results.
It almost always fails to detect any future CEs.
This behavior is due to the fact that CEs are rare.
As a result, projecting the input stream into the future creates a ``path'' with high probability but fails to include the rare ``paths'' that lead to a CE detection.
Because of this serious under-performance of this method,
we do not present detailed experimental results.
\emph{Sequence prediction (compression).}
Another related field is that of prediction of discrete sequences over finite alphabets and is closely related to the field of compression,
as any compression algorithm can be used for prediction and vice versa.
The relevant literature is extensive.
Here we focus on a sub-field with high importance for our work,
as we have borrowed ideas from it.
It is the field of sequence prediction via variable-order Markov models \cite{DBLP:journals/jair/BegleiterEY04,buhlmann1999variable,DBLP:journals/ml/RonST96,DBLP:conf/nips/RonST93,DBLP:journals/tcom/ClearyW84,DBLP:journals/tit/WillemsST95}.
As the name suggests,
the goal is to perform prediction by using a high-order Markov model.
Doing so in a straightforward manner,
by constructing a high-order Markov chain with all its possible states,
is prohibitively expensive due to the combinatorial explosion of the number of states.
Variable-order Markov models address this issue by retaining only those states that are ``informative'' enough.
In Section \ref{sec:vmm},
we discuss the relevant literature in more details.
The main limitation of previous methods for sequence prediction is that they also do not provide a language for patterns and focus exclusively on next symbol prediction,
i.e., they try to forecast the next symbol(s) in a stream/string of discrete symbols.
As already discussed,
this is a serious limitation for CER.
An additional limitation is that they work on single-variable discrete sequences of symbols,
whereas CER systems consume streams of events,
i.e., streams of tuples with multiple variables, both numerical and categorical.
Notwithstanding these limitations,
we show that variable-order models can be combined with symbolic automata in order to overcome their restrictions and perform CEF.
\emph{Temporal mining.}
Forecasting methods have also appeared in the field of temporal pattern mining \cite{DBLP:conf/icdm/VilaltaM02,DBLP:conf/kdd/LaxmanTW08,DBLP:journals/eswa/ZhouCG15,DBLP:journals/vldb/ChoWYZC11}.
A common assumption in these methods is that patterns are usually defined either as association rules \cite{DBLP:conf/sigmod/AgrawalIS93} or as frequent episodes \cite{DBLP:journals/datamine/MannilaTV97}.
In \cite{DBLP:conf/icdm/VilaltaM02} the goal is to identify sets of event types that frequently precede a rare, target event within a temporal window, using a framework similar to that of association rule mining.
In \cite{DBLP:conf/kdd/LaxmanTW08}, a forecasting model is presented,
based on a combination of standard frequent episode discovery algorithms, Hidden Markov Models and mixture models.
The goal is to calculate the probability of the immediately next event in the stream.
In \cite{DBLP:journals/eswa/ZhouCG15} a method is presented for batch, online mining of sequential patterns.
The learned patterns are used to test whether a prefix matches the last events seen in the stream and therefore make a forecast.
The method proposed in \cite{DBLP:journals/vldb/ChoWYZC11} starts with a given episode rule (as a Directed Acyclic Graph) and detects the
minimal occurrences of the antecedent of a rule defining a complex event,
i.e., those ``clusters'' of antecedent events that are closer together in time.
From the perspective of CER,
the disadvantage of these methods is that they usually target simple patterns,
defined either as strict sequences or as sets of input events.
Moreover, the input stream is composed of symbols from a finite alphabet,
as is the case with the compression methods mentioned above.
\emph{Sequence prediction based on neural networks.}
Lately, a significant body of work has focused on event sequence prediction and point-of-interest recommendations through the use of neural networks
(see, for example, \cite{DBLP:conf/ijcai/LiDL18,DBLP:conf/ijcai/ChangPPKK18}).
These methods are powerful in predicting the next input event(s) in a sequence of events,
but they suffer from limitations already mentioned above.
They do not provide a language for defining complex patterns among events and their focus is thus on SDE forecasting.
An additional motivation for us to first try a statistical method rather than going directly to neural networks is that,
in other related fields,
such as time series forecasting,
statistical methods have often been proven to be more accurate and less demanding in terms of computational resources than ML ones \cite{makridakis2018statistical}.
\emph{Process mining.}
Compared to the previous categories for forecasting,
the field of process mining is more closely related to CER \cite{van2011process}.
Processes are typically defined as transition systems (e.g., automata or Petri nets) and are used to monitor a system,
e.g., for conformance testing.
Process mining attempts to automatically learn a process from a set of traces,
i.e., a set of activity logs.
Since 2010,
a significant body of work has appeared,
targeting process prediction,
where the goal is to forecast if and when a process is expected to be completed
(for surveys, see \cite{DBLP:journals/tsc/Marquez-Chamorro18,DBLP:conf/bpm/Francescomarino18}).
According to \cite{DBLP:journals/tsc/Marquez-Chamorro18},
until 2018, 39 papers in total had been published dealing with process prediction.
At a first glance,
process prediction seems very similar to CEF.
At a closer look though,
some important differences emerge.
An important difference is that processes are usually given directly as transition systems,
whereas CER patterns are defined in a declarative manner.
The transition systems defining processes are usually composed of long sequences of events.
On the other hand,
CER patterns are shorter,
may involve Kleene-star (iteration) operators, which are usually not present in processes,
and may even be instantaneous.
Consider,
for example,
a pattern for our running example,
trying to detect speed violations by simply checking whether a vessel's speed exceeds some threshold.
This pattern could be expanded to detect more violations by adding more disjuncts,
e.g., for checking whether a vessel is sailing within a restricted area,
all of which might be instantaneous.
A CEF system cannot always rely on the memory implicitly encoded in a transition system and has to be able to learn the sequences of events that lead to a (possibly instantaneous) CE.
Another important difference is that process prediction focuses on traces, which are complete, full matches,
whereas CER focuses on continuously evolving streams which may contain many irrelevant events.
A learning method has to take into account the presence of these irrelevant events.
In addition to that,
since CEs are rare events,
the datasets are highly imbalanced,
with the vast majority of ``labels'' being negative
(i.e., most forecasts should report that no CE is expected to occur,
with very few being positive).
A CEF system has to strike a fine balance between the positive and negative forecasts it produces
in order to avoid drowning the positives in the flood of all the negatives and, at the same time, avoid over-producing positives that lead to false alarms.
This is also an important issue for process prediction,
but becomes critical for a CEF system,
due to the imbalanced nature of the datasets.
In Section \ref{sec:experiments},
we have included one method from the field of process prediction to our empirical evaluation.
This ``unfair'' comparison (in the sense that it is applied on datasets more suitable for CER) shows that this method consistently under-performs with respect to other methods from the field of CEF.
\emph{Complex event forecasting.}
Contrary to process prediction, forecasting has not received much attention in the field of CER,
although some conceptual proposals have acknowledged the need for CEF \cite{DBLP:conf/bci/FulopBTDVF12,DBLP:conf/debs/EngelE11,DBLP:conf/edoc/ChristKK16}.
The first concrete attempt at CEF was presented in \cite{DBLP:conf/debs/MuthusamyLJ10}.
A variant of regular expressions was used to define CE patterns,
which were then compiled into automata.
These automata were translated to Markov chains through a direct mapping,
where each automaton state was mapped to a Markov chain state.
Frequency counters on the transitions were used to estimate the Markov chain's transition matrix.
This Markov chain was finally used to estimate if a CE was expected to occur within some future window.
As we explain in Section \ref{sec:vmm},
in the worst case,
such an approach assumes that all SDEs are independent
(even when the states of the Markov chain are not independent)
and is thus unable to encode higher-order dependencies.
Another example of event forecasting was presented in \cite{DBLP:conf/wf-iot/AkbarCMZ15}.
Using Support Vector Regression,
the proposed method was able to predict the next input event(s) within some future window.
This technique is similar to time-series forecasting,
as it mainly targets the prediction of the (numerical) values of the attributes of the input (SDE) events
(specifically, traffic speed and intensity from a traffic monitoring system).
Strictly speaking,
it cannot therefore be considered a CE forecasting method,
but a SDE forecasting one.
Nevertheless, the authors of \cite{DBLP:conf/wf-iot/AkbarCMZ15} proposed the idea that these future SDEs may be used by a CER engine to detect future CEs.
As we have already mentioned though,
in our experiments,
this idea has yielded poor results.
In \cite{DBLP:conf/colcom/PandeyNC11},
Hidden Markov Models (HMM) are used to construct a probabilistic model for the behavior of a transition system describing a CE.
The observable variable of the HMM corresponds to the states of the transition system,
i.e., an observation sequence of length $l$ for the HMM consists of the sequence of states visited by the system after consuming $l$ SDEs.
These $l$ SDEs are mapped to the hidden variable,
i.e.,
the last $l$ values of the hidden variable are the last $l$ SDEs.
In principle,
HMMs are more powerful than Markov chains.
In practice, however, HMMs are hard to train \cite{DBLP:journals/jair/BegleiterEY04,DBLP:journals/ml/AbeW92} and require elaborate domain modeling,
since mapping a CE pattern to a HMM is not straightforward (see Section \ref{sec:vmm} for details).
In contrast,
our approach constructs seamlessly a probabilistic model from a given CE pattern (declaratively defined).
Automata and Markov chains are again used in \cite{DBLP:conf/debs/AlevizosAP17,DBLP:conf/lpar/AlevizosAP18}.
The main difference of these methods compared to \cite{DBLP:conf/debs/MuthusamyLJ10} is that they can accommodate higher-order dependencies by creating extra states for the automaton of a pattern.
The method presented in \cite{DBLP:conf/debs/AlevizosAP17} has two important limitations:
first, it works only on discrete sequences of finite alphabets;
second, the number of states required to encode long-term dependencies grows exponentially.
The first issue was addressed in \cite{DBLP:conf/lpar/AlevizosAP18},
where symbolic automata are used that can handle infinite alphabets.
However, the problem of the exponential growth of the number of states still remains.
We show how this problem can be addressed by using variable-order Markov models.
A different approach is followed in \cite{li20data},
where knowledge graphs are used to encode events and their timing relationships.
Stochastic gradient descent is employed to learn the weights of the graph's edges that determine how important an event is with respect to another target event.
However, this approach falls in the category of SDE forecasting,
as it does not target complex events.
More precisely, it tries to forecast which predicates the forthcoming SDEs will satisfy,
without taking into account relationships between the events themselves (e.g., through simple sequences).
\section{Complex Event Recognition with Symbolic Automata}
\label{sec:symbolic}
Our approach for CEF is based on a specific formal framework for CER,
which we present here.
There are several surveys of CER methods,
presenting the various CER systems and languages \cite{DBLP:journals/csur/CugolaM12,DBLP:journals/csur/AlevizosSAP17,DBLP:journals/vldb/GiatrakosAADG20}.
Despite this, there is still no consensus about which operators must be supported by a CER language and what their semantics should be.
In this paper,
we follow \cite{DBLP:journals/vldb/GiatrakosAADG20} and \cite{DBLP:conf/icdt/GrezRU19},
which have established some core operators that are most often used.
In a spirit similar to \cite{DBLP:conf/icdt/GrezRU19},
we use automata as our computational model and define a CER language whose expressions can readily be converted to automata.
Instead of choosing one of the automaton models already proposed in the CER literature,
we employ symbolic regular expressions and automata \cite{DBLP:conf/cav/DAntoniV17,DBLP:journals/fmsd/DAntoniV15,DBLP:conf/icst/VeanesHT10}.
The rationale behind our choice is that,
contrary to other automata-based CER models,
symbolic regular expressions and automata have nice closure properties and clear (both declarative and operational), compositional semantics
(see \cite{DBLP:conf/icdt/GrezRU19} for a similar line of work,
based on symbolic transducers).
In previous automata-based CER systems,
it is unclear which operators may be used and if they can be arbitrarily combined
(see \cite{DBLP:conf/icdt/GrezRU19,DBLP:journals/vldb/GiatrakosAADG20} for a discussion of this issue).
On the contrary,
the use of symbolic automata allows us to construct any pattern that one may desire through an arbitrary use of the provided operators.
One can then check whether a stream satisfies some pattern either declaratively
(by making use of the definition for symbolic expressions, presented below) or operationally (by using a symbolic automaton).
This allows us to write unit tests (as we have done) ensuring that the semantics of symbolic regular expressions indeed coincide with those of symbolic automata,
something not possible with other frameworks.
In previous methods,
there is also a lack of understanding with respect to the properties of the employed computational models,
e.g., whether the proposed automata are determinizable,
an important feature for our work.
Symbolic automata, on the other hand, have nice closure properties and are well-studied.
Notice that this would also be an important feature for possible optimizations based on pattern re-writing,
since such re-writing would require us to have a mechanism determining whether two expressions are equivalent.
Our framework provides such a mechanism.
\subsection{Symbolic Expressions and Automata}
The main idea behind symbolic automata is that each transition,
instead of being labeled with a symbol from an alphabet,
is equipped with a unary formula from an effective Boolean algebra \cite{DBLP:conf/cav/DAntoniV17}.
A symbolic automaton can then read strings of elements and,
upon reading an element while in a given state,
can apply the predicates of this state's outgoing transitions to that element.
The transitions whose predicates evaluate to \textsf{\footnotesize TRUE}\ are said to be ``enabled'' and the automaton moves to their target states.
The formal definition of an effective Boolean algebra is the following:
\begin{definition}[Effective Boolean algebra \cite{DBLP:conf/cav/DAntoniV17}]
An effective Boolean algebra is a tuple
($\mathcal{D}$, $\Psi$, $\llbracket \_ \rrbracket$, $\bot$, $\top$, $\vee$, $\wedge$, $\neg$)
where
\begin{itemize}
\item $\mathcal{D}$ is a set of domain elements;
\item $\Psi$ is a set of predicates closed under the Boolean connectives;
\item $\bot,\top\in \Psi$;
\item and the component $\llbracket \_ \rrbracket : \Psi \rightarrow 2^{\mathcal{D}}$ is a function
such that
\begin{itemize}
\item $\llbracket \bot \rrbracket = \emptyset$
\item $\llbracket \top \rrbracket = \mathcal{D}$
\item and $\forall \phi,\psi \in \Psi$:
\begin{itemize}
\item $\llbracket \phi \vee \psi \rrbracket = \llbracket \phi \rrbracket \cup \llbracket \psi \rrbracket$
\item $\llbracket \phi \wedge \psi \rrbracket = \llbracket \phi \rrbracket \cap \llbracket \psi \rrbracket $
\item $\llbracket \neg \phi \rrbracket = \mathcal{D} \setminus \llbracket \phi \rrbracket$
\end{itemize}
\end{itemize}
\end{itemize}
It is also required that checking satisfiability of $\phi$, i.e., whether $\llbracket \phi \rrbracket \neq \emptyset$, is decidable and that the operations of $\vee$, $\wedge$ and $\neg$ are computable.
\end{definition}
Using our running example,
such an algebra could be one consisting of AIS messages,
corresponding to $\mathcal{D}$,
along with two predicates about the speed of a vessel,
e.g., $\mathit{speed} < 5$ and $\mathit{speed} > 20$.
These predicates would correspond to $\Psi$.
The predicate $\mathit{speed} < 5$ would be mapped,
via $\llbracket \_ \rrbracket$,
to the set of all AIS messages whose speed level is below 5 knots.
According to the definition above,
$\bot$ and $\top$ should also belong to $\Psi$,
along with all the combinations of the original two predicates constructed from the Boolean connectives,
e.g., $\neg (\mathit{speed} < 5) \wedge \neg (\mathit{speed} > 20)$.
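To make the definition concrete,
the following minimal sketch represents such an algebra in Python,
with predicates implemented extensionally as callables over event tuples;
the names and the dictionary-based event encoding are our own illustrative assumptions,
not taken from any particular CER implementation.
\begin{verbatim}
# A minimal sketch of an effective Boolean algebra over AIS-like
# events; names and the event encoding are illustrative assumptions.
from typing import Callable, Dict

Event = Dict[str, float]              # a domain element of D
Predicate = Callable[[Event], bool]   # an element of Psi

def low_speed(e: Event) -> bool:      # corresponds to  speed < 5
    return e["speed"] < 5

def high_speed(e: Event) -> bool:     # corresponds to  speed > 20
    return e["speed"] > 20

# Psi is closed under the Boolean connectives by construction:
def p_and(p: Predicate, q: Predicate) -> Predicate:
    return lambda e: p(e) and q(e)

def p_or(p: Predicate, q: Predicate) -> Predicate:
    return lambda e: p(e) or q(e)

def p_not(p: Predicate) -> Predicate:
    return lambda e: not p(e)

bottom: Predicate = lambda e: False   # [[bottom]] = empty set
top: Predicate = lambda e: True       # [[top]]    = D

# not(speed<5) and not(speed>20) holds for a cruising vessel:
cruising = p_and(p_not(low_speed), p_not(high_speed))
print(cruising({"speed": 12.0}))      # True
\end{verbatim}
Representing $\llbracket \_ \rrbracket$ extensionally (a predicate is identified with its satisfying set) keeps the sketch short;
a real implementation would also need the decidable satisfiability check required by the definition.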
Elements of $\mathcal{D}$ are called \emph{characters} and finite sequences of characters are called \emph{strings}.
A set of strings $\mathcal{L}$ constructed from elements of $\mathcal{D}$ ($\mathcal{L} \subseteq \mathcal{D}^{*}$, where $^{*}$ denotes Kleene-star) is called a language over $\mathcal{D}$.
As with classical regular expressions \cite{DBLP:books/daglib/0016921},
we can use symbolic regular expressions to represent a class of languages over $\mathcal{D}$.
\begin{definition}[Symbolic regular expression]
A symbolic regular expression ($\mathit{SRE}$) over an effective Boolean algebra ($\mathcal{D}$, $\Psi$, $\llbracket \_ \rrbracket$, $\bot$, $\top$, $\vee$, $\wedge$, $\neg$) is recursively defined as follows:
\begin{itemize}
\item The constants $\epsilon$ and $\emptyset$ are symbolic regular expressions with $\mathcal{L}(\epsilon) = \{\epsilon\}$ and $\mathcal{L}(\emptyset) = \emptyset$;
\item If $\psi \in \Psi$, then $R := \psi$ is a symbolic regular expression, with $\mathcal{L}(\psi) = \llbracket \psi \rrbracket$,
i.e., the language of $\psi$ is the subset of $\mathcal{D}$ for which $\psi$ evaluates to \textsf{\footnotesize TRUE};
\item Disjunction / Union: If $R_{1}$ and $R_{2}$ are symbolic regular expressions, then $R := R_{1} + R_{2}$ is also a symbolic regular expression, with $\mathcal{L}(R) = \mathcal{L}(R_{1}) \cup \mathcal{L}(R_2)$;
\item Concatenation / Sequence: If $R_{1}$ and $R_{2}$ are symbolic regular expressions, then $R := R_{1} \cdot R_{2}$ is also a symbolic regular expression, with $\mathcal{L}(R) = \mathcal{L}(R_{1}) \cdot \mathcal{L}(R_2)$, where $\cdot$ denotes concatenation. $\mathcal{L}(R)$ is then the set of all strings constructed from concatenating each element of $\mathcal{L}(R_{1})$ with each element of $\mathcal{L}(R_{2})$;
\item Iteration / Kleene-star: If $R$ is a symbolic regular expression, then $R' := R^{*}$ is a symbolic regular expression, with $\mathcal{L}(R^{*})=(\mathcal{L}(R))^{*}$,
where $\mathcal{L}^{*} = \bigcup\limits_{i \geq 0}{\mathcal{L}^{i}}$ and $\mathcal{L}^{i}$ is the concatenation of $\mathcal{L}$ with itself $i$ times.
\end{itemize}
\end{definition}
As an example,
if we want to detect instances of a vessel accelerating suddenly,
we could write the expression $R := (\mathit{speed < 5}) \cdot (\mathit{speed > 20})$.
The third and fourth events of the stream of Table \ref{table:stream} would then belong to the language of $R$.
Given a Boolean algebra, we can also define symbolic automata.
The definition of a symbolic automaton is the following:
\begin{definition}[Symbolic finite automaton \cite{DBLP:conf/cav/DAntoniV17}]
A symbolic finite automaton ($\mathit{SFA}$) is a tuple $M=$($\mathcal{A}$, $Q$, $q^{s}$, $Q^{f}$, $\Delta$),
where
\begin{itemize}
\item $\mathcal{A}$ is an effective Boolean algebra;
\item $Q$ is a finite set of states;
\item $q^{s} \in Q$ is the initial state;
\item $Q^{f} \subseteq Q$ is the set of final states;
\item $\Delta \subseteq Q \times \Psi_{\mathcal{A}} \times Q$ is a finite set of transitions.
\end{itemize}
\end{definition}
A string $w=a_{1}a_{2} \cdots a_{k}$ is accepted by a $\mathit{SFA}$\ $M$ iff,
for $1 \leq i \leq k$, there exist transitions $q_{i-1} \overset{a_{i}}{\rightarrow} q_{i}$ such that $q_{0}=q^{s}$ and $q_{k} \in Q^{f}$.
We refer to the set of strings accepted by $M$ as the language of $M$, denoted by $\mathcal{L}(M)$ \cite{DBLP:conf/cav/DAntoniV17}.
Figure \ref{fig:sfa_speed} shows a $\mathit{SFA}$\ that can detect the expression of sudden acceleration for our running example.
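As an operational illustration,
the following sketch simulates such a $\mathit{SFA}$\ for the sudden-acceleration pattern on a bounded string,
reusing the predicate helpers of the previous snippet
($\epsilon$-transitions and determinization are omitted for simplicity).
\begin{verbatim}
# A minimal sketch of SFA acceptance for R := (speed<5).(speed>20),
# reusing low_speed/high_speed from the previous snippet.
transitions = [            # Delta, a subset of Q x Psi x Q
    (0, low_speed, 1),
    (1, high_speed, 2),
]
initial, final = 0, {2}

def accepts(string):
    current = {initial}    # set-based nondeterministic simulation
    for event in string:
        current = {q2 for (q1, psi, q2) in transitions
                   if q1 in current and psi(event)}
    return bool(current & final)

print(accepts([{"speed": 3}, {"speed": 27}]))   # True
print(accepts([{"speed": 3}, {"speed": 10}]))   # False
\end{verbatim}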
As with classical regular expressions and automata,
we can prove that every symbolic regular expression can be translated to an equivalent (i.e., with the same language) symbolic automaton.
\begin{proposition}
\label{proposition:sre2sfa}
For every symbolic regular expression $R$ there exists a symbolic finite automaton $M$ such that $\mathcal{L}(R) = \mathcal{L}(M)$.
\end{proposition}
\begin{proof}
The proof is essentially the same as that for classical expressions and automata \cite{DBLP:books/daglib/0016921}.
It is a constructive proof starting from the base case of an expression that is a single predicate (instead of a symbol, as in classical expressions) and then proceeds in a manner identical to that of the classical case.
For the sake of completeness,
the full proof is provided in the Appendix, Section \ref{sec:proof:sre2sfa}.
\end{proof}
\input{figs_ssfa}
\subsection{Streaming Expressions and Automata}
Our discussion thus far has focused on how $\mathit{SRE}$\ and $\mathit{SFA}$\ can be applied to bounded strings that are known in their totality before recognition.
We feed a string to a $\mathit{SFA}$\ and we expect an answer about whether the whole string belongs to the automaton's language or not.
However, in CER and CEF we need to handle continuously updated streams of events and detect instances of $\mathit{SRE}$\ satisfaction as soon as they appear in a stream.
For example, the automaton of the (classical) regular expression $a \cdot b$ would accept only the string $a,b$.
In a streaming setting,
we would like the automaton to report a match every time this string appears in a stream.
For the stream $a,b,c,a,b,c$,
we would thus expect two matches to be reported,
one after the second symbol and one after the fifth.
In order to accommodate this scenario,
slight modifications are required so that $\mathit{SRE}$\ and $\mathit{SFA}$\ may work in a streaming setting.
First,
we need to make sure that the automaton can start its recognition after every new element.
If we have a classical regular expression $R$,
we can achieve this by applying on the stream the expression $\Sigma^{*} \cdot R$,
where $\Sigma$ is the automaton's (classical) alphabet.
For example,
if we apply $R := \{a,b,c\}^{*} \cdot (a \cdot b)$ on the stream $a,b,c,a,b,c$,
the corresponding automaton would indeed reach its final state after reading the second and the fifth symbols.
In our case,
events come in the form of tuples with both numerical and categorical values.
Using database systems terminology we can speak of tuples from relations of a database schema \cite{DBLP:conf/icdt/GrezRU19}.
These tuples constitute the set of domain elements $\mathcal{D}$.
A stream $S$ then has the form of an infinite sequence $S=t_{1},t_{2},\cdots$, where each $t_{i}$ is a tuple ($t_{i} \in \mathcal{D}$).
Our goal is to report the indices $i$ at which a CE is detected.
More precisely, if $S_{1..k}=\cdots,t_{k-1},t_{k}$ is the prefix of $S$ up to the index $k$,
we say that an instance of a $\mathit{SRE}$\ $R$ is detected at $k$ iff there exists a suffix $S_{m..k}$ of $S_{1..k}$ such that $S_{m..k} \in \mathcal{L}(R)$.
In order to detect CEs of a $\mathit{SRE}$\ $R$ on a stream, we use a streaming version of $\mathit{SRE}$\ and $\mathit{SFA}$.
\begin{definition}[Streaming SRE and SFA]
If $R$ is a $\mathit{SRE}$, then $R_{s}= \top^{*} \cdot R$ is called the streaming $\mathit{SRE}$\ ($\mathit{sSRE}$) corresponding to $R$.
A $\mathit{SFA}$\ $M_{R_{s}}$ constructed from $R_{s}$ is called a streaming $\mathit{SFA}$\ ($\mathit{sSFA}$) corresponding to $R$.
\end{definition}
Using $R_{s}$ we can detect CEs of $R$ while reading a stream $S$,
since a stream segment $S_{m..k}$ belongs to the language of $R$ iff the prefix $S_{1..k}$ belongs to the language of $R_{s}$.
The prefix $\top^{*}$ lets us skip any number of events from the stream and start recognition at any index $m, 1 \leq m \leq k$.
\begin{proposition}
\label{proposition:streamingsre}
If $S=t_{1},t_{2},\cdots$ is a stream of domain elements from an effective Boolean algebra $\mathcal{A} = (\mathcal{D}$, $\Psi$, $\llbracket \_ \rrbracket$, $\bot$, $\top$, $\vee$, $\wedge$, $\neg$), where $t_{i} \in \mathcal{D}$, and $R$ is a symbolic regular expression over the same algebra,
then, for every $S_{m..k}$, $S_{m..k} \in \mathcal{L}(R)$ iff $S_{1..k} \in \mathcal{L}(R_{s})$ (and $S_{1..k} \in \mathcal{L}(M_{R_{s}})$).
\end{proposition}
\begin{proof}
The proof is provided in the Appendix, Section \ref{sec:proof:streamingsre}.
\end{proof}
As an example,
if $R := (\mathit{speed < 5}) \cdot (\mathit{speed > 20})$ is the pattern for sudden acceleration,
then its $\mathit{sSRE}$\ would be $R_{s} := \top^{*} \cdot (\mathit{speed < 5}) \cdot (\mathit{speed > 20})$.
After reading the fourth event of the stream of Table \ref{table:stream},
$S_{1..4}$ would belong to the language of $\mathcal{L}(R_{s})$ and $S_{3..4}$ to the language of $\mathcal{L}(R)$.
Note that $\mathit{sSRE}$\ and $\mathit{sSFA}$\ are just special cases of $\mathit{SRE}$\ and $\mathit{SFA}$\ respectively.
Therefore, every result that holds for $\mathit{SRE}$\ and $\mathit{SFA}$\ also holds for $\mathit{sSRE}$\ and $\mathit{sSFA}$\ as well.
Figure \ref{fig:ssfa_speed} shows an example $\mathit{sSFA}$.
The streaming behavior of a $\mathit{sSFA}$\ as it consumes a stream $S$ can be formally defined using the notion of configuration:
\begin{definition}[Configuration of sSFA]
Assume $S=t_{1},t_{2},\cdots$ is a stream of domain elements from an effective Boolean algebra,
$R$ a symbolic regular expression over the same algebra
and $M_{R_{s}}$ a $\mathit{sSFA}$\ corresponding to $R$.
A configuration $c$ of $M_{R_{s}}$ is a tuple $[i,q]$,
where $i$ is the current position of the stream,
i.e., the index of the next event to be consumed,
and $q$ the current state of $M_{R_{s}}$.
We say that $c'=[i',q']$ is a successor of $c$ iff:
\begin{itemize}
\item $\exists \delta \in M_{R_{s}}.\Delta: \delta = (q,\psi,q') \wedge (t_{i} \in \llbracket \psi \rrbracket \vee \psi = \epsilon)$;
\item $i' = i$ if $\psi = \epsilon$; otherwise, $i' = i + 1$.
\end{itemize}
We denote a succession by $[i,q] \overset{\delta}{\rightarrow} [i',q']$.
\end{definition}
For the initial configuration $c^{s}$, before consuming any events,
we have that $i=1$ and $c^{s}.q = M_{R_{s}}.q^{s}$,
i.e. the state of the first configuration is the initial state of $M_{R_{s}}$.
In other words, for every index $i$, we move from our current state $q$ to another state $q'$ if there is an outgoing transition from $q$ to $q'$ and the predicate on this transition evaluates to \textsf{\footnotesize TRUE}\ for $t_{i}$.
We then increase the reading position by 1.
Alternatively, if the transition is an $\epsilon$-transition, we move to $q'$ without increasing the reading position.
The actual behavior of a $\mathit{sSFA}$\ upon reading a stream is captured by the notion of the run:
\begin{definition}[Run of sSFA over stream]
A run $\varrho$ of a $\mathit{sSFA}$\ $M$ over a stream $S_{1..k}$ is a sequence of successor configurations
$[1,q_{1}=M.q^{s}] \overset{\delta_{1}}{\rightarrow} [2,q_{2}] \overset{\delta_{2}}{\rightarrow} \cdots \overset{\delta_{k}}{\rightarrow} [k+1,q_{k+1}]$.
A run is called accepting iff $q_{k+1} \in M.Q^{f}$.
\end{definition}
A run $\varrho$ of a $\mathit{sSFA}$\ $M_{R_{s}}$ over a stream $S_{1..k}$ is accepting iff $S_{1..k} \in \mathcal{L}(R_{s})$,
since $M_{R_{s}}$, after reading $S_{1..k}$, must have reached a final state.
Therefore, for a $\mathit{sSFA}$\ that consumes a stream, the existence of an accepting run with configuration index $k+1$ implies that a CE for the $\mathit{SRE}$\ $R$ has been detected at the stream index $k$.
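Continuing the running Python sketch (same illustrative assumptions, no $\epsilon$-transitions),
the $\top^{*}$ prefix becomes a self-loop that keeps the initial state active at every index,
and a CE is reported at every index whose configuration reaches a final state.
\begin{verbatim}
# A minimal sketch of a streaming run for the sSFA of
# R := (speed<5).(speed>20); the top* prefix is simulated by
# keeping the initial state active at every index.
def stream_matches(stream):
    current, matches = {initial}, []
    for i, event in enumerate(stream, start=1):
        current = {q2 for (q1, psi, q2) in transitions
                   if q1 in current and psi(event)}
        current.add(initial)        # the top* self-loop on q^s
        if current & final:
            matches.append(i)       # CE detected at stream index i
    return matches

stream = [{"speed": 3}, {"speed": 27}, {"speed": 4}, {"speed": 25}]
print(stream_matches(stream))       # [2, 4]
\end{verbatim}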
As far as the temporal model is concerned,
we assume that all SDEs are instantaneous.
They all carry a \emph{timestamp} attribute which is a single, unique numerical value.
We also assume that the stream of SDEs is temporally sorted.
A sequence/concatenation operator is thus satisfied if the event of its first operand precedes in time the event of its second operand.
The exception to the above assumptions is when the stream is composed of multiple partitions and the defined pattern is applied on a per-partition basis.
For example,
in the maritime domain a stream may be composed of the sub-streams generated by all vessels and we may want to detect the same pattern for each individual vessel.
In such cases,
the above assumptions must hold for each separate partition but not necessarily across all partitions.
Another general assumption is that there is no imposed limit on the time elapsed between consecutive events in a sequence operation.
\subsection{Expressive Power of Symbolic Regular Expressions}
We conclude this section with some remarks about the expressive power of $\mathit{SRE}$\ and $\mathit{SFA}$\ and how it meets the requirements of a CER system.
As discussed in \cite{DBLP:journals/vldb/GiatrakosAADG20,DBLP:conf/icdt/GrezRU19},
besides the three operators of regular expressions that we have presented and implemented in this paper (disjunction, sequence, iteration),
there exist some extra operators which should be supported by a CER system.
\emph{Negation} is one of them.
If we use $!$ to denote the negation operator,
then $R' := !R$ defines a language which is the complement of the language of $R$.
Since $\mathit{SFA}$\ are closed under complement \cite{DBLP:conf/cav/DAntoniV17},
negation is an operator that can be supported by our framework and has also been implemented
(but not discussed further).
The same is true for the operator of \emph{conjunction}.
If we use $\wedge$ to denote conjunction,
then $R := R_{1} \wedge R_{2}$ is an expression whose language consists of concatenated elements of $\mathcal{L}(R_{1})$ and $\mathcal{L}(R_{2})$, regardless of their order,
i.e., $\mathcal{L}(R) = \mathcal{L}(R_{1}) \cdot \mathcal{L}(R_{2}) \cup \mathcal{L}(R_{2}) \cdot \mathcal{L}(R_{1})$.
This operator can thus be equivalently expressed using the already available operators of concatenation ($\cdot$) and disjunction ($+$).
Another important notion in CER is that of \emph{selection policies}.
An expression like $R := R_{1} \cdot R_{2}$ typically implies that an instance of $R_{2}$ must immediately follow an instance of $R_{1}$.
As a result,
for the stream of Table \ref{table:stream} and $R := (\mathit{speed < 5}) \cdot (\mathit{speed > 20})$,
only one match will be detected at $\mathit{timestamp}=4$.
With selection policies,
we can relax the requirement for contiguous instances.
For example,
with the so-called \textsf{\footnotesize skip-till-any-match}\ policy,
any number of events are allowed to occur between $R_{1}$ and $R_{2}$.
If we apply this policy on $R := (\mathit{speed < 5}) \cdot (\mathit{speed > 20})$,
we would detect six CEs,
since the first three events of Table \ref{table:stream} can be matched with the two events at $\mathit{timestamp}=4$ and at $\mathit{timestamp}=6$,
if we ignore all intermediate events.
Selection policies can also be accommodated by our framework and have been implemented.
For a proof, using symbolic transducers, see \cite{DBLP:conf/icdt/GrezRU19}.
Notice, for example,
that an expression $R := R_{1} \cdot R_{2}$ can be evaluated with \textsf{\footnotesize skip-till-any-match}\ by being rewritten as $R' := R_{1} \cdot \top^{*} \cdot R_{2}$,
so that any number of events may occur between $R_{1}$ and $R_{2}$.
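In the running Python sketch, this rewriting amounts to adding a $\top$-labelled self-loop on the intermediate state;
note that the sketch only reports detection indices,
and enumerating all distinct matches under \textsf{\footnotesize skip-till-any-match}\ would require extra bookkeeping of individual runs.
\begin{verbatim}
# skip-till-any-match for R := R1 . R2, sketched by rewriting it to
# R1 . top* . R2: a top-labelled self-loop on the middle state.
transitions_any = transitions + [(1, top, 1)]

def stream_matches_any(stream):
    current, matches = {initial}, []
    for i, event in enumerate(stream, start=1):
        current = {q2 for (q1, psi, q2) in transitions_any
                   if q1 in current and psi(event)}
        current.add(initial)
        if current & final:
            matches.append(i)
    return matches
\end{verbatim}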
Support for hierarchical definition of patterns,
i.e., the ability to define patterns in terms of other patterns,
is yet another important feature in many CER systems.
Since $\mathit{SRE}$\ and $\mathit{SFA}$\ are compositional by nature,
hierarchies are supported by default in our framework.
Although we do not treat these operators and functionalities explicitly in this paper,
their incorporation is possible within the expressive limits of $\mathit{SRE}$\ and $\mathit{SFA}$\ and the results that we present in the next sections would still hold.
\section{Why non-magnetic disorders in 3-d quantum spin Hall systems?}
A 3-d $Z_2$ quantum spin Hall
insulator~\cite{prb-mb,prb-roy,prlb-fkm} is defined to
be accompanied by an integer number of surface conducting
channels, each of which is described by a 2+1
massless Dirac fermion, i.e. a helical surface state.
The stability of each helical
surface state is protected by the Kramers degeneracy
at the time-reversal (${\cal T}$-) symmetric (surface)
crystal momentum. The associated spin is directed within
the $XY$ plane, and rotates an odd number of times when the
surface crystal momentum rotates once around this
${\cal T}$-symmetric $k$-point.
While an individual helical surface state is
protected by the ${\cal T}$-symmetry, level-crossings
between any two different helical surface states are
generally accidental: they can be lifted
by a certain ${\cal T}$-
symmetric perturbation~\cite{prlb-fkm,prl-km,prl-wbz,prb-xm}.
As a result of this pair annihilation, those
insulators having an even number of helical surface states
can be adiabatically connected to an ordinary
band insulator having no surface conducting channels
at all (`weak topological insulator'). On the other
hand, those with an odd number of helical surface states
are always accompanied by (at least) one gapless
surface state, even when subjected to this pair
annihilation (`strong topological insulator').
In the presence of the ${\cal T}$-symmetry,
the latter type of insulator cannot be decomposed
into any two copies of spinless wavefunctions, and
is therefore regarded as a new quantum state of
matter which goes beyond the quantum Hall paradigm.
The quantum critical
point~\cite{njp-m,prb-mk,prb-qtz,prb-srfl}
which intervenes between this 3-d topological insulator and
a ${\cal T}$-symmetric ordinary band insulator
is also non-trivial. That is, the stability of
its critical nature is tightly connected to
the stability of each helical surface
state in the topological phase. To be specific,
consider that a certain ${\cal T}$-symmetric
model-parameter is changed,
such that the system transits
from the topological phase
to a ${\cal T}$-symmetric
ordinary phase. During this process,
the helical surface state in the topological phase
is always protected by the ${\cal T}$-symmetry.
When the system reaches the quantum critical
point, however, the bulk-wavefunction becomes extended once.
Thus, the two helical surface states localized
at two opposite sample boundaries can
communicate via this extended bulk-wavefunction,
only to annihilate with each other.
Thanks to this pair annihilation
{\it mediated by the extended bulk},
the system can safely remove the stable
helical surface state and enter the ordinary
phase having no surface conducting channels,
while simultaneously keeping the ${\cal T}$-symmetry.
To put this the other way around,
the bulk-extended character of the
quantum critical point is required to be stable
against any ${\cal T}$-symmetric perturbations,
provided that an individual helical surface state
in the topological phase is stable against
the same perturbations. Such a quantum critical
point can be dubbed a strong topological
quantum critical point.
The above picture in the clean limit implies non-trivial
disorder effects around this topological quantum critical
point~\cite{prb-sm}.
To describe this with clarity, let us first define three
independently-tunable physical
parameters in the topological phase: (i) the (topological) band
gap $W_{\rm topo}$, (ii) the band width
of the conduction/valence band $W_{\rm band}$, and
(iii) the non-magnetic disorder strength $W_{\rm dis}$.
Provided that
$W_{\rm dis}<W_{\rm topo}$, the helical surface state
in the topological phase is stable against any
non-magnetic disorders irrespective
of $W_{\rm band}$~\cite{prl-btbb,prl-rmof,prl-nkr}.
On increasing $W_{\rm dis}$ such that
$W_{\rm band}<W_{\rm dis}<W_{\rm topo}$, those bulk-states
far from the two band-centers, i.e. that of the conduction
band and of the valence band, become localized
(see Fig.~1(a))~\cite{prl-aalr}.
Especially near the zero-energy region, $\mu=0$,
the system has no extended bulk wavefunctions.
Thus, two helical surface states localized
at the two opposite boundaries respectively
are disconnected from each other. Starting from
such a localized phase, consider decreasing the
topological band gap (or increasing the disorder
strength), such that $W_{\rm band} < W_{\rm topo} < W_{\rm dis}$.
During this decrease, these two disconnected surface
states should communicate with each other once, so as to
annihilate with each other, before the system enters
the ordinary insulator phase. This requires that, at
$W_{\rm topo} \simeq W_{\rm dis}$, the bulk-wavefunctions
at $\mu \simeq 0$ must become extended once, so as to mediate
between these two opposite boundaries.
Moreover, it also requires that such a bulk-extended region
always intervenes between the topological phase
and the ordinary insulator phase in the
three-dimensional parameter space spanned by the chemical
potential $\mu$, $W_{\rm dis}$ and $W_{\rm topo}$ (see Fig.~1(b,c)).
Such behaviour of the bulk extended region is often dubbed
the levitation and pair-annihilation phenomenon, and has also been
ubiquitously observed in other topological
systems, such as quantum Hall systems~\cite{prb-h,prl-aa,prl-hb}
and 2-d $Z_2$ quantum spin Hall systems~\cite{prl-oan,prb-ofrm,arx-gwab}.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.7]{sch2-b.eps}
\caption{ (a) A schematic density of states at
$W_{\rm dis}<W_{\rm topo}(\equiv m)$.
(b,c) Schematic phase diagrams in the
$\mu$-$W_{\rm topo}$-$W_{\rm dis}$ plane. The
blue regions in these three figures stand for the
delocalized state, which always intervenes between the
topological insulator phase and an ordinary insulator
phase in any `${\cal T}$-symmetric' phase diagram.
The figures are derived from Ref.~\cite{prb-sm}. }
\label{spd1}
\end{center}
\end{figure}
Now that the 3-d $Z_2$ topological band insulator
is a new quantum state of matter having
no quantum-Hall analogue, the bulk-delocalized phase which
intervenes between this insulating phase (or the localized state
adiabatically connected to it) and the ordinary band
insulator (or corresponding localized state) is also
a new type of three-dimensional metallic phase,
which has no analogue in any other condensed
matter paradigm.
Taking into account the chemical-potential type
disorder (the most representative non-magnetic
disorder), we have previously studied~\cite{prb-sm}
the self-consistent Born phase diagram
around this topological metallic phase (or quantum
critical point) and calculated the
weak-localization correction to the charge
conductivity.
In actual solids~\cite{nat-452-970-08,sci-325-178-09,
np-5-438-09,sci-323-919-09,nat-460-1101-09}, however,
electronic disorders can generally exist not
just in the chemical potential, but also
in the transfer integral and spin-orbit
interaction potential itself.
The above argument based on
the bulk-edge correspondence clearly dictates
that the critical nature of the topological
quantum critical point is stable against any of these
non-magnetic disorders (not just against the chemical
potential type disorder).
Motivated by this, we will
treat in this paper various
types of non-magnetic disorder potentials
on an equal footing, so as to clarify
the actual/generic properties of
the topological quantum critical point
in disordered $Z_2$ quantum spin Hall
systems. The organization of this paper is as follows.
In sec.~2, we summarize the
effective continuum model studied in this paper.
We next argue in sec.~3 that
the non-trivial Berry phase (i.e. $e^{i\pi}$) inherent
in the 3-d topological quantum critical point (TQCP)
induces the perfect cancellation in the backward
scattering channels mediated by the chemical-potential
type disorder. This {\it partially} upholds the above
argument based on the bulk-edge correspondence. However,
it also becomes clear that this `Berry phase' argument
does {\it not necessarily} work in the presence of those
non-magnetic disorders other than the
chemical-potential type one. More accurately,
those backward scattering processes which do not conserve
a certain internal degree of freedom, i.e. the parity density
degree of freedom, are {\it not generally}
canceled by their ${\cal T}$-reversal
counter-processes. Prompted by this theoretical observation,
we further derive in sec.~4
the self-consistent Born phase diagram in the
presence of {\it generic} ${\cal T}$-symmetric disorder
potentials. Based on this, we discuss in sec.~5
the weak-localization correction to the charge conductivity in the
presence of both the chemical-potential type disorder and
the topological-mass type disorder.
The distinctions between the case with only the
chemical potential disorder and that with
{\it generic} ${\cal T}$-symmetric disorders are
summarized/clarified in sec.~6.
\section{Effective continuum model and ${\cal T}$-symmetric disorders}
An effective continuum model describing the
topological quantum critical point
is given by the following $3+1$ Dirac
fermion~\cite{prlb-fkm,njp-m,prb-mk,prb-qtz};
\begin{eqnarray}
{\cal H}_0 \equiv \int d^3 r \psi^{\dagger}(r)
\bigg\{ \sum_{\mu=1}^3 \hat{\gamma}_{\mu} (-i\partial_{\mu})
+ m \hat{\gamma}_5 \bigg\} \psi(r), \label{ham}
\end{eqnarray}
where the two massive phases, $m>0$ and $m<0$, represent the
quantum spin Hall insulator and the ${\cal T}$-symmetric
conventional (band) insulator respectively: `$m$' corresponds to
the topological band gap $W_{\rm topo}$. The $4$ by $4$
$\gamma$-matrices consist of the Pauli spin matrix part ${\bm s}$ and
the other Pauli matrix ${\bm \sigma}$ representing the sublattice
(or orbital) degree of freedom. There exist at most five
such $\gamma$-matrices which anti-commute
with one another. Here they are dubbed
$\gamma_{1}$, $\gamma_2$,
$\gamma_3$, $\gamma_4$ and $\gamma_5$.
The product of these five always reduces
to (the minus of) the unit matrix;
$\hat{\gamma}_1 \hat{\gamma}_2 \hat{\gamma}_3
\hat{\gamma}_4 \hat{\gamma}_5 = -1$.
Thus, an even number of $\gamma$-matrices out of these five
should be time-reversal (${\cal T}$)-odd and spatial
inversion (${\cal I}$)-odd. In eq.~(\ref{ham}), three of
them are already linearly coupled with the momentum, which is
itself ${\cal T}$-odd and ${\cal I}$-odd.
Thus, four of them should be both ${\cal T}$-odd and ${\cal I}$-odd.
Without loss of generality, in eq.~(\ref{ham}) we took
$\gamma_{1,2,3,4}$ to be ${\cal T}$-odd and ${\cal I}$-odd,
so that $\gamma_{5}$ is ${\cal T}$-even and ${\cal I}$-even.
Under this convention, $\gamma_{15} \equiv -i\gamma_1\gamma_5$,
$\gamma_{25}$, $\gamma_{35}$ and
$\gamma_{45}$ are all ${\cal T}$-even and
${\cal I}$-odd, while $\gamma_{12}$, $\gamma_{13}$, $\gamma_{14}$,
$\gamma_{23}$, $\gamma_{34}$ and $\gamma_{42}$ are all
${\cal T}$-odd and ${\cal I}$-even.
Thus, arbitrary `on-site' type non-magnetic disorder
potentials are generally given by,
\begin{eqnarray}
{\cal H}_{\rm imp} \equiv \int d^3r
\psi^{\dagger}(r) \bigg\{ v_0(r) \hat{\gamma}_0
+ v_5 (r) \hat{\gamma}_5 +
\sum_{j=1}^4 v_{j5}(r) \hat{\gamma}_{j5} \bigg\} \psi(r). \label{imp}
\end{eqnarray}
with the real-valued functions $v_{j}$ $(j=0,5,15,\cdots,45)$.
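These matrix conventions are easy to check numerically.
The following sketch verifies the anticommutation relations and the product
identity in one explicit representation,
$\hat{\gamma}_{1,2,3} = \hat{s}_{x,y,z}\otimes\hat{\sigma}_x$,
$\hat{\gamma}_4 = \hat{s}_0\otimes\hat{\sigma}_y$ and
$\hat{\gamma}_5 = \hat{s}_0\otimes\hat{\sigma}_z$;
this particular representation is our own choice for concreteness,
and any unitarily equivalent one works equally well.
\begin{verbatim}
import numpy as np

# Pauli matrices for the spin (s) and sublattice (sigma) sectors.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# One explicit (assumed) representation of the five gamma matrices.
g = [np.kron(sx, sx), np.kron(sy, sx), np.kron(sz, sx),
     np.kron(I2, sy), np.kron(I2, sz)]     # gamma_1 ... gamma_5

# Pairwise anticommutation: {gamma_i, gamma_j} = 2 delta_ij.
for i in range(5):
    for j in range(5):
        ac = g[i] @ g[j] + g[j] @ g[i]
        assert np.allclose(ac, 2.0 * (i == j) * np.eye(4))

# The product of all five reduces to minus the unit matrix.
assert np.allclose(g[0] @ g[1] @ g[2] @ g[3] @ g[4], -np.eye(4))
print("gamma algebra verified")
\end{verbatim}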
In Ref.~\cite{prb-sm}, we have further assumed
that the chemical-potential type disorder is the most dominant
and have set the other five to be zero:
$v_{5}=v_{15}=\cdots = v_{45}=0$.
Based on these simplifications, we have carried out the
self-consistent Born analysis and weak-localization
calculations, and obtained a simple microscopic picture,
which can indeed explain how the bulk-extended region
emerges when the topological band gap `$m$' changes its
sign~\cite{prb-sm}. In the next section, we will
describe an alternative argument, which more directly
dictates that the topological metallic phase
(or quantum critical point) is free from any localization
effect induced by generic non-magnetic disorders.
\section{`Berry phase' effect in 3-d topological metallic phases}
Our argument here is a straightforward 3-dimensional generalization of
the `Berry phase' argument invented by Ando
{\it et al.}~\cite{jpsj-ans} in the context of
carbon-nanotubes (or 2-d graphene) subjected
to {\it long}-range impurity potentials.
Specifically, we will prove that an individual backward
scattering process is precisely canceled by its time-reversal
counterpart, which leads to a complete absence of the backward
scattering at the 3-d topological quantum critical point.
Consider first the simplest case, where the $T$-matrix is
composed only of the chemical-potential-type impurity $v_0(r)$,
\begin{eqnarray}
\hspace{-1cm}
\hat{T} \equiv v_{0}
+ v_{0}
\frac{1}{\epsilon-\hat{\gamma}_\mu (-i\partial_{\mu})}
v_{0}
+ v_{0}
\frac{1}{\epsilon-\hat{\gamma}_\mu (-i\partial_{\mu})}
v_{0}
\frac{1}{\epsilon-\hat{\gamma}_\mu (-i\partial_{\mu})}
v_{0}
+ \cdots . \nonumber
\end{eqnarray}
In the momentum representation, the $(p+1)$-th order backward scattering term reads,
\begin{eqnarray}
&& \hspace{-2.5cm}
\langle -k,\epsilon_{-k},\sigma|\hat{T}^{(p+1)}|k,\epsilon_{k},\sigma'\rangle
= \sum_{\tau_1,k_1,\sigma_1}\cdots \sum_{\tau_p,k_p,\sigma_p}
\frac{v_0(-k-k_{p})\cdots v_{0}(k_2-k_1) v_0(k_1-k)}
{(\epsilon-\tau_{p}\epsilon_{k_{p}})\cdots
(\epsilon-\tau_{2}\epsilon_{k_{2}})
(\epsilon-\tau_{1}\epsilon_{k_{1}})}\times \nonumber \\
&& \hspace{-1.8cm}
\langle - k,\epsilon_{-k},\sigma|k_{p},\tau_{p}\epsilon_{k_p},
\sigma_{p} \rangle \times \cdots \times
\langle k_{2},\tau_{2}\epsilon_{k_2},
\sigma_{2} | k_{1},\tau_{1}\epsilon_{k_1},\sigma_1\rangle
\langle k_1,\tau_1\epsilon_{k_1},\sigma_1|
k,\epsilon_k,\sigma'\rangle, \label{mat}
\end{eqnarray}
where $\tau_j=\pm$ ($j=1,\cdots,p$) in
combination with $\epsilon_{k_j}$
specifies the eigen-energy of the hamiltonian at the
momentum $k_j$, i.e. $\tau_j \epsilon_{k_j} = \pm |k_j|$.
Such an eigen-energy is always accompanied by
doubly degenerate eigen-states, which are specified
by the spin index, $\sigma_j=\uparrow,\downarrow$.
Namely, each eigen-state at the momentum $k_j$ is
uniquely identified by these two indices,
$|k_j,\tau_j\epsilon_{k_j},\sigma_j\rangle$. Without
loss of generality, we can take the initial ket-state
and final bra-state to have a positive eigen-energy,
i.e. $|k,\epsilon_k,\sigma\rangle$ and
$\langle -k,\epsilon_{-k},\sigma|$.
We can also fix the gauge of the eigen-wavefunctions, e.g.
\begin{eqnarray}
|k,\tau \epsilon_{k_j},\sigma\rangle \equiv
e^{i\frac{\theta}{2}\big(\sin\phi\hat{\gamma}_{23} -
\cos\phi \hat{\gamma}_{31}\big)} |\tau,\sigma\rangle_3 \equiv
\hat{U}_{k} |\tau,\sigma\rangle_3, \nonumber
\end{eqnarray}
with $k=|k|(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)$.
$|\tau,\sigma\rangle_3$ are the two-fold degenerate eigenstates of
$\hat{\gamma}_3$ which belong to its eigenvalue $\tau$, i.e.
$\hat{\gamma}_3 |\tau,\sigma\rangle_3 \equiv
\tau |\tau,\sigma\rangle_3$.
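One can verify numerically that this gauge choice rotates $\hat{\gamma}_3$ onto
the momentum direction,
$\hat{U}_{k}\,\hat{\gamma}_3\,\hat{U}^{\dagger}_{k} = (k/|k|)_{\mu}\hat{\gamma}_{\mu}$,
so that $\hat{U}_k|\tau,\sigma\rangle_3$ is indeed an eigenstate of the kinetic
term with eigenvalue $\tau|k|$. A sketch in the explicit representation assumed
earlier (with $\hat{\gamma}_{ij}\equiv -i\hat{\gamma}_i\hat{\gamma}_j$, the
convention used above for $\hat{\gamma}_{15}$):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g1, g2, g3 = np.kron(sx, sx), np.kron(sy, sx), np.kron(sz, sx)

g23 = -1j * g2 @ g3                 # gamma_{23}
g31 = -1j * g3 @ g1                 # gamma_{31}

theta, phi = 0.7, 1.9               # an arbitrary direction
U = expm(1j * (theta / 2) * (np.sin(phi) * g23 - np.cos(phi) * g31))

khat = np.array([np.sin(theta) * np.cos(phi),
                 np.sin(theta) * np.sin(phi),
                 np.cos(theta)])
assert np.allclose(U @ g3 @ U.conj().T,
                   khat[0] * g1 + khat[1] * g2 + khat[2] * g3)
print("gauge rotation verified")
\end{verbatim}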
Based on this set-up, we can explicitly argue that the
following two backward scattering processes, which
are related by the time-reversal operation, cancel each
other in eq.~(\ref{mat}),
\begin{eqnarray}
&&\hspace{-0.3cm}
|k,+,\sigma \rangle \rightarrow |k_1,\tau_1,\cdots\rangle
\rightarrow \cdots \rightarrow |k_p,\tau_{p},\cdots \rangle
\rightarrow |-k,+,\sigma'\rangle, \nonumber \\
&&\hspace{-0.3cm}
|k,+,\sigma \rangle \rightarrow |-k_p,\tau_{p},\cdots\rangle
\rightarrow \cdots \rightarrow |-k_1,\tau_1,\cdots\rangle
\rightarrow |-k,+,\sigma'\rangle. \nonumber
\end{eqnarray}
where the summation over the spin indices associated with the
intermediate states is assumed. To be more explicit, we can
show that the following two have opposite signs,
\begin{eqnarray}
&& \hspace{-0.2cm}
S^{(p+1)}_{\sigma\sigma^{\prime}}
= \sum_{\{\sigma_j\}}
\langle - k,\epsilon_{k},\sigma|k_{p},\tau_{p}\epsilon_{k_p},
\sigma_{p} \rangle \cdots
\langle k_1,\tau_1\epsilon_{k_1},\sigma_1|
k,\epsilon_k,\sigma' \rangle, \nonumber \\
&&\hspace{-0.2cm}
S^{(p+1)\prime}_{\sigma\sigma^{\prime}} = \sum_{\{\sigma_j\}}
\langle - k,\epsilon_{k},\sigma|-k_{1},\tau_{1}\epsilon_{k_1},
\sigma_{1} \rangle \cdots \langle -k_{p},\tau_{p}\epsilon_{k_p},
\sigma_{p} | k,\epsilon_{k},\sigma'\rangle. \nonumber
\end{eqnarray}
for any $\sigma$ and $\sigma'$. To see this,
take the sum over all the intermediate spin indices
first.
This leads to
\begin{eqnarray}
&& \hspace{-2.cm}
S^{(p+1)}_{\sigma\sigma^{\prime}} =
\ _3 \langle +,\sigma| \hat{U}^{\dagger}_{-k}
\cdot (\hat{\gamma}_0 - n_{p,\mu} \hat{\gamma}_{\mu})
\cdots (\hat{\gamma}_0 - n_{2,\nu} \hat{\gamma}_{\nu})
\cdot
(\hat{\gamma}_0 - n_{1,\lambda} \hat{\gamma}_{\lambda})
\cdot \hat{U}_k |+,\sigma'\rangle_3. \nonumber \\
&&\hspace{-2.cm}
S^{(p+1)\prime}_{\sigma\sigma^{\prime}} =\ _3 \langle +,\sigma|
\hat{U}^{\dagger}_{-k}
\cdot (\hat{\gamma}_0 + n_{1,\mu} \hat{\gamma}_{\mu})
\cdot (\hat{\gamma}_0 + n_{2,\nu} \hat{\gamma}_{\nu})
\cdots
(\hat{\gamma}_0 + n_{p,\lambda} \hat{\gamma}_\lambda) \cdot
\hat{U}_k |+,\sigma'\rangle_3.
\nonumber
\end{eqnarray}
where $n_j$ is the product of $\tau_j$ and the normalized vector parallel
to $k_j$, $n_j \equiv \tau_j k_j/|k_j|$.
The sequential product of the projection operators can be
further expanded in these unit vectors, i.e.
\begin{eqnarray}
&& \hspace{-2.2cm}
(1+n_p\cdot \gamma)\cdots (1+n_2\cdot \gamma)\cdot
(1+ n_1\cdot \gamma) \nonumber\\
&&\hspace{-0.7cm}
= 1 + \sum n_j \cdot \gamma
+ \sum_{i>j} (n_i \cdot \gamma) (n_j \cdot \gamma)
+ \cdots + (n_p \cdot \gamma) \cdots (n_{2}\cdot \gamma)
(n_1\cdot \gamma). \label{exp}
\end{eqnarray}
An even-order term in this expansion is
always a linear combination of $\gamma_0$, $\gamma_{23}$,
$\gamma_{31}$ and $\gamma_{12}$,
\begin{eqnarray}
\hspace{0.7cm}
(n_{2l}\cdot \gamma)\cdots (n_{1}\cdot \gamma)
= A_{l} \gamma_0 + \epsilon_{\mu\nu\lambda}B_{l,\mu}
\gamma_{\nu\lambda}. \nonumber
\end{eqnarray}
$A_{l}$ is a scalar quantity, which is composed
only of inner products among the $2l$ unit vectors.
On the other hand, $B_l$ clearly behaves as a vector, being
coupled to $\gamma_{23}$, $\gamma_{31}$ and $\gamma_{12}$.
It is expressed as (a linear combination of) the product
between $l-1$ inner products and one outer product,
\begin{eqnarray}
B_{l,\mu} \equiv \sum_{l,m,\cdots,n,o\ne i,j}
b^{l}_{ij,lm\cdots no} (n_i \times n_j)_{\mu} (n_l\cdot n_m) \cdots
(n_{n}\cdot n_{o}), \nonumber
\end{eqnarray}
with some coefficients $b^{\cdots}_{\cdots}$ free from $\{n_1,\cdots,n_{2l}\}$.
Multiplying this even-order term by one additional $n \cdot \gamma$,
one obtains any odd-order term in eq.~(\ref{exp}) as a
linear combination of $\gamma_1$, $\gamma_2$,
$\gamma_3$ and $\gamma_{45}$;
\begin{eqnarray}
\hspace{0.7cm}
(n_{2l+1}\cdot \gamma)\cdots (n_{1}\cdot \gamma)
= C_{l,\mu} \gamma_{\mu} + D_{l}\gamma_{45}. \nonumber
\end{eqnarray}
$C_{l}$ is a vector composed of $l$ inner products,
while $D_l$ is a scalar containing one scalar triple product,
\begin{eqnarray}
&&\hspace{-0.2cm}
C_{l,\mu} = \sum_{n,o,\cdots,p,q\ne m} c^{l}_{m,no\cdots pq}
n_{m,\mu} (n_n\cdot n_o)\cdots (n_p\cdot n_q), \nonumber \\
&&\hspace{-0.2cm}
D_l = \sum_{n,o,\cdots,p,q \ne i,j,m} d^l_{ijm,no\cdots pq}
(n_i \times n_j)\cdot n_m (n_n\cdot n_o)\cdots (n_p\cdot n_q). \nonumber
\end{eqnarray}
Thus, under the time-reversal operation, i.e.
$(n_p,n_{p-1},\cdots,n_1) \rightarrow (-n_1,-n_2,\cdots,-n_p)$,
$A_l$ and $D_l$ do not change the sign, while $B_l$ and $C_l$
change their signs. These observations lead to a perfect
cancellation between $S^{(p+1)}_{\sigma\sigma'}$ and
$S^{(p+1)\prime}_{\sigma\sigma'}$,
\begin{eqnarray}
\hspace{-1.6cm}
S^{(p+1)}_{\sigma\sigma^{\prime}} +
S^{(p+1)\prime}_{\sigma\sigma^{\prime}}
&=& 2{\sum}\ _3 \langle +,\sigma|\hat{U}^{\dagger}_{-k}
\cdot \big\{A \hat{\gamma}_0 + D \hat{\gamma}_{45} \big\}\cdot \hat{U}_k
|+,\sigma^{\prime}\rangle_3, \nonumber \\
&=& 2i{\sum}\ _3 \langle +,\sigma| (-\sin\phi \hat{\gamma}_{23}
+ \cos\phi \hat{\gamma}_{31})\cdot ( A \hat{\gamma}_0 + D \hat{\gamma}_{45})
|+,\sigma^{\prime}\rangle_3 \nonumber \\
&=& \sum_{\sigma^{\prime\prime}}
(\cdots) \cdot \ _3 \langle +,\sigma|-,\sigma^{\prime\prime}
\rangle_3 = 0, \nonumber
\end{eqnarray}
for arbitrary $\sigma$ and $\sigma'$.
This dictates that the backward-scattering process mediated
by the chemical-potential-type disorder vanishes completely
at the 3-d TQCP.
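As a minimal illustration of this mechanism
(our own consistency check, using the gauge convention above),
consider the first-order term, which has no intermediate states and is
its own ${\cal T}$-reversal counterpart. Using
$\hat{U}^{\dagger}_{-k}\hat{U}_{k} \propto
-\sin\phi\, \hat{\gamma}_{23} + \cos\phi\, \hat{\gamma}_{31}$, we have
\begin{eqnarray}
\hspace{-1cm}
\langle -k,\epsilon_{k},\sigma| \hat{T}^{(1)} |k,\epsilon_{k},\sigma'\rangle
\propto \ _3 \langle +,\sigma| \hat{U}^{\dagger}_{-k}\hat{U}_{k}
|+,\sigma'\rangle_3
\propto \ _3 \langle +,\sigma| \big(-\sin\phi\, \hat{\gamma}_{23}
+ \cos\phi\, \hat{\gamma}_{31}\big)|+,\sigma'\rangle_3 = 0, \nonumber
\end{eqnarray}
since both $\hat{\gamma}_{23}$ and $\hat{\gamma}_{31}$ anticommute with
$\hat{\gamma}_3$ and therefore map the $\tau=+$ eigenspace of
$\hat{\gamma}_3$ onto the $\tau=-$ one.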
More generally, we can prove that any
backward scattering process {\it which conserves the eigenvalue
of $\hat{\gamma}_{45}$} is always canceled by its corresponding
${\cal T}$-reversal counterpart. Such a process is always
mediated by an {\it even} number of either
$v_{j5}(r)$ ($j=1,2,3$) or $v_5(r)$, since
both $\hat{\gamma}_{j5}$ ($j=1,2,3$) and
$\hat{\gamma}_{5}$ anticommute with
$\hat{\gamma}_{45}$. To uphold the complete absence of the backward scattering
in this case, we have only to
show that the following two matrix elements
have opposite signs,
\begin{eqnarray}
&&\hspace{-2.3cm}
S^{(p+1)}_{\sigma\sigma^{\prime}}
= {_3} \langle +,\sigma| \hat{U}^{\dagger}_{-k}
\hat{\gamma}_{a_p}
(\hat{\gamma}_0 - n_{p,\mu}\hat{\gamma}_{\mu})
\cdots \hat{\gamma}_{a_2}(\hat{\gamma}_0 -
n_{2,\nu}\hat{\gamma}_{\nu}) \hat{\gamma}_{a_1}
(\hat{\gamma}_0 - n_{1,\lambda}\hat{\gamma}_{\lambda})
\hat{\gamma}_{a_0} \hat{U}_k |+,\sigma^{\prime} \rangle_3 \nonumber \\
&&\hspace{-2.3cm}
S^{(p+1)\prime}_{\sigma\sigma^{\prime}}
= {_3} \langle +,\sigma| \hat{U}^{\dagger}_{-k}
\hat{\gamma}_{a_0}
(\hat{\gamma}_0 + n_{1,\mu}\hat{\gamma}_{\mu})
\hat{\gamma}_{a_1}
(\hat{\gamma}_0 +
n_{2,\nu}\hat{\gamma}_{\nu})
\hat{\gamma}_{a_2} \cdots
(\hat{\gamma}_0 + n_{p,\lambda}\hat{\gamma}_{\lambda})
\hat{\gamma}_{a_p}
\hat{U}_k |+,\sigma^{\prime} \rangle_3, \nonumber
\end{eqnarray}
for arbitrary $\sigma$ and $\sigma^{\prime}$.
The indices $a_0$, $a_1,a_2,\cdots $ and $a_p$ can be either
$0$, $15$, $25$, $35$, $45$ or $5$, under the condition
that the total number of those $\hat{\gamma}_{15}$, $\hat{\gamma}_{25}$,
$\hat{\gamma}_{35}$ and $\hat{\gamma}_5$ which mediate
the initial state and the final state is always {\it even}. To
see the relative sign between these two,
let us first transport all the intervening
$\gamma_{a_j}$s leftward/rightward until they meet
the bra/ket states in
$S^{(p+1)}_{\sigma\sigma^{\prime}}$
/$S^{(p+1)\prime}_{\sigma\sigma^{\prime}}$.
This transport brings about an appropriate
redefinition of the normalized vectors,
$n_{j} \rightarrow \overline{n}_j$,
\begin{eqnarray}
&& \hspace{-1.5cm}
S^{(p+1)}_{\sigma\sigma^{\prime}} = {_3}
\langle +,\sigma| \hat{U}^{\dagger}_{-k}
\hat{\gamma}_{a_p}\cdots
\hat{\gamma}_{a_1}\hat{\gamma}_{a_0} \cdot
(\hat{\gamma}_0-\overline{n}_{p,\mu}\hat{\gamma}_{\mu})
\cdots
(\hat{\gamma}_0-\overline{n}_{1,\lambda}\hat{\gamma}_{\lambda})
\hat{U}^{\dagger}_k|+,
\sigma^{\prime}\rangle_3 \nonumber \\
&&\hspace{-1.5cm}
S^{(p+1)\prime}_{\sigma\sigma^{\prime}} = {_3}
\langle +,\sigma| \hat{U}^{\dagger}_{-k}
(\hat{\gamma}_0+\overline{n}_{1,\mu}\hat{\gamma}_{\mu})
\cdots(\hat{\gamma}_0+\overline{n}_{p,\lambda}\hat{\gamma}_{\lambda})
\cdot
\hat{\gamma}_{a_0}\hat{\gamma}_{a_1}\cdots
\hat{\gamma}_{a_p}
\hat{U}_k |+,
\sigma^{\prime}\rangle_3. \nonumber
\end{eqnarray}
$S^{(p+1)}$ and $S^{(p+1)\prime}$ are still
connected by the ${\cal T}$-reversal operation,
$(\overline{n}_p,\overline{n}_{p-1},\cdots,\overline{n}_1)
\leftrightarrow (-\overline{n}_1,\cdots,-\overline{n}_{p-1},
-\overline{n}_p)$. Thus, the preceding
expansion can be used again,
\begin{eqnarray}
&&\hspace{-2cm}
(\hat{\gamma}_0-\overline{n}_{p,\mu}\hat{\gamma}_{\mu})
\cdots(\hat{\gamma}_0 -\overline{n}_{2,\nu}\hat{\gamma}_{\nu})
(\hat{\gamma}_0-\overline{n}_{1,\lambda}\hat{\gamma}_{\lambda})
= \overline{A} \hat{\gamma}_0 +
\overline{B}_{\mu}\epsilon_{\mu\nu\lambda}
\hat{\gamma}_{\nu\lambda} +
\overline{C}_{\mu}\hat{\gamma}_{\mu} + \overline{D}
\hat{\gamma}_{45}, \nonumber \\
&&\hspace{-2cm}
(\hat{\gamma}_0+\overline{n}_{1,\mu}\hat{\gamma}_{\mu})
(\hat{\gamma}_0 + \overline{n}_{2,\nu}\hat{\gamma}_{\nu})
\cdots
(\hat{\gamma}_0 + \overline{n}_{p,\lambda}\hat{\gamma}_{\lambda})
= \overline{A} \hat{\gamma}_0 -
\overline{B}_{\mu}\epsilon_{\mu\nu\lambda}
\hat{\gamma}_{\nu\lambda} -
\overline{C}_{\mu}\hat{\gamma}_{\mu} + \overline{D}
\hat{\gamma}_{45}. \nonumber
\end{eqnarray}
The condition imposed on $a_j$ dictates
that $\hat{\gamma}_{a_p}\cdots \hat{\gamma}_{a_1}
\hat{\gamma}_{a_0}$ and $\hat{\gamma}_{a_0}\hat{\gamma}_{a_1}
\cdots \hat{\gamma}_{a_p}$ always reduce to
either (i) $\hat{\gamma}_{j}$ and $-\hat{\gamma}_j$
($j=1,2,3,12,23,31$) respectively or (ii)
$\hat{\gamma}_{m}$ and $\hat{\gamma}_{m}$ ($m = 0,45$)
respectively.
Consider the former case first, e.g.
\begin{eqnarray}
&&\hspace{-1cm}
S^{(p+1)}_{\sigma\sigma^{\prime}}
= {_3}
\langle +,\sigma| \hat{U}^{\dagger}_{-k}
\cdot \hat{\gamma}_{1}\cdot
(\overline{A} \hat{\gamma}_0 +
\overline{B} \hat{\gamma}_{\nu\lambda} +
\overline{C}\hat{\gamma}_{\mu} + \overline{D}
\hat{\gamma}_{45})\cdot
\hat{U}_k|+,
\sigma^{\prime}\rangle_3 \nonumber \\
&&\hspace{0.1cm} = {_3}
\langle +,\sigma| \hat{U}^{\dagger}_{-k}
\hat{U}_k
\cdot m_{\rho}\hat{\gamma}_{\rho}\cdot
(\overline{A} \hat{\gamma}_0 +
\overline{B}^{\prime} \hat{\gamma}_{\nu\lambda} +
\overline{C}^{\prime} \hat{\gamma}_{\mu}
+ \overline{D}
\hat{\gamma}_{45})|+,
\sigma^{\prime}\rangle_3, \nonumber \\
&&\hspace{-1cm}
S^{(p+1)\prime}_{\sigma\sigma^{\prime}}
= {_3}
\langle +,\sigma| \hat{U}^{\dagger}_{-k}\cdot
(-\overline{A} \hat{\gamma}_0 +
\overline{B} \hat{\gamma}_{\nu\lambda} +
\overline{C}\hat{\gamma}_{\mu} - \overline{D}
\hat{\gamma}_{45})\cdot
\hat{\gamma}_{1}\cdot
\hat{U}_k|+,
\sigma^{\prime}\rangle_3 \nonumber \\
&&\hspace{0.1cm}
= {_3}
\langle +,\sigma| \hat{U}^{\dagger}_{-k}
\hat{U}_k \cdot
(-\overline{A} \hat{\gamma}_0 +
\overline{B}^{\prime} \hat{\gamma}_{\nu\lambda} +
\overline{C}^{\prime}\hat{\gamma}_{\mu}
- \overline{D} \hat{\gamma}_{45})\cdot
m_{\rho}\hat{\gamma}_{\rho} |+,
\sigma^{\prime}\rangle_3. \nonumber
\end{eqnarray}
with $\hat{U}^{\dagger}_{-k}\hat{U}_{k} = -\sin\phi \hat{\gamma}_{23}
+ \cos\phi \hat{\gamma}_{31}$ and
$m_{\rho}\hat{\gamma}_{\rho} \equiv
\hat{U}^{\dagger}_k \hat{\gamma}_1 \hat{U}_k$.
When $\hat{\gamma}_{a_p}\cdots
\hat{\gamma}_{a_2}\hat{\gamma}_{a_1} = \hat{\gamma}_{12}$
(or $\hat{\gamma}_{23}$, $\hat{\gamma}_{31}$),
$m_{\rho}\hat{\gamma}_{\rho}$ should be replaced by
$m_{\rho}\epsilon_{\rho\phi\psi}\hat{\gamma}_{\phi\psi}$.
In either case, $S^{(p+1)}_{\sigma\sigma^{\prime}}$
and $S^{(p+1)\prime}_{\sigma\sigma^{\prime}}$
cancel each other completely, e.g.
\begin{eqnarray}
&& \hspace{-2cm}
\hat{S}^{(p+1)}_{\sigma\sigma^{\prime}}
+ \hat{S}^{(p+1)\prime}_{\sigma\sigma^{\prime}}
= {_3}
\langle +,\sigma| (-s_{\phi} \gamma_{23} + c_{\phi} \gamma_{31})
\cdot
\{
\overline{B}^{\prime} \hat{\gamma}_{\nu\lambda}
+ \overline{C}^{\prime}\hat{\gamma}_{\mu},
m_{\rho}\hat{\gamma}_{\rho}\} |+,
\sigma^{\prime}\rangle_3 \nonumber \\
&&\hspace{0.74cm}
= 2\times {_3}
\langle +,\sigma| (-s_{\phi} \gamma_{23} + c_{\phi} \gamma_{31})
\cdot (\overline{B}^{\prime}_{\mu}m_{\mu} \hat{\gamma}_{45}
+ \overline{C}^{\prime}_{\nu}m_{\nu} \hat{\gamma}_0)
|+,\sigma^{\prime}\rangle_3 \nonumber \\
%
&& \hspace{0.74cm} = (\cdots)\times {_3} \langle - ,\sigma^{\prime\prime}|
+, \sigma^{\prime} \rangle_3 = 0. \nonumber
\end{eqnarray}
A similar cancellation also holds true for the case with (ii)
$\hat{\gamma}_{a_p}\cdots
\hat{\gamma}_{a_2}\hat{\gamma}_{a_1}
= \hat{\gamma}_{a_1} \hat{\gamma}_{a_2} \cdots
\hat{\gamma}_{a_p} = \hat{\gamma}_{45}$.
These observations thus conclude that any backward scattering process which
conserves the eigenvalue of $\gamma_{45}$ is always canceled
by its time-reversal counter-process.
As in the 2-d graphene
case~\cite{jpsj-ans}, this cancellation is
actually a direct consequence of
the non-trivial Berry phase `$e^{i\pi}$' accumulated
along a closed loop composed of two time-reversal
counterparts in the momentum space
(see Fig.~\ref{pd0}).
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.65]{bp1.eps}
\caption{Red curves denote
two backward scattering processes on a sphere
in the momentum space ($|k_1|=\cdots=|k_p|=|k|$),
which are time-reversal counterparts of
each other. The blue-shaded region, whose boundary
is shaped by these two paths, occupies {\it one half} of
the whole surface of the sphere. Thus, an electron
acquires the Berry phase $\pi$ (instead of $2\pi$) when
it travels once around this boundary.}
\label{pd0}
\end{center}
\end{figure}
The above `Berry phase' argument, however, does not hold
true for those backward scattering processes
which change the eigenvalue
of $\hat{\gamma}_{45}$. To see this, consider, for example, the
backward scattering processes mediated by an {\it odd} number
of $v_5(r)$ insertions, and compare the following two
${\cal T}$-reversal paired processes,
\begin{eqnarray}
&& \hspace{-2.1cm}
S^{(p+1)}_{\sigma\sigma^{\prime}} =
\ _3 \langle +,\sigma| \hat{U}^{\dagger}_{-k} \cdot
\hat{\gamma}_5
\cdot (\hat{\gamma}_0 - n_{p,\mu} \hat{\gamma}_{\mu})
\cdots (\hat{\gamma}_0 - n_{2,\nu} \hat{\gamma}_{\nu})
\cdot
(\hat{\gamma}_0 - n_{1,\lambda} \hat{\gamma}_{\lambda})
\cdot \hat{U}_k |+,\sigma'\rangle_3, \nonumber \\
&&\hspace{-2.1cm}
S^{(p+1)\prime}_{\sigma\sigma^{\prime}} =\ _3 \langle +,\sigma|
\hat{U}^{\dagger}_{-k}
\cdot (\hat{\gamma}_0 + n_{1,\mu} \hat{\gamma}_{\mu})
\cdot (\hat{\gamma}_0 + n_{2,\nu} \hat{\gamma}_{\nu})
\cdots
(\hat{\gamma}_0 + n_{p,\lambda} \hat{\gamma}_\lambda) \cdot
\hat{\gamma}_5 \cdot
\hat{U}_k |+,\sigma'\rangle_3.
\nonumber
\end{eqnarray}
We have already let any two `neighboring' $\hat{\gamma}_5$
annihilate each other, which leads to an appropriate redefinition
of the normalized unit vectors.
The extra $\hat{\gamma}_5$ was then transported
leftward/rightward in $S^{(p+1)}$/$S^{(p+1)\prime}$,
until it meets the bra/ket-state.
Exploiting the expansion described above, we can readily see
that these two do not cancel exactly in general,
\begin{eqnarray}
&&\hspace{-2.2cm}
S^{(p+1)}_{\sigma\sigma'} +
S^{(p+1)\prime}_{\sigma\sigma'}
= 2i\sum \ _3\langle +, \sigma| (-\sin\phi \hat{\gamma}_{23}
+ \cos\phi \hat{\gamma}_{31})\cdot (A + \overline{C}_3 \hat{\gamma}_{3})
\cdot \hat{\gamma}_5 | +, \sigma'\rangle_3 \ne 0. \nonumber
\end{eqnarray}
This situation is reminiscent
of 2-d graphene (or carbon-nanotubes)
subjected to {\it short}-range impurity potentials, where
the {\it intervalley} scattering\footnote{which corresponds to
the {\it inter-`eigenvalue'} scattering in the current context.} suppresses
the cancellation due to
the Berry phase and consequently induces the crossover from
the symplectic class to the orthogonal class~\cite{jpsj-ans,jpsj-as}.
The situation at the 3-d TQCP is, however, quite different
from this 2-d analogue. First of all, the preceding argument based
on the bulk-edge correspondence requires the extended character
of the 3-d TQCP to be stable against {\it any} non-magnetic disorders.
Moreover, the 3-d TQCP always belongs to
the symplectic class irrespective of the type of non-magnetic
disorder, which was not the case in
2-d graphene~\cite{jpsj-as}. Therefore,
not only from the actual material's
viewpoint~\cite{nat-452-970-08,sci-325-178-09,
np-5-438-09,sci-323-919-09,nat-460-1101-09},
but also from the theoretical standpoint, it is quite
intriguing and important to investigate the effect
of {\it general} ${\cal T}$-symmetric
disorder potentials (including the topological-mass
type one) in the 3-d topological metallic phase.
In the next section, we will first study how the single-body
Green function is renormalized by these {\it general}
non-magnetic disorders. Based on this analysis, we
will discuss the behaviour of the quantum conductivity correction
in the presence of both the chemical-potential type
disorder $v_0$ and the topological-mass type disorder $v_5$.
\section{Self-consistent Born phase diagram -- general case --}
Consider ${\cal H}_{0}+{\cal H}_{\rm imp}$ with randomly
distributed real-valued $v_{j}(r)$ ($j=0,5,\cdots,45$).
Because of the ${\cal T}$-reversal
symmetry, each eigenstate of this Hamiltonian, say $|\psi_n (r) \rangle$,
has its own Kramers-paired state $| \overline{\psi}_n (r) \rangle$:
$| \overline{\psi}_n (r) \rangle \equiv -i\hat{s}_y |\psi_n(r) \rangle^{*}$.
Thus, the single-body Green functions obey
the following relation;
\begin{eqnarray}
\hspace{-1cm}\hat{G}^{R(A)}(r,r';\mu)
&\equiv& \sum_{n} \frac{| \psi_n (r) \rangle \langle \psi_n (r') |}
{\mu-\epsilon_n \pm i\delta}
= \hat{s}_y
\big\{\hat{G}^{R(A)}(r',r;\mu)\big\}^t \hat{s}_y, \label{tsymmetry}
\end{eqnarray}
where `$+/-$' sign is for the retarded/advanced Green function
$\hat{G}^{R/A}$.
The quenched disorder average is taken at the Gaussian level with
the short-ranged correlation;
\begin{eqnarray}
\overline{\cdots} \equiv \frac{1}{\cal N}\int {\cal D}[v]
\cdots \exp\bigg[-\sum_{j,l} \int d^3 r \Delta_{j,l} v_j (r) v_l (r)\bigg].
\label{que}
\end{eqnarray}
Green functions thus averaged acquire the
translational symmetry,
$\hat{G}^{R(A)}(r,r';\mu)=\hat{G}^{R(A)}(r-r';\mu)$.
When Fourier-transformed, they can be expanded in terms of
the 16 $\gamma$-matrices
with some complex-valued coefficients,
\begin{eqnarray}
\hspace{-2.2cm}
\hat{G}^{R}(k;\mu) = \sum_{n=1}^{4} \hat{\gamma}_n
\overline{\sf F}_{n}(k;\mu) + \sum_{j=12,13,\cdots}^{42} \hat{\gamma}_{j}
\overline{\sf F}_{j}(k;\mu) + \sum_{m=0,5} \hat{\gamma}_m
\overline{\sf F}_{m}(k;\mu) + \sum_{l=15}^{45} \hat{\gamma}_{l}
\overline{\sf F}_{l}(k;\mu), \nonumber
\end{eqnarray}
and $\hat{G}^{A}(k;\mu)=\{\hat{G}^{R}(k;\mu)\}^{\dagger}$.
The ${\cal T}$-symmetry requires
$\overline{\sf F}_{1,2,3,4}(k)$ and
$\overline{\sf F}_{12,13,14,23,34,42}(k)$ to be
odd functions of $k$, while
$\overline{\sf F}_{0,5}(k)$ and
$\overline{\sf F}_{15,25,35,45}(k)$
to be even in $k$. Thus, only the latter six
participate in the self-consistent
Born equation,
\begin{eqnarray}
&&\hspace{-0.85cm}
\hat{\Sigma}^{R}(\mu) \equiv
\hat{G}^{R,-1}_0 (k;\mu) - \hat{G}^{R,-1}(k;\mu)
= \sum_{j,l} \Delta_{j,l} \int dk' \hat{\gamma}_j \cdot
\hat{G}^{R}(k';\mu) \cdot \hat{\gamma}_l, \nonumber \\
&&\hspace{0.17cm}
= \sum_{j,l} \Delta_{j,l} \int dk' \hat{\gamma}_j \cdot
\Big\{\hat{\gamma}_0 \overline{\sf F}_0(k') +
\hat{\gamma}_5 \overline{\sf F}_5(k') +
\sum_{n=1}^{4} \hat{\gamma}_{n5} \overline{\sf F}_{n5}(k')
\Big\} \cdot \hat{\gamma}_l. \label{scb1}
\end{eqnarray}
The bare Green function is defined as,
\begin{eqnarray}
\hat{G}^{R,-1}_0 (k;\mu) = (\mu + i\delta) \hat{\gamma}_0 -
\sum_{j=1}^3 k_{j}\hat{\gamma}_{j} - m \hat{\gamma}_5
\equiv \sum_{j=0,1,\cdots,5} {\sf f}_j
\hat{\gamma}_{j}.
\nonumber
\end{eqnarray}
The inverse of the Green function thus determined is
at most a linear combination of $\gamma_0$, $\gamma_5$,
$\gamma_{15}$, $\cdots$ and $\gamma_{45}$,
\begin{eqnarray}
G^{R,-1}(\mu) \equiv {\sf F}_0 \hat{\gamma}_0
+ {\sf F}_5 \hat{\gamma}_5 + \sum_{j=1}^4 {\sf F}_{j5}
\hat{\gamma}_{j5}. \nonumber
\end{eqnarray}
Eq.~(\ref{scb1}) determines their coefficients,
\begin{eqnarray}
&&\hspace{-2cm} {\sf F}_0 - {\sf f}_0 = -
\Big\{\Delta_{0,0} + \Delta_{5,5} + \sum_{j=1}^4 \Delta_{j5,j5}\Big\}\cdot
\int d^3 k' \overline{\sf F}_0(k') \nonumber \\
&&\hspace{1.5cm} - 2\Delta_{0,5}
\int d^3 k' \overline{\sf F}_5(k') - 2 \sum_{j=1}^4
\bigg\{ \Delta_{0,j5} \int d^3 k' \overline{\sf F}_{j5}(k')
\bigg\}, \label{gapp1} \\
&& \hspace{-2cm} {\sf F}_5 - {\sf f}_5 = -
\Big\{\Delta_{0,0} + \Delta_{5,5}
- \sum_{j=1}^4 \Delta_{j5,j5}\Big\}\cdot
\int d^3 k' \overline{\sf F}_5(k') \nonumber \\
&&\hspace{1.5cm} - 2\Delta_{0,5} \int d^3 k' \overline{\sf F}_0 (k')
- 2 \sum_{j=1}^4
\bigg\{ \Delta_{5,j5} \int d^3 k' \overline{\sf F}_{j5}(k')
\bigg\}, \label{gapp2} \\
&& \hspace{-2cm} {\sf F}_{j5} = -
\Big\{ \Delta_{0,0} - \Delta_{5,5} - \sum_{m=1}^4 \Delta_{m5,m5}
\Big\} \cdot \int d^3 k' \overline{\sf F}_{j5}(k') \nonumber \\
&&\hspace{-1.6cm} - 2 \Delta_{0,j5} \int d^3 k' \overline{\sf F}_{0}(k')
- 2 \Delta_{5,j5} \int d^3 k' \overline{\sf F}_5(k')
- 2 \sum_{m=1}^4 \bigg\{ \Delta_{m5,j5}
\int d^3 k' \overline{\sf F}_{m5}(k') \bigg\}. \label{gapp3}
\end{eqnarray}
In turn, the integrands on the right hand sides are
expressed through these coefficients themselves,
\begin{eqnarray}
&&\hspace{-2.3cm}
\overline{\sf F}_0(k) =
- \frac{{\sf F}_0}{g(k)}\Big\{k^2 - {\sf F}^2_0 + {\sf F}^2_5
+ \sum_{j=1}^4 {\sf F}^2_{j5}\Big\}, \ \ \
\overline{\sf F}_5(k) =
\frac{{\sf F}_5}{g(k)}\Big\{k^2 - {\sf F}^2_0 + {\sf F}^2_5
+ \sum_{j=1}^4 {\sf F}^2_{j5}\Big\}, \nonumber \\
&& \hspace{-1.52cm}
\overline{\sf F}_{j5}(k) =
- \frac{{\sf F}_{j5}}{g(k)}\Big\{k^2 + {\sf F}^2_0 - {\sf F}^2_5
- \sum_{m=1}^4 {\sf F}^2_{m5}\Big\} -
\big(1-\delta_{j4}\big) \frac{2k_j}{g(k)}
\Big\{\sum_{m=1}^3 {\sf F}_{m5} k_m\Big\}, \nonumber
\end{eqnarray}
with $g(k)$ being defined as,
\begin{eqnarray}
&&\hspace{-1.5cm}
g(k) \equiv \bigg\{- {\sf F}^2_0 + {\sf F}^2_5
+ \sum_{j=1}^4 {\sf F}^2_{j5} - k^2
\bigg\}^2 + 4 k^2 \Big\{{\sf F}^2_5 - {\sf F}^2_0\Big\} +
4 \bigg\{\sum_{m=1}^3 {\sf F}_{m5} k_m\bigg\}^2. \nonumber
\end{eqnarray}
Note that all integrals in eqs.~(\ref{gapp1}-\ref{gapp3})
depend on the ultraviolet cutoff; $\int d^3 k \equiv \int_{|k|<\Lambda} d^3 k$.
We will solve these coupled integral equations, assuming
that {\it the spatial inversion symmetry is recovered
after the quenched average}. Namely, we assume
\begin{eqnarray}
\hspace{1.6cm}
\hat{\gamma}_5 \cdot \hat{G}^{R(A)}(k;\mu)\cdot \hat{\gamma}_5
= \hat{G}^{R(A)}(-k;\mu), \label{inv}
\end{eqnarray}
because only $\gamma_5$ anticommutes with both
$\gamma_{1,\cdots,4}$ and $\gamma_{15,\cdots,45}$.
This requires
$\Delta_{0,j5}$, $\Delta_{5,j5}$ and ${\sf F}_{j5}$
$(j=1,\cdots,4)$ to be zero, so that the
equation becomes,
\begin{eqnarray}
&&\hspace{-2.5cm}
\left[\begin{array}{c}
F_0 \\
F_5 \\
\end{array}\right]
+ G\left[\begin{array}{cc}
\Delta_{s} + \Delta_{a} & - B \\
B & - \Delta_{s} + \Delta_{a} \\
\end{array}\right]
\left[\begin{array}{c}
F_0 \\
F_5 \\
\end{array}\right] =
\left[\begin{array}{c}
\mu \\
- m \\
\end{array}\right], \
G = 2 \int_{0}^{1}
\frac{k^2 dk} {F^2_0 - F^2_5 - k^2}, \label{gap3}
\end{eqnarray}
with $\Delta_{s} \equiv 2\pi \Lambda (\Delta_{0,0} + \Delta_{5,5})$,
$\Delta_{a} \equiv \sum_{j=1}^{5} 2\pi \Lambda \Delta_{j5,j5}$, and
$B=4\pi \Lambda\Delta_{0,5}$. In eq.~(\ref{gap3}), the momenta and
energies are rescaled by the ultraviolet cut-off $\Lambda$,
\begin{eqnarray}
\hspace{-2.3cm}
\Lambda \rightarrow 1, \
k \rightarrow k\Lambda^{-1}, \
{\sf F}_{0,5} \rightarrow F_{0,5}
\equiv {\sf F}_{0,5}\Lambda^{-1}, \
(\mu,m)_{\rm old}
\rightarrow (\mu,m)_{\rm new}
\equiv (\mu,m)_{\rm old}\Lambda^{-1}. \label{rescale}
\end{eqnarray}
The first part of eq.~(\ref{gap3}) can
be `diagonalized' by the following canonical
transformation,
\begin{eqnarray}
\hspace{-1.7cm}
\left[ \begin{array}{c}
{\cal F}_0 \\
{\cal F}_5 \\
\end{array}
\right] = \left[ \begin{array}{cc}
{\rm ch} \theta & -{\rm sh} \theta \\
- {\rm sh} \theta & {\rm ch} \theta \\
\end{array}
\right] \left[ \begin{array}{c}
F_0 \\
F_5 \\
\end{array}
\right], \ \ \left[ \begin{array}{c}
\psi_1 \\
\psi_2 \\
\end{array} \right] = \left[
\begin{array}{cc}
{\rm ch} \theta & -{\rm sh} \theta \\
- {\rm sh} \theta & {\rm ch} \theta \\
\end{array}
\right] \left[ \begin{array}{c}
\mu \\
- m \\
\end{array} \right], \label{cano1}
\end{eqnarray}
where the angle $\theta$ is chosen as
\begin{eqnarray}
\big( {\rm ch} \theta , {\rm sh} \theta \big) \equiv
\frac{\big(\Delta_s + \sqrt{\Delta^2_s-B^2},B\big)}
{\sqrt{2}\{\Delta^2_s-B^2\}^{\frac{1}{4}}
\{\sqrt{\Delta^2_s - B^2} + \Delta_s\}^{\frac{1}{2}}}.
\label{bog}
\end{eqnarray}
Under this transformation, we obtain,
\begin{eqnarray}
\hspace{-1.4cm}
\left\{ \begin{array}{l}
(1+\eta_{+} G) {\cal F}_0 = \psi_{1} \\
(1-\eta_{-} G) {\cal F}_5 = \psi_{2}, \\
\end{array}\right. \ \ G = 2 \int_{0}^{1}
\frac{k^2 dk} {{\cal F}^2_0 - {\cal F}^2_5 - k^2} \equiv
2\int_{0}^{1} \frac{k^2 dk}{(a+ib)^2 - k^2}, \label{gap3a}
\end{eqnarray}
with $\eta_{\pm} = \sqrt{\Delta^2_s-B^2}\pm \Delta_a$
and
$(a+ib)^2\equiv {\cal F}^2_0-{\cal F}^2_5$.
The real/imaginary part of `$G$' is an even/odd function
of both $a$ and $b$,
\begin{eqnarray}
&&\hspace{-1.6cm} {\rm Re}G = -2 - \frac{a}{2}{\rm log}
\Bigg[\frac{(1-a)^2+b^2}{(1+a)^2+b^2}\Bigg]
+ b \bigg( {\rm arctan}\Big[\frac{1-a}{b}\Big] +
{\rm arctan}\Big[\frac{1+a}{b}\Big]\bigg), \label{reg} \\
&&\hspace{-1.6cm} {\rm Im}G = - \frac{b}{2}{\rm log}
\Bigg[\frac{(1-a)^2+b^2}{(1+a)^2+b^2}\Bigg]
- a \bigg({\rm arctan}\Big[\frac{1-a}{b}\Big] +
{\rm arctan}\Big[\frac{1+a}{b}\Big]\bigg). \label{img}
\end{eqnarray}
Eqs.~(\ref{gap3a}-\ref{img}) can be solved in quite a
similar fashion to Ref.~\cite{prb-sm} (compare
them with eqs.~(38-41) in that reference). As a
result, the final self-consistent Born phase diagram
obtained in this section is basically the same as
that obtained there. The only difference is
that the phase boundaries which separate the
two gapped phases from the compressible (diffusive)
phase are deformed in the $\mu$-$m$ parameter space,
according to the canonical
transformation eq.~(\ref{cano1}): they become
symmetric with respect to $\psi_1$ and $\psi_2$,
instead of $\mu$ and $m$
(compare Fig.~\ref{pd2} with Figs.~9,10
in that reference). Readers familiar
with Ref.~\cite{prb-sm} may therefore
skip the remainder of
this section and start from sec.~5. To make this article
self-contained, we review the arguments in the context
of eqs.~(\ref{gap3a}-\ref{img}).
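As a quick numerical cross-check of these expressions, the following
Python sketch (our own; the sample point $(a,b)$ and the trapezoidal
discretization are arbitrary choices) compares eqs.~(\ref{reg},\ref{img})
with a direct quadrature of $G=2\int_{0}^{1}k^2 dk/\{(a+ib)^2-k^2\}$:
\begin{verbatim}
import numpy as np

def G_closed(a, b):
    # eqs. (reg)/(img); valid for b != 0
    L = np.log(((1 - a)**2 + b**2) / ((1 + a)**2 + b**2))
    T = np.arctan((1 - a) / b) + np.arctan((1 + a) / b)
    return (-2.0 - 0.5*a*L + b*T) + 1j*(-0.5*b*L - a*T)

def G_quad(a, b, n=200001):
    # direct trapezoidal quadrature of the defining integral
    k = np.linspace(0.0, 1.0, n)
    z = a + 1j*b
    return np.trapz(2.0*k**2 / (z**2 - k**2), k)

a, b = 0.3, 0.2              # arbitrary test point with b > 0
print(G_closed(a, b))
print(G_quad(a, b))          # agrees to quadrature accuracy
\end{verbatim}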
\subsection{$\psi_{1}=\psi_{2}=0$ case}
Consider the simplest case first, $\mu=m=0$. Since
$\eta_{+}+\eta_{-}\equiv 2\sqrt{\Delta^2_s-B^2}\neq 0$
\footnote{The convergence of the Gaussian integral in
eq.~(\ref{que}) requires $\Delta_s > B$.}, the
possible solutions of eqs.~(\ref{gap3a}-\ref{img})
fall into three types:
\begin{eqnarray}
\hspace{1.5cm}
\left\{ \begin{array}{l}
{\rm (i)}: \ {\cal F}_0 = {\cal F}_5 = 0, \\
{\rm (ii)}:\ 1+\eta_{+} G = 0 \cap {\cal F}_5 = 0, \\
{\rm (iii)}:\ 1-\eta_{-} G = 0 \cap {\cal F}_0 = 0. \\
\end{array} \right. \label{gap5}
\end{eqnarray}
The type-(iii)
solution can be realized when $\eta_{-}<-\frac{1}{2}$.
In reality, however, such a parameter region is very limited and
the solution itself is physically unlikely.
Thus, we will investigate only the first two types.
The type-(i) solution
is a diffusionless solution, where the massless zero-energy
state is free from any disorder.
The type-(ii) solution describes the diffusive massless
state: the zero energy state at the critical point
acquires a finite life-time due to the non-magnetic disorders.
The gap equations have an intrinsic critical
disorder strength $\eta_{+,c}$, below which
only the type-(i) solution is allowed. When the disorder
strength exceeds this critical value, $\eta_{+}>\eta_{+,c}$,
this type-(i) solution becomes unphysical and
type-(ii) solution becomes a unique physical
solution. These two solutions are continuously
connected at $\eta_{+}=\eta_{+,c}$.
To see this situation, begin with the
type-(ii) solution:
\begin{eqnarray}
\hspace{1cm} 1 + \eta_{+} {\rm Re}G = 0 \ \cap \ {\rm Im} G = 0
\ \cap \ {\cal F}_5 = 0.
\end{eqnarray}
${\rm Im}G=0$ requires either (a) $a=0$ or
(b) $b=0 \cap |a|>1$. Now that `$a$' and `$b$' as well as `$F_0$' and
`$F_5$' are rescaled by the ultraviolet
cutoff $\Lambda$ as in eq.~(\ref{rescale}),
they should be much smaller than unity.
This leaves only $a=0$. With this, $1+\eta_{+} {\rm Re}G=0$
determines `$b$' as a function of the
disorder strength $\eta_{+}$:
\begin{eqnarray}
\hspace{1.5cm} b \cdot {\rm arctan}[b^{-1}] = 1 - \frac{1}{2\eta_{+}}. \label{gap5b}
\end{eqnarray}
This equation dictates that `$b$' can be finite
only when $\eta_{+}$ is greater than a critical
value, namely $\eta_{+,c}=\frac{1}{2}$. Above this value, the
type-(ii) solution becomes possible, $({\cal F}_0,{\cal F}_5) = (ib,0)$.
When the disorder strength falls below this critical value,
the solution reduces continuously to the trivial one,
$({\cal F}_0,{\cal F}_5)=(0,0)$.
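For reference, eq.~(\ref{gap5b}) is readily solved numerically. The
following minimal Python sketch (our own; the root bracket and the
sample values of $\eta_{+}$ are arbitrary) reproduces
$\eta_{+,c}=\frac{1}{2}$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def b_of_eta(eta_p):
    # solve b*arctan(1/b) = 1 - 1/(2*eta_p); b = 0 below eta_{+,c} = 1/2
    rhs = 1.0 - 0.5/eta_p
    if rhs <= 0.0:
        return 0.0           # only the trivial type-(i) solution
    return brentq(lambda b: b*np.arctan(1.0/b) - rhs, 1e-12, 1.0)

for eta in (0.45, 0.50, 0.57):
    print(eta, b_of_eta(eta))   # b stays 0 up to eta_+ = 1/2, then grows
\end{verbatim}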
In the following, we summarize the behaviour of these
two solutions in the presence of finite $\psi_{1}$ and $\psi_{2}$.
\subsection{$\psi_{1}\ne 0$ and $\psi_{2}= 0$ case}
Introduce a small $\psi_1$ into the type-(i) solution first.
Employing ${\cal F}_{5}=0$, we only have to solve the first
line of eq.~(\ref{gap3a}) for $a+ib={\cal F}_0$.
Since its real part, ${\cal F}^{\prime}_{0}$, is an odd function of
$\psi_{1}$ and its imaginary part
${\cal F}^{\prime\prime}_{0}$ is even in $\psi_{1}$,
`$a$' is of first order in small $\psi_{1}$ while
`$b$' is of second order,
$a = {\cal O}(\psi_{1})$ and $b= {\cal O}(\psi^2_1)$.
With this, eq.~(\ref{gap3a}) can be solved iteratively in
small $\psi_{1}$:
\begin{eqnarray}
a = \frac{\psi_{1}}{1-2\eta_{+}} + {\cal O}(\psi^3_1), \ \ \
b = \frac{\pi\eta_{+}}{(1-2\eta_{+})^3}\psi^2_{1}
+ {\cal O}(\psi^4_1). \label{asym1}
\end{eqnarray}
This indicates that, for $\eta_{+}>\eta_{+,c}$,
the sign of the renormalized chemical potential `$a$' becomes
opposite to that of the bare chemical potential `$\psi_1$'.
This is, however, physically unlikely. When
$\eta_{+}$ exceeds this critical strength,
the type-(ii) solution instead of the type-(i) one
becomes the physical solution. Indeed, having a finite `$b$' already at
the zeroth order in $\psi_{1}$, the type-(ii) solution behaves
as follows,
\begin{eqnarray}
a = \frac{\psi_{1}}{2\eta_{+}-1} + {\cal O}(\psi^3_1, b^2 \psi_{1}),
\ \ b = \frac{1}{\pi}\frac{2\eta_{+}-1}{\eta_{+}} + {\cal O}(\psi^2_1,b^2),
\label{asym2}
\end{eqnarray}
in the small-$\psi_1$ region. For $\eta_{+}>\eta_{+,c}$,
`$a$' now has the same sign as the bare one.
\subsection{$\psi_{1}= 0$ and $\psi_{2}\ne 0$ case}
In the presence of a finite $\psi_{2}$,
the solutions of eqs.~(\ref{gap3a}-\ref{img}) are twofold,
depending on the ratio between the disorder strength and
the topological mass $\psi_2$. When the topological mass
is less than a certain critical value, say $\psi_{2,c}$, the system
is in the diffusive (compressible) phase, which is basically
equivalent to the type-(ii) solution at $\psi_2=0$,
\begin{eqnarray}
\hspace{0.5cm}
{\cal F}_0 = i
\sqrt{\tau^{-2}-\Big(\frac{\eta_{+}\psi_{2}}{\eta_{+}+\eta_{-}}\Big)^2},
\ \ {\cal F}_5 = \frac{\eta_{+}\psi_{2}}{\eta_{+} + \eta_{-}}. \label{sol0}
\end{eqnarray}
$\tau^{-1}$ and $\psi_{2,c}$ are defined by $\eta_{\pm}$:
\begin{eqnarray}
\hspace{0.5cm}
\psi_{2,c}(\eta_{+},\eta_{-}) = \frac{\eta_{+} + \eta_{-}}{\eta_{+}\tau}, \ \
\tau^{-1}{\rm arctan}[\tau] = 1 - \frac{1}{2\eta_{+}}. \label{bound}
\end{eqnarray}
When $\psi_{2}$ exceeds this critical value, the system
enters one of the two incompressible phases, each of which is adiabatically
connected to the corresponding gapped phase in the clean limit,
\begin{eqnarray}
\hspace{1.5cm}
{\cal F}_0 = 0, \ \ \ {\cal F}_5 = \overline{\psi}_2. \label{sol1}
\end{eqnarray}
The renormalized mass term $\overline{\psi}_2$
is given by
\begin{eqnarray}
\hspace{0.7cm}
\overline{\psi}_2
\big\{ 1 + 2\eta_{-} - 2 \eta_{-}\overline{\psi}_2
{\rm arctan}\big[\overline{\psi}^{-1}_2\big]\big\} = \psi_{2}. \label{sol1a}
\end{eqnarray}
These two solutions are connected continuously at the phase boundary,
$\psi_{2}=\psi_{2,c}(\eta_{+},\eta_{-})$.
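Note that eq.~(\ref{bound}) for $\tau$ is the same transcendental equation
as eq.~(\ref{gap5b}) with $\tau=b^{-1}$, so the phase boundary is easily
evaluated numerically; a minimal sketch (the sample parameter values are
our own choices):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def tau_of_eta(eta_p):
    # solve arctan(tau)/tau = 1 - 1/(2*eta_p); requires eta_p > 1/2
    rhs = 1.0 - 0.5/eta_p
    return brentq(lambda t: np.arctan(t)/t - rhs, 1e-9, 1e6)

def psi2_c(eta_p, eta_m):
    # eq. (bound): critical topological mass bounding the diffusive region
    return (eta_p + eta_m) / (eta_p * tau_of_eta(eta_p))

print(psi2_c(0.57, 0.10))
\end{verbatim}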
\subsection{$\psi_{1}\ne 0$ and $\psi_{2}\ne 0$ case}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.6]{table.eps}
\caption{
${\cal F}_{0}={\cal F}^{\prime}_0 + i{\cal F}^{\prime\prime}_0$
and ${\cal F}_5={\cal F}^{\prime}_5+i{\cal F}^{\prime\prime}_5$
as a function of bare chemical potential $\mu$ and topological mass $m$.
${\cal F}_0$ changes its sign under
$\psi_{1} \rightarrow - \psi_{1}$, while ${\cal F}_5$ changes
its sign under $\psi_{2} \rightarrow - \psi_{2}$.}
\label{table1}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.82]{ab.eps}
\caption{$({\cal F}_{0},{\cal F}_5)$ numerically
evaluated at $\Delta_{a}/\Delta_s=B/\Delta_s=0.5$ and
$\eta_{+}=0.45/0.57$ (upper/lower four panels).
The real and imaginary parts decrease toward the dark
regions and increase toward the light ones.
In each row, from left to right: the contour plots of
${\cal F}^{\prime}_0$, ${\cal F}^{\prime\prime}_0$,
${\cal F}^{\prime}_5$ and ${\cal F}^{\prime\prime}_5$
at $\eta_{+}=0.45/0.57$. The contour intervals are
$0.32/0.30\times 10^{-1}$,
$0.9/1.25 \times 10^{-2}$, $0.24/0.24 \times 10^{-1}$ and
$0.68/0.75 \times 10^{-3}$ respectively. Both
${\cal F}^{\prime}_0$ and ${\cal F}^{\prime\prime}_5$
become zero at $\psi_1=0$. ${\cal F}^{\prime\prime}_0$
and ${\cal F}^{\prime\prime}_5$ vanish
within both the yellow region (topological insulator)
and the red region (ordinary insulator).
${\cal F}^{\prime\prime}_{0}$ in the lower panel
remains finite even at $\psi_{1}=0$, as far as
$-\psi_{2,c}<\psi_{2}<\psi_{2,c}$,
where $\psi_{2,c}$ is given by eq.~(\ref{bound}).}
\label{pb}
\end{center}
\end{figure}
For general $\psi_{1}$ and $\psi_{2}$, we have numerically solved
the following coupled equations for `$a$' and `$b$':
\begin{eqnarray}
&&\hspace{-0.7cm}
{\cal F}_0 =
\frac{\psi_{1} (1+ \eta_{+}{\rm Re} G)}
{(1+\eta_{+}{\rm Re} G)^2 + (\eta_{+}{\rm Im} G)^2}
+ i\frac{\psi_{1} \eta_{+}{\rm Im} G}
{(1+\eta_{+}{\rm Re} G)^2 + (\eta_{+}{\rm Im} G)^2}, \nonumber \\
&&\hspace{-0.7cm}
{\cal F}_5 =
\frac{\psi_{2} (1+ \eta_{-}{\rm Re} G)}
{(1+\eta_{-}{\rm Re} G)^2 + (\eta_{-}{\rm Im} G)^2}
+ i\frac{\psi_{2} \eta_{-}{\rm Im} G}
{(1+\eta_{-}{\rm Re} G)^2 + (\eta_{-}{\rm Im} G)^2}, \nonumber
\end{eqnarray}
with
${\cal F}^2_0-{\cal F}^2_5=(a+ib)^2$
and ${\rm Re}G$ and ${\rm Im}G$
defined in eqs.~(\ref{reg},\ref{img}).
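A minimal sketch of one way to organize such a computation is given
below, iterating eq.~(\ref{gap3a}) directly; the damping factor, the
initial guess and the retarded branch choice ${\rm Im}(a+ib)\geq 0$ are
our own choices and are not taken from the text:
\begin{verbatim}
import numpy as np

def G_closed(a, b):
    # eqs. (reg)/(img); the gapped phase (b -> 0) needs separate handling
    L = np.log(((1 - a)**2 + b**2) / ((1 + a)**2 + b**2))
    T = np.arctan((1 - a)/b) + np.arctan((1 + a)/b)
    return (-2.0 - 0.5*a*L + b*T) + 1j*(-0.5*b*L - a*T)

def solve_F(psi1, psi2, eta_p, eta_m, mix=0.3, tol=1e-12, itmax=10000):
    # damped fixed point for (1 + eta_p G) F0 = psi1, (1 - eta_m G) F5 = psi2
    F0, F5 = psi1 + 0.05j, psi2 + 0j        # seed with a small imaginary part
    for _ in range(itmax):
        z = np.sqrt(F0**2 - F5**2 + 0j)     # (a+ib)^2 = F0^2 - F5^2
        if z.imag < 0:
            z = -z                          # retarded branch
        G = G_closed(z.real, z.imag)
        F0n, F5n = psi1/(1 + eta_p*G), psi2/(1 - eta_m*G)
        if abs(F0n - F0) + abs(F5n - F5) < tol:
            break
        F0, F5 = (1 - mix)*F0 + mix*F0n, (1 - mix)*F5 + mix*F5n
    return F0, F5

print(solve_F(psi1=0.05, psi2=0.02, eta_p=0.57, eta_m=0.10))
\end{verbatim}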
The numerical solutions thus obtained are clearly
fourfold, i.e.\ $(a,b)$, $(-a,b)$, $(a,-b)$ and $(-a,-b)$.
This leads to a double degeneracy in
${\cal F}_0$ and ${\cal F}_5$,
\begin{eqnarray}
\hspace{1.5cm}
({\cal F}^{\prime}_0,{\cal F}^{\prime\prime}_0,
{\cal F}^{\prime}_5,{\cal F}^{\prime\prime}_5), \ \
({\cal F}^{\prime}_0,-{\cal F}^{\prime\prime}_0,
{\cal F}^{\prime}_5,-{\cal F}^{\prime\prime}_5), \nonumber
\end{eqnarray}
with ${\cal F}_0\equiv
{\cal F}^{\prime}_0 + i {\cal F}^{\prime\prime}_0$
and ${\cal F}_5 \equiv
{\cal F}^{\prime}_5 + i {\cal F}^{\prime\prime}_5$.
Moreover, these two solutions at
$(\psi_1,\psi_2)$ are related to those at
the other three points, i.e. $(\psi_{1},-\psi_{2})$, $(-\psi_{1},\psi_{2})$
and $(-\psi_{1},-\psi_{2})$,
\begin{eqnarray}
&&\hspace{-0.1cm} ({\cal F}^{\prime}_0,\pm {\cal F}^{\prime\prime}_0,
{\cal F}^{\prime}_5,\pm {\cal F}^{\prime\prime}_5)_{|\psi_{1},\psi_{2}}
= ({\cal F}^{\prime}_0,\pm {\cal F}^{\prime\prime}_0,-
{\cal F}^{\prime}_5,\mp {\cal F}^{\prime\prime}_5)_{|\psi_{1},-\psi_{2}} \nonumber \\
&&\hspace{0.1cm} = (-{\cal F}^{\prime}_0,\pm {\cal F}^{\prime\prime}_0,
{\cal F}^{\prime}_5,\mp {\cal F}^{\prime\prime}_5)_{|-\psi_{1},\psi_{2}}
= (-{\cal F}^{\prime}_0,\pm {\cal F}^{\prime\prime}_0,-
{\cal F}^{\prime}_5,\pm {\cal F}^{\prime\prime}_5)_{|-\psi_{1},-\psi_{2}}.
\label{relation}
\end{eqnarray}
The upper/lower sign is for the retarded/advanced Green function
(Fig.~\ref{table1}).
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.75]{pd1.eps}
\caption{The self-consistent Born phase diagram
for fixed $\Delta_a/\Delta_s=B/\Delta_s=0.5$. (a/b/c)
$\eta_{+}=0.31/0.48/0.53$, where $\eta_{+,c}=0.5$.}
\label{pd2}
\end{center}
\end{figure}
Fig.~\ref{pb} demonstrates how ${\cal F}_0$ and ${\cal F}_5$ behave
as a function of $\mu$ and $m$, both for $\eta_{+}<\eta_{+,c}$
and for $\eta_{+}>\eta_{+,c}$. Fig.~\ref{pd2} shows the
corresponding phase diagrams, where both
${\cal F}^{\prime\prime}_0$ and ${\cal F}^{\prime\prime}_5$
vanish in the two incompressible phases (the
topological insulator and an ordinary insulator).
For $\eta_{+}<\eta_{+,c}$, the $\mu$-$m$ phase diagram contains
a {\it direct} phase transition point between
these two gapped phases (Fig.~\ref{pd2}(a,b)).
For $\eta_{+}>\eta_{+,c}$, this direct
phase transition point becomes a {\it finite region} of
the diffusive phase (Fig.~\ref{pd2}(c)). Especially
at $\psi_1=0$, this diffusive region ranges from
$\psi_2=-\psi_{2,c}$ to $\psi_2=\psi_{2,c}$,
where $\psi_{2,c}$ is given by eq.~(\ref{bound}).
Note also that, due to the symmetry relation given in
eq.~(\ref{relation}), all
the phase boundaries are symmetric with respect to the
sign changes of $\psi_1$ and $\psi_2$
(see red arrows in Fig.~\ref{pd2}(c)).
\section{quantum conductivity correction in the presence of
the topological-mass type disorder potential}
We will discuss the behaviour of the quantum
conductivity correction in the presence of both the chemical-potential-type
disorder and the topological-mass-type disorder. To
do this, consider the following series-sum of the
ladder-type diagrams;
\begin{eqnarray}
\hspace{-0.4cm}
\Gamma^{\rm dif}(q,\omega) =
\Big[\hat{\xi}^{-1} - \sum_{k}\hat{G}^{R}\big(k_{+},
\mu_{+}\big)\times \hat{G}^{A}\big(k_{-},
\mu_{-}\big)\Big]^{-1} \label{gam}
\end{eqnarray}
where $k_{\pm}\equiv k\pm\frac{q}{2}$,
$\mu_{\pm}\equiv \mu\pm\frac{\omega}{2}$
and the $16$ by $16$ tensor $\hat{\xi}$ is given by
\footnote{Note that the product between these tensors is
defined as $(\hat{A}_r \times \hat{A}_a) \cdot (\hat{B}_r \times \hat{B}_a)
\equiv \hat{A}_{r}\hat{B}_r\times \hat{B}_a\hat{A}_a$, where
$\hat{A}_r$ ($\hat{B}_r$) is a $4$ by $4$ matrix in the
retarded line, while $\hat{A}_a$ ($\hat{B}_a$) stands for
that in the advanced line.};
\begin{eqnarray}
\hspace{0.1cm}\hat{\xi} \equiv \Delta_{0,0}\hat{\gamma}_{0}\times
\hat{\gamma}_{0} + \Delta_{0,5} \hat{\gamma}_5 \times \hat{\gamma}_0
+ \Delta_{5,0} \hat{\gamma}_{0}\times \hat{\gamma}_5 + \Delta_{5,5}
\hat{\gamma}_5 \times \hat{\gamma}_5. \nonumber
\end{eqnarray}
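The product rule quoted in the footnote can be checked with a concrete
matrix representation; for instance, representing
$\hat{A}_r\times\hat{A}_a$ by ${\tt kron}(\hat{A}_r,\hat{A}^t_a)$
(our own choice of representation, not implied by the text) realizes it:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Ar, Aa, Br, Ba = (rng.standard_normal((4, 4)) for _ in range(4))

def cross(Mr, Ma):
    # a 16x16 representation of M_r x M_a in which the advanced
    # factor multiplies in reversed order
    return np.kron(Mr, Ma.T)

lhs = cross(Ar, Aa) @ cross(Br, Ba)
rhs = cross(Ar @ Br, Ba @ Aa)   # (A_r x A_a).(B_r x B_a) = A_rB_r x B_aA_a
print(np.allclose(lhs, rhs))    # -> True
\end{verbatim}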
Tracing out its two vertices with
some internal degree of freedom, say $\hat{\gamma}_j$,
we obtain the correlation function of the corresponding density
$\psi^{\dagger}\hat{\gamma}_j \psi$,
\begin{eqnarray}
&&\hspace{-1.4cm}
\phi_{j}(q,\omega) \equiv - \frac{1}{2\pi i}
\sum_{\alpha,\cdots,\theta}\sum_{k,k'}
\big[\hat{\gamma}_j\big]_{\beta\alpha}
\hat{G}^{R}_{\alpha\epsilon}(k_{+},\mu_{+})\hat{G}^{A}
_{\zeta\beta}(k_{-},\mu_{-}) \nonumber \\
&&\hspace{0.9cm}
\big\{\delta_{k,k'}
\delta_{\epsilon\delta}\delta_{\gamma\zeta}
\ + \ \Gamma^{\rm dif}_{\epsilon\theta,
\eta\zeta}(q,\omega)
\hat{G}^{R}_{\theta\delta}(k^{\prime}_{+},\mu_{+})\hat{G}^{A}
_{\gamma\eta}(k^{\prime}_{-},\mu_{-})\big\}
\big[\hat{\gamma}_j\big]_{\delta\gamma} \label{cor}
\end{eqnarray}
When this density is a conserved quantity in each ensemble,
like $\psi^{\dagger}\hat{\gamma}_0 \psi$, the function should
exhibit the diffusive behaviour in the infrared region,
$q,\omega \simeq 0$. Thus, the above series-sum carries at
least one diffusion pole.
When its advanced line is ${\cal T}$-reversed,
it is transcribed into the series-sum of the
`maximally-crossed' diagrams (Cooperon),
\begin{eqnarray}
&& \hspace{-1.4cm}
(1\otimes s_y)_{\gamma\gamma^{\prime}}
\Gamma^{\rm dif}_{\alpha\delta,\beta^{\prime}\gamma^{\prime}}
(q,\omega)(1\otimes s_y)_{\beta^{\prime}\beta}
= \sum_{i,j=0,5} \Delta_{i,j} \ [\hat{\gamma}_{i}]_{\alpha\delta}
\times [\hat{\gamma}_{j}]_{\gamma\beta} \nonumber \\
&&\hspace{-1.6cm} + \sum_{i,\cdots,m} \sum_{k}
\Delta_{i,j}\Delta_{l,m} \
[\hat{\gamma}_{i}]_{\alpha\alpha_1}
\hat{G}^{R}_{\alpha_1\delta_1}(k_{+},\mu_{+})
[\hat{\gamma}_l]_{\delta_1\delta} \times
[\hat{\gamma}_j]_{\gamma\gamma_1}
\hat{G}^{A}_{\gamma_1\beta_1}(-k_{-},\mu_{-})
[\hat{\gamma}_{m}]_{\beta_1\beta} \nonumber \\
&&\hspace{-1.4cm} + \sum_{i,\cdots,p} \sum_{k,k^{\prime}}
\Delta_{i,j}\Delta_{l,m} \Delta_{n,p} \
[\hat{\gamma}_{i}]_{\alpha\alpha_1}
\hat{G}^{R}_{\alpha_1\delta_1}(k_{+},\mu_{+})
[\hat{\gamma}_{l}]_{\delta_1\alpha_2}
\hat{G}^{R}_{\alpha_2\delta_2}(k^{\prime}_{+},\mu_{+})
[\hat{\gamma}_{n}]_{\delta_2\delta} \nonumber \\
&& \hspace{-0.cm}
\times [\hat{\gamma}_j]_{\gamma\gamma_1}
\hat{G}^{A}_{\gamma_1\beta_1}(-k_{-},\mu_{-})
[\hat{\gamma}_{m}]_{\beta_1\gamma_2}
\hat{G}^{A}_{\gamma_2\beta_2}(-k^{\prime}_{-},\mu_{-})
[\hat{\gamma}_{p}]_{\beta_2\beta} + \cdots, \label{coop}
\end{eqnarray}
which describes the quantum interference between
the time-reversal-paired scattering processes from
$K$ to $-K+q$. Thus, any type of the diffusion pole
appearing in eq.~(\ref{gam}) generally results in a certain
quantum interference effect in the backward
scattering channel.
To investigate the nature of the diffusion poles in eq.~(\ref{gam}),
we first employ the canonical transformation used in sec.~4,
\begin{eqnarray}
&& \hspace{0.8cm}
\overline{\Gamma}^{\rm dif}(q,\omega) \equiv V^{R}_{\theta}
\times V^{A}_{\theta} \cdot \Gamma^{\rm dif}(q,\omega) \cdot
V^{R}_{\theta} \times V^{A}_{\theta}, \nonumber \\
&&\hspace{1.8cm}
V^{R}_{\theta} = V^{A}_{\theta} =
\cosh \frac{\theta}{2} \
\hat{\gamma}_0 +
\sinh \frac{\theta}{2} \ \hat{\gamma}_5, \nonumber
\end{eqnarray}
with $\theta$ defined in eq.~(\ref{bog}).
Such a transformation diagonalizes $\hat{\xi}$.
Simultaneously, it simplifies the expression of
$\hat{G}^{R(A)}(k_{\pm},\mu_{\pm})$ in terms of
${\cal F}_0$ and ${\cal F}_{5}$. Namely, the series-sum
thus transformed reads
\begin{eqnarray}
\hspace{-0.3cm}\overline{\Gamma}^{\rm dif}(q,\omega) =
\Big[\hat{\xi}^{-1}_d -
\sum_{k}{\cal G}^{R}\big(k_{+},\mu_{+}\big)
\times {\cal G}^{A}\big(k_{-},\mu_{-}\big)\Big]^{-1}
\label{cano}
\end{eqnarray}
with
\begin{eqnarray}
&& \hspace{-0.64cm}
\hat{\xi}^{-1}_d \equiv \frac{\upsilon_{+}}{\upsilon^2_{+}-\upsilon^2_-}
\ \hat{\gamma}_0 \times \hat{\gamma}_0 - \frac{\upsilon_-}
{\upsilon^2_{+}-\upsilon^2_{-}}\ \hat{\gamma}_5
\times \hat{\gamma}_5, \nonumber \\
&&\hspace{-1.5cm}
\hat{\cal G}^{R}\big(k_{+},\mu_{+}\big)
\equiv \frac{{\cal F}_{0,+}}{{\cal F}^2_{0+} - k^2_{+}
-{\cal F}^2_{5,+}} \hat{\gamma}_0 -
\sum_{j=1}^3 k_{+,j} \hat{\gamma}_j -
\frac{{\cal F}_{5,+}}{{\cal F}^2_{0,+} - k^2_{+}
-{\cal F}^2_{5,+}} \hat{\gamma}_5, \nonumber \\
&&\hspace{-1.5cm}
\hat{\cal G}^{A}\big(k_{-},\mu_{-}\big)
\equiv \frac{{\cal F}^{*}_{0,-}}
{({\cal F}^*_{0,-})^2 - k^2_{-}
-({\cal F}^*_{5,-})^2} \hat{\gamma}_0 -
\sum_{j=1}^3 k_{-,j} \hat{\gamma}_j - \frac{{\cal F}^{*}_{5,-}}
{({\cal F}^*_{0,-})^2 - k^2_{-}
-({\cal F}^*_{5,-})^2}\hat{\gamma}_5, \nonumber \\
&&\hspace{-0.68cm}
2{\upsilon}_{\pm}\equiv \sqrt{\Delta^2_{s}-B^2}\pm 2\pi
(\Delta_{0,0} - \Delta_{5,5}). \nonumber
\end{eqnarray}
The subscript on ${\cal F}_{j}$ stands for its
$\omega$-dependence; ${\cal F}_{j,\pm}
\equiv {\cal F}_{j}(\mu\pm \frac{\omega}{2},m)$,
where the $\mu$- and $m$-dependence of ${\cal F}_j$
are determined by the preceding gap equation,
eqs.~(\ref{gap3a}-\ref{img}).
Taking $q$ to be zero, we
obtain a simpler
expression for eq.~(\ref{cano}),
\begin{eqnarray}
&& \hspace{0.4cm}
\overline{\Gamma}^{\rm dif}(0,\omega) =
f^{-1}_{1} {\Gamma}_1
+ f^{-1}_{2} {\Gamma}_2
+ f^{-1}_{3} {\Gamma}_3
+ f^{-1}_{4} {\Gamma}_{4}, \label{diff} \\
&& \hspace{0.4cm}
f_{1} = a^2_1 - \delta a^2_{04} + \delta a^2_{23}, \ \ \
f_{2} = a^2_1 - a^2_{04} + a^2_{23}, \nonumber \\
&&\hspace{0.7cm}
f_{3} = 9 a^2_1 - \delta a^2_{04} + \delta a^2_{23}, \ \
f_{4} = 9 a^2_1 - a^2_{04} + a^2_{23}, \nonumber
\end{eqnarray}
with $\delta a_{ij} \equiv a_i-a_j$,
$a_{ij}\equiv a_i + a_j$ and
\begin{eqnarray}
&&\hspace{-0.3cm}
a_0 \equiv \frac{v_+}{v^2_+ - v^2_-} -
\sum_{k}
\frac{{\cal F}_{0,+} {\cal F}^{*}_{0,-}}
{(k^2-{\cal F}^2_{0,+}+{\cal F}^2_{5,+})
(k^2-({\cal F}^*_{0,-})^2+({\cal F}^*_{5,-})^2)}, \label{5a0} \\
&&\hspace{-0.3cm}
a_1 \equiv -
\sum_{k}
\frac{k^2_x} {(k^2-{\cal F}^2_{0,+}+{\cal F}^2_{5,+})
(k^2-({\cal F}^*_{0,-})^2+({\cal F}^*_{5,-})^2)}, \label{5a1} \\
&&\hspace{-0.3cm}
a_2 \equiv \sum_{k}
\frac{{\cal F}_{0,+} {\cal F}^{*}_{5,-}}
{(k^2-{\cal F}^2_{0,+}+{\cal F}^2_{5,+})
(k^2-({\cal F}^*_{0,-})^2+({\cal F}^*_{5,-})^2)}, \label{5a2} \\
&&\hspace{-0.3cm}
a_3 \equiv \sum_{k}
\frac{{\cal F}_{5,+} {\cal F}^{*}_{0,-}}
{(k^2-{\cal F}^2_{0,+}+{\cal F}^2_{5,+})
(k^2-({\cal F}^*_{0,-})^2+({\cal F}^*_{5,-})^2)}, \label{5a3} \\
&&\hspace{-0.3cm}
a_4 \equiv - \frac{v_-}{v^2_+ - v^2_-} -
\sum_{k}
\frac{{\cal F}_{5,+} {\cal F}^{*}_{5,-}}
{(k^2-{\cal F}^2_{0,+}+{\cal F}^2_{5,+})
(k^2-({\cal F}^*_{0,-})^2+({\cal F}^*_{5,-})^2)}. \label{5a4}
\end{eqnarray}
The four $\Gamma_{j}$ in eq.~(\ref{diff})
all turn out to be regular functions (tensors) of
$\omega$~\cite{prb-sm}. The only
possibility, therefore, is that a diffusion pole appears in
one of the four coefficients $f_j$.
In fact, employing the technique invented in
Ref.~\cite{prb-sm}, we can easily show that
eq.~(\ref{gap3a}) requires $f_{4}$ to be zero
at $\omega = 0$.
The other three coefficients are generally
truncated by some finite infrared cutoff. Each cutoff
stands for (the inverse of) the relaxation time of a certain
internal degree of freedom other than charge density.
When the topological-mass-type disorder
is absent~\cite{prb-sm}, $f_{3}$ also exhibits
the diffusive behaviour
at the massless point, $m=0$.
This is because, in the presence of only the $\gamma_0$-type disorder
potential, $\hat{\gamma}_{45}$ always commutes with
a hamiltonian at the TQCP, so that not only
the charge density but the corresponding parity density\footnote{
$\hat{\gamma}_{45}$ is spatially-inversion odd, while
time-reversal even, so that we call the corresponding
density, $\psi^{\dagger}\hat{\gamma}_{45}\psi$,
as the parity density.} also follow a diffusion equation.
Indeed, when substituted into eq.~(\ref{cor}),
$f^{-1}_{3}\Gamma_{3}$ contributes to the correlation
function of this parity density,
while $f^{-1}_4\Gamma_4$ contributes to that of the charge density.
When their advanced lines are
time-reversed as in eq.~(\ref{coop}), both $\Gamma_3$
and $\Gamma_{4}$ result in positive weak-localization (AWL)
corrections of the same magnitude
to the charge conductivity. Thus,
the additional $U(1)$ symmetry recovery at the TQCP
induces the `doubling phenomenon' of
the quantum conductivity correction around this massless point.
In the presence of the topological-mass-type disorder,
however, the relaxation time of this parity
density, given below, {\it always remains finite} in
the {\it whole} $\mu$-$m$ parameter space,
\begin{eqnarray}
\hspace{0.1cm}
\tau^{-1}_{\rm topo} \equiv
\Big\{\frac{\partial f_4}{\partial \omega}
(f_3-f_4) \Big\}_{|\omega=0} =
\Big\{\frac{\partial f_4}{\partial \omega}
(-a_0 a_4 + a_2 a_3) \Big\}_{|\omega=0}. \label{topo}
\end{eqnarray}
Namely, though both $a_{2}$ and $a_3$ on the r.h.s. vanish
at $\psi_2\equiv -\mu \sinh \theta + m \cosh \theta =0$,
$a_4$ never vanishes even at $\psi_2=0$, due to the first
term in eq.~(\ref{5a4}), $- \frac{v_-}{v^2_{+}-v^2_-}$.
Indeed, once the $\gamma_5$-type impurity potential
is introduced, the parity density is generally a
non-conserved quantity. As a result, unlike in Ref.~\cite{prb-sm},
only the charge diffusion pole results in the positive
weak-localization correction, while that of the parity
diffusion mode remains always `gapped' in the
entire $\mu$-$m$ parameter space.
\section{summary}
From the bulk-edge correspondence, the topological
quantum critical point (TQCP) which intervenes between the three-dimensional
topological insulator and an ordinary
insulator is expected to be stable against any non-magnetic
disorder, provided that each surface state supported in the
topological insulator phase is stable against the same
perturbations. To understand the stability of this 3-d TQCP,
we first employ
the `Berry phase' argument and show that
any backward scattering process which conserves
the parity density degree of freedom,
$\psi^{\dagger}\gamma_{45}\psi$, is
always offset by its ${\cal T}$-reversal
counter-process. This observation upholds
more directly our previous surmise in
Ref.~\cite{prb-sm}, where
two of the authors conjectured that, when the system passes
from the topological insulator side to the ordinary insulator
side, the parity density at some point always becomes a conserved
quantity. However, it is still an open issue {\it how} the parity
density becomes a conserved quantity at
these transition points (or regions)
{\it in the presence of generic non-magnetic disorders}.
Namely, the `Berry phase' argument {\it only} confirms
that, as far as the parity density
is a conserved quantity, the 3-d TQCP remains
delocalized (or critical) even in the presence of
sufficiently strong generic non-magnetic disorders.
In fact, as for those backward scattering processes
which do not conserve the parity density, the above `Berry phase' argument
does not work at all. To understand these generic ${\cal T}$-symmetric
situations, we further derive the self-consistent Born phase diagram in
the presence of general non-magnetic disorders.
We found that the derived scB phase diagram
is basically the same as that in the case with only the
chemical-potential-type disorder. Namely, as in
Ref.~\cite{prb-sm}, there exists a certain critical
disorder strength, below which the two gapped phases
(the topological insulator and an ordinary
insulator) are always separated by the {\it direct}
quantum phase transition point in the $\mu$-$m$
parameter space. When the disorder strength exceeds
this critical value, this direct transition
point becomes a {\it finite region} of the
diffusive metallic phase.
As for the quantum conductivity correction,
the situation changes drastically.
On increasing the topological-mass-type disorder
potential, we found that the diffusive parity-density
mode observed at the 3-d TQCP always acquires a
{\it finite relaxation time} in the
{\it entire} $\mu$-$m$ parameter space. This indicates that
the `doubling phenomenon' of the quantum conductivity correction
previously observed at the TQCP~\cite{prb-sm} becomes
suppressed in the presence of the $\gamma_5$-type disorder
potential.
\section{Acknowledgments}
RS and RN were supported by the Institute of Physical
and Chemical Research (RIKEN) and SM was supported
by Grants-in-Aid for Scientific Research from
the Ministry of Education, Culture,
Sports, Science and Technology (MEXT) of Japan.
\section*{References}
\section{Introduction}
In 1973 Y. Nambu proposed a generalization of the usual
Hamiltonian dynamics, in which odd-dimensional phase spaces are
also possible \cite{ref:Nambu}. According to his proposal, the time
evolution of a dynamical variable $f(x_1,\dots ,x_n)=f(x)$ over an
$n$-dimensional phase space is given by the so-called Nambu
bracket
\begin{eqnarray}\label{Nambu}
\dot{f}=\{f,H_1,\dots ,H_{n-1}\}=\frac{\partial (f,H_1,\dots
,H_{n-1})}{\partial (x_1, \dots ,x_n)},
\end{eqnarray}
where $H_1,\dots ,H_{n-1}$ are the functionally independent
Hamilton functions and the variables $x_1,\dots ,x_n$ stand for
the local coordinates of $\mathbb{R}^n$. The explicit form of the
Nambu bracket (\ref{Nambu}) is given by the expression
\begin{equation}\label{explicit}
\{f_{1},\dots ,f_{n}\}=\frac{\partial (f_{1},\dots
,f_{n})}{\partial (x_1,\dots ,x_n)}= \epsilon_{i_{1}\cdots
i_{n}}\frac{\partial f_1}{\partial x_{i_1}}\cdots \frac{\partial
f_n}{\partial x_{i_n}}.
\end{equation}
(Throughout the text, summation is taken over all repeated indices.) The
coordinate-free expression of the Nambu bracket is defined by
means of the $(n-1)$-form $\Gamma =dH_1\wedge \cdots \wedge
dH_{n-1}$, namely
\begin{eqnarray}
^*(df\wedge \Gamma)=\{f,H_1,\dots ,H_{n-1}\},
\end{eqnarray}
where $d$ and $\wedge$ denote the usual exterior derivative and
exterior product respectively, and $^*$ is the Hodge map.
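For concreteness, the bracket (\ref{explicit}) is straightforward to
evaluate symbolically. The following Python/SymPy sketch (the sample
Hamilton functions are our own choice) computes it as a Jacobian
determinant for $n=3$:
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

def nambu_bracket(f, g, h, coords=(x1, x2, x3)):
    # {f,g,h} = partial(f,g,h)/partial(x1,x2,x3)
    return sp.Matrix([f, g, h]).jacobian(coords).det()

print(nambu_bracket(x1, x2, x3))            # fundamental bracket -> 1

H1 = (x1**2 + x2**2)/2                      # sample Hamilton functions
H2 = x3**2/2
print(sp.expand(nambu_bracket(x1, H1, H2))) # xdot_1 -> x2*x3
\end{verbatim}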
It is well known that canonical transformations (CTs) are a
powerful tool in the usual Hamilton mechanics. They serve three
main purposes: to describe the evolution of a dynamical system, to
show the equivalence of two systems, and, most importantly, to transform a
system of interest into a simpler or known one in different
variables. In this paper we study CTs in the phase space endowed
with the canonical Nambu bracket and try to gain a deeper
insight into the subject in a general framework.
The paper is organized as follows: In Sec.2, a precise definition
of a CT in three-space is given. Since every CT is a canonoid
transformation, an explicit definition of canonoid
transformations is given first. In doing so, the
discussion is kept in its most general form, i.e., the time
dependent one. Additionally, direct conditions on a CT
corresponding to the ones in the usual even-dimensional Hamilton
formalism are constructed. Sec.3 is devoted to showing how to find
the generating functions (GFs) and the new Hamilton functions.
This section also contains the way to find the CT for given GFs.
It is seen that if one wants to know the GFs, the CT and the new
Hamilton functions, one must solve a Pfaffian differential
equation related to the corresponding quantity. Sec.4 presents
examples of CTs, including the definitions of gauge and
point CTs in three-space. Sec.5 deals with the classification of
CTs. It gives an extensive number of types. All of the possible
eighteen types are listed under six main kinds in Table~\ref{table1}.
As an inevitable part of the presentation, we construct the
infinitesimal canonical transformations (ICTs) in Sec.6. It is shown that
the construction parallels the usual Hamilton formalism such that
ICTs can generate finite CTs. In order to complete the discussion,
in Sec.7 it is shown that a CT in three-space can be decomposed
into a sequence of three minor CTs. This result, in fact, confirms
a well-known conjecture stating the same in the usual
classical and quantum mechanics.
\section{Definition of Canonical Transformations in Three-Space}
In the definition (\ref{Nambu}), $f$ and Hamilton functions
$H_1,\dots ,H_{n-1}$ do not contain $t$ explicitly. For the sake
of generality we will allow the explicit $t$ dependence. Since,
for the local coordinates $x_1, x_2, x_3$, the Nambu-Hamilton
equations of motion give
\begin{eqnarray}\label{NH}
\dot{x_i}=\epsilon_{ijk}\frac{\partial H_1}{\partial
x_j}\,\frac{\partial H_2}{\partial x_k},\qquad i,j,k=1,2,3,
\end{eqnarray}
(from now on, all Latin indices will take values $1,2,3$), total
time evolution of a dynamical variable $f(x,t)$ becomes
\begin{eqnarray}
\dot{f}=\{f,H_1,H_2\}+\frac{\partial f}{\partial t}.
\end{eqnarray}
Hence time evolution of the Hamilton functions amounts to the well
known form
\begin{eqnarray}
\dot{H_\alpha}=\frac{dH_\alpha}{dt}=\frac{\partial
H_\alpha}{\partial t},\qquad \alpha =1,2.
\end{eqnarray}
Before giving the definition of a CT in three-space directly,
it is worthwhile to present some preliminary considerations.
First, by using the terminology developed for
the usual Hamilton formalism in the literature
\cite{ref:Abraham,ref:Jose}, we give the definition of a canonoid
transformation. The main definition of a CT will be based on this
definition.\newline\newline%
{\bf Definition 2.1.} For a dynamical system whose equations of
motion are governed by the pair $(H_1(x,t),H_2(x,t))$, the time
preserving diffeomorphism $\mathbb{R}^3\times
\mathbb{R}\rightarrow \mathbb{R}^3\times \mathbb{R}$ such that
\begin{eqnarray}\label{imap}
(x_i,t)\mapsto (X_i(x,t),t)
\end{eqnarray}
is called a \emph{canonoid }transformation with respect to the
pair $(H_1,H_2)$ if there exists a pair
$(K_1(X,t),K_2(X,t))$ satisfying
\begin{eqnarray}\label{covariance}
\dot{X}_i=\epsilon_{ijk}\frac{\partial K_1}{\partial
X_j}\,\frac{\partial K_2}{\partial X_k},
\end{eqnarray}
where $\mathbb{R}^3\times\mathbb{R}$ is the extended phase space
in which $t$ is considered as an additional independent variable.
The invertible transformation (\ref{imap}) (canonoid or not) also
changes the basis of vector fields and differential forms:
\begin{eqnarray}\label{vfield}
\frac{\partial}{\partial x_i}=\frac{\partial X_j}{\partial
x_i}\frac{\partial}{\partial X_j}+\frac{\partial t}{\partial
x_i}\frac{\partial }{\partial t}(=0),\qquad
\frac{\partial}{\partial X_i}=\frac{\partial x_j}{\partial
X_i}\frac{\partial}{\partial x_j}+\frac{\partial t}{\partial
X_i}\frac{\partial }{\partial t}(=0),
\end{eqnarray}
\begin{eqnarray}\label{form}
dx_i=\frac{\partial x_i}{\partial X_j}dX_j+\frac{\partial
x_i}{\partial t}dt(=0),\qquad dX_i=\frac{\partial X_i}{\partial
x_j}dx_j+\frac{\partial X_i}{\partial t}dt.
\end{eqnarray}
In the time independent case, the extended part drops out and the map
is defined on $\mathbb{R}^3$ as expected, i.e.,
\begin{eqnarray}
x_i\mapsto X_i(x).
\end{eqnarray}
Note that such a map treats $t$ in any time dependent function
$f(x,t)$ only as a parameter.
According to Definition 2.1 it is obvious that $K_1$ and $K_2$
serve as Hamilton functions for the new variables and the
transformation (\ref{imap}) preserves the Nambu-Hamilton
equations.
As an example, consider the Nambu system
\begin{eqnarray}
\dot{x_1}=x_2x_3\;,\;\dot{x_2}=-x_1x_3\;,\;\dot{x_3}=0
\end{eqnarray}
governed by the Hamilton functions
\begin{eqnarray}\label{Hs}
H_1(x)=\frac{1}{2}(x_1^2+x_2^2)\;,\;H_2(x)=\frac{1}{2}x_3^2.
\end{eqnarray}
Let the transformation be
\begin{eqnarray}\label{canonoid}
X_1=x_1\;,\;X_2=x_2\;,\;X_3=x_3^2.
\end{eqnarray}
Now if we choose the new Hamilton functions as
\begin{eqnarray}
K_1(X)=\frac{1}{2}(X_1^2+X_2^2)\;,\;K_2(X)=\frac{2}{3}X_3^{3/2},
\end{eqnarray}
we see that Nambu-Hamilton equations of motion remain covariant.
For a different pair $(H_1,H_2)$, there may not exist a new pair
$(K_1,K_2)$ for the same transformation.
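These statements can be verified symbolically; a brief SymPy sketch of
our own (the variables are declared positive so that $\sqrt{x_3^2}=x_3$):
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)

def bracket(f, g, h, coords):
    return sp.Matrix([f, g, h]).jacobian(coords).det()

H1, H2 = (x1**2 + x2**2)/2, x3**2/2
X = [x1, x2, x3**2]                         # the canonoid transformation
Xdot = [sp.simplify(bracket(Xi, H1, H2, (x1, x2, x3))) for Xi in X]

X1, X2, X3 = sp.symbols('X1 X2 X3', positive=True)
K1, K2 = (X1**2 + X2**2)/2, sp.Rational(2, 3)*X3**sp.Rational(3, 2)

def jac2(f, g, u, v):
    return sp.diff(f, u)*sp.diff(g, v) - sp.diff(f, v)*sp.diff(g, u)

sub = {X1: x1, X2: x2, X3: x3**2}
rhs = [sp.simplify(jac2(K1, K2, X2, X3).subs(sub)),
       sp.simplify(jac2(K1, K2, X3, X1).subs(sub)),
       sp.simplify(jac2(K1, K2, X1, X2).subs(sub))]

print(Xdot, rhs)                    # both give [x2*x3, -x1*x3, 0]
print(bracket(*X, (x1, x2, x3)))    # -> 2*x3: not constant, so the map
                                    #    is canonoid but not canonical
\end{verbatim}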
It is well known that the canonicity condition of a transformation
must be independent of the forms of the Hamilton functions. We
now give a theorem related to this condition. Our theorem is
the three-dimensional, time-dependent generalization of the
two-dimensional, time-independent version \cite{ref:Hurley}.
\newline\newline%
{\bf Theorem 2.1.} The transformation $(\ref{imap})$ is canonoid
with respect to all Hamiltonian pairs iff
\begin{eqnarray}\label{constant}
\{X_1,X_2,X_3\}=\rm{constant}.
\end{eqnarray}
\newline%
{\bf Proof:} If we consider the fact that
\begin{eqnarray}\label{Kzero}
\epsilon_{ijk}\frac{\partial}{\partial X_i}\frac{\partial
(K_1,K_2)}{\partial (X_j,X_k)}=0,
\end{eqnarray}
it is apparent from (\ref{covariance}) that the existence of $K_1$
and $K_2$ is equivalent to
\begin{eqnarray}\label{dots}
\frac{\partial\dot{X_i}}{\partial X_i}=0.
\end{eqnarray}
Since
\begin{eqnarray}\label{dot}
\dot{X}_i(x,t)=\frac{\partial X_i}{\partial
x_j}\dot{x_j}+\frac{\partial X_i}{\partial t},
\end{eqnarray}
with the help of (\ref{NH}), (\ref{dots}) reduces to
\begin{eqnarray}\label{cond1}
\epsilon_{jkl}\frac{\partial }{\partial X_i}\left(\frac{\partial
X_i}{\partial x_j}\frac{\partial H_1}{\partial x_k}\frac{\partial
H_2}{\partial x_l}\right)+\frac{\partial }{\partial
X_i}\frac{\partial X_i}{\partial t}=0.
\end{eqnarray}
Equivalently,
\begin{eqnarray}\label{derivate}
\epsilon_{jkl}\left(\frac{\partial }{\partial X_i}\frac{\partial
X_i}{\partial x_j}\right)\frac{\partial H_1}{\partial
x_k}\frac{\partial H_2}{\partial x_l}+\epsilon_{jkl}\frac{\partial
X_i }{\partial x_j}\frac{\partial }{\partial
X_i}\left(\frac{\partial H_1}{\partial x_k}\frac{\partial
H_2}{\partial x_l}\right)+\frac{\partial }{\partial
X_i}\frac{\partial X_i}{\partial t}=0.
\end{eqnarray}
If the first transformation rule in (\ref{vfield}) is used, the
second term of (\ref{derivate}) vanishes as
\begin{eqnarray}\label{Hzero}
\epsilon_{jkl}\frac{\partial}{\partial x_j}\frac{\partial
(H_1,H_2)}{\partial (x_k,x_l)}=0.
\end{eqnarray}
If we impose the requirement that the transformation is a canonoid
transformation independent of the Hamilton functions $H_1$ and
$H_2$, the coefficients in the first term of (\ref{derivate}) must
vanish, namely
\begin{eqnarray}\label{coeff}
\frac{\partial}{\partial X_i}\frac{\partial X_i}{\partial x_j}=0.
\end{eqnarray}
The last term in (\ref{derivate}) is already Hamiltonian
independent and it gets directly zero with the condition
(\ref{coeff}). Therefore the theorem becomes equal to the
following statement
\begin{eqnarray}
\frac{\partial}{\partial X_i}\frac{\partial X_i}{\partial x_j}=0
\quad\Leftrightarrow\quad \{X_1,X_2,X_3\}=\rm{constant}.
\end{eqnarray}
It is straightforward to see, after a somewhat lengthy but simple
calculation, that
\begin{eqnarray}\label{zeroB}
\partial_{X_m}\{X_1,X_2,X_3\}=0,
\end{eqnarray}
if (\ref{coeff}) is satisfied. Conversely, the explicit form of
(\ref{zeroB}), for $m=1$ for instance, is
\begin{eqnarray}\label{m1}
\partial_{X_1}\{X_1,X_2,X_3\}=\epsilon_{jkl}\frac{\partial X_2}{\partial x_k}\frac{\partial
X_3}{\partial x_l}\frac{\partial }{\partial X_i}\frac{\partial
X_i}{\partial x_j}=0.
\end{eqnarray}
Together with the other two values of $m$, (\ref{m1}) defines a
homogeneous system of linear equations for the unknowns
\begin{eqnarray}
\frac{\partial}{\partial X_i}\frac{\partial X_i}{\partial x_j}.
\end{eqnarray}
The determinant of the matrix of coefficients gives
$\{X_1,X_2,X_3\}^2$ and with the condition (\ref{constant}), the
unique solution is then the trivial one, i.e.,
(\ref{coeff}).\newline%
\null\hspace{17.5cm}$\Box$
\newline\newline%
{\bf Definition 2.2.} A \emph{canonical} transformation is a
canonoid transformation with
\begin{eqnarray}\label{necessaryandsufficient}
\{X_1,X_2,X_3\}=1.
\end{eqnarray}
Therefore a CT is a transformation preserving the fundamental
Nambu bracket
\begin{eqnarray}\label{micro}
\{x_1,x_2,x_3\}=1
\end{eqnarray}
independently of the forms of the pair $(H_1,H_2)$.
Additionally, if one employs the transformation rule
(\ref{vfield}) for (\ref{micro}), the canonicity condition gives
\begin{eqnarray}
\{x_1,x_2,x_3\}_X=1,
\end{eqnarray}
where the subscript $X$ means that the derivatives in the
expansion of the bracket are taken with respect to the new
coordinates $X_1,X_2,X_3$.
In fact, a brief definition of CTs in three-space is given
in Ref.~\cite{ref:Takhtajan} as a diffeomorphism of the phase
space which preserves the Nambu bracket structure. But such a
definition overlooks the possibility that the transformation is merely a
canonoid transformation.\newline\newline%
{\bf Remark 2.1.} A CT preserves the Nambu bracket of arbitrary
functions, i.e.,
\begin{eqnarray}
\{f(x,t),g(x,t),h(x,t)\}_x=\{f(x,t),g(x,t),h(x,t)\}_X.
\end{eqnarray}
According to Remark 2.1, one gets
\begin{subequations}
\begin{eqnarray}\label{direct1}
\{X_i,H_1,H_2\}_x&=&\{X_i,H_1,H_2\}_X,\\
\{x_i,H_1,H_2\}_x&=&\{x_i,H_1,H_2\}_X.\label{direct2}
\end{eqnarray}
\end{subequations}
With the help of (\ref{vfield}), the first covariance
(\ref{direct1}) implies the first group of conditions on a CT
\begin{eqnarray}\label{firstgroup}
\frac{\partial X_i}{\partial x_l}=\frac{\partial
(x_m,x_n)}{\partial (X_j,X_k)},
\end{eqnarray}
and (\ref{direct2}) implies the second group
\begin{eqnarray}\label{secondgroup}
\frac{\partial x_i}{\partial X_l}=\frac{\partial
(X_m,X_n)}{\partial (x_j,x_k)},
\end{eqnarray}
where $(i,j,k)$ and $(l,m,n)$ are cyclic indices.
(\ref{firstgroup}) and (\ref{secondgroup}) are the equations
corresponding to the so-called \emph{direct conditions} in
Hamilton formalism.
\section{Generating Functions}
We now discuss how CTs can be generated in the three-space. We
will show that to each CT corresponds a particular pair
$(F_1,F_2)$. $F_1$ and $F_2$ are the GFs of the transformation
defined on $\mathbb{R}^3\times \mathbb{R}$, and as shown in Sec.5,
they can give a complete classification of the CTs.
We start with the three-form
\begin{eqnarray}\label{3form}
\chi =dX_1\wedge dX_2\wedge dX_3.
\end{eqnarray}
When (\ref{form}) is used for every one-form in (\ref{3form}), we
get by (\ref{necessaryandsufficient}) that
\begin{eqnarray}\label{form-exp}
dX_1\wedge dX_2\wedge dX_3=dx_1\wedge dx_2\wedge
dx_3+\frac{\partial (X_{[i},X_j)}{\partial
(x_l,x_m)}\frac{\partial X_{k\,]}}{\partial t}\,dx_l\wedge
dx_m\wedge dt,
\end{eqnarray}
where the bracket [ ] stands for the cyclic sum. The substitution
of the term
\begin{eqnarray}
\frac{\partial X_i}{\partial t}=\frac{\partial (K_1,K_2)}{\partial
(X_j,X_k)}-\{X_i,H_1,H_2\}
\end{eqnarray}
obtained by (\ref{NH}), (\ref{covariance}) and (\ref{dot}), into
(\ref{form-exp}) gives ultimately that
\begin{eqnarray}\label{ultimate}
dX_1\wedge dX_2\wedge dX_3=dx_1\wedge dx_2\wedge dx_3-dH_1\wedge
dH_2\wedge dt+dK_1\wedge dK_2\wedge dt.
\end{eqnarray}
The first property that should be pointed out for (\ref{ultimate})
is that, for the time independent transformations it reduces
simply to
\begin{eqnarray}\label{condnotime}
dX_1\wedge dX_2\wedge dX_3=dx_1\wedge dx_2\wedge dx_3
\end{eqnarray}
which is an alternative test for the canonicity. Now let us
rewrite (\ref{ultimate}) as
\begin{eqnarray}
d\Omega =d(x_1dx_2\wedge dx_3-X_1dX_2\wedge dX_3-H_1dH_2\wedge
dt+K_1dK_2\wedge dt)=0.
\end{eqnarray}
We assume that the closed two-form $\Omega$ can be decomposed as
the product of two one-forms $dF_1$ and $dF_2$, then
\begin{eqnarray}\label{GFs}
dF_1\wedge dF_2=x_1dx_2\wedge dx_3-X_1dX_2\wedge
dX_3-H_1dH_2\wedge dt+K_1dK_2\wedge dt.
\end{eqnarray}
Equating the coefficients of similar basic two-forms not including
$dt$ on both sides of (\ref{GFs}) gives
\begin{eqnarray}\label{Ninterior}
&& \frac{\partial (F_1,F_2)}{\partial
(x_2,x_3)}=x_1-X_1\frac{\partial (X_2,X_3)}{\partial
(x_2,x_3)}:=A(x,t),\nonumber\\
&& \frac{\partial (F_1,F_2)}{\partial
(x_3,x_1)}=-X_1\frac{\partial
(X_2,X_3)}{\partial (x_3,x_1)}:=B(x,t),\nonumber\\
&& \frac{\partial (F_1,F_2)}{\partial
(x_1,x_2)}=-X_1\frac{\partial (X_2,X_3)}{\partial
(x_1,x_2)}:=C(x,t),
\end{eqnarray}
where the relation
\begin{eqnarray}
\frac{\partial A}{\partial x_1}+\frac{\partial B}{\partial
x_2}+\frac{\partial C}{\partial x_3}=0
\end{eqnarray}
is satisfied independently of the transformation due to the
general rule (\ref{Hzero}) written for the GFs $F_1$ and $F_2$.
(\ref{Ninterior}) is a useful set of equations in finding both GFs
and CTs: Since we have also
\begin{eqnarray}\label{ado}
\frac{\partial F_\alpha}{\partial x_{[i}}\,\frac{\partial
(F_1,F_2)}{\partial (x_j,x_{k]})}=\epsilon_{ijk}\frac{\partial
F_\alpha}{\partial x_i}\,\frac{\partial F_1}{\partial
x_j}\frac{\partial F_2}{\partial x_k}=0,
\end{eqnarray}
given a CT $X_i(x)$, the GFs appear as the solution to the Pfaffian
partial differential equation
\begin{eqnarray}\label{Pfaffian2}
A(x,t)\frac{\partial F_\alpha}{\partial x_1}+B(x,t)\frac{\partial
F_\alpha}{\partial x_2}+C(x,t)\frac{\partial F_\alpha}{\partial
x_3}=0,
\end{eqnarray}
up to an additive function of $t$. Conversely, given GFs,
(\ref{Ninterior}) provides the differential equation for $X_2$ and
$X_3$
\begin{eqnarray}\label{Pfaffian3}
[A(x,t)-x_1]\frac{\partial X_\beta}{\partial
x_1}+B(x,t)\frac{\partial X_\beta}{\partial
x_2}+C(x,t)\frac{\partial X_\beta}{\partial x_3}=0\quad ,\quad
\beta =2,3.
\end{eqnarray}
Once $X_\beta (x,t)$ has been determined, the complementary part
$X_1(x,t)$ of the transformation is immediate by returning to
(\ref{Ninterior}).
The general solutions to (\ref{Pfaffian2}) and (\ref{Pfaffian3})
are arbitrary functions of some unique arguments. Hence,
$F_\alpha$ or $X_\beta$ do not specify the transformation
uniquely. However, following the conventional procedure of the
textbooks, throughout the text we will take these unique arguments
as the solutions so long as they are suitable for our aim.
On the other hand, in (\ref{GFs}), the coefficients of the forms
including $dt$ give another useful relation between the GFs, the
CT and the new Hamilton functions;
\begin{eqnarray}\label{a}
\frac{\partial (F_1,F_2)}{\partial
(x_i,t)}=-H_1\,\frac{\partial H_2}{\partial
x_i}+K_1\,\frac{\partial K_2}{\partial x_i}- X_1\,\frac{\partial
(X_2,X_3)}{\partial (x_i,t)}.
\end{eqnarray}
Given a dynamical system with $(H_1,H_2)$ and a CT, finding the
pair $(K_1,K_2)$ is another matter. In order to find the new
Hamilton functions, we consider the interior product of
$\partial_{\,t}$ and the three-form (\ref{ultimate}), resulting in
\begin{eqnarray}\label{b}
\frac{\partial (K_1,K_2)}{\partial (x_i,x_j)}=\frac{\partial
(H_1,H_2)}{\partial (x_i,x_j)}+\frac{\partial
(X_{[k},X_l)}{\partial (x_i,x_j)}\,\frac{\partial X_{m]}}{\partial
t}=:f_{ij}(x,t).
\end{eqnarray}
Given $f_{ij}$, by means of (\ref{ado}) which is also valid for
the pair $(K_1,K_2)$, we obtain the differential equation
\begin{eqnarray}\label{Ksmall}
f_{\,[ij}\,\frac{\partial K_\alpha}{\partial x_{k]}}=0
\end{eqnarray}
whose solutions are the new Hamilton functions.
Alternatively, the Pfaffian partial differential equation
\begin{eqnarray}\label{Pfaffian1}
\dot{X}_i\,\frac{\partial K_\alpha}{\partial X_i}=0,
\end{eqnarray}
originated from (\ref{covariance}) and from the fact
\begin{eqnarray}\label{grule}
\frac{\partial K_\alpha}{\partial X_{[i}}\,\frac{\partial
(K_1,K_2)}{\partial (X_j,X_{k]})}=\epsilon_{ijk}\frac{\partial
K_\alpha}{\partial X_i}\,\frac{\partial K_1}{\partial
X_j}\frac{\partial K_2}{\partial X_k}=0,
\end{eqnarray}
gives the same solution pair but in terms of $X$. It is apparent
that the pairs $(F_1,F_2)$ and $(K_1,K_2)$ must also satisfy
$(\ref{a})$.
For the time independent CTs, finding the new Hamilton functions
is much easier without considering the differential equations
given above:\newline\newline%
{\bf Theorem 3.1.} If the CT is time independent, then the new
Hamiltonian pair can be found simply as
\begin{eqnarray}
(K_1(X,t),K_2(X,t))=(H_1(x(X),t),H_2(x(X),t)).
\end{eqnarray}
\newline%
{\bf Proof:}
\begin{eqnarray}\label{findingK}
\dot{X}_i=\frac{\partial X_i}{\partial
x_j}\dot{x}_j&=&\epsilon_{jmn}\frac{\partial X_i}{\partial
x_j}\frac{\partial H_1}{\partial x_m}\frac{\partial H_2}{\partial
x_n}\nonumber\\
&=&\epsilon_{jmn}\frac{\partial X_i}{\partial x_j}\frac{\partial
X_k}{\partial x_m}\frac{\partial X_l}{\partial x_n}\frac{\partial
H_1}{\partial X_k}\frac{\partial H_2}{\partial X_l}\nonumber\\
&=&\{X_i,X_k,X_l\}\frac{\partial H_1}{\partial X_k}\frac{\partial
H_2}{\partial X_l}\nonumber\\
&=&\epsilon_{ikl}\frac{\partial H_1}{\partial X_k}\frac{\partial
H_2}{\partial X_l}=\frac{\partial (H_1,H_2)}{\partial (X_k,X_l)}\nonumber\\
&=&\frac{\partial (K_1,K_2)}{\partial (X_k,X_l)},
\end{eqnarray}
where $(i,k,l)$ are cyclic indices again and (\ref{vfield}) and
(\ref{explicit}) are used in the first and second lines
respectively. \newline%
\null\hspace{17.5cm}$\Box$\newline%
Note that the new Hamilton functions
$K_1$ and $K_2$ may contain $t$ explicitly due to $H_1(x,t)$ and
$H_2(x,t)$ even if the transformation is time independent.
Before concluding this section, it is worth pointing out
that in his original paper, as an interesting approach, Nambu
considers the CT itself as equations of motion generated by the
closed two-form
\begin{eqnarray}\label{GH}
dH(x)\wedge dG(x)=X_1(x)dx_2\wedge dx_3+X_2(x)dx_3\wedge
dx_1+X_3(x)dx_1\wedge dx_2.
\end{eqnarray}
Though (\ref{GH}) is a powerful tool to find the CT or the GFs,
its closedness imposes the restriction
\frac{\partial X_1}{\partial x_1}+\frac{\partial X_2}{\partial
x_2}+\frac{\partial X_3}{\partial x_3}=0
\end{eqnarray}
on the transformation. The linear CT (\ref{LCT}) satisfies the
restriction (\ref{rest}) and its analysis via (\ref{GH}) can be
found in Ref.~\cite{ref:Nambu}.
\section{Most Known Canonical Transformations and Their Generating Functions}
(i) Scaling transformation:
\begin{eqnarray}\label{scaling}
X_1=ax_1 \;,\; X_2=bx_2\;,\; X_3=cx_3\;,\;\;abc=1.
\end{eqnarray}
Since the transformation is time independent, (\ref{GFs}) becomes
\begin{eqnarray}
dF_1\wedge dF_2=0.
\end{eqnarray}
There exist three possibilities for the GFs: $F_\alpha =$
constant, $F_2=F_2(F_1)$ and $F_1=f(x),\;F_2=$ constant. We prefer
the one compatible with the usual Hamilton formalism, i.e.,
$F_\alpha =$ constant, which also corresponds to the so-called
Mathieu transformation \cite{ref:Whittaker}. The special case
$a=b=c=1$ is the identity transformation, of course.
As a direct application consider the Euler equations of a rigid
body \cite{ref:Nambu}
\begin{eqnarray}\label{Euler}
\dot{x_1}=x_2\frac{x_3}{I_3}-x_3\frac{x_2}{I_2},\nonumber\\
\dot{x_2}=x_3\frac{x_1}{I_1}-x_1\frac{x_3}{I_3},\nonumber\\
\dot{x_3}=x_1\frac{x_2}{I_2}-x_2\frac{x_1}{I_1},
\end{eqnarray}
where $x_i$ stands for the components of the angular momentum and
$I_i$ is the moment of inertia corresponding to the related
principal axis. If we take $\gamma_i^2=-1/I_j+1/I_k$ with
cyclic indices $(i,j,k)$, (\ref{Euler}) leads to
\begin{eqnarray}
\dot{x_1}=\gamma_1^2\,x_2x_3\;,\;\dot{x_2}=\gamma_2^2x_3x_1\;,
\;\dot{x_3}=\gamma_3^2\,x_1x_2,\qquad
\gamma_1^2+\gamma_2^2+\gamma_3^2=0.
\end{eqnarray}
If $\gamma_1\gamma_2\gamma_3=1$ is also satisfied, then the
equations of motion are generated by the Hamilton functions
\begin{eqnarray}
H_1=\frac{1}{2}\left(\frac{x_1^2}{\gamma_1^2}-\frac{x_2^2}{\gamma_2^2}\right)\;,\;
H_2=\frac{1}{2}\left(\frac{x_1^2}{\gamma_1^2}-\frac{x_3^2}{\gamma_3^2}\right).
\end{eqnarray}
The scaling transformation
\begin{eqnarray}
X_1=x_1/\gamma_1 \;,\; X_2=x_2/\gamma_2\;,\; X_3=x_3/\gamma_3,
\end{eqnarray}
converts the Euler system (\ref{Euler}) into the Lagrange system
\cite{ref:Chakravarty}
\begin{eqnarray}
\dot{X_1}=X_2X_3\;,\;\dot{X_2}=X_3X_1\;, \;\dot{X_3}=X_1X_2
\end{eqnarray}
which is also called Nahm's system in the theory of static
$SU(2)$-monopoles, and is generated by the transformed Hamilton functions
\begin{eqnarray}
K_1=\frac{1}{2}\left(X_1^2-X_2^2\right)\;,\;
K_2=\frac{1}{2}\left(X_1^2-X_3^2\right).
\end{eqnarray}
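This is a direct instance of Theorem 3.1, and the substitution can be
checked symbolically (a SymPy sketch of our own, with $\gamma_3$
eliminated through $\gamma_1\gamma_2\gamma_3=1$):
\begin{verbatim}
import sympy as sp

x1, x2, x3, g1, g2 = sp.symbols('x1 x2 x3 gamma1 gamma2', positive=True)
g3 = 1/(g1*g2)                           # gamma1*gamma2*gamma3 = 1

H1 = (x1**2/g1**2 - x2**2/g2**2)/2
H2 = (x1**2/g1**2 - x3**2/g3**2)/2

X1, X2, X3 = sp.symbols('X1 X2 X3')
inv = {x1: g1*X1, x2: g2*X2, x3: g3*X3}  # inverse of the scaling CT

print(sp.simplify(H1.subs(inv)))   # -> (X1**2 - X2**2)/2 = K1
print(sp.simplify(H2.subs(inv)))   # -> (X1**2 - X3**2)/2 = K2
\end{verbatim}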
\\
(ii) Linear transformations:
Three-dimensional version of the linear CT is immediate:
\begin{eqnarray}\label{LCT}
X_1&=&a_1\,x_1+a_2\,x_2+a_3\,x_3,\nonumber\\
X_2&=&b_1\,x_1+b_2\,x_2+b_3\,x_3,\nonumber\\
X_3&=&c_1\,x_1+c_2\,x_2+c_3\,x_3,
\end{eqnarray}
satisfying $a_1\,\alpha_1+a_2\,\alpha_2+a_3\,\alpha_3=1$, where
\begin{eqnarray}
\alpha_1&=&b_2\,c_3-b_3\,c_2,\nonumber\\
\alpha_2&=&b_3\,c_1-b_1\,c_3,\nonumber\\
\alpha_3&=&b_1\,c_2-b_2\,c_1.
\end{eqnarray}
The solutions to (\ref{Pfaffian2}) appear as the GFs;
\begin{eqnarray}
F_1(x)&=&\alpha_2\,x_3-\alpha_3\,x_2,\nonumber\\
F_2(x)&=&-\frac{1}{2}\,a_1\,x_1^2+
\frac{\alpha_1}{2\,\alpha_2}\,a_2\,x_2^2+
\frac{\alpha_1}{2\,\alpha_3}\,a_3\,x_3^2-a_2\,x_1\,x_2-a_3\,x_1\,x_3.
\end{eqnarray}
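These GFs can be checked against (\ref{Pfaffian2}); a short SymPy sketch
of our own, which imposes the canonicity constraint
$a_1\alpha_1+a_2\alpha_2+a_3\alpha_3=1$ by eliminating $a_1$:
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
a1, a2, a3, b1, b2, b3, c1, c2, c3 = sp.symbols('a1:4 b1:4 c1:4')

al1, al2, al3 = b2*c3 - b3*c2, b3*c1 - b1*c3, b1*c2 - b2*c1
X1 = a1*x1 + a2*x2 + a3*x3
X2 = b1*x1 + b2*x2 + b3*x3
X3 = c1*x1 + c2*x2 + c3*x3

def jac2(f, g, u, v):
    return sp.diff(f, u)*sp.diff(g, v) - sp.diff(f, v)*sp.diff(g, u)

A = x1 - X1*jac2(X2, X3, x2, x3)
B = -X1*jac2(X2, X3, x3, x1)
C = -X1*jac2(X2, X3, x1, x2)

F1 = al2*x3 - al3*x2
F2 = (-a1*x1**2/2 + al1*a2*x2**2/(2*al2)
      + al1*a3*x3**2/(2*al3) - a2*x1*x2 - a3*x1*x3)

for F in (F1, F2):
    expr = A*sp.diff(F, x1) + B*sp.diff(F, x2) + C*sp.diff(F, x3)
    # eliminate a1 via a1*al1 + a2*al2 + a3*al3 = 1
    print(sp.simplify(expr.subs(a1, (1 - a2*al2 - a3*al3)/al1)))  # -> 0
\end{verbatim}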
As an application of the linear CTs we consider Takhtajan's
system \cite{ref:Takhtajan};
\begin{eqnarray}\label{Takhtajan}
\dot{x_1}=x_2-x_3\;,\;\dot{x_2}=x_3-x_1\;,\;\dot{x_3}=x_1-x_2.
\end{eqnarray}
The implicit solution of the system is the trajectory vector
$\textbf{r}(t)=x_1(t)\,\textbf{e}_1+x_2(t)\,\textbf{e}_2+x_3(t)\,\textbf{e}_3$
tracing out the curve which is the intersection of the sphere
$H_1=(x_1^2+x_2^2+x_3^2)/2$ and the plane $H_2=x_1+x_2+x_3$.
$\textbf{r}(t)$ makes a precession motion with a constant angular
velocity around the vector
$\textbf{N}=\textbf{e}_1+\textbf{e}_2+\textbf{e}_3$ normal to the
$H_2$ plane. The linear CT corresponding to the rotation
\begin{eqnarray}
&&X_1=\frac{1}{\sqrt{6}}\,x_1+\frac{1}{\sqrt{6}}\,x_2-\frac{2}{\sqrt{6}}\,x_3,\nonumber\\
&&X_2=-\frac{1}{\sqrt{2}}\,x_1+\frac{1}{\sqrt{2}}\,x_2,\nonumber\\
&&X_3=\frac{1}{\sqrt{3}}\,x_1+\frac{1}{\sqrt{3}}\,x_2+\frac{1}{\sqrt{3}}\,x_3
\end{eqnarray}
brings $\textbf{N}$ into coincidence with the $\textbf{e}_3$ axis. The new
system is then given by the well-known equations of motion of the
harmonic oscillator
\begin{eqnarray}
\dot{X_1}=\sqrt{3}\,X_2\;,\;\dot{X_2}=-\sqrt{3}\,X_1\;,\;\dot{X_3}=0
\end{eqnarray}
with the Hamilton functions $K_1=(X_1^2+X_2^2+X_3^2)/2$ and
$K_2=\sqrt{3}\,X_3$. Therefore the inverse of the transformation
directly provides an explicit solution to the original system.\\
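This can also be confirmed numerically; a small Python sketch of our own
integrates (\ref{Takhtajan}) directly and compares the result with the
solution obtained through the rotation (the initial data and the
integration time are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

R = np.array([[ 1/np.sqrt(6), 1/np.sqrt(6), -2/np.sqrt(6)],
              [-1/np.sqrt(2), 1/np.sqrt(2),  0.0         ],
              [ 1/np.sqrt(3), 1/np.sqrt(3),  1/np.sqrt(3)]])   # X = R x

x0, T, w = np.array([1.0, 0.2, -0.3]), 2.0, np.sqrt(3.0)

num = solve_ivp(lambda t, x: [x[1]-x[2], x[2]-x[0], x[0]-x[1]],
                (0.0, T), x0, rtol=1e-10, atol=1e-12).y[:, -1]

X0 = R @ x0                    # rotate, evolve the oscillator, rotate back
Xt = np.array([ X0[0]*np.cos(w*T) + X0[1]*np.sin(w*T),
               -X0[0]*np.sin(w*T) + X0[1]*np.cos(w*T),
                X0[2]])
print(num, R.T @ Xt)           # the two agree to solver accuracy
\end{verbatim}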
(iii) Gauge transformations:
We will define the gauge transformation in our three-dimensional
phase space as a model transformation which is similar to the case
in the usual Hamilton formalism:
\begin{eqnarray}\label{gauge1}
X_1=x_1\;,\;X_2=x_2+f_1(x_1)\;,\;X_3=x_3+f_2(x_1),
\end{eqnarray}
where $f_1(x_1)$ and $f_2(x_1)$ are arbitrary functions
determined by the GF. Since
\begin{eqnarray}
A(x)=0\;,\;B(x)=x_1\,\frac{\partial f_1}{\partial
x_1}\;,\;C(x)=x_1\,\frac{\partial f_2}{\partial x_1},
\end{eqnarray}
(\ref{Pfaffian2}) provides the GFs in the following form
\begin{eqnarray}
F_1(x)=x_2\,\frac{\partial f_2}{\partial x_1}-x_3\,\frac{\partial
f_1}{\partial x_1}\;,\;F_2(x)=-\frac{1}{2}\,x_1^2.
\end{eqnarray}
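Both the canonicity of (\ref{gauge1}) and the fact that these GFs solve
(\ref{Pfaffian2}) are easily confirmed symbolically; a brief SymPy
sketch of our own:
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f1, f2 = sp.Function('f1')(x1), sp.Function('f2')(x1)

X = sp.Matrix([x1, x2 + f1, x3 + f2])
print(X.jacobian([x1, x2, x3]).det())        # -> 1: canonical

def jac2(f, g, u, v):
    return sp.diff(f, u)*sp.diff(g, v) - sp.diff(f, v)*sp.diff(g, u)

A = x1 - X[0]*jac2(X[1], X[2], x2, x3)       # -> 0
B = -X[0]*jac2(X[1], X[2], x3, x1)           # -> x1*f1'
C = -X[0]*jac2(X[1], X[2], x1, x2)           # -> x1*f2'

F1 = x2*sp.diff(f2, x1) - x3*sp.diff(f1, x1)
F2 = -x1**2/2
for F in (F1, F2):
    print(sp.simplify(A*sp.diff(F, x1) + B*sp.diff(F, x2)
                      + C*sp.diff(F, x3)))   # -> 0
\end{verbatim}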
Following the same line of argument, other possible gauge
transformation types can be constructed easily. For instance, a
second kind of gauge transformation can be defined by
\begin{eqnarray}\label{gauge2}
X_1=x_1+g_1(x_2)\;,\;X_2=x_2\;,\;X_3=x_3+g_2(x_2)
\end{eqnarray}
and it is generated by $F_1=g_1(x_2)\,x_3$ and $F_2=x_2$. Another
type is
\begin{eqnarray}\label{gauge3}
X_1=x_1+h_1(x_3)\;,\;X_2=x_2+h_2(x_3)\;,\;X_3=x_3
\end{eqnarray}
and it is generated by $F_1=h_1(x_3)\,x_2$ and $F_2=-x_3$.
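As a quick consistency check, all three gauge types
(\ref{gauge1})--(\ref{gauge3}) have unit Jacobian for arbitrary
$f_i$, $g_i$, $h_i$, in accordance with the volume-preserving
character of the canonical maps of the three-dimensional phase
space. A minimal \texttt{sympy} sketch (illustrative only) reads:
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f1, f2 = sp.Function('f1')(x1), sp.Function('f2')(x1)
g1, g2 = sp.Function('g1')(x2), sp.Function('g2')(x2)
h1, h2 = sp.Function('h1')(x3), sp.Function('h2')(x3)

gauges = [(x1, x2 + f1, x3 + f2),    # first kind
          (x1 + g1, x2, x3 + g2),    # second kind
          (x1 + h1, x2 + h2, x3)]    # third kind

for X in gauges:
    J = sp.Matrix([[sp.diff(Xi, xj) for xj in (x1, x2, x3)]
                   for Xi in X])
    assert sp.simplify(J.det()) == 1  # volume is preserved
\end{verbatim}
The same check extends directly to compositions of such transformations.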
\\
(iv) Point transformations:
Our model transformation which is similar to the Hamilton
formalism again will be in the form
\begin{eqnarray}\label{point1}
X_1=f_1(x_1)\;,\;X_2=f_2(x_1)\,x_2\;,\;X_3=f_3(x_1)\,x_3,
\end{eqnarray}
where $f_1$, $f_2$ and $f_3$ are arbitrary functions satisfying
\begin{eqnarray}
\frac{\partial f_1}{\partial x_1}\,f_2\,f_3=1.
\end{eqnarray}
(\ref{Ninterior}) says that
\begin{eqnarray}
A(x)=x_1-f_1\,f_2\,f_3\;,\;B(x)=x_2\,f_1\,f_3\,\frac{\partial
f_2}{\partial x_1}\;,\;C(x)=x_3\,f_1\,f_2\,\frac{\partial
f_3}{\partial x_1},
\end{eqnarray}
and to find the GFs we again use (\ref{Pfaffian2}); hence
\begin{eqnarray}
F_1(x)=x_2\,\exp \left(-\int
\frac{B}{C\,x_2}dx_1\right)\;,\;F_2(x)=x_3\,\exp \left(-\int
\frac{A}{C\,x_3}dx_1\right),
\end{eqnarray}
where
\begin{eqnarray}
\exp \left[-\int
\frac{1}{C}\,\left(\frac{A}{x_3}+\frac{B}{x_2}\right)dx_1\right]=C.
\end{eqnarray}
Other possible types of the point transformation;
\begin{eqnarray}
X_1=g_1(x_2)\,x_1\;,\;X_2=g_2(x_2)\;,\;X_3=g_3(x_2)\,x_3,
\end{eqnarray}
and
\begin{eqnarray}
X_1=h_1(x_3)\,x_1\;,\;X_2=h_2(x_3)\,x_2\;,\;X_3=h_3(x_3)
\end{eqnarray}
surprisingly give constant GFs.
\\
(v) Rotation in $\mathbb{R}^3$:
This last example is chosen to be time dependent so as to make the
procedure of constructing a CT clearer. Consider again the system
(\ref{Takhtajan}) together with the CT
\begin{eqnarray}
X_1=x_1\;,\;X_2=x_2\,\cos t+x_3\,\sin t\;,\;X_3=-x_2\,\sin
t+x_3\,\cos t
\end{eqnarray}
corresponding to the rotation about the $x_1$ axis. A first
attempt to determine the GFs is to consider (\ref{Pfaffian2}).
Since $A(x)=0$, $B(x)=0$ and $C(x)=0$, that equation does not give
enough information on the pair $(F_1,F_2)$. Still, matters can be
put right by first considering (\ref{Ksmall}). For our case it
yields
\begin{eqnarray}
(x_2-x_3)\,\frac{\partial K_\alpha}{\partial
x_1}+(2x_3-x_1)\,\frac{\partial K_\alpha}{\partial x_2}+
(x_1-2x_2)\,\frac{\partial K_\alpha}{\partial x_3}=0
\end{eqnarray}
with the solution
\begin{eqnarray}
K_1=\frac{1}{2}(x_1^2+x_2^2+x_3^2)\;,\;K_2=2x_1+x_2+x_3.
\end{eqnarray}
Note that one gets, with the aid of the inverse transformation,
that
\begin{eqnarray}
K_1=\frac{1}{2}(X_1^2+X_2^2+X_3^2)\;,\;K_2=2X_1+(\cos t+\sin
t)X_2+(\cos t-\sin t)X_3
\end{eqnarray}
and this is also the solution to (\ref{Pfaffian1}). Now the right
hand side of (\ref{a}) is explicit and the solution
\begin{eqnarray}
F_1=\frac{x_1}{2}\left(
\frac{x_1^2}{3}+x_2^2+x_3^2\right)\;,\;F_2=t
\end{eqnarray}
also satisfies (\ref{Ninterior}) or (\ref{Pfaffian2}).
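The pair $(K_1,K_2)$ can be checked directly against the
first-order equation it was obtained from. A minimal \texttt{sympy}
sketch (illustrative only):
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
K1 = (x1**2 + x2**2 + x3**2)/2
K2 = 2*x1 + x2 + x3

def lhs(K):
    # left-hand side of the equation for K_alpha above
    return ((x2 - x3)*sp.diff(K, x1)
            + (2*x3 - x1)*sp.diff(K, x2)
            + (x1 - 2*x2)*sp.diff(K, x3))

assert sp.expand(lhs(K1)) == 0
assert sp.expand(lhs(K2)) == 0
\end{verbatim}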
\section{Types of Generating Functions}
A CT may admit various independent triplets on
$\mathbb{R}^3\times\mathbb{R}$ apart from $(x_1,x_2,x_3)$ or
$(X_1,X_2,X_3)$. Two main groups are possible: the first is
$(x_i,x_j,X_k)$, and the second is $(X_i,X_j,x_k)$, where
$i\neq j$; each group obviously contains nine triplets. In
order to show how one can determine the transformation types, two
different types are treated explicitly. The calculation
scheme is the same for all possible types, which are listed in Table
~\ref{table1}.
\begin{table}[ph]
\caption{Types of the canonical transformations in six kinds.
($r=1,...,6$ and $U=H_1dH_2\wedge dt-K_1dK_2\wedge dt$).}
{\begin{tabular}{@{}cc@{}}
Independent variables & $(dF_1\wedge dF_2)_r$\\
\hline %
$x_1,x_2,X_1$ & \\
$x_1,x_2,X_2$ & $df_1\wedge df_2+d(x_1x_3)\wedge dx_2 =
x_3dx_1\wedge
dx_2-X_1dX_2\wedge dX_3-U$\\
$x_1,x_2,X_3$ & \\
\hline %
$x_1,x_3,X_1$ & \\
$x_1,x_3,X_2$ & $df_1\wedge df_2-d(x_1x_2)\wedge dx_3 =
x_2dx_3\wedge
dx_1-X_1dX_2\wedge dX_3-U$\\
$x_1,x_3,X_3$ & \\
\hline %
$x_2,x_3,X_1$ & \\
$x_2,x_3,X_2$ & $df_1\wedge df_2= x_1dx_2\wedge
dx_3-X_1dX_2\wedge dX_3-U$\\
$x_2,x_3,X_3$ & \\
\hline %
$X_1,X_2,x_1$ & \\
$X_1,X_2,x_2$ & $df_1\wedge df_2-d(X_1X_3)\wedge dX_2 =
x_1dx_2\wedge
dx_3-X_3dX_1\wedge dX_2-U$\\
$X_1,X_2,x_3$ & \\
\hline %
$X_1,X_3,x_1$ & \\
$X_1,X_3,x_2$ & $df_1\wedge df_2+d(X_1X_2)\wedge dX_3 =
x_1dx_2\wedge
dx_3-X_2dX_3\wedge dX_1-U$\\
$X_1,X_3,x_3$ & \\
\hline %
$X_2,X_3,x_1$ & \\
$X_2,X_3,x_2$ & $df_1\wedge df_2= x_1dx_2\wedge
dx_3-X_1dX_2\wedge dX_3-U$\\
$X_2,X_3,x_3$ & \\
\hline %
\end{tabular}\label{table1}}
\end{table}
First, we consider the triplet $(x_1,x_2,X_3)$. If every term
in (\ref{GFs}) is written in terms of $(x_1,x_2,X_3)$, then
equating the coefficients of the corresponding components on both
sides of that equation amounts to
\begin{eqnarray}\label{typexyZ}
\frac{\partial (f_1,f_2)}{\partial
(x_1,x_2)}&=&-x_1\,\frac{\partial x_3}{\partial x_1},\nonumber\\
\frac{\partial (f_1,f_2)}{\partial
(X_3,x_1)}&=&X_1\,\frac{\partial X_2}{\partial x_1},\nonumber\\
\frac{\partial (f_1,f_2)}{\partial
(x_2,X_3)}&=&x_1\,\frac{\partial x_3}{\partial
X_3}-X_1\,\frac{\partial X_2}{\partial x_2},
\end{eqnarray}
and
\begin{eqnarray}\label{typexyZt}
\frac{\partial (f_1,f_2)}{\partial (x_1,t)}&=&-H_1\,\frac{\partial
H_2}{\partial x_1}+
K_1\,\frac{\partial K_2}{\partial x_1},\nonumber\\
\frac{\partial (f_1,f_2)}{\partial (x_2,t)}&=&-H_1\,\frac{\partial
H_2}{\partial x_2}+
K_1\,\frac{\partial K_2}{\partial x_2}+x_1\,\frac{\partial x_3}{\partial t},\nonumber\\
\frac{\partial (f_1,f_2)}{\partial (X_3,t)}&=&-H_1\,\frac{\partial
H_2}{\partial X_3}+ K_1\,\frac{\partial K_2}{\partial
X_3}+X_1\,\frac{\partial X_2}{\partial t},
\end{eqnarray}
where $f_\alpha =F_\alpha (x_1,x_2,x_3(x_1,x_2,X_3,t),t)$. Given
GFs $f_1$ and $f_2$, these equations do not always give complete
information on the transformation. But consider the rearrangement
of (\ref{typexyZ})
\begin{eqnarray}\label{hacivat}
\frac{\partial (f_1,f_2)}{\partial (x_1,x_2)}+\frac{\partial (x_1
x_3,x_2)}{\partial
(x_1,x_2)}&=&x_3,\nonumber\\
\frac{\partial (f_1,f_2)}{\partial (X_3,x_1)}+\frac{\partial (x_1
x_3,x_2)}{\partial
(X_3,x_1)}&=&X_1\,\frac{\partial X_2}{\partial x_1},\nonumber\\
\frac{\partial (f_1,f_2)}{\partial (x_2,X_3)}+\frac{\partial (x_1
x_3,x_2)}{\partial (x_2,X_3)}&=&-X_1\,\frac{\partial X_2}{\partial
x_2},
\end{eqnarray}
which is equivalent to
\begin{eqnarray}\label{f12}
df_1\wedge df_2+d(x_1x_3)\wedge dx_2=&&x_3 dx_1\wedge
dx_2-X_1dX_2\wedge dX_3\nonumber\\
&&-H_1\,dH_2\wedge dt+K_1dK_2\wedge dt.
\end{eqnarray}
For the functions $F_\alpha (x_1,x_2,X_3,t)$ which are the
solutions to the differential equation
\begin{eqnarray}\label{GFtype}
X_1\,\frac{\partial X_2}{\partial x_2}\frac{\partial
F_\alpha}{\partial x_1}-X_1\,\frac{\partial X_2}{\partial
x_1}\frac{\partial F_\alpha}{\partial x_2}-x_3\frac{\partial
F_\alpha}{\partial X_3}=0
\end{eqnarray}
obtained from (\ref{hacivat}), (\ref{f12}) leads to
\begin{eqnarray}\label{F12}
(dF_1\wedge dF_2)_1=x_3 dx_1\wedge dx_2-X_1dX_2\wedge
dX_3-H_1\,dH_2\wedge dt+K_1dK_2\wedge dt
\end{eqnarray}
corresponding to our transformation of the first kind. Note, as can
be seen from Table~\ref{table1}, that the first kind contains
three types. Now $x_3$ follows immediately from
\begin{eqnarray}
\frac{\partial (F_1,F_2)}{\partial (x_1,x_2)}=x_3,
\end{eqnarray}
and for $X_2$ one needs to solve
\begin{eqnarray}\label{X2}
\left[ \frac{\partial (F_1,F_2)}{\partial
(x_2,X_3)}\right]\frac{\partial X_2}{\partial x_1}-\left[
\frac{\partial (F_1,F_2)}{\partial (X_3,x_1)}
\right]\frac{\partial X_2}{\partial x_2}=0
\end{eqnarray}
which again originates from (\ref{hacivat}). Note that the
equivalence of (\ref{f12}) and (\ref{F12}) does not imply in
general $F_1=f_1+x_1x_3$ and $F_2=f_2+x_2$ unless $df_1\wedge
dx_2=df_2\wedge d(x_1x_3)$. On the other hand, for the
transformations $f_2=x_2$, the equivalence
\begin{eqnarray}\label{simplest}
dF_1\wedge dF_2=d(f_1+x_1x_3)\wedge dx_2
\end{eqnarray}
is always possible. To be more explicit about this remark,
consider the CT
\begin{eqnarray}
X_1=x_1+x_2 \;,\; X_2=x_2+x_3 \;,\; X_3=x_3.
\end{eqnarray}
If the general solutions of (\ref{Pfaffian2}) are taken as the
independent functions $F_1=x_2x_3$, $F_2=x_2$, then the
corresponding functions of type become $f_1=x_2X_3$, $f_2=x_2$.
Hence by the virtue of (\ref{simplest}) the GFs are
\begin{eqnarray}\label{gfs}
F_1=(x_1+x_2)X_3\;,\;F_2=x_2.
\end{eqnarray}
Conversely, (\ref{gfs}) generates, via (\ref{hacivat}) and
(\ref{X2}), the CT
\begin{eqnarray}
X_1=x_1+x_2 \;,\; X_2=x_2+h(x_3) \;,\; X_3=x_3.
\end{eqnarray}
Second, consider the triplet $(x_2,x_3,X_1)$. This time, for
$f_\alpha (x_2,x_3,X_1,t)$, (\ref{GFs}) says
\begin{eqnarray}
\frac{\partial (f_1,f_2)}{\partial
(x_2,x_3)}&=&x_1-X_1\,\frac{\partial (X_2 , X_3)}{\partial
(x_2,x_3)},\nonumber\\
\frac{\partial (f_1,f_2)}{\partial
(X_1,x_2)}&=&-X_1\,\frac{\partial (X_2 , X_3)}{\partial
(X_1,x_2)},\nonumber\\
\frac{\partial (f_1,f_2)}{\partial
(x_3,X_1)}&=&-X_1\,\frac{\partial (X_2 , X_3)}{\partial (x_3,X_1)}
\end{eqnarray}
similar to (\ref{Ninterior}) and
\begin{eqnarray}
\frac{\partial (f_1,f_2)}{\partial
(\xi,t)}=-H_1\,\frac{\partial H_2}{\partial
\xi}+K_1\,\frac{\partial K_2}{\partial \xi}- X_1\,\frac{\partial
(X_2,X_3)}{\partial (\xi,t)},\qquad \xi =x_2,x_3,X_1,
\end{eqnarray}
similar to (\ref{a}). This last system of equations says that
\begin{eqnarray}
df_1\wedge df_2&=&
dF_1(x_2,x_3,X_1,t)\wedge dF_2(x_2,x_3,X_1,t)\nonumber\\
&=&x_1 dx_2\wedge dx_3-X_1\,dX_2\wedge dX_3-H_1dH_2\wedge
dt+K_1dK_2\wedge dt,
\end{eqnarray}
and therefore
\begin{eqnarray}
f_\alpha (x_2,x_3,X_1,t)=F_\alpha (x_2,x_3,X_1,t).
\end{eqnarray}
Note that $F_\alpha (x_2,x_3,X_1,t)$ serves just like the GF of the
first type, $F_1(q,Q,t)$, of the usual Hamilton formalism. As can be
seen in Table~\ref{table1}, there are six GFs of this type. The
example given above also obeys this type of transformation.
As a further consequence, one should note that a CT may be of
several types at the same time. For example, the scaling
transformation given in Sec.~4 admits four types simultaneously:
\begin{eqnarray}
&&F_1=\frac{1}{c}\,x_1X_3,\;F_2=x_2,\nonumber\\
&&F_1=-\frac{1}{b}\,x_1X_2,\;F_2=x_3,\nonumber\\
&&F_1=-c\,x_3X_1,\;F_2=X_2,\nonumber\\
&&F_1=b\,x_2X_1,\;F_2=X_3.
\end{eqnarray}
\section{Infinitesimal Canonical Transformations}
In the two-dimensional phase space of the usual Hamilton
formalism, ICTs are given by the first-order variations
\begin{eqnarray}
Q &=& q+\epsilon\eta_1(q,p)=q+\epsilon \{ q,G\} = q+\epsilon
\frac{\partial G}{\partial
p},\nonumber\\
P &=&p+\epsilon\eta_2(q,p)=p+\epsilon \{ p,G\}= p-\epsilon
\frac{\partial G}{\partial q},
\end{eqnarray}
where $\epsilon$ is a continuous parameter and $G(q,p)$ is the GF
of the ICT. The canonicity condition implies
\begin{eqnarray}\label{conda}
\frac{\partial \eta_1}{\partial q}+\frac{\partial \eta_2}{\partial
p}=0
\end{eqnarray}
up to the first order of $\epsilon$. Following the same practice,
these results can be extended to three-space. An ICT in the
three-dimensional phase space is then proposed as
\begin{eqnarray}
X_i=x_i+\epsilon \,f_i(x)=x_i+\epsilon
\{x_i,G_1,G_2\}=x_i+\epsilon \frac{\partial (G_1,G_2)}{\partial
(x_j,x_k)},
\end{eqnarray}
where $G_1(x)$ and $G_2(x)$ generate directly the ICT via
\begin{eqnarray}
dG_1\wedge dG_2=f_1\,dx_2\wedge dx_3+f_2\,dx_3\wedge
dx_1+f_3\,dx_1\wedge dx_2.
\end{eqnarray}
One can check easily that, similar to (\ref{conda}), the
canonicity condition (\ref{condnotime}) implies
\begin{eqnarray}
\frac{\partial f_1(x)}{\partial x_1}+\frac{\partial
f_2(x)}{\partial x_2}+\frac{\partial f_3(x)}{\partial x_3}=0
\end{eqnarray}
up to the first order of $\epsilon$ again.
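In fact, the canonicity condition holds identically for arbitrary
smooth GFs: the components $f_i(x)=\partial (G_1,G_2)/\partial
(x_j,x_k)$ are those of the field $\nabla G_1\times\nabla G_2$,
whose divergence always vanishes. A minimal \texttt{sympy} sketch
(illustrative only) confirming this:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x1 x2 x3')
G1 = sp.Function('G1')(*x)
G2 = sp.Function('G2')(*x)

def jac(a, b):
    # partial (G1, G2) / partial (a, b)
    return sp.diff(G1, a)*sp.diff(G2, b) - sp.diff(G1, b)*sp.diff(G2, a)

f = [jac(x[1], x[2]), jac(x[2], x[0]), jac(x[0], x[1])]
div = sum(sp.diff(fi, xi) for fi, xi in zip(f, x))
assert sp.simplify(div) == 0  # canonicity holds identically
\end{verbatim}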
It is well known that an ICT is a transformation depending on a
parameter that moves the system infinitesimally along a trajectory
in phase space; a finite CT is therefore the accumulation of an
infinite succession of ICTs, given by the well-known expansion
\begin{eqnarray}
\phi =\varphi +\epsilon \{\varphi
,G\}+\frac{\epsilon^2}{2!}\{\{\varphi ,G\},G\}
+\frac{\epsilon^3}{3!}\{\{\{\varphi ,G\},G\},G\}+\cdots
\end{eqnarray}
where $\phi =Q,P$ and $\varphi =q,p$ in turn. With the same
arguments used for the two-dimensional phase space, the
transformation equation of a finite CT generated by the GFs $G_1$
and $G_2$ will correspond to
\begin{eqnarray}\label{exp}
X_i&=&x_i+\epsilon
\{x_i,G_1,G_2\}+\frac{\epsilon^2}{2!}\{\{x_i,G_1,G_2\},G_1,G_2\}\nonumber\\
&&+\frac{\epsilon^3}{3!}\{\{\{x_i,G_1,G_2\},G_1,G_2\},G_1,G_2\}+\cdots
.
\end{eqnarray}
Equivalently, if we define the vector field
\begin{eqnarray}
\hat{V}_G=f_1(x)\,\partial_{x_1}+f_2(x)\,\partial_{x_2}+f_3(x)\,\partial_{x_3},
\end{eqnarray}
it is easy to see that the same transformation is given by
\begin{eqnarray}\label{ICT}
e^{\epsilon\,\hat{V}_G} x_i=X_i.
\end{eqnarray}
We can give a specific example showing that this construction
actually works. To this end, we consider the CT
\begin{eqnarray}
X_1=x_1,\quad X_2=x_2+\epsilon x_3,\quad X_3=x_3-\epsilon x_2.
\end{eqnarray}
The transformation is generated by the GFs
\begin{eqnarray}
G_1(x)=\frac{1}{2}(x_2^2+x_3^2),\quad G_2(x)=x_1
\end{eqnarray}
or by the vector field
\begin{eqnarray}
\hat{V}_G=x_3\,\partial_{x_2}-x_2\,\partial_{x_3}
\end{eqnarray}
which is the generator of rotations about the $x_1$ axis. Therefore it
is immediate by means of (\ref{exp}) or (\ref{ICT}) that our
finite CT is
\begin{eqnarray}
X_1=x_1,\quad X_2=x_2\cos\epsilon+x_3\sin\epsilon,\quad
X_3=-x_2\sin\epsilon+x_3\cos\epsilon ,
\end{eqnarray}
where the parameter $\epsilon$ clearly stands for the rotation
angle.
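The exponentiation (\ref{ICT}) can also be verified order by order:
iterating $\hat{V}_G$ on $x_2$ and summing the Lie series reproduces
the Taylor expansion of $x_2\cos\epsilon+x_3\sin\epsilon$. A
\texttt{sympy} sketch (illustrative only) carrying this out to
eighth order:
\begin{verbatim}
import sympy as sp

x2, x3, eps = sp.symbols('x2 x3 epsilon')
comp = {x2: x3, x3: -x2}  # components of V_G

def V(F):
    return sum(v*sp.diff(F, k) for k, v in comp.items())

def lie_series(F, order=8):
    term, total = F, F
    for n in range(1, order + 1):
        term = V(term)
        total += eps**n/sp.factorial(n)*term
    return sp.expand(total)

target = sp.expand(
    x2*sp.cos(eps).series(eps, 0, 9).removeO()
    + x3*sp.sin(eps).series(eps, 0, 9).removeO())
assert sp.expand(lie_series(x2) - target) == 0
\end{verbatim}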
\section{Decomposition of the Transformations}
In classical mechanics, a surprising conjecture states that any
CT in a two-dimensional phase space can be decomposed into some
sequence of two principal CTs \cite{ref:Leyvraz}. These are the
linear and point CTs. Subsequent elaborations of this conjecture in
quantum mechanics led to a triplet as a wider class, including
gauge, point and interchanging transformations
\cite{ref:Deenen,ref:Anderson}. One can check that the same
triplet can also be used for the classical CTs. Without giving
many examples here, we give a particular one for the sake of
motivation: consider the CT
\begin{eqnarray}
q\rightarrow p^2-\frac{q^2}{4p^2},\qquad p\rightarrow
-\frac{q}{2p}
\end{eqnarray}
converting the system with the linear potential $H_0=p^2+q$ into the
free particle $H_1=p^2$. (In this section, we prefer using the map
representation of CTs so that the transformation steps can be
performed easily.) The decomposition of the transformation can
be achieved by the following five steps in turn;
\begin{eqnarray}
&&{\rm 1.\;interchange}\qquad q\rightarrow p,\quad p\rightarrow
-q,\nonumber\\
&&{\rm 2.\;gauge}\qquad q\rightarrow q,\quad p\rightarrow
p-q^2,\nonumber\\
&&{\rm 3.\;interchange}\qquad q\rightarrow -p,\quad p\rightarrow
q,\nonumber\\
&&{\rm 4.\;point}\qquad q\rightarrow q^2,\quad p\rightarrow
p/(2q),\nonumber\\
&&{\rm 5.\;interchange}\qquad q\rightarrow -p,\quad p\rightarrow q
\end{eqnarray}
corresponding symbolically to the sequence from right to left
\begin{eqnarray}
{\cal S}={\cal I}_3\,{\cal P}\,{\cal I}_2\,{\cal G}\,{\cal I}_1.
\end{eqnarray}
The statement remains a challenging problem and has not yet been
proven in a generic framework. But even if it is not true for every
CT, it applies to a huge number of CTs. Parallel to that
presentation, we will show that the discussion also applies to the
CTs in three-space.
First we will decompose the linear CT (\ref{LCT}). Before doing
this, note that all three types (\ref{gauge1}), (\ref{gauge2}),
(\ref{gauge3}) of gauge transformation can be generated by the vector fields
\begin{eqnarray}
\hat{V}_{G_1}&=&f_1(x_1)\partial_{x_2}+f_2(x_1)\partial_{x_3},\nonumber\\
\hat{V}_{G_2}&=&g_1(x_2)\partial_{x_1}+g_2(x_2)\partial_{x_3},\nonumber\\
\hat{V}_{G_3}&=&h_1(x_3)\partial_{x_1}+h_2(x_3)\partial_{x_2}
\end{eqnarray}
respectively when considering (\ref{ICT}). Now for the choices
\begin{eqnarray}
&&f_1(x_1)=\lambda_1\,x_1\;,\;f_2(x_1)=\lambda_2\,x_1,\nonumber\\
&&g_1(x_2)=\mu_1\,x_2\;,\;g_2(x_2)=\mu_2\,x_2,\nonumber\\
&&h_1(x_3)=\nu_1\,x_3\;,\;h_2(x_3)=\nu_2\,x_3,
\end{eqnarray}
the sequence
\begin{eqnarray}\label{decomplinear}
{\cal S}_L={\cal P}\,{\cal G}_3\,{\cal G}_2\,{\cal G}_1\,
\end{eqnarray}
where ${\cal P}$ stands for the point transformation generating
the scaling transformation (\ref{scaling}), generates in turn the
transformation chain
\begin{eqnarray}
&&{\rm 1.\;gauge}\qquad x_1\rightarrow x_1,\quad x_2\rightarrow
x_2+\lambda_1x_1,\quad x_3\rightarrow x_3+\lambda_2x_1,\nonumber\\
&&{\rm 2.\;gauge}\qquad x_1\rightarrow x_1+\mu_1x_2,\quad
x_2\rightarrow
x_2,\quad x_3\rightarrow x_3+\mu_2x_2,\nonumber\\
&&{\rm 3.\;gauge}\qquad x_1\rightarrow x_1+\nu_1x_3,\quad
x_2\rightarrow
x_2+\nu_2x_3,\quad x_3\rightarrow x_3,\nonumber\\
&&{\rm 4.\;point}\qquad x_1\rightarrow ax_1,\quad x_2\rightarrow
bx_2,\quad x_3\rightarrow cx_3.
\end{eqnarray}
Application of (\ref{decomplinear}) to the coordinates $(x_1, x_2,
x_3)$ gives thus the linear CT
\begin{eqnarray}
X_1&=&a\,x_1+b\,\mu_1\,x_2+c\,(\nu_1+\mu_1\,\nu_2)x_3,\nonumber\\
X_2&=&a\,\lambda_1\,x_1+b\,(1+\lambda_1\,\mu_1)\,x_2+
c\,[\lambda_1\,\nu_1+(1+\lambda_1\,\mu_1)\,\nu_2]\,x_3,\nonumber\\
X_3&=&a\,\lambda_2\,x_1+b\,(\mu_2+\mu_1\,\lambda_2)\,x_2+
c\,[1+\lambda_2\,\nu_1+(\mu_2+\mu_1\,\lambda_2)\,\nu_2]\,x_3.
\end{eqnarray}
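The matrix above can be reproduced mechanically. In the
\texttt{sympy} sketch below (illustrative only), each step of the
chain is written as a substitution rule, and the later steps are
substituted into the earlier ones; this is the composition
convention that reproduces the stated coefficients:
\begin{verbatim}
import sympy as sp

x1, x2, x3, a, b, c = sp.symbols('x1 x2 x3 a b c')
l1, l2, m1, m2, n1, n2 = sp.symbols('lambda1 lambda2 mu1 mu2 nu1 nu2')

G1 = {x1: x1, x2: x2 + l1*x1, x3: x3 + l2*x1}  # gauge 1
G2 = {x1: x1 + m1*x2, x2: x2, x3: x3 + m2*x2}  # gauge 2
G3 = {x1: x1 + n1*x3, x2: x2 + n2*x3, x3: x3}  # gauge 3
P  = {x1: a*x1, x2: b*x2, x3: c*x3}            # point (scaling)

X = sp.Matrix([G1[x1], G1[x2], G1[x3]])
for step in (G2, G3, P):
    X = X.subs(step, simultaneous=True)

M = sp.Matrix([[sp.expand(Xi).coeff(v) for v in (x1, x2, x3)]
               for Xi in X])
print(M)  # coefficients of the composed linear CT
\end{verbatim}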
The next example is related to the cylindrical coordinate
transformation
\begin{eqnarray}\label{c}
X_1=\frac{1}{2}(x_1^2+x_2^2)\;,\;X_2=\tan^{-1}\frac{x_2}{x_1}\;,\;X_3=x_3.
\end{eqnarray}
The sequence
\begin{eqnarray}
&&{\rm 1.\;interchange}\qquad x_1\rightarrow -x_2,\quad
x_2\rightarrow
x_1,\quad x_3\rightarrow x_3,\nonumber\\
&&{\rm 2.\;point}\qquad x_1\rightarrow \tan^{-1}x_1,\quad
x_2\rightarrow
(1+x_1^2)x_2,\quad x_3\rightarrow x_3,\nonumber\\
&&{\rm 3.\;interchange}\qquad x_1\rightarrow x_2,\quad
x_2\rightarrow
-x_1,\quad x_3\rightarrow x_3,\nonumber\\
&&{\rm 4.\;point}\qquad x_1\rightarrow x_1^2/2,\quad
x_2\rightarrow x_2/x_1,\quad x_3\rightarrow x_3,
\end{eqnarray}
which can be written in the compact form
\begin{eqnarray}
{\cal S}_C={\cal P}_2\,{\cal I}_2\,{\cal P}_1\,{\cal I}_1
\end{eqnarray}
is the decomposition of (\ref{c}).
\section*{Acknowledgements}
This work was supported by T\"{U}B\.{I}TAK (Scientific and
Technical Research Council of Turkey) under contract 107T370.
\section{Introduction}
Polygonal meshes are de facto the most common discrete representation of surfaces in computer graphics. For geometric modeling purposes, their expressiveness allows capturing adequate approximations of many object surfaces. At the same time, their modular design has sparked a plethora of interactive editing tools. That, in turn, makes them appealing to 3D artists for various creative tasks, such as modeling, sculpting, and rigging.
The flexible nature of meshes allows artists to conceive shapes of varying densities by diligently employing local operations. Detailed mesh areas, and in particular curved parts and features relevant for articulation, are usually carefully crafted with higher polygonal count and certain connectivity structures. For example, a mesh of a humanoid may contain both a detailed, dense polygon fan of an eye, as well as a low-poly, flat torso.
Much of this delicate design involves meticulous repetitive work, oftentimes when jointly drafting multiple mesh models. Experts may resort to the usage of templates, but this is limited to certain use cases. A scanned point cloud, for instance, is not generated from such a template and therefore cannot directly inherit a desired mesh topology.
To alleviate the hard labor of artists, a wealth of research efforts have been focusing on \emph{meshing} and \emph{remeshing} techniques, which, given a target{} polygonal shape, generate an alternative, coherent and enhanced mesh topology. This is usually achieved by optimization methods with various objectives that aim to optimize the polygon distribution while avoiding degenerate faces~\cite{campen2017partitioning}.
While these methods are useful, they are inadequate at times, as they neither allow users full control over the resulting connectivity, nor directly support polygon soups and point cloud targets. As a consequence, they are missing the opportunity to re-use sophisticated meshes carefully crafted by experts.
In this paper, we introduce an alternative approach, namely, a method for transferring an existing, high quality reference mesh structure to a comparable target{} shape. Unlike previous methods that optimize both the connectivity and the geometry of the target shape mesh, our method reuses the source mesh characteristics while deforming it to best fit the geometry of the target. Our algorithm relies on a neural optimization process at its core: we learn the weights of a lightweight network to deform a given source mesh to the target shape. \ah{In Figure~\ref{fig:teaser}, we show a mesh transfer from a source
surface (left) to three different target shapes.}
We follow contemporary literature, and overcome the shortcomings of simple neural nets by employing varying positional encodings. More specifically, we gradually map the source mesh vertex positions through encoding functionals of increasing frequencies~\cite{hertz2021sape}, from low to high, before feeding them into the network. This progressive scheme introduces an \textit{inductive bias} of slowly changing global deformations. That allows us to maintain a stable optimization through the earlier, critical steps of the optimization. Towards the end of the optimization, it also allows us to fit delicate details, where higher frequencies pertaining to fine local features are mapped.
We propose Mesh Draping{}, a parameterization-free, neural approach for mesh transfer between shapes. Mesh Draping{} is an interactive tool, where the user merely guides the displacement procedure. That is, the source mesh is ``draped'' over the target geometry. Our optimization technique yields high quality meshes, faithful to both the source design and the geometry of the target shape. Most importantly, it is not limited by the target shape representation, rendering our approach suitable for transferring mesh structures to an assortment of shape formats, such as point clouds, polygon soups, and nonmanifold meshes. We demonstrate results of transferring triangle and quad meshes on a variety of examples, ranging from 3D models generated by artists to synthetic objects comparable with other methods.
\section{Related Work}
\subsection{Remeshing}
The literature on \emph{meshing} and \emph{remeshing} is concerned with methods that directly generate new mesh structures of the desired shapes without assuming any prescribed connectivity. Oftentimes, a generated mesh of good quality ideally comprises well-shaped triangles or quadrilaterals that are aligned to principal directions.
Different remeshing methods allow varying degrees of control by the user.
Automated or semi-automated techniques may utilize high level inputs, such as density control and crease selection \cite{alliez2002interactive,lai2008incremenral,fang2018quadrangulation,jakob2015instant}.
Other methods allow further interactions with the user. \cite{bommes2009mixed} let users specify orientation and alignment constraints. \cite{campen2014dual} introduce dual strip weaving, a framework where the user can specify the main layout of the quad grid. \cite{marcias2015data} learn quadrangulation patterns, applying them by guided user sketches of the edge flows. \cite{ebke2016interactively} suggest a rapid feedback interface that enables more precise control by specifying constraints, such as the placement of singularities and their connections, as well as smooth surface regions, among others. The above methods generate high quality meshes, however, they do not offer users the option of employing existing, custom mesh structures.
Another class of remeshing methods deals with compatible meshing. Such methods remesh multiple shapes to generate common connectivity among them \cite{yang2018volume,yang2020error}. These methods generally aim to produce meshes of regular connectivity and minimal polygon count that fit both shapes to facilitate applications like morphing or attribute mapping; however, they discard the original mesh connectivity, in contrast to the goal of our work.
For further reading about triangle and quadrilateral remeshing algorithms, we refer the curious reader to the comprehensive surveys by \cite{alliez2008recent} and \cite{campen2017partitioning}.
\subsection{Surface Mapping}
Our work shares similarities with methods that map attributes between surfaces.
In the following, we provide a brief summary of relevant surface mapping techniques; a more extensive discussion is available in the survey by \cite{tam2013registration}.
Broadly speaking, most works can be characterized into extrinsic approaches, which leverage surface deformation embedded within a 3D space, and intrinsic approaches, which achieve a complete bijective mapping between the two surfaces via parameterization to some intermediate domain.
In a purely extrinsic approach, the deformation process is preceded by global alignment defined by corresponding points on the source and target shapes \cite{zhang2008deformation, li2008global}. Local transformations calculated per element depend on the global orientation of both shapes.
Early approaches, such as the renowned iterative closest point method (ICP) \cite{chen1992icp,besl1992icp}, assume the scenario of rigid registration, that is, the transformation between the source and target shapes maintains pairwise point distances.
Follow up works remove this restriction and perform non-rigid registration \cite{sharf2006SnapPasteAI}, possibly in a setting of partially overlapping surfaces with outliers \cite{li2008global,huang2009nonrigid,bouaziz2013sparse}.
By contrast, in the intrinsic line of works, mapping is achieved by means of parameterization to an intermediate domain. The actual mapping is achieved using a composition of mappings from source shape to the intermediate domain and an inverse mapping from the intermediate domain to the target shape. Representative examples include
\cite{kraevoy2004cross, kim2011blended, aigerman2015seamless, aigerman2015orbifold, aigerman2016hyperbolic, baden2018mobius, schmidt2019distortion}. Many of these works aim for either continuous or bijective mappings, or both.
To handle non-isometric cases, \cite{mandad2017variance} suggest using an optimal transport plan without requiring an intermediate domain, geared towards as-conformal-as-possible mapping. A recent work by \cite{schmidt2020inter} presents a novel method for continuous bijective mapping (homeomorphism) of attributes between surfaces of the same genus. Their method is able to obtain low intrinsic distortion, as well as generalize to arbitrary genus. \cite{deng2020better} present a neural approach to reconstruct a target shape via an atlas of stitched patch mappings. Unlike our method, these methods rely on surface parameterization and cannot be easily extended to domains beyond manifold meshes, such as point clouds.
\cite{ezuz2019elastic} propose a hybrid extrinsic approach that builds upon an intrinsic initialization. Their design combines an optimization scheme of elastic thin-shell deformation of the source mesh with projection over the target. They can handle non-isometric shapes, but struggle on highly complex meshes.
The problem of non-isometric mapping has also been studied from a spectral point of view. \cite{ovsjanikov2012functional,ovsjanikov2017computing} propose using functional maps to transfer functions between two meshes using eigendecomposition. Newer neural variants of this method also exist \cite{litany2017deepfunctional, ginzburg2020cyclicFM, donati2020deepgeometricfunctional}. These works create vertex-to-an-area mappings, rather than vertex-to-vertex mappings, which renders them less suitable for the purpose of mesh transfer.
The work of \cite{tierny2011inspired} is closest to ours in spirit, as it also allows for direct quad mesh transfer between source and target meshes. In their work, they construct a corpus of source meshes, and use cross-parameterization with as-rigid-as-possible deformations to automatically generate a mapping between the shapes. Their method assumes a parameterization of the target shape is available, as well as the existence of a homeomorphism. Their method requires a preprocessing phase for generating the aforementioned corpus, where boundary conditions are provided and segmentation masks are marked.
Their method then requires stitching the segments back together in a postprocess.
A common application for surface mapping methods is texture mapping, where the emphasis is on minimizing global distortion of the mapped textures. In contrast, our work is parameterization-free, global, and differs by focusing on correlative face transfer between similar areas of the two shapes, denoted by users, as well as maintaining the original mesh integrity. While mesh transfer can be implemented via a surface mapping method, we show in Section \ref{section:experiments} that such methods produce inferior results in the extrinsic case,
and cannot be used
when a manifold mesh is not available and bijective parameterization cannot be directly achieved.
\subsection{Shape Deformation}
Applying unconstrained deformations to mesh vertices introduces a risk of geometric artifacts, such as intersecting faces.
Our work shares common ground with shape deformation methods, and similarly to them, we cope with the challenge of maintaining the mesh coherency under geometric warping.
To that end, many of the shape deformation methods are concerned with the type of regularization imposed on the mesh transformations, or proper design of interactive editing tools offered to the user.
We highlight, however, that the end goal of shape deformation works is somewhat different: they aim to preserve a single source shape under some set of user constraints. For an elaborate review of non-neural methods, we refer readers to \cite{DeformationTutorial:2009}, \cite{jacobson2013algorithmsai} and the references therein.
To tie up the discussion, we mention works most relevant to our problem domain.
\cite{TemplateMeshFitting:2010} suggest deformation by template based fitting of 3D meshes, using a coarse to fine pipeline.
\cite{jacobson2011bounded} present a unified interactive approach combining control handles, skeletons and cages to deform a given shape. Their method uses weighted linear blending for practical purposes of mesh editing and rigging.
More recently, \cite{yifan2020neural} demonstrated a learned variant of cage based deformation. Given a source and a target shape, their algorithm warps the source mesh to match the target. However, this work focuses on preserving local details within the source shape, and does not guarantee that the result intimately coincides with the target shape.
Another contemporary work by \cite{wang20193dn} performs neural warping of source to target shapes by concatenating their global representations and predicting per-vertex offsets. Their method relies on a compound loss that assumes symmetry, and is limited to the domain of the training set.
The ShapeFlow method \cite{jiang2020shapeflow} is a scheme for learning a deformation space. Given a target shape, the approach employs a nearest neighbor search for a pre-learned source candidate to guide the warp process.
Our method, in contrast, does not assume any learned shape representation, and can be applied directly to novel shapes from unseen domains.
\section{Method Overview}
\label{sec:method}
Our framework begins with a preliminary step, where the user specifies a small number of correspondence points on the source{} and target{} shapes. After that, the principal part of the algorithm, the optimization, takes place automatically, during which we allow the source{} mesh vertices to shift their positions to fit the target{} shape. We optimize an objective function that expresses the Euclidean distance between the source{} mesh and the target{} geometry, regulated by the marked user correspondence points. The objective function encapsulates complementary terms that concurrently fit the target{} shape and respect the structure of the source{} mesh. In a nutshell, the process iteratively projects the source{} mesh onto the target{} surface, and shifts the projected vertices to preserve face angles and local area entropy.
During the optimization phase, our method learns the parameters of a deep neural network that performs the pairwise mapping of source{} mesh vertex positions to offsets that fit their target{} shape locations. To facilitate the learning process, we introduce a progressive positional encoding layer \cite{hertz2021sape} at the head of the network. Simply put, a progressive encoding layer maps the input vertex positions to a higher dimensional space by feeding them through periodic functions with gradually increasing frequencies.
\ah{During optimization, it progressively reveals higher frequencies of the source{} vertices' positional embedding as an input to the mapping network.} We affirm the claims of \cite{tancik2020fourfeat, hertz2021sape} and demonstrate how the optimization benefits both from the stability introduced by the spectral bias of the network \cite{rahaman2019spectral} and from its non-biased solutions pertaining to high frequency mappings \cite{sitzmann2020implicit, mildenhall2020nerf}.
In Section~\ref{sec:method_ui}, we describe the preliminary setup of our method.
In Section~\ref{sec:method_loss}, we describe in detail the optimization terms of our \textit{mesh transfer} solution. Finally, in Section~\ref{sec:method_ppe}, we lay out the progressive configuration which allows Mesh Draping{} to achieve stable optimization and high quality results.
\subsection{Correspondence Setup}
\label{sec:method_ui}
A key aspect of our method is that it provides the user means to guide the mesh transfer.
First, the user marks a small set of correspondence points $\left\{v_i, u_i\right\}_{i=1}^k$ between the source{} mesh $\mathcal{M}$ and the target{} shape $\mathcal{T}$.
The correspondence points enable the estimation of an initial global affine transformation from $\mathcal{M}$ to $\mathcal{T}$, followed by a biharmonic deformation \cite{jacobson2011bounded} that utilizes the corresponding points on $\mathcal{T}$ as the boundary conditions.
In addition, the user can specify the correspondences as \textit{rigid}, for example in Figure~\ref{fig:bodies}, the movement of the body parts is specified by rigid points. In those cases, we apply another \textit{as-rigid-as-possible} deformation \cite{sorkine2007rigid} using the rigid points as the boundary conditions. See Section~\ref{sec:implementation} for elaborate implementation details.
The optimization phase starts when the user is satisfied with the initial deformation.
The objective of the optimization is to bring the surface of $\mathcal{M}$ close to $\mathcal{T}$ while maintaining the structural properties of the original mesh, as described in Section~\ref{sec:method_loss}.
\textbf{Interactive Mode.} Mesh Draping{} also includes an interactive mode. Between the optimization epochs, the user may pause, and make additional local modifications by adding or adjusting the correspondence points.
\subsection{Neural Optimization}
\label{sec:method_loss}
\ah{Instead of formulating an \textit{explicit} optimization term between the source{} mesh and the target{}, we use a neural network parameterization. Specifically, we use fully connected neural networks (FC) with parameters $\theta$. Doing so has been shown to improve the optimization in various works \cite{williams2019deep,Hanocka2020point} as the network serves as an ``internal prior'' in the optimization. }
The mesh deformation is obtained through a direct neural optimization of the parameters $\theta$ of a mapping function $f\left(\mathcal{M} \,|\, \theta \right) = \widehat{\mathcal{M}}$ that receives a source{} mesh, $\mathcal{M}$, and outputs an optimized mesh $\widehat{\mathcal{M}}$.
The loss term follows directly from the definition of our problem:
\begin{equation}
\mathcal{L}\left(\widehat{\mathcal{M}} \,|\, \mathcal{T}, \ \mathcal{M}\right) = \mathcal{L}_{d}\left(\widehat{\mathcal{M}} \,|\, \mathcal{T}\right) + \lambda \mathcal{L}_{s}\left(\widehat{\mathcal{M}} \,|\, \mathcal{M}\right).
\end{equation}
\ah{On the one hand, we would like our output mesh $\widehat{\mathcal{M}}$ to fit a given target shape $\mathcal{T}$ as closely as possible, }
i.e., minimize the distance $\mathcal{L}_{d}\left(\widehat{\mathcal{M}} \,|\, \mathcal{T}\right)$ between $\widehat{\mathcal{M}}$ and $\mathcal{T}$. On the other hand, we wish to preserve the structural quality of the source{} mesh, which is measured by $\mathcal{L}_{s}\left(\widehat{\mathcal{M}} \,|\, \mathcal{M}\right)$.
The distance loss is given by
\begin{equation}
\mathcal{L}_{d}\left(\widehat{\mathcal{M}} \,|\, \mathcal{T}\right) = Ch\left(\widehat{\mathcal{M}},\ \mathcal{T}\right) + \sum_{i=1}^{k}\|\widehat{v}_{i} - u_{i}\|_{2}^{2} ,
\label{eq:distance_loss}
\end{equation}
where $Ch(\widehat{\mathcal{M}}, \mathcal{T})$ is a symmetric Chamfer distance between uniformly sampled points on the optimized mesh and the target{} shape. In addition, we keep the $k$ correspondence points specified by the user close by minimizing the squared distance between them, where $\widehat{v}_{i}$ and $u_{i}$ are pairs of corresponding points on $\widehat{\mathcal{M}}{}$ and $\mathcal{T}$, respectively.
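For concreteness, a minimal PyTorch-style sketch of this distance
term is given below (an illustration under simplifying assumptions,
not the exact implementation; the point samples and the $k$
correspondence pairs are assumed to be given as tensors):
\begin{verbatim}
import torch

def chamfer(p, q):
    # symmetric Chamfer distance between point sets p (N,3), q (M,3)
    d = torch.cdist(p, q)
    return (d.min(dim=1).values.pow(2).mean()
            + d.min(dim=0).values.pow(2).mean())

def distance_loss(samples_opt, samples_tgt, v_hat, u):
    # Chamfer term plus squared distances of the marked pairs
    return chamfer(samples_opt, samples_tgt) + (v_hat - u).pow(2).sum()
\end{verbatim}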
The structural loss is given by
\begin{equation}
\mathcal{L}_{s}\left(\widehat{\mathcal{M}} \,|\, \mathcal{M}\right) = \dfrac{1}{N} \sum_{i=1}^{N} \left( \sum_{j \in R(i)} \| \hat{\alpha_j} - \alpha_{j} \|_{2}^{2} + D_{KL}\left(P_i || Q_i\right)\right),
\label{eq:structural_loss}
\end{equation}
where the summation is over the $N$ vertices of $\mathcal{M}$.
The first term represents the distortion of the angles $\hat{\alpha}_j$ on $R(i)$, the 1-ring of $\hat{v}_i$, with respect to the original angles $\alpha_j$ on $\mathcal{M}$.
The second term measures the local area Kullback–Leibler divergence where $P_i$ are the fixed areas of the faces around vertex $i$ in $\mathcal{M}$, normalized to have sum $1$. $Q_i$ is the equivalent (non fixed) local area distribution in $\widehat{\mathcal{M}}$. \ah{Figure~\ref{fig:ablation_loss} shows the effect of each structural loss term on the final optimization result.}
To prevent numerical issues caused by \textit{skinny} faces, we utilize the quality measure for a triangular face from \cite{liu2020neural}:
\begin{equation}
Q_f = \dfrac{4 \sqrt{3} A_{f}}{\|e_1\|^2 + \|e_2\|^2 + \|e_3\|^2},
\end{equation}
where $A_f$ is the area of the face and $\|e_i\|$ is the length of its $i$-th edge. When $Q_f \rightarrow 0$, the face approaches a degenerate zero-area configuration. To prevent such cases we penalize by $1-Q_f$ all the faces in $\widehat{\mathcal{M}}{}$ with quality $Q_f < 0.1$.
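A compact NumPy sketch of this quality measure and the associated
penalty (illustrative only; the helper names are ours) might read:
\begin{verbatim}
import numpy as np

def face_quality(verts, faces):
    # Q_f = 4*sqrt(3)*A_f / (|e1|^2 + |e2|^2 + |e3|^2) per triangle
    v = verts[faces]                      # shape (F, 3, 3)
    e1 = v[:, 1] - v[:, 0]
    e2 = v[:, 2] - v[:, 1]
    e3 = v[:, 0] - v[:, 2]
    area = 0.5*np.linalg.norm(np.cross(e1, -e3), axis=1)
    denom = (e1**2).sum(1) + (e2**2).sum(1) + (e3**2).sum(1)
    return 4*np.sqrt(3)*area/denom

def skinny_penalty(verts, faces, thresh=0.1):
    q = face_quality(verts, faces)
    return (1 - q)[q < thresh].sum()      # penalize near-degenerate faces
\end{verbatim}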
\subsection{Progressive Positional Encoding}
\label{sec:method_ppe}
It has been shown that the learning bias of deep ReLU networks tends to be towards low frequency functions, i.e., they have a spectral bias \cite{rahaman2019spectral}. The spectral bias of the network has the positive trait of preventing large deformations during the optimization process. These may be caused by an unstable Chamfer term, which leads to an erroneous local minimum.
Unfortunately, the spectral bias also comes at a price: as the optimization proceeds, it prevents the local \textit{delicate} deformations that bring the surface of the source{} close to the target{} shape. That implies that a common FC network{} with ReLU activations has a hard time mapping a continuous domain to a high frequency image.
To mitigate the deficiencies of the FC network{} with ReLU activations, previous works by \cite{tancik2020fourfeat, mildenhall2020nerf, sitzmann2020implicit} suggested frequency-based encodings. Specifically, in a mesh transfer scenario, source{} mesh vertex positions are first mapped via positional encodings (PE) before feeding them as input to the FC network. However, in the case of mesh transfer, static frequency encoding schemes introduce a new problem to the architecture. Encoding functionals of high frequencies may overfit too quickly, causing the optimization to converge to suboptimal solutions. See, for example, Figure~\ref{fig:model_compare} for the distortion caused by the positional encoding neural optimization.
Instead, Mesh Draping{} leverages the progressive positional encoding layer of \cite{hertz2021sape}.
Progressive positional encoding
operates under the assumption that earlier iterations of the optimization benefit from the spectral bias, which is achieved by low frequency encodings. As the optimization converges, higher frequency encodings are used to fit the delicate features of the shape.
\op{In the context of mesh transfer, this formulation enforces an inductive bias which ensures the optimization is both stable and accurate. In our experiments we adopt a lightweight version of \cite{hertz2021sape}: we leverage progressive positional encodings, and trace their progression in a global manner, i.e., all vertices are exposed to higher frequencies at the same time step.}
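As an illustration of this global schedule, consider the simplified
sketch below (the actual per-band scheduling of \cite{hertz2021sape}
is more elaborate; this sketch is ours). Each frequency band is
softly revealed as the normalized optimization time $t/T$ grows:
\begin{verbatim}
import math
import torch

def progressive_encoding(x, n_freqs, t, T):
    # Fourier features whose higher bands are revealed as t/T grows;
    # all vertices share the same (global) schedule
    feats = []
    alpha = (t/T)*n_freqs
    for j in range(n_freqs):
        w = float(min(max(alpha - j, 0.0), 1.0))  # band weight in [0,1]
        feats.append(w*torch.sin((2.0**j)*math.pi*x))
        feats.append(w*torch.cos((2.0**j)*math.pi*x))
    return torch.cat(feats, dim=-1)
\end{verbatim}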
\section{Evaluation}
\label{section:experiments}
Unless specified otherwise, we use the same optimization configuration detailed in Section~\ref{sec:implementation} for all described experiments.
\subsection{Implementation Details}
\label{sec:implementation}
\textbf{Correspondence Setup.}
For most results shown in this paper, our algorithm requires users to mark 8-15 pairs of correspondence points on average. This number may vary with the complexity of the models. As a point of reference, up to 10 correspondence points were used to generate all face figures within this paper, and up to 20 for the human bodies in Figure~\ref{fig:bodies}.
For processing the influence area of correspondence points, we use the implementation of \cite{jacobson2011bounded} and \cite{sorkine2007rigid} from \emph{libigl} \cite{libigl}.
\textbf{Network Architecture and Optimization.}
Our architecture consists of an FC network{} with $4$ hidden layers, of size $256$, where the first layer is a progressive positional encoding \cite{hertz2021sape} layer divided into $6$ blocks.
Our optimization runs for $1500$ iterations, alternating between backpropagations on $\mathcal{L}_{d}\left(\widehat{\mathcal{M}} | \mathcal{T}\right)$ and $\lambda\mathcal{L}_{s}\left(\widehat{\mathcal{M}} | \mathcal{M}\right)$, where the weight $\lambda$ is set to $1$ for the first $1000$ iterations and later lowered to $0.2$ for the rest of the optimization.
\textbf{Latency.} On a machine equipped with an Nvidia GTX 1080, the optimization takes up to 45 seconds to converge for $1500$ iterations.
\subsection{Evaluation Metrics}
\label{sec:exp_quality}
Our quantitative and qualitative evaluations highlight the importance of jointly optimizing a source mesh distortion metric and the alignment with the target shape. We measure the distortion of $\widehat{\mathcal{M}}{}$ with respect to the source{} mesh $\mathcal{M}{}$ by the discrete Dirichlet energy \cite{pinkall1993computing}. The alignment integrity of the optimized mesh $\widehat{\mathcal{M}}{}$ over the target{} shape $\mathcal{T}{}$ is evaluated using the Chamfer and Hausdorff distances. It should be noted that the distortion and alignment metrics are complementary to one another, i.e., optimizing one but not the other provides inferior results, as either the source structure is distorted or the result does not perfectly align with the target shape. Readers may confirm the last statement by cross-comparing the visualizations of Figure~\ref{fig:compare_mapping}, Figure~\ref{fig:compare_deform} and Figure~\ref{fig:compare_shrec} with the quantitative evaluation in Table~\ref{tab:transfer_metrics}.
The aforementioned metrics are standard and are commonly used on an individual basis. However, when measured concurrently for mesh transfer quality assessment, one has to account for their scale differences, their specific ranges, and how to effectively combine them into one comparable measurement. To that end, we propose a novel evaluation metric, $\mathbf{Q}_\text{transfer}${}, which combines source distortion and target alignment measurements into a single score.
We define $\mathbf{Q}_\text{transfer}${} using the following template function:
\vspace{-5pt}
\begin{equation}
\mathbf{Q}_{\tau}(\widehat{\mathcal{M}} \,|\, \mathcal{M},\ \mathcal{T}) = 1 - \exp\left(\frac{-\tau}{\| \mathcal{F}{}_{d}(\mathcal{M}, \widehat{\mathcal{M}}) + \mathcal{F}{}_{a}(\mathcal{T}, \widehat{\mathcal{M}}) \|_{2} }\right).
\end{equation}
Here, the notations follow Section~\ref{sec:method_loss}, where $\mathcal{M}$, $\mathcal{T}$ and $\widehat{\mathcal{M}}$ represent the source mesh, the target shape and the transferred mesh, respectively.
$\mathcal{F}{}_{d}$ and $\mathcal{F}{}_{a}$ are respectively the source distortion and target alignment measure functions, which are chosen in this paper as the Dirichlet energy and Hausdorff distance defined as
\begin{equation}
\mathcal{F}{}_{d}(\mathcal{M}, \widehat{\mathcal{M}}) = \dfrac{1}{2} \left(\sum_{f \in \mathcal{M}} | d\phi_{\mathcal{M}\widehat{\mathcal{M}}}(f) |_{2}^{2} \, a_{f} \right) - 1,
\end{equation}
\begin{equation}
\mathcal{F}{}_{a}(\mathcal{T}, \widehat{\mathcal{M}}) = w_a \max \Big\{ \adjustlimits \sup_{\vphantom{\widehat{\mathcal{M}}}x \in \mathcal{T}} \inf_{y \in \widehat{\mathcal{M}}} \mathrm{d}(x,y), \adjustlimits \sup_{x \in \widehat{\mathcal{M}}} \inf_{y \in \mathcal{T}} \mathrm{d}(x,y) \Big\} ,
\end{equation}
where for the discrete Dirichlet energy, we follow the definition of \cite{ezuz2019reversible} and assume $d\phi_{\mathcal{M} \widehat{\mathcal{M}}}(f) \in \mathbb{R}^{2 \times 2}$ is the linear transformation
that maps face $f \in \mathcal{M}$ to its image in $\widehat{\mathcal{M}}$, and $a_f$ is the face area. To map the Dirichlet energy to the range $[0,\infty)$, we also subtract a single unit. To align the magnitude of the Hausdorff distance, we scale it by a constant $w_{a}=10^2$, which we set empirically.
The calibration hyper-parameter $\tau$ is used to tone down the scale of the metric to better distinguish how different methods compare to each other. Throughout this paper, we use $\mathbf{Q}_\text{transfer} = \mathbf{Q}_{\tau}$ with $\tau=5$.
In practice, prior to metrics computation, all 3D models are normalized to the unit cube to ensure scale invariance.
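A direct transcription of $\mathbf{Q}_\text{transfer}${} is straightforward. The NumPy sketch below (illustrative only) takes the raw Dirichlet energy and Hausdorff distance as inputs and applies the unit shift and the scaling $w_a$ described above:
\begin{verbatim}
import numpy as np

def q_transfer(dirichlet, hausdorff, tau=5.0, w_a=1e2):
    # F_d: Dirichlet energy shifted by one unit; F_a: scaled Hausdorff
    F_d = dirichlet - 1.0
    F_a = w_a*hausdorff
    total = abs(F_d + F_a)
    if total == 0.0:
        return 1.0  # perfect transfer: no distortion, exact alignment
    return 1.0 - np.exp(-tau/total)
\end{verbatim}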
To conclude, $\mathbf{Q}_\text{transfer}${} exhibits the following attributes:
\renewcommand{\theenumi}{\roman{enumi}}%
\begin{enumerate}
\item $\mathbf{Q}_\text{transfer}${}$\in[0, 1]$, where a higher score correlates with better quality.
\item A perfect score of 1.0 is obtainable when $\mathcal{F}{}_{d}=0$ (no distortion occurs) and $\mathcal{F}{}_{a}=0$ (the optimized shape perfectly aligns with the target).
\item When $\mathcal{F}{}_{d} \to \infty$ or $\mathcal{F}{}_{a} \to \infty$, then $\mathbf{Q}_\text{transfer}${}$=0$.
\item A lower $\tau$ yields a ``harder'' evaluation metric, which penalizes to a greater extent both misalignment and high distortion.
\end{enumerate}
To tie up the discussion of metrics, we acknowledge that one of the terms in our evaluation metrics, the Chamfer distance, is directly used as one of the optimization terms. We emphasize that this choice does not impact the integrity of the evaluation, as the optimization objective encompasses additional terms and our evaluation consists of other metrics as well.
\input{tables/table_transfer_metrics}
\input{figures/04_shrec}
\subsection{Comparisons}
\label{section:comprison}
We compare our method to other methods that we found most relevant to the task of mesh transfer. The quantitative results are summarized in Table~\ref{tab:transfer_metrics}.
\textbf{SHREC-BIM benchmark.}
We evaluate our method on the BIM benchmark \cite{kim2011blended}, which includes 200 pairs of meshes from the SHREC dataset \cite{giorgi2007shape}. Each pair is supplied with corresponding points (between 2 and 36 points).
We compare our method with two parameterization-based
methods: Hyperbolic Orbifold Tutte Embeddings \cite{aigerman2016hyperbolic} (HOT) and Reversible Harmonic Maps between Discrete Surfaces \cite{ezuz2019reversible} (RHM). Both methods receive as input the corresponding points and output a parameterization between the meshes that maps the vertices of one mesh to barycentric coordinates on the other mesh. As can be seen in Figure~\ref{fig:compare_shrec}, the parameterization methods may struggle to preserve the triangulation of the source mesh. \ah{For example, note the artifacts in the RHM output, such as flipped faces on the fish tail and the humanoid hand.}
\textbf{Additional mapping methods.}
We compare our method to two additional surface mapping methods on our custom test set. The first is Elastic Correspondence (\elc) \cite{ezuz2019elastic} using the initialization scheme of Aigerman et al. \cite{aigerman2016hyperbolic}, and the second is Inter-Surface Maps (\ism) \cite{schmidt2020inter}. Both methods are based on parameterization. Consequently, they are only applicable to pairs of meshes, and also assume the existence of some bijectivity between the surfaces.
The comparisons are shown in Figure~\ref{fig:compare_mapping}, where the input to the three methods is a pair of source{} and target{} meshes with their marked corresponding points. As is evident from careful observation of the corresponding segmented parts marked on the source{} and the optimized mesh, \elc{} causes semantic distortion, for example, on the ears of the top face or the cow mouth.
Both \ism{} and our method preserve the semantic correspondence between the meshes.
These observations are also reflected in the quantitative results. Due to some distortions in the mapping of \ism{}, our method achieves \textit{slightly better} results.
It is important to mention that both methods, \ism{} and \elc{}, produce a complete bijective map between the input pair of meshes. However, errors arise in places where the new discrete tessellation of the source{} does not properly cover the target{} when projecting the map onto the target{} mesh. We suspect that these kinds of errors appear mostly at delicate places, i.e., at the eyes of the topmost face and the cow mouth (Figure~\ref{fig:compare_mapping}). In comparison, our direct optimization avoids such types of distortion.
\textbf{Comparison to deformation methods.}
We also compare our method to two deformation methods. Both methods are based on a pre-training step over some specific dataset. These methods can be categorized by the priors they inject into the deformation allowed between the source{} and the target{} shapes. The first method, Neural Cages (\ncage) \cite{yifan2020neural}, allows only coarse deformations by displacing the control points of a \textit{cage} that wraps the source{} mesh. The second method, ShapeFlow (\sflow) \cite{jiang2020shapeflow}, enables a more flexible deformation by allowing displacements of all vertices of the source{} mesh. Their deformation is regularized through a loss function that includes a volume conservation term and a penalization term for non-isometric deformations.
The results are shown in Figure~\ref{fig:compare_deform}, where we trained and tested each method on selected classes from the ShapeNet dataset \cite{chang2015shapenet}.
We apply our method without providing correspondence points, \op{i.e., we set $k=0$ in Eq.~\ref{eq:distance_loss}. In other words, we assume only a global orientation} alignment between the source{} and the target{} meshes, as both other methods do.
Both qualitative and quantitative comparisons clearly highlight the difference between the methods. Our proposed progressive optimization better aligns the optimized mesh with the target{} shape, doing so with few distortions. By utilizing only global deformations, \ncage{} minimizes the distortion term, but the resulting mesh is unable to satisfactorily take on the form of the target{} shape. Finally, \sflow{} brings the source{} mesh closer to the form of the target{} shape, but the resulting meshes usually suffer from undesired distortions.
\textbf{Polygon soups and point clouds targets.}
Finally, we highlight the strengths of our progressive optimization by presenting visual examples of mesh transfer in non-trivial scenarios. In Figure~\ref{fig:faces}, we demonstrate a transfer of face meshes to point clouds, a task which surface mapping methods relying on bijective mappings cannot directly perform. The quality of our results is on par with the mesh-to-mesh case. This setting pertains to a real-life application, where experts may reuse predefined meshes on recently scanned objects.
\textbf{Varying density meshes.} In Figure~\ref{fig:bodies}, we illustrate the example of transferring meshes of varying density. For clarity, different areas of interest are segmented by color. Though we suggest a global approach, it is able to transfer both sparsely and densely detailed polygonal areas with minimal distortion, maintaining the original intention of the expert artist. This example alludes to the case where specific parts of the mesh are highly detailed for a particular purpose, such as animation.
\subsection{Limitations}
While Mesh Draping{} exhibits very good results in many cases, there remains room for improvement.
First and foremost, Mesh Draping{} is intended for pairs of shapes with similar parts. \ah{For example, mesh transfer will succeed between different four-legged animals (Figure~\ref{fig:teaser}), but will underperform when transferring meshes from humans to animals, or from a dinosaur to a giraffe, as shown in Figure~\ref{fig:limitation}.
Here, the shape of the giraffe's horns requires special meshing, which does not exist on the dino head model.}
This phenomenon is also demonstrated in Figure~\ref{fig:compare_deform}, row 4: the target truck shape contains a deep depression right above the right-front wheel, whereas the source shape is smooth. As a result, part of the resulting wheel mesh appears to be deformed.
In addition, in extreme cases of sharp edges (so-called ``spikes''), the method may struggle to achieve perfect alignment between the optimized mesh and the target shape. To maintain high quality results on very detailed meshes, users may be required to define additional correspondence points. Since we do not use a predefined training set, in such cases user intervention is necessary.
By design, our method does not alter local connectivity at all. However, where needed, one may perform local subdivision or remeshing as a postprocess.
Finally, the latency measurements in this paper are presented for an unoptimized implementation. Subsequent efforts may reduce the runtime cost (detailed in Section~\ref{sec:implementation}) by further optimizing the Mesh Draping{} logic.
\section{Conclusion and Future Work}
We presented a parameterization-free approach for transferring mesh structure to comparable shapes. The proposed method uses neural optimization with progressive frequency positional encodings, which contribute to both stable optimization and high quality fitting to fine details. We demonstrated the applicability of our method on a range of examples, including the re-use of triangular and quad meshes to target meshes and point clouds.
Future work may leverage an unpaired training set of shapes to obtain priors on common shapes for faster and more robust optimization.
Another direction for future work may improve the computation time of our optimization, or its flexibility by allowing users to guide mappings between different topologies, or jointly deform a shape from several sources.
\section{Introduction}
\label{sec:intro}
Establishing the connection between the properties of galaxies and the
underlying dark matter is crucial for both studies of galaxy evolution
and cosmology. Star formation histories of galaxies have been studied
by associating galaxies with their host dark matter halos, and this
connection provides fundamental constraints on galaxy formation models
\citep[e.g.,][]{Conroy09,Leauthaud10,Behroozi13}. Future galaxy
surveys such as Prime Focus Spectrograph (PFS) \citep{PFS}, the Dark
Energy Spectroscopic Instrument (DESI) \citep{DESI}, {\it Euclid}
\citep{Euclid} and {\it the Wide Field Infrared Survey Telescope
(WFIRST)} \citep{WFIRST} use both luminous red galaxies and emission
line galaxies to trace the large-scale structure at $z\simlt 2$. A
major uncertainty for the precision cosmology using galaxy surveys
comes from the challenge of relating galaxies and dark matter.
\section{Introduction}
The disk wind plays a key role in the process of removing the
excess angular momentum from accretion disks (Blandford and Payne
1982 (BP82)), and promotes the accretion of the disk matter
onto the star. A physical connection between accretion and
outflow processes is confirmed by the decline of outflow
activity with stellar age, accompanied by a decrease in
accretion rates (Calvet et al. 2000) and in the frequency of accretion
disks (see Andr\'{e} et al. (2000); Mundy et al. (2000)).
Most modern models (see, e.g., the review by Pudritz et al.
(2007)) assume that the disk wind does not contain dust.
Nevertheless, already in 1993, Safier (1993a), in his generalized
version of the BP82 self-similar wind model, argued that the weakly
ionized wind arising from the accretion disk surface lifts
dust grains through their collisions with neutral atoms, i.e.
the disk wind is dusty. According to Safier, the maximum size of
the grains that can be lifted by the wind is about 1 mm.
The presence of dust in the disk wind should affect the
circumstellar (CS) extinction, the spectral energy
distribution and the polarization properties of young stellar
objects. Based on the results of Safier (1993a), in our previous
papers (Grinin and Tambovtseva 2002; Grinin et al. 2004;
Tambovtseva et al. 2006) we considered the photometric effects
produced by dusty disk winds in young binaries. In the
present paper we consider, in more detail than in the cited papers,
the interaction of the dust component of the wind with the
radiation of the star, as well as with the hot gas, for single
T Tauri stars (TTSs) and Herbig Ae stars (HAEs).
\section{Choice of the disk wind model}
\subsection{Observational data}
The observational manifestations of the disk wind have been
investigated in most detail in TTSs, in whose spectra the wind is responsible
for the origin of several spectral lines, including such forbidden
lines as [O I] $\lambda 6300 \AA,$ [S II] $\lambda 6731 \AA $ and
some others (Solf and B\"{o}hm 1993; Hirth et al. 1994, 1997;
Hartigan et al. 1995). From these line profiles two wind
components have been distinguished: a high-velocity component
(HVC) (the jet, 200-400 km/s), which forms in the regions of the
accretion disk nearest to the star, and a low-velocity component
(LVC) (5-40 km/s), which forms in the peripheral regions of the disk
(Kwan and Tademaru 1995). Such a separation is, however, rather
conventional: studies of the rotational jet velocities revealed an
intermediate wind component with poloidal velocities of
the order of 100 km/s (Bacciotti et al. 2000; Lavalley-Fouquet et
al. 2000; Coffey et al. 2004; Woitas et al. 2005).
The presence of the HVC and LVC of the disk wind is especially
spectacular in the images of HH 30. This young star is surrounded
by a CS disk seen nearly edge-on; as a result, in the visible
part of the spectrum the direct radiation of the star is completely
absorbed by the disk. Visual images of HH 30 obtained with the
Hubble Space Telescope distinctly demonstrate a highly collimated
jet propagating in the direction perpendicular to the disk plane
with a mean velocity of about 300 km/s. Observations in the CO
molecular lines showed (Pety et al. 2006) that in the same
direction a slow biconical outflow is present, with a typical
velocity of about 12 km/s and an opening angle of about 30 degrees.
The mass loss rate in the biconical outflow estimated by these
authors was $\sim 6.3 \cdot 10^{-8} M_\odot$ per year, while that
in the jet was smaller by about two orders of magnitude ($\sim
10^{-9} M_\odot$ per year). Hence it follows that the bulk of the
kinetic energy of the disk wind ($L_w \approx 6\cdot 10^{31}$
erg/s) goes into the acceleration of matter in the narrow
collimated jet, while the main mass loss falls to the low-velocity
wind component. This conclusion agrees well with the results of the
numerical MHD simulations (Goodson et al. 1999).
\subsection{Theoretical models}
According to Blandford and Payne (1982), the processes of acceleration
and channeling of the matter in the disk wind are governed by the
magnetic field of the accretion disk. If the magnetic field lines
threading a thin rotating disk make an angle $\vartheta \ge \vartheta_0$
($\vartheta_0 = 30^\circ$) with the symmetry axis of the disk, then
the disk matter is accelerated by the Lorentz force and a
magneto-centrifugally driven wind can be launched. If the disk is
threaded by open field lines from some internal radius $r_i$ to some
external radius $r_e$, then one has the typical scheme of the extended
disk wind, for which different self-similar solutions have been obtained
with the help of magnetohydrodynamics (K\"{o}nigl 1989; Wardle and
K\"{o}nigl 1993; Ferreira and Pelletier 1995; Ferreira 1997;
Casse and Ferreira 2000; Ferreira and Casse 2004)\footnote{There
is also the so-called X-wind model, in which the wind is launched from a
single annulus located in the inner disk (Shu et al. 1994; Shang et al.
2002); in this model the disk does not possess a large-scale
magnetic field, and the main role is assigned to the magnetosphere
of the star. This model fails to reproduce some modern
observational facts (see, e.g., Ferreira et al.
2006; Coffey et al. 2004).}. The parameter $\xi = d \log
\dot M_{a}(\varpi)/d \log\varpi$ is a measure of the disk
ejection efficiency and regulates the relation between the low-velocity
and high-velocity wind components (here $\dot M_{a}$ is the
accretion rate and $\varpi$ the distance from the disk symmetry
axis). Calculations show that the best agreement with the observed
parameters of the forbidden lines in the spectra of TTSs occurs at
$\xi \approx$ 0.007-0.01 (Cabrit et al. (1999), Garcia et al.
2001a). In the calculations below we use model ``A'' from the paper
by Garcia et al. (2001a), where the parameter $\xi$ = 0.01.
\subsection{The model of the dust mixture}
During its evolution, the dust component of the protoplanetary
disk undergoes essential changes: the dust grains grow and
gradually settle towards the disk midplane (Safronov 1972;
Weidenschilling 2000). Further, they form solids and
planetesimals. However, in the surface layers of the disk, small
grains of approximately original (i.e. interstellar) chemical
composition persist for a long time. The results of photometric
observations of UX Ori type stars testify to this. The violent
photometric activity of these stars is caused by changes in
the CS extinction due to the small inclination of their CS disks
relative to the line of sight (see the review by Grinin (2000) and
the papers cited there). The data on the selective CS absorption
observed in these stars during their fadings show that the
reddening law is close to the interstellar one (see, e.g., Pugach
2004). Since the disk wind starts from the surface of the CS disk,
we operate below with the MRN mixture (Mathis et al. 1977). Along
with this, we also consider dust grains with a radius of
$a$ = 0.1 $\mu$m, which provide a reddening law close to that
given by the MRN mixture.
\section{The dust survival in the gas component of the wind}
As shown by Safier (1993a,b), ambipolar diffusion (the
ion-neutral drift) is an important source of gas heating in
the disk winds of TTSs. Under the effect of this mechanism the
accelerated gas is heated up to temperatures of about 10$^4$ K.
In the wind regions nearest to the star one can also expect a
substantial contribution to the gas heating from the X-ray radiation
(see, e.g., Glassgold et al. 2000), the major part of which originates in
the shocks produced during the infall of the accreted gas onto the star. The
question arises: can the dust grains survive in contact with the
heated gas?
\subsection{Collisions with the gas atoms. Thermal effect}
When gas particles collide with dust grains, part of
their kinetic energy is converted into heat, resulting in heating of the
dust. The efficiency of this process depends on the sort of
particles (atoms, ions, electrons) and is determined by the
relation (see, e.g., Draine 1981):
\begin{equation}
Q_{coll} = \pi a^2 \sum_i n_i\left(\frac{8kT_i}{\pi m_i}\right)^{1/2}\,2kT_i
\langle\alpha_i\rangle
\end{equation}
Here $n_i, m_i$ and $T_i$ are the number densities, masses and kinetic
temperatures of the gas species $i$, $a$ is the radius of the dust
grain, $\langle\alpha_i\rangle$ the mean fraction of the kinetic energy which
is converted to heat when a particle of species $i$ impacts the grain, and $k$
the Boltzmann constant.
Let us compare the efficiency of this mechanism with the
heating of the dust grains by the stellar radiation. The energy
absorbed by a dust grain per unit time can be written as
\begin{equation} Q_{*}
= \frac{\pi a^2}{4\pi r^2} \int_0^\infty\,L_*(\lambda)\,
Q_{abs}(\lambda)\,d\lambda,
\end{equation}
where $r$ is the spherical radius, $L_*(\lambda)$ the luminosity
of the star at the given wavelength $\lambda$, and $Q_{abs}$ the
absorption efficiency factor for grains of the given radius
and chemical composition.
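As a rough numerical illustration of Eqs. (1) and (2), the sketch below evaluates the ratio $Q_*/Q_{coll}$ for a single grain; the gas parameters, the value of $\langle\alpha_i\rangle$ and the grey absorption efficiency $Q_{abs}=1$ are illustrative assumptions only, not the Mie-theory inputs used for Figs. 1 and 2.
\begin{verbatim}
# Rough sketch of Eqs. (1)-(2): collisional heating Q_coll of a grain
# versus the power Q_* absorbed from the stellar radiation (CGS units).
# Gas parameters, <alpha_i> and the grey Q_abs = 1 are illustrative only.
import numpy as np

k_B, sigma = 1.380649e-16, 5.670374e-5   # Boltzmann, Stefan-Boltzmann
m_H, R_sun, AU = 1.6726e-24, 6.957e10, 1.495979e13

def Q_coll(a, species):
    """Eq. (1): heating rate (erg/s) of a grain of radius a (cm);
    species = list of (n_i [cm^-3], m_i [g], T_i [K], <alpha_i>)."""
    s = sum(n * np.sqrt(8 * k_B * T / (np.pi * m)) * 2 * k_B * T * al
            for n, m, T, al in species)
    return np.pi * a**2 * s

def Q_star(a, r, T_eff, R_star, Q_abs=1.0):
    """Eq. (2) for a grey grain illuminated by a Planckian star."""
    L_star = 4 * np.pi * R_star**2 * sigma * T_eff**4
    return np.pi * a**2 * Q_abs * L_star / (4 * np.pi * r**2)

gas = [(1e8, m_H, 1e4, 0.5)]   # neutral H at 10^4 K (assumed values)
a = 1e-5                       # 0.1 micron grain
for r in (0.1 * AU, 1.0 * AU):
    print(r / AU, "AU:", Q_star(a, r, 4000.0, 2.5 * R_sun) / Q_coll(a, gas))
\end{verbatim}
For the assumed parameters the ratio is large everywhere along the streamline, in qualitative agreement with the conclusion drawn below from Figs. 1 and 2.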
The ratios $Q_*/Q_{coll}$ for two sorts of dust grains
(graphite and astronomical silicate) are shown in Figs. 1 and 2.
The grain radius is equal to 0.1 $\mu$m. The effective temperature
$T_{eff}$ and the radius $R_*$ of the star are equal to 4000 K and
$2.5 R_\odot$, respectively. The spectral energy distribution of
the star is described by the Planck function. Calculations are
made for two streamlines: the innermost one, with its starting
coordinate in the disk plane at $\varpi_0$ = 0.1 AU, and an outer
streamline with $\varpi_0$ = 1 AU. The medium was assumed to be
optically thin for the stellar radiation. The optical characteristics
of the dust were calculated with the Mie theory. The optical
constants were taken from the paper by Draine (1985).
One can see that in both cases the dust heating due to
collisions with atoms and free electrons in the disk wind is
negligible in comparison with that by the radiation of the star.
Only in those wind regions where the radiation of the star is
strongly diluted, due to the absorption by the dust component of
the wind, may heating due to collisions be dominant. But even
there, cooling of the dust by radiation is an efficient
process and the grain temperature is far from the sublimation one.
As shown by Safier (1993a), the opposite process (the gas cooling
by the dust) plays an important role at the base of the wind, but
in the higher wind layers it is less effective than the adiabatic
cooling.
\begin{figure}
\centering
\includegraphics[angle = -90, width=12cm]{FIG1.EPS}
\caption{The ratio of the heating power due to the radiation of the star to
that due to dust-gas collisions along the streamline with the anchor
at $\varpi_0$ = 0.1 AU; the grain radius is 0.1 $\mu$m: (a)
graphite, (b) astronomical silicate. The accretion rate is equal to
$10^{-6} M_\odot$ per yr (dashed line), $10^{-7} M_\odot$ per yr
(solid line) and $10^{-8} M_\odot$ per yr (dotted line).}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle = -90, width=12cm]{FIG2.EPS}
\caption{The same as in Fig. 1, but for the streamline with
$\varpi_0$ = 1 AU.}
\end{figure}
Thus, the hot gas in the disk wind and the cold dust can exist in
the same regions of the medium, and there is no paradox or
contradiction of the laws of thermodynamics here. The reason is that the
disk wind is transparent or semi-transparent to the thermal
radiation of the dust grains and therefore is not a closed system
in the thermodynamic sense.
\subsection{Dust sputtering and sublimation}
A dust grain can be destroyed in the \textit{sputtering} process,
when molecules are ejected from the grain after the latter
collides with gas particles. In this case the mass loss by the dust particle is
\begin{equation}
\frac{dm}{dt}=-m_s\sum_iN_iY_i,
\end{equation}
where $N_i$ is the number of particles of species $i$ impacting
the dust grain per unit time, $m_s=\mu_sm_H$ the mass of the
molecule leaving the grain, $\mu_s$ its molecular weight, and $Y_i$ the
sputtering yield, i.e. the number of molecules released after
impact by a particle of the given species. The value of $Y_i$
strongly depends on the energy of the colliding particles (Draine and
Salpeter 1979). According to these authors, at incident atom energies
of about 1 eV the sputtering yield is equal to
zero both for silicate and for graphite. The same is valid for the
``chemical'' sputtering (Draine 1979).
Thus, the main process affecting the dust survival in the disk wind
is the sublimation of the dust in the radiation field of the star.
As mentioned above, Safier (1993a) and Garcia et al. (2001)
determined the dust sublimation zone in the disk winds of TTSs:
this is a region extending approximately to 0.1 AU from the central
source. In the case of HAEs the sublimation zone is larger; from
our calculations it is about 0.5 AU for the model adopted below.
Therefore, the inner regions of the disk wind in HAEs are free of
dust.
\section{Disk wind and circumstellar extinction}
Assuming that the disk wind has axial symmetry, we estimated the
fraction of the total luminosity of the star which can be absorbed
and scattered by the dust component of the wind, and the disk
inclination angles at which the wind becomes transparent to
the radiation of the star. The first of these parameters (we call
it the screening coefficient) is determined as follows:
\begin{equation}
\delta = \frac{1}{L_*}\,\int_0^\infty
L_*(\lambda)\,d\lambda\int_0^{\pi/2} (1 -
e^{-\tau_0(\lambda,\theta)})\sin{\theta}\,d\theta
\end{equation}
Here $L_*$ is the bolometric luminosity of the star (as above, we
assume the Planck spectrum), $\theta$ an angle between an
arbitrary radius-vector $\vec r$ and the symmetry axis of the
disk, and $\tau_0(\lambda,\theta)$ the optical depth of the disk wind
at the wavelength $\lambda$ in the $\vec r$ direction.
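A minimal numerical sketch of the double integral above is given below; the Planck function is used for $L_*(\lambda)$, while the adopted form of $\tau_0(\lambda,\theta)$ is a toy model (a power law in wavelength, concentrated towards the disk plane) and not the wind model of Garcia et al. (2001a) used for Fig. 3.
\begin{verbatim}
# Toy evaluation of the screening coefficient delta.  CGS units.
import numpy as np

h_P, c, k_B = 6.62607e-27, 2.99792458e10, 1.380649e-16

def trapz(y, x):                      # simple trapezoid rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def planck(lam, T):                   # B_lambda, arbitrary normalization
    return 1.0 / (lam**5 * np.expm1(h_P * c / (lam * k_B * T)))

def tau0(lam, th):                    # toy optical depth model (assumed);
    return 2.0 * (5e-5 / lam) * np.sin(th)**4   # peaks at the disk plane

def delta(T_eff, n_lam=400, n_th=200):
    lam = np.geomspace(1e-5, 1e-3, n_lam)       # 0.1 - 10 micron
    th = np.linspace(0.0, np.pi / 2, n_th)
    B = planck(lam, T_eff)
    w = B / trapz(B, lam)                       # normalized spectral weight
    inner = np.array([trapz((1 - np.exp(-tau0(l, th))) * np.sin(th), th)
                      for l in lam])
    return trapz(w * inner, lam)

print("delta ~", round(delta(4000.0), 3))       # toy TTS-like value
\end{verbatim}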
Calculations of $\tau_0$ have been made for the MRN mixture at a
dust-to-gas ratio of 1:100, typical for the interstellar
medium. The gas density distribution in the TTS wind was taken
as mentioned above. The same model was used for the disk wind in
HAEs. For this purpose we used the scaling relations (9) given in
Garcia et al. (2001a), connecting the disk wind parameters with
the mass of the star and the accretion rate. For HAEs we adopt
$M_* = 2.5 M_\odot$. Two other parameters needed for the calculation
of the sublimation radius are the stellar luminosity ($L_* = 50
L_\odot$) and the effective temperature ($T_{ef} = 9000$ K).
The results of the $\delta$ calculations for different values of the
accretion rate in the range $\dot M_a$ = $10^{-9}$ - $10^{-6}
M_\odot$ per year are shown in Fig. 3. It is seen that for TTSs
the value of $\delta$ changes from about 0.1 at $\dot
M_a$ = $10^{-9} M_\odot$ yr$^{-1}$ to $\approx$ 0.4 at $\dot M_a =
10^{-6} M_\odot$ yr$^{-1}$. This means that at $\dot M_a \geq 10^{-8}
M_\odot$ yr$^{-1}$ \emph{the dust component of the disk wind can
absorb and scatter a noticeable fraction of the stellar radiation,
producing thus an expanding shadow zone in the adjacent regions of
the CS disk}.
Figure 3 also presents the results of analogous calculations for a
graphite-silicate mixture with proportions as in Draine and Lee
(1984) and with a fixed radius ($a$ = 0.1 $\mu$m). Such a
mono-dispersed mixture will be used in a further paper in
simulations of the infrared radiation of the disk wind, since in
the visual and near-infrared regions of the spectrum it has
optical characteristics very close to those of the MRN mixture. This
is indirectly confirmed by Fig. 3: it is seen that this mixture
provides almost the same screening effect of the disk wind as the
MRN mixture.
Calculations of the thermal balance of graphite and silicate
grains with the radius $a$ = 0.1 $\mu$m showed that for Herbig Ae
stars with the adopted $L_*$ and $T_{ef}$, the sublimation radius is
equal to 0.35 AU for graphite particles and 0.75 AU for silicate
ones. Taking this into account, we calculated the optical depths in the
disk wind $\tau_0$ and the screening coefficient $\delta$ for HAEs.
It is seen from Fig. 3 that the absence of dust in the inner part
of the disk wind of HAEs notably decreases the solid angle within
which the radiation of the star can be absorbed and scattered by
the dust component of the wind: the maximum value of $\delta$ (at
$\dot M_a$ = 10$^{-6} M_\odot$ yr$^{-1}$) is about 0.15; this is
less by a factor of $\sim$ 3 than that for TTSs at the same $\dot
M_a$, but comparable with the effect produced by the puffed-up
inner rim in the dust sublimation zone (Natta et al. 2001).
\begin{figure*}
\centering
\includegraphics[angle = -90, width=9cm]{FIG3.EPS}
\caption[]{The coefficient of screening of the stellar radiation
by the dusty disk wind vs. $\dot M_a$. Solid line: the MRN
mixture; dashed line: the mono-dispersed mixture with radius 0.1
$\mu$m (both curves relate to the disk wind in TTSs);
dash-dotted line: the mono-dispersed mixture with radius 0.1
$\mu$m for the disk wind in HAEs.}
\end{figure*}
Figure 4 shows the angles $\theta_1$ between the disk plane and the
line of sight at which the optical depth of the disk wind $\tau$
is equal to unity at the wavelengths $\lambda$ = 0.5 and 0.1 $\mu$m.
The former is close to the maximum of the $V$ passband; the
latter is close to the wavelength of the $L_\alpha$ line, which
plays an important role in the energetics of the ultraviolet
spectra of young stars. The calculations are performed for the MRN
mixture. It is seen that in TTSs the angle $\theta_1$ at which
$\tau_{\lambda_{0.5}}$ = 1 ranges from 8 to 37 degrees, depending
on the accretion rate. In HAEs the corresponding values of
$\theta_1$ are notably smaller because of the existence of the dust-free
inner region in the disk wind.
At $\lambda = 0.1 \mu$m the extinction coefficient of the MRN
mixture is three times greater than that at $\lambda = 0.5 \mu$m.
As a result, the angle $\theta_1$ corresponding to
$\tau_{\lambda_{0.1}}=1$ increases. For TTSs it reaches 12 - 45
degrees at accretion rates of $10^{-9} - 10^{-6}M_\odot$ per year,
respectively. The calculated limiting angles show at which
inclinations of the CS disks to the line of sight one can observe
the ultraviolet and optical spectra of young stars undisturbed by
the absorption in the disk wind.
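The limiting angles of Fig. 4 can be obtained by solving $\tau(\lambda,\theta_1)=1$ for the angle $\theta_1$ measured from the disk plane; the sketch below does this by root finding for the same toy optical-depth model (an illustrative assumption) as in the $\delta$ sketch above.
\begin{verbatim}
# Toy computation of theta_1 (tau = 1), with theta_1 measured from
# the disk plane; the optical-depth model is an assumed toy form.
import numpy as np
from scipy.optimize import brentq

def tau(lam, th_plane):              # same toy model, theta from the plane
    return 5.0 * (5e-5 / lam) * np.cos(th_plane)**4

for lam in (5e-5, 1e-5):             # 0.5 and 0.1 micron
    th1 = brentq(lambda t: tau(lam, t) - 1.0, 0.0, np.pi / 2 - 1e-9)
    print(lam * 1e4, "micron -> theta_1 =", round(np.degrees(th1), 1), "deg")
\end{verbatim}
As expected, the shorter wavelength, with its larger extinction, yields the larger value of $\theta_1$.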
\begin{figure*}
\centering
\includegraphics[angle = -90, width=15cm]{FIG4.EPS}
\caption[]{An angle between the disk plane and the line of sight
at which the optical depth of the disk wind is equal to unity at
the wavelength $\lambda = 0.5\mu$m (solid) and 0.1$\mu$m (dashed)
\textbf{a)} for TTSs, \textbf{b)} - for HAEs. See the text for
details.}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=12cm]{FIG5.EPS}
\caption[]{Contours of the TTSs' disk wind at $\tau_\bot$ = 1 at
the wavelengths $0.1\mu$m (solid line) and 0.5$\mu$m (dashed line).
The accretion rate is $\dot{M_a}=10^{-6}M_\odot$ per year.}
\end{figure*}
Figure 5 shows the contours of the disk wind in TTSs calculated under
the condition that the vertical optical depth $\tau_\bot$, measured
from the disk surface inwards, is equal to unity. The calculations have
been made for three wavelengths (0.1, 0.5 and 3 $\mu$m) for the
wind model with $\dot{M_a}=10^{-6}M_\odot$ per year and MRN dust.
The wavelength 3 $\mu$m corresponds to the effective wavelength of
the infrared radiation originating in the regions of the CS disk
nearest to the star. At this $\lambda$ a major part of the disk
wind is transparent to the radiation. Therefore, its boundary at
3 $\mu$m is close to the surface of the standard flared disk
(e.g., Kenyon and Hartmann 1987; Dullemond and Natta 2003),
determined as $H/r \approx 0.1,$ where $H$ is the scale height
at the distance $r$. In the visual and especially in the
ultraviolet spectral regions the effective optical depth of the
disk wind increases, and one has to take this circumstance into
account when modelling the spectral lines arising in the
dense wind layers (such as, for example, the ultraviolet lines of the
H$_2$ molecule).
\section{Discussion}
Thus, we showed that in the case of T Tauri stars the disk wind
can absorb and scatter the radiation of the star within a rather
large fraction of the full solid angle 4$\pi$ over the wide range of
accretion rates $\dot{M_a}= 10^{-8} - 10^{-6}M_\odot$ yr$^{-1}$.
This means that the disk wind is, in fact, capable of playing the
same role as the puffed-up inner rim in the dust sublimation zone
of the accretion disk (Natta et al. 2001). This inner rim screens
the adjoining regions of the accretion disk from the direct
stellar radiation (Dullemond et al. 2001) and, at certain
inclination angles of the disk to the line of sight, may be a
source of variable CS extinction (Dullemond et al. 2003). The
dust component of the disk wind is able to produce the same
effect.
In the case of Herbig Ae stars the screening effect produced by the
dust component of the disk wind in the same range of accretion
rates is significantly smaller than that for TTSs. Therefore, in HAEs
the contribution of the disk wind to the thermal radiation of
the CS dust can be comparable with that from the inner rim only at
high values of the accretion rate, $\geq 10^{-6}M_\odot$
yr$^{-1}$.
It should be noted that we considered the optical properties of the
dust component of the wind using the model of Garcia et al. (2001a)
with $\xi = 0.01$. In the disk wind theory this important
parameter governs the efficiency of the magneto-centrifugal
mechanism of gas acceleration. In particular, a growth in
$\xi$ leads to an increase of the mass loss rate in the disk wind
as well as to a decrease of the terminal velocity of the wind (Garcia
et al. 2001a). Both effects work in the same direction: they
increase the density of the matter in the wind. Therefore, in models
with large $\xi$ the disk wind has to be more opaque due to
dust in comparison with the model considered above.
\subsection{Structured disk wind and variable circumstellar
extinction} Based on the existing disk wind models, we have assumed
that the wind possesses axial symmetry and azimuthal
homogeneity. In fact, this is a model simplification and, in
reality, it seems hardly feasible. Under conditions of supersonic
turbulent motions the disk wind cannot be a continuous outflow in
the hydrodynamical sense. It has to consist of an aggregate of gas
and dust streams of different strength arising from the disk
surface. In such a case the filling factor $q$ would be one of the
wind parameters. It denotes the fraction of the wind volume filled
with streams of matter. At present we can only note that $q$ has
to be less than unity.
Thus, under real conditions, the column density of the dust
along a line of sight passing through the disk wind may be a
complex function of time. It may fluctuate due to the motion of
the gas and dust streams. Besides, its changes can reveal
quasi-periods caused by the repeated intersection of the line of
sight by the same dominant wind stream. Note that such
quasi-periods in the brightness changes have indeed been observed
in some UX Ori type stars (see, e.g., Shevchenko et al. 1993). The
rotation of the nonhomogeneous disk wind could also be the reason for
the spectral variability of some young stars (e.g. Kozlova et al.
2003).
Changes in the CS extinction may vary not only the radiation flux
coming to the observer directly from the star, but also the radiation
flux from that region of the disk which is illuminated by the star
through the disk wind. Shadows from the wind in this part of the
CS disk have to move along the disk, following the motion of the
gas and dust streams. Since these streams look like spinning-up
spirals, their shadows projected on the disk have to be
spiral-like as well. Detection and investigation of such moving shadows in
the images of the CS disks would be important for the theory of
disk winds.
{\bf HH 30}. It is likely that precisely such a mechanism of
variability is realized in the case of HH 30. Comparison of the
images of this object obtained at different times with the
Hubble Space Telescope showed that they are variable. Both the
type of the object's asymmetry and the integral flux from it were
variable (Burrows et al. 1996; Stapelfeldt et al. 1999; Wood et
al. 2000). Wood and Whitney (1998) supposed that changes in the
conditions of illumination of the CS disk by the spotted rotating
star could be the reason for the HH 30 variability. However, new
data on the variability of the object (Watson and Stapelfeldt
2004, 2007) did not confirm the presence of a period connected
with the rotation of the spotted star. According to these authors,
the variability of HH 30 has a more complex character and is caused by
changes in the CS extinction in the inner regions of the disk. A
structured disk wind, consisting of separate gas and dust
streams starting from the surface of the CS disk, corresponds well
to this role.
{\bf RW Aur}. Another example of a young star whose variability
is difficult to explain without appealing to the hypothesis of a
dusty disk wind is the classical TTS RW Aur, one of the most studied
young stars. It is characterized by large-amplitude
photometric activity (Herbst et al. 1994) and a complex
variability of the emission line profiles and intensities
(Petrov et al. 2001; Alencar et al. 2005). Recently Petrov and
Kozak (2007) analyzed in detail a long-term series of spectral
and photometric observations of RW Aur and showed that there is a
correlation between the variations of emission lines with different
excitation potentials, which can be explained only by assuming
that the spectral variability is due to screening of the emission region
by CS dust clouds. It is known from observations that the
symmetry axis of the RW Aur CS disk is inclined to the line of
sight by 46 $\pm 3^\circ$ (this angle was derived very
accurately with the help of the radial and tangential (projected on
the sky plane) velocities of the moving features in the optical jet
(Lopez-Martin et al. 2003)). At such an inclination the disk cannot
screen the star, even taking into account the rim in the
sublimation zone. Therefore, Petrov and Kozak connected the appearance
of dust on the line of sight (and hence at high latitudes in the star's
coordinate system) with the dust fragments of the disk wind.
The applicability of the theory of dusty disk winds is not
limited to the examples given above. Calculations show (Grinin and
Tambovtseva 2002; Tambovtseva et al. 2006) that the photometric
effects caused by the dust component of the disk wind can be
observed in young binaries. In particular, obscuration by the
extended disk wind could cause the abnormally long-lasting eclipses
observed in some binaries.
\section{Conclusion}
Let us briefly summarize the results of the analysis given above.
1. Based on the disk wind model described by Garcia et al. (2001a),
we showed that the dust grains carried away by the gas component
of the wind survive in contact with the hot ($10^4$ K) gas.
2. The range of solid angles covered by the part of
the wind that is opaque due to dust depends on the accretion rate and the
luminosity of the star, and for TTSs it may amount to a noticeable
fraction of the full solid angle 4$\pi$. This means that the disk
wind can contribute notably both to the scattered radiation at
optical and ultraviolet wavelengths and to the infrared excesses
of the radiation of T Tauri stars.
3. The conditions of the disk wind formation are such that it cannot
be a continuous axially-symmetric outflow; it is rather an
agglomerate of gas and dust streams starting from those points
of the circumstellar disk where the conditions for the acceleration
of matter by the magnetic field are most favorable. The motion of
the matter in the disk wind results in variations of the dust
column density on the line of sight. Therefore, at certain
inclinations of the disk to the line of sight, the gas and dust
streams of the disk wind can cause variable CS extinction,
resulting in the photometric activity of young stars. For the
same reason one can see moving shadows in the images of CS disks,
caused by the gas and dust streams arising from the disk surface.
4. Herbig Ae stars have their sublimation radius at about 0.5 AU
from the central source. As a result, the inner, densest part of
the disk wind is free of dust, and the effective solid angle
within which the dust component of the wind can interact with the
radiation of the star is small compared to 4$\pi$. Nevertheless,
even in such a case the peripheral region of the wind may be a source
of variable CS extinction, responsible for the photometric
activity of UX Ori type stars. Therefore, photometric monitoring of
these stars that is dense in time may give valuable
information on the disk wind structure in the acceleration zone, in
the close vicinity of the surface of the accretion disk.
We are grateful to A. K\"{o}nigl for a useful discussion of the
results of this work and for valuable comments. The work is supported
by the Program of the Presidium of RAS ``Origin and evolution of
stars and Galaxies'', INTAS grant N 03-51-6311 and grant
NS-8542.2006.2.
\begin{center}
\LARGE {\bf References}
\end{center}
S.H.P. Alencar, G. Basri, L. Hartmann, N. Calvet, 2005, Astron. Astrophys.
\textbf{440}, 595\\
P. Andr\'{e}, D. Ward-Thompson, M. Barsony, 2000,
\textit{Protostars and Planets IV}, (eds. V. Mannings, A.P. Boss,
S.S. Russel, The University of Arizona Press, Tucson), p. 59.\\
F. Bacciotti, R. Mundt, T. P. Ray et al. 2000, Astrophys. J. \textbf{537}, L49\\
R. D. Blandford and D.G. Payne, 1982, MNRAS, \textbf{199}, 883\\
C. J. Burrows, K. R. Stapelfeldt, A.M. Watson et al. 1996,
Astrophys.J., \textbf{473}, 437\\
S. Cabrit, J. Ferreira, A.C. Raga, 1999, Astron. Astrophys., \textbf{343}, L61 \\
N. Calvet, L. Hartmann, S.E. Storm, 2000, \textit{Protostars and
Planets IV},(Eds. V. Mannings, A.P. Boss, S.S. Russel,
The University of Arizona Press, Tucson), p. 377\\
F. Casse and J. Ferreira, 2000, Astron. Astrophys. \textbf{353}, 1115\\
D. Coffey, F. Bacciotti, J. Woitas, T.P. Ray, and J.
Eisl\"{o}ffel, 2004, Astrophys. J. \textbf{604}, 758 \\
B.T. Draine, 1979, Astrophys. J., \textbf{230}, 106\\
B.T. Draine, 1981, Astrophys. J., \textbf{245}, 880\\
B.T. Draine, 1985, Astrophys. J. Suppl. Ser., \textbf{57}, 587\\
B.T. Draine and E.E. Salpeter, 1979, Astrophys. J. \textbf{231}, 77\\
B.T. Draine and H.M. Lee, 1984, Astrophys. J. \textbf{285}, 89\\
C.P. Dullemond and A. Natta, 2003, Astron. Astrophys.
\textbf{408}, 161\\
C.P. Dullemond, C. Dominik, and A. Natta, 2001,
Astrophys. J., \textbf{560}, 957\\
C.P. Dullemond, M.E. van den Ancker, B. Acke, and R. van Boekel
2003, Astrophys. J., \textbf{594}, L47\\
J. Ferreira, 1997, Astron. Astrophys., \textbf{319}, 340\\
J. Ferreira and G. Pelletier, 1995, Astron. Astrophys. \textbf{295}, 807\\
J. Ferreira and F. Casse, 2004, Astrophys. J., \textbf{601}, L139\\
J. Ferreira, C. Dougados, S. Cabrit, 2006, Astron. Astrophys. \textbf{453}, 785\\
P.J.V. Garcia, J. Ferreira, S. Cabrit, and L. Binette, 2001a,
Astron. Astrophys. \textbf{377}, 589\\
P.J.V. Garcia, S. Cabrit, J. Ferreira,, and L. Binette, 2001b,
Astron. Astrophys. \textbf{377}, 609\\
A. E. Glassgold, E.D. Feigelson, T. Montmerle, 2000,
\textit{Protostars and Planets IV}, (eds. V. Mannings, A.P. Boss,
S.S. Russel, The University of Arizona Press, Tucson), p. 457\\
V.P. Grinin, 2000, \textit{Disks, Planetesimals, and Planets},
(Eds. F. Garzon, C. Eiroa, D. de Winter, and T. J. Mahoney, ASP
Conference Proceedings, Vol. 219, Astronomical Society of the Pacific), p.216\\
V. P. Grinin and L..V Tambovtseva, 2002, Astron. Letters \textbf{28}, 601\\
V.P.Grinin, L.V. Tambovtseva, N.Ya. Sotnikova, 2004, Astron. Lett. \textbf{30}, 694\\
A.P. Goodson, K.-H. B\"{o}hm, and R.M. Winglee, 1999, Astrophys. J. \textbf{524}, 142\\
P. Hartigan, S. Edwards, L. Ghandour, 1995, Astrophys. J. \textbf{452}, 736\\
W. Herbst, D.K. Herbst, and E.J. Grossman, 1994, Astron. J. \textbf{108}, 1906\\
G. A. Hirth, R. Mundt, J. Solf, and T.P. Ray, 1994, Astrophys. J., \textbf{427}, L99\\
G. A. Hirth, R. Mundt, J. Solf, 1997, Astron . Astrophys. Suppl. Ser., \textbf{126}, 437\\
S.J. Kenyon and L. Hartmann, 1987, Astrophys. J. \textbf{323}, 714\\
A. K\"{o}nigl, 1989, Astrophys. J. \textbf{342}, 208\\
O.V. Kozlova, V.P.Grinin, G.A. Chuntonov, 2003, Astrophysics,
\textbf{46}, 265\\
J. Kwan and E. Tademaru, 1995, Astrophys. J. \textbf{454}, 382\\
C. Lavalley-Fouquet, S. Cabrit, and C. Dougados, 2000, Astron.
Astrophys. \textbf{356}, L41\\
L. L\'{o}pez-Martin, S. Cabrit and C. Dougados, 2003, Astron. Astrophys. \textbf{405}, L1\\
J.M. Mathis, W. Rumpl, and K.H. Nordsieck, 1977, Astrophys. J.
\textbf{217}, 425\\
L.G. Mundy, L.W. Looney, W.J. Welch, 2000, \textit{Protostars and
Planets IV}, (eds. V. Mannings, A.P. Boss, S.S. Russel,
The University of Arizona Press, Tucson), p. 355\\
A. Natta, T. Prusti, R. Neri et al. 2001, Astron. Astrophys., {\bf
371}, 186 \\
P.P. Petrov, G.F. Gahm, J.F. Gameiro, et al. 2001, Astron.
Astrophys. \textbf{369}, 993\\
P.P. Petrov and B.S. Kozak, 2007, Astron. Rep. \textbf{51}, 500\\
J. Pety, F. Gueth, S. Guilloteau, A. Dutrey, 2006, Astron. Astrophys. \textbf{458}, 841\\
A.F. Pugach, 2004, Astron. Rep. \textbf{48}, 470\\
R E. Pudritz, R. Ouyed, C. Fendt, and A. Brandenburg, 2007,
\textit{Protostars and Planets V} (Eds. B. Reipurth,
D. Jewitt, K. Keil, Univ. of Arizona Press, Tucson, 951) p. 277\\
P. N. Safier, 1993a, Astrophys. J. \textbf{408}, 115\\
P. N. Safier, 1993b, Astrophys. J. \textbf{408}, 148\\
V.S. Safronov, 1972, \emph{Evolution of the protoplanetary cloud
and formation of the Earth and planets}, Moscow, Nauka\\
J. Solf and K.H. B\"{o}hm, 1993, Astrophys. J., \textbf{410}, L31\\
H. Shang, A. E. Glassgold, F. H. Shu, and S. Lizano, 2002, Astrophys. J.
\textbf{564}, 853\\
V.S. Shevchenko, K.N. Grankin, M.A. Ibragimov, et al. 1993,
Astrophys. Sp. Sci. \textbf{202}, 121\\
F. Shu, J. Najita, E. Ostriker, et al., 1994, Astrophys. J. \textbf{429}, 781\\
K.R. Stapelfeldt, A.M. Watson, J.E. Krist et al., 1999, Astrophys. J. \textbf{516}, L95\\
L. V. Tambovtseva, V. P. Grinin, G. Weigelt, 2006, Astron.
Astrophys., \textbf{448}, 633\\
M. Wardle and A. K\"{o}nigl, 1993, Astrophys. J. \textbf{410}, 218\\
A.M. Watson and K. R. Stapelfeldt, 2004, Astrophys. J. \textbf{602}, 860\\
A.M. Watson and K.R. Stapelfeldt, 2007, Astron. J., \textbf{133}, 845\\
S.J. Weidenschilling, 2000, Space Sci. Rev. \textbf{92}, 281\\
J. Woitas, F. Bacciotti, T. P. Ray et al., 2005, Astron. Astrophys. \textbf{432}, 149\\
K. Wood, S.J. Wolk, K.Z. Stanek et al., 2000, Astrophys. J. \textbf{542}, L21\\
K. Wood, and B. Whitney, 1998, Astrophys. J., \textbf{506}, L43\\
\end{document}
\section{Introduction}
\medskip
The last decade has undoubtedly been one of great advances in physical cosmology.
One of the most important achievements is the precision measurement of the anisotropies in the CMB \cite{CMB}, together with
what seems to be their natural explanation within the context of the inflationary scenarios \cite{Guth}.
There is however a serious hole in this seemingly blemish-less suit of the Emperor:
the description of our Universe -- or the relevant part thereof -- starts\footnote{Here we refer to the era relevant to the starting point of the analysis that leads to the ``fluctuation spectrum''. In the standard view of inflation, the relevant region of the universe starts with a Planck regime containing large fluctuations of essentially all relevant quantities, but then a large number of inflationary e-folds leads to a homogeneous and isotropic universe, which is in fact the starting point of the analysis that takes us to the
primordial fluctuation spectrum. One might wish, instead, to regard such a fluctuation spectrum as a remnant of the earlier anisotropic and inhomogeneous conditions, but then one ends up giving up
any pretense that one can explain its origin and account for its specific form.} with an initial condition which is
homogeneous and isotropic, both in the background space-time and in the quantum state that is supposed to describe the ``fluctuations'', and it is quite easy to
see that the subsequent evolution through dynamics that do not break these symmetries can only lead
to an equally homogeneous and isotropic universe\footnote{In fact many arguments have been put forward in order to deal with this issue, which is often phrased in terms of the Quantum to Classical transition -- without focussing on the required
concomitant breakdown of homogeneity and isotropy in the state -- the most popular ones associated with the notion of decoherence. These alternatives have been critically discussed in \cite{InflationUS}.}. In fact, if we were to think in terms of first principles,
we would start by acknowledging that the correct description of the problem at hand would involve a full theory of quantum gravity coupled to a theory of all the matter quantum fields, and that there the issue would be whether we start with a quantum state that is homogeneous and isotropic or not. Even if these notions do not make sense within that level of description, a fair question is whether or not the inhomogeneities and anisotropies we are interested in can be traced to aspects of the description that have no counterpart in the approximation we are using. Recall that such a description involves the separation of background vs. fluctuations and thus must be viewed only as an approximation, one that allows us to separate the nonlinearities in the system -- as well as those aspects that are inherent to quantum gravity -- from the linear part of the problem represented by the fluctuations, which are treated in terms of linear quantum field theory. In this sense we might be tempted to ignore the problem and view it as something inherent to such an approximation. This would be fine, but we could not argue that we understand the origin of the CMB spectrum if we view the asymmetries it embodies as arising from some aspect of the theory we do not rely on or touch upon.
In fact, what we propose in the following treatment is to bring one element or aspect that we view as part of the
quantum gravity realm to the forefront, in order to modify -- in a minimalistic way -- the semiclassical treatment that, as we said, we find lacking.
\smallskip
It is of course not at all clear that the problem we are discussing
should be related to quantum gravity, but
since that is the only sphere of fundamental physics for which we have so far failed to find
a satisfactory conceptual understanding\footnote{There are of course many open issues in the fundamental
understanding of physics that are not in principle connected with the issue of quantum gravity; however,
it is only in this latter field that the problems seem to be connected with deep conceptual issues, and where
one can envision the possibility that their resolution might require a fundamental
change of paradigm, as would be the case if we find we must modify the laws of quantum mechanics.}, we find it quite
natural to associate the two. In this sense we would be following the ideas of Penrose regarding the fundamental
changes that, he argues \cite{Penrose}, are needed in quantum mechanics, and their connection to quantum gravity.
He argues that quantum gravity might play a role in triggering a real
dynamical collapse of the
wave function of systems \cite{Penrose}. His proposals would have a system collapsing whenever the gravitational interaction
energy between two alternative
realizations that appear as superposed in the wave function of the system reaches a certain threshold, which is identified with
$M_{Planck}$.
These ideas can, in principle, lead to observable effects and, in fact, experiments to test them are currently being contemplated \cite{ExpPenrose}
(although it seems that the available technology cannot yet be pushed to the level where actual tests might be expected to become a
reality soon). We have considered in \cite{InflationUS} a situation for which there already exists a wealth of empirical information, and
which, we have argued,
cannot be fully understood without involving some New Physics, whose required features would seem to be quite close to Penrose's proposals.
\section{ The quantum origin of the seeds of cosmic structure}\label{sec_main}
\medskip
One of the major claimed successes of inflationary cosmology is its reported ability
to predict the correct spectrum for the primordial density fluctuations that seed the
growth of structure in our Universe. However,
when one thinks about it, one immediately notes that there
is something truly remarkable
about it, namely that out of an initial situation which is taken to be perfectly isotropic and homogeneous,
and based on a dynamics that preserves
those symmetries, one ends up with a non-homogeneous and non-isotropic situation. Most of our colleagues who have been working in this
field for a long time would reassure us that there is no problem at all,
invoking a variety of arguments. It is noteworthy that these arguments tend to differ from one inflationary cosmologist
to another \cite{Cosmologists}. Other cosmologists do acknowledge that there seems to be something unclear at this point \cite{Cosmologists2}. In a recent paper \cite{InflationUS},
a critical analysis of such proposals has been carried out, indicating that all the existing justifications fail to be fully
satisfactory.
In fact, the situation at hand is quite different from any other situation usually treated using quantum mechanics, as can be seen by noting that, while in analyzing ordinary situations quantum mechanics offers us at least one
self-consistent assignment,
at each time, of a state of the Hilbert space to our physical system (we are of course thinking of the Schroedinger picture), the same is not true for the standard analysis of the current problem.
It is well known that in certain instances there might be several
mutually incompatible assignments of that sort, as for instance when contemplating the two descriptions offered by two different inertial
observers who
consider a given specific EPR experiment.
However, as we said, in all other known cases one has at least one description available. The reader
should attempt such an
assignment -- of a state at each time -- when presented with any of the proposed justifications offered to deal with the issue
of the
transition from a
symmetric universe to a non-symmetric one. The reader will find that in each case he/she will be asked to accept one of the
following: i)
our universe was not really
in that symmetric state (corresponding to the vacuum of the quantum field), ii) our universe is still described by a symmetric state,
iii) at least at some points in the past the description of the state of our universe could not be done within quantum mechanics, iv)
quantum mechanics does not correspond to
the full description of a system at all times, or v) our own observations of the
universe mark the transition from a symmetric to an asymmetric state. It should be clear that none of these represents
a satisfactory alternative, in particular if we want to claim that we understand the
evolution of our universe and its structure -- including ourselves -- as the result of fluctuations of quantum origin in
the very early stages of
our cosmology.
Needless to say, none of these options will be explicitly called upon in the arguments one is presented with; however,
one or more will be hidden, perhaps in a subtle way, underneath some aspect of
the explanation. For a more thorough discussion we refer the reader to \cite{InflationUS}.
The interesting part of this situation is that one is forced to call upon some novel physical
process to fill in the
missing or unacceptable part of the justification of the steps that take us from that
early and symmetric state to the
asymmetric state
of our universe today, or the state of the universe we photograph when we look at the surface of
last scattering in the pictures of the CMB.
In \cite{InflationUS} we have considered, in this cosmological context, a proposal calling for a
self-induced collapse of the wave function
along the general
lines conceived by Penrose, and
have shown that the requirement that one should obtain results compatible with current observations
is already sufficient
to restrict
in important
ways some specific aspects of this novel physics. Thus, if we consider that the origin of
such new physics could be traced to some aspects of quantum gravity, we would already be in a position to set
phenomenological
constraints on
at least this aspect of the quantum
theory of gravitation.
In the following we give a short description of this analysis. The
starting point is, as usual, the action of a scalar field coupled to
gravity\footnote{We will be using units where $c=1$ but will keep $\hbar$ (with units of mass $M$ times length $L$) and $G$ (with units of $L/M$) explicitly throughout the manuscript. The coordinates in the metric, $\eta, x^i$, will have units of length $L$, but the metric components, such as the scale factor $a$, will be dimensionless. The field $\phi$ has units of $(M/L)^{1/2}$, and the potential $V$ has units of $M/L^3$.}.
\nopagebreak[3]\begin{equation}
\label{eq_action}
S=\int d^4x \sqrt{-g} \lbrack {1\over {16\pi G}} R[g] - 1/2\nabla_a\phi
\nabla_b\phi g^{ab} - V(\phi)\rbrack,
\end{equation}
where $\phi$ stands for the inflaton, the scalar field responsible for inflation, and $V$ for the
inflaton's potential.
One then splits both the metric and the
scalar field into a spatially homogeneous (`background') part and an
inhomogeneous part (`fluctuation'), i.e. $g=g_0+\delta g$,
$\phi=\phi_0+\delta\phi$.
The equations governing the background unperturbed Friedmann-Robertson-Walker universe with line element
$ ds^2=a(\eta)^2\left[- d\eta^2 + \delta_{ij} dx^idx^j\right]$, and the homogeneous scalar field $\phi_0(\eta)$, are the
scalar field equation,
\begin{equation}
\ddot\phi_0 + 2 \frac{\dot a}{ a}\dot\phi_0 +
a^2\partial_{\phi}V[\phi] =0, \label{Scalar0}
\end{equation}
and Friedman's
equation
\begin{equation}
3\frac{\dot a^2}{a^2}=4\pi G (\dot \phi^2_0+ 2 a^2 V[\phi_0]).
\end{equation}
The background solution
corresponds to the standard inflationary cosmology which, written using conformal time,
has during the inflationary era a scale factor
$a(\eta)=-\frac{1}{H_{\rm I} \eta},$
with $ H_I ^2\approx (8\pi/3) G V$ and with the scalar field $\phi_0$ in the slow-roll regime, so that $\dot\phi_0= - ( a^3/3 \dot a)V'$. This era is supposed to give rise to a reheating period whereby the universe is repopulated with ordinary matter fields, and then to a standard hot big bang cosmology leading up to the present cosmological time. The functional form of $a(\eta)$ during these latter periods is of course different, but we will ignore such details on the account that most of the change in the value of $a$ occurs during the inflationary regime. The overall normalization of the scale factor will be set so that $ a=1$ at the ``present cosmological time''. The inflationary regime would end at a value $\eta=\eta_0$, negative and very small in absolute terms.
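As a minimal numerical sketch of this background dynamics, one may integrate the scalar-field equation and the Friedmann constraint in conformal time and compare the resulting scale factor with the de Sitter form $a(\eta)=-1/(H_I\eta)$; the quadratic potential and all parameter values below (in units with $G=c=\hbar=1$) are illustrative assumptions.
\begin{verbatim}
# Sketch: integrate the background equations in conformal time for
# V = m^2 phi^2 / 2 and compare a(eta) with -1/(H_I eta).
# Units G = c = hbar = 1; all parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

G, m = 1.0, 0.1
V, dV = (lambda p: 0.5 * m**2 * p**2), (lambda p: m**2 * p)

def rhs(eta, u):
    phi, dphi, a = u
    da = a * np.sqrt(4 * np.pi * G * (dphi**2 + 2 * a**2 * V(phi)) / 3)
    ddphi = -2 * (da / a) * dphi - a**2 * dV(phi)
    return [dphi, ddphi, da]

phi0 = 3.0
H_I = np.sqrt(8 * np.pi * G * V(phi0) / 3)
eta0 = -10.0
a0 = -1.0 / (H_I * eta0)
dphi0 = -(a0 / (3 * H_I)) * dV(phi0)     # slow-roll initial velocity

sol = solve_ivp(rhs, (eta0, -0.1), [phi0, dphi0, a0],
                rtol=1e-8, dense_output=True)
for eta in (-10.0, -1.0, -0.1):
    print(eta, sol.sol(eta)[2], -1.0 / (H_I * eta))
\end{verbatim}
For a small slow-roll parameter the numerical scale factor tracks the de Sitter form closely over several e-folds.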
The perturbed metric can be written
\begin{equation}
ds^2=a(\eta)^2\left[-(1+ 2 \Psi) d\eta^2 + (1- 2
\Psi)\delta_{ij} dx^idx^j\right],
\end{equation}
where $\Psi$ stands for the relevant perturbation and is called
the Newtonian potential.
The perturbation of the scalar field leads to a perturbation of the energy-momentum tensor, and
thus Einstein's equations at lowest order lead to
\begin{equation}
\nabla^2 \Psi = 4\pi G \dot \phi_0 \delta\dot\phi \equiv s \Gamma
\label{main3}
\end{equation}
where we have introduced the abbreviation $s=4\pi G \dot \phi_0$ and the
quantity $\Gamma$ as the aspect of the field that acts as a source of
the Newtonian potential; for the slow-roll approximation considered here this is just
$\Gamma=\delta\dot\phi$.
We now write the quantum theory of the field $\delta\phi$.
It is convenient to consider instead the field $y=a \delta \phi$.
We
consider the field in a box of side $L$, and decompose the real
field $y$ into plane waves
\begin{equation}
y(\eta,\vec{x})=\frac{1}{L^{3}} \Sigma_{ \vec k} \left(\ann_k y_k(\eta)
e^{i \vec{k}\cdot\vec{x}}+\cre_{k} \bar y_k(\eta)
e^{-i\vec{k}\cdot\vec{x}}\right),
\end{equation}
where the sum is over the wave vectors $\vec k$ satisfying $k_i L=
2\pi n_i$ for $i=1,2,3$ with $n_i$ integers.
It is convenient to rewrite the field and momentum operators as
\begin{equation}
\y(\eta,\vec{x})=
\frac{1}{L^{3}}\sum_{\vec k}\ e^{i\vec{k}\cdot\vec{x}} \hat y_k
(\eta), \qquad \py(\eta,\vec{x}) =
\frac{1}{L^{3}}\sum_{\vec k}\ e^{i\vec{k}\cdot\vec{x}} \hat \pi_k
(\eta),
\end{equation}
where $\hat y_k (\eta) \equiv y_k(\eta) \ann_k +\bar y_k(\eta)
\cre_{-k}$ and $\hat \pi_k (\eta) \equiv g_k(\eta) \ann_k + \bar g_{k}(\eta)
\cre_{-k}$
with
\begin{equation}
y^{(\pm)}_k(\eta)=\frac{1}{\sqrt{2k}}\left(1\pm\frac{i}{\eta
k}\right)\exp(\pm i k\eta),\qquad
g^{\pm}_k(\eta)=\pm
i\sqrt{\frac{k}{2}}\exp(\pm i k\eta) . \label{Sol-g}
\end{equation}
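As a consistency sketch (not part of the derivation), one can check numerically that these mode functions satisfy a constant Wronskian-type condition, as required for the canonical commutation relations, and that $g_k$ coincides with $\dot y_k - (\dot a/a) y_k$ for $\dot a/a = -1/\eta$:
\begin{verbatim}
# Check of Eq. (Sol-g): y_k gbar_k - ybar_k g_k should be the constant -i,
# and g_k should equal y_k' + y_k / eta  (since a'/a = -1/eta).
import numpy as np

def y(k, eta):
    return (1 + 1j / (eta * k)) * np.exp(1j * k * eta) / np.sqrt(2 * k)

def g(k, eta):
    return 1j * np.sqrt(k / 2) * np.exp(1j * k * eta)

k, h = 0.7, 1e-6
for eta in (-50.0, -5.0, -0.5):
    wron = y(k, eta) * np.conj(g(k, eta)) - np.conj(y(k, eta)) * g(k, eta)
    dy = (y(k, eta + h) - y(k, eta - h)) / (2 * h)   # numerical derivative
    print(eta, wron, abs(dy + y(k, eta) / eta - g(k, eta)))
\end{verbatim}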
As we will be interested in considering a kind of self-induced collapse which
operates in close analogy with a ``measurement'', we proceed to work
with Hermitian operators, which in ordinary quantum mechanics are the ones susceptible of direct measurement.
Thus we decompose both $\hat y_k (\eta)$ and $\hat \pi_k
(\eta)$ into their real and imaginary parts $\hat y_k (\eta)=\hat y_k{}^R
(\eta) +i \hat y_k{}^I (\eta)$ and $\hat \pi_k (\eta) =\hat \pi_k{}^R
(\eta) +i \hat \pi_k{}^I (\eta)$ where
\begin{equation}
\hat{y_k}{}^{R,I} (\eta) =
\frac{1}{\sqrt{2}}\left(
y_k(\eta) \ann_k{}^{R,I}
+\bar y_k(\eta) \cre{}^{R,I}_k\right) ,\qquad
\hat \pi_k{}^{R,I} (\eta) =\frac{1}{\sqrt{2}}\left( g_k(\eta)
\ann_k{}^{R,I}
+ \bar g_{k}(\eta) \cre {}^{R,I}_{k} \right).
\end{equation}
We note that the operators $\hat y_k^{R, I} (\eta)$ and $\hat
\pi_k^{R, I} (\eta)$ are therefore Hermitian.
Note that the operators corresponding to $k$ and $-k$ are identical in the real
case (and identical up to a sign in the imaginary case).
Next we specify our model of collapse and follow the field evolution through the collapse
to the end of inflation.
We will assume that the collapse is
somehow analogous to an imprecise measurement of the
operators $\hat y_k^{R, I}
(\eta)$ and $\hat \pi_k^{R, I} (\eta)$, which, as we pointed out, are
Hermitian operators and thus reasonable observables. These field
operators contain the complete information about
the field (we ignore here for simplicity the relations between the modes $k$ and $-k$).
Let $|\Xi\rangle$ be any state in the Fock space of
$\hat{y}$. Let us introduce the following quantity:
$d_k^{R,I} = \langle \ann_k^{R,I} \rangle_\Xi.$
Thus the expectation values of the modes are expressible
as
\begin{equation}
\langle {\y_k{}^{R,I}} \rangle_\Xi = \sqrt{2} \Re (y_k d_k^{R,I}), \qquad
\langle {\py_k{}^{R,I}} \rangle_\Xi = \sqrt{2} \Re (g_k d_k^{R,I}).
\end{equation}
For the vacuum state $|0\rangle$ we have of course:
$
\langle{\y_k{}^{R,I}}\rangle_0 = 0, \langle\py_k{}^{R,I}\rangle_0 =0,
$
while their corresponding uncertainties are
\begin{equation}\label{momentito}
\fluc{\y_k {}^{R,I}}_0 =(1/2) |{y_k}|^2(\hbar L^3), \qquad
\fluc{\py_k {}^{R,I}}_0 =(1/2)|{g_k}|^2(\hbar L^3).
\end{equation}
{\bf The collapse}\newline
Now we will specify the rules according to which collapse happens.
Again, at this point our criteria will be simplicity and naturalness.
Other possibilities do exist, and may lead to different
predictions.
What we have to describe is the state $|\Theta\rangle$ after the
collapse. We need to specify
$d^{R,I}_{k} = \langle\Theta|\ann_k^{R,I}|\Theta\rangle $.
In the vacuum state, $\y_k$ and
$\py_k$ individually are distributed according to Gaussian
distributions centered at 0 with spread $\fluc{\y_k}_0$ and
$\fluc{\py_k}_0$ respectively. However, since they are mutually
non-commuting, their distributions are certainly not independent. In
our collapse model, we do not want to distinguish one over the other,
so we will ignore the non-commutativity and make the following
assumption about the (distribution of) state(s) $|\Theta\rangle$ after
collapse:
\begin{eqnarray}
\langle {\y_k^{R,I}(\eta^c_k)} \rangle_\Theta&=&x^{R,I}_{k,1}
\sqrt{\fluc{\y^{R,I}_k}_0}=x^{R,I}_{k,1}|y_k(\eta^c_k)|\sqrt{\hbar L^3/2},\\
\langle {\py_k{}^{R,I}(\eta^c_k)}\rangle_\Theta&=&x^{R,I}_{k,2}\sqrt{\fluc{\hat{\pi}^{(y)R,I}_k}
_0}=x^{R,I}_{k,2}|g_k(\eta^c_k)|\sqrt{\hbar L^3/2},
\end{eqnarray}
where $x_{k,1},x_{k,2}$ are selected randomly from within a Gaussian
distribution centered at zero with spread one.
From these equations we solve for $d^{R,I}_k$.
Here we must emphasize that our universe corresponds
to a single realization of the random variables, and thus each of the quantities
$ x^{R,I}{}_{k,1,2}$ has a single specific value.
Later we will see how to make relatively specific predictions despite these features.
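A small sketch of this prescription (in units with $\hbar = L = 1$, an illustrative assumption) draws the Gaussian variables for one real/imaginary sector and recovers $d_k$ from the resulting $2\times 2$ linear system:
\begin{verbatim}
# Sketch of the collapse rule: draw x_{k,1}, x_{k,2} ~ N(0,1), set the
# post-collapse expectation values of y_k and pi_k, then solve
#   <y_k> = sqrt(2) Re(y_k d_k),  <pi_k> = sqrt(2) Re(g_k d_k)
# for d_k = dr + i di.  Units hbar = L = 1 (illustrative).
import numpy as np

def y(k, eta):
    return (1 + 1j / (eta * k)) * np.exp(1j * k * eta) / np.sqrt(2 * k)

def g(k, eta):
    return 1j * np.sqrt(k / 2) * np.exp(1j * k * eta)

rng = np.random.default_rng(0)

def collapse_d(k, eta_c):
    yk, gk = y(k, eta_c), g(k, eta_c)
    x1, x2 = rng.standard_normal(2)
    y_exp = x1 * abs(yk) * np.sqrt(0.5)       # <y_k>_Theta
    pi_exp = x2 * abs(gk) * np.sqrt(0.5)      # <pi_k>_Theta
    A = np.array([[yk.real, -yk.imag],
                  [gk.real, -gk.imag]])
    dr, di = np.linalg.solve(A, np.array([y_exp, pi_exp]) / np.sqrt(2))
    return dr + 1j * di

print("d_k after collapse:", collapse_d(k=0.7, eta_c=-10.0))
\end{verbatim}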
Next we focus on the expectation value of the quantum
operator which appears in our basic formula
Eq.(\ref{main3}). In the slow roll approximation we have
$\Gamma= a^{-1} \pi^{y}$. Our general approach indicates that, upon
quantization, the above equation turns into
\begin{equation}\nabla^2 \Psi = s \langle\hat\Gamma\rangle. \label{main4}
\end{equation}
Before the collapse occurs, the expectation value on the right hand
side is zero. Let us now determine what happens after the collapse: To
this end, take the Fourier transform of Eq.(\ref{main4}) and rewrite it
as
\begin{equation}\label{modito}
\Psi_k(\eta)=\frac{s}{k^2}\langle\hat\Gamma_k\rangle_\Theta.
\label{Psi}
\end{equation}
Let us now focus on the slow-roll approximation and compute the right
hand side; we note that $\delta\dot\phi=a^{-1}\py$ and hence
we find
\begin{equation}
\langle\Gamma_k\rangle_\Theta=\sqrt{\hbar L^3 k}\frac{1}{2a}F(k), \label{F}
\end{equation}
where
\begin{equation}
F(k) = (1/2) [A_k (x^{R}_{k,1} +ix^{I}_{k,1}) + B_k (x^{R}_{k,2}
+ix^{I}_{k,2})],
\end{equation}
with
\begin{equation} A_k = \frac {\sqrt{ 1+z_k^2}} {z_k} \sin(\Delta_k) ; \qquad B_k
=\cos (\Delta_k) + (1/z_k) \sin(\Delta_k)
\end{equation}
and where $\Delta_k= k \eta -z_k$ with $ z_k =\eta_k^c
k$.
Next we turn to the experimental results. We will, for the most part, disregard the changes to
the dynamics that happen after re-heating, due to the transition to
standard (radiation-dominated) evolution. The quantity that is measured is ${\Delta T \over T}
(\theta,\varphi)$ which is a function of the coordinates on the
celestial two-sphere which is expressed as $\sum_{lm} \alpha_{lm}
Y_{l,m}(\theta,\varphi)$. The angular variations of the
temperature are then identified with the corresponding variations in the
``Newtonian Potential" $ \Psi$, by the understanding that they are the
result of gravitational red-shift in the CMB photon frequency $\nu$ so
${{\delta T}\over T}={{\delta \nu}\over {\nu}} = {{\delta (
\sqrt{g_{00}})}\over {\sqrt{g_{00}}}} \approx \Psi$.
The quantity that is presented
as the result of observations is $OB_l=l(l+1)C_l$ where $C_l =
(2l+1)^{-1}\sum_m |\alpha^{obs}_{lm}|^2 $. The observations indicate
that (ignoring the acoustic oscillations, which is anyway an aspect
that is not being considered in this work) the quantity $OB_l$ is
essentially independent of $l$ and this is interpreted as a reflection
of the ``scale invariance" of the primordial spectrum of fluctuations.
Then, as we noted, the measured quantity is the
``Newtonian potential'' on the surface of last scattering, $
\Psi(\eta_D,\vec{x}_D)$, from which one
extracts
\begin{equation}
\alpha_{lm}=\int \Psi(\eta_D,\vec{x}_D) Y_{lm}^* d^2\Omega.
\end{equation}
To evaluate the expected value for the quantity of interest we use (\ref{Psi}) and (\ref{F}) to
write
\begin{equation}
\Psi(\eta,\vec{x})=\sum_{\vec k}\frac{s U(k)} {k^2}\sqrt{\frac{\hbar
k}{L^3}}\frac{1}{2a}
F(\vec{k})e^{i\vec{k}\cdot\vec{x}},
\label{Psi2}
\end{equation}
where we have added the factor $U(k)$ to represent the aspects of
the evolution of the quantity of interest associated with the
physics of the period from re-heating to decoupling, which includes,
among others, the acoustic oscillations of the plasma.
After some algebra we obtain
\begin{eqnarray}
\alpha_{lm}&=&s\sqrt{\frac{\hbar}{L^3}}\frac{1}{2a} \sum_{\vec
k}\frac{U(k)\sqrt{k}}{k^2} F(\vec k) 4 \pi i^l j_l(|\vec k|
R_D) Y_{lm}(\hat k),\label{alm1}
\end{eqnarray}
where $\hat k$ indicates the direction of the vector $\vec k$. It is in this
expression that the justification for the use of statistics becomes
clear. The quantity we are in fact considering is the result of
the combined contributions of an
ensemble of harmonic oscillators, each one contributing a complex
number to the sum, leading to what is in effect a two-dimensional random
walk whose total displacement corresponds to the observational
quantity. To proceed further we must evaluate the most likely value
of such total displacement. This we do with the help of an imaginary
ensemble of universes, and the identification of the most likely value
with the ensemble mean value. Now we
compute the expected magnitude of this quantity. After taking the continuum limit we find,
\begin{equation}
|\alpha_{lm}|^2_{M. L.}
=\frac{s^2 \hbar}{2 \pi a^2} \int \frac {U(k)^2
C(k)}{k^4} j^2_l(|\vec k| R_D) k^3dk, \label{alm4}
\end{equation}
where
\begin{equation}
C(k)\equiv 1+ (2/ z_k^2) \sin (\Delta_k)^2 + (1/z_k)\sin (2\Delta_k).
\label{ExpCk}
\end{equation}
The last expression can be made more useful
by changing the variables of integration to $x =kR_D$ leading to
\begin{equation}
|\alpha_{lm}|^2_{M. L.}=\frac{s^2 \hbar}{2 \pi a^2} \int
\frac{U(x/R_D)^2 C(x/R_D)}{x^4} j^2_l(x) x^3 dx,
\label{alm5}
\end{equation}
which, in the exponential expansion regime where $\mu$ vanishes, in
the limit $z_k\to -\infty$ where $C=1$, and taking for simplicity
$U (k) =U_0$ to be independent of $k$ (neglecting, for instance, the
physics that gives rise to the acoustic peaks), yields:
\begin{equation}
|\alpha_{lm}|^2_{M. L.}=\frac{s^2 U_0^2 \hbar} {2 a^2}
\frac{1}{l(l+1)} .
\end{equation}
Now, since this does not depend on $m$, it
is clear that the expectation of $C_l = (2l+1)^{-1}\sum_m
|\alpha_{lm}|^2 $ is just $|\alpha_{lm}|^2$, and thus the observational
quantity $OB_l=l(l+1)C_l =\frac{s^2 U_0^2 \hbar}{2 a^2} $ is independent
of $l$, in agreement with the scale-invariant spectrum obtained in
ordinary treatments and in the observational studies.
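The $l$-independence just noted can be traced to the identity
$\int_0^\infty j_l^2(x)\, x^{-1} dx = [2l(l+1)]^{-1}$, and is easy to
verify numerically. The following snippet (an illustrative check using
standard SciPy routines, not part of the original analysis) confirms
that the integral in Eq.\ (\ref{alm5}) scales exactly as $1/l(l+1)$
when $U$ and $C$ are constant:
\begin{verbatim}
# Check: int_0^inf j_l(x)^2 / x dx = 1 / (2 l (l+1)), so
# OB_l = l(l+1) C_l is flat for constant U and C.
import numpy as np
from scipy.special import spherical_jn
from scipy.integrate import quad

for l in (2, 10, 50):
    val, _ = quad(lambda x: spherical_jn(l, x)**2 / x,
                  1e-8, 2000.0, limit=2000)
    print(l, val * 2 * l * (l + 1))  # ~ 1 for every l
\end{verbatim}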
Thus, the predicted value for the $OB_l$ is,
\begin{equation}\label{resultA}
OB_l= (\pi/3) G\hbar \frac{(V')^2}{V} U_0^2 =
( 2\pi/3)\epsilon \tilde V U_0^2,
\end{equation}
where we have used the standard definition of the dimensionless
slow roll parameter
$
\epsilon \equiv (1/2) (M_{Pl}^2/ \hbar) (V'/V)^2
$
which is normally expected to be rather small, and the dimensionless potential
$\tilde V \equiv V\hbar^3/M_{Pl}^4$. Thus, if $U$ could be prevented from becoming
too large during re-heating, the quantity of interest
would be proportional to $\epsilon$, a possibility
that was not uncovered in the standard treatments. That is, the present analysis offers a path to
get rid of the ``fine tuning problem'' for the inflationary
potential: even if $ V\hbar^3 \sim M_{Pl}^4$, the temperature
fluctuations in the CMB could remain rather small (at the level of $10^{-5}$, as observed).
Now let us focus on the effect of the finite times of
collapse $\eta^c_k$; that is, we consider the general functional form of
$C(k)$. The first thing we note is that, in order to get a reasonable
spectrum, there seems to be only one simple option: $z_k $ must be
essentially independent of $k$, i.e.\ the time of collapse of the
different modes should depend on the mode's frequency according to
$\eta_k^c=z/k$. This is a rather strong conclusion, which could represent relevant information about whatever the
mechanism of collapse is.
Let us next turn to a simple proposal about the collapse mechanism which, following Penrose's ideas, is assumed to be tied to Quantum Gravity, and examine it in light of the above results.
\section{ A version of `Penrose's mechanism' for collapse in the cosmological setting}
\label{sec_penrose}
\medskip
Penrose has for a long time advocated that the collapse of quantum
mechanical wave functions might be a dynamical process independent of observation, and that the
underlying mechanism might be related to gravitational
interaction. More precisely, according to this suggestion, collapse
into one of two quantum
mechanical alternatives would take place when the gravitational
interaction energy between the alternatives exceeds a certain
threshold. In fact, much of the initial motivation for the present
work came from Penrose's ideas and his questions regarding the quantum
history of the universe.
A very naive realization of Penrose's ideas in the present setting
could be obtained as follows: Each mode would collapse by the
action of the gravitational interaction between its own possible
realizations. In our case, one could estimate the interaction energy
$E_I(k,\eta)$ by considering two representatives of the possible
collapsed states on opposite sides of the Gaussian associated with
the vacuum. Clearly, we must interpret $\Psi$ as the Newtonian
potential, and consequently the right-hand side of Eq.\
(\ref{main3}) (after a rescaling by $a^{-2}$ to convert the Laplacian expressed in the
comoving coordinates $x$ into
a Laplacian associated with coordinates measuring physical length) should be identified with the matter density $\rho$. Therefore, $\rho= a^{-2}\dot\phi_0 \Gamma $, with $\Gamma =\pi^y/a=\delta\dot\phi$. Then we have:
\nopagebreak[3]\begin{equation}\label{GE1}
E_I(\eta)=\int \Psi^{(1)} \rho^{(2)}dV =\int \Psi^{(1)}(x,\eta) \rho^{(2)}(x,\eta)a^3 d^3 x = a
\int \Psi^{(1)}(x,\eta) \dot\phi_0 \Gamma^{(2)}(x,\eta) d^3x.
\end{equation}
Note that in this section we are ignoring the overall sign of this energy which, being a gravitational binding energy, would naturally be negative.
We next express this energy in terms of the Fourier expansion, leading to:
\nopagebreak[3]\begin{equation}
E(\eta)= (a/L^6) \Sigma_{k, k'} \Psi_{k}^{(1)}( \eta)
\dot\phi_0 \Gamma^{(2)}_{k'} (\eta) \int e^{i \vec{x}\cdot(\vec{k}-\vec{k}')} d^3x = (a/L^3)\dot\phi_0 \Sigma_{k}
\Psi^{(1)}_{ k}( \eta) \Gamma^{(2)}_{k}(\eta) ,
\end{equation}
where $(1),(2)$
refer to the two different realizations chosen. Recalling
that $\Psi_{ k} = ( s/k^2) \Gamma_k$, with $s= 4\pi G\dot\phi_0$, we find
\nopagebreak[3]\begin{equation}
E(\eta)= 4\pi G (a/L^3)
\dot\phi_0^2\Sigma_{k} (1/k^2)
\Gamma^{(1)}_{k}(\eta) \Gamma^{(2)}_{k}(\eta).
\end{equation}
Using equation (\ref{momentito}), we estimate $ \Gamma^{(1)}_{k}(\eta) \Gamma_{k}^{(2)}(\eta) $ by
$|\langle\Gamma_k\rangle|^2 = \hbar k L^3 (1/2a)^2$, and thus we find
\nopagebreak[3]\begin{equation}
E_I(\eta) = \Sigma_{k}( \pi \hbar G/ak) (\dot\phi_0)^2,
\end{equation}
which can be interpreted as the sum of the contributions of each mode to the interaction energy of the different alternatives.
According to all the considerations we have made, we view each mode's collapse as occurring independently, so the trigger for the collapse of mode $k$ would be, in accordance with Penrose's ideas, the condition that this energy $
E_I(k,\eta)=( \pi \hbar G/ak) (\dot\phi_0)^2 $ reaches the `one-graviton level', namely, that it equals the Planck mass $M_p$. Now we use the specific expressions for the scale factor $ a=\frac{-1}{\eta H_I}$ and the slow rolling of the background scalar field $\dot \phi_0= (1/3) (\dot a/a^3 ) V'$ to find
\nopagebreak[3]\begin{equation} \label{Emodek}
E_I(k,\eta)=\frac{\pi \hbar G}{ 9H_I^2} ( a/k) ( V')^2.
\end{equation}
Thus the condition determining the time of collapse $\eta^c_k$ of the mode $k$ is
\nopagebreak[3]\begin{equation}
z_k=\eta^c_k k =\frac{\pi }{9} (\hbar V')^2(H_I M_p)^{-3}=\frac{\epsilon} {8\sqrt {6\pi}}(\tilde V)^{1/2}\equiv z^c,
\end{equation}
which is independent of $k$, and thus, leads to a roughly scale invariant spectrum of fluctuations in
accordance with observations. Note that the energy of mode $k$ in Eq. (\ref{Emodek}) is an increasing function of conformal time $ \eta$, during the slow roll regime.
We can look more closely into this issue and ask: when do the relevant modes collapse?
In order to do this we use the value for $z^c$
and recall that the time of collapse
is determined by $\eta^c_k =z^c/k $; thus the scale factor at the time of collapse of the modes
with wave number $k$ was
\nopagebreak[3]\begin{equation}
a^c_k= (H_I\eta^c_k)^{-1} = (12/\epsilon) k l_p (\tilde V )^{-1}
\end{equation}
where $l_p$ stands for the Planck length.
As the value of the scale factor $a$ at the last scattering surface was $a \approx 10^{-4}$ (recall that the scale factor $a$ has been set so its value today is $1$), the modes that are relevant to, say, scales of order $10^{-3}$ of the size of the surface of last scattering (corresponding to a fraction of a degree in today's sky) have $k \approx 10^{-10} {\rm ly}^{-1}$.
Thus, taking $\epsilon \times \tilde V $ to be of order $ 10 ^{-5}$, we have for those modes
$
a^c_k\approx 10^{-45}
$
corresponding to $ N_e =103$ e-folds of total expansion, or something like 80 e-folds before the end of inflation in standard inflationary scenarios. Thus, in this scheme, inflation must last at least 90 e-folds for it to include the complete description of the regime we are considering and to account also for the collapse of the modes that are of the order of magnitude of the surface of last scattering itself. The usual requirements of inflation put the lowest bound at something like 60 e-folds, so the present requirement is not substantially stronger.
This result can be directly compared with the so-called time of ``horizon crossing'' $ \eta^H_k $ for mode $k$, corresponding to the physical wavelength reaching the Hubble distance $ H_I^{-1} $.
This latter time is determined from:
\nopagebreak[3]\begin{equation}
a^H_k\equiv a(\eta^H_k)= k/ (2\pi H_I) = k l_p (3/ 32 \pi^3)^{1/2} (\tilde V)^{-1/2}.
\end{equation}
Thus the ratio of scale factors at collapse and at horizon crossing for a given mode is
$ a^c_k/ a^H_k= (16/\epsilon) (6 \pi^3)^{1/2} (\tilde V)^{-1/2} $,
which would ordinarily be a very large number, indicating that the collapse time would be much later than the time of ``horizon exiting'', or crossing out, of the corresponding mode.
Thus we find that a naive realization of Penrose's ideas seems, at first sight, to be a good candidate to supply the element that we argued
is missing in the standard accounts of the emergence of the seeds of cosmic structure from quantum fluctuations during the inflationary regime in the early universe. However, more research along these lines is necessary to find out, for instance, whether the scheme would imply a second collapse of modes already collapsed, and whether such a secondary collapse could substantially disrupt the
observational spectrum.
\section{An Alternative Collapse Scheme and the Fine Tuning problem}\label{sec_fineT}
\medskip
This section should be considered even more speculative than the others, because the ideas proposed here have not yet undergone much substantial checking. Nevertheless, it seems worthwhile to present them, because they illustrate the power of the new way of looking at some of the relevant issues. We have considered one collapse scheme that seemed very natural. However, there is another scheme that could be considered even more natural in light of the point of view explored in the previous section, namely that the uncertainties in the matter sources of the gravitational field are the triggers of the collapse. We note that it is only the conjugate momentum to the field, $\pi_k(\eta)$, that acts as source of the ``Newtonian potential'' in Eq.\ (\ref{main3}) and contributes to the gravitational interaction energy in Eq.\ (\ref{GE1}); thus it seems natural to assume that only this quantity is subject to the collapse (i.e.\ only this operator is subjected to a ``self-induced measurement''), while the field $y_k(\eta)$ itself is not.
In this case, the analysis is almost identical: The collapse is defined by
\nopagebreak[3]\begin{equation}
\langle {\y_k^{R,I}(\eta^c_k)} \rangle_\Theta=0,\qquad
\langle {\py_k{}^{R,I}(\eta^c_k)}\rangle_\Theta=x^{R,I}_{k}\sqrt{\fluc{\hat{\pi}^{(y)R,I}_k}
_0}=x^{R,I}_{k,2}|g_k(\eta^c_k)|\sqrt{\hbar L^3/2},
\end{equation}
where $x_{k}$ are selected randomly from within a Gaussian
distribution centered at zero with spread one. Again from these equations we solve for $d^{R,I}_k$, and proceed as before. The only difference so far is that the function $C(k)$ containing information about the collapse changes slightly (see \cite{InflationUS}) to:
\begin{equation}
C'(k)=1+ (1- 1/ z_k^2) \sin (\Delta_k)^2 - (1/z_k)\sin (2\Delta_k).
\label{ExpCk2}
\end{equation}
Compare the above expression with that corresponding to the first collapse scheme, Eq.\ (\ref{ExpCk}).
However, the point we want to make is that this scheme seems to be a rather serious candidate to alleviate the fine tuning problem that, as we have mentioned in the discussion around Eq.\ (\ref{resultA}), seems to affect most inflationary scenarios. The point is that, according to the quite general analysis of \cite{Garriaga}, the quantity\footnote{Note that, in contrast with this reference, the analysis here is carried out in terms of the conformal time, hence the slight difference with the expressions in that work.}:
\nopagebreak[3]\begin{equation}
\zeta \equiv \Psi + aH \delta \phi/\dot \phi_0
\end{equation}
remains constant through the cosmological evolution even if the equation of state changes, so it seems natural to expect that in our context the corresponding quantity
\nopagebreak[3]\begin{equation}
\zeta \equiv \Psi + aH \langle \delta \phi\rangle_\Theta /\dot \phi_0= \Psi + H \langle y \rangle_\Theta /\dot \phi_0
\end{equation}
would be conserved from immediately after the collapse (a process through which the classical equations would not hold) through the reheating and up to the late times associated with the observation
(we are essentially relying on Ehrenfest's theorem). However, for the collapse scheme considered in this section, the last term in the equation vanishes just after the collapse, so the value of the Newtonian potential would be that of the estimate we have made before, and, as indicated in the discussion of the last part of section \ref{sec_main}, this points to a substantial amelioration of the fine tuning problem. This seems a very interesting possibility, but clearly it must be investigated much more deeply before any compelling claims in this regard can be made.
\section{Noteworthy features of this approach}
\medskip
The quantities of interest $\alpha_{l,m}$ are now understood as different realizations of the random walk
described by Eq.\ (\ref{alm1}), so one can study their spread and compare the model with the observations in a clear way. An interesting research issue would be to estimate how many different modes $k$ effectively contribute to the sum, i.e.\ how many steps the various random walks are made of, as illustrated below.
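As a concrete illustration of this random-walk picture, the following toy Monte Carlo (purely illustrative, not part of the analysis) shows that for independent zero-mean complex steps the ensemble-mean squared displacement equals the sum of the per-step variances, while individual realizations (the analogues of the individual $\alpha_{lm}$) scatter around that mean:
\begin{verbatim}
# Toy random walk: 10^5 realizations of a 50-step walk in the complex
# plane; each step is a zero-mean complex Gaussian with variance 2.
import numpy as np

rng = np.random.default_rng(1)
steps = rng.normal(size=(100000, 50)) + 1j * rng.normal(size=(100000, 50))
disp2 = np.abs(steps.sum(axis=1))**2
print(disp2.mean())  # ~ 100 = (number of steps) x (per-step variance)
print(disp2.std())   # realizations scatter by an amount ~ the mean itself
\end{verbatim}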
Another important observation follows directly from the basic point of view adopted in this analysis: The source of the fluctuations that lead to anisotropies and inhomogeneities lies in the quantum uncertainties of the scalar field, which collapse due to some unknown quantum gravitational effect. Once collapsed, these density inhomogeneities and anisotropies feed into the gravitational degrees of freedom, leading to nontrivial perturbations in the metric functions, in particular the so-called Newtonian potential. However, the metric itself is not a source of the quantum-gravitationally induced collapse (in keeping with the equivalence principle, the local metric perturbations have no energy). Therefore, as the scalar field does not act as a source for the gravitational tensor modes -- at least not at the lowest order considered here -- the tensor modes cannot be excited. The scheme thus naturally leads to the expectation of a zero -- or at least strongly suppressed -- contribution of the tensor modes to the CMB fluctuation spectrum.
As pointed out at the end of section \ref{sec_main} and in section \ref{sec_fineT}, this approach also opens new avenues to address the fine tuning problem that affects most inflationary models, because one can follow in more detail the objects that give rise to the anisotropies and inhomogeneities, and because one can consider independently the issues related to the formation of the perturbations and to their evolution through the reheating era.
And finally, as explicitly exhibited in the previous section, this approach allows us to consider concrete proposals for the physical mechanisms that give rise to the inhomogeneities and anisotropies
in the early universe, and to confront them with observations.
\section{Conclusions}
\medskip
We have reviewed a serious shortcoming of the inflationary account of the origin of cosmic structure, and have given a brief account of the proposals to deal with it which were
first reported in \cite{InflationUS}.
These lines of inquiry have led to the recognition that something else seems to be needed
for the whole picture to work, and that it could be pointing towards an actual manifestation of quantum gravity.
We have shown not only
that these issues are susceptible of scientific investigation based on observations, but also that a simple
account of what is needed seems to be
provided by the extrapolation of Penrose's ideas to the cosmological setting.
The scheme exhibits several differences in its predictions as compared with the standard analyses of this problem, where the metric and scalar field perturbations are quantized, in particular the suppression of the tensor modes\footnote{See, however, \cite{Roy} for another scheme which also leads to strong suppression of tensor modes.}. These predictions can, in principle, be tested, indicating that an issue that could {\it ab initio} be considered to be essentially a philosophical problem leads instead to truly physical matters.
In fact, it might well be that, in our frantic search for physical manifestations of new physics tied to quantum aspects of gravitation, the most dramatic such occurrence has been in front of our eyes all this time and has simply been overlooked: the cosmic structure of the Universe itself.
\section*{Acknowledgments}
\noindent It is a pleasure to acknowledge very helpful conversations with J. Garriaga, E. Verdaguer and A. Perez. This work was supported
in part by DGAPA-UNAM
IN108103 and CONACyT 43914-F grants.
\section*{References}
\section{Introduction}
\label{sec:intro}
The standard cosmological paradigm of Cold Dark Matter with the
addition of a cosmological constant ($\rm{\Lambda}$CDM) has been successful at
interpreting astrophysical phenomena on a wide range of scales, from
the large scale structure of the Universe to the formation of
individual galaxies \citep{2015ARA&A..53...51S}. However, it remains
somewhat unclear whether the internal structures of simulated galaxies
formed in a $\rm{\Lambda}$CDM\ framework are consistent with observations of real
galaxies.
In spiral galaxies, the structure of dark matter halos can be
constrained using galaxy rotation curves
\citep[e.g.][]{1978PhDT.......195B}. Typically, the observed rotation
curve is decomposed into contributions from stars and gas and any
remaining velocity is attributed to dark matter. In cosmological
simulations of dark matter structure growth, dark matter halos have
been observed to follow a broken power law form
\citep[e.g.][]{1965TrAlm...5...87E, 1996ApJ...462..563N,
2004MNRAS.349.1039N, 2008MNRAS.387..536G}. To account for the
additional gravitational pull provided by baryons, modifications can
be applied to theoretical halo density profiles to increase their
densities at small radii \citep[e.g.][]{2004ApJ...616...16G,
2005ApJ...634...70S}. Applying these modified halo models to
observed rotation curves produces dark matter halos which are
underdense relative to the predictions of $\rm{\Lambda}$CDM\ simulations
\citep{2015A&A...574A.113P}.
Numerical simulations which incorporate stellar feedback in galaxies
have partially eased this tension by showing that feedback from
baryonic processes can redistribute dark matter within a galaxy
\citep{2010Natur.463..203G, 2012MNRAS.421.3464P,
2013MNRAS.429.3068T}. These effects are stronger in galaxies with
lower masses \citep[e.g.][]{2011AJ....142...24O, 2011MNRAS.415.1051B,
2014Natur.506..171P}. Recent simulations have shown that the ability
of a galaxy to redistribute dark matter through stellar feedback
depends on the ratio of its stellar mass to its halo mass
\citep[e.g.][]{2014MNRAS.437..415D, 2015MNRAS.454.1719B}. These
$M_*/M_{\rm halo}$-dependent density profiles have been shown by
\citet{2017MNRAS.466.1648K} to be more consistent with the photometry
and rotation curves of real galaxies than traditional NFW profiles.
The relationship between dark matter halos and observed rotation
curves is not a trivial one, as measurements of rotation curves can be
biased by non-circular motions, projection effects, and halo
triaxiality \citep[e.g.][]{2004ApJ...617.1059R, 2006MNRAS.373.1117H,
2007ApJ...657..773V}. Measurements of one-dimensional rotation
curves are therefore insufficient to constrain the three-dimensional
mass distributions. All of these mechanisms for potential bias in
rotation curves leave kinematic signatures in the full
three-dimensional velocity distributions of galaxy disks. For example,
gas streaming along bars and spiral arms has both circular and radial
components to its velocity, and therefore will affect the line of
sight velocities along the major and minor axes differently
\citep{2010MNRAS.404.1733S}.
Measurements of the velocity field of the entire disk at high spatial
resolution are required to extract these kinematic signatures. For
example, to separate bar-like flows in spiral galaxies from their
rotation curves, $<200~$pc spatial resolution is required
\citep[e.g.][]{2007ApJ...659.1176M, 2010MNRAS.404.1733S,
2014A&A...568A..70B,2015MNRAS.451.4397H}.
In recent years, the state of the art in numerical simulations has
moved to smaller and smaller spatial scales. However, comparisons of
these simulations to observed galaxies have been lacking, partially
due to a lack of velocity fields of sufficiently high resolution for
comparison.
We have designed the RSS Imaging spectroscopy Nearby Galaxy Survey
(RINGS) to obtain the high-resolution kinematic data necessary to
probe these open questions of galaxy structure. Our survey targets 19
nearby, late-type spiral galaxies over a wide range of masses (67 km
s$^{-1} < V_{\rm flat} < $ 275 km s$^{-1}$) and luminosities (-17.5 $> M_V
>$ -21.5). The survey is designed to exploit the large collecting area
and large field-of-view of the Robert Stobie Spectrograph (RSS) on the
Southern African Large Telescope (SALT). In addition to the high
spatial resolution H$\alpha$\ kinematic data from SALT's RSS, we are
obtaining lower spatial resolution \ion{H}{1} 21~cm kinematic
observations and have obtained $BVRI$ photometric imaging of these
galaxies.
A number of previous surveys have obtained two-dimensional
H$\alpha$\ velocity fields of galaxies with similar goals to RINGS, e.g.
BH$\alpha$BAR \citep{2005MNRAS.360.1201H}, GHASP
\citep{2008MNRAS.388..500E}, GH$\alpha$FaS
\citep{2008PASP..120..665H}, DiskMass \citep{2010ApJ...716..198B}, and
CALIFA \citep{2012A&A...538A...8S}. Compared to these surveys, our
data are deeper and more extended thanks to SALT's large primary
mirror and large angular field-of-view. The typical angular
resolution of the RINGS data is similar to that of the DiskMass and
CALIFA surveys and somewhat worse than that of GH$\alpha$FaS.
However, the RINGS galaxies are typically more nearby than the
galaxies in those surveys, and our physical resolutions are comparable
to those of GH$\alpha$FaS and higher than those of DiskMass and
CALIFA. The typical spectral resolution of our data ($R\sim1300$) is
similar to that of CALIFA ($R\sim1000$) and lower than that of
DiskMass ($R\sim8000$) and GH$\alpha$FaS ($R\sim15000$). Our target
selection criteria also differ from these surveys in choosing a
representative sample of partially inclined galaxies across a wide
range of Hubble classifications, masses, and luminosities.
In Paper I \citep{RINGS1}, we presented our first H$\alpha$\ and \ion{H}{1}
kinematic data and modelling for the galaxy NGC 2280. In Paper II
\citep{RINGSPhot}, we presented our photometric sample and modelling.
In this paper, we present kinematic maps and axisymmetric models of 14
of the 19 RINGS galaxies. The maps are derived from data taken using
the medium-resolution etalon of SALT's Fabry-P\'erot\ system. The typical
angular resolution of our resulting H$\alpha$\ velocity fields is
$\sim2.5\arcsec$, corresponding to a typical spatial resolution of
$\sim250~$pc at the source locations. We then model the kinematic
data using the \texttt{DiskFit}\ software package \citep{2007ApJ...664..204S,
2010MNRAS.404.1733S} and show that the derived rotation curves
generally agree well with others in the literature. We also compare
the fitted projection parameters with those obtained from our {\it I}-band
images. Finally, we present azimuthally-averaged H$\alpha$\ and [\ion{N}{2}]
profiles for these galaxies, which we use to derive oxygen abundance
gradients. In future papers in this series, we will use our velocity
maps in order to better understand these galaxies' mass distributions.
\input{table1.tex}
\section{Data Acquisition and Reduction}
\label{sec:datareduction}
We obtained data on 14 nearby late-type galaxies with the
medium-resolution mode of the Fabry-P\'erot\ interferometer on the RSS of SALT.
Our data were acquired over a total exposure time of 19 hours during
the period 11 Nov 2011 to 8 Sept 2015. A typical single observation
consists of $\sim25$ exposures, each of length $\sim70~$seconds. The
medium-resolution etalon has a spectral full width at half maximum
(FWHM) at H$\alpha$\ of $\sim5~$\AA. For each exposure taken in an
observation, we offset the wavelength of the etalon's peak
transmission by $\sim2~$\AA\ from the previous exposure. Each
observation therefore represents a scan over a $\sim50~$\AA\ range in
$\sim2~$\AA\ steps. For each galaxy, we attempted to obtain at least
two such observations. A summary of the properties of these 14
galaxies and our observations is provided in Table \ref{tab:tab1}.
Note that NGC 2280, which we have discussed previously in
\citet{RINGS1}, is among the galaxies presented in this work. Because
several aspects of our data reduction process have changed somewhat
(e.g.\ flat-field correction and ghost subtraction, discussed below)
since that work was published, we have chosen to present an updated
velocity field of that galaxy here to ensure homogeneity across the
final sample.
\subsection{Preliminary Data Reduction}
We have utilized the PySALT\footnote{http://pysalt.salt.ac.za/}
\citep{pysalt} software package to perform preliminary reductions of
our raw SALT images. The tasks in PySALT apply standard routines for
gain variation corrections, bias subtraction, CCD crosstalk
corrections, and cosmic ray removal.
\subsection{Flattening}
\label{sec:flattening}
The unusual design of SALT introduces unique challenges in calibrating
the intensity of our images. SALT's primary
mirror\footnote{https://www.salt.ac.za/telescope/\#telescope-primary-mirror}
is composed of a hexagonal grid of 91 1-meter mirrors. Unlike most
telescopes, the primary mirror remains stationary over the course of
an observation and object tracking is accomplished by moving the
secondary optics package in the primary mirror's focal plane. The
full collecting area of the primary mirror is rarely utilized, as some
mirror segments are unable to illuminate the secondary depending on a
target's position. Overall, the available collecting area of the
primary mirror is smaller by $\sim30$\% at the beginning and end of an
observation relative to the middle.
The individual mirror segments are removed for realuminization and
replaced on $\sim$weekly timescales in a sequential scheme. This
results in the reflectivity of the primary mirror varying as a
function of position on the mirror, and these variations change over
time as different mirror segments are freshly realuminized.
As a target galaxy passes through SALT's field of view, individual
mirror segments pass in and out of the secondary payload's field of
view, changing the fraction of the total collecting area utilized as a
function of time.
Furthermore, differential vignetting of images occurs within the
spherical aberration corrector (SAC) on the secondary payload. This
effect also varies as a function of object position overhead (as the
secondary package moves through the focal plane to track an
object). This vignetting effect changes image intensities by
$\sim5-10$\% across an image.
The combined effects of these factors result in image intensity
variations which are: position-dependent within a single image,
pointing-dependent over the course of an observation as the target
drifts overhead, and time-dependent over the $\sim$weekly
segment-replacement timescale.
A traditional approach to flat-field calibration (i.e.\ combining
several exposures of the twilight sky) is insufficient for correcting
these effects, as this approach will not account for the
pointing-dependent effects. Theoretical modelling of the sensitivity
variations by ray-tracing software is not feasible due to the frequent
replacement of mirror segments with different reflective properties.
In a previous paper \citep{RINGS1}, we utilized an approach for NGC
2280 which compared stellar photometry in our SALT Fabry-P\'erot\ images to
\textit{R}-band images from the CTIO 0.9m telescope
\citep{RINGSPhot}. For $\sim50$ stars present in both sets of images,
we computed an intensity ratio between our SALT images and the
\textit{R}-band image. For each SALT image, we then fitted a quadratic
two-dimensional polynomial to these intensity ratios. By scaling each
of our images by its corresponding polynomial, we were able to correct
for these variations.
Unlike NGC 2280, most of our target galaxies do not overlap with dense
star fields and we therefore cannot apply this approach. Instead, we
have developed a new approach which utilizes the night sky background
to calibrate our photometry. We make the assumption that the intrinsic
night sky background has uniform intensity over the $8\arcmin$ field
of view over the course of each individual exposure ($\sim70~$s). We
then mask objects in our fields using a sigma-clipped cutoff for stars
and a large elliptical mask for the galaxy. We fit the remaining
pixels with a quadratic two-dimensional polynomial of the same form
used in the stellar photometry approach described above. We then scale
the pixel values in each image by this fitted polynomial. If the
assumption of uniform sky brightness is valid, this method results in
a uniformly illuminated field.
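The following sketch illustrates this procedure (the function and its masking details are a simplified stand-in for the actual pipeline code):
\begin{verbatim}
import numpy as np
from astropy.stats import sigma_clipped_stats

def flatten_sky(image, galaxy_mask):
    # Fit a 2-D quadratic to unmasked sky pixels and divide it out.
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    _, med, std = sigma_clipped_stats(image, mask=galaxy_mask)
    good = ~galaxy_mask & (np.abs(image - med) < 3.0 * std)  # drop stars
    # design matrix for c0 + c1 x + c2 y + c3 x^2 + c4 x y + c5 y^2
    A = np.column_stack([np.ones(good.sum()), x[good], y[good],
                         x[good]**2, x[good]*y[good], y[good]**2])
    c, *_ = np.linalg.lstsq(A, image[good], rcond=None)
    sky = c[0] + c[1]*x + c[2]*y + c[3]*x**2 + c[4]*x*y + c[5]*y**2
    return image / (sky / sky[good].mean())  # preserve the mean level
\end{verbatim}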
In order to validate the assumption of uniform sky intensity, we have
applied this ``sky-fitting'' approach to our data on NGC 2280 and
compared it to our previous ``star-fitting'' approach for the same
data. We found no significant differences in the resulting fitted
polynomials for either of the two nights for which we had data on that
galaxy. This suggests that the sky-fitting approach is sufficient for
flattening our images. The assumption of a uniform sky background is
less likely to be valid if a target galaxy fills a large fraction of
the field of view, as is the case with our observations of NGC
7793. We have examined several spectra obtained from overlapping
observations of this galaxy, and it appears any errors introduced by a
non-uniform sky background are small compared to other sources of
uncertainty.
We utilize this ``sky-fitting'' approach to flat-field correction for
all 14 of the galaxies presented in this work.
\begin{figure*}
\begin{center}
\includegraphics[width=\hsize]{ghosts.eps}
\end{center}
\caption{Left: A median-combined image of our 15 July 2014
observations of NGC 6384 with detected star-ghost pairs marked
with blue lines. The large red star marks the location of the
point about which the intensity ratios of a ghost to its star are
symmetric. The large rectangular feature in the lower-right
portion of the left panel is the shadow of SALT's tracking probe
and the affected pixels have been masked from any calculations.
Right: The black points with error bars mark the intensity ratios
between ghosts and stars as a function of radius from the point
marked in the left panel. These star/ghost pairs were selected
from all of our SALT Fabry-P\'erot\ observations. The solid red line shows
our linear fit to these intensity ratios.
\label{fig:ghosts}}
\end{figure*}
\subsection{Ghost identification and subtraction}
\label{sec:ghosts}
Reflections between the Fabry-P\'erot\ etalon and the CCD detector result in
each light source in an image appearing twice -- once at its true
position and again at a reflected position, known as the ``diametric
ghost'' \citep{ghosts}. The positions of these reflections are
symmetric about a single point in the image, the location at which the
instrument's optical axis intersects the plane of the CCD. The left
panel of Figure \ref{fig:ghosts} illustrates this effect in one of our
observations of NGC 6384.
As will be discussed in \S \ref{sec:wavecal}, the wavelength
calibration solutions for our images are symmetric about the same
central point. The ghost positions are therefore extremely useful for
precisely determining the location of this point. By matching each
star in an image to its ghost and averaging their positions, we are
able to determine our reflection centers to within a small fraction of
a pixel.
While useful for determining the location of the symmetry axis, the
presence of these ghosts adversely affects our goal of measuring
velocities. In particular, the reflected image of a target galaxy
often overlaps with the galaxy itself. This effect is extremely
undesirable, since it mixes emission from gas at one location and
velocity with emission from gas at a different location and velocity.
In order to remove them, we perform aperture photometry on each
star-ghost pair to determine intensity ratios between the ghosts and
their real counterparts. These ratios are typically $\sim5$\%. In a
previous paper \citep{RINGS1}, we simply rotated each image by
180\arcdeg\ about its symmetry axis and subtracted a small multiple of
the rotated image from the original. After examining a much larger
quantity of data, it appears that the intensity ratio between an
object and its ghost depends linearly on the object's distance from a
central point. This decreasing ghost intensity ratio is caused by
vignetting within the camera optics of the non-telecentric reflection
from the CCD. This central point's location is not coincident with
the center of reflection (private communication: D. O'Donoghue), but
appears to be consistent among all of our observations. The right
panel of Figure \ref{fig:ghosts} shows the dependence of the ghost
intensity ratio on radius from this point. We have fitted a linear
function to the flux ratios of star-ghost pairs in several of our
observations, which decreases from $\sim6$\% at the central point to
$\sim2$\% at the edge of the images. We then apply the same
reflect-and-subtract approach as in \citet{RINGS1}, except that here
we rescale the reflected images by this linear function rather than a
constant factor. This process removes most of the ghost image
intensity from our science images without necessitating masking of
these regions.
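In outline, the subtraction works as follows (a simplified sketch; the center coordinates and the coefficients of the linear ratio stand in for the fitted values):
\begin{verbatim}
import numpy as np
from scipy.ndimage import shift as nd_shift

def subtract_ghosts(image, xc, yc, r0=0.06, slope=-4e-5):
    # Reflect the image through (xc, yc) and subtract a copy scaled
    # by the fitted linear intensity ratio r(R) = r0 + slope * R.
    ny, nx = image.shape
    flipped = image[::-1, ::-1]
    # shift so the (generally non-integer) center maps onto itself
    ghost = nd_shift(flipped, (2*yc - (ny - 1), 2*xc - (nx - 1)),
                     order=1, mode='constant', cval=0.0)
    y, x = np.mgrid[0:ny, 0:nx]
    ratio = np.clip(r0 + slope * np.hypot(x - xc, y - yc), 0.0, None)
    return image - ratio * ghost
\end{verbatim}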
\vfill\eject
\subsection{Alignment and Normalization}
\label{sec:align_norm}
Among the images of a single observation, we use the centroid
locations of several stars to align our images to one another.
Typically, the image coordinate system drifts by
$\sim0.25$\arcsec\ over the course of an observation.
As mentioned previously, different fractions of SALT's primary mirror
are utilized over the course of a single observation. Thus, the
photometric sensitivity of each image varies over an observational
sequence. To correct for this effect, we perform aperture photometry
on the same stars which were used for aligning the images in order to
determine a normalization factor for each image. We then scale each
image by a multiplicative normalization factor so that each of these
stars has the same intensity in all of our images. Typically, between
10 and 50 stars are used in this process, though in some extreme cases
(e.g.\ NGC 578), the number of stars in the images can be as low as 5.
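Schematically, the normalization reduces to the following (illustrative names; the star fluxes come from the aperture photometry described above):
\begin{verbatim}
import numpy as np

def normalize_frames(frames, star_fluxes):
    # star_fluxes[i, j]: aperture flux of reference star j in frame i.
    # Scale each frame so its summed reference flux matches the mean.
    total = star_fluxes.sum(axis=1)
    scales = total.mean() / total
    return [frame * s for frame, s in zip(frames, scales)]
\end{verbatim}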
The combined effects of flattening uncertainty
(\S\ref{sec:flattening}), ghost subtraction (\S\ref{sec:ghosts}), and
normalization uncertainty (\S\ref{sec:align_norm}) result in a typical
photometric uncertainty of $\sim 10-12\%$.
When combining multiple observations which were taken at different
telescope pointings, we have utilized the \texttt{astrometry.net}
software package \citep{astrometry} to register our images' pixel
positions to accurate sky coordinates. We then use the resulting
astrometric solutions to align our observations to one another.
Just as we used stellar photometry to normalize images from among a
single observation sequence, we use the same photometry to normalize
different observation sequences to one another. Stars which are
visible in only one pointing are not useful for this task, so we use
the photometry of stars which are visible in more than one observation
sequence.
\subsection{Wavelength Calibration}
\label{sec:wavecal}
Collimated light incident on the Fabry-P\'erot\ etalon arrives at different
angles depending on position in our images. Different angles of
incidence result in different wavelengths of constructive
interference. Thus, the peak wavelength of an image varies across the
image itself. The wavelength of peak transmission is given by
\begin{equation}
\lambda_{\rm peak}(R) = \frac{\lambda_{\rm cen}}{(1+R^2/F^2)^{1/2}}
\end{equation}
where $\lambda_{\rm cen}$ is the peak wavelength at the center of the
image, $R$ is the radius of a pixel from the image center, and $F$ is
the effective focal length of the camera optics, measured in units of
pixels. The image center is the location where the optical axis
intersects the image plane, and is notably the same as the center of
the star-ghost reflections discussed in \S\ref{sec:ghosts}.
The peak wavelength at the center is determined by a parameter, $z$,
which controls the spacing of the etalon's parallel plates. It may
also be a function of time, as a slight temporal drift in the etalon
spacing has been observed. In general, we find that the function
\begin{equation}
\lambda_{\rm cen}(z,t) = A + Bz + Et
\label{eqn:wavesoln}
\end{equation}
is sufficient to describe the central wavelength's dependence on the
control parameter and time. This equation equivalent to the one found
by \citet{rangwala} with the addition of a term which is linear in
time to account for a slight temporal drift. We find that their
higher-order terms proportional to $z^2$ and $z^3$ are not necessary
over our relatively narrow wavelength range.
Across a single image, the wavelength of peak transmission depends
only on the radius, $R$. Therefore, a monochromatic source which
uniformly illuminates the field will be imaged as a symmetric ring
around the image center, with radius $R_{\rm ring} =
F(\lambda_{\rm cen}^2/\lambda_{\rm ring}^2-1)^{1/2}$.
Before and after each observation sequence, exposures of neon lamps
were taken for the purposes of wavelength calibrations, which create
bright rings in the images. Additionally, several atmospheric
emission lines of hydrogen, [\ion{N}{2}], and OH are imaged as dim rings in our
observations of the RINGS galaxies. By measuring the radii of these
rings, we can determine best-fitting values for the constants $A$,
$B$, $E$, and $F$ in the above equations using a least-squares
minimization fit. We then use these fitted parameters to calibrate
the wavelengths in our images. The sixth column of Table
\ref{tab:tab1} shows the uncertainty in each observation's wavelength
solution, calculated as the root mean square residual to our
wavelength solution divided by the square root of the number of
degrees of freedom in the fit.
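A least-squares fit of this form can be sketched as follows (an illustration with placeholder starting values, not the pipeline code), given the measured ring radii at known wavelengths, etalon settings $z_i$, and times $t_i$:
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def fit_wavelength_solution(R, lam, z, t, p0=(6600.0, 0.1, 0.0, 4000.0)):
    # Fit (A, B, E, F) with lam_cen = A + B z + E t and
    # R_ring = F * sqrt(lam_cen^2 / lam_ring^2 - 1).
    def resid(p):
        A, B, E, F = p
        lam_cen = A + B * z + E * t
        return F * np.sqrt((lam_cen / lam)**2 - 1.0) - R
    return least_squares(resid, p0).x
\end{verbatim}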
\subsection{Sky Subtraction}
\label{sec:skysub}
The sky background radiation in our images is composed of two
components: a continuum, which we treat as constant with wavelength,
and emission lines from molecules in the atmosphere.
Once a wavelength solution has been found for our images, we search in
our images for ring signatures of known atmospheric emission lines
\citep{osterbrock}. We fit for such emission lines and subtract the
fitted profiles from our images. Occasionally, additional emission
lines are seen (as prominent rings) even after such subtraction.
These emission lines fall into two broad categories: adjacent spectral
orders and diffuse interstellar bands.
The medium-resolution Fabry-P\'erot\ system has a free spectral range (FSR) at
H$\alpha$\ of $\sim75~$\AA. Thus, an atmospheric emission line $\pm75~$\AA\
from an image's true wavelength may appear in the image due to the
non-zero transmission of the order-blocking filter at
$\pm75~$\AA. Several such emission lines have been detected in our
data and subsequently fitted and subtracted from our images.
In several of our observations, we have detected emission consistent
with the diffuse interstellar band (DIB) wavelength at 6613~\AA\
\citep{dibs}. DIBs are commonly seen as absorption lines in stellar
spectra, and are not often observed in emission \citep{herbig95}. This
emission has also been fitted and subtracted from our data in the same
fashion as the known night-sky emission lines. The DIB emission was
detected in our observations of NGC 908, NGC 1325, and NGC 2280.
Once ring features from emission lines have been fitted and
subtracted, we have run a sigma-clipped statistics algorithm to
determine the typical value of the night sky continuum emission. This
continuum value is then subtracted from each of our images before we
produce our final data cube.
\subsection{Convolution to Uniform Seeing}
Because atmospheric turbulence and mirror alignment do not remain
constant over the course of an observation, each of our images has a
slightly different value for the effective seeing FWHM. In producing a
data cube, we artificially smear all of our images to the seeing of
the worst image of the observation track. In principle, we could
choose to keep only images with better effective seeing and discard
images with worse seeing. When our observations were obtained, SALT
did not have closed-loop control of the alignment of the primary
mirror segments. Thus the image quality tended to degrade over an
observational sequence. Discarding poorer images would therefore tend
to preferentially eliminate the longer wavelength images, since we
usually stepped upward in wavelength over the sequence. Discarding
images would also reduce the overall depth of our observations. For
these reasons, we choose to not discard any images when producing the
final data cubes presented in this work.
The correction to uniform seeing is done by convolution with a
Gaussian beam kernel with $\sigma_{\rm beam}^2 = \sigma_{\rm worst}^2 -
\sigma_{\rm image}^2$. We also shift the position of the convolution
kernel's center by the values of the shifts calculated from stellar
centroids described in \S\ref{sec:align_norm}. In this way, we
shift and convolve our images simultaneously. The ``Seeing'' column of
Table \ref{tab:tab1} lists the worst seeing FWHM from each of our
observations. Typical worst seeing values are between 2\arcsec\ and
3\arcsec. In the cases where we combine multiple observations of the
same object, we convolve all observations to the seeing of the worst
image from among all observations of that object, then combine the
results into a single data cube.
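The homogenization step amounts to the following (a sketch; the pixel scale value and the use of astropy's convolution utilities are assumptions for illustration):
\begin{verbatim}
import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve

FWHM2SIG = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def match_seeing(images, fwhms_arcsec, pixscale_arcsec=0.5):
    # Convolve every frame to the seeing of the worst frame, using
    # sigma_beam^2 = sigma_worst^2 - sigma_image^2.
    sig = np.asarray(fwhms_arcsec) * FWHM2SIG
    worst = sig.max()
    out = []
    for img, s in zip(images, sig):
        beam = np.sqrt(worst**2 - s**2) / pixscale_arcsec  # in pixels
        out.append(img if beam == 0.0
                   else convolve(img, Gaussian2DKernel(beam)))
    return out
\end{verbatim}
In practice the kernel is also shifted to align the frames, as described above; here alignment and convolution are kept separate for clarity.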
\begin{figure*}
\begin{center}
\includegraphics[width=\hsize]{goodbadugly.eps}
\end{center}
\caption{Selected spectra (solid points with error bars) and
best-fitting line profiles (solid red lines) from our data cubes.
The left panels show pixels with very high signal-to-noise. The
middle panels show pixels with much lower signal-to-noise. The
right panels show pixels very low signal-to-noise which are just
above our detection thresholds. All spectra have been normalized
so that the maximum value of each spectrum is 1. Each row's
spectra are different pixels selected from a single galaxy's data
cube. The different colors and shapes of points correspond to
observations from different nights.
\label{fig:lineprofs}}
\end{figure*}
\subsection{Line Profile Fitting}
\label{sec:linefits}
In addition to observing the H$\alpha$\ line, our wavelength range is wide
enough to detect the [\ion{N}{2}] 6583 line as well. We fit for both of these
lines in our spectra simultaneously. The transmission profile of the
Fabry-P\'erot\ etalon is well-described by a Voigt function,
\begin{equation}
V(\lambda;\sigma_g,\gamma_l) = \int_{-\infty}^{\infty}
G(\lambda',\sigma_g)\Gamma(\lambda-\lambda',\gamma_l)d\lambda',
\end{equation}
where $G(\lambda,\sigma_g)$ and $\Gamma(\lambda,\gamma_l)$ are
Gaussian and Lorentzian functions, respectively. Calculating this
convolution of functions is computationally expensive, and we
therefore make use of the pseudo-Voigt function described by
\citet{voigt}. At each spatial pixel in our data cubes, we fit a
6-parameter model of the form
\begin{eqnarray}
I(\lambda; C, F_H, F_N, \lambda_H, \sigma_g, \gamma_l) &=&
C + F_HV(\lambda-\lambda_H;\sigma_g,\gamma_l) \nonumber \\
&& {}+ F_NV(\lambda-1.003137\lambda_H;\sigma_g,\gamma_l),
\label{eqn:model}
\end{eqnarray}
where $I(\lambda;\ldots)$ is the image intensity as a function of
wavelength and the 6 model parameters are: $C$, the continuum surface
brightness, $F_H$, the integrated surface brightness of the H$\alpha$\ line,
$F_N$, the integrated surface brightness of the [\ion{N}{2}] 6583 line,
$\lambda_H$, the peak wavelength of Doppler-shifted H$\alpha$, and
$\sigma_g$ and $\gamma_l$ the two line widths of the Voigt profile.
We assume that the H$\alpha$\ and [\ion{N}{2}] 6583 emission arise from gas at the
same velocity, and the factor of 1.003137 in the above equation
reflects this assumption.
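For concreteness, here is a sketch of the per-pixel fit; the pseudo-Voigt weights below follow the widely used Thompson--Cox--Hastings approximation, which may differ in detail from the exact form of \citet{voigt} adopted in our pipeline:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(dlam, sigma_g, gamma_l):
    # Area-normalized pseudo-Voigt (Thompson-Cox-Hastings weights).
    fG, fL = 2.3548 * sigma_g, 2.0 * gamma_l  # component FWHMs
    f = (fG**5 + 2.69269*fG**4*fL + 2.42843*fG**3*fL**2 +
         4.47163*fG**2*fL**3 + 0.07842*fG*fL**4 + fL**5) ** 0.2
    r = fL / f
    eta = 1.36603*r - 0.47719*r**2 + 0.11116*r**3
    G = 2.0*np.sqrt(np.log(2.0)/np.pi)/f * np.exp(-4.0*np.log(2.0)*(dlam/f)**2)
    L = (f / (2.0*np.pi)) / (dlam**2 + (f/2.0)**2)
    return (1.0 - eta)*G + eta*L

def model(lam, C, F_H, F_N, lam_H, sigma_g, gamma_l):
    # 6-parameter model: continuum + H-alpha + [N II] 6583
    return (C + F_H*pseudo_voigt(lam - lam_H, sigma_g, gamma_l)
              + F_N*pseudo_voigt(lam - 1.003137*lam_H, sigma_g, gamma_l))

# popt, pcov = curve_fit(model, lam, flux, sigma=flux_err, p0=guess)
\end{verbatim}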
An anonymous referee questioned whether $C$ would really be constant
over the fitted range because the stellar continuum would have an
H$\alpha$\ absorption feature at almost the same wavelength as the
H$\alpha$\ emission we are attempting to measure. While there may be some
effect of stellar H$\alpha$\ absorption on the emission line strength, it is
unlikely to exactly cancel the gaseous emission, and would leave a
distorted spectral profile (e.g.\ with emission core and absorption
wings), which we do not see. \citet{2002MNRAS.332..283R} find that
stellar absorption in disk galaxies has the greatest effect at
H$\delta$ and H$\epsilon$, and essentially no contribution at H$\alpha$.
This suggests that absorption has a minimal effect on our estimate of
H$\alpha$\ line strength. Since there is no significant absorption of the
[\ion{N}{2}] lines, we do not expect stellar absorption lines to reduce our
ability to detect emission from excited gas to any significant extent.
Estimates of the H$\alpha$/[\ion{N}{2}] line intensity ratio would be affected by any
H$\alpha$\ absorption and, if important, would compromise {\em all}
spectroscopic estimates of this line intensity ratio, not exclusively
those from Fabry-P\'erot\ data.
\input{table2.tex}
We fit for these 6 parameters simultaneously using a
$\chi^2$-minimization routine, where the uncertainties in the pixel
intensities arise primarily from photon shot noise. The shot noise
uncertainties are propagated through the various image reduction steps
(flattening, normalization, sky subtraction, convolution) to arrive at
a final uncertainty for the intensity at each pixel. To account for
the uncertainty in overall normalization of each image, we also add a
small fraction of the original image intensity (typically 3-5\%) in
quadrature to the uncertainty at each pixel.
The $\chi^2$-minimization routine also returns an estimate of the
variances and covariances of our 6 model parameters. We mask all
pixels with $\Delta F_H / F_H > 1$ or $\Delta \sigma_g / \sigma_g > 1$
to ensure that only pixels with sufficiently well-constrained
parameters are retained. Here $\Delta$ refers to the
$\chi^2$-estimated uncertainty in a parameter.
Figure \ref{fig:lineprofs} shows an assortment of spectra and line
profile fits from our data cubes ranging from very high
signal-to-noise regions (left column) to very low signal-to-noise
regions (right column). The line profiles shown are the best fits to
all of the data points from multiple observations combined into a
single data cube.
A number of other groups \citep[e.g.][]{2003MNRAS.342..345C,
2015MNRAS.451.1004E} use Voronoi binning to combine pixels with low
S/N in order to bring out possible faint emission. We have decided
not to do that.\Ignore{ because we would not then know the precise sky
position of any faint signal that this procedure finds, which would
complicate, and possibly throw off, our attempts to fit the rotation
curve.}
In converting wavelengths to velocities, we first adjust our
wavelengths to the rest frame of the host galaxy by using the systemic
velocities in Table \ref{tab:tab2}. We then use the relativistic
Doppler shift equation:
\begin{equation}
v = c\frac{(\lambda/\lambda_0)^2-1}{(\lambda/\lambda_0)^2+1}.
\end{equation}
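In code this conversion is a one-liner (a sketch; 6562.8~\AA\ is the standard H$\alpha$ rest wavelength in air):
\begin{verbatim}
C_KMS = 2.99792458e5  # speed of light in km/s

def doppler_velocity(lam, lam0=6562.8):
    # Relativistic Doppler velocity (km/s) for wavelengths already
    # shifted to the galaxy rest frame.
    r2 = (lam / lam0)**2
    return C_KMS * (r2 - 1.0) / (r2 + 1.0)
\end{verbatim}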
\subsection{Idiosyncrasies of Individual Observations}
\subsubsection{NGC 7793 Sky Subtraction}
The nearest galaxy in our sample, NGC 7793, required us to modify
slightly our procedure for subtracting the night sky emission lines
from our images. Because it is so close, its systemic velocity is
small enough to be comparable to its internal motions; i.e.\ some of
its gas has zero line-of-sight velocity relative to Earth.
Additionally, it takes up a substantially larger fraction of the RSS
field of view than do the other galaxies discussed in this work. This
means that night sky emission of H$\alpha$\ and [\ion{N}{2}] is sometimes both
spatially and spectrally coincident with NGC 7793's H$\alpha$\ and [\ion{N}{2}]
emission across a large fraction of our images. Because the night sky
emission was contaminated by the emission from NGC 7793, we were
unable to use the ``fit-and-subtract'' technique as described in
\S \ref{sec:skysub}. Instead, we temporarily masked regions of
our images in which the night sky emission ring overlapped the galaxy
and fit only the uncontaminated portion of the images. Visual
inspection of the images after this process indicates that the night
sky emission was removed effectively without over-subtracting from the
galaxy's emission.
We were unable to obtain all of our requested observations of NGC 7793
before the decommissioning of SALT's medium-resolution Fabry-P\'erot\ etalon in
2015. Consequently, we have acquired 4 observations of the eastern
portion of this galaxy but only 1 observation of the western portion.
We are therefore able to detect H$\alpha$\ emission from areas of lower
signal on the eastern side of the galaxy only. All 5 observations
overlap in the central region, which is the area of greatest interest
to our survey.
\begin{figure*}
\begin{center}
\hbox to \hsize{
\includegraphics[width=.45\hsize]{projcomp1.ps} \hfill
\includegraphics[width=.45\hsize]{projcomp2.ps}}
\end{center}
\caption{Comparison between the projection geometry fitted to the
kinematic map (blue) and the {\it I}-band photometric image (red).
For each galaxy, the left-hand panel compares the fitted positions
of the centers, with the shaded area showing the region that
encloses 68\% of the bootstrap estimates of the position of the
best fit kinematic center, which is marked by the blue dot. The
red plus symbol shows the location of the adopted photometric
center on the same scale, for which there is no uncertainty
estimate. Note that the center of NGC~578 was fixed at the
photometric position when fitting the kinematic map. The fitted
PA is shown in the middle panel and the inclination in the
righthand panel, and again the gray shading indicates the
1-$\sigma$ uncertainties about the best fit value, which is less
than the line width in some cases.
\label{fig:projcomp}}
\end{figure*}
\subsubsection{Migratory Image Artifacts}
\label{sec:ufos}
In our 28 Dec 2011 observations of NGC 908, NGC 1325, and NGC 2280 and
our 29 Dec 2011 observation of NGC 578, we detect a series of bright
objects which move coherently across our images. These objects have a
different point spread function from that of the real objects in our
images, and appear to be unfocused. In a time sequence of images,
these objects move relative to the real objects of the field in a
uniform way.
The relative abundance of these objects appears to be roughly
proportional to the abundance of stars in each image, though we have
been unable to register these objects with real stars. In the case of
our 29 Dec 2011 observation of NGC 578, one of these objects is so
bright that its diametric ghost (see \S \ref{sec:ghosts}) is
visible and moves in the opposite direction to the other objects'
coherent movement.
Based on this information, we have arrived at a possible explanation
for the appearance of these strange objects. We believe that on these
two nights in Dec 2011, a small subset of SALT's segmented primary
mirror, perhaps only one segment, was misaligned with the rest of the
primary mirror. This subset of the primary mirror then reflected
out-of-field light into our field. As the secondary optics package
moved through the focal plane to track our objects of interest, the
stars reflected from outside the field then appear to move across the
images due to the misalignment of this subset of mirror segments. New
edge sensors have been installed between SALT's primary mirror
segments in the time since these observations were taken, so these
types of image artifacts should not be present in future observations.
We have applied a simple mask over our images wherever these objects
appear. Any pixels which fall within this mask are excluded from any
calculations in the remainder of our data reduction process.
\subsubsection{Other Image Artifacts}
SALT utilizes a small probe to track a guide star over the course of
an observation to maintain alignment with a target object. In some of
our observations, the shadow of this guide probe overlaps our images
(e.g.\ the lower right of the image in Figure \ref{fig:ghosts}).
Similar to our treatment of the migrating objects above, we apply a
mask over pixels which are affected by this shadow. We also apply
such a mask in the rare cases in which a satellite trail overlaps our
images.
\section{Velocity and Intensity Maps}
\label{sec:dataproduct}
The results of the foregoing reductions of the raw data cube for each
galaxy are 2D maps of median surface brightness, continuum surface
brightness (i.e.\ $C$ from equation \ref{eqn:model}), integrated
H$\alpha$\ line surface brightness ($F_H$), integrated [\ion{N}{2}] line surface
brightness ($F_N$), line-of-sight velocity, and estimated uncertainty
in velocity for each of our 14 galaxies. The total number of fitted
pixels and number of independent resolution elements in each galaxy's
maps are summarized in Table \ref{tab:tab1}.
\subsection{Axisymmetric models and rotation curves}
\label{sec:models}
We have utilized the \texttt{DiskFit}\footnote{\texttt{DiskFit}\ is publicly available for
download at https://www.physics.queensu.ca/Astro/people/Kristine\underline{~}Spekkens/diskfit/}
software package \citep{2007ApJ...664..204S,2010MNRAS.404.1733S} to
fit axisymmetric rotation models to our H$\alpha$\ velocity fields. Unlike
tilted-ring codes, e.g.\ \texttt{rotcur} \citep{rotcur}, \texttt{DiskFit}\ assumes
a single projection geometry for the entire galactic disk and derives
uncertainties on all the fitted parameters from a bootstrap procedure.
In addition to fitting for five global parameters, which mostly
describe the projection geometry, it fits for a circular rotation
speed in each of an arbitrary number of user-specified radius bins
(i.e.\ the rotation curve). The five global parameters are: the
position of the galaxy center ($x_c, y_c$), the systemic recession
velocity of the galaxy ($V_{\rm sys}$), the disk inclination ($i$), and
the position angle of the disk relative to the North-South axis
($\phi_{\rm PA}$). For $N$ user-specified radius bins, \texttt{DiskFit}\ fits for the
$N+5$ parameters using a $\chi^2$-minimization algorithm.
Where we have sufficiently dense velocity measurements, we typically
space the $N$ radial bins along the major axis by 5\arcsec, which well
exceeds the seeing in all cases, so that each velocity measurement is
independent.
The velocity uncertainties used in calculating the $\chi^2$ values
arise from two sources: the uncertainty in fitting a Voigt profile to
each pixel's spectrum (\S \ref{sec:linefits}) and the intrinsic
turbulence within a galaxy. This intrinsic turbulence, $\Delta_{\rm
ISM}$, is in the range 7--12~km~s$^{-1}$ both in the Milky Way
\citep{1979AJ.....84.1181G} and in external galaxies
\citep{1993PhDT.......185K}. When most emission in a pixel arises
from a single \ion{H}{2} region, the measured velocity may differ from
the mean orbital speed by some random amount drawn from this turbulent
spread. We therefore add $\Delta_{\rm ISM} = 12~$km~s$^{-1}$ in
quadrature to the estimated velocity uncertainty in each pixel when
fitting these models to each of our galaxies.
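Explicitly, the uncertainty adopted for pixel $i$ is then
\begin{equation}
\sigma_i = \left(\sigma_{{\rm fit},i}^2 + \Delta_{\rm ISM}^2\right)^{1/2},
\end{equation}
where $\sigma_{{\rm fit},i}$ denotes (in our notation) the
Voigt-profile fitting uncertainty of \S \ref{sec:linefits}.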
We calculate uncertainties for each of these fitted parameters using
the bootstrap method described in \citet{2010MNRAS.404.1733S}. Because
these velocity maps can contain structure not accounted for in our
models, residual velocities may be correlated over much larger regions
than a single resolution element. To account for this, the bootstrap
method preserves regions of correlated residual line-of-sight velocity
when resampling the data to estimate the uncertainty
values.
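The sketch below illustrates this region-preserving resampling: whole
patches of correlated residuals, rather than individual pixels, are
shuffled before refitting. Here \texttt{refit} stands in for a full
\texttt{DiskFit}\ minimization, \texttt{region\_labels} assigns each
pixel to a patch, and the bookkeeping is a simplifying assumption
rather than the published algorithm.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_errors(v_data, v_model, region_labels, refit,
                     n_boot=1000):
    # refit(v) must return the fitted parameter vector for map v.
    resid = v_data - v_model
    labels = np.unique(region_labels)
    params = []
    for _ in range(n_boot):
        v_fake = v_model.copy()
        for lab in labels:
            donor = rng.choice(labels)     # draw a whole patch at once
            src = resid[region_labels == donor]
            dst = region_labels == lab
            v_fake[dst] += np.resize(src, dst.sum())
        params.append(refit(v_fake))
    return np.std(params, axis=0)          # 1-sigma bootstrap errors
\end{verbatim}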
Table \ref{tab:tab2} lists the projection parameters and
reduced-$\chi^2$ values for our best-fitting axisymmetric models to
our 14 H$\alpha$\ velocity maps. The uncertainty values in Table
\ref{tab:tab2} and in the rotation curves of Figures
\ref{fig:N337A}-\ref{fig:N7793} are the estimated 1-$\sigma$
uncertainties from 1000 bootstrap iterations. In some cases
(e.g.\ NGC 7793), the inclination of the galaxy is poorly constrained
in our axisymmetric models. This leads to a large uncertainty in the
overall normalization of the rotation curve even when the shape of the
rotation curve is well-constrained. This is the reason that
uncertainties in the velocities are often substantially larger than
the point-to-point scatter in the individual values.
\subsection{Non-axisymmetric models}
\texttt{DiskFit}\ is also capable of fitting more complicated models that include
kinematic features such as bars, warped disks, and radial flows. We
have attempted to fit our velocity maps with such models, but in no
case have we obtained an improved fit that appeared convincing. Often
a fitted ``bar'' was clearly misaligned with, and of different
length from, that visible in the galaxy image, and the bootstrap
uncertainties yielded large errors on the fitted bar parameters. The
\texttt{DiskFit}\ algorithm has been demonstrated to work well
\citep{2007ApJ...664..204S, 2010MNRAS.404.1733S} when there are
well-determined velocities covering the region of the bar. But the
\texttt{DiskFit}\ algorithm is unable to find a convincing fit when the velocity
map lacks information at crucial azimuths of the expected bar flow, as
appears to be the case for all the barred galaxies in our sample.
This remains true even when the initial guesses at parameter values
are chosen carefully. We therefore here present only axisymmetric
fits to our data in which bars and other asymmetries are azimuthally
averaged. We will discuss more complex kinematic models for these
galaxies in future papers in this series.
\subsection{Comparison with photometry}
\citet{RINGSPhot}\ have applied the \texttt{DiskFit}\ package to multi-band
photometric images of these galaxies, fitting both a disk and, where
appropriate, a bulge and/or a bar. These fits yield the disk major
axis position angle and an axis ratio that is interpreted as a measure
of the inclination of a thin, round disk. In order to estimate color
gradients, they fixed the photometric center in each image to the same
sky position, and therefore did not obtain uncertainty estimates for
the position of the center. Figure~\ref{fig:projcomp} presents a
graphical comparison between the values derived separately from our
kinematic maps and from the {\it I}-band image of each galaxy. In
most cases, the measurements agree within the uncertainties. However,
there are some significant differences. In particular, discrepancies
in the fitted positions of the centers seem large compared with the
uncertainties. In some cases, notably NGC 1325, NGC 3705, NGC 5364,
NGC 6384, and NGC 7606, we have no kinematic measurements in the inner
15\arcsec--25\arcsec, which complicates fitting for the center. In
all these cases, both the kinematic and photometric centers are well
within the region where we have no kinematic data, while the radial
extent of our maps is 10--20 times larger; forcing the kinematic
center to coincide with the photometric center has little effect on
the fitted inclination, position angle, and outer rotation curve. We
discuss other individual cases in the subsections on each galaxy below.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\hsize]{six_panel_N337A.eps}
\includegraphics[width=\hsize]{df_N337A.eps}
\end{center}
\caption{Results for NGC 337A. Top left: the median flux for each
pixel in our combined data cube. Middle left: the fitted
continuum flux. Top center: the fitted integrated H$\alpha$\ line flux.
Middle center: the fitted integrated [\ion{N}{2}] line flux. Top right:
the fitted line-of-sight velocity. Middle right: the estimated
uncertainty in the fitted line-of-sight velocity. At a distance
of 2.57 Mpc, the physical scale is $12.5 \textrm{ pc}/\arcsec$.
Bottom left: Our best-fitting axisymmetric \texttt{DiskFit}\ model of NGC
337A's line-of-sight H$\alpha$\ velocity field. The center, orientation
of the major axis, and axis-ratio of our best-fitting \texttt{DiskFit}\ model
are marked with a large black cross. Bottom center: A map of the
data-minus-model residual velocities for the best-fitting model in
the left panel. Bottom right: A rotation curve extracted from the
best-fitting axisymmetric model with 1-$\sigma$ uncertainties
derived from our bootstrapping procedure. The radii were chosen
to be at least 5\arcsec\ apart, which is approximately 2 seeing
elements.
\label{fig:N337A}}
\end{figure*}
\section{Results for Individual Galaxies}
\subsection{NGC 337A}
NGC 337A has one of the most sparsely sampled velocity maps in the
RINGS medium-resolution H$\alpha$\ kinematic data, as seen in Figure
\ref{fig:N337A}. It is also one of the two galaxies in this work
(along with NGC 4517A) that are classified as Irregular. Despite
this, our model is able to sample the rotation curve over a wide range
of radii (Figure \ref{fig:N337A}) extending out to $\sim 2.5~$kpc.
Near the center and at $R\ga 175$\arcsec, the velocity data are too
sparse to yield a meaningful estimate of the circular speed.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\hsize]{six_panel_N578.eps}
\includegraphics[width=\hsize]{df_N578.eps}
\end{center}
\caption{Same as Figure \ref{fig:N337A}, but for NGC 578. At a
distance of 27.1 Mpc, the physical scale is $131 \textrm{
pc}/\arcsec$. The large uncertainties on the points are due
almost entirely to the galaxy's inclination being poorly
constrained.
\label{fig:N578}}
\end{figure*}
Our best-fitting kinematic projection parameters for this galaxy differ
substantially from those derived from the {\it I}-band image by
\citet{RINGSPhot}, as indicated in Figure~\ref{fig:projcomp}, which is
not too surprising given the sparseness of the kinematic map. In
particular, the axis about which the galaxy is rotating appears to be
strongly misaligned from the symmetry axis of the {\it I}-band light
distribution. Since the kinematic data are clearly blueshifted on the
West side of the galaxy and redshifted on the East, the misalignment
is more probably due to difficulties in fitting the image: the light
of NGC~337A is dominated by a bulge while the disk is very faint, so
that the apparent projection geometry of the galaxy is largely that of
the bulge.
\subsection{NGC 578}
Even though NGC 578 exhibits one of the strongest visible bars among
this sample of galaxies, we were disappointed to find that the
velocity map (Figure~\ref{fig:N578}) lacks sufficient data in the bar
region to be able to separate a non-circular flow from the
axisymmetric part. Note the absence of velocity information
immediately to the N and S of the bar. We therefore derive an
estimate of the rotation curve from an axisymmetric fit only. Also,
for this galaxy only, we fix the center of rotation to the sky
position of the photometric center. The coherent velocity features in
the residual map clearly contain more information that we will examine
more closely in a future paper in this series.
The slow, almost continuous rise of the fitted circular speed
affects our ability to determine the inclination of the disk plane to
the line of sight, which is generally more tightly constrained when the
rotation curve has a clear peak. This galaxy therefore has one of the
larger inclination uncertainties in the sample, which leads to the
large uncertainties in the deprojection of the orbital speeds; this
is why the point-to-point differences in the best-fitting values
are substantially smaller than the uncertainties.
As shown in Figure~\ref{fig:projcomp}, the best-fitting inclination
and position angle for our kinematic models of this galaxy disagree
significantly with the values derived from the photometric model of
\citet{RINGSPhot}, in which the bar was fitted separately. There are
at least two reasons for this discrepancy: the prominent bar feature
probably does affect the estimated projection geometry derived from an
axisymmetric fit to the kinematic map and the galaxy image also
manifests a strong asymmetry in the outer parts, with an unmatched
spiral near the Northern minor axis, that complicates the fit to
photometric image.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\hsize]{six_panel_N908.eps}
\includegraphics[width=\hsize]{df_N908.eps}
\end{center}
\caption{Same as Figure \ref{fig:N337A}, but for NGC 908. At a
distance of 19.4 Mpc, the physical scale is $94.1 \textrm{
pc}/\arcsec$.
\label{fig:N908}}
\end{figure*}
In Figure~\ref{fig:rc_comp}, we compare our derived rotation curve
with that reported by \citet{1996ApJS..107...97M} via H$\alpha$\ longslit
spectroscopy (red points). There is generally somewhat smaller
scatter in our points, and those authors adopt a higher inclination of
58\arcdeg, compared with our 44\arcdeg, causing them to derive
circular speeds that are systematically lower by about 20\%.
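The size of this offset follows directly from the deprojection of
line-of-sight velocities. With $\theta$ the azimuth in the disk plane
(our notation),
\begin{equation}
V_{\rm c} = \frac{V_{\rm los}-V_{\rm sys}}{\sin i\,\cos\theta}
\quad\Longrightarrow\quad
\frac{V_{\rm c}(i=58^\circ)}{V_{\rm c}(i=44^\circ)}
= \frac{\sin 44^\circ}{\sin 58^\circ} \approx 0.82,
\end{equation}
i.e.\ a reduction of about 20\% when the higher inclination is adopted.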
\subsection{NGC 908}
NGC 908 has a single large spiral arm towards the north-east side of
the galaxy (see the top left panel of Figure \ref{fig:N908}) which is
unmatched by a corresponding spiral arm on the opposite side. We have
fitted an axisymmetric model, which therefore leads to a corresponding
region of large correlated residual velocity. This feature is
probably responsible for the sudden increase in the derived rotation
curve beyond 120\arcsec, which could also be indicative of a warped
disk at large radii.
Again, Figure~\ref{fig:projcomp} indicates that our best-fitting
values for the center, position angle, and inclination of this galaxy
differ somewhat from those fitted to the {\it I}-band image
\citep{RINGSPhot}, though this is not entirely surprising given the
asymmetry of this galaxy.
As shown in Figure \ref{fig:rc_comp}, the shape of our derived
rotation curve for NGC 908 agrees fairly well with the previous
long-slit measurements by \citet{1996ApJS..107...97M}, although we do
not reproduce the slow inner rise that they report. Again they
adopted a higher inclination of 66\arcdeg, compared with our
54\arcdeg, causing their circular speeds to be lower than ours by
about 12\%.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\hsize]{six_panel_N1325.eps}
\includegraphics[width=\hsize]{df_N1325.eps}
\end{center}
\caption{Same as Figure \ref{fig:N337A}, but for NGC 1325. At a
distance of 23.7 Mpc, the physical scale is $115 \textrm{
pc}/\arcsec$.
\label{fig:N1325}}
\end{figure*}
\vfill\eject
\subsection{NGC 1325}
Our data on NGC 1325 (Figure \ref{fig:N1325}) indicate that this
galaxy has a regular projected flow pattern. We derive a rotation
curve that is approximately flat over a wide range of radii. Notably,
we detect very little H$\alpha$\ emission in the innermost
$\sim25$\arcsec\ of the map, where our velocity estimates are
correspondingly sparse and uncertain. Our best-fitting projection
angles for this galaxy agree extremely well with those from the
photometric models of \citet{RINGSPhot}, as shown in
Figure~\ref{fig:projcomp}, but the position of the center differs by
over 10\arcsec, probably because of the dearth of kinematic data in
the inner parts.
\citet{1982ApJ...261..439R} adopted an inclination of 70\arcdeg\ for
this galaxy, which is identical within the uncertainty with our
best fit value, and our extracted rotation curve agrees
reasonably well (Figure \ref{fig:rc_comp}) with their measurements at
$R > 50\arcsec$. We do not, however, reproduce the slow rise
interior to this radius that they report; this discrepancy could
indicate that their slit did not pass through the center.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\hsize]{six_panel_N1964.eps}
\includegraphics[width=\hsize]{df_N1964.eps}
\end{center}
\caption{Same as Figure \ref{fig:N337A}, but for NGC 1964. At a
distance of 20.9 Mpc, the physical scale is $101 \textrm{
pc}/\arcsec$.
\label{fig:N1964}}
\end{figure*}
\vfill\eject
\subsection{NGC 1964}
We find an almost regular flow pattern for NGC 1964 (Figure
\ref{fig:N1964}). Our fitted center position and projection angles agree,
within the estimated uncertainties (see Figure~\ref{fig:projcomp}),
with those derived from the {\it I}-band image by \citet{RINGSPhot}.
As shown in Figure \ref{fig:rc_comp}, our derived rotation curve is
similar to that measured previously by \citet{1996ApJS..107...97M},
who adopted an inclination of 68\arcdeg, compared to our 74\arcdeg.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\hsize]{six_panel_N2280.eps}
\includegraphics[width=\hsize]{df_N2280.eps}
\end{center}
\caption{Same as Figure \ref{fig:N337A}, but for NGC 2280. At a distance of 24.0 Mpc, the physical scale is $116 \textrm{ pc}/\arcsec$.
\label{fig:N2280}}
\end{figure*}
\vfill\eject\phantom{blah}\vfill\eject
\subsection{NGC 2280}
Our previous paper \citep{RINGS1} presented a kinematic map for NGC
2280 that was derived from the same Fabry-P\'erot\ data cube. The most
significant difference between the maps and models presented in that
work and those presented here is increased spatial resolution due
to a change in our pixel binning procedure. As mentioned in \S\S
\ref{sec:flattening} and \ref{sec:ghosts}, we have made minor
refinements to our flatfielding and ghost subtraction routines which
have improved the data reduction process, and here we also include a
fit to the [\ion{N}{2}] 6583 line in addition to the H$\alpha$\ line, which results in
a slightly increased image depth.
Our derived velocity map for NGC 2280, presented in
Figure~\ref{fig:N2280}, again reveals a regular flow pattern that is
typical of a rotating disk galaxy seen in projection. Unlike many of
the other galaxies in our sample, we have been able to extract
reliable velocities at both very small and very large radii, producing
one of the most complete rotation curves in this sample. Aside from
the innermost point, which is very uncertain, the measured orbital
speed agrees well with that in our previous paper, where we also
demonstrated general agreement with the \ion{H}{1} rotation curve.
The position of the center, inclination, and position angle of this
galaxy are very well constrained in our models, with uncertainties
$\la 1$\arcdeg\ for both angle parameters. These values are
consistent with our previous work on this galaxy in \citet{RINGS1},
but the estimated inclination, 63.5\arcdeg\ is in tension (see
Figure~\ref{fig:projcomp}) with the 69.6\arcdeg\ derived from the {\it
I}-band image by \citet{RINGSPhot}, who also estimated the
uncertainty on each angle to be $\sim 1\arcdeg$.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\hsize]{six_panel_N3705.eps}
\includegraphics[width=\hsize]{df_N3705.eps}
\end{center}
\caption{Same as Figure \ref{fig:N337A}, but for NGC 3705. At a
distance of 18.5 Mpc, the physical scale is $89.7 \textrm{
pc}/\arcsec$.
\label{fig:N3705}}
\end{figure*}
Our rotation curve for NGC 2280 extends to much larger radii than those
published previously, as shown in Figure \ref{fig:rc_comp}. We derive
systematically slightly larger velocities than did
\citet{1996ApJS..107...97M} (red points), who adopted $i=61\arcdeg$.
Our estimated velocities are almost double the values reported by
\citet{1995A&AS..110..279S} (green points), who did not give an
inclination for this galaxy and may have reported projected, i.e.\
line-of-sight, velocities.
\subsection{NGC 3705}
We have derived the maps shown in Figure~\ref{fig:N3705} from our data
on NGC 3705. We detect no H$\alpha$\ emission in the central $\sim
20\arcsec$ of the galaxy, and the innermost fitted velocities have
large uncertainties. For $R \ga 80\arcsec$, the rotation curve
appears to be approximately flat over a broad range of radii.
Our values for NGC 3705's center and projection angles are consistent
with (see Figure~\ref{fig:projcomp}) the values from the {\it I}-band image
fitted by \citet{RINGSPhot}, but our lack of velocity measurements at
small radii made it difficult to pinpoint the center.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\hsize]{six_panel_N4517A.eps}
\includegraphics[width=\hsize]{df_N4517A.eps}
\end{center}
\caption{Same as Figure \ref{fig:N337A}, but for NGC 4517A. At a distance of 26.7 Mpc, the physical scale is $129 \textrm{ pc}/\arcsec$.
\label{fig:N4517A}}
\end{figure*}
\vfill\eject\phantom{blah}\vfill\eject
\subsection{NGC 4517A}
Our velocity map for NGC 4517A, Figure~\ref{fig:N4517A}, like that for
NGC 337A, is very sparsely sampled, and both galaxies are
morphologically classified as Irregular. Our rotation curve extracted
from an axisymmetric model of this galaxy is sparsely sampled and has
large uncertainties, which also reflect the poorly constrained
inclination.
The projection parameters of our best-fitting model have some of the
largest uncertainties in Table \ref{tab:tab2}, but are consistent,
within the uncertainties (see Figure~\ref{fig:projcomp}), with the
values derived from the {\it I}-band image by \citet{RINGSPhot}, and our
fitted center agrees well with the photometric estimate.
Our estimates of the circular speed in NGC 4517A generally agree with
the values measured by \citet{2011MNRAS.413.1875N}, though both their
PPAK data and ours are quite sparse (Figure \ref{fig:rc_comp}). Their
slightly higher orbital speeds are a consequence of a difference in
adopted inclination of $i = 90\arcdeg - 33\arcdeg = 57\arcdeg$
compared with our 51\arcdeg.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\hsize]{six_panel_N4939.eps}
\includegraphics[width=\hsize]{df_N4939.eps}
\end{center}
\caption{Same as Figure \ref{fig:N337A}, but for NGC 4939. At a distance of 41.6 Mpc, the physical scale is $202 \textrm{ pc}/\arcsec$.
\label{fig:N4939}}
\end{figure*}
\vfill\eject
\subsection{NGC 4939}
Figure~\ref{fig:N4939} presents our results for NGC 4939, which is the
most luminous galaxy in our sample. The rotation curve rises steeply
before becoming approximately flat for $R\ga 25\arcsec$ at a value of
270 km~s$^{-1}$ out to nearly 40 kpc in the disk plane. Our kinematic
projection parameters and center for this galaxy agree very well (see
Figure~\ref{fig:projcomp}) with those derived from the {\it I}-band image by
\citet{RINGSPhot}.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\hsize]{six_panel_N5364.eps}
\includegraphics[width=\hsize]{df_N5364.eps}
\end{center}
\caption{Same as Figure \ref{fig:N337A}, but for NGC 5364. At a
distance of 18.1 Mpc, the physical scale is $87.8 \textrm{
pc}/\arcsec$.
\label{fig:N5364}}
\end{figure*}
\vfill\eject\phantom{blah}\vfill\eject
\subsection{NGC 5364}
The H$\alpha$\ emission in NGC 5364 very strongly traces its spiral arms and
we detect no H$\alpha$\ emission within the innermost $\sim15\arcsec$. The
rotation curve rises roughly linearly outside this radius before
becoming approximately flat for $R \ga 80\arcsec$. Because the
kinematic data are somewhat sparse, the galaxy's inclination has a
moderately large uncertainty, leading to a large uncertainty in the
overall normalization of the rotation curve.
Our fitted position angle and inclination differ (see
Figure~\ref{fig:projcomp}) by a few degrees from the values derived
from the {\it I}-band image by \citet{RINGSPhot}, although differences are
not large compared with the uncertainties. Again the lack of
kinematic data in the inner part of map led to larger than usual
uncertainties in the position of the center.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\hsize]{six_panel_N6118.eps}
\includegraphics[width=\hsize]{df_N6118.eps}
\end{center}
\caption{Same as Figure \ref{fig:N337A}, but for NGC 6118. At a
distance of 22.9 Mpc, the physical scale is $111 \textrm{
pc}/\arcsec$.
\label{fig:N6118}}
\end{figure*}
\vfill\eject\phantom{blah}\vfill\eject
\subsection{NGC 6118}
Our velocity map for NGC 6118 is presented in Figure~\ref{fig:N6118}.
The rotation curve extracted from our axisymmetric model rises
continuously from the center to $R \ga 100\arcsec$. The decreasing
values beyond this radius have large uncertainties.
Our best-fitting projection angles agree (Figure~\ref{fig:projcomp})
with the values derived from the {\it I}-band image by \citet{RINGSPhot},
but the centers disagree by about 5\arcsec, or about $6\sigma$.
Our rotation curve also agrees very well (Figure~\ref{fig:rc_comp})
with that obtained by \citet{1984A&AS...58..351M}, who used a longslit
and adopted an inclination of 62\arcdeg.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\hsize]{six_panel_N6384.eps}
\includegraphics[width=\hsize]{df_N6384.eps}
\end{center}
\caption{Same as Figure \ref{fig:N337A}, but for NGC 6384. At a
distance of 19.7 Mpc, the physical scale is $95.5 \textrm{
pc}/\arcsec$.
\label{fig:N6384}}
\end{figure*}
\vfill\eject\phantom{blah}\vfill\eject
\subsection{NGC 6384}
We present our velocity map for NGC 6384 in Figure~\ref{fig:N6384}.
As in NGC 5364, the H$\alpha$\ emission closely traces the spiral arms. We
detect no H$\alpha$\ emission within the innermost $\sim25\arcsec$. Our
fitted rotation curve is roughly flat from this point to the outermost
limits of our data.
Our best-fitting model's inclination is in reasonable agreement with
the uncertain value (see Figure~\ref{fig:projcomp}) derived from the
{\it I}-band image by \citet{RINGSPhot}, while the position angle and
center are in better agreement.
Figure~\ref{fig:rc_comp} shows that our estimates of the circular
speed in NGC~6384 are systematically higher than those of
\citet{1995A&AS..110..279S}, as was the case for NGC~2280. Again
these authors appear not to have corrected their orbital speeds for
inclination.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\hsize]{six_panel_N7606.eps}
\includegraphics[width=\hsize]{df_N7606.eps}
\end{center}
\caption{Same as Figure \ref{fig:N337A}, but for NGC 7606. At a
distance of 34.0 Mpc, the physical scale is $165 \textrm{
pc}/\arcsec$.
\label{fig:N7606}}
\end{figure*}
\vfill\eject\phantom{blah}\vfill\eject
\subsection{NGC 7606}
NGC 7606 is the fastest-rotating galaxy in this sample and the second
most-luminous. Again the velocity map, Figure~\ref{fig:N7606},
displays the flow pattern of a typical spiral disk, and again we
detect no H$\alpha$\ emission in the innermost $\sim15\arcsec$. Our fitted
rotation curve appears to be rising from our innermost point, becoming
roughly flat from $R\sim30\arcsec$, before declining somewhat from
$\sim50$--$120\arcsec$ with a hint of an outer increase, although the
uncertainties are large due to the sparseness of our data at these
radii.
The inclination and position angle of this galaxy are extremely
tightly constrained by our kinematic models and agree very well,
Figure~\ref{fig:projcomp}, with the projection angles derived from the
{\it I}-band image by \citet{RINGSPhot}, as does the location of the center
despite the absence of data at small radii.
In general, our rotation curve measurements agree well with the
previous measurements by \citet{1982ApJ...261..439R} (blue points in
Figure~\ref{fig:rc_comp}) and \citet{1996ApJS..107...97M} (red
points), who adopted inclinations of 66\arcdeg\ and
70\arcdeg\ respectively.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\hsize]{six_panel_N7793.eps}
\includegraphics[width=\hsize]{df_N7793.eps}
\end{center}
\caption{Same as Figure \ref{fig:N337A}, but for NGC 7793. At a
distance of 3.44 Mpc, the physical scale is $16.7 \textrm{
pc}/\arcsec$. Because we obtained 4 observations of the East
(approaching) side of this galaxy and only 1 observation of the
West (receding) side, our sensitivity is significantly stronger on
the Eastern portion of these maps. All 5 observations overlap in
the central region.
\label{fig:N7793}}
\end{figure*}
\vfill\eject
\subsection{NGC 7793}
NGC 7793 has the largest angular size of our sample and the velocity
map, Figure~\ref{fig:N7793}, was derived from the combination of two
separate pointings.
Our fitted rotation curve shows a general rise to $R\sim 100\arcsec$,
except for a slight decrease around $R\sim40\arcsec$. Our data in the
outermost parts of the galaxy are too sparse to measure the orbital
speed reliably. As for NGC 337A, the large uncertainties on
individual points in the rotation curve are mostly due to the large
uncertainty in NGC 7793's inclination in our model.
The projection parameters of our best-fitting model agree well within
the larger than usual uncertainties, Figure~\ref{fig:projcomp}, with
those derived from the {\it I}-band image by \citet{RINGSPhot}, and while
our fitted center is some 14\arcsec\ from the photometric center, our
uncertainty estimates are also large, so that this discrepancy is
$2.5\sigma$.
Again in Figure~\ref{fig:rc_comp} we compare our estimated rotation
curve with those previously reported by \citet{1980ApJ...242...30D}
(orange points) and by \citet{2008AJ....135.2038D} (purple points),
who adopted inclinations of 53\arcdeg\ and 46\arcdeg\ respectively
that are both larger than our 40\arcdeg. Consequently, our estimated
speeds are above theirs at most radii. The shapes of the rotation
curves are generally similar, although we find a steeper inner rise.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.69\hsize]{rc_comparisons.eps}
\end{center}
\caption{A comparison of our best-fitting model rotation curves
(black circles with error bars) to previous measurements from the
literature. In all cases, squares are from the approaching side of
the galaxy, triangles from the receding side, and circles from an
azimuthal average. Unless otherwise specified, we have used our
own best-fitting values for systemic velocity and inclination (see
Table \ref{tab:tab2}) to deproject the data. Red points (NGC 578,
908, 1964, 2280, and 7606): \citet{1996ApJS..107...97M} via H$\alpha$\
longslit spectroscopy (Note: We have adopted a systemic velocity
of 1960 km~s$^{-1}$ for NGC 1964 rather than our best-fitting
value to match the authors' spectra. The authors also report a
rotation curve for NGC 1325, but the wavelength calibration for
those data appears to have been incorrect.). Blue points (NGC 1325
and 7606): \citet{1982ApJ...261..439R} via H$\alpha$\ and [\ion{N}{2}] longslit
spectroscopy. Green points (NGC 2280 and 6384):
\citet{1995A&AS..110..279S} via H$\alpha$\ and [\ion{N}{2}] longslit
spectroscopy. Magenta points (NGC 4517A):
\citet{2011MNRAS.413.1875N} via H$\alpha$\ IFU spectroscopy. Cyan points
(NGC 6118): \citet{1984A&AS...58..351M} via optical longslit
spectroscopy. Brown and purple points (NGC 7793):
\citet{2008AJ....135.2038D} via H$\alpha$\ Fabry-P\'erot\ spectrophotometry. Orange
points (NGC 7793): \citet{1980ApJ...242...30D} via H$\alpha$\ Fabry-P\'erot\
spectrophotometry.
\label{fig:rc_comp}}
\end{figure*}
\subsection{Discussion}
As we have discussed for the individual cases, the rotation curves we
derive from fitting axisymmetric flow patterns to our velocity maps
agree quite well with previously published estimates from several
different authors and using a number of different optical instruments.
These comparisons are shown in Figure~\ref{fig:rc_comp}, where most
systematic discrepancies can be attributed to differences between the
inclinations we adopt, and those in the comparison work. This
generally good agreement is reassuring.
\subsection{Oval disks?}
Discrepancies between the position angle and inclination fitted
separately to a kinematic map and a photometric image of the same
galaxy would be expected if the disk were intrinsically oval, as has
been claimed in some cases \citep[e.g.][]{2011ApJ...739L..27P} and
emphasized as a possibility by \citet{2013seg..book....1K}. Even were
the projected major axis to be closely aligned with either of the
principal axes of a strongly oval disk, the fitted inclinations should
differ.
We have no clear evidence of this behavior in our sample of galaxies,
since the projection angles derived from fitting axisymmetric models
to our velocity maps generally agree, within the estimated
uncertainties, with those fitted to the {\it I}-band images
\citep{RINGSPhot}, as shown in Figure~\ref{fig:projcomp}. We argued
above that the discrepancy in NGC~337A is due to the faintness of the
outer disk, while those in NGC~578 and NGC~908 can be ascribed to
asymmetries. Note that \citet{2003AJ....125.1164B} reported that the
misalignment between the position angles of the galaxy major axis
estimated from photometric images and from kinematic maps never
exceeded 4\arcdeg\ in their larger sample of 74 galaxies, and
\citet{2014A&A...568A..70B} found only
minor misalignments in a sample of intrinsically barred galaxies.
Since these were all randomly selected spiral galaxies, it would seem
that the incidence of intrinsically oval disks is low, at least over
the radial extent of these maps.
Furthermore, \citet{2003AJ....125.1164B} found that the kinematic
centers of their models were within 2\farcs7 of the photometric
centers in 67 out of 74 galaxies in their sample. Here we find the
centers of our kinematic models are consistent in several cases with
the photometric centers (see Figure~\ref{fig:projcomp}), and the
greater discrepancies generally arise where our maps are sparse or
lack data in the center.
\begin{figure}
\begin{center}
\bigskip
\includegraphics[width=\hsize]{sb.eps}
\end{center}
\caption{Left: Azimuthally averaged \textit{R}-band continuum
surface brightness profiles plotted as functions of galactocentric
radius in kpc. Center: The same values plotted as functions of
galactocentric radius rescaled by each galaxy's $R_{23.5}$ in the
\textit{I}-band. Right: The same values plotted as functions of
galactocentric radius rescaled by each galaxy's $R_{\rm opt}$ in the
\textit{I}-band. In each panel, the lines have been vertically
offset by a constant to separate them.
\label{fig:sb}}
\end{figure}
\section{Radial trends}
Figure \ref{fig:sb} shows the azimuthally averaged \textit{R}-band
continuum surface brightness of our galaxies derived from our
H$\alpha$\ Fabry-P\'erot\ data cubes plotted against three different measures of
galactocentric radius. These surface brightness profiles assume that
the disk projection parameters are those of the best-fitting
\textit{I}-band models of \citet{RINGSPhot}. The surface brightness
profiles show qualitative and quantitative agreement with the
\textit{R}-band surface brightness profiles of \citet{RINGSPhot}, but
have a smaller radial extent.
\begin{figure}
\begin{center}
\bigskip
\includegraphics[width=\hsize]{ha_profiles.eps}
\end{center}
\caption{Left: Azimuthally averaged integrated H$\alpha$\ surface
brightness profiles plotted as functions of galactocentric radius
in kpc. Center: The same values plotted as functions of
galactocentric radius rescaled by each galaxy's $R_{23.5}$ in the
\textit{I}-band. Right: The same values plotted as functions of
galactocentric radius rescaled by each galaxy's $R_{\rm opt}$ in the
\textit{I}-band. In each panel, the lines have been vertically
offset by a constant to separate them.
\label{fig:haprofiles}}
\end{figure}
Figure \ref{fig:haprofiles} shows the azimuthally averaged integrated
H$\alpha$\ surface brightnesses of our galaxies, i.e.\ the values of $F_H$
in Equation \ref{eqn:model}. These values should be considered as
lower limits on the true H$\alpha$\ intensity, as the averages were taken
over all pixels in a radial bin, including those which fell below our
signal-to-noise threshold.
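A minimal sketch of this azimuthal averaging, assuming each pixel's
deprojected radius $r$ has already been computed from the
\textit{I}-band projection parameters (function and variable names are
ours):
\begin{verbatim}
import numpy as np

def radial_profile(values, r, r_edges):
    # Mean over *all* pixels in a radius bin; below-threshold pixels
    # are assumed to enter with zero flux, hence the lower limits.
    idx = np.digitize(r, r_edges) - 1
    nbin = len(r_edges) - 1
    prof = np.full(nbin, np.nan)
    for i in range(nbin):
        sel = idx == i
        if sel.any():
            prof[i] = values[sel].mean()
    return prof
\end{verbatim}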
\begin{figure}
\begin{center}
\medskip
\includegraphics[width=\hsize]{n2_ratios.eps}
\end{center}
\caption{Left: Azimuthally averaged N2 Index ($\mathrm{N2}\equiv
  \log(F_{\mathrm{N2}~6583}/F_{\mathrm{H\alpha}})$) plotted
as functions of galactocentric radius in kpc. Center: The same
values plotted as functions of galactocentric radius rescaled by
each galaxy's $R_{23.5}$ in the \textit{I}-band. Right: The same
values plotted as functions of galactocentric radius rescaled by
each galaxy's $R_{\rm opt}$ in the \textit{I}-band. In each panel,
the lines have been vertically offset by a constant to separate
them.
\label{fig:n2ratios}}
\end{figure}
\vfill\eject
\subsection{[\ion{N}{2}]-to-H$\alpha$\ Ratio and Oxygen Abundance}
Figure \ref{fig:n2ratios} shows the azimuthally averaged value of the
ratio of the integrated [\ion{N}{2}] 6583 surface brightness to the integrated
H$\alpha$\ surface brightness, commonly known as the ``N2 Index''
\citep{1979A&A....78..200A}
\begin{equation}
\mathrm{N2} \equiv
\log(F_{\mathrm{N2}~6583}/F_{\mathrm{H\alpha}}).
\end{equation}
It is important to note that the plotted quantity is the average value
of the ratio ($ \langle F_N/F_H \rangle $) and not the ratio of the
averages ($\langle F_N \rangle / \langle F_H \rangle $). We note that
all of our galaxies show a downward trend in this parameter. The
relative intensities of these two lines are complicated functions of
metallicity and electron temperature in the emitting gas, and the line
intensity ratio is also known to be sensitive to the degree of
ionization of the gas \citep{1983MNRAS.204...53S}.
\begin{figure}
\begin{center}
\bigskip
\includegraphics[width=.96\hsize]{o2_metalicity.eps}
\end{center}
\caption{Left: Azimuthally averaged oxygen abundances
($12+\log(\mathrm{O}/\mathrm{H})$) plotted as functions of
galactocentric radius in kpc. Center: The same values plotted as
functions of galactocentric radius rescaled by each galaxy's
$R_{23.5}$ in the \textit{I}-band. Right: The same values plotted
as functions of galactocentric radius rescaled by each galaxy's
$R_{\rm opt}$ in the \textit{I}-band. In each panel, the lines have
been vertically offset by a constant to separate them.
\label{fig:O2}}
\end{figure}
Because this ratio is sensitive to the metallicity of a galaxy and
does not strongly depend on absorption, it has been widely used as an
indicator of oxygen abundance \citep[e.g.][]{2009MNRAS.398..949P,
2013A&A...559A.114M}; \citet{2004MNRAS.348L..59P} show that the data
support an approximately linear relation between oxygen abundance and
N2 index, which holds over the range $-2 \la \mathrm{N2} \la -0.5$,
but the relation may steepen at both higher and lower values of the
ratio. \citet{2013A&A...559A.114M} give the following relation
between the N2 index and oxygen abundance:
\begin{equation}
12+\log(\mathrm{O}/\mathrm{H}) = 8.743 + 0.462\times\mathrm{N2}.
\label{eq.oabund}
\end{equation}
We have used this relation to derive the mean radial variation of
oxygen abundance in our galaxies displayed in Figure \ref{fig:O2}. As
in many previous studies, we find that our galaxies generally manifest
a declining trend in metallicity \citep[e.g.][]{1992MNRAS.259..121V,
1994ApJ...420...87Z, 2010ApJS..190..233M, 2017MNRAS.469..151B}.
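As an illustration, applying the calibration of
equation~\ref{eq.oabund} to the fitted line fluxes amounts to the
following (a hypothetical helper, not code from our pipeline):
\begin{verbatim}
import numpy as np

def oxygen_abundance(f_n2, f_ha):
    # 12 + log(O/H) from the N2 index via the relation above;
    # the linear calibration holds roughly for -2 < N2 < -0.5.
    n2 = np.log10(f_n2 / f_ha)
    return 8.743 + 0.462 * n2
\end{verbatim}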
With the exception of NGC~6384, the most extended normalized profiles
(e.g. NGC~4939, NGC~337A, NGC~2280) show hints of a
flattening in the outer parts, as has also been reported for large
samples \citep[e.g.][]{2014A&A...563A..49S, 2016A&A...587A..70S}. NGC
4939 is the only galaxy discussed in this work known to host an active
galactic nucleus (AGN). Away from the nucleus of this galaxy, and in
all other galaxies in our sample, most ionizing radiation probably
comes from hot, young stars. The extra ionizing radiation from the
AGN in NGC 4939 may be the reason for the central spike in the apparent
oxygen abundance in this case.
\section{Summary}
\label{sec:summary}
We have presented high spatial resolution ($\sim$2.5\arcsec)
H$\alpha$\ velocity fields of 14 of the 19 galaxies in the RINGS sample, as
well as maps of these galaxies' \textit{R}-band continuum emission and
H$\alpha$\ and [\ion{N}{2}] integrated surface brightness. Additionally, we have
presented azimuthally averaged integrated surface brightness profiles
of these emission lines. We observe a general downward trend of the
[\ion{N}{2}]-to-H$\alpha$\ emission ratio with radius in all of our galaxies.
We have used the \texttt{DiskFit}\ software package of \citet{2007ApJ...664..204S}
and \citet{2010MNRAS.404.1733S} to model the velocity fields presented
in this work. From these models, we have extracted rotation curves at
high spatial resolution and have shown good general agreement with
those previously published, where available. In most cases, the
projection geometries of these models agree well with the photometric
models of \citet{RINGSPhot}. This agreement argues against the disks
being intrinsically oval.
As of September 2015, the medium-resolution Fabry-P\'erot\ etalon of SALT RSS is no
longer available for observations due to deterioration of the
reflective coatings. The remaining five galaxies of the RINGS sample
are scheduled to be completed in the Fabry-P\'erot\ system's high-resolution
mode.
\acknowledgments
We thank Tad Pryor for several productive conversations in designing
our data reduction procedures, and an anonymous referee for a thorough
and constructive report. This work was supported by NSF grant
AST/12117937 to JAS \& TBW and NSF grant PHY/1263280. This research
made use of the NASA/IPAC Extragalactic Database (NED) which is
operated by the Jet Propulsion Laboratory, California Institute of
Technology, under contract with the National Aeronautics and Space
Administration; Astropy, a community-developed core Python package for
Astronomy \citep{2013A&A...558A..33A}; matplotlib, a Python library
for publication quality graphics \citep{Hunter:2007}; SciPy
\citep{jones_scipy_2001}; IRAF, distributed by the National Optical
Astronomy Observatory, which is operated by the Association of
Universities for Research in Astronomy (AURA) under cooperative
agreement with the National Science Foundation
\citep{1993ASPC...52..173T}; and PyRAF, a product of the Space
Telescope Science Institute, which is operated by AURA for NASA. The
observations reported in this paper were obtained with the Southern
African Large Telescope (SALT) under programs 2011-3-RU-003,
2012-1-RU-001, 2012-2-RU-001 (PI: TBW), 2013-2-RU\_RSA-001,
2014-1-RU\_RSA-001, 2014-2-SCI-012, and 2015-1-SCI-016 (PI: JAS).
\bibliographystyle{apj}
\section{Introduction}
Shear can play an important role in hydromagnetic dynamos.
This is especially true of dynamos in astrophysical bodies
that generate magnetic fields on
scales larger than the scale of the turbulent motions.
Those types of dynamos are generally referred to as large-scale dynamos.
Simulations confirm that shear can be the sole driver of dynamo action
\citep{B05,Yousef1,Yousef2,BRRK08}, but there is no consensus as to what
is the underlying mechanism for producing such large-scale fields.
In addition to shear there are also other possible mechanisms
producing large-scale magnetic fields.
One important contender is the $\alpha$ effect \citep{SKR66}, which
quantifies the effect of kinetic helicity on magnetic field generation.
It can also be the sole driver of large-scale dynamo action \citep{B01,KKB09b}.
When both shear and $\alpha$ effect act simultaneously, it becomes even
harder to identify the main drivers of large-scale dynamo action.
Although shear is generally believed to be advantageous for large-scale
dynamo action \citep[e.g.][]{Tobias09}, it is conceivable that the two effects
($\alpha$ effect and shear) suppress each other at least partially.
This is because, in the presence of stratification or other inhomogeneities,
shear itself can produce an $\alpha$ effect \citep{RK03,RS06,KKB09a}.
Its sign depends on the relative orientation of shear and stratification.
The net $\alpha$ depends then on the pseudo scalar
$(2\bm{\Omega}+\overline{\bm{W}})\cdot\bm{g}$, where $2\bm\Omega$ and
$\overline{\bm{W}}$ are the vorticities associated with rotation and
large-scale shear flow, respectively.
The issue can be complicated even further if shear is not constant but
has a sinusoidal profile, for example \citep{BBS01,HP09}.
Sinusoidal shear profiles are commonly adopted in numerical simulations
where all boundaries are strictly periodic.
This has obvious computational advantages and is certainly easier to
implement than the so-called shearing-periodic boundary conditions
where cross-stream periodicity applies only to positions that follow
the shear flow and are thus changing with time \citep{WT88}.
In helical turbulence with shear there is the possibility of dynamo
waves that propagate perpendicular to the plane of the shear.
This is clearly borne out by simulations \citep{KB09}.
The propagation direction of the dynamo wave is proportional to the product
$H_{\rm K}\overline{\bm{W}}$, where $H_{\rm K}$ is the kinetic helicity
of the flow.
When the shear is sinusoidal, the sign of $\overline{\bm{W}}$ changes
in space, so one obtains counter-propagating dynamo waves in the two halves
of the domain \citep{BBS01}.
In the presence of helicity, there is also a turbulent pumping effect,
whose effective velocity is also in the direction of
$H_{\rm K}\overline{\bm{W}}$
\citep{MKTB09}.
In the cases discussed above the turbulence is driven by a helical body
force, which is clearly artificial, but it allows contact to be made with
analytic theories of dynamo action in homogeneous media \citep{Mof78}.
A more realistic case is one where the turbulence is driven by natural
convection in a slab with a temperature gradient in the vertical direction.
Many of the features of dynamo action discussed above carry over to this
case as well, but an additional complication arises both from the fact
that there are impenetrable walls and that the sign of kinetic helicity
changes with depth \citep[e.g.][]{BNPST90,CH06}.
In the present paper we deal with both aspects, but we focus in particular
on the effects of sinusoidal shear, where we expect at least partial
cancellation of the $\alpha$ effect when averaged over horizontal planes.
We contrast our work with earlier results that used linear shear,
implemented via the shearing-box
approximation \citep{KKB08}, as well as the case with no shear
\citep{KKB09b}, where only the $\alpha$ effect can operate.
The conclusion from these studies is that in the
simulation domain there is an $\alpha$ effect
of the strength expected from kinematic mean-field theory
\citep{KKB09a,KKB09b}.
There is also a back-reaction of the magnetic field through the Lorentz force,
and its strength varies depending on whether or not magnetic helicity is
allowed to escape from the domain \citep{KKB09c}.
Again, these aspects are now well understood using mean-field theory.
The new aspect here is the sinusoidal shear.
In a recent paper, \cite{HP09} present results from
convection simulations with rotation and large-scale shear and report
the emergence of a large-scale magnetic field whose growth rate is proportional
to the shear rate, similar to the earlier results of
\cite{KKB08}.
They also determine the $\alpha$ effect from
their simulations using the so-called imposed-field method and find
that $\alpha$ is small and unaffected by the presence of shear. From
these results the authors conclude that the dynamo cannot be explained by
a classical $\alpha^2$ or $\alpha \Omega$ dynamo.
The interpretation of the results of \cite{HP09} is potentially
in conflict with that of \cite{KKB08}.
In both cases, convection together with shear was found to produce
large-scale fields, but in \cite{KKB08} they are interpreted as being
the result of a conventional $\alpha$ effect while in \cite{HP09}
it is argued that they are due to another mechanism similar to the
incoherent $\alpha$--shear effect \citep{VB97,Sok97,Sil00,Pro07},
or perhaps the shear--current effect \citep{RK03,RK04}.
Moreover, \cite{HP09} argue that the $\alpha$ effect is ruled out.
At this point we cannot be sure that there is really a difference in
interpretations, because the systems considered by \cite{KKB08}
and \cite{HP09} are different in at least two important aspects.
Firstly, in \cite{HP09} there is no density stratification,
and since
$\alpha$ is supposed to be proportional to the logarithmic density gradient
\citep{SKR66} the resulting $\alpha$ may indeed vanish.
However, due to the impenetrable vertical boundaries, the turbulence
is inhomogeneous so that $\bm\nabla\ln u_{\rm rms}\neq{\bm0}$, which can
also lead to an $\alpha$ effect
\citep[e.g.][]{GZR05}.
Here, $u_{\rm rms}$ is the rms velocity of the turbulence.
Secondly, the shear profile changes sign in the horizontal direction.
Together with the vertical inhomogeneity this also produces an $\alpha$ effect
\citep{RK03,RS06}, but its contribution is not captured by horizontal
averaging and it partially cancels the $\alpha$ effect from rotation.
This should be a measurable effect which was not quantified in \cite{HP09}.
Doing this is one of the main motivations behind our present paper.
There is yet another important issue relevant to determining $\alpha$ in
a system where the magnetic Reynolds number is large enough to result in
dynamo action \citep{HdSKB09}.
Obviously, any successful $\alpha$ effect should produce large-scale
magnetic fields.
Given enough time, this field should reach saturation.
By employing a weak external field one might therefore measure $\alpha$
at a saturated level.
Depending on boundary conditions, which were unfortunately not specified in
\cite{HP09}, the saturation can result in a catastrophically quenched
$\alpha$ effect.
Furthermore, here we show that even in the absence of a dynamo the
electromotive force from long time averages reflects not only $\alpha$
due to the uniform imposed field as assumed by \cite{HP09}, but also
picks up contributions from the additionally generated nonuniform
fields of comparable magnitude.
These caveats in determining $\alpha$ with an externally imposed field
were known for some time \citep{OSBR02,KKOS06}, but they have only recently
been examined in detail \citep{HdSKB09} and were therefore not addressed
by \cite{HP09}.
This gives another motivation to our study.
Here we use a similar simulation setup as \cite{HP09}
and derive the $\alpha$ effect with the imposed-field method. We show
that the value of $\alpha$ determined by the method of resetting the
magnetic field after regular time intervals yields a substantially higher value than
that reported by \cite{HP09}. Furthermore, we show that for a
sinusoidally varying shear the $\alpha$ effect also varies
sinusoidally in the horizontal direction, which explains why
\cite{HP09} did not see the contribution of shear in their
horizontally averaged results.
\section{The model}
In an effort to compare with the study of \cite{HP09}, we use a
Cartesian domain with $L_x=L_y=5d$ and $L_z=d$ with $0<z<d$, where
$d$ is the depth
of the convectively unstable layer.
We solve the usual set of hydromagnetic equations
\begin{eqnarray}
\frac{\partial \bm{A}}{\partial t} &=& \bm{U}\times\bm{B} - \eta \mu_0 \bm{J}, \\
\frac{D \ln \rho}{Dt} &=& -\bm\nabla\cdot\bm{U}, \\
\frac{D \bm{U}}{Dt} &=& -\frac{1}{\rho}{\bm \nabla}p + {\bm g} - 2\,\bm{\Omega} \times \bm{U} + \frac{1}{\rho} \bm{J} \times {\bm B} \nonumber \\ && \hspace{1.5cm} + \frac{1}{\rho} \bm{\nabla} \cdot 2 \nu \rho \mbox{\boldmath ${\sf S}$} - \frac{1}{\tau} (\bm{U}-\meanv{U}^{(0)}),\label{equ:mom}\\
\frac{D e}{Dt}\!&=&\!-\frac{p}{\rho}\bm\nabla\cdot{\bm U} + \frac{1}{\rho} \bm{\nabla} \cdot K \bm{\nabla}T + 2 \nu \mbox{\boldmath ${\sf S}$}^2 + \frac{\mu_0\eta}{\rho} \bm{J}^2,
\end{eqnarray}
where $D/Dt=\partial/\partial t + \bm{U}\cdot\bm\nabla$ is the advective time
derivative, $\bm{A}$ is the
magnetic vector potential, $\bm{B} = \bm{\nabla} \times \bm{A}$ the
magnetic field, and $\bm{J} = \mu_0^{-1} \bm{\nabla} \times \bm{B}$ is
the current density, $\mu_0$ is the vacuum permeability, $\eta$ and
$\nu$ are the magnetic diffusivity and kinematic viscosity,
respectively, $K$ is the heat conductivity, $\rho$ is the density,
$\bm{U}$ is the velocity, $\bm{g} = -g\hat{\bm{z}}$ the gravitational
acceleration, and $\bm{\Omega}=\Omega_0(0,0,1)$ the rotation
vector. The fluid obeys an ideal gas law $p=\rho e (\gamma-1)$, where
$p$ and $e$ are the pressure and internal energy, respectively, and
$\gamma = c_{\rm P}/c_{\rm V} = 5/3$ is the ratio of specific heats at
constant pressure and volume, respectively. The specific internal
energy per unit mass is related to the temperature via $e=c_{\rm V}
T$. The rate of strain tensor $\mbox{\boldmath ${\sf S}$}$ is given by
\begin{equation}
{\sf S}_{ij} = {\textstyle{1\over2}} (U_{i,j}+U_{j,i}) - {\textstyle{1\over3}} \delta_{ij} \bm\nabla\cdot\bm{U}.
\end{equation}
The last term of \Eq{equ:mom} maintains a shear flow of the form
\begin{equation}
\meanv{U}^{(0)} = U_0 \cos\left[\frac{2\pi(x-x_0)}{L_x} \right]\hat{\bm{e}}_y,\label{equ:sf}
\end{equation}
where $U_0$ is the amplitude of the shear flow, $x_0=-L_x/2$ is the
position of the left-hand boundary of the domain, and
$\tau$ is a relaxation time. Here we use $\tau=20\sqrt{d/g}$, which
corresponds to roughly 3.5 convective turnover times.
In their study, \cite{HP09} use the Boussinesq approximation and thus
neglect density stratification.
Here we use the {\sc Pencil
Code}\footnote{http://pencil-code.googlecode.com} which is
fully compressible.
However, in order to stay close to the setup of \cite{HP09} we
employ a weak stratification: the density difference between the top
and the bottom of the domain is only ten per cent and the average Mach
number is always less than 0.1. Hence the effects of compressibility
are small.
The stratification in the associated hydrostatic
initial state can be described by a polytrope with index $m=1$. Unlike
our previous studies \citep[e.g.][]{KKB08}, no stably stratified
layers are present.
The horizontal boundaries are periodic.
We keep the temperature fixed
at the top and bottom boundaries. For the velocity we apply
impenetrable, stress-free conditions according to
\begin{eqnarray}
\partial_zU_x = \partial_z U_y = U_z=0.
\end{eqnarray}
For the magnetic field we use vertical field conditions
\begin{eqnarray}
B_x = B_y=0,
\end{eqnarray}
that allow magnetic helicity to escape from the domain.
\begin{table}
\caption{Summary of the runs. Here ${\rm Ma}=U_{\rm rms}/\sqrt{dg}$,
where $U_{\rm rms}$ is the total rms velocity including the shear flow,
${\rm Ma}_0=u_{\rm rms}/\sqrt{dg}$,
and $\tilde{B}_{\rm rms}=B_{\rm rms}/B_{\rm eq}$, where $B_{\rm eq}=\sqrt{\mu_0 \rho u_{\rm rms}^2}$.
We use ${\rm Rm}\approx18$, ${\rm Co}\approx2.3$, and ${\rm Ra}=10^5$ in
all runs.}
\vspace{12pt}
\centerline{\begin{tabular}{lccccc}
Run & $\rm Ma$ & ${\rm Ma}/{\rm Ma_0}$ & ${\rm Sh}$ & $\tilde{B}_{\rm rms}$ & Dynamo \\
\hline
A0 & $0.028$ & $1.00$ & $0.00$ & -- & no \\
A1 & $0.027$ & $0.98$ & $0.07$ & -- & no \\
A2 & $0.028$ & $1.01$ & $0.14$ & $0.70$ & yes \\
A3 & $0.039$ & $1.42$ & $0.36$ & $1.15$ & yes \\
A4 & $0.063$ & $2.28$ & $0.72$ & $1.97$ & yes \\
A5 & $0.096$ & $3.47$ & $1.45$ & $3.99$ & yes
\label{Runs}\end{tabular}}\end{table}
\subsection{Units, nondimensional quantities, and parameters}
Dimensionless quantities are obtained by setting
\begin{eqnarray}
d = g = \rho_0 = c_{\rm P} = \mu_0 = 1\;,
\end{eqnarray}
where $\rho_0$ is the density at $z_{\rm m}={\textstyle{1\over2}} d$. The units of
length, time, velocity, density, specific entropy, and magnetic field are then
\begin{eqnarray}
&& [x] = d\;,\;\; [t] = \sqrt{d/g}\;,\;\; [U]=\sqrt{dg}\;,\;\; \nonumber \\ && [\rho]=\rho_0\;,\;\; [s]=c_{\rm P}\;,\;\; [B]=\sqrt{dg\rho_0\mu_0}\;.
\end{eqnarray}
The simulations are controlled by the following dimensionless
parameters: thermal and magnetic diffusion in
comparison to viscosity are measured by the Prandtl numbers
\begin{eqnarray}
\Pr=\frac{\nu}{\chi_0}, \quad {\rm Pm}=\frac{\nu}{\eta},
\end{eqnarray}
where $\chi_0=K/(c_{\rm P} \rho_0)$ is the reference value of the
thermal diffusion coefficient, measured in the middle of the layer,
$z_{\rm m}$, in the non-convecting initial state.
We use $\Pr=0.6$ and ${\rm Pm}=2$ in most models.
Note that \cite{HP09} use $\Pr=1$ and ${\rm Pm}=5$,
but based on earlier parameter studies \citep{KKB09a,KKB09c}
we do not expect this difference to be significant.
The efficiency of
convection is measured by the Rayleigh number
\begin{eqnarray}
{\rm Ra}=\frac{g d^4}{\nu \chi_0}\left(- \frac{1}{c_{\rm P}}\frac{{\rm d}s}{{\rm d}z} \right)_{z_{\rm m}},
\end{eqnarray}
again determined from the initial non-convecting state at $z_{\rm m}$. The
entropy gradient can be presented in terms of logarithmic temperature
gradients
\begin{eqnarray}
\left(- \frac{1}{c_{\rm P}}\frac{{\rm d}s}{{\rm d}z} \right)_{z_{\rm m}}=\frac{\nabla-\nabla_{\rm ad}}{H_{\rm P}},
\end{eqnarray}
with $\nabla=(\partial \ln T/\partial \ln p)_{z_{\rm m}}$, $\nabla_{\rm ad}=1-1/\gamma$,
and $H_{\rm P}$ being the pressure scale height at $z=z_{\rm m}$.
The effects of viscosity and magnetic
diffusion are quantified respectively by the fluid and magnetic Reynolds numbers
\begin{eqnarray}
{\rm Re}=\frac{u_{\rm rms}}{\nu k_{\rm f}}, \quad {\rm Rm}=\frac{u_{\rm rms}}{\eta k_{\rm f}}={\rm Pm}\,{\rm Re},
\end{eqnarray}
where $u_{\rm rms}$ is the root-mean-square (rms) value of the velocity
taken from a run where $\meanv{U}^{(0)}={\bm0}$,
and $k_{\rm f}=2\pi/d$ is the wavenumber
corresponding to the depth of the convectively unstable layer.
The strengths of rotation and shear are measured by the Coriolis and
shear numbers
\begin{eqnarray}
{\rm Co}=\frac{2\Omega}{u_{\rm rms} k_{\rm f}}, \quad {\rm Sh}=\frac{S}{u_{\rm rms} k_{\rm f}},
\end{eqnarray}
where $S=2\pi U_0/L_x$.
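For reference, the sketch below collects these definitions in Python.
The diffusivities and amplitudes are illustrative placeholders chosen
to roughly reproduce ${\rm Rm}\approx18$, ${\rm Co}\approx2.3$, and
${\rm Sh}\approx0.72$, not values quoted from our runs; the Rayleigh
number is omitted since it requires the initial entropy gradient.
\begin{verbatim}
import numpy as np

d, g = 1.0, 1.0                           # simulation units
nu, eta, chi0 = 5.0e-4, 2.5e-4, 8.3e-4    # assumed diffusivities
u_rms, U0, Lx, Omega0 = 0.028, 0.1, 5.0, 0.2

kf = 2.0 * np.pi / d                 # wavenumber of unstable layer
Pr, Pm = nu / chi0, nu / eta         # Prandtl numbers
Re = u_rms / (nu * kf)               # fluid Reynolds number
Rm = Pm * Re                         # magnetic Reynolds number
Co = 2.0 * Omega0 / (u_rms * kf)     # Coriolis number
S = 2.0 * np.pi * U0 / Lx            # shear rate of sinusoidal flow
Sh = S / (u_rms * kf)                # shear number
\end{verbatim}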
The size of error bars is estimated by dividing the time series into
three equally long parts.
The largest deviation of the average for each of the three parts from that
over the full time series is taken to represent the error.
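In code, this error estimate amounts to the following direct
transcription of the procedure (assuming \texttt{series} is a NumPy
array holding the time series):
\begin{verbatim}
import numpy as np

def error_estimate(series):
    # Largest deviation of the three sub-averages from the
    # average over the full time series.
    full_mean = series.mean()
    thirds = np.array_split(series, 3)
    return max(abs(part.mean() - full_mean) for part in thirds)
\end{verbatim}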
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{pbrms}
\end{center}
\caption[]{Root mean square value of the total magnetic field as a
function of time for the runs listed in Table~\ref{Runs}.}
\label{pbrms}
\end{figure}
\section{Results}
\subsection{Dynamo excitation}
We first set out to reproduce the results of \cite{HP09}. To achieve
this, we take a run with parameters close to theirs that does not
act as a dynamo in the absence of shear (${\rm Sh}=0$). For this baseline
simulation we choose the parameters ${\rm Rm}\approx18$ and
${\rm Co}\approx2.3$. We then follow the same procedure as \cite{HP09} and
gradually increase ${\rm Sh}$ whilst keeping all other parameters constant
(Table~\ref{Runs}) and determine the growth rate $\lambda$ of the magnetic
field.
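A standard way to extract $\lambda$, and presumably close to what is
done here, is a log-linear fit to $B_{\rm rms}(t)$ over the kinematic
phase; the window $[t_0,t_1]$ must be chosen by inspecting
\Fig{pbrms}, and the helper below is a sketch under that assumption.
\begin{verbatim}
import numpy as np

def growth_rate(t, b_rms, t0, t1):
    # Fit ln B_rms = ln B_0 + lambda * t in the exponential phase.
    m = (t >= t0) & (t <= t1)
    lam, _ = np.polyfit(t[m], np.log(b_rms[m]), 1)
    return lam
\end{verbatim}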
The time evolution of the rms-value of the total magnetic field from
our set of runs is presented in \Fig{pbrms}. We find no dynamo for
${\rm Sh}=0$, and for weak shear with ${\rm Sh}=0.07$ the growth rate of the
field remains virtually the same as in the absence of shear. This
can be understood as follows: imposing large-scale shear via a
relaxation term effectively introduces a friction term for $U_y$ in
places where $\bm{U}-\meanv{U}^{(0)}\neq\bm{0}$, hence lowering the Reynolds
number somewhat. However, as the same relaxation time $\tau u_{\rm rms}
k_{\rm f}\approx3.5$ is used in all runs with shear, we are confident that
these runs can be compared with each other. As the shear is
increased beyond ${\rm Sh}=0.07$, the growth rate first increases roughly
in direct proportion to the shear rate $S$ (\Fig{pgr}). However, for
${\rm Sh}>0.72$ the increase of the growth rate slows down, as found in
several previous studies \citep{Yousef2,KKB08,HP09}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{pgr}
\end{center}
\caption[]{Growth rate $\lambda$ of the total magnetic field, divided
by the shear rate $S$ as a function of ${\rm Sh}$.}
\label{pgr}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{boxes_By}
\end{center}
\caption[]{Magnetic field component $B_y$ in the saturated state from
two runs with weak
(left panel, ${\rm Sh}\approx0.14$, $t u_{\rm rms} k_{\rm f}\approx 700$) and strong
shear (right panel,
${\rm Sh}\approx1.45$, $t u_{\rm rms} k_{\rm f}\approx 350$). The sides of the
boxes show the field at
the periphery of the domain whereas the bottom (top) panel depicts
$B_y$ from $z=0.05d$ ($z=0.95d$).}
\label{boxes_By}
\end{figure*}
\subsection{Field structure}
In earlier studies where a homogeneous shear flow was used, the
large-scale magnetic field in the saturated state was non-oscillating,
showed little dependence on horizontal coordinates, and could hence be well
represented by a horizontal average \citep{KKB08}.
However, in the
present case with sinusoidal shear, the field structure and temporal
behaviour can in principle be more complicated. Furthermore, \cite{HP09} do
not comment on the field structure in their study.
In fact, the only evidence of a large-scale field in their paper is
given in the form of spectra of the magnetic field.
We find that in our simulations the large-scale field is non-oscillating.
It turns out that the magnetic field shows an interesting spatial dependence.
In \Fig{boxes_By} we show visualizations of the structure of the $B_y$
component from the runs with
the weakest (${\rm Sh}\approx0.14$) and the strongest
(${\rm Sh}\approx1.45$) shear in which dynamo action was detected.
In both cases it is clear that the strong large-scale fields are
concentrated on one side of the computational
domain, whereas the other side of the box is almost devoid of
strong coherent fields. This behaviour is even more striking when the
field is averaged over $y$ and $t$; see \Fig{pByxz}. In the next
section we show that the region of strong large-scale fields
coincides with the region where the $\alpha$ effect is strongest.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{pByxz}
\end{center}
\caption[]{Magnetic field component $B_y$ averaged over the saturated
state in time and over the
$y$-dimension from Runs~A2-A5.}
\label{pByxz}
\end{figure}
\subsection{$\alpha$ effect}
The origin of large-scale magnetic fields in helical turbulence is
commonly attributed to the $\alpha$ effect in turbulent dynamo
theory \citep[e.g.][]{Mof78,KR80,RH04}. Results for convection
simulations, making use of the test-field method \citep{KKB09b},
suggest that the $\alpha$ effect does
indeed contribute to large-scale dynamo action in simulations
presented by \cite{KKB08}.
However, it was also shown that, in order to fully explain the
simulation results, additional contributions from the shear--current
and $\meanv{\Omega}\times \meanv{J}$ effects \citep{R69} appear
to be needed.
On the other hand, \cite{HP09} claim that in their setup the $\alpha$ effect
is small, unaffected by shear, and thus incapable of driving a
large-scale dynamo. The setup of \cite{HP09} is based on the Boussinesq
approximation whereby stratification is not present in their
system. However, the impenetrable vertical boundaries also generate an
inhomogeneity, which, in a rotating system leads to an $\alpha$ effect
of the form \citep{SKR66}
\begin{equation}
\alpha_{ij}^{(\Omega)}=\alpha_1 (\bm{G}\cdot\bm\Omega)\delta_{ij} + \alpha_2 (G_i\Omega_j + G_j\Omega_i),
\end{equation}
where $G_i$ denotes the inhomogeneity and $\bm\Omega$ is the rotation
vector. In Boussinesq convection with rotation the kinetic helicity
and thus the $\alpha$ effect are antisymmetric around the midplane of the
layer. In such cases it can be useful to average over one vertical half of the
layer to obtain an estimate of $\alpha$. We note that mean-field
dynamo models have shown that the details of the $\alpha$ profile
can also play a significant role \citep[e.g.][]{BS87,SG03}.
In what follows, we show in most cases the full profile of
$\alpha$ and present averages over the upper half of the domain only when comparing
directly to \cite{HP09}.
Since the simulations in the present paper are weakly
stratified, only minor deviations from a perfectly symmetric profile can
be expected to occur.
Adding a shear flow of the form presented in \Eq{equ:sf} produces
large-scale vorticity $\mean{W}_z\propto\sin \tilde{x}$,
where $\tilde{x}$ is a shifted and rescaled $x$ coordinate
with $\tilde{x}=2\pi(x-x_0)/L_x$.
Such vorticity leads to an $\alpha$ effect
\citep[see, e.g.][]{RK03,RS06},
\begin{equation}
\alpha_{ij}^{(\meanv{W})}=\alpha_1 (\bm{G}\cdot\meanv{W})\delta_{ij} + \alpha_2 (G_i\mean{W}_j + G_j\mean{W}_i),\label{alpW}
\end{equation}
which, in the present case, leads to $\alpha_{yy}\propto\sin \tilde{x}$.
Thus, when both rotation and shear are present, $\alpha=\alpha(x,z)$ is a
function of both $x$ and $z$.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{palpyy}
\end{center}
\caption[]{The coefficient $\alpha$, averaged over the $y$-direction
and time for Runs~A0-A5.}
\label{palpyy}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{palpyy_shear}
\end{center}
\caption[]{The contribution of shear to the $\alpha$ effect according
to \Eq{shearalp}, averaged over the $y$-coordinate and time for
Runs~A1-A5.}
\label{palpyy_shear}
\end{figure}
In order to measure the $\alpha$ effect, we impose a weak uniform
magnetic field $B_0\hat{\bm{e}}_y$, with $B_0\approx4\cdot10^{-5} B_{\rm
eq}$, and measure the response of the relevant ($y$)
component of the electromotive force. Our $\alpha$ is then obtained
from
\begin{equation}
\alpha \equiv \alpha_{yy} = \mathcal{\mean{E}}_{y}/B_0.\label{equ:alpha}
\end{equation}
In contrast to the study of \cite{HP09}, we do not usually allow the field
that is generated in addition to $B_0$ to saturate, but reset it after
a time interval $\Delta t \approx 10\,(u_{\rm rms} k_{\rm f})^{-1}$. Such a procedure
was first introduced by \cite{OSBR02} and it was used also in \cite{KKOS06}
to circumvent the complications that arise due to the additionally
generated fields. A more systematic study by \cite{HdSKB09} showed
that, when a successful large-scale dynamo is present in the system,
the kinematic value of $\alpha$ can be obtained only if $\Delta t$
is not too long.
However, in the present study and also in that of \cite{HP09} there is no
dynamo in the runs from which $\alpha$ is computed. We find that it is
still necessary to use resetting to obtain the correct value of
$\alpha$ even in the absence of a dynamo. However, we postpone detailed
discussion of this issue to Section \ref{ImportanceOfResetting}.
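The bookkeeping of this measurement can be sketched as follows; the
arrays \texttt{t} and \texttt{emf\_y} are hypothetical time series of
time and of the horizontally averaged electromotive force extracted
from a run:
\begin{verbatim}
import numpy as np

def alpha_with_resetting(t, emf_y, B0, reset_interval):
    # average emf_y/B0 separately within each resetting window, so
    # that the additionally generated fields never grow large
    alphas, t0 = [], t[0]
    while t0 < t[-1]:
        mask = (t >= t0) & (t < t0 + reset_interval)
        if mask.any():
            alphas.append(emf_y[mask].mean() / B0)
        t0 += reset_interval
    alphas = np.array(alphas)
    return alphas.mean(), alphas.std() / np.sqrt(len(alphas))
\end{verbatim}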
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{palpside}
\end{center}
\caption[]{The coefficient $\alpha$ averaged over the upper half
($0.5d<z<d$) of the domain from the left ($x<0$, solid line) and
right ($x>0$, dashed line) sides of the box.}
\label{palpside}
\end{figure}
Our results for $\alpha$ from runs with constant rotation and varying
shear are shown in \Fig{palpyy}. We find that in the absence of shear,
$\alpha$ is a function only of $z$ and has a magnitude of about
$0.6\alpha_0$,
where $\alpha_0={\textstyle{1\over3}} u_{\rm rms}$ is a reference value, and $u_{\rm rms}$
is taken from a run with
${\rm Sh}=0$. When shear is introduced, $\alpha$ increases
(decreases) in the regions of the domain where $\sin \tilde{x}>0$ ($\sin
\tilde{x}<0$). However, for strong shear, the contribution to $\alpha$ from
shear no longer appears to be symmetric around $x=0$. This can be understood in
terms of the shear parameter
\begin{equation}
q=-\frac{\partial \mean{U}^{(0)}}{\partial x}/\Omega,
\end{equation}
where
\begin{equation}
\frac{\partial \mean{U}^{(0)}}{\partial x}=S\sin\tilde{x}.
\end{equation}
The flow is linearly unstable for $q>2$ (Rayleigh instability criterion).
Although the maximum value of $q$ in our
simulations is about $1.25$, it is clear that for ${\rm Sh}\ga0.36$ (with $|q|\ga0.31$), the
profile and the magnitude of $\alpha$ are no longer significantly affected by
the increasing shear.
In order to illustrate this we compute the contribution of $\alpha$
due to shear from runs with ${\rm Sh}\neq0$ by subtracting the
$\alpha$ that was found in the absence of shear using
\begin{equation}
\alpha^{(\mean{W})}=\alpha-\alpha^{(\Omega)},\label{shearalp}
\end{equation}
where $\alpha^{(\Omega)}$ is the $\alpha$ obtained from Run~A0 with no
shear but only rotation.
The results are shown in \Fig{palpyy_shear} and clearly show that for
small ${\rm Sh}$ ($\la0.14$), the shear-induced $\alpha$ shows a
sinusoidal variation as a function of $x$.
For larger shear the profile of $\alpha^{(\mean{W})}$ is no longer
antisymmetric around
$x=0$. This could reflect the asymmetry of the results for $q>0$
($-L_x/2<x<0$) and $q<0$ ($0<x<L_x/2$), that was found earlier
by Snellman et al.\ (2009) in a somewhat
different context of forced turbulence under the influence
of rotation and shear.
They found that the
Reynolds stresses were significantly different in setups with
different sign of ${\rm Sh}$ or $q$, and that this asymmetry became more
pronounced when the magnitude of shear was increased. Similar behavior
has been seen in the magnetohydrodynamic regime by Korpi et
al.\ (2009) in the Reynolds and Maxwell stresses.
We also observe that the magnitude of $\alpha^{(\mean{W})}$ does not
significantly change for ${\rm Sh}\ga0.36$.
This could indicate that the
$\alpha$ effect due to shear saturates and that a simple relation like
\Eq{alpW} is no longer valid.
This is apparent from
Figure~\ref{palpside} which shows $\alpha$ volume-averaged over the
upper half of the domain separately for the left and right sides of the
box. For weak shear (${\rm Sh}\la0.2$) we find that $\alpha$ is linearly
proportional to shear. For ${\rm Sh}\ga0.4$ the values of $\alpha$ on both
sides appear to saturate to constant values. The results thus imply that
the coefficients $\alpha_1$ and $\alpha_2$ in \Eq{alpW} should
depend on $\meanv{W}$ when shear is strong.
We note that larger values of shear were also used
in \cite{HP09}.
The large vortex seen in the velocity
field in their Figure~3 indicates that some of their runs with strong
shear could indeed be in the Rayleigh-unstable regime.
With the present data we cannot ascribe the appearance of the
large-scale dynamo solely to the $\alpha$ effect. However, the
coincidence of regions of strong magnetic fields and large $\alpha$
suggests that the $\alpha$ effect is indeed an important ingredient in
generating the large-scale fields.
\begin{table}
\caption{Summary of runs with and without resetting with
varying $B_0$. Run~B1 corresponds to Run~A0 in
Table~\ref{Runs}. ${\rm Co}\approx2.3$, ${\rm Sh}=0$, and ${\rm Ra}=10^5$
in all runs and the imposed field in normalised form is given by
$\tilde{B}_0=B_0/B_{\rm eq}$.}
\vspace{12pt}
\centerline{\begin{tabular}{lcccc}
Run & ${\rm Rm}$ & $\tilde{B}_0$ & $\mean{\alpha}/\alpha_0$ & Resetting \\
\hline
B1 & $18$ & $4\cdot10^{-5}$ & $0.39\pm0.05$ & yes \\
B2 & $18$ & $0.04$ & $0.36\pm0.03$ & yes \\
B3 & $18$ & $0.11$ & $0.37\pm0.05$ & yes \\
B4 & $18$ & $0.39$ & $0.63\pm0.21$ & yes \\
B5 & $18$ & $1.25$ & $0.25\pm0.10$ & yes \\
B6 & $18$ & $4.47$ & $0.06\pm0.05$ & yes \\
\hline
C1 & $18$ & $4\cdot10^{-5}$ & $0.09\pm0.06$ & no \\
C2 & $18$ & $0.04$ & $0.12\pm0.09$ & no \\
C3 & $18$ & $0.12$ & $0.09\pm0.02$ & no \\
C4 & $18$ & $0.37$ & $0.12\pm0.05$ & no \\
C5 & $18$ & $1.27$ & $0.08\pm0.03$ & no \\
C6 & $18$ & $2.22$ & $0.06\pm0.01$ & no \\
C7 & $18$ & $4.10$ & $(1.10\pm0.34)\cdot10^{-3}$ & no \\
\hline
D1 & $30$ & $4\cdot10^{-5}$ & $0.36\pm0.03$ & yes \\
D2 & $30$ & $4\cdot10^{-5}$ & $-0.03\pm0.23$ & no
\label{Resetruns}\end{tabular}}\end{table}
\subsection{Importance of resetting}
\label{ImportanceOfResetting}
It has previously been demonstrated that the imposed field
method can yield misleading results if a successful large-scale dynamo
is operating in the system and long time averages are
employed \citep{HdSKB09}.
In this case, unexpectedly low values of $\alpha$ could be explained
by the fact that the system is already in a saturated state.
However, many papers have reported small
values of $\alpha$ also for systems that do not act as
dynamos \citep[e.g.][]{CH06,HC08,HP09}.
These results are in apparent contradiction with
those of \cite{OSBR02,KKOS06,KKB09a}, who use either the imposed field
method with resetting or the test field method.
In these cases the systems must be in a truly kinematic state.
Thus, the explanation of \cite{HdSKB09} does not apply.
The purpose of this section is therefore to resolve this puzzle.
We begin the investigation of this issue by performing two sets of
simulations where we study the dependence of $\alpha$, as measured
using \Eq{equ:alpha}, on $B_0$ with runs where the field is being
periodically reset or left to evolve unimpeded (Sets~B and C,
see Table~\ref{Resetruns}). We take
Run~A0 with ${\rm Rm}\approx18$ and no shear as our baseline and vary
$B_0/B_{\rm eq}$ in the range $4\cdot10^{-5}\ldots4$. Our results for
$\mean\alpha$, defined as the volume average over the upper half of
the box,
\begin{equation}
\mean\alpha=\frac{2}{L_z}\int_{{\textstyle{1\over2}} L_z}^{L_z} \frac{{\mathcal{\mean{E}}_{y}}(z)}{B_0} dz,\label{altop}
\end{equation}
are shown in \Fig{palp_B0}. We see that, with the exception of the
strongest $B_0$ case in Set C, the
results for both sets are in accordance with a simple quenching
formula
\begin{equation}
\mean\alpha=\frac{q_1 \alpha_0}{1+q_2(B_0/B_{\rm eq})^2},
\end{equation}
where $q_1$ and $q_2$ are constants which we use as free parameters in
the fitting. We find that the value of $\mean\alpha$ for weak fields is
consistently four times smaller in the cases where no resetting is
performed. The values of $\mean\alpha$ in the range
$B_0/B_{\rm eq}\approx0.04\ldots1$ are essentially the same as those
obtained for our standard imposed field strength
$B_0/B_{\rm eq}\approx4\cdot10^{-5}$ (see also Table~\ref{Resetruns}). This
suggests that the values of $\mean\alpha$ in this range represent the
kinematic stage and that the factor of four between the results in
the different sets arises from the additional inhomogeneous mean magnetic
fields generated in the cases where no resetting is performed.
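The fit itself can be reproduced with a standard least-squares
routine; the data points below are the Set~B entries of
Table~\ref{Resetruns}:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def quenching(B0, q1, q2):
    # mean(alpha)/alpha_0 = q1 / (1 + q2 (B_0/B_eq)^2)
    return q1 / (1.0 + q2 * B0**2)

B0 = np.array([4e-5, 0.04, 0.11, 0.39, 1.25, 4.47])  # B_0/B_eq
al = np.array([0.39, 0.36, 0.37, 0.63, 0.25, 0.06])  # mean(alpha)/alpha_0
(q1, q2), cov = curve_fit(quenching, B0, al, p0=[0.4, 0.5])
\end{verbatim}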
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{palp_B0}
\end{center}
\caption[]{Coefficient $\mean\alpha$ according
to \Eq{altop} as a function of $B_0$ from
runs where the field is either being reset (Set~B, solid line) or
left to evolve on its own (Set~C, dashed line). The dotted lines
show fits to a quenching formula given in the legend where we use
the coefficients $q_1=0.4$ ($0.1$) and $q_2=0.5$ ($0.2$) in the
upper (lower) curve. The diamonds on the left of the vertical axis
indicate the values of $\mean\alpha$ for
$B_0/B_{\rm eq}\approx4\cdot10^{-5}$.}
\label{palp_B0}
\end{figure}
This is demonstrated in the uppermost panel of \Fig{peymz} where the
additionally generated horizontal magnetic fields, averaged over time
and horizontal directions, are shown from Run~C1. The origin of these
fields can be understood as follows: the imposed field
$B_0 \hat{\bm e}_y$ induces a $z$-dependent
electromotive force in the $y$ direction, i.e.\ ${\mathcal{\mean{E}}_{y}}(z)$.
This leads to the generation of an $x$ component of mean magnetic
field via $\dot{\mean{B}}_x(z)=\ldots-{\mathcal{\mean{\bm E}}}_{y,z}$ which, on the other
hand, induces a $z$ dependent electromotive force ${\mathcal{\mean{\bm E}}}_x(z)$ and hence
$\dot{\mean{B}}_y(z)=\ldots+{\mathcal{\mean{\bm E}}}_{x,z}$. Since these additional fields
are functions of $z$, mean currents $\mean{J}_x(z)=-\mean{B}_{y,z}$ and
$\mean{J}_y(z)=\mean{B}_{x,z}$ are also present. We emphasize that these
fields arise due to the presence of an imposed field and decay if
the imposed field is removed.
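This feedback loop can be illustrated with a one-dimensional
mean-field sketch; the $\alpha(z)$ profile and the coefficients below
are assumptions chosen to be subcritical for dynamo action, not the
values of our simulations:
\begin{verbatim}
import numpy as np

nz, Lz = 64, 1.0
z  = np.linspace(0.0, Lz, nz, endpoint=False)
dz = z[1] - z[0]
B0      = 1e-4                          # weak imposed field
alpha_z = 0.02 * np.sin(2*np.pi*z/Lz)   # assumed alpha(z), subcritical
eta     = 5e-3                          # total diffusivity

def ddz(f):                             # periodic central difference
    return (np.roll(f, -1) - np.roll(f, 1)) / (2*dz)

Bx, By = np.zeros(nz), np.zeros(nz)
dt = 0.1 * dz**2 / eta
for _ in range(20000):
    Jx, Jy = -ddz(By), ddz(Bx)          # mean currents
    Ex = alpha_z * Bx        - eta*Jx
    Ey = alpha_z * (By + B0) - eta*Jy   # imposed field enters here
    Bx += dt * (-ddz(Ey))               # dBx/dt = -dEy/dz
    By += dt * ( ddz(Ex))               # dBy/dt = +dEx/dz
# Bx, By now hold the additionally generated z-dependent mean
# fields; they decay if B0 is set to zero
\end{verbatim}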
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{peymz}
\end{center}
\caption[]{Top panel: horizontally averaged horizontal components
of the magnetic field from Run~C1. Middle panel: vertical profiles
of $\alpha(z)$ from the imposed-field method (solid line) and test
field calculation with $k=k_1$ (dash-dotted line), and $\eta_{\rm t}(z)$
(dashed line). Bottom panel:
$y$-component of the electromotive force (solid line) compared with
$\alpha \mean{B}_y-\eta_{\rm t} \mean{J}_y$ (dashed line), and $\alpha
\mean{B}_y$ (dash-dotted line).}
\label{peymz}
\end{figure}
It is now clear that $\alpha$ cannot be determined using
\Eq{equ:alpha} in this situation because the electromotive force picks
up contributions from the generated fields according to
\begin{equation}
{\mathcal{\mean{E}}_{y}}(z)=\alpha(z)[\mean{B}_y(z)+B_0]-\eta_{\rm t}(z) \mean{J}_y(z).\label{equ:memfy}
\end{equation}
Here we omit the off-diagonal components of $\alpha_{ij}$ and
$\eta_{ijk}$ whose influence on the final result is marginal.
Since the magnetic fields are weak, $\alpha$ and $\eta_{\rm t}$ can be
considered as the kinematic values. We use here $\alpha$ as
determined from Run~B1 (imposed field with resetting) and $\eta_{\rm t}$
obtained from a corresponding test field simulation (see the
middle panel of \Fig{peymz}) where the test
fields have a $\sin kz$ dependence on $z$ with $k/k_1=1$ and
$k_1=2\pi/d$.
For more details about the test field method in the context
of convection simulations see \cite{KKB09a}.
We normalise the turbulent diffusion with a reference value
$\eta_{\rm t0}={\textstyle{1\over3}} u_{\rm rms} k_{\rm f}^{-1}$.
The bottom panel of \Fig{peymz} shows that \Eq{equ:memfy} with these
$z$-dependent
coefficients gives a good fit to the simulation
data of ${\mathcal{\mean{E}}_{y}}$ from Run~C1 when the actual mean magnetic fields
are used.
The diffusion term in \Eq{equ:memfy} has a noticeable effect only
near the boundaries where the current is also largest. These results
demonstrate that the interpretation of the electromotive force in
terms of \Eq{equ:alpha} is insufficient if long time averages
are used.
A general comment is here in order.
Near boundaries, as well as elsewhere in the domain where
the scale of variation of the mean field becomes comparable
with the scale of the turbulent eddies, a simple multiplication
with turbulent transport coefficients becomes inaccurate and one
needs to resort to a convolution with integral kernels.
The kernels can be obtained via Fourier transformation using the
test-field results for different wavenumbers \citep{BRS08}.
In the present paper we have only considered the result for the
wavenumber $k=2\pi/L_z$.
This is also the case for the $\eta_{\rm t}$ shown in the middle panel of
\Fig{peymz}.
The $\alpha$ obtained from the test-field method has a more nearly
sinusoidal shape, but with an amplitude similar to that of the profile
shown in \Fig{peymz}.
This confirms the internal consistency of our result.
Another facet of the issue is highlighted when the magnetic Reynolds
number is increased from 18 to 30 (Runs D1 and D2, see \Fig{palp_reset}).
The larger
${\rm Rm}$ value is very close to marginal for dynamo action whereas the smaller
value is clearly subcritical. We find that, if
resetting is used, the kinematic value of $\mean\alpha$ is independent
of ${\rm Rm}$ in accordance with mean-field theory.
The situation changes dramatically if we let the field evolve without
resetting; see the two lower panels of \Fig{palp_reset}. For
Run~C1 with ${\rm Rm}\approx18$ we can still extract a statistically
significant mean value of $\mean\alpha$ although the scatter of the
data is considerable. For Run~D2 with ${\rm Rm}\approx30$ the
fluctuations of $\mean\alpha$ increase even further so that a
very long time average would be needed to obtain a statistically
meaningful value. A similar convergence issue has been
encountered in the studies by \cite{CH06,HC08,HP09}. However, as we
have shown above, the interpretation of such values cannot be done
without taking into account the additionally generated fields and the
effects of turbulent diffusion.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{palp_reset}
\end{center}
\caption[]{Time series of the coefficient $\mean\alpha$ for
Runs~B1 and D1 (uppermost panel), C1 (middle), and D2 (bottom).}
\label{palp_reset}
\end{figure}
\section{Conclusions}
We use three-dimensional simulations of weakly stratified turbulent
convection with sinusoidal shear to study dynamo action. The
parameters of the simulations are chosen so that in the absence of
shear no dynamo is present. For weak shear the growth rate of the
magnetic field is roughly proportional to the shear rate.
This is in accordance
with earlier studies. A large-scale magnetic field is found in all
cases where a dynamo is excited. The strongest large-scale fields are
concentrated in one half of the domain ($x<0$), with a sign change
close to $x=0$ and weaker field of opposite sign in the other half
($x>0$) of the box.
In an earlier study, \cite{HP09} investigated a similar system and came to
the conclusion that the dynamo cannot be explained by $\alpha \Omega$
or $\alpha^2$ dynamos due to a low value of $\alpha$ determined using
the imposed-field method.
However, we demonstrate that their method, where long time averages
are used, yields the kinematic value of $\alpha$ only if
additionally generated inhomogeneous mean fields are taken into
account. Hence, this analysis becomes meaningless without the
knowledge of turbulent diffusion.
The situation has now changed through the widespread usage of the test-field method
to obtain values of $\eta_{\rm t}$ at the same time \citep[see, e.g.,][]{Gre08}.
Furthermore, we show that, if the
magnetic field is reset before the additionally generated fields
become comparable to the imposed field, the kinematic value of $\alpha$
can be obtained by much shorter simulations and without the
complications related to gradients of $\meanv{B}$ or statistical
convergence. These issues were already known for some time
\citep{OSBR02,KKOS06}, but they have generally not been taken into
consideration.
Another new aspect is the sinusoidal shear that is expected to lead to
a sinusoidal $\alpha$ profile \citep[e.g.][]{RS06}. In the
study of \cite{HP09} a volume average of $\alpha$ over one vertical half of the domain
is used, which averages out the contribution of $\alpha$ due to shear.
We find that, in the absence of shear, $\alpha$ is approximately
antisymmetric with respect to the midplane of the convectively
unstable layer suggesting that the main contribution to $\alpha$ comes
from the inhomogeneity due to the boundaries rather than due to
density stratification. When sinusoidal shear is introduced into the
system, an additional sinusoidal variation of $\alpha$
in the $x$ direction is indeed present. When
the shear is strong enough, the $\alpha$ profile is highly
anisotropic. The maximum value of $\alpha$ is close to the expected one,
$\alpha_0={\textstyle{1\over3}} u_{\rm rms}$,
which is significantly higher than the $\alpha$ in \cite{HP09}.
We also note that the regions of strong large-scale magnetic fields
coincide with the regions where the $\alpha$ effect is the
strongest.
This supports the idea that the $\alpha$
effect does indeed play a significant role in generating the
large-scale field.
\section*{Acknowledgments}
The authors acknowledge Matthias Rheinhardt for pointing out the importance
of turbulent diffusion in connection with non-uniform mean fields when
no resetting is used.
The numerical simulations were performed with the supercomputers
hosted by CSC -- IT Center for Science in Espoo, Finland, who are
administered by the Finnish Ministry of Education. Financial support
from the Academy of Finland grant Nos.\ 121431 (PJK) and 112020 (MJK),
the Swedish Research Council grant 621-2007-4064, and
the European Research Council under the AstroDyn Research Project 227952 are
acknowledged.
The authors acknowledge the hospitality of NORDITA during the program
``Solar and Stellar Dynamos and Cycles''.
\section{Introduction}
The discovery of a
gamma-ray burst optical counterpart with a measurable redshift
seems to have shown that the sources are cosmological in origin
(\cite{djor97,metz97}).
The observed fading multi-wavelength afterglows are so far consistent
with the simple relativistic blastwave model (\cite{mesz97})
which radiates via a synchrotron shock. However, the emission mechanism
resulting in the prompt gamma rays remains a mystery.
Studies of gamma-ray burst (GRB) spectral evolution have
uncovered many trends which may be used to test possible emission
mechanisms. In general, studies of GRB spectral evolution have
focused on the ``hardness'' of bursts, measured either by the
ratio between two detector channels or with
more physical variables such as the spectral break
or peak power energy $\rm{E}_{\rm{pk}}$ (\cite{ford95})
which is the maximum of $\nu F_{\nu}$, where
$\nu$ is photon energy and $F_{\nu}$ is the specific energy flux.
Such hardness parameters were found to either follow a
``hard-to-soft'' trend (\cite{norr86}),
decreasing monotonically while the flux rises and falls,
or to ``track'' the flux during GRB pulses (\cite{gole83,karg94}).
The discovery that $\rm{E}_{\rm{pk}}$ often decays exponentially in bright,
long, smooth BATSE GRB pulses
\textit{as a function of photon fluence} $\Phi$
($=\int_{t'=0}^{t'=t} \rm{F_{N}}(t') dt'$)
(\cite{lian96}, hereafter LK96) provided a new constraint for emission models
(\cite{lian97a,lian97b,daig98}).
In their analysis, LK96 fit the function
\begin{equation}
\rm{E}_{\rm{pk}}(t) = \rm{E}_{\rm{pk(0)}} \rm{e}^{-\Phi(t) / \Phi_{0}^{\rm{LK}}}
\end{equation}
to 37 GRB pulses in 34 bursts.
To interpret this empirical trend, they differentiated
Eq. 1 to find
\begin{equation}
-\rm{d}\rm{E}_{\rm{pk}}/\rm{dt} = \rm{E}_{\rm{pk}}~\rm{F}_{N} / \Phi_{0}^{\rm{LK}} \approx \rm{F}_{E}
/ \Phi_{0}^{\rm{LK}}
\end{equation}
where $\rm{F}_{E} = \int_{\rm{E} \approx 30
\rm{keV}}^{\rm{E} \approx 2000 \rm{keV}}
\rm{E}~N(\rm{E})~\rm{dE}$ is the BATSE energy flux
(see Eq. 1 of LK96). In this paper, we wished to avoid the
assumption that $\rm{E}_{\rm{pk}}~\rm{F}_{N} \approx \rm{F}_{E}$. To do this, we directly
tested the trend $-\rm{d}(\rm{E}_{\rm{pk}})/\rm{dt} = \rm{F}_{E} / \Phi_{0}$
by integrating it to give us the function
\begin{equation}
\rm{E}_{\rm{pk}}(t) = \rm{E}_{pk(0)} - \mathcal{E}\mathrm{(t)} \mathrm{/ \Phi_{0}}
\end{equation}
where $\mathcal{E}\mathrm{(t)}$
($=\int_{t'=0}^{t'=t} \rm{F_{E}}(t') dt'$) is the BATSE energy fluence.
We emphasize that this is not a fundamentally different trend from the form
used in LK96.
The decay constant $\Phi_{0}^{\rm{LK}}$
appeared to be invariant among pulses during some bursts
analyzed in LK96, suggesting
that individual pulses in a burst may originate in the same plasma.
These discoveries coupled with the observed evolution of the spectral shape
(\cite{crid98a}) suggested that saturated
inverse Comptonization may be a viable mechanism during the
gamma-ray active phase of bursts (\cite{lian97a}),
regardless of the distance scale (\cite{lian97b}).
\section{Procedures}
To determine the evolution of GRB spectral shapes,
we examined High Energy Resolution
data collected from the BATSE Large-Area Detectors (LADs)
and Spectroscopy Detectors (SDs) on board the Compton
Gamma-Ray Observatory (\cite{fish89}).
We began with the $126$ bursts
which appear in \cite{pree98}.
These bursts were chosen for having a BATSE
fluence (28-1800 keV) $> 4 \times 10^{-5}$ erg cm$^{-2}$
or a peak flux (50-300 keV on a 256-ms time scale)
$> 10~\rm{photon~s^{-1}~cm^{-2}}$.
The counts from the detector most nearly
normal to the line of sight of each burst (burst angle closest to 0) were
background-subtracted and binned into
time intervals each with a SNR of $\sim 45$ within the 28 keV to 1800 keV
range. Such a SNR has been found to be necessary in time-resolved
spectroscopy of BATSE gamma-ray bursts (\cite{pree98}).
We deconvolved the gamma-ray spectra of each time interval using
the Band et al. (1993) GRB function
\begin{equation}
\rm{N_{E}(E)} = \left\{ \begin{array}{ll}
\rm{A\left(\frac{E}{100~\rm{keV}}\right)^{\alpha}
exp\left(-\frac{E}{E_{\rm{0}}}\right),} &
\rm{(\alpha-\beta)E_{\rm{0}} \geq E,} \\[2mm]
\rm{A\left[\frac{(\alpha-\beta)E_{\rm{0}}}{100~\rm{keV}}\right]^{\alpha-\beta}
exp(\beta-\alpha) \left(\frac{E}{\rm{100~keV}}\right)^{\beta},} &
\rm{(\alpha-\beta)E_{\rm{0}} \leq E,}
\end{array} \right.
\end{equation}
\noindent where A is the amplitude (in $\rm{photon}~
\rm{s}^{-1}~\rm{cm}^{-2}~
\rm{keV}^{-1}$) and $\rm{E}_{0} = \rm{E}_{\rm{pk}} / (2 + \alpha)$.
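For reference, a direct transcription of Eq.~(4) (function and
variable names are ours):
\begin{verbatim}
import numpy as np

def band(E, A, alpha, beta, Epk):
    # Band et al. (1993) photon spectrum; E in keV,
    # A in photon s^-1 cm^-2 keV^-1, E0 = Epk/(2 + alpha)
    E0 = Epk / (2.0 + alpha)
    Eb = (alpha - beta) * E0
    low  = A * (E/100.0)**alpha * np.exp(-E/E0)
    high = (A * (Eb/100.0)**(alpha - beta) * np.exp(beta - alpha)
            * (E/100.0)**beta)
    return np.where(E <= Eb, low, high)
\end{verbatim}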
While LK96 assumed that $\alpha$ and $\beta$ were constant
during the course of each burst,
this has since been shown to be untrue with a larger data set
(\cite{crid97}).
We thus left $\alpha$ and $\beta$ as free parameters in our fits.
At this point, we needed to select pulses within our bursts that we could
use to test Equations 1 and 3. Ideally, our pulses would not overlap other
pulses and our method for choosing the time bins associated with a pulse would
not be biased.
Unfortunately,
forcing our time bins to have a SNR $\sim45$, so that spectra may be fit to them,
costs much time resolution. Pulses which would be easily separable at a higher time
resolution become blurred together.
In Figure \ref{Figure1}, we show an example of what
would likely be identified as two pulses
in our coarse (SNR $\sim45$) data.
Below it, 64-ms count rate data
for this same burst, obtained
from the Compton Observatory Science Support Center (COSSC),
is plotted. With higher time resolution,
we see that this burst is composed of at least 4 distinct pulses.
To avoid contaminating our sample with overlapping pulses (and to avoid biases introduced by
a human in pulse selection), we used the
COSSC 64-ms count rates and background fits
to determine where each of our pulses began and ended.
To do this, we developed an interactive IDL routine to fit
the Norris et al. (1996) pulse profile to the individual pulses within these bursts.
The pulse profile function for the count rate C(t) can be written
\begin{equation}
\rm{ C(t) = A~exp~ \left( - {\left| \frac{t-t_{max}}{\sigma_{r,d}} \right|}^{\nu} \right)}
\end{equation}
where $\rm{t_{max}}$ is the time of maximum count rate, $\sigma_{\rm{r}}$ and
$\sigma_{\rm{d}}$ are the count rate rise ($t<\rm{t_{max}}$) and decay ($t>\rm{t_{max}}$) time constants, and
$\nu$ is the pulse ``peakedness''. For an exponential rise and decay, $\nu = 1$.
When $\nu = 2$ and $\sigma_{\rm{r}} = \sigma_{\rm{d}}$ this shape describes a Gaussian.
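A direct transcription of Eq.~(5), with the rise constant applied
before $\rm{t_{max}}$ and the decay constant after (names are ours):
\begin{verbatim}
import numpy as np

def norris_pulse(t, A, t_max, sigma_r, sigma_d, nu):
    # exponential rise/decay for nu = 1; Gaussian for nu = 2
    # when sigma_r = sigma_d
    sigma = np.where(t < t_max, sigma_r, sigma_d)
    return A * np.exp(-np.abs((t - t_max) / sigma)**nu)
\end{verbatim}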
With 64-ms resolution, we found that in many bursts pulses overlapped in a fashion
making them too complex
for us to fit individual pulses. Other bursts
contained pulses which could be resolved, but none of their pulses
lasted long enough to span at least 4 time bins with SNR $\sim45$. For
13 of the bursts, no processed 64-ms data was available. We discarded bursts
which fell into any of these three categories. This left us with
$26$ bursts.
This is comparable to the 34 multi-pulse bursts used in LK96,
which included several pulses which appear to be overlapping when
confronted with the 64-ms data. To avoid overlap in our own analysis,
we used only bins which were dominated by a single pulse
(at least $70\%$ of the counts from one pulse).
Within our $26$ bursts,
we identified $41$ regions
composed of at least 4 time bins dominated by a single pulse.
The time bins selected for each pulse were consecutive in all but
two cases (BATSE triggers 451 and 3290). The 64-ms data for each of these
two bursts suggests that a short pulse occurred near the middle of a longer
pulse, which forced us to fit two separate regions with a single decay
law.
Our next step was
to test the $\rm{E}_{\rm{pk}}$-fluence relations (Equations 1 and 3) with each of the selected pulses.
Our motivation for emphasizing the
$\rm{E}_{\rm{pk}}$-energy fluence relation (Eq. 3) as opposed to the
$\rm{E}_{\rm{pk}}$-photon fluence relation (Eq. 1) is that we believe
that the former represents
a more physical quantity.
It is possible (perhaps even likely) that the observed BATSE photon
fluence is a poor representation of the bolometric photon fluence.
The BATSE LAD energy window was
designed to contain the peak of GRB
energy spectra, \emph{not} the peak of the photon
spectra. By using
energy fluence in place of photon fluence, we can avoid
the shakier assumption that the BATSE LAD
photon flux is proportional to the bolometric photon flux.
LK96 had attempted this but found that statistical
errors in $\mathcal{E}$ were too large to be useful.
This was a result of their fixing $\alpha$ and $\beta$ in the spectral fitting. When we fit
the time-resolved spectra with variable $\alpha$ and $\beta$, we obtained
much smaller
errors for $\mathcal{E}$, which made testing the $\rm{E}_{\rm{pk}}-\mathcal{E}$ relation
possible. Nevertheless, we also fit Eq. 1 to our pulses
in this paper both for
historical reasons and as a test of our interpretation.
\section{Results}
We fit both the $\rm{E}_{\rm{pk}}-\Phi$ and
the $\rm{E}_{\rm{pk}}-\mathcal{E}$ relations to our $41$ clean pulses
using FITEXY (\cite{pres92}).
Table 1 summarizes the results for each of the pulses in our sample. The
first column is the BATSE trigger number. The second column is the burst name,
which is also the date the burst triggered in the format YYMMDD.
The third column is the number of the LAD which was used for processing.
The fourth column lists the $\rm{t_{max}}$ from the Norris function fit to the pulse.
The fifth column
is the energy fluence within the bins selected for fitting
in units $\rm{MeV~cm^{-2}}$.
The sixth column gives the number of bins selected for fitting.
The seventh column is the fitted value $\Phi_{0}^{\rm{LK}}$ for
each pulse defined in Eq. 1. The eighth and ninth columns are
the fitted values of $\Phi_{0}$ and $\rm{E_{pk(0)}}$ for each pulse as
defined in Eq. 3.
Of course, $\Phi_{0}$
will only equal $\Phi_{0}^{\rm{LK}}$ if $\rm{E}_{\rm{pk}} \rm{F}_{N} = \rm{F}_{E}$.
Since the latter is not strictly true, we find that $\Phi_{0} \approx
\Phi_{0}^{\rm{LK}}$.
For completeness, we also show the plots
of $\rm{E}_{\rm{pk}}$ versus $\mathcal{E}$ and their fits in Figure \ref{Figure2a}.
From the $\chi^{2}$ and the
number of fluence bins for each decay fit, we calculated the probability Q of
getting a higher $\chi^{2}$ by chance. Thus, $\rm{Q} \gtrsim 0.5$
represents very good fits, while $\rm{Q} \sim 0$ represents poor fits.
The Q values from fits of Eq. 3 to our pulses appear in the tenth column of Table 1.
If $\rm{E}_{\rm{pk}}$ does indeed cool linearly with $\mathcal{E}$
in all pulses selected for fitting,
then when plotting the cumulative distribution of Q values, we would expect
10\% of the pulses to have a Q less than 0.1, 20\% of the pulses to
have a Q less than 0.2, and so on.
Figure \ref{Figure3} shows the cumulative distribution of Q values for our pulses
with acceptable fits. An excess of
pulses with very high Q values would suggest a biased pulse selection process.
A Kolmogorov-Smirnov test (P = $0.18$)
applied to our distribution of Q values suggests that the set of $41$
pulses is not too biased and roughly follows the distribution we would expect if all
of them are consistent with a linear decay of $\rm{E}_{\rm{pk}}$ with respect to energy fluence.
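Both steps can be sketched as follows, assuming \texttt{q\_values}
holds the per-pulse Q values of Table 1:
\begin{verbatim}
import numpy as np
from scipy import stats

def fit_quality(chi2, dof):
    # Q: probability of a larger chi^2 arising by chance
    return stats.chi2.sf(chi2, dof)

def ks_uniform(q_values):
    # under the decay law the Q values should be uniform on [0, 1]
    return stats.kstest(q_values, "uniform")
\end{verbatim}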
By fitting the $\rm{E}_{\rm{pk}}$-fluence law to the full observable duration of each
pulse and not just the flux decay phase, we could characterize our pulses
as ``hard-to-soft''. None of our pulses required
a ``tracking'' classification, though many of the ambiguous pulses excluded
from this study
could be ``hard-to-soft'' or ``tracking''.
Three of the pulses in our sample (BATSE triggers 2316, 3491, and 3870) have
negative fitted values of $\Phi_{0}$. However, all three of these pulses
are still consistent with a positive value of $\Phi_{0}$.
We remind the reader here that
large absolute values of $\Phi_{0}$ (like those in these three pulses)
correspond to pulses with very little change
in $\rm{E}_{\rm{pk}}$, where $-\frac{1}{\Phi_{0}} \equiv \frac{d \rm{E}_{\rm{pk}}}{d \mathcal{E}} \approx 0$.
In such cases, small statistical errors in $\frac{d \rm{E}_{\rm{pk}}}{d \mathcal{E}}$ translate
to very large statistical errors in $\Phi_{0}$. Even if
all pulses decay monotonically from hard-to-soft, we should expect a few
pulses to have negative values of $\Phi_{0}$. Since all of our pulses are
consistent with a monotonic decay in $\rm{E}_{\rm{pk}}$, we adopt the hypothesis that all
pulses behave this way for the remainder of this paper and drop these three
pulses from our sample to simplify our calculations.
\subsection{Distribution of $\Phi_{0}$}
The distribution of fitted $\Phi_{0}$ values appears in Figure
\ref{Figure4}.
It is roughly log-normal where the mean of
$\rm{log_{10}\Phi_{0}}$
is $1.75 \pm 0.07$ and the FWHM of $\rm{log_{10}\Phi_{0}}$ is
$1.0 \pm 0.1$.
This distribution likely suffers some selection effects. This
becomes obvious when one realizes that
$\Phi_{0} \approx - \frac{\Delta \mathcal{E}}{\Delta \rm{E}_{\rm{pk}}}$. We see that the
smallest absolute value of $\Phi_{0}$ is limited by the minimum energy fluence
which allows one to fit spectra (about $1~\rm{MeV~cm^{-2}}$ from Table 1)
and the energy window of BATSE (max $\Delta \rm{E}_{\rm{pk}} \approx 1870~keV$).
There are no such limitations on the high side of this distribution,
since $\mid \Delta \rm{E}_{\rm{pk}} \mid$ can be arbitrarily small and
$\Delta \mathcal{E}$ is only limited by nature.
\subsection{Testing the Invariance of $\Phi_{0}$ Among Pulses within Bursts}
LK96 reported that the decay constant
$\Phi_{\rm{0}}^{\rm{LK}}$ sometimes remains fixed from pulse to pulse
within some bursts. Such behavior would
hint at a regenerative source rather than a
single catastrophic event (such as \cite{mesz93}).
However, the intrinsically narrow distribution of decay constants
mentioned above and the relatively large confidence regions for each pulse's
value of $\Phi_{0}$ suggest
that many bursts would appear to have an invariant decay constant merely
by chance.
As done earlier with a larger, but less reliable, set of pulses (\cite{crid98b}), we
calculated three statistics for each multi-pulse burst to test the invariance
of the $\rm{E}_{\rm{pk}}$-fluence decay constant.
We compared two of each bursts' M pulses at a time using the statistic
\begin{equation}
\rm{X}_{ij}^{2} = \frac{[\Phi_{0}(i) - \Phi_{0}(j)]^{2}}
{\sigma_{\Phi_{0}(i)}^{2} +
\sigma_{\Phi_{0}(j)}^{2}}
\end{equation}
and then distilled the comparisons within each burst into a single
statistic to represent that burst. These statistics are defined in
Table 2. Each is tailored for different
null hypotheses.
The statistic $G_{1}$ tests if \emph{at least} two pulses
in a burst are similar (and thus ``invariant''), while $G_{2}$ tests
if \emph{all} the pulses
have a similar decay constant. $G_{3}$
tests for either a single good pairing or
several moderately close pairings. We believe that this last statistic
is the most reasonable for testing our results since it does not
require that \emph{all} pulses decay at the same rate (as $G_{2}$ does) but
also does not discard information about multiple pulses repeating (as
$G_{1}$ does).
Finally, we calculated a table of probabilities P$(G,\rm{M})$
for our goodness-of-fit statistics $G$ based on
a simple Monte Carlo simulation. We
created synthetic bursts with pulse decay parameters randomly sampled
from the observed distributions of $\Phi_{0}$ and $\sigma_{\Phi_0} / {\Phi_0}$.
To avoid any bias that intrinsic invariances would have on these
distributions, we created them using only one pulse from each burst.
The sample of bursts in this study is smaller than in previous works,
and hence contains fewer bursts with more than
one pulse. The three versions of the $G$ statistic defined above are equivalent
when only two pulses appear in a burst. Thus for this sample, with only 3 of the
9 multipulse bursts having more than 2 pulses, these statistics are nearly equivalent.
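A sketch of this simulation follows; since Table 2 is not reproduced
here, we use the smallest pairwise $\rm{X}^{2}_{ij}$ (a $G_{1}$-type
statistic) as an illustrative stand-in, and \texttt{phi0\_pool} and
\texttt{relerr\_pool} stand for the observed one-pulse-per-burst
samples:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def pairwise_X2(phi0, sigma):
    # X^2_ij of Eq. (6) for the M fitted decay constants of one burst
    d = phi0[:, None] - phi0[None, :]
    s = sigma[:, None]**2 + sigma[None, :]**2
    return (d**2 / s)[np.triu_indices(len(phi0), k=1)]

def p_value(G_obs, M, phi0_pool, relerr_pool, n_mc=100000):
    # fraction of synthetic M-pulse bursts that look at least as
    # "invariant" (smallest pairwise X^2 as small) as the observed one
    hits = 0
    for _ in range(n_mc):
        phi0  = rng.choice(phi0_pool, size=M)
        sigma = phi0 * rng.choice(relerr_pool, size=M)
        hits += pairwise_X2(phi0, sigma).min() <= G_obs
    return hits / n_mc
\end{verbatim}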
The high probability that the
observed repetitions occurred by coincidence leads us to conclude
that pulse decays are not invariant from pulse to pulse within bursts.
Instead, we suggest that the distribution of $\Phi_{0}$ values seen in
all bursts is narrow enough that an apparent invariance of $\Phi_{0}$
is inevitable in some bursts. We came to the same conclusion when examining
$\Phi_{0}^{\rm{LK}}$ (\cite{crid98b}).
\section{Discussion}
Out of the $26$ bursts to
which we could fit a time-evolving Band GRB
function, all contain at least one pulse consistent with
a linear decay of $\rm{E}_{\rm{pk}}$ with respect to energy fluence.
Of the $41$ pulses in these bursts, all
are consistent ($\rm{Q} > 0.001$; \cite{pres92})
with this decay pattern. This is also
true when we fit the LK96 exponential decay of
$\rm{E}_{\rm{pk}}$ with respect to photon fluence.
Besides LK96, other quantitative spectral evolution trends have been reported for GRBs.
The averaged temporal and spectral evolution for 32 bright GRBs has been
calculated in \cite{feni98}. The averaged photon flux evolution can be
described as both rising and decaying linearly with time. The hardness,
as measured
by $\rm{E}_{\rm{pk}}$ with $\alpha$ and $\beta$ held fixed, also appears to decay linearly with time
during the averaged burst ($\rm{E}_{\rm{pk}} = \rm{E_{pk(0)}} (1 - t/t_{0})$).
This is clearly not representative of all bursts since the evolution in bursts of
$\rm{E}_{\rm{pk}}$ is often complex (\cite{ford95,lian96}).
These trends possibly reflect the physics dictating the envelope of emission.
The fact that LK96 found the $\rm{E}_{\rm{pk}}$-fluence trend in many mingled pulses
may result from the fact that the burst envelope also evolves in this manner.
Since the hardness of this envelope appears to decay more slowly than the hardness
during the pulses we observe, we might not expect to see this trend in our pulses.
However, the degree of confidence of $\rm{E}_{\rm{pk}}$ in our fits, coupled
with the fact that energy fluence is often linear in time, makes the observations
of many bursts possibly consistent ($\rm{Q} > 0.001$) with this decay law.
Testing the distribution of Q values as we did in Fig \ref{Figure3}, we find
a probability P=0.001 that the pulses are realizations of a linear $\rm{E}_{\rm{pk}}$-time trend,
compared to P=$0.18$ for the linear $\rm{E}_{\rm{pk}}$-energy fluence trend.
While the linear $\rm{E}_{\rm{pk}}$-time relation
does not seem to describe individual pulses as well as the $\rm{E}_{\rm{pk}}$-fluence relation, the results are not conclusive.
More pulses are clearly needed if one is to discriminate between
any two time-dependent spectral functions.
One could simply wait for bursts to occur or for a more sensitive instrument
to be built. However, it may
be possible to increase the number of fittable pulses using the existing BATSE
database. Fitting a time-dependent spectral function directly to higher
time resolution data (or time-tagged event data) greatly reduces the number
of required fit parameters. Another approach may be to analytically integrate
the time-dependent spectral function and fit that to integrated spectra,
as has been done by Ryde \& Svensson (1998).
By increasing the number of pulses,
it will become possible to make more definitive statements
about the evolution of
prompt GRB emission and how it relates to the GRB afterglow.
\acknowledgements
AC thanks NASA-MSFC for the Graduate Student Research Program
fellowship. It is also a pleasure to mention very useful discussions with
Ed Fenimore, Charles Dermer, Markus B\"{o}ttcher, and Rodney Hurley.
This work is supported by NASA grant NAG5-3824.
\section{Introduction}
As special classes of random point processes, fermion point processes
and boson point processes have been studied by many authors since \cite{BM73, M75, M77}.
Among them, \cite{FF87, F91} made a correspondence between boson processes and
locally normal states on $C^*$-algebra of operators on the boson Fock
space.
A functional integral method is used in \cite{L02} to obtain these processes
from quantum field theories of finite temperatures.
On the other hand, \cite{ST03} formulated both the fermion and boson processes
in a unified way in terms of the Laplace transformation and generalized them.
Let $Q(R)$ be the space of all the locally finite configurations
over a Polish space $R$ and $K$ a locally trace class integral operator on
$L^{2}(R)$ with a Radon measure $\lambda$ on $R$ .
For any nonnegative function $f$ having bounded support and
$\xi=\sum _{j} \delta_{x_{j}} \in Q(R)$, we set
$<\xi,f>=\sum_{j} f(x_{j})$.
Shirai and Takahashi \cite{ST03} have formulated and studied
the random processes $\mu_{\alpha,K}$ which have Laplace transformations
\begin{equation}
E[e^{-<f,\xi>}]
\equiv \int_{Q(R)} \mu_{\alpha,K}(d\xi)\,e^{-<\xi,f>}
={\rm Det}\big(I+\alpha \sqrt{1-e^{-f}}K\sqrt{1-e^{-f}}\big)
^{-1/\alpha}
\label{ft}
\end{equation}
for the parameters $\alpha \in \{2/m;m \in {\Bbb N}\} \cup \{-1/m; m\in {\Bbb N}\}$.
\medskip
\noindent Here the cases $\alpha = \pm 1$ correspond to
boson/fermion processes, respectively.
In their argument, the generalized Vere-Jones' formula\cite{VJ88}
\begin{equation}
{\rm Det}(1-\alpha J)^{-1/\alpha}=\sum_{n=0}^{\infty} \frac{1}{n!}\int_{R^n}
\det{}_{\alpha}(J(x_{i} ,x_{j}))_{i,j=1}^{n}
\lambda^{\otimes n}
(dx_{1}\cdots dx_{n})
\label{VJ}
\end{equation}
has played an essential role.
Here $J$ is a trace class integral operator, for which we need
the condition $||\alpha J|| <1$ unless $-1/\alpha \in \Bbb N $,
$ {\rm Det}(\,\cdot\,)$ the Fredholm determinant
and $\det_{\alpha} A$ the $\alpha$-{\it determinant} defined by
\begin{equation}
\det{}_{\alpha}A=\sum_{\sigma\in {\cal S}_{n}}\alpha^{n-\nu(\sigma)}
\prod_{i} A_{i\sigma(i)}
\label{adet}
\end{equation}
for a matrix $A$ of size $n\times n$, where
$\nu(\sigma)$ is the number of cycles in $\sigma$.
The formula (\ref{VJ}) is Fredholm's original definition of his functional
determinant in the case $\alpha = -1$.
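A direct transcription of (\ref{adet}); for $\alpha = -1$ it
reproduces the determinant, and for $\alpha = 1$ the permanent:
\begin{verbatim}
import numpy as np
from itertools import permutations

def n_cycles(sigma):
    # number of cycles nu(sigma) of a permutation of {0,...,n-1}
    seen, n = set(), 0
    for i in range(len(sigma)):
        if i not in seen:
            n += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = sigma[j]
    return n

def det_alpha(A, alpha):
    n = A.shape[0]
    return sum(alpha**(n - n_cycles(s))
               * np.prod([A[i, s[i]] for i in range(n)])
               for s in permutations(range(n)))

A = np.random.default_rng(2).random((4, 4))
assert np.isclose(det_alpha(A, -1.0), np.linalg.det(A))
\end{verbatim}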
The purpose of the paper is to construct both the fermion and boson
processes from a view point of elementary quantum mechanics in order to
get simple, clear and straightforward understanding of them in the
connection with physics.
Let us consider the system of $N$ free fermions/bosons in a
box of finite volume $V$ in $\Bbb R^d$ and the quantum statistical mechanical
state of the system with a finite temperature.
Giving the distribution function of the positions of all particles in terms of
the square of the absolute value of the wave functions,
we obtain a point process of $N$ points in the box.
In the thermodynamic limit $N, V \to \infty$, $N/V \to \rho$, of these
processes of finitely many points, fermion and boson processes in $\Bbb R^d$
with density $\rho$ are obtained.
In the argument, we will use the generalized Vere-Jones' formula in the form:
\begin{equation}
\frac{1}{N!}\int
\det{}_{\alpha}(J(x_{i} ,x_{j}))_{i,j=1}^{N}
\lambda^{\otimes N}
(dx_{1}\cdots dx_{N})
= \oint _{S_r(0)}\frac{dz}{2\pi iz^{N+1}}{\rm Det}(1- z\alpha J)^{-1/\alpha},
\label{IVJ}
\end{equation}
where $r>0$ is arbitrary for $-1/\alpha \in \Bbb N$, otherwise $r$ should satisfy
$||r\alpha J||<1$.
Here and hereafter, $S_r(\zeta)$ denotes the integration contour defined
by the map $ \theta \mapsto \zeta + r\exp(i\theta) $,
where $\theta$ ranges from $-\pi$ to $\pi$, $ r>0 $ and $ \zeta \in \Bbb C$.
In the terminology of statistical mechanics, we start from the canonical ensemble
and end up with formulae like (\ref{ft}) and (\ref{VJ}) of grand canonical nature.
In this sense, the argument is related to the equivalence of ensembles.
The use of (\ref{IVJ}) makes our approach simple.
In this approach, we need neither quantum field theories nor the theory
of states on the operator algebras to derive the boson/fermion processes.
It is interesting to apply the method to problems which have not yet
been formulated in statistical mechanics based on quantum field theory.
Here, we study the system of para-fermions and para-bosons of order 2.
Para statistics was first introduced by Green\cite{G53} in the context of
quantum field theories.
For its review, see \cite{OK82}.
\cite{MG64} and \cite{HT69, ST70} formulated it within the framework of
quantum mechanics of finite number of particles. See also \cite{OK69}.
Recently statistical mechanics of para-particles are formulated
in \cite{S90, C96, CS97}. However, it does not seem to be fully
developed so far.
We formulate here point processes as the distributions of positions of
para-particles of order 2 with finite temperature and positive density
through the thermodynamic limit.
It turns out that the resulting processes are corresponding to the
cases $\alpha = \pm 1/2$ in \cite{ST03}.
We also try to derive point processes from ensembles of composite particles at
zero temperature and positive density in this formalism.
The resulting processes also have their Laplace transforms expressed by
Fredholm determinants.
This paper is organized as follows.
In Section 2, the random point processes of fixed numbers of fermions
as well as bosons are formulated on the base of quantum mechanics in a bounded box.
Then, the theorems on thermodynamic limits are stated.
The proofs of the theorems are presented in Section 3 as applications
of a theorem of rather abstract form.
In Sections 4 and 5, we consider the systems of para-particles and composite
particles, respectively.
In the Appendix, we calculate the complex integrals needed for the thermodynamic
limits.
\section{Fermion and boson processes}
Consider $ L^2(\Lambda_L) $ on $ \Lambda_L = [-L/2, L/2]^d $
$\subset {\Bbb R}^d$ with the Lebesgue measure on $\Lambda_L$.
Let $\triangle_L$ be the Laplacian on $ {\cal H}_L = L^2(\Lambda_L) $
satisfying periodic boundary conditions at $\partial \Lambda_{L}$.
We deal with periodic boundary conditions in this paper; however,
all the arguments except that in section 5 may be applied
to other boundary conditions.
Hereafter we regard $ -\triangle_L$ as the quantum mechanical
Hamiltonian of a single free particle.
The usual factor $\hbar^2/2m$ is set at unity.
For $k\in {\Bbb Z}^d$,
$ \varphi_k^{(L)}(x) = L^{-d/2} \exp (i2\pi k\cdot x/L) $
is an eigenfunction of $\triangle_L$, and
$ \, \{\, \varphi_k^{(L)} \, \}_{k\in {\Bbb Z}^d} \,$ forms a
complete orthonormal system [CONS] of $ {\cal H}_L $.
In the following, we use the operator $G_L = \exp(\beta\triangle_L)$
whose kernel is given by
\begin{equation}
G_L(x,y) = \sum_{k\in {\Bbb Z}^d}e^{-\beta |2\pi k/L|^2} \varphi_k^{(L)}(x)
\overline{\varphi_k^{(L)}(y)}
\label{GL}
\end{equation}
for $\beta > 0$. We put $ g_k^{(L)} = \exp (-\beta |2\pi k/L|^2) $, the
eigenvalue of $G_L$ for the eigenfunction $\varphi_k^{(L)}$.
We also need $G = \exp(\beta \triangle)$ on $L^2({\Bbb R}^d)$ and its kernel
\[
G(x,y) = \int_{ {\Bbb R}^d}\frac{dp}{(2\pi)^d}
e^{-\beta |p|^2 +ip\cdot(x-y)}
= \frac{\exp(-|x-y|^2/4\beta)}{(4\pi \beta)^{d/2}}.
\]
Note that $G_L(x,y)$ and $G(x,y)$ are real symmetric and
\begin{equation}
G_L(x,y) = \sum_{k\in {\Bbb Z}^d} G(x, y+kL).
\label{GG}
\end {equation}
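Relation (\ref{GG}) is an instance of the Poisson summation formula;
a quick numerical check in $d=1$ (the parameter values are arbitrary):
\begin{verbatim}
import numpy as np

def G_free(x, y, beta):
    return np.exp(-(x - y)**2 / (4*beta)) / np.sqrt(4*np.pi*beta)

def G_L(x, y, beta, L, kmax=100):
    # spectral sum (GL) in d = 1
    k = np.arange(-kmax, kmax + 1)
    return np.sum(np.exp(-beta*(2*np.pi*k/L)**2)
                  * np.exp(2j*np.pi*k*(x - y)/L)).real / L

L, beta, x, y = 2.0, 0.3, 0.2, 0.7
images = sum(G_free(x, y + k*L, beta) for k in range(-50, 51))
assert np.isclose(G_L(x, y, beta, L), images)
\end{verbatim}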
Let $ f: {\Bbb R}^d \rightarrow [0,\infty) $ be an arbitrary continuous function
whose support is compact.
In the course of the thermodynamic limit,
$f$ is fixed and we assume that $L$ is so large that $\Lambda_L $
contains the support, and regard $f$ as a function on $\Lambda_L$.
\subsection{Fermion processes}
In this subsection, we construct the fermion process in $\Bbb R^d$ as a limit of
the process of $N$ points in $\Lambda_L$.
Suppose there are $N$ identical particles which obey the Fermi-Dirac statistics
in a finite box $\Lambda_L$.
The space of the quantum mechanical states of the system is given by
\[
{\cal H}^F_{L,N} = \{ \, A_Nf \,| \, f \in \otimes^N{\cal H}_L\, \},
\]
where
\[
A_N f(x_1, \cdots, x_N) = \frac{1}{N!}\sum_{\sigma \in {\cal S}_N}
{\rm sgn}(\sigma) f(x_{\sigma(1)}, \cdots, x_{\sigma(N)})
\qquad (\; x_1, \cdots, x_N \in \Lambda_L \;)
\]
is anti-symmetrization in the $N$ indices.
Using the CONS $ \{\, \varphi_k^{(L)} \, \}_{k\in {\Bbb Z}^d} \,$
of ${\cal H}_L = L^2(\Lambda_L)$, we make the element
\begin{equation}
\Phi_k(x_1, \cdots, x_N)
= \frac{1}{\sqrt{N!}}\sum_{\sigma\in{\cal S}_N} {\rm sgn}(\sigma)
\varphi_{k_1}(x_{\sigma(1)})\cdot \cdots \cdot
\varphi_{k_N}(x_{\sigma(N)})
\label{fcons}
\end{equation}
of ${\cal H}^F_{L,N}$ for $ k=(k_1, \cdots, k_N) \in (\Bbb Z^d)^N$.
Let us introduce the lexicographic order $\prec$ in $\Bbb Z^d$ and put
$ (\Bbb Z^d)^N_{\precneqq} = \{ \, (k_1, \cdots, k_N) \in (\Bbb Z^d)^N \,|
\, k_1 \precneqq \cdots \precneqq k_N \, \}$.
Then $ \{\,\Phi_k \,\}_{k\in (\Bbb Z^d)^N_{\precneqq}} $
forms a CONS of ${\cal H}^F_{L,N}$.
According to the idea of the canonical ensemble in quantum statistical
mechanics, the probability density distribution of the positions of
the $N$ free fermions in the periodic box $\Lambda_L$
at the inverse temperature $\beta$ is given by
\begin{eqnarray}
p^F_{L, N}(x_1, \cdots, x_N)
&=& Z_F^{-1}\sum_{k\in (\Bbb Z^d)^N_{\precneqq}}
\Big(\prod_{j=1}^N g_{k_j}^{(L)}\Big)
|\Phi_k(x_1, \cdots, x_N)|^2 \nonumber \\
&=& Z_F^{-1}\sum_{k\in (\Bbb Z^d)^N_{\precneqq}}
\overline{\Phi_k(x_1, \cdots, x_N)}\big((\otimes^NG_L)\Phi_k\big)
(x_1, \cdots, x_N)
\label{ffp}
\end{eqnarray}
where $Z_F$ is the normalization constant.
We can define the point process of $N$ points in $\Lambda_L$ from
the density (\ref{ffp}). I.e., consider a map
$ \Lambda_L^N \ni ( x_1, \cdots, x_N) \mapsto
\sum_{j=1}^N \delta_{x_j} \in Q(\Bbb R^d) $.
Let $ \mu_{L, N}^F $ be the probability measure on $Q(\Bbb R^d)$
induced by the map from the probability measure on $\Lambda_L^N$
which has the density (\ref{ffp}).
By $ {\rm E}^F_{L,N} $, we denote expectation with respect to the measure
$ \mu_{L, N}^F $.
The Laplace transform of the point process is given by
\begin{eqnarray}
{\rm E}^F_{L,N}\big[e^{-<f,\xi>}\big]
&=&\int_{Q(\Bbb R^d)}d\mu_{L, N}^F(\xi)\,e^{-<f,\xi>}
\nonumber \\
&=& \int_{\Lambda_L^N}\exp(-\sum_{j=1}^Nf(x_j))
p_{L, N}^F(x_1, \cdots, x_N) \,dx_1\cdots dx_N \nonumber \\
&=& \frac{{\rm Tr\,}_{{\cal H}_{L,N}^F}[(\otimes^Ne^{-f})(\otimes^NG_L)]}
{{\rm Tr\,}_{{\cal H}_{L,N}^F}[\otimes^N G_L]}
\nonumber \\
&=& \frac{{\rm Tr\,}_{\otimes^N{\cal H}_L}[(\otimes^N\tilde G_L)A_N]}
{{\rm Tr\,}_{\otimes^N{\cal H}_L}[(\otimes^N G_L)A_N]}
\label{fgfl} \\
&=& \frac{\int_{\Lambda_L^N}
\det_{-1}\tilde G_L(x_i, x_j)\,dx_1\cdots dx_N}
{\int_{\Lambda_L^N}\det_{-1}G_L(x_i, x_j)\,dx_1\cdots dx_N},
\nonumber
\end{eqnarray}
where $\tilde G_L$ is defined by
\begin{equation}
\tilde G_L = G_L^{1/2}e^{-f}G_L^{1/2},
\label{tildeg}
\end{equation}
where
$e^{-f}$ represents the operator of multiplication by the function $e^{-f}$.
The fifth expression follows from
$ [ \otimes^NG_L^{1/2}, A_N] = 0 $, cyclicity of the trace and
$(\otimes^NG_L^{1/2})$
$(\otimes^Ne^{-f})(\otimes^NG_L^{1/2})
= \otimes^N\tilde G_L$ and so on.
The last expression can be obtained by calculating the trace
on $\otimes^N {\cal H}_L$ using its CONS
$\{\varphi_{k_1}\otimes\cdots \otimes\varphi_{k_N} \,
| \, k_1, \cdots, k_N \in \Bbb Z^d \}$,
where $\det_{-1}$ is the usual determinant, see eq. (\ref{adet}).
Now, let us consider the thermodynamic limit, where the volume
of the box $\Lambda_L$ and the number of points $N$ in the box
$\Lambda_L$ tend to infinity in such a way that the densities tend
to a positive finite value $\rho $, i.e.,
\begin{equation}
L,\; N \rightarrow \infty, \quad N/L^d \to \rho > 0.
\label{tdl}
\end{equation}
\begin{thm}
The finite fermion processes $\{ \, \mu_{L,N}^F \, \}$ defined above
converge weakly to the fermion process $\mu_{\rho}^F$ whose Laplace
transform is given by
\begin{equation}
\int_{Q(\Bbb R^d)}e^{-<f, \xi>}d\mu_{\rho}^F(\xi)
= {\rm Det} \big[1 - \sqrt{1-e^{-f}}z_*G(1+z_*G)^{-1}\sqrt{1-e^{-f}}\big]
\label{EF}
\end{equation}
in the thermodynamic limit (\ref{tdl}), where $z_*$ is the positive number
uniquely determined by
\[
\rho = \int \frac{dp}{(2\pi)^d}\frac{z_*e^{-\beta |p|^2}}
{1 + z_*e^{-\beta |p|^2}}
=(z_*G(1+z_*G)^{-1})( x, x).
\]
\label{fthm}
\end{thm}
{\sl Remark : } The existence of $\mu_{\rho}^F$ which has the
above Laplace transform is a consequence of the result of \cite{ST03}
we have mentioned in the introduction.
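For concreteness, a numerical sketch of how $z_*$ is determined from
$\rho$ in $d=3$ (the bracket is adequate for moderate densities):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def fermi_density(z, beta):
    # rho(z) = int d^3p/(2 pi)^3  z e^{-beta p^2}/(1 + z e^{-beta p^2})
    f = lambda p: p**2 * z*np.exp(-beta*p**2) / (1 + z*np.exp(-beta*p**2))
    return quad(f, 0.0, np.inf)[0] / (2*np.pi**2)

def z_star(rho, beta):
    # the density is strictly increasing in z, so the root is unique
    return brentq(lambda z: fermi_density(z, beta) - rho, 1e-12, 1e12)
\end{verbatim}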
\subsection{Boson processes}
Suppose there are $N$ identical particles which obey Bose-Einstein
statistics in a finite box $\Lambda_L$.
The space of the quantum mechanical states of the system is given by
\[
{\cal H}^B_{L,N} = \{ \, S_Nf \,| \, f \in \otimes^N{\cal H}_L\, \},
\]
where
\[
S_N f(x_1, \cdots, x_N) = \frac{1}{N!}\sum_{\sigma \in {\cal S}_N}
f(x_{\sigma(1)}, \cdots, x_{\sigma(N)})
\qquad ( \; x_1, \cdots, x_N \in \Lambda_L \; )
\]
is symmetrization in the $N$ indices.
Using the CONS $ \{\, \varphi_k^{(L)} \, \}_{k\in {\Bbb Z}^d} \,$
of $L^2(\Lambda_L)$, we make the element
\begin{equation}
\Psi_k(x_1, \cdots, x_N) = \frac{1}{\sqrt{N!n(k)}}\sum_{\sigma\in{\cal S}_N}
\varphi_{k_1}(x_{\sigma(1)})\cdot \cdots \cdot
\varphi_{k_N}(x_{\sigma(N)})
\label{bcons}
\end{equation}
of ${\cal H}^B_{L,N}$ for $ k=(k_1, \cdots, k_N) \in (\Bbb Z^d)^N$,
where $ n(k) = \prod_{l\in\Bbb Z^d}(\sharp\{\,n\in\{ \, 1, \cdots, N\,\} \,|\,
k_n = l
\,\}!)$.
Let us introduce the subset
$ (\Bbb Z^d)^N_{\prec} = \{ \, (k_1, \cdots, k_N) \in (\Bbb Z^d)^N \,|
\, k_1 \prec \cdots \prec k_N \, \} $ of $(\Bbb Z^d)^N$.
Then $ \{\,\Psi_k \,\}_{k\in (\Bbb Z^d)^N_{\prec}} $ forms a CONS of
${\cal H}^B_{L,N}$.
As in the fermion's case,
the probability density distribution of the positions of the $N$ free
bosons in the periodic box $\Lambda_L$ at the inverse temperature
$\beta$ is given by
\begin{equation}
p^B_{L, N}(x_1, \cdots, x_N)
= Z_B^{-1}\sum_{k\in (\Bbb Z^d)^N_{\prec}}
\Big(\prod_{j=1}^N g_{k_j}^{(L)}\Big)
|\Psi_k(x_1, \cdots, x_N)|^2,
\label{d+1}
\end{equation}
where $Z_B$ is the normalization constant.
We can define a point process of $N$ points $\mu_{L,N}^B $
from the density (\ref{d+1}) as in the previous section.
The Laplace transform of the point process is given by
\begin{equation}
{\rm E}^B_{L,N}\big[e^{-<f,\xi>}\big]
= \frac{{\rm Tr\,}_{\otimes^N{\cal H}_L}[(\otimes^N\tilde G_L)S_N]}
{{\rm Tr\,}_{\otimes^N{\cal H}_L}[(\otimes^NG_L)S_N]}
= \frac{\int_{\Lambda_L^N}\det_{1}\tilde G_L(x_i, x_j)\,dx_1\cdots
dx_N}
{\int_{\Lambda_L^N}\det_{1}G_L(x_i, x_j)\,dx_1\cdots dx_N},
\label{bgfl}
\end{equation}
where $\det_1$ denotes the permanent, see eq. (\ref{adet}). We set
\begin{equation}
\rho_c = \int_{\Bbb R^d} \frac{dp}{(2\pi)^d}\frac{e^{-\beta |p|^2}}
{1 - e^{-\beta |p|^2}},
\label{rho_c}
\end{equation}
which is finite for $d > 2$.
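\medskip
\noindent {\sl Numerical check : } Expanding the integrand of (\ref{rho_c}) in a
geometric series and integrating term by term gives the closed form
$\rho_c=(4\pi\beta)^{-d/2}\zeta(d/2)$, which is finite exactly when $d>2$.
A quick check of this (ours; {\tt scipy} assumed, $d=3$, $\beta=1$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

beta, d = 1.0, 3
rad = quad(lambda r: r**2 * np.exp(-beta * r**2)
           / (1.0 - np.exp(-beta * r**2)), 0.0, np.inf)[0]
rho_c = 4.0 * np.pi * rad / (2.0 * np.pi)**d
print(rho_c, zeta(d / 2) * (4.0 * np.pi * beta)**(-d / 2))  # both ~ 0.0586
\end{verbatim}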
Now, we have
\begin{thm}
The finite boson processes $\{ \, \mu_{L,N}^B \, \}$ defined above converge
weakly to the boson process $\mu_{\rho}^B $ whose Laplace transform is given by
\begin{equation}
\int_{Q(\Bbb R^d)} e^{-<f, \xi>} d\mu_{\rho}^B(\xi)
= {\rm Det} [1 + \sqrt{1-e^{-f}}z_*G(1 - z_*G)^{-1}\sqrt{1-e^{-f}}]^{-1}
\label{EB}
\end{equation}
in the thermodynamic limit (\ref{tdl}) if
\[
\rho = \int_{\Bbb R^d} \frac{dp}{(2\pi)^d}\frac{z_*e^{-\beta |p|^2}}
{1 - z_*e^{-\beta |p|^2}} = (z_*G(1 - z_*G)^{-1})(x,x) < \rho_c.
\]
\label{bthm}
\end{thm}
\medskip
\noindent {\sl Remark 1 : } For the existence of $\mu_{\rho}^B$ , we
refer to \cite{ST03}.
\smallskip
\noindent {\sl Remark 2 : } In this paper, we only consider the boson processes
with low densities : $\rho < \rho_c$.
The high density cases $\rho \geqslant \rho_c$ are related to the
Bose-Einstein condensation. We need the detailed knowledge about
the spectrum of $\tilde G_L$ to deal with these cases.
It will be reported in another publication.
\section{ Thermodynamic limits}
\subsection{A general framework}
It is convenient to consider the problem in a general framework on a Hilbert
space ${\cal H}$ over $ \Bbb C$.
The proofs of the theorems of section 2 are given in the next subsection.
We denote the operator norm by $||\,\cdot\,||$, the trace norm
by $||\,\cdot\,||_1$ and the Hilbert-Schmidt norm by $||\,\cdot\,||_2$.
Let $\{ V_L\}_{L > 0}$ be a one-parameter family of Hilbert-Schmidt
operators on ${\cal H}$ which satisfy the conditions
\[
\forall L>0: || \, V_L \, ||=1, \quad
\lim_{L\to\infty}|| \, V_L \, ||_2=\infty
\]
and $A$ a bounded self-adjoint operator on ${\cal H}$ satisfying
$ 0 \leqslant A \leqslant 1 $.
Then $G_L = V_L^*V_L, \; \tilde G_L= V_L^*AV_L$ are
self-adjoint trace class operators satisfying
\[
\forall L>0 : 0 \leqslant \tilde G_L \leqslant G_L \leqslant 1, \; ||G_L||=1
\; \mbox{ and } \; \lim_{L\to\infty}{\rm Tr\,} G_L = \infty.
\]
We define $ I_{-1/n}= [0, \infty)$ for $ n\in \Bbb N$
and $ I_{\alpha}=[ 0, 1/|\alpha|)$ for
$\alpha \in [ -1, 1]-\{ 0, -1, -1/2, \cdots \}$.
Then the function
\[
h_L^{(\alpha)}(z)=\frac{{\rm Tr\,}[zG_L(1-z\alpha G_L)^{-1}]}{{\rm Tr\,} G_L}
\]
is well defined on $ I_{\alpha}$ for each $L>0$ and $\alpha\in [ -1, 1] -\{0\}$.
\medskip
\begin{thm}
Let $\alpha \in [ -1, 1]-\{0\}$ be arbitrary but fixed.
Suppose that for every $z\in I_{\alpha}$, there exist a limit
$h^{(\alpha)}(z)=\lim_{L\to\infty}h_L^{(\alpha)}(z)$ and a trace class operator
$K_z$ satisfying
\begin{equation}
\lim_{L\to\infty}|| \, K_z - (1-A)^{1/2}V_L(1-z\alpha
V_L^*V_L)^{-1}V_L^*(1-A)^{1/2}||_1 = 0.
\label{K_z}
\end{equation}
Then, for every $ \hat\rho\in [0, \sup_{z\in I_{\alpha}}h^{(\alpha)}(z))$, there exists
a unique solution $z=z_*\in I_{\alpha}$ of $ h^{(\alpha)}(z) =\hat\rho$.
Moreover suppose that a sequence $L_1 < L_2 < \cdots < L_N < \cdots $
satisfies
\begin{equation}
\lim_{N\to\infty}N/{\rm Tr\,} G_{L_N} = \hat\rho.
\label{gtdl}
\end{equation}
Then
\begin{equation}
\lim_{N\to\infty}\frac{\sum_{\sigma\in{\cal S}_N}\alpha^{N-\nu(\sigma)}
{\rm Tr\,}_{\otimes^N{\cal H}}[\otimes^N\tilde G_{L_N} U(\sigma)]}
{\sum_{\sigma\in{\cal S}_N}\alpha^{N-\nu(\sigma)}
{\rm Tr\,}_{\otimes^N{\cal H}}[\otimes^N G_{L_N} U(\sigma)]}
= {\rm Det}[1+z_*\alpha K_{z_*}]^{-1/\alpha}
\label{limDet}
\end{equation}
holds.
\label{gthm}
\end{thm}
\medskip
\noindent In (\ref{limDet}), the operator $U(\sigma)$ on $\otimes^N {\cal H}$ is
defined by
$ U(\sigma)\varphi_1 \otimes \cdots \otimes \varphi_N =
\varphi_{\sigma^{-1}(1)} \otimes \cdots \otimes \varphi_{\sigma^{-1}(N)}$
for $\sigma \in{\cal S}_N$ and $ \varphi_1, \cdots, \varphi_N \in {\cal H}$.
In order to prove the theorem, we prepare several lemmas
under the same assumptions as in the theorem.
\begin{lem}
$h^{(\alpha)}$ is a strictly increasing continuous function on $I_{\alpha}$ and
there exists a unique $z_* \in I_{\alpha}$ which satisfies
$ h^{(\alpha)}(z_*) =\hat\rho$.
\label{P1}
\end{lem}
{\sl Proof : } From ${h^{(\alpha)}_L}'(z)={\rm Tr\,} [G_L(1-z\alpha G_L)^{-2}]/{\rm Tr\,} G_L$,
we have
$1 \leqslant {h_L^{(\alpha)}}'(z)\leqslant (1-z\alpha)^{-2}$ for $\alpha > 0 $
and $1 \geqslant {h_L^{(\alpha)}}'(z)\geqslant (1-z\alpha)^{-2}$ for $\alpha < 0 $,
i.e., $\{h_L^{(\alpha)}\}_{\{L>0\}}$ is equi-continuous on $I_{\alpha}$.
By Ascoli-Arzel\`a's theorem, the convergence
$h_L^{(\alpha)} \to h^{(\alpha)} $ is locally uniform and hence $h^{(\alpha)}$ is
continuous on $I_{\alpha}$.
It also follows that $h^{(\alpha)}$ is strictly increasing.
Together with $h^{(\alpha)}(0)=0$, which comes from $h_L^{(\alpha)}(0)=0$, we get that
$h^{(\alpha)}(z)=\hat\rho$ has a unique solution in $I_{\alpha}$.
\hfill $\Box$
\begin{lem}
There exists a constant $c_0 > 0$ such that
\[
||G_L - \tilde G_L||_1 = {\rm Tr\,} [V_L^*(1-A)V_L] \leqslant c_0
\]
uniformly in $L > 0$.
\label{P2}
\end{lem}
{\sl Proof : } Since \; $ 1-z\alpha G_L $ \; is invertible for $ z \in
I_{\alpha}$
and $V_L$ is Hilbert-Schmidt,
we have
\begin{eqnarray}
\lefteqn{{\rm Tr\,} [V_L^*(1-A)V_L] } && \nonumber \\
&=& {\rm Tr\,} [(1-z\alpha G_L)^{1/2} (1-z\alpha G_L)^{-1/2}
V_L^*(1-A)V_L(1-z\alpha G_L)^{-1/2} (1-z\alpha G_L)^{1/2}]
\nonumber \\
&\leqslant & ||1-z\alpha G_L|| {\rm Tr\,}[(1-z\alpha G_L)^{-1/2}V_L^*(1-A)V_L
(1-z\alpha G_L)^{-1/2}] \nonumber \\
&=& ||1-z\alpha G_L|| {\rm Tr\,}[(1-A)^{1/2}V_L (1-z\alpha
G_L)^{-1}V_L^*(1-A)^{1/2}] \nonumber \\
&=& (1-(\alpha\wedge 0)z)({\rm Tr\,} K_z +o(1)).
\end{eqnarray}
Here we have used $|{\rm Tr\,} B_1CB_2| \leqslant ||B_1||\,||B_2||\,||C||_1
=||B_1||\,||B_2||{\rm Tr\,} C$ for bounded
operators $B_1, B_2$ and a positive trace class operator $C$
and $ {\rm Tr\,} WV = {\rm Tr\,} VW $ for Hilbert-Schmidt operators $W, V$.
\hfill $\Box$
Let us denote all the eigenvalues of $G_L$ and $\tilde G_L$ in
decreasing order
\[
g_0(L) =1 \geqslant g_1(L) \geqslant \cdots \geqslant
g_j(L) \geqslant \cdots
\]
and
\[
\tilde g_0(L) \geqslant \tilde g_1(L) \geqslant \cdots \geqslant
\tilde g_j(L) \geqslant \cdots,
\]
respectively.
Then we have
\begin{lem}
For each $ j = 0, 1, 2, \cdots, \quad
g_j(L) \geqslant \tilde g_j(L) $ \quad holds.
\label{minmax}
\end{lem}
\noindent{\sl Proof:} By the min-max principle, we have
\begin{eqnarray*}
\tilde g_j(L) &=& \min_{\psi_0, \cdots, \psi_{j-1} \in {\cal H}_L}
\; \max_{\psi \in \{\psi_0, \cdots, \psi_{j-1}\}^{\perp}}
\frac{(\psi, \tilde G_L\psi)}{||\psi||^2} \nonumber \\
&\leqslant &
\min_{\psi_0, \cdots, \psi_{j-1} \in {\cal H}_L}
\; \max_{\psi \in \{\psi_0, \cdots, \psi_{j-1}\}^{\perp}}
\frac{(\psi, G_L \psi)}{||\psi||^2} = g_j(L).
\mbox{\hspace*{5cm}} \Box
\end{eqnarray*}
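\medskip
\noindent {\sl Numerical check : } Lemma \ref{minmax} is the operator inequality
$V_L^*AV_L \leqslant V_L^*V_L$ read through the min-max principle; it is
instructive to see it on random matrices. The sketch below is ours, not part of
the proof; {\tt numpy} is assumed and all sizes are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 6
V = rng.standard_normal((n, n))
A = np.diag(rng.uniform(0.0, 1.0, n))      # 0 <= A <= 1

g  = np.sort(np.linalg.eigvalsh(V.T @ V))[::-1]       # g_j
gt = np.sort(np.linalg.eigvalsh(V.T @ A @ V))[::-1]   # tilde g_j
print(np.all(gt <= g + 1e-12))             # True: g_j >= tilde g_j for all j
\end{verbatim}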
\begin{lem}
For $N$ large enough, the conditions
\begin{equation}
{\rm Tr\,} [z_NG_{L_N}(1-\alpha z_NG_{L_N})^{-1}]=
{\rm Tr\,} [\tilde z_N\tilde G_{L_N}(1-\alpha \tilde z_N\tilde G_{L_N})^{-1}]=N
\label{n}
\end{equation}
determine $z_N, \tilde z_N \in I_{\alpha}$ uniquely.
$z_N$ and $ \tilde z_N $ satisfy
\[
z_N \leqslant \tilde z_N, \quad | \tilde z_N -z_N |=O(1/N) \quad
\mbox{and} \quad \lim_{N\to\infty}z_N=\lim_{N\to\infty}\tilde z_N=z_*.
\]
\label{P4}
\end{lem}
{\sl Proof : } From the proof of Lemma \ref{P1},
$H_N(z) = {\rm Tr\,} [zG_{L_N}(1-z\alpha G_{L_N})^{-1}] = h_{L_N}^{(\alpha)}(z){\rm Tr\,} G_{L_N}$
is a strictly increasing continuous function on $I_{\alpha}$ and $H_N(0)=0$.
Let us pick $z_0 \in I_{\alpha}$ such that $z_0 > z_*$.
Since $h^{(\alpha)}$ is strictly increasing, $ h^{(\alpha)}(z_0) - h^{(\alpha)}(z_*) = \epsilon > 0$.
We have
\begin{equation}
\frac{H_N(z_0)}{N}=\frac{{\rm Tr\,} G_{L_N}}{N}h_{L_N}^{(\alpha)}(z_0) \to
\frac{h^{(\alpha)}(z_0)}{\hat\rho} = 1 + \frac{\epsilon}{\hat\rho},
\label{h_N}
\end{equation}
which shows $H_N\big((\sup I_{\alpha})-0\big) \geqslant H_N(z_0) > N$ for large enough
$N$.
Thus $z_N \in[0, z_0) \subset I_{\alpha}$ is uniquely determined by
$H_N(z_N) = N$.
Put $\tilde H_N(z) = {\rm Tr\,} [z\tilde G_{L_N}(1-z\alpha \tilde G_{L_N})^{-1}]
$.
Then by Lemma \ref{minmax}, $\tilde H_N$ is well-defined on $I_{\alpha}$
and
$\tilde H_N \leqslant H_N$ there.
Moreover
\begin{eqnarray*}
H_N(z) -\tilde H_N(z)
&=& {\rm Tr\,} [(1-\alpha zG_{L_N})^{-1}
z(G_{L_N}-\tilde G_{L_N})(1-\alpha z\tilde G_{L_N})^{-1}] \\
&\leqslant& ||(1-\alpha zG_{L_N})^{-1}||
||(1-\alpha z\tilde G_{L_N})^{-1}||z{\rm Tr\,} [G_{L_N}- \tilde G_{L_N}]\\
&\leqslant & C_z = \frac{zc_0}{(1-(\alpha\vee 0) z)^2}
\end{eqnarray*}
holds.
Together with (\ref{h_N}), we have
\[
\frac{\tilde H_N(z_0)}{N} \geqslant \frac{ H_N(z_0) - C_{z_0}}{N} >
1+ \frac{\epsilon}{2\hat \rho} - \frac{C_{z_0}}{N},
\]
hence $\tilde H_N(z_0) > N$, if $N$ is large enough.
It is also obvious that $\tilde H_N$ is strictly increasing and
continuous on $I_{\alpha}$ and $\tilde H_N(0)=0$.
Thus $\tilde z_N \in [0, z_0) \subset I_{\alpha}$ is uniquely determined by
$ \tilde H_N(\tilde z_N)=N$.
The convergence $ z_N \to z_* $ is a consequence of
$h_{L_N}^{(\alpha)}(z_N) = N/{\rm Tr\,} G_{L_N} \to \hat \rho =h^{(\alpha)}(z_*)$,
the strict increasingness of $h^{(\alpha)}, h_L^{(\alpha)}$ and the pointwise
convergence $ h_L^{(\alpha)} \to h^{(\alpha)}$.
We get $ z_N \leqslant\tilde z_N$ from $ H_N \geqslant \tilde H_N $ and the
increasingness of $H_N, \tilde H_N$.
Now, let us show $ |\tilde z_N-z_N|=O(N^{-1})$, which together
with $z_N \to z_*$, yields $\tilde z_N \to z_*$.
From
\begin{eqnarray*}
0 &=& N - N = H_N(z_N) - \tilde H_N(\tilde z_N) \\
&=& {\rm Tr\,}[(1-\alpha z_NG_{L_N})^{-1}(z_NG_{L_N}
- \tilde z_N\tilde G_{L_N})(1-\alpha \tilde z_N\tilde G_{L_N})^{-1}]\\
&=& z_N{\rm Tr\,}[(1-\alpha z_NG_{L_N})^{-1}(G_{L_N} - \tilde G_{L_N})
(1-\alpha \tilde z_N\tilde G_{L_N})^{-1}] \\
& &
-(\tilde z_N -z_N){\rm Tr\,}[(1-\alpha z_NG_{L_N})^{-1}\tilde G_{L_N}
(1-\alpha \tilde z_N\tilde G_{L_N})^{-1}],
\end{eqnarray*}
we get
\begin{eqnarray*}
\lefteqn
{\frac{\tilde z_N - z_N}{\tilde z_N}
{\rm Tr\,}[(1-\alpha z_NG_{L_N})^{-1/2}\tilde z_N\tilde G_{L_N}
(1-\alpha \tilde z_N\tilde G_{L_N})^{-1}(1-\alpha z_NG_{L_N})^{-1/2}]
} && \\
&=&z_N{\rm Tr\,}[(1-\alpha z_NG_{L_N})^{-1}(G_{L_N}-\tilde G_{L_N})
(1-\alpha \tilde z_N\tilde G_{L_N})^{-1}].
\end{eqnarray*}
It follows that
\begin{eqnarray*}
\lefteqn{\frac{\tilde z_N - z_N}{\tilde z_N}N =
\frac{\tilde z_N - z_N}{\tilde z_N}\tilde H_N(\tilde z_N) }&&\\
&=&
\frac{\tilde z_N - z_N}{\tilde z_N}
{\rm Tr\,}[(1-\alpha z_NG_{L_N})(1-\alpha z_NG_{L_N})^{-1/2}\tilde z_N\tilde
G_{L_N}
(1-\alpha \tilde z_N\tilde G_{L_N})^{-1}(1-\alpha z_NG_{L_N})^{-1/2}]
\end{eqnarray*}
\begin{eqnarray*}
& \leqslant & \frac{\tilde z_N - z_N}{\tilde z_N} ||1-\alpha z_NG_{L_N}||
{\rm Tr\,}[(1-\alpha z_NG_{L_N})^{-1/2}\tilde z_N\tilde G_{L_N}
(1-\alpha \tilde z_N\tilde G_{L_N})^{-1}(1-\alpha z_NG_{L_N})^{-1/2}]
\\
& = & ||1-\alpha z_NG_{L_N}|| z_N{\rm Tr\,}[(1-\alpha z_NG_{L_N})^{-1}
(G_{L_N}-\tilde G_{L_N})(1-\alpha \tilde z_N\tilde G_{L_N})^{-1}]
\\
& \leqslant & z_N ||1-\alpha z_NG_{L_N}|| \, ||(1-\alpha z_NG_{L_N})^{-1}||\,
{\rm Tr\,} [G_{L_N}-\tilde G_{L_N}]\,||(1-\alpha \tilde z_N\tilde
G_{L_N})^{-1}||
\\
& \leqslant & c_0z_0(1-(\alpha\wedge 0) z_0)/(1-(\alpha\vee 0)z_0)^2
\end{eqnarray*}
for $N$ large enough, because $z_N, \tilde z_N <z_0$.
Thus, we have obtained $\,\tilde z_N -z_N = O(N^{-1})$. \hfill $\Box$
\bigskip
We put
\[
v^{(N)} = {\rm Tr\,}[z_NG_{L_N}(1-\alpha z_NG_{L_N})^{-2}] \quad \mbox{and}
\quad
\tilde v^{(N)} = {\rm Tr\,}[\tilde z_N\tilde G_{L_N}(1-\alpha \tilde
z_N
\tilde G_{L_N})^{-2}].
\]
Then we have :
\begin{lem}
${\rm (i)} \quad \displaystyle v^{(N)}, \tilde v^{(N)} \to \infty,$
\hspace*{3cm}
${\rm (ii)} \quad \displaystyle
\frac{v^{(N)}}{\tilde v^{(N)}} \to 1.$
\label{vv}
\end{lem}
{\sl Proof : } (i) follows from the lower bound
\begin{eqnarray*}
v^{(N)} &=& {\rm Tr\,}[z_NG_{L_N}(1-\alpha z_NG_{L_N})^{-2}]
\\
&\geqslant& {\rm Tr\,}[z_NG_{L_N}(1-\alpha z_NG_{L_N})^{-1}]\,
||1-\alpha z_NG_{L_N}||^{-1}
\geqslant N(1+o(1))/(1-(\alpha\wedge 0) z_*),
\end{eqnarray*}
since $z_N\to z_*$. The same bound is also true for $\tilde v^{(N)}$.
\noindent
(ii) Using
\begin{eqnarray*}
v^{(N)} &=& {\rm Tr\,}[-z_NG_{L_N}(1-\alpha z_NG_{L_N})^{-1}
+\alpha^{-1}(1-\alpha z_NG_{L_N})^{-2} - \alpha^{-1}]
\\
&=& -N + \alpha^{-1}{\rm Tr\,}[(1-\alpha z_NG_{L_N})^{-2} - 1]
\end{eqnarray*}
and the same for $\tilde v^{(N)}$, we get
\begin{eqnarray*}
|\tilde v^{(N)} - v^{(N)}|
&=& |\alpha^{-1}{\rm Tr\,}[(1 -\alpha \tilde z_N\tilde G_{L_N})^{-2}
- (1-\alpha z_NG_{L_N})^{-2} ]| \\
&\leqslant& |\alpha^{-1}{\rm Tr\,}[\big((1 -\alpha \tilde z_N\tilde G_{L_N})^{-1}
- (1 -\alpha z_N\tilde G_{L_N})^{-1}\big)
(1 -\alpha \tilde z_N\tilde G_{L_N})^{-1}]|\\
& & + |\alpha^{-1}{\rm Tr\,}[\big((1 -\alpha z_N\tilde G_{L_N})^{-1} -
(1 -\alpha z_N G_{L_N})^{-1}\big)
(1 -\alpha \tilde z_N\tilde G_{L_N})^{-1}]| \\
& & + |\alpha^{-1}{\rm Tr\,}[(1 -\alpha z_N G_{L_N})^{-1}
\big((1 -\alpha \tilde z_N\tilde G_{L_N})^{-1} -
(1 -\alpha z_N\tilde G_{L_N})^{-1}\big)]| \\
& & + |\alpha^{-1}{\rm Tr\,}[(1 -\alpha z_N G_{L_N})^{-1}
\big((1 -\alpha z_N\tilde G_{L_N})^{-1} -
(1 -\alpha z_N G_{L_N})^{-1}\big)]| \\
&\leqslant& ||(1 -\alpha \tilde z_N\tilde G_{L_N})^{-1}||
\, ||(1 -\alpha \tilde z_N\tilde G_{L_N})^{-1}
(\tilde z_N -z_N)\tilde G_{L_N}
(1 -\alpha z_N\tilde G_{L_N})^{-1}||_1 \\
& & + ||(1 -\alpha \tilde z_N\tilde G_{L_N})^{-1}|| \,
||(1 -\alpha z_N\tilde G_{L_N})^{-1}
z_N(G_{L_N} - \tilde G_{L_N})(1 -\alpha z_N G_{L_N})^{-1}||_1 \\
& & + ||(1 -\alpha z_N G_{L_N})^{-1}|| \,
||(1 -\alpha \tilde z_N\tilde G_{L_N})^{-1}
(\tilde z_N -z_N)\tilde G_{L_N}
(1 -\alpha z_N\tilde G_{L_N})^{-1}||_1 \\
& & + ||(1 -\alpha z_N G_{L_N})^{-1}|| \,
||(1 -\alpha z_N\tilde G_{L_N})^{-1}
z_N(G_{L_N} - \tilde G_{L_N})(1 -\alpha z_N G_{L_N})^{-1}||_1 \\
& \leqslant& ( ||(1 -\alpha \tilde z_N\tilde G_{L_N})^{-1}||
+ ||(1 -\alpha z_N G_{L_N})^{-1}||) \\
& & \times\bigg(\frac{\tilde z_N - z_N}{\tilde z_N}
||\tilde z_N\tilde G_{L_N}(1-\alpha \tilde z_N\tilde
G_{L_N})^{-1}||_1
\, ||(1 -\alpha z_N\tilde G_{L_N})^{-1}|| \\
& & + z_N||(1 -\alpha z_N\tilde G_{L_N})^{-1}||\,
||G_{L_N} - \tilde G_{L_N}||_1
\,|| (1 - \alpha z_N G_{L_N})^{-1}||\bigg) = O(1).
\end{eqnarray*}
In the last step, we have used Lemmas \ref{P2} and \ref{P4}.
This, together with (i), implies (ii). \hfill $\Box$
\begin{lem}
\[
\lim_{N\to \infty}\sqrt{2\pi v^{(N)}}\oint_{S_1(0)}
\frac{d\eta}{2\pi i\eta^{N+1}}{\rm Det}\big[1 -\alpha z_N (\eta-1) G_{L_N}
(1-\alpha z_NG_{L_N})^{-1}\big]^{-1/\alpha} = 1,
\]
\[
\lim_{N\to \infty}\sqrt{2\pi \tilde v^{(N)}}\oint_{S_1(0)}
\frac{d\eta}{2\pi i\eta^{N+1}}
{\rm Det}\big[1 -\alpha \tilde z_N (\eta-1) \tilde G_{L_N}
(1-\alpha \tilde z_N\tilde G_{L_N})^{-1}\big]^{-1/\alpha} = 1,
\]
\label{intDet}
\end{lem}
{\sl Proof : } Put $ s = 1/|\alpha| $ and
\[
p_j^{(N)} = \frac{|\alpha|z_Ng_j(L_N)}{1 -\alpha z_Ng_j(L_N)}.
\]
Then the first equality is nothing but proposition A.2(i) for
$\alpha < 0$ and proposition A.2(ii) for $\alpha >0$.
The same is true for the second equality.
\hfill $\Box$
\bigskip
{\sl Proof of Theorem \ref{gthm} : }
Since the uniqueness of $z_*$ has already been shown, it is enough to
prove (\ref{limDet}).
The main apparatus of the proof is Vere-Jones' formula in
the following form: Let $\alpha = -1/n$ for $n \in \Bbb N$.
Then
\[
{\rm Det}(1-\alpha J)^{-1/\alpha} = \sum_{m=0}^{\infty}\frac{1}{m!}
\sum_{\sigma\in{\cal S}_m}\alpha^{m-\nu(\sigma)}
{\rm Tr\,}_{\otimes^m{\cal H}}[(\otimes^m J) U(\sigma)]
\]
holds for any trace class operator $J$.
For $\alpha \in[ -1, 1]- \{ 0, -1, -1/2, \cdots, 1/n, \cdots\} $,
this holds under an additional condition $||\alpha J|| <1$.
This has actually been proved in Theorem 2.4 of \cite{ST03}.
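\medskip
\noindent {\sl Numerical check : } For $\alpha=-1$ the right-hand side of
Vere-Jones' formula is the expansion of ${\rm Det}(1+J)$ into elementary
symmetric functions of the eigenvalues, and
${\rm Tr\,}_{\otimes^m{\cal H}}[(\otimes^m J)U(\sigma)]=\prod_{c}{\rm Tr\,}(J^{|c|})$,
the product running over the cycles $c$ of $\sigma$. The brute-force sketch
below is ours (not part of the proof; {\tt numpy} assumed, a $3\times 3$ toy
matrix) and confirms this case.
\begin{verbatim}
import numpy as np
from itertools import permutations

def cycle_lengths(perm):
    seen, out = set(), []
    for i in range(len(perm)):
        if i in seen:
            continue
        j, l = i, 0
        while j not in seen:
            seen.add(j); j = perm[j]; l += 1
        out.append(l)
    return out

rng = np.random.default_rng(2)
J = 0.3 * rng.standard_normal((3, 3))
alpha, total, fact = -1.0, 0.0, 1.0
for m in range(8):                   # terms with m > 3 vanish identically
    if m > 0:
        fact *= m
    s = 0.0
    for perm in permutations(range(m)):
        cyc = cycle_lengths(perm)
        term = alpha**(m - len(cyc)) # alpha^{m - nu(sigma)}
        for l in cyc:
            term *= np.trace(np.linalg.matrix_power(J, l))
        s += term
    total += s / fact
print(total, np.linalg.det(np.eye(3) + J))   # agree to ~1e-12
\end{verbatim}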
We use the formula in the form
\begin{equation}
\frac{1}{N!} \sum_{\sigma\in{\cal S}_N}\alpha^{N-\nu(\sigma)}
{\rm Tr\,}_{\otimes^N{\cal H}}[(\otimes^NG_{L_N}) U(\sigma)]
= \oint_{S_{z_N}(0)}\frac{dz}{2\pi iz^{N+1}}
{\rm Det}(1-z\alpha G_{L_N})^{-1/\alpha}
\label{gIVJ}
\end{equation}
and in the form in which $G_{L_N}$ is replaced by $\tilde G_{L_N}$.
Here, recall that $z_N, \tilde z_N \in I_{\alpha}$.
We calculate the right-hand side by the saddle point method.
Using the above integral representation and the property of
the products of the Fredholm determinants followed by the change
of integral variables $z = z_N\eta, z= \tilde z_N\eta$, we get
\[
\frac{\sum_{\sigma\in{\cal S}_N}\alpha^{N-\nu(\sigma)}
{\rm Tr\,}_{\otimes^N{\cal H}}[\otimes^N \tilde G_{L_N} U(\sigma)]}
{\sum_{\sigma\in{\cal S}_N}\alpha^{N-\nu(\sigma)}
{\rm Tr\,}_{\otimes^N{\cal H}}[\otimes^N G_{L_N} U(\sigma)]}
= \frac{\oint_{S_{\tilde z_N}(0)}
{\rm Det}(1-z\alpha \tilde G_{L_N})^{-1/\alpha}dz/2\pi iz^{N+1}}
{\oint_{S_{z_N}(0)}
{\rm Det}(1- z\alpha G_{L_N})^{-1/\alpha}dz/2\pi iz^{N+1}}
\]
\begin{eqnarray*}
& =& \frac{{\rm Det}[1- \tilde z_N\alpha G_{L_N}]^{-1/\alpha}}
{{\rm Det}[1- z_N\alpha G_{L_N}]^{-1/\alpha}}
\frac{{\rm Det}[1- \tilde z_N\alpha \tilde G_{L_N}]^{-1/\alpha}}
{{\rm Det}[1- \tilde z_N\alpha G_{L_N}]^{-1/\alpha}}
\frac{z_N^N}{\tilde z_N^N} \\
& & \times \frac{\oint_{S_1(0)}
{\rm Det}[1-\tilde z_N(\eta-1)\alpha \tilde G_{L_N}
(1-\tilde z_N\alpha \tilde G_{L_N})^{-1}]^{-1/\alpha}
d\eta/2\pi i\eta^{N+1}}
{\oint_{S_1(0)}
{\rm Det}[1-z_N(\eta-1)\alpha G_{L_N}
(1-z_N\alpha G_{L_N})^{-1}]^{-1/\alpha}d\eta/2\pi i\eta^{N+1}}.
\end{eqnarray*}
Thus the theorem is proved if the following behaviors
in $N \to \infty$ are valid:
\begin{eqnarray*}
{\rm (a)}& \mbox{\hspace{0.5cm}}
&\frac{z_N^N}{\tilde z_N^N} \; = \;
\exp\big(- \frac{\tilde z_N - z_N}{z_N}N +o(1)\big)
\\
{\rm (b)}& & \frac{{\rm Det}[1-\tilde z_N\alpha G_{L_N}]^{-1/\alpha}}
{{\rm Det}[1- z_N\alpha G_{L_N}]^{-1/\alpha}} \; = \;
\exp\big(\frac{\tilde z_N - z_N}{z_N}N +o(1)\big),
\\
{\rm (c)}& &\frac{{\rm Det}[1-\tilde z_N\alpha \tilde G_{L_N}]^{-1/\alpha}}
{{\rm Det}[1- \tilde z_N\alpha G_{L_N}]^{-1/\alpha}} \; \to \;
{\rm Det}[1+z_*\alpha K_{z_*}]^{-1/\alpha}
\\
{\rm (d)}& &\frac{\oint_{S_1(0)}{\rm Det}[1-\tilde z_N(\eta-1)\alpha
\tilde G_{L_N}(1-\tilde z_N\alpha \tilde G_{L_N})^{-1}]^{-1/\alpha}
d\eta/2\pi i\eta^{N+1}}
{\oint_{S_1(0)}{\rm Det}[1-z_N(\eta-1)\alpha G_{L_N}
(1-z_N\alpha G_{L_N})^{-1}]^{-1/\alpha}d\eta/2\pi i\eta^{N+1}}
\quad \to \; 1.
\end{eqnarray*}
In fact, (a) is a consequence of Lemma \ref{P4}.
For (b), let us define a function
$ k(z) = \log\,{\rm Det}[1-z\alpha G_{L_N}]^{-1/\alpha} =
-\alpha^{-1}\sum_{j=0}^{\infty}\log(1-z\alpha g_j)$, where $g_j = g_j(L_N)$.
Then by Taylor's formula and (\ref{n}), we get
\[
k(\tilde z_N) - k(z_N) = k'(z_N)(\tilde z_N -z_N) + k''(\bar z)
\frac{( \tilde z_N - z_N)^2}{2} \mbox{\hspace{3cm}}
\]
\[
= \sum_{j=0}^{\infty}\frac{g_j}{1- z_N\alpha g_j}(\tilde z_N - z_N)
+ \sum_{j=0}^{\infty}\frac{\alpha g_j^2}{(1-\bar z\alpha g_j)^2}
\frac{( \tilde z_N - z_N)^2}{2}
= N\frac{ \tilde z_N - z_N}{z_N} + \delta,
\]
where $ \bar z $ is a mean value of $z_N$ and $\tilde z_N$ and
$ |\delta| = O(1/N) $ by Lemma \ref{P4}.
From the property of the product and the cyclic nature of the
Fredholm determinants, we have
\begin{eqnarray*}
\lefteqn{ \frac{{\rm Det}[1-\tilde z_N\alpha \tilde G_{L_N}]}
{{\rm Det}[1- \tilde z_N\alpha G_{L_N}]}} && \\
&=& {\rm Det}[1+ z_*\alpha (1-A)^{1/2}V_{L_N}
(1- z_*\alpha G_{L_N})^{-1}V_{L_N}^*(1-A)^{1/2}] \\
& & + \big\{ {\rm Det}[1+ \tilde z_N\alpha(G_{L_N} - \tilde G_{L_N})
(1- \tilde z_N\alpha G_{L_N})^{-1}] \\
& &
- {\rm Det}[1+ z_*\alpha(G_{L_N} - \tilde G_{L_N})
(1- z_*\alpha G_{L_N})^{-1}] \big\}.
\end{eqnarray*}
The first term converges to ${\rm Det}[1+z_*\alpha K_{z_*}]$ by
the assumption (\ref{K_z}) and the continuity of the
Fredholm determinants with respect to the trace norm.
The expression in braces in the above equation tends to $0$, because of the continuity
and
\begin{eqnarray*}
\lefteqn{||\tilde z_N\alpha(G_{L_N} - \tilde G_{L_N})
(1- \tilde z_N\alpha G_{L_N})^{-1}
- z_*\alpha(G_{L_N} - \tilde G_{L_N})
(1- z_*\alpha G_{L_N})^{-1}||_1 } && \\
& & \leqslant |\tilde z_N-z_*|\,|\alpha|\,||G_{L_N} - \tilde G_{L_N}||_1
||(1- \tilde z_N\alpha G_{L_N})^{-1}|| \\
& & + z_*|\alpha|\,||G_{L_N} - \tilde G_{L_N}||_1
||(1- \tilde z_N\alpha G_{L_N})^{-1} - (1- z_*\alpha G_{L_N})^{-1}||
\to 0,
\end{eqnarray*}
where we have used Lemmas \ref{P2} and \ref{P4}.
Thus, we get (c).
(d) is a consequence of Lemma \ref{vv} and Lemma \ref{intDet}.
\hfill $\Box$
\subsection{Proofs of the theorems}
To prove Theorem \ref{fthm} [resp.\ Theorem \ref{bthm}], it is enough to show that
(\ref{fgfl}) [resp.\ (\ref{bgfl})] converges to the right-hand side of (\ref{EF})
[resp.\ (\ref{EB})] for every $ f\in C_o(\Bbb R^d)$ \cite{DVJ}.
We regard $ {\cal H}_L = L^2(\Lambda_L) $ as a closed subspace of $L^2(\Bbb R^d)$.
Corresponding to the orthogonal decomposition
$ L^2(\Bbb R^d) = L^2(\Lambda_L) \oplus L^2(\Lambda_L^c) $, we set
$V_L = e^{\beta\triangle_L/2} \oplus 0$.
Let $ A = e^{-f} $ be the multiplication operator on $L^2(\Bbb R^d)$,
which can be decomposed as
$ A = e^{-f}\chi_{\Lambda_L} \oplus \chi_{\Lambda_L^c}$
for large $L$ since supp$\,f$ is compact.
Then
\[
G_L = V_L^*V_L = e^{\beta\triangle_L} \oplus 0 \quad\mbox{and}\quad
\tilde G_L = V_L^*AV_L = e^{\beta\triangle_L/2}e^{-f}
e^{\beta\triangle_L/2} \oplus 0
\]
can be identified with those in section 2.
We begin with the following fact, where we denote
\[
\Box_k^{(L)} \; = \; \frac{2\pi}{L}\Big(k +
\Big( -\frac{1}{2}, \frac{1}{2}\Big]^d\Big)
\qquad \mbox{for} \quad k \in \Bbb Z^d.
\]
\begin{lem}
Let $ b : [0, \infty) \to [0, \infty) $ be a monotone decreasing
continuous function such that
\[
\int_{\Bbb R^d}b(|p|)dp < \infty.
\]
Define the function $ b_L : \Bbb R^d \to [0, \infty)$ by
\[
b_L(p) =
b(|2\pi k/L|) \qquad \mbox{ if } \quad
p\in \Box_k^{(L)}
\quad \mbox{for} \quad k \in \Bbb Z^d.
\]
Then $ b_L(p) \to b(|p|) $ in $L^1(\Bbb R^d)$ as $L \to \infty $ .
\end{lem}
{\sl Proof : } There exist positive constants $c_1$ and $c_2$ such that
$ b_L(p) \leqslant c_1b(c_2|p|) $ holds for all $L\geqslant 1$ and $ p \in \Bbb R^d$.
Indeed, $c_1 = b(0)/b(2\pi\sqrt{d/(d+8)}), \, c_2 = 2/\sqrt{d+8}$ satisfy the condition,
since $ \inf\{ \, c_1b(c_2|p|) \, | \, p \in \Box_0^{(L)} \, \} \geqslant b(0) $
for all $L > 1$ and $ \sup\{ \,c_2|p| \, | \, p \in \Box_k^{(L)} \, \}
\leqslant 2\pi|k|/L $ for $ k \in \Bbb Z^d -\{0\}$.
Obviously $ c_1b(c_2|p|) $ is an integrable function of $ p \in \Bbb R^d$.
The lemma follows by the dominated convergence theorem. \hfill $\Box$
\medskip
Finally, we confirm the assumptions of Theorem \ref{gthm}.
\begin{prop}
\begin{eqnarray}
{\rm (i)} && \forall L>0: ||V_L|| = 1, \qquad
\lim_{L\to\infty} {\rm Tr\,} G_L/L^d =(4\pi \beta)^{-d/2}.
\label{TrG_L} \\
{\rm (ii)}&& \mbox{The following convergences hold as } L\to \infty
\mbox{ for each } z\in I_{\alpha}:\hskip2cm
\notag\\
&& \mbox{\hspace{-1cm}}
h_L^{(\alpha)}(z) =
\frac{{\rm Tr\,} [zG_L(1-z\alpha G_L)^{-1}]}{{\rm Tr\,} G_L}
\; \to \; (4\pi \beta)^{d/2}\int_{\Bbb R^d}\frac{dp}{(2\pi)^d}
\frac{ze^{-\beta|p|^2}}{1-z\alpha e^{-\beta|p|^2}} = h^{(\alpha)}(z),
\label{falpha} \\
&& \mbox{\hspace{-1cm}}
||\sqrt{1-e^{-f}}\big( G_L(1-z\alpha G_L)^{-1}
- G(1-z\alpha G)^{-1}\big)\sqrt{1-e^{-f}}||_1\to 0.
\label{limK}
\end{eqnarray}
\end{prop}
{\sl Proof : }
By applying the above lemma to $ b(|p|) = e^{-\beta|p|^2} $ and
$ \tilde b(|p|) = ze^{-\beta|p|^2}/(1-z\alpha e^{-\beta|p|^2})$,
we have (\ref{TrG_L}) and (\ref{falpha}).
By Gr\"um's convergence theorem, it is enough to show
$$ \sqrt{1-e^{-f}} G_L(1-z\alpha G_L)^{-1}\sqrt{1-e^{-f}}
\to \sqrt{1-e^{-f}} G(1-z\alpha G)^{-1}\sqrt{1-e^{-f}} $$
strongly and
\begin{eqnarray*}
\lefteqn{ {\rm Tr\,} [\sqrt{1-e^{-f}}G_L(1-z\alpha G_L)^{-1}\sqrt{1-e^{-f}}] =
\int_{\Bbb R^d}(1-e^{-f(x)}) \big(G_L(1-z\alpha G_L)^{-1}\big)(x, x)dx
} && \\
&\to& \int_{\Bbb R^d}(1-e^{-f(x)}) \big(G(1-z\alpha G)^{-1}\big)(x, x)dx
= {\rm Tr\,}[\sqrt{1-e^{-f}} G(1-z\alpha G)^{-1}\sqrt{1-e^{-f}}]
\end{eqnarray*}
for (\ref{limK}). These are direct consequences of
\begin{eqnarray*}
\lefteqn{| zG_L(1-z\alpha G_L)^{-1}(x,y) - zG(1-z\alpha G)^{-1}(x,y) | } && \\
&=& \int\frac{dp}{(2\pi)^d}|e_L(p, x-y)
\tilde b_L(p) -e(p, x-y)\tilde b(|p|)| \\
&\leqslant& \int\frac{dp}{(2\pi)^d}\big(|\tilde b_L(p)-\tilde b(|p|)| +
|e_L(p, x-y) - e(p, x-y)|\tilde b(|p|)\big) \to 0
\end{eqnarray*}
uniformly in $x, y \in {\rm supp}\, f$.
Here we have used the above lemma for $\tilde b(|p|)$ and we put
$e(p,x) = e^{ip\cdot x}$ and
\begin{equation}
e_L(p, x) = e(2\pi k/L, x) \quad\mbox{if} \quad
p\in \Box_k^{(L)}
\quad \mbox{for} \quad k \in \Bbb Z^d.
\tag*{$\Box$}
\end{equation}
Thanks to (\ref{TrG_L}), we can take a sequence $\{ L_N\}_{N\in \Bbb N}$
which satisfies (\ref{gtdl}).
On the relation between $\rho$ in Theorems \ref{fthm}, \ref{bthm}
and $\hat\rho$ in Theorem \ref{gthm}, $\hat\rho=(4\pi\beta)^{d/2}\rho$
is derived from (\ref{tdl}).
We have the ranges of $\rho$ in Theorem \ref{bthm} and Theorem \ref{fthm}, since
\[
\sup_{z\in I_1}h^{(1)}(z)= (4\pi\beta)^{d/2}\int_{\Bbb R^d}\frac{dp}{(2\pi)^d}
\frac{e^{-\beta|p|^2}}{1- e^{-\beta|p|^2}}
= (4\pi\beta)^{d/2}\rho_c
\]
and $ \sup_{z\in I_{-1}}h^{(-1)}(z)=\infty$ from (\ref{falpha}).
Thus we get Theorem \ref{fthm} and Theorem \ref{bthm} using Theorem \ref{gthm}.
\section{Para-particles}
The purpose of this section is to apply the method which we have developed
in the preceding sections to statistical mechanics of gases which consist of
identical particles obeying para-statistics.
Here, we restrict our attention to para-fermions and para-bosons of
order $2$.
We will see that the point processes obtained after the thermodynamic limit
are the point processes corresponding to the cases of $ \alpha = \pm 1/2 $
given in \cite{ST03}.
In this section, we use the representation theory of the symmetric
group (cf.\ e.g.\ \cite{JK, Sa91, Si96}).
We say that $ (\lambda_1, \lambda_2, \cdots, \lambda_n) \in {\Bbb N}^n $ is
a Young frame of length $n$ for the symmetric group ${\cal S}_N$ if
\[
\sum_{j=1}^n\lambda_j =N, \quad
\lambda_1 \geqslant \lambda_2 \geqslant \cdots \geqslant \lambda_n >0.
\]
We associate the Young frame $ (\lambda_1, \lambda_2, \cdots, \lambda_n) $ with
the diagram of $\lambda_1$-boxes in the first row, $\lambda_2$-boxes in the
second row,..., and $\lambda_n$-boxes in the $n$-th row.
A Young tableau on a Young frame is a bijection from the numbers $1, 2,
\cdots, N$ to the $N$ boxes of the frame.
\subsection{Para-bosons of order 2}
Let us select one Young tableau, arbitrary but fixed, on each Young
frame of length less than or equal to 2, say the tableau $T_j$ on the
frame $( N-j, j)$ for $ j = 1, 2, \cdots, [N/2]$ and the tableau
$T_0$ on the frame $(N)$.
We denote by ${\cal R}(T_j)$ the row stabilizer of $T_j$, i.e.,
the subgroup of ${\cal S}_N$ consisting of those elements that keep all
rows of $T_j$ invariant, and by ${\cal C}(T_j)$ the column stabilizer,
whose elements preserve all columns of $T_j$.
Let us introduce the three elements
\[
a(T_j)=\frac{1}{\#{\cal R}(T_j)}\sum_{\sigma \in {\cal R}(T_j)}\sigma,
\qquad
b(T_j)=\frac{1}{\#{\cal C}(T_j)}\sum_{\sigma \in {\cal C}(T_j)}
{\rm sgn}(\sigma)\sigma
\]
and
\[
e(T_j)= \frac{d_{T_j}}{N !}\sum_{\sigma \in {\cal R}(T_j)}
\sum_{\tau \in {\cal C}(T_j)}{\rm sgn}(\tau)\sigma\tau
= c_ja(T_j)b(T_j)
\]
of the group algebra
${\Bbb C}[{\cal S}_N]$ for each $j=0, 1, \cdots, [N/2]$,
where $d_{T_j}$ is the dimension of the irreducible representation of
${\cal S}_N$ corresponding to $T_j$ and
$ c_j = d_{T_j}\#{\cal R}(T_j)\#{\cal C}(T_j)/N ! $.
As is known,
\begin{equation}
a(T_j)\sigma b(T_k)=b(T_k)\sigma a(T_j) = 0
\label{asb}
\end{equation}
hold for any $\sigma \in{\cal S}_N$ and $0\leqslant j<k\leqslant [N/2]$.
The relations
\begin{equation}
a(T_j)^2 = a(T_j), \quad b(T_j)^2 =b(T_j), \quad
e(T_j)e(T_k)=\delta_{jk}e(T_j)
\label{abe}
\end{equation}
also hold.
For later use, let us introduce
\begin{equation}
d(T_j) = e(T_j)a(T_j) = c_j a(T_j)b(T_j)a(T_j)
\qquad ( j=0, 1, \cdots,[N/2]).
\label{defd}
\end{equation}
They satisfy
\begin{equation}
d(T_j)d(T_k)=\delta_{jk}d(T_j) \quad \mbox{ for } \quad 0\leqslant j, k \leqslant
[N/2],
\label{ddd}
\end{equation}
as is shown readily from (\ref{asb}) and (\ref{abe}).
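\medskip
\noindent {\sl Numerical check : } The relations (\ref{abe}) and (\ref{ddd}) can be
verified mechanically in $\Bbb C[{\cal S}_3]$ through the (faithful) left regular
representation. In the sketch below (ours, not part of the text; {\tt numpy}
assumed) we take $N=3$ and the tableau on the frame $(2,1)$ with first row
$\{1,2\}$ and first column $\{1,3\}$, so that ${\cal R}(T)=\{{\rm id},(12)\}$,
${\cal C}(T)=\{{\rm id},(13)\}$ and $c_T = d_T\,\#{\cal R}(T)\,\#{\cal C}(T)/N! = 4/3$.
\begin{verbatim}
import numpy as np
from itertools import permutations

G = list(permutations(range(3)))              # S_3, 0-indexed
idx = {g: i for i, g in enumerate(G)}

def compose(s, t):                            # (s t)(i) = s(t(i))
    return tuple(s[t[i]] for i in range(3))

def left_mult(coeffs):                        # matrix of g -> f g on C[S_3]
    M = np.zeros((6, 6))
    for f, cf in coeffs.items():
        for g in G:
            M[idx[compose(f, g)], idx[g]] += cf
    return M

a = {(0, 1, 2): 0.5, (1, 0, 2): 0.5}          # a(T): average over {id,(12)}
b = {(0, 1, 2): 0.5, (2, 1, 0): -0.5}         # b(T): signed average over {id,(13)}
A, B = left_mult(a), left_mult(b)
E = (4.0 / 3.0) * A @ B                       # e(T) = c_T a(T) b(T)
D = E @ A                                     # d(T) = e(T) a(T)
print(np.allclose(A @ A, A), np.allclose(B @ B, B),
      np.allclose(E @ E, E), np.allclose(D @ D, D))   # all True
\end{verbatim}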
The inner product $< \cdot, \cdot>$ of $\Bbb C[{\cal S}_N]$ is defined by
\[
< \sigma, \tau> = \delta_{\sigma\tau} \quad \mbox{ for } \; \sigma, \tau
\in{\cal S}_N
\]
and extended to all elements of $\Bbb C[{\cal S}_N]$ by sesquilinearity.
The left representation $L$ and the right representation $R$ of ${\cal S}_N$
on $\Bbb C[{\cal S}_N]$ are defined by
\[
L(\sigma)g = L(\sigma)\sum_{\tau\in{\cal S}_N}g(\tau)\tau
=\sum_{\tau\in{\cal S}_N}g(\tau)\sigma\tau
= \sum_{\tau\in{\cal S}_N}g(\sigma^{-1}\tau)\tau
\]
and
\[
R(\sigma)g = R(\sigma)\sum_{\tau\in{\cal S}_N}g(\tau)\tau
=\sum_{\tau\in{\cal S}_N}g(\tau)\tau\sigma^{-1}
= \sum_{\tau\in{\cal S}_N}g(\tau\sigma)\tau,
\]
respectively. Here and hereafter we identify $g: {\cal S}_N \to \Bbb C$ and
$\sum_{\tau\in{\cal S}_N}g(\tau)\tau \in\Bbb C[{\cal S}_N]$.
They are extended to the representation of $\Bbb C[{\cal S}_N]$ on $\Bbb C[{\cal S}_N]$ as
\[
L(f)g = fg =\sum_{\sigma,\tau}f(\sigma)g(\tau)\sigma\tau
=\sum_{\sigma}\big(\sum_{\tau}f(\sigma\tau^{-1})g(\tau)\big)\sigma
\]
and
\[
R(f)g = g\hat f =\sum_{\sigma,\tau}g(\sigma)f(\tau)\sigma\tau^{-1}
=\sum_{\sigma}\big(\sum_{\tau}g(\sigma\tau)f(\tau)\big)\sigma,
\]
where $\hat f = \sum_{\tau}\hat f(\tau)\tau = \sum_{\tau}f(\tau^{-1})\tau
= \sum_{\tau}f(\tau)\tau^{-1}$.
The character of the irreducible representation of ${\cal S}_N$ corresponding
to the tableau $T_j$ is obtained by
\[
\chi_{T_j}(\sigma)=\sum_{\tau\in{\cal S}_N}<\tau, \sigma R(e(T_j))\tau>
= \sum_{\tau\in{\cal S}_N}<\tau, \sigma \tau \widehat{e(T_j)}>.
\]
We introduce a tentative notation
\begin{equation}
\chi_{g}(\sigma) \equiv \sum_{\tau\in{\cal S}_N}<\tau, \sigma R(g)\tau>
= \sum_{\tau,\gamma\in{\cal S}_N}<\tau, \sigma \tau \gamma^{-1}>g(\gamma)
= \sum_{\tau\in{\cal S}_N}g(\tau^{-1}\sigma\tau)
\label{chig}
\end{equation}
for $ g=\sum_{\tau}g(\tau)\tau\in\Bbb C[{\cal S}_N]$.
Let $U$ be the representation of ${\cal S}_N$ (and its extension to
$\Bbb C[{\cal S}_N]$) on $\otimes^N{\cal H}_L$ defined by
\[
U(\sigma) \varphi_1\otimes \cdots \otimes \varphi_N =
\varphi_{\sigma^{-1}(1)}\otimes \cdots \otimes\varphi_{\sigma^{-1}(N)}
\qquad \mbox{for } \; \varphi_1, \cdots, \varphi_N \in{\cal H}_L,
\]
or equivalently by
\[
(U(\sigma) f)(x_1, \cdots, x_N ) =
f(x_{\sigma(1)}, \cdots,x_{\sigma(N)})
\qquad \mbox{ for } \; f \in \otimes^N{\cal H}_L.
\]
Obviously, $U$ is unitary: $ U(\sigma)^* = U(\sigma^{-1}) = U(\sigma)^{-1}$.
Hence $U(a(T_j))$ is an orthogonal projection because of
$ U(a(T_j))^* = U(\widehat{a(T_j)}) = U(a(T_j)) $ and (\ref{abe}).
So are $U(b(T_j))$'s, $U(d(T_j))$'s and $P_{2B} = \sum_{j=0}^{[N/2]}U(d(T_j))$.
Note that Ran$\,U(d(T_j)) = \,$Ran$\,U(e(T_j))$ because of
$d(T_j)e(T_j) = e(T_j), e(T_j)d(T_j)= d(T_j)$.
We refer to \cite{MG64, HT69, ST70} for the quantum mechanics
of para-particles.
(See also \cite{OK69}.)
The arguments in these references indicate that the state space of
$N$ para-bosons of order 2 in the finite box $\Lambda_L$ is given by
${\cal H}_{L,N}^{2B} = P_{2B}\otimes^N{\cal H}_L$.
It is obvious that there is a CONS of ${\cal H}_{L,N}^{2B}$ which consists of
the vectors of the form
$U(d(T_j))\varphi_{k_1}^{(L)}\otimes \cdots \otimes \varphi_{k_N}^{(L)}$,
which are the eigenfunctions of $\otimes^NG_L$.
Then, we define a point process of $N$ free para-bosons of order 2 as
in section 2 and its generating functional is given by
\[
E_{L, N}^{2B}\big[e^{-<f, \xi>}\big] =
\frac{{\rm Tr\,}_{\otimes^N{\cal H}_L}
[(\otimes^N\tilde G_L) P_{2B}]}
{{\rm Tr\,}_{\otimes^N{\cal H}_L}
[(\otimes^N G_L)P_{2B}]}.
\]
Let us give expressions, which have a clear correspondence
with (\ref{bgfl}).
\begin{lem}
\begin{eqnarray}
E_{L, N}^{2B}\big[e^{-<f, \xi>}\big]
&=&
\frac{\sum_{j=0}^{[N/2]}\sum_{\sigma\in{\cal S}_N}\chi_{T_j}(\sigma)
{\rm Tr\,}_{\otimes^N{\cal H}_L}[(\otimes^N\tilde G_L) U(\sigma)]}
{\sum_{j=0}^{[N/2]}\sum_{\sigma\in{\cal S}_N}\chi_{T_j}(\sigma)
{\rm Tr\,}_{\otimes^N{\cal H}_L}[(\otimes^N G_L) U(\sigma)]}
\label{pgfl} \\
&=&\frac{\sum_{j=0}^{[N/2]}\int_{\Lambda_L^N}
\det_{T_j}\{\tilde G_L(x_i, x_j)\}dx_1 \cdots dx_N}
{\sum_{j=0}^{[N/2]}\int_{\Lambda_L^N}
\det_{T_j}\{ G_L(x_i, x_j)\}dx_1 \cdots dx_N}
\label{imt}
\end{eqnarray}
\label{pl}
\end{lem}
{\sl Remark 1. } $ {\cal H}_{L,N}^{2B} = P_{2B}\otimes^N{\cal H}_L $ is determined by
the choice of the tableaux $T_j$'s.
The spaces corresponding to different choices of tableaux are different
subspaces of $\otimes^N{\cal H}_L$.
However, they are unitarily equivalent and the generating functional
given above is not affected by the choice.
In fact, $\chi_{T_j}(\sigma)$ depends only on
the frame on which the tableau $T_j$ is defined.
\smallskip
\noindent{\sl Remark 2. } $\det_{T}A = \sum_{\sigma\in{\cal S}_N}
\chi_T(\sigma)\prod_{i=1}^NA_{i\sigma(i)}$ in (\ref{imt}) is called an immanant,
a generalization of the determinant different from $\det_{\alpha}$.
\medskip
\noindent {\sl Proof : } Since $\otimes^N G_L$ commutes with $U(\sigma)$ and
$ a(T_j)e(T_j) = e(T_j)$, we have
\[
{\rm Tr\,}_{\otimes^N{\cal H}_L}\big((\otimes^N G_L) U(d(T_j))\big)
= {\rm Tr\,}_{\otimes^N{\cal H}_L}\big((\otimes^N G_L) U(e(T_j))U(a(T_j))\big)
\mbox{\hspace{2cm}}
\]
\begin{equation}
= {\rm Tr\,}_{\otimes^N{\cal H}_L}
\big(U(a(T_j))(\otimes^N G_L) U(e(T_j))\big)
= {\rm Tr\,}_{\otimes^N{\cal H}_L}
\big((\otimes^N G_L) U(e(T_j))\big).
\label{d-e}
\end{equation}
On the other hand, we get from (\ref{chig}) that
\begin{eqnarray}
\lefteqn{
\sum_{\sigma\in{\cal S}_N}\chi_g(\sigma)
{\rm Tr\,}_{\otimes^N{\cal H}_L}\big((\otimes^NG)U(\sigma)\big)
=\sum_{\tau, \sigma\in{\cal S}_N}g(\tau^{-1}\sigma\tau)
{\rm Tr\,}_{\otimes^N{\cal H}_L}\big((\otimes^NG)U(\sigma)\big)
} && \nonumber \\
&=& \sum_{\tau,\sigma}g(\sigma)
{\rm Tr\,}_{\otimes^N{\cal H}_L}\big((\otimes^NG)U(\tau\sigma\tau^{-1})\big)
= \sum_{\tau,\sigma}g(\sigma)
{\rm Tr\,}_{\otimes^N{\cal H}_L}\big((\otimes^NG)U(\tau)U(\sigma)U(\tau^{-1})\big)
\nonumber \\
&=& N!\sum_{\sigma}g(\sigma)
{\rm Tr\,}_{\otimes^N{\cal H}_L}\big((\otimes^NG)U(\sigma)\big)
= N!{\rm Tr\,}_{\otimes^N{\cal H}_L}\big((\otimes^NG)U(g)\big),
\label{imm}
\end{eqnarray}
where we have used the cyclicity of the trace and the commutativity
of $U(\tau)$ with $\otimes^NG$. Putting $g=e(T_j)$ and using (\ref{d-e}), the first equation
is derived. The second one is obvious. \hfill $\Box$
\medskip
Let $\psi_{T_j}$ be the character of the induced representation
Ind$_{{\cal R}(T_j)}^{{\cal S}_N}[{\bf 1}] $, where {\bf 1} is the
representation ${\cal R}(T_j) \ni \sigma \to 1$, i.e.,
\[
\psi_{T_j}(\sigma) = \sum_{\tau\in{\cal S}_N}< \tau, \sigma
R(a(T_j))\tau> = \chi_{a(T_j)}(\sigma).
\]
Then the determinantal form \cite{JK}
\begin{eqnarray}
\chi_{T_j} &=& \psi_{T_j} - \psi_{T_{j-1}} \qquad ( j= 1, \cdots, [N/2])
\label{chipsi} \\
\chi_{T_0} &=& \psi_{T_0}
\notag
\end{eqnarray}
yields the following result:
\begin{thm}
The finite para-boson processes defined above converge weakly to the
point process whose Laplace transform is given by
\[
{\rm E}_{\rho}^{2B}\big[e^{-<f, \xi>}\big]
= {\rm Det} \big[1 + \sqrt{1-e^{-f}}z_*G(1 - z_*G)^{-1}\sqrt{1-e^{-f}}\big]^{-2}
\]
in the thermodynamic limit, where $z_* \in (0, 1)$ is determined by
\[
\frac{\rho}{2} = \int \frac{dp}{(2\pi)^d}\frac{z_*e^{-\beta |p|^2}}
{1 - z_*e^{-\beta |p|^2}} = (z_*G(1 - z_*G)^{-1})(x,x) < \rho_c,
\]
and $\rho_c$ is given by (\ref{rho_c}).
\end{thm}
{\sl Proof : } Using (\ref{chipsi}) in the expression in
Lemma \ref{pl} and (\ref{imm}) for $ g = a(T_{[N/2]})$, we have
\begin{eqnarray*}
E_{L, N}^{2B}\big[e^{-<f, \xi>}\big]
&=&
\frac{\sum_{\sigma\in{\cal S}_N}\psi_{T_{[N/2]}}(\sigma)
{\rm Tr\,}_{{\cal H}_L^{\otimes N}}
\big((\otimes^N\tilde G_L)U(\sigma)\big)}
{\sum_{\sigma\in{\cal S}_N}\psi_{T_{[N/2]}}(\sigma)
{\rm Tr\,}_{{\cal H}_L^{\otimes N}}
\big((\otimes^N G_L)U(\sigma)\big)} \nonumber \\
&=& \frac{{\rm Tr\,}_{\otimes^N{\cal H}_L}
\big((\otimes^N\tilde G_L)U(a(T_{[N/2]})\big)}
{{\rm Tr\,}_{\otimes^N{\cal H}_L}
\big((\otimes^N G_L)U(a(T_{[N/2]})\big)} \nonumber \\
& =&
\frac{{\rm Tr\,}_{\otimes^{[(N+1)/2]}{\cal H}_L}
\big((\otimes^{[(N+1)/2]}\tilde G_L)S_{[(N+1)/2]}\big)}
{{\rm Tr\,}_{\otimes^{[(N+1)/2]}{\cal H}_L }
\big((\otimes^{[(N+1)/2]} G_L)S_{[(N+1)/2]}\big)}
\frac{{\rm Tr\,}_{\otimes^{[N/2]}{\cal H}_L}
\big((\otimes^{[N/2]}\tilde G_L)S_{[N/2]}\big)}
{{\rm Tr\,}_{\otimes^{[N/2]}{\cal H}_L}
\big((\otimes^{[N/2]} G_L)S_{[N/2]}\big)}.
\end{eqnarray*}
In the last equality, we have used
\[
a(T_{[N/2]}) = \frac{\sum_{\sigma\in{\cal R}_1}\sigma}{\#{\cal R}_1}
\frac{\sum_{\tau\in{\cal R}_2}\tau}{\#{\cal R}_2},
\]
where $ {\cal R}_1 $ is the symmetric group of $[(N+1)/2]$ numbers which are on
the first row of the tableau $T_{[N/2]}$ and $ {\cal R}_2 $ that of $[N/2]$
numbers on the second row.
Then, Theorem \ref{bthm} yields the theorem.
\hfill $\Box$
\subsection{Para-fermions of order 2}
For a Young tableau $T$, we denote by $T'$ the tableau obtained by
interchanging the rows and the columns of $T$.
In other words, $T'$ is the transpose of $T$.
The tableau $T_j'$ is on the frame
$ (\underbrace{2, \cdots, 2}_j, \underbrace{1, \cdots, 1}_{N-2j}) $
and satisfies
\[
{\cal R}(T'_j) = {\cal C}(T_j), \qquad
{\cal C}(T'_j) = {\cal R}(T_j).
\]
The generating functional of the point process for $N$ para-fermions of
order 2 in the finite box $\Lambda_L$ is given by
\[
E_{L, N}^{2F}\big[e^{-<f, \xi>}\big] =
\frac{\sum_{j=0}^{[N/2]}{\rm Tr\,}_{\otimes^N{\cal H}_L}
\big((\otimes^N\tilde G) U(d(T'_j))\big)}
{\sum_{j=0}^{[N/2]}{\rm Tr\,}_{\otimes^N{\cal H}_L}
\big((\otimes^N G) U(d(T'_j))\big)}
\]
as in the case of para-bosons of order 2.
Let us recall the relations
\[
\chi_{T'_j}(\sigma) = {\rm sgn}(\sigma)\chi_{T_j}(\sigma), \qquad
\varphi_{T'_j}(\sigma) = {\rm sgn}(\sigma)\psi_{T_j}(\sigma),
\]
where we have denoted by
\[
\varphi_{T'_j}(\sigma) = \sum_{\tau}< \tau, \sigma R(b({T'_j}))\tau>
\]
the character of the induced representation
Ind$_{{\cal C}(T_j')}^{{\cal S}_N}[ \, {\rm sgn} \, ]$,
where \, sgn\, is the representation
${\cal C}(T_j') = {\cal R}(T_j) \ni \sigma \mapsto {\rm sgn}(\sigma)$.
Thanks to these relations, we can easily translate the argument of
para-bosons to that of para-fermions and get the following theorem.
\begin{thm}
The finite para-fermion processes defined above converge weakly to the
point process whose Laplace transform is given by
\[
{\rm E}_{\rho}^{2F}\big[e^{-<f, \xi>}\big]
= {\rm Det} \big[1 - \sqrt{1-e^{-f}}z_*G(1 + z_*G)^{-1}\sqrt{1-e^{-f}}\big]^2
\]
in the thermodynamic limit, where $z_*\in ( 0, \infty)$ is determined by
\[
\frac{\rho}{2} = \int \frac{dp}{(2\pi)^d}\frac{z_*e^{-\beta |p|^2}}
{1 + z_*e^{-\beta |p|^2}}= (z_*G(1 + z_*G)^{-1})(x,x).
\]
\end{thm}
\bigskip
\section{Gas of composite particles}
Most gases are composed of composite particles.
In this section, we formulate point processes which yield the position
distributions of constituents of such gases.
Each composite particle is called a ``molecule", and molecules consist of
``atoms".
Suppose that there are two kinds of atoms, say A and B, such that
both kinds obey Fermi-Dirac statistics or both obey Bose-Einstein statistics,
that $N$ atoms of kind A and $N$ atoms of kind B are in the same box $\Lambda_L$
and that one A-atom and one B-atom are bounded to form a molecule by
the non-relativistic interaction described by the Hamiltonian
\[
H_L = -\triangle_x -\triangle_y + U(x-y)
\]
with periodic boundary conditions in $L^2( \Lambda_L\times\Lambda_L)$.
Hence there are in total $N$ such molecules in $\Lambda_L$.
We assume that the interaction between atoms in different molecules
can be neglected.
We only consider such systems at zero temperature, where the
$N$ molecules are in the ground state and the wave function is
(anti-)symmetrized in the $N$ atoms of type A and in the
$N$ atoms of type B.
In order to avoid difficulties due to boundary conditions,
we have set the masses of two atoms A and B equal.
We also assume that the potential $U$ is infinitely deep so that
the wave function of the ground state has a compact support.
We put
\[
H_L = -\frac{1}{2}\triangle_R - 2\triangle_r + U(r)
= H_L^{(R)} + H_L^{(r)},
\]
where $R = (x+y)/2, \quad r= x-y $.
The normalized wave function of the ground state of $H_L^{(R)}$ is the
constant
function $L^{-d/2}$.
Let $\varphi_L(r)$ be that of the ground state of $H_L^{(r)}$.
Then, the ground state of $H_L$ is $\psi_L( x, y) = L^{-d/2}\varphi_L( x -
y)$.
The ground state of the $N$-particle system in $\Lambda_L$ is, by taking the
(anti-)symmetrizations,
\begin{eqnarray}
\Psi_{L, N}( x_1, \cdots, x_N; y_1, \cdots, y_N)
&=& Z_{c\alpha}^{-1}\sum_{\sigma, \tau\in{\cal S}_N}\alpha^{N-\nu(\sigma)}
\alpha^{N-\nu(\tau)}
\prod_{j=1}^N\psi_L(x_{\sigma(j)}, y_{\tau(j)}) \nonumber \\
&=& \frac{N!}{Z_{c\alpha}L^{dN/2}}\sum_{\sigma}\alpha^{N-\nu(\sigma)}
\prod_{j=1}^N\varphi_L( x_j - y_{\sigma(j)}),
\label{gscs}
\end{eqnarray}
where $Z_{c\alpha}$ is the normalization constant and $\alpha = \pm 1$.
Recall that $\alpha^{N-\nu(\sigma)} = {\rm sgn}(\sigma)$ for $\alpha = -1$.
The distribution function of the positions of the $2N$ atoms of the system
at zero temperature is given by the squared magnitude of (\ref{gscs})
\begin{equation}
p^{c\alpha}_{L, N}( x_1, \cdots, x_N; y_1, \cdots, y_N)
= \frac{(N!)^2}{Z_{c\alpha}^2L^{dN}}\sum_{\sigma, \tau \in{\cal S}_N}
\alpha^{N-\nu(\sigma)}\prod_{j=1}^{N}\varphi_L( x_j- y_{\tau(j)})
\overline{\varphi_L( x_{\sigma(j)}- y_{\tau(j)})}.
\label{epdc}
\end{equation}
Suppose that we are interested in one kind of atoms, say of type A.
We introduce the operator $ \varphi_L $ on ${\cal H}_L = L^2(\Lambda_L)$
which has the integral kernel $\varphi_L(x-y)$.
Then the Laplace transform of the distribution of the positions of $N$ A-atoms
can be written as
\begin{eqnarray*}
E_{L,N}^{c\alpha}\big[e^{- < f, \xi>}\big]
&=&\int_{\Lambda_L^{2N}}e^{-\sum_{j=1}^Nf(x_j)}
p^{c\alpha}_{L, N}( x_1, \cdots, x_N; y_1, \cdots, y_N)
\,dx_1\cdots dx_Ndy_1\cdots dy_N \nonumber \\
&=& \frac{\sum_{\sigma\in{\cal S}_N}\alpha^{N-\nu(\sigma)}
{\rm Tr\,}_{\otimes^N{\cal H}}[(\otimes^N\varphi_L^*e^{-f}\varphi_L) U(\sigma)]}
{\sum_{\sigma\in{\cal S}_N}\alpha^{N-\nu(\sigma)}
{\rm Tr\,}_{\otimes^N{\cal H}}[(\otimes^N \varphi_L^*\varphi_L )U(\sigma)]}.
\end{eqnarray*}
In order to take the thermodynamic limit $ N, L \to \infty, \ N/L^d \to \rho$,
we consider a Schr\"odinger operator in the whole space.
Let $ \varphi $ be the normalized wave function of the ground state of
$ H_r = - 2\triangle_r + U(r) $ in $L^2(\Bbb R^d)$.
Then
$ \varphi(r) = \varphi_L(r) \; (\forall r \in \Lambda_L)$ holds for
large $L$ by the assumption on $U$.
The Fourier series expansion of $\varphi_L$ is given by
\[
\varphi_L(r) = \sum_{k\in
\Bbb Z^d}\Big(\frac{2\pi}{L}\Big)^{d/2}\hat\varphi
\Big(\frac{2\pi k}{L}\Big) \frac{e^{i2\pi k\cdot r/L}}{L^{d/2}},
\]
where $\hat\varphi$ is the Fourier transform of $\varphi$:
\[
\hat\varphi(p) = \int_{\Bbb R^d}\varphi(r)e^{-ip\cdot r}
\frac{dr}{(2\pi)^{d/2}}.
\]
By $\varphi$, we denote the integral operator on $ {\cal H}=L^2(\Bbb R^d)$ having
kernel $\varphi(x-y)$.
Now we have the following theorem on the thermodynamic limit,
where the density $\rho > 0 $ is arbitrary for $\alpha = -1$,
$\rho \in ( 0, \rho_c^c)$ for $\alpha =1$ and
\[
\rho_c^c = \int \frac{dp}{(2\pi)^d}\frac{|\hat\varphi(p)|^2 }
{|\hat\varphi(0)|^2 - |\hat\varphi(p)|^2 }.
\]
\begin{thm}
The finite point processes defined above for $\alpha = \pm 1$ converge weakly
to the process whose Laplace transform is given by
\[
{\rm E}_{\rho}^{c\alpha}\big[e^{-<f, \xi>}\big]
= {\rm Det} \big[1 + z_*\alpha \sqrt{1-e^{-f}}\varphi(||\varphi||_{L^1}^2-z_*\alpha
\varphi^*\varphi)^{-1}\varphi^*\sqrt{1-e^{-f}}\big]^{-1/\alpha}
\]
in the thermodynamic limit (\ref{tdl}),
where the parameter $z_*$ is the positive constant uniquely determined by
\[
\rho = \int \frac{dp}{(2\pi)^d}\frac{z_*|\hat\varphi(p)|^2 }
{|\hat\varphi(0)|^2 - z_*\alpha|\hat\varphi(p)|^2 } =
(z_*\varphi(||\varphi||_{L^1}^2-z_*\alpha\varphi^*\varphi)^{-1}\varphi^*)(x,x).
\]
\label{cthm}
\end{thm}
\noindent{\sl Proof : } The eigenvalues of the integral operator
$\varphi_L$ are $\{(2\pi)^{d/2}\hat\varphi(2\pi k/L)\}_{k\in\Bbb Z^d}$.
Since $\varphi$ is the ground state of the Schr\"odinger operator,
we can assume $\varphi \geqslant 0$.
Hence the largest eigenvalue is $(2\pi)^{d/2}\hat\varphi(0)= ||\varphi||_{L^1}$.
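\medskip
\noindent {\sl Numerical check : } With the Fourier convention used here,
$||\varphi||_{L^1}=(2\pi)^{d/2}\hat\varphi(0)$ and
$|\hat\varphi(p)|\leqslant \hat\varphi(0)$ whenever $\varphi\geqslant 0$.
A one-dimensional illustration (ours; {\tt scipy} assumed; the truncated
Gaussian is purely a stand-in for the compactly supported ground state):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

phi = lambda r: np.exp(-r**2) * (abs(r) <= 5)
# hat-phi(p) = int phi(r) e^{-ipr} dr / sqrt(2 pi); phi is even, so use cosine
phihat = lambda p: (quad(lambda r: phi(r) * np.cos(p * r), -5, 5)[0]
                    / np.sqrt(2 * np.pi))
print(np.sqrt(2 * np.pi) * phihat(0.0), quad(phi, -5, 5)[0])  # equal: L^1 norm
print(phihat(1.0) <= phihat(0.0))                             # True
\end{verbatim}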
We also have
\begin{equation}
1 = ||\varphi||^2_{L^2(\Bbb R^d)} = \int_{\Bbb R^d}|\hat \varphi(p)|^2dp
= ||\varphi_L||^2_{L^2(\Lambda_L)}
= \sum_{k\in \Bbb Z^d}\Big(\frac{2\pi}{L}\Big)^d
\Big|\hat\varphi\Big(\frac{2\pi k}{L}\Big)\Big|^2.
\label{s=s}
\end{equation}
Set $ V_L=\varphi_L/||\varphi||_{L^1}$, so that
\[
||V_L|| = 1, \quad ||V_L||_2^2 = L^d/||\varphi||_{L^1}^2.
\]
Then Theorem \ref{gthm} applies as follows:
For $z \in I_{\alpha}$, let us define functions $d, d_L$ on $\Bbb R^d$ by
\[
d(p) = \frac{z|\hat\varphi(p)|^2}
{|\hat\varphi(0)|^2 - z\alpha|\hat\varphi(p)|^2}
\]
and
\begin{equation}
d_L(p) =d(2\pi k/L) \quad\mbox{if} \quad
p\in \Box_k^{(L)}
\quad \mbox{for} \quad k \in \Bbb Z^d.
\label{d_L}
\end{equation}
Then
\[
\int_{\Bbb R^d}\frac{dp}{(2\pi)^d}d_L(p)
= L^{-d}||zV_L(1 - z\alpha V_L^*V_L)^{-1}V_L^*||_1
\]
and the following lemma holds:
\begin{lem}
\[
\lim_{L\to\infty}|| d_L - d ||_{L^1} = 0.
\]
\end{lem}
{\sl Proof : } Put
\[
\hat\varphi_{[L]}(p) =\hat\varphi(2\pi k/L) \quad\mbox{if} \quad
p\in \Box_k^{(L)}
\quad \mbox{for} \quad k \in \Bbb Z^d
\]
and note that compactness of supp$\;\varphi$ implies
$\varphi\in L^1(\Bbb R^d)$ and uniform continuity of $\hat\varphi$.
Then we have
$ ||\,|\hat \varphi_{[L]}|^2 - |\hat \varphi|^2 ||_{L^{\infty}} \to 0 $
and $ || d_L - d ||_{L^{\infty}} \to 0 $.
On the other hand, we get
$ ||\,|\hat \varphi_{[L]}|^2||_{L^1} = ||\, |\hat \varphi|^2 ||_{L^1} $
from (\ref{s=s}).
It is obvious that
\[
|\, || d_L ||_{L^1}- ||d ||_{L^1} \,|\leqslant
\frac{z}{(1-z(\alpha\vee 0))^2}
\frac{||\,|\hat \varphi_{[L]}|^2 - |\hat \varphi|^2 ||_{L^1}}
{|\hat\varphi(0)|^2}.
\]
Hence the lemma is derived by using the following fact twice:
If $ f, f_1, f_2, \cdots \in L^1(\Bbb R^d)$ satisfy
\[
||f_n - f||_{L^{\infty}} \to 0 \; \mbox{ and }
\; ||\, f_n\,||_{L^1} \to ||\, f\, ||_{L^1},
\]
then $ ||f_n - f||_{L^1} \to 0 $ holds.
In fact, using
\[
\int_{|x| > R}|f_n(x)|\,dx = \int_{|x| > R}|f(x)|\,dx
+ \int_{|x| \leqslant R}(|f(x)| - |f_n(x)|)\,dx +
||\, f_n\, ||_{L^1} - ||\, f\, ||_{L^1},
\]
we have
\begin{eqnarray*}
|| f_n - f||_{L^1}
&\leqslant & \int_{|x| \leqslant R}|f_n(x) - f(x)|\,dx
+ \int_{|x| > R}(|f_n(x)| + |f(x)|)\,dx \\
&\leqslant & 2\int_{|x| \leqslant R}|f_n(x) - f(x)|\,dx
+ 2\int_{|x| > R} |f(x)|\,dx + ||\, f_n\, ||_{L^1} - ||\, f\, ||_{L^1}.
\end{eqnarray*}
For any $\epsilon >0$, we can choose $R$ large enough to make the second
term of the right hand side smaller than $\epsilon$.
For this choice of $R$, we set $n$ so large that the first term and the
remainder are smaller than $\epsilon$ and then $|| f_n - f ||_{L^1} < 3\epsilon$.
\hfill $\Box$
\bigskip
\noindent ( {\sl Continuation of the proof of Theorem \ref{cthm}} )
Using this lemma, we can show
\begin{eqnarray*}
&& h_L^{(\alpha)}(z) =
\frac{{\rm Tr\,} [zV_L(1-z\alpha V_L^*V_L)^{-1}V_L^*]}{{\rm Tr\,} V_L^*V_L}
\quad \to \quad |\hat\varphi(0)|^2\int_{\Bbb R^d}dp
\frac{z|\hat\varphi(p)|^2}{|\hat\varphi(0)|^2-z\alpha |\hat\varphi(p)|^2}
= h^{(\alpha)}(z),
\\
&& ||\sqrt{1-e^{-f}}\big[V_L(1-z\alpha V_L^*V_L)^{-1}V_L^*
- \varphi(||\varphi||_{L^1}^2-z\alpha \varphi^*\varphi)^{-1}\varphi^*\big]
\sqrt{1-e^{-f}}||_1\to 0,
\end{eqnarray*}
as in the proof of (\ref{falpha}) and (\ref{limK}).
We have the conversion $\hat\rho = ||\varphi||_{L^1}^2\rho$
and hence $\rho_c^c= \sup_{z\in I_1}h^{(1)}(z)/||\varphi||_{L^1}^2$.
Hence the proof is completed by Theorem \ref{gthm}.
\hfill $\Box$
\bigskip
\bigskip
\noindent{\bf\Large Acknowledgements}
\bigskip
\noindent We would like to thank Professor Y. Takahashi and Professor T. Shirai
for many useful discussions.
K. R. I. thanks Grant-in-Aid for Science Research (C)15540222 from JSPS.
\bigskip
\section{introduction}
The distribution of zeros of the Riemann zeta-function $\zeta(s)$ is
closely connected to that of zeros of $\zeta'(s)$. As just one
illustration we cite A. Speiser's~\cite{Spe} theorem that the
Riemann Hypothesis (RH) is equivalent to the nonexistence of
non-real zeros of $\zeta'(s)$ in the half-plane $\Re s<1/2$.
Let $\rho'=\beta'+i\g'$ be a zero of $\zeta'(s)$, and let
$\rho_c=\rho_c(\rho')=\beta_c+i\g_c$ be a zero of $\zeta(s)$ with
smallest $|\g'-\g_c|$ (if there is more than one such zero, take any
of them). M. Z. Garaev and C. Y. Y{\i}ld{\i}r{\i}m ~\cite{GYi}
showed that
$$
\g'-\g_c \ll \sqrt{|\beta'-1/2|}.
$$
Their result is unconditional. Our purpose here is to obtain a
conditional improvement.
\begin{thm}\label{thm}
Assume RH. We have
\begin{align}\label{eq thm}
\g'-\g_c \ll \sqrt{\frac{\beta'-1/2}{\log\log\g'}}
\end{align}
for $\beta'-1/2\le1/\log\log\g'$. Here the implied constant is absolute,
and for $\g'$ sufficiently large we may take the implied constant to be 2.16.
\end{thm}
\emph{Remark 1.} Note that on RH we trivially have
\begin{align}
\label{eq trivial bound}
\g' - \g_c \ll \frac{1}{\log\log \g'}\ .
\end{align}
Combining this with our Theorem \ref{thm}, we see that on RH
\begin{align*}
\g'-\g_c \ll \min \bigg\{\sqrt{\frac{\beta'-1/2}{\log\log\g'}},\
\frac{1}{\log\log \g'}\bigg\}.
\end{align*}
The inequality \eqref{eq trivial bound} follows
from the well-known fact that on RH, the largest gap between
consecutive zeros of $\zeta(s)$ up to height $T$ is $\ll 1/\log\log
T$ (see \cite{Tit}, for example).
\begin{comment}
\emph{Remark 2.} N. Levinson and H. L. Montgomery \cite {LMo} proved
that
$$
2\pi \sum_{T\le\g'\le 2T} (\beta'-1/2)=T\log\log T +O(T).
$$
This tells us that (on RH) almost all zeros of $\zeta'(s)$ are in
$$
\Re s < 1/2 + \lambda(t)\frac{\log\log t}{\log t},
$$
where $\lambda$ is any function going to infinity with $t$. Thus,
Theorem \ref{thm} is nontrivial for almost all zeros of $\zeta'(s)$.
\end{comment}
\emph{Remark 2.} In \cite{FGH} D. W. Farmer, S. M. Gonek and C. P.
Hughes conjectured that
$$
\limsup_{t\rightarrow \infty} \frac{S(t)}{\sqrt{\log t \log\log t}} = \frac{1}{\pi \sqrt{2}}.
$$
Assuming this as well as RH, one can show (by the same proof as that
of Theorem \ref{thm}) that
$$ \g'
- \g_c \ll \sqrt{\beta'-1/2}\ \bigg(\frac{\log\log
\g'}{\log \g'}\bigg)^{1/4}
$$
for $\beta'-1/2\ll \sqrt{\log\log \g'/\log \g'}$.
\emph{Remark 3.} There are multiple ways to prove results like
Theorem 1. For example, one can start with Lemma~\ref{lem sum of h}
below, split the sum into three parts (according to
$|\g-\g'|\le1/\log\log \g'$, $1/\log\log \g'<|\g-\g'|\le1$ or
$|\g-\g'|\ge1$), and estimate each part separately. This will give a
slightly weaker result than Theorem 1. The proof we present in this
paper follows another clue, which we think is more inspiring and
more likely to be modified. For example, with a little more care it
is possible to show that (on RH) for $\beta'-1/2\le1/\log\log\g'$
there are $ \gg (\b)\log\g' $ zero(s) of $\zeta(s)$ lie in $
\Big[\,\g'-C\sqrt{\frac{\beta'-1/2}{\log\log\g'}},\
\g'+C\sqrt{\frac{\beta'-1/2}{\log\log\g'}}\ \Big] $ for some
constant $C$.
\section{lemmas}
\begin{lem}\label{lem sum of h}
Assume RH. If $\beta'>1/2$, then we have
$$
\frac{ \log\g'}{2} = \sum_{\g} \frac{\b}{(\b)^2+(\g'-\g)^2}+O(1).
$$
\end{lem}
See equation (4) in \cite{Sou}.
Let $ N(T)=\sum_{0<\g\leq T} 1 $ be the zero counting function of
$\zeta(s)$. It is well-known (see \cite{Tit}) that
$$N(T)=L(T)+S(T)+E(T),$$ where \begin{align*}L(T)=\frac{1}{2\pi}T\log T - \frac{1+\log 2\pi}{2\pi}T
+\frac{7}{8}, \ \ \ \ \ \
S(T)=\pi^{-1}\arg\zeta(1/2+iT),\end{align*} and $E(T)$ is an error
term. We require the following result.
\begin{lem}\label{lem L+E}
We have $$d(L(u)+E(u))=\bigg( \frac{1}{2\pi}\log u +O(1) \bigg)du. $$
\end{lem}
\proof By the proof of Theorem 9.3 in \cite{Tit} we know that
$$
L(T)+E(T)=1-\frac{T\log\pi}{2\pi} + \frac{1}{\pi}\Im\log\Gamma(1/4+iT/2).
$$
Therefore, we have $$d(L(u)+E(u))=\bigg( \frac{1}{\pi}\frac{d\
\Im\log\Gamma(1/4+iu/2)}{du} +O(1) \bigg)du.$$ It is
straightforward to compute that
$$ \frac{d\ \Im\log\Gamma(1/4+iu/2)}{du} =
\frac{1}{2}\ \Re\frac{\Gamma'}{\Gamma}(1/4+iu/2).$$ By Stirling's
formula, this is $(\log u) /2 +O(1)$. Hence the result. \qed
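\medskip
\noindent {\sl Numerical check : } The Stirling estimate
$\frac{1}{2}\Re\frac{\Gamma'}{\Gamma}(1/4+iu/2)=\frac{1}{2}\log u+O(1)$ used
above is easy to test. The sketch is ours and not part of the proof; Python
with {\tt mpmath} is assumed.
\begin{verbatim}
import mpmath as mp

for u in (10, 100, 1000, 10000):
    val = 0.5 * mp.re(mp.digamma(0.25 + 0.5j * u))
    print(u, val - 0.5 * mp.log(u))
# the difference tends to -(log 2)/2 = -0.3466..., i.e. the O(1) term
\end{verbatim}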
\begin{lem}\label{lem E difference}
Let $T>2$ and $T<t_1<t_2<2T$. Then we have
$$E(t_2)-E(t_1)\ll t_2-t_1.$$
\end{lem}
\proof Write $s=1/2+it$. By the proof of Theorem 9.3 in \cite{Tit} we know that
\begin{align*}
\pi E(t)& = \pi (N(t)-S(t)-L(t))\\
& = \Delta\arg s(s-1) + \Delta\arg \pi^{-s/2} +
\Delta\arg\Gamma(s/2) +\Delta\arg \zeta(s) \\ & \qquad - \arg\zeta(s) -
\frac{t}{2}\log t +\frac{1+\log 2\pi}{2}t - \frac{7}{8}\pi\\
& = \Delta\arg\Gamma(s/2) -\frac{t}{2}\log t
+\frac{1+\log 2}{2}t + \frac{\pi}{8}.
\end{align*}
It follows that
$$
\pi (E(t_2)-E(t_1)) = \Delta\arg\Gamma(1/4+it_2/2)-\Delta\arg\Gamma(1/4+it_1/2) - \frac{1}{2}(t_2-t_1)\log T +O(t_2-t_1).
$$
By the mean value theorem,
$$\Delta\arg\Gamma(1/4+it_2/2)-\Delta\arg\Gamma(1/4+it_1/2)=(t_2-t_1)\cdot \frac{1}{2}\ \Re\frac{\Gamma'}{\Gamma}(1/4+it_3/2)
$$
for some $t_3\in [t_1, t_2]$. But this is
$$
\frac{1}{2}(t_2-t_1)\log T
+O(t_2-t_1)
$$ by Stirling's formula, since $\log(t_3/2)=\log T+O(1)$ for $t_3\in[T, 2T]$.
Hence the result. \qed
\section{proof of the theorem}
It is well-known that $$\zeta'(1/2+i\g')=0 \Longrightarrow
\zeta(1/2+i\g')=0.$$ Therefore, $\beta'=1/2$ implies that
$\g_c=\g'$, in which case \eqref{eq thm} is trivially true. Below we
assume that $1/2<\beta'\le 1/2+1/\log\log \g'$. We may also assume
$\g'>2015$ for convenience.
Define
$$
h(t)= h_{\rho'}(t) = \frac{\b}{(\b)^2+(t-\g')^2}\ .
$$
By Lemma \ref{lem sum of h} we have
$$
\sum_{\gamma} h(\gamma)= \frac{1}{2}\log\g'+O(1).
$$
It is well-known that $\zeta(s)$ has no zero in the region
$$
\sigma>0,\ \ \ \ -14\le t\le 14.
$$
For $t\le-14$, there are $\ll \log |t|$ zeros $1/2+i\g$ of $\zeta(s)$ for which $t-1\le \g \le t$. Thus, it is easy to see that
$$
\sum_{-\infty<\g\le 14} h(\g) = \sum_{n=14}^{\infty} \ \ \sum_{-n-1<\g\le -n} h(\g) \ll (\b)\cdot \sum_{n=1}^{\infty}\frac{\log
n}{n^2}\ll 1.$$
It follows that
\begin{align} \label{eq whole integral}
\frac{1}{2}\log\g'+O(1) = \sum_{\gamma>14} h(\gamma) =
\int_{14}^{\infty}h(u)d(N(u)) =
\int_{14}^{\infty}h(u)d(L(u)+E(u)+S(u)).
\end{align}
Next we show that
$$\int_{14}^{\infty}h(u)d(L(u)+E(u)) = \frac{\log\g'}{2}+O(1).
$$
By Lemma \ref{lem L+E} we have
$$
\int_{14}^{\infty}h(u)d(L(u)+E(u)) = \int_{14}^{\infty}h(u)\bigg(\frac{\log u}{2\pi}+O(1)\bigg)du.
$$
It is clear that $$\bigg(\int_{14}^{\g'/2} + \int_{3\g'/2}^{\infty}\bigg)\bigg( h(u)\Big(\frac{\log u}{2\pi}+O(1)\Big)\bigg)du \ll 1,$$
and that $$\int_{\g'/2}^{3\g'/2}h(u)\bigg(\frac{\log u}{2\pi}+O(1)\bigg)du = \frac{\log\g'}{2\pi}\int_{\g'/2}^{3\g'/2}h(u)du+O(1).$$
Hence, we see that
$$
\int_{14}^{\infty}h(u)d(L(u)+E(u)) =\frac{\log\g'}{2\pi}\int_{\g'/2}^{3\g'/2}h(u)du+O(1).
$$
Now we plainly have
\begin{align*}
\int_{\g'/2}^{3\g'/2}h(u)du = 2\arctan\bigg(\frac{\g'}{2(\b)}\bigg)
= \pi + O\bigg(\frac{\b}{\g'}\bigg).
\end{align*}
Therefore, we obtain
$$\int_{14}^{\infty}h(u)d(L(u)+E(u)) = \frac{\log\g'}{2}+O(1).
$$
This together with (\ref{eq whole integral}) gives us
$$
\int_{14}^{\infty}h(u)dS(u)=O(1).
$$
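\medskip
\noindent {\sl Numerical check : } The arctangent evaluation of
$\int_{\g'/2}^{3\g'/2}h(u)\,du$ above can be confirmed numerically. The sketch
is ours; {\tt scipy} is assumed, and the values of $\g'$ and $\beta'-1/2$ are
toy choices.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

gp, b = 1.0e4, 1.0e-2            # gamma' and beta' - 1/2 (toy values)
h = lambda u: b / (b**2 + (u - gp)**2)
val = quad(h, gp / 2, 3 * gp / 2, points=[gp], limit=200)[0]
print(val, 2 * np.arctan(gp / (2 * b)), np.pi)   # all ~ 3.14159
\end{verbatim}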
By integration by parts, we see that
$$
\int_{14}^{\infty}h(u)dS(u) = -\int_{14}^{\infty}h'(u)S(u)du +O(1).
$$
It follows that
\begin{align}\label{eq whole h'S}
\int_{14}^{\infty}h'(u)S(u)du=O(1).
\end{align}
Let $p=p(\g')$ be a parameter to be determined later. Split the above integral into three parts:
\begin{align*}
\int_{14}^{\infty}h'(u)S(u)du = \bigg[\Big( \int_{14}^{\g'-\frac{\sqrt{\b}}{p}} + \int_{\g'+\frac{\sqrt{\b}}{p}}^{2\g'}\Big)+
\int_{\g'-\frac{\sqrt{\b}}{p}}^{\g'+\frac{\sqrt{\b}}{p}} + \int_{2\g'}^\infty\bigg]h'(u)S(u)du\ .
\end{align*}
We estimate them separately.
First, since
$$
h'(u)=-\frac{2(u-\g')(\b)}{((\b)^2+(u-\g')^2)^2}\ ,
$$
we trivially have
\begin{align}
\label{eq int fourth part}
\int_{2\g'}^\infty h'(u)S(u)du \ll 1/\g'.
\end{align}
Next we consider
$$
\Big( \int_{14}^{\g'-\frac{\sqrt{\b}}{p}} + \int_{\g'+\frac{\sqrt{\b}}{p}}^{2\g'}\Big)h'(u)S(u)du
\ .
$$
It is straightforward to compute that
$$
\int_{-\infty}^\infty |h'(u)|du = 2\int_{-\infty}^{\g'} h'(u) du =
2 h(u)\Big|_{-\infty}^{\g'}= \frac{2}{\b}\ ,
$$
and that
$$
\int_{\g'-\frac{\sqrt{\b}}{p}}^{\g'+\frac{\sqrt{\b}}{p}}|h'(u)|du = \frac{2}{\b}\cdot\frac{1}{1+(\b)p^2}\ .
$$
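(Both identities are easy to confirm numerically; the following \texttt{Python} sketch, assuming \texttt{mpmath} and illustrative stand-in values for $\b$, $\g'$ and $p$, checks the second one.)
\begin{verbatim}
# numerical check of the truncated integral of |h'|; illustrative values only
from mpmath import mp, quad, sqrt

mp.dps = 15
b, g, p = 0.05, 1000.0, 3.0          # stand-ins for the quantities above
hp = lambda u: -2 * (u - g) * b / (b**2 + (u - g)**2)**2
a = sqrt(b) / p
# split the integration at g, where |h'| has its kink
print(quad(lambda u: abs(hp(u)), [g - a, g, g + a]))
print((2 / b) / (1 + b * p**2))      # the closed form; the two agree
\end{verbatim}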
Hence, using the bound (see \cite{Tit})
\begin{align*}
|S(T)|\le \frac{ A\log T}{\log\log T}
\end{align*}
for some absolute positive constant $A$,
we see that
\begin{align} \label{eq int two parts}
\bigg(\int_{14}^{\g'-\frac{\sqrt{\b}}{p}} + &
\int_{\g'+\frac{\sqrt{\b}}{p}}^{2\g'}\bigg) \big|h'(u)S(u)\big|du \nonumber\\ & \le
\frac{ 2A \log\g'}{\log\log\g'}\bigg(\int_{14}^{\g'-\frac{\sqrt{\b}}{p}} +
\int_{\g'+\frac{\sqrt{\b}}{p}}^{2\g'}\bigg) \big|h'(u)\big|du \nonumber\\ &
\le \frac{2A\log\g'}{\log\log\g'}\bigg(\int_{-\infty}^\infty -
\int_{\g'-\frac{\sqrt{\b}}{p}}^{\g'+\frac{\sqrt{\b}}{p}}\bigg)\big|h'(u)\big|du \nonumber
\\ &
= \frac{2A\log\g'}{\log\log\g'} \bigg( \frac{2}{\b} - \frac{2}{\b}\cdot\frac{1}{1+(\b)p^2} \nonumber
\bigg)\\
& = \frac{2A\log\g'}{\log\log\g'}\cdot \frac{2p^2}{1+(\b)p^2}\ .
\end{align}
Now we turn to
$$
\int_{\g'-\frac{\sqrt{\b}}{p}}^{\g'+\frac{\sqrt{\b}}{p}} h'(u)S(u)du\ .
$$
Suppose that there is no zero of $\zeta(s)$ on the vertical segment
\begin{align}\label{assump}
\bigg[1/2+i\Big(\g'-\frac{\sqrt{\b}}{p}\Big),\
1/2+i\Big(\g'+\frac{\sqrt{\b}}{p}\Big)\bigg]\ .
\end{align}
Then we
have $ N(t_2)-N(t_1)=0 $ for $ t_1, t_2\in
\Big[\,\g'-\frac{\sqrt{\b}}{p},\g'+\frac{\sqrt{\b}}{p}\ \Big] $. It
follows that
$$
S(t_1)-S(t_2)=L(t_2)-L(t_1)+E(t_2)-E(t_1)=\frac{t_2-t_1}{2\pi}\log\g'+O(t_2-t_1)+E(t_2)-E(t_1).
$$
By Lemma \ref{lem E difference}, this is
\begin{align}\label{eq dif s}
\frac{t_2-t_1}{2\pi}\log\g'+O(t_2-t_1).
\end{align}
Therefore, since
$$
h'(u)=-\frac{2(u-\g')(\b)}{((\b)^2+(u-\g')^2)^2}\ ,
$$
by changing variables we see that
$$
\int_{\g'-\frac{\sqrt{\b}}{p}}^{\g'+\frac{\sqrt{\b}}{p}}
h'(u)S(u)du = \int_0^{\frac{\sqrt{\b}}{p}}
\frac{2(\b)v}{((\b)^2+v^2)^2}\bigg(S(\g'-v)-S(\g'+v)\bigg)dv\ .
$$
By \eqref{eq dif s}, this is
$$
\int_{\g'-\frac{\sqrt{\b}}{p}}^{\g'+\frac{\sqrt{\b}}{p}}
h'(u)S(u)du =
\int_0^{\frac{\sqrt{\b}}{p}}
\frac{4(\b)v^2}{((\b)^2+v^2)^2}\bigg(\frac{\log
\g'}{2\pi}+O(1)\bigg)dv\ ,
$$
and a straightforward computation turns it into
$$
\int_{\g'-\frac{\sqrt{\b}}{p}}^{\g'+\frac{\sqrt{\b}}{p}}
h'(u)S(u)du =
\bigg(\frac{\log
\g'}{\pi}+O(1)\bigg)\cdot
\bigg(\frac{-p\sqrt{\b}}{1+(\b)p^2}+\arctan\Big(\frac{1}{p\sqrt{\b}}
\Big)\bigg).
$$
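For the reader's convenience, we record the elementary antiderivative behind this step (a routine verification, not part of the original argument):
$$
\int \frac{v^2\,dv}{\big((\b)^2+v^2\big)^2}
= -\frac{v}{2\big((\b)^2+v^2\big)} + \frac{1}{2(\b)}\arctan\Big(\frac{v}{\b}\Big) + C;
$$
evaluating at $v=\sqrt{\b}/p$ and $v=0$ and multiplying by $4(\b)$ gives
$$
\int_0^{\frac{\sqrt{\b}}{p}} \frac{4(\b)v^2}{\big((\b)^2+v^2\big)^2}\,dv
= \frac{-2p\sqrt{\b}}{1+(\b)p^2} + 2\arctan\Big(\frac{1}{p\sqrt{\b}}\Big).
$$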
Combining this with (\ref{eq whole h'S}), (\ref{eq int fourth part}) and (\ref{eq int two parts}) we obtain
\begin{align}
\label{eq contradiction}
\bigg(\frac{\log
\g'}{\pi}+O(1)\bigg)\cdot&
\bigg(\frac{-p\sqrt{\b}}{1+(\b)p^2}+\arctan\Big(\frac{1}{p\sqrt{\b}}\nonumber
\Big)\bigg)\\ \nonumber & = \int_{\g'-\frac{\sqrt{\b}}{p}}^{\g'+\frac{\sqrt{\b}}{p}}
h'(u)S(u)du\\ & = \bigg[ \int_{14}^{\infty} - \Big( \int_{14}^{\g'-\frac{\sqrt{\b}}{p}} +\nonumber
\int_{\g'+\frac{\sqrt{\b}}{p}}^{2\g'}\Big)- \int_{2\g'}^\infty\bigg]h'(u)S(u)du\\ & \le
O(1) +\ \frac{2A\log\g'}{\log\log\g'}\cdot
\frac{2p^2}{1+(\b)p^2} +\ O(1/\g')\ .
\end{align}
We wish to choose $p=c\sqrt{\log\log\g'}$ for some sufficiently
small positive constant $c$ such that
\begin{align}\label{eq contrdct 1}
\frac{-p\sqrt{\b}}{1+(\b)p^2}+\arctan\Big(\frac{1}{p\sqrt{\b}}
\Big) \ge \frac{\pi}{3}\ ,
\end{align}
and that
\begin{align}\label{eq contrdct 2}
\frac{2A}{\log\log\g'}\cdot
\frac{2p^2}{1+(\b)p^2}\le \frac{1}{6}.
\end{align}
We show that such a $c$ exists. In fact, we clearly have
$$
\frac{2A}{\log\log\g'}\cdot \frac{2p^2}{1+(\b)p^2} = \frac{4Ac^2}{1+c^2(\b)\log\log\g'}\le 4Ac^2.
$$
Next, since $\b\le 1/\log\log\g'$ we have $0<p\sqrt{\b}\le c$. It follows that
$$
\frac{-p\sqrt{\b}}{1+(\b)p^2}+\arctan\Big(\frac{1}{p\sqrt{\b}}
\Big)\ge -c + \arctan(c^{-1}).
$$
Thus, there does exist a small constant $c>0$ such that both (\ref{eq contrdct 1}) and
(\ref{eq contrdct 2}) hold.
Now combining (\ref{eq contradiction}) with (\ref{eq contrdct 1}) and
(\ref{eq contrdct 2}), we obtain
$$
\bigg(\frac{\log
\g'}{\pi}+O(1)\bigg)\cdot \frac{\pi}{3} \le \frac{\log \g'}{6} + O(1),
$$
which is clearly a contradiction for large $\g'$.
Hence, the assumption \eqref{assump} must be false. This means there
exists a zero of $\zeta(s)$ on the vertical segment
$$
\bigg[1/2+i\Big(\g'-\frac{\sqrt{\b}}{p}\Big),\
1/2+i\Big(\g'+\frac{\sqrt{\b}}{p}\Big)\bigg]\ . $$ This ends our
proof. \qed
\emph{Note added in proof}:
From the above discussion we see that for any $\epsilon>0$ and $\g'$ sufficiently large (depending on $\epsilon$),
it suffices to choose $c$ such that
\begin{align*}
-c + \arctan(c^{-1}) \ge 4\pi Ac^2 + \epsilon.
\end{align*}
By the work of E. Carneiro, V. Chandee and M. B. Milinovich \cite{CCM},
we can take $A=\frac{1}{4}+o(1)$. Therefore, we may choose any positive $c<c_0$ where $c_0$ is the positive root of $\arctan(x^{-1})-x = \pi x^2$,
whose numerical value is $c_0=0.463...$. Thus, for $\g'$ sufficiently large, we may then take the implied constant in \eqref{eq thm} to be $1/0.463\approx 2.16$.
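(The constant $c_0$ is easily reproduced numerically; a small \texttt{Python} sketch, assuming \texttt{mpmath}:)
\begin{verbatim}
# c_0 is the positive root of arctan(1/x) - x = pi x^2
from mpmath import mp, atan, pi, findroot

mp.dps = 20
c0 = findroot(lambda x: atan(1 / x) - x - pi * x**2, 0.4)
print(c0, 1 / c0)    # c0 = 0.4633..., implied constant about 2.16
\end{verbatim}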
\section*{Acknowledgement}
The author is indebted to Professor Steve Gonek for very helpful
conversations. He also thanks the referee for valuable suggestions.
\section{Introduction}
Molecular phylogenetics is primarily concerned with the
reconstruc\-tion of evolutionary relationships between species based on
sequence information. To this end, alignments of protein or DNA sequences
are employed, whose evolutionary history is believed to be congruent to that
of the respective species. This property can be ensured most easily in the
absence of gene duplications and horizontal gene transfer.
Phylogenetic studies judiciously select
families of genes that rarely exhibit duplications (such as rRNAs, most
ribosomal proteins, and many of the housekeeping enzymes). In
phylogenomics, elaborate automatic pipelines such as \texttt{HaMStR}
\cite{Ebersberger:09}, are used to filter genome-wide data sets to at least
deplete sequences with detectable paralogs (homologs in the same species).
\begin{figure}[t]
\includegraphics[bb= 33 628 523 776, scale=1]{./fig_workflow.ps}
\caption{Outline of the computational framework.
Starting from an estimated
orthology relation $\Theta$, its graph representation $G_{\Theta}$ is
edited to obtain the closest cograph $G_{\Theta^*}$, which in turn is
equivalent to a (not necessarily fully resolved) gene tree $T$ and an
event labeling $t$. From $(T,t)$ we extract the set $\mathbb{S}$ of
all relevant species triples. As the triple set $\mathbb{S}$ need not
to be consistent, we compute the maximal consistent subset
$\mathbb{S^*}$ of $\mathbb{S}$. Finally, we construct a least resolved
species tree from $\mathbb{S^*}$.}
\label{fig:workflow}
\end{figure}
In the presence of gene duplications, however, it becomes necessary to
distinguish between the evolutionary history of genes (\emph{gene trees})
and the evolutionary history of the species (\emph{species trees}) in which
these genes reside. Leaves of a gene tree represent genes. Their inner
nodes represent two kinds of evolutionary events, namely the duplication of
genes within a genome -- giving rise to paralogs -- and speciations, in
which the ancestral gene complement is transmitted to two daughter
lineages. Two genes are (co-)orthologous if their last common ancestor in
the gene tree represents a speciation event, while they are paralogous if
their last common ancestor is a duplication event, see \cite{Fitch2000}
and \cite{GK13} for a more recent discussion on orthology and paralogy
relationships. Speciation events, in turn, define the inner vertices of a
species tree. However, they depend on both the gene and the species
phylogeny, as well as the reconciliation between the two. The latter
identifies speciation vertices in the gene tree with a particular
speciation event in the species tree and places the gene duplication events
on the edges of the species tree. Intriguingly, it is nevertheless
possible in practice to distinguish orthologs with acceptable
accuracy without constructing either gene or species trees
\cite{Altenhoff:09}. Many tools of this type have become available over the
last decade, see \cite{KWMK:11, DAAGD2013} for a recent review. The output
of such methods is an estimate $\Theta$ of the true orthology relation
$\Theta^*$, which can be interpreted as a graph $G_\Theta$ whose vertices
are genes and whose edges connect estimated (co-)orthologs.
Recent advances in mathematical phylogenetics suggest that the
estimated orthology relation $\Theta$ contains information on the structure
of the species tree. To make this connection, we combine here three
abstract mathematical results that are made precise in
\emph{Materials and Methods} below.
(1) Building upon the theory of symbolic ultrametrics \cite{Boeckner:98} we
showed that \emph{in the absence of horizontal gene transfer, the orthology
relation of each gene family is a cograph} \cite{Hellmuth:13d}. Cographs
can be generated from the single-vertex graph $K_1$ by complementation and
disjoint union \cite{Corneil:81}. This special structure of cographs
imposes very strong constraints that can be used to reduce the noise and
inaccuracies of empirical estimates of orthology from pairwise sequence
comparison. To this end, the initial estimate of $G_{\Theta}$ is modified
to the closest correct orthology relation $G_{\Theta^*}$ in such a way that
a minimal number of edges (i.e., orthology assignments) are introduced or
removed. This amounts to solving the cograph-editing problem
\cite{Liu:11,Liu:12}.
(2) It is well known that \emph{each cograph is equivalently represented by
its cotree} \cite{Corneil:81}. The cotree is easily computed for a given
cograph. In our context, the cotree of $G_{\Theta^*}$ is an incompletely
resolved event-labeled gene-tree. That is, in addition to the tree
topology, we know for each internal branch point whether it corresponds to
a speciation or a duplication event. Even though adjacent speciations or
adjacent duplications cannot be resolved, the tree faithfully encodes the
relative order of any pair of duplication and speciation
\cite{Hellmuth:13d}. In the presence of horizontal gene transfer $G_\Theta$
may deviate from the structural requirements of a cograph. Still, the
situation can be described in terms of edge-colored graphs whose subgraphs
are cographs \cite{Boeckner:98,Hellmuth:13d}, so that the cograph structure
remains an acceptable approximation.
(3) \emph{Every triple (rooted binary tree on three leaves) in the cotree
that has leaves from three species and is rooted in a speciation event
also appears in the underlying species tree}
\cite{hernandez2012event}. Thus, the estimated orthology relation, after
editing to a cograph and conversion to the equivalent event-labeled gene
tree, provides substantial information on the species tree. This result allows us
to collect from the cotrees for each gene family partial information on the
underlying species tree. Interestingly, only gene families that harbor
duplications, and thus have a non-trivial cotree, are informative. If no
paralogs exist, then the orthology relation $G_\Theta$ is a clique (i.e.,
every family member is orthologous to every other family member) and the
corresponding cotree is completely unresolved, and hence contains no
triple. On the other hand, full resolution of the species tree is
guaranteed if at least one duplication event between any two adjacent
speciations is observable. The achievable resolution therefore depends on
the frequency of gene duplications and the number of gene families.
Despite the variance reduction due to cograph editing, noise in the data,
as well as the occasional introduction of contradictory triples as a
consequence of horizontal gene transfer is unavoidable. The species triples
collected from the individual gene families thus will not always be
congruent. A conceptually elegant way to deal with such potentially
conflicting information is provided by the theory of supertrees in the form
of the largest set of consistent triples \cite{Jansson:05,GM-13}. The data
will not always contain a sufficient set of duplication events to achieve
full resolution. To this end we consider trees with the property that
the contraction of any edge leads to the loss of an input triple. There
may be exponentially many alternative trees of this type. They can be
listed efficiently using Semple's algorithms \cite{sem:03}. To reduce the
solution space further we search for a \emph{least resolved tree} in the
sense of \cite{Jansson:12}, i.e., a tree that has the minimum number of inner
vertices. It constitutes one of the best estimates of the phylogeny
without pretending a higher resolution than actually supported by the
data. In the Supplemental Material we discuss alternative choices.
The mathematical reasoning summarized above, outlined in \emph{Materials
and Methods}, and presented in full detail in the Supplemental Material,
directly translates into a computational workflow, Fig.\ 1.
It entails three NP-hard combinatorial optimization problems: cograph
editing \cite{Liu:12}, maximal consistent triple set \cite{Bryant97,
Wu2004, Jansson2001} and least resolved supertree \cite{Jansson:12}. We
show here that they are nevertheless tractable in practice by formulating
them as Integer Linear Programs (ILP) that can be solved for both
artificial benchmark data sets and real-life data sets, comprising
genome-scale protein sets for dozens of species, even in the presence of
horizontal gene transfer.
\section{Preliminaries}
Here, we summarize the definitions and notations required to outline the
mathematical framework, presented in Section \emph{Theory} and \emph{ILP
Formulation}.
\textit{Phylogenetic Trees:} We consider a set $\ensuremath{\mathfrak{G}}$ of at least three genes
from a non-empty set $\ensuremath{\mathfrak{S}}$ of species. We denote genes by lowercase Roman
and species by lowercase Greek letters. We assume that for each gene its
species of origin is known. This is encoded by the surjective map
$\sigma:\ensuremath{\mathfrak{G}}\to \ensuremath{\mathfrak{S}}$ with $a \mapsto \sigma(a)$. A \textit{phylogenetic
tree (on $L$)} is a rooted tree $T=(V,E)$ with leaf set $L\subseteq V$
such that no inner vertex $v\in V^0:= V\setminus L$ has outdegree one and
whose root $\rho_T\in V$ has indegree zero. A phylogenetic tree $T$ is
called \emph{binary} if each inner vertex has outdegree two. A
phylogenetic tree on $\ensuremath{\mathfrak{G}}$, resp., on $\ensuremath{\mathfrak{S}}$, is called \emph{gene tree},
resp., \emph{species tree}. A vertex $y$ is an \emph{ancestor} of $x\in
V$, in symbols $x\prec_T y$, if $y\ne x$ and $y$ lies on the unique path connecting
$x$ with $\rho_T$. The \emph{most recent common ancestor} $\ensuremath{\operatorname{lca}}_T(L')$ of
a subset $L'\subseteq L$ is the unique vertex in $T$ that is the least
upper bound of $L'$ under the partial order $\preceq_T$. We write $L(v):=\{
y\in L| y\preceq_T v\}$ for the set of leaves in the subtree $T(v)$ of $T$
rooted in $v$. Thus, $L(\rho_T)=L$ and $T(\rho_T)=T$.
\smallskip
\textit{Rooted Triples:}
Rooted triples \cite{Dress:book}, i.e., rooted binary trees on three leaves,
are a key concept in the theory of
supertrees \cite{sem-ste-03a,Bininda:book}. A rooted triple
$r={\rt{(xy|z)}}$ with leaf set $L_r=\{x,y,z\}$ is \emph{displayed} by a
phylogenetic tree $T$ on $L$ if (i) $L_r\subseteq L$ and (ii) the path from
$x$ to $y$ does not intersect the path from $z$ to the root $\rho_T$. Thus
$\ensuremath{\operatorname{lca}}_T(x,y)\prec_T \ensuremath{\operatorname{lca}}_T(x,y,z)$. A set $R$ of triples is
\emph{(strictly) dense} on a given leaf set $L$ if for each set of three
distinct leaves there is (exactly) one triple $r\in R$. We denote by
$\mathfrak{R}(T)$ the set of all triples that are displayed by the
phylogenetic tree $T$. A set $R$ of triples is \emph{consistent} if there
is a phylogenetic tree $T$ on $L_R:=\cup_{r\in R} L_r$ such that
$R\subseteq\mathfrak{R}(T)$, i.e., $T$ displays (all triples of) $R$.
If no such tree exists, $R$ is said to be \emph{inconsistent}.\\
Given a triple set $R$, the
polynomial-time algorithm \texttt{BUILD} \cite{Aho:81} either constructs a
phylogenetic tree $T$ displaying $R$ or recognizes that $R$ is
inconsistent. The problem of finding a phylogenetic tree with the smallest
possible number of vertices that is consistent with every rooted triple in
$R$, i.e., a \emph{least resolved} tree, is an NP-hard problem
\cite{Jansson:12}. If $R$ is inconsistent, the problem of determining a
maximum consistent subset of an inconsistent set of triples is
NP-hard and also APX-hard, see \cite{Byrka:10a,vanIersel:09}. Polynomial-time
approximation algorithms for this problem and further theoretical results
are reviewed by \cite{Byrka:10}.
\textit{Triple Closure Operations and Inference Rules:}
If $R$ is consistent it is often possible to infer additional consistent
triples. Denote by $\langle R \rangle$ the set of all phylogenetic trees on
$L_R$ that display $R$. The \emph{closure} of a consistent set of triples
$R$ is $\displaystyle \ensuremath{\operatorname{cl}}(R) = \cap_{T\in \langle R \rangle}
\mathfrak{R}(T)$, see \cite{BS:95,GSS:07,Bryant97,huber2005recovering,BBDS:00}.
We say $R$ is \emph{closed} if $R=\ensuremath{\operatorname{cl}}(R)$ and write $R\vdash \rt{(xy|z)}$ iff
$\rt{(xy|z)}\in \ensuremath{\operatorname{cl}}(R)$. The closure of a given consistent set $R$ can be
computed in $O(|R|^5)$ time \cite{BS:95}.
Extending earlier work of Dekker
\cite{Dekker86}, Bryant and Steel \cite{BS:95} derived conditions under
which $R\vdash \rt{(xy|z)} \implies R'\vdash \rt{(xy|z)}$ for some
$R'\subseteq R$. Of particular importance are the following so-called
\emph{2-order} inference rules:\\[0.1cm]
\hspace*{1cm}
$ \{\rt{(ab|c)}, \rt{(ad|c)}\}\vdash \rt{(bd|c)}$\hfill(i)\\
\hspace*{1cm}
$\{\rt{(ab|c)}, \rt{(ad|b)}\}\vdash \rt{(bd|c)},\rt{(ad|c)}$\hfill(ii)\\
\hspace*{1cm} $ \{\rt{(ab|c)}, \rt{(cd|b)}\}\vdash
\rt{(ab|d)},\rt{(cd|a)}$.
\hfill(iii)\\[0.1cm]
Inference rules based on pairs of triples $r_1, r_2 \in R$ can imply new
triples only if $|L_{r_1}\cap L_{r_2}| = 2$. Hence, in a strictly dense
triple set only the three rules above may lead to new triples.
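The following \texttt{Python} sketch (hypothetical helper code, not part of any published implementation) applies exactly these three rules to a triple set until a fixed point is reached; a triple $\rt{(xy|z)}$ is encoded as \texttt{(frozenset((x, y)), z)}.
\begin{verbatim}
def rt(x, y, z):
    return (frozenset((x, y)), z)      # the rooted triple (xy|z)

def two_order_closure(R):
    # fixed point of the 2-order rules (i)-(iii); for a strictly dense
    # set, only these rules can produce new triples
    R = set(R)
    while True:
        new = set()
        for (P1, c1) in R:
            for (P2, c2) in R:
                if c1 == c2 and len(P1 & P2) == 1:                   # (i)
                    (a,) = P1 & P2
                    (b,), (d,) = P1 - {a}, P2 - {a}
                    new.add(rt(b, d, c1))
                if c2 in P1 and c1 not in P2 and len(P1 & P2) == 1:  # (ii)
                    (a,) = P1 & P2
                    (d,) = P2 - {a}
                    new.add(rt(c2, d, c1))
                    new.add(rt(a, d, c1))
                if c1 in P2 and c2 in P1 and not (P1 & P2):          # (iii)
                    (d,) = P2 - {c1}
                    (a,) = P1 - {c2}
                    new.add(rt(*P1, d))
                    new.add(rt(*P2, a))
        if new <= R:
            return R
        R |= new
\end{verbatim}
For instance, \texttt{two\_order\_closure(\{rt('a','b','c'), rt('a','d','c')\})} additionally contains \texttt{rt('b','d','c')}, as dictated by rule (i).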
\textit{Cograph:}
Cographs have a simple characterization as $P_4$-free
graphs, that is, graphs in which no four vertices induce a path on four vertices, although there
are a number of equivalent characterizations, see \cite{Brandstaedt:99}.
Cographs can be recognized in linear time \cite{Corneil:85,habib2005simple}.
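A brute-force version of this characterization is easy to state in code; the following \texttt{Python} sketch (an $O(n^4)$ illustration only, not the linear-time recognition algorithms cited above) tests $P_4$-freeness, with edges given as a set of \texttt{frozenset} pairs.
\begin{verbatim}
from itertools import combinations, permutations

def induces_p4(edges, quad):
    # does some ordering a-b-c-d of quad induce a path on four vertices?
    for a, b, c, d in permutations(quad):
        path = {frozenset(p) for p in ((a, b), (b, c), (c, d))}
        rest = {frozenset(p) for p in ((a, c), (a, d), (b, d))}
        if path <= edges and not (rest & edges):
            return True
    return False

def is_cograph(vertices, edges):
    return not any(induces_p4(edges, q)
                   for q in combinations(vertices, 4))
\end{verbatim}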
\textit{Orthology Relation:}
An empirical orthology relation $\Theta \subset \ensuremath{\mathfrak{G}}\times\ensuremath{\mathfrak{G}}$ is a
symmetric, irreflexive relation that contains all pairs $(x,y)$ of
orthologous genes. Here, we assume that $x,y\in\ensuremath{\mathfrak{G}}$ are \emph{paralogs} if
and only if $x\ne y$ and $(x,y)\notin\Theta$. This amounts to ignoring
horizontal gene transfer. Orthology detection tools often report some
weight or confidence value $w(x,y)$ for $x$ and $y$ to be orthologs from
which $\Theta$ is estimated using a suitable cutoff. Importantly, $\Theta$
is symmetric, but not transitive, i.e., in general it does not represent a
partition of $\ensuremath{\mathfrak{G}}$.
\textit{Event-Labeled Gene Tree:}
Given $\Theta$ we aim to find a gene tree $T$ with an ``event labeling''
$t:V^0\to\{\bullet,\square\}$ at the inner vertices so that, for any two
distinct genes $x,y\in L$, $t(\ensuremath{\operatorname{lca}}_{T}(x,y))=\bullet$ if $\ensuremath{\operatorname{lca}}_{T}(x,y)$
corresponds to a speciation and hence $(x,y)\in\Theta$ and
$t(\ensuremath{\operatorname{lca}}_{T}(x,y))=\square$ if $\ensuremath{\operatorname{lca}}_{T}(x,y)$ is a duplication vertex and
hence $(x,y)\notin\Theta$. If such a tree $T$ with event-labeling $t$
exists for $\Theta$, we call the pair $(T,t)$ a \emph{symbolic
representation} of $\Theta$. We write $(T,t;\sigma)$ if in addition the
species assignment map $\sigma$ is given. A detailed and more general
introduction to the theory of symbolic representations is given in the
Supplemental Material.
\textit{Reconciliation Map:}
A phylogenetic tree $S=(W,F)$ on $\ensuremath{\mathfrak{S}}$ is a species tree for a gene tree
$T=(V,E)$ on $\ensuremath{\mathfrak{G}}$ if there is a reconciliation map $\mu:V\to W\cup F$
that maps genes $a\in \ensuremath{\mathfrak{G}}$ to species $\sigma(a)=\alpha \in \ensuremath{\mathfrak{S}}$ such that
the ancestor relation $\preceq_S$ is implied by the ancestor relation
$\preceq_T$. A more formal definition is given in the Supplemental
Material. Inner vertices of $T$ that map to inner vertices of $S$ are
speciations, while vertices of $T$ that map to edges of $S$ are
duplications.
\section{Theory}
In this section, we summarize the main ideas and concepts behind our new
methods that are
based on our results established in \cite{hernandez2012event, Hellmuth:13d}.
We consider the following problem.
Given an empirical orthology relation $\Theta$ we want to compute
a species tree. To this end, four independent problems, as explained
below, have to be solved.
\textit{From Estimated Orthologs to Cographs:}
Empirical estimates of the orthology relation $\Theta$ will in general
contain errors in the form of false-positive orthology assignments, as well
as false negatives, e.g., due to insufficient sequence
similarity. Horizontal gene transfer adds to this noise. Hence an empirical
relation $\Theta$ will in general not have a symbolic representation. In
fact, $\Theta$ has a \emph{symbolic representation} $(T,t)$ if and only if
$G_\Theta$ is a cograph \cite{Hellmuth:13d}, from which $(T,t)$ can be
derived in linear time, see also Theorem \ref{A:thm:ortho-cograph} in the
Supplemental Material. However, the \emph{cograph editing
problem}, which aims to convert a given graph $G(V,E)$ into a cograph
$G^*=(V,E^*)$ with the minimal number $|E\vartriangle E^*|$ of inserted or
deleted edges, is an NP-hard problem \cite{Liu:11, Liu:12}. Here, the
symbol $\vartriangle$ denotes the symmetric difference of two sets.
In our setting the problem is considerably simplified by the structure of
the input data. The gene set of every living organism consists of hundreds
or even thousands of non-homologous gene families. Thus, the initial
estimate of $G_{\Theta}$ already partitions into a large number of connected
components. As shown in Lemma \ref{A:lem:disconnected} in the Supplemental
Material, it suffices to solve the cograph editing for each connected
component separately.
\textit{Extraction of All Species Triples:}
From this edited cograph $G_{\Theta^*}$, we obtain a unique cotree that, in particular,
is equivalent to an incompletely resolved event-labeled gene tree $(T,t;\sigma)$.
In \cite{hernandez2012event}, we investigated the conditions for the
existence of a reconciliation map $\mu$ from the gene tree $T$ to
the species tree $S$. Given
$(T,t;\sigma)$, consider the triple set $\mathbb{G}$ consisting of all
triples $r=\rt{(ab|c)}\in\mathfrak{R}(T)$ so that (i) all genes
$a,b,c$ belong to different species, and (ii) the event at
the most recent common ancestor of $a,b,c$ is a speciation event,
$t(\ensuremath{\operatorname{lca}}_T(a,b,c))=\bullet$. From $\mathbb{G}$ and $\sigma$, one can
construct the following set of species triples:
\begin{equation*}
\mathbb{S}= \left\{ \rt{(\alpha\beta|\gamma)} |\,
\exists \rt{(ab|c)}\in\mathbb{G}
\textrm{\ with\ }
\sigma(a)=\alpha, \sigma(b)=\beta,\sigma(c)=\gamma
\right\}
\end{equation*}
The main result of \cite{hernandez2012event} establishes that there is a
species tree on $\sigma(\ensuremath{\mathfrak{G}})$ for $(T,t,\sigma)$ if and only if the triple
set $\mathbb{S}$ is consistent. In this case, a reconciliation map can
be found in polynomial time. No reconciliation map exists if $\mathbb{S}$
is inconsistent.
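The extraction of $\mathbb{S}$ from $(T,t;\sigma)$ is a simple tree traversal. The following \texttt{Python} sketch (hypothetical code with an ad-hoc tree encoding: a leaf is a pair \texttt{(gene, species)}, an inner vertex a pair \texttt{(event, children)} with event \texttt{'spec'} or \texttt{'dup'}) collects the species triples defined above.
\begin{verbatim}
from itertools import combinations

def leaves(t):
    if isinstance(t, tuple) and t[0] in ('spec', 'dup'):
        return [x for child in t[1] for x in leaves(child)]
    return [t]

def species_triples(t, out=None):
    out = set() if out is None else out
    if not (isinstance(t, tuple) and t[0] in ('spec', 'dup')):
        return out
    event, children = t
    if event == 'spec':
        # a, b from one child subtree (so lca(a,b) lies strictly below t),
        # c from another; then lca(a,b,c) = t is a speciation
        for cin in children:
            for cout in children:
                if cin is cout:
                    continue
                for (a, sa), (b, sb) in combinations(leaves(cin), 2):
                    for (c, sc) in leaves(cout):
                        if len({sa, sb, sc}) == 3:
                            out.add((frozenset((sa, sb)), sc))
    for child in children:
        species_triples(child, out)
    return out
\end{verbatim}
For the event-labeled tree \texttt{('spec', [('spec', [('a','A'), ('b','B')]), ('c','C')])} this yields the single species triple $\rt{(AB|C)}$.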
\textit{Maximal Consistent Triple Set:}
In practice, we cannot expect that the set $\mathbb{S}$ will be consistent.
Therefore, we have to solve an NP-hard problem, namely,
computing a maximum consistent subset of triples
$\mathbb{S}^* \subseteq \mathbb{S}$ \cite{Byrka:10a,vanIersel:09}.
The following result (see \cite{GM-13} and Supplemental Material) plays a
key role for the ILP formulation of triple consistency.
\begin{thm}\small
A strictly dense triple set $R$ on $L$ with $|L|\geq 3$ is consistent if
and only if $\ensuremath{\operatorname{cl}}(R')\subseteq R$ holds for all $R'\subseteq R$ with
$|R'|= 2$.
\label{thm:consistIFFpairwise}
\end{thm}
\textit{Least Resolved Species Tree:} In order to compute an estimate for
the species tree in practice, we finally compute from $\mathbb{S}^*$ a
least resolved tree $S$ that minimizes the number of inner vertices.
Hence, we have to solve another NP-hard problem
\cite{Jansson:12}. However, some instances can be solved in
polynomial time, which can be checked efficiently by utilizing the next
result (see Supplemental Material).
\begin{proposition} \small
If the tree $T$ inferred from the triple set $R$ by means of
\texttt{BUILD} is binary, then the closure $\ensuremath{\operatorname{cl}}(R)$ is strictly dense.
Moreover, $T$ is unique and hence, a least resolved tree
for $R$.
\label{pro:BinaryClDense}
\end{proposition}
\section{ILP Formulation}
Since we have to solve three intertwined NP-complete optimization problems
we cannot realistically hope for an efficient exact algorithm. We therefore
resort to ILP as the method of choice for solving the problem of computing
a least resolved species tree $S$ from an empirical estimate of the
orthology relation $G_\Theta$. We will use binary variables throughout.
Table 1
summarizes the definition of the ILP variables and
provides a key to the notation used in this section. In the following we
summarize the ILP formulation. A detailed description and proofs for the
correctness and completeness of the constraints can be
found in the Supplemental Material.
\input{ILPnotationTable.tex}
\textit{From Estimated Orthologs to Cographs:}
Our first task is to compute a cograph $G_{\Theta^*}$ that is as similar as
possible to $G_\Theta$ (Eq.\ \eqref{ilp:minDiff} and \eqref{ilp:cog}) with
the additional constraint that no pair of genes within the same species is
connected by an edge, since no pair of orthologs can be found in the same
species (Eq.\ \eqref{ilp:forbid_E}). Binary variables $E_{xy}$ express
(non)edges in $G_{\Theta^*}$ and binary constants $\Theta_{ab}$ (non)pairs
of the input relation $\Theta$. This ILP formulation
requires $O(|\ensuremath{\mathfrak{G}}|^2)$ binary variables and
$O(|\ensuremath{\mathfrak{G}}|^4)$ constraints. In practice, the effort is not dominated by the
number of vertices, since the connected components of $G_{\Theta}$ can be
treated independently.
\begin{ILP} \label{ilp:minDiff}
\min & \sum_{(x,y)\in\ensuremath{\mathfrak{G}} \times \ensuremath{\mathfrak{G}}} (1-\Theta_{xy}) E_{xy} +
\sum_{(x,y)\in \ensuremath{\mathfrak{G}} \times \ensuremath{\mathfrak{G}} } \Theta_{xy} (1-E_{xy})
\end{ILP}
\begin{ILP}\label{ilp:forbid_E}
E_{xy}=0 \text{ for all } x,y\in \ensuremath{\mathfrak{G}}
\text{ with } \sigma(x)=\sigma(y)
\end{ILP}
\begin{ILP} \label{ilp:cog}
&E_{wx} + E_{xy}+ E_{yz} - E_{xz} - E_{wy} - E_{wz} \leq 2 \\
&\forall \text{\ ordered tuples\ } (w,x,y,z) \text{\ of distinct\ }
w,x,y,z\in\ensuremath{\mathfrak{G}}
\end{ILP}
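For intuition (and as a correctness reference on tiny components), the optimization can also be written down by brute force; the following \texttt{Python} sketch (exponential, hypothetical helper code that reuses \texttt{is\_cograph} from the sketch above and omits the same-species constraint \eqref{ilp:forbid_E} for brevity) returns an optimally edited edge set.
\begin{verbatim}
from itertools import combinations

def edit_to_cograph(vertices, edges):
    # try all k-element edit sets, smallest k first; the first cograph
    # found realizes the minimum of the objective above
    pairs = [frozenset(p) for p in combinations(vertices, 2)]
    for k in range(len(pairs) + 1):
        for flips in combinations(pairs, k):
            edited = set(edges) ^ set(flips)
            if is_cograph(vertices, edited):
                return edited
\end{verbatim}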
\textit{Extraction of All Species Triples:}
The construction of the species tree $S$ is based upon the set $\mathbb{S}$
of species triples that can be derived from the set of gene triples
$\mathbb{G}$, as explained in the previous section. Although the problem
of determining such triples is not NP-hard, we give an ILP formulation in
the Supplemental Material for the sake of completeness. Since any other
approach can be used to determine the species triples, we omit the
formulation here and only note that it requires $O(|\ensuremath{\mathfrak{S}}|^3)$
variables and $O(|\ensuremath{\mathfrak{G}}|^3+|\ensuremath{\mathfrak{S}}|^4)$ constraints.
\textit{Maximal Consistent Triple Set:}
An ILP approach to find maximal consistent triple sets was proposed in
\cite{chang2011ilp}. It explicitly builds up a binary tree as a way of
checking consistency. Their approach, however, requires $O(|\ensuremath{\mathfrak{S}}|^4)$ ILP
variables, which limits the applicability in practice. By Theorem
\ref{thm:consistIFFpairwise}, a strictly dense triple set $R$ is consistent
if for all two-element subsets $R'\subseteq R$ the closure $\ensuremath{\operatorname{cl}}(R')$ is
contained in $R$. This observation allows us to avoid the explicit tree
construction and makes it much easier to find a maximal consistent subset
$\mathbb{S}^*\subseteq \mathbb{S}$. Of course, neither $\mathbb{S}^*$ nor
$\mathbb{S}$ need to be strictly dense. However, since $\mathbb{S}^*$ is
consistent, Lemma \ref{A:lem:binstrictdense} (Supplemental Material)
guarantees that there is a strictly dense triple set $\mathbb{S}'$ containing
$\mathbb{S}^*$. Thus we have $\mathbb{S}^* = \mathbb{S}'\cap \mathbb{S}$,
where $\mathbb{S}'$ must be chosen to maximize $|\mathbb{S}'\cap
\mathbb{S}|$. We define binary variables $T'_{\rt{(\alpha\beta|\gamma)}}$,
$T^*_{\rt{(\alpha\beta|\gamma)}}$, resp., binary constants $T_{\rt{(\alpha\beta|\gamma)}}$
to indicate whether $\rt{(\alpha\beta|\gamma)}$ is contained in $\mathbb{S}'$,
$\mathbb{S}^*$, resp., $\mathbb{S}$.
The ILP formulation that uses $O(|\ensuremath{\mathfrak{S}}|^3)$ variables and $O(|\ensuremath{\mathfrak{S}}|^4)$ constraints
is as follows.
\begin{ILP}
\max \sum_{\rt{(\alpha\beta|\gamma)}\in \mathbb S} T'_{\rt{(\alpha\beta|\gamma)}}
\label{ilp:maxdense}
\end{ILP}
\begin{ILP}
\label{ilp:sd}
&T'_{\rt{(\alpha\beta|\gamma)}} + T'_{\rt{(\alpha\gamma|\beta)}} +
T'_{\rt{(\beta\gamma|\alpha)}} = 1
\end{ILP}
\begin{ILP}
2 T'_{\rt{(\alpha\beta|\gamma)}} + 2&T'_{\rt{(\alpha\delta|\beta)}} -
T'_{\rt{(\beta\delta|\gamma)}} - T'_{\rt{\rt{(\alpha\delta|\gamma)}}} \leq 2
\label{ilp:eq:infRule2}
\end{ILP}
\begin{ILP}
0 \leq T'_{\rt{(\alpha\beta|\gamma)}} + T_{\rt{(\alpha\beta|\gamma)}} -
2T^*_{\rt{(\alpha\beta|\gamma)}} \leq 1
\label{eq:tstar}
\end{ILP}
This ILP formulation can easily be adapted to solve a \emph{``weighted''
maximum consistent subset} problem: Denote by $w\rt{(\alpha\beta|\gamma)}$ the
number of connected components in $G_{\Theta^*}$ that contain three
vertices $a,b,c\in \ensuremath{\mathfrak{G}}$ with $\rt{(ab|c)}\in \mathbb G$ and
$\sigma(a)=\alpha,\sigma(b)=\beta, \sigma(c)=\gamma$. These weights can simply be
inserted into the objective function Eq.\ \eqref{ilp:maxdense}
\begin{ILP}
\max \sum_{\rt{(\alpha\beta|\gamma)}\in \mathbb{S}}
T'_{\rt{(\alpha\beta|\gamma)}}\cdot w\rt{(\alpha\beta|\gamma)}
\label{ilp:wmax}
\end{ILP}
to increase the relative importance of species triples in $\mathbb{S}$, if
they are observed in multiple gene families.
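To make the formulation concrete, the following \texttt{Python} sketch states the density constraint \eqref{ilp:sd}, the closure constraint \eqref{ilp:eq:infRule2} and the objective \eqref{ilp:maxdense} for a toy instance of four species. It is hypothetical illustration code: it assumes the open-source \texttt{PuLP} package with its default solver (rather than \texttt{CPLEX}), and it reads off $\mathbb{S}^*$ directly instead of introducing the variables of \eqref{eq:tstar}.
\begin{verbatim}
from itertools import combinations, permutations
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum

species = ['A', 'B', 'C', 'D']
# observed triples (xy|z) as tuples (x, y, z) with x < y; note that
# (AB|C) and (BC|A) cannot both be displayed by one tree
S_obs = {('A','B','C'), ('B','C','A'), ('A','B','D'), ('A','C','D')}

def key(x, y, z):
    return (min(x, y), max(x, y), z)

trip = [key(x, y, z) for x, y, z in permutations(species, 3) if x < y]
prob = LpProblem('max_consistent_triples', LpMaximize)
Tp = LpVariable.dicts('Tp', trip, cat=LpBinary)    # the variables T'
prob += lpSum(Tp[t] for t in trip if t in S_obs)   # maximize overlap with S
for x, y, z in combinations(species, 3):           # strict density
    prob += Tp[key(x,y,z)] + Tp[key(x,z,y)] + Tp[key(y,z,x)] == 1
for a, b, g, d in permutations(species, 4):        # closure under rule (ii)
    prob += (2*Tp[key(a,b,g)] + 2*Tp[key(a,d,b)]
             - Tp[key(b,d,g)] - Tp[key(a,d,g)]) <= 2
prob.solve()
print({t for t in trip if t in S_obs and Tp[t].value() == 1})  # = S*
\end{verbatim}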
\textit{Least Resolved Species Tree:}
We finally have to find a least resolved species tree from the set $\mathbb
S^*$ computed in the previous step. Thus the variables
$T^*_{\rt{(\alpha\beta|\gamma)}}$ become the input constants. For the explicit
construction of the tree we use some of the ideas of \cite{chang2011ilp}.
To build an arbitrary tree for the consistent triple set $\mathbb S^*$, one
can use one of the fast implementations of \texttt{BUILD}
\cite{sem-ste-03a}. If this tree is binary, then Proposition
\ref{pro:BinaryClDense} implies that the closure $\ensuremath{\operatorname{cl}}(\mathbb S^*)$ is
strictly dense and that this tree is a unique and least resolved tree for
$\mathbb S^*$. Hence, \texttt{BUILD} is used as a preprocessing step to
test whether the tree for $\mathbb S^*$ is already binary. If
not, we proceed with the following ILP approach that
uses $O(|\ensuremath{\mathfrak{S}}|^3)$ variables and constraints.
\begin{ILP} \label{ilp:minY}
\min\ &\sum_p Y_p
\end{ILP}
\begin{ILP}
0 \leq Y_p|\ensuremath{\mathfrak{S}}| - \sum_{\alpha\in\ensuremath{\mathfrak{S}}} M_{\alpha p}\leq |\ensuremath{\mathfrak{S}}|-1 \label{ilp:yp}
\end{ILP}
\begin{ILP}\label{ilp:Nclus}
0\leq & M_{\alpha p} + M_{\beta p} - 2 N_{\alpha\beta, p} \leq 1
\end{ILP}
\begin{ILP}
1 - |\ensuremath{\mathfrak{S}}|(1- T^*_{\rt{(\alpha\beta|\gamma)}}) \leq
\sum_p \Big( N_{\alpha\beta,p} - \frac{1}{2}
N_{\alpha\gamma,p} -\frac{1}{2} N_{\beta\gamma,p} \Big)
\label{ilp:rep}
\end{ILP}
\begin{ILP}\label{ilp:CM}
C_{p,q,01}\geq &-M_{\alpha p}+M_{\alpha q}\\[-0.1cm]
C_{p,q,10}\geq &\ \ \ \ M_{\alpha p}-M_{\alpha q} \notag\\[-0.1cm]
C_{p,q,11}\geq &\ \ \ \ M_{\alpha p}+M_{\alpha q}-1 \notag
\end{ILP}
\begin{ILP}
C_{p,q,01} + C_{p,q,10} + C_{p,q,11} \leq 2 \ \forall p,q \label{ilp:comp}
\end{ILP}
Since a phylogenetic tree $S$ is equivalently specified by its \emph{hierarchy}
$\mathcal{C} = \{L(v)\mid v\in V(S)\}$ whose elements are called \emph{clusters}
(see Supplemental Material or \cite{sem-ste-03a}),
we construct the clusters induced by
all triples of $\mathbb{S}^*$ and check whether they form a hierarchy on
$\ensuremath{\mathfrak{S}}$. Following \cite{chang2011ilp}, we define the binary
$|\ensuremath{\mathfrak{S}}|\times(|\ensuremath{\mathfrak{S}}|-2)$ matrix $M$, whose entry $M_{\alpha p}=1$ indicates
that species $\alpha$ is contained in cluster $p$, see Supplemental
Material. The entries $M_{\alpha p}$ serve as ILP variables. In contrast to
the work of \cite{chang2011ilp}, we allow \emph{trivial} columns in $M$ in
which all entries are $0$. Minimizing the number of \emph{non-trivial}
columns then yields a least resolved tree.
For any two distinct species $\alpha,\beta$ and all clusters $p$ we introduce
binary variables $N_{\alpha\beta, p}$ that indicate whether two species
$\alpha,\beta$ are both contained in the same cluster $p$ or not
(Eq.\ \eqref{ilp:Nclus}). To determine whether a triple $\rt{(\alpha\beta|\gamma)}$
is contained in $\mathbb{S}^* \subseteq \mathbb{S}$ and displayed by a
tree, we need the constraint Eq.\ \eqref{ilp:rep}. Following, the ideas of
Chang et al.\ we use the ``three-gamete condition''. Eq.\ \eqref{ilp:CM} and
\eqref{ilp:comp} ensures that $M$ defines a ``partial'' hierarchy (any two
clusters satisfy $p\cap q\in \{p,q, \emptyset\}$) of compatible clusters.
A detailed discussion how these conditions establish that $M$ encodes a
``partial'' hierarchy can be found in the Supplemental Material.
Our aim is to find a least resolved tree that displays all triples of
$\mathbb{S}^*$. We use the $|\ensuremath{\mathfrak{S}}|-2$ binary variables $Y_p=1$ to indicate
whether there are non-zero entries in column $p$ (Eq.\ \eqref{ilp:yp}).
Finally, Eq.\ \eqref{ilp:minY} captures that the number of non-trivial
columns in $M$, and thus the number of inner vertices in the respective
tree, is minimized. In the Supplemental Material we also discuss an
ILP formulation to find a tree that displays the minimum number of
additional triples not contained in $\mathbb S^*$ as an alternative to
minimizing the number of interior vertices.
\section{Implementation and Data Sets}
Details on implementation and test data sets can be found in the
Supplemental Material. Simulated data were computed with and without
horizontal gene transfer using both the method described in \cite{HHW+14}
and the \texttt{Artificial Life Framework} (\texttt{ALF}) \cite{Dalquen:12}. As
real-life data sets we used the complete protein complements of 11
\emph{Aquificales} and 19 \emph{Enterobacteriales} species. The initial
orthology relations are estimated with \texttt{Proteinortho}
\cite{Lechner:11a}. The ILP formulation of Fig.\ 1
is implemented in the Software \texttt{ParaPhylo} using \texttt{IBM ILOG
CPLEX{\texttrademark}} Optimizer 12.6. \texttt{ParaPhylo} is
freely available from \\
\texttt{http://pacosy.informatik.uni-leipzig.de/paraphylo}.
\section{Results and Discussion}
\begin{figure*}[t]
\begin{center}
\includegraphics[bb=150 280 470 520, scale=1.3]{./plots-doc.ps} \hspace{1cm}
\end{center}
\caption{
Accuracy of reconstructed species trees in simulated data sets.
\emph{(a)} Dependence on the number of gene families:
10 (left), and 20 (right) species and 100 to 500 gene families are
generated using \texttt{ALF}\ with duplication/loss rate 0.005 and horizontal
gene transfer rate $0.0$.
\emph{(b)} Dependence on the intensity of horizontal gene transfer:
Orthology estimated with \texttt{Proteinortho} (left), and assuming
perfect paralogy knowledge (right); $10$ species and $1000$ gene
families are generated using \texttt{ALF}\ with duplication/loss rate $0.005$
and horizontal gene transfer rate ranging from $0.0$ to $0.0075$.
\emph{(c)} Dependence on the type and intensity ($p=5-25\%$) of noise
in the raw orthology data $\Theta$:
$10$ species and 1000 gene families are generated using \texttt{ALF}\ with
duplication/loss rate $0.005$ and horizontal gene transfer rate $0.0$.
Tree distances are measured by the triple metric (TT); all box plots
summarize 100 independent data sets.
}%
\label{fig:plot}
\end{figure*}
We have shown rigorously that orthology information alone is sufficient to
reconstruct the species tree provided that (i) the orthology is known
without error and unperturbed by horizontal gene transfer, and (ii)
the input data contains a sufficient number of duplication events. While
this species tree can be inferred in polynomial time for noise-free data,
in a realistic setting three NP-hard optimization problems need to be
solved.
To this end, we use here an exact ILP formulation implementing the workflow
of Fig.\ 1
to compute species trees from empirically
estimated orthology assignments. We first use simulated data to
demonstrate that it is indeed feasible in practice to obtain correct gene
trees directly from empirical estimates of orthology. For 5, 10, 15, and
20 species we obtained perfect, fully resolved reconstructions of 80\%,
56\%, 24\%, and 11\% of the species trees using 500 gene families. This
comes as no surprise, given the low amount of paralogs in the simulations
(7.5\% to 11.2\%), and the high amount of extremely short branches in the
generated species trees -- on 11.3\% to 17.9\% of the branches, less than
one duplication is expected to occur. Nevertheless, the average TT distance
\emph{was always smaller than 0.09} for more than 300 gene families,
independently of the number of species, Fig.\ 2(a).
Similar
results for other tree distance measures are compiled in the Supplemental
Material. Thus, deviations from perfect reconstructions are nearly
exclusively explained by a lack of perfect resolution.
In order to evaluate the robustness of the species trees in response to
noise in the input data we used simulated gene families with different
noise models and levels: (i) insertion and deletion of edges in the
orthology graph (homologous noise), (ii) insertion of edges (orthologous
noise), (iii) deletion of edges (paralogous noise), and (iv) modification
of gene/species assignments (xenologous noise). We observe a substantial
dependence of the accuracy of the reconstructed species trees on the noise
model. The results are most resilient against overprediction of orthology
(noise model ii), while missing edges in $\Theta$ have a larger impact, see
Fig.\ 2(c)
for TT distance, and Supplemental Material for
the other distances. This behavior can be explained by the observation that
many false orthologs (overpredicting orthology) lead to an orthology graph,
whose components are more clique-like and hence yield few informative
triples. Incorrect species triples thus are reduced, while missing species
triples often can be supplemented through other gene families. On the other
hand, if there are many false paralogs (underpredicting orthology) more
false species triples are introduced, resulting in inaccurate trees.
Xenologous noise (model iv), simulated by changing gene/species
associations with probability $p$ while retaining the original gene tree,
amounts to an extreme model for horizontal transfer. Our model, in
particular in the weighted version, is quite robust for small amounts of
HGT of 5\% to 10\%. Although some incorrect triples are introduced in the
wake of horizontal transfer, they are usually dominated by correct
alternatives observed from multiple gene families, and thus, excluded during
computation of the maximal consistent triple set. Only large scale
concerted horizontal transfer, which may occur in long-term endosymbiotic
associations \cite{Keeling:08}, thus pose a serious problem.
Simulations with \texttt{ALF} \cite{Dalquen:12} show that our method is resilient
against errors resulting from mis-predicting xenology as orthology, see
Fig.\ 2(b)
right, even at horizontal gene transfer rates
of $39.5\%$. Assuming perfect paralogy knowledge, i.e., assuming that all
xenologs are mis-predicted as orthologs, the correct trees are
reconstructed essentially independently from the amount of HGT for
$69.75\%$ of the data sets, and the triple distance to the correct tree
remain minute in the remaining cases. This is consistent with noise model
(ii), i.e., a bias towards overpredicting orthology. Tree reconstruction
based directly on the estimated orthology relation computed with
\texttt{Proteinortho} is of course less accurate, Fig.\
2(b)
left. Even extreme rates of HGT, however, have no
discernible effect on the quality of the inferred species trees. Our
approach is therefore limited only by the quality of the initial orthology
prediction tools.
The fraction $s$ of all triples obtained from the orthology relations
that are retained in the final tree estimates serves as a quality measure
similar in flavor e.g.\ to the retention index of
cladistics. Bootstrapping support values for individual nodes are readily
computed by resampling either at the level of gene families or at the
level of triples (see Supplemental Material).
\begin{figure}[t]
\centerline{
\includegraphics[width=1.\columnwidth]{./trees_aquificales.tboot.eps}
}
\caption{
Phylogenetic tree of eleven \emph{Aquificales} species inferred
from paralogy. Internal node labels indicate triple-based bootstrap
support.}
\label{fig:simTree}
\end{figure}
For the \emph{Aquificales} data set \texttt{Proteinortho} predicts 2856
gene families, from which 850 contain duplications. The reconstructed
species tree (see Fig.\ 3, support $s=0.61$) is almost
identical to the tree presented in \cite{Lechner:14b}. All species are
clustered correctly according to their taxonomic families. A slight
difference refers to the two \emph{Sulfurihydrogenibium} species not being
directly clustered. These two species are very closely related. With only
a few duplicates exclusively found in one of the species, the data was not
sufficient for the approach to resolve this subtree correctly.
Additionally, \emph{Hydrogenivirga sp.} is misplaced next to
\emph{Persephonella marina}. This does not come as a surprise: Lechner
\emph{et al.} \cite{Lechner:14b} already suspected that the data from this
species was contaminated with material from \emph{Hydrogenothermaceae}.
The second data set comprises the genomes of 19 \emph{Enterobacteriales}
with 8218 gene families, of which 15 consist of more than 50 genes and 1342
contain duplications. Our orthology-based tree shows the expected
groupings of \emph{Escherichia} and \emph{Shigella} species and identifies
the monophyletic groups comprising \emph{Salmonella}, \emph{Klebsiella},
and \emph{Yersinia} species. The topology of the deeper nodes agrees only
in part with the reference tree from \texttt{PATRIC} database
\cite{Wattam:13}, see Supplemental Material for additional information. The
resulting tree has a support of $0.53$, reflecting that a few of the deeper
nodes are poorly supported.
Data sets of around 20 species with a few thousand gene families, each
having up to 50 genes, can be processed in reasonable time,
see Table \ref{tab:runtimeExtended}.
However, depending on the
amount of noise in the data, the runtime for cograph editing can increase
dramatically even for families with less than 50 genes.
\section{Conclusion}
We have shown here both theoretically and in a practical implementation
that it is possible to access the phylogenetic information implicitly
contained in gene duplications and thus to reconstruct a species phylogeny
from information of paralogy only. This source of information is strictly
complementary to the sources of information employed in phylogenomics
studies, which are always based on alignments of orthologous sequences. In
fact, 1:1 orthologs -- the preferred data in sequence-based phylogenetics
-- correspond to cographs that are complete and hence have a star as their
cotree and therefore do not contribute \emph{at all} to the phylogenetic
reconstruction in our approach. Access to the phylogenetic information
implicit in (co-)orthology data requires the solution of three NP-complete
combinatorial optimization problems. This is generally the case in
phylogenetics, however: both the multiple sequence alignment problem and
the extraction of maximum parsimony, maximum likelihood, or optimal
Bayesian trees is NP-complete as well. Here we solve the
computational tasks exactly for moderate-size problems by means of an
ILP formulation. Using phylogenomic data for \emph{Aquificales} and
\emph{Enterobacteriales} we demonstrated that non-trivial phylogenies can
indeed be reconstructed from tree-free orthology estimates
alone. Just as sequence-based approaches in molecular phylogeny
crucially depend on the quality of multiple sequence alignments, our
approach is sensitive to the initial estimate $\Theta$ of the orthology
relation. Horizontal gene transfer, furthermore, is currently not
included in the model but rather treated as noise that disturbs the
phylogenetic signal. Simulated data indicate that the method is rather
robust and can tolerate surprisingly large levels of noise in the form
of both mis-predicted orthology and horizontal gene transfer, provided
a sufficient number of independent gene families is available as
input data. Importantly, horizontal gene transfer can introduce a
bias only when many gene families are simultaneously affected by horizontal
transfer. Lack of duplications, on the other hand, limits our resolution
at very short time scales, a regime in which sequence-based approaches work
very accurately.
We have used here an exact implementation as ILP to demonstrate the
potential of the approach without confounding it with computational
approximations. Thus, the current implementation does not easily scale
to very large data sets.
Paralleling
the developments in sequence-based phylogenetics, where the
NP-complete problems of finding a good input alignment and of
constructing tree(s) maximizing the parsimony score, likelihood, or
Bayesian posterior probability also cannot be solved exactly for large
data sets, it will be necessary in practice to settle for heuristic
solutions. In sequence-based phylogenetics, these have improved over
decades to the point where they are no longer a limiting factor in
phylogenetic reconstruction. Several polynomial time heuristics and
approximation algorithms have been devised \emph{already} for the triple
consistency problem \cite{Gasieniec:99,Maemura:07,Byrka:10a,Tazehkand:13}.
The cograph editing problem and the least resolved tree problem, in
contrast, have received comparably little attention so far, but constitute
the most obvious avenues for boosting computational efficiency. Empirical
observations such as the resilience of our approach against overprediction
of orthologs in the input will certainly be helpful in designing efficient
heuristics.
In the long run, we envision that the species tree $S$, and the symbolic
representation of the event-annotated gene tree $(T,t)$ may serve as
constraints for a refinement of the initial estimate of $\Theta$,
making use only of (nearly) unambiguously identified branchings and event
assignments. A series of iterative improvements of estimates for $\Theta$,
$(T,t)$, and $S$, and, more importantly, methods that allow one to
accurately detect paralogs, may not only lead to more accurate trees and
orthology assignments, but could also turn out to be computationally more
efficient.
\section*{APPENDIX}
\section{Theory}
In this section we give an expanded and more technical account of the
mathematical theory underlying the relationships between orthology
relations, triple sets, and the reconciliation of gene and species trees. In
particular, we include here the proofs of the key novel results outlined in
the main text. The notation in the main text is a subset of the one used
here. Theorems, remarks, and ILP formulations have the same numbers as in
the main text. As a consequence, the numberings in this supplement may not
always be in ascending order.
\subsection{Notation}
For an arbitrary set $X$ we denote with $\binom{X}{n}$ the set of
$n$-element subsets of $X$. In the remainder of this paper, $L$ will
always denote a finite set of size at least three. Furthermore, we will
denote with $\ensuremath{\mathfrak{G}}$ a set of genes and with $\ensuremath{\mathfrak{S}}$ a set of species and
assume that $|\ensuremath{\mathfrak{G}}|\ge 3$ and $|\ensuremath{\mathfrak{S}}|\ge 1$. Genes contained in $\ensuremath{\mathfrak{G}}$ are
denoted by lowercase Roman letters $a,b,c,\ldots$ and species in $\ensuremath{\mathfrak{S}}$ by
lower case Greek letters $\alpha,\beta,\gamma\ldots$. Furthermore, let
$\sigma:\ensuremath{\mathfrak{G}}\to \ensuremath{\mathfrak{S}}$ with $x \mapsto \sigma(x)$ be a mapping that assigns
to each gene $x\in \ensuremath{\mathfrak{G}}$ its corresponding species $\sigma(x)=\chi\in
\ensuremath{\mathfrak{S}}$. With $\sigma(\ensuremath{\mathfrak{G}})$ we denote the image of $\sigma$. W.l.o.g. we can
assume that the map $\sigma$ is surjective, and thus,
$\sigma(\ensuremath{\mathfrak{G}})=\ensuremath{\mathfrak{S}}$. We assume that the reader is familiar with graphs and
their terminology, and refer to \cite{Diestel12} as a standard reference.
\subsection{Phylogenetic Trees}
A tree $T=(V,E)$ is a connected cycle-free graph with vertex set $V(T)=V$
and edge set $E(T)=E$. A vertex of $T$ of degree one is called a
\emph{leaf} of $T$ and all other vertices of $T$ are called \emph{inner}
vertices. An edge of $T$ is an \emph{inner} edge if both of its end
vertices are inner vertices. The set of inner vertices of $T$ is denoted
by $V^0$. A tree $T$ is called \emph{binary} if each inner vertex has
outdegree two. A \emph{rooted tree} $T=(V,E)$ is a tree that contains a
distinguished vertex $\rho_T\in V$ called the \emph{root}.
A \emph{phylogenetic tree $T$ (on $L$)} is a rooted tree $T=(V,E)$
with leaf set $L\subseteq V$ such that no inner vertex has in- and
outdegree one and whose root $\rho_T\in V$ has indegree zero.
A phylogenetic tree on $\ensuremath{\mathfrak{G}}$, resp., on $\ensuremath{\mathfrak{S}}$,
is called \emph{gene tree}, resp., \emph{species tree}.
Let $T=(V,E)$ be a phylogenetic tree on $L$ with root $\rho_T$. The
ancestor relation $\preceq_T$ on $V$ is the partial order defined, for
all $x,y\in V$, by $x \preceq_T y$ whenever $y$ lies on the (unique)
path from $x$ to the root. Furthermore, we write $x \prec_T y$ if $x
\preceq_T y$ and $x\ne y$. For a non-empty subset of leaves
$L'\subseteq L$, we define $\ensuremath{\operatorname{lca}}_T(L')$, or the \emph{most recent
common ancestor of $L'$}, to be the unique vertex in $T$ that is the
least upper bound of $L'$ under the partial order $\preceq_T$. In
case $L'=\{x,y \}$, we put $\ensuremath{\operatorname{lca}}_T(x,y):=\ensuremath{\operatorname{lca}}_T(\{x,y\})$ and if
$L'=\{x,y,z \}$, we put $\ensuremath{\operatorname{lca}}_T(x,y,z):=\ensuremath{\operatorname{lca}}_T(\{x,y,z\})$. If there
is no danger of ambiguity, we will write $\ensuremath{\operatorname{lca}}(L')$ rather then
$\ensuremath{\operatorname{lca}}_T(L')$.
For $v\in V$, we denote with $L(v):=\{ y\in L| y\preceq_T v\}$ the
set of leaves in the subtree $T(v)$ of $T$ rooted in $v$. Thus,
$L(\rho_T)=L$ and $T(\rho_T)=T$.
It is well-known that there is a one-to-one correspondence between
(isomorphism classes of) phylogenetic trees on $L$ and so-called
hierarchies on $L$. For a finite set $L$, a \emph{hierarchy on $L$} is a
subset $\mathcal C$ of the power set $\mathbb P(L)$ such that
\begin{enumerate}
\item[(i)] $L\in \mathcal{C}$,
\item[(ii)] $\{x\}\in \mathcal{C}$ for all $x\in L$, and
\item[(iii)] $p\cap q\in \{p, q, \emptyset\}$ for all $p, q\in
\mathcal{C}$.
\end{enumerate}
The elements of $\mathcal{C}$ are called clusters.
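These three axioms translate directly into code; the following \texttt{Python} sketch (hypothetical helper code; clusters as \texttt{frozenset}s) tests them for a given collection.
\begin{verbatim}
def is_hierarchy(C, L):
    # (i) L itself, (ii) all singletons, (iii) pairwise nested or disjoint
    C = {frozenset(c) for c in C}
    if frozenset(L) not in C:
        return False
    if any(frozenset({x}) not in C for x in L):
        return False
    return all(p & q in (p, q, frozenset()) for p in C for q in C)
\end{verbatim}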
\ \\ \begin{thm}[\cite{sem-ste-03a}]
Let $\mc C$ be a collection of non-empty subsets of $L$. Then, there is
a phylogenetic tree $T$ on $L$ with $\mc C = \{L(v)\mid v\in V(T)\}$ if
and only if $\mc C$ is a hierarchy on $L$.
\label{A:thm:hierarchy}
\end{thm}\ \\
The following result appears to be well known. We include a simple proof
since we were unable to find a reference for it.
\ \\ \begin{lem}
The number of clusters $|\mc C|$ in a hierarchy $\mc C$ on $L$
determined by a phylogenetic tree $T=(V,E)$ on $L$ is bounded
by $2 |L|-1$.
\label{A:lem:nrC}
\end{lem}
\begin{proof}
Clearly, the number of clusters $|\mathcal{C}|$ is determined by the number
of vertices $|V|$, since each leaf $v\in L$
determines the singleton cluster $\{v\}\in \mathcal{C}$
and each inner node $v$ has at least two children and thus
gives rise to a new cluster $L(v) \in \mathcal{C}$. Hence,
$|\mathcal{C}| = |V|$.
First, consider a binary phylogenetic tree $T=(V,E)$ on $|L|$
leaves. Then there are $|V|-|L|$ inner vertices, all of out-degree
two. Hence, $|E| = 2(|V|-|L|) =|V|-1$ and thus $|V|=2|L|-1$. Hence, $T$
determines $|\mc C| =2|L|-1$ clusters and has in particular $|L|-1$ inner
vertices.
Now, it is easy to verify by induction on the number of leaves $|L|$ that
an arbitrary phylogenetic tree $T'=(V',E')$ has $n_0\leq |L|-1$ inner
vertices and thus, $ |\mathcal{C}'|=|V'| = n_0 +|L| \leq 2|L|-1$
clusters.
\end{proof}
\subsection{Rooted Triples}
\subsubsection{Consistent Triple Sets}
Rooted triples, sometimes also called rooted triplets \cite{Dress:book},
constitute an important concept in the context of supertree reconstruction
\cite{sem-ste-03a,Bininda:book} and will also play a major role here. A
rooted triple $r={\rt{(xy|z)}}$ is \emph{displayed} by a phylogenetic tree
$T$ on $L$ if $x,y,z\in L$ are pairwise distinct and the path from $x$ to $y$ does not intersect
the path from $z$ to the root $\rho_T$; thus, $\ensuremath{\operatorname{lca}}_T(x,y)\prec_T
\ensuremath{\operatorname{lca}}_T(x,y,z)$. We denote with $L_r$ the set of the three leaves
$\{x,y,z\}$ contained in the triple $r=\rt{(xy|z)}$, and with
$L_R:=\cup_{r\in R} L_r$ the union of the leaf set of each $r\in R$. For a
given leaf set $L$, a triple set $R$ is said to be \emph{(strictly) dense}
if for each set of three distinct leaves $x,y,z\in L$ there is (exactly) one triple $r\in R$ with
$L_r=\{x,y,z\}$. For a phylogenetic tree $T$, we denote by
$\mathfrak{R}(T)$ the set of all triples that are displayed by $T$. A set
$R$ of triples is \emph{consistent} if there is a phylogenetic tree $T$ on
$L_R$ such that $R\subseteq\mathfrak{R}(T)$, i.e., $T$ displays all triples
$r\in R$.
Not all sets of triples are consistent, of course. Given a triple set $R$
there is a polynomial-time algorithm, referred to in \cite{sem-ste-03a} as
\texttt{BUILD}, that either constructs a phylogenetic tree $T$ displaying
$R$ or recognizes that $R$ is not consistent, i.e., \emph{inconsistent}
\cite{Aho:81}. Various practical implementations have been described
starting with \cite{Aho:81}, improved variants are discussed in
\cite{Henzinger:99,Jansson:05}. The problem of determining a maximum
consistent subset of an inconsistent set of triples, however, is NP-hard
and also APX-hard, see \cite{Byrka:10a,vanIersel:09} and the references
therein. We refer to \cite{Byrka:10} for an overview on the available
practical approaches and further theoretical results.
For a given consistent triple set $R$, a rooted phylogenetic tree, in
which the contraction of any edge leads to the loss of an input triple is
called a \emph{least resolved} tree (for $R$). Following the idea of
Jansson et al.\ \cite{Jansson:12}, we are mainly concerned with the
construction of least resolved trees that, in addition, have as few inner
vertices as possible and cover the largest subset of compatible triples
contained in $R$. Finding a tree with a minimal number of inner nodes
for a given consistent set of rooted triples is also an NP-hard problem,
see \cite{Jansson:12}. Unless stated explicitly, we use the term
\emph{least resolved tree} to refer to a tree with a minimum number of
interior vertices, i.e., the least resolved trees in the sense of
\cite{Jansson:12}. Alternative choices include the trees that display the
minimum number of additional triples not contained in $R$.
\subsubsection{Graph Representation of Triples}
There is a quite useful representation of a set of triples $R$ as a graph
also known as \emph{Aho graph}, see \cite{Aho:81, huson2010phylogenetic,
BS:95}. For a given triple set $R$ and an arbitrary subset $\S\subseteq
L_R$, the graph $[R,\S]$ has vertex set $\S$ and two vertices $x,y\in \S$
are linked by an edge, if there is a triple $\rt{(xy|z)} \in R$ with $z\in
\S$. Based on connectedness properties of the graph $[R,\S]$ for
particular subsets $\S\subseteq L_R$, the algorithm \texttt{BUILD}
recognizes if $R$ is consistent or not. In particular, this algorithm makes
use of the following well-known theorem.
\bigskip
\begin{thm}[\cite{Aho:81,BS:95}]
A set of rooted triples $R$ is consistent if and only if for each
subset $\S\subseteq L_R$ with $|\S|>1$ the graph $[R,\S]$ is disconnected.
\label{A:thm:ahograph}
\end{thm}
\bigskip
\begin{lem}[\cite{huson2010phylogenetic}]
Let $R$ be a dense set of rooted triples on $L$. Then for each
$\S\subseteq L$, the number of connected components of the Aho graph
$[R,\S]$ is at most two.
\label{A:lem:dense-binary}
\end{lem}\bigskip
The tree computed with \texttt{BUILD} based on the Aho graph for a
consistent set of rooted triples $R$ is denoted by $\mathrm{Aho}(R)$.
Lemma~\ref{A:lem:dense-binary} implies that $\mathrm{Aho}(R)$ must be
binary for a consistent dense set of rooted triples. We will use the Aho
graph and its key properties as a frequent tool in upcoming proofs.
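To make the role of the Aho graph in \texttt{BUILD} concrete, we include a
small Python sketch. It is our illustration only, not an optimized or
reference implementation; a triple $\rt{(xy|z)}$ is encoded as the
hypothetical tuple \texttt{((x, y), z)} and trees as nested tuples:
\begin{verbatim}
def aho_graph(R, S):
    # adjacency sets of [R,S]: x, y in S are adjacent
    # iff some triple (xy|z) in R has z in S
    adj = {v: set() for v in S}
    for (x, y), z in R:
        if x in S and y in S and z in S:
            adj[x].add(y)
            adj[y].add(x)
    return adj

def components(adj):
    # connected components of an adjacency-set graph
    seen, comps = set(), []
    for v in adj:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def build(R, S):
    # tree displaying all triples of R on leaf set S,
    # or None if R is inconsistent
    if len(S) == 1:
        return next(iter(S))
    comps = components(aho_graph(R, S))
    if len(comps) == 1:    # connected Aho graph: inconsistent
        return None
    children = []
    for C in comps:
        RC = [((x, y), z) for (x, y), z in R
              if x in C and y in C and z in C]
        child = build(RC, C)
        if child is None:
            return None
        children.append(child)
    return tuple(children)
\end{verbatim}
Consistency of a triple set $R$ can then be tested as
\texttt{build(R, set(L)) is not None} for the leaf set $L=L_R$.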
For later reference, we recall \smallskip
\begin{lem}[\cite{BS:95}]
If $R'$ is a subset of the triple set $R$ and $L$ is
a leaf set, then $[R',L]$ is a subgraph of $[R,L]$.
\label{A:lem:subgraph}
\end{lem}
\subsubsection{Closure Operations and Inference Rules}
The requirement that a set $R$ of triples is consistent, and thus, that
there is a tree displaying all triples, allows one to infer new triples from
the set of all trees displaying all triples of $R$ and to define a
\emph{closure operation} for $R$, which has been extensively studied in the
last decades, see \cite{BS:95, GSS:07,
Bryant97,huber2005recovering,BBDS:00}. Let $\langle R \rangle$ be the set
of all phylogenetic trees on $L_R$ that display all the triples of $R$.
The closure of a consistent set of rooted triples $R$ is defined as
$$\ensuremath{\operatorname{cl}}(R) = \bigcap_{T\in \langle R \rangle} \mathfrak{R}(T).$$
This operation satisfies the usual three properties of a closure operator,
namely: $R \subseteq \ensuremath{\operatorname{cl}}(R)$; $\ensuremath{\operatorname{cl}}(\ensuremath{\operatorname{cl}}(R))=\ensuremath{\operatorname{cl}}(R)$ and if $R' \subseteq R$,
then $\ensuremath{\operatorname{cl}}(R')\subseteq \ensuremath{\operatorname{cl}}(R)$. We say $R$ is \emph{closed} if $R=\ensuremath{\operatorname{cl}}(R)$.
Clearly, for any tree $T$ it holds that $\mathfrak{R}(T)$ is closed. The
brute force computation of the closure of a given consistent set $R$ runs
in $O(|R|^5)$ time \cite{BS:95}: For any three leaves $x,y,z\in L_R$ test
whether exactly one of the sets $R\cup\{\rt{(xy|z)}\}$,
$R\cup\{\rt{(xz|y)}\}$, $R\cup\{\rt{(zy|x)}\}$ is consistent, and if so,
add the respective triple to the closure $\ensuremath{\operatorname{cl}}(R)$ of $R$.
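This brute-force procedure translates directly into code; the following
sketch (ours) reuses the \texttt{build} consistency test from the sketch
above and assumes that $R$ is consistent and that triples
\texttt{((x, y), z)} are normalized with \texttt{x < y}:
\begin{verbatim}
from itertools import combinations

def closure(R, leaves):
    # cl(R): for each 3-subset {x,y,z} add the unique candidate
    # triple whose addition to R stays consistent, provided
    # exactly one of the three candidates does
    cl_R = set(R)
    for x, y, z in combinations(sorted(leaves), 3):
        candidates = [((x, y), z), ((x, z), y), ((y, z), x)]
        ok = [r for r in candidates
              if build(set(R) | {r}, set(leaves)) is not None]
        if len(ok) == 1:
            cl_R.add(ok[0])
    return cl_R
\end{verbatim}
Each of the $O(|L_R|^3)$ consistency tests runs \texttt{BUILD} on at most
$|R|+1$ triples, in line with the polynomial bound stated above.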
For a consistent set $R$ of rooted triples we write $R\vdash \rt{(xy|z)}$
if any phylogenetic tree that displays all triples of $R$ also displays
$\rt{(xy|z)}$. In other words, $R\vdash \rt{(xy|z)}$ iff $\rt{(xy|z)}\in
\ensuremath{\operatorname{cl}}(R)$. In a work of Bryant and Steel \cite{BS:95}, in which the authors
extend and generalize the work of Dekker \cite{Dekker86}, it was shown
under which conditions it is possible to infer triples by using only
subsets $R'\subseteq R$, i.e., under which conditions $R\vdash \rt{(xy|z)}
\implies R'\vdash \rt{(xy|z)}$ for some $R'\subseteq R$. In particular, we
will make frequent use of the following inference rules:
\renewcommand{\theequation}{\roman{equation}}
\begin{align}
\{\rt{(ab|c)}, \rt{(ad|c)}\} &\vdash \rt{(bd|c)} \label{eq:infRule1} \\
\{\rt{(ab|c)}, \rt{(ad|b)}\} & \vdash \rt{(bd|c)},\rt{(ad|c)} \label{eq:infRule2} \\
\{\rt{(ab|c)}, \rt{(cd|b)}\} &\vdash \rt{(ab|d)},\rt{(cd|a)}.\label{eq:infRule3}
\end{align}
\renewcommand{\theequation}{\arabic{equation}}
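Continuing the illustrative encoding from above, these rules can be
verified mechanically with the \texttt{closure} sketch; e.g.\ for rule
\eqref{eq:infRule2}:
\begin{verbatim}
R = {(('a', 'b'), 'c'), (('a', 'd'), 'b')}
# the closure adds to R exactly the triples (bd|c) and
# (ad|c) inferred by rule (ii)
print(closure(R, ['a', 'b', 'c', 'd']))
\end{verbatim}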
\begin{rem}
It is an easy task to verify that inference rules based on two triples
$r_1, r_2 \in R$ can lead to new triples only if
$|L_{r_1}\cap L_{r_2}| = 2$. Hence, the three rules
stated above are the only ones that lead to new triples for a given
pair of triples in a strictly dense triple set.
\label{A:rem:only}
\end{rem}\bigskip
For later reference and the ILP formulation, we give the following lemma.
\bigskip
\begin{lem}
\label{lem:suffRule}
Let $R$ be a strictly dense set of rooted triples.
For all $L'=\{a,b,c,d\} \subseteq L_R$ we have the following statements:
All triples inferred by rule \eqref{eq:infRule2} applied on triples $r\in R$ with $L_r\subset L'$ are contained in $R$
if and only if all triples inferred by rule \eqref{eq:infRule3} applied on triples $r\in R$ with $L_r\subset L'$
are contained in $R$.
Moreover, if all triples inferred by rule \eqref{eq:infRule2} applied on triples $r\in R$ with $L_r\subset L'$ are contained in $R$
then all triples inferred by rule \eqref{eq:infRule1} applied on triples $r\in R$ with $L_r\subset L'$ are contained in $R$.
\end{lem}
\begin{proof}
The first statement was established in \cite[Lemma 2]{GM-13}.
For the second statement assume that for all pairwise distinct
$L'=\{a,b,c,d\} \subseteq L_R$ it holds that all triples inferred by rule
\eqref{eq:infRule2}, or equivalently, by rule \eqref{eq:infRule3}
applied on triples $r\in R$ with $L_r\subset L'$ are contained in $R$.
Assume for contradiction that there are triples
$\rt{(ab|c)}, \rt{(ad|c)} \in R$, but $\rt{(bd|c)}\not\in R$.
Since $R$ is strictly dense, we have either $\rt{(bc|d)}\in R$
or $\rt{(cd|b)}\in R$. In the first case and since $\rt{(ab|c)}\in R$,
rule \eqref{eq:infRule2} implies that $\rt{(ac|d)}\in R$, contradicting
$\rt{(ad|c)}\in R$ and the strict density of $R$.
In the second case and since $\rt{(ab|c)}\in R$,
rule \eqref{eq:infRule3} implies that $\rt{(cd|a)}\in R$, again
contradicting $\rt{(ad|c)}\in R$.
\end{proof}
We are now in a position to prove the following important and helpful
lemmas and theorem. The final theorem basically states that consistent strictly
dense triple sets can be characterized by the closures of all two-element
subsets of $R$. Note that an analogous result was established in \cite{GM-13}.
However, we give here an additional direct and transparent proof.
\bigskip
\begin{lem}
Let $R$ be a strictly dense set of triples on $L$ such that for all
$R'\subseteq R$ with $|R'|= 2$ it holds $\ensuremath{\operatorname{cl}}(R')\subseteq R$. Let
$x\in L$ and $L'=L\setminus \{x\}$. Moreover, let $R_{|L'}\subset R$
denote the subset of all triples $r\in R$ with $L_r\subseteq L'$. Then
$R_{|L'}$ is strictly dense and for all $R'\subseteq R_{|L'}$ with
$|R'|= 2$ it holds $\ensuremath{\operatorname{cl}}(R')\subseteq R_{|L'}$.
\label{A:lem:rest}
\end{lem}
\begin{proof}
Clearly, since $R$ is strictly dense and since $R_{|L'}$ contains all
triples except the ones containing $x$ it still holds that for all
$a,b,c\in L'$ there is exactly one triple $r\in R_{|L'}$ with
$a,b,c\in L_r$. Hence, $R_{|L'}$ is strictly dense.
Assume for contradiction that there are triples
$r_1, r_2\in R_{|L'}\subset R$ with $\ensuremath{\operatorname{cl}}(r_1,r_2)\not\subseteq R_{|L'}$.
By construction of $R_{|L'}$, no triples $r_1, r_2\in R_{|L'}$
can infer a new triple $r_3$ with $x\in L_{r_3}$. Hence, every triple in
$\ensuremath{\operatorname{cl}}(r_1,r_2)$ has all its leaves in $L'$, and any such triple contained
in $R$ would also be contained in $R_{|L'}$.
This immediately implies that $\ensuremath{\operatorname{cl}}(r_1,r_2)\not\subseteq R$,
a contradiction.
\end{proof}
\begin{lem}
Let $R$ be a strictly dense set of triples on $L$ with $|L|=4$. If
for all $R'\subseteq R$ with $|R'|= 2$ holds $\ensuremath{\operatorname{cl}}(R')\subseteq R$
then $R$ is consistent.
\label{A:lem:L=4}
\end{lem}
\begin{proof}
By contraposition, assume that $R$ is not consistent. Thus, the Aho graph
$[R,\S]$ is connected for some $\S\subseteq L$. Since $R$ is strictly
dense, for any $\S\subseteq L$ with $|\S|=2$ or $|\S|=3$ the Aho graph
$[R,\S]$ is always disconnected. Hence, $[R,\S]$ for $\S=L$ must be
connected. The graph $[R,L]$ has four vertices, say $a,\ b,\ c$ and $d$.
The fact that $R$ is strictly dense and $|L|=4$ implies that $|R|=4$ and in
particular, that $[R,L]$ has three or four edges. Hence, the graph
$[R,L]$ is isomorphic to one of the following graphs $G_0$, $G_1$ or $G_2$.
The graph $G_0$ is isomorphic to a path $x_1-x_2-x_3-x_4$ on four
vertices; $G_1$ is isomorphic to a chordless square; and $G_2$ is
isomorphic to a path $x_1-x_2-x_3-x_4$ on four vertices where the edge
$\{x_1,x_3\}$ or $\{x_2,x_4\}$ is added. W.l.o.g. assume that for the
first case $[R,L]\simeq G_0$ has edges $\{a,b\}$, $\{b,c\}$, $\{c,d\}$;
for the second case $[R,L]\simeq G_1$ has edges $\{a,b\}$, $\{a,c\}$,
$\{c,d\}$ and $\{b,d\}$ and for the third case assume that $[R,L]\simeq
G_2$ has edges $\{a,b\}$, $\{a,c\}$, $\{c,d\}$ and $\{a,d\}$.
Let $[R,L]\simeq G_0$. Then there are triples of the form $\rt{(ab|*)}$,
$\rt{(bc|*)}$, $\rt{(cd|*)}$, where one kind of triple must occur twice,
since otherwise, $[R,L]$ would have four edges. Assume that this is
$\rt{(ab|*)}$. Hence, the triples $\rt{(ab|c)}, \rt{(ab|d)}\in R$ since
$|R|=4$. Since $R$ is strictly dense, $\rt{(bc|*)}=\rt{(bc|d)}\in R$,
which implies that $\rt{(cd|*)} = \rt{(cd|a)}\in R$. Now,
$R'=\{\rt{(ab|c)},\rt{(bc|d)} \} \vdash \rt{(ac|d)}$. However, since $R$
is strictly dense and $\rt{(cd|a)}\in R$ we can conclude that
$\rt{(ac|d)}\not\in R$, and therefore $ \ensuremath{\operatorname{cl}}(R')\not\subseteq R$. The
case with triples $\rt{(cd|*)}$ occurring twice is treated analogously.
If triples $\rt{(bc|*)}$ occur twice, we can argue the same way to
obtain $\rt{(bc|a)}, \rt{(bc|d)}\in R$, $\rt{(ab|*)} = \rt{(ab|d)}$, and
$\rt{(cd|*)} = \rt{(cd|a)}$. However,
$R'=\{\rt{(bc|a)},\rt{(cd|a)}\}\vdash \rt{(bd|a)} \notin R$, and thus
$\ensuremath{\operatorname{cl}}(R')\not\subseteq R$.
Let $[R,L]\simeq G_1$. Then there must be triples of the form
$\rt{(ab|*)}$, $\rt{(ac|*)}$, $\rt{(cd|*)}$, $\rt{(bd|*)}$. Clearly,
$\rt{(ab|*)}\in \{\rt{(ab|c)}, \rt{(ab|d)}\}$. Note that not both
$\rt{(ab|c)}$ and $\rt{(ab|d)}$ can be contained in $R$, since then
$[R,L]\simeq G_0$. If $\rt{(ab|*)}=\rt{(ab|c)}$ and since $R$ is strictly
dense, $\rt{(ac|*)} = \rt{(ac|d)}$. Again, since $R$ is strictly dense,
$\rt{(cd|*)} = \rt{(cd|b)}$ and this implies that $\rt{(bd|*)} =
\rt{(bd|a)}$. However, $R'=\{\rt{(ab|c)},\rt{(ac|d)}\}\vdash \rt{(ab|d)}
\notin R$, since $R$ is strictly dense and $\rt{(bd|a)}\in R$. Thus,
$\ensuremath{\operatorname{cl}}(R')\not\subseteq R$. If $\rt{(ab|*)}=\rt{(ab|d)}$ and since $R$ is
strictly dense, we can argue analogously, and obtain, $\rt{(bd|*)} =
\rt{(bd|c)}$, $\rt{(cd|*)} = \rt{(cd|a)}$ and $\rt{(ac|*)} =
\rt{(ac|b)}$. However, $R'=\{\rt{(ab|d)},\rt{(bd|c)}\}\vdash \rt{(ad|c)}
\notin R$, and thus $\ensuremath{\operatorname{cl}}(R')\not\subseteq R$.
Let $[R,L]\simeq G_2$. Then there must be triples of the form
$\rt{(ab|*)}$, $\rt{(ac|*)}$, $\rt{(cd|*)}$, $\rt{(ad|*)}$. Again,
$\rt{(ab|*)}\in \{\rt{(ab|c)}, \rt{(ab|d)}\}$. By similar arguments as in
the latter two cases, if $\rt{(ab|*)} = \rt{(ab|c)}$ then we obtain,
$\rt{(ac|*)} = \rt{(ac|d)}$, $\rt{(ad|*)} = \rt{(ad|b)}$ and $\rt{(cd|*)}
= \rt{(cd|b)}$. Since $R'=\{\rt{(ab|c)},\rt{(ac|d)}\}\vdash \rt{(bc|d)}
\notin R$, we can conclude that $\ensuremath{\operatorname{cl}}(R')\not\subseteq R$. If
$\rt{(ab|*)}=\rt{(ab|d)}$ we obtain analogously, $\rt{(ad|*)} =
\rt{(ad|c)}$, $\rt{(cd|*)} = \rt{(cd|b)}$ and $\rt{(ac|*)} =
\rt{(ac|b)}$. However, $R'=\{\rt{(ab|d)},\rt{(ad|c)}\}\vdash \rt{(bd|c)}
\notin R$, and thus $\ensuremath{\operatorname{cl}}(R')\not\subseteq R$.
\end{proof}
\begin{cthm}{\ref{thm:consistIFFpairwise}}
Let $R$ be a strictly dense triple set on $L$ with $|L|\geq 3$. The
set $R$ is consistent if and only if $\ensuremath{\operatorname{cl}}(R')\subseteq R$ holds
for all $R'\subseteq R$ with
$|R'|= 2$.
\end{cthm}
\begin{proof}
$\Rightarrow:$
If $R$ is strictly dense and consistent, then for any triple
$\rt{(ab|c)} \notin R$ the set $R\cup \{\rt{(ab|c)}\}$ is inconsistent, as either
$\rt{(ac|b)}$ or $\rt{(bc|a)}$ is already contained in $R$. Hence, for each
$a,b,c\in L$ exactly one of the sets $R\cup\{\rt{(ab|c)}\}$, $R\cup\{\rt{(ac|b)}\}$,
$R\cup\{\rt{(bc|a)}\}$ is consistent, and this triple is already
contained in $R$. Hence, $R$ is closed. Therefore, for any subset
$R'\subseteq R$ holds $\ensuremath{\operatorname{cl}}(R')\subseteq \ensuremath{\operatorname{cl}}(R)=R$. In particular, this
holds for all $R'\subseteq R$ with $|R'|= 2$.
$\Leftarrow:$ \emph{(Induction on $|L|$.)}\\
If $|L|=3$ and since $R$ is strictly dense, it holds $|R|=1$ and thus,
$R$ is always consistent. If $|L|=4$, then Lemma \ref{A:lem:L=4}
implies that if for any two-element subset
$R'\subseteq R$ holds that $\ensuremath{\operatorname{cl}}(R')\subseteq R$, then $R$ is
consistent. Assume now that the assertion is true for
all strictly dense triple sets $R$ on $L$ with $|L|=n$.
Let $R$ be a strictly dense triple set on $L$ with $|L|=n+1$ such that
for each $R'\subseteq R$ with $|R'|= 2$ it holds $\ensuremath{\operatorname{cl}}(R')\subseteq
R$. Moreover, let $L'=L\setminus \{x\}$ for some $x\in L$ and
$R_{|L'}\subset R$ denote the subset of all triples $r\in R$ with
$L_r\subset L'$. Lemma \ref{A:lem:rest} implies that $R_{|L'}$ is
strictly dense and for each $R'\subseteq R_{|L'}$ with $|R'|= 2$ we
have $\ensuremath{\operatorname{cl}}(R')\subseteq R_{|L'}$. Hence, the induction hypothesis can
be applied for any such $R_{|L'}$ implying that $R_{|L'}$ is
consistent. Moreover, since $R_{|L'}$ is strictly dense and
consistent, for any triple $\rt{(xy|z)}\notin R_{|L'}$ holds that
$R_{|L'} \cup \rt{(xy|z)}$ is inconsistent. But this implies that
$R_{|L'}$ is closed, i.e., $\ensuremath{\operatorname{cl}}(R_{|L'})=R_{|L'}$. Lemma
\ref{A:lem:dense-binary} implies that the Aho graph $[R_{|L'},\S]$ has
exactly two connected components $C_1$ and $C_2$ for each
$\S\subseteq L'$ with $|\S|>1$. In the following we denote with
$\S_i=V(C_i)$, $i=1,2$ the set of vertices of the connected
component $C_i$ in $[R_{|L'},\S]$. Clearly, $\S=\S_1 \dot\cup \S_2$.
It is easy to see that $[R,\S] \simeq [R_{|L'},\S]$
for any $\S\subseteq L'$, since none of the graphs contain
vertex $x$. Hence, $[R,\S]$ is always disconnected
for any $\S\subseteq L'$.
Therefore, it remains to show that, for all $\S\cup\{x\}$ with
$\S\subseteq L'$ holds: if for any $R'\subseteq R$ with $|R'|= 2$ holds
$\ensuremath{\operatorname{cl}}(R')\subseteq R$, then $[R, \S\cup \{x\}]$ is disconnected and
hence, $R$ is consistent.
To prove this statement we consider the different possibilities for $\S$
separately. We will frequently use that $[R_{|L'},\S]$ is a subgraph of
$[R,\S]$ for every $\S\subseteq L$ (Lemma \ref{A:lem:subgraph}).
\emph{Case 1.} If $|\S|=1$, then $[R, \S\cup
\{x\}]$ has exactly two vertices and clearly no edge. Thus, $[R, \S\cup
\{x\}]$ is disconnected.
\emph{Case 2.} Let $|\S|=2$ with $\S_1=\{a\}$ and $\S_2=\{b\}$. Since
$R$ is strictly dense, exactly one of the triples $\rt{(ab|x)}$,
$\rt{(ax|b)}$, or $\rt{(xb|a)}$ is contained in $R$. Hence, $[R, \S\cup
\{x\}]$ has exactly three vertices where two of them are linked by an
edge. Thus, $[R, \S\cup \{x\}]$ is disconnected.
\emph{Case 3.} Let $|\S|\geq 3$ with $\S_1=\{a_1,\ldots, a_n\}$ and
$\S_2=\{b_1,\ldots, b_m\}$. Since $R_{|L'}$ is consistent and strictly
dense and by construction of $\S_1$ and $\S_2$ it holds $\forall a_i,a_j
\in \S_1, b_k \in \S_2, i \neq j : \rt{(a_ia_j|b_k)} \in R_{|L'}
\subseteq R$ and $\forall a_i \in \S_1, b_k, b_l \in \S_2, k \neq l :
\rt{(b_kb_l|a_i)} \in R_{|L'} \subseteq R$. Therefore, since $R$ is
strictly dense, there cannot be any triple of the form $\rt{(a_ib_k|a_j)}$
or $\rt{(a_ib_k|b_l)}$ with $a_i,a_j \in \S_1, b_k,b_l \in \S_2$ that is
contained in $R$. It remains to show that $R$ is consistent. The following
three subcases can occur.
\begin{itemize}
\item[3.a)] The connected components $C_1$ and $C_2$ of $[R_{|L'},\S]$
are connected in $[R,\S\cup \{x\}]$. Hence, there must be a triple
$\rt{(ab|x)}\in R$ with $a\in \S_1$ and $b\in \S_2$. Hence, in order
to prove that $R$ is consistent, we need to show that there is no
triple $\rt{(cx|d)}$ contained in $R$ for any $c,d \in \S$, which would
imply that $[R,\S\cup \{x\}]$ stays disconnected.
\item[3.b)] The connected component $C_1$ of $[R_{|L'},\S]$ is connected
to $x$ in $[R,\S\cup \{x\}]$. Hence, there must be a triple
$\rt{(ax|c)}\in R$ with $a\in \S_1$, $c\in \S$. Hence, in order to
prove that $R$ is consistent, we need to show that there are no triples
$\rt{(b_kx|a_i)}$ and $\rt{(b_kx|b_l)}$ for all $a_i \in \S_1$, $b_k,
b_l \in \S_2$, which would imply that $[R,\S\cup \{x\}]$ stays
disconnected.
\item[3.c)] As in Case $3.b)$, the connected component $C_2$ of
$[R_{|L'},\S]$ might be connected to $x$ in $[R,\S\cup \{x\}]$ and we
need to show that there are no triples $\rt{(a_ix|b_k)}$ and
$\rt{(a_ix|a_j)}$ for all $a_i,a_j \in \S_1$, $b_k \in
\S_2$ in order to prove that $R$ is consistent.
\end{itemize}
\emph{Case 3.a)} Let $\rt{(ab|x)} \in R$, $a \in \S_1$, $b \in
\S_2$. First we show that for all $a_i \in \S_1$ holds $\rt{(a_ib|x)} \in
R$. Clearly, if $\S_1=\{a\}$ the statement is trivially true. If
$|\S_1|>1$ then $\{\rt{(ab|x)}, \rt{(a_ia|b)}\}\vdash \rt{(a_ib|x)}$ for
all $a_i\in \S_1$. Since the closure of all two element subsets of $R$ is
contained in $R$ and $\rt{(ab|x)},\rt{(a_ia|b)} \in R$ we can conclude
that $\rt{(a_ib|x)}\in R$. Analogously one shows that for all $b_k \in
\S_2$ holds $\rt{(ab_k|x)} \in R$.
\noindent
Since $\{\rt{(a_ia|b_k)},\rt{(ab_k|x)}\}\vdash \rt{(a_ib_k|x)}$ and
$\rt{(a_ia|b_k)},\rt{(ab_k|x)}\in R$ we can conclude that
$\rt{(a_ib_k|x)} \in R$ for all $a_i\in \S_1$, $b_k\in
\S_2$. Furthermore, $\{\rt{(a_ia_j|b)},\rt{(a_ib|x)}\}\vdash
\rt{(a_ia_j|x)}$ for all $a_i,a_j\in \S_1$ and again, $\rt{(a_ia_j|x)}\in
R$ for all $a_i,a_j\in \S_1$. Analogously, one shows that
$\rt{(b_kb_l|x)}\in R$ for all $b_k,b_l\in \S_2$.
\noindent
Thus, we have shown, that for all $c,d\in \S$ holds that $\rt{(cd|x)}\in
R$. Since $R$ is strictly dense, there is no triple $\rt{(cx|d)}$
contained in $R$ for any $c,d\in \S$. Hence, $[R,\S\cup \{x\}]$ is
disconnected.
\emph{Case 3.b)} Let $\rt{(ax|c)}\in R$ with $a\in \S_1$, $c\in
\S$. Assume first that $c\in \S_1$. Then there is a triple $\rt{(ac|b)}\in
R$. Moreover, $ \{\rt{(ax|c)},\rt{(ac|b)}\}\vdash \rt{(ax|b)}$ and thus,
$\rt{(ax|b)}\in R$. This implies that there is always some $c'=b\in \S_2$
with $\rt{(ax|c')}\in R$. In other words, w.l.o.g. we can assume that for
$\rt{(ax|c)}\in R$, $a\in \S_1$ holds $c\in \S_2$.
\noindent
Since $\{\rt{(ax|b)},\rt{(aa_i|b)}\}\vdash \rt{(a_ix|b)}$ and
$\rt{(ax|b)},\rt{(aa_i|b)}\in R$ we can conclude that $\rt{(a_ix|b)}\in
R$ for all $a_i\in \S_1$. Moreover,
$\{\rt{(a_ix|b)},\rt{(bb_k|a_i)}\}\vdash \rt{(a_ix|b_k)}$ and by similar
arguments, $\rt{(a_ix|b_k)}\in R$ for all $a_i\in \S_1, b_k\in \S_2$.
Finally, $\{\rt{(a_ix|b_k)},\rt{(b_lb_k|a_i)}\}\vdash \rt{(b_kb_l|x)}$,
and therefore, $\rt{(b_kb_l|x)}\in R$ for all $b_k,b_l\in \S_2$. To
summarize, for all $a_i \in \S_1, b_k,b_l \in \S_2$ we have
$\rt{(a_ix|b_k)} \in R$ and $\rt{(b_kb_l|x)} \in R$. Since $R$ is strictly
dense there cannot be triples $\rt{(b_kx|a_i)}$ and $\rt{(b_kx|b_l)}$ for
any $a_i \in \S_1$, $b_k, b_l \in \S_2$, and hence, $[R,\S\cup \{x\}]$ is
disconnected.
\emph{Case 3.c)} By similar arguments as in Case $3.b)$ and interchanging
the role of $\S_1$ and $\S_2$, one shows that $[R,\S\cup \{x\}]$ is
disconnected.
In summary, we have shown that $[R,\S\cup\{x\}]$ is disconnected in all
cases. Therefore $R$ is consistent.
\end{proof}
\begin{cpro}{\ref{pro:BinaryClDense}}
Let $R$ be a consistent triple set on $L$. If the tree obtained with
\texttt{BUILD} is binary, then the closure $\ensuremath{\operatorname{cl}}(R)$ is strictly dense.
Moreover, this tree $T$ is unique and therefore, a least resolved tree
for $R$.
\end{cpro}
\begin{proof}
Note that the algorithm \texttt{BUILD} relies on the Aho graph $[R,\S]$ for
particular subsets $\S\subseteq L$. This means that if the tree
obtained with \texttt{BUILD} is binary, then for each of the particular
subsets $\S\subseteq L$ the Aho graph $[R,\S]$ must have exactly two
components. Moreover, $R$ is consistent, since \texttt{BUILD} constructs
a tree.
Now consider three arbitrary distinct leaves $x,y,z\in L$. Since $T$ is
binary, there is a subset $\S\subseteq L$ with $x,y,z\in \S$ in some
stage of \texttt{BUILD} such that two of the three leaves, say $x$ and $y$
are in a different connected component than the leaf $z$. This implies
that $R\cup \rt{(xy|z)}$ is consistent, since even if $\{x,y\}\not \in
E([R,\S])$, the vertices $x$ and $y$ remain in the same connected
component different from the one containing $z$ when adding the edge
$\{x,y\}$ to $[R,\S]$. Moreover, by the latter argument, both $R\cup
\rt{(xz|y)}$ and $R\cup \rt{(yz|x)}$ are not consistent. Thus, for any
three distinct leaves $x,y,z\in L$ exactly one of the sets
$R\cup\{\rt{(xy|z)}\}$, $R\cup\{\rt{(xz|y)}\}$, $R\cup\{\rt{(zy|x)}\}$
is consistent, and thus, contained in the closure $\ensuremath{\operatorname{cl}}(R)$. Hence, $\ensuremath{\operatorname{cl}}(R)$
is strictly dense.
Since a tree $T$ that displays $R$ also displays $\ensuremath{\operatorname{cl}}(R)$ and because
$\ensuremath{\operatorname{cl}}(R)$ is strictly dense and consistent, we can conclude that $\ensuremath{\operatorname{cl}}(R) =
\mathfrak{R}(T)$ whenever $T$ displays $R$. Hence, $T$ must be unique and
therefore, the least resolved tree for $R$.
\end{proof}
\begin{lem}\label{A:lem:binstrictdense}
Let $R$ be a consistent set of triples on $L$. Then there is a strictly dense
consistent triple set $R'$ on $L$ that contains $R$.
\end{lem}
\begin{proof}
Let $\mathrm{Aho}(R)$ be the tree constructed by \texttt{BUILD} from a
consistent triple set $R$. It is in general not a binary tree. Let $T'$
be a binary tree obtained from $\mathrm{Aho}(R)$ by substituting a binary
tree with $k$ leaves for every internal vertex with $k>2$ children. Any
triple $\rt{(ab|c)}\in \mathfrak{R}(\mathrm{Aho}(R))$ is also displayed by
$T'$ since unique disjoint paths $a-b$ and $c-\rho$ in $\mathrm{Aho}(R)$
translate directly to unique paths in $T'$, which obviously are again
disjoint. Set $R'=\mathfrak{R}(T')$; then $R\subseteq
\mathfrak{R}(\mathrm{Aho}(R))\subseteq R'$, and $R'$ is consistent since it
is displayed by $T'$. Furthermore, a binary tree $T'$ with leaf set $L$ displays
exactly one triple for each $\{a,b,c\}\in \binom{L}{3}$; hence $R'$ is
strictly dense.
\end{proof}
\begin{rem}
Let $T$ be a binary tree. Then $\mathfrak{R}(T)$ is strictly dense and
hence, $\mathfrak{R}(T)\cup\{r\}$ is inconsistent for any triple $r\notin
\mathfrak{R}(T)$. Since $\mathfrak{R}(T)\subseteq
\mathfrak{R}(\mathrm{Aho}(\mathfrak{R}(T)))$ by definition of the action
of $\texttt{BUILD}$ and there is no consistent triple set that strictly
contains $\mathfrak{R}(T)$, we have $\mathfrak{R}(T)=
\mathfrak{R}(\mathrm{Aho}(\mathfrak{R}(T)))$. Thus
$\mathrm{Aho}(\mathfrak{R}(T))=T$.
\end{rem}\bigskip
In order to discuss the relationship of alternative choices of least
resolved trees we need a few additional definitions. Let
$\mathcal{C}(T)=\bigcup_{v\in V(T)}\{L(v)\}$ be the hierarchy defined by
$T$. We say that a phylogenetic tree $S$ \emph{refines} a tree $T$, if
$\mathcal{C}(T)\subseteq \mathcal{C}(S)$. A collection of rooted triples
$R$ \emph{identifies} a phylogenetic tree $T$ if $T$ displays $R$ and
every other tree that displays $R$ is a refinement of $T$.
\bigskip
\begin{lem}
Let $R$ be a consistent set of triples that identifies a phylogenetic
tree $T$. Suppose the trees $T_1$ and $T_2$ display all triples of $R$
so that $T_1$ has the minimum number of vertices among all trees in
$\langle R \rangle$ and $T_2$ minimizes the cardinality
$|\mathfrak{R}(T_2)|$. Then,
$$T\simeq\mathrm{Aho}(R)\simeq T_1 \simeq T_2.$$
\label{lem:allEqual}
\end{lem}
\begin{proof}
Lemmas 2.1 and 2.2 in \cite{GSS:07} state
that $R$ identifies $T$ iff $\mathfrak{R}(T) = \ensuremath{\operatorname{cl}}(R)$ and
that $T\simeq\mathrm{Aho}(R)$ in this case.
Since $R$ identifies $T$, any other tree that displays $R$ refines
$T$ and thus has at least as many vertices, with equality only if it
coincides with $T$. Hence, $T\simeq T_1$.
\newline
Since the closure $\ensuremath{\operatorname{cl}}(R)$ must be displayed by all trees
that display $R$ it follows that $T$ is one of the trees that
have a minimum cardinality set $\mathfrak{R}(T)$ and thus,
$|\mathfrak{R}(T_2)| = |\mathfrak{R}(T)|$ and hence,
$\mathfrak{R}(T_2) = \mathfrak{R}(T)=\ensuremath{\operatorname{cl}}(R)$.
Lemma 2.1 in \cite{GSS:07} implies that
$R$ identifies $T_2$. Lemma 2.2 in \cite{GSS:07}
implies that, therefore, $T_2\simeq \mathrm{Aho}(R)$.
\end{proof}
\subsection{Orthology Relations, Symbolic Representations, and Cographs}
\label{ss:cograph}
For a gene tree $T=(V,E)$ on $\ensuremath{\mathfrak{G}}$ we define $t:V^0\to M$ as a map that
assigns to each inner vertex an arbitrary symbol $m\in M$. Such a map $t$
is called a \emph{symbolic dating map} or \emph{event-labeling} for $T$; it
is \emph{discriminating} if $t(u) \neq t(v)$, for all inner edges
$\{u,v\}$, see \cite{Boeckner:98}.
In the rest of this paper we are interested only in event-labelings $t$
that map inner vertices into the set $M=\{\bullet, \square\}$, where the
symbol ``$\bullet$'' denotes a speciation event and ``$\square$'' a
duplication event. We denote with $(T,t)$ a gene tree $T$ with
corresponding event labeling $t$. If in addition the map $\sigma$ is given,
we write this as $(T,t;\sigma)$.
An orthology relation $\Theta \subset \ensuremath{\mathfrak{G}}\times\ensuremath{\mathfrak{G}}$ is a symmetric
relation that contains all pairs $(x,y)$ of orthologous genes. Note that this
implies that $(x,x)\notin \Theta$ for all $x\in \ensuremath{\mathfrak{G}}$. Hence, its
complement $\overline \Theta $ contains all leaf pairs $(x,x)$ and pairs
$(x,y)$ of non-orthologous genes and thus, in this context all paralogous
genes.
For a given orthology relation $\Theta$ we want to find an event-labeled
phylogenetic tree $T$ on $\ensuremath{\mathfrak{G}}$, with $t:V^0\to\{\bullet, \square\}$ such
that
\begin{enumerate}
\item $t(\ensuremath{\operatorname{lca}}_{T}(x,y))=\bullet$ for all $(x,y)\in \Theta$
\item $t(\ensuremath{\operatorname{lca}}_{T}(x,y))=\square$ for all $(x,y)\in \overline \Theta
\setminus\{(x,x)\mid x\in \ensuremath{\mathfrak{G}}\}$.
\end{enumerate}
In other words, we want to find an event-labeled tree $T$ on $\ensuremath{\mathfrak{G}}$ such
that the event on the most recent common ancestor of the orthologous genes
is a speciation event and of paralogous genes a duplication event. If such
a tree $T$ with (discriminating) event-labeling $t$ exists for $\Theta$, we
call the pair $(T,t)$ a \emph{(discriminating) symbolic representation} of
$\Theta$.
\subsubsection{Symbolic Representations and Cographs}
\label{sect:cograph}
Empirical orthology estimations will in general contain false-positives.
In addition orthologous pairs of genes may have been missed due to the
scoring function and the selected threshold. Hence, such a tree does not
exist for every estimated orthology relation. In order to characterize
orthology relations we define for an arbitrary symmetric relation $R
\subseteq \ensuremath{\mathfrak{G}}\times \ensuremath{\mathfrak{G}}$ the underlying graph $G_R=(\ensuremath{\mathfrak{G}}, E_R)$ with edge
set $E_R=\left\{\{x,y\}\in \binom{\ensuremath{\mathfrak{G}}}{2}\mid (x,y)\in R\right\}$.
As we shall see, orthology relations $\Theta$ and cographs are closely
related. A cograph is a $P_4$-free graph (i.e.\ a graph such that no four
vertices induce a subgraph that is a path on $4$ vertices), although there
are a number of equivalent characterizations of such graphs (see e.g.\
\cite{Brandstaedt:99} for a survey).
It is well-known in the literature concerning cographs that, to any cograph
$G=(V,E)$, one can associate a canonical \emph{cotree} $\ensuremath{\operatorname{CoT}}(G)=(W\cup
V,F)$ with leaf set $V$ together with a labeling map $\lambda_G:W\to
\{0,1\}$ defined on the inner vertices of $\ensuremath{\operatorname{CoT}}(G)$. The key observation is
that, given a cograph $G=(V,E)$, a pair $\{x,y\} \in \binom{V}{2}$ is an
edge in $G$ if and only if $\lambda_G(\ensuremath{\operatorname{lca}}_{\ensuremath{\operatorname{CoT}}(G)}(x,y))=1$
(cf. \cite[p. 166]{Corneil:81}). The next theorem summarizes the results,
that rely on the theory of so-called symbolic ultrametrics developed in
\cite{Boeckner:98} and have been established in a more general context in
\cite{Hellmuth:13d}.\bigskip
\begin{thm}[\cite{Hellmuth:13d}] \label{A:thm:ortho-cograph}
Suppose that $\Theta$ is an (estimated) orthology relation and
denote by $\overline{\Theta}^{\neq}:=\overline{\Theta} \setminus\{(x,x)\mid x\in \ensuremath{\mathfrak{G}}\}$
the complement of $\Theta$ without pairs $(x,x)$.
Then the following statements are equivalent:
\begin{itemize}
\item[(i)] $\Theta$ has a symbolic representation.
\item[(ii)] $\Theta$ has a discriminating symbolic representation.
\item[(iii)] $G_\Theta= \overline{G}_{\overline{\Theta}^{\neq}}$
is a cograph.
\end{itemize}
\end{thm}
This result enables us to find the corresponding discriminating
symbolic representation $(T,t)$ for $\Theta$ (if one exists) by
identifying $T$ with the respective cotree $\ensuremath{\operatorname{CoT}}(G_\Theta)$ of the
cograph $G_\Theta$ and setting, for each inner vertex $v$ of the cotree,
$t(v)=\bullet$ if $\lambda_{G_\Theta}(v)=1$, i.e., if $\{x,y\}\in
E(G_\Theta)$ for the leaf pairs $\{x,y\}$ with $\ensuremath{\operatorname{lca}}(x,y)=v$, and
$t(v)=\square$ if $\lambda_{G_\Theta}(v)=0$.
We identify the discriminating symbolic representation $(T,t)$ for
$\Theta$ (if one exists) with the cotree $\ensuremath{\operatorname{CoT}}(G_\Theta)$ as explained
above.
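The passage from a cograph to its labeled cotree can also be made
algorithmically explicit. The following Python sketch is ours; it uses the
characterization of cographs as complement-reducible graphs rather than
the linear-time recognition algorithms mentioned in the next subsection,
and the event labeling is read off as $t(v)=\bullet$ iff $\lambda(v)=1$
and $t(v)=\square$ iff $\lambda(v)=0$:
\begin{verbatim}
from itertools import combinations

def _components(V, E):
    # connected components of (V, E); edges are frozensets
    adj = {v: set() for v in V}
    for e in E:
        x, y = tuple(e)
        adj[x].add(y)
        adj[y].add(x)
    seen, comps = set(), []
    for v in V:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def cotree(V, E):
    # cotree as (label, children); label 0 = union (square),
    # label 1 = join (bullet); None if (V, E) is no cograph
    V = set(V)
    if len(V) == 1:
        return next(iter(V))
    complement = {frozenset(p) for p in combinations(V, 2)} - set(E)
    for label, edges in ((0, set(E)), (1, complement)):
        comps = _components(V, edges)
        if len(comps) > 1:
            kids = [cotree(C, {e for e in E if e <= C})
                    for C in comps]
            return None if None in kids else (label, kids)
    return None  # G and complement connected: induced P4 exists
\end{verbatim}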
\subsubsection{Cograph Editing}
It is well-known that cographs can be recognized in linear time
\cite{Corneil:85, habib2005simple}. However, the cograph editing problem,
in which a given graph $G=(V,E)$ is to be converted into a cograph
$G^*=(V,E^*)$ such that the number $|E\vartriangle E^*|$ of inserted or deleted
edges is minimized, is an NP-complete problem \cite{Liu:11, Liu:12}.
In view of the above results, this implies the following:\bigskip
\begin{thm}
Let $\Theta\subset \ensuremath{\mathfrak{G}} \times \ensuremath{\mathfrak{G}}$ be an (estimated) orthology relation.
It can be recognized in linear time whether
$\Theta$ has a (discriminating) symbolic representation.
For a given positive integer $K$
the problem of deciding if there is an orthology relation
$\Theta^*$ that
has a (discriminating) symbolic representation
s.t. $|\Theta\vartriangle \Theta^*|\leq K$ is NP-complete.
\end{thm}
\bigskip
As the next result shows, it suffices to solve the cograph editing problem separately for the
connected components of $G$.
\begin{lem}
For any graph $G=(V,E)$ let $F\subseteq\binom{V}{2}$ be a minimal set of edges
so that $G'=(V,E\vartriangle F)$ is a cograph. Then $(x,y)\in F\setminus E$
implies that $x$ and $y$ are located in the same connected component of
$G$.
\label{A:lem:disconnected}
\end{lem}
\begin{proof}
Suppose, for contradiction, that there is a minimal set $F$ connecting
two distinct connected components of $G$, resulting in a cograph $G'$.
W.l.o.g., we may assume that $G$ has only two connected components
$C_1,C_2$. Denote by $G''$ the graph obtained from $G'$ by removing all
edges $\{x,y\}$ with $x\in V(C_1)$ and $y\in V(C_2)$. If $G''$ is
not a cograph, then there is an induced $P_4$, which must be contained
in one of the connected components of $G''$. By construction this
induced $P_4$ is also contained in $G'$. Since $G'$ is a cograph no such
$P_4$ exists and hence $G''$ is also a cograph, contradicting the
minimality of $F$.
\end{proof}
\subsection{From Gene Triples to Species Triples and Reconciliation Maps}
A gene tree $T$ on $\ensuremath{\mathfrak{G}}$ arises in evolution by means of a series of
events along a species tree $S$ on $\ensuremath{\mathfrak{S}}$. In our setting these may be
duplications of genes within a single species and speciation events, in
which the parent's gene content is transmitted to both offspring. The
connection between gene and species tree is encoded in the reconciliation
map, which associates speciation vertices in the gene tree with the
interior vertex in the species tree representing the same speciation
event. We consider the problem of finding a species tree for a given gene
tree. In this subsection we follow the presentation of
\cite{hernandez2012event}.
\subsubsection{Reconciliation Maps}
We start with a formal definition of reconciliation maps. \smallskip
\begin{defi}[\cite{hernandez2012event}] \label{A:def:mu} Let $S=(W,F)$
be a species tree on $\ensuremath{\mathfrak{S}}$, let $T=(V,E)$ be a gene tree on $\ensuremath{\mathfrak{G}}$ with
corresponding event labeling $t:V^0\to \{\bullet,\square\}$ and suppose
there is a surjective map $\sigma$ that assigns to each gene the
respective species it is contained in. Then we say that $S$ is a
\emph{species tree for $(T,t;\sigma)$} if there is a map $\mu:V\to W\cup
F$ such that, for all $x\in V$:
\begin{itemize}
\item[(i)] If $x\in \ensuremath{\mathfrak{G}}$ then $\mu(x)=\sigma(x)$.
\item[(ii)] If $t(x)=\bullet$ then $\mu(x)\in W\setminus \ensuremath{\mathfrak{S}}$.
\item[(iii)] If $t(x)=\square$ then $\mu(x)\in F$.
\item[(iv)] Let $x,y\in V$ with $x\prec_T y$. We distinguish two cases:
\begin{enumerate}
\item If $t(x)=t(y)=\square$ then $\mu(x)\preceq_S \mu(y)$ in $S$.
\item If $t(x)=t(y)=\bullet$ or $t(x)\neq t(y)$ then
$\mu(x)\prec_S\mu(y)$ in $S$.
\end{enumerate}
\item[(v)] If $t(x)=\bullet$ then
$\mu(x)=\ensuremath{\operatorname{lca}}_S( \sigma(L(x)) )$.
\end{itemize}
We call $\mu$ the reconciliation map from $(T,t,\sigma)$ to $S$.
\end{defi} \smallskip
A reconciliation map $\mu$ maps leaves $x\in \ensuremath{\mathfrak{G}}$ to leaves
$\mu(x):=\sigma(x)$ in S and inner vertices $x\in V^0$ to inner vertices
$w\in W\setminus \ensuremath{\mathfrak{S}}$ in $S$ if $t(x)=\bullet$ and to edges $f\in F$ in
$S$ if $t(x)=\square$, such that the ancestor relation $\preceq_S$ is
implied by the ancestor relation $\preceq_T$. Definition \ref{A:def:mu} is
consistent with the definition of reconciliation maps for the case when the
event labeling $t$ on $T$ is not known, see \cite{Doyon:09}.
\subsubsection{Existence of a Reconciliation Map}
The reconciliation of gene and species trees is usually studied in the
situation that only $S$, $T$, and $\sigma$ are known and both $\mu$ and
$t$ must be determined
\cite{Guigo1996,Page1997,Arvestad2003,Bonizzoni2005,Gorecki2006,%
Hahn2007,Bansal2008,Chauve2008,Burleigh2009,Larget2010}. In this form,
there is always a solution $(\mu,t)$, which however is not unique in
general. A variety of different optimality criteria have been used in the
literature to obtain biologically plausible reconciliations. The
situation changes when not just the gene tree $T$ but a symbolic
representation $(T,t)$ is given. Then a species tree need not exist.
\cite{hernandez2012event} derived necessary and sufficient conditions for
the existence of a species tree $S$ so that there exists a reconciliation
map from $(T,t)$ to $S$. We briefly summarize the key results.
For $(T,t;\sigma)$ we define the triple set
\begin{align*}
\begin{split}
\mathbb{G}= \left\{ r \in \mathfrak{R}(T)\big\vert
t(\ensuremath{\operatorname{lca}}_T(L_r))=\bullet \, \textrm{ and } \,
\sigma(x)\not=\sigma(y),\,
\right. \\
\left. \textrm{for all }\,
x,y\in L_r\,\textrm{pairwise distinct}\right\}
\end{split}
\end{align*}
In other words, the set $\mathbb G$ contains all triples $r=\rt{(ab|c)}$ of
$\mathfrak{R}(T)$ where the three genes $a,b,c\in L_r$ are contained in
different species and the event at the most recent common ancestor of $L_r$
is a speciation event, i.e., $t(\ensuremath{\operatorname{lca}}_T(a,b,c))=\bullet$. It is easy to see
that in this case $S$ must display $\rt{(\sigma(a)\sigma(b)|\sigma(c))}$,
i.e., it is a necessary condition that the triple set
\begin{align*}
\mathbb{S}=
\left\{ \rt{(\alpha\beta|\gamma)} |\, \exists \rt{(ab|c)}\in\mathbb{G}
\textrm{\ with\ }
\sigma(a)=\alpha,\sigma(b)=\beta,\sigma(c)=\gamma \right\}
\end{align*}
is consistent. This condition is also sufficient:\bigskip
\begin{thm}[\cite{hernandez2012event}]
There is a species tree on $\sigma(\ensuremath{\mathfrak{G}})$ for $(T,t,\sigma)$ if
and only if the triple set $\mathbb{S}$ is consistent.
A reconciliation map can then be found in polynomial time.
\label{A:thm:gen-spec-tree}
\end{thm}
\subsubsection{Maximal Consistent Triple Sets}
In general, however, $\mathbb{S}$ may not be consistent. In this case it
is impossible to find a valid reconciliation map. However, for each
consistent subset $\mathbb S^*\subset \mathbb S$, its corresponding species
tree $S^*$, and a suitably chosen homeomorphic image of $T$ one can find the
reconciliation. For a phylogenetic tree $T$ on $L$, the \emph{restriction}
$T|_{L'}$ of $T$ to $L'\subseteq L$ is the phylogenetic tree with leaf set
$L'$ obtained from $T$ by first forming the minimal spanning tree in $T$
with leaf set $L'$ and then by suppressing all vertices of degree two with
the exception of $\rho_T$ if $\rho_T$ is a vertex of that tree, see
\cite{sem-ste-03a}. For a consistent subset $\mathbb{S}^*\subset
\mathbb{S}$ let $L'=\{x\in \ensuremath{\mathfrak{G}}\mid \exists r\in \mathbb{S}^*$ with
$\sigma(x)\in L_r\}$ be the set of genes (leaves of $T|_{L'}$) for which a
species $\sigma(x)$ exists that is also contained in some triple $r\in
\mathbb{S}^*$. Clearly, the reconciliation map of $T|_{L'}$ and the species
tree $S^*$ that displays $\mathbb{S}^*$ can then be found in polynomial
time by means of Theorem \ref{A:thm:gen-spec-tree}.
\section{ILP Formulation}
The workflow outlined in the main text consists of three stages, each of
which requires the solution of a hard combinatorial optimization problem.
Our input data consist of an estimated orthology relation $\Theta$ or of a
weighted version thereof. In the weighted case we assume that the edge
weights $w(x,y)$ have values in the unit interval that measure the
confidence in the statement ``$(x,y)\in\Theta$''. Because of measurement
errors, our first task is to
correct $\Theta$ to an irreflexive, symmetric relation $\Theta^*$ that is a
valid orthology relation. As outlined in section~\ref{sect:cograph},
$G_{\Theta^*}$ must be a cograph so that $(x,y)\in\Theta^*$ implies
$\sigma(x)\ne\sigma(y)$. By Lemma~\ref{A:lem:disconnected} this problem has
to be solved independently for every connected component of $G_{\Theta}$.
The resulting relation $\Theta^*$ has the symbolic representation $(T,t)$.
In the second step we identify the best approximation of the species tree
induced by $(T,t)$. To this end, we determine the maximum consistent subset
$\mathbb{S}^*$ in the set $\mathbb{S}$ of species triples induced by those
triples of $(T,t)$ that have a speciation vertex as their root. The hard
part in the ILP formulation for this problem is to enforce consistency of a
set of triples \cite{chang2011ilp}. This step can be simplified
considerably using the fact that for every consistent triple set
$\mathbb{S}^*$ there is a strictly dense consistent triple set $\mathbb{S}'$
that contains $\mathbb{S}^*$ (Lemma~\ref{A:lem:binstrictdense}). This allows
us to write $\mathbb{S}^*=\mathbb{S}'\cap \mathbb{S}$. The gain in
efficiency in the corresponding ILP formulation comes from the fact that a
strictly dense set of triples is consistent if and only if all its
two-element subsets are consistent
(Theorem~\ref{thm:consistIFFpairwise}), allowing for a much faster check
of consistency.
In the third step we determine the least resolved species tree $S$ from the
triple set $\mathbb{S}^*$ since this tree makes least assumptions of the
topology and thus, of the evolutionary history. In particular, it displays
only those triples that are either directly derived from the data or that
are logically implied by them. Thus $S$ is the tree with the minimal number
of (inner) vertices that displays $\mathbb{S}^*$. Our ILP formulation uses
ideas from the work of \cite{chang2011ilp} to construct $S$ in the form of
an equivalent partial hierarchy.
\subsection{Cograph Editing}
Given the edge set of an input graph, in our case the pairs
$(x,y)\in\Theta$, our task is to determine a modified edge set so that the
resulting graph is a cograph. The input is conveniently represented by
binary constants $\Theta_{ab}=1$ iff $(a,b)\in \Theta$. The edges of the
adjusted cograph $G_{\Theta^*}$ are represented by binary variables
$E_{xy}=E_{yx}=1$ if and only if $\{x,y\}\in E(G_{\Theta^*})$. Since
$E_{xy}\equiv E_{yx}$ we use these variables interchangeably, without
distinguishing the indices. Since genes residing in the same organism
cannot be orthologs, we exclude edges $\{x,y\}$ whenever
$\sigma(x)=\sigma(y)$ (which also forbids loops $x=y$. This is expressed
by setting
\begin{align}\tag{\ref{ilp:forbid_E}}
E_{xy}=0 \text{ for all } \{x,y\}\in \binom{\ensuremath{\mathfrak{G}}}{2} \text{ with }
\sigma(x)=\sigma(y).
\end{align}
To constrain the edge set of $G_{\Theta^*}$ to cographs, we use the fact
that cographs are characterized by $P_4$ as forbidden subgraph. This can be
expressed as follows. For every ordered four-tuple $(w,x,y,z)\in\ensuremath{\mathfrak{G}}^4$
with pairwise distinct $w,x,y,z$ we require
\begin{align}
E_{wx} + E_{xy}+ E_{yz} - E_{xz} - E_{wy} - E_{wz} \leq 2 \tag{\ref{ilp:cog}}
\end{align}
Constraint \eqref{ilp:cog} ensures that for each ordered tuple $(w,x,y,z)$
it is not the case that there are edges $\{w,x\}$, $\{x,y\}$, $\{y,z\}$ and
at the same time no edges $\{x,z\}$, $\{w,y\}$, $\{w,z\}$ that is, $w,x,y$
and $z$ induce the path $w-x-y-z$ on four vertices. Enforcing this
constraint for all orderings of $w,x,y,z$ ensures that the subgraph induced
by $\{w,x,y,z\}$ is $P_4$-free.
In order to find the closest orthology cograph $G_{\Theta^*}$ we
minimize the symmetric difference of the estimated and adjusted
orthology relation. Thus the objective function is
\begin{align} \tag{\ref{ilp:minDiff}}
\min & \sum_{(x,y)\in\ensuremath{\mathfrak{G}} \times \ensuremath{\mathfrak{G}}} (1-\Theta_{xy}) E_{xy} +
\sum_{(x,y)\in \ensuremath{\mathfrak{G}} \times \ensuremath{\mathfrak{G}} } \Theta_{xy} (1-E_{xy})
\end{align}
\begin{rem}\label{rem:real}
We have defined $\Theta$ above as a binary relation. The problem can be
generalized to a weighted version in which the input $\Theta$
is a real valued function $\Theta: \ensuremath{\mathfrak{G}} \times \ensuremath{\mathfrak{G}} \to [0,1]$
measuring the confidence with which a pair $(x,y)$ is orthologous.
The ILP formulation remains unchanged.
\end{rem}
\bigskip
The latter ILP formulation makes use of $O(|\ensuremath{\mathfrak{G}}|^2)$ variables and
Equations \eqref{ilp:forbid_E} and \eqref{ilp:cog} impose
$O(|\ensuremath{\mathfrak{G}}|^4)$ constraints.
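For concreteness, constraints \eqref{ilp:forbid_E} and \eqref{ilp:cog}
together with the objective \eqref{ilp:minDiff} can be written down, e.g.,
with the PuLP modeling library. The following sketch is our illustration
only (function and variable names are ours, and we assume PuLP with its
default CBC solver is available); it is not the implementation used
elsewhere in this work:
\begin{verbatim}
import pulp
from itertools import combinations, permutations

def cograph_edit(genes, sigma, theta):
    # genes: list; sigma: dict gene -> species;
    # theta: set of frozenset pairs {x,y} with (x,y) in Theta
    prob = pulp.LpProblem("cograph_editing", pulp.LpMinimize)
    E = {frozenset(p): pulp.LpVariable("E_%s_%s" % p, cat="Binary")
         for p in combinations(genes, 2)}
    # objective (ilp:minDiff): |Theta symmetric-difference Theta*|
    prob += pulp.lpSum(var if e not in theta else 1 - var
                       for e, var in E.items())
    # constraint (ilp:forbid_E): no orthology within one species
    for e, var in E.items():
        x, y = tuple(e)
        if sigma[x] == sigma[y]:
            prob += var == 0
    # constraint (ilp:cog): no induced path w-x-y-z
    for w, x, y, z in permutations(genes, 4):
        prob += (E[frozenset((w, x))] + E[frozenset((x, y))]
                 + E[frozenset((y, z))] - E[frozenset((x, z))]
                 - E[frozenset((w, y))] - E[frozenset((w, z))]) <= 2
    prob.solve()
    return {e for e, var in E.items() if var.value() > 0.5}
\end{verbatim}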
\subsection{Extraction of All Species Triples}
Let $\Theta$ be an orthology relation with symbolic representation
$(T,t;\sigma)$ so that $\sigma(x)=\sigma(y)$ implies $(x,y)\notin\Theta$.
By Theorem~\ref{A:thm:gen-spec-tree}, the species tree $S$ displays all
triples $\rt{(\alpha\beta|\gamma)}$ with a corresponding gene triple
$\rt{(xy|z)}\in \mathbb G\subseteq \mathfrak R(T)$, i.e., a triple
$\rt{(xy|z)}$ with a speciation event at its root, i.e.,
$t(\ensuremath{\operatorname{lca}}_T(x,y,z))=\bullet$, where $\sigma(x)=\alpha$, $\sigma(y)=\beta$, and
$\sigma(z)=\gamma$ are pairwise distinct species. We denote the set of these
triples by $\mathbb{S}$. Although all species triples can be extracted
in polynomial
time, e.g. by using the $\texttt{BUILD}$ algorithm, we give here an ILP
formulation to complete the entire ILP pipeline. It will also be useful as
a starting point for the final step, which consists in finding a minimally
resolved trees that displays $\mathbb{S}$. Instead of using the symbolic
representation $(T,t;\sigma)$ we will directly make use of the information
stored in $\Theta$ using the following simple observation.\bigskip
\begin{lem}
Let $\Theta$ be an orthology relation with discriminating symbolic
representation $(T,t;\sigma)$ that is identified with the cotree of the
corresponding cograph $G_\Theta=(\ensuremath{\mathfrak{G}}, E_\Theta)$. Assume that
$\rt{(xy|z)}\in \mathfrak{R}(T)$ is a triple where all genes $x,y,z$ are
contained in pairwise different species. Then it holds:
$t(\ensuremath{\operatorname{lca}}(x,y))=\square$ if and only if $\{x,y\} \notin E_\Theta$ and
$t(\ensuremath{\operatorname{lca}}(x,y,z))=\bullet$ if and only if $\{x,z\},\{y,z\}\in E_\Theta$.
\label{A:lem:bulletifftheta}
\end{lem}
\begin{proof}
Assume there is a triple $\rt{(xy|z)}\in \mathfrak R(T)$ where all genes
$x,y,z$ are contained in pairwise different species. Clearly,
$t(\ensuremath{\operatorname{lca}}(x,y))=\square$ iff $(x,y)\notin \Theta$ iff $\{x,y\} \notin
E_\Theta$. Since $\ensuremath{\operatorname{lca}}(x,y)\neq \ensuremath{\operatorname{lca}}(x,z)=\ensuremath{\operatorname{lca}}(y,z)=\ensuremath{\operatorname{lca}}(x,y,z)$ we
have $t(\ensuremath{\operatorname{lca}}(x,z))=t(\ensuremath{\operatorname{lca}}(y,z)) = \bullet$, which holds iff $(x,z), (y,z)
\in \Theta$ and thus, iff $\{x,z\},\{y,z\}\in E_\Theta$.
\end{proof}
The set $\mathbb{S}$ of species triples is encoded by the binary variables
$T_{\rt{(\alpha\beta|\gamma)}}=1$ iff $\rt{(\alpha\beta|\gamma)}\in\mathbb{S}$. Note that
$\rt{(\beta\alpha|\gamma)}\equiv\rt{(\alpha\beta|\gamma)}$. In order to avoid superfluous
variables and symmetry conditions connecting them we assume that the first
two indices in triple variables are ordered. Thus there are three triple
variables $T_{\rt{(\alpha\beta|\gamma)}}$, $T_{\rt{(\alpha\gamma|\beta)}}$, and
$T_{\rt{(\beta\gamma|\alpha)}}$ for any three distinct $\alpha,\beta,\gamma\in\ensuremath{\mathfrak{S}}$.
Assume that $\rt{(xy|z)}\in \mathfrak{R}(T)$ is an arbitrary triple
displayed by $T$. In the remainder of this section, we assume
that these genes $x,y$ and $z$ are from pairwise different species
$\sigma(x)=\alpha$, $\sigma(y)=\beta$ and $\sigma(z)=\gamma$. Given that in addition
$t(\ensuremath{\operatorname{lca}}(x,y,z))=\bullet$, we need to ensure that $T_{\rt{(\alpha\beta|\gamma)}}=1$. If
$t(\ensuremath{\operatorname{lca}}(x,y,z))=\bullet$ then there are two cases: \emph{(1)}
$t(\ensuremath{\operatorname{lca}}(x,y))=\square$ or \emph{(2)} $t(\ensuremath{\operatorname{lca}}(x,y))=\bullet$. These two
cases need to be considered separately in the ILP formulation.
\emph{Case (1) $t(\ensuremath{\operatorname{lca}}(x,y))=\square\neq t(\ensuremath{\operatorname{lca}}(x,y,z))$:}
Lemma~\ref{A:lem:bulletifftheta} implies that $E_{xy}=0$ and
$E_{xz}=E_{yz}=1$. This yields $(1-E_{xy})+E_{xz}+E_{yz}=3$.
To infer that in this case $T_{\rt{(\alpha\beta|\gamma)}}=1$ we add the next
constraint.
\begin{ILP}
(1-E_{xy})+E_{xz}+E_{yz}-T_{\rt{(\alpha\beta|\gamma)}} & \leq 2
\label{ilp:inferOne}\\[-0.1cm]
\end{ILP}
By symmetry, these constraints must also be added for the possible triples
$\rt{(xz|y)}$, resp., $\rt{(yz|x)}$ and the corresponding species triples
$\rt{(\alpha\gamma|\beta)}$, resp., $\rt{(\beta\gamma|\alpha)}$:
\begin{align}
E_{xy}+(1-E_{xz})+E_{yz}-T_{\rt{(\alpha\gamma|\beta)}} \leq 2 \tag{\ref{ilp:inferOne}}\\
E_{xy}+E_{xz}+(1-E_{yz})-T_{\rt{(\beta\gamma|\alpha)}} \leq 2 \notag
\end{align}
\emph{Case (2) $t(\ensuremath{\operatorname{lca}}(x,y))=\bullet=t(\ensuremath{\operatorname{lca}}(x,y,z))$:} Lemma
\ref{A:lem:bulletifftheta} implies that $E_{xy}=E_{xz}=E_{yz}=1$. Since
$\ensuremath{\operatorname{lca}}(x,y) \neq \ensuremath{\operatorname{lca}}(x,y,z)$ and the gene tree from which we obtained the
triple is a discriminating representation, i.e., consecutive event labels
are different, there must be an inner vertex $v\not\in \{\ensuremath{\operatorname{lca}}(x,y),
\ensuremath{\operatorname{lca}}(x,y,z)\}$ on the path from $\ensuremath{\operatorname{lca}}(x,y)$ to $\ensuremath{\operatorname{lca}}(x,y,z)$ with
$t(v)=\square$. Since $T$ is a phylogenetic tree, there must be a leaf
$w\in L(v)$ with $w\neq x,y$ and $\ensuremath{\operatorname{lca}}(x,y,w)=v$ which implies
$t(\ensuremath{\operatorname{lca}}(x,y,w))=t(v)=\square$. For this vertex $w$ we derive that $(xw|z),
(yw|z) \in \mathfrak R(T)$ and in particular, $\ensuremath{\operatorname{lca}}(y,w,z)=\ensuremath{\operatorname{lca}}(x,y,z) =
\ensuremath{\operatorname{lca}}(w,z)$. Therefore, $t(\ensuremath{\operatorname{lca}}(y,w,z)) =t(\ensuremath{\operatorname{lca}}(w,z))=\bullet$.
Now we have to distinguish two subcases; either \emph{Case (2a)}
$\sigma(x)=\alpha=\sigma(w)$ (analogously one treats the case
$\sigma(y)=\beta=\sigma(w)$ by interchanging the role of $x$ and $y$) or
\emph{Case (2b)} $\sigma(x)=\alpha\neq\sigma(w)=\delta\notin\{\alpha,\beta,\gamma\}$.
Note that the case $\sigma(w)=\sigma(z)=\gamma$ cannot occur, since we obtained
$(T,t)$ from the cotree of $G_\Theta$ and in particular, we have
$t(\ensuremath{\operatorname{lca}}(w,z))=\bullet$. Therefore, $E_{wz}=1$ and hence, by Constraint
\ref{ilp:forbid_E} it must hold $\sigma(w)\neq \sigma(z)$.
\begin{itemize}
\item[(2a)] Since $t(\ensuremath{\operatorname{lca}}(y,w,z))=\bullet$ and $v=\ensuremath{\operatorname{lca}}(y,w)$ with
$t(v)=\square$ it follows that the triple $(yw|z)$ fulfills the
conditions of \emph{Case 1}, and hence $T_{\rt{(\alpha\beta|\gamma)}}=1$ and we
are done.
\item[(2b)] Analogously as in Case (2a), the triples $(xw|z)$ and $(yw|z)$
fulfill the conditions of Case (1), and hence we get
$T_{\rt{(\alpha\delta|\gamma)}}=1$ and $T_{\rt{(\beta\delta|\gamma)}}=1$. However, we
must ensure that also the triple $\rt{(\alpha\beta|\gamma)}$ will be determined
as observed species triple. Thus we add the constraint:
\begin{align}
T_{\rt{(\alpha\delta|\gamma)}}+T_{\rt{(\beta\delta|\gamma)}}-T_{\rt{(\alpha\beta|\gamma)}}
\leq 1 \tag{\ref{ilp:inferOne}}
\end{align}
which ensures that $T_{\rt{(\alpha\beta|\gamma)}}=1$ whenever
$T_{\rt{(\alpha\delta|\gamma)}}=T_{\rt{(\beta\delta|\gamma)}}=1$.
\end{itemize}
The first three constraints in Eq. \eqref{ilp:inferOne} are added for all
$\{x,y,z\}\in \binom{\ensuremath{\mathfrak{G}}}{3}$ where all three genes are contained in
pairwise different species $\sigma(x)=\alpha$, $\sigma(y)=\beta$ and
$\sigma(z)=\gamma$ and the fourth constraint in Eq. \eqref{ilp:inferOne}
is added for all $\{\alpha,\beta,\gamma,\delta\}\in \binom{\ensuremath{\mathfrak{S}}}{4}$.
In particular, these constraints ensure, that for each triple
$\rt{(xy|z)}\in \mathbb G$ with speciation event on top and corresponding
species triple $\rt{(\alpha\beta|\gamma)}$ the variable $T_{\rt{(\alpha\beta|\gamma)}}$ is
set to $1$.
However, the latter ILP constraints allow some degree of freedom for the
choice of the binary value $T_{\rt{(\alpha\beta|\gamma)}}$, where for all
respective triples $\rt{(xy|z)}\in \mathfrak R(T)$ holds
$t(\ensuremath{\operatorname{lca}}(x,y,z))=\square$. To ensure that only those variables
$T_{\rt{(\alpha\beta|\gamma)}}$ are set to $1$, where at least one triple
$\rt{(xy|z)}\in \mathfrak R(T)$ with $t(\ensuremath{\operatorname{lca}}(x,y,z))=\bullet$ and
$\sigma(x)=\alpha$, $\sigma(y)=\beta$, $\sigma(z)=\gamma$ exists, we add the
following objective function that minimizes the number of variables
$T_{\rt{(\alpha\beta|\gamma)}}$ that are set to $1$:
\begin{ILP}
\min \sum_{\{\alpha,\beta,\gamma\}\in \binom{S}{3}}
T_{\rt{(\alpha\beta|\gamma)}}+T_{\rt{(\alpha\gamma|\beta)}}+T_{\rt{(\beta\gamma|\alpha)}}
\label{ilp:min1triple}
\end{ILP}
For the latter ILP formulation $O(|\ensuremath{\mathfrak{S}}|^3)$ variables
and $O(|\ensuremath{\mathfrak{G}}|^3+|\ensuremath{\mathfrak{S}}|^4)$ constraints are required.
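For intuition, and as a cross-check of the constraints above, species
triples can also be collected directly from the edited cograph in
polynomial time. The following sketch is our code with illustrative names;
it mirrors Case (1) and the propagation step of Case (2b):
\begin{verbatim}
from itertools import combinations

def species_triples(genes, sigma, E):
    # E: edge set of G_Theta* as frozenset pairs; returns
    # species triples (alpha beta|gamma) as ((alpha, beta), gamma)
    S = set()
    for x, y, z in combinations(genes, 3):
        if len({sigma[x], sigma[y], sigma[z]}) < 3:
            continue
        for a, b, c in ((x, y, z), (x, z, y), (y, z, x)):
            # Case (1): E_ab = 0 and E_ac = E_bc = 1
            if (frozenset((a, b)) not in E
                    and frozenset((a, c)) in E
                    and frozenset((b, c)) in E):
                S.add((tuple(sorted((sigma[a], sigma[b]))),
                       sigma[c]))
    # Case (2b): (ad|g) and (bd|g) imply (ab|g); iterate
    changed = True
    while changed:
        changed = False
        for (p1, g1), (p2, g2) in combinations(list(S), 2):
            shared = set(p1) & set(p2)
            if g1 != g2 or len(shared) != 1:
                continue
            a, b = (set(p1) | set(p2)) - shared
            r = (tuple(sorted((a, b))), g1)
            if r not in S:
                S.add(r)
                changed = True
    return S
\end{verbatim}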
\subsection{Find Maximal Consistent Triple Set}
Given the set of species triples $\mathbb{S}$ the next step is to extract a
maximal subset $\mathbb{S}^* \subseteq \mathbb{S}$ that is consistent.
This combinatorial optimization problem is known to be NP-complete
\cite{Jansson2001,Wu2004}. In an earlier ILP approach, \cite{chang2011ilp}
explicitly constructed a tree that displays $\mathbb{S}^*$. In order to
improve the running time of the ILP we focus here instead on constructing a
consistent, strictly dense triple set $\mathbb{S}'$ containing the desired
solution $\mathbb{S^*}$ because the consistency check involves two-element
subsets in this case (Theorem \ref{thm:consistIFFpairwise}). From
$\mathbb{S}'$ we obtain the desired solution as
$\mathbb{S^*}=\mathbb{S}'\cap\mathbb{S}$. We therefore introduce binary
variables $T'_{\rt{(\alpha\beta|\gamma)}}=1$ iff $\rt{(\alpha\beta|\gamma)}\in\mathbb{S}'$.
To ensure that $\mathbb{S}'$ is strictly dense we add for all
$\{\alpha,\beta,\gamma\}\in \binom{\mathcal{S}}{3}$ the constraints:
\begin{align} \tag{\ref{ilp:sd}}
&T'_{\rt{(\alpha\beta|\gamma)}}+T'_{\rt{(\alpha\gamma|\beta)}}+T'_{\rt{(\beta\gamma|\alpha)}} = 1.
\end{align}
We can now apply the inference rules in
Eq. \eqref{eq:infRule2} and the results of Theorem
\ref{thm:consistIFFpairwise} and Lemma \ref{lem:suffRule}.
Therefore, we add the following constraint for all ordered tuples $(\alpha,\beta,\gamma,\delta)$
of each $\{\alpha,\beta,\gamma,\delta\}\in \binom{\ensuremath{\mathfrak{S}}}{4}$:
\begin{align}
2T'_{\rt{(\alpha\beta|\gamma)}} + 2&T'_{\rt{(\alpha\delta|\beta)}}-
T'_{\rt{(\beta\delta|\gamma)}} - T'_{\rt{(\alpha\delta|\gamma)}} \leq 2
\tag{\ref{ilp:eq:infRule2}}
\end{align}
The constraint in Eq. \eqref{ilp:eq:infRule2} is a
direct translation of the inference rule in Eqn. \eqref{eq:infRule2}.
Moreover, by Theorem \ref{thm:consistIFFpairwise} and Lemma \ref{lem:suffRule},
we know that testing pairs of triples with Eq. \eqref{eq:infRule2} is
sufficient for verifying consistency.
To ensure maximal cardinality of $\mathbb S^* = \mathbb S' \cap \mathbb S$
we use the objective function
\begin{align}
\max \sum_{\rt{(\alpha\beta|\gamma)}\in \mathbb S} T'_{\rt{(\alpha\beta|\gamma)}}
\tag{\ref{ilp:maxdense}}
\end{align}
This ILP formulation can easily be adapted to solve a ``weighted''
maximum consistent subset problem: With $w_{\rt{(\alpha\beta|\gamma)}}$ we denote for
every species triple $\rt{(\alpha\beta|\gamma)}\in \mathbb S$ the number of
connected components in $G_{\Theta^*}$ that contain three
vertices $a,b,c\in \ensuremath{\mathfrak{G}}$ with $\rt{(ab|c)}\in \mathbb G$ and
$\sigma(a)=\alpha,\sigma(b)=\beta, \sigma(c)=\gamma$. In this way, we increase
the significance of species triples in $\mathbb S$ that have been
observed more often when applying the following objective function.
\begin{align}
\max \sum_{\rt{(\alpha\beta|\gamma)}\in \mathbb S}
T'_{\rt{(\alpha\beta|\gamma)}}\cdot w_{\rt{(\alpha\beta|\gamma)}}. \tag{\ref{ilp:wmax}}
\end{align}
Finally, we define binary variables $T^*_{\rt{(\alpha\beta|\gamma)}}$ that indicate
whether a triple $\rt{(\alpha\beta|\gamma)}\in \mathbb{S}$ is contained in a
maximal consistent triples set $\mathbb{S}^*\subseteq \mathbb{S}$, i.e.,
$T^*_{\rt{(\alpha\beta|\gamma)}}=1$ iff $\rt{(\alpha\beta|\gamma)}\in\mathbb S^*$ and thus,
iff $T_{\rt{(\alpha\beta|\gamma)}}=1$ and $T'_{\rt{(\alpha\beta|\gamma)}}=1$.
Therefore, we add for all $\{\alpha,\beta,\gamma\}\in \binom {\mathcal S}{3}$
the binary variables
$T^*_{\rt{(\alpha\beta|\gamma)}}$ and add the constraints
\begin{equation}
0 \leq T'_{\rt{(\alpha\beta|\gamma)}} + T_{\rt{(\alpha\beta|\gamma)}} - 2T^*_{\rt{(\alpha\beta|\gamma)}} \leq 1
\tag{\ref{eq:tstar}}
\end{equation}
It is easy to verify that in the latter ILP formulation
$O(|\ensuremath{\mathfrak{S}}|^3)$ variables and $O(|\ensuremath{\mathfrak{S}}|^4)$ constraints are required.
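The compactness of this formulation is best seen in code. The following
PuLP sketch (ours, with the same caveats as the cograph-editing sketch
above) implements the strict-density constraint, the pairwise consistency
constraint derived from rule \eqref{eq:infRule2}, and the unweighted
objective:
\begin{verbatim}
import pulp
from itertools import combinations, permutations

def max_consistent(species, S_in):
    # S_in: species triples ((alpha, beta), gamma), alpha < beta
    species = sorted(species)
    prob = pulp.LpProblem("max_consistent", pulp.LpMaximize)
    T = {(pair, c): pulp.LpVariable("T_%s%s_%s" % (pair + (c,)),
                                    cat="Binary")
         for x, y, z in combinations(species, 3)
         for pair, c in (((x, y), z), ((x, z), y), ((y, z), x))}
    def t(x, y, z):
        return T[(tuple(sorted((x, y))), z)]
    # objective: maximize the overlap of S' with the input set
    prob += pulp.lpSum(t(p[0], p[1], c) for p, c in S_in)
    # strict density: exactly one triple per 3-subset
    for x, y, z in combinations(species, 3):
        prob += t(x, y, z) + t(x, z, y) + t(y, z, x) == 1
    # pairwise consistency via inference rule (ii)
    for a, b, g, d in permutations(species, 4):
        prob += (2 * t(a, b, g) + 2 * t(a, d, b)
                 - t(b, d, g) - t(a, d, g)) <= 2
    prob.solve()
    return {(p, c) for p, c in S_in
            if t(p[0], p[1], c).value() > 0.5}
\end{verbatim}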
\subsection{Least Resolved Species Tree}
The final step consists in finding a minimally resolved tree that displays
all triples of $\mathbb S^*$ and, in addition, has a minimum number
of inner vertices. The variables $T^*_{\rt{(\alpha\beta|\gamma)}}$
defined in the previous step take on the role of constants here.
There is an ILP approach by \cite{chang2011ilp} for determining maximal
consistent triple sets. However, this approach relies on determining
consistency by checking and building up a binary tree, a very time
consuming task. As we showed, this can be improved and simplified by the
latter ILP formulation. However, we will adapt now some of the ideas
established by \cite{chang2011ilp}, to solve the NP-hard problem
\cite{Jansson:12} of finding a least resolved tree.
To build an arbitrary tree for the consistent triple set $\mathbb{S}^*$,
one can use the fast algorithm \texttt{BUILD}
\cite{sem-ste-03a}. Moreover, if the tree obtained by \texttt{BUILD} for
$\mathbb{S}^*$ is a binary tree, then Proposition \ref{pro:BinaryClDense}
implies that the closure $\ensuremath{\operatorname{cl}}(\mathbb{S}^*)$ is strictly dense and that this
tree is a unique and least resolved tree for $\mathbb{S}^*$. Hence, as a
preprocessing step one could use \texttt{BUILD} first, to test whether the
tree for $\mathbb{S}^*$ is already binary and if not, proceed with the
following ILP approach.
A phylogenetic tree $S$ is uniquely determined by the hierarchy $\mathcal{C} =
\{L(v)\mid v\in V(S)\}$ according to Theorem \ref{A:thm:hierarchy}. Thus it
is possible to construct $S$ by building the clusters induced by the
triples of $\mathbb{S}^*$. Hence, we need to translate the condition for
$\mathcal{C}$ to be a hierarchy into the language of ILPs.
Following \cite{chang2011ilp} we use a binary $|\ensuremath{\mathfrak{S}}|\times N$ matrix $M$,
with entries $M_{\alpha p}=1$ iff species $\alpha$ is contained in cluster
$p$. By Lemma \ref{A:lem:nrC}, it is clear that we need at most $2|\ensuremath{\mathfrak{S}}|-1$
columns. As we shall see later, we exclude (implicitly) the trivial
singleton clusters $\{x\}\in \ensuremath{\mathfrak{S}}$ and the cluster $\ensuremath{\mathfrak{S}}$. Hence, it
suffices to use $N=2|\ensuremath{\mathfrak{S}}|-1-|\ensuremath{\mathfrak{S}}|-1=|\ensuremath{\mathfrak{S}}|-2$ clusters. Each cluster $p$,
which is represented by the $p$-th column of $M$, corresponds to an inner
vertex $v_p$ in the species tree $S$ so that column $p$ encodes $L(v_p)$.
Since we are interested in finding a least resolved tree rather than a
fully resolved one, we allow the number of non-trivial clusters to be smaller
than $N=|\ensuremath{\mathfrak{S}}|-2$, i.e., we allow that some columns of $M$ have no non-zero entries. Here, we
deviate from the approach of \cite{chang2011ilp}. Columns $p$ with
$\sum_{\alpha\in \ensuremath{\mathfrak{S}}} M_{\alpha p}=0$ containing only $0$ entries and thus,
clusters $L(v_p)=\emptyset$, are called \emph{trivial}, all other columns
and clusters are called \emph{non-trivial}. Clearly, the non-trivial
clusters correspond to the internal vertices of $S$, hence we have to
maximize the number of trivial columns of $M$. This condition also
suffices to remove redundancy, i.e., non-trivial columns with the same
entries.
We first give the ILP formulation that captures that all triples
$\rt{(\alpha\beta|\gamma)}$ contained in $\mathbb{S}^* \subseteq \mathbb{S}$ are
displayed by a tree. A triple $\rt{(\alpha\beta|\gamma)}$ is displayed by a tree if
and only if there is an inner vertex $v_p$ such that $\alpha,\beta\in L(v_p)$
and $\gamma\notin L(v_p)$ and hence, iff $M_{\alpha p}=M_{\beta p}=1$ and
$M_{\gamma p}=0$ for this cluster $p$.
To this end, we define binary variables $N_{\alpha\beta, p}$ so that
$N_{\alpha\beta, p}=1$ iff $\alpha,\beta\in L(v_p)$ for all $\{\alpha,\beta\}\in
\binom{\ensuremath{\mathfrak{S}}}{2}$ and $p=1,\dots, |\ensuremath{\mathfrak{S}}|-2$. This condition is captured by
the constraint:
\begin{align}\tag{\ref{ilp:Nclus}}
0\leq & M_{\alpha p} + M_{\beta p} - 2 N_{\alpha\beta, p} \leq 1.
\end{align}
We still need to ensure that for each triple $\rt{(\alpha\beta|\gamma)}\in \mathbb
S^*$ there is at least one cluster $p$ that contains $\alpha$ and $\beta$
but not $\gamma$, i.e., $N_{\alpha\beta, p}=1$ and $N_{\alpha\gamma, p}=0$ and
$N_{\beta\gamma, p}=0$. For each possible triple $\rt{(\alpha\beta|\gamma)}$ we
therefore add the constraint
\begin{align}
1 - |\ensuremath{\mathfrak{S}}|(1- T^*_{\rt{(\alpha\beta|\gamma)}}) \leq
\sum_p N_{\alpha\beta,p} - \frac{1}{2} N_{\alpha\gamma,p} -\frac{1}{2} N_{\beta\gamma,p}.
\tag{\ref{ilp:rep}}
\end{align}
To see that \eqref{ilp:rep} ensures $\alpha,\beta\in L(v_p)$ and $\gamma\notin
L(v_p)$ for each $\rt{(\alpha\beta|\gamma)}\in \mathbb S^*$ and some $p$, assume
first that $\rt{(\alpha\beta|\gamma)}\not\in \mathbb S^*$ and hence,
$T^*_{\rt{(\alpha\beta|\gamma)}}=0$. Then, $1 - |\ensuremath{\mathfrak{S}}|(1- T^*_{\rt{(\alpha\beta|\gamma)}})
= 1 - |\ensuremath{\mathfrak{S}}|$ and we are free in the choice of the variables
$N_{\alpha\beta,p}$, $N_{\alpha\gamma,p}$, and $N_{\beta\gamma,p}$. Now assume that
$\rt{(\alpha\beta|\gamma)}\in \mathbb S^*$ and hence, $T^*_{\rt{(\alpha\beta|\gamma)}}=1$.
Then, $1 - |\ensuremath{\mathfrak{S}}|(1- T^*_{\rt{(\alpha\beta|\gamma)}}) = 1$. This implies that at
least one variable $N_{\alpha\beta,p}$ must be set to $1$ for some $p$. If
$N_{\alpha\beta,p}=1$ and $N_{\alpha\gamma,p}=1$, then constraint
\eqref{ilp:Nclus} implies that $M_{\alpha p} = M_{\beta p} = M_{\gamma p} =1$
and thus $N_{\beta\gamma,p}=1$. Analogously, if $N_{\alpha\beta,p}=1$ and
$N_{\beta\gamma,p}=1$, then $N_{\alpha\gamma,p}=1$. It remains to show that there is
some cluster $p$ with $N_{\alpha\beta,p}=1$ and
$N_{\alpha\gamma,p}=N_{\beta\gamma,p}=0$. Assume, for contradiction, that
$N_{\alpha\gamma,p}=N_{\beta\gamma,p}=0$ holds for none of the clusters $p$ with
$N_{\alpha\beta,p}=1$. Then, by the latter arguments all of these
clusters $p$ satisfy: $N_{\alpha\gamma,p}=N_{\beta\gamma,p}=1$. However, this implies
that $ N_{\alpha\beta,p} - \frac{1}{2} N_{\alpha\gamma,p} -\frac{1}{2} N_{\beta\gamma,p} =
0$ for all $p$, which contradicts the constraint
\eqref{ilp:rep}. Therefore, if $T^*_{\rt{(\alpha\beta|\gamma)}}=1$, there must be
at least one cluster $p$ with $N_{\alpha\beta,p}=1$ and
$N_{\alpha\gamma,p}=N_{\beta\gamma,p}=0$ and hence, $M_{\alpha p} = M_{\beta p}=1$ and
$M_{\gamma p} =0$.
In summary, the constraints above ensure that for the
maximal consistent triple set $\mathbb S^*$ of $\mathbb S$ and for each
triple $\rt{(\alpha\beta|\gamma)}\in \mathbb S^*$ there exists at least one column $p$ in
the matrix $M$ that contains $\alpha$ and $\beta$, but not $\gamma$. Note that for
a triple $\rt{(\alpha\beta|\gamma)}$ we do not insist on having a cluster $q$ that
contains $\gamma$ but not $\alpha$ and $\beta$ and therefore, we do not insist on
constructing singleton clusters. Moreover, there is no constraint that
claims that the set $\ensuremath{\mathfrak{S}}$ is decoded by $M$. In particular, since we later
maximize the number of trivial columns in $M$ and since we do not give ILP
constraints that insist on finding the clusters $\ensuremath{\mathfrak{S}}$ and $\{x\},\ x\in \ensuremath{\mathfrak{S}}$,
these clusters will not be defined by $M$. However, these latter clusters
are clearly known, and thus, to decode the desired tree, we only require
that $M$ is a ``partial'' hierarchy, that is, for every pair of clusters $p$
and $q$, $p\cap q\in \{p,q, \emptyset\}$ holds. In such a case the clusters
$p$ and $q$ are said to be compatible. Two clusters $p$ and $q$ are
incompatible if there are (not necessarily distinct) species
$\alpha,\beta,\gamma\in \ensuremath{\mathfrak{S}}$ with $\alpha\in p\setminus q $ and $\beta\in q\setminus
p$, and $\gamma\in p\cap q$. In the latter case we would have $(M_{\alpha
p},M_{\alpha q})=(1,0)$, $(M_{\beta p},M_{\beta q})=(0,1)$, $(M_{\gamma p},M_{\gamma
q})=(1,1)$. Here we follow the idea of \cite{chang2011ilp}, and use the
so-called three-gamete condition. For each gamete $(\Gamma,\Lambda)\in
\{(0,1),(1,0),(1,1)\}$ and each pair of columns $p$ and $q$ we define a set of binary
variables $C_{p,q,\Gamma\Lambda}$. For all $\alpha\in \ensuremath{\mathfrak{S}}$ and $p,q=1,\dots,
|\ensuremath{\mathfrak{S}}|-2$ with $p\neq q$ we add
\begin{align}\tag{\ref{ilp:CM}}
C_{p,q,01}\geq &-M_{\alpha p}+M_{\alpha q}\\
C_{p,q,10}\geq &\ \ \ \ M_{\alpha p}-M_{\alpha q} \notag\\
C_{p,q,11}\geq &\ \ \ \ M_{\alpha p}+M_{\alpha q}-1 \notag
\end{align}
These constraints capture that $C_{p,q,\Gamma\Lambda}=1$ whenever
$M_{\alpha p}=\Gamma$ and $M_{\alpha q}=\Lambda$ for some $\alpha\in \ensuremath{\mathfrak{S}}$.
To ensure that only compatible clusters are contained, we add
for each pair of columns $p\neq q$ the constraint
\begin{align}
C_{p,q,01} + C_{p,q,10} + C_{p,q,11} \leq 2. \tag{\ref{ilp:comp}}
\end{align}
Hence, the latter Equations \eqref{ilp:Nclus}--\eqref{ilp:comp} ensure
that we get a ``partial'' hierarchy $M$, in which only the singleton clusters
and the set $\ensuremath{\mathfrak{S}}$ are missing.
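Outside of the ILP, the three-gamete condition also yields a direct test of pairwise cluster compatibility; the following small Python sketch (with illustrative toy clusters) mirrors the constraints \eqref{ilp:CM} and \eqref{ilp:comp}:
\begin{verbatim}
# Pairwise compatibility of clusters via the three-gamete condition;
# p and q are sets of species, universe is the species set.
def gametes(p, q, universe):
    g = {(int(x in p), int(x in q)) for x in universe}
    return g & {(0, 1), (1, 0), (1, 1)}

def compatible(p, q, universe):
    # p, q can coexist in a hierarchy iff not all three gametes occur
    return len(gametes(p, q, universe)) <= 2

universe = {"A", "B", "C", "D"}
print(compatible({"A", "B"}, {"A", "B", "C"}, universe))  # True (nested)
print(compatible({"A", "B"}, {"B", "C"}, universe))       # False (overlap)
\end{verbatim}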
Finally, among the maximal consistent triple sets
$\mathbb{S}^*$ of $\mathbb{S}$ we want the one that determines the least resolved
tree, i.e., a tree that displays all triples of $\mathbb{S}^*$, has a
minimal number of inner vertices, and therefore makes the fewest
assumptions on the tree topology. Since the number of leaves $|\ensuremath{\mathfrak{S}}|$ in
the species tree $S$ is fixed and therefore the number of clusters is
determined by the number of inner vertices, as shown in the proof of Lemma
\ref{A:lem:nrC}, we can conclude that a minimal number of clusters results
in a tree with a minimal number of inner vertices. In other words, to find a
least resolved tree determined by the hierarchy matrix $M$, we need to
maximize the number of trivial columns in $M$, i.e., the number of columns
$p$ with $\sum_{\alpha\in \ensuremath{\mathfrak{S}}} M_{\alpha p}=0$.
For this, we require in addition to the constraints
\eqref{ilp:Nclus}-\eqref{ilp:comp} for each $p=1,\dots,|\ensuremath{\mathfrak{S}}|-2$ a binary
variable $Y_p$ that indicates whether there are entries in column $p$ equal
to $1$ or not. To enforce that $Y_p=1$ whenever column $p$ is non-trivial we
add for each $p=1,\dots,|\ensuremath{\mathfrak{S}}|-2$ the constraint
\begin{align}
0&\leq Y_p|\ensuremath{\mathfrak{S}}| - \sum_{\alpha\in\ensuremath{\mathfrak{S}}} M_{\alpha p}\leq |\ensuremath{\mathfrak{S}}|-1
\tag{\ref{ilp:yp}}
\end{align}
If there is a ``1'' entry in column $p$ and $Y_p=0$, then $Y_p|\ensuremath{\mathfrak{S}}| -
\sum_{\alpha\in\ensuremath{\mathfrak{S}}} M_{\alpha p}<0$, a contradiction. If column $p$ is trivial
and $Y_p=1$, then $Y_p|\ensuremath{\mathfrak{S}}| - \sum_{\alpha\in\ensuremath{\mathfrak{S}}} M_{\alpha p}=|\ensuremath{\mathfrak{S}}|$, again a
contradiction. Finally, in order to minimize the number of non-trivial
columns in $M$ and thus, to obtain a least resolved tree for $\mathbb S^*$
with a minimum number of inner vertices,
we add the objective function
\begin{align} \tag{\ref{ilp:minY}}
\min\ &\sum_p Y_p
\end{align}
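A sketch of this last building block in \texttt{PuLP} notation, continuing the imports of the first listing and presupposing hypothetical binary variables \texttt{M[(a, p)]} and a minimization problem \texttt{prob}:
\begin{verbatim}
# assumes: prob = LpProblem("least_resolved", LpMinimize) and binary
# variables M[(a, p)] for a in species and p in range(len(species) - 2)
nS = len(species)
Y = {p: LpVariable("Y_%d" % p, cat=LpBinary) for p in range(nS - 2)}
for p in range(nS - 2):
    col = lpSum(M[(a, p)] for a in species)
    prob += nS * Y[p] - col >= 0        # a "1" entry in column p forces Y_p = 1
    prob += nS * Y[p] - col <= nS - 1   # an empty column forces Y_p = 0
prob += lpSum(Y.values())               # minimize the number of inner vertices
\end{verbatim}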
Therefore, we obtain for the maximal consistent subset
$\mathbb{S}^*\subseteq \mathbb{S}$ of species triples a ``partial''
hierarchy defined by $M$, that is, for all clusters $L(v_p)$ and $L(v_q)$
defined by columns $p$ and $q$ in $M$ holds $L(v_p) \cap L(v_q) \in
\{L(v_p), L(v_q), \emptyset\}$. The clusters $\ensuremath{\mathfrak{S}}$ and $\{x\},\ x\in \ensuremath{\mathfrak{S}}$
will not be defined by $M$. However, from these clusters and the clusters
determined by the columns of $M$, the corresponding tree is easily built,
which by construction displays all triples in $\mathbb{S}^*$, see
\cite{sem-ste-03a,Dress:book}.
The latter ILP formulation requires $O(|\ensuremath{\mathfrak{S}}|^3)$ variables and
constraints.
A least resolved tree with a minimum number of inner vertices \cite{Jansson:12} is not the
only possible way to construct a species tree without spurious resolution.
As an alternative approach one might consider trees $T$ that display
all triples of $\mathbb{S}^*$ and at the same time minimize the number of
additional triples $r\in \mathfrak{R}(T)\setminus\mathbb{S}^*$. Since the
closure $\ensuremath{\operatorname{cl}}(\mathbb{S}^*)$ is displayed by all trees that display also
$\mathbb{S}^*$, this task is equivalent to finding a tree with
$\widehat{\mathbb{S}}:=\mathfrak{R}(T)$ that displays $\ensuremath{\operatorname{cl}}(\mathbb{S}^*)$
and minimizes the number of triples in $\widehat{\mathbb{S}}\setminus
\ensuremath{\operatorname{cl}}(\mathbb{S}^*)$. Thus, $\widehat{\mathbb{S}}$ must be of minimum
cardinality. To this end, we modify the ILP formulation
\eqref{ilp:Nclus}--\eqref{ilp:minY}: we remove the objective function
\eqref{ilp:minY} and constraint \eqref{ilp:yp}, and omit the variables
$Y_p$. Instead, we introduce the binary variables
$\widehat{T}_{\rt{(\alpha\beta|\gamma)}}=1$ iff
$\rt{(\alpha\beta|\gamma)}\in\widehat{\mathbb{S}}$. For each $p=1,\dots,|\ensuremath{\mathfrak{S}}|-2$
and all $\{\alpha,\beta,\gamma\}\in \binom{\ensuremath{\mathfrak{S}}}{3}$ we add the
constraints
\begin{ILP} \label{ilp:tHat}
M_{\alpha p} + M_{\beta p} + (1-M_{\gamma p}) -
\widehat{T}_{\rt{(\alpha\beta|\gamma)}} \leq 2.
\end{ILP}
Constraint \eqref{ilp:tHat} enforces that
$\widehat{T}_{\rt{(\alpha\beta|\gamma)}}=1$ whenever there exists a cluster $p$
with $\alpha,\beta \in L(v_p)$ and $\gamma \notin L(v_p)$, and hence
$\rt{(\alpha\beta|\gamma)}\in\widehat{\mathbb{S}}$. Finally, in order to
minimize the number of triples in $\widehat{\mathbb{S}}$ we change the
objective function \eqref{ilp:minY} to
\begin{ILP} \label{ilp:minTHat}
\min\ &\sum_{\rt{(\alpha\beta|\gamma)}} \widehat{T}_{\rt{(\alpha\beta|\gamma)}}.
\end{ILP}
Constraints \eqref{ilp:Nclus} to \eqref{ilp:comp} remain unchanged. This
alternative ILP formulation requires $O(|\ensuremath{\mathfrak{S}}|^3)$ variables and
$O(|\ensuremath{\mathfrak{S}}|^4)$ constraints.
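The logic of constraint \eqref{ilp:tHat} can again be checked exhaustively in a few lines of Python (illustrative only):
\begin{verbatim}
# That is forced to 1 exactly when M_ap = M_bp = 1 and M_cp = 0,
# i.e., exactly when cluster p displays the triple (ab|c).
from itertools import product
for Ma, Mb, Mc in product((0, 1), repeat=3):
    feasible = [t for t in (0, 1) if Ma + Mb + (1 - Mc) - t <= 2]
    assert (feasible == [1]) == (Ma == Mb == 1 and Mc == 0)
\end{verbatim}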
If $\mathbb{S}^*$ is strictly dense, then both ILP formulations will
result in the same binary tree as constructed using the \texttt{BUILD}
algorithm.
\section{Implementation and Data Sets}
\subsection{ILP Solver}
The ILP approach has been implemented using \textsc{IBM ILOG
CPLEX{\texttrademark}} Optimizer 12.6, employing the weighted version of the
maximum consistent triple set problem.
For each component of $G_\Theta$ we
check in advance if it is already a cograph. If this is not the case then
an ILP instance is executed, finding the closest cograph. In a similar
manner, we check for each resulting cograph whether it contains any paralogous
genes at all. If not, then the cograph is a complete graph and the
resulting gene tree would be a star, not containing any species triple information.
Hence, extracting the species triples is skipped. Triple extraction is done using a
polynomial time algorithm instead of the ILP formulation.
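The cograph pre-check itself needs no ILP machinery: a graph is a cograph iff every induced subgraph on at least two vertices is disconnected or has a disconnected complement. A minimal Python sketch, assuming the \texttt{networkx} library:
\begin{verbatim}
import networkx as nx

def is_cograph(G):
    # recursive test along the (implicit) cotree decomposition
    if G.number_of_nodes() <= 1:
        return True
    if not nx.is_connected(G):
        return all(is_cograph(G.subgraph(c).copy())
                   for c in nx.connected_components(G))
    H = nx.complement(G)
    if not nx.is_connected(H):
        return all(is_cograph(G.subgraph(c).copy())
                   for c in nx.connected_components(H))
    return False   # G and its complement connected: an induced P4 exists

print(is_cograph(nx.path_graph(4)))      # False: P4 is the forbidden subgraph
print(is_cograph(nx.complete_graph(4)))  # True: no paralogs, star gene tree
\end{verbatim}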
Although the connected components
of $G_\Theta$ are treated separately, some instances of the cograph editing
problem have exceptionally long computation times. We therefore exclude
components of $G_{\Theta}$ with more than 50 genes. In addition, we limit
the running time for finding the closest cograph for a single connected
component to 30 minutes. If an optimal solution for this component is not
found within this time limit, we use the best solution found so far. The
other ILP computations are not restricted by a time limit.\smallskip
\subsection{Simulated Data}
To evaluate the ILP approach we use simulated and real-life data sets.
Artificial data is created with the method described in
\cite{HHW+14} as well as the \texttt{Artificial Life Framework} (\texttt{ALF})
\cite{Dalquen:12}. The first method generates explicit species/gene tree
histories, from which the orthology relation is directly accessible. All
simulations are performed with parameters $1.0$ for gene duplication,
$0.5$ for gene loss, and $0.1$ for the (increasing) loss rate after a
gene duplication. We do not consider cluster or genome
duplications. \texttt{ALF}\ simulates the evolution of sequences along a branch
length-annotated species tree, explicitly taking into account gene
duplication, gene loss, and horizontal transfer events. To obtain
bacteria-like data sets we adopted the procedure from \cite{DAAGD2013}: a
tree of \emph{$\gamma$-proteobacteria} from the OMA project
\cite{Altenhoff:11} was randomly pruned to obtain trees of moderate size,
while conserving the original branch lengths. All simulations are
performed with a rate of 0.005 for gene duplication and loss. We do
not consider cluster duplications or losses.
The presented method heavily depends on the amount of duplicated
genes, which in turn depends on the number of analyzed genes per
species. This naturally raises the question of how many genes, or
gene families, are needed to provide enough information to reconstruct
accurate species trees, assuming a certain gene duplication rate.
Therefore, we evaluate the precision of reconstructed trees with respect
to the number of species and gene families. 100 species trees of size
5, 10, 15, and 20 (\texttt{ALF}\ only) leaves are generated. For each tree,
the evolution of ten to 100 (first simulation method) and 100 to 500
(\texttt{ALF}) gene families is simulated. This corresponds for the first
simulation method to $32.6\%$ (five species), $19.0\%$ (ten species), and
$13.5\%$ (15 species) and for \texttt{ALF}\ simulations $11.2\%$ (five species),
$8.1\%$ (ten species), and $7.5\%$ (15 and 20 species) of all homologous
pairs being paralogs (values determined from the simulations).
Horizontal gene transfer and cluster duplication/loss were not
considered.
The reconstructed trees are compared with the generated (binary)
species trees. To this end, we use the software \texttt{TreeCmp}
\cite{BGW-treeCmp:12} to compute distances for rooted trees based on
Matching Cluster (MC), Robinson-Foulds (RC), Nodal Splitted (NS) and
Triple metric (TT). The distances are normalized by the average distance
between random Yule trees \cite{Yule:25}.
In order to estimate the effects of noise in the empirical orthology
relation we consider several forms of perturbations: (i) insertion and
deletion of edges in the orthology graph (homologous noise), (ii)
insertion of edges (orthologous noise), (iii) deletion of edges
(paralogous noise), and (iv) modification of gene/species assignments
(xenologous noise). In the first three models each possible edge is
modified with probability $p$. Model (ii) simulates overprediction of
orthology, while model (iii) simulates underprediction. Model (iv)
retains the original orthology information but changes the associations
between genes and their respective species with probability $p$. This
simulates noise as expected in case of horizontal gene transfer. For each
model we reconstruct the species trees of 100 simulated data sets with
ten species and 100 gene families (first simulation method), respectively
1000 gene families (\texttt{ALF}). As before, no horizontal gene transfer or
cluster duplications/losses were simulated. Noise is added with a
probability $p \in \{0.05, 0.10, 0.15, 0.20, 0.25\}$.
Horizontal transfer is an abundant process, in particular in
prokaryotes, that may lead to particular types of errors in each step of
our approach, see the theoretical discussion below. We therefore
investigated the robustness of our approach against HGT as a specific
type of perturbation in some detail. To this end, we simulate data sets
of 1000 gene families, using \texttt{ALF}, with a duplication/loss rate of
$0.005$ and evolutionary rates $r \in \{0.0, 0.0025, 0.005, 0.0075\}$ for
horizontal transfer. Cluster duplications/losses, or horizontal
transfers of groups of genes are not considered. The simulation is
repeated $100$ times for each combination of parameters. From the
simulated sequences, orthologous pairs of genes are predicted with
\texttt{Proteinortho} \cite{Lechner:11a}, using an $E$-value threshold of
$1e-10$ and similarity parameter of $0.9$. From this estimate of the
orthology relation species trees are reconstructed.
The authors of \cite{DAAGD2013} observed that increasing HGT rates
have only a minor impact on the recall of orthology prediction, while the
precision drops significantly, i.e., orthology prediction tools tend to
mis-predict xenology as orthology. To evaluate the impact of noise
solely coming from mis-predicting xenology as orthology, a second
orthology relation is constructed from the same simulations. This
orthology relation only differs from the simulated orthology relation by
all simulated xenologs being predicted as orthologs, i.e., all paralogs
are correctly detected (\emph{perfect paralogy knowledge}), see Fig.~\ref{fig:simXenology}(B).
Analogously, we evaluated the impact of noise
solely coming from mis-predicting xenology as paralogy, i.e., all orthologs
are correctly detected (\emph{perfect orthology knowledge}), see Fig.~\ref{fig:simXenology}(C).
From these orthology relations, species trees
are reconstructed with the ILP approach, and compared with the generated
species trees, used for the simulation.
All simulations so far have been performed by computing a least
resolved tree that minimized the cardinality of the vertex set, i.e.,
the definition of \cite{Jansson:12}. Given a consistent set of triples
$\mathbb{S}^*$ it is thus of interest to evaluate the influence of
different choices of how a tree is inferred from the triple set. To
this end we compare (i) the \texttt{BUILD} algorithm, (ii) least
resolved trees with minimum number of vertices, and (iii) least resolved
trees that
minimize the number of additional triples $r \notin \ensuremath{\operatorname{cl}}(\mathbb{S}^*)$.
As a consequence of Proposition \ref{pro:BinaryClDense} the three
methods will produce the same tree whenever the tree constructed with
\texttt{BUILD} is binary. This is nearly always the case when the
target tree is binary. Therefore, we use \texttt{ALF}\ to generate a
duplication/loss history along a non-binary species tree. As before,
parameter values of $0.005$ are used for gene duplication and loss.
Horizontal gene transfer and cluster duplication/loss were not
considered here. The resulting orthology relation is perturbed with
``orthologous noise'' (insertion of edges) with probability $0.05$.
Each data set was analyzed with the ILP pipeline with the three
different tree building methods. The resulting trees are compared with
each other as well as the input tree used for the simulation. The
procedure is repeated 100 times using the same ternary species tree
that is given here in Newick tree format:
\begin{small}
\texttt{(((SE001,SE002,SE003),SE004,SE005)}\texttt{,(SE006,}\texttt{(SE007,}\\\texttt{SE008,}
\texttt{SE009),SE010),(SE011,SE012,(SE013,SE014,SE015)))}.
\end{small}
\subsection{Real-Life Data Sets}
As real-life applications we consider two sets of eubacterial genomes. The
set of eleven \emph{Aquificales} species studied in \cite{Lechner:14b}
covers the three families \emph{Aquificaceae}, \emph{Hydrogenothermaceae},
and \emph{Desulfurobacteriaceae}. The species considered are the
\emph{Aquificaceae}: \emph{Aquifex aeolicus} VF5
(NC\textunderscore000918.1, NC\textunderscore001880.1),
\emph{Hydrogeni\-virga sp.} 128-5-R1-1 (ABHJ00000000.1),
\emph{Hydrogenobacter thermophilus} TK-6 (NC\textunderscore013799.1),
\emph{Hydrogenobaculum sp.} \linebreak Y04AAS1 (NC\textunderscore
011126.1), \emph{Thermocrinis albus} DSM 14484 (NC\textunderscore013894.1),
\emph{Thermocrinis ruber} DSM 12173 (CP007028.1), the
\emph{Hydrogenothermaceae}: \emph{Persephonella marina} EX-H1 \linebreak
(NC\textunderscore012439.1, NC\textunderscore012440.1),
\emph{Sulfuri\-hydrogenibium sp.} \linebreak YO3AOP1 (NC\textunderscore010730.1),
\emph{Sulfurihydrogenibium azorense} Az-Fu1
(NC\textunderscore012438.1), and the \emph{Desulfurobacteriaceae}:
\emph{Desulfobacterium thermolithotrophum} DSM 11699
(NC\textunderscore015185.1), and \emph{Thermovibrio ammonificans} HB-1
(NC\textunderscore014917.1, \linebreak NC\textunderscore014926.1).
A larger set of 19 \emph{Enterobacteriales} was taken from RefSeq:
\emph{Enterobacteriaceae} family: \emph{Cronobacter sakazakii} ATCC BAA-894
(NC\textunderscore009778.1, NC\textunderscore009779.1,
NC\textunderscore009780.1), \emph{Enterobacter aerogenes} KCTC 2190
(NC\textunderscore015663.1), \emph{Enterobacter cloacae} ATCC 13047
(NC\textunderscore014107.1, NC\textunderscore014108.1,
NC\textunderscore014121.1), \emph{Erwinia amylovora} ATCC 49946
(NC\textunderscore013971.1, NC\textunderscore013972.1,
NC\textunderscore013973.1), \emph{Escherichia coli} K-12 substr DH10B
(NC\textunderscore010473.1), \emph{Escherichia fergusonii} ATCC 35469
(NC\textunderscore011740.1, NC\textunderscore011743.1), \emph{Klebsiella
oxytoca} KCTC 1686 (NC\textunderscore016612.1), \emph{Klebsiella
pneu\-moniae} 1084 (NC\textunderscore018522.1),
\emph{Proteus mirabilis} BB2000 (NC\textunderscore022000.1),
\emph{Salmonella bongori} Sbon 167 (NC\textunderscore021870.1,
NC\textunderscore021871.1), \emph{Salmonella enterica} serovar Agona SL483
(NC\textunderscore011148.1, NC\textunderscore011149.1),
\emph{Salmonella typhimurium} DT104 (NC\textunderscore022569.1,
NC\textunderscore022570.1), \emph{Serratia marcescens} FGI94
(NC\textunderscore020064.1), \emph{Shigella boydii} Sb227
(NC\textunderscore007608.1, NC\textunderscore007613.1), \emph{Shigella
dysenteriae} Sd197 (NC\textunderscore007606.1, NC\textunderscore007607.1,
NC\textunderscore009344.1), \emph{Shigella flexneri} 5 str 8401
(NC\textunderscore008258.1), \emph{Shigella sonnei} Ss046
(NC\textunderscore007384.1, NC\textunderscore007385.1,
NC\textunderscore009345.1, NC\textunderscore009346.1,
NC\textunderscore009347.1), \emph{Yersinia pestis} Angola
(NC\textunderscore010157.1, NC\textunderscore010158.1,
NC\textunderscore010159.1), and \emph{Yersinia pseudotuberculosis} IP 32953
(NC\textunderscore006153.2, NC\textunderscore006154.1,
NC\textunderscore006155.1).
\subsection{Estimation of the Input Orthology Relation}
An initial estimate of the orthology relation is
computed with \texttt{Proteinortho} \cite{Lechner:11a} from all the
annotated proteins using an $E$-value threshold of $1e-10$ and similarity
parameter of $0.9$.
Additionally, the genomes of all species were re-\texttt{blast}ed to detect
homologous genes not annotated in the \texttt{RefSeq}.
In brief, \texttt{Proteinortho} implements a modified pair-wise best hit
strategy starting from \texttt{blast} comparisons. It first creates a graph
consisting of all genes as nodes and an edge for every blast hit with an
$E$-value below a certain threshold. In a second step edges between two genes
$a$ and $b$ from different species are removed if a much better blast hit is
found between $a$ and a duplicated gene $b'$ from the same species as $b$.
Finally, the graph is filtered with spectral partitioning to result in
disconnected components with a certain minimum algebraic connectivity.
The resulting orthology graph usually consists of several pairwise
disconnected components, which can be interpreted as individual gene
families. Within these components there may exist pairs of genes having
\texttt{blast} $E$-values worse than the threshold so that these nodes are
not connected in the initial estimate of $\Theta$. Thus, the input data have
a tendency towards underprediction of orthology in particular for distant
species. Our simulation results suggest that the ILP approach handles
overprediction of orthology much better. We therefore re-add edges that
were excluded because of the $E$-value cut-off only within connected
components of the raw $\Theta$ relation.
\subsection{Evaluation of Phylogenies}
For the analysis of simulated data we compare the reconstructed trees with
the trees generated by the simulation. To this end we compute the
four commonly used distance measures for rooted trees, Matching Cluster
(MC), Robinson-Foulds (RC), Nodal Splitted (NS) and Triple metric (TT), as
described by \cite{BGW-treeCmp:12}.
The MC metric asks for a minimum-weight one-to-one matching between the
internal nodes of both trees, i.e., the clusters $C_1$ from tree $T_1$ with
the clusters $C_2$ from tree $T_2$. For a given one-to-one matching the MC
tree distance $d_{MC}$ is defined as the sum of all weights
$h_C(p_1,p_2)=|(L(p_1) \setminus L(p_2)) \cup (L(p_2) \setminus L(p_1))|$ with
$p_1 \in C_1$ and $p_2 \in C_2$. For all unmatched clusters $p$ a weight
$|L(p)|$ is added.
The RC tree distance $d_{RC}$ is equal to the number of different clusters
in both trees divided by 2.
The NS metric computes for each tree $T_i$ a matrix $l(T_i)=(l_{xy})$ with
$x,y \in L(T_i)$ and $l_{xy}$ the length of the path from $\ensuremath{\operatorname{lca}}(x,y)$ to
$x$. The NS tree distance $d_{NS}$ is defined as the $L^2$ norm of these
matrices, i.e., $d_{NS} = \Vert l(T_1)-l(T_2)\Vert_2$.
The TT metric is based on the set of triples $\mathfrak{R}(T_i)$ displayed
by tree $T_i$. For two trees $T_1$ and $T_2$ the TT tree distance is equal
to the number of different triples in respective sets $\mathfrak{R}(T_1)$
and $\mathfrak{R}(T_2)$.
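Since a triple $\rt{(ab|c)}$ is displayed by a tree iff some inner cluster contains $a$ and $b$ but not $c$, the TT distance can be sketched directly from the cluster sets (plain Python, toy trees):
\begin{verbatim}
from itertools import combinations

def triples(inner_clusters, leaves):
    # (ab|c) is displayed iff some inner cluster contains a, b but not c
    R = set()
    for a, b, c in combinations(sorted(leaves), 3):
        for x, y, z in ((a, b, c), (a, c, b), (b, c, a)):
            if any(x in C and y in C and z not in C for C in inner_clusters):
                R.add((min(x, y), max(x, y), z))
    return R

leaves = {"A", "B", "C", "D"}
R1 = triples([{"A", "B"}], leaves)   # tree ((A,B),C,D)
R2 = triples([{"A", "C"}], leaves)   # tree ((A,C),B,D)
print(len(R1 ^ R2))  # unnormalized TT distance: 4 differing triples
\end{verbatim}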
The four types of tree distances are implemented in the software
\texttt{TreeCmp} \cite{BGW-treeCmp:12}, together with an option to compute
normalized distances. For this purpose, average distances between
random Yule trees \cite{Yule:25} are provided for each metric
and each tree size from 4 to 1000 leaves.
These average distances are used for normalization, resulting
in a value of 0 for identical trees and a value of approximately
1 for two random trees. Note, however, that distances greater than 1 are also possible.
For the trees reconstructed from the real-life data sets we compute a
support value $s \in [0,1]$, utilizing the triple weights
$w\rt{(\alpha\beta|\gamma)}$ from Eq.~\eqref{ilp:wmax}. Precisely,
\setcounter{equation}{1}
\begin{equation}
s = \frac{\sum_{\rt{(\alpha\beta|\gamma)} \in \mathbb S^*}
w\rt{(\alpha\beta|\gamma)}}{\sum_{\rt{(\alpha\beta|\gamma)} \in \mathbb S^*}
\left[ w\rt{(\alpha\beta|\gamma)}+w\rt{(\alpha\gamma|\beta)}+w\rt{(\beta\gamma|\alpha)} \right]}
\end{equation}
The support value of a reconstructed tree indicates how often the triples
from the computed maximal consistent subset $\mathbb S^*$ were obtained
from the data in relation to the frequency of all obtained triples.
It is equal to 1 if there was no ambiguity in the data.
Values around 0.33 indicate randomness.
In a similar way, we define support values for each subtree $T(v)$ of the
resulting species tree $T$. To this end, let $\mathbb S^v = \{\rt{(\alpha\beta|\gamma)} \in
\mathfrak{R}(T) \mid \alpha, \beta \in L(v), \gamma \notin L(v)\}$ be the subset of
the triples displayed by $T$ with the two closer related species being
leaves in the subtree $T(v)$ and the third species not from this
subtree. Then, the subtree support is defined as:
\begin{equation}
s_v = \frac{\sum_{\rt{(\alpha\beta|\gamma)} \in \mathbb S^v}
w\rt{(\alpha\beta|\gamma)}}{\sum_{\rt{(\alpha\beta|\gamma)} \in \mathbb S^v}
\left[ w\rt{(\alpha\beta|\gamma)}+w\rt{(\alpha\gamma|\beta)}+w\rt{(\beta\gamma|\alpha)} \right]}
\end{equation}
Note that $\mathbb S^v$ only contains triples that support a subtree with leaf set
$L(v)$. Therefore, the subtree support indicates how often triples are
obtained supporting this subtree in relation to the frequency of all
triples supporting the existence or non-existence of this subtree.
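Given the weights, both support values amount to the same computation; a small Python sketch with illustrative toy weights:
\begin{verbatim}
# s (or s_v) from triple weights; triples are stored as (a, b, c) with
# a < b for the pair in (ab|c), as in the listings above
def support(triple_set, w):
    num = sum(w.get(t, 0) for t in triple_set)
    den = sum(w.get((a, b, c), 0)
              + w.get((min(a, c), max(a, c), b), 0)
              + w.get((min(b, c), max(b, c), a), 0)
              for (a, b, c) in triple_set)
    return num / den if den else float("nan")

w = {("A", "B", "C"): 8, ("A", "C", "B"): 1, ("B", "C", "A"): 1}
print(support({("A", "B", "C")}, w))   # 8 / (8 + 1 + 1) = 0.8
\end{verbatim}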
In addition, bootstrap trees are constructed for each data set, using two
different bootstrapping approaches: (i) bootstrapping based on components,
and (ii) bootstrapping based on triples. Let $m$ be the number of pairwise
disconnected components from the orthology graph $G_{\Theta^*}$, $n_i$ the
number of species triples extracted from component $i$, and
$n=\sum_{i=1}^{m} n_i$. In the first approach we randomly select $m$
components with replacement from $G_{\Theta^*}$. Then we extract the
respective species triples and compute the maximal consistent subset and
least resolved tree. In the second approach we randomly select $n$ triples
with replacement from $\mathbb S$. Each triple $\rt{(\alpha\beta|\gamma)}$ is
chosen with a probability according to its relative frequency
$w\rt{(\alpha\beta|\gamma)} / n$. From this set the maximal consistent subset and
least resolved tree is computed. Bootstrapping is repeated $100$
times. Majority-rule consensus trees are computed with the software
\texttt{CONSENSE} from the \texttt{PHYLIP} package.
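For illustration, the resampling step of the triple-based bootstrap might look as follows (a sketch only; \texttt{w} maps each triple to its integer count, and each draw would subsequently be fed into the maximal consistent subset and least resolved tree computations described above):
\begin{verbatim}
import random

def triple_bootstrap(w, rounds=100, seed=0):
    # w: dict mapping species triples (ab|c) to integer counts w(ab|c)
    rng = random.Random(seed)
    triples, weights = zip(*w.items())
    n = sum(weights)   # total number of extracted triples
    return [rng.choices(triples, weights=weights, k=n)
            for _ in range(rounds)]
\end{verbatim}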
\section{Additional Results}
\subsection{Robustness against Noise from Horizontal Gene Transfer}
\label{sec:insens}
Horizontal gene transfer (HGT) is by far the most common deviation
from vertical inheritance, e.g.\ \cite{Grilli:14}. The key problem with
HGT in the context of orthology prediction is that pairs of genes that
derive from a speciation rather than a duplication event may end up in
the same genome. Pairs of genes in the same genome are classified
as paralogs by the initial orthology detection heuristics and
subsequently by ILP constraint \eqref{ilp:forbid_E} during cograph
editing. Such \emph{pseudo-orthologous} pairs can lead to a misplaced
node with an incorrect event label in the cotree. This may, under
some circumstances, lead to the inference of false species triples, see
Figure \ref{fig:HGT-wrongTriples}. Note that the latter problem remains
even if all events on the gene tree were detected correctly but the
triple sets $\mathbb{G}$ and $\mathbb{S}$ were used without any additional
restrictions. Again, Fig.~\ref{fig:HGT-wrongTriples} serves as an example.
Therefore, it is of central interest to understand in more detail the relation
between symbolic representations, reconciliation maps
and triple sets that also take HGT into account, which might solve
this problem.
When all true paralogs are known, we obtain surprisingly accurate species
trees, see Figure \ref{fig:simXenology}(B).
The species trees reconstructed from a perfect orthology relation are somewhat
less accurate, see Figure \ref{fig:simXenology} (C).
The most pressing practical
issue, therefore, is to identify true paralogs, i.e., pairs of genes that
originate from a duplication event and subsequently are inherited only
vertically. In addition, phylogeny-free
methods to identify xenologs e.g.\ based on sequence composition
\cite{Jaron:14,Tsirigos:05} are a promising avenue for future work to
improve the initial estimates of orthology and paralogy.
\subsection{Simulated Data}
The results for simulated data sets with a varying number of independent
gene families suggest that a few hundred gene families are sufficient to
contain enough information for reconstructing proper phylogenetic species
trees. The reconstructions for data sets generated with \texttt{ALF}\ need
many more gene families to obtain a similar accuracy, as compared to
simulations with the first simulation method. This can be explained by
the fact that the simulations of the first method resulted in a higher
amount of paralogs, ranging from $13.5\%$ to $32.6\%$, compared to the
\texttt{ALF}\ simulations ($7.5\%$ to $11.2\%$). Another reason is that, due to
the construction of the gene trees used for \texttt{ALF}\ simulations, the
distribution of branch lengths, and hence the distribution of
duplications along the species tree, is very heterogeneous. The average
percentage of short branches (for which less than 1 duplication is
expected, using a duplication rate of 0.005 and $n$ gene families)
ranges from $11.3\%$ ($5$ species, $500$ gene families) to $33.6\%$
($20$ species, $100$ gene families). Note that the lack of duplications
leads to species trees that are not fully resolved, and hence have a
larger distance to the generated trees used for the simulation. Figures
\ref{fig:simFamilyFull} (first simulation method) and
\ref{fig:simFamilyFullALF} (\texttt{ALF}\ simulations) show boxplots for the four
tree distances as a function of the number of independent gene families.
The complete results for the 2000 simulated data sets of 10 species and
100, resp. 1000 gene families with a varying amount of noise are depicted
in Figures \ref{fig:simNoiseFull} (first simulation method) and
\ref{fig:simNoiseFullALF} (\texttt{ALF}).
The results for simulated data sets with horizontal gene transfer show
that our method is very robust against noise introduced by horizontal
gene transfer, which appears as mis-predicted orthology. Even xenologous
noise of up to 39.5\% of the homologous pairs had only a minor impact on
the obtained tree distances. The triple support values $s$ for the
reconstructed species trees range between 0.978 (HGT rate 0.0025)
and 0.943 (HGT rate 0.0075). This shows that only very few false species
triples have been inferred. However, these triples could be excluded
during the computation of the maximal consistent subset, as they are
usually dominated by the amount of correctly identified species triples.
The small differences between generated and reconstructed species trees
can be explained by the fact that the method forces homologous genes
within the same species to be paralogous, although, due to horizontal
gene transfer their lowest common ancestor can be a speciation event.
This leads to the estimated orthology not being a cograph, introducing
errors during the cograph editing step. Figure \ref{fig:simXenology}
shows boxplots for the tree distance as a function of the percentage of
xenologous noise.
Computations with the different tree building methods (i.e.,
(i) the \texttt{BUILD} algorithm, (ii) least
resolved trees with minimum number of vertices, and (iii) least
resolved trees that
minimize the number of additional triples $r \notin \ensuremath{\operatorname{cl}}(\mathbb{S}^*)$)
showed no influence of the used method on the
resulting species trees. For none of the 100 data sets the correct
tree was reconstructed, since -- expectedly -- the added noise
introduces a few spurious triples that incorrectly resolve non-binary
vertices. Indeed, all trees were (not fully resolved) refinements of the target tree.
Furthermore, the triple distance between the reconstructed trees and the
target tree was minute, with an average of $0.04$.
Interestingly, for each of the 100 data sets, the respective three
trees reconstructed with the three different tree building methods
were always identical. The consistent sets of triples obtained
from the 100 data sets always identified a certain tree $T$. As
demonstrated by Lemma \ref{lem:allEqual}, under this condition all
three methods necessarily yield the same result. This finding suggests
that our choice of the least resolved tree could be replaced by
\texttt{BUILD} as an efficient heuristic. For the analyzed data sets \texttt{BUILD}
used less than 3 milliseconds of computation time and was approximately $10^5$ times faster
than the other two methods.
A head-to-head comparison of the two ILP methods shows that the method which minimizes
the number of additional triples (153 seconds on average) was approximately three times faster
than the method which minimizes the number of vertices (496 seconds on average).
\subsection{Real-life Data}
Figure \ref{fig:aquificales} depicts the phylogenetic tree of
\emph{Aquificales} species obtained from paralogy data in comparison to the
tree suggested by \cite{Lechner:14b}. The trees obtained from
bootstrapping experiments are given in Figure
\ref{fig:aquificales}.
The majority-rule consensus trees for both bootstrapping approaches are
identical to the previously computed tree. However, the bootstrap support
appears to be smaller next to the leaves. This is in particular the case
for closely related species with only a few duplicated genes exclusively
found in one of the species.
Figure \ref{fig:enterobacteriales} depicts the phylogenetic tree of
\emph{Enterobacteriales} species obtained from paralogy data in comparison
to the tree from \texttt{PATRIC} database \cite{Wattam:13}. The trees
obtained from bootstrapping experiments are given in Figure
\ref{fig:enterobacteriales}.
When assuming the \texttt{PATRIC} tree to be
correct, the subtree support values appear to be a much more reliable
indicator than the bootstrap values.
\subsection{Additional Comments on Running Time}
The CPLEX Optimizer is capable of solving instances with up to a
few thousand variables. As the ILP formulation for cograph editing requires
$O(|\ensuremath{\mathfrak{G}}|^2)$ many variables, we can solve instances with up to 100 genes
per connected component in $G_\Theta$. However, for our computations we
limit the size of each component to 50 genes. Furthermore, the ILP
formulations for finding the maximal consistent triple set and least
resolved species tree requires $O(|\ensuremath{\mathfrak{S}}|^3)$ many variables. Hence, problem
instances of up to about 20 species can be processed.
Table \ref{tab:runtimeExtended} shows the running times for simulated and
real-life data sets for each individual sub-task. Note that the time used
for cograph editing is quite high, compared to the other sub-tasks. This is
due to the fact, that cograph editing if performed for each connected component
in $G_\Theta$ individually, and initializing the ILP solver is a relevant
factor. In the implementation we first perform a check, if for a
given gene family cograph editing has to be performed.
Triple extraction is performed with a polynomial time algorithm.
Another oddity is the extraordinarily short running time for the computation of the
maximal consistent subset of species triples in the
\emph{Enterobacteriales} data set. During the bootstrapping experiments for
this set much longer times were observed.
\clearpage
\begin{figure*}[tp]
\centering
\includegraphics[width=0.4\textwidth]{./SI_fig_hgt.ps}\\
\caption{Shown is a gene tree $T$ on $\ensuremath{\mathfrak{G}} = \{a,b,c1,c2,d\}$ evolving along the species tree $S$
on $\ensuremath{\mathfrak{S}}=\{A,B,C,D\}$.
In this scenario false gene triples in $\mathbb{G}$ and thus, false species triples in $\mathbb{S}$
are introduced, due to the HGT-event ($\triangle$) followed by a duplication event ($\Box$) and
certain losses (\textbf{x}).
Here, we obtain that $\rt{(bc2|a)}\in \mathbb{G}$ and thus $\rt{(BC|A)}\in \mathbb{S}$, contradicting
that $\rt{(AB|C)} \in \mathfrak{R}(S)$.}
\label{fig:HGT-wrongTriples}
\end{figure*}
\begin{figure*}[tp]
\begin{center}
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_familiesMC.eps}\\
\end{minipage}
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_familiesRC.eps}\\
\end{minipage}
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_familiesNS.eps}\\
\end{minipage}
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_familiesTT.eps}\\
\end{minipage}
\end{center}
\caption{Matching Cluster (MC), Robinson-Foulds (RC), Nodal Splitted (NS)
and Triple metric (TT) tree distances of 100 reconstructed phylogenetic
trees with (from left to right) five, ten, and 15 species and 10 to 100 gene
families, each. Simulations are generated with first simulation method.}
\label{fig:simFamilyFull}
\end{figure*}
\begin{figure*}[tp]
\begin{center}
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_familiesMC_ALF.eps}\\
\end{minipage}
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_familiesRC_ALF.eps}\\
\end{minipage}
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_familiesNS_ALF.eps}\\
\end{minipage}
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_familiesTT_ALF.eps}\\
\end{minipage}
\end{center}
\caption{Matching Cluster (MC), Robinson-Foulds (RC), Nodal Splitted (NS)
and Triple metric (TT) tree distances of 100 reconstructed phylogenetic
trees with (from left to right) five, ten, 15, and 20 species and 100 to 500 gene
families, each. Simulations are generated with \texttt{ALF}.}
\label{fig:simFamilyFullALF}
\end{figure*}
\begin{figure*}[htb]
\begin{center}
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_noiseMC.eps}\\
\end{minipage}
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_noiseRC.eps}\\
\end{minipage}
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_noiseNS.eps}\\
\end{minipage}
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_noiseTT.eps}\\
\end{minipage}
\end{center}
\caption{Matching Cluster (MC), Robinson-Foulds (RC), Nodal Splitted (NS)
and Triple metric (TT) tree distances of 100 reconstructed phylogenetic
trees with ten species and 100 gene families generated with first simulation method.
For each model noise was added with a probability of 0.05 to 0.25.}
\label{fig:simNoiseFull}
\end{figure*}
\begin{figure*}[htb]
\begin{center}
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_noiseMC_ALF.eps}\\
\end{minipage}
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_noiseRC_ALF.eps}\\
\end{minipage}
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_noiseNS_ALF.eps}\\
\end{minipage}
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_noiseTT_ALF.eps}\\
\end{minipage}
\end{center}
\caption{Matching Cluster (MC), Robinson-Foulds (RC), Nodal Splitted (NS)
and Triple metric (TT) tree distances of 100 reconstructed phylogenetic
trees with ten species and 1000 gene families generated with \texttt{ALF}.
For each model noise was added with a probability of 0.05 to 0.25.}
\label{fig:simNoiseFullALF}
\end{figure*}
\begin{figure*}[tp]
\begin{center}
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_hgtnoisePO.eps}\\
(A)
\end{minipage}
\vspace{2em}\\
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_hgtnoiseALF.eps}\\
(B)
\end{minipage}
\vspace{2em}\\
\begin{minipage}{0.98\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{./SI_plots_hgtnoiseALF2.eps}\\
(C)
\end{minipage}
\end{center}
\caption{Matching Cluster (MC), Robinson-Foulds (RC), Nodal Splitted (NS)
and Triple metric (TT) tree distances of 100 reconstructed phylogenetic
trees with ten species. \texttt{ALF}\ simulations are performed with duplication/loss rates of $0.005 \cong 6.1\%$
and HGT rates of $0.0025$ to $0.0075$, resulting in xenologous noise between $0.0\%$ and $39.5\%$.
Reconstructions are based on (A) \texttt{Proteinortho} orthology estimation, (B) perfect paralogy knowledge, and (C) perfect orthology knowledge.}
\label{fig:simXenology}
\end{figure*}
\begin{table}[htb]
\caption{Running time in seconds on 2 Six-Core AMD Opteron\textsuperscript{\texttrademark} Processors with 2.6GHz for individual sub-tasks:
\textbf{CE} cograph editing,
\textbf{MCS} maximal consistent subset of triples,
\textbf{LRT} least resolved tree.}{
\label{tab:runtimeExtended}
\begin{tabular}{lrrlr}
\hline \\
\textbf{Data} &
\quad\textbf{CE}\quad &
\quad\textbf{MCS}\quad &
\quad\textbf{LRT}\quad &
\quad\textbf{Total\footnote{Total time includes triple extraction, parsing input, and writing output files.}} \\
Simulations\footnote{Average of 2000 simulations generated with \texttt{ALF}, 10 species,
1000 gene families.}
& $125$\footnote{2,000,000 cographs, 41 not optimally solved within time limit of 30 min.}
& $<1$
& $<1$\footnote{In $95.95\%$ of the simulations the least resolved tree could be found using \texttt{BUILD}.}
& $126$\\
\emph{Aquificales} & $34$
& $<1$
& $<1\ (6)$\footnote{A unique tree was obtained using \texttt{BUILD}. Second value indicates running time with ILP solving enforced. \label{footnoteBUILDsupp}}
& $34$\\
\emph{Enterobacteriales} & $2673$
& $2$\footnote{Note that the bootstrap computations had a much longer running time ($125$ sec. on average).}
& $<1\ (1749)^\P$
& $2676$ \\
\end{tabular}}
\end{table}
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.4\textwidth]{./SI_trees_aquificales.eps}
\hspace{2em}
\includegraphics[width=0.4\textwidth]{./SI_trees_aquificalesLECHNER.eps}\\
\includegraphics[width=0.4\textwidth]{./SI_trees_aquificales.cboot.eps}
\hspace{2em}
\includegraphics[width=0.4\textwidth]{./SI_trees_aquificales.tboot.eps}\\
\end{center}
\caption{Phylogenetic tree of eleven \emph{Aquificales} species. Top, L.h.s.:
tree computed from paralogy data. Internal node labels indicate support
of subtrees. Top, R.h.s.: reference tree from \cite{Lechner:14b}.
Cograph-based (bottom, l.h.s.) and triple-based (bottom, r.h.s.) bootstrapping
trees of the eleven \emph{Aquificales} species.
}
\label{fig:aquificales}
\end{figure*}
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.4\textwidth]{./SI_trees_enterobacteriales.eps}
\hspace{2em}
\includegraphics[width=0.4\textwidth]{./SI_trees_enterobacterialesPATRIC.eps}\\
\includegraphics[width=0.4\textwidth]{./SI_trees_enterobacteriales.cboot.eps}
\hspace{2em}
\includegraphics[width=0.4\textwidth]{./SI_trees_enterobacteriales.tboot.eps}\\
\end{center}
\caption{Phylogenetic trees of 19 \emph{Enterobacteriales} species. Top, L.h.s.:
tree computed from paralogy data. Internal node labels indicate support
of subtrees. Top, R.h.s.: reference tree from \texttt{PATRIC} database,
projected to the 19 considered species. \emph{Salmonella typhimurium} is
missing in \texttt{PATRIC} tree.
Cograph-based (bottom, l.h.s.) and triple-based (bottom, r.h.s.) bootstrapping
trees of the 19 \emph{Enterobacteriales} species.
}
\label{fig:enterobacteriales}
\end{figure*}
\clearpage
\bibliographystyle{unsrt}
In hadron-hadron collisions, the $W/Z$+$b$- or $c$-jet final state can signal
the presence of new physics; however, only a few measurements
\cite{cdfZbpaper,d0Zbpaper,cdfWcpaper} of cross sections for these standard model processes exist. \ Charm quark production in association with a $W$ boson can be a significant background, for example, to top quark pair, single top quark and Higgs boson productions, and to supersymmetric top quark (stop) pair production when only the $\tilde{t}\rightarrow c \tilde{\chi}_{1}^{0}$ decay channel is allowed by the mass difference between the stop quark and the neutralino. \ Moreover, as the squared Cabibbo-Kobayashi-Maskawa matrix element, $|V_{cd}|^{2}$, suppresses the expected leading order $d$ quark-gluon fusion production mechanism, $W$+$c$-quark production provides direct sensitivity to the proton's $s$ quark parton distribution function (PDF), $s(x,Q^{2})$, where $x$ is the momentum fraction of the proton carried by the $s$-quark and $\sqrt{Q^{2}}$ is the hard scatter scale~\cite{Wctheorypaper}. \ This PDF has been measured directly only in fixed target neutrino-nucleon deep inelastic scattering experiments using relatively low momentum transfer squared, $Q^{2}$, of the order $1-100$ GeV$^{2}$~\cite{mason,d0altonMGon,strangeseapaper1,strangeseapaper2,charmII,CDHS1,CDHS2}. \ A probe of the $s$ quark PDF at the Tevatron tests the universality of $s(x,Q^{2})$ and its QCD evolution up to $Q^{2}=10^{4}$ GeV$^{2}$. \ The strange quark PDF initiates both standard model (\text{e.g.}, $sg\rightarrow W^{-}$+$c$) and possible new physics processes (\text{e.g.}, $s\bar{c}$ $\rightarrow H^{-}$)~\cite{recentCTEQpaper} at both the Fermilab Tevatron and CERN LHC colliders. \newline\indent In this Letter, we present a measurement of the cross section ratio $\sigma(p\bar{p}\rightarrow W$+$c$-jet$)$ $/\sigma
(p\bar{p}\rightarrow W$+jets$)$ as a function of jet transverse momentum $p_T$, where $W$+$c$-jet denotes a $W$ boson plus
jets final state in which the jets have a net charm quantum number $C=\pm1$,
and $W$+jets denotes any $W$ boson final state with at least one jet.
\ Several experimental uncertainties (e.g., luminosity, jet energy scale, and
reconstruction efficiency) and theoretical uncertainties (e.g.,
renormalization and factorization scales) largely cancel in this ratio.
\newline\indent This measurement utilizes approximately $1$ fb$^{-1}$ of $p\bar{p}$ collision data at a center-of-mass energy $\sqrt{s}=1.96$ TeV collected with the D0 detector at the Fermilab Tevatron collider. \ We identify $W$ bosons through their leptonic decays, $W\rightarrow\ell\nu$, where $\ell=e,\mu$. \ $W$ bosons decaying to tau leptons are included for leptonic tau decays $\tau \rightarrow e \bar{\nu}_{e} \nu_{\tau}$ or $\tau \rightarrow \mu \bar{\nu}_{\mu} \nu_{\tau}$. \ The electron or muon from $W$ boson decays are required to be isolated, and their transverse momentum $p_{T}$ must satisfy $p_{T}>20$ GeV. \ The presence of a neutrino is inferred from the requirement that the missing transverse energy
\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T$ } satisfies
\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T$ }$>20$ GeV. \ Jets are defined using the iterative seed-based midpoint cone algorithm~\cite{d0jets} with cone radius of 0.5. \ We restrict the transverse momentum of the jet to $p_{T}>20$ GeV after it is calibrated for the calorimeter jet energy scale (JES), and its pseudorapidity to $|\eta|<2.5$, where $\eta = -\ln\left[\tan\left(\theta/2\right)\right]$ and $\theta$ is the polar angle with respect to the proton beam direction. \ We correct the jet measurement to the particle level~\cite{parjet} for comparison with theory.
\newline\indent A muon reconstructed within a jet (a \textquotedblleft jet-muon\textquotedblright) identifies that jet (a \textquotedblleft$\mu$-tagged jet\textquotedblright) as a charm quark candidate. \ Events containing a jet-muon enrich a sample in $b$/$c$ semileptonic decays. \ Events with the jet-muon's charge opposite to or equal to that of the $W$ boson are denoted as \textquotedblleft OS\textquotedblright\ or \textquotedblleft SS\textquotedblright\ events, respectively. \ In the $W$+$c$-jet process, the charm quark decays into a muon carrying an opposite-sign charge compared to that carried by the $W$ boson, and the numbers of OS and SS events, $N_{\text{OS}}$ and $N_{\text{SS}}$, respectively, satisfy $N_{\text{OS}}\gg N_{\text{SS}}$. \ In the $W$+$c$-jet sample, $N_{\text{SS}}$ can be non-zero because a jet initiated by a $c$ quark has a small probability of containing a muon from the decay of particles other than the leading charm quark.
\ Other vector boson+jets physics processes ($W$+$g$, $W$+$c\bar{c}$, $W$+$b\bar{b}$, $Z$+jets) can produce $\mu$-tagged jets, but the charge of the jet-muon is uncorrelated with that of the boson, hence $N_{\text{OS}}\approx N_{\text{SS}}$ for these sources. \ Processes with light-quark ($u$, $d$ or $s$) initiated jets recoiling against the $W$ boson can produce a small fraction of charge-correlated $\mu$-tagged jets owing to leading particle effects~\cite{lpeffect}. \ Background from $WW$ production contributes only a small amount to the signal sample. \ The $WZ$ and $ZZ$ processes only rarely produce charge-correlated jets. \ Other final states that can produce charge-correlated jets ($t\bar{t}$, $t\bar{b}$, $W$+$b\bar{c}$ and $W$+$b$) are suppressed by small production cross sections or tiny CKM matrix elements. \ These considerations allow a measurement of the $W$+$c$-jet production rate from OS events with the backgrounds determined \textit{in situ} from SS events, up to small weakly model-dependent theory corrections.
\newline\indent The D0 detector~\cite{d0det} is a multi-purpose device built to investigate $p\bar{p}$ collisions. \ The innermost silicon microstrip detectors followed by the scintillating fiber tracking detector, covering pseudorapidity $|\eta| \lesssim 3.0$ and located inside the 2 T superconducting solenoid, are used for tracking and vertexing. \ The liquid-argon and uranium calorimeter, a finely segmented detector in the transverse and the longitudinal directions, is used as the primary tool to reconstruct electrons, jets, and the missing transverse energy. \ It is housed in three cryostats, one central calorimeter in the region $|\eta|<1.1$ and two end caps extending the coverage to $|\eta|\approx4.0$. \ The outermost subsystem of the D0 detector is the muon spectrometer, consisting of three layers of muon tracking subdetectors and scintillation trigger counters, which is used to reconstruct muons up to $|\eta|\approx2.0$. \ The first layer is situated before the 1.8 T iron toroid and the other two layers are outside, enclosing the detector.
\newline\indent Candidate events in the electron (muon) decay channel of the $W$ boson must pass at least one of the single electron (muon) three-level (L1, L2 and L3) triggers used in each data taking period. \ Each level of trigger imposes tighter restrictions on the events compared to those of the preceding level. \ The single muon triggers at L1 impose hit requirements in the muon scintillators. \ Some of the triggers also require spatially matched hits in the muon tracking detectors. \ The conditions at L2 require a reconstructed muon with $p_T$ above a threshold in the range $0$ -- $5$ GeV for various triggers. \ At L3, certain triggers require a track reconstructed in the inner tracking system with $p_T>10$ GeV. \ The ratio measurement benefits from full cancellation of the trigger efficiency in the electron channel. \ This cancellation is partial in the muon channel due to the presence of two muons in the $W$+$c$-jet sample.
\newline\indent Selection of $W\rightarrow e\nu$ candidates begins with the requirement that a cluster of energy be found that is consistent with the presence of an electron in the calorimeter. \ The cluster must: have at least $90\%$ of its energy contained in the electromagnetic part of the calorimeter; have a reconstructed track from the inner tracking system pointing to it; be isolated from other clusters in that the fraction of the energy deposited in an annulus $0.2<\Delta\mathcal{R}<0.4$ around the EM cluster, where $\Delta\mathcal{R}=\sqrt{(\Delta\phi)^2+(\Delta\eta)^2}$ and $\phi$ is the azimuthal angle, is less than $15\%$ of the electromagnetic energy within the cone of radius $\Delta\mathcal{R}=0.2$; have longitudinal and transverse energy deposition profiles consistent with those expected for an electron; and satisfy a likelihood discriminant selection that combines tracker and calorimeter information using the expected distributions for electrons and jet background. \ The electron track's point of closest approach to the $z$-axis must be within $3$ cm of the $p\bar{p}$ interaction point, which must lie within $60$ cm of the nominal detector center.
\newline\indent Selection of $W\rightarrow\mu\nu$ candidates begins by requiring that a muon candidate be found in the muon spectrometer with a track matched to one found in the central tracker. \ Rejection of cosmic ray background events demands that the central tracker track pass within $0.02$ or $0.2$ cm of the beam crossing point in the transverse plane, depending on whether the track is reconstructed with or without hits, respectively, in the silicon detector, and that the point of closest approach of the track should be within $3$ cm of the interaction point along the $z$-axis. \ Further cosmic ray rejection comes from scintillator timing information in the muon spectrometer. \ Requiring the $W$ boson candidate muon track to be separated from the axis formed by any jet found in the event by $\Delta\mathcal{R}(\mu,\text{jet})>0.5$ suppresses backgrounds from semileptonic decays of heavy flavor quarks in multi-jet events.
\newline\indent For the final selection in both channels, each event must satisfy the transverse mass requirement $40\leq M_{T}\leq120$ GeV, where $M_{T}=\sqrt{2\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T$} p_{T}^{\ell}\left[ 1-\cos\Delta\phi(\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T$}, p_{T}^{\ell})\right]}$ is computed from the isolated lepton $p_{T}^{\ell}$ and the $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T$}$, and must have an azimuthal angular separation between the isolated lepton and $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T$}$ directions greater than 0.4. \ Events must contain at least one jet with $p_{T}>20$ GeV, after the calorimeter jet energy scale (JES) correction is applied, and $|\eta|<2.5$. \ Upon application of all selection criteria, $N_{Wj}^{e}=82747$ and $N_{Wj}^{\mu}=57944$ $W$+jets candidates remain in the electron and muon channels, respectively. \newline\indent Backgrounds originate from photons and jets that are misidentified as electrons and from $c\bar{c}$ and $b\bar{b}$ multi-jet events that produce an isolated muon. \ These multi-jet backgrounds are determined directly from the data using a \textquotedblleft matrix method\textquotedblright\ consisting of the following steps: \ first, \textquotedblleft loose\textquotedblright\ $W(\rightarrow\ell\nu)$+jets datasets are selected through application of all previously described selection criteria in each channel with the exception that the lepton isolation requirements are relaxed. \ This produces a set of loose candidate events, $N_{L}^{\ell}$, in each lepton channel consisting of a mixture of real $W$+jets events, $N_{W}^{\ell}$, and multi-jet background events, $N_{\text{MJ}}^{\ell}$, with $N_{L}^{\ell}=N_{W}^{\ell}+N_{\text{MJ}}^{\ell}$. \ Application of the stricter lepton isolation criteria used to extract the signal changes this mixture to $N_{T}^{\ell}=\epsilon_{W}^{\ell}N_{W}^{\ell}+\epsilon_{\text{MJ}}^{\ell}N_{\text{MJ}}^{\ell}$, where $N_{T}^{\ell}$ signifies the number of events in each channel satisfying the tighter isolation criteria and $\epsilon_{W}^{\ell}$ and $\epsilon_{\text{MJ}}^{\ell}$ denote the relative probabilities for loosely selected $W$ boson and multi-jet events, respectively, to satisfy the stricter isolation criteria. \ A large sample of two-jet events is used to measure $\epsilon_{\text{MJ}}^{e}$, and $\epsilon_{\text{MJ}}^{\mu}$ is estimated from a similar two-jet dataset using a sample of back-to-back muon-plus-jet events with low $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T$}$. \ The factors $\epsilon_{W}^{\ell}$ are obtained from a large data sample of $Z\rightarrow\ell^{+}\ell^{-}$ events. \ Solving the equations for $N_{W}^{\ell}$ and $N_{\text{MJ}}^{\ell}$ yields estimates for the fractional contributions of multi-jet background to the inclusive $W$+jets signal of $f_{\text{MJ}}^{e}=(3.2\pm0.8)\%$ in the electron channel and $f_{\text{MJ}}^{\mu}=(4.1\pm3.0)\%$ in the muon channel. \newline\indent The channel $Z(\rightarrow\ell^{+}\ell^{-})+$jets contributes as background when one of the leptons from the $Z$ boson decay fails to be reconstructed.
\ An estimate of this background follows from MC simulations of $Z$+jets production produced with \textsc{alpgen v2.05}~\cite{alpgen} using the \textsc{cteq6l}~\cite{cteq} PDF set, the \textsc{pythia v6.323}~\cite{pythia} generator for the parton fragmentation and hadronization, the \textsc{mlm} prescription~\cite{mlm} to avoid an over-counting of final state jets, and \textsc{evtgen}~\cite{evtgen} to decay the heavy hadrons. \ A \textsc{geant}~\cite{geant} based program simulates the effects of detector response, and the same reconstruction software is employed as used for data. \ This procedure results in estimates for the fractional contaminations of $f_{Z}^{e}=(0.9\pm0.1)\%$ for $Z(\rightarrow e^{+} e^{-})+$jets and $f_{Z}^{\mu}=(5.0\pm0.7)\%$ for $Z(\rightarrow\mu^{+}\mu^{-})+$jets. \ Quoted uncertainties derive mainly from systematic effects in the $Z$+jets \textsc{alpgen} cross section model that are estimated by varying the relative cross section of $W$+jets with respect to $Z$+jets by its uncertainties. \\
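\indent For reference, the matrix method described above amounts to inverting a $2\times2$ linear system; solving the two equations explicitly gives
\begin{eqnarray*}
N_{W}^{\ell} \;=\; \frac{N_{T}^{\ell}-\epsilon_{\text{MJ}}^{\ell}N_{L}^{\ell}}{\epsilon_{W}^{\ell}-\epsilon_{\text{MJ}}^{\ell}}, \qquad
N_{\text{MJ}}^{\ell} \;=\; \frac{\epsilon_{W}^{\ell}N_{L}^{\ell}-N_{T}^{\ell}}{\epsilon_{W}^{\ell}-\epsilon_{\text{MJ}}^{\ell}},
\end{eqnarray*}
which is well defined provided $\epsilon_{W}^{\ell}\neq\epsilon_{\text{MJ}}^{\ell}$. \\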
\begin{table*}[hptb]
\caption{Summary of quantities to estimate the $W$+$c$-jet cross section ratio. \ The first uncertainties quoted are statistical and the second systematic.}
\label{tab:eventY}
\centering
\begin{ruledtabular}
\begin{tabular}[c]{ccccc}
jet $p_T$ [GeV] & ($20$--$30$) & ($30$--$45$) & ($45$--$200$) & ($20$--$200$) \\
\hline
\multicolumn{5}{c}{$W \rightarrow e \nu$ decay channel} \\
\hline
$N_{Wj}^{e}$ & 35695 & 24412 & 22640 & 82747 \\
$N_{\text{OS}}^{e}$ & 83 & 77 & 85 & 245 \\
$N_{\text{SS}}^{e}$ & 45 & 41 & 68 & 154 \\
$N_{\text{OS}}^{e,\text{MJ}}$ & 4.5$\pm$1.0$\pm$1.2 & 4.2$\pm$0.9$\pm$1.1 & 4.6$\pm$1.0$\pm$1.2 & 13.3$\pm$2.6$\pm$3.4 \\
$N_{\text{SS}}^{e,\text{MJ}}$ & 5.6$\pm$1.1$\pm$1.4 & 5.1$\pm$1.0$\pm$1.3 & 8.5$\pm$1.5$\pm$2.2 & 19.3$\pm$2.9$\pm$4.9 \\
$N_{\text{OS}}^{e,WW}$ & 1.8$\pm$0.6 & 2.1$\pm$0.7 & 2.3$\pm$0.8 & 6.2$\pm$2.1 \\
$N_{\text{SS}}^{e,WW}$ & 0.4$\pm$0.1 & 0.6$\pm$0.2 & 0.9$\pm$0.3 & 1.9$\pm$0.5 \\
$N_{\text{OS}}^{e,t\bar{t}}$ & 2.4$\pm$0.6 & 4.6$\pm$1.1 & 11.8$\pm$2.8 & 18.8$\pm$4.5 \\
$N_{\text{SS}}^{e,t\bar{t}}$ & 2.1$\pm$0.5 & 4.1$\pm$1.0 & 10.0$\pm$2.4 & 16.1$\pm$3.9 \\
$N_{\text{OS}}^{e,t\bar{b}}$ & 1.1$\pm$0.3 & 2.1$\pm$0.6 & 3.1$\pm$0.9 & 6.3$\pm$1.8 \\
$N_{\text{SS}}^{e,t\bar{b}}$ & 0.8$\pm$0.2 & 1.4$\pm$0.4 & 2.5$\pm$0.7 & 4.6$\pm$1.3 \\
$f_c^{e}$ & 1.183$\pm$0.017$\pm$0.018 & 1.164$\pm$0.019$\pm$0.017 & 1.118$\pm$0.024$\pm$0.017 & 1.149$\pm$0.007$\pm$0.017 \\
$\epsilon_{c}^{e}$ & 0.0113$\pm$0.0015$^{+0.0017}_{-0.0017}$ & 0.0125$\pm$0.0011$^{+0.0019}_{-0.0019}$ & 0.0125$\pm$0.0020$^{+0.0019}_{-0.0019}$ & 0.0124$\pm$0.0012$^{+0.0019}_{-0.0019}$ \\
$\frac{\sigma[W(\rightarrow e \nu)+c\text{-jet}]}{\sigma[W(\rightarrow e \nu)+\text{jets}]}$ & 0.079$\pm$0.031$^{+0.013}_{-0.022}$ & 0.100$\pm$0.038$^{+0.017}_{-0.016}$ & 0.043$\pm$0.049$^{+0.007}_{-0.007}$ & 0.073$\pm$0.023$^{+0.012}_{-0.014}$ \\
\hline
\multicolumn{5}{c}{$W \rightarrow \mu \nu$ decay channel} \\
\hline
$N_{Wj}^{\mu}$ & 27378 & 17325 & 13241 & 57944 \\
$N_{\text{OS}}^{\mu}$ & 76 & 64 & 63 & 203 \\
$N_{\text{SS}}^{\mu}$ & 28 & 38 & 56 & 122 \\
$N_{\text{OS}}^{\mu,\text{MJ}}$ & 4.6$\pm$1.8$\pm$3.3 & 3.8$\pm$1.5$\pm$2.7 & 3.8$\pm$1.5$\pm$2.7 & 12.2$\pm$4.6$\pm$8.7 \\
$N_{\text{SS}}^{\mu,\text{MJ}}$ & 2.0$\pm$1.3$\pm$1.4 & 2.8$\pm$1.7$\pm$2.0 & 4.1$\pm$2.5$\pm$2.9 & 8.8$\pm$5.4$\pm$6.3 \\
$N_{\text{OS}}^{\mu,WW}$ & 0.8$\pm$0.3 & 1.6$\pm$0.5 & 1.8$\pm$0.6 & 4.2$\pm$1.6 \\
$N_{\text{SS}}^{\mu,WW}$ & 0.3$\pm$0.1 & 0.3$\pm$0.1 & 0.6$\pm$0.2 & 1.2$\pm$0.4 \\
$N_{\text{OS}}^{\mu,t\bar{t}}$ & 1.2$\pm$0.3 & 2.3$\pm$0.6 & 5.8$\pm$1.4 & 9.3$\pm$2.2 \\
$N_{\text{SS}}^{\mu,t\bar{t}}$ & 1.0$\pm$0.2 & 2.0$\pm$0.5 & 5.1$\pm$1.2 & 8.1$\pm$1.9 \\
$N_{\text{OS}}^{\mu,t\bar{b}}$ & 0.7$\pm$0.2 & 1.4$\pm$0.4 & 2.1$\pm$0.6 & 4.2$\pm$1.2 \\
$N_{\text{SS}}^{\mu,t\bar{b}}$ & 0.5$\pm$0.1 & 0.8$\pm$0.1 & 1.8$\pm$0.5 & 3.1$\pm$0.9 \\
$f_c^{\mu}$ & 1.195$\pm$0.025$\pm$0.014 & 1.174$\pm$0.027$\pm$0.013 & 1.121$\pm$0.035$\pm$0.013 & 1.148$\pm$0.007$\pm$0.013 \\
$\epsilon_{c}^{\mu}$ & 0.0110$\pm$0.0011$^{+0.0016}_{-0.0017}$ & 0.0122$\pm$0.0013$^{+0.0018}_{-0.0019}$ & 0.0148$\pm$0.0018$^{+0.0022}_{-0.0023}$ & 0.0122$\pm$0.0012$^{+0.0018}_{-0.0019}$ \\
$K_{T}^{\mu}$ & 1.18$\pm$0.02$\pm$0.12 & 1.18$\pm$0.02$\pm$0.12 & 1.18$\pm$0.02$\pm$0.12 & 1.18$\pm$0.02$\pm$0.12 \\
$\frac{\sigma[W(\rightarrow \mu \nu)+c\text{-jet}]}{\sigma[W(\rightarrow \mu \nu)+\text{jets}]}$ & 0.123$\pm$0.037$^{+0.024}_{-0.033}$ & 0.076$\pm$0.050$^{+0.016}_{-0.013}$ & 0.000$\pm$0.058$^{+0.014}_{-0.008}$ & 0.075$\pm$0.031$^{+0.015}_{-0.017}$ \\
\hline
\multicolumn{5}{c}{Combined $W \rightarrow e\nu$ and $W \rightarrow \mu \nu$ decay channels} \\
\hline
$\frac{\sigma[W(\rightarrow \ell \nu)+c\text{-jet}]}{\sigma[W(\rightarrow \ell \nu)+\text{jets}]}$ & 0.097$\pm$0.024$^{+0.016}_{-0.026}$ & 0.091$\pm$0.031$^{+0.016}_{-0.015}$ & 0.025$\pm$0.038$^{+0.005}_{-0.004}$ & 0.074$\pm$0.019$^{+0.012}_{-0.014}$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\indent Extraction of samples of $W$+$c$-jet event candidates from the $W$+jets samples follows from selecting events with a $\mu$-tagged jet. \ This jet must contain a reconstructed muon with $p_{T}>4$ GeV and $|\eta|<2.0$ that lies within a cone of $\Delta\mathcal{R}(\mu,\text{jet})<0.5$ with respect to the jet axis, and have JES corrected $p_{T}>20$ GeV before including the muon and neutrino energies. \ The muon must be detected in both of the outer two layers of the muon spectrometer, and its muon spectrometer track must be matched to a reconstructed track in the central tracker. \ Background suppression of $Z(\rightarrow\mu^{+}\mu^{-})$+jets events entails rejecting events in which the dimuon invariant mass exceeds $70$~GeV in the muon channel without restricting the charges of the muons. \ Application of all selection criteria yields $N_{\text{OS}}^{e}$ and $N_{\text{SS}}^{e}$ events in the electron channel, and $N_{\text{OS}}^{\mu}$ and $N_{\text{SS}}^{\mu}$ events in the muon channel as reported in Table~\ref{tab:eventY}. \ Estimated multi-jet backgrounds in the $\mu$-tagged jet data samples determined following the matrix method are $N_{\text{OS}}^{e,\text{MJ}}$ and $N_{\text{SS}}^{e,\text{MJ}}$ events in the electron channel, and $N_{\text{OS}}^{\mu,\text{MJ}}$ and $N_{\text{SS}}^{\mu,\text{MJ}}$ events in the muon channel as listed in Table~\ref{tab:eventY}. \ Estimates of the $WW$, $t\bar{t}$ and single top backgrounds from MC are denoted $N_{\text{OS}}^{e,WW}$, $N_{\text{SS}}^{e,WW}$, $N_{\text{OS}}^{e,t\bar{t}}$, $N_{\text{SS}}^{e,t\bar{t}}$, $N_{\text{OS}}^{e,t\bar{b}}$ and $N_{\text{SS}}^{e,t\bar{b}}$ events, respectively, in the electron channel, and $N_{\text{OS}}^{\mu,WW}$, $N_{\text{SS}}^{\mu,WW}$, $N_{\text{OS}}^{\mu,t\bar{t}}$, $N_{\text{SS}}^{\mu,t\bar{t}}$, $N_{\text{OS}}^{\mu,t\bar{b}}$ and $N_{\text{SS}}^{\mu,t\bar{b}}$ events, respectively, in the muon channel as given in Table~\ref{tab:eventY}. \ The estimate of the single top background follows from the $t\bar{b}$ and $t\bar{b}q$ events produced with the \textsc{comphep}~\cite{comphep} generator followed by full detector simulation. \ The quoted uncertainties on the $WW$, $t\bar{t}$ and single top background predictions given in Table~\ref{tab:eventY} are dominated by the uncertainties in their cross section measurements~\cite{WWcross,ttcross,tbcross}. \ Lepton charges are well measured at D0, and uncertainties from charge mis-identification are very small.
\newline\indent The acceptance times efficiencies, $\epsilon_{c}^{\ell}\left( \ell=e,\mu\right)$, of the $W$+$c$-jet selections relative to inclusive $W$+jets in each $W$ boson decay channel are estimated from the MC simulation, and include MC to data correction factors estimated using independent data calibration samples. \ The absolute efficiency of reconstructing a $W$ boson with at least one jet cancels in the ratio. \ The relative acceptance includes effects of charm quark to hadron fragmentation, charmed hadron semi-muonic decay and the residual missing calorimeter energy from the muon and neutrino in the $\mu$-tagged jet. \ The relative efficiency accounts for muon identification and track reconstruction effects. \ Charm quark fragmentation and charm hadron decay uncertainties are constrained by previous experiments~\cite{cleocquark,pdg} and contribute $4.5\%$ and $9.5\%$, respectively, to the acceptance uncertainties in both channels. \ The correction included in the acceptance for the missing contribution to the jet $p_T$ from the muon and neutrino energies adjusts the jet $p_T$ spectrum of $W$+$c$-jet candidate events appropriately to the particle level, as verified by a MC closure test. \ A large sample of $J/\psi\rightarrow\mu^{+}\mu^{-}$ events collected at D0 is employed to correct the jet-muon reconstruction efficiency, $(58.7\pm2.8)\%$, computed from the MC simulation, by a factor of $0.89\pm0.06$. \ This correction is found to be independent of the jet $p_{T}$. \ The final acceptance times efficiencies are found to be $\epsilon_{c}^{e}=(1.24\pm0.22)\%$ and $\epsilon_{c}^{\mu}=(1.22\pm0.23)\%$.
\newline\indent The presence of two muons in the muon channel increases the trigger selection efficiency of the $W$+$c$-jet candidates compared to the inclusive $W$+jets data sample. \ The divisor factor $K_{T}^{\mu} = 1.18 \pm 0.12$, extracted from the probability of events being selected when only the jet-muon fires the trigger, corrects for the bias in $W$+$c$-jet selection in the muon channel. \ In the electron channel the factor $K_{T}^{e}$ is unity as the trigger efficiency cancels in the ratio.
\newline\indent The $W$+$c$-jet cross section ratio is extracted using
\begin{widetext}
\begin{eqnarray*}
\frac{\sigma\left[ W\left( \rightarrow\ell\nu\right) +c\text{-jet}\right]
}{\sigma\left[ W\left( \rightarrow\ell\nu\right) +\text{jets}\right] }
= \frac{\frac{1}{\epsilon_{c}^{\ell} K_{T}^{\ell}}\left[ N_{\text{OS}}^{\ell}-f_{c}^{\ell}\left(N_{\text{SS}}^{\ell}-N_{\text{SS}}^{\ell,\text{MJ}}-N_{\text{SS}}^{\ell,WW}-N_{\text{SS}}^{\ell,t\bar{t}}-N_{\text{SS}}^{\ell,t\bar{b}}\right)-N_{\text{OS}}^{\ell,\text{MJ}}-N_{\text{OS}}^{\ell,WW}-N_{\text{OS}}^{\ell,t\bar{t}}-N_{\text{OS}}^{\ell,t\bar{b}}\right]}{(1-f_{Z}^{\ell}-f_{\text{MJ}}^{\ell})N_{Wj}^{\ell}},
\end{eqnarray*}
\end{widetext}
which requires one further correction in each channel, $f_{c}^{\ell}$, for the
small correlation between the jet-muon and $W$ boson charges that arises in
$W$+light-quark jet events. \ The factor $f_{c}^{\ell}$ is determined from
fully simulated $W$+jet events as the ratio of the predicted number of OS
$\mu$-tagged jets to SS $\mu$-tagged jets in all background samples that pass
the same selection criteria as defined for the data sample. \ Processes
considered include $W$+$u$,$d$,$s$, $W$+$g$, $W$+$c\bar{c}$, $W$+$b\bar{b}$,
and $W$+$c$-jet, where the $c$ quark does not decay semi-muonically in the
last case. \ The $f_{c}^{\ell}$ are parameterized in terms of
jet $p_{T}$ as $f_{c}^{\ell}=a_{\ell}+b_{\ell}\times p_{T}$, with
$a_{e}=1.223\pm0.016$, $a_{\mu}=1.241\pm0.023$, $b_{e}=-0.0017\pm0.0003$, and
$b_{\mu}=-0.0019\pm0.0004$, where all quoted uncertainties of the parameters
are statistical; \ $f_{c}^{\ell}$ decreases with increasing jet $p_{T}$ because the sub-process $q\bar{q}\rightarrow Wg$ dominates $qg\rightarrow
Wq^{\prime}$ at high jet $p_{T}$. \ Systematic uncertainties in $f_{c}^{\ell}$
arise mainly from the cross section and jet fragmentation models. \ The $f_{c}^{\ell}$ are nearly independent of the absolute charged multiplicity per jet and the $W$+light-jets cross section. \ This has been verified by comparing the ratio of all OS tracks to all SS tracks found in jets in the inclusive $W$+jets data sample. \ The $f_{c}^{\ell}$ depend instead on the $K^{\pm}/\pi^{\pm}$ ratio per jet and the relative cross section for $W$ boson plus heavy quark jet final states compared to $W$+light-jets. \ A $6\%$ uncertainty is
assigned to the weighted $\pi^{\pm}$ multiplicity based on a comparison of the
difference between tracking efficiency in data and simulation, and a $20\%$
uncertainty on the $K^{\pm}/\pi^{\pm}$ ratio is estimated based on comparing
$K_{S}^{0}$ production in data to MC. \ Uncertainties in \textsc{alpgen} cross sections are estimated to be $50\%$ for $W$+$b\bar{b}$, $W$+$c\bar{c},$ and $W$+$c$-jet, relative to $W$+light-jets~\cite{singletoppaper}. \ A change of the $W$+$c$-jet cross section by $\pm100\%$ does not lead to a significant effect in $f_{c}^{\ell}$. \ The uncertainty due to PDFs on $f_{c}^{\ell}$ is estimated to be $_{-0.64}^{+0.97}\%$. \ Overall systematic uncertainties are found to be $1.5\%$ for $f_{c}^{e}$ and $1.1\%$ for $f_{c}^{\mu}$, with the relative cross section contributions dominant. \ Adding a $0.6\%$ uncertainty in each channel due to MC statistics yields $f_{c}^{e}=1.149\pm0.018$ and $f_{c}^{\mu
}=1.148\pm0.015$ averaged over all $p_{T}>20$ GeV.
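\newline\indent As an illustrative cross-check of the formula above (not part of the measurement procedure), inserting the electron channel entries of Table~\ref{tab:eventY} for the inclusive ($20$--$200$ GeV) bin, with $K_{T}^{e}=1$, gives
\begin{eqnarray*}
\frac{\frac{1}{0.0124}\left[245-1.149\left(154-19.3-1.9-16.1-4.6\right)-13.3-6.2-18.8-6.3\right]}{\left(1-0.009-0.032\right)\times 82747}\approx 0.073,
\end{eqnarray*}
reproducing the quoted central value.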
\begin{figure}[ptb]
\includegraphics[scale=0.43]{R_dsyst.eps}
\caption{Measured ratio $[\sigma(W$+$c$-jet$)/\sigma(W$+jets$)]$ for jet $p_{T}>20$ GeV and $|\eta|<2.5$. \ The inner error bars around the data points show the statistical only uncertainties and the full bars represent the quadratic sum of statistical and systematic uncertainties. \ The systematic uncertainty on $W$+$c$-jet fraction includes the uncertainties due to JES, the jet $p_{T}$ resolution, the background correction factor $f_{c}^{\ell}$, and the product of the relative acceptance and efficiencies $\epsilon_{c}^{\ell}$. \ It also includes the uncertainty due to $K_{T}^{\mu}$ in the muon channel.}
\label{fig:result3binscomb}
\end{figure}
\begin{figure*}[ptb]
\psfrag{sgn [a/sigma(a)] of muon track} {sgn[$\frac{a}{\sigma_a}$] of muon track}
\psfrag{jet M/pT} {jet $\frac{M}{p_{T}}$}
$\begin{array}
[c]{c@{\hspace{0.01in}}c}
\includegraphics[scale=0.46]{OS_SS_mu_simpsignalonly_kf_nodup_PLB_noSS.eps} &
\includegraphics[scale=0.46]{OS_SS_mu_pTrelsignalonly_kf_nodup_PLB_noSS.eps}\\
\end{array}$
\caption{Comparison of the background-subtracted ($N_{\text{OS}}^{\ell}-f_{c}^{\ell}N_{\text{SS}}^{\ell}$) data in the combined electron and muon channels with the simulation. \ (a) \textit{Signed} significance in impact parameter of the jet-muon track with respect to the interaction point, (b) jet-muon transverse momentum relative to the jet axis ($p_T^{\text{rel}}$).}
\label{fig:ipsig}
\end{figure*}
\newline\indent Table~\ref{tab:eventY} summarizes the cross section ratio
measurements and their uncertainties for the electron and the muon channels
for all jet $p_{T}>20$ GeV and jet $|\eta|<2.5$, and for three jet $p_{T}$
bins with $|\eta|<2.5$ in each bin. \ The ratio measurements benefit from cancellation of several uncertainties, notably the integrated luminosity~\cite{d0lumi}, lepton detection efficiency, and jet energy scale. \ Table~\ref{tab:syst} lists remaining absolute systematic uncertainties on the measurement estimated from the MC simulation. \ These arise mainly from second order JES effects, jet $p_{T}$ resolution (JPR), $c$-jet tagging efficiency,
and the $W$+$c$-jet background correction factors $f_{c}^{\ell}$.
\newline\indent \ The measured $W$+$c$-jet fractions integrated over
$p_{T}>20$ GeV and $|\eta|< 2.5$ are
\begin{eqnarray*}
\frac{\sigma\left[W\left(\rightarrow e\nu\right)+c\text{-jet}\right]}{\sigma\left[W\left(\rightarrow e\nu\right)+\text{jets}\right]}&=&0.073\pm0.023 (\text{stat.})_{-0.014}^{+0.012}(\text{syst.}),\\
\frac{\sigma\left[W\left( \rightarrow\mu\nu\right) +c\text{-jet}\right]}{\sigma\left[W\left(\rightarrow\mu\nu\right)+\text{jets}\right]}&=&0.075\pm0.031(\text{stat.})_{-0.017}^{+0.015}(\text{syst.}).
\end{eqnarray*}
\noindent Since the $W\rightarrow e\nu$ and $W\rightarrow\mu\nu$ measurements are consistent with one another, and statistical uncertainties dominate, the two lepton channels are combined to yield
\begin{eqnarray*}
\frac{\sigma\left[W+c\text{-jet}\right]}{\sigma\left[ W+\text{jets}\right]}=0.074\pm0.019(\text{stat.})_{-0.014}^{+0.012}(\text{syst.}).
\end{eqnarray*}
\noindent Systematic uncertainties are taken to be fully correlated in the two channels. \ This measurement can be compared to the $W$+$c$-jet fraction predicted by \textsc{alpgen} and \textsc{pythia} of 0.044$\pm$0.003, where the quoted theoretical uncertainty derives from the uncertainty on the \textsc{cteq6.5} PDFs. \ Due to the relatively small contributions of $W$+$b\bar{b}$ and $W$+$c\bar{c}$ to inclusive $W$+jets, the model prediction of the $W$+$c$-jet rate has $\lesssim 5\%$ sensitivity to their cross sections. \ Figure~\ref{fig:result3binscomb} shows the differential $W$+$c$-jet fraction, and compares the data to a model prediction using leading order QCD augmented by \textsc{alpgen} and \textsc{pythia}.
\begin{table}[ptb]
\caption{Fractional systematic uncertainties on the measurement in the $W\rightarrow e\nu$ and the $W\rightarrow \mu\nu$ channels.}%
\label{tab:syst}%
\renewcommand{\arraystretch}{1.5}
\centering
\begin{ruledtabular}
\begin{tabular}[c]{c|cccc|ccccc}
& \multicolumn{4}{c}{$e$ channel} & \multicolumn{5}{c}{$\mu$ channel} \\
\hline
$p_T$ & JES & JPR & $f_{c}^{e}$ & $\epsilon_c^{e}$ & JES & JPR & $f_{c}^{\mu}$ & $\epsilon_c^{\mu}$ & $K_{T}^{\mu}$ \\
GeV & $\%$ & $\%$ & $\%$ & $\%$ & $\%$ & $\%$ & $\%$ & $\%$ & $\%$ \\
\hline
$20$--$30$ & $_{-21.6}^{+0}$ & $_{-4.8}^{+2.4}$ & $_{-4.1}^{+3.8}$ & $_{-15.6}^{+15.7}$ & $_{-17.6}^{+0}$ & $_{-7.5}^{+5.0}$ & $_{-3.3}^{+2.5}$ & $_{-16.2}^{+15.3}$ & $\pm10$ \\
$30$--$45$ & $_{-4.3}^{+6.4}$ & $_{-4.3}^{+2.1}$ & $_{-4.7}^{+4.3}$ & $_{-14.4}^{+14.5}$ & $_{-0.7}^{+9.8}$ & $_{-6.7}^{+4.5}$ & $_{-4.3}^{+3.1}$ & $_{-14.9}^{+14.1}$ & $\pm10$\\
$45$--$200$ & $_{-2.2}^{+2.2}$ & $_{-4.5}^{+2.2}$ & $_{-7.6}^{+6.9}$ & $_{-14.6}^{+14.7}$ & $_{-0}^{+27.7}$ & $_{-7.0}^{+4.7}$ & $_{-5.2}^{+4.0}$ & $_{-15.8}^{+15.0}$ & $\pm10$ \\
\hline
$20$--$200$ & $_{-9.0}^{+0}$ & $_{-4.5}^{+2.3}$ & $_{-5.2}^{+4.5}$ & $_{-15.0}^{+15.1}$ & $_{-2.4}^{+5.9}$ & $_{-7.1}^{+4.7}$ & $_{-4.3}^{+3.1}$ & $_{-15.7}^{+14.9}$ & $\pm10$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\newline \indent As a test of the $W$+$c$-jet signal hypothesis, Fig.~\ref{fig:ipsig}(a) compares data to \textsc{alpgen} and \textsc{pythia} expectations in the background-subtracted distribution of the signed impact parameter significance, $a/\sigma_{a}$, for the jet-muon, where $a$ is the projected distance of closest approach of the jet-muon to the event interaction point in the transverse plane, and $\sigma_{a}$ is the estimated uncertainty on $a$. \ Data show satisfactory agreement with expectations for $W$+$c$-jet production and the underlying OS-SS ansatz after the subtraction of light and $b$ quark jet contributions. \ Similarly, the distribution of the relative transverse momentum of the jet-muon with respect to the jet axis, $p_{T}^{\text{rel}}$, shows the consistency between data and the $c$-jet expectation as illustrated in Fig.~\ref{fig:ipsig}(b).
\newline \indent To quantify the probability that the observed excess of OS events over SS events is due exclusively to background fluctuations, ensembles for OS, SS backgrounds and inclusive $W$+jets are generated that incorporate all the systematic uncertainties together with the correlations among the OS, SS backgrounds and $W$+jets expectations using Gaussian samplings of the uncertainties. \ The probability that background fluctuations could produce the observed fraction of the signal events in the inclusive $W$+jets sample is $2.5\times 10^{-4}$, corresponding to a $3.5$ $\sigma$ significance for the $W$+$c$-jet hypothesis.
\newline \indent In conclusion, we have performed a measurement of the $W$+$c$-jet/$W$+jets cross section ratio at a hadron collider using both electron and muon decay channels of the $W$ boson and utilizing the correlation between the charge of the jet-muon and that of the $W$ boson. \ The probability that background fluctuations could produce an estimated $W$+$c$-jet fraction in $W$+jets equal to or larger than the one measured in data is $2.5 \times 10^{-4}$, which corresponds to a $3.5$ $\sigma$ significance of the observation. \ We find our measurement to be consistent with LO perturbative QCD predictions of the $W$+$c$-jet production rate and with an $s$ quark PDF evolved from $Q^{2}$ scales two orders of magnitude below those of this measurement. \ The measurement further provides direct experimental evidence of the underlying partonic process $qg\rightarrow W q^{\prime}$ that should dominate $W$ boson production at the CERN Large Hadron Collider (LHC).\\ \indent
\input {acknowledgement_paragraph_r2.tex}
\section{Introduction}
Let $r,s \geqslant 0$ be integers such that $r \equiv s \ {\rm (mod \ 8)}$; this congruence condition is equivalent to the
existence of an even, unimodular lattice with signature $(r,s)$. When $r,s \geqslant 1$, such a lattice is unique up
to isomorphism (see for instance \cite{S}, chap. V); we denote it by $\Lambda_{r,s}$.
In \cite {GM}, Gross and McMullen raise
the following question (see \cite {GM}, Question 1.1) :
\bigskip
\noindent
{\bf Question.} {\it What are the possibilities for the characteristic polynomial $F(X) = {\rm det}(X - t)$ of an isometry
$t \in {\rm SO}(\Lambda_{r,s})$ ? }
\bigskip
The aim of this paper is to answer this question for semi-simple isometries.
\bigskip
The condition $t \in {\rm SO}(\Lambda_{r,s})$ implies that $F(X) = X^{{\rm deg}(F)}F(X^{-1})$,
hence $F$ is a {\it symmetric} polynomial (cf. \S \ref{symmetric section}). Let
$2n = {\rm deg}(F)$, and let $2m(F)$ be the number of roots of $F$ off the unit circle.
As shown in \cite{GM}, we have the further necessary conditions :
\bigskip
(C 1) {\it $|F(1)|$, $|F(-1)|$ and $(-1)^n F(1) F(-1)$ are squares.}
\bigskip
(C 2) {\it $r \geqslant m(F)$, $s \geqslant m(F)$, and if moreover $F(1)F(-1) \not = 0$, then $m(F) \equiv r \equiv s \ {\rm (mod \ 2)}$.}
\bigskip
Gross and McMullen prove that if $F \in {\bf Z}[X]$ is an irreducible, symmetric and monic
polynomial satisfying condition (C 2) and such that $|F(1)F(-1)| = 1$, then there exists $t \in {\rm SO}(\Lambda_{r,s})$ with characteristic polynomial $F$ (see
\cite {GM}, Theorem 1.2). They
speculate that conditions (C 1) and (C 2) are sufficient for a monic {\it irreducible} polynomial to be realized as the characteristic
polynomial of an isometry of $\Lambda_{r,s}$; this is proved in \cite {BT}, Theorem A. More generally, Theorem A of \cite {BT}
implies that
if a monic, irreducible
and symmetric polynomial $F$ satisfies conditions (C 1) and (C 2), then {\it there exists} an even, unimodular lattice
of signature $(r,s)$ having an isometry with characteristic polynomial $F$. This is also the point of view of the present
paper - we treat the definite and indefinite cases simultaneously.
\medskip
On the other hand, Gross and McMullen show that these conditions do not suffice in the case
of {\it reducible} polynomials (see \cite {GM}, Proposition 5.2); several other examples are given in \cite{B 20}. Another
example is the following :
\medskip
\noindent
{\bf Example 1.} Let $F(X) = (X^4-X^2+1)(X-1)^4$, and let $(r,s) = (8,0)$; conditions (C 1) and (C 2) hold, but there does not exist any positive definite, even, unimodular lattice of rank $8$ having an isometry with characteristic polynomial $F$; note that
this amounts to saying that the lattice $E_8$ does not have any isometry with characteristic polynomial $F$.
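\medskip
\noindent
To make the example explicit : $X^4-X^2+1 = \Phi_{12}(X)$ is the $12$th cyclotomic polynomial, so all roots of $F$ lie on the unit circle and $m(F) = 0$; moreover $F(1) = 0$ and $F(-1) = \Phi_{12}(-1) \cdot (-2)^4 = 16$, hence $|F(1)|$, $|F(-1)|$ and $(-1)^4F(1)F(-1) = 0$ are all squares. Conditions (C 1) and (C 2) therefore hold for $(r,s) = (8,0)$, as claimed.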
\medskip
All these examples are {\it counter-examples to a Hasse principle}. Indeed,
the first result of the present paper is that conditions (C 1) and (C 2) are sufficient {\it locally}.
If $p$ is a prime number, we say
that a ${\bf Z}_p$-lattice $(L,q)$ is even if $q(x,x) \in 2{\bf Z}_p$ for all $x \in L$; note that if $p \not = 2$, then every lattice is even, since $2$
is a unit in ${\bf Z}_p$. The following is proved in Theorem \ref{integral local} and Proposition \ref{reals} :
\medskip
\noindent
{\bf Theorem 1.} {\it Let $F \in {\bf Z}[X]$ be a monic, symmetric polynomial of even degree.
\medskip {\rm (a)} Condition {\rm (C 1)} holds if and only if for all prime numbers $p$, there
exists an even, unimodular ${\bf Z}_p$-lattice having a semi-simple isometry
with characteristic polynomial $F$.
\medskip {\rm (b)}
The group ${\rm SO}_{r,s}({\bf R})$ contains a semi-simple element having characteristic polynomial $F$ if and
only if condition {\rm (C 2)} holds.}
\bigskip
The next result is a necessary and sufficient condition for the local-global principle to hold. We start
by defining an obstruction group ${\mbox{\textcyr{Sh}}}_F$, depending only on the polynomial $F$.
\medskip Let $I_1$ be the set of irreducible, symmetric factors of $F$ of even degree, and let $I_0 = \{X-1,X+1 \}$. Let
$I = I_0 \cup I_1$. If $f, g \in I$, let $\Pi_{f,g}$ be the set of prime numbers $p$ such that
\medskip
\centerline {$f \ {\rm (mod \ {\it p})}$ and $g \ {\rm (mod \ {\it p})}$}
\medskip
\noindent have a non-trivial common
symmetric factor in ${\bf F}_p[X]$.
Let $C(I)$ be the set of maps $I \to {\bf Z}/2{\bf Z}$, and let $C_0 (I)$ be the set of $c \in C(I)$ such that
\medskip
\centerline { $c(f) = c(g)$ if $\Pi_{f,g} \not = \varnothing$.}
\medskip
\noindent
The obstruction group ${\mbox{\textcyr{Sh}}}_F$ is by definition the quotient of
$C_0(I)$ by the subgroup of constant maps.
\medskip
\noindent
{\bf Example 2.} Let $F(X) = (X^4-X^2+1)(X-1)^4$ as in Example 1. The irreducible, symmetric factors
of $F$ are $f(X) = X^4-X^2+1$ and $g(X) = X -1$. Since $f(1) = f(-1) = 1$, the reductions of $f$ and of $g$ modulo $p$ are relatively prime for every prime number $p$, hence
$\Pi_{f,g} = \varnothing$ (and likewise $\Pi_{f,X+1} = \varnothing$), while $\Pi_{X-1,X+1} = \{2\}$. This implies that ${\mbox{\textcyr{Sh}}}_F \simeq {\bf Z}/2{\bf Z}$.
\bigskip Let us now assume that condition (C 2) holds, and let $t \in {\rm SO}_{r,s}({\bf R})$ be a semi-simple
isometry with characteristic polynomial $F$; such an isometry exists by part (b) of the above theorem (or Proposition
\ref{reals}). Assume moreover that condition (C 1) also holds. In \S \ref{reformulation}, we define a homomorphism
$$\epsilon_t : {\mbox{\textcyr{Sh}}}_F \to {\bf Z}/2{\bf Z}$$ and prove the following (see Theorem \ref{preserves}) :
\medskip
\noindent
{\bf Theorem 2.} {\it The isometry $t \in {\rm SO}_{r,s}({\bf R})$ preserves an even, unimodular lattice if and
only if $\epsilon_t = 0$.}
\bigskip
\noindent
{\bf Example 3.} Let $F(X) = (X^4-X^2+1)(X-1)^4$; we have ${\mbox{\textcyr{Sh}}}_F \simeq {\bf Z}/2{\bf Z}$ (see Example 2).
If $t_1 \in {\rm SO}_{4,4}({\bf R})$ is a semi-simple isometry with characteristic polynomial $F$, we have $\epsilon_{t_1} = 0$, and if $t_2 \in {\rm SO}_{8,0}({\bf R})$ is such an isometry, then
$\epsilon_{t_2} \not = 0$. Hence $\Lambda_{4,4}$ has a semi-simple isometry with characteristic polynomial $F$,
but the lattice $E_8$ does not have such an isometry.
\bigskip
\noindent
{\bf Corollary 1.} {\it
Let $G \in {\bf Z}[X]$ be a monic, irreducible, symmetric polynomial with $|G(1)| > 1$, and suppose that $|G(-1)|$ is a square. Let
$m \geqslant 2$ be an even integer, and set $$F(X) = G(X)(X-1)^{m}.$$
\medskip
\noindent
Assume that condition {\rm (C 2)} holds for $F$.
Then every semi-simple isometry $t \in {\rm SO}_{r,s}({\bf R})$ with characteristic polynomial $F$ preserves an even, unimodular lattice.}
\medskip Indeed, any prime divisor of $G(1)$ belongs to $\Pi_{G,X-1}$, so $\Pi_{G,X-1} \not = \varnothing$, hence ${\mbox{\textcyr{Sh}}}_F = 0$. Condition (C 1) holds for $F$ since $|G(-1)|$ is a square;
therefore Theorem 2 implies the corollary.
\medskip
The following example is an immediate consequence of Corollary 1.
\medskip
\noindent
{\bf Example 4.} Let
$p$ be a prime number with $p \not = 2$, and let $\Phi_p$ be the $p$-th cyclotomic polynomial.
\medskip Take $G = \Phi_p$ in Corollary 1; we have $\Phi_p(1) = p$ and $\Phi_p(-1) = 1$. The polynomial $F(X) = \Phi_p(X) (X-1)^m$
satisfies condition (C 2) for any choice of $(r,s)$, since $m(F) = 0$. Therefore we have :
\medskip
$\bullet$ If $m \geqslant 2$ is an integer such that $p - 1 + m \equiv 0 \ {\rm (mod \ 8)}$,
then there exists a positive definite, even, unimodular lattice having an isometry with characteristic polynomial
$\Phi_p(X) (X-1)^m$.
\medskip For instance, the lattice $E_8$ has an isometry with characteristic polynomial $\Phi_7 (X) (X-1)^2$; one can show that
the Leech lattice has an isometry with characteristic polynomial $\Phi_{23}(X)(X-1)^2$ (see \cite {Q}, (3.10)).
\medskip
$\bullet$ If $r, s \geqslant 1$ and if $m \geqslant 2$ is an integer such that $p - 1 + m = r + s$, then $\Lambda_{r,s}$ has an isometry with characteristic polynomial
$\Phi_p(X) (X-1)^m$.
\bigskip
For polynomials $F$ without linear factors, Theorem 2 is proved in \cite{B 20}, Theorem 27.4. However, it turns out that including
linear factors is very useful in the applications to $K3$ surfaces, which we now describe.
\medskip The second part of the paper gives applications to automorphisms of $K3$ surfaces,
inspired by a series of papers of McMullen (see \cite {Mc1}, \cite {Mc2}, \cite {Mc3}).
\medskip
Recall that a
monic, irreducible, symmetric polynomial $S \in {\bf Z}[X]$ of degree $\geqslant 4$ is a {\it Salem polynomial} if $S$ has exactly two roots off
the unit circle, both positive real numbers. A real number is called a {\it Salem number} if it is the unique real root $> 1$ of
a Salem polynomial; it is an algebraic unit.
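\medskip For instance, the smallest known Salem number is Lehmer's number $\lambda_{10} = 1.17628...$, the unique real root $>1$ of the degree $10$ Salem polynomial $X^{10}+X^9-X^7-X^6-X^5-X^4-X^3+X+1$.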
\medskip If
$T : \mathcal X \to \mathcal X$ is an automorphism of a complex $K3$ surface, then
$T^* : H^2(\mathcal X,{\bf C}) \to H^2(\mathcal X,{\bf C})$ respects the Hodge decomposition
$$H^2(\mathcal X,{\bf C}) = H^{2,0}(\mathcal X) \oplus H^{1,1}(\mathcal X) \oplus H^{0,2}(\mathcal X);$$ since
${\rm dim}(H^{2,0}) = 1$, the automorphism $T^*$ acts on it by multiplication with a complex number, denoted by
$\delta(T)$; we have $|\delta(T)| = 1$. Moreover, $T^* : H^2(\mathcal X,{\bf Z}) \to H^2(\mathcal X,{\bf Z})$ preserves
the intersection pairing.
The above properties imply that the characteristic polynomial
of $T^*$ is a product of at most one Salem polynomial and of a finite number of cyclotomic polynomials,
it satisfies condition (C 1), and $\delta(T)$ is a root of this polynomial.
\medskip
Moreover, assume that the characteristic polynomial is equal to $S C$, where $S$ is a Salem polynomial
of degree $d$ with $4 \leqslant d \leqslant 22$ and $C$ is a product of cyclotomic polynomials; then $\mathcal X$
is projective if and only if $\delta(T)$ is a root of $C$ (see \cite{R2}, Theorem 2.2). Such a polynomial is called a {\it complemented
Salem polynomial} (see Definition \ref{complemented}).
\medskip
Let $F$ be a complemented Salem polynomial,
and let
$\delta$ be a root of $F$. We wish
to decide whether $F$ is the characteristic polynomial of $T^*$ for some automorphism $T$ as above, with $\delta(T) = \delta$.
We start with
the simplest case, where the cyclotomic factor is a nontrivial power of $X-1$.
\bigskip
\noindent
{\bf Theorem 3.}
{\it Let $S$ be a Salem polynomial of degree $d$ with $4 \leqslant d\leqslant 20$
and let $\delta$ be a root of $S$ with $|\delta| = 1$. Let $$F(X) = S(X)(X-1)^{22-d},$$
assume that condition {\rm (C 1)} holds for $F$, and that $|S(1)| > 1$.
Then
there exists a non-projective $K3$ surface $\mathcal X$
and an automorphism $T : \mathcal X \to \mathcal X$ such
that
\medskip \noindent
$\bullet$ \ \ $F$ is the characteristic polynomial of $T^*|H^2(\mathcal X)$.
\medskip \noindent
$\bullet$ \ \ $T^*$ acts on $H^{2,0}(\mathcal X)$ by multiplication by $\delta$.}
\medskip
This is proved using Corollary 1, as well as some results of McMullen (\cite {Mc2}, \cite {Mc3}) and Brandhorst (\cite {Br}); see
Theorem \ref{theorem 4 - first part}. For polynomials $S$ with $|S(1)| = 1$, see Theorem \ref{theorem 4 - second part}; in
this case, the answer depends on the congruence class of $d$ modulo $8$.
\medskip The {\it dynamical degree} of an automorphism $T : \mathcal X \to \mathcal X$ is by definition the spectral
radius of $T^*$; since the characteristic polynomial of $T^*$ is the product of a Salem polynomial and of a product
of cyclotomic polynomials, the dynamical degree is a Salem number. We say that a Salem number $\alpha$ is
{\it realizable} if $\alpha$ is the dynamical degree of an automorphism of a {\it non-projective} $K3$ surface.
\medskip
Let $\alpha$ be a Salem number of degree $d$ with $4 \leqslant d\leqslant 20$, and let $S$ be the minimal polynomial of
$\alpha$. In \S \ref{4} we prove
an analog of Theorem 3 for $F(X) = S(X)(X+1)^2(X-1)^{20-d}$ or $S(X)(X^2-1)(X-1)^{20-d}$ (see Theorem \ref{lots}) and show
that if $d = 4,6,8,12,14$ or $16$, then $\alpha$ is realizable (see Corollary \ref{lots coro}).
\medskip
The aim of \S \ref{18} is to prove that the second smallest Salem number, $\lambda_{18} = 1.1883681475...$, is not
realizable. The above results do not apply to this Salem number : indeed, we have $|S_{18}(1)S_{18}(-1)| = 1$ for the minimal polynomial $S_{18}$
of $\lambda_{18} $. By contrast, McMullen proved that $\lambda_{18} $ is the dynamical degree
of an automorphism of a {\it projective} $K3$ surface (see \cite {Mc3}, Theorem 8.1).
\medskip
As an application of Theorem 3, we show in \S \ref{bounds} that some {\it powers} of Salem numbers are realizable;
this leads to a more precise version of a theorem of Brandhorst (cf. \cite {Br}).
\bigskip
I thank Marie Jos\'e Bertin, Curt McMullen and Chris Smyth for very useful comments and suggestions.
\bigskip
\centerline {\bf Table of contents}
\bigskip
\noindent
1. Equivariant Witt groups \dotfill 6
\medskip
\noindent
2. Symmetric polynomials and $\Gamma$-modules \dotfill 7
\medskip
\noindent
3. Isometries of quadratic forms \dotfill 7
\medskip
\noindent
4. Local fields and $\Gamma$-lattices \dotfill 8
\medskip
\noindent
5. Even, unimodular $\Gamma$-lattices over ${\bf Z}_2$ \dotfill 16
\medskip
\noindent
6. Milnor signatures and Milnor indices \dotfill 19
\medskip
\noindent
7. Local conditions for even, unimodular $\Gamma$-lattices \dotfill 20
\medskip
\noindent
8. The local-global problem \dotfill 21
\medskip
\noindent
9. ${\bf Q}[\Gamma]$-forms, signatures and determinants \dotfill 23
\medskip
\noindent
10. Local decomposition \dotfill 24
\medskip
\noindent
11. Obstruction group \dotfill 26
\medskip
\noindent
12. Local data \dotfill 27
\medskip
\noindent
13. Hasse principle \dotfill 30
\medskip
\noindent
14. Unimodular lattices preserved by a semi-simple element of ${\rm SO}_{r,s}({\bf R})$ \dotfill 33
\medskip
\noindent
15. Automorphisms of $K3$ surfaces \dotfill 33
\medskip
\noindent
16. Realizable Salem numbers \dotfill 38
\medskip
\noindent
17. A nonrealizable Salem number \dotfill 40
\medskip
\noindent
18. Exceptional sets, roots of unity and bounds \dotfill 42
\medskip
\section{Equivariant Witt groups}
\medskip
We start by recalling some notions and results from \cite {BT}, \S 3 and \S 4.
\medskip
{\bf The equivariant Witt group}
\medskip
Let $\mathcal K$ be a field, let $\mathcal A$ be a $\mathcal K$-algebra and let $\sigma : \mathcal A \to \mathcal A$ be a $\mathcal K$-linear involution.
An {\it $\mathcal A$-bilinear form} is a pair $(V,b)$ consisting of an $\mathcal A$-module $V$ that is a finite dimensional $\mathcal K$-vector space,
and a non-degenerate symmetric $\mathcal K$-bilinear form $b : V \times V \to \mathcal K$ such that $b(ax,y) = b(x,\sigma(a)y)$ for all $a \in \mathcal A$ and
all $x, y \in V$.
\medskip
The associated {\it Witt group} is denoted by $W_{\mathcal A}(\mathcal K)$ (see \cite {BT}, \S 3). If $M$ is a simple $\mathcal A$-module, we
denote by $W_{\mathcal A}({\mathcal K},M)$ the subgroup of $W_{\mathcal A}(\mathcal K)$ generated by the classes of $\mathcal A$-bilinear
forms $(M,b)$. Every class in $W_{\mathcal A}(\mathcal K)$ is represented by an $\mathcal A$-bilinear form whose underlying $\mathcal A$-module
is semisimple, and we have $$W_{\mathcal A}(\mathcal K) = \underset{M} \oplus W_{\mathcal A}({\mathcal K},M),$$ where $M$ ranges over the
isomorphism classes of simple $\mathcal A$-modules (see \cite {BT}, Corollary 3.11 and Theorem 3.12).
\medskip
{\bf Discrete valuation rings and residue maps}
\medskip
Let $O$ be a discrete valuation ring with field of fractions $K$, residue field $k$ and uniformizer $\pi$. Let $(A,\sigma)$ be an $O$-algebra with involution,
and set $A_K = A \otimes_O K$, $A_k = A \otimes_O k$. An {\it $A$-lattice} in an $A_K$ bilinear form $V$ is an $A$-submodule $L$ which
is finitely generated as an $O$-module and satisfies $K L = V$. If $L$ is an $A$-lattice, then so is its dual $$L^{\sharp} = \{ x \in V \ | \ b(x,L) \subset O \}.$$
We say that $L$ is {\it unimodular} if $L^{\sharp} = L$ and {\it almost unimodular} if $\pi L^{\sharp} \subset L \subset L^{\sharp}$. If $L$ is almost unimodular,
then $b$ induces an $A_k$-bilinear form $L^{\sharp}/L \times L^{\sharp}/L \to k$ (see \cite {BT}, definition 4.1).
\medskip
An $A_K$-bilinear form
is said to be {\it bounded} if it contains an $A$-lattice. We denote by $W^b_{A_K}(K)$ the subgroup of $W_{A_K}(K)$ generated by the classes of
bounded forms. The following result is proved in \cite{BT} :
\begin{theo}\label{4.3}
{\rm (i)} Every bounded $A_K$-bilinear form contains an almost unimodular $A$-lattice $L$.
\medskip
{\rm (ii)} The class of $L^{\sharp}/L$ in $W_{A_k}(k)$ only depends on the class of $V$ in $W_{A_K}(K)$.
\medskip
{\rm (iii)} The map $\partial : W^b_{A_K}(K) \to W_{A_k}(k)$ given by $[V] \mapsto [L^{\sharp}/L]$ is a homomorphism.
\medskip
{\rm (iv)} $V$ contains a unimodular $A$-lattice if and only if $V$ is bounded and $\partial [V] = 0$ in $W_{A_k}(k)$.
\end{theo}
\noindent
{\bf Proof.} See \cite {BT}, Theorem 4.3.
\section{Symmetric polynomials and $\Gamma$-modules}\label{symmetric section}
We recall some notions from \cite {M} and \cite {B 15}. Let $K$ be a field. If
$f \in K[X]$ is a monic polynomial such that $f(0) \not = 0$, set $f^*(X) = f(0)^{-1} X^{{\rm deg}(f)} f(X^{-1})$; we say that $f$ is
{\it symmetric} if $f^* = f$. Recall the following definition from \cite {B 15} :
\begin{defn} Let $f \in K[X]$ be a monic, symmetric polynomial. We say that $f$ is of
\medskip
\noindent
$\bullet$ {\it type} 0 if $f$ is a product of powers of $X-1$ and of $X+1$;
\medskip
\noindent
$\bullet$ {\it type} 1 if $f$ is a product of powers of monic, symmetric, irreducible polynomials in
$K[X]$ of even degree;
\medskip
\noindent
$\bullet$ {\it type} 2 if $f$ is a product of polynomials of the form $g g^*$, where $g \in K[X]$ is
monic, irreducible, and $g \not = \pm g^*$.
\end{defn}
The following is well-known :
\begin{prop} Every monic symmetric polynomial is a product of polynomials
of type {\rm 0, 1} and {\rm 2}.
\end{prop}
\noindent
{\bf Proof.} See for instance \cite {B 15}, Proposition 1.3.
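\medskip
\noindent
For instance, over ${\bf Q}$ : the polynomial $(X-1)^2(X+1)$ is of type 0; the cyclotomic polynomial $\Phi_{12}(X) = X^4-X^2+1$ is of type 1; and for $g(X) = X-2$ we have $g^*(X) = X - {1 \over 2}$, so that $g(X)g^*(X) = X^2 - {5 \over 2}X + 1$ is of type 2.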
\medskip
Let $F \in K[X]$ be a monic, symmetric polynomial, let $J$ be the set of its irreducible factors, and let us write $F = \underset{f \in J} \prod f^{n_f}$. Let $I_1 \subset J$ be the subset of irreducible factors of type 1, and let $I_0 \subset J$ be the
set of irreducible factors of type 0; set $I = I_0 \cup I_1$. For all $f \in I$, set $M_f = [ K[X]/(f)]^{n_f}$. Set $M^0 = \underset {f \in I_{0}} \oplus M_f$, and
$M^1 = \underset {f \in I_1} \oplus M_f$.
If $f \in J$ such that $f \not = f^*$, set $M_{f,f^*} = [K[X]/(f) \oplus K[X]/(f^*)]^{n_f}$, and let $M^2 = \underset {(f,f^*)} \oplus M_{f,f^*}$,
where the sum runs over the pairs $(f,f^*)$ with $f \in J$ and $f \not = f^*$. Set $$M = M^0 \oplus M^1 \oplus M^2.$$
\medskip
Let $\Gamma$ be the infinite cyclic group, and let $\gamma$ be a generator of $\Gamma$. Setting $\gamma (m) = X m$ for all $m \in M$ endows $M$
with a structure of semi-simple $K[\Gamma]$-module; we say that $M$ is the {\it semi-simple $K[\Gamma]$-module associated to the polynomial
$F$}.
\medskip Let us write $F = F_0 F_1 F_2$, where $F_i$ is the product of the irreducible factors of type $i$ of $F$. We have
$F_0 = (X-1)^{n^+} (X+1)^{n^-}$ for some integers $n^+,n^- \geqslant 0$. Set $M^+ = [K[X]/(X-1)]^{n^+}$ and
$M^- = [K[X]/(X+1)]^{n^-}$. The $K[\Gamma]$-module $M^0$ splits as $$M^0 = M^+ \oplus M^-.$$
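\medskip
\noindent
For instance, for $F(X) = (X^4-X^2+1)(X-1)^4$ as in Example 1 of the introduction, we have $J = \{\Phi_{12}, X-1\}$ with $n_{\Phi_{12}} = 1$ and $n_{X-1} = 4$, hence $M = M^1 \oplus M^+$ with $M^1 = K[X]/(\Phi_{12})$ and $M^+ = [K[X]/(X-1)]^4$, the generator $\gamma$ acting by multiplication by $X$.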
\section{Isometries of quadratic forms}\label{isometries section}
We recall some results from \cite {M} and \cite {B 15}. Let $K$ be a field of characteristic $\not = 2$, let $V$ be a finite dimensional
$K$-vector space, and let $q : V \times V \to K$ be a non-degenerate quadratic form. An {\it isometry} of $(V,q)$ is by definition
an isomorphism $t : V \to V$ such that $q(tx,ty) = q(x,y)$ for all $x, y \in V$.
Let $t : V \to V$ be an isometry, and let $F \in K[X]$ be the characteristic polynomial of $t$. It is well-known that $F$
is a symmetric polynomial (see for instance \cite {B 15}, Proposition 1.1). The following property is also well-known :
\begin{lemma}\label{determinant} If $t : V \to V$ is an isometry of the quadratic form $(V,q)$ and if the characteristic
polynomial $F$ of $t$ satisfies $F(1)F(-1) \not = 0$, then $${\rm det}(q) = F(1)F(-1) \ \ {\rm in} \ \ K^{\times}/K^{\times 2}.$$
\end{lemma}
\noindent
{\bf Proof.} See for instance \cite{B 15}, Corollary 5.2.
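\medskip
For example, if $t$ is a rotation of the Euclidean plane $({\bf R}^2,q)$ by an angle $\theta$ with $\sin \theta \not = 0$, then $F(X) = X^2 - 2\cos(\theta)X + 1$ and $F(1)F(-1) = (2 - 2\cos \theta)(2 + 2\cos \theta) = 4\sin^2 \theta$, which is indeed equal to ${\rm det}(q) = 1$ in ${\bf R}^{\times}/{\bf R}^{\times 2}$.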
\medskip
Recall that $\Gamma$ is the infinite cyclic group,
and let $\sigma : K[\Gamma] \to K[\Gamma]$ be the $K$-linear involution such that
$\sigma(\gamma) =
\gamma^{-1}$ for all $\gamma \in \Gamma$. An isometry $t : V \to V$ endows $V$ with a $K[\Gamma]$-module structure,
and if moreover $t$ is semi-simple with characteristic polynomial $F$, then
this module is isomorphic to the
semi-simple $K[\Gamma]$-module $M = M(F)$ associated to the polynomial
$F$ (see \S \ref{symmetric section}). Hence $M$ also carries a non-degenerate quadratic form, that we also denote by $q$.
Note that $(M,q)$ is a $K[\Gamma]$-bilinear form, and gives rise to an element $[M,q]$ of
the Witt group $W_{K[\Gamma]}(K)$. To simplify notation, set $W_{\Gamma}(K) = W_{K[\Gamma]}(K)$.
\medskip
Let us write $M = M^0 \oplus M^1 \oplus M^2$ as in \S \ref{symmetric section}, and let $q^i$ denote the restriction of $q$ to $M^i$; this gives
rise to an orthogonal decomposition $(M,q) = (M^0,q^0) \oplus (M^1,q^1) \oplus (M^2,q^2)$, and $(M^2,q^2)$ is hyperbolic, hence its class
in $W_{\Gamma}(K)$ is trivial (see for instance \cite {M}, Lemma 3.1). With the notation of \S \ref{symmetric section}, we have the further
orthogonal decompositions
\medskip
\centerline {$(M^0,q^0) = \underset {f \in I_0} \oplus (M_f,q_f)$ and $(M^1,q^1) = \underset {f \in I_1} \oplus (M_f,q_f)$,}
\noindent
where $q_f$ is the restriction of $q$ to $M_f$ (see for instance \cite {M}, \S 3, or \cite {B 15},
Propositions 3.3 and 3.4). Note that if $f \in I_0$, then
$f(X) = X-1$ or $X+1$, and we have the orthogonal decomposition $(M^0,q^0) = (M^+,q^+) \oplus (M^-,q^-)$, with
$q^+ = q_{X-1}$ and $q^- = q_{X+1}$.
\section{Local fields and unimodular $\Gamma$-lattices}\label{local}
\medskip
Let $K$ be a non-archimedean local field of characteristic 0, let $O$ be its ring of integers, and let $k$ be its residue field.
If $a \in O$, set $v(a) = 1$ if $v_K(a)$ is odd, and $v(a) = 0$ if $v_K(a)$ is even or $a = 0$ (in other words, $v(a)$ is the valuation of $a$ (mod 2) if
$a \not = 0$, and $v(0) = 0$).
\medskip
\begin{theo} \label{local odd} Let $F \in O[X]$ be a monic, symmetric polynomial. There exists a unimodular $O$-lattice having a
semi-simple isometry with characteristic polynomial $F$ if and only if one of the following holds
\medskip {\rm (i)} ${\rm char}(k) \not = 2$, and
$v(F(1)) = v(F(-1)) = 0$.
\medskip {\rm (ii)} ${\rm char}(k) = 2$, and
$v(F(1) F(-1)) = 0$.
\end{theo}
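\medskip For instance, let $F(X) = \Phi_7(X)(X-1)^2$ as in Example 4 of the introduction. Then $F(1) = 0$ and $F(-1) = \Phi_7(-1)\cdot 4 = 4$, so that $v(F(1)) = 0$ by convention and $v(F(-1)) = 0$ whenever ${\rm char}(k) \not = 2$; if ${\rm char}(k) = 2$, then $v(F(1)F(-1)) = v(0) = 0$. Hence over every such $O$ there is a unimodular lattice having a semi-simple isometry with characteristic polynomial $F$.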
\medskip We start with a preliminary result, and some notation.
\begin{notation}\label{lambda} Let $E_0$ be an \'etale $K$-algebra of finite rank, and let $E$ be an \'etale $E_0$-algebra
which is free of rank 2 over $E_0$. Let $\sigma : E \to E$ be the involution fixing $E_0$. If $\lambda \in E_0^{\times}$, we denote by $b_{\lambda}$
the quadratic form $b_{\lambda} : E \times E \to K$ such that $b_{\lambda}(x,y) = {\rm Tr}_{E/K}(\lambda x \sigma(y))$.
\end{notation}
\begin{prop} \label{technical}
Let $E_0$ be an \'etale $K$-algebra of finite rank, and let $E$ be an \'etale $E_0$-algebra
which is free of rank 2 over $E_0$. Let $\sigma : E \to E$ be the involution fixing $E_0$. Let $\alpha \in E_0^{\times}$ be such that
$\alpha \sigma(\alpha) = 1$, and that the characteristic polynomial $f$ of $\alpha$ over $K$ belongs to $O[X]$. Let ${\rm deg}(f) = 2d$, and assume that $f(1) f(-1) \not = 0$. Let
$u_+, u_- \in O^{\times}$.
\medskip
Let $V = V^{+} \oplus V^{-}$ be a finite dimensional $K$-vector space, and let
$\epsilon = (\epsilon^+,\epsilon^-) : V \to V$ be the isomorphism given by $\epsilon^{\pm} : V^{\pm} \to V^{\pm}$, $\epsilon^{\pm} = \pm id$.
Set $n^+ = {\rm dim}(V^+)$ and $n^- = {\rm dim}(V^-)$.
\medskip
If ${\rm char}(k) \not = 2$, assume that if $n^{\pm} = 0$, then $v(f(\pm 1)) = 0$.
\medskip
If ${\rm char}(k) = 2$, assume that if $n^+ = n^- = 0$, then $v(f(1)f(-1)) = 0$.
\medskip
If moreover $K = {\bf Q}_2$, assume that
\medskip
$\bullet$
$n^+$ and $n^-$ are both even,
\medskip
$\bullet$ if $n^+ = n^- = 0$, then
$(-1)^df(1)f(-1) = 1$ in ${\bf Q}_2^{\times}/{\bf Q_2}^{\times 2}$,
\medskip
$\bullet$ if $n^{\pm} = 0$, then $v(f(\pm 1)) = 0$,
\medskip
$\bullet$ $u_+ u_- = (-1)^n$, where $2n = {\rm dim}(E \oplus V)$.
\bigskip
Then there exists $\lambda \in E_0^{\times}$ and non-degenerate quadratic forms
$$q^+ : V^+ \times V^+ \to K, \ \ q^- : V^- \times V^- \to K$$ such that, for $q = q^+ \oplus q^-$, we have
\medskip
{\rm (i)} $$\partial [E\oplus V, b_{\lambda} \oplus q, \alpha \oplus \epsilon] = 0$$
in $W_{\Gamma}(k)$.
\medskip
{\rm (ii)} If ${\rm char}(k) \not = 2$
then ${\rm det}(q^{\pm}) = u_{\pm}f(\pm 1)$ in $K^{\times}/K^{\times 2}$.
\medskip
{\rm (iii)} If moreover $K = {\bf Q}_2$, then
\medskip
$\bullet$ If $n^- \not = 0$, then $v({\rm det}(q^-)) = v(f(-1))$,
\medskip
$\bullet$ If $n^+ \not = 0$ and $n^- \not = 0$, then ${\rm det}(q^{\pm}) = u_{\pm} f(\pm 1)$ in ${\bf Q}_2^{\times}/{\bf Q_2}^{\times 2}$,
\medskip
$\bullet$ ${\rm det}(E\oplus V, b_{\lambda} \oplus q) = (-1)^n$,
and $(E\oplus V, b_{\lambda} \oplus q)$ contains an even, unimodular ${\bf Z}_2$-lattice.
\end{prop}
\medskip
\noindent{\bf Proof.} The proof depends on the values of $v(f(1))$ and $v(f(-1))$. We
are in one of the following cases
\medskip
{\rm (a)} $v(f(1)) = 0$, $v(f(-1)) = 0$,
\medskip
{\rm (b)} $v(f(1)) = 1$, $v(f(-1)) = 0$,
\medskip
(c) $v(f(1)) = 0$, $v(f(-1)) = 1$,
\medskip
(d) $v(f(1)) = 1$, $v(f(-1)) = 1$.
\medskip
The algebra $E_0$ decomposes as
a product of fields $E_0 = \underset {v \in S} \prod E_{0,v}$. For all $v \in S$, set $E_v = E \otimes_{E_0} E_{0,v}$.
\medskip Assume first that the characteristic of $k$ is $ \not = 2$. The algebra $E_v$ is of one of
the following types
\medskip
(sp) $E_v = E_{0,v} \times E_{0,v}$;
\medskip
(un) $E_v$ is an unramified extension of $E_{0,v}$;
\medskip
(+) $E_v$ is a ramified extension of $E_{0,v}$, and the image $\overline \alpha$ of $\alpha$ in the residue field $\kappa_v$ of $E_v$ is $1$;
\medskip
(-) $E_v$ is a ramified extension of $E_{0,v}$, and the image $\overline \alpha$ of $\alpha$ in the residue field $\kappa_v$ of $E_v$ is $-1$.
\medskip
This gives a partition $S = S_{sp} \cup S_{un} \cup S_+ \cup S_-$.
\medskip
Let $\gamma$ be a generator of $\Gamma$, and let $\chi_{\pm} : \Gamma \to \{ \pm 1 \}$ be the character sending $\gamma$ to $\pm 1$.
\medskip
Let us choose $\lambda = (\lambda_v)_{v \in S}$ in $E_0^{\times} = \underset {v \in S} \prod E_{0,v}^{\times}$ such that
for every $v \in S_{un}$, we have $\partial [E_v,b_{\lambda_v},\alpha] = 0$ in $W_{\Gamma}(k)$; this is possible by \cite {BT}, Proposition 6.4.
The choices for $v \in S_+$ and $S_-$ depend on which of the cases (a), (b), (c) or (d) we are in. Let $\overline u_{\pm}$ denote the images of $u_{\pm}$ in $k$.
\medskip Assume that we are in case (a) : then by hypothesis $v(f(-1)) = v(f(1)) = 0$. For $v \in S_+ \cup S_-$, we choose $\lambda_v$ such that
\medskip
$\underset {v \in S_+} \sum \partial [E_v,b_{\lambda_v},\alpha] = 0$ in $W(k) = W(k,\chi_+) \subset W_{\Gamma}(k),$ and
$\underset {v \in S_-} \sum \partial [E_v,b_{\lambda_v},\alpha] = 0$ in $W(k) = W(k,\chi_-) \subset W_{\Gamma}(k).$
\medskip \noindent
This
is possible by \cite {BT}, Proposition 6.6; indeed, by \cite {BT}, Lemma 6.8 we have $\underset {v \in S_+} \sum [\kappa_v : k] \equiv \ v(f(1)) \ ({\rm mod} \ 2) $,
$\underset {v \in S_-} \sum [\kappa_v : k] \equiv \ v(f(-1)) \ ({\rm mod} \ 2) $, and $v(f(1)) = v(f(-1)) = 0$ by hypothesis;
therefore $\partial [E, b_{\lambda},\alpha] = 0$ in
$W_{\Gamma}(k)$.
Taking for $q^{\pm}$ the zero form if $n^{\pm} = 0$, and a unimodular form of determinant $u_{\pm}f(\pm 1)$ otherwise, we get
$$\partial [E \oplus V, b_{\lambda} \oplus q, (\alpha,\epsilon)] = 0$$ in $W_{\Gamma}(k).$
This implies (i) and (ii), and completes the proof in case (a).
\bigskip Assume now that we are in case (b); then by hypothesis $v(f(-1)) = 0$ and $v(f(1)) = 1$. For $v \in S_-$ we choose $\lambda_v$ such that
\medskip
$\underset {v \in S_-} \sum \partial [E_v,b_{\lambda_v},\alpha] = 0$ in $W(k) = W(k,\chi_-) \subset W_{\Gamma}(k).$ This
is possible by \cite {BT}, Proposition 6.6; indeed, by \cite {BT}, Lemma 6.8 (ii) we have $$\underset {v \in S_-} \sum [\kappa_v : k] \equiv \ v(f(-1)) \ ({\rm mod} \ 2),$$ and $v(f(-1)) = 0$ by hypothesis.
\medskip
We now come to the places in $S_+$. Recall that by \cite {BT}, Lemma 6.8 (i) we have $\underset {v \in S_+} \sum [\kappa_v : k] \equiv \ v(f(1)) \ ({\rm mod} \ 2) $. Since $v(f(1)) = 1$ by hypothesis, this implies that $\underset {v \in S_+} \sum [\kappa_v : k] \equiv \ 1 \ ({\rm mod} \ 2) $.
Therefore there exists $w \in S_+$ such that $[\kappa_w:k]$ is odd. By \cite {BT}, Proposition 6.6, we can choose $\lambda_w$ such that
$\partial [E_w,b_{\lambda_w},\alpha]$ is either one of the two classes of $\gamma \in W(k) = W(k,\chi_+) \subset W_{\Gamma}(k)$
with ${\rm dim}(\gamma) = 1$. Let us choose the class of determinant $- \overline {u_+}$, and set $\partial [E_w,b_{\lambda_w},\alpha] = \delta$.
\medskip Since $v(f(1)) = 1$, by hypothesis we have $n^+ \geqslant 1$. Let $(V^+,q^+)$ be a non-degenerate
quadratic form over $K$ such that ${\rm det}(q^+) = u_+ f(1)$, and that
\medskip
\centerline {$\partial[V^+,q^+,id] = - \delta$ in $W(k) = W(k,\chi_+) \subset W_{\Gamma}(k).$}
\medskip
Let $S_+' = S_+ - \{w \}$; we have $\underset {v \in S_+'} \sum [\kappa_v : k] \equiv \ 0 \ ({\rm mod} \ 2) $, hence by \cite {BT}, Proposition 6.6,
for all $v \in S_+'$ there exists $\lambda_v \in E_{0,v}^{\times}$ such that
$\underset {v \in S_+'} \sum \partial [E_v,b_{\lambda_v},\alpha] = 0$ in $W(k) = W(k,\chi_+) \subset W_{\Gamma}(k).$
We have
$$\partial [E \oplus V^+, b_{\lambda} \oplus q^+, (\alpha,id)] = 0$$ in $W_{\Gamma}(k)$.
Taking for $(V^-,q^-)$ a quadratic form over $O$ of determinant $u_- f(-1)$ and setting $q = q^+ \oplus q^-$, we get
$$\partial [E \oplus V, b_{\lambda} \oplus q, (\alpha,\epsilon)] = 0$$ in $W_{\Gamma}(k).$
This completes the proof in case (b). The proof is the same in case (c), exchanging the roles of $S_+$ and $S_-$.
\bigskip
Assume that we are in case (d), that is, $v(f(1)) = v(f(-1)) = 1$. By \cite {BT}, Lemma 6.8 (i) and (ii), we have
\medskip
$\underset {v \in S_+} \sum [\kappa_v : k] \equiv \ v(f(1)) \ ({\rm mod} \ 2) $, and
$\underset {v \in S_-} \sum [\kappa_v : k] \equiv \ v(f(-1)) \ ({\rm mod} \ 2) $.
\medskip Therefore
$\underset {v \in S_+} \sum [\kappa_v : k] \equiv \ \underset {v \in S_-} \sum [\kappa_v : k] \equiv \ 1 \ ({\rm mod} \ 2) $. Hence
there exist $w_{\pm} \in S_{\pm}$ such that $[\kappa_{w_+}: k]$ and $[\kappa_{w_-}: k]$ are odd. By \cite {BT}, Proposition 6.6, we
can choose $\lambda_{w_{\pm}}$ such that $\partial [E_{w_{\pm}},b_{\lambda_{w_{\pm}}},\alpha]$ is either one of the two classes of $\gamma \in W(k) = W(k,\chi_{\pm}) \subset W_{\Gamma}(k)$
with ${\rm dim}(\gamma) = 1$. Let us choose $\lambda_{w_{\pm}}$ such that $\partial [E_{w_{\pm}},b_{\lambda_{w_{\pm}}},\alpha]$ is
represented by a form of dimension 1 and determinant $\overline u_{\pm}$, and set $$\delta_{\pm} = \partial [E_{w_{\pm}},b_{\lambda_{w_{\pm}}},\alpha].$$
\medskip By hypothesis, we have $n^+ \geqslant 1$ and $n^- \geqslant 1$. Let $(V^{\pm},q^{\pm})$ be non-degenerate
quadratic forms over $K$ such that ${\rm det}(q^{\pm}) = u_{\pm}f(\pm 1)$ and that
\medskip
\centerline {$\partial[V^{\pm},q^{\pm},\epsilon^{\pm}] = - \delta_{\pm}$ in $W(k) = W(k,\chi_{\pm}) \subset W_{\Gamma}(k)$.}
\medskip
Let $S_+' = S_+ - \{w_+ \}$; we have $\underset {v \in S_+'} \sum [\kappa_v : k] \equiv \ 0 \ ({\rm mod} \ 2) $, hence by \cite {BT}, Proposition 6.6,
for all $v \in S_+'$ there exists $\lambda_v \in E_{0,v}^{\times}$ such that
$\underset {v \in S_+'} \sum \partial [E_v,b_{\lambda_v},\alpha] = 0$ in $W(k) = W(k,\chi_+) \subset W_{\Gamma}(k).$
Similarly, set $S_-' = S_- - \{w_- \}$; we have $\underset {v \in S_-'} \sum [\kappa_v : k] \equiv \ 0 \ ({\rm mod} \ 2) $, hence by \cite {BT}, Proposition 6.6,
for all $v \in S_-'$ there exists $\lambda_v \in E_{0,v}^{\times}$ such that
$\underset {v \in S_-'} \sum \partial [E_v,b_{\lambda_v},\alpha] = 0$ in $W(k) = W(k,\chi_-) \subset W_{\Gamma}(k).$
Set $q = q^+ \oplus q^-$,
and note that $$\partial [E \oplus V, b_{\lambda} \oplus q, (\alpha,\epsilon)] = 0$$ in $W_{\Gamma}(k)$.
This completes the proof when the characteristic of $k$ is not $2$.
\medskip Assume now that the characteristic of $k$ is $2$. The algebra $E_v$ is of one of
the following types
\medskip
(sp) $E_v = E_{0,v} \times E_{0,v}$;
\medskip
(un) $E_v$ is an unramified extension of $E_{0,v}$;
\medskip
(r) $E_v$ is a ramified extension of $E_{0,v}$.
\medskip
This gives a partition $S = S_{sp} \cup S_{un} \cup S_r$.
\bigskip
If $\lambda = (\lambda_v)_{v \in S}$ is an element of $ E_0^{\times} = \underset {v \in S} \prod E_{0,v}^{\times}$,
note that by Lemma \ref{determinant} we have $${\rm disc}(b_{\lambda}) = (-1)^df(1)f(-1)$$ in $K^{\times}/K^{\times 2}$.
\medskip
We choose $\lambda = (\lambda_v)_{v \in S}$ in $E_0^{\times}$ such that
for every $v \in S_{un}$, we have $\partial [E_v,b_{\lambda_v},\alpha] = 0$ in $W_{\Gamma}(k)$; this is possible by \cite {BT}, Proposition 6.4.
\medskip
Assume first that we are in case (a) or (d), and note that $v(f(1)) + v(f(-1)) \equiv 0 \ ({\rm mod} \ 2)$. By \cite {BT}, Lemma 6.8 and Proposition 6.7, we can choose
$\lambda_v$ such that
\medskip
\centerline {$\underset {v \in S_r} \sum \partial [E_v,b_{\lambda_v},\alpha] = 0$ in $W(k) = W(k,1) \subset W_{\Gamma}(k).$}
\medskip Therefore
$\partial [E,b_{\lambda}, \alpha] = 0$ in $W_{\Gamma}(k)$.
\medskip
If we are in case (a) or case (d) and $K \not = {\bf Q}_2$, take for $q$ the unit quadratic form. We have
$$\partial [E \oplus V, b_{\lambda} \oplus q, \alpha \oplus \epsilon] = 0$$ in $W_{\Gamma}(k)$; this concludes the proof in cases (a) and (d) when $K \not = {\bf Q}_2$.
\medskip
Suppose that
$K = {\bf Q}_2$ and that we are in case (a). Suppose first that $n^+ = n^- = 0$. We already know that
$\partial [E,b_{\lambda}, \alpha] = 0$ in $W_{\Gamma}({\bf F}_2)$, hence (i) holds. Since $n^+ = n^- = 0$, by hypothesis $(-1)^df(1)f(-1) = 1$ in ${\bf Q}_2^{\times}/{\bf Q_2}^{\times 2}$, therefore ${\rm disc}(b_{\lambda}) = 1$ in ${\bf Q}_2^{\times}/{\bf Q_2}^{\times 2}$.
Let us choose $\lambda$ such that the quadratic form
$(E,b_{\lambda})$ contains an even, unimodular
${\bf Z}_2$-lattice. If $S_r = \varnothing$, this is automatic; indeed, in that case the trace map $E \to E_0$ is surjective, and hence
every ${\bf Z}_2$-lattice of the shape $(E,b_{\lambda})$ is even. If not, by \cite {BT} Propositions 8.4 and 5.4 we can choose $\lambda$ having this additional property. This implies that (iii) holds as well.
\medskip
We continue supposing that $K = {\bf Q}_2$ and that we are in case (a); assume now that
$n^+ \not = 0$ and $n^- = 0$. Let us choose $q^+$ such that ${\rm det}(q^+) = (-1)^n f(1) f(-1)$; since
${\rm det}(b_{\lambda}) = f(1)f(-1)$, this implies that $${\rm det}(E \oplus V,b_{\lambda} \oplus q^+) = (-1)^n.$$ Moreover,
let us choose the Hasse-Witt invariant of $q^+$ in such a way that the quadratic form
$(E \oplus V,b_{\lambda} \oplus q^+ \oplus q^-)$ contains an even, unimodular
${\bf Z}_2$-lattice; this is possible by \cite {BT} Proposition 8.4.
Therefore condition (iii) holds. Note that since $v({\rm det}(q^+)) = 0$, we have $\partial [V,q^+] = 0$ in $W({\bf F}_2)$,
hence
$\partial [E \oplus V, b_{\lambda} \oplus q, \alpha \oplus \epsilon] = 0$ in $W_{\Gamma}({\bf F}_2)$; therefore condition
(i) also holds.
\medskip Assume now that $n^+ = 0$ and $n^- \not = 0$. Let us choose $q^-$ such that ${\rm det}(q^-) = (-1)^n f(1) f(-1)$,
and note that since $v(f(1)) = v(f(-1)) = 0$, this implies that $v({\rm det}(q^-)) = v(f(-1)) = 0$. As in the previous case,
we see that ${\rm det}(E \oplus V,b_{\lambda} \oplus q^-) = (-1)^n$, and we choose the Hasse-Witt invariant of
$q^-$ so that $(E \oplus V,b_{\lambda} \oplus q^-)$ contains an even, unimodular
${\bf Z}_2$-lattice; this is possible by \cite {BT} Proposition 8.4. As in the previous case, we conclude that conditions (i)
and (iii) are satisfied.
\medskip Suppose that $n^+ \not = 0$ and $n^- \not = 0$. Let us choose $q^+$ such that ${\rm det}(q^+) = u_+ f(1)$ and
$q^-$ such that ${\rm det}(q^-) = u_- f(-1)$. Since $u_+ u_- = (-1)^n$ and
${\rm det}(b_{\lambda}) = f(1)f(-1)$, this implies that $${\rm det}(E \oplus V,b_{\lambda} \oplus q^+ \oplus q^-) = (-1)^n.$$ Note
that since $v(u_-) = v(f(-1)) = 0$, we have $v({\rm det}(q^-)) = v(f(-1)) = 0$. As in the previous cases, we can
choose $q^+$ and $q^-$ such that $(E \oplus V,b_{\lambda} \oplus q^+ \oplus q^-)$ contains an even, unimodular
${\bf Z}_2$-lattice, and that $\partial [E \oplus V, b_{\lambda} \oplus q, \alpha \oplus \epsilon] = 0$ in $W_{\Gamma}({\bf F}_2)$,
hence conditions (i) and (iii) hold.
\medskip
Assume now that
$K = {\bf Q}_2$ and that we are in case (d); note that the hypothesis implies that $n^+, n^- \geqslant 2$, and that both
$n^+$ and $n^-$ are even. With our previous choice of $\lambda$, we have $\partial [E,b_{\lambda}, \alpha] = 0$ in $W_{\Gamma}({\bf F}_2)$. Let us choose $q^+$ and $q^-$ such that ${\rm det}(q^{\pm}) = u_{\pm}f(\pm 1)$, and
note that this implies that $v({\rm det}(q^-)) = v(f(-1))$, and that ${\rm det}(E \oplus V,b_{\lambda} \oplus q^+ \oplus q^-) = (-1)^n$. Moreover, choose the Hasse-Witt invariants of $q^+$ and $q^-$ so
that $(E \oplus V,b_{\lambda} \oplus q^+ \oplus q^-)$ contains an even, unimodular
${\bf Z}_2$-lattice; this is possible by \cite {BT} Proposition 8.4. Therefore condition (iii) holds; moreover, we have
$\partial [V,q,\epsilon] = 0$ in $W_{\Gamma}({\bf F}_2)$ and
$\partial [E \oplus V, b_{\lambda} \oplus q, \alpha \oplus \epsilon] = 0$ in $W_{\Gamma}({\bf F}_2)$,
hence condition (i) is also satisfied. This concludes the proof in cases (a) and (d).
\medskip
Suppose that we are in case (b) or case (c), and note that in both cases, we have $v(f(1)) + v(f(-1)) = 1$. Hence \cite {BT}, Proposition 6.7 and Lemma 6.8 imply
that $\underset {v \in S_r} \sum \partial [E_v,b_{\lambda_v},\alpha]$ is the unique non-trivial element of $W(k) = W(k,1) \subset W_{\Gamma}(k).$
Suppose first that $K \not = {\bf Q}_2$. We have either $n^+ \geqslant 1$ or $n^- \geqslant 1$; choose $q^{\pm}$ such that
$\partial [V^{\pm},q^{\pm},\pm id]$ is also the unique non-trivial element of $W(k) = W(k,1) \subset W_{\Gamma}(k).$
We have $$\partial [E \oplus V, b_{\lambda} \oplus q, \alpha \oplus \epsilon] = 0$$ in $W_{\Gamma}(k)$. This settles cases (b) and (c)
when $K \not = {\bf Q}_2$.
\medskip
Assume now that $K = {\bf Q}_2$, and that we are in case (b),
namely $v(f(1)) = 1$ and $v(f(-1)) = 0$; then $n^+ \ge 2$, and is even. If $n^- \not= 0$, then choose $q^-$
such that ${\rm det}(q^{-}) = u_{-}f(- 1)$, and note that this implies that $v({\rm det}(q^-)) = v(f(-1)) = 0$;
choose $q^+$ such that
${\rm det}(q^{+}) = u_{+}f( 1)$. Since $v(f(1)) = 1$, this implies that $\partial [V^+,q^+,id]$ is the unique non-trivial element of $W({\bf F}_2) = W({\bf F}_2,1) \subset W_{\Gamma}({\bf F}_2).$ Note that ${\rm det}(E \oplus V,b_{\lambda} \oplus q^+ \oplus q^-) = (-1)^n$ in ${\bf Q}_2^{\times}/{\bf Q}_2^{\times 2}$. Moreover, choose the Hasse-Witt invariants of $q^+$ and $q^-$ such that the quadratic form
$(E \oplus V,b_{\lambda} \oplus q^+ \oplus q^-)$ contains an even, unimodular
${\bf Z}_2$-lattice; this is possible by \cite {BT} Proposition 8.4. Hence condition (iii) holds, and condition (i)
follows from the fact that $\partial [E,b_{\lambda}, \alpha]$ and $\partial [V^+,q^+,id]$ are both equal to the unique non-trivial element of $W({\bf F}_2) = W({\bf F}_2,1)$, which is a group of order $2$; therefore
$\partial [E \oplus V, b_{\lambda} \oplus q, \alpha \oplus \epsilon] = 0$ in $W_{\Gamma}({\bf F}_2)$, and condition (i) is satisfied.
\medskip
Suppose now that $K = {\bf Q}_2$, and that we are in case (c). Then $v(f(1)) = 0$ and $v(f(-1)) = 1$, hence $n^- \ge 2$, and is even. If $n^+ \not= 0$, then choose $q^+$ such that ${\rm det}(q^{+}) = u_{+}f( 1)$. Choose
$q^-$
such that ${\rm det}(q^{-}) = u_{-}f(- 1)$, and note that this implies that $v({\rm det}(q^-)) = v(f(-1)) = 1$, and that
$\partial [V^-,q^-,-id]$ is the unique non-trivial element of $W({\bf F}_2) = W({\bf F}_2,1) \subset W_{\Gamma}({\bf F}_2).$
We conclude as in case (b).
This settles cases (b) and (c), and hence the
proof of the proposition is complete.
\bigskip We now show that the conditions of Theorem \ref{local odd} are sufficient. In the case where ${\rm char}(k) \not = 2$,
we obtain a more precise result (see part (ii) of the theorem below; for $K = {\bf Q}_2$ an analogous result
is given in Theorem \ref{local even}). We use the notation of \S \ref{symmetric section}.
\begin{theo}\label{sufficient} Let $F \in O[X]$ be a monic, symmetric polynomial.
\medskip If ${\rm char}(k) \not = 2$, assume that
$v(F(1)) = v(F(-1)) = 0$.
\medskip If ${\rm char}(k) = 2$, assume that
$v(F(1) F(-1)) = 0$.
\medskip
{\rm (i)}
Then there exists a unimodular $O$-lattice having a
semi-simple isometry with characteristic polynomial $F$.
\medskip
{\rm (ii)} Assume in addition that ${\rm char}(k) \not = 2$ and let $u_+, u_- \in O^{\times}$. If $M^{\pm} \not = 0$, then there exists a unimodular $O$-lattice
having a semi-simple isometry with characteristic polynomial $F$ such that the associated $K[\Gamma]$-bilinear form
$(M^{\pm},q^{\pm})$ is such that $${\rm det}(q^{\pm}) = u_{\pm} F_1(\pm 1)$$
in $K^{\times}/K^{\times 2}$.
\end{theo}
\noindent {\bf Proof.} Let us write $F = F_0 F_1 F_2$, where $F_i$ is the product of the irreducible factors of $F$ of type $i$.
The hyperbolic $O$-lattice of rank ${\rm deg}(F_2)$ has an isometry with characteristic polynomial $F_2$, therefore
it is enough to prove the theorem for $F = F_0 F_1$.
\medskip
From now on, we assume that $F = F_0 F_1$, in other words, all the irreducible factors of $F$ are symmetric, of type 0 or 1. Let $I_1$ be the set of irreducible factors
of type 1 of $F$. We have $F_1 = \underset{f \in I_1} \prod f^{n_f}$; note that $F_1(1)F_1(-1) \not = 0$.
Let us write $F(X) = F_1(X)(X-1)^{n^+}(X+1)^{n^-}$ for some integers $n^+, n^-$ such that $n^+, n^- \geqslant 0$.
\medskip
For all $f \in I_1$, set $E_f = K[X]/(f)$. Let $\sigma_f : E_f \to E_f$ be the involution induced by $X \mapsto X^{-1}$,
and let $(E_f)_0$ be the fixed field of $\sigma_f$ in $E_f$. Let $M_f$ be an extension of degree $n_f$ of $(E_f)_0$, linearly disjoint from $E_f$
over $(E_f)_0$. Set $\tilde E_f = E_f \otimes_{{(E_f)}_0} M_f$. Let $\alpha_f$ be a root of $f$ in $E_f$. The characteristic polynomial of the multiplication
by $\alpha_f$ on $\tilde E_f$ is $f^{n_f}$, and its minimal polynomial is $f$. Set $\tilde E = \underset {f \in I_1} \prod \tilde E_f$, and
$\tilde M = \underset {f \in I_1} \prod M_f$. Let $\tilde \sigma : \tilde E \to \tilde E$ be the involution of $\tilde E$ induced by the involutions
$\sigma_f : E_f \to E_f$. Set $\tilde \alpha = (\alpha_f)_{f \in I_1}$, and let us denote by
$\tilde \alpha : \tilde E \to \tilde E$ the multiplication by
$\tilde \alpha$. Note that $\tilde \alpha$ is semi-simple,
with characteristic polynomial $F_1$.
\medskip
Let $V = V^+ \oplus V^-$ be a
$K$-vector space with ${\rm dim}(V^+) = n^+ $ and ${\rm dim}(V^-) = n^-$.
Applying Proposition \ref{technical} (i) with $E_0 = \tilde M$, $E = \tilde E$, $\sigma = \tilde \sigma$,
$\alpha = \tilde \alpha$ and $f = F_1$, we see that
there exist $\lambda \in \tilde M^{\times}$ and a non-degenerate quadratic form $q : V \times V \to K$ such that
$$\partial [\tilde E \oplus V, b_{\lambda} \oplus q, \alpha \oplus \epsilon] = 0$$ in $W_{\Gamma}(k)$. By Theorem \ref{4.3} (iv) this implies that there exists a unimodular $O$-lattice having a semi-simple isometry with characteristic polynomial $F$,
proving part (i) of the theorem. Similarly, Proposition \ref{technical} (ii) implies part (ii) of the theorem.
\bigskip
To show that the conditions of Theorem \ref{local odd} are necessary, we start with some notation and a preliminary result.
\medskip
Extending the scalars to $K$, a unimodular lattice having a semi-simple isometry with characteristic polynomial $F$ gives
rise to a $K[\Gamma]$-bilinear form on the semi-simple $K[\Gamma]$-module associated to $F$ (see \S \ref{symmetric section}), and
this form has an orthogonal decomposition $M = M^0 \oplus M^1 \oplus M^2$, cf. \S \ref{isometries section}. The $K[\Gamma]$-form $M^0$
has the further orthogonal decomposition $M^0 = (M^+,q^+) \oplus (M^-,q^-)$.
\begin{notation}\label{N pm} Let $\gamma$ be a generator of $\Gamma$.
Let $N_{\pm}$ be the simple $k[\Gamma]$-module such that ${\rm dim}_k(N_{\pm}) = 1$, and that $\gamma$ acts on $N_{\pm}$ by $\pm id$; note
that $N_+ = N_-$ if ${\rm char}(k) = 2$.
\end{notation}
\begin{lemma} \label{necessary lemma} Let $F \in O[X]$ be a monic, symmetric polynomial, and suppose that there exists a unimodular lattice having a semi-simple isometry
with characteristic polynomial $F$. Let $M = M^0 \oplus M^1 \oplus M^2$ be the corresponding orthogonal decomposition of $K[\Gamma]$-bilinear forms. Let us write $F(X) = F_1(X)(X-1)^{n^+}(X+1)^{n^-}$ for some integers $n^+, n^-$ such that $n^+, n^- \geqslant 0$, and such that $F_1(1)F_1(-1) \not = 0$. Then we have
\medskip
{\rm (i)} Assume that ${\rm char}(k) \not = 2$. Then the component of $\partial[M^1]$ in $W_{\Gamma}(k,N_+) \simeq W(k)$ is represented by a
quadratic form of dimension $v(F_1(1))$ over $k$.
\medskip
{\rm (ii)} Assume that ${\rm char}(k) \not = 2$. Then the component of $\partial[M^1]$ in $W_{\Gamma}(k,N_-) \simeq W(k)$ is represented by a
quadratic form of dimension $v(F_1(-1))$ over $k$.
\medskip
{\rm (iii)} Assume that ${\rm char}(k) = 2$. Then the component of $\partial[M^1]$ in $W_{\Gamma}(k,N_+) = W_{\Gamma}(k,N_-)
\simeq W(k)$ is represented by a
quadratic form of dimension $v(F_1(1))+v(F_1(-1))$ over $k$.
\end{lemma}
\noindent
{\bf Proof.} Since $M$ is extended from a unimodular lattice, we have $\partial [M] = 0$ (see Theorem \ref{4.3}). Let $M = M^0 \oplus M^1 \oplus M^2$ be the
orthogonal decomposition of \S \ref{isometries section}. We have $\partial [M^2] = 0$, hence $\partial ([M^0] + [M^1] )= 0$.
\medskip From now on, we assume that $M = M^0 \oplus M^1$; equivalently, all the irreducible factors of $F$ are of type 0 or 1.
Let us write $F_1 = \underset {f \in I_1} \prod f^{n_f}$.
We have an orthogonal decomposition $$M^1 = \underset{f \in I_1} \oplus M_f,$$ where $M_f = [K[X]/(f)]^{n_f}$ (see \cite {M}, \S 3, or \cite {B 15},
Propositions 3.3 and 3.4). For all $f \in I_1$, set $E_f = K[X]/(f)$, and let $\sigma : E_f \to E_f$ be the $K$-linear involution induced by $X \mapsto X^{-1}$.
By a well-known transfer property (see for instance \cite {M}, Lemma 1.1 or \cite {B 15}, Proposition 3.6) the $K[\Gamma]$-bilinear form $M_f$
is the trace of a non-degenerate hermitian form over $(E_f,\sigma)$, hence it is an orthogonal sum of forms of the type $b_{\lambda}$, see
notation \ref{lambda}.
\medskip By \cite {BT}, Lemma 6.8 (i)
and Proposition 6.6,
the
component of $\partial[M^1]$ in $W_{\Gamma}(k,N_+)$ is represented by a form of dimension $v(F_1(1))$, and this implies (i). Similarly,
applying \cite {BT}, Lemma 6.8 (ii)
and Proposition 6.6 implies (ii), and \cite {BT}, Lemma 6.8 (ii)
and Proposition 6.7 implies (iii).
\begin{prop} \label{necessary} Let $F \in O[X]$ be a monic, symmetric polynomial, and suppose that there exists a unimodular lattice having a semi-simple isometry
with characteristic polynomial $F$. Then we have
\medskip If ${\rm char}(k) \not = 2$, then $v(F(1)) = v(F(-1)) = 0$.
\medskip If ${\rm char}(k) = 2$, then $v(F(1) F(-1)) = 0$.
\end{prop}
\noindent
{\bf Proof.}
Suppose first that ${\rm char}(k) \not = 2$.
If $n^+ > 0$ and $n^- > 0$, then $F(1) = F(-1) = 0$, so there is nothing to prove. Assume that $n^+ = 0$. Then the component of $\partial [M^0]$ in $W_{\Gamma}(k,N_+)$ is trivial, and note that $F(1) = F_1(1)$. By Lemma \ref{necessary lemma} (i), the
component of $\partial[M^1]$ in $W_{\Gamma}(k,N_+)$ is represented by a form of dimension $v(F_1(1)) = v(F(1))$, hence $v(F(1)) = 0$.
Similarly, $n^- = 0$ implies that $v(F(-1)) = 0$. This completes the proof of the proposition in the case where ${\rm char}(k) \not = 2$.
\medskip
Assume now that ${\rm char}(k) = 2$. If $n^+ > 0$ or $n^- > 0$, then $F(1) F(-1) = 0$, so there is nothing to prove. Assume that $n^+ = n^- = 0$. The
component of $\partial[M^1]$ in $W_{\Gamma}(k,N_+) = W_{\Gamma}(k,N_-)$ is represented by a form of dimension $v(F(1)) + v(F(-1))$ (cf. Lemma \ref{necessary lemma} (iii)). Since $n^+ = n^- = 0$ we have $M = M^1$, hence $\partial[M^1] = 0$,
and we also have $F = F_1$; therefore $v(F(1)) + v(F(-1)) = 0$. This completes
the proof of the proposition.
\medskip
\noindent {\bf Proof of Theorem \ref{local odd}.} The theorem follows from Theorem \ref{sufficient} and Proposition \ref{necessary}.
\section{Even, unimodular $\Gamma$-lattices over ${\bf Z}_2$}
We keep the notation of \S \ref{local}, with $K = {\bf Q}_2$ and $O = {\bf Z}_2$. Recall that if $a \in {\bf Z}_2$, we set
$v(a) = 0$ if $a = 0$ or if the $2$-adic valuation of $a$ is even, and $v(a) = 1$ if the $2$-adic valuation of $a$ is odd.
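\medskip
\noindent For instance, $v(12) = 0$ since the $2$-adic valuation of $12$ is $2$, whereas $v(8) = 1$ since the $2$-adic valuation of $8$ is $3$; by the above convention we also have $v(0) = 0$.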
\medskip
If $F \in {\bf Z}_2[X]$ is a monic, symmetric polynomial, we write $F = F_0 F_1 F_2$, where $F_i$ is the product of
the irreducible factors of type $i$ of $F$. Recall that $M = M^0 \oplus M^1 \oplus M^2$ is the semi-simple ${\bf Q}_2 [\Gamma]$-module
associated to $F$, and that $M^0 = M^+ \oplus M^-$.
\medskip
\begin{theo}\label {local even} Let $F \in {\bf Z}_2[X]$ be a monic, symmetric polynomial of even degree such that $F(0) = 1$, and set $2n = {\rm deg}(F)$. Let $u_+, u_- \in {\bf Z}_2^{\times}$ be such that $u_+ u_- = (-1)^n$. Assume that the following conditions hold :
\medskip
{\rm (a)} $v(F(1)) = v(F(-1)) = 0$.
\medskip
{\rm (b)} If $F(1)F(-1) \not = 0$, then $(-1)^nF(1)F(-1) = 1$ in ${\bf Q}_2^{\times}/{\bf Q_2}^{\times 2}$.
\medskip
Then we have
\medskip
{\rm (i)} There exists an even, unimodular ${\bf Z}_2$-lattice having a
semi-simple isometry with characteristic polynomial $F$.
\medskip
{\rm (ii)} If $M^+ \not = 0$, $M^- \not = 0$ and $M^1 \not = 0$, then there exists an even, unimodular ${\bf Z}_2$-lattice
having a semi-simple isometry with characteristic polynomial $F$ such that the associated ${\bf Q}_2[\Gamma]$-bilinear form
$(M^{\pm},q^{\pm})$ is such that $${\rm det}(q^{\pm}) = u_{\pm} F_1(\pm 1)$$
in ${\bf Q}_2^{\times}/{\bf Q_2}^{\times 2}$.
\end{theo}
\noindent{\bf Proof.}
Let $I_1$ be the set of irreducible factors of $F$ of type 1, and set
$F_1 = \underset{f \in I_1} \prod f^{n_f}$.
The hyperbolic ${\bf Z}_2$-lattice of rank ${\rm deg}(F_2)$ has a semi-simple isometry with characteristic polynomial $F_2$, therefore
it is enough to prove the theorem for $F = F_0 F_1$.
\medskip
From now on, we assume that all the irreducible factors of $F$ are symmetric, of type 0 or 1; we have $F = F_1(X-1)^{n^+}(X+1)^{n^-}$ for some integers $n^+ \geqslant 0$, $n^- \geqslant 0$.
Note that since ${\rm deg}(F)$ is even and $F(0) = 1$ by hypothesis, $n^+$ and $ n^- $ are both even.
\medskip For all $f \in I_1$, set $E_f = {\bf Q}_2[X]/(f)$. Let $\sigma : E_f \to E_f$ be the involution induced by $X \mapsto X^{-1}$,
and let $(E_f)_0$ be the fixed field of $\sigma$ in $E_f$. Let $M_f$ be an extension of degree $n_f$ of $(E_f)_0$, linearly disjoint from $E_f$
over $(E_f)_0$. Set $\tilde E_f = E_f \otimes_{{(E_f)}_0} M_f$. Let $\alpha_f$ be a root of $f$ in $E_f$. The characteristic polynomial of the multiplication
by $\alpha_f$ on $\tilde E_f$ is $f^{n_f}$, and its minimal polynomial is $f$. Set $\tilde E = \underset {f \in I_1} \prod \tilde E_f$, and
$\tilde M = \underset {f \in I_1} \prod M_f$. Let $\tilde \sigma : \tilde E \to \tilde E$ be the involution of $\tilde E$ induced by the involutions
$\sigma : E_f \to E_f$. Set $\tilde \alpha = (\alpha_f)_{f \in I_1}$, and let us denote by
$\tilde \alpha : \tilde E \to \tilde E$ the multiplication by
$\tilde \alpha$. Note that $\tilde \alpha$ is semi-simple,
with characteristic polynomial $F_1$.
\medskip We apply Theorem 8.1 of \cite {BT} and Proposition \ref{technical} with $E_0 = \tilde M$, $E = \tilde E$, $\sigma = \tilde \sigma$ and
$\alpha = \tilde \alpha$.
\medskip Let $V^{\pm}$ be ${\bf Q}_2$-vector spaces of dimension $n^{\pm}$, and set $V = V^+ \oplus V^-$. Note that
if $n^+ = n^- = 0$, then $F_1 = F$, hence the class of
$(-1)^nF_1(1)F_1(-1)$ in ${\bf Q}_2^{\times}/{\bf Q_2}^{\times 2}$ is trivial by hypothesis (b); therefore the hypotheses of Proposition \ref{technical}
are satisfied.
Proposition \ref{technical} (i) and (iii) imply that there exist $\lambda \in \tilde M^{\times}$ and a non-degenerate quadratic form $q : V \times V \to {\bf Q}_2$
such that
$$\partial [\tilde E \oplus V, b_{\lambda} \oplus q, \tilde {\alpha} \oplus \epsilon] = 0$$ in $W_{\Gamma}(k)$, that $(\tilde E \oplus V,b_{\lambda} \oplus q)$
contains an even, unimodular ${\bf Z}_2$-lattice, and that $v({\rm det}(q^-)) = v(F_1(-1))$. By Theorem \ref{4.3} (iv), this
implies that there exists a unimodular lattice in $\tilde E \oplus V$ stable by $\tilde \alpha \oplus \epsilon$, hence a unimodular lattice having a semi-simple isometry with characteristic polynomial $F$; therefore conditions (i) and (ii) of \cite {BT}, Theorem 8.1 hold. Moreover, since
$v({\rm det}(q^-)) = v(F_1(-1))$, Theorem 8.4 of \cite {BT} implies that condition (iii) of \cite {BT}, Theorem 8.1 is also satisfied.
This implies that there exists an even, unimodular ${\bf Z}_2$-lattice having a semi-simple isometry with characteristic polynomial $F$, and this completes the proof of (i). Part (ii) of the theorem also follows from Proposition \ref{technical}, part (iii).
\medskip
\begin{theo}\label {local even necessary} Let $F \in {\bf Z}_2[X]$ be a monic, symmetric polynomial of even degree such that $F(0) = 1$, and set $2n = {\rm deg}(F)$. Assume that there
exists an even, unimodular ${\bf Z}_2$-lattice having a
semi-simple isometry with characteristic polynomial $F$. Then we have
\medskip
{\rm (a)} $v(F(1)) = v(F(-1)) = 0$.
\medskip
{\rm (b)} If $F(1)F(-1) \not = 0$, then the class of $(-1)^nF(1)F(-1)$ in ${\bf Q}_2^{\times}/{\bf Q_2}^{\times 2}$ lies in $\{1,-3 \}$.
\end{theo}
\noindent{\bf Proof.}
Let $L$ be an even, unimodular lattice having a semi-simple isometry with characteristic polynomial $F$.
The lattice $L$ gives rise to a ${\bf Q}_2[\Gamma]$-bilinear form $M$ on the semi-simple module associated to $F$. Let us consider
the orthogonal decomposition of ${\bf Q}_2[ \Gamma]$-bilinear forms
$$M = M^0 \oplus M^1 \oplus M^2$$ (cf. \S \ref{isometries section}). Since $L$ is unimodular we have $\partial [M] = 0$; note that
$\partial [M^2] = 0$, hence we have $\partial ([M^0] + [M^1] )= 0$.
\medskip From now on, we assume that $M = M^0 \oplus M^1$; equivalently, all the irreducible factors of $F$ are of type 0 or 1.
Let $I_1$ be the set of irreducible factors of $F$ of type 1, and set
$F_1 = \underset{f \in I_1} \prod f^{n_f}$.
We have
$F = F_1(X-1)^{n^+}(X+1)^{n^-}$ for some integers $n^+ \geqslant 0$, $n^- \geqslant 0$.
\medskip
Further, we have an orthogonal decomposition $M^1 = \underset{f \in I_1} \oplus M_f$, where $$M_f = [{\bf Q}_2[X]/(f)]^{n_f}$$ (see \cite {M}, \S 3, or \cite {B 15},
Propositions 3.3 and 3.4). For all $f \in I_1$, set $E_f = {\bf Q}_2[X]/(f)$, and let $\sigma : E_f \to E_f$ be the ${\bf Q}_2$-linear involution induced by $X \mapsto X^{-1}$.
By a well-known transfer property (see for instance \cite {M}, Lemma 1.1 or \cite {B 15}, Proposition 3.6) the ${\bf Q}_2[\Gamma]$-bilinear form $M_f$
is the trace of a non-degenerate hermitian form over $(E_f,\sigma)$, hence it is an orthogonal sum of forms of the type $b_{\lambda}$, see
notation \ref{lambda}.
\medskip The
component of $\partial[M^1]$ in $W_{\Gamma}(k,N_{\pm})$ is represented by a form of dimension $\equiv v(F(1)) + v(F(-1)) \ {\rm (mod \ 2)}$ (cf. \cite {BT}, Lemma 6.8 (ii)
and Proposition 6.7).
\medskip
Suppose that $n^+ = n^- = 0$. Then $M = M^1$, hence we have $\partial[M^1] =0$; by the above argument this implies that
$v(F(1)) + v(F(-1)) \equiv 0 \ {\rm (mod \ 2)}$. By \cite {BT}, Proposition 8.6 and Theorem 8.5, we have $v(F(-1)) = 0$, hence (a) holds. Since $L$ is even and
unimodular, the class of $(-1)^n F(1) F(-1)$ in ${\bf Q}_2^{\times}/{\bf Q_2}^{\times 2}$ lies in
$\{1,-3 \}$; this shows that (b) holds as well.
\medskip
Let $M^0 = V^+ \oplus V^-$, and let $q^{\pm}$ be the quadratic form on $V^{\pm}$.
\medskip
Suppose that $n^+ \not = 0$, and $n^- = 0$. Then $F(1) = 0$, hence $v(F(1)) = 0$. Since $n^- = 0$, the quadratic form $q^-$ is the zero form,
and $v({\rm det}(q^-)) = 0$. By \cite {BT}, Theorem 8.5 and Proposition 8.6, we have $v({\rm det}(q^-)) = v(F(-1))$, hence $v(F(-1)) = 0$. This implies
(a), and (b) is obvious since $F(1) = 0$.
\medskip
Assume now that $n^+ = 0$ and $n^- \not = 0$; then $F(-1) = 0$, hence (b) holds. By \cite {BT}, Theorem 8.5 and Proposition 8.6, we have $v({\rm det}(q^-)) = v(F(-1))$; since $F(-1) = 0$, this implies that $v({\rm det}(q^-)) = 0$. Therefore $\partial [M^0] = 0$. This implies that $\partial[M^1] = 0$, and hence
$v(F(1)) + v(F(-1)) = 0$. Since we already know that $v(F(-1)) = 0$, we obtain $v(F(1)) = 0$, and this implies (a).
\medskip
Finally, if $n^+ \not = 0$ and $n^- \not = 0$, then $F(1) = F(-1) = 0$, and hence both (a) and (b) hold. This concludes the proof of the theorem.
\section{Milnor signatures and Milnor indices}\label{Milnor}
The aim of this section is to recall from \cite {B 20} some notions of signatures and indices, inspired by Milnor's paper \cite {M 68}.
\medskip Let $F \in {\bf R}[X]$ be a monic, symmetric polynomial.
If $(V,q)$ is a non-degenerate quadratic form over $\bf R$ and if $t : V \to V$ is a semi-simple isometry of $q$ with characteristic
polynomial $F$, we associate to each irreducible, symmetric factor $\mathcal P$ of $F$ an
index $\tau({\mathcal P})$ and a signature $\mu({\mathcal P})$ as follows. Let $V_{\mathcal P(t)}$ be the $\mathcal P(t)$-primary subspace of $V$, consisting
of all $v \in V$ with $\mathcal P(t)^N v = 0$ for $N$ large. The {\it Milnor index} $\tau(\mathcal P)$ is by definition the index
of the restriction of $q$ to the subspace $V_{\mathcal P(t)}$,
and we define the {\it Milnor signature} $\mu({\mathcal P})$ at $\mathcal P$ as the signature of the restriction of $q$ to $V_{\mathcal P(t)}$.
\medskip
Let ${\rm Irr}_{\bf R}(F)$ be the set of irreducible, symmetric factors
of $F \in {\bf R}[X]$; if $\mathcal P \in {\rm Irr}_{\bf R}(F)$, then either ${\rm deg}({\mathcal P}) = 2$, or ${\mathcal P}(X) = X \pm 1$. If $(r,s)$ is the signature of $q$, we have
$$\underset {\mathcal P} \sum \tau({\mathcal P}) = r-s,$$
where the sum runs over $\mathcal P \in {\rm Irr}_{\bf R}(F)$.
\medskip
If $\mathcal P \in {\rm Irr}_{\bf R}(F)$, let $n_{\mathcal P} > 0$ be the integer such that
$\mathcal P^{n_{\mathcal P}}$ is the power of $\mathcal P$ dividing $F$.
\medskip We denote by ${\rm Mil}(F)$ the set of maps $\tau : {\rm Irr}_{\bf R}(F) \to {\bf Z}$ such that the image of
$\mathcal P \in {\rm Irr}_{\bf R}(F)$ belongs to the set $\{ -{\rm deg}({\mathcal P})n_{\mathcal P},\dots,{\rm deg}({\mathcal P})n_{\mathcal P} \}$.
For all integers $r, s \geqslant 0$, let
${\rm Mil}_{r,s}(F)$ be the subset of ${\rm Mil}(F)$ such that $\underset {\mathcal P} \sum \tau({\mathcal P}) = r-s$,
where the sum runs over $\mathcal P \in {\rm Irr}_{\bf R}(F)$.
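\medskip To illustrate these definitions, take $F(X) = (X^2+1)(X-1)^2$; then ${\rm Irr}_{\bf R}(F) = \{X^2+1, \ X-1\}$, with $n_{X^2+1} = 1$ and $n_{X-1} = 2$, hence an element of ${\rm Mil}(F)$ takes values in $\{-2,\dots,2\}$ on both factors. For $(r,s) = (3,1)$, the set ${\rm Mil}_{3,1}(F)$ consists of the maps $\tau$ with $\tau(X^2+1) + \tau(X-1) = 2$; one such map is given by $\tau(X^2+1) = 0$ and $\tau(X-1) = 2$.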
\begin{prop}\label{bijection} Sending a semi-simple element of ${\rm SO}_{r,s}({\bf R})$ with characteristic polynomial $F$ to its Milnor index
induces a bijection between the conjugacy classes of semi-simple elements of ${\rm SO}_{r,s}({\bf R})$ and ${\rm Mil}_{r,s}(F)$.
\end{prop}
\noindent {\bf Proof.} See \cite {B 20}, \S 6.
\section{Local conditions for even, unimodular $\Gamma$-lattices}\label{local section}
Let $F \in {\bf Z}[X]$ be a monic, symmetric polynomial, and let
$r, s \ge 0$ be integers such that $r + s = {\rm deg}(F)$. The aim of this section is to give local conditions for the existence of an even, unimodular lattice of signature $(r,s)$ having a semi-simple
isometry with characteristic polynomial $F$. More precisely, given a Milnor index $\tau \in {\rm Mil}_{r,s}(F)$, we give a necessary and
sufficient condition for an even, unimodular lattice having a semi-simple isometry with characteristic polynomial $F$
and Milnor index $\tau$ to exist everywhere locally.
\medskip
Let $m(F)$ be the number of roots $z$ of $F$ with $|z| > 1$ (counted with multiplicity).
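\medskip For instance, if $F(X) = X^2 - 3X + 1$, then the roots of $F$ are $(3 \pm \sqrt 5)/2$, and exactly one of them has absolute value $> 1$; hence $m(F) = 1$.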
\begin{prop}\label{reals} There exist an ${\bf R}$-vector space $V$ and a non-degenerate quadratic form $q$ of signature $(r,s)$ having
a semi-simple isometry with characteristic polynomial $F$ and Milnor index $\tau \in {\rm Mil}_{r,s}(F)$ if and only if
$r \geqslant m(F)$, $s \geqslant m(F)$, and if moreover $F(1)F(-1) \not = 0$, then $m(F) \equiv r \equiv s \ {\rm (mod \ 2)}$.
\end{prop}
\noindent
{\bf Proof.} This follows from \cite {B 15}, Proposition 8.1. Indeed, the necessity of the conditions follows immediately from
\cite {B 15}, Proposition 8.1. To prove the sufficiency, note that while the statement of \cite {B 15}, Proposition 8.1 only claims
the existence of a non-degenerate quadratic form $q$ of signature $(r,s)$ having
a semi-simple isometry with characteristic polynomial $F$, the proof shows the existence of such a form
having a semi-simple isometry with a given Milnor index $\tau \in {\rm Mil}_{r,s}(F)$.
\bigskip
If $p$ is a prime number, we say
that a ${\bf Z}_p$-lattice $(L,q)$ is even if $q(x,x) \in 2{\bf Z}_p$ for all $x \in L$; note that if $p \not = 2$, then every lattice is even, since $2$
is a unit in ${\bf Z}_p$.
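\medskip For instance, the hyperbolic plane over ${\bf Z}_2$, spanned by two vectors $e, f$ with $q(e,e) = q(f,f) = 0$ and $q(e,f) = 1$, is even and unimodular, whereas the lattice $\langle 1 \rangle$ over ${\bf Z}_2$ is unimodular but not even.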
\medskip
Assume moreover that $F$ has even degree, and that $F(0) = 1$. Set $2n = {\rm deg}(F)$.
\begin{theo}\label{integral local} There exists an even, unimodular ${\bf Z}_p$-lattice having a semi-simple isometry
with characteristic polynomial $F$ for all prime numbers $p$ if and only if $|F(1)|$, $|F(-1)|$ and $(-1)^n F(1) F(-1)$ are all squares.
\end{theo}
\noindent
{\bf Proof.} This follows from Theorems \ref{local odd}, \ref{local even} and \ref{local even necessary}. Indeed, if $p$ is a prime number $ \not = 2$, the existence of a unimodular
${\bf Z}_p$-lattice having a semi-simple isometry
with characteristic polynomial $F$ implies that either $F(1) = 0$, or $v_p(F(1))$ is even; similarly, either $F(-1) = 0$, or $v_p(F(-1))$ is even
(see Theorem \ref{local odd}). The existence of an even, unimodular ${\bf Z}_2$-lattice implies the same property for $p = 2$ by
Theorem \ref{local even necessary}. This implies that $|F(1)|$ and $|F(-1)|$ are both squares, and therefore $|F(1)F(-1)|$ is a square. If $F(1)F(-1) = 0$,
we are done. If not,
Theorem \ref{local even necessary} implies that the class of $(-1)^n F(1) F(-1)$ in ${\bf Q}_2^{\times}/{\bf Q_2}^{\times 2}$ lies in
$\{1,-3 \}$; since $|F(1)F(-1)|$ is a square, this implies that
$(-1)^n F(1) F(-1)$ is a square. The converse
is an immediate consequence of Theorems \ref{local odd} and \ref{local even}.
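\medskip For instance, the cyclotomic polynomial $F(X) = X^4 - X^2 + 1$ satisfies $F(1) = F(-1) = 1$, hence $|F(1)|$, $|F(-1)|$ and $(-1)^2F(1)F(-1) = 1$ are all squares, and the criterion of Theorem \ref{integral local} is satisfied. On the other hand, if $F(X) = X^2 - 3X + 1$, then $F(-1) = 5$ is not a square; indeed, since $v_5(F(-1)) = 1$, there is no unimodular ${\bf Z}_5$-lattice having a semi-simple isometry with characteristic polynomial $F$ (cf. Theorem \ref{local odd}).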
\section{The local-global problem}\label{local-global problem section}
The aim of this section is to reformulate the local conditions of \S \ref{local section}, and to give a framework for the local-global problem
of the next sections. We also introduce some notation that will be used in the following sections.
\medskip
Let $F \in {\bf Z}[X]$ be a monic, symmetric polynomial of even degree such that $F(0) = 1$; set $2n = {\rm deg}(F)$.
Let $J$ be the set of irreducible factors of $F$, and let us write $F = \underset{f \in J} \prod f^{n_f}$. Let $I_1 \subset J$ be the subset of irreducible factors of type 1, and let $I_0 \subset J$ be the
set of irreducible factors of type 0.
\medskip Let $M = M^0 \oplus M^1 \oplus M^2$ be the semi-simple ${\bf Q}[\Gamma]$-module associated to the polynomial $F$ (see \S
\ref{symmetric section}).
\bigskip
Let $r, s \ge 0$ be integers such that $r + s = {\rm deg}(F)$ and that $r \equiv s \ {\rm (mod \ 8)}$. Let $(V,q) = (V_{r,s},q_{r,s})$ be
the diagonal quadratic form over ${\bf Q}$ with $r$ entries $1$ and $s$ entries $-1$.
\begin{prop}\label{equivalent local} The following properties are equivalent
\medskip
{\rm (i)} For all prime numbers $p$ there exists an even, unimodular ${\bf Z}_p$-lattice having a semi-simple isometry with characteristic
polynomial $F$.
\medskip
{\rm (ii)} $|F(1)|$, $|F(-1)|$ and $(-1)^n F(1) F(-1)$ are all squares.
\medskip
{\rm (iii)} For all prime numbers $p$, the quadratic form $(V,q) \otimes_{\bf Q} {\bf Q}_p$ has a semi-simple isometry with characteristic
polynomial $F$ that stabilizes an even, unimodular lattice.
\medskip
{\rm (iv)} For all prime numbers $p$, the quadratic form $(V,q) \otimes_{\bf Q} {\bf Q}_p$ has an isometry with module $M \otimes _{\bf Q} {\bf Q}_p$,
giving rise to a class $[M \otimes _{\bf Q} {\bf Q}_p,q]$ in $W_{\Gamma}({\bf Q}_p)$ such that $\partial_p [M \otimes _{\bf Q} {\bf Q}_p,q] = 0$
in $W_{\Gamma}({\bf F}_p)$ and that $v_2({\rm det}(q_-)) \equiv v_2(F_1(-1)) \ {\rm (mod \ 2)}$.
\end{prop}
\noindent
{\bf Proof.} The equivalence of {\rm (i)} and {\rm (ii)} is Theorem \ref{integral local}, and it is clear that {\rm (iii)} implies {\rm (i)}. Let us show
that {\rm (i)} implies {\rm (iii)}. Set $u = (-1)^s$. Since $r + s = 2n$ and $r \equiv s \ {\rm (mod \ 8)}$, we have $n \equiv s \ {\rm (mod \ 8)}$, hence $u = (-1)^n$.
By {\rm (i)}, there exists an
even, unimodular ${\bf Z}_p$-lattice having a semi-simple isometry with characteristic polynomial $F$. If $F(1)F(-1) \not = 0$, then
the class of the determinant of this lattice in ${\bf Q}_p^{\times}/{\bf Q_p}^{\times 2}$ is equal to $F(1)F(-1)$, and $F(1)F(-1) = u$ by
{\rm (ii)}; if $F(1)F(-1) = 0$,
then by Theorem \ref{sufficient} (ii) and Theorem \ref{local even} (ii) we can assume that the determinant of this lattice in ${\bf Q}_p^{\times}/{\bf Q_p}^{\times 2}$ is equal to $u$. Therefore the
lattice is isomorphic to the diagonal ${\bf Z}_p$-lattice $\langle 1,\dots, u \rangle$ of determinant $u$ if $p \not = 2$ (cf. \cite{OM}, 92:1), and to the orthogonal sum of $n$ hyperbolic planes if $p = 2$ (see for instance \cite {BT}, Proposition 8.3).
Let $q^p$ be the
quadratic form over ${\bf Q}_p$ obtained from this lattice by extension of scalars; then the Hasse-Witt invariant of $q^p$ is trivial
if $p \not = 2$, and is equal to the Hasse-Witt invariant of the orthogonal sum of $n$ hyperbolic planes if $p = 2$; its
determinant is equal to $u = (-1)^n$ in ${\bf Q}_p^{\times}/{\bf Q_p}^{\times 2}$. This implies that $q^p$ and
$(V,q) \otimes {\bf Q}_p$ are isomorphic as quadratic forms over ${\bf Q}_p$. Since $q^p$ has a semi-simple isometry with characteristic
polynomial $F$ that stabilizes a unimodular lattice, property (iii) holds.
Finally, the equivalence of (iii) and (iv) follows from Theorem \ref{4.3} (iv) and from \cite{BT}, Theorems 8.1 and 8.5.
\medskip
\noindent
{\bf Terminology.} We say that the {\it local conditions for $F$} hold at the finite places if the equivalent conditions of Proposition \ref{equivalent local}
are satisfied.
\bigskip
Recall that $m(F)$ is the number of roots $z$ of $F$ with $|z| > 1$ (counted with multiplicity).
\begin{prop}\label{equivalent real} Let $\tau \in {\rm Mil}_{r,s}(F)$ be a Milnor index. The following properties
are equivalent :
\medskip
{\rm (i)} The quadratic form $(V,q) \otimes_{\bf Q} {\bf R}$ has a semi-simple isometry with characteristic polynomial $F$ and
Milnor index $\tau$.
\medskip
{\rm (ii)} $r \geqslant m(F)$, $s \geqslant m(F)$, and if moreover $F(1)F(-1) \not = 0$, then $m(F) \equiv r \equiv s \ {\rm (mod \ 2)}$.
\medskip
{\rm (iii)} The quadratic form $(V,q) \otimes_{\bf Q} {\bf R}$ has an isometry with module $M \otimes_{\bf Q} {\bf R}$ and
Milnor index $\tau$.
\end{prop}
\noindent {\bf Proof.} The equivalence of (i) and (ii) follows from Proposition \ref{reals}, and (iii) is a reformulation of (i).
\medskip
\noindent
{\bf Terminology.} We say that the {\it local conditions for $F$ and $\tau$} hold at the infinite place if the equivalent conditions of Proposition \ref{equivalent real}
are satisfied.
\medskip We consider the following conditions
\medskip
(C 1) {\it $|F(1)|$, $|F(-1)|$ and $(-1)^n F(1) F(-1)$ are all squares.}
\medskip
(C 2) {\it $r \geqslant m(F)$, $s \geqslant m(F)$, and if moreover $F(1)F(-1) \not = 0$, then $m(F) \equiv r \equiv s \ {\rm (mod \ 2)}$.}
\medskip
Note that the local conditions for $F$ at the finite places hold if and only if condition (C 1) is satisfied, and that the local conditions
for $F$ and $\tau$ at the infinite place hold if and only if condition (C 2) is satisfied.
\medskip
\noindent
{\bf Terminology.} Let $M$ and $q$ be as above, and let $p$ be a prime number. A $\Gamma$-quadratic form $(M \otimes _{\bf Q} {\bf Q}_p,q)$ such that $\partial_p[M \otimes _{\bf Q} {\bf Q}_p,q] = 0$ in $W_{\Gamma}({\bf F}_p)$
and that $v_2({\rm det}(q_-)) \equiv v_2(F_1(-1)) \ {\rm (mod \ 2)}$ if $p = 2$
is called a {\it local solution} for $F$ at the prime number $p$.
\medskip
\section{${\bf Q}[\Gamma]$-forms, signatures and determinants}\label{Q section}
Let $F \in {\bf Z}[X]$ be a monic, symmetric polynomial, and let us write $F = F_0 F_1 F_2$, where $F_i$ is the product
of the irreducible factors of type $i$ of $F$. Let $r, s \ge 0$ be integers such that $r + s = {\rm deg}(F)$ and that $r \equiv s \ {\rm (mod \ 8)}$, and let $\tau \in {\rm Mil}_{r,s}(F)$ be a Milnor index. Let $(L,q)$ be an even, unimodular
lattice having a semi-simple isometry with characteristic polynomial $F$ and Milnor index $\tau$, and let $(M,q)$ be
the corresponding ${\bf Q}[\Gamma]$-form, and let $$M = M^0 \oplus M^1 \oplus M^2$$ and $$M^0 = M^+ \oplus M^-$$
be the associated orthogonal decompositions (cf. \S \ref{isometries section}). Note that the Milnor index $\tau$ and the degrees
of the polynomials determine the signatures of the factors. We have
${\rm sign}(M) = (r,s)$. Set ${\rm sign}(M^1) = (r_1,s_1)$, and ${\rm sign}(M^2) = (r_2,s_2)$; note that $r_2 = s_2 = {\rm deg}(F_2)/2$, since $M^2$ is hyperbolic, and set $${\rm sign}(M^+) = (r_+,s_+), \ \ {\rm sign}(M^-) = (r_-,s_-).$$
\smallskip
\noindent
We have ${\rm det}(M) = (-1)^s$, ${\rm det}(M^1) = F_1(1)F_1(-1) = (-1)^{s_1} |F_1(1)F_1(-1)|$, and ${\rm det}(M^2) = (-1)^{s_2}$.
\begin{prop}\label{det} We have $${\rm det}(M^+) = (-1)^{s_+}|F_1(1)|, \ \ {\rm det}(M^-) = (-1)^{s_-}|F_1(-1)|$$ in ${\bf Q}^{\times}/{\bf Q}^{\times 2}$.
\end{prop}
\noindent
{\bf Proof.} The sign of ${\rm det}(M^{\pm})$ is $(-1)^{s_{\pm}}$. Let $p$ be a prime, $p \not = 2$.
By Lemma \ref{necessary lemma},
the component of $\partial_p[M^1]$ in $W_{\Gamma}({\bf F}_p,N_{\pm}) \simeq W({\bf F}_p)$ is represented by a
quadratic form of dimension $v(F_1(\pm 1))$ over $\bf F_p$. If $v(F_1(\pm 1)) = 0$, then this component
of $\partial_p[M^1]$ is trivial, hence $\partial_p[M^{\pm}]$ is also trivial. This implies that $v({\rm det}(M^{\pm})) = 0$.
Assume now that $v(F_1(\pm 1)) = 1$. Then the component of $\partial_p[M^1]$ in $W_{\Gamma}({\bf F}_p,N_{\pm}) \simeq W({\bf F}_p)$ is represented by a
quadratic form of dimension 1. Since $\partial_p[M] = 0$, this implies that $\partial_p[M^{\pm}]$ is represented by a form
of dimension 1, and therefore $v({\rm det}(M^{\pm})) = 1$. Hence in this case too, we have
$v({\rm det}(M^{\pm})) = v(F_1(\pm 1))$.
\medskip
Assume that $p = 2$. The component of $\partial_2[M^1]$ in $W_{\Gamma}({\bf F}_2,N_{\pm}) \simeq W({\bf F}_2)$ is represented by a
quadratic form of dimension $v(F_1(1)) + v(F_1(-1))$ over ${\bf F}_2$ (see Lemma \ref{necessary lemma}). If $M^+ = 0$
and $M^- = 0$, there is nothing to prove. Assume that $M^+ \not = 0$, and $M^- = 0$. Then $F_1(-1) = F(-1)$, and
by Theorem \ref{local even necessary} (a), we have $v(F_1(-1)) = 0$. Hence $\partial_2(M^1)$ is represented by a form of dimension
$v(F_1(1))$. If $v(F_1(1)) = 0$, then $\partial_2(M^1)=0$, and hence $\partial_2(M^+) = 0$; therefore $v({\rm det}(M^+)) = 0$.
If $v(F_1(1)) = 1$, then $\partial_2(M^1)$ is represented by a form of dimension 1 over ${\bf F}_2$, hence
$\partial_2(M^+)$ is also represented by a form of dimension 1 over ${\bf F}_2$. This implies that $v({\rm det}(M^+)) = 1$.
Therefore $v({\rm det}(M^+)) = v(F_1(1))$. The same argument shows that if $M^+ = 0$ and $M^- \not = 0$, then
$v({\rm det}(M^-)) = v(F_1(-1))$. Suppose now that $M^+ \not = 0$ and $M^- \not = 0$. By \cite {BT}, Theorem 8.5 and
Proposition 8.6, we have $v({\rm det}(M^-)) = v(F_1(-1))$. If $v(F_1(1)) = v(F_1(-1))$, then $\partial_2(M^1) = 0$.
Therefore $\partial_2(M^+ \oplus M^-) = 0$.
Since $v({\rm det}(M^-)) = v(F_1(-1))$, this implies that $v({\rm det}(M^+)) = v(F_1(1))$. If
$v(F_1(1)) + v(F_1(-1))= 1$, then $\partial_2(M^1) \not = 0$, and hence $\partial_2(M^+ \oplus M^-) \not = 0$.
Therefore $v({\rm det}(M^+)) + v({\rm det}(M^-)) = 1$. Since $v({\rm det}(M^-)) = v(F_1(-1))$, we have
$v({\rm det}(M^+)) = v(F_1(1))$. This completes the proof of the proposition.
\bigskip
\section{Local decomposition}\label{decomposition section}
Let $F \in {\bf Z}[X]$ be a monic, symmetric polynomial of even degree with $F(0) = 1$; set $2n = {\rm deg}(F)$.
Let $r, s \ge 0$ be integers such that $r + s = {\rm deg}(F)$ and that $r \equiv s \ {\rm (mod \ 8)}$, let $\tau \in {\rm Mil}_{r,s}(F)$ be a Milnor index. If the local conditions (C 1) and (C 2) hold, then we obtain
a local solution everywhere (see \S \ref{local-global problem section}). The aim of this section is to define
local decompositions that will be useful in the following sections.
\medskip We start by introducing some notation.
Let
$M = M^0 \oplus M^1 \oplus M^2$ be the semi-simple ${\bf Q}[\Gamma]$-module associated to $F$ as in \S \ref{symmetric section}, with
\medskip
\centerline {$M^1 = \underset{f \in I_1} \oplus M_f$ and $M^0 = M^+ \oplus M^-$.}
\medskip If $f \in I_1$, set $E_f = {\bf Q}[X]/(f)$ and let $\sigma_f : E_f \to E_f$ be the involution induced by $X \mapsto X^{-1}$.
Let $(E_f)_0$ be the fixed field of $\sigma_f$, and let $d_f \in (E_f)_0$ be such that $E_f = (E_f)_0 (\sqrt d_f)$.
Note that $M_f$ is an $E_f$-vector space of dimension $n_f$.
Let $$Q_f : M_f \times M_f \to {\bf Q}$$ be the orthogonal sum of $n_f$ copies of the quadratic form $E_f \times E_f \to {\bf Q}$ defined by $(x,y) \mapsto {\rm Tr}_{E_f/{\bf Q}}(x \sigma_f(y))$.
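\medskip For instance, if $f(X) = X^2 + 1$, then $E_f = {\bf Q}(i)$ and $\sigma_f$ is the complex conjugation; in the basis $\{1, i\}$ of $E_f$, the form $(x,y) \mapsto {\rm Tr}_{E_f/{\bf Q}}(x \sigma_f(y))$ is the diagonal form $\langle 2,2 \rangle$.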
\medskip The Milnor index $\tau \in {\rm Mil}_{r,s}(F)$ determines the signatures of $M^+$ and $M^-$, as follows.
Recall that ${\rm dim}(M^+) = n_+$ and ${\rm dim}(M^-) = n_-$.
\medskip Let $s_+$ and $s_-$ be
as in \S \ref{Q section}, and set $D_{\pm} = (-1)^{s_{\pm}}F_1(\pm 1)$; let $Q_{\pm}$ be the diagonal quadratic form
of dimension $n_{\pm}$ over ${\bf Q}$
defined by $Q_{\pm} = \langle D_{\pm}, 1,\dots,1 \rangle$.
Let $Q$ be the orthogonal sum $$Q = \underset{f \in I_1} \oplus Q_f \oplus Q_+ \oplus Q_-.$$
\medskip Recall from \S \ref{local-global problem section} that we denote by $q = q_{r,s}$ the diagonal
quadratic form over ${\bf Q}$ having $r$ diagonal entries $1$ and $s$ diagonal entries $-1$.
\medskip Assume that conditions (C 1) and (C 2) hold. If $p$ is a prime number, then $(M,q) \otimes_{\bf Q} {\bf Q}_p$ has
a structure of a ${\bf Q}_p[\Gamma]$-quadratic form (see \S \ref{local-global problem section}), and we have the
orthogonal decomposition (cf. \S \ref{isometries section})
$$(M,q) \otimes_{\bf Q} {\bf Q}_p = \underset {f \in I_1} \oplus (M^p_f,q_f^p) \oplus (M^p_+,q_+^p) \oplus (M^p_-,q_-^p) \oplus (M^p_2,q_2^p),$$
\noindent
where $M^p_f = M_f \otimes_{\bf Q} {\bf Q}_p$, $M_+^p = M^+ \otimes_{\bf Q} {\bf Q}_p$, $M_-^p = M^- \otimes_{\bf Q} {\bf Q}_p$, and $M_2^p = M^2 \otimes_{\bf Q} {\bf Q}_p$. The ${\bf Q}_p[\Gamma]$-quadratic form $(M^p_2,q_2^p)$ is hyperbolic.
\medskip
For $f \in I_1$, set
$E_f^p = E_f \otimes_{\bf Q} {\bf Q}_p$ and $(E_f)_0^p = (E_f)_0 \otimes_{\bf Q} {\bf Q}_p$. There exists a unique non-degenerate hermitian form $(M_f^p,h^p_f)$ over
$(E_f^p,\sigma_f)$ such that $$q_f^p(x,y) = {\rm Tr}_{E_f^p/{\bf Q}_p}(h^p_f(x,y)),$$
see for instance \cite {M}, Lemma 1.1 or \cite {B 15}, Proposition 3.6.
\medskip
Set
$\lambda^p_f = {\rm det}(h^p_f) \in (E^p_f)_0^{\times}/{\rm N}_{E^p_f/(E^p_f)_0}((E^p_f)^{\times})$. Note that the hermitian
form $h^p_f$ is isomorphic to the $n_f$-dimensional diagonal hermitian form $\langle \lambda^p_f,1,\dots,1 \rangle$
over $E_f^p$. Hence $q_f^p$ is determined by $\lambda^p_f$.
\begin{notation}\label{lambda bis} With the notation above, set
$$\partial_p(\lambda^p_f) = \partial_p[q_f^p] \in W_{\Gamma}({\bf F}_p).$$
\end{notation}
\smallskip
\begin{prop}\label{dim det w f} We have ${\rm dim}(q_f^p) = {\rm deg}(f)n_f$, ${\rm det}(q_f^p) = [f(1)f(-1)]^{n_f}$, and
the Hasse-Witt invariant of $q_f^p$ satisfies
$$w_2(q_f^p) + w_2(Q_f) = {\rm cor}_{(E_f)_0^p/{\bf Q}_p}({\rm det}(h^p_f),d_f)$$ in ${\rm Br}_2({\bf Q}_p)$.
\end{prop}
\noindent
{\bf Proof.} The assertion concerning the dimension is clear, the one on the determinant follows from
Lemma \ref{determinant}, and the property of the Hasse-Witt invariants from \cite {B 20}, Proposition 12.8.
\begin{prop}\label{dim det w pm}
{\rm (a)} ${\rm dim}(q_{\pm}^p) = n_{\pm}$ and $${\rm det}(q_{+}^p){\rm det}(q_{-}^p) = (-1)^{s_+ + s_-}|F_1(1)F_1(-1)|.$$
\medskip
\noindent
{\rm (b)} If $n_+ \not = 0$ and $n_- \not = 0$, then we can choose $q_+^p$ and $q_-^p$ such that
${\rm det}(q_{\pm}^p) = (-1)^{s_{\pm}}|F_1(\pm 1)|$.
\medskip
\noindent
{\rm (c)} If $n_{\pm} \not = 0$, then the
Hasse-Witt invariant of $q_{\pm}^p$ can take either of the two possible values of
$\{0,1\} = {\rm Br}_2({\bf Q}_p)$.
\medskip
\noindent
{\rm (d)} If $p \not = 2$, then $\partial_p[q_{\pm}^p]$ can be either of the
two possible classes of dimension $v_p({\rm det}(q_{\pm}^p))$ of $W_{\Gamma}({\bf F}_p,N_{\pm}) \simeq W({\bf F}_p)$.
\end{prop}
\noindent
{\bf Proof.} (a) is clear, (b) follows from Theorem \ref{sufficient} (ii) and Theorem \ref{local even} (ii); (c) and (d) are
straightforward to check.
\medskip We also need the following
\begin{lemma} Let $p$ be a prime number, $p \not = 2$, and let $b_1$ and $b_2$ be two quadratic forms over ${\bf Q}_p$
with ${\rm dim}(b_1) = {\rm dim}(b_2)$ and ${\rm det}(b_1) = {\rm det}(b_2)$. Then we have
$$w_2(b_1) = w_2(b_2) \ \ {\rm in} \ \ {\rm Br}_2({\bf Q}_p) \ \ \iff \ \ \partial_p[b_1] = \partial_p[b_2] \ \ {\rm in} \ \ W({\bf F}_p).$$
\end{lemma}
\noindent
{\bf Proof.} The proof is straightforward.
\bigskip
Similarly, we have
$$(M,q) \otimes_{\bf Q} {\bf R} = \underset {f \in I_1} \oplus (M^{\infty}_f,q_f^{\infty}) \oplus (M^{\infty}_+,q_+^{\infty}) \oplus (M^{\infty}_-,q_-^{\infty}) \oplus (M^{\infty}_2,q_2^{\infty}),$$
\noindent
where $M^{\infty}_f = M_f \otimes_{\bf Q} {\bf R}$, $M_+^{\infty} = M^+ \otimes_{\bf Q} {\bf R}$, $M_-^{\infty} = M^- \otimes_{\bf Q} {\bf R}$, and $M_2^{\infty} = M^2 \otimes_{\bf Q} {\bf R}$. The ${\bf R}[\Gamma]$-quadratic form $(M^{\infty}_2,q_2^{\infty})$ is hyperbolic.
\medskip
The ${\bf R}[\Gamma]$-quadratic forms $(M^{\infty}_f,q_f^{\infty})$ and $(M^{\infty}_{\pm},q_{\pm}^{\infty})$ are
determined by the Milnor index
$\tau \in {\rm Mil}_{r,s}(F)$.
\begin{prop}\label{real Hasse-Witt} We have ${\rm dim}(q_f^{\infty}) = {\rm deg}(f)n_f$, ${\rm det}(q_f^{\infty}) = [f(1)f(-1)]^{n_f}$, and
the Hasse-Witt invariant of $q_f^{\infty}$ satisfies
$$w_2(q_f^{\infty}) + w_2(Q_f) = {\rm cor}_{(E_f)_0^{\infty}/{\bf R}}({\rm det}(h^{\infty}_f),d_f)$$ in ${\rm Br}_2({\bf R})$.
\end{prop}
\noindent
{\bf Proof.} The assertion concerning the dimension is clear, the one on the determinant follows from
Lemma \ref{determinant}, and the property of the Hasse-Witt invariants from \cite {B 20}, Proposition 12.8.
\medskip
For $f \in I_1$, set
$E_f^{\infty} = E_f \otimes_{\bf Q} {\bf R}$ and $(E_f)_0^{\infty} = (E_f)_0 \otimes_{\bf Q} {\bf R}$. There exists a unique non-degenerate hermitian form $(M_f^{\infty},h^{\infty}_f)$ over
$(E_f^{\infty},\sigma_f)$ such that $$q_f^{\infty}(x,y) = {\rm Tr}_{E_f^{\infty}/{\bf R}}(h^{\infty}_f(x,y)),$$
see for instance \cite {M}, Lemma 1.1 or \cite {B 15}, Proposition 3.6.
Set
$$\lambda^{\infty}_f = {\rm det}(h^{\infty}_f) \in (E^{\infty}_f)_0^{\times}/{\rm N}_{E^{\infty}_f/(E^{\infty}_f)_0}.$$
\section {Obstruction group}\label{obstruction section}
We keep the notation of the previous sections. The aim of this section is to define a finite elementary abelian $2$-group
that will play an important role in the Hasse principle (see \S \ref {HP}). Recall that $J$ is the set of irreducible factors
of the polynomial $F$, that $I_0 \subset J$ is the set of factors of type 0, $I_1 \subset J$ is the set of factors of type $1$, and $I = I_0 \cup I_1$.
\medskip Let $C(I)$ be the set of maps $I \to {\bf Z}/2{\bf Z}$.
If $f, g \in I$, let $\Pi_{f,g}$ be the set of prime numbers $p$ such that
\medskip
\centerline {$f \ {\rm (mod \ {\it p})}$ and $g \ {\rm (mod \ {\it p})}$}
\medskip
\noindent have a common
symmetric factor in ${\bf F}_p[X]$.
Note that if $g(X) = X \pm 1$, then $\Pi_{f,g}$ is the set of prime numbers $p$ such that $$v_p(f(\mp1)) \not = 0.$$
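\medskip For instance, if $f(X) = X^2 - 3X + 1$ and $g(X) = X + 1$, then $f(-1) = 5$, hence $\Pi_{f,g} = \{5\}$; for the same $f$ and $g(X) = X - 1$, we have $f(1) = -1$, hence $\Pi_{f,g} = \varnothing$.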
\medskip
Let $C_0 (I)$ be the set of $c \in C(I)$ such that
\medskip
\centerline { $c(f) = c(g)$ if $\Pi_{f,g} \not = \varnothing$,}
\medskip
\noindent
and let ${\mbox{\textcyr{Sh}}}_F$ be the quotient of
$C_0(I)$ by the subgroup of constant maps.
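\medskip For instance, if $\Pi_{f,g} \not = \varnothing$ for all $f \not = g$ in $I$, then $C_0(I)$ only contains the constant maps, and ${\mbox{\textcyr{Sh}}}_F = 0$; at the other extreme, if $I \not = \varnothing$ and $\Pi_{f,g} = \varnothing$ whenever $f \not = g$, then $C_0(I) = C(I)$ and ${\mbox{\textcyr{Sh}}}_F \simeq ({\bf Z}/2{\bf Z})^{|I|-1}$.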
\section{Local data}\label{local data section}
We keep the notation of \S \ref {decomposition section}.
Assume that conditions (C 1) and (C 2) of \S \ref{local-global problem section} hold, and recall that this implies the
existence of a ``local solution'' everywhere. This leads, for all prime numbers $p$, to an orthogonal decomposition of the associated ${\bf Q}_p[\Gamma]$-bilinear form (see \S \ref{decomposition section}). We obtain in this way a collection of ${\bf Q}_p[\Gamma]$-bilinear forms,
one for each irreducible, symmetric factor of the characteristic polynomial. The dimensions and determinants of
the bilinear forms are always the same, but their Hasse-Witt invariants vary.
\medskip The aim of this section is to give a combinatorial encoding of the possible Hasse-Witt invariants, called ``local data".
\medskip We identify ${\rm Br}_2({\bf R})$ and ${\rm Br}_2({\bf Q}_p)$, where $p$ is a prime number, with $\{0,1\} = {\bf Z}/2{\bf Z}$. Let $\mathcal V$ be the set of all places of ${\bf Q}$, and let $\mathcal V'$ be the set of finite places.
\medskip
If $p$ is a prime number, let $q^p_f$ for $f \in I_1$, $q^p_+$ and $q^p_-$ be as in \S \ref{decomposition section}; recall
that if $n_+ \not = 0$ and $n_- \not = 0$, we choose $q^p_{\pm}$ such that ${\rm det}(q_{\pm}^p) = (-1)^{s_{\pm}}|F_1(\pm 1)|$
(see Proposition \ref {dim det w pm} (b)).
\medskip
Let $a^p \in C(I)$ be the map defined as follows :
\medskip
set $$a^p(f) = w_2(q^p_f) + w_2(Q_f)$$ if $f \in I_1$, and $$a^p(X \pm 1) = w_2(q^p_{\pm}) + w_2(Q_{\pm}).$$
\bigskip
Let $\mathcal C^p$ be the set of maps $a^p \in C(I)$ obtained in this way.
\begin{prop}\label{almost zero} For almost all prime numbers $p$, the zero map belongs to the set $\mathcal C^p$.
\end{prop}
\noindent
{\bf Proof.} Let $S$ be the set of prime numbers $p$ such that $p$ is ramified in the extension $E_f/{\bf Q}$ for some
$f \in I_1$, or such that $w_2(q) \not = w_2(Q)$ in ${\rm Br}_2({\bf Q}_p)$; this is a finite set. We claim that if $p \not \in S$, then
the zero map belongs to $\mathcal C^p$. Indeed, set $q_f^p = Q_f^p$ for all $f \in I_1$, and
$q_{\pm}^p = Q_{\pm}^p$. We have ${\rm det}(q) = {\rm det}(Q)$ in $\bf Q^{\times}/{\bf Q}^{\times 2}$, and if $p \not \in S$ we have $w_2(q) = w_2(Q)$ in ${\rm Br}_2({\bf Q}_p)$, therefore, for $p \not \in S$, we have
$$(M,q) \otimes_{\bf Q} {\bf Q}_p = \underset {f \in I_1} \oplus (M^p_f,q_f^p) \oplus (M^p_+,q_+^p) \oplus (M^p_-,q_-^p) \oplus (M^p_2,q_2^p).$$
If $p$ is unramified in all the extensions $E_f/{\bf Q}$ for $f \in I_1$, by \cite{B 20}, Lemma 11.2 we
have $\partial_p[M^p_f,q_f^p] = 0$ in $W_{\Gamma}({\bf F}_p)$; moreover, $v_p(D_{\pm}) = 0$, hence
$\partial_p [M^p_{\pm},q_{\pm}^p] = 0$ in $W_{\Gamma}({\bf F}_p)$.
\medskip The above arguments show that if $p \not \in S$, then the choice of $q_f^p = Q_f^p$ for all $f \in I_1$ and
$q_{\pm}^p = Q_{\pm}^p$ gives rise to the element $a^p = 0$ of $\mathcal C^p$; therefore the zero map is in $\mathcal C^p$,
as claimed. This completes the proof of the proposition.
\bigskip
Proposition \ref{dim det w f} implies that if $f \in I_1$, then $a^p(f)$ is determined by ${\rm det}(h^p_f)$. Set
$\lambda^p_f = {\rm det}(h^p_f) \in (E^p_f)_0^{\times}/{\rm N}_{E^p_f/(E^p_f)_0}((E^p_f)^{\times})$. Set $E^p = \underset{f \in I_1} \prod E^p_f$ and
$E^p_0 = \underset{f \in I_1} \prod (E^p_f)_0$; the map $a^p$ is determined by
$\lambda^p \in (E^p_0)^{\times}/{\rm N}_{E^p/E^p_0}((E^p)^{\times})$, and the quadratic forms $q^p_{\pm}$.
\begin{notation}\label{[]} With the above notation, we set
$a^p = a^p[\lambda^p,q^p_{\pm}] = a[\lambda^p,q_+^p,q_-^p]$.
\end{notation}
\begin{notation} If $f, g \in I$, let $c_{f,g} \in C(I)$ be such that
\medskip
\centerline {$c_{f,g}(f) = c_{f,g}(g) = 1$ and $c_{f,g}(h) = 0$
if $h \not = f,g$. }
\medskip Let $(f,g): C(I) \to C(I)$
be the map sending $c$ to $c + c_{f,g}$.
\end{notation}
\bigskip
Recall that for all $f, g \in I$, the set $\Pi_{f,g}$ consists of the prime numbers $p$ such that $f \ {\rm (mod \ p)}$ and $g \ {\rm (mod \ p)}$ have a common
symmetric factor in ${\bf F}_p[X]$.
\bigskip If $p$ is a prime number, let us consider the equivalence relation on $C(I)$ generated by the elementary equivalence
$$a \sim b \iff b = (f,g) a \ {\rm with} \ p \in \Pi_{f,g}.$$
\medskip
We denote by $\sim_p$ this equivalence relation.
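\medskip For instance, if $I = \{f, g, h\}$ consists of three distinct factors, and if $p$ belongs to $\Pi_{f,g}$ but to no other set $\Pi_{f',g'}$ with $f' \not = g'$, then the $\sim_p$-equivalence class of $a \in C(I)$ is $\{a, \ a + c_{f,g}\}$.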
\begin{prop}\label{equivalence class} The set $\mathcal C^p$ is a $\sim_p$-equivalence class of $C(I)$.
\end{prop}
\noindent
{\bf Proof.} Set $A^p = w_2(q) + w_2(Q)$
in ${\rm Br}_2({\bf Q}_p) = {\bf Z}/2{\bf Z}$, and note that for all $a^p \in \mathcal C^p$, we have
$\underset{f \in I}\sum a^p(f) = A^p$.
\medskip We start by proving that the set $\mathcal C^p$ is stable by the maps $(f,g)$ for $p \in \Pi_{f,g}$.
Let $a^p[\lambda^p,q_{\pm}^p] \in \mathcal C^p$, let $f,g \in I$ be such that $p \in \Pi_{f,g}$, and let
us show that $(f,g)(a^p[\lambda^p,q_{\pm}^p]) \in \mathcal C^p$. Note that if $f \in I_1$, then $p \in \Pi_{f,g}$ implies that
$(E^p_f)_0^{\times}/{\rm N}_{E^p_f/(E^p_f)_0}((E^p_f)^{\times})$ is not trivial.
Assume first that $f,g \in I_1$. There exist $\mu_f \in (E^p_f)_0^{\times}/{\rm N}_{E^p_f/(E^p_f)_0}((E^p_f)^{\times})$ and $\mu_g \in (E^p_g)_0^{\times}/{\rm N}_{E^p_g/(E^p_g)_0}((E^p_g)^{\times})$
such that ${\rm cor}_{(E_f)_0^p/{\bf Q}_p}(\mu_f,d_f) \not = {\rm cor}_{(E_f)_0^p/{\bf Q}_p}(\lambda_f,d_f)$
and ${\rm cor}_{(E_g)_0^p/{\bf Q}_p}(\mu_g,d_g) \not = {\rm cor}_{(E_g)_0^p/{\bf Q}_p}(\lambda_g,d_g)$.
Let $\mu^p \in E_0^p$ be obtained by replacing $\lambda^p_f$ by $\mu_f$ and $\lambda^p_g$ by $\mu_g$, and
leaving the other components unchanged.
We have $a^p[\mu^p,q_{\pm}^p] = (f,g)(a^p[\lambda^p,q^p_{\pm}])$. Using the arguments of \cite {B 20}, Propositions 16.5 and 22.1
we see that $a^p[\mu^p,q_{\pm}^p] \in \mathcal C^p$. Assume now that $f \in I_1$ and $g = X -1$. In this case,
the hypothesis $p \in \Pi_{f,g}$ implies that there exists a place $w$ of $(E_f)_0$ above $p$ that
ramifies in $E_f$, such that, with the notation
of \cite {B 20}, \S 22, the $w$-component $\lambda^w$ of $\lambda^p$ satisfies $\partial_p(\lambda^w) \in W_{\Gamma}({\bf F}_p,N_+)$.
We modify the $w$-component of $\lambda^p$ to obtain $\mu_f \in (E^p_f)_0^{\times}/{\rm N}_{E^p_f/(E^p_f)_0}((E^p_f)^{\times})$
such that
${\rm cor}_{(E_f)_0^p/{\bf Q}_p}(\mu_f,d_f) \not = {\rm cor}_{(E_f)_0^p/{\bf Q}_p}(\lambda_f,d_f)$, and let
$b^p$ be a quadratic form over $\bf Q_p$ with ${\rm dim}(b^p) = {\dim}(q^p_+)$, ${\rm det}(b^p) = {\det}(q^p_+)$, and
$w_2(b^p) \not = w_2(q^p_+)$. We have $a^p[\mu^p,b^p,q^p_-] = (f,g)(a^p[\lambda^p,q^p_{\pm}])$.
The arguments of \cite {B 20}, Propositions 16.5 and 22.1 show that $a^p[\mu^p,b^p,q^p_-] \in \mathcal C^p$.
\medskip
Conversely, let us show that if $a^p[\lambda^p,q_{\pm}^p]$ and $a^p[\mu^p,b_{\pm}^p]$ are in $\mathcal C^p$, then
$a^p[\lambda^p,q_{\pm}^p] \sim_p a^p[\mu^p,b_{\pm}^p]$.
Let $J'$ be the set of $f \in J$ such that $a^p[\lambda^p,q_{\pm}^p](f) \not = a^p[\mu^p,b_{\pm}^p](f)$. Since
$\underset{h \in J}\sum a^p(h) = A^p$ for
all $a^p \in \mathcal C^p$, the set $J'$ has an even number of elements.
\medskip
Assume first that $p \not = 2$. This implies that for all $f \in J'$, we have $\partial_p(\lambda^p_f) \not = \partial_p(\mu^p_f)$
and that if $f(X) = X \pm 1$, then $\partial_p(q_{\pm}^p) \not = \partial_p(b_{\pm}^p)$. Hence there exist $f, g \in J'$
with $f \not = g$ such that $\partial_p(W_{\Gamma}({\bf Q}_p,M_f^p))$ and $\partial_p(W_{\Gamma}({\bf Q}_p,M_g^p))$ have a
non-zero intersection. This implies that $p \in \Pi_{f,g}$. The element $(f,g)(a^p[\lambda^p,q^p_{\pm}])$ differs
from $a^p[\mu^p,b_{\pm}^p]$ in fewer elements than
$a^p[\lambda^p,q_{\pm}^p]$. Since $J'$ is a finite set, continuing this way we see that $a^p[\lambda^p,q_{\pm}^p] \sim_p a^p[\mu^p,b_{\pm}^p]$.
\medskip
Suppose now that $p = 2$. Let $J''$ be the set of $f \in J'$ such that $\partial_2(\lambda^2_f) \not = \partial_2(\mu^2_f)$,
and note that $J''$ has an even number of elements. The same argument as in the case $p \not = 2$ shows
that applying maps $(f,g)$, we can assume that $J'' = \varnothing$. If $f \in J'$ and $f \not \in J''$, then
$\partial_2(\lambda^2_f)$ belongs to $W_{\Gamma}({\bf F}_2,1) \subset W_{\Gamma}({\bf F}_2)$. Therefore if $f,g \in J'$ and $f,g \not \in J''$, then
$2 \in \Pi_{f,g}$. The number of these elements is also even, hence after a finite number of elementary
equivalences we see that $a^p[\lambda^p,q_{\pm}^p] \sim_p a^p[\mu^p,b_{\pm}^p]$. This completes
the proof of the proposition.
\medskip
\begin{notation}\label {epsilon p} Let $a^p \in \mathcal C^p$, and let $c \in C(I)$. Set $$\epsilon_{a^p}(c) =
\underset{f \in I} \sum c(f) a^p(f).$$
\end{notation}
Recall from \S \ref{obstruction section} that $C_0(I)$ is the set of $c \in C(I)$ such that
\medskip \centerline {$c(f) = c(g)$ if $\Pi_{f,g} \not = \varnothing$.}
\begin{lemma} Let $a^p$, $b^p$ be two elements of $\mathcal C^p$, and let $c \in C_0(I)$. Then
$$\epsilon_{a^p}(c) = \epsilon_{b^p}(c).$$
\end{lemma}
\noindent
{\bf Proof.} By Proposition \ref{equivalence class}, we have $a^p \sim_p b^p $; we can assume that $b^p = (f,g)a^p$ with
$p \in \Pi_{f,g}.$ By definition, we have $b^p(h) = a^p(h)$ if $h \not = f,g$, $b^p(f) = a^p(f) + 1$ and
$b^p(g) = a^p(g) + 1$. Since $c \in C_0(I)$ and $\Pi_{f,g} \not = \varnothing$, we have $c(f) = c(g)$, and this shows that
$\epsilon_{a^p}(c) = \epsilon_{b^p}(c)$, as claimed.
\medskip
Since $\epsilon_{a^p}(c)$ does not depend on the choice of $a^p \in \mathcal C^p$, we set
$\epsilon^p (c) = \epsilon_{a^p}(c)$ for some $a^p \in \mathcal C^p$, and obtain a map $$\epsilon^p : C_0(I) \to {\bf Z}/2{\bf Z}.$$
\bigskip
By Proposition \ref{almost zero}, we have $\epsilon^p = 0$ for almost all prime numbers $p$.
\bigskip Let $\epsilon^{\rm finite} = \sum \epsilon^p$, where the sum is taken over all the prime numbers $p$; this is
a finite sum. Note that if $(X-1)(X+1)$ does not divide $F$, then $\epsilon^{\rm finite}$ only depends on $F$; it does
not depend on the choice of the Milnor signature $\tau$. We have a homomorphism
$$\epsilon^{\rm finite} : C_0(I) \to {\bf Z}/2{\bf Z}.$$
\bigskip
Let $v_{\infty} \in \mathcal V$ be the unique infinite place. Recall that the forms $q^{\infty}_f$ and
$q^{\infty}_{\pm}$ are uniquely determined by the choice of the Milnor index $\tau \in {\rm Mil}_{r,s}(F)$.
Let $a^{\infty} \in C(I)$ be the map defined as follows :
\medskip
$$a^{\infty}(f) = w_2(q^{\infty}_f) + w_2(Q_f)$$ if $f \in I_1$, $$a^{\infty}(X \pm 1) = w_2(q^{\infty}_{\pm}) + w_2(Q_{\pm}),$$ and
\medskip \centerline {$a^{\infty}(f) = 0$ if $f \in J$ with $f \not \in I$, $f \not = X \pm 1$. }
\bigskip
We obtain a map
$$\epsilon^{\infty}_{\tau} : C(I) \to {\bf Z}/2{\bf Z}$$ by setting $$\epsilon^{\infty}_{\tau}(c) = \underset{f \in J} \sum c(f) a^{\infty}(f).$$
For $v \in \mathcal V$, set $\epsilon^v = \epsilon^p$ if $v = v_p$, and $\epsilon^v = \epsilon^{\infty}_{\tau}$ if $v = v_{\infty}$. Set
$$\epsilon_{\tau}(c) = \underset{v \in \mathcal V} \sum \epsilon^v(c).$$
Since $\epsilon^v = 0$ for almost all $v \in \mathcal V$ (cf. Proposition \ref{almost zero}), this is a finite sum. We have
$\epsilon_{\tau} = \epsilon^{\rm finite} + \epsilon^{\infty}_{\tau}$. We obtain
a homomorphism
$$\epsilon _{\tau}: C_0(I) \to {\bf Z}/2{\bf Z}.$$
\bigskip
Recall from \S \ref{obstruction section} that ${\mbox{\textcyr{Sh}}}_F$ is the
quotient of $C_0(I)$ by the constant maps.
\begin{prop}\label{epsilon} The homomorphism $\epsilon_{\tau} : C_0(I) \to {\bf Z}/2{\bf Z}$ induces a homomorphism
$$\epsilon_{\tau} : {\mbox{\textcyr{Sh}}}_F \to {\bf Z}/2{\bf Z}.$$
\end{prop}
\noindent
{\bf Proof.} It suffices to show that if $c(f) = 1$ for all $f \in J$, then $\epsilon_{\tau}(c) = 0$.
For all $v \in \mathcal V$, set $A^v = w_2(q) + w_2(Q)$
in ${\rm Br}_2({\bf Q}_v) = {\bf Z}/2{\bf Z}$, where
${\bf Q}_v$ is either $\bf R$ or ${\bf Q}_p$, for a prime number $p$. Note that
$A^v = 0$ for almost all $v \in \mathcal V$, and that
$\underset {v \in \mathcal V} \sum A^v = 0$. Moreover, for all $a^v \in \mathcal C^v$, we have by
definition $\underset{f \in J}\sum a^v(f) = A^v$.
\medskip Let $c \in C(I)$ be such that $c(f) = 1$ for all $f \in J$. We have
$$\epsilon_{\tau}(c) = \underset{v \in \mathcal V} \sum \ \underset {f \in J} \sum c(f) \ a^v(f) = \underset{v \in \mathcal V} \sum \ \underset {f \in J} \sum \ a^v(f) = \underset {v \in \mathcal V} \sum A^v = 0.$$
\section{Hasse Principle}\label{HP}
We keep the notation of the previous sections; in particular, $F \in {\bf Z}[X]$ is a monic, symmetric polynomial of even degree such that $F(0) = 1$ and we set $2n = {\rm deg}(F)$.
Let $r, s \ge 0$ be integers such that $r + s = {\rm deg}(F)$ and that $r \equiv s \ {\rm (mod \ 8)}$, and let $\tau \in {\rm Mil}_{r,s}(F)$ be a Milnor index.
We assume that conditions
(C 1) and (C 2) hold.
\medskip
Recall from \S \ref{local data section} that we have a homomorphism
$$\epsilon_{\tau} : {\mbox{\textcyr{Sh}}}_F \to {\bf Z}/2{\bf Z}.$$
\begin{theo}\label{final} There exists an even, unimodular lattice having a semi-simple isometry with characteristic
polynomial $F$ and Milnor index $\tau$ if and only if $\epsilon_{\tau} = 0$.
\end{theo}
\noindent
{\bf Proof.} Assume that there exists an even, unimodular lattice $(L,q)$ having a semi-simple isometry with characteristic
polynomial $F$ and Milnor index $\tau$, and let $(M,q)$ be the associated ${\bf Q}[\Gamma]$-quadratic form. Let $M^0 \oplus M^1 \oplus M^2$ be the corresponding orthogonal decomposition
of \S \ref{Q section}. We have the further orthogonal decompositions
$(M^1,q^1) = \underset {f \in I_1} \oplus (M_f,q_f)$, and $(M^0,q^0) = (M^+,q^+) \oplus (M^-,q^-)$ (see \S
\ref{isometries section}). For
all prime numbers $p$, this gives rise to a local decomposition as in \S \ref{decomposition section}, and to an
element $a^p \in \mathcal C^p$ given by $a^p(f) = w_2(q^p_f) + w_2(Q_f)$ if $f \in I_1$, by $a^p(X \pm 1) = w_2(q^p_{\pm}) + w_2(Q_{\pm}),$ and $a^p(f) = 0$ if $f \in J$ with $f \not \in I$, $f \not = X \pm 1$ (see \S \ref{local data section}).
Similarly, we have the element $a^{\infty} \in C(I)$ given by
$a^{\infty}(f) = w_2(q^{\infty}_f) + w_2(Q_f)$ if $f \in I_1$, $a^{\infty}(X \pm 1) = w_2(q^{\infty}_{\pm}) + w_2(Q_{\pm}),$ and
$a^{\infty}(f) = 0$ if $f \in J$ with $f \not \in I$, $f \not = X \pm 1$.
Since $q^p_f = q_f \otimes_{\bf Q}{\bf Q}_p$ and $q^{\infty}_f = q_f \otimes_{\bf Q}{\bf R}$
for all $f \in J$, we have
\medskip
\centerline { $ \underset{v \in \mathcal V} \sum \ a^v(f) = 0$ for all $f \in J$.}
\medskip This implies that $\epsilon_{\tau} = 0$.
\bigskip Conversely, assume that $\epsilon_{\tau} = 0$.
By \cite{B 20}, Theorem 13.5 this implies that for all $v \in \mathcal V$ there exists $b^v \in \mathcal C^v$ such
that
for all $f \in J$, we have $\underset {v \in \mathcal V} \sum b^v(f) = 0$.
\medskip
If $v \in \mathcal V'$ with $v = v_p$ where $p$ is a prime number, let us write $b^v = a[\lambda^p,q_+^p,q_-^p]$, for some
$\lambda^p \in (E^p_0)^{\times}/{\rm N}_{E^p/E^p_0}((E^p)^{\times})$, and some quadratic forms
$q_+^p,q_-^p$ over ${\bf Q}_p$, as in notation \ref{[]}.
\medskip
Note that since $v_{\infty}$ does not belong to any of the sets $\Pi_{f,g}$,
we have $b^{\infty}(f) = a^{\infty}(f) = w_2(q^{\infty}_f) + w_2(Q_f)$ if $f \in I_1$, $b^{\infty}(X \pm 1) = a^{\infty}(X \pm 1) = w_2(q^{\infty}_{\pm}) + w_2(Q_{\pm}),$ and
$b^{\infty}(f) = a^{\infty}(f) = 0$ if $f \in J$ with $f \not \in I$, $f \not = X \pm 1$.
Recall that the forms $q^{\infty}_f$ and
$q^{\infty}_{\pm}$ are uniquely determined by the choice of the Milnor index $\tau \in {\rm Mil}_{r,s}(F)$.
\bigskip
If $f \in I_1$, we have $$b^{v_p}(f) = a[\lambda^p,q_+^p,q_-^p](f) = {\rm cor}_{(E_f)_0^p/{\bf Q}_p}(\lambda^p_f,d_f),$$ and
$$b^{v_{\infty}}(f) = a[\lambda^{\infty},q_+^{\infty},q_-^{\infty}](f) = {\rm cor}_{(E_f)_0^{\infty}/{\bf R}}({\rm det}(h^{\infty}_f),d_f)$$
\bigskip
\noindent
(cf. Propositions \ref{dim det w f} and \ref{real Hasse-Witt}).
\medskip Since $\underset {v \in \mathcal V} \sum b^v(f) = 0$, we have
$$\underset {v \in \mathcal V} \sum {\rm cor}_{(E_f)_0^v/{\bf Q}_v}(\lambda^v_f,d_f) = 0,$$
where ${\bf Q}_v = {\bf Q}_p$ if $v = v_p$ and ${\bf Q}_v = {\bf R}$ if $v = v_{\infty}$. This implies that
$$\underset {w \in {\mathcal W}} \sum (\lambda^w_f,d_f) = 0,$$ where $\mathcal W$ is the set of primes of $E_0$.
Therefore there exists $\lambda_f \in E_0^{\times}/N_{E/E_0}(E^{\times})$ mapping to $\lambda_f^w$ for all
$w \in {\mathcal W}$ (see for instance \cite{B 20}, Theorem 10.1). In particular, we have $(\lambda_f,d_f) = (\lambda^w_f,d_f)$
in ${\rm Br}_2(E_0^w)$ for all $w \in {\mathcal W}$.
\medskip
Note that $\tau(f)$ is an even integer. Let $h_f : M_f \times M_f \to E_f$ be a hermitian form such that
${\rm det}(h_f) = \lambda_f$, and such that the index of $h_f$ is equal to $\tau(f)/2$; such a hermitian
form exists (see for instance \cite{Sch}, 10.6.9). Let us define $$q_f : M_f \times M_f \to {\bf Q}$$ by $$q_f(x,y) = {\rm Tr}_{E_f/{\bf Q}}(h_f(x,y)).$$
\bigskip
Let $f = X \pm 1$. We have $\underset {v \in \mathcal V} \sum b^v(f) = 0$, hence by the Brauer-Hasse-Noether theorem
there exists $a(\pm) \in {\rm Br}_2({\bf Q})$ mapping to $b^v(f)$ in ${\rm Br}_2({\bf Q}_v)$ for all $v \in \mathcal V$.
Let $q_{\pm}$ be a quadratic form over $\bf Q$ of dimension $n_{\pm}$, determinant $D_{\pm}$, Hasse-Witt
invariant $w_2(q_{\pm}) = a(\pm) + w_2(Q_{\pm})$ and index $\tau(X\pm 1) = r_{\pm}-s_{\pm}$.
Such a quadratic form exists; see for instance \cite{S}, Proposition 7.
\medskip
Let $q' : M \times M \to {\bf Q}$ be the quadratic form given by $$(M,q')= \underset{f \in I_1} \oplus (M_f,q_f) \oplus
\underset {f \in I_0} \oplus (M_f,q_f) \oplus (M^2,q^2),$$ where $(M^2,q^2)$ is hyperbolic. By construction,
$(M,q')$ has the same dimension, determinant, Hasse-Witt invariant and signature as $(M,q)$, hence
the quadratic forms $(M,q')$ and $(M,q)$ are isomorphic.
\medskip Let $t : M \to M$ be defined by $t(m) = \gamma m$, where $\gamma$ is a generator of $\Gamma$.
By construction, $t$ is an isometry of $(M,q')$ and it is semi-simple with characteristic
polynomial $F$.
By hypothesis, conditions (C 1) and (C 2) hold, hence $(M,q')\otimes_{\bf Q} {\bf Q}_p$ contains
an even, unimodular ${\bf Z}_p$ lattice $L_p$ stable by the isometry $t$. Let
$$L = \{ x \in M \ | \ x \in L_p \ {\rm for \ all} \ {\rm prime \ numbers} \ p \}.$$
$(L,q')$ is an even, unimodular lattice having a semi-simple isometry with characteristic polynomial $F$. This
completes the proof of the theorem.
\begin{coro}\label{final coro} Assume that conditions {\rm (C 1)} and {\rm (C 2)} hold, and that ${\mbox{\textcyr{Sh}}}_F = 0$. Then there exists
an even, unimodular lattice having a semi-simple isometry with characteristic polynomial $F$ and Milnor index $\tau$.
\end{coro}
\section{Even, unimodular lattices preserved by a semi-simple element of ${\rm SO}_{r,s}({\bf R})$ }\label{reformulation}
In this section, we reformulate the Hasse principle result of \S \ref{HP}, and prove a result stated in the introduction.
We keep the notation of \S \ref{HP}. In particular $F \in {\bf Z}[X]$ is a monic, symmetric polynomial of even degree such that $F(0) = 1$, and
$r, s \ge 0$ are integers such that $r + s = {\rm deg}(F)$ and that $r \equiv s \ {\rm (mod \ 8)}$.
\medskip Let us now assume that condition (C 2) holds, and let $t \in {\rm SO}_{r,s}({\bf R})$ be a semi-simple
isometry with characteristic polynomial $F$. Let $\tau = \tau(t) \in {\rm Mil}_{r,s}(F)$ be the Milnor index
associated to $t$ in Proposition \ref{bijection}.
\medskip Assume that condition (C 1) also holds, and let $\epsilon_{\tau} : {\mbox{\textcyr{Sh}}}_F \to {\bf Z}/2{\bf Z}$ be the
homomorphism defined in \S \ref{HP}; set $\epsilon_t = \epsilon_{\tau}$.
The following is a reformulation of Theorem \ref{final} :
\begin{theo}\label{preserves} The isometry $t \in {\rm SO}_{r,s}({\bf R})$ preserves an even, unimodular lattice if and
only if $\epsilon_t = 0$.
\end{theo}
\begin{coro}\label{preserves coro} If ${\mbox{\textcyr{Sh}}}_F = 0$, the isometry $t \in {\rm SO}_{r,s}({\bf R})$ preserves an even, unimodular lattice.
\end{coro}
\section{Automorphisms of $K3$ surfaces}\label{K3}
Which Salem numbers occur as dynamical degrees of automorphisms of complex analytic $K3$ surfaces ? This question was raised
by Curt McMullen in \cite{Mc1}, and was studied in many other papers (see for instance \cite {GM}, \cite {O}, \cite {Mc2},
\cite {BG}, \cite {Mc3}, \cite {Br}).
\medskip
We refer to \cite{H} and \cite{Ca} for background on complex $K3$ surfaces (henceforth
$K3$ surfaces, for short) and their automorphisms.
\medskip
Let $\mathcal X$ be a $K3$ surface, and let $T : \mathcal X \to \mathcal X$ be an automorphism; it induces an isomorphism
$T^* : H^2(\mathcal X,{\bf Z}) \to H^2(\mathcal X,{\bf Z})$. The {\it dynamical degree} of $T$ is by definition
the spectral radius of $T^*$; it is either 1 or a Salem number. The characteristic polynomial
of $T^*$ is a product of at most one Salem polynomial and of a finite number of cyclotomic polynomials (see
\cite {Mc1}, Theorem 3.2).
\medskip Let $H^2(\mathcal X,{\bf C}) = H^{2,0}(\mathcal X) \oplus H^{1,1}(\mathcal X) \oplus H^{0,2}(\mathcal X)$ be the Hodge decomposition of $H^2(\mathcal X,{\bf C})$. Since the subspace $H^{2,0}(\mathcal X)$ is one dimensional, $T^*$ acts
on it by multiplication by a scalar, denoted by $\delta(T)$, and called the {\it determinant} of $T$; we have $|\delta(T)| = 1$. Moreover, $\delta(T)$ is a root of unity if $\mathcal X$ is projective (cf. \cite {Mc1}, Theorem 3.5).
\medskip The intersection form of $H^2(\mathcal X,{\bf Z)}$ is an even, unimodular lattice of signature (3,19), hence it is
isomorphic to $\Lambda_{3,19}$, and an automorphism of $\mathcal X$ induces an isometry of that form. Therefore a
necessary condition for a Salem number $\alpha$ to occur as the dynamical degree of such an automorphism is that $\Lambda_{3,19}$ has an isometry with
characteristic polynomial $S C$, where $S$ is the minimal polynomial of $\alpha$, and $C$ is a (possibly empty) product
of cyclotomic polynomials.
\begin{defn}\label{complemented} A {\it complemented Salem polynomial} is by definition a degree $22$ polynomial that
is the product of a Salem polynomial and of a (possibly empty) product of cyclotomic polynomials.
\end{defn}
Recall from \S \ref{local-global problem section} that a monic, symmetric polynomial $F \in {\bf Z}[X]$ satisfies condition (C~1) if and only if
\medskip
\centerline {\it $|F(1)|$, $|F(-1)|$ and $(-1)^n F(1) F(-1)$ are squares,}
\medskip
\noindent
where $2n = {\rm deg}(F)$, and that this condition is {\it necessary} for $F$ to be the characteristic polynomial of an isometry of an even, unimodular lattice.
\medskip
If $F$ is a complemented Salem polynomial, then $m(F) = 1$, since $F$ has exactly two roots that are not on the unit circle. This implies that condition (C 2) holds for $(r,s) = (3,19)$.
\medskip
The first result is the following :
\begin{theo}\label{theorem 4 - first part} Let $S$ be a Salem polynomial of degree $d$ with $4 \leqslant d\leqslant 20$, set
$$F(X) = S(X)(X-1)^{22-d},$$
and suppose that condition {\rm (C 1)} holds for $F$;
let
$\tau \in {\rm Mil}_{3,19}(F)$. Assume that $|S(1)| > 1$. Then the lattice $\Lambda_{3,19}$ has a semi-simple isometry with characteristic polynomial $F$ and Milnor index $\tau$.
\end{theo}
\noindent
{\bf Proof.} By hypothesis, Condition (C 1) holds for $F$. Since $|S(1)| > 1$, the set $\Pi_{S,X-1}$ is not empty, therefore ${\mbox{\textcyr{Sh}}}_F = 0$. By Corollary \ref{final coro}, this
implies that $\Lambda_{3,19}$ has a semi-simple isometry with characteristic polynomial $F$ and Milnor index $\tau$,
as claimed.
\medskip
\begin{defn}\label{realizable polynomial} Let $F$ be a complemented Salem polynomial, and let $\delta$ be a root of $F$
with $|\delta| = 1$. We say that $(F,\delta)$ is (projectively, resp. non-projectively) {\it realizable} if there exists
a (projective, resp. non-projective) $K3$ surface $\mathcal X$
and an automorphism $T : \mathcal X \to \mathcal X$ such
that
\medskip \noindent
$\bullet$ \ \ $F$ is the characteristic polynomial of $T^*|H^2(\mathcal X)$.
\medskip \noindent
$\bullet$ \ \ $T^*$ acts on $H^{2,0}(\mathcal X)$ by multiplication by $\delta$.
\end{defn}
\medskip
\begin{theo}\label{coro 4 - first part}
Let $S$ be a Salem polynomial of degree $d$ with $4 \leqslant d\leqslant 20$
and let $\delta$ be a root of $S$ with $|\delta| = 1$. Let $$F(X) = S(X)(X-1)^{22-d},$$
and assume that condition {\rm (C 1)} holds for $F$, and that $|S(1)| > 1$.
Then $(F,\delta)$ is non-projectively realizable.
\end {theo}
\noindent
{\bf Proof.}
This follows from Theorem \ref {theorem 4 - first part} and results of McMullen and Brandhorst, cf. \cite{Br}, Lemma 3.3 (1).
\begin{example} Let $S$ be the Salem polynomial $$S(X) = X^{18} - X^{12} - X^{11} - X^{10} - X^9 - X^8 - X^7 - X^6 + 1, $$
the minimal polynomial of the Salem number $\alpha = \lambda_{18,3} = 1.2527759374...$,
and let $\delta$ be a root of $S$ with $|\delta| = 1$. Let $F(X) = S(X)(X-1)^4$.
We have $S(1) = -5$ and $S(-1) = 1$, hence Theorem \ref{coro 4 - first part} implies that $(F,\delta)$ is non-projectively realizable.
\end{example}
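\medskip These numerical claims are easy to verify by computer. The following Python sketch (ours, using {\tt sympy}; purely illustrative) checks $S(1) = -5$, $S(-1) = 1$, condition (C 1) for $F$, and that $\Pi_{S,X-1} = \{5\}$:
\begin{verbatim}
from sympy import symbols, expand, gcd, Poly

X = symbols('X')
S = X**18 - X**12 - X**11 - X**10 - X**9 - X**8 - X**7 - X**6 + 1
F = expand(S * (X - 1)**4)

print(S.subs(X, 1), S.subs(X, -1))    # -5 1
# Condition (C 1): |F(1)|, |F(-1)| and (-1)^11 F(1)F(-1) must be squares;
# here F(1) = 0 and F(-1) = 16.
print(F.subs(X, 1), F.subs(X, -1))    # 0 16
# p lies in Pi_{S,X-1} iff X - 1 divides S mod p, iff p divides S(1) = -5:
print(gcd(Poly(S, X, modulus=5), Poly(X - 1, X, modulus=5)))   # X + 4
\end{verbatim}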
For a table of small Salem numbers, see Boyd, \cite {Bo}.
\begin{theo}\label{Salem square} Let $\alpha$ be a Salem number of degree $d$ with $4 \leqslant d \leqslant 20$, let $S$ be the minimal
polynomial of $\alpha$, and suppose that $|S(1)S(-1)| \not = 1$. Let $S_2$ be the
minimal polynomial of $\alpha^2$, and let $\delta$ be a root of $S_2$ with $|\delta| = 1$; set
$$F(X) = S_2(X)(X-1)^{22-d}.$$
Then $(F,\delta)$ is non-projectively realizable.
\end{theo}
\begin{lemma}\label {S2} Let $\alpha$ be a Salem number, let $S$ be its minimal polynomial and let $S_2$ be the minimal
polynomial of $\alpha^2$. Then $|S_2(-1)|$ is a square, and $S_2(1) = S(1) S(-1)$.
\end{lemma}
\noindent
{\bf Proof.} Note that $S$ and $S_2$ have the same degree (see for instance \cite {Sm}, Lemma 2). Let us write $S(X) = \prod (X-\alpha_i)$ and
$S_2(X) = \prod (X-\alpha^2_i)$.
We have $S_2(1) = S(1) S(-1)$ and $S_2(-1) = S(i) S(-i)$. Since $S$ is symmetric, we have $S(i) = i^{{\rm deg}(S)}S(i^{-1})
= i^{{\rm deg}(S)}S(-i)$. Therefore $|S(i)| = |S(-i)|$, hence $|S_2(-1)|$ is a square.
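\medskip For a concrete check, one can compute $S_2(X) = \prod_i (X - \alpha_i^2) = {\rm Res}_Y(S(Y), X - Y^2)$ and evaluate; here is an illustrative Python/{\tt sympy} sketch (ours), applied to the degree $18$ Salem polynomial of the example above:
\begin{verbatim}
from sympy import symbols, resultant, Poly

X, Y = symbols('X Y')
S = Y**18 - Y**12 - Y**11 - Y**10 - Y**9 - Y**8 - Y**7 - Y**6 + 1
# S_2(X) = prod_i (X - alpha_i^2), obtained as a resultant in Y
S2 = Poly(resultant(S, X - Y**2, Y), X)
print(S2.eval(1))    # -5 = S(1) * S(-1)
print(S2.eval(-1))   # 1, and |S_2(-1)| is indeed a square
\end{verbatim}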
\medskip
\noindent
{\bf Proof of Theorem \ref{Salem square}.} Lemma \ref{S2} implies that
$|S_2(-1)|$ is a square, therefore condition (C 1) holds for $F$. Moreover, since $|S(1)S(-1)| > 1$ by hypothesis,
Lemma \ref{S2} implies that $|S_2(1)| > 1$. Applying Theorem \ref{coro 4 - first part} gives the desired result.
\begin{coro}\label{} Let $\alpha$ be a Salem number of degree $d \leqslant 20$ with
$d \equiv 0 \ {\rm (mod \ 4)}$, let $S_2$ be the minimal polynomial of $\alpha^2$, let
$\delta$ be a root of $S_2$ with $|\delta| = 1$, and set
$$F(X) = S_2(X)(X-1)^{22-d}.$$
Then $(F,\delta)$ is non-projectively realizable.
\end{coro}
\noindent
{\bf Proof.} Gross and McMullen proved that if $S$ is a Salem polynomial of degree $d$ with $d \equiv 0 \ {\rm (mod \ 4)}$, then
$|S(1)S(-1)| \not = 1$ (see \cite {GM},
Proposition 3.3). Therefore the corollary follows from Theorem \ref{Salem square}.
\bigskip Let $S$ be a Salem polynomial of degree $d$ with $4 \leqslant d \leqslant 20$, and set $$F(X) = S(X)(X-1)^{22-d}.$$
In Theorem \ref{theorem 4 - first part}, we assume that $|S(1)| > 1$. We now consider Salem polynomials $S$ with
$|S(1)| = 1$. In this case, $\Pi_{S,X-1} = \varnothing$, hence the obstruction group ${\mbox{\textcyr{Sh}}}_F$ is not trivial, and not all Milnor indices are realized. We start
by introducing some notation.
\begin{notation}
If $\delta$ is a root of $S$ with $|\delta| = 1$, let $\tau_{\delta} \in {\rm Mil}_{3,19}(F)$
be such that
\medskip
\centerline {$\tau_{\delta}(\mathcal P) = 2$ \ \ if \ \ $\mathcal P(X) = (X-\delta)(X-\delta^{-1})$,}
\medskip
\noindent
that
\medskip
\centerline {$\tau_{\delta}(\mathcal Q) = -2$ \ \ for all \ \ $\mathcal Q \in {\rm Irr}_{\bf R}(S)$ with $\mathcal Q \not = \mathcal P$,}
\medskip
\noindent
and that
$$\tau_{\delta}(X-1) = d - 22.$$
\bigskip
Let $\tau_{1} \in {\rm Mil}_{3,19}(F)$ be such that
$\tau_{1}(\mathcal Q) = -2$ for all $\mathcal Q \in {\rm Irr}_{\bf R}(S)$, and that
$\tau_{1}(X-1) = d - 18$.
\end{notation}
\begin{theo}\label{theorem 4 - second part}
Let $S$ be a Salem polynomial of degree $d$ with $4 \leqslant d\leqslant 20$, set
$$F(X) = S(X)(X-1)^{22-d}.$$
Assume that condition {\rm (C 1)} holds for $F$ and that $|S(1)| = 1$.
Let
$\tau \in {\rm Mil}_{3,19}(F)$.
\medskip
Then the lattice $\Lambda_{3,19}$ has an isometry with characteristic polynomial $F$ and Milnor index $\tau$ if
and only if one of the following holds
\medskip
{\rm (i)} $d \equiv -2 \ {\rm (mod \ 8)}$ and $\tau = \tau_{\delta}$ where $\delta$ is a root of $S$ with $|\delta| = 1$.
\medskip
{\rm (ii)} $d \equiv 2 \ {\rm (mod \ 8)}$ and $\tau = \tau_{1}$.
\end{theo}
\medskip
\noindent
{\bf Proof.} The polynomials $S$ and $X-1$ are relatively prime over ${\bf Z}$. This implies that if the lattice
$\Lambda_{3,19}$ has a semi-simple isometry with characteristic polynomial $F$, then $\Lambda_{3,19} \simeq L_1 \oplus L_2$
where $L_1$ and $L_2$ are even, unimodular lattices, such that $L_1$ has an isometry with characteristic polynomial $S$,
and $L_2$
has a semi-simple isometry with characteristic polynomial $(X-1)^{22-d}$.
\medskip Note that every $\tau \in {\rm Mil}_{3,19}(F)$ is either equal to $\tau_1$, or to $\tau_{\delta}$, where $\delta$ is
a root of $S$ with $|\delta| = 1$.
Assume first that $\Lambda_{3,19}$ has a semi-simple isometry with characteristic polynomial $F$ and
Milnor index $\tau_{1}$, and let $\Lambda_{3,19} \simeq L_1 \oplus L_2$ be as above. The signature of $L_1$ is $(1,d-1)$, and since $L_1$ is unimodular and
even, this implies that $d \equiv 2 \ {\rm (mod \ 8)}$.
\medskip Suppose now that $\Lambda_{3,19}$ has a semi-simple isometry with characteristic polynomial $F$ and
Milnor index $\tau_{\delta}$, where $\delta$ is
a root of $S$ with $|\delta| = 1$. Let
$\Lambda_{3,19} \simeq L_1 \oplus L_2$ be as above. The signature of $L_1$ is then $(3,d-3)$, and since $L_1$ is unimodular and
even, we have $d \equiv -2 \ {\rm (mod \ 8)}$.
\medskip This implies that if $\Lambda_{3,19}$ has a semi-simple isometry with characteristic polynomial $F$, then we
are in one of the cases (i) or (ii).
\medskip
Let us show the converse. Suppose first that we are in case (i). We have $d \equiv -2 \ {\rm (mod \ 8)}$; this means that $d = 6$ or $d = 14$.
Let $(r,s) = (3,3)$ if $d = 6$ and $(r,s) = (3,11)$ if $d = 14$; note that condition (C 2) holds for $S$ and $(r,s)$, and that $r \equiv s \ {\rm (mod \ 8)}$. By hypothesis, condition (C 1) holds for $F$; since $F(-1) = S(-1)$, this implies that $|S(-1)|$ is a square.
Moreover, $|S(1)| = 1$ by hypothesis. We claim that condition (C~1) also holds for $S$.
Since $S$ is a Salem polynomial, we have $S(1) < 0$ and $S(-1) > 0$; we have $d \equiv 2 \ {\rm (mod \ 4)}$, therefore
$(-1)^{d/2}S(1)S(-1)$ is a square. This implies that condition (C 1) holds for $S$. Moreover, $S$ is irreducible, hence
${\mbox{\textcyr{Sh}}}_S = 0$.
\medskip
Let $\tau' \in {\rm Mil}_{r,s}(S)$ be the restriction of $\tau_{\delta}$ to ${\rm Mil}_{r,s}(S)$. We have seen that
conditions (C 1) and (C 2) hold for $S$, and that ${\mbox{\textcyr{Sh}}}_S = 0$. By Corollary \ref {final coro} the even, unimodular lattice $\Lambda_{r,s}$ has an
isometry with characteristic polynomial $S$ and Milnor index $\tau'$. The identity is a semi-simple isometry of the lattice $-E_8$ with
characteristic polynomial $(X-1)^{8}$. Since $\Lambda_{3,19} = \Lambda_{3,3} \oplus (-E_8) \oplus (- E_8)
= \Lambda_{3,11} \oplus (-E_8)$, the lattice $\Lambda_{3,19}$ has a semi-simple isometry with characteristic polynomial $F$ and Milnor index $\tau_{\delta}$, as claimed.
\medskip
Suppose now that we are in case (ii). We have $d \equiv 2 \ {\rm (mod \ 8)}$; that is, $d = 10$ or $d = 18$.
Let $(r,s) = (1,d-1)$;
note that $r \equiv s \ {\rm (mod \ 8)}$, and that condition (C 2) holds for $S$ and $(r,s)$. We show as in case (i) that condition (C 1) holds for $S$ and that ${\mbox{\textcyr{Sh}}}_S = 0$.
\medskip
Let $\tau'' \in {\rm Mil}_{r,s}(S)$ be the restriction of $\tau_{1}$ to ${\rm Mil}_{r,s}(S)$. By Corollary \ref {final coro} the even, unimodular lattice $\Lambda_{r,s}$ has an
isometry with characteristic polynomial $S$ and Milnor index $\tau''$. The identity is a semi-simple isometry of the lattice $\Lambda_{2,20-d}$ with
characteristic polynomial $(X-1)^{22-d}$. Since $\Lambda_{3,19} = \Lambda_{1,d-1} \oplus \Lambda_{2,20-d}$, the lattice $\Lambda_{3,19}$ has a semi-simple isometry with characteristic polynomial $F$ and Milnor index $\tau_{1}$.
\begin{theo}\label{summary}
Let $S$ be a Salem polynomial of degree $d$ with $4 \leqslant d\leqslant 22$
and let $\delta$ be a root of $S$ with $|\delta| = 1$. Let $$F(X) = S(X)(X-1)^{22-d},$$
and assume that condition {\rm (C 1)} holds for $F$.
The following are equivalent :
\medskip
\noindent
{\rm (a)} $(F,\delta)$ is non-projectively realizable.
\medskip \noindent
{\rm (b)} One of the following holds
\medskip
{\rm (i)} ${\rm deg}(S) = 22$.
\medskip
{\rm (ii)} $|S(1)| > 1$.
\medskip
{\rm (iii)} $|S(1)| = 1$ and $d \equiv -2 \ {\rm (mod \ 8)}$.
\end{theo}
\noindent
{\bf Proof.} Let us prove that (b) implies (a). If we are in case (i), then $F = S$ is irreducible, hence ${\mbox{\textcyr{Sh}}}_F = 0$. Conditions
(C 1) and (C 2) hold, therefore Corollary \ref {final coro} implies that for all $\tau \in {\rm Mil}_{3,19}(F)$, the lattice $\Lambda_{3,19}$ has a semi-simple isometry
with characteristic polynomial $F$ and Milnor index $\tau$; hence (a) holds by a result of Brandhorst (see \cite {Br}, Lemma 3.3 (1)).
\medskip
Assume that we are in case (ii); then (a) holds by Theorem \ref{coro 4 - first part}.
\medskip
Finally, suppose that we are in case (iii). Theorem \ref {theorem 4 - second part} implies that
the lattice $\Lambda_{3,19}$ has a semi-simple isometry
with characteristic polynomial $F$ and Milnor index $\tau_{\delta}$. Applying Lemma 3.3 (1) of Brandhorst \cite{Br} gives
the desired result.
\medskip
Conversely, suppose that (a) holds. If $d = 22$, then (i) holds; assume that $4 \leqslant d \leqslant 20$. If $|S(1)| > 1$, we
are in case (ii). Assume that $|S(1)| = 1$. Then Theorem \ref{theorem 4 - second part} implies that $d \equiv -2 \ {\rm (mod \ 8)}$;
hence (b) holds.
\begin{example}\label {Salem 6} If $a \geqslant 0$ is an integer, the polynomial
$$S_a(X) = X^6 -aX^5 - X^4 + (2a -1) X^3 - X^2 -a X + 1$$
is a Salem polynomial (see \cite {GM}, page 284, Example 1), and $S_a(1) = -1$. Part (iii) of Theorem \ref{summary} implies
that if $\delta_a$ is a root of $S_a$ with $|\delta_a| = 1$ and $F_a(X) = S_a(X)(X-1)^{16}$, then $(F_a,\delta_a)$ is non-projectively realizable.
\medskip The polynomials $S_a$ also appear in \S 4 of \cite {Mc1} : for every integer $a \geqslant 0$, McMullen
gives a geometric construction of an automorphism of a non-projective $K3$ surface such that the dynamical
degree and the determinant of the automorphisms are roots of $S_a$ (see \cite{Mc1}, Theorem 4.1); this construction uses
complex tori.
\end{example}
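\medskip The value $S_a(1) = -1$ can be confirmed symbolically, for all $a$ at once; an illustrative Python/{\tt sympy} sketch (ours):
\begin{verbatim}
from sympy import symbols, expand

X, a = symbols('X a')
S_a = X**6 - a*X**5 - X**4 + (2*a - 1)*X**3 - X**2 - a*X + 1
print(expand(S_a.subs(X, 1)))    # -1, independently of a
print(expand(S_a.subs(X, -1)))   # 1, so |F_a(-1)| = 2^16 is a square
\end{verbatim}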
\begin{example}\label{14} Let $S$ be a Salem polynomial of degree $14$, and assume that $|S(1)|= 1$. Let $\delta$ be
a root of $S$ with $|\delta| = 1$, and set $F(X) = S(X)(X-1)^{8}$. If moreover $S(-1)$ is a square, then condition (C 1) holds for $F$, and Theorem \ref{summary} (iii) implies that $(F,\delta)$ is non-projectively realizable.
\medskip
Salem polynomials of degree $14$ with $|S(1)| = |S(-1)| = 1$ were considered in several papers. Oguiso proved that
the third smallest Salem number $\lambda_{14}$ is the dynamical degree of an automorphism of a non-projective
$K3$ surface (see \cite {O}, Proposition 3.2). More generally, Reschke proved this for every Salem number that is a root of a Salem polynomial $S$ of degree $14$
with $|S(1)| = |S(-1)| = 1$ (see \cite{R}, Theorem 1.2).
\end{example}
\section{Realizable Salem numbers}\label{4}
The aim of this section is to show that if $\alpha$ is a Salem number of degree $d$ with $d = 4,6,8, 12, 14$ or $16$, then
$\alpha$ is the dynamical degree of an automorphism of a {\it non-projective} $K3$ surface; partial results are given
for the other values of $d$ as well.
\begin{theo}\label{n+} Let $S$ be a Salem polynomial of degree $d$ with $4 \leqslant d \leqslant 20$. Suppose that
$|S(1)S(-1)| > 1$. Let $n_+, n_- \geqslant 0$ be even integers with $d + n_- + n_+ = 22$, and let $$F(X) = S(X)(X-1)^{n_+}(X+1)^{n_-}.$$
Let $\tau \in {\rm Mil}_{3,19}(F)$. Assume that
\medskip
$\bullet$ If $n_- = 0$, then $|S(1)| >1$ and $|S(-1)|$ is a square;
\medskip
$\bullet$ If $n_+ = 0$, then $|S(-1)| > 1$ and $|S(1)|$ is a square.
\medskip
Then $\Lambda_{3,19}$ has a semi-simple isometry with characteristic polynomial $F$ and Milnor index $\tau$.
\end{theo}
\noindent
{\bf Proof.} Condition (C 1) is satisfied : indeed, if $n_+ \not = 0$ and $n_- \not = 0$, then $F(1) = F(-1) = 0$. If
$n_- = 0$, then $F(-1) = S(-1)2^{n_+}$ and $F(1) = 0$; if
$n_+ = 0$, then $F(1) = S(1)2^{n_-}$ and $F(-1) = 0$. Note that $n_-$ and $n_+$ cannot both be zero, since $d \leqslant 20$. Hence in all cases, $|F(1)|$ and $|F(-1)|$ are squares, and $F(1)F(-1) = 0$;
this implies that Condition (C 1) holds.
\medskip
We have $\Pi_{S,X-1} \not = \varnothing$ if $|S(1)| > 1$, $\Pi_{S,X+1} \not = \varnothing$ if $|S(-1)| > 1$,
and $\Pi_{X+1,X-1} = \{2 \}$, therefore ${\mbox{\textcyr{Sh}}}_F = 0$; hence Corollary \ref{final coro} implies that $\Lambda_{3,19}$ has a semi-simple isometry with characteristic polynomial $F$ and Milnor index $\tau$.
\begin{notation} Let $S$ be a Salem polynomial of degree $d$ with $4 \leqslant d \leqslant 20$, and let $\delta$ be a root of $S$
with $|\delta| = 1$. Let $F$ be a complemented Salem polynomial with Salem factor $S$. We define the Milnor index $\tau_{\delta} \in
{\rm Mil}_{3,19}(F)$ as follows :
\medskip
$\bullet$ $\tau_{\delta}((X - \delta)(X - \delta^{-1})) = 2$;
\medskip
$\bullet$ $\tau_{\delta}(\mathcal P)< 0$ for all $\mathcal P \in {\rm Irr}_{\bf R}(F)$ such that $\mathcal P (X) \not =
(X - \delta)(X - \delta^{-1})$.
\end{notation}
\begin{theo}\label{lots} Let $S$ be a Salem polynomial of degree $d$ with $4 \leqslant d \leqslant 20$, and let
$\delta$ be a root of $S$
with $|\delta| = 1$. Suppose that
$|S(1)S(-1)| > 1$. Let $$F_0(X) = S(X) (X-1)^{22-d},$$
$$F_+(X) = S(X)(X+1)^2(X-1)^{20-d},$$ $$F_-(X) = S(X)(X^2 -1)(X-1)^{20-d}.$$ Then we have :
\medskip
{\rm (a)} Assume that $S(-1)$ is a square, and if moreover $d = 10, 18$, then $|S(1)| > 1$.
\medskip Then $(F_0,\delta)$ is realizable.
\medskip
{\rm (b)} Assume that $S(-1)$ is not a square, and if moreover $d = 20$, then $|S(1)|$ is a square.
\medskip
Then $(F_+,\delta)$ or $(F_-,\delta)$ is realizable.
\end{theo}
\noindent
{\bf Proof.} Let us prove (a). If $d \equiv 0 \ {\rm (mod \ 4)}$, then $S(1) \equiv S(-1) \ {\rm (mod \ 4)}$. Since $S(1) < 0$,
if $|S(1)| = 1$ then $S(-1)$ is not a square; hence we have $|S(1)| > 1$, and therefore Theorem \ref{summary} (ii) gives
the desired result. Suppose that $d = 6, 14$; then applying Theorem \ref{summary}, (ii) or (iii), we see that (a) holds.
If $d = 10$ or $d = 18$, then we are assuming that $|S(1)| > 1$, therefore Theorem \ref{summary} (ii) implies that
$(F_0,\delta)$ is realizable.
\medskip
(b) By hypothesis, $S(-1)$ is not a square, hence in particular $S(-1) > 1$. Theorem \ref{n+} implies that
$\Lambda_{3,19}$ has a semi-simple isometry with characteristic polynomial $F_+$ and Milnor index $\tau_{\delta}$.
Let $(L,q)$ be an even, unimodular lattice isomorphic to $\Lambda_{3,19}$, and let $t : L \to L$ be an isometry
with characteristic polynomial $F_+$ and Milnor index $\tau_{\delta}$.
Set $L_1 = {\rm Ker}(S(t))$, $L_2 = {\rm Ker}(t+1)$ and $L_3 = {\rm Ker}(t-1)$.
Let $t_2 : L_2 \to L_2$ be the restriction of $t$ to $L_2$.
\medskip A root of $(L_2,q)$ is by definition an element $x \in L_2$ such that $q(x,x) = -2$. If $(L_2,q)$ has no
roots, then $t_2$ is a positive isometry of $(L_2,- q)$ by \cite{Mc2}, Theorem 2.1, and hence \cite {Mc2}, Theorem 6.2 (see
also \cite {Mc3}, Theorem 6.1) implies that $(F_+,\delta)$ is realizable, hence (b) holds.
\medskip Suppose that $(L_2,q)$ has at least one root.
By hypothesis, $S(-1)$ is not a square, therefore ${\rm det}(L_2,q)$ is not a square. Since $(L_2,q)$ is of rank $2$, even and negative
definite, there exist integers $D \geqslant 1$ and $f \geqslant 1$ such that
${\rm det}(L_2,q) = f^2D$, where $-D$ is the discriminant of an imaginary quadratic field.
The lattice $(L_2,q)$ is isomorphic to a quadratic form $q'$ on an
order $O$ of the imaginary quadratic field ${\bf Q}(\sqrt {-D})$ (see for instance \cite {Co}, Theorem 7.7). Complex
conjugation induces an isometry of the quadratic form $(O,q')$ with characteristic polynomial $X^2 - 1$. If $D = 3$ and $f = 1$, then
$(O,q')$ is isomorphic to the root lattice $A_2$, and complex conjugation is a positive isometry of $(O, -q')$ (see \cite {Mc2}, \S 5, Example);
otherwise, $(O,q')$ contains only one root, fixed by complex conjugation, hence we obtain a positive isometry of $(O, -q')$ in
this case as well. Let $t_2' : L_2 \to L_2$ be the isometry of $(L_2,q)$ obtained via the isomorphism $(O,q') \simeq (L_2,q)$.
Then $t_2'$ is a positive isometry of $(L_2,-q)$. Let $G(L_2) = (L_2)^{\sharp}/L_2$, and note that $t_2$ and $t_2'$ both
induce $- {\rm id}$ on $G(L_2)$. This implies that $(L,q)$ has a semi-simple isometry $t' : L \to L$ inducing the positive
isometry $t_2$ or $t_2'$ on
$L_2$ and $t_i$ on $L_i$ for $i = 1,3$. By \cite {Mc2}, Theorem 6.2 (see
also \cite {Mc3}, Theorem 6.1) this implies that $(F_+,\delta)$ or $(F_-,\delta)$ is realizable, and this implies (b).
\begin{defn}\label{realizable}
Let $\alpha$ be a Salem number and let $\delta$ be a conjugate of $\alpha$ such that $|\delta| = 1$. We say
that $(\alpha,\delta)$ is (non-projectively) {\it realizable} if there exists an automorphism of a non-projective $K3$ surface of dynamical degree $\alpha$ and determinant $\delta$.
\end{defn}
\begin{coro}\label{lots coro} Let $\alpha$ be a Salem number of degree $d$ with $4 \leqslant d \leqslant 20$, let $S$ be
the minimal polynomial of $\alpha$, and let
$\delta$ be a root of $S$
with $|\delta| = 1$. Assume that one of the following holds
\medskip
{\rm (i)} $d = 4, 6, 8, 12, 14$ or $16$;
\medskip
{\rm (ii)} $d = 10$ or $d = 18$, and either $|S(1)| > 1$ or $S(-1)$ is not a square;
\medskip
{\rm (iii)} $d = 20$ and $|S(1)|$ or $|S(-1)|$ is a square.
\medskip
Then $(\alpha,\delta)$ is realizable.
\end{coro}
\noindent
{\bf Proof.} (i) If $d = 4,8, 12$ or $16$, then $|S(1)S(-1)| > 1$ (cf. \cite {GM}, Proposition 3.3). By Theorem \ref{lots}, this
implies that $(F_0,\delta)$, $(F_+,\delta)$ or $(F_-,\delta)$ is realizable; therefore $(\alpha,\delta)$ is realizable. The same
argument applies if $d = 6$ or $d = 14$ and $|S(1)S(-1)| > 1$. Assume that $d = 6$ or $d = 14$ and $|S(1)S(-1)| = 1$;
then Theorem \ref{summary} (iii) implies that $(F_0,\delta)$ is realizable, hence $(\alpha,\delta)$ is realizable in this
case as well.
\medskip (ii) Suppose that $d = 10$ or $d = 18$. If $|S(1)| > 1$ and $S(-1)$ is a square, then
$(F_0,\delta)$ is realizable by Theorem \ref{theorem 4 - first part}. If $S(-1)$ is not a square, then
Theorem \ref{lots} implies that
$(F_+,\delta)$ or $(F_-,\delta)$ is realizable, hence $(\alpha,\delta)$ is realizable.
\medskip
(iii) Assume that $d = 20$, and note that by \cite {GM}, Proposition 3.3 we have $|S(1)S(-1)| > 1$. If $S(-1)$ is a square,
then Theorem \ref{lots} (a) implies that $(F_0,\delta)$ is realizable, hence $(\alpha,\delta)$ is realizable.
If $S(-1)$ is not a square, then by hypothesis $|S(1)|$ is a square, and Theorem \ref{lots} (b) implies that
$(F_+,\delta)$ or $(F_-,\delta)$ is realizable, therefore $(\alpha,\delta)$ is realizable.
\begin{example} McMullen proved that the Salem numbers $\lambda_{14}$, $\lambda_{16}$ and $\lambda_{20}$
are not realized as dynamical degrees of automorphisms of {\it projective} $K3$ surfaces (cf. \cite{Mc3}, \S 9).
Corollary \ref{lots coro} shows that they are realized by automorphisms of {\it non-projective } $K3$ surfaces.
\end{example}
\section{A nonrealizable Salem number}\label{18}
McMullen proved that the Salem number $\lambda_{18} = 1.1883681475...$ (the second smallest Salem number) is the dynamical degree of
an automorphism of a {\it projective} $K3$ surface (cf. \cite{Mc3}, Theorem 8.1). The aim of this section is to show that this is not possible
for {\it non-projective} $K3$ surfaces.
\medskip
Let $S$ be a Salem polynomial of degree $18$, and let
$\delta$ be a root of $S$ with $|\delta| = 1$. Let $\sigma_{\delta} \in {\rm Mil}_{3,15}(S)$
be such that $\sigma_{\delta}(\mathcal P) = 2$ for $\mathcal P(x) = (x-\delta)(x - \delta^{-1})$ and that
$\sigma_{\delta}(\mathcal Q) = -2$ for all $\mathcal Q \in {\rm Irr}_{\bf R}(S)$ with $\mathcal Q \not = \mathcal P$.
\medskip
If $f \in {\bf Z}[X]$ is a monic polynomial, we denote by ${\rm Res}(S,f)$ the resultant of the polynomials $S$ and $f$.
\begin{prop}\label{resultant} Assume that $|{\rm Res}(S,f)| = 1$ for all $f \in \{\Phi_1,\Phi_2,\Phi_3,\Phi_4, \Phi_6 \}$. Let $C$ be a product
of cyclotomic polynomials such that ${\rm deg}(C) = 4$, and set $F = SC$. Let $\tau_{\delta} \in {\rm Mil}_{3,19}(F)$ be
such that the restriction of $\tau_{\delta}$ to $ {\rm Mil}_{3,15}(S)$ is $\sigma_{\delta}$, and that
$\tau_{\delta}(\mathcal Q) < 0$ for all $\mathcal Q \in {\rm Irr}_{\bf R}(C)$.
\medskip If $\Lambda_{3,19}$ has a semi-simple isometry with characteristic polynomial $F = SC$ and Milnor index $\tau_{\delta}$,
then $C = \Phi_{12}$.
\end{prop}
\noindent
{\bf Proof.} If $C = \Phi_5$, $\Phi_8$ or $\Phi_{10}$, then $F = SC$ does not satisfy Condition (C 1), hence
$\Lambda_{3,19}$ does not have any isometry with characteristic polynomial $F$ for these choices of $C$.
\medskip
Assume that all the factors of $C$ belong to the set $\{\Phi_1,\Phi_2,\Phi_3,\Phi_4, \Phi_6 \}$. Then $S$ and $C$
are relatively prime over ${\bf Z}$. If $\Lambda_{3,19}$ has an isometry with characteristic polynomial $F$, then
$\Lambda_{3,19} = L_1 \oplus L_2$, where $L_1$ and $L_2$ are even, unimodular lattices such that
$L_1$ has an isometry with characteristic polynomial $S$ and Milnor index $\sigma_{\delta}$, and $L_2$ has
an isometry with characteristic polynomial $C$. This implies that the signature of $L_1$ is $(3,15)$ and that
the signature of $L_2$ is $(0,4)$, and this is impossible.
\medskip Therefore the only possibility is $C = \Phi_{12}$, as claimed.
\begin{notation}
Let $C = \Phi_{12}$, and set $F = SC$. Let $\zeta$ be a primitive $12$th root of unity. Let $\tau_{\delta}, \tau_{\zeta}
\in {\rm Mil}_{3,19}(F)$ be such that
\medskip
$\tau_{\delta}(\mathcal P) = 2$ for $\mathcal P(x) = (x-\delta)(x - \delta^{-1})$ and that
$\tau_{\delta}(\mathcal Q) = -2$ for all $\mathcal Q \in {\rm Irr}_{\bf R}(F)$ with $\mathcal Q \not = \mathcal P$;
\medskip
$\tau_{\zeta}(\mathcal P) = 2$ for $\mathcal P(x) = (x-\zeta)(x - \zeta^{-1})$ and that
$\tau_{\zeta}(\mathcal Q) = -2$ for all $\mathcal Q \in {\rm Irr}_{\bf R}(F)$ with $\mathcal Q \not = \mathcal P$.
\end{notation}
\begin{theo}\label{phi 12} Let $S$ be a Salem polynomial of degree $18$ such that $$|S(1)S(-1)| = 1,$$ let $C = \Phi_{12}$, and set $F = SC$.
Let $\delta$ be a root of $S$ with $|\delta| = 1$, and let $\zeta$ be a primitive $12$th root of unity.
With the above notation, we have
\medskip
{\rm (a)} The lattice $\Lambda_{3,19}$ has an isometry with characteristic polynomial $F$ and Milnor
index $\tau_{\zeta}$.
\medskip
{\rm (b)} The lattice $\Lambda_{3,19}$ has an isometry with characteristic polynomial $F$ and Milnor
index $\tau_{\delta}$ if and only if ${\mbox{\textcyr{Sh}}}_F = 0$.
\end{theo}
\noindent
{\bf Proof.} The polynomial $F$ satisfies Condition (C 1), since $F(1) = -1$ and $F(-1) = 1$.
\medskip Let us prove (a). Let $\sigma_1 \in {\rm Mil}_{1,17}(S)$ and $\sigma_2 \in {\rm Mil}_{2,2}(C)$ be the restrictions
of $\tau_{\zeta} \in {\rm Mil}_{3,19}(F)$. Since $S$ and $C$ are both irreducible, we have ${\mbox{\textcyr{Sh}}}_S = 0$ and ${\mbox{\textcyr{Sh}}}_C = 0$.
Therefore by Corollary \ref {final coro}, $\Lambda_{1,17}$ has an isometry with characteristic polynomial $S$ and Milnor
index $\sigma_1$ and $\Lambda_{2,2}$ has an isometry with characteristic polynomial $C$ and Milnor
index $\sigma_2$. This implies (a).
\medskip
Let us prove (b). If ${\mbox{\textcyr{Sh}}}_F = 0$, then Corollary \ref {final coro} implies that $\Lambda_{3,19}$ has an isometry with characteristic polynomial $F$ and any Milnor index.
\medskip
Assume that ${\mbox{\textcyr{Sh}}}_F \not = 0$; since $F$ has two irreducible factors, this implies that ${\mbox{\textcyr{Sh}}}_F \simeq {\bf Z}/2{\bf Z}$.
Recall from \S \ref{local data section} that $\epsilon_{\tau_{\delta}} = \epsilon^{\rm finite} + \epsilon^{\infty}_{\tau_{\delta}}$ and
$\epsilon_{\tau_{\zeta}} = \epsilon^{\rm finite} + \epsilon^{\infty}_{\tau_{\zeta}}$.
\medskip
By (a) we know that $\Lambda_{3,19}$ has an isometry with characteristic polynomial $F$ and Milnor
index $\tau_{\zeta}$; this implies that $\epsilon_{\tau_{\zeta}} = 0$. Note that $\epsilon^{\infty}_{\tau_{\delta}} \not =
\epsilon^{\infty}_{\tau_{\zeta}}$. Therefore $\epsilon_{\tau_{\delta}} \not = 0$, and by Theorem \ref{final} this implies that
$\Lambda_{3,19}$ does not have an isometry with characteristic polynomial $F$ and Milnor
index $\tau_{\delta}$. This completes the proof of (b).
\begin{example} Let $S$ be the Salem polynomial corresponding to the Salem number $\lambda_{18}$.
This polynomial satisfies the conditions of Proposition
\ref{resultant} : we have $|{\rm Res}(S,f)| = 1$ for all $f \in \{\Phi_1,\Phi_2,\Phi_3,\Phi_4, \Phi_6 \}$. Therefore
by Proposition \ref{resultant}, if $\Lambda_{3,19}$ has an isometry with characteristic polynomial $S C$
and Milnor index $\tau_{\delta}$
for
some product $C$ of cyclotomic polynomials, then we have $C = \Phi_{12}$.
\medskip
Let $F = S \Phi_{12}$. We have ${\mbox{\textcyr{Sh}}}_F \not = 0$. Indeed, $|{\rm Res}(S,\Phi_{12})| = 169$, and the common factors modulo $13$
of $S$ and $\Phi_{12}$ in ${\bf F}_{13}[X]$ are $X + 6, X + 11 \in {\bf F}_{13}[X]$. These polynomials are not symmetric.
Therefore $\Pi_{S,\Phi_{12}} = \varnothing$, and hence ${\mbox{\textcyr{Sh}}}_F \simeq {\bf Z}/2{\bf Z}$. Theorem \ref{phi 12} implies
that $\Lambda_{3,19}$ does not have any isometry with characteristic polynomial $S \Phi_{12}$
and Milnor index $\tau_{\delta}$.
\medskip
Since this holds for all roots $\delta$ of $S$ with $|\delta| = 1$, the Salem number $\lambda_{18}$ is not
realized by an automorphism of a non-projective $K3$ surface.
\end{example}
\section{Exceptional sets, roots of unity and bounds}\label{bounds}
Recall that a unit $u$ of an algebraic number field is called an {\it exceptional unit} if $1-u$ is also a unit.
Following Gross and McMullen (cf. \cite {GM}) we say that a Salem number $\alpha$ is {\it unramified} if $\alpha^2$ is an exceptional unit, and {\it ramified} otherwise.
\medskip As we will see, the results of \S \ref{K3} imply that the square of every Salem number of degree $d$ with $4 \leqslant d \leqslant 20$ is the dynamical degree of an automorphism of a non-projective $K3$ surface, with the possible exception of {\it unramified Salem numbers of degree $10$ or $18$}.
\medskip
Let $\alpha$ be a Salem number and let $\delta$ be a conjugate of $\alpha$ such that $|\delta| = 1$. Recall
that $(\alpha,\delta)$ is said to be (non-projectively) {\it realizable} if there exists an automorphism of a non-projective $K3$ surface of dynamical degree $\alpha$ and determinant $\delta$ (see Definition \ref{realizable}).
\medskip
The following is an immediate consequence of Theorem \ref{Salem square} :
\begin{prop}\label{ramified square} Let $\alpha$ be a ramified Salem number of degree $d$ with $4 \leqslant d \leqslant 20$,
and let $\delta$ be a conjugate of $\alpha$ such that $|\delta| = 1$. Then $(\alpha^2,\delta^2)$ is realizable.
\end{prop}
Gross and McMullen proved that every Salem number of degree divisible by $4$ is ramified (cf. \cite {GM},
Proposition 3.3); hence the proposition implies that the square of every Salem number of degree $d$ with $4 \leqslant d \leqslant 20$
is realizable, except possibly if $d = 10$ or $18$ (and the Salem number is unramified).
\medskip
We recall a notation introduced by Silverman in \cite {Si1}~:
\begin{notation}\label{E alpha} Let $\alpha$ be an algebraic unit that is not a root of unity. We denote by $\mathcal E(\alpha)$
the set of integers $m \geqslant 1$ such that $\alpha^m$ is an exceptional unit, and by $E(\alpha)$
the number of elements of $\mathcal E(\alpha)$.
\end{notation}
In \cite {Si1}, Silverman proved that there exists an absolute and effectively computable constant $c$ such that
$E(\alpha) \leqslant c d^{1 + 0.7/\log \log d}$, where $d$ is the degree of $\alpha$. Note that
an algebraic number field of degree $d$ has at most $3 \cdot 7^{3d}$ exceptional units (see Evertse \cite {E}; the finiteness of this
number is a theorem of Siegel).
\medskip
We obtain a more precise version of a result of Brandhorst (see \cite {Br})~:
\begin{theo}\label{Br}
Let $\alpha$ be a Salem number of degree $d \leqslant 20$, let $\delta$ be a conjugate of $\alpha$ such that $|\delta| = 1$,
and let $m \geqslant 1$ be an integer with
$m \not \in \mathcal E(\alpha)$. Then
\medskip
{\rm (a)} $(\alpha^{2m},\delta^{2m})$ is realizable.
\medskip
{\rm (b)} If $m$ is even, then $(\alpha^m,\delta^m)$ is realizable.
\end{theo}
\noindent
{\bf Proof.} (a)
Since $m \not \in \mathcal E(\alpha)$ by hypothesis, $\alpha^m - 1$ is not a unit; as $\alpha^m - 1$ divides $\alpha^{2m} - 1$, the latter is not a unit either, hence
$\alpha^m$ is ramified.
By Proposition \ref{ramified square}, this implies that $(\alpha^{2m},\delta^{2m})$ is realizable.
\medskip
(b) Suppose that $m$ is even, and let us write $m = 2k$ with $k \geqslant 1$ an integer. By hypothesis,
$m = 2k \not \in \mathcal E(\alpha)$; this implies that $\alpha^k$ is ramified. Applying Proposition \ref{ramified square} to $\alpha^k$
shows that $(\alpha^m,\delta^m)$ is realizable.
\begin{notation}\label{cyclotomic} If $m \geqslant 1$ is an integer, we denote by $\Phi_m$ the $m$-th cyclotomic
polynomial. Let $\zeta_m$ be a primitive $m$-th root of unity, and let ${\bf Q}(\zeta_m)$ be the $m$-th
cyclotomic field.
\end{notation}
\begin{prop}\label{cyclotomic unit} Let $\alpha$ be a Salem number with minimal polynomial $S$, and let $m \geqslant 1$ be an integer. Assume that
$m \in \mathcal E(\alpha)$; then $S(\zeta_m)$ is a unit of ${\bf Q}(\zeta_m)$.
\end{prop}
\noindent
{\bf Proof.} Since $m \in \mathcal E(\alpha)$, by definition $\alpha^m - 1$ is a unit of ${\bf Q}(\alpha)$. We have
$\alpha^m - 1 = \underset {d | m} \prod \ \Phi_d(\alpha)$, hence $\Phi_m(\alpha)$ is also a unit of ${\bf Q}(\alpha)$.
Note that $$|{\rm Res}(S,\Phi_m)| = | {\rm N}_{{\bf Q}(\alpha)/{\bf Q}}(\Phi_m(\alpha))| = | {\rm N}_{{\bf Q}(\zeta_m)/{\bf Q}}(S(\zeta_m))|,$$
hence $S(\zeta_m)$ is a unit of ${\bf Q}(\zeta_m)$, as claimed.
\begin{prop}\label {alpha m realizable}
Let $\alpha$ be a Salem number of degree $d \leqslant 20$, let $\delta$ be a conjugate of $\alpha$ such that $|\delta| = 1$,
and let $m \geqslant 1$ be an integer. Let $S$ be the minimal polynomial of $\alpha$. Assume that $S(\zeta_m)$ is not a unit of ${\bf Q}(\zeta_m)$. Then
\medskip
{\rm (a)} $(\alpha^{2m},\delta^{2m})$ is realizable.
\medskip
{\rm (b)} If $m$ is even, then $(\alpha^m,\delta^m)$ is realizable.
\end{prop}
\noindent
{\bf Proof.} Since $S(\zeta_m)$ is not a unit of ${\bf Q}(\zeta_m)$, the integer $m$ does not belong to
$\mathcal E(\alpha)$ (cf. Proposition \ref{cyclotomic unit}); hence Theorem \ref{Br} implies the proposition.
\begin{example} Let $\alpha = \lambda_{18,2} = 1.2197208590...$ be the $6$-th smallest Salem number, with minimal polynomial $$S(X) = X^{18} - X^{17} - X^{10} + X^9 - X^8 - X + 1.$$ We have $S(\zeta_3) = 5$, hence
$\alpha^6$ is realizable.
\end{example}
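\medskip The value $S(\zeta_3) = 5$, and the corresponding resultant, can be checked as follows (an illustrative Python/{\tt sympy} sketch, ours):
\begin{verbatim}
from sympy import (symbols, resultant, cyclotomic_poly,
                   Rational, sqrt, I, expand)

X = symbols('X')
S = X**18 - X**17 - X**10 + X**9 - X**8 - X + 1
zeta3 = Rational(-1, 2) + sqrt(3)*I/2     # a primitive cube root of unity
print(expand(S.subs(X, zeta3)))           # 5, not a unit
# |Res(S, Phi_3)| = |N_{Q(zeta_3)/Q}(S(zeta_3))| = 5^2
print(abs(resultant(S, cyclotomic_poly(3, X), X)))   # 25
\end{verbatim}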
One can show that if $\alpha$ is an unramified Salem number of degree $10$, then $\alpha^k$ is realizable for some $k \leqslant 8$. Moreover,
the exponent $8$ is only (possibly) needed for {\it one} Salem number, $\alpha = \lambda_{10,2} =
1.2163916611...$ (the fifth smallest Salem number),
a root of the Salem polynomial
$S(X) = X^{10} - X^6 - X^5 - X^4 +1$;
for all other degree $10$ Salem numbers $\alpha$, we show that $\alpha^k$ is realizable for some $k \leqslant 6$.
\section*{Appendix}
\section{Proof for Gaussian case}\label{app:gaussian}
\begin{lemma}\label{lem:gaussians}
If the entries of $S \in \mathbb{R}^{m \times n}$ are i.i.d. $N(0,1/m)$, $m = O(d/\varepsilon^2)$, and $U^\top b = 0$, then
\[
|a^\top(SA)^\dagger Sb| \lesssim \frac{\varepsilon \sqrt{\log d}}{\sqrt{d}}\norm{a}_2\norm{b}_2\norm{\Sigma^{-1}}_2
\]
for any vectors $a, b$ with probability $1-1/\mathrm{poly}(d)$.
\end{lemma}
\begin{proof}
With probability $1$, the matrix $SA$ has linearly independent columns, and so $(SA)^\dagger$ is{
\begin{align*} = & ~(A^\top S^\top S A)^{-1} A^\top S^\top \\
= & ~( V \Sigma U^\top S^\top S U \Sigma V^\top)^{-1} V \Sigma U^\top S^\top \\
= & ~ V \Sigma^{-1} (U^\top S^\top S U)^{-1} \Sigma^{-1} V^\top V \Sigma U^\top S^\top \\
= & ~ V \Sigma^{-1} (U^\top S^\top S U)^{-1} U^\top S^\top.
\end{align*}}
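This identity is easy to sanity-check numerically; the following Python/numpy sketch (ours, with illustrative dimensions) compares both sides:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 200, 5, 40
A = rng.standard_normal((n, d))
U, sig, Vt = np.linalg.svd(A, full_matrices=False)  # A = U diag(sig) Vt
S = rng.standard_normal((m, n)) / np.sqrt(m)

lhs = np.linalg.pinv(S @ A)
rhs = Vt.T @ np.diag(1/sig) @ np.linalg.inv(U.T @ S.T @ S @ U) @ U.T @ S.T
print(np.allclose(lhs, rhs))   # True
\end{verbatim}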
Hence, we would like to bound
\[
X = a^\top V\Sigma^{-1}(U^\top S^\top SU)^{-1} U^\top S^\top Sb.
\]
It is well-known (stated, for example, explicitly in
Theorem 2.3 of \cite{woo14}) that with probability
$1-\exp(-d)$, the singular values of $SU$ are $(1 \pm \varepsilon)$ for $m
= O(d/\varepsilon^2)$. We condition on this event.
It follows that {
\begin{eqnarray*}
&& \|V \Sigma^{-1} (U^\top S^\top S U)^{-1} U^\top S\|_2 \\
& = & \|\Sigma^{-1} (U^\top S^\top S U)^{-1} U^\top S\|_2\\
& \leq & \|\Sigma^{-1}\|_2 \|(U^\top S^\top S U)^{-1}\|_2 \|U^\top S\|_2\\
& \leq & \|\Sigma^{-1}\|_2 \cdot \frac{1}{1-\varepsilon} \cdot (1+\varepsilon)\\
& = & O(\|\Sigma^{-1}\|_2),
\end{eqnarray*}}
where the first equality uses that $V$ is a rotation, the first inequality follows by sub-multiplicativity,
and the second inequality uses that the singular values of $SU$ are in the range $[1 -\varepsilon, 1+\varepsilon]$.
Hence, with probability $1-\exp(-d)$,
\begin{eqnarray}\label{eqn:oper}
\|a^\top V \Sigma^{-1} (U^\top S^\top SU)^{-1} U^\top S^\top\|_2 = O( \|\Sigma^{-1}\|_2 \|a\|_2).
\end{eqnarray}
The main observation is that since $U^\top b = 0$, $SU$ is statistically independent from $Sb$.
Hence, $Sb$ is distributed as $N(0, \|b\|_2^2 I_m/m)$, conditioned on the vector $a^\top V \Sigma^{-1} (U^\top S^\top SU)^{-1} U^\top S^\top$.
It follows that conditioned on the value of $a^\top V \Sigma^{-1} (U^\top S^\top SU)^{-1} U^\top S^\top$, $X$ is distributed as
\begin{align*}
N(0, \|b\|_2^2 \| a^\top V \Sigma^{-1} (U^\top S^\top SU)^{-1} U^\top S^\top \|_2^2/m),
\end{align*}
and so using (\ref{eqn:oper}), with probability $1-1/\mathrm{poly}(d)$,
we have $|X| = O(\varepsilon \sqrt{\log d} \|a\|_2 \|b\|_2 \|\Sigma^{-1}\|_2/ \sqrt{d})$.
\end{proof}
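The scale of the bound can also be observed empirically; the following Python/numpy sketch (ours, with illustrative constants) draws one Gaussian sketch and compares $|a^\top (SA)^\dagger Sb|$ to $\frac{\varepsilon \sqrt{\log d}}{\sqrt{d}}\norm{a}_2\norm{b}_2\norm{\Sigma^{-1}}_2$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, d, eps = 5000, 50, 0.5
m = int(d / eps**2)

A = rng.standard_normal((n, d))
U, sig, Vt = np.linalg.svd(A, full_matrices=False)
b = rng.standard_normal(n)
b -= U @ (U.T @ b)                    # enforce U^T b = 0
a = rng.standard_normal(d)

S = rng.standard_normal((m, n)) / np.sqrt(m)
lhs = abs(a @ np.linalg.pinv(S @ A) @ (S @ b))
scale = (eps * np.sqrt(np.log(d) / d) * np.linalg.norm(a)
         * np.linalg.norm(b) / sig.min())   # ||Sigma^{-1}||_2 = 1/min(sig)
print(lhs, scale)   # lhs is typically a small constant times scale
\end{verbatim}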
\section{Combining Different Matrices}\label{sec:combining}
In some cases it can make sense to combine different matrices that
satisfy the generalization bound.
\restate{thm:combine}
\begin{proof}
For any vectors $a, b$, and $x^* = A^\dagger b$ we want to show
\[
|a^\top (RSA)^\dagger RSb - a^\top x^*| \lesssim \frac{\varepsilon}{\sqrt{d}}\norm{a}_2 \norm{b - Ax^*}_2 \norm{A^\dagger }_2.
\]
As before, it suffices to consider the $x^*=0$ case. We have with
probability $1-\delta$ that
\[
|a^\top (SA)^\dagger Sb| \lesssim \frac{\varepsilon}{\sqrt{d}}\norm{a}_2 \norm{b}_2 \norm{A^\dagger }_2 ;
\]
suppose this happens. We also have by the properties of $R$,
applied to $SA$ and $Sb$, that
\[
|a^\top (RSA)^\dagger RSb - a^\top (SA)^\dagger Sb| \lesssim \frac{\varepsilon}{\sqrt{d}}\norm{a}_2 \norm{Sb}_2 \norm{(SA)^\dagger}_2.
\]
Because $S$ is an OSE, we have $\norm{Sb}_2 \leq (1+\varepsilon)\norm{b}_2$ and
$\norm{(SA)^\dagger }_2 \leq \frac{1}{1-\varepsilon} \norm{A^\dagger}_2$. Therefore
\[
|a^\top (RSA)^\dagger RSb| \lesssim \frac{\varepsilon}{\sqrt{d}}\norm{a}_2 \norm{b}_2 \norm{A^\dagger}_2.
\]
\end{proof}
We describe a few of the applications of combining sketches.
\subsection{Removing dependence on $n$ via Count-Sketch}
One of the limitations of the previous section is that the choice of
$k$ depends on $n$. To prove that theorem, we have to assume that
$\log d>\log \log n$. Here, we show an approach to remove that
assumption.
The main idea is that instead of applying a matrix $S\in \mathbb{R}^{m\times
n}$ to the matrix $A\in \mathbb{R}^{n\times d}$ directly, we pick two
matrices $ S \in \mathbb{R}^{m\times \mathrm{poly}(d) }$ and $C\in
\mathbb{R}^{\mathrm{poly}(d) \times n}$, e.g.\ $S$ a FastJL matrix and
$C$ a Count-Sketch matrix with $s=1$. We first compute $C\cdot A$,
then compute $S\cdot (CA)$. The benefit of these operations is that $S$
only needs to multiply with a matrix $(CA)$ that has
$\mathrm{poly}(d)$ rows, so the assumption we need is $\log d > \log
\log (\mathrm{poly} (d))$, which always holds. The reasons for
choosing $C$ to be a Count-Sketch matrix with $s=1$ are: (1)
$\mathrm{nnz}(CA) \leq \mathrm{nnz}(A)$; (2) the running time is
$O(\mathrm{poly}(d)\cdot d + \mathrm{nnz}(A))$.
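For intuition, applying a Count-Sketch with $s = 1$ does not require materializing $C$; the following Python/numpy sketch (ours, dense input for simplicity) computes $CA$ in one pass over $A$:
\begin{verbatim}
import numpy as np

def countsketch_apply(A, p, rng):
    # C has one nonzero entry (a random sign) per column, in a random
    # row; C @ A is accumulated in O(nnz(A)) time without forming C.
    n = A.shape[0]
    buckets = rng.integers(0, p, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    CA = np.zeros((p, A.shape[1]))
    np.add.at(CA, buckets, signs[:, None] * A)
    return CA

rng = np.random.default_rng(0)
A = rng.standard_normal((100000, 20))
CA = countsketch_apply(A, p=20**3, rng=rng)   # poly(d) rows
\end{verbatim}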
\subsection{Combining Gaussians and SRHT} By combining Gaussians
with SRHT matrices, we can embed into the optimal dimension $O(d/\varepsilon^2)$
with fast $\Ot(nd\log n + d^\omega/\varepsilon^4)$ embedding time.
\subsection{Combining all three} By taking Gaussians times SRHT times
Count-Sketch, we can embed into the optimal dimension $O(d/\varepsilon^2)$
with fast $O(\text{nnz}(A) + d^4 \text{poly}(\frac{1}{\varepsilon}, \log d))$
embedding time.
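An illustrative Python/numpy sketch (ours; the stage sizes are not tuned) of this three-stage composition, applied jointly to $[A \mid b]$:
\begin{verbatim}
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
n, d, eps = 100000, 10, 0.5
M = rng.standard_normal((n, d + 1))        # columns of A, then b

# Stage 1: Count-Sketch (s = 1) down to p rows, p a power of 2
p = 1 << 10
buckets = rng.integers(0, p, size=n)
signs = rng.choice([-1.0, 1.0], size=n)
M1 = np.zeros((p, d + 1))
np.add.at(M1, buckets, signs[:, None] * M)

# Stage 2: SRHT down to q rows
q = 40 * d
H = hadamard(p) / np.sqrt(p)
D = rng.choice([-1.0, 1.0], size=p)[:, None]
M2 = np.sqrt(p / q) * (H @ (D * M1))[rng.choice(p, q, replace=False)]

# Stage 3: Gaussian sketch down to m = O(d / eps^2) rows
m = int(4 * d / eps**2)
M3 = (rng.standard_normal((m, q)) / np.sqrt(m)) @ M2

x_hat = np.linalg.lstsq(M3[:, :d], M3[:, d], rcond=None)[0]
\end{verbatim}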
\section{Count-Sketch does not obey the $\ell_{\infty}$ guarantee}\label{sec:cs}
Here we demonstrate an $A$ and a $b$ such that Count-Sketch will not
satisfy the $\ell_\infty$ guarantee with constant probability, so such
matrices cannot satisfy the generalization guarantee~\eqref{eq:ellinf}
with high probability.
\define{Theorem}{thm:count_sketch_not_infty}{
Let $S \in {\mathbb R}^{m \times n}$ be drawn as a Count-Sketch matrix with
$s$ nonzeros per column. There exists a matrix $A \in {\mathbb R}^{n \times
d}$ and $b \in {\mathbb R}^n$ such that,
if $s^2 d \lesssim m \lesssim \sqrt{d^3 s}$,
then the ``true''
solution $x^* = A^\dagger b$ and the approximation $x' = (SA)^\dagger Sb$ have
large $\ell_\infty$ distance with constant probability:
\[
\norm{x' - x^*}_\infty \gtrsim \sqrt{\frac{d}{ms}} \norm{b}_2.
\]
Plugging in $m = d^{1.5}$ and $s = d^{0.25}$ we find that
\[
\norm{x' - x^*}_\infty \gtrsim 1/d^{3/8}\norm{b}_2 \gg 1/\sqrt{d} \norm{b}_2,
\]
even though such a matrix is an OSE with probability exponential in
$s$. Therefore there exists a constant $c$ for which this matrix
does not satisfy the generalization guarantee~\eqref{eq:ellinf} with
$1 - \frac{c}{d}$ probability.
}
\state{thm:count_sketch_not_infty}
\begin{proof}
We choose the matrix $A$ to be the identity on its top $d$ rows:
$A= \begin{bmatrix}I_d \\ 0 \end{bmatrix}$. Choose some $\alpha \geq
1$, set the value of the first $d$ coordinates of vector $b$ to be
$\frac{1}{\sqrt{d}}$ and set the value to be $1 / \sqrt{\alpha}$ for
the next $\alpha$ coordinates, with the remaining entries all
zero. Note that $\norm{b}_2 = \sqrt{2}$, $x^* = (1/\sqrt{d}, \dotsc,
1/\sqrt{d})$, and
$
\norm{Ax^* - b}_2 = 1.
$
Let $S_k$ denote the $k$th column vector of matrix $S \in \mathbb{R}^{m\times n}$.
We define two events for an index $k\in [d]$. Event I: for all $k'\in [d]$ with $k'\neq k $, we have $\supp(S_{k'}) \cap \supp(S_k) = \emptyset$. Event II: there exists a unique $k'\in \{ d+1, d+2, \cdots, d+\alpha \}$ such that $| \supp(S_{k'}) \cap \supp (S_k) |=1$, and all other $k'$ satisfy $\supp(S_{k'}) \cap \supp(S_k) = \emptyset$. By Claim \ref{cla:two_events_for_S_k}, with probability at least $.99$ there exists a $k$ for which both events hold.
Given the construction of $A$ and $b$ described above, we have {
\[Ax-b = \begin{bmatrix} x_1-\frac{1}{\sqrt{d}}, \cdots, x_d -\frac{1}{\sqrt{d}}, -\frac{1}{\sqrt{\alpha}}, \cdots, -\frac{1}{\sqrt{\alpha}}, 0,\cdots, 0 \end{bmatrix}^\top.\]}
Condition on events I and II holding for some index $j\in [d]$, and denote $\supp(S_j)= \{i_1, i_2, \cdots, i_s \}$. The terms involving $x_j$ in the quadratic form
\[
\min_x \norm{SAx - Sb}_2^2
\]
can be written, up to the common factor $1/s$ (which does not affect the minimizer), as $(s-1)(x_j-1/\sqrt{d})^2 + (x_j - 1/\sqrt{d} \pm 1/\sqrt{\alpha})^2$.
Hence the optimal
$x'$ will have $x'_j = \frac{1}{\sqrt{d}} \pm \frac{1}{s\sqrt{\alpha}}$, which is different
from the desired $1/\sqrt{d}$ by $\frac{1}{s \sqrt{\alpha}} $. Plugging in our requirement of
$\alpha \eqsim m^2/(s^3d^2)$, we have
\[
\norm{x' - x^*}_\infty \geq \frac{1}{s\sqrt{\alpha}} \gtrsim \sqrt{\frac{sd^2}{m^2}} \gtrsim \frac{1}{\sqrt{d}},
\]
where the last inequality follows by $m \lesssim \sqrt{sd^3}$. Thus, we get the result.
\end{proof}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\textwidth]{S}
\caption{Count-Sketch matrix $S\in\mathbb{R}^{m\times n}$. Event I: for any $k'\in [d]$ with $k'\neq k$, $\supp(S_k) \cap \supp(S_{k'}) = \emptyset$. Event II: there exists a unique $k' \in \{d+1, d+2, \cdots, d+\alpha\}$ such that $S_k$ and $S_{k'}$ intersect at exactly one location (row index). }\label{fig:S_count_sketch_not_linf}
\end{figure}
\begin{claim}\label{cla:two_events_for_S_k}
If $ m = \Omega(s^2 d)$, $m = o(d^2)$, $\alpha <d$, and $\alpha =
O(\frac{m^2}{s^3 d^2})$, with probability at least $.99$
there exists a $k \in [d]$ for which both event I and II hold.
\end{claim}
\begin{proof}
If $m = \Omega(s^2 d)$, then for each $i \in [d]$, let $X_i$ be an indicator for the event that the support of column $i$ is disjoint from the supports of all columns $i'\in [d] \backslash \{i\}$, and let $X = \sum_{i=1}^d X_i$. Then $\E[X_i] \geq .9999$, so by Markov's inequality applied to $d-X$, with probability $.99$ at least $.99d$ columns have this property (indeed, $\E[d-X]$ is at most $.0001d$, so $\Pr[ d-X \geq .01 d ] \leq \frac{\E[d-X]} {.01d} \leq \frac{.0001d}{.01d} = .01 $).
Define Event $E$ to be that $.99d$ columns of first $d$ columns have the property that the entries of that column are disjoint from all the other $d-1$ columns. Let $S$ be the set of these $.99d$ columns. Let $N$ be the union of supports of columns in $S$.
Each column $i$ in $\{d+1, ..., d+\alpha\}$ chooses $s$ non-zero entries. Define event $F$ (analogous to event $E$) to be that $.99\alpha$ of the next $\alpha$ columns have the property that the entries of that column are disjoint from the other $\alpha-1$ columns. By the same argument, since $\alpha < d$, with probability $.99$ we have $.99\alpha$ columns in $\{d+1, ..., d+\alpha\}$ being disjoint from the other columns in $\{d+1, ..., d+\alpha\}$. Condition on event $F$ holding. Let $L$ be the multiset union of the supports of all columns in $\{d+1, ..., d+\alpha\}$; then $L$ has size $\alpha \cdot s$. Let $M$ be the set union (rather than the multiset union) of the supports of all columns in $\{d+1, ..., d+\alpha\}$. Note that $|M| \geq .99\alpha \cdot s$ because $.99 \alpha$ of these columns are disjoint from each other.
The intersection size $x$ of $N$ and $M$ is hyper-geometrically distributed with expectation
\begin{equation*}
\E[x] = \frac{s|S| \cdot |M|}{ m}.
\end{equation*}
By a lower tail bound for the hypergeometric distribution,\footnote{\url{https://en.wikipedia.org/wiki/Hypergeometric_distribution}}
\begin{equation*}
\Pr[x \leq (p-t)n] \leq \exp(-2t^2n),
\end{equation*}
where $p = s\cdot |S|/m$ and $n = |M|$, so
\begin{equation*}
\Pr[x \leq \E[x] - t \cdot |M|] \leq \exp(-2t^2\cdot |M|) \leq 0.01,
\end{equation*}
where the last inequality follows by setting $t = \Theta(1/\sqrt{|M|})$. Thus, we get with probability $.99$, the intersection size is at least $\frac{s|S| \cdot |M|}{ m} - \Theta(\sqrt{|M|})$ .
Now let $W$ be the distinct elements in $L\backslash M$, so necessarily $|W| \leq .01 \alpha \cdot s$. By an upper tail bound for the hypergeometric distribution, the intersection size $y$ of $W$ and $N$ satisfies
\begin{equation*}
\Pr[y \geq (p+t)n] \leq \exp(-2t^2n),
\end{equation*}
where $p = s\cdot |S|/m$ and $n = |W|$, we again get
\begin{equation*}
\Pr[y \geq \E[y] + t\cdot |W|] \leq \exp(-2t^2\cdot|W|).
\end{equation*}
If $|W| = 0$, then $y = 0$. Otherwise, we can set $t = \Theta(1/\sqrt{|W|})$ so that this probability is less than $.01$, and we get with probability $.99$, the intersection size $y$ is at most $s\cdot |S|\cdot |W|/m + \Theta(\sqrt{|W|})$. Note that we have that $\Theta(\sqrt{|M|})$ and $\Theta(\sqrt{|W|})$ are bounded by $\Theta(\sqrt{s \cdot \alpha})$. Setting $\alpha = O( \frac{m^2}{ s^3 d^2} )$ suffices to ensure $y$ is at most $(1.01)s\cdot |S|\cdot |W|/m$, and earlier that $x$ is at least $.99\cdot s\cdot |S|\cdot |M|/m$.
The probability one of the $|S|$ blocks in $N$ has two or more intersections with $M$ is less than ${x \choose 2}$ times the probability two random distinct items in the intersection land in the block. This probability is
\begin{equation*}
\frac{ {x \choose 2}\cdot {s \choose 2}} { {s\cdot |S| \choose 2}} = \Theta(x^2/|S|^2) = \Theta(x^2/d^2) = \Theta(m^2/(d^4 s^2)).
\end{equation*}
So the expected number of such blocks is $\Theta(m^2 sd/(d^4 s^2)) = \Theta(m^2 /(d^3 s))$, which is less than $(.99\cdot s\cdot |S|\cdot |M|)/(2m) \leq x/2$ if $m = o(d^2)$, which we have. So there are at least $x/2$ blocks which have intersection size exactly $1$ with $M$. Note that the number of intersections of the $|S|$ blocks with $W$ is at most $y$, which is at most $(1.01)s\cdot |S|\cdot \frac{1}{99}\cdot |M|/m < x/2$, and therefore there exists a block, that is, a column among the first $d$ columns, which intersects $M$ in exactly one position and does not intersect $W$. This is our desired column, completing the proof.
\end{proof}
\iffalse
\begin{proof}
Fix column index $k$ of matrix $S$, using union bound we have
\begin{equation*}
\Pr [\text{~Event~I~holds~}] \geq 1 - \sum_{k'=1, k'\neq k}^d \Pr\left[\supp( S_{k'}) \cap \supp(S_k) \neq \emptyset \right].
\end{equation*}
Suppose we create $S_{k'}$ after $S_k$ is already fixed. For the first nonzero entry of $S_{k'}$ it has probability $1-\frac{s}{m}$ to not collide(share the same row index) with the nonzero entry of $S_k$. For the second nonzero entry of $S_{k'}$ it has probability $1- \frac{s}{m-1}$ to not collide with the nonzero entry of $S_k$. For the $s$ th (last) nonzero entry of $S_{k'}$ it has probability $1-\frac{s}{m-(s-1)}$ to not collide with the nonzero entry of $S_k$. Thus,
\begin{equation*}
\Pr [\text{~Event~I~holds~}] \geq 1- d ( 1- \prod_{i=0}^{s-1} (1- \frac{s}{m-i}) ) \geq 1- d ( 1- (1-\frac{s}{m})^s ) \geq 1-d s^2 /m > 0.99
\end{equation*}
where the last inequality follows by $m \gtrsim s^2 d $.
Next, we show that Event II holds with some good probability. Define the indicator variable $X_k$, $\forall k\in [d]$ to be 1 if Event II holds for $k$, 0 otherwise. Define random variable $X = \overset{d}{ \underset{k=1}{\sum} } X_k$. Then we can upper bound the variance of $X$,
\begin{align}\label{eq:count_sketch_main_variance}
\V[X] = \E [X^2] - (\E [X] )^2 = \E[ (\sum_{k=1}^d X_k)^2 ] - ( \sum_{k=1}^d \E [X_k])^2 \leq \sum_{k=1}^d \E [X_k^2] + \sum_{k_1 \neq k_2}^d \E[X_{k_1} X_{k_2}],
\end{align}
where the last inequality follows by $ ( \sum_{k=1}^d \E [X_k])^2 \geq 0 $.
Consider a fixed $k\in[d]$. For any $k''\in [d+\alpha ] \backslash [d]$, we can estimate the probability that $S_{k''}$ intersects $S_k$ in exactly one position,
\begin{equation*}
\Pr [ |\supp(S_{k''}) \cap \supp(S_k) | = 1 ] = s \cdot \frac{s}{m} \prod_{i=1}^{s-1} (1- \frac{s}{m-i}) = \Theta(s^2/m)
\end{equation*}
Thus, for any $k\in [d]$
\begin{equation*}
\E[X_k] = \Pr [X_k=1] = \alpha \cdot \Theta(s^2/m) \cdot (1- \Theta(s^2/m) )^{\alpha-1} = \Theta(\frac{\alpha s^2}{m})
\end{equation*}
which also implies that
\begin{equation}\label{eq:expectation_of_X_k_square}
\E[X_k^2] = \Theta(\frac{\alpha s^2}{m})
\end{equation}
It remains to upper bound the second term of Equation (\ref{eq:count_sketch_main_variance}), for any $k_1, k_2 \in [d]$ and $k_1\neq k_2$,
\begin{align}\label{eq:expectation_X_k_1_and_X_k_2}
~ & \E[X_{k_1} X_{k_2}] \notag \\
= ~ & \Pr [X_{k_1} = 1 | X_{k_2} = 1] \cdot \Pr[ X_{k_2} = 1] \notag \\
\leq ~ & \Pr[ X_{k_1} = 1 | X_{k_2}=1 \land \supp (S_{k_1} ) \cap \supp(S_{k_2} ) =\emptyset ] \cdot \Pr [\supp (S_{k_1}) \cap \supp (S_{k_2}) =\emptyset] \notag \\
+ ~ & 1- \Pr[ \supp(S_{k_1}) \cap \supp(S_{k_2}) =\emptyset ]
\end{align}
For the term $\Pr [\supp (S_{k_1}) \cap \supp (S_{k_2}) =\emptyset]$, we have
\begin{align*}
\Pr [\supp (S_{k_1}) \cap \supp (S_{k_2}) =\emptyset] = \prod_{i=0}^{s-1} (1-\frac{s}{m-i}) = 1 - \Theta(s^2/m)
\end{align*}
We can upper bound $\Pr[ X_{k_1} = 1 | X_{k_2}=1 \land \supp (S_{k_1} ) \cap \supp(S_{k_2} ) =\emptyset ]$, for any $k_1, k_2\in [d]$ and $k_1\neq k_2$, in the following sense,
\begin{align}\label{eq:X_k1_conditioned_on_X_k2}
~ & \Pr[ X_{k_1} = 1 | X_{k_2}=1 \land \supp (S_{k_1} ) \cap \supp(S_{k_2} ) =\emptyset ] \notag \\
\leq ~& \Pr[ \exists k' ~s.t.~ |\supp(S_{k'}) \cap \supp(S_{k_1})| = 1 | X_{k_2} = 1 \land \supp(S_{k_1}) \cap \supp(S_{k_2}) = \emptyset] \notag \\
\leq ~ & \Pr[ \exists k' ~s.t.~ | \supp(S_{k'}) \cap \supp(S_{k_1}) | > 0 | X_{k_2} = 1 \land \supp(S_{k_1}) \cap \supp(S_{k_2}) = \emptyset] \notag \\
\leq ~ & (\# k') \cdot \Pr[ | \supp(S_{d+1}) \cap \supp(S_{k_1}) |>0 | X_{k_2} = 1 \land \supp(S_{k_1}) \cap \supp(S_{k_2}) = \emptyset] \notag \\
= ~ & (\#k') \cdot \Pr[ S_{d+1}~\text{is~the~unique~s.t.~} |\supp(S_{d+1}) \cap \supp(S_{k_2}) | =1 ] \notag \\
\cdot ~ &\Pr[ | \supp(S_{d+1}) \cap \supp(S_{k_1})| >0 ~| \notag \\
~ & X_{k_2} = 1 \land \supp(S_{k_1}) \cap \supp(S_{k_2}) = \emptyset \land S_{d+1}~\text{is~the~unique~s.t.~} |\supp(S_{d+1}) \cap \supp(S_{k_2}) | =1 ] \notag \\
+ ~ & (\#k') \cdot \Pr[ S_{d+1} \text{~is~not~the~unique~s.t.} |\supp(S_{d+1}) \cap \supp(S_{k_2}) | =1] \notag \\
\cdot ~ & \Pr[ | \supp(S_{d+1}) \cap \supp(S_{k_1})| > 0 ~| \notag \\
~ & X_{k_2} = 1 \land \supp(S_{k_1}) \cap \supp(S_{k_2}) = \emptyset \land S_{d+1} \text{~is~not~the~unique~s.t.} |\supp(S_{d+1}) \cap \supp ( S_{k_2} ) | =1] \notag \\
\leq ~ & (\#k') \cdot (\frac{1}{\#k'}) \cdot \frac{s}{m-s} (s-1) (1- \frac{s}{m-s})^{s-1} \notag \\
+ ~& (\#k') \cdot (1-\frac{1}{\#k'} ) \cdot \frac{s}{m-s} s (1- \frac{s}{m-s} )^s \notag \\
\leq ~ & (\#k') \cdot \frac{ s}{ m-s} s (1- \frac{s}{m-s})^{s-1} \notag \\
\leq ~ & (\#k') \cdot \Theta( \frac{s^2}{m}) \notag \\
= ~ & \Theta ( \frac{\alpha s^2 }{m} )
\end{align}
Plugging Equations (\ref{eq:expectation_of_X_k_square}), (\ref{eq:expectation_X_k_1_and_X_k_2}) and (\ref{eq:X_k1_conditioned_on_X_k2}) into (\ref{eq:count_sketch_main_variance})
\begin{align*}
\V[ X ] ~ & \leq \sum_{k=1}^d \E [X_k^2 ] + \sum_{k_1 \neq k_2}^d \E [X_{k_1} X_{k_2}] \\
~ & \leq d\cdot \Theta ( \frac{\alpha s^2}{m} ) + d^2 \left( \Theta(\frac{\alpha s^2}{m})( 1-\Theta( \frac{s^2}{m} ) ) + \Theta( \frac{s^2}{m} ) \right) \\
~ & \leq \Theta(\frac{d^2 \alpha s^2}{m})
\end{align*}
Applying Chebyshev Inequality to variable $X$ where $\E[X] = \Theta(\frac{d\alpha s^2}{m})$ and $\V[X] \leq \Theta(\frac{d^2 \alpha s^2}{m})$,
\begin{align*}
& \Pr[ |X - \E [X] | \geq t ] \leq \frac{\V[X]}{t^2} \\
\implies & \Pr[ |X - \E [X] | \geq C \E[X] ] \leq \frac{\V[X]}{C^2 \E[X]^2} & \text{~by~setting~}t=C \E[X] \\
\implies & \Pr[ |X - \E [X] | \geq C \E[X] ] \leq \Theta( \frac{m}{C^2 \alpha s^2} )
\end{align*}
which is at most $0.01$ as long as $\alpha s^2 \gtrsim m$
\end{proof}
\fi
\section{Leverage score sampling does not obey the $\ell_\infty$ guarantee}
Not only does Count-Sketch fail, but so does leverage score sampling,
which is a technique that takes a subsample of rows of $A$ with
rescaling. In this section we show an $A$ and a $b$ such that
leverage score sampling will not satisfy the $\ell_{\infty}$ guarantee. We
start with a formal definition of leverage scores.
\begin{definition}[Leverage Scores]
Given an arbitrary $n\times d$ matrix $A$, with $n>d$, let $U$ denote the $n\times d$ matrix
consisting of the $d$ left singular vectors of $A$, and let $U_{(i)}$ denote the $i$-th row of the
matrix $U$, so $U_{(i)}$ is a row vector. Then the leverage scores of the rows of $A$ are given by $l_i = \| U_{(i)}\|_2^2$,
for $i\in [n]$.
\end{definition}
The leverage score sampling matrix can be thought of as a square diagonal matrix $D\in \mathbb{R}^{n\times n}$ with diagonal entries chosen from some distribution. If $D_{ii}=0$, we do not choose the $i$-th row of matrix $A$; if $D_{ii}>0$, we choose that row of $A$ and also rescale it. We show that the leverage score sampling matrix cannot achieve the $\ell_{\infty}$
guarantee, nor can it achieve our notion of generalization error.
\define{Theorem}{thm:leverage_score_not_infty}{
Let $D \in {\mathbb R}^{n \times n}$ be a leverage score sampling matrix with
$m$ nonzeros on the diagonal. There exists a matrix $A \in {\mathbb R}^{n \times
d}$ and a vector $b \in {\mathbb R}^n$ such that, if $m \lesssim d\sqrt{d}$, then the ``true''
solution $x^* = A^\dagger b$ and the approximation $x' = (DA)^\dagger Db$ have
large $\ell_\infty$ distance with constant probability:
\[
\norm{x' - x^*}_\infty \gtrsim \frac{1}{\sqrt{d}} \norm{b}_2.
\]
Therefore there exists a constant $c$ for which this matrix
does not satisfy the generalization guarantee~\eqref{eq:ellinf} with
$1 - \frac{c}{d}$ probability.
}
\state{thm:leverage_score_not_infty}
\begin{proof}
We choose the matrix $A$ to be $\frac{1}{\sqrt{d}} I_d$ on its top $d$ rows, followed by $L$ scaled identity matrices $\frac{1}{ \sqrt{\alpha d}} I_d$ in the next $dL$ rows, where $L$ satisfies $\frac{1}{d} + \frac{1}{\alpha d} L =1$ (so that each column of $A$ has unit norm), which implies $L= \alpha(d-1)$.
Choose some $\beta \in [1,d)$. Set the value of the first $d$ coordinates of vector $b$ to be $\frac{1}{\sqrt{d}}$ and set the value to be $\frac{1}{\sqrt{\beta} }$ for the next $\beta$ coordinates, with the remaining entries all zero. Note that $\| b \|_2 = \sqrt{2}$.
First, we compute $\|Ax-b\|_2^2$. Because $\beta$ is less than $d$, there are two kinds of $x_j$: one involves the following term,
\begin{equation}\label{eq:nonspecial_j_without_leverage}
(\frac{1}{\sqrt{d}} x_j - \frac{1}{\sqrt{d}} )^2 + (L-1) ( \frac{1}{\sqrt{\alpha d} } x_j )^2,
\end{equation}
where the optimal $x_j$ should be set to $1/d$. The other involves the term:
\begin{equation}\label{eq:special_j_without_leverage}
(\frac{1}{\sqrt{d}} x_j - \frac{1}{\sqrt{d} } )^2 + (\frac{1}{\sqrt{\alpha d}} x_j - \frac{1}{ \sqrt{\beta}})^2 + (L-1) ( \frac{1}{\sqrt{\alpha d} } x_j )^2,
\end{equation}
where the optimal $x_j$ should be set to $1/d + 1/\sqrt{\alpha \beta d} $. Since we may choose $\alpha, \beta$ such that $\alpha \beta \gtrsim d$, we get
\begin{equation*}
x_j = 1/d + 1/\sqrt{\alpha \beta d} \lesssim 1/d.
\end{equation*}
Second, we compute $\| DAx - Db \|_2^2$. With high probability, there exists a $j$ whose contribution to the objective is given by Equation (\ref{eq:special_j_without_leverage}) but for which, after applying leverage score sampling, the middle term of Equation (\ref{eq:special_j_without_leverage}) is removed. Let $p_1 = \frac{1}{d}$ denote the leverage score of each of the top $d$ rows of $A$, and let $p_2=\frac{1}{\alpha d}$ denote the leverage score of each of the next $Ld$ rows of $A$. We need to discuss the cases $m>d$ and $m\leq d$ separately.
If $m > d$, then the following term involves $x_j$, {
\begin{eqnarray*}
&& ( \frac{1}{ \sqrt{p_1}} \frac{1}{\sqrt{d}} x_j - \frac{1}{ \sqrt{p_1}} \frac{1}{\sqrt{d}} )^2 + \frac{m-d}{d} \cdot ( \frac{1}{\sqrt{p_2}} \frac{1}{\sqrt{\alpha d} }x_j )^2 \\
& = &\frac{1}{p_1} ( \frac{1}{\sqrt{d}} x_j - \frac{1}{\sqrt{d}} )^2 + \frac{m-d}{d} \cdot \frac{1}{p_2} (\frac{1}{\sqrt{\alpha d} }x_j )^2 \\
& = & d \left( ( \frac{1}{\sqrt{d}} x_j - \frac{1}{\sqrt{d}} )^2 + \frac{m-d}{d} \alpha (\frac{1}{\sqrt{\alpha d} }x_j )^2 \right).
\end{eqnarray*}
}
where the optimal $x_j$ should be set to{
\begin{align*}
x_j = & ~ \frac{ 1/d }{ 1/d + (m-d)\alpha/ (\alpha d^2) } \\
= & ~ \frac{ 1 }{ 1+ (m-d)/d } \\
\gtrsim & ~ \frac{1}{ (m-d) /d } \\
\gg & ~ \frac{1}{\sqrt{d}}. & \text{~by~ } m \ll d \sqrt{d}
\end{align*}}
If $m \leq d$, then the term involving $x_j$ is
$
( \frac{1}{ \sqrt{p_1}} \frac{1}{\sqrt{d}} x_j - \frac{1}{ \sqrt{p_1}} \frac{1}{\sqrt{d}} )^2
$
where the optimal $x_j$ should be set to be $1 \gg 1/\sqrt{d}$.
Third, we need to compute $\| Ax^* -b \|_2^2$ and $\sigma_{\min}(A)$. It is easy to see that $\sigma_{\min}(A) = 1$, because the columns of $A$ are orthonormal. For the error term, $\| Ax^* -b \|_2^2 \leq \|b\|_2^2 = 2$, and the lower bound is also a constant, which can be proved in the following way:
\begin{align*}
\| A x^* - b\|_2^2 & = \sum_{j=1}^\beta (\ref{eq:special_j_without_leverage}) +\sum_{j=\beta+1}^d (\ref{eq:nonspecial_j_without_leverage}) \geq d (\frac{1}{\sqrt{d}} \frac{1}{d} - \frac{1}{\sqrt{d}})^2 \gtrsim d \cdot \frac{1}{d} = 1.
\end{align*}
\end{proof}
\begin{comment}
\section{Lower bound for $\ell_2$ and $\ell_{\infty}$ guarantee}\label{sec:lower_bound_details}
It remains to define several tools which are used in the main proof of the lower bound.
\begin{claim}\label{cla:EAg_is_fnorm_A}
For any matrix $A\in \mathbb{R}^{n\times d}$, if each entry of a
vector $g\in \mathbb{R}^d$ is chosen from an i.i.d Gaussian ${\cal
N}(0,\sigma^2)$, then $\underset{g}{\E} [\| A g\|_2^2] = \sigma^2 \|
A\|_F^2$ .
\end{claim}
\begin{proof}
\begin{align*}
\underset{g}{\E} [\| A g\|_2^2] = & ~ \underset{g}{\E} \left[ \sum_{i=1}^n (\sum_{j=1}^d A_{ij} g_j)^2 \right] \\
= & ~\underset{g}{\E} \left[ \sum_{i=1}^n ( \sum_{j=1}^d A_{ij}^2 g_{j}^2 + \sum_{j\neq j'} A_{ij} A_{ij'} g_j g_{j'} ) \right] \\
= & ~\sum_{i=1}^n \sum_{j=1}^d A_{ij}^2 \sigma^2 \\
= & ~\sigma^2 \| A\|_F^2.
\end{align*}
\end{proof}
Let $g_1, g_2, \cdots, g_t$ be i.i.d. ${\cal N}(0,1)$ random variables. The random variables $\sum_{i=1}^t g_i^2$ are ${\cal X}^2$ with $t$ degree of freedom. Furthermore, the following tail bounds are known.
\begin{fact}[Lemma 1 of \cite{LM00}]\label{fac:kai_squared_distribution}
Let $g_1, g_2, \cdots, g_t$ be i.i.d. ${\cal N}(0,1)$ random variables. Then for any $x\geq 0$,
\begin{align*}
\Pr \left[ \sum_{i=1}^t g_i^2 \geq t+ 2 \sqrt{tx} + 2x \right] \leq \exp(-x),
\end{align*}
and
\begin{align*}
\Pr\left[ \sum_{i=1}^t g_i^2 \leq t- 2 \sqrt{tx} \right] \leq \exp(-x).
\end{align*}
\end{fact}
\begin{definition}
Given a matrix $A\in \mathbb{R}^{n\times d}$, vector $b\in \mathbb{R}^{n}$ and matrix $S\in \mathbb{R}^{r\times n}$, denote $x^*=A^\dagger b$. We say that an algorithm ${\cal B}(A,b,S)$ that outputs a vector $x'=(SA)^\dagger S b$ ``succeeds'' if the following property holds:
\begin{equation*}
\| x' - x^* \|_{\infty} \lesssim \frac{\varepsilon}{\sqrt{d}} \| b\|_2 \cdot \| A^\dagger \|_2 \cdot \| Ax^*-b \|_2.
\end{equation*}
\end{definition}
Applying $\| x'-x\|_{\infty} \geq \frac{1}{\sqrt{d}} \| x'-x\|_2$ to Theorem \ref{thm:l2_lower_bound} ,we obtain the $\ell_{\infty}$ lower bound as a corollary,
\begin{corollary}\label{cor:linf_lower_bound}
Suppose $\Pi$ is a distribution over $\mathbb{R}^{m\times n}$ with the property that for any $A\in \mathbb{R}^{n\times d}$ and $b\in \mathbb{R}^{n}$,
\begin{equation*}
\underset{S\sim \Pi}{ \Pr } [ {\cal B}(A,b,S) \mathrm{~succeeds~} ] \geq 9/10.
\end{equation*}
Then $m \gtrsim \min(n,d/\varepsilon^2)$.
\end{corollary}
\end{comment}
\section{Bounding $\E[ \|Z\|_F^2 ]$}\label{sec:Z}
Before getting into the proof details, we define the key property of $S$ being used in the rest of the proofs.
\begin{definition}[All Inner Products Small (AIPS) Property]
We say that a matrix $S\in \mathbb{R}^{r\times n}$, with columns $S_1, \dots, S_n$, satisfies the ``$\mathrm{AIPS}$'' property if for all $i,j \in [n]$ with $i\neq j$,
\begin{equation*}
| \langle S_i, S_j \rangle | = O(\sqrt{\log n} /\sqrt{r}).
\end{equation*}
\end{definition}
\begin{claim}
If $S\in \mathbb{R}^{r\times n}$ is a subsampled Hadamard transform
matrix, then the $\mathrm{AIPS}$ property holds with probability at
least $1-1/{\mathrm{poly}}(n)$.
\end{claim}
\begin{proof}
From the structure of $S$, for any $i\neq j$, with probability $1-1/{\mathrm{poly}}(n)$ we have $ |\langle S_i, S_j \rangle| = O(\sqrt{\log n} /\sqrt{r})$. Applying a union bound over the $O(n^2)$ pairs, we obtain
\begin{align*}
\Pr[ \text{~AIPS~holds~}] \geq 1-1/{\mathrm{poly}}(n).
\end{align*}
\end{proof}
The main idea for bounding $\E [\| Z\|_F^2]$ is to decompose it as
$
\E[ \| Z \|_F^2] = \E [~ \|Z \|_F^2 ~|~ \text{AIPS~holds}~] \cdot \Pr[\text{AIPS~holds}] + \E [~ \|Z \|_F^2 ~|~ \text{AIPS~does~not~hold}~] \cdot \Pr[\text{AIPS~does~not~hold}].
$
Because $ \Pr[ \text{~AIPS~does~not~hold}]$ is at most $1/{\mathrm{poly}}(n)$,
the first term dominates the second term, which means we
only need to pay attention to the first term. We repeatedly apply this
idea until all the $S$ are removed.
We start by bounding $\E[\| Z\|_F^2]$: we square $Z_{i_0, j_k}$ and use that $\E[\sigma_i \sigma_j] = 1$ if $i = j$ and $0$ otherwise. We obtain
{%
\begin{eqnarray}\label{eqn:Zone}
\underset{\sigma}{\E} [ Z_{i_0,j_k}^2 ] = a_{i_0}^2 b_{j_k}^2 \sum_{i_1,\cdots i_k, j_0, \cdots j_{k-1}} \prod_{c=0}^k \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^k (UU^\top)_{i_{c-1},j_c}^2.
\end{eqnarray}
}
We thus have, {%
\begin{align*}
\sum_{i_0,j_k, j_k\neq i_k} a_{i_0}^2 \langle S_{i_0}, S_{j_0}\rangle^2 b_{j_k}^2 \langle S_{i_k}, S_{j_k} \rangle^2 = a_{j_0}^2 \|b\|_2^2 O( (\log n)/r ) + \| a\|_2^2 \|b\|_2^2 O( (\log^2 n)/ r^2 ) \mathbin{\stackrel{\rm def}{=}} C_{j_0},
\end{align*}}
where the equality uses our conditioning on $\mathrm{AIPS}$, and the last step is the definition of $C_{j_0}$.
Hence, $\underset{S}{\E} [\| Z\|_F^2 ]$ is
{%
\begin{eqnarray*}
& = & \sum_{i_1,\cdots i_k, j_0, \cdots j_{k-1}} \prod_{c=1}^{k-1} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^k (UU^\top)_{j_{c-1},i_c}^2 \cdot \sum_{i_0,j_k, j_k\neq i_k} a_{i_0}^2 \langle S_{i_0}, S_{j_0}\rangle^2 b_{j_k}^2 \langle S_{i_k}, S_{j_k} \rangle^2 \\
& = & \sum_{i_1,\cdots i_k, j_0, \cdots j_{k-1}} \prod_{c=1}^{k-1} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^k (UU^\top)_{j_{c-1},i_c}^2 C_{j_0} \\
& = & \sum_{i_1,\cdots i_k, j_0, \cdots j_{k-1}} \langle S_{i_{k-1}}, S_{j_{k-1}} \rangle^2 (UU^\top)_{j_{k-1},i_k}^2 \cdot \prod_{c=1}^{k-2} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-1} (UU^\top)_{j_{c-1},i_c}^2 C_{j_0}, \\
\end{eqnarray*}
}
where the first equality follows from (\ref{eqn:Zone}), the second equality by definition
of $C_{j_0}$, and the final equality by factoring out the $c = k-1$ term from the first product
and the $c = k$ term from the second product.
The way to bound the term $\langle S_{i_{k-1}}, S_{j_{k-1}} \rangle$ is by separating the diagonal terms, where $i_{k-1} = j_{k-1}$, from the off-diagonal terms, where $i_{k-1} \neq j_{k-1}$. We now use the aforementioned properties of $S$: $\langle S_{i_{k-1}}, S_{j_{k-1}} \rangle = 1$ if $i_{k-1} = j_{k-1}$, while for $i_{k-1}\neq j_{k-1}$, conditioned on $\mathrm{AIPS}$ holding we have $|\langle S_{i_{k-1}}, S_{j_{k-1}} \rangle| = O(\sqrt{\log n}/\sqrt{r})$.
Conditioned on $\mathrm{AIPS}$ holding, we can recursively reduce the number of terms in the product:
{%
\begin{eqnarray*}
&& \| Z\|_F^2 \\
& = & \sum_{ \substack{i_1,\cdots i_k, j_0, \cdots j_{k-1}, i_{k-1} \neq j_{k-1} }} O((\log n)/r) \cdot (UU^\top)_{j_{k-1},i_k}^2 \cdot \prod_{c=1}^{k-2} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-1} (UU^\top)_{j_{c-1},i_c}^2 C_{j_0} \\
& + & \sum_{i_1,\cdots i_k, j_0, \cdots j_{k-1}, i_{k-1} = j_{k-1}} 1 \cdot (UU^\top)_{j_{k-1},i_k}^2 \cdot \prod_{c=1}^{k-2} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-1} (UU^\top)_{j_{c-1},i_c}^2 C_{j_0} \\
& \leq & \sum_{i_1,\cdots i_k, j_0, \cdots j_{k-1} } O( (\log n) / r) \cdot (UU^\top)_{j_{k-1},i_k}^2 \cdot \prod_{c=1}^{k-2} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-1} (UU^\top)_{j_{c-1},i_c}^2 C_{j_0} \\
& + & \sum_{i_1,\cdots i_k, j_0, \cdots j_{k-1}, i_{k-1} = j_{k-1}} 1 \cdot (UU^\top)_{j_{k-1},i_k}^2 \cdot \prod_{c=1}^{k-2} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-1} (UU^\top)_{j_{c-1},i_c}^2 C_{j_0},
\end{eqnarray*}
}
where the first equality follows from the property just mentioned, and the inequality
follows by including back the tuples of indices for which $i_{k-1} = j_{k-1}$, using
that each summand is non-negative.
Our next step will be to bound the term $(UU^\top)^2_{j_{k-1}, i_k}$. We have, $\| Z\|_F^2$ is
{
\begin{align*}
& \leq \sum_{i_k,j_{k-1}} (UU^\top)_{i_k,j_{k-1}}^2 \sum_{\substack{i_1,\cdots, i_{k-1} \\ j_0, \cdots, j_{k-2} } } O( (\log n) / r) \\
&\cdot \prod_{c=1}^{k-2} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-1} (UU^\top)_{j_{c-1},i_c}^2 C_{j_0} \\
& + \sum_{\substack{ i_1,\cdots, i_k \\ j_0, \cdots, j_{k-1}\\ i_{k-1} = j_{k-1} } } 1 \cdot (UU^\top)_{j_{k-1},i_k}^2 \prod_{c=1}^{k-2} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-1} (UU^\top)_{j_{c-1},i_c}^2 C_{j_0} \\
& = \underbrace{ O(d(\log n)/r) \sum_{ \substack{ i_1,\cdots, i_{k-1}\\ j_0, \cdots, j_{k-2} } } \prod_{c=1}^{k-2} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-1} (UU^\top)_{j_{c-1},i_c}^2 C_{j_0} }_{A} \\
& + \underbrace{ \sum_{ \substack{ i_1,\cdots, i_k \\j_0, \cdots, j_{k-1}\\ i_{k-1} = j_{k-1}}} 1 \cdot (UU^\top)_{j_{k-1},i_k}^2 \prod_{c=1}^{k-2} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-1} (UU^\top)_{j_{c-1},i_c}^2 C_{j_0} }_{B},\\
\end{align*}}
where the equality uses that $\sum_{i_k, j_{k-1}} (UU^\top)_{i_k, j_{k-1}}^2 = \|UU^\top\|_F^2 = d$.
We first upper bound term $B$:{%
\begin{eqnarray*}
& = & \sum_{ \substack{ i_1,\cdots, i_k\\ j_0, \cdots, j_{k-1}\\ i_{k-1} = j_{k-1}} } 1 \cdot (UU^\top)_{j_{k-1},i_k}^2 \prod_{c=1}^{k-2} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-1} (UU^\top)_{j_{c-1},i_c}^2 C_{j_0}\\
& = & \sum_{ \substack{ i_1,\cdots, i_{k-1}\\ j_0, j_1, \cdots ,j_{k-1}\\ i_{k-1} = j_{k-1} } } C_{j_0} \prod_{c=1}^{k-2} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-1} (UU^\top)_{j_{c-1},i_c}^2 \sum_{i_k} (UU^\top)_{j_{k-1},i_k}^2\\
& = & \sum_{ \substack{ i_1,\cdots, i_{k-1} \\ j_0, j_1, \cdots ,j_{k-1}\\ i_{k-1} = j_{k-1} } } C_{j_0} \prod_{c=1}^{k-2} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-1} (UU^\top)_{j_{c-1},i_c}^2 \|e_{j_{k-1} } UU^\top\|_2^2\\
& \leq & \sum_{ \substack{ i_1,\cdots, i_{k-1}\\ j_0, j_1, \cdots ,j_{k-1}\\ i_{k-1} = j_{k-1} } } C_{j_0} \prod_{c=1}^{k-2} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-1} (UU^\top)_{j_{c-1},i_c}^2 1\\
& = & \sum_{ \substack{ i_1,\cdots, i_{k-1}\\ j_0, j_1, \cdots ,j_{k-2} } } C_{j_0} \prod_{c=1}^{k-2} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-1} (UU^\top)_{j_{c-1},i_c}^2, \\
\end{eqnarray*}
}
where the first equality is the definition of $B$, the second
equality follows by separating out the index $i_k$, the third equality
uses that $\sum_{i_k} (UU^\top)^2_{j_{k-1}, i_k} = \|e_{j_{k-1}} UU^\top\|_2^2$, that is, the squared
norm of the $j_{k-1}$-th row of $UU^\top$,
the inequality follows since all rows of a projection matrix $UU^\top$ have norm at most $1$,
and the final equality uses that $j_{k-1}$ no longer appears in the expression.
We now merge our bounds for the terms $A$ and $B$ in the following way:
{%
\begin{eqnarray*}
&& \| Z\|_F^2 \\
&\leq & A+B \\
&\leq & O(d(\log n)/r) \sum_{ \substack{ i_1,\cdots,i_{k-1} \\ j_0,\cdots,j_{k-2} } } \prod_{c=1}^{k-2} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-1} (UU^\top)_{j_{c-1},i_c}^2 C_{j_0} \\
& + & \sum_{ \substack{ i_1,\cdots,i_{k-1}\\ j_0, j_1,\cdots,j_{k-2}} } C_{j_0} \prod_{c=1}^{k-2} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-1} (UU^\top)_{j_{c-1},i_c}^2 \\
& = & \left( O(d(\log n)/r) + 1 \right) \sum_{\substack{ i_1,\cdots, i_{k-1}\\ j_0, \cdots, j_{k-2}} } \prod_{c=1}^{k-2} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-1} (UU^\top)_{j_{c-1},i_c}^2 C_{j_0}\\
& \leq & \cdots \\
& \leq & \left( O(d(\log n)/r) + 1 \right)^2 \sum_{ \substack{i_1,\cdots, i_{k-2} \\ j_0, \cdots,j_{k-3} }} \prod_{c=1}^{k-3} \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^{k-2} (UU^\top)_{j_{c-1},i_c}^2 C_{j_0} \\
& \leq & \cdots \\
& \leq & \left( O(d(\log n)/r) + 1 \right)^{k-1} \sum_{i_1, j_0 } \prod_{c=1}^{1} (UU^\top)_{j_{c-1},i_c}^2 C_{j_0}\\
& \leq & \left( O(d(\log n)/r) + 1 \right)^{k-1} (d\|b\|_2^2 (\log^2 n)/r^2 + \| b\|_2^2 (\log n)/r),
\end{eqnarray*}
}
where the first two inequalities and the first equality
are by definition of $A$ and $B$ above. The next inequality follows by induction:
at this point we have replaced $k$ with $k-1$, and can repeat the argument,
incurring another multiplicative factor of $O(d (\log n)/r) + 1$. Repeating the
induction in this way we arrive at the second-to-last inequality. Finally, the last inequality follows by plugging in the definition of $C_{j_0}$, using
that $\sum_{i_1, j_0} (UU^\top)^2_{j_0, i_1} = d$, and {
$$\sum_{j_0, i_1} (UU^\top)^2_{j_0, i_1} a_{j_0}^2
= \sum_{j_0} a_{j_0}^2 \sum_{i_1} (UU^\top)^2_{j_0, i_1}
= \sum_{j_0} a_{j_0}^2 \|e_{j_0}UU^\top\|_2^2
\leq 1,$$}
where the inequality follows since each row of $UU^\top$ has norm at most
$1$, and $a$ is a unit vector. The final result is that {
\[
\| Z\|_F^2 \leq \left( O(d(\log n)/r) + 1 \right)^{k-1} (d\|b\|_2^2 (\log^2 n)/r^2 + \| b\|_2^2 (\log n)/r).
\]}
\section{Introduction}
Oblivious subspace embeddings (OSEs) were introduced by
Sarlos~\cite{sar06} to solve linear algebra problems more quickly than
traditional methods. An OSE is a distribution of matrices $S \in
{\mathbb R}^{m \times n}$ with $m \ll n$ such that, for any $d$-dimensional
subspace $U \subset {\mathbb R}^n$, with ``high'' probability $S$ preserves the
norm of every vector in the subspace. OSEs are a generalization of
the classic Johnson-Lindenstrauss lemma from vectors to subspaces.
Formally, we require that with probability $1-\delta$,
\[
\norm{Sx}_2 = (1 \pm \varepsilon) \norm{x}_2
\]
simultaneously for all $x \in U$, that is,
$(1-\varepsilon) \norm{x}_2 \leq \norm{Sx}_2 \leq (1+\varepsilon) \norm{x}_2$.
A major application of OSEs is to regression. The regression problem
is, given $b \in {\mathbb R}^n$ and $A \in {\mathbb R}^{n \times d}$ for $n \geq d$, to
solve for
\begin{align}
x^* = \argmin_{x \in \mathbb{R}^d} \norm{Ax-b}_2\label{eq:regression}
\end{align}
Because $A$ is a ``tall'' matrix with more rows than columns, the
system is overdetermined and there is likely no solution to $Ax = b$,
but regression will find the closest point to $b$ in the space spanned
by $A$. The classic answer to regression is to use the Moore-Penrose
pseudoinverse: $x^* = A^\dagger b$ where
\[
A^\dagger = (A^\top A)^{-1}A^\top
\]
is the ``pseudoinverse'' of $A$ (assuming $A$ has full column rank,
which we will typically do for simplicity). This classic solution
takes $O(nd^{\omega - 1} + d^{\omega})$ time, where $\omega < 2.373$ is
the matrix multiplication constant~\cite{cw90,w12,g4a}: $nd^{\omega -
1}$ time to compute $A^\top A$ and $d^{\omega}$ time to compute the inverse.
OSEs speed up the process by replacing~\eqref{eq:regression} with
\[
x' = \argmin_{x\in {\mathbb R}^d} \norm{SAx-Sb}_2
\]
for an OSE $S$ on $(d+1)$-dimensional spaces. This replaces the $n$
\times d$ regression problem with an $m \times d$ problem, which can be
solved more quickly since $m \ll n$. Because $Ax - b$ lies in the
$(d+1)$-dimensional space spanned by $b$ and the columns of $A$, with
high probability $S$ preserves the norm of $Ax - b$ up to $1 \pm \varepsilon$
for all $x$. Thus,
\[
\norm{Ax'-b}_2 \leq \frac{1+\varepsilon}{1-\varepsilon} \norm{Ax^*-b}_2.
\]
That is, $S$ produces a solution $x'$ which preserves the {\it cost}
of the regression problem. The running time for this method depends
on (1) the reduced dimension $m$ and (2) the time it takes to multiply
$S$ by $A$. We can compute these for ``standard'' OSE types:
\begin{itemize}
\item If $S$ has i.i.d. Gaussian entries, then $m = O(d/\varepsilon^2)$ is
sufficient (and in fact, $m \geq d/\epsilon^2$ is required \cite{nn14}).
However, computing $SA$ takes $O(mnd) =
O(nd^2/\varepsilon^2)$ time, which is worse than solving the original
regression problem (one can speed this up using fast matrix multiplication,
though it is still worse than solving the original problem).
\item If $S$ is a subsampled randomized Hadamard transform (SRHT)
matrix with random sign
flips~(see Theorem 2.4 in \cite{woo14} for a survey, and also see
\cite{CNW16} which gives a recent improvement)
then $m$
increases to $\Ot(d/\varepsilon^2 \cdot \log n)$, where $\Ot(f) = f
{\mathrm{poly}}(\log(f))$. But now, we can compute $SA$ using the fast
Hadamard transform in $O(nd\log n)$ time. This makes the overall
regression problem take $O(nd\log n + d^\omega/\varepsilon^2)$ time.
\item If $S$ is a random sparse matrix with random signs (the
``Count-Sketch'' matrix), then $m = d^{1 + \gamma}/\varepsilon^2$ suffices
for $\gamma > 0$ a decreasing function of the
sparsity~\cite{CW13,MM13,NN13,BDN15,c16}.
(The definition of a Count-Sketch matrix
is, for any $s\geq 1$, $S_{i,j}\in \{0, -1/\sqrt{s}, 1/\sqrt{s} \}$,
$\forall i\in [m], j\in [n]$ and the column sparsity of matrix $S$ is $s$.
Independently in
each column $s$ positions are chosen uniformly at random without
replacement, and each chosen position is set to $-1/\sqrt{s}$ with
probability $1/2$, and $+1/\sqrt{s}$ with probability $1/2$.)
Sparse OSEs can benefit from the sparsity of $A$, allowing for a running time of
$\Ot(\nnz(A)) + \Ot(d^\omega/\varepsilon^2)$,
where $\nnz(A)$ denotes the number of non-zeros in $A$.
\end{itemize}
When $n$ is large, the latter two algorithms are substantially faster
than the na\"ive $nd^{\omega-1}$ method.
\subsection{Our Contributions}
Despite the success of using subspace embeddings to speed up regression,
often what practitioners are interested in is not preserving the cost
of the regression problem, but rather the {\it generalization} or
{\it prediction} error
provided by the vector $x'$. Ideally, we would like for any future (unseen) example
$a \in \mathbb{R}^d$, that
$\langle a, x' \rangle \approx \langle a, x^* \rangle$ with
high probability.
Ultimately one may want to use $x'$ to do classification,
such as regularized least squares classification (RLSC) \cite{ryp03},
which has been found in
cases to do as well as support vector machines but is much simpler \cite{zp04}.
In this application, given a training
set of examples with multiple (non-binary)
labels identified with the rows of an $n \times d$
matrix $A$, one creates an $n \times r$ matrix $B$, each column indicating the presence
or absence of one of the $r$ possible labels in each example. One then solves
the multiple response regression problem $\min_X \|AX-B\|_F$, and uses $X$ to classify
future examples. A commonly used method is for a future example $a$, to compute
$\langle a, x_1 \rangle, \ldots, \langle a, x_r \rangle$, where $x_1, \ldots, x_r$
are the columns of $X$. One then chooses the label $i$ for which $\langle a, x_i \rangle$ is maximum.
For this to work, we would like the inner products $\langle a,
x'_1 \rangle, \ldots, \langle a, x'_r \rangle$ to be close to $\langle
a, x_1^* \rangle, \ldots$, $\langle a, x_r^* \rangle$, where $X'$ is the
solution to $\min_X \|SAX-SB\|_F$ and $X^*$ is the solution to $\min_X
\|AX-B\|_F$. For any $O(1)$-accurate OSE on $(d+r)$-dimensional spaces \cite{sar06}, which also satisfies so-called
approximate matrix multiplication with error $\varepsilon' = \varepsilon/\sqrt{d+r}$, we get that
\begin{align}\label{eq:ell2}
\norm{x' - x^*}_2 \leq O(\varepsilon) \cdot \norm{Ax^* - b}_2 \cdot \norm{A^\dagger}_2
\end{align}
where $\norm{A^\dagger}$ is the spectral norm of $A^\dagger$, which equals the
reciprocal of the smallest singular value of $A$. To obtain a generalization
error bound for an unseen example $a$, one has
\begin{eqnarray}\label{eqn:old}
|\langle a, x^* \rangle - \langle a, x' \rangle | = |\langle a, x^*-x' \rangle| \leq \|x^*-x'\|_2 \|a\|_2 = O(\varepsilon) \|a\|_2 \norm{Ax^*-b}_2 \norm{A^\dagger}_2,
\end{eqnarray}
which could be tight if given only the guarantee in \eqref{eq:ell2}.
However, if the difference vector $x' - x^*$ were distributed in a uniformly
random direction subject to~\eqref{eq:ell2}, then one would expect an
$\Ot(\sqrt{d})$ factor improvement in the bound. This is what our main
theorem shows:
\begin{theorem}[Main Theorem, informal]\label{thm:main}
Suppose $n \leq {\mathrm{poly}}(d)$ and matrix $A\in {\mathbb R}^{n\times d}$ and vector $b\in \mathbb{R}^{n}$ are given. Let $S\in {\mathbb R}^{m\times n}$ be a subsampled randomized
Hadamard transform matrix with $m = d^{1+\gamma}/\varepsilon^2$ rows for an
arbitrarily small constant $\gamma > 0$. For $x' = \argmin_{x\in {\mathbb R}^d}
\norm{SAx-Sb}_2$ and $x^* = \argmin_{x\in {\mathbb R}^d} \norm{Ax-b}_2$, and any fixed
$a \in {\mathbb R}^d$,
\begin{eqnarray}\label{eq:general}
|\langle a, x^* \rangle - \langle a, x' \rangle |
\leq \frac{\varepsilon}{\sqrt{d}} \|a\|_2 \norm{Ax^*-b}_2 \norm{A^\dagger}_2.
\end{eqnarray}
with probability $1-1/d^C$ for an arbitrarily large constant $C > 0$.
This implies that
\begin{eqnarray}\label{eq:ellinf}
\| x^* - x' \|_{\infty}
\leq \frac{\varepsilon}{\sqrt{d}} \norm{Ax^*-b}_2 \norm{A^\dagger}_2.
\end{eqnarray}
with $1 - 1/d^{C-1}$ probability.
If $n > {\mathrm{poly}}(d)$, then by first composing $S$ with a Count-Sketch OSE with ${\mathrm{poly}}(d)$
rows, one can achieve the same guarantee.
\end{theorem}
(Here $\gamma$ is a constant going to zero as $n$ increases;
see Theorem \ref{thm:eps} for a formal statement of Theorem \ref{thm:main}.)
Notice that Theorem \ref{thm:main} is considerably stronger than
the guarantee of (\ref{eqn:old}) provided by existing analyses. Indeed, in order
to achieve the guarantee (\ref{eq:ellinf})
in Theorem \ref{thm:main}, one would need
to set $\varepsilon' = \varepsilon/\sqrt{d}$ in existing OSEs, resulting in $\Omega(d^2/\epsilon^2)$
rows. In contrast, we achieve only $d^{1+\gamma}/\epsilon^2$ rows.
We can improve the bound in Theorem \ref{thm:main} to $m = d/\varepsilon^2$
if $S$ is a matrix
of i.i.d. Gaussians; however, as noted, computing $S \cdot A$ is slower in
this case.
Note that Theorem \ref{thm:main} also {\it makes no distributional
assumptions} on the data, and thus the data could be heavy-tailed or
even adversarially corrupted. This implies that our bound is still useful
when the rows of $A$ are not sampled independently from a distribution
with bounded variance.
The $\ell_\infty$ bound~\eqref{eq:ellinf} of Theorem~\ref{thm:main} is
achieved by applying~\eqref{eq:general} to the standard basis vectors
$a = e_i$ for each $i \in [d]$ and applying a union bound. This
$\ell_{\infty}$ guarantee often has a more natural interpretation than
the $\ell_2$ guarantee---if we think of the regression as attributing
the observable as a sum of various factors,~\eqref{eq:ellinf} says
that the contribution of each factor is estimated well. One may also
see our contribution as giving a way for estimating the pseudoinverse
$A^\dagger$ \emph{entrywise}. Namely, we get that $ (SA)^\dagger S
\approx A^\dagger $ in the sense that each entry is within additive $O(\varepsilon
\sqrt{\frac{\log d}{d}} \norm{A^\dagger}_2)$. There is a lot of work
on computing entries of inverses of a matrix, see, e.g.,
\cite{ADLRRU12,LiAKD08}.
Another benefit of the $\ell_\infty$ guarantee is when the regression
vector $x^*$ is expected to be
$k$-\emph{sparse}~(e.g. \cite{leekasso}). In such cases, thresholding
to the top $k$ entries will yield an $\ell_2$ guarantee a factor
$\sqrt{\frac{k}{d}}$ better than~\eqref{eq:ell2}.
One could ask if Theorem \ref{thm:main} also holds for sparse OSEs, such
as the Count-Sketch. Surprisingly, we show that one cannot achieve the
generalization error guarantee in Theorem \ref{thm:main}
with high probability, say, $1-1/d$,
using such embeddings, despite the fact that such
embeddings do approximate the cost of the regression problem up to a
$1+\epsilon$ factor with high probability. This shows that the generalization
error guarantee is achieved by some subspace embeddings but not all.
\begin{theorem}[Not all subspace embeddings give the $\ell_{\infty}$ guarantee; informal version of Theorem \ref{thm:count_sketch_not_infty}]
The Count-Sketch matrix with $d^{1.5}$ rows and sparsity
$d^{.25}$---which is an OSE with exponentially small failure probability---with
constant probability will have a result $x'$ that does not satisfy the
$\ell_{\infty}$ guarantee (\ref{eq:ellinf}).
\label{thm:negative}
\end{theorem}
We can show that Theorem \ref{thm:main} still holds when $S$ is applied to $T \cdot A$,
where $T$ is a Count-Sketch OSE with $d^{O(C)}/\epsilon^2$ rows, with $1-1/d^C$ probability.
We can thus compose the Count-Sketch OSE with the SRHT matrix
and obtain an $O(\nnz(A)) + {\mathrm{poly}}(d/\epsilon)$ time algorithm to compute $S \cdot T A$
achieving (\ref{eq:ellinf}). We can also compute $R \cdot S \cdot T \cdot A$, where
$R$ is a matrix of Gaussians, which is more efficient now that $S T A$ only has
$d^{1+\gamma}/\epsilon^2$ rows; this will reduce the number of rows to $d/\epsilon^2$.
Another common method of dimensionality reduction for linear
regression is \emph{leverage score
sampling}~\cite{DMMW12,LMP13,PKB14,CMM15}, which subsamples the rows
of $A$ by choosing each row with probability proportional to its
``leverage score''. With $O(d \log(d/\delta)/\varepsilon^2)$ rows taken,
the result $x'$ will satisfy the $\ell_2$ bound~\eqref{eq:ell2} with
probability $1-\delta$.
However, it does not give a good $\ell_\infty$ bound:
\begin{theorem}[Leverage score sampling does not give the $\ell_{\infty}$ guarantee; informal version of Theorem \ref{thm:leverage_score_not_infty}]
Leverage score sampling with $d^{1.5}$ rows---which satisfies the
$\ell_2$ bound with exponentially small failure probability---with constant
probability will have a result $x'$ that does not satisfy the
$\ell_{\infty}$ guarantee (\ref{eq:ellinf}).
\end{theorem}
Finally, we show that the $d^{1+\gamma}/\varepsilon^2$ rows that SRHT
matrices use is roughly optimal:
\begin{theorem}[Lower bounds for $\ell_2$ and $\ell_\infty$
guarantees; informal versions of Theorem \ref{thm:l2_lower_bound}
and Corollary \ref{cor:linf_lower_bound}]
Any sketching matrix distribution over $m \times n$ matrices that
satisfies either the $\ell_{2}$ guarantee~\eqref{eq:ell2} or the
$\ell_\infty$ guarantee~\eqref{eq:ellinf} must have $m \gtrsim
\min(n,d/\varepsilon^2)$.
\end{theorem}
Notice that our result shows the necessity of the $1/\varepsilon$ separation
between the results originally defined in Equations (3) and (4) of
Theorem 12 of \cite{sar06}. If we want to output some vector $x'$
such that $\| Ax'-b \|_2 \leq (1+\varepsilon) \| A x^* -b\|_2$, then it is
known that $m=\Theta(d/\varepsilon)$ is necessary and sufficient. However,
if we want to output a vector $x'$ such that $\| x'-x^*\|_2 \leq \varepsilon
\| A x^* -b \|_2 \cdot \|A^\dagger \|_2$, then we show that $m =
\Theta(d/\varepsilon^2)$ is necessary and sufficient.
\subsubsection{Comparison to Gradient Descent}
While this work is primarily about sketching methods, one could instead apply
iterative methods such as gradient descent, after appropriately
preconditioning the matrix, see, e.g., \cite{amt10,zf13, CW13}.
That is, one can use an OSE with constant
$\varepsilon$ to construct a preconditioner for $A$
and then run conjugate gradient using the
preconditioner. This gives an overall dependence of $\log(1/\epsilon)$.
The main drawback of this approach is
that one loses
the ability to save on
storage space or number of passes when $A$ appears in a stream, or to save on
communication or rounds when $A$ is distributed. Given increasingly
large data sets, such scenarios are now quite common, see, e.g., \cite{CW09}
for regression algorithms in the data stream model.
In situations where the entries of $A$ appear
sequentially, for example, a row at a time, one does not need to store
the full $n \times d$ matrix $A$ but only the $m \times d$ matrix
$SA$.
Also, iterative methods can be less efficient
when solving multiple response regression, where one wants to
minimize $\norm{AX - B}$ for a $d \times t$ matrix $X$ and an $n \times t$
matrix $B$. This is the case when $\varepsilon$ is constant and $t$ is large,
which can occur in some applications (though there are also other
applications for which $\varepsilon$ is very small). For
example, conjugate gradient with a preconditioner will take
$\Ot(ndt)$ time while using an OSE directly will take only $\Ot(nd +
d^2t)$ time (since one effectively replaces $n$ with $\Ot(d)$ after computing $S \cdot A$), separating $t$ from $d$. Multiple response regression arises,
for example, in the RLSC application above.
\subsubsection{Proof Techniques}
\noindent {\bf Theorem \ref{thm:main}.}
As noted in Theorem~\ref{thm:negative}, there are some OSEs for which
our generalization error bound does not hold. This hints that our
analysis is non-standard and cannot use generic properties of OSEs as
a black box. Indeed, in our analysis, we have to consider matrix
products of the form $S^\top S (UU^\top S^\top S)^k$ for our random
sketching matrix $S$ and a fixed matrix $U$, where $k$ is a positive
integer. We stress that it is the {\it same matrix} $S$ appearing
multiple times in this expression, which considerably complicates the
analysis, and does not allow us to appeal to standard results on
approximate matrix product (see, e.g., \cite{woo14} for a survey).
The key idea is
to recursively reduce $S^\top S (UU^\top S^\top S)^k$ using a property
of $S$. We use properties that only hold for specific OSEs $S$:
first, that each column of $S$ is a unit vector; and
second, that for all pairs $(i,j)$ and $i\neq j$, the inner product
between $S_i$ and $S_j$ is at most $\frac{\sqrt{\log n}}{\sqrt{m}}$
with probability $1-1/{\mathrm{poly}}(n)$.
\\\\
\noindent {\bf Theorems \ref{thm:count_sketch_not_infty} and \ref{thm:leverage_score_not_infty}.}
To show that Count-Sketch does not give the $\ell_{\infty}$ guarantee,
we construct a matrix $A$ and vector $b$ as in Figure
\ref{fig:A_b_count_sketch_not_linf}, which has optimal solution $x^*$
with all coordinates $1/\sqrt{d}$. We then show, for our setting of
parameters, that there likely exists an index $j \in [d]$ satisfying
the following property: the $j$th column of $S$ has disjoint support
from the $k$th column of $S$ for all $k \in [d+\alpha]\setminus \{j\}$
except for a single $k > d$, for which $S_j$ and $S_k$ share exactly
one common entry in their support. In such cases we can compute $x'_j$
explicitly, getting $\abs{x'_j - x^*_j} = \frac{1}{s\sqrt{\alpha}}$.
By choosing suitable parameters in our construction, this gives that
$\norm{x'-x^*}_\infty \gg \frac{1}{\sqrt{d}}$.
The lower bound for leverage score sampling follows a similar
construction.
\\\\
\noindent {\bf Theorem \ref{thm:l2_lower_bound} and Corollary \ref{cor:linf_lower_bound}.}
The lower bound proof for the $\ell_{2}$ guarantee uses Yao's minimax
principle. We are allowed to fix an $m\times n$ sketching matrix $S$
and design a distribution over $[A ~ b]$. We first write the sketching matrix $S = U \Sigma V^\top$ in its singular value decomposition (SVD). We choose the $d+1$ columns of the adjoined matrix $[A, b]$ to be random orthonormal vectors. Consider an $n \times n$ orthonormal matrix $R$ which contains the columns of $V$ as its first $m$ columns, and is completed on its remaining $n-m$ columns to an arbitrary orthonormal basis. Then $S \cdot [A, b] = V^\top R R^\top \cdot [A,b] = [U \Sigma I_m, 0] \cdot [R^\top A, R^\top b]$. Notice that $[R^\top A, R^\top b]$ is equal in distribution to $[A, b]$, since $R$ is fixed and $[A, b]$ is a random matrix with $d+1$ orthonormal columns. Therefore, $S \cdot [A, b]$ is equal in distribution to $[U \Sigma G, U \Sigma h]$ where $[G, h]$ corresponds to the first $m$ rows of an $n \times (d+1)$ uniformly random matrix with orthonormal columns.
A key idea is that if $n = \Omega(\max(m,d)^2)$, then by a result of Jiang \cite{J06}, any $m \times (d+1)$ submatrix of a random $n \times n$ orthonormal matrix has $o(1)$ total variation distance to an $m \times (d+1)$ matrix of i.i.d. $N(0,1/n)$ random variables, and so any events that would have occurred had $G$ and $h$ been independent i.i.d. Gaussians occur with the same probability for our distribution up to a $1-o(1)$ factor, so we can assume $G$ and $h$ are independent i.i.d. Gaussians in the analysis.
The optimal solution $x'$ in the sketch space equals $(SA)^{\dagger} Sb$, and by using that $SA$ has the form $U \Sigma G$, one can manipulate $\|(SA)^{\dagger} Sb\|$ to be of the form $\|\tilde{\Sigma}^{\dagger} (\Sigma R)^{\dagger} \Sigma h\|_2$, where the SVD of $G$ is $R \tilde{\Sigma} T$. We can upper bound $\|\tilde{\Sigma}\|_2$ by $\sqrt{r/n}$, since it is just the maximum singular value of a Gaussian matrix, where $r$ is the rank of $S$, which allows us to lower bound $\|\tilde{\Sigma}^{\dagger} (\Sigma R)^{\dagger} \Sigma h\|_2$ by $\sqrt{n/r}\|(\Sigma R)^{\dagger} \Sigma h\|_2$. Then, since $h$ is i.i.d. Gaussian, this quantity concentrates to $\frac{1}{\sqrt{r}} \|(\Sigma R)^{\dagger} \Sigma\|_F$, since $\|Ch\|_2^2 \approx \|C\|_F^2/n$ for a vector $h$ of i.i.d. $N(0,1/n)$ random variables. Finally, we can lower bound $\|(\Sigma R)^{\dagger} \Sigma\|_F^2$ by $\|(\Sigma R)^{\dagger} \Sigma RR^\top \|_F^2$ by the Pythagorean theorem, and now we have that $(\Sigma R)^{\dagger} \Sigma R$ is the identity, and so this expression is just equal to the rank of $\Sigma R$, which we prove is at least $d$. Noting that $x^* = 0$ for our instance, putting these bounds together gives
$\|x'-x^*\| \geq \sqrt{d/r}$. The last ingredient is a way to ensure that the rank of $S$ is at least $d$. Here we choose another distribution on inputs $A$ and $b$ for which it is trivial to show the rank of $S$ is at least $d$ with large probability. We require $S$ be good on the mixture. Since $S$ is fixed and good on the mixture, it is good for both distributions individually, which implies we can assume $S$ has rank $d$ in our analysis of the first distribution above.
\subsection{Notation}
For a positive integer $n$, let $[n] = \{1,2,\dotsc,n\}$. For a vector $x\in \mathbb{R}^n$, define $\| x\|_2=(\sum_{i=1}^n x_i^2 )^{\frac{1}{2}}$ and $\| x \|_{\infty}= \max_{i\in [n]} |x_i|$. For a matrix $A\in \mathbb{R}^{m\times n}$, define $\| A\|_2 = \sup_x \|Ax\|_2 /\|x\|_2$ to be the spectral norm of $A$ and $\| A\|_F = ( \sum_{i,j} A_{i,j}^2 )^{1/2}$ to be the Frobenius norm of $A$. We use $A^\dagger$ to denote the Moore-Penrose pseudoinverse of an $m\times n$ matrix $A$, which if $A = U \Sigma V^\top$ is its SVD (where $U\in \mathbb{R}^{m\times n}$, $\Sigma\in \mathbb{R}^{n\times n}$ and $V\in \mathbb{R}^{n\times n}$ for $m\geq n$), is given by $A^{\dagger} = V \Sigma^{-1} U^\top$.
In addition to $O(\cdot)$ notation, for two functions $f,g$, we use the shorthand $f\lesssim g$ (resp. $\gtrsim$) to indicate that $f\leq C g$ (resp. $\geq$) for an absolute constant $C$. We use $f\eqsim g$ to mean $cf\leq g\leq Cf$ for constants $c,C$.
\begin{definition}[Subspace Embedding]\label{def:subspace_embedding}
A $(1\pm\epsilon)$ $\ell_2$-subspace embedding for the column space of an $n\times d$ matrix $A$ is a matrix $S$ for which for all $x\in \mathbb{R}^d$, $\| SA x\|_2^2 = (1\pm \epsilon) \| A x\|_2^2$.
\end{definition}
\begin{definition}[Approximate Matrix Product]\label{def:approximate_matrix_product}
Let $0<\epsilon<1$ be a given approximation parameter. Given matrices $A$ and $B$, where $A$ and $B$ each have $n$ rows, the goal is to output a matrix $C$ so that $\| A^\top B - C\|_F \leq \epsilon \| A \|_F \| B \|_F$. Typically $C$
has the form $A^\top S^\top S B$, for a random matrix $S$ with a small
number of rows. In particular, this guarantee holds for the subsampled randomized Hadamard transform $S$ with $O(\epsilon^{-2})$ rows \cite{DMMS11}.
\end{definition}
\begin{figure}[!t]
\centering
\includegraphics[width=0.6\textwidth]{countsketch}
\caption{Our construction of $A$ and $b$ for the proof that Count-Sketch does not obey the $\ell_{\infty}$ guarantee. $\alpha < d$.}
\label{fig:A_b_count_sketch_not_linf}
\end{figure}
\section{Warmup: Gaussians OSEs}\label{sec:gaussians}
We first show that if $S$ is a Gaussian random matrix, then it
satisfies the generalization guarantee. This follows from the
rotational invariance of the Gaussian distribution.
\begin{theorem}\label{thm:gaussians}
Suppose $A \in {\mathbb R}^{n \times d}$ has full column rank. If the
entries of $S \in \mathbb{R}^{m \times n}$ are i.i.d. $N(0,1/m)$, $m
= O(d/\varepsilon^2)$, then for any vectors $a, b$ and $x^* = A^\dagger b$, we
have, with probability $1-1/{\mathrm{poly}}(d)$,
\[
|a^\top (SA)^\dagger Sb - a^\top x^*| \lesssim \frac{\varepsilon \sqrt{\log d}}{\sqrt{d}} \norm{a}_2 \norm{b - Ax^*}_2 \norm{A^\dagger }_2.
\]
\end{theorem}
Because $SA$ has full column rank with probability $1$, $(SA)^\dagger SA =
I$. Therefore
\begin{align*}
|a^\top (SA)^\dagger Sb - a^\top x^*| &= |a^\top (SA)^\dagger S (b - Ax^*)| = |a^\top (SA)^\dagger S (b - AA^\dagger b)|.
\end{align*}
Thus it suffices to only consider vectors $b$ where $A^\dagger b = 0$, or
equivalently $U^\top b = 0$. In such cases, $SU$ will be independent of
$Sb$, which will give the result. The proof is in Appendix~\ref{app:gaussian}.
\section{SRHT Matrices}
We first provide the definition of the subsampled randomized Hadamard transform (SRHT):
let $S=\frac{1}{\sqrt{r}} P H_n D$. Here, $D$ is an $n\times n$ diagonal matrix
with i.i.d. diagonal entries $D_{i,i}$, each of which is uniform on
$\{-1,+1\}$. The matrix $H_n$ is
the (unnormalized) Hadamard matrix of size $n \times n$, and we assume $n$ is a power of $2$.
Here, $H_n=[ H_{n/2}, ~ H_{n/2}; H_{n/2}, ~ - H_{n/2} ]$ and $H_1=[1]$.
The $r\times n$ matrix $P$ samples $r$ coordinates of an $n$ dimensional vector uniformly at random; with this normalization, each column of $S$ is a unit vector.
For other subspace embeddings, we no longer have that $SU$ and $Sb$
are independent. To analyze them, we start with a claim that allows
us to relate the inverse of a matrix to a power series.
\begin{claim}\label{claim:series}
Let $S \in {\mathbb R}^{m \times n}$, $A \in {\mathbb R}^{n \times d}$ have SVD $A = U
\Sigma V^\top $, and define $T \in {\mathbb R}^{d \times d}$ by
\[
T = I_d - U^\top S^\top S U.
\]
Suppose $SA$ has linearly independent columns and $\|T\|_2 \leq 1/2$.
Then
\begin{align}
(SA)^\dagger S = V \Sigma^{-1} \left(\sum_{k = 0}^\infty T^k
\right)U^\top S^\top S. \label{eq:powerseries}
\end{align}
\end{claim}
\begin{proof}
\begin{align*}
(SA)^\dagger S = & ~ (A^\top S^\top SA)^{-1}A^\top S^\top S \\
= & ~(V\Sigma U^\top S^\top SU\Sigma V^\top )^{-1}V\Sigma U^\top S^\top S \\
= & ~ V\Sigma^{-1}(U^\top S^\top SU)^{-1} U^\top S^\top S\\
= & ~ V\Sigma^{-1}(I_d - T)^{-1} U^\top S^\top S \\
= & ~ V\Sigma^{-1} \left(\sum_{k=0}^\infty T^k \right) U^\top S^\top S,
\end{align*}
where in the last equality, since $\|T\|_2 < 1$, the von Neumann
series $\sum_{k=0}^{\infty} T^k$ converges to $(I_d-T)^{-1}$.
\end{proof}
We then bound the $k$th term of this sum:
\define{Lemma}{l:bound_T_k}{%
Let $S \in {\mathbb R}^{r \times n}$ be the subsampled randomized Hadamard
transform, and let $a$ be a unit vector.
Then with probability $1-1/\mathrm{poly}(n)$, we have
\begin{align*}
|a^\top S^\top S ( UU^\top S^\top S)^k b| =& O(\log^k n) \cdot \left( O(d(\log n)/r) + 1 \right)^\frac{k-1}{2} \cdot (\sqrt{d} \|b\|_2 (\log n)/r + \| b\|_2 (\log^\frac{1}{2} n)/r^\frac{1}{2})
\end{align*}
Hence, for $r$ at least $d \log^{2k+2} n \log^2 (n/\varepsilon) / \varepsilon^2$, this is
at most $O(\|b\|_2 \varepsilon / \sqrt{d})$ with probability at least
$1-1/\mathrm{poly}(n)$.
}
\state{l:bound_T_k}
We defer the proof of this lemma to the next section, and now show
how the lemma
lets us prove that SRHT matrices satisfy the generalization
bound with high probability:
\begin{theorem}\label{thm:eps}
Suppose $A \in {\mathbb R}^{n \times d}$ has full column rank with $\log n =
d^{o(1)}$. Let $S \in {\mathbb R}^{m \times n}$ be a subsampled randomized
Hadamard transform with $m = O(d^{1+\alpha}/\varepsilon^2)$ for $\alpha = \Theta(\sqrt{\frac{\log \log n}{\log d}})$. For any vectors $a, b$ and $x^* = A^\dagger b$, we have
\[
|a^\top (SA)^\dagger Sb - a^\top x^*| \lesssim \frac{\varepsilon}{\sqrt{d}} \norm{a}_2 \norm{b - Ax^*}_2 \norm{\Sigma^{-1}}_2
\]
with probability $1-1/\mathrm{poly}(d)$.
\end{theorem}
\begin{proof}
For a constant $c>0$, define $\Delta = \Theta \left (\frac{1}{\sqrt{m}} \right ) (\log^c d)\norm{a}_2 \norm{b - Ax^*}_2 \norm{\Sigma^{-1}}_2 .$
We have that $S$ is a $(1\pm\gamma)$-subspace embedding (Definition~\ref{def:subspace_embedding}) for $\gamma =
\sqrt{\frac{d\log^c n}{m}}$ with probability $1-1/\mathrm{poly}(d)$
(see, e.g., Theorem 2.4 of \cite{woo14} and references therein), so
$\norm{SUx}_2 = (1\pm \gamma)\norm{Ux}_2$ for all $x$, which we condition
on. Hence for $T = I_d - U^\top S^\top S U$, we have $\norm{T}_2 \leq
(1+\gamma)^2-1 \lesssim \gamma$. In particular, $\norm{T}_2 < 1/2$ and
we can apply Claim \ref{claim:series}.
As in Section~\ref{sec:gaussians}, $SA$ has full column rank if $S$ is
a subspace embedding, so $(SA)^\dagger SA
= I$ and we may assume $x^* = 0$ without loss of generality.
By the approximate matrix product (Definition \ref{def:approximate_matrix_product}),
we have for
some $c$ that
\begin{align}
|a^\top V\Sigma^{-1} U^\top S^\top S b| \leq
\frac{\log^c d}{\sqrt{m}} \norm{a}_2 \norm{b}_2 \norm{\Sigma^{-1}}_2 \leq \Delta\label{eq:k0}
\end{align}
with $1-1/\mathrm{poly}(d)$ probability. Suppose this event occurs, bounding the $k=0$ term
of~\eqref{eq:powerseries}. Hence it suffices to show that the $k \geq
1$ terms of~\eqref{eq:powerseries} are bounded by $\Delta$.
By approximate matrix product (Definition~\ref{def:approximate_matrix_product}), we also have with $1-1/d^2$ probability that
\[
\norm{U^\top S^\top Sb}_F \leq \frac{\log^c d}{\sqrt{m}} \norm{U^\top }_F \norm{b}_2 \leq \frac{\log^c d \sqrt{d}}{\sqrt{m}} \norm{b}_2.
\]
Combining with $\norm{T}_2 \lesssim \gamma$ we have for any $k$ that
\[
|a^\top V\Sigma^{-1}T^k U^\top S^\top Sb| \lesssim \gamma^k (\log^c d) \frac{\sqrt{d}}{\sqrt{m}} \norm{a}_2 \norm{\Sigma^{-1}}_2 \norm{b}_2.
\]
Since this decays exponentially in $k$ at a rate of $\gamma < 1/2$,
the sum of all terms with index at least $k$ is bounded by a constant times the $k$th term.
As long as
\begin{align}
m \gtrsim \frac{1}{\varepsilon^2}d^{1 + \frac{1}{k}} \log^c n,\label{eq:2}
\end{align}
we have $\gamma = \sqrt{\frac{d\log^c n}{m}} < \varepsilon d^{-1/(2k)} / \log^c n$, so that
\[
\sum_{k' \geq k} |a^\top V\Sigma^{-1}T^{k'} U^\top S^\top Sb| \lesssim \frac{\varepsilon}{\sqrt{d}} \norm{a}_2 \norm{\Sigma^{-1}}_2 \norm{b}_2.
\]
On the other hand, by Lemma~\ref{l:bound_T_k}, increasing $m$ by a $C^k$ factor, we have for all $k$ that
\[
|a^\top V \Sigma^{-1} U^\top S^\top S ( UU^\top S^\top S)^k b| \lesssim \frac{1}{2^k} \frac{\varepsilon}{\sqrt{d}}\norm{a}_2 \norm{b}_2 \norm{\Sigma^{-1}}_2
\]
with probability at least $1 - 1/{\mathrm{poly}}(d)$, as long as $m \gtrsim d
\log^{2k+2}n\log^2(d/\varepsilon) /\varepsilon^2$. Since the $T^k$ term can be
expanded as a sum of $2^k$ terms of this form, we get that
\[
\sum_{k'=1}^k |a^\top V\Sigma^{-1}T^{k'} U^\top S^\top Sb| \lesssim \frac{\varepsilon}{\sqrt{d}}\norm{a}_2 \norm{b}_2 \norm{\Sigma^{-1}}_2
\]
with probability at least $1 - 1/{\mathrm{poly}}(d)$, as long as $m \gtrsim d
(C\log n)^{2k+2}\log^2(d/\varepsilon) /\varepsilon^2$ for a sufficiently large constant $C$.
Combining with~\eqref{eq:2}, the result holds as long as
\[
m \gtrsim \frac{d \log^c n}{\varepsilon^2}\max((C\log n)^{2k+2}, d^{\frac{1}{k}})
\]
for any $k$. Setting $k = \Theta(\sqrt{\frac{\log d}{\log \log n}})$ gives the result.
\end{proof}
{\bf{Combining Different Matrices}.} In some cases it can make sense to combine different matrices that
satisfy the generalization bound.
\define{Theorem}{thm:combine}{
Let $A \in {\mathbb R}^{n \times d}$, and let $R \in {\mathbb R}^{m \times r}$ and
$S \in {\mathbb R}^{r \times n}$ be drawn from distributions of matrices that
are $\varepsilon$-approximate OSEs and satisfy the generalization
bound~\eqref{eq:ellinf}. Then $RS$ satisfies the generalization
bound with a constant factor loss in failure probability and
approximation factor.
}
\state{thm:combine}
We defer the details to Appendix~\ref{sec:combining}.
\section{Proof of Lemma~\ref{l:bound_T_k}}
\begin{proof}
Each column $S_i$ of the subsampled randomized Hadamard transform has the same distribution
as $\sigma_i S_i$, where $\sigma_i$ is a random sign. It also has
$\inner{S_i, S_i} = 1$ for all $i$ and $\abs{\inner{S_i, S_j}}
\lesssim \frac{\sqrt{\log (1/\delta)}}{\sqrt{r}}$ with probability
$1-\delta$, for any $\delta$ and $i \neq j$.
See, e.g., \cite{ldfu13}.
By expanding the following product into a sum, and rearranging terms, we obtain
\begin{align*}
& a^\top S^\top S ( U U^\top S^\top S)^k b \\
= &\sum_{i_0,j_0, i_1, j_1, \cdots, i_k,j_k}a_{i_0} b_{j_k} \sigma_{i_0} \sigma_{i_1} \cdots \sigma_{i_k} \sigma_{j_0} \sigma_{j_1} \cdots \sigma_{j_k} \\
\cdot & \langle S_{i_0}, S_{j_0} \rangle (UU^\top )_{j_0,i_1} \langle S_{i_1}, S_{j_1} \rangle \cdots (UU^\top )_{j_{k-1},i_k} \langle S_{i_k}, S_{j_k} \rangle\\
= & \sum_{i_0,j_k} a_{i_0} b_{j_k} \sigma_{i_0} \sigma_{j_k} \sum_{j_0, i_1, j_1, \cdots, i_k} \sigma_{i_1} \cdots \sigma_{i_k} \sigma_{j_0} \sigma_{j_1} \cdots \sigma_{j_{k-1}} \\
& \cdot \langle S_{i_0}, S_{j_0} \rangle (UU^\top )_{j_0,i_1} \langle S_{i_1}, S_{j_1} \rangle \cdots (UU^\top )_{j_{k-1},i_k} \langle S_{i_k}, S_{j_k} \rangle \\
= & \sum_{i_0,j_k} \sigma_{i_0} \sigma_{j_k} Z_{i_0,j_k}
\end{align*}
where $Z_{i_0,j_k}$ is defined to be
\begin{eqnarray*}
Z_{i_0,j_k} &=& a_{i_0} b_{j_k} \sum_{ \substack{ i_1,\cdots i_k \\ j_0, \cdots j_{k-1} }} \prod_{c=1}^{k} \sigma_{i_c} \prod_{c=0}^{k-1} \sigma_{j_c} \cdot \prod_{c=0}^k \langle S_{i_c}, S_{j_c}\rangle \prod_{c=1}^k (UU^\top )_{i_{c-1},j_c}
\end{eqnarray*}
Note that $Z_{i_0,j_k}$ is independent of $\sigma_{i_0}$ and $\sigma_{j_k}$. We observe that in the above expression if $i_0 = j_0$, $i_1 = j_1$, $\cdots$, $i_k = j_k$,
then the sum over these indices equals $a^\top (UU^\top )\cdots (UU^\top ) b =0$, since
$\langle S_{i_c}, S_{j_c} \rangle = 1$ in this case for all $c$.
Moreover, the sum over all indices conditioned on $i_k = j_k$ is equal to $0$.
Indeed, in this case, the expression can be factored into the form $\zeta \cdot U^\top b$,
for some random variable $\zeta$, but $U^\top b = 0$.
Let $W$ be a matrix with $W_{i,j} = \sigma_i \sigma_j Z_{i,j}$. We
need Khintchine's inequality:
\begin{fact}[Khintchine's Inequality] \label{fact:Khintchine}
Let $\sigma_1, \ldots, \sigma_n$ be i.i.d. sign random variables, and let $z_1,
\ldots, z_n$ be real numbers. Then there are constants $C, C' > 0$ so that
\begin{align*}
\Pr \left[ \left|\sum_{i=1}^n z_i \sigma_i \right| \geq Ct \|z\|_2 \right] \leq e^{-C't^2}.
\end{align*}
\end{fact}
We note that Khintchine's inequality sometimes refers to bounds on the moments of
$|\sum_i z_i \sigma_i|$, though the above inequality follows readily by applying a Markov
bound to the high moments.
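As a quick numerical illustration of Fact~\ref{fact:Khintchine} (ours; coefficients and sample sizes arbitrary), the empirical tail indeed decays at a sub-Gaussian rate:
\begin{verbatim}
# Empirical tail of |sum_i z_i sigma_i| vs. a sub-Gaussian shape (illustrative).
import numpy as np

rng = np.random.default_rng(2)
z = rng.standard_normal(500)                        # fixed real coefficients
sigma = rng.choice([-1.0, 1.0], size=(200000, 500)) # i.i.d. random signs
sums = np.abs(sigma @ z)
for t in [1.0, 2.0, 3.0]:
    emp = np.mean(sums >= t * np.linalg.norm(z))
    print(t, emp, np.exp(-t**2 / 2))                # empirical vs. exp(-t^2/2)
\end{verbatim}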
We apply Fact \ref{fact:Khintchine} to each column of $W$, so that if $W_i$ is the $i$-th column, we have by a union bound that with probability $1-1/\mathrm{poly}(n)$, $\| W_i\|_2 = O( \| Z_i \|_2 \sqrt{\log n})$ simultaneously for all columns $i$. It follows that with the same probability, $\| W\|_F^2 = O(\|Z\|_F^2 \log n)$, that is, $\| W\|_F = O(\| Z\|_F \sqrt{\log n})$. We condition
on this event in the remainder.
Thus, it remains to bound $\| Z\|_F$. By squaring $Z_{i_0, j_k}$ and
using that ${\bf E}[\sigma_i \sigma_j] = 1$ if $i = j$ and $0$
otherwise, we have,
\begin{eqnarray}\label{eqn:Zone}
\underset{\sigma}{\bf E} [ Z_{i_0,j_k}^2 ] = a_{i_0}^2 b_{j_k}^2 \sum_{ \substack{i_1,\cdots i_k \\ j_0, \cdots j_{k-1}} } \prod_{c=0}^k \langle S_{i_c}, S_{j_c}\rangle^2 \prod_{c=1}^k (UU^\top )_{i_{c-1},j_c}^2
\end{eqnarray}
We defer to Appendix \ref{sec:Z}
the proof that
\begin{align*} \underset{S}{\bf E} [\| Z\|_F^2 ] &\leq \left( O(d(\log n)/r) + 1
\right)^{k-1} \cdot (d\|b\|_2^2 (\log^2 n)/r^2 + \| b\|_2^2 (\log n)/r)
\end{align*}
Note that we also have the bound:
\begin{align*}
(O(d(\log n) /r) + 1)^{k-1} &\leq ( e^{O( d(\log n) /r)} )^{k-1} \leq e^{O(kd(\log n) /r)} \leq O(1)
\end{align*}
for any $r = \Omega(kd\log n)$.
Having computed the expectation of $\|Z\|_F^2$, we now would like to show concentration. Consider a specific entry
\begin{eqnarray*}
Z_{i_0,j_k} = a_{i_0} b_{j_k} \sum_{i_k} \sigma_{i_k} \langle S_{i_k}, S_{j_k} \rangle \cdots \sum_{j_1} \sigma_{j_1} (UU^\top )_{j_1,i_2} \sum_{i_1} \sigma_{i_1} \langle S_{i_1}, S_{j_1}\rangle \sum_{j_0} \sigma_{j_0} \langle S_{i_0}, S_{j_0} \rangle (UU^\top )_{j_0, i_1}.
\end{eqnarray*}
By Fact \ref{fact:Khintchine},
for each fixing of $i_1$, with probability $1-1/\mathrm{poly}(n)$, we have
\begin{align}\label{eqn:induct1}
&\sum_{j_0} \sigma_{j_0} \langle S_{i_0}, S_{j_0} \rangle (UU^\top )_{j_0, i_1} = O(\sqrt{\log n}) \left(\sum_{j_0} \langle S_{i_0}, S_{j_0} \rangle^2 (UU^\top )_{j_0, i_1}^2 \right)^{\frac{1}{2}}.
\end{align}
Now, we can apply Khintchine's inequality for each fixing of $j_1$, and combine
this with (\ref{eqn:induct1}).
With probability $1-1/\mathrm{poly}(n)$, again we have
\begin{eqnarray}
& &\sum_{i_1} \sigma_{i_1} \langle S_{i_1}, S_{j_1}\rangle \sum_{j_0} \sigma_{j_0} \langle S_{i_0}, S_{j_0} \rangle (UU^\top )_{j_0, i_1} \notag \\
& =& \sum_{i_1} \sigma_{i_1} \langle S_{i_1}, S_{j_1}\rangle O(\sqrt{\log n}) \left(\sum_{j_0} \langle S_{i_0}, S_{j_0} \rangle^2 (UU^\top )_{j_0, i_1}^2 \right)^{\frac{1}{2}} \notag \\
& =&O(\log n ) \left( \sum_{i_1} \langle S_{i_1}, S_{j_1}\rangle^2 \sum_{j_0} \langle S_{i_0}, S_{j_0} \rangle^2 (UU^\top )_{j_0, i_1}^2 \right)^{\frac{1}{2}} \notag
\end{eqnarray}
Thus, we can apply
Khintchine's inequality recursively over all the $2k$ indices $j_0, i_1, j_1, \cdots ,j_{k-1}, i_k$, from which it follows that with probability $1-1/\mathrm{poly}(n)$, for each
such $i_0, j_k$, we have $Z_{i_0,j_k}^2 = O(\log^k n) \underset{S}{\bf E}[Z_{i_0, j_k}^2]$, using
(\ref{eqn:Zone}). We thus have with this probability, that
$\|Z\|_F^2 = O(\log^k n) \underset{S}{\bf E}[\|Z\|_F^2],$
completing the proof.
\end{proof}
\section{Lower bound for $\ell_2$ and $\ell_{\infty}$ guarantee}\label{sec:lower_bound}
We prove a lower bound for the $\ell_2$ guarantee, which immediately
implies a lower bound for the $\ell_{\infty}$ guarantee.
\begin{definition}
Given a matrix $A\in \mathbb{R}^{n\times d}$, vector $b\in
\mathbb{R}^{n}$ and matrix $S\in \mathbb{R}^{r\times n}$, denote
$x^*=A^\dagger b$. We say that an algorithm ${\cal A}(A,b,S)$ that
outputs a vector $x'=(SA)^\dagger S b$ ``succeeds'' if the following
property holds:
$\| x' - x^* \|_2 \lesssim \varepsilon \| b\|_2 \cdot \| A^\dagger \|_2 \cdot \| Ax^*-b \|_2.$
\end{definition}
\define{Theorem}{thm:l2_lower_bound}{
Suppose $\Pi$ is a distribution over $\mathbb{R}^{m\times n}$ with the property that for any $A\in \mathbb{R}^{n\times d}$ and $b\in \mathbb{R}^{n}$,
$\underset{S\sim \Pi}{ \Pr } [ {\cal A}(A,b,S) \mathrm{~succeeds~} ] \geq 19/20.$
Then $m \gtrsim \min(n,d/\varepsilon^2)$.
}
\state{thm:l2_lower_bound}
\begin{proof}
The proof uses Yao's minimax principle. Let ${\cal D}$ be an arbitrary distribution over $\mathbb{R}^{n\times (d+1)}$ and set $\delta = 1/20$; by assumption,
$
\underset{ (A,b) \sim {\cal D} }{ \mathbb{E} } ~\underset{ S \sim \Pi }{ \mathbb{E} } [ {\cal A}(A,b,S) \mathrm{~succeeds~} ] \geq 1-\delta.
$
Switching the order of the probabilistic quantifiers, an averaging argument implies
the existence of a fixed matrix $S_0 \in \mathbb{R}^{m\times n}$ such that
\begin{align*}
\underset{ (A,b) \sim {\cal D} }{ \mathbb{E} } [ {\cal A}(A,b,S_0) \mathrm{~succeeds~} ] \geq 1-\delta.
\end{align*}
Thus, it suffices to construct a distribution ${\cal D}_{\hard}$ for which
\begin{align*}
\underset{ (A,b) \sim {\cal D}_{\hard} }{ \mathbb{E} } [ {\cal A}(A,b,S_0) \mathrm{~succeeds~} ] \geq 1- \delta,
\end{align*}
cannot hold for any fixed $S_0 \in \mathbb{R}^{m\times n}$ which does not satisfy $m=\Omega(\min(n,d/\varepsilon^2))$.
The proof can be split into three parts.
First, we prove a useful property. Second, we prove a lower bound for the case $\rank(S) \geq d$. Third, we show why $\rank(S)\geq d$ is necessary.
(\RN{1})
We show that the entries of $[SA,Sb]$ are close to independent Gaussians if both $[A,b]$ and $S$ are orthonormal matrices. We can rewrite $SA$ as follows,
\begin{eqnarray}\label{eq:orthonormal_S_and_A_looks_Gaussian}
\underbrace{S}_{m \times n} \cdot \underbrace{A}_{n \times d} = \underbrace{S}_{m \times n} \underbrace{ R}_{n\times n} \underbrace{R^\top}_{n\times n} \underbrace{A}_{n\times d} = S \begin{bmatrix} S^\top & \overline{S}^\top \end{bmatrix} \begin{bmatrix} S \\ \overline{S} \end{bmatrix} A = \begin{bmatrix} I_m & 0 \end{bmatrix} \begin{bmatrix} S \\ \overline{S} \end{bmatrix} A = \begin{bmatrix} I_m & 0 \end{bmatrix} \underbrace{\wt{A}}_{n \times d} = \underbrace{\wt{A}_m}_{m \times d}
\end{eqnarray}
where $\ov{S}$ is the complement of the orthonormal basis $S$, $I_m$ is the $m\times m$ identity matrix, and $\wt{A}_m$ is the top $m\times d$ submatrix of $\wt{A}$. Thus, using \cite{J06}, as long as $m = o(\sqrt{n})$ (which holds since $n=\Omega(d^3)$),
the total variation distance between $[SA, Sb]$ and a random Gaussian matrix is small, i.e.,
\begin{equation}\label{eq:total_variation_distance}
D_{TV}( [SA,Sb], H) \leq 0.01
\end{equation}
where each entry of $H$ is i.i.d. Gaussian ${\cal N}(0,1/n)$.
(\RN{2})
Here we prove the theorem in the case when $S$ has rank $r\geq d$ (we will prove this is necessary in part \RN{3}). Writing $S=U \Sigma V^\top$ in its SVD, we have
\begin{equation}\label{eq:SA_is_USigmaG}
\underbrace{S}_{m \times n}A = \underbrace{U}_{m\times r} \underbrace{\Sigma}_{r\times r} \underbrace{V^\top}_{r\times n} R R^\top A = U \Sigma G
\end{equation}
where $R=\begin{bmatrix} V & \ov{V} \end{bmatrix}$. By an argument similar to that of Equation (\ref{eq:orthonormal_S_and_A_looks_Gaussian}), as long as $r=o(\sqrt{n})$ we have that $G$ can also be approximated by a Gaussian matrix with entries sampled i.i.d. from ${\cal N}(0,1/n)$. Similarly, $Sb = U \Sigma h$, where $h$ can be approximated by a Gaussian vector with entries sampled i.i.d. from ${\cal N}(0,1/n)$.
Since $U$ has linearly independent columns, $(U\Sigma G)^\dagger U \Sigma h= (\Sigma G)^\dagger U^\top U \Sigma h = (\Sigma G)^\dagger \Sigma h$.
The $r\times d$ matrix $G$ has ${\it SVD}$ $G= \underbrace{R}_{r\times d} \underbrace{\wt{\Sigma}}_{d\times d} \underbrace{T}_{d\times d}$, and applying the pseudo-inverse property again, we have
\begin{align*}
\| (SA)^\dagger Sb\|_2 = & ~ \| (\Sigma G)^\dagger \Sigma h\|_2 = \| (\Sigma R \wt{\Sigma} T)^\dagger \Sigma h\|_2 = \| T^\dagger (\Sigma R \wt{\Sigma})^\dagger \Sigma h\|_2 = \| (\Sigma R \wt{\Sigma} )^\dagger \Sigma h\|_2 \\
= & ~\| \wt{\Sigma}^\dagger (\Sigma R )^\dagger \Sigma h\|_2,
\end{align*}
where the first equality follows by Equation (\ref{eq:SA_is_USigmaG}), the second equality follows by the {\it SVD} of $G$, the third and fifth equalities follow by properties of the pseudo-inverse\footnote{\url{https://en.wikipedia.org/wiki/Moore-Penrose_pseudoinverse}} when $T$ has orthonormal rows and $\widetilde{\Sigma}$ is a diagonal matrix, and the fourth equality follows since $\| T^\dagger\|_2 = 1$ and $T$ is an orthonormal basis.
Because each entry of $G=R\wt{\Sigma}T \in \mathbb{R}^{r\times d}$ is sampled i.i.d. from a Gaussian ${\cal N}(0,1/n)$, using the result of \cite{V10} we can give an upper bound for the maximum singular value of $G$: $\| \wt{\Sigma} \|_2 \lesssim \sqrt{\frac{r}{n}}$ with probability at least $.99$. Thus,
\begin{align*}
\| \wt{\Sigma }^\dagger (\Sigma R)^\dagger \Sigma h\|_2 \geq \sigma_{\min} (\wt{\Sigma }^\dagger) \cdot \| (\Sigma R)^\dagger \Sigma h\|_2 = \frac{1}{\sigma_{\max} (\wt{\Sigma})} \| (\Sigma R)^\dagger \Sigma h\|_2 \gtrsim \sqrt{n/r} \| (\Sigma R)^\dagger \Sigma h\|_2.
\end{align*}
Because $h$ is a random Gaussian vector which is independent of $(\Sigma R)^\dagger \Sigma$, by Claim \ref{cla:EAg_is_fnorm_A},
$\E_h [ \| (\Sigma R)^\dagger \Sigma h \|_2^2 ] = \frac{1}{n} \cdot \| (\Sigma R)^\dagger \Sigma \|_F^2,$
where each entry of $h$ is sampled from i.i.d. Gaussian ${\cal N}(0,1/n)$.
Then, using the Pythagorean~Theorem,
\begin{align*}
\| (\Sigma R)^\dagger \Sigma \|_F^2 = & ~ \| (\Sigma R)^\dagger \Sigma R R^\top \|_F^2 + \| (\Sigma R)^\dagger \Sigma (I-R R^\top ) \|_F^2 \\
\geq & ~ \| (\Sigma R)^\dagger \Sigma R R^\top \|_F^2 \\
= & ~ \| (\Sigma R)^\dagger \Sigma R \|_F^2 \\
= & ~ \rank(\Sigma R) \\
= & ~ \rank(SA) \\
= & ~ d.
\end{align*}
Thus, $\| x' -x^*\|_2 \gtrsim \sqrt{d/r} \geq \sqrt{d/m}=\varepsilon$.
(\RN{3}) Now we show that we can assume that $\rank(S)\geq d$.
We sample $A,b$ based on the following distribution ${\cal D}_{\hard}$: with probability $1/2$, $A,b$ are sampled from ${\cal D}_1$; with probability $1/2$, $A,b$ are sampled from ${\cal D}_2$. In distribution ${\cal D}_1$, $A$ is a random orthonormal basis and $b$ is always orthogonal to the column span of $A$. In distribution ${\cal D}_2$, $A$ is a $d\times d$ identity matrix in the top-$d$ rows and $0$s elsewhere, while $b$ is a random unit vector supported on the top-$d$ coordinates.
Then, for any $(A,b)$ sampled from ${\cal D}_1$, $S$ needs to work with probability at least $9/10$. Also for any $(A,b)$ sampled from ${\cal D}_2$, $S$ needs to work with probability at least $9/10$. The latter two statements follow
since overall $S$ succeeds on ${\cal D}_{\hard}$ with probability at least $19/20$.
Consider the case where $A, b$ are sampled from distribution ${\cal D}_2$. Then $Ax^* = b$ and $\OPT = 0$. Then consider $x'$ which is the optimal solution to $\min_x \| SAx - Sb\|_2^2$, so
$x' = (SA)^\dagger Sb = (S_L)^\dagger S_L b$,
where $S$ can be decomposed into two matrices $S_L\in \mathbb{R}^{r\times d}$ and $S_R\in \mathbb{R}^{r\times(n-d)}$, $S = \begin{bmatrix} S_L & S_R \end{bmatrix}$. Plugging $x'$ into the original regression problem,
$\| Ax' - b\|_2^2 = \| A (S_L)^\dagger S_L b - b \|_2^2$,
which is at most $(1+\varepsilon)\OPT=0$. Thus $\rank(S_L)$ is $d$. Since $S_L$ is a submatrix of $S$, the rank of $S$ is also $d$.
\end{proof}
It remains to define several tools which are used in the main proof of the lower bound.
\begin{claim}\label{cla:EAg_is_fnorm_A}
For any matrix $A\in \mathbb{R}^{n\times d}$, if each entry of a
vector $g\in \mathbb{R}^d$ is chosen from an i.i.d Gaussian ${\cal
N}(0,\sigma^2)$, then $\underset{g}{\E} [\| A g\|_2^2] = \sigma^2 \|
A\|_F^2$ .
\end{claim}
\begin{proof}
\begin{align*}
\underset{g}{\E} [\| A g\|_2^2] = & ~ \underset{g}{\E} \left[ \sum_{i=1}^n (\sum_{j=1}^d A_{ij} g_j)^2 \right] \\
= & ~\underset{g}{\E} \left[ \sum_{i=1}^n ( \sum_{j=1}^d A_{ij}^2 g_{j}^2 + \sum_{j\neq j'} A_{ij} A_{ij'} g_j g_{j'} ) \right] \\
= & ~\sum_{i=1}^n \sum_{j=1}^d A_{ij}^2 \sigma^2 \\
= & ~\sigma^2 \| A\|_F^2.
\end{align*}
\end{proof}
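A one-line Monte-Carlo check of this claim (ours, with arbitrary $A$ and $\sigma$) may be clarifying:
\begin{verbatim}
# Monte-Carlo check of E||Ag||_2^2 = sigma^2 ||A||_F^2 (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
A, sigma = rng.standard_normal((30, 7)), 0.5
g = sigma * rng.standard_normal((7, 200000))   # i.i.d. N(0, sigma^2) entries
lhs = np.mean(np.sum((A @ g)**2, axis=0))      # estimate of E||Ag||_2^2
rhs = sigma**2 * np.linalg.norm(A, 'fro')**2
print(lhs, rhs)                                # agree up to sampling error
\end{verbatim}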
Let $g_1, g_2, \cdots, g_t$ be i.i.d. ${\cal N}(0,1)$ random variables. The random variable $\sum_{i=1}^t g_i^2$ is $\chi^2$-distributed with $t$ degrees of freedom. Furthermore, the following tail bounds are known.
\begin{fact}[Lemma 1 of \cite{LM00}]\label{fac:kai_squared_distribution}
Let $g_1, g_2, \cdots, g_t$ be i.i.d. ${\cal N}(0,1)$ random variables. Then for any $x\geq 0$,
\begin{align*}
\Pr \left[ \sum_{i=1}^t g_i^2 \geq t+ 2 \sqrt{tx} + 2x \right] \leq \exp(-x),
\end{align*}
and
\begin{align*}
\Pr\left[ \sum_{i=1}^t g_i^2 \leq t- 2 \sqrt{tx} \right] \leq \exp(-x).
\end{align*}
\end{fact}
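A quick simulation (ours; the choices of $t$ and $x$ are arbitrary) confirms that both empirical tails are well below $\exp(-x)$:
\begin{verbatim}
# Numerical check of the chi-square tail bounds (illustrative only).
import numpy as np

rng = np.random.default_rng(7)
t, x = 50, 4.0
samples = np.sum(rng.standard_normal((200000, t))**2, axis=1)
upper = t + 2*np.sqrt(t*x) + 2*x
lower = t - 2*np.sqrt(t*x)
print(np.mean(samples >= upper), np.mean(samples <= lower), np.exp(-x))
\end{verbatim}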
\begin{definition}
Given a matrix $A\in \mathbb{R}^{n\times d}$, vector $b\in \mathbb{R}^{n}$ and matrix $S\in \mathbb{R}^{r\times n}$, denote $x^*=A^\dagger b$. We say that an algorithm ${\cal B}(A,b,S)$ that outputs a vector $x'=(SA)^\dagger S b$ ``succeeds'' if the following property holds:
\begin{equation*}
\| x' - x^* \|_{\infty} \lesssim \frac{\varepsilon}{\sqrt{d}} \| b\|_2 \cdot \| A^\dagger \|_2 \cdot \| Ax^*-b \|_2.
\end{equation*}
\end{definition}
Applying $\| x'-x\|_{\infty} \geq \frac{1}{\sqrt{d}} \| x'-x\|_2$ to Theorem \ref{thm:l2_lower_bound}, we obtain the $\ell_{\infty}$ lower bound as a corollary.
\begin{corollary}\label{cor:linf_lower_bound}
Suppose $\Pi$ is a distribution over $\mathbb{R}^{m\times n}$ with the property that for any $A\in \mathbb{R}^{n\times d}$ and $b\in \mathbb{R}^{n}$,
\begin{equation*}
\underset{S\sim \Pi}{ \Pr } [ {\cal B}(A,b,S) \mathrm{~succeeds~} ] \geq 9/10.
\end{equation*}
Then $m \gtrsim \min(n,d/\varepsilon^2)$.
\end{corollary}
\section{Introduction}
Nitrogen-vacancy centers (referred to as NV-centers in the following) are used in various applications ranging from high-precision temperature measurements \cite{2013toylithermometry} and magnetic-field measurements in various modalities, with \cite{2015wolfsubpico,georgioscavity} or without employing microwaves \cite{arnemwfree} or bias fields \cite{tillzero}, and electric-field measurements \cite{doldeelectric}, to quantum computing \cite{Quantumcomputer2010}, gyroscopy \cite{andreygyro} as well as bio-sensing \cite{barryworm}.
An NV-center is a point defect in diamond, where a pair of neighbouring carbon atoms is replaced by a nitrogen atom and a vacancy. It is an atom-like system that can be in different charge states, called NV$^{+}$, NV$^{0}$ and NV$^{-}$, where the latter is favoured for applications due to its optical spin read-out \cite{DOHERTY20131}. The NV$^{-}$ center has a total electron spin of 1, the corresponding electrons being contributed by the NV-center nitrogen itself, the open bonds of the carbon atoms and another substitutional nitrogen atom in the lattice. The electron spins can be optically pumped into one of the NV's Zeeman sublevels by illuminating the diamond with, for example, \SI{532}{\nano \meter} laser light driving transitions into the phonon-broadened excited state. The spin state can be read out by detecting the amount of (infra)red fluorescence light, due to spin-selective non-radiative transitions \cite{DOHERTY20131}. Driving microwave transitions between the various spin states leads to observable changes in fluorescence. Those can be used to measure magnetic fields via the Zeeman shifts of the respective transition frequencies.
A fundamental noise limit of fluorescence detection arises from photon shot noise, which depends on the amount of collected light and usually dominates over spin-projection noise, another fundamental limit. Therefore, by increasing the amount of collected light, the signal-to-noise ratio of such measurements can be improved.
Several different techniques have been developed to improve photon-collection efficiency, in both single-NV setups \cite{Haddena2010,Li2015,Choy2013} and ensemble experiments \cite{2015wolfsubpico,fourphotdiodes}.
\begin{figure}[h]
\centering
{\includegraphics[width=1\linewidth]{figures/MainSketch.pdf}}
\caption{Diamond assembly images: (a) Sketch of the NV-bearing diamond pyramid (science diamond) and its dimensions on top of the diamond anvil. (b) Photo of the science diamond, glued to the diamond anvil. (c) Image of fluorescence light collected from the back of the diamond anvil, used for alignment purposes and taken with a CCD camera. The circle in the center of the cross is the apex of the fluorescence cone from the laser beam focal spot, and the four side beams arise from the side reflections of the anvil. All measures are in mm.}
\label{fig:diamond}
\end{figure}
In this work, the diamond containing the NV centers, referred to as the sensing diamond, was glued to a cone-shaped diamond piece, referred to as the diamond anvil, which increases the amount of collected fluorescence light. The sensing diamond was cut to direct side-emitted fluorescence from the sensing volume into the back direction via total internal reflection, see Fig.\,\ref{fig:diamond}. The curved back surface of the diamond anvil reduces losses due to total internal reflection at the diamond-to-air interface. Detection with a photodiode confirms the improvement factor of $3.8(1)$ expected from simulations, compared to collection through the opposing exit surface.
\section{Sample preparation}
The sensing diamond, a high-pressure high-temperature (HPHT) sample (Element Six DNV-B14), is specified to contain 13\,ppm nitrogen, 3.7\,ppm NV$^-$-centers and 1.4\,ppm NV$^0$-centers. This specific sample is $^{13}$C-depleted (99.999\% $^{12}$C).
The sample was irradiated with \SI{5}{\mega\electronvolt} electrons at a dose of \SI{2e19}{\centi\meter^{-2}} and then annealed at \SI{700}{\degreeCelsius} for eight hours. Its measured minimal linewidth in a pulsed optically detected magnetic resonance (ODMR) experiment is around \SI{300}{\kilo\hertz}.
The shape of the diamond anvil and sensing diamond pieces was optimised using the COMSOL Multiphysics software. The simulations were used to evaluate the improvement in fluorescence collection between the back and front side.
The science diamond is a trapezoid with a back surface being a square \SI{0.5}{\milli\meter} on the side, a height of \SI{0.18}{\milli\meter}, and the upper square surface being \SI{0.15}{\milli\meter} on the side, see Fig.\,\ref{fig:diamond}. The base angle for this shape is close to 45 degrees to match the single-crystal diamond anvil manufactured by Dutch Diamond Technologies. This limits the angular distribution of about 90\% of rays exiting the diamond construction to below 45 degrees with respect to the symmetry axis, see Fig.\,\ref{fig:Simulation}. This means 90\% of the light can be picked up by a lens with a numerical aperture of 0.7, a very weak requirement.
The two diamond pieces were joined with a thin layer of Norland Optical Adhesive 170 with a refractive index of 1.7 (the highest-index material that we could find), applied between the anvil and the back surface of the sensing diamond while pressing the pieces together. Effects such as etaloning due to a significant glue-layer thickness were not observed.
In the COMSOL simulation within the ray-tracing module, a cylindrical distribution of ray sources was placed inside the sensing diamond to mimic the shape of the volume excited by the laser light. Three point sources, spaced \SI{30}{\micro\meter} apart along the symmetry axis of the sensing diamond, each emitted 2000 rays isotropically. The simulated ratio of rays collected on a photodiode between the back and front side, using an 8 mm focal length, 12.7 mm diameter aspheric condenser lens, was 3.8.
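The benefit of the 45-degree shaping can be illustrated with a toy Monte-Carlo estimate (ours, not the COMSOL model, which traces refraction and total internal reflection): for bare isotropic emission, only about 29\% of the rays emitted into one hemisphere fall within the acceptance cone of an NA-0.7 lens, which the anvil geometry raises to about 90\%.
\begin{verbatim}
# Toy estimate: fraction of isotropic one-sided rays within an NA-0.7 cone.
import numpy as np

rng = np.random.default_rng(4)
costheta = rng.uniform(0.0, 1.0, 200000)   # one hemisphere, isotropic emission
alpha = np.degrees(np.arccos(costheta))    # angle to the symmetry axis

accept = np.degrees(np.arcsin(0.7))        # NA 0.7 -> ~44.4 deg in air
print(np.mean(alpha < accept))             # ~0.29 without the anvil optics
\end{verbatim}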
The diamond was mounted on a custom-made PEEK mount during the experiment. To test the thermal durability of the glue joint, we applied around 1.8\,W of green laser light in a 0.9 mm diameter beam, focused using an 8\,mm focal-length lens, for around 10\,s on the sensing diamond from the back side. No degradation of the diamond optical assembly was observed, even at temperatures at which the PEEK material started to deform, estimated to be around 150\,°C based on its glass transition temperature \cite{peek}.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{figures/anglecomparison2_AW.pdf}
\caption{Simulated fractional cumulative angular ray distribution as a function of $\alpha$, the ray angle with respect to the symmetry axis of the diamond anvil with the sensing diamond. The cumulative ray fraction for single-side collection is indicated as a fraction of the total number of rays emitted per side. The dashed red line indicates the numerical aperture of the collecting lens used in this note, the grey one the anvil opening angle of 45 degrees.}
\label{fig:Simulation}
\end{figure}
\section{Characterisation measurements}
To verify the simulation results we built a setup to measure the amount of fluorescence light collected from the front and back sides of the diamond simultaneously.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{figures/sketch_AW.pdf}
\caption[Experimental setup schematic]{Experimental setup: the \SI{532}{\nano \meter} laser light is focused on an NV-center doped sensing diamond, attached to a diamond anvil. The emitted light is focused onto the same model of photodiode on each side. A dichroic mirror is used to block the reflected green light going towards the first photodiode, which is used to record the intensity of the fluorescence light. An interference filter is employed to reject the laser light and transmit the fluorescence light. The right photodiode and lens were replaced with a CCD camera and a longer focal-length lens, respectively, to capture images used to align the light beam with respect to the diamond.}
\label{fig:experimental setup}
\end{figure}
\subsection{Experimental setup}
The setup is sketched in
Fig.\,\ref{fig:experimental setup}. A \SI{532}{\nano\meter} laser beam was focused into the sensing diamond using a plano-convex lens with a focal length of $f=$ \SI{8}{\milli\meter}. Behind the diamond we initially placed another lens and a notch filter to separate the green light from the (infra)red fluorescence light emitted from the diamond, detected with a charge-coupled device (CCD) camera. That way we were able to verify that the diamond was illuminated centrally with the \SI{532}{\nano\meter} light. The camera was positioned on the back side, producing the characteristic cross shape shown in Fig.\,\ref{fig:diamond} (c). This shape originates from reflections off the side surfaces of the sensing diamond and allows for precise positioning of the diamond relative to the laser beam using an XYZ-stage.
After alignment, the optics at the back side were replaced with the same type of aspheric condenser lens with $f$=\SI{8}{\milli\meter} focal length, a notch filter and a photodiode. The fluorescence was compared in both the front and back direction simultaneously. Integrated over the expected fluorescence spectrum, the notch filter (Thorlabs NF 533-17) transmits about 2\% more than the dichroic mirror (Thorlabs DMLP-567), a difference negligible within the measurement error.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{figures/comparisonsidesodmr.pdf}
\caption{Comparison of the signal improvement due to the higher photon-collection efficiency, exemplified by optically detected magnetic resonance spectra, both normalised to the back-surface peak signal value.}
\label{fig:odmr}
\end{figure}
\subsection{Measurements}
Measuring on both sides simultaneously, for five laser powers equally spaced between 50 and 150\,mW, gave a mean increase of collected fluorescence light by a factor of $3.8(1)$ between the back and front side.
Next, a magnetic field was applied with Helmholtz coils and we used microwaves to obtain ODMR spectra of the NV centers to visualize the difference, see Fig.\,\ref{fig:odmr}.
\section{Conclusion}
We described a design to improve the amount of collected fluorescence light emitted by a nitrogen-vacancy center ensemble in diamond.
We were able to experimentally measure an increase by a factor of $3.8(1)$ for the improved design (back) with respect to the unimproved opposing facet (front). This increase is supported by ray-tracing simulations. An additional feature of the design is the improved angular distribution of the fluorescence: it would allow over 90\% of the emitted fluorescence to be collected by a lens with a numerical aperture of 0.7 or larger. Such lenses are widely available.
In sensing applications relying on the collected fluorescence, this improvement results in a shot-noise limit lowered by a factor of nearly 2. Further improvements in overall light collection are possible, for example by deploying a reflective coating on the front surface and an anti-reflective coating on the back surface. Including these coatings and neglecting losses, this optic would then allow collection of all emitted photons, which amounts to an additional increase of more than 40\%.
\section{Funding}
This work was supported by the European Commission's Horizon Europe Framework Program under the
Research and Innovation Action MUQUABIS GA no.10107054 and by the German Federal Ministry of Education and Research (BMBF) within the MILIQUANT project no. 13N15062 and DIAQNOS project no. 13N16455.
\section{Acknowledgements}
We thank Dr. Till Lenz, Joseph Shaji Rebeirro and Omkar Dhungel for the many and fruitful discussions concerning this project.
\section{Disclosures}
The authors declare no conflicts of interest.
\section{Data availability}
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
\section{Introduction and motivations}
This paper mainly deals with a class of biorthogonal polynomials $ \{p_n(x)\}_{\mathbb N},\{ q_n(y)\}_{\mathbb N}$ of degree $n$ satisfying the
biorthogonality relations
\begin{equation}
\int_{{\mathbb R}_+} \int_{{\mathbb R}_+} p_n(x) q_m(y) \frac {{\rm d} \alpha(x){\rm d}\beta(y)}{x+y} = \delta_{mn}, \label{CBOPs}
\end{equation}
where ${\rm d}\alpha, {\rm d}\beta$ are positive measures supported on ${\mathbb R}_+$ with finite bimoments. These polynomials will be introduced in Sec. \ref{SecPosKer} in a more general context of polynomials associated to general {\it totally positive kernels} (Def. \ref{posker}) with which they share some general properties in regard to their zeroes.
While these properties are interesting in their own right, we wish to put the work in a more general context and explain the two main motivations behind it. They fall within two different and rather distant areas of mathematics: peakon solutions to nonlinear PDEs and Random Matrix theory.
\paragraph{ Peakons for the Degasperis-Procesi equation.}
In the early 1990's, Camassa and Holm \cite{ch} introduced the Camassa--Holm (CH) equation to model (weakly) dispersive shallow wave propagation.
More generally, the CH equation belongs to the so-called b-family of PDEs
\begin{equation}
\label{eq:family}
u_t - u_{xxt} + (b+1) u u_x = b u_x u_{xx} + u u_{xxx},
\quad (x,t) \in {\mathbb R}^2, \quad b\in {\mathbb R}.
\end{equation}
Two cases within this family, $b=2$ and $b=3$, are now known to be integrable: the case $b=2$ is the original CH equation, whereas the case $b=3$ is the Degasperis--Procesi (DP) equation \cite{dp}, which is more directly related to the present paper.
In all cases the b-family admits weak (distributional) solutions of the form:
\begin{equation} \label{eq:peakonansatz}
u(x,t) = \sum_{i=1}^n m_i(t) \, e^{-\abs{x-x_i(t)}},
\end{equation}
if and only if the positions $x_i(t)$ and the heights $m_i(t)$
satisfy the system of nonlinear ODEs:
\begin{equation}
\label{eq:CH-peakonODE}
\dot{x}_k = \sum_{i=1}^n m_i e^{-\abs{x_k-x_i}},
\qquad
\dot{m}_k =(b-1) \sum_{i=1}^n m_k m_i \sgn(x_k-x_i) \, e^{-\abs{x_k-x_i}},
\end{equation}
for $k=1,\ldots,n$.
The non-smooth character of the solution manifests itself by the presence of sharp peaks
at $\{x_k\}$, hence the name {\sl{peakons}}.
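As a minimal illustration (ours, with arbitrary initial data and step size), the peakon ODEs \eqref{eq:CH-peakonODE} for the DP case $b=3$, $n=2$ can be integrated directly:
\begin{verbatim}
# Minimal sketch: RK4 integration of the n = 2 peakon ODEs for b = 3 (DP).
import numpy as np

b = 3.0

def rhs(s):
    x, m = s[:2], s[2:]
    E = np.exp(-np.abs(x[:, None] - x[None, :]))        # e^{-|x_k - x_i|}
    xdot = E @ m
    mdot = (b - 1) * m * ((np.sign(x[:, None] - x[None, :]) * E) @ m)
    return np.concatenate([xdot, mdot])

s = np.array([-1.0, 1.0, 0.8, 0.4])                     # x_1, x_2, m_1, m_2
h = 1e-3
for _ in range(5000):                                   # integrate to t = 5
    k1 = rhs(s); k2 = rhs(s + h/2*k1)
    k3 = rhs(s + h/2*k2); k4 = rhs(s + h*k3)
    s += h/6 * (k1 + 2*k2 + 2*k3 + k4)
print(s)   # the taller, faster peakon gains on the slower one
\end{verbatim}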
For the CH equation the peakon solutions were studied in \cite{bss-stieltjes, bss-moment}, while for the DP equation in \cite{ls-invprob, ls-cubicstring}; in both cases the solution is related to the {\bf isospectral evolution} of an associated linear boundary-value problem
\begin{eqnarray}
\begin{array}{c|c}
b=2\ (CH) & b=3\ (DP)\\[10pt]
\hline\\
-\phi ''(\xi, z) =z g (\xi) \phi (\xi, z) &
-\phi'''(\xi,z)= z g(\xi) \phi(\xi,z) \\[7pt]
\phi(-1)= \phi(1)=0&
\phi(-1) = \phi'(-1) = \phi(1)=0
\end{array}
\end{eqnarray}
The variables $x,\xi$ and the quantities $m, g, u$ are related by
\begin{equation}
\xi=\tanh\left(\frac {x}{b-1}\right)\ ,\qquad
g(\xi) = \left( \frac {1-\xi^2}{2}\right)^{-b} m(x)\ ,\qquad
m(x,t) = u(x,t)-u_{xx}(x,t)
\end{equation}
Because of the similarity to the equation of an inhomogeneous classical string (after a separation of variables) we refer to the two linear ODEs as the {\it quadratic} and {\it cubic} string, respectively.
The case of peakons corresponds to the choice
\begin{equation}
m(x,t) = 2\sum_{j=1}^n \delta(x-x_j(t))\, m_j(t)\label{diraccomb}
\end{equation}
The remarkable fact is that in both cases the associated spectral problems have a finite {\bf positive} spectrum; this is not so surprising in the case of the quadratic string which is a self-adjoint problem, but it is quite unexpected for the cubic string, since the problem is not self-adjoint and there is no {\it a priori} reason for the spectrum to even be real \cite{ls-cubicstring}.
As is natural within the Lax approach to integrable PDEs, the {\it spectral map} linearizes the isospectral evolution: if $\{z_j\}$ are the eigenvalues of the respective boundary value problems and one introduces the appropriate spectral residues
\begin{equation}
b_j := \mathop{\mathrm{res}}\limits_{z=z_j}\frac{W(z)}{z} {\rm d} z\ ,\ \ \ \ \ W(z):= \frac {\phi_\xi(1,z)}{\phi(1,z)}
\end{equation}
then one can show \cite{ls-invprob} that the evolution linearizes as follows (with the dot representing the time evolution)
\begin{eqnarray}
\dot z_k=0\ ,\qquad \frac{\dot b_k}{b_k} = \frac 1{z_k}
\end{eqnarray}
Since this is not the main focus of the paper, we are deliberately glossing over several interesting points; the interested reader is referred to \cite{ls-cubicstring} and our recent work \cite{Paper1mini} for further details. In short, the solution method for the DP equation can be illustrated by the diagram
$$\begin{CD}
\{ x_k(0), m_k(0) \}_{k=1}^n @> \phantom{\text{aa}} \text{ spectral map}\phantom{\text{aa}}>> \{z_k, b_k\}\\
@VV \text{DP flow}V@VV \text{evolution of the extended spectral data}V\\
\{ x_k(t), m_k(t) \}_{k=1}^n @<\text{inverse spectral map}< < \{\displaystyle z_k(t)= z_k\qquad \qquad\qquad\ \ \ \atop \displaystyle b_k(t) = b_k(0) \exp( t/z_k) \}
\end{CD}$$
In the inverse spectral map resides the r\^ole of the biorthogonal polynomials to be studied here, as we briefly sketch below.
The inverse problem for the ordinary string with finitely many point masses
is solved by the method of continued fractions of Stieltjes' type as was pointed out
by M.G. Krein (\cite{gantmacher-krein}). The inverse problem for the cubic string
with finitely many masses is solved with the help of the following
simultaneous Hermite-Pad\'e type approximation (\cite{ls-cubicstring})
\begin{definition}[Pad\'{e}-like approximation problem]
\label{def:pade}
Let $d\mu(x)$ denote the spectral measure associated with the cubic string boundary value problem and $\frac{W(z)}{z}=\int\frac{1}{z-x}d\mu(x)$, $\frac{Z}{z}=\int\!\!\!\!\int \frac{x}{z-x}\frac{1}{x+y}d\mu(x)d\mu(y) $ denote the Weyl functions introduced in \cite{ls-cubicstring}. Then, given an integer $1 \leq k \leq n$,
we seek three polynomials
$(Q,P,\widehat{P})$ of degree $k-1$
satisfying the following conditions:
\begin{enumerate}
\item[] {\bf [Approximation]}:
$\displaystyle
W=\frac{P}{Q}+O\left(\frac{1}{z^{k-1}}\right),
\qquad
Z=\frac{\widehat{P}}{Q}+O\left(\frac{1}{z^{k-1}}\right)
\qquad
(z\to\infty).
$
\item[]{\bf [Symmetry]}:
$
\displaystyle Z^* \, Q + W^* \, P + \widehat{P}
=O\left(\frac{1}{z^k}\right) \ (z\to\infty)
$ with $W^*(z)=-W(-z)$, $Z^*(z)=Z(-z)$.
\item []{\bf [Normalization]}:
$\displaystyle
P(0)=1,
\qquad
\widehat{P}(0)=0.
$
\end{enumerate}
\end{definition}
This approximation problem has a unique solution (\cite{ls-cubicstring}) which, in turn,
is used to solve the inverse problem for the cubic string. We point out that it is here in
this approximation problem that the Cauchy kernel $\frac{1}{x+y}$ makes its, somewhat unexpected, appearance through the spectral representation of the second Weyl function.
\paragraph{Random Matrix Theory}
The other source of our interest in biorthogonal polynomials comes from random matrix theory. It is well known \cite{MehtaBook} that the Hermitean matrix model is
intimately related to (in fact, solved by) orthogonal polynomials (OPs). Not
so much is known about the role of biorthogonal polynomials (BOPs).
However, certain
biorthogonal polynomials somewhat similar to the ones in the
present paper appear prominently in the analysis of ``the''
two--matrix model after reduction to the spectrum of eigenvalues \cite
{BEH_dualityCMP, BEHDuality, BEH_diffCMP, KenNick}; in that case the pairing is of the form
\begin{equation} \int\int p_n(x) q_m(y) {\rm e}^{-xy} {\rm d} \alpha (x)
{\rm d}\beta(y)=\delta_{mn}, \ \label {IZBOPS} \end{equation}
and the associated biorthogonal polynomials are sometimes called the Itzykson--Zuber BOPs, in short, the IZBOPs.
Several algebraic structural properties of these
polynomials and their recurrence relation (both multiplicative and
differential) have been thoroughly analyzed in the previously cited
papers for densities of the form ${\rm d}\alpha(x) = {\rm e}^{-V_1(x)}{\rm d} x,
\ {\rm d} \beta(y) = {\rm e}^{-V_2(y)}{\rm d} y$ for {\it polynomials potentials}
$V_1(x), \ V_2(y)$ and for potentials with rational derivative (and
hard--edges) in \cite{Bertosemiclass}.
We recall that while ordinary OPs satisfy a multiplicative
three--term recurrence relation, the BOPs defined by \eqref{IZBOPS}
solve a longer recurrence relation of length related to the degree of
the differential ${\rm d} V_j(x)$ over the Riemann sphere
\cite{Bertosemiclass}; a direct (although not immediate)
consequence of the finiteness of the recurrence relation is the fact
that these BOPs (and certain integral transforms of them) are
characterized by a Riemann--Hilbert problem for a matrix of size equal
to the length of the recurrence relation (minus one).
The BOPs introduced in this paper share all these features, although in some respects they are closer to the ordinary orthogonal polynomials than to the IZBOPs.
The relevant two--matrix model our polynomials are related to was introduced in \cite{Paper2}. We now give a brief summary of that work.
Consider the set of pairs $\mathcal H_+^{(2)}:=\{ (M_1,M_2)\} $ of Hermitean {\it positive-definite} matrices endowed with the ($U(N)$--invariant) Lebesgue measure denoted by ${\rm d} M_1{\rm d} M_2$.
Define then the probability measure on this space by the formula:
\begin{equation}
{\rm d} \mu(M_1,M_2) = \frac 1{\mathcal Z_N^{(2)}} \frac {\alpha'(M_1)\beta'(M_2) {\rm d} M_1 {\rm d} M_2}{\det(M_1+M_2)^N}
\end{equation}
where $\mathcal Z_N^{(2)}$ (the {\it partition function}) is a normalization constant, while $\alpha'(M_1), \beta'(M_2)$ stand for the product of the densities $\alpha', \beta'$ (the Radon--Nikodym derivatives of the measures ${\rm d} \alpha,{\rm d}\beta$ with respect to the Lebesgue measure) over the (positive) eigenvalues of $M_j$.
This probability space is similar to the two--matrix model discussed briefly above for which the coupling between matrices is ${\rm e}^{N \mathrm {Tr} M_1M_2}$ \cite{EynardMehta} instead of $\det(M_1+M_2)^{-N}$. The connection with our BOPs (\ref{CBOPs}) is analogous to the connection between ordinary orthogonal polynomials and the Hermitean Random matrix model \cite{MehtaBook}, whose probability space is the set of Hermitean matrices $\mathcal H_N$ equipped with the measure
$
{\rm d}\mu_1(M):= \frac 1{\mathcal Z_N^{(1)}}{\alpha'(M)} {\rm d} M.
$
In particular, we show in \cite{Paper2} how the statistics of the eigenvalues of the two matrices $M_j$ can be described in terms of the biorthogonal polynomials we are introducing in the present work. A prominent role in the description of that statistics is played by the generalized Christoffel--Darboux identities we develop in Section \ref{Section6}.
We now summarize the main results of the paper:
\begin{itemize}
\item [-] for an arbitrary totally positive kernel $K(x,y)$ and arbitrary positive measures ${\rm d}\alpha,{\rm d} \beta$ on ${\mathbb R}_+^2$ we prove that the matrix of bimoments $I_{ab}:= \int\!\!\!\!\int_{{\mathbb R}_+^2} x^a y^b K(x,y){\rm d} \alpha(x) {\rm d}\beta( y)$ is totally positive ({\bf Thm. \ref{thm:I}});
\item[-] this implies that there exist, unique, sequences of monic polynomials of degree $n$, $\widetilde p_n(x), \widetilde q_n(y)$ biorthogonal to each other as in (\ref{Korth}); we prove that they have {\bf positive and simple} zeroes ({\bf Thm. \ref{thm:alphazeros}});
\item [-] we then specialize to the kernel $K(x,y)= \frac 1{x+y}$; in this case the {\bf zeroes} of $\widetilde p_n(x)$ ($\widetilde q_n(y)$) {\bf are interlaced} with the zeroes of the neighboring polynomials
({\bf Thm. \ref{Sturm} });
\item [-] they solve a {\bf four--term} recurrence relation as specified after \eqref{CBOPs} ({\bf Cor. \ref{fourterm}});
\item [-] they satisfy {\bf Christoffel--Darboux identities} ({\bf Prop. \ref{propCDI}, Cor. \ref{cor:CDIuni}, Thms. \ref{thm:ECD1}, \ref{thm:ECD2}})
\item [-] they solve a {\bf Hermite-Pad\'{e} } approximation problem to a novel type of
Nikishin systems ({\bf Sec. \ref{sec:AproxPerfD}, Thms. \ref{thm:Padeq}, \ref{thm:Padep}});
\item [-] they can be characterized by a $3\times 3$ {\bf Riemann--Hilbert problems}, ({\bf Props. \ref{RHP1}, \ref{RHP2}}) ;
\end{itemize}
In the follow-up paper we will explain the relation of the asymptotics of the BOPs introduced in this paper with
a rigorous asymptotic analysis for continuous (varying) measures ${\rm d}\alpha, {\rm d} \beta$ using the nonlinear steepest descent method \cite{Paper3}.
%
%
\section{Biorthogonal polynomials associated to a totally positive kernel} %
\label{SecPosKer}
As one can see from the last section
the kernel
$K(x,y)=\frac{1}{x+y}, x,y >0$, which we will refer to as the Cauchy kernel,
plays a significant, albeit mysterious, role. We now turn to explaining the
role of this kernel.
We recall, following \cite{Karlin}, the definition of the totally positive
kernel.
\begin{definition}
\label{posker}
A real function $K(x,y)$ of two variables ranging over linearly ordered
sets $\mathcal X$ and $\mathcal Y$, respectively, is said to be
totally positive (TP) if for all
\begin{equation}
x_1<x_2<\cdots < x_m, \quad y_1<y_2<\cdots < y_m \quad x_i\in \mathcal X, y_j\in \mathcal Y,
m \in {\mathbb N}
\end{equation}
we have
\begin{equation}
\det\left[K(x_i,y_j)\right]_{1\leq i,j\leq m} >0
\end{equation}
\end{definition}
We will also use a discrete version of the same concept.
\begin{definition}
A matrix $A:=[a_{ij}],\, i,j=0,1,\cdots n$ is said to be totally positive (TP) if all its minors are
strictly positive. A matrix $A:=[a_{ij}],\, i,j=0,1,\cdots n$ is said to be totally nonnegative (TN) if all its minors are nonnegative. A TN matrix $A$ is said to be oscillatory if some positive integer power
of $A$ is TP.
\end{definition}
Since we will be working with matrices of infinite size we introduce the concept of a
principal truncation.
\begin{definition} A finite $n+1$ by $n+1$ matrix $B:=[b_{i,j}], i,j=0,1, \cdots n$ is said to be the principal truncation of an infinite matrix
$A:=[a_{ij}],\, i,j=0,1,\cdots \, $ if $b_{i,j}=a_{i,j}, i,j=0,1, \cdots n$.
In such a case $B$ will be denoted $A[n]$.
\end{definition}
Finally,
\begin{definition} An infinite matrix
$A:=[a_{ij}],\, i,j=0,1,\cdots $ is said to be TP (TN) if $A[n]$ is TP (TN) for every $n=0,1,\cdots$.
\end{definition}
\begin{definition} Basic Setup \label{def:K}
Let $K(x,y)$ be a {\bf totally positive kernel} on ${\mathbb R}_+\times {\mathbb R}_+$
and let ${\rm d}\alpha,{\rm d} \beta$ be two Stieltjes measures on ${\mathbb R}_+$. We make two simplifying assumptions to avoid degenerate cases:
\begin{enumerate}
\item $0$ is not an atom of either of the measures (i.e. $\{0\}$ has zero measure).
\item $\alpha$ and $\beta$ have infinitely many points of increase.
\end{enumerate}
We furthermore assume:
\begin{enumerate}
\item [3.] the polynomials are dense in the corresponding Hilbert spaces $H_{\alpha}:=L^2({\mathbb R}_+,{\rm d} \alpha)$,
$H_{\beta}:=L^2({\mathbb R}_+,{\rm d} \beta)$,
\item [4.]the map
$
\displaystyle K: H_{\beta}\rightarrow H_{\alpha}$, $\displaystyle Kq(x):=\int K(x,y)q(y){\rm d} \beta(y)
$
is bounded, injective and has a dense range in $H_{\alpha}$.
\end{enumerate}
\end{definition}
Under these assumptions $K$ provides a non-degenerate pairing between $H_{\beta}$ and $H_{\alpha}$:
\begin{equation} \label{def:pairing}
\langle a | b\rangle= \int\!\!\!\!\int a(x) b(y) K(x,y){{\rm d}\alpha{\rm d}\beta}, \quad a\in H_{\alpha}, b\in H_{\beta}.
\end{equation}
\begin{remark} Assumptions 3 and 4 could be weakened, especially the density
assumption, but we believe they are the most natural to work with in the Hilbert-space set-up of the theory. \end{remark}
Now, let us
consider the matrix $ {\cal{I}}$ of generalized bimoments
\begin{equation} \label{eq:bimoments}
[{\cal{I}}]_{ij} = I_{ij}:= \int\!\!\!\!\int x^i y^j K(x,y) {\rm d} \alpha(x) {\rm d}\beta(y)\ .
\end{equation}
\begin{theorem} \label{thm:I}
The semiinfinite matrix ${\cal{I}}$ is TP.
\end{theorem}
\begin{proof}
According to a theorem of Fekete (see Chapter 2, Theorem 3.3 in \cite{Karlin}), we only need to consider minors of consecutive rows/columns.
Writing out the determinant,
\begin{equation*}
\Delta_n^{ab}:= \det [I_{a+i, b+j}]_{0\leq i,j\leq n-1}
\end{equation*}
we find
\begin{align*}
&& \Delta_n^{ab} = \sum_{\sigma\in S_n}\epsilon(\sigma) \int\!\!\!\!\int \prod_{j=1}^n x_j^a y_j^b \prod_{j=1}^n x_j^{\sigma_j-1} y_j^{j-1} K(x_j,y_j) {\rm d}^n\alpha(X) {\rm d}^n\beta(Y) =\\
&&
\int\!\!\!\!\int C(X)^aC(Y)^b \Delta(X) \prod_{j=1}^{n} y_j^{j-1} \prod_{j=1}^{n} K(x_j,y_j) {\rm d}^n\alpha {\rm d}^n \beta.
\end{align*}
Here $C(X):=\prod_{j=1}^n x_j$ and $C(Y):=\prod_{j=1}^n y_j$; since our intervals are subsets of ${\mathbb R}_+$, we can absorb the powers of $C(X), C(Y)$ into the (still positive) measures to simplify the notation.
Moreover, the function $S(X,Y):= \prod_{j=1}^{n} K(x_j, y_j)$ enjoys the following simple property
\begin{equation*}
S(X, Y_\sigma) = S (X_{\sigma^{-1}} ,Y)\,
\end{equation*}
for any $\sigma \in S_n$. Finally, the product measures ${\rm d}^n\alpha = {\rm d} ^n\alpha(X), {\rm d}^n\beta = {\rm d}^n \beta(Y)$ are clearly permutation invariant.
Thus, without any loss of generality, we only need to show that
\begin{equation*}
D_n:= \int\!\!\!\!\int \Delta(X) \prod_{j=1}^{n} y_j^{j-1} S(X,Y) {\rm d}^n\alpha {\rm d}^n \beta\, >0,
\end{equation*}
which is tantamount to showing positivity for $a=b=0$.
First, we symmetrize $D_n$ with respect to the variables $X$; this produces
\begin{align*}
D_n = \frac 1{n!} \sum_{\sigma \in S_n} \int\!\!\!\!\int \Delta(X_\sigma) \prod_{j=1}^{n} y_j^{j-1} S(X_\sigma,Y) {\rm d}^n\alpha {\rm d}^n \beta =
\frac 1{n!} \sum_{\sigma \in S_n} \int\!\!\!\!\int \Delta(X)\epsilon(\sigma) \prod_{j=1}^{n} y_j^{j-1} S(X,Y_{\sigma^{-1}} ) {\rm d}^n\alpha {\rm d}^n \beta = \cr
\frac 1{n!} \sum_{\sigma \in S_n} \int\!\!\!\!\int \Delta(X)\epsilon(\sigma) \prod_{j=1}^{n} y_{\sigma_j}^{j-1} S(X,Y) {\rm d}^n\alpha {\rm d}^n \beta
= \frac 1{n!} \int\!\!\!\!\int \Delta(X) \Delta(Y) S(X,Y) {\rm d}^n\alpha {\rm d}^n \beta.
\end{align*}
Subsequent symmetrization over the $Y$ variables does not change the value of the integral and we obtain (after restoring the definition of $S(X,Y)$)
\begin{align*}
D_n = \frac 1{(n!)^2} \sum_{\sigma\in S_n} \epsilon(\sigma) \int\!\!\!\!\int \Delta(X) \Delta(Y) \prod_{j=1}^{n} K(x_j,y_{\sigma_j}) {\rm d}^n\alpha {\rm d}^n \beta
=\\
\frac 1{(n!)^2} \int\!\!\!\!\int \Delta(X) \Delta(Y) \det[ K(x_i,y_j)]_{i,j\leq n} {\rm d}^n\alpha {\rm d}^n \beta.
\end{align*}
Finally, since $\Delta(X) \Delta(Y) \det[ K(x_i,y_j)]_{i,j\leq n} {\rm d}^n\alpha {\rm d}^n \beta$ is permutation invariant, it suffices to integrate over the region
$\{0<x_1<x_2<\cdots <x_n\} \times \{0<y_1<y_2<\cdots <y_n\}$, and, as a result,
\begin{equation} \label{prinmin}
D_n=\int\!\!\!\!\int_{\substack{0<x_1<x_2<\cdots <x_n \\
0<y_1<y_2<\cdots <y_n}} \Delta(X) \Delta(Y) \det[ K(x_i,y_j)]_{i,j\leq n} {\rm d}^n\alpha {\rm d}^n \beta.
\end{equation}
Due to the total positivity of the kernel $K(x,y)$ the integrand is a positive function of all variables and so the integral must be strictly positive.
\end{proof}
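The mechanism of the proof is easy to explore numerically. The following sketch (ours, purely illustrative and not part of the argument) realizes ${\rm d}\alpha, {\rm d}\beta$ as discrete measures with positive weights on ${\mathbb R}_+$ (for which the minors above stay positive up to the number of mass points) and checks Fekete's criterion on the resulting matrix of bimoments; all sizes and names are arbitrary choices.
\begin{verbatim}
# Numerical sketch (illustrative): discrete positive measures on R_+
# and the Cauchy kernel K(x,y) = 1/(x+y).
import numpy as np

rng = np.random.default_rng(0)
N = 8                                   # mass points per measure
x = np.sort(rng.uniform(0.1, 3.0, N)); a = rng.uniform(0.5, 1.5, N)
y = np.sort(rng.uniform(0.1, 3.0, N)); b = rng.uniform(0.5, 1.5, N)
K = 1.0 / (x[:, None] + y[None, :])

def bimoment(m, n):
    # I_{mn} = sum_{i,j} x_i^m y_j^n K(x_i, y_j) a_i b_j
    return np.einsum('i,j,ij->', a * x**m, b * y**n, K)

size = 4                                # small size for float safety
I = np.array([[bimoment(m, n) for n in range(size)] for m in range(size)])

# Fekete: minors of consecutive rows/columns suffice for total positivity.
for k in range(1, size + 1):
    for r in range(size - k + 1):
        for c in range(size - k + 1):
            assert np.linalg.det(I[r:r + k, c:c + k]) > 0
print("all consecutive minors positive")
\end{verbatim}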
To simplify future computations we define $[x] := (1,x,x^2, \dots)^T$, so that the matrix of generalized bimoments \eqref{eq:bimoments} is simply given by
\begin{equation}
{\cal{I}} = \langle [x]|[y]^T\rangle.\label{219}
\end{equation}
Now, let $\Lambda$ denote the semi-infinite upper shift matrix. Then we observe that multiplying the measure ${\rm d}\alpha(x)$ by $x^i $ or,
multiplying $ {\rm d} \beta(y)$ by $y^j $, is tantamount to multiplying
${\cal{I}}$ on the left by $\Lambda^i$, or
on the right by $(\Lambda^T )^j$ respectively, which gives us a whole family of bimoment matrices
associated with the same $K(x,y)$ but different measures. Thus we have
\begin{coroll} \label{cor:ILambda}
For any nonnegative integers $i,j$ the matrix of generalized bimoments
$ \Lambda^i {\cal{I}} (\Lambda^T)^j$ is TP.
\end{coroll}
We conclude this section with a few comments about the scope of Theorem \ref{thm:I}. \begin{remark}
Provided that the negative moments are well defined, the theorem applies to the doubly infinite matrix $I_{i,j}$, $i,j\in {\mathbb Z}$.
\end{remark}
\begin{remark}
If the intervals are ${\mathbb R}$ and $K(x,y) = {\rm e}^{xy}$ then the proof above fails, because we cannot re-define the measures by multiplying by powers of the variables: they would then become signed measures. So in general the matrix of bimoments is {\bf not} totally positive. Nevertheless the proof above shows (with $a=b=0$ or $a,b\in 2{\mathbb Z}$) that the matrix of bimoments is positive definite and --in particular-- that the biorthogonal polynomials always exist, which is known and proved in \cite{KenNick}.
\end{remark}
%
\subsection {Biorthogonal polynomials}
%
Due to the total positivity of the matrix of bimoments in our setting, there exist two uniquely defined sequences of monic polynomials
\begin{equation*}
\widetilde p_n(x) = x^n + \dots\ , \ \ \widetilde q_n(y)= y^n + \dots
\end{equation*}
such that
\begin{equation}
\int\!\!\!\!\int \widetilde p_n(x) \widetilde q_m(y) K(x,y) {\rm d}\alpha(x) {\rm d}\beta(y) = h_n \delta_{mn}\label{Korth}\ .
\end{equation}
Standard considerations (Cramer's Rule) show that they are provided by the following formul\ae
\begin{eqnarray}
\widetilde p_n(x) = \frac 1{D_n} \det \left[ \begin{array}{ccc|c}
I_{00}&\dots &I_{0n-1}& 1\cr
\vdots &&\vdots&\vdots\cr
I_{n0}&\dots &I_{nn-1}&x^n
\end{array}
\right]
\qquad \widetilde q_n(y) = \frac 1{D_n} \det \left[ \begin{array}{ccc}
I_{00}&\dots &I_{0n}\cr
\vdots &&\vdots\cr
I_{n-10}&\dots &I_{n-1n}\cr
\hline
1&\dots&y^n
\end{array}
\right]\\
h_n = \frac {D_{n+1}}{D_n} > 0,
\end{eqnarray}
where $D_j>0$ by equation \eqref{prinmin}.
For convenience we redefine the sequences so that they are also {\bf normalized} (instead of monic), dividing each by the square root of $h_n$:
\begin{eqnarray} \label{def:pq}
&p_n(x) =\frac{1} {\sqrt{D_nD_{n+1}}} \det \left[ \begin{array}{ccc|c}
I_{00}&\dots &I_{0n-1}& 1 \cr
\vdots &&\vdots&\vdots\cr
I_{n0}&\dots &I_{nn-1}&x^n
\end{array}
\right],\\
&q_n(y) = \frac{1}{\sqrt{{D_nD_{n+1}}}} \det \left[ \begin{array}{ccc}
I_{00}&\dots &I_{0n}\cr
\vdots &&\vdots\cr
I_{n-10}&\dots &I_{n-1n}\cr
\hline
1&\dots&y^n
\end{array}
\right] \label{BOPs}
\end{eqnarray}
Thus $\langle p_n| q_m\rangle= \delta_{nm}$.
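As a sanity check of the formul\ae\ above, the following sketch (ours; the same kind of discrete test measures as in the earlier sketch) assembles the monic polynomials by solving the linear systems encoded in the determinants, extracts $h_n$, and verifies the normalized biorthogonality up to roundoff.
\begin{verbatim}
# Sketch (ours): construct the BOPs from bimoments, test biorthogonality.
import numpy as np

rng = np.random.default_rng(1)
N = 8
x = np.sort(rng.uniform(0.1, 3.0, N)); a = rng.uniform(0.5, 1.5, N)
y = np.sort(rng.uniform(0.1, 3.0, N)); b = rng.uniform(0.5, 1.5, N)
K = 1.0 / (x[:, None] + y[None, :])

size = 4
I = np.array([[np.einsum('i,j,ij->', a * x**m, b * y**n, K)
               for n in range(size)] for m in range(size)])

def monic(B, n):
    # coefficients (c_0..c_n), c_n = 1, of the monic degree-n polynomial
    # orthogonal to 1, t, ..., t^{n-1} in the pairing with bimoments B
    c = np.zeros(n + 1); c[n] = 1.0
    if n > 0:
        c[:n] = np.linalg.solve(B[:n, :n].T, -B[n, :n])
    return c

P = [monic(I, n) for n in range(size)]      # tilde-p_n (rows of I)
Q = [monic(I.T, n) for n in range(size)]    # tilde-q_n (columns of I)
G = np.array([[P[n] @ I[:n+1, :m+1] @ Q[m] for m in range(size)]
              for n in range(size)])        # <tilde-p_n | tilde-q_m>
h = np.diag(G)                              # h_n = D_{n+1}/D_n
assert np.all(h > 0)
Gn = G / np.sqrt(np.outer(h, h))            # normalized pairing
assert np.allclose(Gn, np.eye(size), atol=1e-6)
print("biorthogonality verified; h =", h)
\end{verbatim}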
We note also that the BOPs can be obtained by triangular transformations of $[x], [y]$
\begin{equation}
\mathbf p(x) =S_p [x]\ ,\ \ \ \mathbf q(y) = S_q[y]\ ,\qquad [x]=(1,x,x^2,\dots)^T\label{220}
\end{equation}
where $S_{p,q}$ are (formally) invertible lower triangular matrices
such that $S_p^{-1}(S_q^{-1})^T={\cal{I}}$, where, we recall, ${\cal{I}}$ is the
generalized bimoment matrix. Moreover, our BOPs satisfy, by construction,
the recursion relations:
$$
x p_i(x) = X_{i,i+1}p_{i+1}(x)+X_{i,i}p_i(x)+\cdots+X_{i,0}p_0(x), \qquad
y q_i(y) = Y_{i,i+1}q_{i+1}(y)+Y_{i,i}q_i(y)+\cdots+Y_{i,0}q_0(y),
$$
which will be abbreviated as
\begin{equation} \label{def:XY}
x \mathbf p(x) = {\bf X}\mathbf p(x)\ ,\ \ y \mathbf q(y) ^T= \mathbf q(y)^T{\bf Y}^T,
\end{equation}
where ${\bf X}$ and ${\bf Y}$ are Hessenberg matrices with positive entries on the supradiagonal,
and $\mathbf p(x), \mathbf q(y)$ are the infinite column vectors $\mathbf p(x):=(p_0(x),p_1(x),p_2(x),\dots)^T$,
$\mathbf q(y):=(q_0(y),q_1(y),q_2(y),\dots)^T$, respectively.
The biorthogonality can now be written as
$
\langle \mathbf p | \mathbf q^T\rangle= Id
$
where $Id$ denotes the semi-infinite identity matrix.
Moreover
\begin{equation} \label{eq:XY}
\langle x\mathbf p | \mathbf q^T\rangle = {\bf X}\ ,\qquad \langle \mathbf p|y\mathbf q^T\rangle = {\bf Y}^T
\end{equation}
\begin{remark} The significance of the last two formulas lies in the fact that the operator of multiplication
is no longer symmetric with respect to the pairing $\langle \bullet|\bullet\rangle$ and as a result the matrices
${\bf X}$ and ${\bf Y}^T$ are distinct. \end{remark}
%
\subsection{Simplicity of the zeroes}
%
In this section we will use the concept of a Chebyshev system of order $n$ and a closely related concept of a Markov sequence. We refer to \cite{ns} and \cite{gantmacher-krein} for more information.
The following theorem is a convenient restatement of Lemma 2 in \cite{gantmacher-krein}, p.137. For easy display we replace determinants with wedge products.
\begin{theorem} \label{thm:CS}
Given a system of continuous functions $\{u_i(x)|i=0\cdots n\}$, let us define the vector-valued function
\begin{equation}
\mathbf u(x) =\begin{bmatrix} u_0(x) & u_1(x) & \hdots & u_n(x) \end{bmatrix}^T, \qquad x\in U.
\end{equation}
Then
$\{u_i(x)|i=0\cdots n\}$ is a Chebyshev system of order $n$ on $U$
iff
the top exterior power
\begin{equation}
\mathbf u(x_0)\wedge \mathbf u(x_1)\wedge \cdots \mathbf u(x_n) \ne 0
\end{equation}
for all $x_0<x_1<\cdots <x_n$ in $U$.
Furthermore, for $\{u_i(x)|i=0\cdots \}$, if we denote the truncation of $\mathbf u(x)$ to the
first $n+1$ components by $\mathbf u_n(x)$, then $\{u_i(x)|i=0\cdots \}$ is a Markov system
iff
the top exterior power
\begin{equation}
\mathbf u_n(x_0)\wedge \mathbf u_n(x_1)\wedge \cdots \mathbf u_n(x_n) \ne 0
\end{equation}
for all $x_0<x_1<\cdots <x_n$ in $U$ and all $n\in {\mathbb N}$.
\end{theorem}
The following well-known theorem is now immediate.
\begin{theorem} \label{thm:signchange}
Suppose $\{u_i(x)|i=0\cdots n\}$ is a Chebyshev system of order $n$ on $U$,
and suppose we are given $n$ distinct points $x_1, \cdots x_n$ in $U$.
Then, up to a multiplicative factor, the only
generalized polynomial $ P(x)=\sum_{i=0}^n a_i u_i(x)$, which vanishes precisely at $x_1, \cdots x_n$ in $U$
is given by
\begin{equation}
P(x)=\mathbf u(x)\wedge \mathbf u(x_1)\wedge \cdots \mathbf u(x_n)
\end{equation}
\end{theorem}
\begin{theorem} \label{thm:usignchange}
Set $u_i(x):=\int K(x,y)y^i \,{\rm d}\beta(y)$, $i=0,\cdots, n$.
Then $\{u_i(x)|i=0\cdots n\}$ is a Chebyshev system of order $n$ on ${\mathbb R} _+$.
Moreover, $P(x)$ as defined in Theorem \ref{thm:signchange} changes sign each
time $x$ passes through any of the zeros $x_j$.
\end{theorem}
\begin{proof}
It is instructive to look at the computation. Let $x_0<x_1<\cdots x_n$, then using multi-linearity of the exterior product,
\begin{align*}
&P(x_0)=\mathbf u(x_0)\wedge \mathbf u(x_1)\wedge \cdots \mathbf u(x_n) =\\
&\int
K(x_0,y_0)K(x_1,y_1)\cdots K(x_n,y_n)[y_0]_n\wedge [y_1]_n\wedge \cdots
\wedge [y_n]_nd\beta(y_0)\cdots d\beta(y_n)= \\
&\frac{1}{(n+1)!}\int
\det[K(x_i, y_j)]_{i,j=0}^n \Delta(Y)d\beta(y_0)\cdots d\beta(y_n)=
\int_
{y_0<y_1<\cdots y_n}\det[K(x_i, y_j)]_{i,j=0}^n
\Delta(Y) d\beta(y_0)\cdots d\beta(y_n),
\end{align*}
where
$
[y]_n=\begin{bmatrix}y^0 & y^1 & \hdots & y^n \end{bmatrix}^T.
$
Thus $P(x_0)>0$. The rest of the proof is an argument about the sign of the
integrand. To see how the sign changes, we observe that
the sign of $P$ depends only on the ordering of
$x, x_1, x_2, \cdots x_n$, in view of the total positivity of
the kernel. In other words, the sign of $P$ is $\mathrm{sgn}(\pi)$, where
$\pi$ is the permutation rearranging $x, x_1, x_2, \cdots x_n$ into an increasing sequence.
\end{proof}
\begin{coroll} \label{cor:f}
Let $f_i(x):=\int K(x,y)q_i(y)\, {\rm d}\beta(y)$, $i=0,1,\cdots$.
Then $\{f_i(x)|i=0\cdots n\}$ is a Markov sequence on ${\mathbb R} _+$.
\end{coroll}
\begin{proof}
Indeed, Theorem \ref{thm:CS} implies that the group $GL(n+1)$ acts on
the set of Chebyshev systems of order $n$. It suffices now to observe
that $q_j$ are obtained from $\{1,y, \cdots, y^n\}$ by an invertible
transformation.
\end{proof}
\begin{remark} Observe that $\{f_i(x)|i=0\cdots n\}$ is a Markov sequence regardless of biorthogonality. \end{remark}
Biorthogonality enters however in the main theorem
\begin{theorem} \label{thm:alphazeros}
The zeroes of $p_n, q_n$ are all simple and positive. They fall within the convex hull of the support of the measure ${\rm d}\alpha$ (for $p_n$'s) and ${\rm d}\beta$ (for the $q_n$'s).
\end{theorem}
\begin{proof} We give first a proof for $p_n$.
The theorem is trivial for $n=0$. For $1\leq n$, let us
suppose $p_n$ has $r<n$ zeros of odd order in the convex hull of $supp({\rm d} \alpha)$. In full analogy with
the classical case, $1\leq r$, since
\begin{equation*}
\int p_n(x) f_0(x)d\alpha(x)=\int\!\!\!\!\int p_n(x)K(x,y)d\alpha(x) d\beta(y)
=0
\end{equation*}
by biorthogonality, forcing, in view of positivity of $K(x,y)$, $p_n(x)$
to change sign in the convex hull of
$supp({\rm d}\alpha)$. In the general case,
denote the zeros by $x_1<x_2<\cdots x_r$. Using a Chebyshev
system $f_i(x), i=0, \cdots r$ on ${\mathbb R}_+$ we can construct a unique, up to a
multiplicative constant, generalized polynomial which vanishes exactly
at those points, namely
\begin{equation}
R(x)=F(x)\wedge F(x_1)\wedge F(x_2)\wedge \cdots \wedge F(x_r)
\end{equation}
where
\begin{equation*}
F(x) =\begin{bmatrix} f_0(x) & f_1(x)& \cdots & f_r(x) \end{bmatrix}^T, \qquad x\in {\mathbb R}.
\end{equation*}
It follows then directly from biorthogonality
that
\begin{equation}
\int p_n(x) F(x)\wedge F(x_1)\wedge F(x_2)\wedge \cdots \wedge F(x_r)d\alpha(x)
=0
\end{equation}
On the other hand, $R(x)$ is
proportional to $P(x)$ in Theorem \ref{thm:signchange} which, by Theorem \ref{thm:usignchange}, changes sign at each of its zeroes, so
the product $p_n(x)R(x)$ is nonzero and of fixed sign over ${\mathbb R} _+\setminus \{x_1,
x_2,\cdots, x_r\}$. Consequently, the integral is nonzero, since $\alpha$ is assumed to have
infinitely many points of increase. This contradiction shows that $r\geq n$; hence $r=n$, since $p_n$ is a polynomial of degree $n$.
The case of $q_n$ follows by
observing that the adjoint $K^*$ is also a TP kernel and hence it suffices
to switch $\alpha$ with $\beta$ throughout the argument given above. \end{proof}
\begin{lemma}
In the notation of Corollary \ref{cor:f}
$f_n(x)$
has $n$ zeros and $n$ sign changes in the convex hull of $supp({\rm d} \alpha)$.
\end{lemma}
\begin{proof} Clearly, since $\{u_i(x)|i=0\cdots n\}$ is a Chebyshev system of order $n$ on ${\mathbb R} _+$, the number of zeros of $f_n$ cannot be greater
than $n$. Again, from
\begin{equation*}
\int f_n(x) p_0(x) {\rm d} \alpha(x)=0,
\end{equation*}
we conclude that $f_n$ changes sign at least once within the convex hull of $supp({\rm d}\alpha)$. Let then $x_1<x_2<\cdots x_r$, $1\leq r\leq n$ be all zeros of $f_n$ within the convex hull of $supp({\rm d}\alpha)$ at which $f_n$ changes its sign. Thus, on one hand,
\begin{equation*}
\int \epsilon \ \prod _{i=1}^r (x-x_i) f_n(x){\rm d}\alpha(x) >0, \qquad \epsilon =\pm,
\end{equation*}
while, on the other hand, using biorthogonality we get
\begin{equation*}
\int \epsilon \ \prod _{i=1}^r (x-x_i) f_n(x){\rm d}\alpha(x) =0, \qquad \epsilon =\pm,
\end{equation*}
which shows that $r=n$. \end{proof}
In view of Theorem \ref{thm:signchange}
the statement about the zeros of $f_n$ has the following
corollary
\begin{coroll} {\bf Heine-like representation for $f_n$}
\begin{equation}
f_n(x)=C\, \mathbf u(x)\wedge \mathbf u(x_1)\wedge \mathbf u(x_2)\wedge \cdots \wedge \mathbf u(x_n)
\end{equation}
where $x_j$ are the zeros of $f_n$ and $C$ is a constant.
\end{coroll}
\par \vskip 5pt
\section{Cauchy BOPs}
From now on we restrict our attention to the particular case of the totally positive kernel,
namely, the Cauchy kernel
\begin{equation} \label{eq:Ckernel}
K(x,y) = \frac 1{x+y}
\end{equation}
whose associated biorthogonal polynomials will be called Cauchy BOPs.
Thus, from this point onward, we will be studying the general properties of BOPs for the pairing
\begin{equation}
\int\!\!\!\!\int p_n(x) q_m(y) \frac {{\rm d}\alpha(x) {\rm d}\beta(y)} {x+y}= \langle p_n|q_m\rangle\ .
\end{equation}
Until further notice, we do not assume anything about the relationship between the two measures ${\rm d} \alpha, {\rm d}\beta$, other than what is in the basic setup of Definition
\ref{def:K}.
\subsection{Rank One Shift Condition}
It follows immediately from equation \eqref{eq:Ckernel} that
\begin{equation}
I_{i+1,j} + I_{i,j+1} = \langle x^{i+1}|y^j\rangle +\langle x^i|y^{j+1}\rangle= \int x^i {\rm d}\alpha \int y^j {\rm d}\beta\ ,
\end{equation}
which, with the help of the shift matrix $\Lambda$ and the matrix of
bimoments ${\cal{I}}$, can be written as:
\begin{align*}
& \Lambda {\cal{I}} + {\cal{I}} \Lambda^T=\boldsymbol \alpha \boldsymbol \beta^T,\\
&\boldsymbol \alpha = (\alpha_0, \alpha_1,\dots)^T\ ,\ \ \alpha_j = \int x^j {\rm d} \alpha(x)>0, \\
&\boldsymbol\beta = (\beta_0, \beta_1, \dots)^T\ ,\ \ \beta_j = \int y^j {\rm d}\beta(y)>0.
\end{align*}
%
%
Moreover, by linearity and equation \eqref{eq:XY}, we have
\begin{equation} \label{eq:XYT}
{\bf X} + {\bf Y}^T = {\boldsymbol \pi} {\boldsymbol \eta}^T\ ,\quad
{\boldsymbol \pi} := \int \mathbf p {\rm d}\alpha\ ,\ \ {\boldsymbol \eta}:= \int \mathbf q {\rm d}\beta\ ,\quad
{\mathbf p}(x):= (p_0(x),p_1(x),\dots)^T\ ,\ {\mathbf q}(y):= (q_0(y),q_1(y),\dots)^T
\end{equation}
which connects the multiplication operators in $H_{\alpha}$ and $H_{\beta}$.
Before we elaborate on the nature of this connection we
need to clarify one aspect of equation \eqref{eq:XYT}.
\begin{remark} One needs to exercise a great deal of caution using the matrix relation
given by equation \eqref{eq:XYT}. Its only rigorous meaning is in
action on vectors with finitely many nonzero entries or, equivalently,
this equation holds for all principal truncations.
\end{remark}
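Entry by entry, however, relation \eqref{eq:XYT} is a scalar identity (no infinite sums are involved, since $(x+y)K(x,y)\equiv 1$), so it can be tested directly; the sketch below (ours, with the same illustrative discrete measures as before) also confirms the Hessenberg shape of ${\bf X}$.
\begin{verbatim}
# Sketch (ours): entrywise check of X + Y^T = pi eta^T and of the
# lower Hessenberg shape of X, in the discrete-measure test setup.
import numpy as np

rng = np.random.default_rng(2)
N = 9
x = np.sort(rng.uniform(0.1, 3.0, N)); a = rng.uniform(0.5, 1.5, N)
y = np.sort(rng.uniform(0.1, 3.0, N)); b = rng.uniform(0.5, 1.5, N)
K = 1.0 / (x[:, None] + y[None, :])

size = 4
I = np.array([[np.einsum('i,j,ij->', a * x**m, b * y**n, K)
               for n in range(size)] for m in range(size)])

def monic(B, n):
    c = np.zeros(n + 1); c[n] = 1.0
    if n > 0:
        c[:n] = np.linalg.solve(B[:n, :n].T, -B[n, :n])
    return c

P = [monic(I, n) for n in range(size)]
Q = [monic(I.T, n) for n in range(size)]
h = np.array([P[n] @ I[:n+1, :n+1] @ Q[n] for n in range(size)])
P = [c / np.sqrt(h[n]) for n, c in enumerate(P)]   # normalized p_n
Q = [c / np.sqrt(h[n]) for n, c in enumerate(Q)]   # normalized q_n
Pv = np.array([np.polyval(c[::-1], x) for c in P]) # p_n at mass points
Qv = np.array([np.polyval(c[::-1], y) for c in Q]) # q_n at mass points

X  = np.einsum('ni,i,i,ij,j,mj->nm', Pv, x, a, K, b, Qv)  # <x p_n|q_m>
YT = np.einsum('ni,i,ij,j,j,mj->nm', Pv, a, K, b, y, Qv)  # <p_n|y q_m>
pi, eta = Pv @ a, Qv @ b                                  # averages

assert np.allclose(X + YT, np.outer(pi, eta))    # exact up to roundoff
assert np.allclose(np.triu(X, 2), 0, atol=1e-6)  # X is lower Hessenberg
print("rank-one shift verified")
\end{verbatim}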
\begin{proposition}
\label{etapi}
The vectors ${\boldsymbol \pi}, {\boldsymbol \eta}$ are strictly positive (all their entries are strictly positive).
\end{proposition}
\begin{proof}
We prove the assertion only for ${\boldsymbol \pi}$, the one for ${\boldsymbol \eta}$ being obtained by interchanging the roles of ${\rm d}\alpha$ and ${\rm d}\beta$.
From the expressions (\ref{BOPs}) for $p_n(x)$ we immediately have
\begin{eqnarray}
\pi_n = \sqrt{\frac 1{ D_nD_{n+1}}} \det
\left[ \begin{array}{ccc|c}
I_{00}&\dots &I_{0n-1}& \alpha_0 \cr
\vdots &&\vdots&\vdots\cr
I_{n0}&\dots &I_{nn-1}&\alpha _n
\end{array}
\right].
\end{eqnarray}
Since we know that $D_n>0$ for any $ n\geq 0$ we need to prove the positivity of the other determinant.
Determinants of this type were studied in Lemma 4.10 in \cite{ls-cubicstring}.
We nevertheless give a complete proof of positivity. First, we
observe that
\begin{eqnarray}
\pi_n \sqrt{D_{n+1}D_n} &&= \sum_{\sigma \in S_{n+1}}\epsilon(\sigma) \int \prod_{j=1}^{n+1} x_j^{\sigma_j -1} \prod_{j=1}^{n} y_j^{j-1}
\frac{{\rm d}^{n+1}\alpha{\rm d}^n\beta}{\prod_{j=1}^n(x_j + y_j)}= \cr
&& = \int \Delta(X_1^{n+1}) \prod_{j=1}^{n} y_j^{j-1}
\frac{{\rm d}^{n+1}\alpha{\rm d}^n\beta}{\prod_{j=1}^n(x_j + y_j)}.
\end{eqnarray}
Here the symbol $X_1^{n+1}$ is to remind the reader that the vector consists of $n+1$ entries (whereas $Y$ consists of $n$ entries) and that the Vandermonde determinant is taken accordingly. Note also that the variable $x_{n+1}$ never appears in the product in the denominator.
Symmetrizing the integral in the $x_j$'s with respect to the labels $j=1, \dots, n$, but leaving $x_{n+1}$ fixed, gives
\begin{eqnarray}
\pi_n\sqrt{D_{n+1}D_n} = \frac 1 {n!} \int \Delta(X_1^{n+1}) \Delta(Y) \frac{{\rm d}^{n+1}\alpha{\rm d}^n\beta}{\prod_{j=1}^n(x_j + y_j)}.
\end{eqnarray}
Symmetrizing now with respect to the whole set $x_1, \dots, x_{n+1}$ we obtain
\begin{eqnarray}
\pi_n\sqrt{D_{n+1}D_n} = \frac 1 {n!(n+1)!} \int \Delta(X_1^{n+1}) \Delta(Y)
\det \left[
\begin{array}{ccc}
K(x_1,y_1) & \dots & K(x_{n+1},y_1)\cr
\vdots &&\vdots \cr
K(x_1,y_{n}) & \dots & K(x_{n+1},y_{n})\cr
1& \dots & 1
\end{array}\right]
{\rm d}^{n+1}\alpha{\rm d}^n\beta
\end{eqnarray}
Moreover, since the integrand is permutation invariant, it suffices to integrate over the region
$\{0<x_1<x_2<\cdots <x_n<x_{n+1}\} \times \{0<y_1<y_2<\cdots <y_n\}$, and, as a result,
\begin{equation}
\begin{split}
&\pi_n\sqrt{D_{n+1}D_n}=\\
&\int\!\!\!\!\int_{\substack{0<x_1<x_2<\cdots <x_{n+1}\\
0<y_1<y_2<\cdots <y_n}} \Delta(X_1^{n+1}) \Delta(Y)\det \left[\begin{array}{ccc}
K(x_1,y_1) & \dots & K(x_{n+1},y_1)\cr
\vdots &&\vdots \cr
K(x_1,y_{n}) & \dots & K(x_{n+1},y_{n})\cr
1& \dots & 1
\end{array}\right]
{\rm d}^{n+1}\alpha{\rm d}^n\beta.
\end{split}
\end{equation}
We thus need to prove that the determinant containing the Cauchy kernel $\frac 1{x+y}$ is positive for $0<x_1<x_2<\dots <x_{n+1}$ and $0<y_1<y_2<\dots<y_n$.
It is not difficult to prove that
\begin{equation}
\det \left[
\begin{array}{ccc}
\frac 1{x_1+y_1} & \dots & \frac 1{x_{n+1}+y_1}\cr
\vdots &&\vdots \cr
\frac 1{x_1+y_{n}} & \dots & \frac 1{x_{n+1}+y_{n}}\cr
1& \dots & 1
\end{array}\right]
= \frac{\Delta(X_1^{n+1}) \Delta(Y)}
{\prod_{j=1}^{n+1} \prod_{k=1}^{n} (x_j + y_k)}
\end{equation}
and this function is clearly positive in the above range.\end{proof}
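This is the classical Cauchy determinant evaluation with a bordering row of ones; a quick numerical confirmation (ours) on random ordered configurations:
\begin{verbatim}
# Sketch (ours): test the bordered Cauchy determinant identity.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n = 3
vand = lambda t: np.prod([t[j] - t[i]
                          for i, j in combinations(range(len(t)), 2)])
for _ in range(5):
    x = np.sort(rng.uniform(0.1, 3.0, n + 1))        # x_1 < ... < x_{n+1}
    y = np.sort(rng.uniform(0.1, 3.0, n))            # y_1 < ... < y_n
    M = np.vstack([1.0 / (x[None, :] + y[:, None]),  # rows 1/(x_j + y_k)
                   np.ones((1, n + 1))])             # bordering row of ones
    rhs = vand(x) * vand(y) / np.prod(x[None, :] + y[:, None])
    assert np.isclose(np.linalg.det(M), rhs)
print("bordered Cauchy identity verified")
\end{verbatim}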
\par \vskip 5pt
%
\subsection{Interlacing properties of the zeroes}
From (\ref{219}), (\ref{220}) and (\ref{def:XY}) the following factorizations are valid for
all principal truncations:
$$
{\cal{I}}=S_p^{-1} (S_q^{-1})^T\ ,\quad {\bf X}= S_p \Lambda (S_p)^{-1}\ , \quad {\bf Y}= S_q \Lambda S_q^{-1}\ .
$$
Moreover, since ${\cal{I}}$ is TP, the triangular matrices $S_p^{-1} $ and $S_q^{-1} $
are totally nonnegative (TN) \cite{Cryer2} and have the same diagonal entries: the $n$th
diagonal entry being $\sqrt{D_n/D_{n-1}}$. Furthermore, one can amplify the statement
about $S_p^{-1} $ and $S_q^{-1} $ using another result of Cryer (\cite{Cryer1}), which implies that
both triangular matrices are in fact triangular TP matrices (all minors that are non-trivial in the sense defined
in \cite{Cryer1} are strictly positive). This has the immediate consequence
\begin{lemma}\label{lem:IXIY}
All principal truncations ${\bf X}[n], {\bf Y}[n]$ are invertible.
\end{lemma}
\begin{proof}
From the factorization ${\bf X}= S_p \Lambda (S_p)^{-1} $ we conclude that it suffices to
prove the claim for $\Lambda S_p^{-1} [n]$ which in matrix form reads:
\begin{equation*}
\begin{bmatrix} (S_p^{-1})_{10}&(S_p^{-1})_{11}&\\
(S_p^{-1})_{20}&(S_p^{-1})_{21}&\hspace{-10pt}(S_p^{-1})_{22}&
\begin{picture}(0,0)
\put(-50.5,-10){\line(1,-1){20}}
\put(-51,-9.5){\line(1,-1){20}}
\put(-50,-10){\line(1,-1){20}}
\put(0,-10){\hbox{\Huge $0$}}
\end{picture}
\\
\\
\\
\vdots&\vdots&&(S_p^{-1})_{_{n+1, n+1}} \\
(S_p^{-1})_{n+1,0}&(S_p^{-1})_{n+1, 1}&\cdots &(S_p^{-1})_{n+1, n}\\
\end{bmatrix}.
\end{equation*}
However, the determinant of this matrix is strictly positive, because $S_p^{-1}$ is a triangular TP matrix.
\end{proof}
\begin{remark}
This lemma is not automatic, since $\Lambda[n]$ is not invertible.
\end{remark}
We now state the main theorem of this section.
\begin{theorem} \label{thm:TN}
${\bf X}$ and ${\bf Y}$ are TN.
\end{theorem}
\begin{proof}
We need to prove the theorem for every principal truncation. Let $n\geq 0$ be fixed.
We will suppress the dependence on $n$, for example ${\bf X}$ in the body of the proof means
${\bf X}[n]$ etc. First, we claim that
${\bf X}$ and ${\bf Y}$ admit the L-U factorization:
$ {\bf X} = {\bf X}_- {\bf X}_+,\ {\bf Y}= {\bf Y}_- {\bf Y}_+$, where $A_+$ denotes the upper triangular factor and $A_-$ is the unipotent
lower triangular factor in the Gauss factorization of a matrix $A$. Indeed,
$ {\bf X}_+= (\Lambda S_p^{-1})_+,\ {\bf Y}_+= (\Lambda S_q^{-1})_+$ are upper triangular components of TN matrices
$ \Lambda S_p^{-1}$ and $\Lambda S_q^{-1}$
and thus are totally nonnegative invertible bi-diagonal matrices by Lemma \ref{lem:IXIY}.
From ${\bf X} + {\bf Y}^T= \boldsymbol\pi \boldsymbol\eta^T$ we then obtain
$$ ({\bf Y}_+^T)^{-1} {\bf X}_- + {\bf Y}_- {\bf X}_+^{-1} =\left ( ({\bf Y}_+^T)^{-1}\boldsymbol\pi\right )
\left ( \boldsymbol\eta^T {\bf X}_+^{-1} \right ) := \boldsymbol\rho \boldsymbol\mu^T\ .
$$
We need to show that vectors $ \boldsymbol\rho\ ,\ \boldsymbol\mu$ have positive entries. For this, notice that
\begin{align*}
\boldsymbol\rho&= ((Y_+)^T)^{-1} S_p \boldsymbol\alpha= ( ( (\Lambda S_q^{-1})_+)^T)^{-1} S_p \boldsymbol\alpha\ ,\\
\boldsymbol\mu&=((X_+)^T)^{-1} S_q\boldsymbol\beta=( ( (\Lambda S_p^{-1})_+)^T)^{-1} S_q\boldsymbol\beta. \end{align*}
Now, it is easy to check that if the matrix of generalized bimoments ${\cal{I}}$ is replaced
by ${\cal{I}}\Lambda ^T$ (see Corollary \ref{cor:ILambda} ) then $S_p \rightarrow (( (\Lambda S_q^{-1})_+)^T)^{-1} S_p $,
while $\boldsymbol\alpha$ is unchanged, which implies that $\boldsymbol\rho$ is a new $\boldsymbol\pi $
in the notation of Proposition \ref{etapi}
and hence positive by the same Proposition. Likewise, considering
the matrix of generalized bimoments $\Lambda {\cal{I}}$, for which $\boldsymbol\beta$ is
unchanged, $S_q \rightarrow (( (\Lambda S_p^{-1})_+)^T)^{-1} S_q$ and $\boldsymbol\mu$ is
a new $\boldsymbol\eta$ in the notation of Proposition \ref{etapi}, implying the claim.
Thus
$$
\boldsymbol\rho= D_\rho \mathbf 1\ , \boldsymbol\mu = D_\mu \mathbf 1,
$$
where $ D_\rho \ , D_\mu $ are diagonal matrices with positive entries and $\mathbf 1$ is a vector of 1s.
We have
$$ D_\rho^{-1} ({\bf Y}_+^T)^{-1} {\bf X}_- D_\mu^{-1} + D_\rho^{-1}{\bf Y}_- {\bf X}_+^{-1} D_\mu^{-1} = \mathbf 1 \ { \mathbf 1 ^T}\ .
$$
The first (resp. second) term on the left, which we call $\tilde {\bf X}$ (resp. $\tilde {\bf Y}^T$),
is a lower (resp. upper) triangular matrix with
positive diagonal entries. The equality above then implies that
(i) $ \tilde X_{ij} = \tilde Y_{ij} =1$ for all $ i>j$ and (ii) $ \tilde X_{ii} + \tilde Y_{ii} = 1$ for all
$i$. In particular, both $ \tilde X_{ii}$ and $ \tilde Y_{ii} $ are positive numbers strictly less than 1.
This means that $\tilde {\bf X}$ and $\tilde {\bf Y}$ admit the factorizations
$$ \tilde {\bf X}= (Id - \Lambda^T)^{-1} L_X \ ,\ \tilde {\bf Y}= (Id - \Lambda^T)^{-1} L_Y\ ,
$$
where
$$
L_X= \sum_{i=0}^\infty \tilde X_{ii} E_{ii} + (1- \tilde X_{ii}) E_{i+1\ i}\ ,
L_Y= \sum_{i=0}^\infty \tilde Y_{ii} E_{ii} + (1- \tilde Y_{ii}) E_{i+1\ i}\ .
$$
Since all entries of bi-diagonal matrices $L_X, L_Y$ are positive, these matrices are totally nonnegative
and so are
\begin{equation} \label{eq:XYFac}
{\bf X}= {\bf Y}_+^T (Id - \Lambda^T)^{-1} L_X {\bf X}_+\ , \quad {\bf Y}= {\bf X}_+^T (Id - \Lambda^T)^{-1} L_Y {\bf Y}_+\ .
\end{equation}
\end{proof}
\begin{coroll}
${\bf X}$ and ${\bf Y}$ are oscillatory matrices.
\end{coroll}
\begin{proof}
We give a proof for ${\bf X}$. The factorization \eqref{eq:XYFac} we have just obtained shows that ${\bf X}$
is the product of an invertible lower-triangular TN matrix ${\bf Y}_+^T (Id -\Lambda^T)^{-1}$
and a tri-diagonal
matrix
$J=L_X {\bf X}_+$. Note that $L_X$ has all positive values on the main diagonal and
the first
sub-diagonal.
Entries on the first super-diagonal of ${\bf X}_+$ coincide with
corresponding
entries of ${\bf X}$ and thus are strictly positive by construction. Moreover, leading
principal minors of ${\bf X}$
are strictly positive
(see the proof
of Lemma \ref{lem:IXIY}), which implies that all diagonal entries of ${\bf X}_+$ are
strictly positive too.
Thus $J$ is a tri-diagonal matrix with all non-trivial entries strictly
positive.
Since diagonal entries of ${\bf Y}_+^T (Id -\Lambda^T)^{-1}$ are strictly positive and all
other entries
are non-negative,
every zero entry of ${\bf X}$ implies that the corresponding entry of $J$ is zero.
In view of this, all entries on the first super- and sub-diagonals of ${\bf X}$ must be
strictly positive, which,
by a fundamental criterion of Gantmacher and Krein (Theorem 10, II,
\cite{gantmacher-krein}),
ensures
that ${\bf X}$ is oscillatory.
\end{proof}
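Both the total nonnegativity of a principal truncation and the strict positivity of its first super- and sub-diagonals (the hypotheses of the Gantmacher--Krein criterion) can be verified by brute force for small sizes; a sketch (ours, illustrative, with tolerances to absorb roundoff):
\begin{verbatim}
# Sketch (ours): brute-force check that a principal truncation of X is TN
# and has strictly positive first super- and sub-diagonals.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
N = 9
x = np.sort(rng.uniform(0.1, 3.0, N)); a = rng.uniform(0.5, 1.5, N)
y = np.sort(rng.uniform(0.1, 3.0, N)); b = rng.uniform(0.5, 1.5, N)
K = 1.0 / (x[:, None] + y[None, :])
size = 4
I = np.array([[np.einsum('i,j,ij->', a * x**m, b * y**n, K)
               for n in range(size)] for m in range(size)])

def monic(B, n):
    c = np.zeros(n + 1); c[n] = 1.0
    if n > 0:
        c[:n] = np.linalg.solve(B[:n, :n].T, -B[n, :n])
    return c

P = [monic(I, n) for n in range(size)]
Q = [monic(I.T, n) for n in range(size)]
h = np.array([P[n] @ I[:n+1, :n+1] @ Q[n] for n in range(size)])
P = [c / np.sqrt(h[n]) for n, c in enumerate(P)]
Q = [c / np.sqrt(h[n]) for n, c in enumerate(Q)]
Pv = np.array([np.polyval(c[::-1], x) for c in P])
Qv = np.array([np.polyval(c[::-1], y) for c in Q])
X = np.einsum('ni,i,i,ij,j,mj->nm', Pv, x, a, K, b, Qv)  # <x p_n|q_m>

assert np.all(np.diag(X, 1) > 0) and np.all(np.diag(X, -1) > 0)
for k in range(1, size + 1):              # every k x k minor is >= 0
    for rows in combinations(range(size), k):
        for cols in combinations(range(size), k):
            assert np.linalg.det(X[np.ix_(rows, cols)]) > -1e-8
print("X[n] is TN with positive first super-/sub-diagonals")
\end{verbatim}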
The interlacing properties for the zeros of polynomials $p_n, q_n$,
as well as other properties of Sturm sequences,
follow then from Gantmacher-Krein theorems on spectral properties of oscillatory matrices (see II, Theorem 13, in \cite{gantmacher-krein}).
We summarize the most important properties implied by Gantmacher-Krein theory.
\begin{theorem}
\label{Sturm}
The sequences of BOPs $\{q_n\}$ and $\{p_n\}$ are Sturm sequences. Moreover,
\begin{enumerate}
\item their respective zeros are positive and simple,
\item the roots of adjacent polynomials in the sequences are interlaced,
\item the following alternative representations of the biorthogonal polynomials hold
\begin{align*}
p_n(x)&=\sqrt{\frac{D_n}{D_{n+1}}}\det (x-X[n-1]), \quad 1\leq n, \\
q_n(y)&=\sqrt{\frac{D_n}{D_{n+1}}}\det (y-Y[n-1]), \quad 1\leq n.
\end{align*}
\end{enumerate}
\end{theorem}
\begin{remark} The fact that the roots are positive and simple follows indeed from the
fact that ${\bf X}$ and ${\bf Y}$ are oscillatory. Theorem \ref{thm:alphazeros}, however, indicates
that this property is true even for a more general case when the totally positive kernel $K(x,y)$
is not necessarily the Cauchy kernel.
\end{remark}
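The Sturm properties are easy to observe in the discrete test setup used throughout these sketches (ours, illustrative): the zeros of consecutive monic $\widetilde p_n$'s come out positive, simple and interlaced, up to roundoff.
\begin{verbatim}
# Sketch (ours): positivity, simplicity and interlacing of the zeros of p_n.
import numpy as np

rng = np.random.default_rng(5)
N = 10
x = np.sort(rng.uniform(0.1, 3.0, N)); a = rng.uniform(0.5, 1.5, N)
y = np.sort(rng.uniform(0.1, 3.0, N)); b = rng.uniform(0.5, 1.5, N)
K = 1.0 / (x[:, None] + y[None, :])
size = 5
I = np.array([[np.einsum('i,j,ij->', a * x**m, b * y**n, K)
               for n in range(size)] for m in range(size)])

def monic(B, n):
    c = np.zeros(n + 1); c[n] = 1.0
    if n > 0:
        c[:n] = np.linalg.solve(B[:n, :n].T, -B[n, :n])
    return c

roots = []
for n in range(1, size):
    r = np.roots(monic(I, n)[::-1])           # highest degree first
    assert np.max(np.abs(r.imag)) < 1e-8      # real roots
    roots.append(np.sort(r.real))
for r in roots:
    assert np.all(r > 0) and np.all(np.diff(r) > 0)   # positive, simple
for r, s in zip(roots, roots[1:]):            # s interlaces r:
    assert np.all(s[:-1] < r) and np.all(r < s[1:])   # s_k < r_k < s_{k+1}
print("zeros are positive, simple, and interlaced")
\end{verbatim}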
\section{Four-term recurrence relations and Christoffel--Darboux identities}
%
\label{Section6}
We establish in this section a basic form of recurrence relations and
an analog of classical Christoffel-Darboux identities satisfied by
$\{q_n\}$ and $\{p_n\}$. First, we introduce the following
notation for semi-infinite, finite-band matrices.
\begin{definition} \label{def:matrixsupp}
Given two integers $a\leq b$, a semi-infinite matrix $A$ is said to have support
in $[a,b]$ if
\begin{equation} j-i<a \text{ or } j-i>b \text{ imply } A_{ij}=0. \end{equation}
The set of all matrices with supports in $[a,b]$ is denoted $M_{[a,b]}$.
\end{definition}
The content of this section relies heavily on the relation \eqref{eq:XYT}
which we recall for convenience:
$$
{\bf X} +{\bf Y} ^T=\boldsymbol \pi \boldsymbol \eta^T =D_{\pi}\mathbf 1 \mathbf 1 ^T D_{\eta}
$$
where $D_{\pi}$ and $D_{\eta}$ are the diagonal matrices of the averages
of $\mathbf p$ and $\mathbf q$, respectively.
Since the vector $\mathbf 1$ is a null vector
of $\Lambda -Id$ we obtain
\begin{proposition} \label{thm:XYCR}
${\bf X}$ and ${\bf Y}$ satisfy:
\begin{enumerate}
\item $(\Lambda -Id)D_{\pi}^{-1}{\bf X} +(\Lambda -Id)D_{\pi}^{-1}{\bf Y}^T=0.$
\item
$A:=(\Lambda -Id)D_{\pi}^{-1}{\bf X}\in M_{[-1,2]}.$
\item
$
{\bf X} D_{\eta}^{-1}(\Lambda ^T-Id)+{\bf Y} ^TD_{\eta}^{-1}(\Lambda ^T-Id)=0.$
\item
$\widehat A:={\bf X} D_{\eta}^{-1}(\Lambda ^T-Id)\in M_{[-2,1]}.$
\end{enumerate}
\end{proposition}
As an immediate corollary we obtain the factorization property for $X$ and $Y$.
\begin{coroll}
Let $A$, $\widehat A$ and
$$L:=(\Lambda -Id)D_{\pi}^{-1}, \qquad \widehat L:=D_{\eta}^{-1}(\Lambda ^T-Id),$$
respectively, denote matrices occurring in Proposition \ref{thm:XYCR}. Then
$$ L{\bf X} =A, \quad {\bf X}\widehat L=\widehat A, \qquad A\in M_{[-1,2]}, \, \widehat A\in M_{[-2,1]}. $$
Likewise, ${\bf Y}$ admits a similar factorization:
$$
{\bf Y} L^T=B, \qquad (\widehat L ^T) {\bf Y}=\widehat B,
$$
where $B=-A^T, \widehat B=-\widehat A ^T$.
\end{coroll}
Hence,
\begin{coroll}
\label{fourterm}
$\mathbf p$ and $\mathbf q$ satisfy four-term recurrence relations of the form
\begin{align*}
x\left(\frac{p_n(x)}{\pi _n}-\frac{p_{n-1}(x)}{\pi_{n-1}}\right)=A_{n-1,n+1}p_{n+1}(x)+A_{n-1,n}p_n(x)+A_{n-1,n-1}p_{n-1}(x)+A_{n-1,n-2}p_{n-2}(x), \\
y\left(\frac{q_n(y)}{\eta_n}-\frac{q_{n-1}(y)}{\eta_{n-1}}\right)=\widehat B_{n-1,n+1}q_{n+1}(y)+\widehat B_{n-1,n}q_n(y)+\widehat B_{n-1,n-1}q_{n-1}(y)+\widehat B_{n-1,n-2}q_{n-2}(y),
\end{align*}
for $1\leq n$ with the proviso that $p_{-1}=q_{-1}=0$.
\end{coroll}
\begin{proof}
We give the proof for $\mathbf p(x)$ in matrix form. Indeed, from
$$
x\mathbf p(x)={\bf X} \mathbf p(x),$$
it follows that
$$
xL\mathbf p(x)=L {\bf X} \mathbf p(x),
$$
hence the claim, since $L \in M_{[0,1]}$
and $L{\bf X} =A\in M_{[-1,2]}$. \end{proof}
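The band structure of $A=L{\bf X}$ asserted in Proposition \ref{thm:XYCR}, and with it the four-term recurrence, can be confirmed numerically; a sketch (ours, same illustrative discrete setup):
\begin{verbatim}
# Sketch (ours): the rows of A = (Lambda - Id) D_pi^{-1} X vanish outside
# the band -1 <= j - i <= 2, i.e. the four-term recurrence.
import numpy as np

rng = np.random.default_rng(6)
N = 10
x = np.sort(rng.uniform(0.1, 3.0, N)); a = rng.uniform(0.5, 1.5, N)
y = np.sort(rng.uniform(0.1, 3.0, N)); b = rng.uniform(0.5, 1.5, N)
K = 1.0 / (x[:, None] + y[None, :])
size = 5
I = np.array([[np.einsum('i,j,ij->', a * x**m, b * y**n, K)
               for n in range(size)] for m in range(size)])

def monic(B, n):
    c = np.zeros(n + 1); c[n] = 1.0
    if n > 0:
        c[:n] = np.linalg.solve(B[:n, :n].T, -B[n, :n])
    return c

P = [monic(I, n) for n in range(size)]
Q = [monic(I.T, n) for n in range(size)]
h = np.array([P[n] @ I[:n+1, :n+1] @ Q[n] for n in range(size)])
P = [c / np.sqrt(h[n]) for n, c in enumerate(P)]
Q = [c / np.sqrt(h[n]) for n, c in enumerate(Q)]
Pv = np.array([np.polyval(c[::-1], x) for c in P])
Qv = np.array([np.polyval(c[::-1], y) for c in Q])
X  = np.einsum('ni,i,i,ij,j,mj->nm', Pv, x, a, K, b, Qv)
pi = Pv @ a

A = X[1:, :] / pi[1:, None] - X[:-1, :] / pi[:-1, None]  # rows of L X
for i in range(A.shape[0]):
    for j in range(size):
        if not (-1 <= j - i <= 2):
            assert abs(A[i, j]) < 1e-6
print("A = L X is supported in the band [-1, 2]")
\end{verbatim}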
Let us observe that $\widehat L$ has a unique formal inverse, represented by a lower
triangular matrix. Let us then define
$$
\widehat \mathbf p(x)=\widehat L ^{-1} \mathbf p(x).
$$
\begin{theorem}[Christoffel-Darboux Identities for $\mathbf q$ and $\mathbf p$]\label{thm:CD1}
\begin{equation}\label{eq:CDI2}
(x+y) \sum_{j=0}^{n-1} q_j(y) p_j (x) = \mathbf q^T(y) [\Pi,(y-{\bf Y}^T)\widehat L]\widehat \mathbf p(x)
\end{equation}
where $\Pi := \Pi_n$ is the diagonal matrix $\mathrm{diag} (1,1,\dots, 1,0,\dots)$ with $n$ ones (the entries are labeled from $0$ to $n-1$). The explicit form of the commutators is:
\begin{multline}\label{eq:comm2}
[\Pi , (y-{\bf Y}^T)\widehat L] =\widehat A_{n-1,n}E_{n-1,n}-(\frac{y}{\eta_n}+\widehat A_{n,n-1})E_{n,n-1}
-\\
\widehat A_{n,n-2}E_{n,n-2}-\widehat A_{n+1,n-1}E_{n+1,n-1},
\end{multline}
where $A_{i,j}$, $\widehat A_{i,j}$ respectively, denote the $(i,j)$th entries of $A$, $\widehat A$, occurring in Proposition \ref{thm:XYCR}.
\end{theorem}
\begin{proof}
We give the proof of equation \eqref{eq:CDI2}. Since $(y-{\bf Y})\mathbf q=0$ it suffices
to prove that the left hand side equals $\mathbf q ^T(y)\Pi(y-{\bf Y}^T)\widehat L\widehat \mathbf p(x)$.
From the definition of $\widehat \mathbf p$ and equation \eqref{def:XY} we obtain
$$
(x+y)\mathbf q^T(y)\Pi \mathbf p(x)= \mathbf q^T(y)\Pi y \widehat L \widehat \mathbf p(x)+
\mathbf q^T(y)\Pi {\bf X}\mathbf p(x)=\mathbf q^T(y)\Pi y \widehat L \widehat \mathbf p(x)+
\mathbf q^T(y)\Pi {\bf X}\widehat L \widehat \mathbf p(x), $$
which, after switching ${\bf X}\widehat L$ with $-{\bf Y}^T \widehat L$ in view of Proposition \ref{thm:XYCR}, gives equation \eqref{eq:CDI2}. To get the commutator equation \eqref{eq:comm2}
one needs to perform an elementary computation using the definition of $\widehat A$.
\end{proof}
We establish now basic properties of $\widehat \mathbf p$ and its biorthogonal partner
$\widehat \mathbf q$ defined below.
\begin{proposition}
\label{hattedBOPs}
The sequences of polynomials
\begin{equation}
\widehat \mathbf p = \widehat L^{-1} \mathbf p \ , \ \ \ \widehat \mathbf q ^T= \mathbf q ^T \widehat L
\end{equation}
are characterized by the following properties
\begin{enumerate}
\item $\deg \widehat q_n = n+1$, $\deg \widehat p_n = n$;
\item $\displaystyle \int \widehat q_n {\rm d} \beta = 0$;
\item $\displaystyle \int\!\!\!\!\int \widehat p_n(x) \widehat q_m(y) \frac {{\rm d}\alpha {\rm d}\beta}{x+y} = \delta_{mn}$ ;
\item $\widehat q _n(y) =\frac{1}{\eta_{n+1}}\sqrt{\frac{D_{n+1}}{D_{n+2}}}y^{n+1}+
\mathcal O (y^n);$
\end{enumerate}
In addition
\begin{description}
\item a. $\widehat \mathbf q$ and $\widehat \mathbf p$ satisfy the intertwining relations with $\mathbf q$ and $\mathbf p$
\begin{eqnarray}
&& y \widehat\mathbf q^T= - \mathbf q^T\widehat A , \cr
&& x \mathbf p = \widehat A\widehat \mathbf p; \label{recrelhat}
\end{eqnarray}
\item b. $\widehat \mathbf q$ and $\widehat \mathbf p$ admit the determinantal representations:
\begin{eqnarray}
\widehat q_n(y) &=& \frac{1}{\eta_{n}\eta_{n+1}\sqrt{D_nD_{n+2}}} \det \left[
\begin{array}{cccc}
I_{00} & \dots &&I_{0n+1}\\
\vdots& && \vdots\\
I_{n-1\,0}& \dots&&I_{n-1\,n+1}\\
\beta_0& \dots & &\beta_{n+1}\\
1 & \dots &&y^{n+1}
\end{array}
\right]\\
\widehat p_n(x) &=& \frac {1}{D_{n+1}} \det \left[
\begin{array}{cccc}
I_{00} & \dots &I_{0\,n}&1\\
\vdots& && \vdots\\
I_{n-1\,0}& \dots&I_{n-1\,n}&x^{n-1}\\
I_{n 0}& \dots & I_{n\,n}&x^n\\
\beta_0& \dots &\beta_{n}& 0
\end{array}
\right]\label{detwhp}
\end{eqnarray}
\item c. $\displaystyle \beta_0 \int\!\!\!\!\int \widehat p_n(x) y^j \frac {{\rm d}\alpha {\rm d}\beta}{x+y} = \beta_j \int\!\!\!\!\int \widehat p_n(x) \frac {{\rm d}\alpha {\rm d}\beta}{x+y}$, $ j\leq n$.
\end{description}
\end{proposition}
\begin{proof}
Assertions (1), (2) and (4) follow directly from the shape of the matrix $\widehat L$. Assertion (3) follows from $\langle \mathbf p|\mathbf q^T\rangle=Id$ by multiplying it by $\widehat L$ on the right and by $\widehat L^{-1}$ on the left.
Assertion (c) follows from assertions (1), (2) and (3); indeed, from (2) and (3) it follows that the polynomial $\widehat p_n$ is biorthogonal to all polynomials of degree $\leq n$ with zero ${\rm d}\beta$--average, and $\{\beta_0 y^j - \beta_j:
1\leq j\leq n\}$
is a basis for such polynomials.
The intertwining relations follow from the definitions of the matrices $\widehat L, \widehat A$ and of the polynomials $\widehat \mathbf p, \widehat \mathbf q$.
The determinantal expression for $\widehat q_n$ follows by inspection since the proposed expression has the defining properties (1) and (2) and is biorthogonal to all powers $1,x,\dots, x^{n-1}$. The normalization is found by comparing the leading coefficients of $\widehat q_n = \frac{1}{\eta_{n+1}} q_{n+1}+ \mathcal O(y^n)$.
The determinantal expression for $\widehat p_n(x)$ follows again by inspection; indeed if $F(x)$ is the determinant in (\ref{detwhp}) then
\begin{equation}
\langle F(x)|y^j \rangle= \det \left[
\begin{array}{cccc}
I_{00} & \dots &I_{0\,n}&I_{0j}\\
\vdots& && \vdots\\
I_{n-1\,0}& \dots&I_{n-1\,n}&I_{n-1\, j}\\
I_{n 0}& \dots & I_{n\,n}&I_{n\,j}\\
\beta_0& \dots &\beta_{n}& 0
\end{array}
\right] = -\beta_j D_{n+1} = \frac{\beta_j}{\beta_0} \langle F(x)|1\rangle.
\end{equation}
where the determinants are computed by expansion along the last row.
The proportionality constant is again found by comparison.
\end{proof}
\par\vskip 5pt
One easily establishes a counterpart to Theorem \ref{thm:CD1} valid
for $\widehat \mathbf q$ and $\widehat \mathbf p$.
\begin{proposition}[Christoffel--Darboux identities for $\widehat \mathbf q$ and $\widehat \mathbf p$ ]
\label{propCDI}
We have
\begin{equation} \label{CDI1}
(x+y) \sum_{j=0}^{n-1} \widehat q_j(y) \widehat p_j (x) = \mathbf q^T(y) [(x-{\bf X})\widehat L, \Pi]\widehat \mathbf p(x)=
\mathbf q^T(y) [\Pi,(-x -{\bf Y}^T)\widehat L]\widehat \mathbf p(x).
\end{equation}
\end{proposition}
\begin{remark} Observe that the commutators occurring in both theorems have
identical structure; they only differ in the variable $y$ in Theorem
\ref{thm:CD1} being now replaced by $-x$.
We will denote by $\mathbb A(x) $ the commutator $[\Pi, (-x-{\bf Y}^T)\widehat L]$ and by $\mathbb A_n(x)$ its nontrivial $3\times 3$ block. Thus the nontrivial block in Proposition \ref{propCDI} reads:
\begin{equation}
\mathbb A_n(x) = \left[
\begin{array}{cc|c}
0&0&\widehat A _{n-1,n}\\
\hline
-\widehat A _{n,n-2}& \frac{x}{\eta_{n}} -\widehat A _{n, n-1}& 0\\
0&-\widehat A_{n+1,n-1}&0
\end{array}\right]
\end{equation}
while the block appearing in Theorem \ref{thm:CD1} is simply $\mathbb A_n(-y)$.
\par \vskip 5pt
\end{remark}
%
With this notation in place we can present the Christoffel-Darboux
identities in a unified way.
\begin{coroll}[Christoffel--Darboux identities for $\mathbf q, \mathbf p$, and $\widehat \mathbf q,\widehat \mathbf p$ ] \label{cor:CDIuni}
The biorthogonal polynomials $\mathbf q, \mathbf p$, and $\widehat \mathbf q,\widehat \mathbf p$ satisfy
\begin{eqnarray}
(x+y) \sum_{j=0}^{n-1} q_j(y) p_j (x) = \mathbf q^T(y) \mathbb A(-y)\widehat \mathbf p(x),\\
(x+y) \sum_{j=0}^{n-1} \widehat q_j(y) \widehat p_j (x) = \mathbf q^T(y) \mathbb A(x)\widehat \mathbf p(x).
\end{eqnarray}
\end{coroll}
\section{ Approximation problems and perfect duality} \label{sec:AproxPerfD}
We now associate with the measures
${\rm d} \alpha$ and ${\rm d} \beta$ a chain of Markov functions by taking the
Stieltjes transforms of the corresponding measures, as well as of their images reflected with respect to the origin.
\begin{definition}
Define
\begin{align}
&W_{\beta}(z)=\int \frac{1}{z-y} {\rm d} \beta(y), &W_{\alpha^*}(z)&=\int \frac{1}{z+x}{\rm d} \alpha(x), \cr
&W_{\alpha^*\beta}(z)=-\int\!\!\!\!\int \frac{1}{(z+x)(x+y)}{\rm d} \alpha(x) {\rm d} \beta(y),
&W_{\beta \alpha^*}(z)&=\int\!\!\!\!\int \frac{1}{(z-y)(y+x)}{\rm d} \alpha(x) {\rm d} \beta(y).
\end{align}
\end{definition}
We recall now an important notion of a Nikishin system
associated with two measures (see \cite{ns}, p. 142, called there a
MT system of order $2$).
\begin{definition} Given two measures ${\rm d} \mu _1$ and ${\rm d}\mu_2$ with disjoint
supports $\Delta _1 $, $\Delta _2$ respectively, a Nikishin system of order $2$ is a pair of
functions
$$
f_1(z)=\int _{\Delta_1} \frac{{\rm d} \mu_1(x_1)}{z-x_1}, \qquad
f_2(z)=\int_{\Delta_1}\frac{{\rm d} \mu_1(x_1)}{z-x_1}\int _{\Delta_2} \frac{{\rm d} \mu_2(x_2)}{x_1-x_2}.
$$
\end{definition}
\begin{remark} The definition of a Nikishin system depends on the order in
which one ``folds'' the measures. If one starts from ${\rm d} \mu_2$, rather than
${\rm d} \mu _1$, one
obtains a priori a different system. As we show below, the relation between
these two Nikishin systems is in fact of central importance to the
theory we are developing.
\end{remark}
The following elementary observation provides the proper framework for our
discussion.
\begin{lemma} Let ${\rm d} \alpha^*$ denote the measure obtained from ${\rm d} \alpha$ by
reflecting the support of ${\rm d} \alpha$ with respect to the origin. Then
$W_{\beta}, W_{\beta \alpha^*}$ and $W_{\alpha^*}, W_{\alpha^*\beta}$
are Nikishin systems associated with measures ${\rm d} \beta $ and ${\rm d} \alpha^*$ with no predetermined ordering of measures.
\end{lemma}
The relation between these two Nikishin systems can now be readily
obtained.
\begin{lemma} \label{lem:Plucker}
\begin{equation} \label{eq:Plucker}
W_{\beta}(z)W_{\alpha^*}(z)=W_{\beta \alpha^*}(z)+W_{\alpha^*\beta}(z).
\end{equation}
\end{lemma}
\begin{proof}
Elementary computation gives:
$$
W_{\beta}(z)W_{\alpha^*}(z)=\int\!\!\!\!\int \frac{1}{(z-y)(z+x)}{\rm d} \alpha(x) {\rm d} \beta(y)=
\int\!\!\!\!\int \frac{1}{(x+y)}\left[\frac{1}{z-y}-\frac{1}{z+x}\right ]{\rm d} \alpha(x) {\rm d} \beta(y),
$$
which implies the claim.
\end{proof}
\begin{remark} Equation \eqref{eq:Plucker} was introduced in \cite{ls-cubicstring} for the DP peakons (see Lemma 4.7 there).
Observe that this formula is valid for any Nikishin system of order $2$.
\end{remark}
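For discrete measures the relation \eqref{eq:Plucker} can be checked directly at any point away from the supports; a short sketch (ours, illustrative):
\begin{verbatim}
# Sketch (ours): check W_beta(z) W_{alpha*}(z)
#   = W_{beta alpha*}(z) + W_{alpha* beta}(z) at a few test points.
import numpy as np

rng = np.random.default_rng(7)
N = 7
x = rng.uniform(0.1, 3.0, N); a = rng.uniform(0.5, 1.5, N)   # d-alpha
y = rng.uniform(0.1, 3.0, N); b = rng.uniform(0.5, 1.5, N)   # d-beta
xy = x[:, None] + y[None, :]

W_beta       = lambda z: np.sum(b / (z - y))
W_alpha_star = lambda z: np.sum(a / (z + x))
W_as_b = lambda z: -np.sum(np.outer(a / (z + x), b) / xy)  # W_{alpha* beta}
W_b_as = lambda z:  np.sum(np.outer(a, b / (z - y)) / xy)  # W_{beta alpha*}

for z in (5.0, -7.3, 4.0 + 2.0j):
    assert np.isclose(W_beta(z) * W_alpha_star(z), W_b_as(z) + W_as_b(z))
print("Nikishin relation verified")
\end{verbatim}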
We formulate now the main approximation problem, modeled after that of
\cite{ls-cubicstring}
\begin{definition} Let $n\geq 1$. Given two Nikishin systems $W_{\beta}, W_{\beta \alpha^*}$ and $W_{\alpha^*}, W_{\alpha^*\beta}$, we seek polynomials $Q(z)$, $\deg Q=n$, $P_{\beta}(z)$,
$\deg P_{\beta}=n-1$, and $P_{\beta \alpha^*}(z)$, $\deg P_{\beta \alpha^*}=n-1$,
which satisfy the Pad\'e-like approximation conditions as $z\rightarrow \infty, \, z\in {\mathbb C}_{\pm} $:
\begin{subequations}\label{eq:PadeA}
\begin{align}
Q(z)W_{\beta}(z)-P_{\beta}(z)=\mathcal O\left(\frac{1}{z}\right), \\
Q(z)W_{\beta\alpha^*}(z)-P_{\beta\alpha^*}(z)=\mathcal O\left(\frac{1}{z}\right), \\
Q(z) W_{\alpha^*\beta}(z)-P_{\beta}(z)W_{\alpha^*}(z)+P_{\beta\alpha^*}(z)=\mathcal O\left(\frac{1}{z^{n+1}}\right)
\end{align}
\end{subequations}
\end{definition}
\begin{remark} In the case that both measures have compact support we can
remove the condition that $z\in {\mathbb C} _{\pm}$ since
all the functions involved are then holomorphic around $z=\infty$.
\end{remark}
\begin{remark} In the terminology used for example in \cite{vanassche} the triplets of
polynomials $Q, P_{\beta},P_{\beta \alpha^*}$ provide a Hermite-Pad\'{e}
approximation of type $I$ to the Nikishin system $W_{\beta}, W_{\beta \alpha^*}$ and, simultaneously, a Hermite-Pad\'{e}
approximation of type $II$ to the Nikishin system $W_{\alpha^*}, W_{\alpha^*\beta}$. \end{remark}
\begin{definition}
We call the left hand sides of approximation problems \eqref{eq:PadeA}
$R_{\beta}, R_{\beta \alpha^*}$ and $R_{\alpha^*\beta}$ respectively, referring to them as remainders.
\end{definition}
The relation of the approximation problem \eqref{eq:PadeA} to the theory of biorthogonal
polynomials $\mathbf q$ and $\mathbf p$ is the subject of the next theorem.
\begin{theorem} \label{thm:Padeq}
Let $q_n(y)$ be defined as in \eqref{BOPs}, and let us set
$Q(z)=q_n(z)$. Then $Q(z)$ is the unique, up to a multiplicative
constant, solution of the approximation problem \eqref{eq:PadeA}.
Moreover, $P_{\beta},P_{\beta \alpha^*}$ and all the remainders
$R_{\beta}, R_{\beta \alpha^*}$ and $R_{\alpha^*\beta}$ are uniquely determined from $Q$ with the help of the formulas:
\begin{subequations}
\begin{align}
P_{\beta}(z)&=\int \frac{Q(z)-Q(y)}{z-y}{\rm d} \beta(y),\qquad
P_{\beta \alpha^*}(z)=\int\!\!\!\!\int\frac{Q(z)-Q(y)}{(z-y)(x+y)}{\rm d} \alpha(x) {\rm d} \beta(y) , \\
R_{\beta}(z)&=\int \frac{Q(y)}{z-y}{\rm d} \beta(y), \qquad \qquad
R_{\beta \alpha^*}(z)=\int\!\!\!\!\int\frac{Q(y)}{(z-y)(x+y)}{\rm d} \alpha(x) {\rm d} \beta(y) , \\
R_{\alpha^*\beta}(z)&=-\int\!\!\!\!\int\frac{Q(y)}{(z+x)(x+y)}{\rm d} \alpha(x) {\rm d} \beta(y)
=\int\frac{R_{\beta}(x)}{z-x} {\rm d} \alpha^*(x).
\end{align}
\end{subequations}
\end{theorem}
\begin{proof}
We start with the first approximation problem involving $Q(z)W_{\beta}(z)$.
Writing explicitly its first term we get:
$$
\int \frac{Q(z)}{z-y}{\rm d}\beta(y)=\int \frac{Q(z)-Q(y)}{z-y}{\rm d}\beta(y)+\int \frac{Q(y)}{z-y}{\rm d}\beta(y).$$
Since $ \int \frac{Q(z)-Q(y)}{z-y}{\rm d}\beta(y)$ is a polynomial in $z$ of degree $n-1$, while $\int \frac{Q(y)}{z-y}{\rm d}\beta(y)=
\mathcal O (\frac{1}{z})$, we get the first and the third formulas. The second and fourth formulas are obtained in an analogous way from the second approximation problem.
Furthermore, to get the last formula we compute $P_{\beta}$ and $P_{\beta \alpha^*}$ from the first two approximation problems and substitute into the third
approximation problem, using on the way Lemma \ref{lem:Plucker}, to obtain:
$$
R_\beta W_{\alpha^*}-R_{\beta \alpha^*}=R_{\alpha^* \beta}.
$$
Substituting explicit formulas for $R_{\beta}$ and $R_{\beta \alpha^*}$ gives
the final formula. To see that $Q(z)$ is proportional to $q_n(z)$ we
rewrite $-R_{\alpha^*\beta}$ as:
\begin{align*}
&\int\!\!\!\!\int \frac{Q(y)}{(z+x)(x+y)}{\rm d} \alpha(x){\rm d} \beta(y)=
\int\!\!\!\!\int\frac {Q(y)}{x+y}
\left[
\frac{\left(\frac{-x}{z}\right)^n}{z+x}+\sum_{j=0}^{n-1}\frac{(-x)^j}{z^{j+1}}\right ]{\rm d} \alpha(x){\rm d} \beta(y)=\\
&\int\!\!\!\!\int\frac {Q(y)}{x+y}\,
\frac{\left(\frac{-x}{z}\right)^n}{z+x}\,{\rm d} \alpha(x){\rm d} \beta(y)+
\sum_{j=0}^{n-1}\frac{1}{z^{j+1}}\int\!\!\!\!\int\frac{(-x)^j\,Q(y)}{x+y}\,{\rm d} \alpha {\rm d} \beta,
\end{align*}
where we used the geometric-sum identity $\frac 1{z+x}=\sum_{j=0}^{n-1}\frac{(-x)^j}{z^{j+1}}+\frac{(-x/z)^n}{z+x}$.
To finish the argument we observe that the first term is already $\mathcal O(
\frac{1}{z^{n+1}})$, hence every coefficient in the sum must vanish. This gives:
$$
\int\!\!\!\!\int \frac{x^j Q(y)}{x+y}{\rm d}\alpha(x) {\rm d} \beta(y)=0, \qquad 0\leq j\leq n-1,
$$
which characterizes uniquely (up to a multiplicative constant)
the polynomial $q_n$.
\end{proof}
\begin{remark} In the body of the proof we used an equivalent form
of the third approximation condition, namely
\begin{equation} \label{eq:3rdPadeA}
R_\beta W_{\alpha^*}(z)-R_{\beta \alpha^*}(z)=R_{\alpha^* \beta}(z)=\mathcal O(\frac{1}{z^{n+1}}).
\end{equation}
\end{remark}
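The order of the remainder can also be observed numerically: with $Q=\widetilde q_n$, the quantity $|z^{n+1}R_{\alpha^*\beta}(z)|$ stabilizes for large $z$ at $\langle x^n|\widetilde q_n\rangle$. A sketch (ours, discrete test measures as in the earlier sketches):
\begin{verbatim}
# Sketch (ours): |z^{n+1} R_{alpha* beta}(z)| tends to <x^n | q_n>.
import numpy as np

rng = np.random.default_rng(8)
N = 8
x = rng.uniform(0.1, 3.0, N); a = rng.uniform(0.5, 1.5, N)
y = rng.uniform(0.1, 3.0, N); b = rng.uniform(0.5, 1.5, N)
K = 1.0 / (x[:, None] + y[None, :])
n = 3
I = np.array([[np.einsum('i,j,ij->', a * x**m, b * y**k, K)
               for k in range(n + 1)] for m in range(n + 1)])

c = np.zeros(n + 1); c[n] = 1.0                  # monic q_n:
c[:n] = np.linalg.solve(I[:n, :n], -I[:n, n])    # <x^m|q_n> = 0, m < n
qv = np.polyval(c[::-1], y)                      # q_n at the mass points

def R(z):   # R_{alpha* beta}(z) = -int int q_n(y)/((z+x)(x+y))
    return -np.einsum('i,ij,j->', a / (z + x), K, b * qv)

lead = I[n, :] @ c                               # <x^n | q_n> = h_n > 0
vals = [abs(R(z)) * z**(n + 1) for z in (1e2, 1e3, 1e4)]
assert np.allclose(vals, lead, rtol=0.05)        # order O(z^-(n+1))
print("remainder order confirmed; h_n =", lead)
\end{verbatim}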
By symmetry, we can consider the Nikishin systems associated with measures
$\alpha$ and $\beta^*$ with the corresponding Markov functions
$W_{\alpha}, W_{\alpha \beta^*}$ and $W_{\beta^*}, W_{\beta^*\alpha}$.
We then have an obvious interpretation of the polynomials $p_n$.
\begin{theorem} \label{thm:Padep}
Let $p_n(x)$ be defined as in \eqref{BOPs}, and let us set
$Q(z)=p_n(z)$. Then $Q(z)$ is the unique, up to a multiplicative
constant, solution of the approximation problem for $z \rightarrow
\infty, z\in {\mathbb C}_{\pm}$:
\begin{subequations}\label{eq:PadeB}
\begin{align}
Q(z)W_{\alpha}(z)-P_{\alpha}(z)=\mathcal O\left(\frac{1}{z}\right), \\
Q(z)W_{\alpha\beta^*}(z)-P_{\alpha\beta^*}(z)=\mathcal O\left(\frac{1}{z}\right), \\
Q(z)W_{\beta^*\alpha}(z)-P_{\alpha}(z) W_{\beta^*}(z) +P_{\alpha\beta^*}(z)=\mathcal O\left(\frac{1}{z^{n+1}}\right),
\end{align}
\end{subequations}
where $P_{\alpha}, P_{\alpha \beta^*}$ are given by formulas of Theorem
\ref{thm:Padeq} after switching $\alpha$ with $\beta$.
\end{theorem}
Clearly, one does not need to go to four different types of
Nikishin systems in order to characterize $q_n$ and $p_n$.
The following corollary is an alternative characterization of
biorthogonal polynomials which uses only the first pair of Nikishin systems.
\begin{coroll} \label{cor:Padeqp}
Consider the Nikishin systems $W_{\beta}, W_{\beta \alpha^*}$ and $W_{\alpha^*}, W_{\alpha^*\beta}$. Then the pair of biorthogonal polynomials $\{q_n, p_n\}$ solves:
\begin{enumerate}
\item $Q(z)=q_n(z) $ solves Hermite-Pad\'{e} approximations given by equations
\eqref{eq:PadeA},
\begin{subequations}
\begin{align*}
Q(z)W_{\beta}(z)-P_{\beta}(z)=\mathcal O\left(\frac{1}{z}\right), \\
Q(z)W_{\beta\alpha^*}(z)-P_{\beta\alpha^*}(z)=\mathcal O\left(\frac{1}{z}\right), \\
Q(z) W_{\alpha^*\beta}(z)-P_{\beta}(z)W_{\alpha^*}(z)+P_{\beta\alpha^*}(z)=\mathcal O\left(\frac{1}{z^{n+1}}\right)
\end{align*}
\end{subequations}
\item $Q(z)=p_n(-z)$ solves switched (Type I with Type II) Hermite-Pad\'{e} approximations
\begin{subequations}\label{eq:PadeC}
\begin{align}
Q(z)W_{\alpha^*}(z)-P_{\alpha^*}(z)=\mathcal O\left(\frac{1}{z}\right), \\
Q(z)W_{\alpha^*\beta }(z)-P_{\alpha^*\beta}(z)=\mathcal O\left(\frac{1}{z}\right), \\
Q(z) W_{\beta \alpha^*}(z)-P_{\alpha^*}(z)W_{\beta}(z)+P_{\alpha^*\beta}(z)=\mathcal O\left(\frac{1}{z^{n+1}}\right)
\end{align}
\end{subequations}
\end{enumerate}
\end{coroll}
We finish this section with a few results needed for the Riemann-Hilbert problem approach to biorthogonal polynomials $\{q_n, p_n\}$ which will be presented in the next section.
\begin{definition}
\label{defauxwave}
We define the auxiliary vectors
in addition to the main polynomial vectors $\mathbf q_{_0}(w):= \mathbf q(w)$ and $\mathbf p_{_0}(z) := \mathbf p(z)$, as
\begin{eqnarray}
&& \mathbf q_{_1}(w) :=\int \mathbf q(y)\frac {{\rm d}\beta(y)}{w-y}, \qquad \mathbf q_{_2}(w):=\int \frac{\mathbf q_1(x)}{w-x}{\rm d} \alpha^*(x), \\
&& \mathbf p_{_1} (z) :=\int \frac {\mathbf p(x){\rm d}\alpha(x)}{z-x}, \qquad \mathbf p_{_2}(z) := \int \frac{\mathbf p_1(y)}{z-y}{\rm d} \beta^*(y).
\end{eqnarray}
Moreover,
\begin{eqnarray}
&&\widehat \mathbf p_{_1}(z) := \widehat L^{-1} \left(\mathbf p_{_1}(z)+\frac{1}{\beta_0}\langle \mathbf p|1\rangle\right)=\widehat L^{-1} \mathbf p_{_1}(z) - {\bf 1},\\
&& \widehat \mathbf p_2(z):=\int \frac{\widehat \mathbf p_1(y)}{z-y}{\rm d} \beta^*(y).
\end{eqnarray}
Here ${\bf 1}$ is the vector of ones.\footnote{The formula $\beta_0^{-1} \langle\widehat \mathbf p_n|1\rangle = -1$ follows directly from the determinantal expression in Proposition \ref{hattedBOPs}.}
\end{definition}
\begin{remark}
Note that the definition above unifies the approximants and their
respective remainders (see Theorem \ref{thm:Padeq}), thus, for example, $\mathbf q_{_1}(w) =\mathbf R_{\beta}(w),
\mathbf q_{_2}(w)=\mathbf R_{\alpha^*\beta}(w)$ etc. The definition of ``hatted'' quantities
is justified below.
\end{remark}
\begin{theorem} [Extended Christoffel-Darboux Identities]\label{thm:ECD1}
Let $a,b=0,\dots,2$. Then
\begin{equation}
(w+z) \mathbf q_{_a}^T(w) \Pi \mathbf p_{_b}(z) = \mathbf q_{_a}^T(w) \mathbb A(-w) \widehat \mathbf p_{_b}(z)-\mathbb F(w,z)_{ab}
\end{equation}
where
\begin{equation}
\mathbb F(w,z)= \begin{bmatrix}
0&0&1\\
0& 1&W_{\beta^*}(z)+W_{\beta}(w) \\
1&W_{\alpha}(z)+W_{\alpha^*}(w)&W_{\alpha^*}(w)W_{\beta^*}(z)+W_{\alpha^*\beta}(w)+W_{\beta^*\alpha}(z)
\end{bmatrix}.
\end{equation}
\end{theorem}
\begin{proof}
The proof goes by repeated applications of the Christoffel-Darboux Identities
given by Theorem \ref{thm:CD1} and Pad\'{e} approximation conditions
\ref{eq:PadeA}. The details have been relegated to Appendix \ref{sec:app-CD}.
\end{proof}
We point out that if we set $w=-z$ in the CDI's contained in Theorem
\ref{thm:ECD1}, the left hand side vanishes identically and the RHS contains terms of the form $\mathbf q_{_a}^T(-z) \mathbb A(z) \widehat\mathbf p_{_b}(z) $ minus $\mathbb F_{ab}(-z,z)$. The main observation is that the second term is {\bf constant}, independent of both $z$ and $n$, and hence one ends up with the
{\bf perfect pairing} (see \cite{Bertosemiclass})
between the auxiliary vectors.
For the reader's convenience we recall the definition of $\mathbb A(z)$ to
emphasize the implicit dependence on the index $n$ hidden in the projection
$\Pi$.
\begin{theorem} (Perfect Duality)
Let
$$
\mathbb J=\begin{bmatrix}0&0&1\\0&1&0\\1&0&0 \end{bmatrix}.
$$
Then
$$
\mathbf q_{_a}^T(-z) \mathbb A(z) \widehat \mathbf p_{_b}(z)=\mathbb J_{ab}, \qquad
\text{ where }
\mathbb A(z)=[(z-{\bf X})\widehat L,\Pi].
$$
\end{theorem}
\begin{proof}
The only nontrivial entry to check is $(2,2)$. In this case, after one
substitutes $w=-z$ into
$W_{\alpha^*}(w)W_{\beta^*}(z)+W_{\alpha^*\beta}(w)+W_{\beta^*\alpha}(z)$, one
obtains the identity of Lemma \ref{lem:Plucker}.
\end{proof}
There also exists an analog of the extended Christoffel-Darboux identities
of Theorem \ref{thm:ECD1} for the ``hatted'' quantities.
We first define:
\begin{definition}
For $a=0,1,2$,
\begin{equation}
\widehat \mathbf q_{_a}^T := \mathbf q_{_a}^T \widehat L.
\end{equation}
\end{definition}
The following identities follow directly from the respective
definitions.
\begin{lemma} \label{lem:wqzp}
\begin{align*}
&w \widehat \mathbf q_a^T(w)=\begin{cases} \mathbf q_a^T(w){\bf Y}^T \widehat L, \quad &a=0,1\\
\mathbf q_2^T(w){\bf Y}^T \widehat L -\langle 1|\widehat\mathbf q_0^T\rangle, \quad &a=2. \end{cases}\\
&(z-{\bf X})\widehat L \widehat \mathbf p_b(z)=\begin{cases}0, \quad &b=0,\\
\frac{\langle \mathbf p_0|z+y\rangle}{\beta_0}, \quad &b=1,\\
-\langle \mathbf p_0|1\rangle +\frac{\langle \mathbf p_0|z+y\rangle W_{\beta^*}(z)}{\beta_0}, \quad &b=2.
\end{cases}
\end{align*}
\end{lemma}
\begin{theorem} [Extended Christoffel-Darboux Identities for $\widehat \mathbf q_a, \widehat \mathbf p_b$]\label{thm:ECD2}
Let $a,b=0,\dots,2$. Then
\begin{equation}
(w+z) \widehat \mathbf q_{a}^T(w) \Pi \widehat \mathbf p_{b}(z) =
\mathbf q_{a}^T(w)\mathbb A(z)\widehat \mathbf p_{b}(z)-\widehat {\mathbb F}(w,z)_{ab}
\end{equation}
where
\begin{equation}
\widehat {\mathbb F}(w,z)={\mathbb F}(w,z) -\frac{w+z}{\beta_0}\begin{bmatrix}
0&1&W_{\beta^*}(z)\\
0& W_{\beta}(z)&W_{\beta}(w)W_{\beta^*}(z)\\
1&W_{\alpha^*\beta^*}(w)&W_{\alpha^*\beta^*}(w)W_{\beta^*}(z)
\end{bmatrix}.
\end{equation}
\end{theorem}
\begin{proof}
We give an outline of the proof.
For $a=0,1$, in view of Lemma \ref{lem:wqzp}
$$
(w+z) \widehat \mathbf q_{a}^T(w) \Pi \widehat \mathbf p_{b}(z)=\mathbf q_{a}^T(w)\mathbb A(z)\widehat \mathbf p_{b}(z)
+\mathbf q_{a}^T(w)\Pi (z-{\bf X})\widehat L \widehat \mathbf p_{b}(z).
$$
The second term equals, again by Lemma \ref{lem:wqzp},
$$
\mathbf q_{a}^T(w)\Pi\begin{cases}0, \quad &b=0,\\
\frac{\langle \mathbf p_0|z+y\rangle}{\beta_0}, \quad &b=1,\\
-\langle \mathbf p_0|1\rangle+\frac{\langle \mathbf p_0|z+y\rangle W_{\beta^*}(z)}{\beta_0}, \quad &b=2.
\end{cases}
$$
Now, one goes case by case, using biorthogonality of $\mathbf q_0^T$ and $\mathbf p_0$,
and the definition of $\mathbf q_1^T(w)$. After a few elementary steps one arrives
at the claimed result.
The computation for $a=2$ is only slightly more involved. From Lemma
\ref{lem:wqzp} we obtain:
$$
(w+z) \widehat \mathbf q_{2}^T(w) \Pi \widehat \mathbf p_{b}(z) =
\mathbf q_{2}^T(w)\mathbb A(z)\widehat \mathbf p_{b}(z)-\langle 1|\widehat \mathbf q_0\rangle \Pi \widehat \mathbf p_b(z)+
\mathbf q_2^T(w)\Pi (z-{\bf X})\widehat L \widehat \mathbf p_b(z).
$$
In view of biorthogonality of $\widehat \mathbf q_0^T$ and $\widehat \mathbf p$, after some
intermediate computations, one obtains:
\begin{align*}
\langle 1|\widehat \mathbf q_0\rangle \Pi \widehat \mathbf p_b(z)=\begin{cases} 1, \quad &b=0\\
W_{\alpha}(z)+\frac{\langle 1|1\rangle}{\beta_0},
\quad &b=1,\\
W_{\beta^*\alpha}(z)+\frac{\langle 1|1\rangle }{\beta_0}W_{\beta^*}(z), \quad &b=2.
\end{cases}
\end{align*}
Likewise,
\begin{align*}
\mathbf q_2^T(w)\Pi (z-{\bf X})\widehat L \widehat \mathbf p_b(z)=\begin{cases} 0, \quad &b=0\\
\frac{w+z}{\beta_0}W_{\alpha^*\beta}(w) - W_{\alpha^*}(w)+\frac{\langle 1|1\rangle }{\beta_0},
\quad &b=1,\\\frac{w+z}{\beta_0}W_{\beta^*}(z)W_{\alpha^*}(w)-W_{\alpha^*\beta}(w)-W_{\beta^*}(z)W_{\alpha^*}(w)+
\frac{\langle 1|1\rangle }{\beta_0}W_{\beta^*}(z), \quad &b=2,
\end{cases}
\end{align*}
and the claim follows.
\end{proof}
\section{Riemann--Hilbert problems}
In this section we set up two
Riemann--Hilbert problems characterizing the Cauchy BOPs that enter the Christoffel--Darboux identities of the previous section. This is done in anticipation of possible applications to the study of universality for the corresponding two--matrix model. Moreover, since the Christoffel--Darboux kernels contain also the hatted polynomials, it is useful to
formulate the Riemann--Hilbert problems for those polynomials as well.
We will also make the {\bf assumption} (confined to this section) that the measures ${\rm d}\alpha,{\rm d}\beta$ are {\it absolutely continuous with respect to Lebesgue's measure} on the respective axes. Thus one can write $
\frac{{\rm d} \alpha}{{\rm d} x} = {\rm e}^{-\frac{U(x)}\hbar}\ ,
\frac{{\rm d} \beta}{{\rm d} y} = {\rm e}^{-\frac{V(y)}\hbar},
$
for the respective (positive!) densities on the respective supports: the signs in the exponents are conventional so as to have (in the case of an unbounded support) the {\it potentials} $U,V$ bounded from below. The constant $\hbar$ is only for convenience when studying the asymptotics of biorthogonal polynomials
for large degrees (small $\hbar$).
Since the Christoffel--Darboux identities involve the expressions $\mathbf q_{_a}\mathbb A \widehat \mathbf p_{_b}$, we are naturally led to characterize the sequences $\mathbf q$ and $\widehat \mathbf p$. However, the other sequences can be characterized in a similar manner by swapping the r\^oles of the relevant measures and symbols.
\subsection{Riemann--Hilbert problem for the $\mathbf q$--BOPs}
We will be describing here only the RHP characterizing the polynomials $q_n(y)$; the characterization of the polynomials $p_n(x)$ is obtained by simply interchanging $\alpha$ with $\beta$ (see for example Theorem \ref{thm:Padep}).
We consider the real axis ${\mathbb R}$ oriented as usual and define
\begin{eqnarray}
\vec \mathbf q^{(n)}_0(w):= \left[
\begin{array}{ccc}
q_{n-2}(w) &
q_{n-1}(w) &
q_{n}(w)
\end{array}\right]^t, \quad
\vec\mathbf q_{_1}^{(n)} (w):= \int \vec \mathbf q^{(n)}_0(y) \frac{{\rm d}\beta(y)}{w-y},
\quad \vec \mathbf q_{_2}^{(n)}(w) :=\int \vec \mathbf q_{_1}^{(n)}(x)\frac{{\rm d}\alpha^*(x)}{w-x}.
\end{eqnarray}
For simplicity of notation we will suppress the superscript $^{(n)}$ in most of the following discussions, only to restore it when necessary for clarity; the main point is that an arrow on top of the corresponding vector will denote a ``window'' of three consecutive entries of either the ordinary vector $\mathbf q$ (index $a=0$), or the auxiliary vectors $\mathbf q_{_a}$ (index $a=1, 2$, see Def. \ref{defauxwave}) which, as we might recall at this point,
combine the polynomials and the corresponding remainders in the
Hermite-Pad\'{e} approximation problem given by Theorem \ref{thm:Padeq}.
Some simple observations are in order.
The vector $\vec\mathbf q_{_1}(w)$ is an analytic vector which has a jump--discontinuity on the support of ${\rm d}\beta$ contained in the positive real axis. As $w\to \infty$ (away from the support of ${\rm d}\beta$) it decays as $\frac 1 w$. Its jump-discontinuity is (using Plemelj formula)
\begin{equation}
\vec\mathbf q_{_1}(w)_+ = \vec\mathbf q_{_1}(w)_- - 2\pi i \frac{{\rm d}\beta}{{\rm d} w} \vec\mathbf q_{_0}(w)\ ,\ \ w\in supp({\rm d} \beta).
\end{equation}
Looking at the leading term at $w=\infty$ we see that
\begin{equation}
\vec \mathbf q_{_1}(w) =\frac{1}{w} \begin{bmatrix}
\eta_{n-2}&
\eta_{n-1}&
\eta_{n}
\end{bmatrix}^t+ \mathcal O(1/w^{2})\ .
\end{equation}
The vector $\vec \mathbf q_{_2}(w)$ is also analytic with a jump discontinuity on the {\bf reflected support} of ${\rm d}\alpha$ (i.e. on $supp ({\rm d} \alpha^*)$). In view of Theorem \ref{thm:Padeq}, recalling that $\mathbf q_2$ are remainders
of the Hermite-Pad\'{e} approximation problem of type II,
we easily see that
\begin{eqnarray}
\vec\mathbf q_{_2}(w) = \begin{bmatrix}
\displaystyle \frac {c_{n-2}} {(-w)^{n-1}} &
\displaystyle \frac {c_{n-1}} {(-w)^{n}} &
\displaystyle \frac {c_n} {(-w)^{n+1}}
\end{bmatrix}^t
(1+\mathcal O(1/w)), \qquad
c_n := \langle x^n|q_n\rangle = \sqrt{\frac {D_{n+1}}{D_n}} >0.
\label{q2asym}
\end{eqnarray}
The jump-discontinuity of $\vec \mathbf q_{_2}$ is
\begin{equation}
\vec\mathbf q_{_2}(w)_+ = \vec\mathbf q_{_2}(w)_- - 2\pi i \frac{{\rm d} \alpha^*}{{\rm d} w}\vec \mathbf q_{_1}(w)\ ,\ \ w \in supp ({\rm d} \alpha^*).
\end{equation}
The behavior of $\vec\mathbf q_{_0}(w)$ at infinity is
\begin{equation}
\vec\mathbf q_{_0}(w) =
\begin{bmatrix}
\displaystyle \frac{w^{n-2}} {c_{n-2}}
&
\displaystyle\frac {w^{n-1}}{c_{n-1}} &
\displaystyle \frac {w^n}{c_n}
\end{bmatrix}^t(1+\mathcal O(1/w)),
\end{equation}
with the same $c_n$'s as in \eqref{q2asym}.
Define the matrix
\begin{equation}
\Gamma(w) :=\overbrace{\begin{bmatrix}
1&-c_n \eta_n&0\\
0&1&0\\
0&(-1)^{n-1}\frac{\eta_{n-2}}{c_{n-2}}&1 \end{bmatrix}
\left[\begin{array}{ccc}
0 &0&c_n
\cr
0 &\frac{1}{\eta_{n-1}}&0
\cr
\frac {(-1)^{n}} {c_{n-2}}&0&0
\end{array}\right] }^{=: \mathcal N_q} [\vec \mathbf q_{_0}^{(n)}(w), \vec \mathbf q_{_1}^{(n)}(w), \vec \mathbf q_{_2}^{(n)}(w)]
\label{normalizedqRHP}
\end{equation}
\begin{proposition}
\label{RHP1}
The matrix $\Gamma(w)$ is analytic on ${\mathbb C} \setminus (supp({\rm d}\beta)\cup
supp({\rm d} \alpha^*))$. Moreover, it satisfies the jump conditions
\begin{equation} \label{eq:RHq}
\begin{split}
\Gamma(w)_+ & = \Gamma(w)_-\left[\begin{array}{ccc}
1 & -2\pi i \frac{{\rm d} \beta}{{\rm d} w} & 0 \cr
0&1&0\cr
0&0&1
\end{array}\right]\ , \qquad w\in supp({\rm d}\beta)\subset {\mathbb R}_+\cr
\Gamma(w)_+ & = \Gamma(w)_- \left[
\begin{array}{ccc}
1&0&0\\
0&1& -2\pi i \frac{{\rm d} \alpha^*}{{\rm d} w}\\
0&0&1
\end{array}
\right]\ ,\qquad w\in supp({\rm d} \alpha^*)\subset {\mathbb R}_-
\end{split}
\end{equation}
and its asymptotic behavior at $w=\infty$ is
\begin{eqnarray} \label{eq:Gamma-as}
\Gamma(w) = ({\bf 1} + \mathcal O(w^{-1}))\left[
\begin{array}{ccc}
w^n& 0 & 0 \\
0& w^{-1}& 0 \\
0&0&w^{-n+1}
\end{array}
\right]
\end{eqnarray}
Moreover, $\Gamma(w)$ can be written as:
\begin{align}\label{eq:q-recovery}
\Gamma(w)=\begin{bmatrix}c_n\eta_n&0&0\\0&\frac{1}{\eta_{n-1}}&0\\0&0&\frac{(-1)^{n-1}\eta_{n-2}}{c_{n-2}}\end{bmatrix}
\begin{bmatrix}\widehat q_{n-1}& \widehat q_{1,n-1}&\widehat q_{2,n-1}\\
q_{n-1}&q_{1,n-1}&q_{2,n-1}\\\widehat q_{n-2}& \widehat q_{1,n-2}&\widehat q_{2,n-2}\end{bmatrix}. \end{align}
\end{proposition}
\begin{proof}
All the properties listed are obtained from elementary matrix computations.
\end{proof}
\begin{remark}
An analogous problem with the r\^oles of $\alpha,\beta$, etc., interchanged, characterizes the monic orthogonal polynomials $p_{n-1}(x)$ of degree $n-1$ in $x$.
\end{remark}
\begin{coroll} \label{cor:RHq}
Given $n\in \mathbb{N}$, the absolutely continuous measures ${\rm d}\beta$ and ${\rm d} \alpha^*$ with $supp({\rm d}\beta)\subset \mathbb{R}_+$ and $supp({\rm d} \alpha^*)\subset \mathbb{R}_-$, and assuming the existence of all the bimoments
$I_{ij}$, there exists a unique
matrix $\Gamma(w)$ solving the RHP specified by equations
\eqref{eq:RHq}, \eqref{eq:Gamma-as}. The solution characterizes
uniquely the polynomials $q_{n-1}$ as well as $\widehat q_{n-1}$. In particular,
the normalization constants $c_{n-1},\eta_{n-1}$ (i.e. the ``norm'' of the monic orthogonal polynomials and the $\beta$ average of the $q_{n-1}$) are read off the following expansions
\begin{eqnarray}
\Gamma_{2,1}(w) = \frac 1{c_{n-1}\eta_{n-1}} w^{n-1} + \mathcal O(w^{n-2}),\qquad
\Gamma_{2,3}(w) = (-1)^{n} \frac {c_{n-1}}{\eta_{n-1}w^n} + \mathcal O(w^{-n-1})
\end{eqnarray}
or, equivalently,
\begin{equation}
\frac{1}{\eta_{n-1}^2} =(-1)^n \lim_{w\to \infty} w\Gamma_{2,1}(w)\Gamma_{2,3}(w),\qquad
c_{n-1}^2=(-1)^n\lim_{w\to \infty} w^{2n-1}\frac{\Gamma_{2,3}(w)}{\Gamma_{2,1}(w)}.
\end{equation}
\end{coroll}
\begin{proof}
Given ${\rm d}\beta$ and ${\rm d}\alpha^*$ it suffices
to construct the Nikishin systems $W_{\beta}, W_{\beta \alpha^*}$ and
$W_{\alpha^*}, W_{\alpha^*\beta}$ followed by solving the Hermite-Pad\'{e}
approximation problems given by equations \eqref{eq:PadeA}. The existence of the solution is ensured by the existence of all bimoments $I_{ij}$ (see equation
\eqref{eq:bimoments} for the definition). Then one constructs
the polynomials $\widehat q_{j}$, finally the matrix $\Gamma(w)$ using
equation \eqref{eq:q-recovery}. By construction $\Gamma(w)$ satisfies the
Riemann-Hilbert factorization problem specified by equations \eqref{eq:RHq} and
\eqref{eq:Gamma-as}.
Since the determinant of $\Gamma(w)$ is constant in $w$ (and equal to one), the solution of the Riemann--Hilbert problem is unique. The formulas for
$\eta_{n-1}$ and $c_{n-1}$ follow by elementary matrix computations.
\end{proof}
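\begin{remark}
The last limits are obtained by direct substitution of the stated expansions; for instance
$$
w\,\Gamma_{2,1}(w)\,\Gamma_{2,3}(w)=
w\cdot\frac{w^{n-1}}{c_{n-1}\eta_{n-1}}\cdot
\frac{(-1)^{n}c_{n-1}}{\eta_{n-1}w^{n}}\,\big(1+\mathcal O(1/w)\big)
\longrightarrow \frac{(-1)^{n}}{\eta_{n-1}^{2}},
$$
and, likewise, $w^{2n-1}\,\Gamma_{2,3}(w)/\Gamma_{2,1}(w)\to(-1)^{n}c_{n-1}^{2}$.
\end{remark}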
\begin{remark}
By multiplication on the right by a diagonal matrix, i.e. setting
$ \mathbb Y (w) := \Gamma(w)\, {\rm diag}\left({\rm e}^{-\frac {2V+U^\star}{3\hbar}},\ {\rm e}^{\frac {V-U^\star}{3\hbar}},\ {\rm e}^{\frac{2U^\star+V}{3\hbar}}\right),$
one can reduce the RHP to an equivalent one with constant jumps. It then follows that $\mathbb Y(w)$ solves a linear ODE with the same singularities as $V', {U^\star}'$; for example if $U', V'$ are rational functions then so is the coefficient matrix of the ODE and the orders of poles do not exceed those of $V', U'$. In this case it can be shown \cite{Bertola:MomentTau} that the principal minors of the matrix of bimoments are isomonodromic tau--functions in the sense of Jimbo--Miwa--Ueno \cite{JMU}.
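To see the mechanism at work on the first jump (a sketch, writing $D(w)$ for the diagonal factor above and assuming the convention $U^\star(w):=U(-w)$, so that $\frac{{\rm d} \alpha^*}{{\rm d} w}={\rm e}^{-U^\star(w)/\hbar}$ on $supp({\rm d}\alpha^*)$): since $D(w)$ itself has no jump, for $w\in supp({\rm d}\beta)$ one finds
$$
\mathbb Y(w)_+=\mathbb Y(w)_-\,D(w)^{-1}\left[
\begin{array}{ccc}
1&-2\pi i\, {\rm e}^{-V/\hbar}&0\\
0&1&0\\
0&0&1
\end{array}\right]D(w)=
\mathbb Y(w)_-\left[
\begin{array}{ccc}
1&-2\pi i&0\\
0&1&0\\
0&0&1
\end{array}\right],
$$
because the $(1,2)$ entry is multiplied by ${\rm e}^{\frac{2V+U^\star}{3\hbar}}\,{\rm e}^{\frac {V-U^\star}{3\hbar}}={\rm e}^{V/\hbar}$; the jump on $supp({\rm d}\alpha^*)$ becomes constant by the same computation.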
\end{remark}
\subsection{Riemann--Hilbert problem for the $\widehat \mathbf p$--BOPs}
Referring to the defining properties of $\widehat p_n(x)$ as indicated in Prop. \ref{hattedBOPs} we are going to define a second $3\times 3$ local RHP that characterizes them.
Define
\begin{equation}
\vec{\widehat \mathbf p_{0}}(z) := \left[\begin{array}{ccc}
\widehat p_{n-2}(z) &
\widehat p_{n-1}(z) &
\widehat p_{n}(z)
\end{array}\right]^t
\end{equation}
and $\vec {\widehat \mathbf p}_{1,2}(z)$ as the same {\bf windows}
of the auxiliary vectors $\widehat \mathbf p _{1,2}$ introduced in Definition \ref{defauxwave}.
We first study the large $z$ asymptotic behavior of
$\widehat p_{0,n}(z), \widehat p _{1,n}(z), \widehat p_{2,n}(z)$.
\begin{lemma}
The asymptotic behavior at $z \to \infty, z\in {\mathbb C}_{\pm}$ is given by:
\begin{align}\label{eq:phat-as}
&\widehat p_{0,n}(z)=-\frac{\eta_n}{c_n}z^n(1+\mathcal{O}(1/z)),\\
&\widehat p_{1,n}(z)=-1+ \mathcal{O}(1/z), \\
&\widehat p_{2,n}(z)=(-1)^n \frac{c_{n+1} \eta_{n+1}}{z^{n+2}}(1+
\mathcal{O}(1/z)).
\end{align}
\end{lemma}
\begin{proof}
We give a proof for $\widehat p_{1,n}(z)=\int \frac{\widehat p_{0,n}(x)}{z-x}{\rm d}\alpha(x)+
\frac{1}{\beta_0}\langle \widehat p_{0,n}|1\rangle $. The first term is $\mathcal{O}(\frac{1}{z})$,
while the second term can be computed using biorthogonality and the fact that
$\widehat p_{0,n}=-(\eta_n p_{0,n}+\eta_{n-1}p_{0,n-1}+\cdots+ \eta_0p_{0,0})$.
Thus the second term equals $-\frac{\eta_0}{\beta_0}\langle p_{0,0}|1\rangle =-1$, since
$\eta_0=q_0 \beta_0$, hence the claim for $\widehat p_{1,n}(z)$ follows.
The remaining statements are proved in a similar manner.
\end{proof}
For reasons of normalization, and in full analogy with equation
\eqref{normalizedqRHP}, we
arrange the windows of the $\widehat \mathbf p$ wave vectors into the matrix
\begin{equation}
\widehat \Gamma(z) =\overbrace{
\left[
\begin{array}{ccc}
0&0& -\frac {c_n}{\eta_n} \\
0&-1&0\\
\frac {(-1)^{n}}{c_{n-1}\eta_{n-1}} &0&0
\end{array}
\right]\left[
\begin{array}{crc}
1 & -1 & 0\\
0 &1&0\\
0&-1 &1
\end{array}
\right] }^{=: \mathcal N_{\widehat p}} \left[\vec{\widehat \mathbf p}(z),\vec{\widehat \mathbf p}_1(z),\vec{\widehat \mathbf p}_2(z)\right].
\label{normalizedphatRHP}
\end{equation}
\begin{proposition}
\label{RHP2}
The matrix $\widehat \Gamma(z)$ is analytic in
${\mathbb C} \setminus supp({\rm d}\alpha)\cup supp({\rm d}\beta^*)$. Moreover, it
satisfies the jump conditions
\begin{equation} \label{eq:RHphat}
\begin{split}
\widehat \Gamma(z) _+ &= \widehat \Gamma(z)_- \left[
\begin{array}{ccc}
1 & -2\pi i \frac{{\rm d} \alpha}{{\rm d} z} & 0\cr
0&1&0\\
0&0&1
\end{array}
\right]\ ,\ \ z\in supp({\rm d}\alpha)\subseteq {\mathbb R}_+\\
\widehat \Gamma(z) _+ &= \widehat \Gamma(z)_- \left[
\begin{array}{ccc}
1 & 0 & 0\cr
0&1& -2\pi i \frac{{\rm d} \beta^*}{{\rm d} z}\\
0&0&1
\end{array}
\right]\ ,\ \ z\in supp({\rm d} \beta^*)\subseteq {\mathbb R}_-,
\end{split}
\end{equation}
and its asymptotic behavior at $z=\infty$ is
\begin{eqnarray} \label{eq:Gammahat-as}
\widehat \Gamma(z) = \left({\bf 1} + \mathcal O\left(\frac 1 z\right)\right)\left[
\begin{array}{ccc}
z^n & 0& 0\cr
0 &1 &0\cr
0 &0& \frac 1{ z^{n}}
\end{array}
\right].
\end{eqnarray}
$\widehat \Gamma(z)$ can be written as:
\begin{align}\label{eq:p-recovery}
\widehat \Gamma(z)=\begin{bmatrix}c_n&0&0\\0&-1&0\\0&0&\frac{(-1)^n}{c_{n-1}}\end{bmatrix}
\begin{bmatrix}p_{0,n}&p_{1,n}&p_{2,n}\\
\widehat p_{0,n-1}&\widehat p_{1,n-1}&\widehat p_{2,n-1}\\ p_{0,n-1}& p_{1,n-1}&p_{2,n-1}\end{bmatrix}. \end{align}
\end{proposition}
The existence and uniqueness of the solution of the Riemann-Hilbert problem
\eqref{eq:RHphat}, \eqref{eq:Gammahat-as} is proved
in a similar way to the proof of Corollary \ref{cor:RHq}.
\begin{coroll}
Given $n\in \mathbb{N}$, the absolutely continuous measures ${\rm d}\alpha$ and ${\rm d} \beta^*$ with $supp({\rm d}\alpha)\subset \mathbb{R}_+$ and $supp({\rm d} \beta^*)\subset \mathbb{R}_-$, and assuming the existence of all the bimoments
$I_{ij}$, there exists a unique
matrix $\widehat \Gamma(z)$ solving the RHP specified by equations
\eqref{eq:RHphat}, \eqref{eq:Gammahat-as}. The solution characterizes
uniquely the polynomials $\widehat p_{n-1}$ and $p_{n}$.
\end{coroll}
\section{Acknowledgments}
M.B. would like to thank the Department of Mathematics of the
University of Notre Dame for hospitality during which the project was initiated and J. Harnad for insight on the relationship of Cauchy biorthogonal polynomials with matrix models.
While working on this project, M. B. and M. G. enjoyed the hospitality of
the Department of Mathematics, University of Saskatchewan and M. G. and J. S.
enjoyed the hospitality of the Centre de recherches math\'ematiques,
Universit\'e de Montr\'eal.
J.S. would also like to thank
H. Lundmark for an ongoing collaboration on the cubic string problem which
motivated many of the questions addressed in this paper.
\def{\partial_{x}}{{\partial_{x}}}
\def{\rm div}{{\rm div}}
\def{\rm Graph}{{\rm Graph}}
\def{{}^\prime{}^\prime}{{{}^\prime{}^\prime}}
\def{\rm B}{{\rm B}}
\def{\cal M}{{\cal M}}
\baselineskip= 17.2pt plus 0.6pt
\font\titlefont=cmr17
\centerline{\titlefont A time-step approximation scheme}
\vskip 1 pc
\centerline{\titlefont for a viscous version of the Vlasov equation}
\vskip 4pc
\font\titlefont=cmr12
\centerline{ \titlefont {Ugo Bessi}\footnote*{{\rm
Dipartimento di Matematica, Universit\`a\ Roma Tre, Largo S.
Leonardo Murialdo, 00146 Roma, Italy.}} }{}\footnote{}{
{{\tt email:} {\tt bessi@matrm3.mat.uniroma3.it}. Work partially supported by the PRIN2009 grant "Critical Point Theory and Perturbative Methods for Nonlinear Differential Equations".}}
\vskip 0.5 pc
\font\tenrm=cmr10
\par
\vskip 2pc
\centerline{\bf Abstract}
Gomes and Valdinoci have introduced a time-step approximation scheme for a viscous version of Aubry-Mather theory; this scheme is a variant of that of Jordan, Kinderlehrer and Otto. Gangbo and Tudorascu have shown that the Vlasov equation can be seen as an extension of Aubry-Mather theory, in which the configuration space is the space of probability measures, i. e. the different distributions of infinitely many particles on a manifold. Putting the two things together, we show that Gomes and Valdinoci's theorem carries over to a viscous version of the Vlasov equation. In this way, we shall recover a theorem of J. Feng and T. Nguyen, but by a different and more "elementary" proof.
\vskip 2 pc
\centerline{\bf Introduction}
\vskip 1 pc
The Vlasov equation models a group of particles governed by an external potential $V$ and a mutual interaction $W$; we shall always suppose that the particles move on the $p$-dimensional torus ${\bf T}^p\colon=\frac{{\bf R}^p}{{\bf Z}^p}$, that $V$ and $W$ are sufficiently regular and that $V$ depends periodically on time. More precisely,
\noindent 1) $V\in C^4({\bf T}\times{\bf T}^p)$ and
\noindent 2) $W\in C^4({\bf T}^p)$; moreover $W$, seen as a periodic potential on ${\bf R}^p$, is even: $W(x)=W(-x)$. Up to adding a constant, we can suppose that $W(0)=0$.
Let ${\cal M}_1({\bf T}^p\times{\bf R}^p)$ denote the space of Borel probability measures on ${\bf T}^p\times{\bf R}^p$; we say that a continuous curve $\fun{\eta}{{\bf R}}{{\cal M}_1({\bf T}^p\times{\bf R}^p)}$ solves the Vlasov equation if it satisfies, in the weak sense, the continuity equation
$$\partial_t\eta_t+
{\rm div}_{(x,v)}(\eta_t\cdot(v,\partial_x P^{\eta_t}(x)))=0 \eqno (CE)$$
where $(x,v)$ are the position and velocity coordinates on
${\bf T}^p\times{\bf R}^p$,
$$P^{\eta_t}(t,x)=V(t,x)+W^{\eta_t}(x)$$
and
$$W^{\eta_t}(x)=\int_{{\bf T}^p\times{\bf R}^p}W(x-y) {\rm d} \eta_t(y,v) . $$
An idea underlying several papers (see for instance [1], [9], [10], [12]) is to consider the Vlasov equation as a Hamiltonian system with infinitely many particles, i. e. as a Hamiltonian system on the space ${\cal M}_1(\T^p)$ of probability measures on ${\bf T}^p$; in particular, one can define, on ${\cal M}_1(\T^p)$, both the Hopf-Lax semigroup and the Hamilton-Jacobi equation.
In this paper, we follow [13], adding a viscosity term to the Hopf-Lax semigroup; we want to check two things. The first one (theorem 1 below) is that the minimal characteristics are solutions of a Fokker-Planck equation whose drift is determined by Hamilton-Jacobi, exactly as in the case without viscosity. The second check concerns the time-discretization method of [14], which was developed for a final condition linear on measures, say
$$U_f(\mu)=\int_{{\bf T}^p}f {\rm d} \mu . $$
We would like to see if it survives when the final condition $U$ is merely differentiable. Our definition of differentiability will be a little different from the usual one: indeed, we shall approximate minimal characteristics through "discrete characteristics"; since we shall see that the latter always have a density, we shall differentiate $U$ as a function on $L^1({\bf T}^p)$, i. e. $U^\prime (\mu)$ will be a scalar function, an element of $L^\infty({\bf T}^p)$.
We are going to consider a Lagrangian on
${\bf R}\times{\bf T}^p\times{\bf R}^p$ given by
$$L^{\gamma_t}(t,q,\dot q)=
\frac{1}{2}|\dot q|^2-P^{\gamma_t}(t,q) $$
whose Legendre transform is
$$H^{\gamma_t}(t,q,p)=\frac{1}{2}|p|^2+P^{\gamma_t}(t,q) . $$
\thm{1} Let $\fun{U}{{\cal M}_1(\T^p)}{{\bf R}}$ be Lipschitz for the 1-Wasserstein distance and differentiable in the sense of section 4 below; let
${\cal L}^p$ denote the Lebesgue measure on ${\bf T}^p$. Then, the following three points hold.
\noindent 1) For every $\mu\in{\cal M}_1(\T^p)$ and every $m\in{\bf N}$, the $\inf$ below is a minimum.
$$(\Lambda^mU)(\mu)\colon =\inf\left\{
\int_{-m}^0\hbox{{\rm d}$t$}\int_{{\bf T}^p}L^{\frac{1}{2}\rho}(t,x,Y(t,x))\rho(t,x)\hbox{{\rm d}$x$}+
U(\rho(0){\cal L}^p) \right\} . \eqno (1) $$
In the formula above, the $\inf$ is taken over all the Lipschitz vector fields $Y$; the curve of measures $\rho$ is a weak solution of the Fokker-Planck equation
$$\left\{
\eqalign{
\partial_t\rho_t-\Delta\rho_t+{\rm div}(\rho_t\cdot Y)&=0,\quad t\in[-m,0]\cr
\rho_{-m}&=\mu .
} \right. \eqno (FP)_{-m,Y,\mu} $$
\noindent 2) The operator $\Lambda^m$ defined in point 1) has the semigroup property
$$\Lambda^{m+n}U=\Lambda^m\circ\Lambda^n U\quad
\forall m,n\in{\bf N} . $$
\noindent 3) There is a vector field $Y$ minimal in (1); it is given by
$Y=c-\partial_x u$, where $u$ solves the Hamilton-Jacobi equation with time reversed
$$\left\{
\eqalign{
\partial_t u +\Delta u-H^\rho(t,x,-\partial_xu)&=0,\quad
t\in(-m,0)\cr
u(0,x)&=f
} \right. \eqno(HJ)_{0,\rho,f} $$
for a suitable $f\in L^\infty({\bf T}^p)$.
\rm
\vskip 1pc
Note that [8] contains a stronger version of this theorem; in a sense, the aim of this paper is to show that it is possible to prove part of [8] using the technique of [14].
We briefly expand on this technique: roughly speaking, the difference with [15] is that the entropy term is embedded in the kinetic energy. Let us be more precise and describe the time-step, which is backwards in time. Given a continuous function $U$ on ${\cal M}_1(\T^p)$, we are going to define
$$U(-\frac{1}{n},\mu)=\min
\left\{
\int_{{\bf T}^p\times{\bf R}^p}[
\frac{1}{n}L^{\frac{1}{2}\mu}(\frac{-1}{n},x,nv)+\log\gamma(x,v)
] \gamma(x,v) {\rm d} \mu(x) {\rm d} v +U(\mu\ast\gamma)
\right\} -\log\left(\frac{n}{2\pi}\right)^\frac{p}{2}$$
where the minimum is over all the functions $\gamma$ on
${\bf T}^p\times{\bf R}^p$ such that $\gamma(x,\cdot)$ is a probability density on ${\bf R}^p$ for all $x$. One should look at $\gamma$ as the probability distribution of the velocities: a particle starting at $x$ has velocity $nv$ with probability $\gamma(x,v)$. Since $U$ is nonlinear, there is some work to do in order to show that the minimal $\gamma$ exists; we shall prove this in section 1 below. In section 2, we prove a bound on the $L^\infty$ norm of the minimal; in section 3, we shall iterate backward the formula above, getting the "discrete value function" $U(\frac{j}{n},\mu)$ for $j\le 0$; naturally, we shall also get a discrete characteristic
$\mu_\frac{j}{n},\mu_\frac{j+1}{n},\dots,\mu_0$. We shall show that the discrete value function is bounded as the time-step tends to zero. In section 4 we reduce to the linear case, expressing the minima of section 1 in terms of the differential of $U$ at the endpoint of the discrete characteristic. In section 5, we discuss the regularity of the linear problem. Thanks to this regularity, in section 6 we can prove that the discrete characteristics converge to a solution of the Fokker-Planck equation and that the discrete value function converges to a solution of Hamilton-Jacobi; this will end the proof of theorem 1.
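For orientation, we recall the kind of explicit formula which is available in the linear case of [14] and which is lost for nonlinear $U$ (a sketch; the computation is the same Lagrange multiplier argument which we shall use in lemma 1.1 below). If $U=U_f$, the potential part of $L^{\frac{1}{2}\mu}$ integrates, by the constraint on $\gamma$, to a quantity independent of $\gamma$, and the minimal $\gamma$ in the time-step above is
$$\gamma(x,v)=\frac{
e^{-\frac{n}{2}|v|^2-f(x-v)}
}{
\int_{{\bf R}^p}e^{-\frac{n}{2}|w|^2-f(x-w)} {\rm d} w
} , $$
as one checks imposing that $\frac{n}{2}|v|^2+\log\gamma(x,v)+1+f(x-v)$ be constant in $v$, together with $\int_{{\bf R}^p}\gamma(x,v) {\rm d} v=1$.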
\vskip 1pc
\vskip 2pc
\centerline{\bf \S 1}
\centerline{\bf The time-step: existence of the minimal}
\vskip 1pc
We begin with a few standard definitions.
\vskip 1pc
\noindent{\bf Definitions.} \noindent $\bullet$) We denote by ${\cal M}_1(\T^p)$ the space of Borel probability measures on ${\bf T}^p$.
\noindent $\bullet$) Let $\tilde x,\tilde y\in{\bf R}^p$, and let $x$, $y$ be their projections on ${\bf T}^p$. We define
$$|x-y|_{{\bf T}^p}\colon=\min_{k\in{\bf Z}^p}|\tilde x-\tilde y-k| . $$
\noindent $\bullet$) For $\lambda\ge 1$ and $\mu_1,\mu_2\in{\cal M}_1(\T^p)$, we set
$$d_\lambda(\mu_1,\mu_2)^\lambda\colon=\min
\int_{{\bf T}^p\times{\bf T}^p}|x-y|_{{\bf T}^p}^\lambda {\rm d} \Gamma(x,y)$$
where the minimum is over all the measures $\Gamma$ on
${\bf T}^p\times{\bf T}^p$ whose first and second marginals are $\mu_1$ and $\mu_2$ respectively; we recall from [2] that
$({\cal M}_1(\T^p),d_\lambda)$ is a complete metric space whose topology is equivalent to the weak$\ast$ one.
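A simple example may help fix ideas: if $\mu_i=\delta_{x_i}$ are Dirac masses, the only transfer plan is $\Gamma=\delta_{x_1}\otimes\delta_{x_2}$, and thus
$$d_\lambda(\delta_{x_1},\delta_{x_2})=|x_1-x_2|_{{\bf T}^p}
\qquad\forall\lambda\ge 1 . $$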
\vskip 1pc
The term on the right in the formula above is a minimum by a standard theorem ([2], [16]); a useful characterization of $d_1$ is the dual one, i. e.
$$d_1(\mu_1,\mu_2)=\sup\left\{
\int_{{\bf T}^p}f {\rm d} \mu_1-\int_{{\bf T}^p}f {\rm d} \mu_2
\right\} \eqno (1.1)$$
where the $\sup$ is taken over all the functions $f\in C({\bf T}^p)$ such that
$$|f(x)-f(y)|\le|x-y|_{{\bf T}^p}\qquad \forall x,y\in{\bf T}^p . $$
We need to adapt a few definitions of [14] to our situation.
\vskip 1pc
\noindent{\bf Definitions.} $\bullet$) Let $\mu\in{\cal M}_1(\T^p)$. We define
${\cal D}_\mu$ as the set of all the Borel functions
$\fun{\gamma}{{\bf T}^p\times{\bf R}^p}{[0,+\infty)}$ such that
$$\int_{{\bf R}^p}\gamma(x,v) {\rm d} v=1\txt{for $\mu$ a. e. } x\in{\bf T}^p .
\eqno (1.2)$$
\noindent $\bullet$) We denote by
$$\fun{\pi_{{\bf T}^p}}{{\bf T}^p\times{\bf R}^p}{{\bf T}^p},\qquad
\fun{\pi_{{\bf R}^p}}{{\bf T}^p\times{\bf R}^p}{{\bf R}^p},\qquad
\fun{\pi_{cover}}{{\bf R}^p}{{\bf T}^p}$$
the natural projections, and define
$\fun{\tilde\pi}{{\bf T}^p\times{\bf R}^p}{{\bf T}^p}$ by
$\tilde\pi=\pi_{cover}\circ\pi_{{\bf R}^p}$.
\noindent $\bullet$) If $\mu\in{\cal M}_1(\T^p)$ and $\gamma\in{\cal D}_\mu$, we define a measure on ${\bf T}^p$ by
$$\mu\ast\gamma=
(\pi_{{\bf T}^p}-\tilde\pi)_\sharp(\mu\otimes(\gamma(x,\cdot){\cal L}^p))$$
where ${\cal L}^p$ denotes the Lebesgue measure on ${\bf R}^p$; the sharp sign denotes, as usual, the push-forward of a measure. In other words, if $f\in C({\bf T}^p)$, then
$$\int_{{\bf T}^p}f(z) {\rm d} (\mu\ast\gamma)(z)=
\int_{{\bf T}^p\times{\bf R}^p}f(x-v)\gamma(x,v) {\rm d} \mu(x) {\rm d} v . $$
Note that, if $\gamma$ does not depend on $x\in{\bf T}^p$, this is the usual convolution of the two measures $\mu$ and $\gamma{\cal L}^p$. One can see $\gamma$ as the probability, for a particle placed in $x$, to jump to $x-v$; if the initial distribution of the particles is $\mu$,
$\mu\ast\gamma$ is the distribution after one jump.
\noindent $\bullet$) Let now $U\in C({\cal M}_1(\T^p),{\bf R})$; for $h>0$ and
$t\in{\bf R}$ we define
$$\fun{G^h_tU}{{\cal M}_1(\T^p)}{{\bf R}}$$
by
$$(G^h_tU)(\mu)=\inf_{\gamma\in{\cal D}_\mu}\left\{
\int_{{\bf T}^p\times{\bf R}^p}[
hL^{\frac{1}{2}\mu}(t,x,\frac{1}{h}v)+\log\gamma(x,v)
] \gamma(x,v) {\rm d} \mu(x) {\rm d} v+U(\mu\ast\gamma)
\right\} $$
where the Lagrangian $L^{\frac{1}{2}\mu}$ has been defined in the introduction.
\vskip 1pc
\noindent{\bf Observation.} For $c\in{\bf R}^p$, it is natural to consider the Lagrangian
$$L_c^{\gamma_t}(t,q,\dot q)=\cin{\dot q}-\inn{c}{\dot q}-
P^{\gamma_t}(t,q) . $$
Naturally, it is possible to prove theorem 1 for $L_c^{\gamma_t}$. Indeed, let
$$\fun{\tau_c}{{\bf T}^p}{{\bf T}^p},\qquad
\fun{\tau_c}{x}{x+c} $$
and
$$\hat U(\mu)=U((\tau_{hc})_\sharp\mu) . $$
If we set $\tilde\gamma(x,v)=\gamma(x,v+hc)$, it is easy to see that
$$\int_{{\bf T}^p\times{\bf R}^p}[
hL^{\frac{1}{2}\mu}_c(t,x,\frac{1}{h}v)+\log\gamma(x,v)
] \gamma(x,v) {\rm d} v {\rm d} \mu(x)+U(\mu\ast\gamma)=$$
$$\int_{{\bf T}^p\times{\bf R}^p}[
hL^{\frac{1}{2}\mu}_0(t,x,\frac{1}{h}v)+\log\tilde\gamma(x,v)
] \tilde\gamma(x,v) {\rm d} v {\rm d} \mu(x)+\hat U(\mu\ast\tilde\gamma) -
\frac{h}{2}|c|^2 . $$
In other words, a simple transformation brings the minima for
$L^{\frac{1}{2}\mu}_c$ into those for $L^{\frac{1}{2}\mu}_0$. We have restricted statement and proof of theorem 1 to the case $c=0$ to keep the notation (relatively) simple.
We want to write $G^h_t U$ in a different way. First of all, we define
$$A_h(\gamma,(x,v))=
\frac{1}{2h}|v|^2\gamma(x,v)+\gamma(x,v)\log\gamma(x,v) . $$
If $\gamma$ does not depend on $x\in{\bf T}^p$, we shall call this function
$A_h(\gamma,v)$.
Then, we note that the minimal $\gamma$ does not depend on the potential in $L_c^{\frac{1}{2}\mu}$ (though the value function $G^h_tU$ obviously does); indeed, since
$\gamma\in{\cal D}_\mu$, if $Z$ is any potential on ${\bf T}^p$, we have by Fubini
$$\int_{{\bf T}^p\times{\bf R}^p}
Z(x)\gamma(x,v) {\rm d} \mu(x) {\rm d} v=
\int_{{\bf T}^p}Z(x) {\rm d} \mu(x) . \eqno (1.3)$$
As a consequence,
$$(G^h_tU)(\mu)=\int_{{\bf T}^p}P^{\frac{1}{2}\mu}(t,x) {\rm d} \mu(x)+
\inf_{\gamma\in{\cal D}_\mu} S(U,\mu,\gamma) \eqno (1.4)$$
where the single particle functional $S$ is given by
$$S(U,\mu,\gamma)=
\int_{{\bf T}^p\times{\bf R}^p}A_h(\gamma,(x,v)) {\rm d} \mu(x) {\rm d} v+U(\mu\ast\gamma)
\eqno (1.5) $$
and the potential $P^{\frac{1}{2}\mu}$ is as in the introduction.
\vskip 1pc
\noindent{\bf Observation.} We must show that the integral in (1.5) is well-defined, though possibly $+\infty$. Indeed, denoting by
$f^-$ the negative part of a function $f$, we have that
$$\int_{{\bf T}^p} {\rm d} \mu(x)\int_{{\bf R}^p}A_h^-(\gamma,(x,v)) {\rm d} v=$$
$$\int_{{\bf T}^p} {\rm d} \mu(x)\int_{{\bf R}^p}\left[
\frac{1}{2h}|v|^2\gamma(x,v)+\gamma(x,v)\log\gamma(x,v)
\right]^- {\rm d} v\ge
-\int_{{\bf T}^p} {\rm d} \mu(x)\int_{{\bf R}^p}e^{
-1-\frac{1}{2h}|v|^2
} {\rm d} v=-e^{-1}(2\pi h)^\frac{p}{2} \eqno (1.6)$$
where the inequality comes from the fact that
$$\frac{1}{2h}|v|^2x+x\log x\ge -e^{
-1-\frac{1}{2h}|v|^2
} \qquad\forall x\ge 0 . $$
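The inequality is an elementary one-line check: for fixed $b\ge 0$, the convex function $\phi_b(x)=bx+x\log x$ satisfies $\phi_b^\prime(x)=b+1+\log x$, which vanishes at $x=e^{-1-b}$; hence
$$\min_{x\ge 0}\phi_b(x)=\phi_b(e^{-1-b})=-e^{-1-b} $$
and it suffices to take $b=\frac{1}{2h}|v|^2$.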
\vskip 1pc
We want to prove that the $\inf$ in (1.4) is a minimum; since $U$ is nonlinear, we cannot write the minimum explicitly as in [14]; we shall need a few lemmas, the first of which is an elementary fact on the behaviour of the Gaussian.
\lem{1.1} Let $h,\epsilon>0$. Then,
$$\min\left\{
\int_{{\bf R}^p}A_h(\gamma,v) {\rm d} v\;\colon\; \gamma\ge 0,\quad \int_{{\bf R}^p}\gamma(v) {\rm d} v=1,
\quad
\int_{{\bf R}^p}\frac{1}{2h}|v|^2\gamma(v) {\rm d} v=\frac{p\epsilon}{2} \right\}=$$
$$\frac{p\epsilon}{2}+\log\frac{1}{(2\pi\epsilon h)^\frac{p}{2}}-
\frac{p}{2} . \eqno (1.7) $$
\rm\vskip 1 pc\noindent{\bf Proof.\quad} We note that we are minimizing the strictly convex functional
$$\fun{J}{L^1((1+\frac{1}{2}|v|^2){\cal L}^p)}{{\bf R}\cup\{+\infty\}},\qquad
\fun{J}{\gamma}{\int_{{\bf R}^p}A_h(\gamma,v) {\rm d} v}$$
on the closed convex set
$$H=\left\{
\gamma\in L^1((1+\frac{1}{2}|v|^2){\cal L}^p)\;\colon\; \gamma\ge 0,\quad\int_{{\bf R}^p}\gamma(v) {\rm d} v=1,\quad
\int_{{\bf R}^p}\frac{1}{2h}|v|^2\gamma(v) {\rm d} v=\frac{p\epsilon}{2}
\right\} . $$
It is standard (see for instance the argument of proposition 1 of [14] or proposition 5.6 of chapter 1 of [6]) that, if we find a density $\gamma$ and $\chi,\delta\in{\bf R}$ solving the Lagrange multiplier problem
$$\left\{
\eqalign{
\frac{1}{2h}|v|^2+\log\gamma(v)+1&=\chi+\frac{\delta}{2h}|v|^2\cr
\int_{{\bf R}^p}\gamma(v) {\rm d} v&=1\cr
\int_{{\bf R}^p}\frac{1}{2h}|v|^2\gamma(v) {\rm d} v&=\frac{p\epsilon}{2}
}
\right. \eqno (1.8)$$
then $\gamma$ is the unique minimizer of $J$ restricted to $H$. Thus, solving (1.8) is next in the order of business.
By the first one of (1.8), we see that
$$\gamma(v)=e^{\chi-1}e^{-\frac{1-\delta}{2h}|v|^2} . $$
Since we want $\gamma\in L^1$, eventually we shall have to check that $\delta<1$. The constant $\chi$ is the unique one for which the second formula of (1.8) holds, i. e.
$$e^{\chi-1}=\left(
\frac{1-\delta}{2\pi h}
\right)^\frac{p}{2} . $$
The constant $\delta$ is chosen so that the third one of (1.8) holds:
$$\frac{p\epsilon}{2}=\int_{{\bf R}^p}\frac{1}{2h}|v|^2
\left(
\frac{1-\delta}{2\pi h}
\right)^\frac{p}{2}
e^{-\frac{1-\delta}{2h}|v|^2} {\rm d} v=
\frac{1}{(2\pi)^\frac{p}{2}}\cdot\frac{1}{1-\delta}
\int_{{\bf R}^p}\frac{1}{2}|y|^2e^{-\frac{|y|^2}{2}} {\rm d} y=
\frac{p}{2}\cdot\frac{1}{1-\delta} $$
where we have set $y=\sqrt\frac{1-\delta}{h}v$.
From this we get
$$1-\delta=\frac{1}{\epsilon} . $$
Since $\epsilon>0$, this implies that $\delta<1$, as we wanted. From the last four formulas,
$$\gamma(v)=\left(
\frac{1}{2\pi\epsilon h}
\right)^\frac{p}{2} e^{
-\frac{1}{2\epsilon h}|v|^2
} . $$
This yields the first equality below, while the second one follows from (1.8).
$$\int_{{\bf R}^p}\gamma(v)\log\gamma(v) {\rm d} v=
\int_{{\bf R}^p}\gamma(v)[
\log\frac{1}{(2\pi\epsilon h)^\frac{p}{2}}-\frac{1}{2\epsilon h}|v|^2
] {\rm d} v=
\log\frac{1}{(2\pi\epsilon h)^\frac{p}{2}}-\frac{p}{2} . $$
From the formula above and the third one of (1.8), we get the second equality below.
$$\int_{{\bf R}^p}A_h(\gamma,v) {\rm d} v=
\int_{{\bf R}^p}\left[
\frac{1}{2h}|v|^2\gamma(v)+\gamma(v)\log\gamma(v)
\right] {\rm d} v=
\frac{p\epsilon}{2}+\log\frac{1}{(2\pi\epsilon h)^\frac{p}{2}}-\frac{p}{2} $$
which is (1.7).
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\lem{1.2} Let $\mu\in{\cal M}_1(\T^p)$, let $C\in{\bf R}$ and let us consider the set
$E_\mu$ of the functions $\gamma\in{\cal D}_\mu$ such that
$$\int_{{\bf T}^p\times{\bf R}^p}A_h(\gamma,(x,v)) {\rm d} \mu(x) {\rm d} v\le C .
\eqno (1.9)$$
Then,
\noindent 1) $E_\mu$ is uniformly integrable for the measure
$\mu\otimes{\cal L}^p$ on ${\bf T}^p\times{\bf R}^p$.
\noindent 2) The set of the measures
$\{ \mu\otimes\gamma{\cal L}^p \}$ as $\mu$ varies in ${\cal M}_1(\T^p)$ and $\gamma$ varies in $E_\mu$ is tight on ${\bf T}^p\times{\bf R}^p$.
\noindent 3) The set ${\cal D}_\mu$ is weakly closed in
$L^1(\mu\otimes{\cal L}^p)$.
\rm\vskip 1 pc\noindent{\bf Proof.\quad} We begin with point 1). We fix $a>1$ and consider
$\gamma\in E_\mu$; the first inequality below is (1.9), the second one follows from Fubini, (1.6) and the fact that $\log\gamma\ge 0$ if
$\gamma\ge a$; the last one is obvious.
$$C\ge\int_{{\bf T}^p\times{\bf R}^p}
\left[
\frac{1}{2h}|v|^2\gamma(x,v)+\gamma(x,v)\log\gamma(x,v)
\right] {\rm d} \mu(x) {\rm d} v\ge$$
$$-e^{-1}(2\pi h)^\frac{p}{2}+
\int_{{\bf T}^p} {\rm d} \mu(x)\int_{ \{ \gamma\ge a \} }\left[
\frac{1}{2h}|v|^2\gamma(x,v)+\gamma(x,v)\log\gamma(x,v)
\right] {\rm d} v\ge$$
$$-e^{-1}(2\pi h)^\frac{p}{2}+
\log a\int_{ \{ \gamma\ge a \} }\gamma(x,v) {\rm d} \mu(x) {\rm d} v . $$
This implies immediately that $E_\mu$ is uniformly integrable.
We prove point 2), i. e. that for all $\epsilon>0$ we can find
$R>0$ such that
$$\int_{{\bf T}^p\times B(0,R)^c}\gamma(x,v) {\rm d} \mu(x) {\rm d} v\le\epsilon
\qquad\forall\mu\in{\cal M}_1(\T^p),\quad\forall \gamma\in E_\mu . \eqno (1.10)$$
If we show that
$$\int_{{\bf T}^p\times{\bf R}^p}
\cinh{v}\gamma(x,v) {\rm d} \mu(x) {\rm d} v\le C_5\qquad
\forall\mu\in{\cal M}_1(\T^p),\quad\forall\gamma\in E_\mu $$
then (1.10) follows by the Chebyshev inequality. By Fubini, the last formula is equivalent to
$$\int_{{\bf T}^p}r_\gamma(x) {\rm d} \mu(x)\le C_6\qquad
\forall\mu\in{\cal M}_1(\T^p),\quad\forall\gamma\in E_\mu
\eqno (1.11) $$
where $r_\gamma$ is defined by
$$\frac{p}{2}\cdot r_\gamma(x)\colon=\int_{{\bf R}^p}
\cinh{v}\gamma(x,v) {\rm d} v . $$
Since $\mu$ is a probability measure, (1.11) follows if we prove that, for some $A>0$, there is $C_7>0$ such that,
$$\int_{
\{ x\;\colon\; r_\gamma(x)>A \}
} r_\gamma(x) {\rm d} \mu(x) \le C_7\qquad
\forall\mu\in{\cal M}_1(\T^p),\quad\forall\gamma\in E_\mu . \eqno (1.12)$$
We call $g(\epsilon)$ the function on the right hand side of (1.7); the first inequality below comes from (1.9) and Fubini, the second one from (1.7).
$$C\ge\int_{{\bf T}^p} {\rm d} \mu(x)
\int_{{\bf R}^p}A_h(\gamma,(x,v)) {\rm d} v\ge
\int_{{\bf T}^p}g(r_\gamma(x)) {\rm d} \mu(x) . $$
Since the logarithmic term in the definition of $g$ grows less than linearly, we easily get that there is $A>0$ such that, for $y\ge A$, we have
$g(y)\ge\frac{y}{4}$; since $g$ is bounded from below, the last formula implies that there is $C_8>0$, independent of $\gamma$ and
$\mu$, such that
$$C_8\ge\int_{
\{ x\;\colon\; r_\gamma(x)\ge A \}
}
\frac{r_\gamma(x)}{4} {\rm d} \mu(x)\qquad
\forall\mu\in{\cal M}_1(\T^p),\quad\forall\gamma\in E_\mu . $$
But this is (1.12).
We prove point 3). Let $B\subset{\bf T}^p$ be a Borel set; the function
$$\fun{}{\gamma}{
\int_{B} {\rm d} \mu(x)\int_{{\bf R}^p}\gamma(x,v) {\rm d} v
} $$
is continuous for the weak topology of $L^1(\mu\otimes{\cal L}^p)$; moreover, if $\gamma\in{\cal D}_\mu$,
$$\int_{B} {\rm d} \mu(x)\int_{{\bf R}^p}\gamma(x,v) {\rm d} v=
\mu(B) . $$
As a result, if $\bar\gamma$ belongs to the weak closure of
${\cal D}_\mu$, then
$$\int_{
B
} {\rm d} \mu(x)\int_{{\bf R}^p}\bar\gamma(x,v) {\rm d} v=
\mu(B) $$
for every Borel set $B\subset{\bf T}^p$. If we set
$$R(x)=\int_{{\bf R}^p}\bar\gamma(x,v) {\rm d} v $$
the last formula implies that
$$\mu(B)=\int_{B}R(x) {\rm d} \mu(x) $$
for every Borel set $B\subset{\bf T}^p$.
It is standard that this implies that $R(x)=1$ for
$\mu$ a. e. $x\in{\bf T}^p$, i. e. that $\bar\gamma\in{\cal D}_\mu$.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\lem{1.3} Let $U\in C({\cal M}_1(\T^p))$ and let $\mu\in{\cal M}_1(\T^p)$; then the function
$$\fun{I}{{\cal D}_\mu}{{\bf R}}$$
$$\fun{I}{\gamma}{\int_{{\bf T}^p\times{\bf R}^p}\left[
hL^{\frac{1}{2}\mu}(t,x,\frac{1}{h}v)+\log\gamma(x,v)
\right] \gamma(x,v) {\rm d} \mu(x) {\rm d} v+ U(\mu\ast\gamma)}$$
is l. s. c. for the weak topology of $L^1(\mu\otimes{\cal L}^p)$.
\rm\vskip 1 pc\noindent{\bf Proof.\quad} {\bf Step 1.} We begin by showing that the function
$$\fun{}{\gamma}{U(\mu\ast\gamma)}$$
is continuous; since we are supposing that $\fun{U}{{\cal M}_1(\T^p)}{{\bf R}}$ is continuous, it suffices to prove that
$\fun{}{\gamma}{\mu\ast\gamma}$ is continuous from ${\cal D}_\mu$ endowed with the weak topology of $L^1(\mu\otimes{\cal L}^p)$ to the
weak$\ast$ topology of
${\cal M}_1(\T^p)$. Let $\gamma\in{\cal D}_\mu$ be fixed and let $f\in C({\bf T}^p)$; it suffices to note that we can write the weak neighbourhood of $\gamma$
$$\left\{
\gamma^\prime\;\colon\;
\left\vert
\int_{{\bf T}^p\times{\bf R}^p}f(x-v)\gamma^\prime(x,v) {\rm d} \mu(x) {\rm d} v-
\int_{{\bf T}^p\times{\bf R}^p}f(x-v)\gamma(x,v) {\rm d} \mu(x) {\rm d} v
\right\vert <\epsilon
\right\} $$
as
$$\left\{
\gamma^\prime\;\colon\;
\left\vert
\int_{{\bf T}^p}f(z) {\rm d} (\mu\ast\gamma^\prime)(z)-
\int_{{\bf T}^p}f(z) {\rm d} (\mu\ast\gamma)(z)
\right\vert <\epsilon
\right\}$$
by the definition of $\mu\ast\gamma^\prime$ and $\mu\ast\gamma$.
\noindent{\bf Step 2.} We note that the linear function
$$\fun{I_{pot}}{\gamma}{
\int_{{\bf T}^p\times{\bf R}^p}P^{
\frac{1}{2}\mu
} (t,x)\gamma(x,v) {\rm d} \mu(x) {\rm d} v
} $$
does not depend on $\gamma$ by (1.3).
\noindent{\bf Step 3.} We prove that
$$\fun{I_{gauss}}{\gamma}{
\int_{{\bf T}^p\times{\bf R}^p}A_h(\gamma,(x,v)) {\rm d} \mu(x) {\rm d} v
} $$
is weakly l. s. c. Since $I_{gauss}$ is convex, it suffices to prove that it is l. s. c. for the strong topology of
$L^1(\mu\otimes{\cal L}^p)$. We saw after formula (1.6) that
$$\frac{1}{2h}|v|^2\gamma(v)+\gamma(v)\log\gamma(v)\ge
-e^{
-1-\frac{1}{2h}|v|^2
} . $$
Since the term on the right is integrable, lower semicontinuity follows from Fatou's lemma.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\prop{1.4} Let $U\in C({\cal M}_1(\T^p),{\bf R})$ and let $\mu\in{\cal M}_1(\T^p)$; then, the $\inf$ in the definition of $(G^h_tU)(\mu)$ is a minimum.
\rm\vskip 1 pc\noindent{\bf Proof.\quad} {\bf Step 1.} We begin by showing that $(G^h_tU)(\mu)$ is finite.
If we substitute
$$\gamma(x,v)=\left(\frac{1}{2\pi h}\right)^\frac{p}{2}e^{
-\frac{1}{2h}|v|^2
} $$
into (1.4) (or (1.5), which is the same up to a constant), we immediately get that $(G^h_tU)(\mu)<+\infty$; to prove that
$(G^h_tU)(\mu)>-\infty$, it suffices to prove that the functional $I$ defined in the last lemma is a sum of functions, each of which is bounded from below.
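For the upper bound one can actually be explicit: by lemma 1.1 with $\epsilon=1$, whose minimizer is precisely the Gaussian above, the entropy-kinetic term contributes
$$\int_{{\bf T}^p\times{\bf R}^p}A_h(\gamma,(x,v)) {\rm d} \mu(x) {\rm d} v=
-\frac{p}{2}\log(2\pi h) , $$
so that (1.4), together with the bound $||P^{\frac{1}{2}\mu}||_\infty\le M$ of (1.13) below and the boundedness of $U$, gives
$(G^h_tU)(\mu)\le M-\frac{p}{2}\log(2\pi h)+\max U$.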
First, the function taking $\gamma$ to $U(\mu\ast\gamma)$, i. e.
$$\fun{}{\gamma}{U(\mu\ast\gamma)}$$
is bounded from below because $U$, a continuous function on a compact space, is bounded from below.
Second, the functional $I_{pot}$ defined in the last lemma does not depend on $\gamma$ by (1.3); we have an explicit bound on its value since
$$||P^{\frac{1}{2}\mu}||_{C^4}\le M\colon=(
||V||_{C^4}+
||W||_{C^4}
) . \eqno (1.13)$$
Third,
$$I_{gauss}(\gamma)=
\int_{{\bf T}^p} {\rm d} \mu(x)
\int_{{\bf R}^p}\left[
\frac{1}{2h}|v|^2+\log\gamma(x,v)
\right] \gamma(x,v) {\rm d} v$$
is bounded below by (1.6).
\noindent {\bf Step 2.} Let $\{ \gamma_n \}$ be a sequence minimizing in (1.4); we assert that, up to subsequences,
$\gamma_n\rightharpoonup\gamma\in{\cal D}_\mu$.
We begin by showing that
$$\int_{{\bf T}^p\times{\bf R}^p} A_h(\gamma_n,(x,v)) {\rm d} \mu(x) {\rm d} v\le C
\eqno (1.14)$$
for some $C>0$ independent of $n$.
Since $\{ \gamma_n \}$ is minimizing in (1.4), by step 1 and (1.13) there is
$C_1>0$ such that
$$\int_{{\bf T}^p\times{\bf R}^p} A_h(\gamma_n,(x,v)) {\rm d} \mu(x) {\rm d} v+
U(\mu\ast\gamma_n)\le C_1\qquad\forall n\ge 1 . $$
Now (1.14) follows by the fact that $U$, being a continuous function on the compact space ${\cal M}_1(\T^p)$, is bounded.
By (1.14), we get that $\{ \gamma_n \}$ satisfies points 1) and 2) of lemma 1.2; by the Dunford-Pettis theorem, this implies that $\{ \gamma_n \}$ is weakly compact in $L^1(\mu\otimes{\cal L}^p)$. The weak limit $\gamma$ belongs to ${\cal D}_\mu$ by point 3) of lemma 1.2.
\noindent{\bf End of the proof.} By step 2, any minimizing sequence $\{ \gamma_n \}$ has a subsequence $\{ \gamma_{n_k} \}$ such that $\gamma_{n_k}\rightharpoonup\gamma\in{\cal D}_\mu$. Since the function $I$ is l. s. c. by lemma 1.3, $\gamma$ is a minimizer and the thesis follows.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\vskip 2pc
\centerline{\bf \S 2}
\centerline{\bf The time step: properties of the minimal}
\vskip 1pc
In this section, we prove proposition 2.3 below, which says that the modulus of continuity of $G^h_tU$ is only slightly larger than the modulus of continuity of $U$; and proposition 2.8, which says that, if $\gamma$ is minimal, then the $L^\infty$ norm of $\gamma$ (and that of
$\frac{1}{\gamma}$ on ${\bf T}^p\times B(0,2\sqrt p)$) is bounded in terms of the Lipschitz constant of $U$.
We begin with a standard fact from [14].
\lem{2.1} Let $U_1,U_2\in C({\cal M}_1(\T^p))$. Then, the following three points hold.
\noindent 1) If $U_1\le U_2$, then $G^h_t U_1\le G^h_tU_2$.
\noindent 2) For all $a\in{\bf R}$, $G^h_t(U_1+a)=G^h_tU_1+a$.
\noindent 3) $||G^h_tU_1-G^h_tU_2||_\infty\le||U_1-U_2||_\infty$.
\rm\vskip 1 pc\noindent{\bf Proof.\quad} Points 1) and 2) are immediate consequences of the definition of the operator $G^h_t$, i. e. of formula (1.4); point 3) follows from 1) and 2) in a standard way: since $U_1\le U_2+||U_1-U_2||_\infty$, points 1) and 2) give $G^h_tU_1\le G^h_tU_2+||U_1-U_2||_\infty$, and we conclude exchanging the r\^oles of $U_1$ and $U_2$.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
We need a technical fact, lemma 2.2. below, and some notation; the readers of [3] will recognize the "push forward by plans".
\vskip 1pc
\noindent{\bf Definition.} Let $\mu_0,\mu_1\in{\cal M}_1(\T^p)$, let $\Gamma$ be a transfer plan between $\mu_0$ and $\mu_1$ and let
$\gamma_0\in{\cal D}_{\mu_0}$.
Here and in the following, we shall always reserve the variable
$x\in{\bf T}^p$ for integration in $\mu_0$, and $y\in{\bf T}^p$ for integration in $\mu_1$.
We disintegrate $\Gamma$ as $\Gamma=\Gamma_y\otimes\mu_1$ (see [5], II.70 for the precise statement and proof of the disintegration theorem) and we set
$$\gamma_1(y,v)=\int_{{\bf T}^p}\gamma_0(x,v) {\rm d} \Gamma_y(x) . \eqno (2.1)$$
Formula (2.1) is just a generalized way of composing $\gamma$ with a map; indeed, if $\Gamma$ were induced by an invertible map $g$, then we would have
$$\gamma_1(y,v)=\gamma_0(g^{-1}(y),v) . $$
\lem{2.2} Let $\mu_0$, $\mu_1$, $\gamma_0$ and $\gamma_1$ be as in the definition above. Then
$$\gamma_1\in{\cal D}_{\mu_1} \txt{and} \leqno 1) $$
$$\int_{{\bf T}^p\times{\bf R}^p}A_h(\gamma_1,(y,v)) {\rm d} \mu_1(y) {\rm d} v\le
\int_{{\bf T}^p\times{\bf R}^p}A_h(\gamma_0,(x,v)) {\rm d} \mu_0(x) {\rm d} v .
\leqno 2) $$
Moreover, if $\Gamma$ is a transfer plan on which $d_1(\mu_0,\mu_1)$ is attained, we have that
$$d_1(\mu_0\ast\gamma_0,\mu_1\ast\gamma_1)\le d_1(\mu_0,\mu_1) .
\leqno 3)$$
\rm\vskip 1 pc\noindent{\bf Proof.\quad} The first equality below follows from (2.1), the second one from Fubini.
$$\int_{{\bf R}^p}\gamma_1(y,v) {\rm d} v=
\int_{{\bf R}^p} {\rm d} v\int_{{\bf T}^p}\gamma_0(x,v) {\rm d} \Gamma_y(x)=
\int_{{\bf T}^p} {\rm d} \Gamma_y(x)\int_{{\bf R}^p}\gamma_0(x,v) {\rm d} v . $$
Now recall that $\gamma_0\in{\cal D}_{\mu_0}$, and thus
$$\int_{{\bf R}^p}\gamma_0(x,v) {\rm d} v=1
\txt{for $\mu_0$ a. e. $x$.} $$
Since a $\mu_0$-null set is a $\Gamma_y$-null set for $\mu_1$ a. e.
$y$, the last two formulas imply that
$$\int_{{\bf R}^p}\gamma_1(y,v) {\rm d} v=1
\txt{for $\mu_1$ a. e. $y$.} $$
This proves point 1); we turn to point 2). The first equality below is (2.1); for the inequality, we consider the strictly convex function
$\phi(z)=z\log z$ and apply Jensen.
$$A_h(\gamma_1,(y,v))=
\int_{{\bf T}^p}\cinh{v}\gamma_0(x,v) {\rm d} \Gamma_y(x)+
\int_{{\bf T}^p}\gamma_0(x,v) {\rm d} \Gamma_y(x)\log\int_{{\bf T}^p}\gamma_0(x,v) {\rm d} \Gamma_y(x)\le$$
$$\int_{{\bf T}^p}[
\cinh{v}\gamma_0(x,v)+\gamma_0(x,v)\log\gamma_0(x,v)
] {\rm d} \Gamma_y(x)=\int_{{\bf T}^p}A_h(\gamma_0,(x,v)) {\rm d} \Gamma_y(x) . $$
Since $\phi(z)=z\log z$ is strictly convex, equality holds if there is an invertible minimal transfer map. Integrating, we get the inequality below.
$$\int_{{\bf T}^p\times{\bf R}^p}A_h(\gamma_1,(y,v)) {\rm d} \mu_1(y) {\rm d} v\le
\int_{{\bf T}^p\times{\bf R}^p} {\rm d} \mu_1(y) {\rm d} v\int_{{\bf T}^p}
A_h(\gamma_0,(x,v)) {\rm d} \Gamma_y(x)=$$
$$\int_{{\bf T}^p\times{\bf T}^p\times{\bf R}^p}A_h(\gamma_0,(x,v)) {\rm d} \Gamma(x,y) {\rm d} v=
\int_{{\bf T}^p\times{\bf R}^p}A_h(\gamma_0,(x,v)) {\rm d} \mu_0(x) {\rm d} v . $$
The first equality above follows because
$\Gamma=\Gamma_y\otimes\mu_1$, the second one because the first marginal of $\Gamma$ is $\mu_0$.
We prove 3). The first equality below is (1.1), while the second one is the definition of $\mu_1\ast\gamma_1$ and
$\mu_0\ast\gamma_0$; the third one is the definition of $\gamma_1$ in (2.1); the fourth one follows from the fact that $\Gamma=\Gamma_y\otimes\mu_1$ and the marginals of $\Gamma$ are $\mu_0$ and $\mu_1$.
$$d_1(\mu_1\ast\gamma_1,\mu_0\ast\gamma_0)=
\sup_{f\in Lip^1({\bf T}^p)}\left\vert
\int_{{\bf T}^p}f(y) {\rm d} (\mu_1\ast\gamma_1)(y)-
\int_{{\bf T}^p}f(x) {\rm d} (\mu_0\ast\gamma_0)(x)
\right\vert = $$
$$\sup_{f\in Lip^1({\bf T}^p)}\left\vert
\int_{{\bf T}^p\times{\bf R}^p}f(y-v)\gamma_1(y,v) {\rm d} \mu_1(y) {\rm d} v-
\int_{{\bf T}^p\times{\bf R}^p}f(x-v)\gamma_0(x,v) {\rm d} \mu_0(x) {\rm d} v
\right\vert = $$
$$\sup_{f\in Lip^1({\bf T}^p)}\left\vert
\int_{{\bf T}^p\times{\bf R}^p}f(y-v) {\rm d} \mu_1(y) {\rm d} v
\int_{{\bf T}^p}\gamma_0(x,v) {\rm d} \Gamma_y(x)-
\int_{{\bf T}^p\times{\bf R}^p}f(x-v)\gamma_0(x,v) {\rm d} \mu_0(x) {\rm d} v
\right\vert = $$
$$\sup_{f\in Lip^1({\bf T}^p)}\left\vert
\int_{{\bf T}^p\times{\bf T}^p\times{\bf R}^p}f(y-v)\gamma_0(x,v) {\rm d} \Gamma(x,y) {\rm d} v-
\int_{{\bf T}^p\times{\bf T}^p\times{\bf R}^p}f(x-v)\gamma_0(x,v) {\rm d} \Gamma(x,y) {\rm d} v
\right\vert \le $$
$$\int_{{\bf T}^p\times{\bf T}^p\times{\bf R}^p}
|x-y|_{{\bf T}^p}\gamma_0(x,v) {\rm d} \Gamma(x,y) {\rm d} v . $$
Recalling that $\gamma_0$ satisfies (1.2) for $\mu_0$ a. e. $x\in{\bf T}^p$, the formula above yields the inequality below; the equality comes from the fact that $\Gamma$ is a minimal transfer plan.
$$d_1(\mu_1\ast\gamma_1,\mu_0\ast\gamma_0)\le
\int_{{\bf T}^p\times{\bf T}^p}|x-y|_{{\bf T}^p} {\rm d} \Gamma(x,y)=
d_1(\mu_0,\mu_1) . $$
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\noindent{\bf Definition.} Let $U\in C({\cal M}_1(\T^p))$; we say that
$\fun{\omega}{[0,+\infty)}{[0,+\infty)}$ is a 1-modulus of continuity for
$U$ if
\noindent 1) $\omega$ is concave.
\noindent 2) $\omega(0)=0$.
\noindent 3) $|U(\mu_1)-U(\mu_2)|\le\omega(d_1(\mu_1,\mu_2))$ for all
$\mu_1,\mu_2\in{\cal M}_1(\T^p)$.
\prop{2.3} There is a constant $C>0$, depending only on the potentials $V$ and $W$, such that the following holds. Let $\omega$ be a 1-modulus of continuity for $U\in C({\cal M}_1(\T^p))$; then,
$\tilde\omega(z)\colon=Chz+\omega(z)$ is a 1-modulus of continuity for
$G^h_tU$.
In particular, if $U$ is $L$-Lipschitz for the 1-Wasserstein distance, then $G^h_tU$ is $(Ch+L)$-Lipschitz; if $U$ is continuous, then $G^h_tU$ is continuous.
\rm\vskip 1 pc\noindent{\bf Proof.\quad} We assert that it suffices to show the following: if
$\mu_0,\mu_1\in{\cal M}_1(\T^p)$ and $\gamma_0$ minimizes
$$\fun{}{\gamma}{S(U,\mu_0,\gamma)}$$
($\gamma_0$ exists by proposition 1.4), then we can find
$\gamma_1\in{\cal D}_{\mu_1}$ such that
$$S(U,\mu_1,\gamma_1)
+\int_{{\bf T}^p}P^{\frac{1}{2}\mu_1}(t,x) {\rm d} \mu_1(x)\le
S(U,\mu_0,\gamma_0)+
\int_{{\bf T}^p}P^{\frac{1}{2}\mu_0}(t,x) {\rm d} \mu_0(x)+
\tilde\omega(d_1(\mu_1,\mu_0)) .
\eqno (2.2)$$
Indeed, this implies by (1.4) that
$$(G^h_tU)(\mu_1)\le(G^h_tU)(\mu_0)+
\tilde\omega(d_1(\mu_1,\mu_0)) . $$
Exchanging the r\^oles of $\mu_1$ and $\mu_0$, we get that
$\tilde\omega$ is a modulus of continuity for $G^h_tU$, and the assertion follows.
To prove (2.2), we let $\Gamma$ be a minimal transfer plan between
$\mu_0$ and $\mu_1$ and define $\gamma_1$ by (2.1). Now, (1.3) implies the equality below.
$$\int_{{\bf T}^p\times{\bf R}^p}
[hL^{\frac{1}{2}\mu_1}(t,y,\frac{1}{h}v)+\log\gamma_1(y,v)]
\gamma_1(y,v) {\rm d} \mu_1(y) {\rm d} v-$$
$$\int_{{\bf T}^p\times{\bf R}^p}
[hL^{\frac{1}{2}\mu_0}(t,x,\frac{1}{h}v)+\log\gamma_0(x,v)]
\gamma_0(x,v) {\rm d} \mu_0(x) {\rm d} v
=$$
$$\int_{{\bf T}^p\times{\bf R}^p}A_h(\gamma_1,(y,v)) {\rm d} \mu_1(y) {\rm d} v-
\int_{{\bf T}^p\times{\bf R}^p}A_h(\gamma_0,(x,v)) {\rm d} \mu_0(x) {\rm d} v
+ \eqno (2.3)_a$$
$$h\int_{{\bf T}^p}V(t,x) {\rm d} \mu_0(x)-
h\int_{{\bf T}^p}V(t,y) {\rm d} \mu_1(y)
+ \eqno (2.3)_b $$
$$h\int_{{\bf T}^p}W^{\frac{1}{2}\mu_0}(x) {\rm d} \mu_0(x)-
h\int_{{\bf T}^p}W^{\frac{1}{2}\mu_1}(y)
{\rm d} \mu_1(y) . \eqno (2.3)_c $$
Let us tackle the terms $(2.3)_a$, $(2.3)_b$ and $(2.3)_c$; first of all, point 2) of lemma 2.2 implies that
$$(2.3)_a\le 0 . \eqno (2.4)$$
As for the term $(2.3)_b$, we have that
$$(2.3)_b=
h\int_{{\bf T}^p\times{\bf T}^p}[
V(t,x)-V(t,y)
] {\rm d} \Gamma(x,y) \le
h\int_{{\bf T}^p\times{\bf T}^p}
C_1|x-y|_{{\bf T}^p} {\rm d} \Gamma(x,y)=C_1hd_1(\mu_0,\mu_1) .
\eqno (2.5)$$
The first equality above follows because the marginals of $\Gamma$ are $\mu_0$ and $\mu_1$; the inequality follows because $V$ is
$C_1$-Lipschitz. The last equality follows from the fact that $\Gamma$ is minimal in the definition of $d_1(\mu_0,\mu_1)$.
Analogously, we get that
$$(2.3)_c=h\int_{{\bf T}^p\times{\bf T}^p}[
W^{\frac{1}{2}\mu_0}(x)-W^{\frac{1}{2}\mu_1}(y)
] {\rm d} \Gamma(x,y)\le $$
$$h\int_{{\bf T}^p\times{\bf T}^p}
|
W^{\frac{1}{2}\mu_0}(x)-W^{\frac{1}{2}\mu_0}(y)
| {\rm d} \Gamma(x,y)+
h\int_{{\bf T}^p\times{\bf T}^p}
|
W^{\frac{1}{2}\mu_1}(y)-W^{\frac{1}{2}\mu_0}(y)
| {\rm d} \Gamma(x,y) . $$
We can see as in (1.13) that the Lipschitz constant of
$W^{\frac{1}{2}\mu_0}$ is bounded by one half of the Lipschitz constant
$C_2$ of $W$; this implies as in (2.5) that
$$\int_{{\bf T}^p\times{\bf T}^p}
|
W^{\frac{1}{2}\mu_0}(x)-W^{\frac{1}{2}\mu_0}(y)
| {\rm d} \Gamma(x,y) \le \frac{1}{2} C_2 d_1(\mu_0,\mu_1) . $$
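(The Lipschitz bound used here is the one-line estimate
$$|
W^{\frac{1}{2}\mu_0}(x)-W^{\frac{1}{2}\mu_0}(y)
| =\frac{1}{2}\left\vert
\int_{{\bf T}^p}[W(x-z)-W(y-z)] {\rm d} \mu_0(z)
\right\vert \le\frac{1}{2} C_2|x-y|_{{\bf T}^p} , $$
which holds since $W$ is $C_2$-Lipschitz and $\mu_0$ is a probability measure.)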
On the other hand, since $W$ is $C_2$-Lipschitz, (1.1) yields the inequality below; the equality comes from the definition of
$W^{\frac{1}{2}\mu_i}$.
$$|
W^{\frac{1}{2}\mu_1}(y)-W^{\frac{1}{2}\mu_0}(y)
| =$$
$$\left\vert
\frac{1}{2}\int_{{\bf T}^p}W(x-y) {\rm d} \mu_1(x)-
\frac{1}{2}\int_{{\bf T}^p}W(x-y) {\rm d} \mu_0(x)
\right\vert \le
\frac{1}{2}C_2 d_1(\mu_1,\mu_0) . $$
By the last three formulas, we get that
$$(2.3)_c\le C_2h d_1(\mu_0,\mu_1) . $$
From (2.3), (2.4), (2.5) and the last formula, we get that
$$\int_{{\bf T}^p\times{\bf R}^p}\left[
hL^{\frac{1}{2}\mu_1}(t,y,\frac{1}{h}v)\gamma_1(y,v)
+\gamma_1(y,v)\log\gamma_1(y,v)
\right] {\rm d} \mu_1(y) {\rm d} v-$$
$$\int_{{\bf T}^p\times{\bf R}^p} \left[
hL^{\frac{1}{2}\mu_0}(t,x,\frac{1}{h}v)\gamma_0(x,v)+
\gamma_0(x,v)\log\gamma_0(x,v)
\right]
{\rm d} \mu_0(x) {\rm d} v \le
C_3h d_1(\mu_0,\mu_1) . $$
Since $\omega$ is the modulus of continuity of $U$, point 3) of lemma 2.2 implies that
$$|
U(\mu_1\ast\gamma_1)-U(\mu_0\ast\gamma_0)
| \le \omega(d_1(\mu_0,\mu_1)) . $$
Setting $C=C_3$, (2.2) is implied by the last two formulas.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
We begin the estimate on $||\gamma||_\infty$ with a technical lemma.
\lem{2.4} Let $\mu\in{\cal M}_1(\T^p)$, and let $\gamma_0,\gamma_1\in{\cal D}_\mu$. Let us suppose that the functions $\gamma_0(x,\cdot)$ and $\gamma_1(x,\cdot)$ coincide whenever $x$ does not belong to a Borel set
$E\subset{\bf T}^p$. Then,
$$d_1(\mu\ast\gamma_0,\mu\ast\gamma_1)\le
\sqrt p
\int_{E\times{\bf R}^p}|
\gamma_0(x,v)-\gamma_1(x,v)
| {\rm d} \mu(x) {\rm d} v . $$
\rm\vskip 1 pc\noindent{\bf Proof.\quad} We use the dual formulation (1.1) for the first equality below; the second one is the definition of $\mu\ast\gamma_i$; the third one follows because $\gamma_0$ and $\gamma_1$ coincide on
$E^c\times{\bf R}^p$; the inequality comes from the fact that, since
$f\in Lip^1({\bf T}^p)$ and
${\bf T}^p$ has diameter at most $\sqrt p$, we can as well suppose that
$||f||_\infty\le\sqrt p$.
$$d_1(\mu\ast\gamma_0,\mu\ast\gamma_1)=
\sup_{f\in Lip^1({\bf T}^p)}\left[
\int_{{\bf T}^p}f(z) {\rm d} (\mu\ast\gamma_0)(z)-\int_{{\bf T}^p}f(z) {\rm d} (\mu\ast\gamma_1)(z)
\right] = $$
$$\sup_{f\in Lip^1({\bf T}^p)}\left[
\int_{{\bf T}^p\times{\bf R}^p}
f(x-v)[
\gamma_0(x,v)-\gamma_1(x,v)
] {\rm d} \mu(x) {\rm d} v
\right] =$$
$$\sup_{f\in Lip^1({\bf T}^p)}\left[
\int_{E\times{\bf R}^p}
f(x-v)[
\gamma_0(x,v)-\gamma_1(x,v)
] {\rm d} \mu(x) {\rm d} v
\right] \le
\sqrt p \int_{E\times{\bf R}^p} |
\gamma_0(x,v)-\gamma_1(x,v)
| {\rm d} \mu(x) {\rm d} v . $$
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\lem{2.5} There is a constant
$C_1(L,h)$, depending only on $L,h>0$, for which the following happens. Let $U$ be $L$-Lipschitz for the 1-Wasserstein distance $d_1$, let $\mu\in{\cal M}_1(\T^p)$ and let
$\gamma$ minimize in the definition of $(G^h_tU)(\mu)$. Then,
$$||\gamma||_{L^\infty({\bf T}^p\times{\bf R}^p,\mu\otimes{\cal L}^p)} \le C_1(L,h) . $$
\rm\vskip 1 pc\noindent{\bf Proof.\quad} We are going to use the fact that the superlinear entropy term becomes huge when $\gamma$ is large; thus, if $|| \gamma ||_\infty$ is too large, we can take some mass from the region where $\gamma$ is big, smear it where $\gamma$ is small and obtain a function $\tilde\gamma$ such that
$$S(U,\mu,\tilde\gamma)< S(U,\mu,\gamma) , $$
contradicting the minimality of $\gamma$.
\noindent{\bf Step 1.} We define the set where $\gamma$ is large.
Let us suppose that
$||\gamma||_{L^\infty(\mu\otimes{\cal L}^p)}\ge 2A$; then, there is a Borel set
$D_A\subset{\bf T}^p\times{\bf R}^p$ such that
$$0<(\mu\otimes{\cal L}^p)(D_A) \txt{and}
\gamma(x,v)\ge A\qquad \forall (x,v)\in D_A . \eqno (2.6)$$
We denote by $D_A(x)$ its sections:
$$D_A(x)\colon=\{
v\in{\bf R}^p\;\colon\; (x,v)\in D_A
\} . $$
Since $\gamma\in{\cal D}_\mu$, Chebyshev's inequality implies that
${\cal L}^p(D_A(x))\le\frac{1}{A}$ for $\mu$ a. e. $x$.
We set $2a(p)={\cal L}^p(B(0,1))$ and we define
$$B_A(x)=\int_{D_A(x)}\left(
\gamma(x,v)-a(p)
\right) {\rm d} v . $$
We shall suppose that $A>\max(a(p),a(p)^{-1})$ (otherwise there is nothing to prove); as a consequence, $B_A(x)\ge 0$.
Since $\gamma\in{\cal D}_\mu$, we have that $B_A(x)\in[0,1]$ for
$\mu$ a. e. $x$.
\noindent {\bf Step 2.} We show that the set where $\gamma$ is small has room enough to accommodate some mass from $D_A$.
We let
$$Z=\{
(x,v)\in{\bf T}^p\times B(0,1)\;\colon\;\gamma(x,v)\le a(p)^{-1}
\} . $$
As above, we call $Z(x)$ its sections. Since
$\int_{{\bf R}^p}\gamma(x,v) {\rm d} v=1$ for $\mu$ a. e. $x$, we get by the Chebyshev inequality that ${\cal L}^p(B(0,1)\setminus Z(x))\le a(p)$ for $\mu$ a. e. $x$. Since ${\cal L}^p(B(0,1))=2a(p)$, this implies that
${\cal L}^p(Z(x))\ge a(p)$ for $\mu$ a. e. $x$.
A standard consequence of this is that we can find a Borel set
$\tilde Z\subset Z$ such that $\mu$ a. e. section $\tilde Z(x)$ satisfies
${\cal L}^p(\tilde Z(x))=a(p)$.
\noindent{\bf Step 3.} We build $\tilde\gamma$.
Since we have chosen $A> a(p)^{-1}$, we have that $\tilde Z(x)$ and $D_A(x)$ are disjoint; we set $M(x)={\bf R}^p\setminus(\tilde Z(x)\cup D_A(x))$ and
$$\tilde\gamma(x,v)=\left\{
\eqalign{
\gamma(x,v) &\txt{if} v\in M(x)\cr
\gamma(x,v)+ B_A(x)a(p)^{-1}
&\txt{if} v\in \tilde Z(x)\cr
a(p) &\txt{if} v\in D_A(x) .
} \right. $$
The first equality below comes from the fact that $\gamma$ and
$\tilde\gamma$ coincide on $M(x)$, the second one from the fact that
${\cal L}^p(\tilde Z(x))=a(p)$ and the third one from the definition of $B_A(x)$.
$$\int_{{\bf T}^p\times{\bf R}^p}|
\gamma(x,v)-\tilde\gamma(x,v)
| {\rm d} \mu(x) {\rm d} v = $$
$$\int_{{\bf T}^p} {\rm d} \mu(x)\int_{\tilde Z(x)}a(p)^{-1} B_A(x) {\rm d} v +
\int_{{\bf T}^p} {\rm d} \mu(x)\int_{D_A(x)}[
\gamma(x,v)-a(p)
] {\rm d} v= $$
$$\int_{{\bf T}^p}B_A(x) {\rm d} \mu(x)+
\int_{{\bf T}^p} {\rm d} \mu(x)\int_{D_A(x)}[
\gamma(x,v)-a(p)
] {\rm d} v =
2 \int_{{\bf T}^p}B_A(x) {\rm d} \mu(x) . \eqno (2.7)$$
The same argument without the modulus shows the first equality below; the second one follows since $\gamma\in{\cal D}_\mu$.
$$\int_{{\bf R}^p}\tilde\gamma(x,v) {\rm d} v=
\int_{{\bf R}^p}\gamma(x,v) {\rm d} v=1
\txt{for} \mu \txt{a. e.} x . $$
In other words, $\tilde\gamma\in{\cal D}_\mu$.
In order to compare the actions of $\gamma$ and $\tilde\gamma$ we note that, for all $x$,
$$\int_{{\bf R}^p}
\left[
\cinh{v}+\log\tilde\gamma(x,v)
\right] \tilde\gamma(x,v) {\rm d} v=
\int_{M(x)}
\left[
\cinh{v}+\log\gamma(x,v)
\right] \gamma(x,v) {\rm d} v+$$
$$\int_{\tilde Z(x)}
\left[
\cinh{v}+\log\tilde\gamma(x,v)
\right] \tilde\gamma(x,v) {\rm d} v+
\int_{D_A(x)}
\left[
\cinh{v}+\log\tilde\gamma(x,v)
\right] \tilde\gamma(x,v) {\rm d} v \eqno (2.8)$$
because $\gamma$ and $\tilde\gamma$ coincide on $M(x)$.
\noindent{\bf Step 4.} We compare the actions of $\gamma$ and
$\tilde\gamma$ on $\tilde Z(x)$.
If $v\in\tilde Z(x)$, then $\gamma(x,v)\le a(p)^{-1}$; this yields the first inequality below; for the second one, we recall that, by step 1, $B_A(x)\le 1$.
$$\tilde\gamma(x,v)=\gamma(x,v)+B_A(x) a(p)^{-1}\le
a(p)^{-1}(1+B_A(x))\le 2a(p)^{-1}
\qquad\forall v\in\tilde Z(x) . \eqno (2.9)$$
The inequality below follows because
$\tilde\gamma(x,v)\ge\gamma(x,v)$ and $\cinh{v}\le\frac{1}{2h}$ on
$\tilde Z(x)\subset B(0,1)$; moreover, we have used the Lagrange mean value theorem and the fact, which follows from (2.9), that
$[\gamma(x,v),\tilde\gamma(x,v)]\subset [0,2 a(p)^{-1}]$. The second equality follows by the definition of $\tilde\gamma$ on $\tilde Z(x)$, and the fact that
${\cal L}^p(\tilde Z(x))=a(p)$.
$$\int_{\tilde Z(x)}
\left[
\cinh{v}+\log\tilde\gamma(x,v)
\right] \tilde\gamma(x,v) {\rm d} v=
\int_{\tilde Z(x)}
\left[
\cinh{v}+\log\gamma(x,v)
\right] \gamma(x,v) {\rm d} v+ $$
$$\int_{\tilde Z(x)}
\cinh{v}[
\tilde\gamma(x,v)-\gamma(x,v)
] {\rm d} v+
\int_{\tilde Z(x)}[
\tilde\gamma(x,v)\log\tilde\gamma(x,v)-\gamma(x,v)\log\gamma(x,v)
] {\rm d} v\le$$
$$
\int_{\tilde Z(x)}
\left[
\cinh{v}+\log\gamma(x,v)
\right] \gamma(x,v) {\rm d} v+ $$
$$\int_{\tilde Z(x)}\frac{1}{2h}[
\tilde\gamma(x,v)-\gamma(x,v)
] {\rm d} v+
\max_{t\in[0,2a(p)^{-1}]}\frac{ {\rm d} }{ {\rm d} t}(t\log t)\cdot
\int_{\tilde Z(x)}[
\tilde\gamma(x,v)-\gamma(x,v)
] {\rm d} v=$$
$$\int_{\tilde Z(x)}
\left[
\cinh{v}+\log\gamma(x,v)
\right] \gamma(x,v) {\rm d} v+
\frac{1}{2h} B_A(x)+[
\log(2a(p)^{-1})+1
]B_A(x) .
\eqno (2.10)$$
\noindent {\bf Step 5.} We compare the actions of $\gamma$ and
$\tilde\gamma$ on $D_A(x)$.
Since $\fun{}{t}{t\log t}$ is superlinear, there is $M(A)$, tending to
$+\infty$ as $A\rightarrow+\infty$, such that
$$\gamma(x,v)\log\gamma(x,v)-\tilde\gamma(x,v)\log\tilde\gamma(x,v)\ge
M(A)[
\gamma(x,v)-a(p)
] \qquad\forall v\in D_A(x) . $$
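One possible explicit choice (by no means the only one) uses just the normalization $A>\max(a(p),a(p)^{-1})\ge 1$ of step 1: for $t\ge A$ and $s=a(p)$ we have $t\log t\ge t\log A$ and $0<t-s\le t$, whence
$$\frac{t\log t-s\log s}{t-s}\ge
\log A-\frac{s\log s}{t}\ge
\log A-\frac{a(p)|\log a(p)|}{A} ; $$
thus we may take $M(A)=\log A-\frac{a(p)|\log a(p)|}{A}$, which indeed tends to $+\infty$ as $A\rightarrow+\infty$.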
This implies the inequality below, while the last equality comes from the definition of $B_A(x)$.
$$\int_{D_A(x)}\tilde\gamma(x,v)\log\tilde\gamma(x,v) {\rm d} v=$$
$$\int_{D_A(x)}\gamma(x,v)\log\gamma(x,v) {\rm d} v+
\int_{D_A(x)}[\tilde\gamma(x,v)\log\tilde\gamma(x,v)-
\gamma(x,v)\log\gamma(x,v)] {\rm d} v\le $$
$$\int_{D_A(x)}\gamma(x,v)\log\gamma(x,v) {\rm d} v -
M(A) \int_{D_A(x)}[\gamma(x,v)-a(p)] {\rm d} v=
\int_{D_A(x)}\gamma(x,v)\log\gamma(x,v) {\rm d} v -
M(A) B_A(x) . $$
The first inequality below follows from the fact that $\gamma\ge\tilde\gamma$ on $D_A(x)$; the second one, from the last formula.
$$\int_{D_A(x)}
\left[
\cinh{v}+\log\tilde\gamma(x,v)
\right] \tilde\gamma(x,v) {\rm d} v
\le
\int_{D_A(x)}\cinh{v}\gamma(x,v) {\rm d} v+
\int_{D_A(x)}\tilde\gamma(x,v)\log\tilde\gamma(x,v) {\rm d} v\le$$
$$\int_{D_A(x)}\cinh{v}\gamma(x,v) {\rm d} v+
\int_{D_A(x)}\gamma(x,v)\log\gamma(x,v) {\rm d} v-
M(A)B_A(x) . $$
\noindent{\bf End of the proof.} From the last formula, (2.8) and (2.10) we get that
$$\int_{{\bf R}^p}
\left[
\cinh{v}+\log\tilde\gamma(x,v)
\right] \tilde\gamma(x,v) {\rm d} v\le
\int_{{\bf R}^p}\left[
\cinh{v}+\log\gamma(x,v)
\right] \gamma(x,v) {\rm d} v+$$
$$\left(
\frac{1}{2h}+1+\log(2a(p)^{-1}) -M(A)
\right)
B_A(x) . \eqno (2.11)$$
The first inequality below follows by lemma 2.4; the second one, from (2.7).
$$d_1(\mu\ast\gamma,\mu\ast\tilde\gamma)\le
\sqrt p\int_{{\bf T}^p\times{\bf R}^p}|
\gamma(x,v)-\tilde\gamma(x,v)
| {\rm d} \mu(x) {\rm d} v\le
2\sqrt p\int_{{\bf T}^p} B_A(x) {\rm d} \mu(x) . $$
Since $U$ is $L$-Lipschitz, this implies
$$|
U(\mu\ast\gamma)-U(\mu\ast\tilde\gamma)
| \le 2\sqrt p L\int_{{\bf T}^p}B_A(x) {\rm d} \mu(x) . $$
By the last formula and (2.11) we get that
$$\int_{{\bf T}^p\times{\bf R}^p}A_h(\tilde\gamma,(x,v)) {\rm d} \mu(x) {\rm d} v+
U(\mu\ast\tilde\gamma)\le
\int_{{\bf T}^p\times{\bf R}^p}A_h(\gamma,(x,v)) {\rm d} \mu(x) {\rm d} v+
U(\mu\ast\gamma)+$$
$$\left(
\frac{1}{2h}+1+\log(2a(p)^{-1})+
2\sqrt p L -M(A)
\right)
\int_{{\bf T}^p}B_A(x) {\rm d} \mu(x) . $$
We have seen that $M(A)\rightarrow+\infty$ as $A\rightarrow+\infty$; thus, if $A$ is large enough, we have contradicted the minimality of
$\gamma$.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
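Keeping track of the constants, and using the explicit $M(A)$ exhibited in step 5 above, one admissible (and certainly not optimal) choice of the constant is, for instance,
$$C_1(L,h)=2\exp\left(
\frac{1}{2h}+2+\log(2a(p)^{-1})+2\sqrt p\,L+a(p)|\log a(p)|
\right) , $$
obtained by imposing $M(A)>\frac{1}{2h}+1+\log(2a(p)^{-1})+2\sqrt p\,L$ for $A=\frac{1}{2}C_1(L,h)$; in particular, the bound blows up as $h\rightarrow 0$.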
Before proving the estimate from below on $\gamma$, we need a result on the tightness of the set of all minimal $\gamma$'s.
\lem{2.6} Let $U$ be $L$-Lipschitz. Then, for all $\epsilon>0$ there is $R>0$ such that the following happens. If $\mu\in{\cal M}_1(\T^p)$ and $\gamma$ minimizes the single particle functional $S(U,\mu,\gamma)$, then for
$\mu$ a. e. $x\in{\bf T}^p$ we have that
$$\int_{B(0,R)^c}\gamma(x,v) {\rm d} v\le\epsilon . $$
\rm\vskip 1 pc\noindent{\bf Proof.\quad} Let us suppose by contradiction that the thesis does not hold; then there is $\epsilon>0$ such that for infinitely many $l\in{\bf N}$ we can find
\noindent 1) a measure $\mu_l\in{\cal M}_1(\T^p)$,
\noindent 2) a minimal $\gamma_l$ and
\noindent 3) a set $E_l\subset{\bf T}^p$ with
$\mu_l(E_l)>0$ such that
$$\int_{B(0,l)^c}\gamma_l(x,v) {\rm d} v>\epsilon\qquad\forall x\in E_l . $$
Point 3) implies the second inequality below.
$$\inf_{x\in E_l}
\int_{{\bf R}^p}\cinh{v}\gamma_l(x,v) {\rm d} v\ge \inf_{x\in E_l}
\int_{B(0,l)^c}\cinh{v}\gamma_l(x,v) {\rm d} v\ge
\frac{\epsilon}{2h}l^2 \rightarrow+\infty . $$
Then, (1.7) implies that
$$\inf_{x\in E_l}
\int_{{\bf R}^p}A_h(\gamma_l,(x,v)) {\rm d} v
\ge M_l \eqno (2.12)$$
with $M_l\rightarrow+\infty$ as $l\rightarrow+\infty$.
We set
$$\tilde\gamma_l(x,v)=
\left\{
\eqalign{
\gamma_l(x,v)&\txt{if}x\not\in E_l\cr
\left(
\frac{1}{2\pi h}
\right)^\frac{p}{2}
e^{
-\frac{1}{2h}|v|^2
} &\txt{if}x\in E_l .
}
\right. $$
Since $\gamma_l(x,\cdot)$ and the Gaussian have integral one over
${\bf R}^p$, we have that $\tilde\gamma_l\in{\cal D}_\mu$; by (2.12), we get that
$$\int_{{\bf R}^p}A_h(\gamma_l,(x,v)) {\rm d} v-
\int_{{\bf R}^p}A_h(\tilde\gamma_l,(x,v)) {\rm d} v\ge M_l^\prime \txt{for}
x\in E_l$$
with $M_l^\prime\rightarrow+\infty$ as $l\rightarrow+\infty$. Integrating over ${\bf T}^p$ and recalling the definition of $\tilde\gamma_l$, we get that
$$\int_{{\bf T}^p\times{\bf R}^p}A_h(\tilde\gamma_l,(x,v)) {\rm d} \mu_l(x) {\rm d} v\le
\int_{{\bf T}^p\times{\bf R}^p}A_h(\gamma_l,(x,v)) {\rm d} \mu_l(x) {\rm d} v-
M_l^\prime \mu_l(E_l) . $$
Lemma 2.4 yields the first inequality below; since
$\gamma_l,\tilde\gamma_l\in{\cal D}_{\mu_l}$, the second one follows.
$$d_1(\mu_l\ast\gamma_l,\mu_l\ast\tilde\gamma_l)\le
\sqrt{p}\int_{E_l} {\rm d} \mu_l(x)
\int_{{\bf R}^p}|
\tilde\gamma_l(x,v)-\gamma_l(x,v)
| {\rm d} v\le
2\sqrt{p}\mu_l(E_l) . $$
From the last two formulas and the fact that $U$ is $L$-Lipschitz, we get that
$$\int_{{\bf T}^p\times{\bf R}^p}A_h(\tilde\gamma_l,(x,v)) {\rm d} \mu_l(x) {\rm d} v+
U(\mu_l\ast\tilde\gamma_l)\le$$
$$\int_{{\bf T}^p\times{\bf R}^p}A_h(\gamma_l,(x,v)) {\rm d} \mu_l(x) {\rm d} v+
U(\mu_l\ast\gamma_l) +(2L\sqrt{p}-M^\prime_l)\mu_l(E_l) . $$
Since $M^\prime_l\rightarrow+\infty$ and $\mu_l(E_l)>0$, if we take
$l$ large enough we contradict the minimality of $\gamma_l$.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\lem{2.7} There is a constant $C_2(L,h)$, depending only on $L,h>0$, for which the following happens. Let $U$ be
$L$-Lipschitz, let $\mu\in{\cal M}_1(\T^p)$ and let
$\gamma$ minimize in the definition of $(G^h_tU)(\mu)$. Then,
$$||\frac{1}{\gamma}||_{L^\infty({\bf T}^p\times B(0,2\sqrt p),\mu\otimes{\cal L}^p)} \le
C_2(L,h) . $$
\rm\vskip 1 pc\noindent{\bf Proof.\quad} Reversing the procedure of lemma 2.5, we add some mass to the region $D_A$ where $\gamma$ is small, taking it from the region $Z_\delta$ where it is larger; some work (step 2 below) is necessary to check that $(\mu\otimes{\cal L}^p)(Z_\delta)$ is large enough.
\noindent{\bf Step 1.} We settle the notation.
We begin with the set where $\gamma$ is small. Let us suppose that
$||\frac{1}{\gamma}||_{L^\infty({\bf T}^p\times B(0,2\sqrt{p}))}\ge
\frac{2}{A}$; we define
$$D_A=\{
(x,v)\;\colon\; v\in B(0,2\sqrt{p})\txt{and} \gamma(x,v)\le A
\} . $$
Clearly, $(\mu\otimes{\cal L}^p)(D_A)>0$.
As in lemma 2.5, we set $2a(p)={\cal L}^p(B(0,1))$; we define
$$B_A(x)=\int_{D_A(x)}\gamma(x,v) {\rm d} v . $$
Let us set $P=(2\sqrt{p})^p$.
Since $D_A(x)\subset B(0,2\sqrt{p})$, we have that
$B_A(x)\le 2a(p)A P$ for $\mu$ a. e. $x$.
Now we define a set of points which are not too far from the origin and where $\gamma$ is not too small. For $\delta>0$, we define
$$Z_\delta=\{
(x,v)\in{\bf T}^p\times B(0,\frac{1}{(8a(p)\delta)^\frac{1}{p}})\;\colon\;
\delta\le\gamma(x,v)
\} $$
and we call $Z_\delta(x)$ its sections.
\noindent{\bf Step 2.} By lemma 2.6, we can find $l\in{\bf N}$ such that
$$\int_{B(0,l)}\gamma(x,v) {\rm d} v\ge\frac{1}{2}
\txt{for $\mu$ a. e. } x\in{\bf T}^p . \eqno (2.13)$$
We want to exclude the possibility that the mass of (2.13) is concentrated on a set of very small measure.
More precisely, we assert that, for all $\delta>0$ small enough and independent of $\mu$, any minimal $\gamma$ satisfies
${\cal L}^p(Z_\delta(x))>\frac{\delta^p}{8a(p)}$ for $\mu$ a. e. $x$.
Indeed, if this were not the case, for all $k\ge 1$ we could find
$\mu_k\in{\cal M}_1(\T^p)$, a minimal $\gamma_k$ and a set $E_k\subset{\bf T}^p$ with $\mu_k(E_k)>0$ such that, for all $x\in E_k$ we have
${\cal L}^p(Z_\frac{1}{k}(x))\le\frac{1}{8a(p)k^p}$.
Formula (2.13) implies that, for $k$ large enough, the first inequality below holds; the first equality is the definition of
$Z_\frac{1}{k}(x)$, while the last one comes from the fact that $2a(p)$ is the measure of the unit ball.
$$\int_{Z_\frac{1}{k}(x)}\gamma_k(x,v) {\rm d} v=$$
$$\int_{
\{ v\in B(0,\left(\frac{k}{8a(p)}\right)^\frac{1}{p})\;\colon\;
\gamma_k(x,v)\ge\frac{1}{k} \}
}
\gamma_k(x,v) {\rm d} v\ge
\frac{1}{2}-
\int_{
\{ v\in B(0,\left(\frac{k}{8a(p)}\right)^\frac{1}{p})\;\colon\;
\gamma_k(x,v)<\frac{1}{k} \}
}
\gamma_k(x,v) {\rm d} v\ge$$
$$\frac{1}{2}-\int_{B(0,\left(\frac{k}{8a(p)}\right)^\frac{1}{p})}
\frac{1}{k} {\rm d} v=\frac{1}{4}\qquad
\forall x\in E_k . $$
In other words, the integral of $\gamma_k(x,\cdot)$ over
$Z_\frac{1}{k}(x)$ is larger than $\frac{1}{4}$, while we are supposing that
${\cal L}^p(Z_\frac{1}{k}(x))\le\frac{1}{8a(p)k^p}$; this implies that
$\{ \gamma_k(x_k,\cdot) \}_k$ is not uniformly integrable, however the sequence $x_k\in E_k$ is chosen. Using this and arguing as in point 1) of lemma 1.2, we get that
$$\int_{
{\bf R}^p
}
A_h(\gamma_k,(x,v)) {\rm d} v
\ge M_k \qquad
\forall x\in E_k $$
with $M_k\rightarrow+\infty$ as $k\rightarrow+\infty$. If we define
$$\hat\gamma_k(x,v)=
\left\{
\eqalign{
\gamma_k(x,v)&\txt{if}x\not\in E_k\cr
\left(
\frac{1}{2\pi h}
\right)^\frac{p}{2}
e^{
-\frac{1}{2h}|v|^2
} &\txt{if}x\in E_k
}
\right. $$
it is easy to see, using the last formula and arguing as in lemma 2.6, that, for $k$ large,
$$S(U,\mu_k,\hat\gamma_k)<S(U,\mu_k,\gamma_k)$$
contradicting the minimality of $\gamma_k$.
\noindent{\bf Step 3.} Here we build the function $\tilde\gamma$; we shall show in the next two steps that, if $A$ is small enough, the action of $\tilde\gamma$ is lower than that of the optimal $\gamma$; this contradiction will end the proof.
Let us fix $\delta>0$ such that step 2 holds; we shall suppose that
$A<\delta$ (otherwise there is nothing to prove); with this choice,
$D_A(x)$ and $Z_\delta(x)$ are disjoint. The first inequality below follows from the fact, shown in step 1, that $B_A(x)\le 2a(p)AP$; the second one, from the fact that $A<\delta$; we choose $\epsilon$ so small that the third one also holds.
$$\delta-\epsilon B_A(x)\frac{8a(p)}{\delta^p}\ge
\delta-\frac{16\epsilon a(p)^2AP}{\delta^p}\ge
\delta-\frac{16\epsilon a(p)^2\delta P}{\delta^p}\ge
\frac{\delta}{2} . \eqno (2.14)$$
By step 2, we can find $\tilde Z\subset Z_\delta$ such that, for $\mu$ a. e. $x$, ${\cal L}^p(\tilde Z(x))=\frac{\delta^p}{8a(p)}$. We define
$M(x)={\bf R}^p\setminus(\tilde Z(x)\cup D_A(x))$ and we set
$$\tilde\gamma(x,v)=\left\{
\eqalign{
\gamma(x,v)&\txt{if}v\in M(x)\cr
\gamma(x,v)-\epsilon B_A(x)\frac{8a(p)}{\delta^p}&\txt{if} v\in\tilde Z(x)\cr
(1+\epsilon)\gamma(x,v)&\txt{if}v\in D_A(x) .
}
\right. $$
We have to prove that $\tilde\gamma\in{\cal D}_{\mu}$; we begin by showing that, if $\epsilon$ is small enough, $\tilde\gamma(x,v)\ge 0$; by the definition above, it suffices to prove that $\tilde\gamma(x,v)\ge 0$ when $v\in\tilde Z(x)$. Since $v\in\tilde Z(x)$, we get the first inequality below; the second one is (2.14).
$$\gamma(x,v)- \epsilon B_A(x)\frac{8a(p)}{\delta^p}\ge
\delta- \epsilon B_A(x)\frac{8a(p)}{\delta^p}\ge\frac{\delta}{2} . \eqno (2.15) $$
This shows that $\tilde\gamma(x,v)\ge 0$. We prove that
$$\int_{{\bf R}^p}\tilde\gamma(x,v) {\rm d} v=1\txt{for $\mu$ a. e.} x\in{\bf T}^p . $$
Since $\gamma$ and $\tilde\gamma$ coincide on $M(x)$, we have the first equality below; the second one comes from the definition of $\tilde\gamma$ and the fact that ${\cal L}^p(\tilde Z(x))=\frac{\delta^p}{8a(p)}$; the last one, from the definition of $B_A(x)$.
$$\int_{{\bf R}^p}[
\tilde\gamma(x,v)-\gamma(x,v)
] {\rm d} v=$$
$$\int_{\tilde Z(x)}[
\tilde\gamma(x,v)-\gamma(x,v)
] {\rm d} v +
\int_{D_A(x)}[
\tilde\gamma(x,v)-\gamma(x,v)
] {\rm d} v =$$
$$-\epsilon B_A(x)\frac{8a(p)}{\delta^p}\cdot\frac{\delta^p}{8a(p)}+
\epsilon\int_{D_A(x)}\gamma(x,v) {\rm d} v =0\txt{for $\mu$ a. e. $x$.} $$
Since $\gamma\in{\cal D}_\mu$, this ends the proof that
$\tilde\gamma\in{\cal D}_\mu$. With the same argument,
$$\int_{{\bf T}^p\times{\bf R}^p}|
\gamma(x,v)-\tilde\gamma(x,v)
| {\rm d} \mu(x) {\rm d} v\le
2\epsilon\int_{{\bf T}^p}B_A(x) {\rm d} \mu(x) . $$
\noindent{\bf Step 4.} We compare $U(\mu\ast\gamma)$ with
$U(\mu\ast\tilde\gamma)$; actually, by lemma 2.4 and the last formula, we get that
$$|
U(\mu\ast\gamma)-U(\mu\ast\tilde\gamma)
| \le
2\sqrt{p}L\epsilon\int_{{\bf T}^p}B_A(x) {\rm d} \mu(x) . \eqno (2.16)$$
\noindent{\bf Step 5.} We compare Lagrangian actions on
$\tilde Z(x)$.
We recall that, if $v\in\tilde Z(x)$, then
\noindent 1) $\tilde\gamma(x,v)\le\gamma(x,v)$ and
\noindent 2) the derivative of $t\log t$ on $[\tilde\gamma(x,v),\gamma(x,v)]$ is greater than $(1-\log\frac{2}{\delta})$; indeed, by (2.15),
$[\tilde\gamma(x,v),\gamma(x,v)]\subset [\frac{\delta}{2},+\infty)$.
Point 1) yields the first inequality below, point 2) the second one; the last equality follows from the fact that
${\cal L}^p(\tilde Z(x))=\frac{\delta^p}{8a(p)}$.
$$\int_{\tilde Z(x)}\left[
\cinh{v}+\log\tilde\gamma(x,v)
\right] \tilde\gamma(x,v) {\rm d} v=
\int_{\tilde Z(x)}\left[
\cinh{v}+\log \gamma(x,v)
\right] \gamma(x,v) {\rm d} v+$$
$$\int_{\tilde Z(x)}
\cinh{v}[\tilde\gamma(x,v)-\gamma(x,v)] {\rm d} v+
\int_{\tilde Z(x)}[
\tilde\gamma(x,v)\log\tilde\gamma(x,v)-\gamma(x,v)\log\gamma(x,v)
] {\rm d} v \le$$
$$\int_{\tilde Z(x)}\left[
\cinh{v}+\log\gamma(x,v)
\right] \gamma(x,v) {\rm d} v-
\inf_{t\in[\tilde\gamma(x,v),\gamma(x,v)]}\frac{ {\rm d} }{ {\rm d} t}(t\log t)
\int_{\tilde Z(x)}[\gamma(x,v)-\tilde\gamma(x,v)] {\rm d} v\le$$
$$\int_{\tilde Z(x)}\left[
\cinh{v}+\log\gamma(x,v)
\right] \gamma(x,v) {\rm d} v+
(
-1+\log\frac{2}{\delta}
) \int_{\tilde Z(x)} \epsilon B_A(x)\frac{8a(p)}{\delta^p} {\rm d} v= $$
$$\int_{\tilde Z(x)}\left[
\cinh{v}+\log\gamma(x,v)
\right] \gamma(x,v) {\rm d} v+
(
-1+\log\frac{2}{\delta}
) \epsilon B_A(x) . \eqno (2.17)$$
\noindent{\bf Step 6.} We compare Lagrangian actions on
$D_A(x)$.
With the same calculations as in step 5, and using the fact that the derivative of $t\log t$ at $t=A$ tends to $-\infty$ as $A\searrow 0$,
$$\int_{D_A(x)}\left[
\cinh{v}+\log\tilde\gamma(x,v)
\right] \tilde\gamma(x,v) {\rm d} v=$$
$$\int_{D_A(x)}\left[
\cinh{v}+\log \gamma(x,v)
\right] \gamma(x,v) {\rm d} v+
\int_{D_A(x)}
\cinh{v}[\tilde\gamma(x,v)-\gamma(x,v)] {\rm d} v+$$
$$\int_{D_A(x)}[
\tilde\gamma(x,v)\log\tilde\gamma(x,v)-\gamma(x,v)\log\gamma(x,v)
] {\rm d} v \le
\int_{D_A(x)}\left[
\cinh{v}+\log\gamma(x,v)
\right] \gamma(x,v) {\rm d} v +$$
$$\int_{D_A(x)}
\cinh{v}[\tilde\gamma(x,v)-\gamma(x,v)] {\rm d} v
- M(A)\int_{D_A(x)}[\tilde\gamma(x,v)-\gamma(x,v)] {\rm d} v$$
for a constant $M(A)\rightarrow+\infty$ as $A\searrow 0$. Since
$D_A(x)\subset B(0,2\sqrt p)$, we get that
$\frac{1}{2h}|v|^2\le\frac{1}{2h}4p$ on $D_A(x)$; this and the last formula yield the first inequality below, while the equality comes by the definition of $\tilde\gamma$ and $B_A(x)$.
$$\int_{D_A(x)}\left[
\cinh{v}+\log\tilde\gamma(x,v)
\right] \tilde\gamma(x,v) {\rm d} v\le$$
$$\int_{D_A(x)}\left[
\cinh{v}+\log\gamma(x,v)
\right] \gamma(x,v) {\rm d} v+
\int_{D_A(x)}\frac{1}{2h}4p\epsilon\gamma(x,v) {\rm d} v-
M(A)\int_{D_A(x)}\epsilon\gamma(x,v) {\rm d} v=$$
$$\int_{D_A(x)}\left[
\cinh{v}+\log\gamma(x,v)
\right] \gamma(x,v) {\rm d} v+
\frac{1}{2h}4p\epsilon B_A(x)-\epsilon M(A)B_A(x) . \eqno (2.18)$$
\noindent{\bf End of the proof.} By (2.16), (2.17), (2.18) and the fact that Lagrangian actions on $M(x)$ coincide, we get that
$$\int_{{\bf T}^p\times{\bf R}^p}A_h(\tilde\gamma,(x,v))
{\rm d} \mu(x) {\rm d} v+U(\mu\ast\tilde\gamma)\le
\int_{{\bf T}^p\times{\bf R}^p}A_h(\gamma,(x,v))
{\rm d} \mu(x) {\rm d} v+U(\mu\ast\gamma)+$$
$$\left[
\left(
-1+\log\frac{2}{\delta}
\right) +
2\sqrt p L
+\frac{1}{2h}4p -M(A)
\right]
\epsilon\int_{{\bf T}^p}B_A(x) {\rm d} \mu(x) . $$
Recall that $M(A)\rightarrow+\infty$ as $A\searrow 0$; thus, if $A$ is small enough, the last formula contradicts the minimality of $\gamma$.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\prop{2.8} There is a constant
$C(L,h)$, depending only on $L,h>0$, for which the following happens. Let $U$ be $L$-Lipschitz, let $\mu\in{\cal M}_1(\T^p)$ and let
$\gamma$ minimize in the definition of $(G^h_tU)(\mu)$. Then,
\noindent 1) the function $\gamma$ satisfies
$$\max\left(
||\gamma||_{L^\infty({\bf T}^p\times{\bf R}^p,\mu\otimes{\cal L}^p)},
||\frac{1}{\gamma}||_{L^\infty({\bf T}^p\times B(0,2\sqrt p),\mu\otimes{\cal L}^p)}
\right) \le C(L,h) . \eqno (2.19)$$
\noindent 2) Let us denote by $\rho_{\mu\ast\gamma}$ the density of
$\mu\ast\gamma$; then
$$||\frac{1}{\rho_{\mu\ast\gamma}}||_{L^\infty{({\bf T}^p,{\cal L}^p)}} \le C(L,h) . \eqno (2.20)$$
\noindent 3) The set
$$\{
\rho_{\mu\ast\gamma} \;\colon\; \mu\in{\cal M}_1(\T^p)\txt{and}\gamma\in{\cal D}_\mu
\txt{is minimal}
\}$$
is uniformly integrable in $L^1({\bf T}^p,{\cal L}^p)$.
\rm\vskip 1 pc\noindent{\bf Proof.\quad} Point 1) is just the statement of lemmas 2.5 and 2.7. We prove point 2).
Let ${\cal F}$ denote the class of all continuous probability densities on ${\bf T}^p$. The first equality below is standard; the second one comes from the fact that $\rho_{\mu\ast\gamma}$ is the density of $\mu\ast\gamma$, and the third one from the definition of
$\mu\ast\gamma$; for the fourth one, we have set $Q\colon=[-\frac{1}{2},\frac{1}{2})^p$ and used the fact that $f$ is periodic. The first inequality below comes from the fact that $f$ and $\gamma$ are non negative; for the second one, we have pushed the measure $\mu$, which lives on ${\bf T}^p$, to a measure on $Q$, which we denote by the same letter; for the last one, we use (2.19) and the fact that, if $x,w\in Q$, then
$x-w\in B(0,2\sqrt p)$.
$${\rm ess}\inf \rho_{\mu\ast\gamma}=
\inf_{f\in{\cal F}}\int_{{\bf T}^p}f(z)\rho_{\mu\ast\gamma}(z) {\rm d} z=
\inf_{f\in{\cal F}}\int_{{\bf T}^p}f(z) {\rm d} (\mu\ast\gamma)(z)=$$
$$\inf_{f\in{\cal F}}\int_{{\bf T}^p\times{\bf R}^p}f(x-v)\gamma(x,v) {\rm d} \mu(x) {\rm d} v=
\inf_{f\in{\cal F}}\int_{{\bf T}^p} {\rm d} \mu(x)\sum_{k\in{\bf Z}^p}\int_{Q}
f(w)\gamma(x,x-w-k) {\rm d} w\ge$$
$$\inf_{f\in{\cal F}}\int_{{\bf T}^p} {\rm d} \mu(x)\int_Qf(w)\gamma(x,x-w) {\rm d} w\ge
\int_{Q}[{\rm ess}\inf_{w\in Q}\gamma(x,x-w)] {\rm d} \mu(x) \ge
\frac{1}{C(L,h)} . $$
But this is (2.20).
We prove point 3). Let $\mu_k\in{\cal M}_1(\T^p)$ and let
$\gamma_k\in{\cal D}_{\mu_k}$ be minimal. Let $\rho_k$ be the density of
$\mu_k\ast\gamma_k$; we want to prove that, up to subsequences,
$\rho_k\rightharpoonup\rho$ in $L^1({\bf T}^p,{\cal L}^p)$. Thus, let
$g\in L^\infty({\bf T}^p,{\cal L}^p)$; the first equality below is the definition of $\rho_k$, the second one is the definition of $\mu_k\ast\gamma_k$, the last one is the change of variables in ${\bf R}^p$ $\fun{}{v}{w=x-v}$.
$$\int_{{\bf T}^p}g(z)\rho_k(z) {\rm d} z=
\int_{{\bf T}^p}
g(z) {\rm d} (\mu_k\ast\gamma_k)(z)=
\int_{{\bf T}^p} {\rm d} \mu_k(x)
\int_{{\bf R}^p}g(x-v)\gamma_k(x,v) {\rm d} v = $$
$$\int_{{\bf R}^p}g(w) {\rm d} w\int_{{\bf T}^p}\gamma_k(x,x-w) {\rm d} \mu_k(x) . $$
Thus, point 3) holds if we prove that
$$a_k(w)\colon=
\int_{{\bf T}^p}\gamma_k(x,x-w) {\rm d} \mu_k(x)$$
has a subsequence converging weakly in $L^1({\bf R}^p)$. To prove this, we recall from point 1) that $\gamma_k(x,v)\le C(L,h)$ for
$x\in E_k$, with $\mu_k(E_k^c)=0$; this implies that
$||a_k||_\infty\le C(L,h)$; together with the tightness proven below, this yields that the sequence $a_k$ is uniformly integrable on ${\bf R}^p$.
Next, we have to show that the measures $a_k{\cal L}^p$ are tight. For the first equality below, we lift $\mu_k$ to a measure on
$Q$ and use Fubini; the first inequality comes from the fact that, if $x\in Q$ and $|w|\ge R$, then $|x-w|\ge R-\sqrt p$; the second inequality follows by lemma 2.6 if we take $R$ large enough.
$$\int_{B(0,R)^c}a_k(w) {\rm d} w=
\int_{Q} {\rm d} \mu_k(x)\int_{B(0,R)^c}\gamma_k(x,x-w) {\rm d} w\le$$
$$\int_{Q} {\rm d} \mu_k(x)
\int_{B(0,R-\sqrt p)^c}\gamma_k(x,v) {\rm d} v\le
\int_Q\epsilon {\rm d} \mu_k(x)=\epsilon . $$
This proves tightness.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\vskip 2pc
\centerline{\bf \S 3}
\centerline{\bf Discrete characteristics and value functions}
\vskip 1pc
In this section, we define the characteristics and value function for the problem with discrete time, and we show that the discrete value function is bounded as the time-step tends to zero. From now on, the parameter $h$ in the definition of $G^h_tU$ (formula (1.4)) will be set to
$h=\frac{1}{n}$, with $n\in{\bf N}$.
\vskip 1pc
\noindent{\bf Definitions.} $\bullet$) Let $m,n\in{\bf N}$ and let $U\in C({\cal M}_1(\T^p))$ be $L$-Lipschitz for the 1-Wasserstein distance; we can define inductively the following sequence of functions.
$$\matrix{
\hat U_n(0,\mu)=U(\mu)\cr
\hat U_n(-\frac{1}{n},\mu)=
\left[
G^\frac{1}{n}_{-\frac{1}{n}}\hat U_n(0,\cdot)
\right] (\mu)-
\log\left(
\frac{n}{2\pi}
\right)^\frac{p}{2} \cr
\dots\cr
\hat U_n(-\frac{mn}{n},\mu)=\left[
G^\frac{1}{n}_{\frac{-mn}{n}}\hat U_n(-\frac{mn-1}{n},\cdot)
\right] (\mu)-
\log\left(
\frac{n}{2\pi}
\right)^\frac{p}{2} .
} $$
Applying proposition 2.3 iteratively, we see that
$\hat U_n(\frac{j}{n},\cdot)$ is $(L+Cm)$-Lipschitz if
$j\in(-mn,\dots,0)$.
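Note also that, since $G^\frac{1}{n}_t$ is defined in (1.4) as a minimum to which the final condition contributes additively, it commutes with the addition of constants; hence the recursion above unravels to the closed formula
$$\hat U_n(-\frac{s}{n},\mu)=\left[
G^\frac{1}{n}_{-\frac{s}{n}}\circ\dots\circ
G^\frac{1}{n}_{-\frac{1}{n}}U
\right] (\mu)-
s\log\left(
\frac{n}{2\pi}
\right)^\frac{p}{2} ,\qquad s\in(1,\dots,mn) . $$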
\noindent $\bullet$) Let $s\in(1,\dots,mn)$; we say that
$\{ \mu^\frac{1}{n}_{\frac{j}{n}} \}_{j=-s}^{0}$ is a
$\{ \gamma^\frac{1}{n}_{\frac{j}{n}} \}_{j=-s}^{-1}$-sequence starting at
$(-\frac{s}{n},\mu)$ if the following three points hold.
\noindent 1) $\mu^\frac{1}{n}_{-\frac{s}{n}}=\mu$.
\noindent 2) $\gamma^\frac{1}{n}_{\frac{j}{n}}\in{\cal D}_{\mu^\frac{1}{n}_{\frac{j}{n}}}$ for $j=-s,\dots,-1$.
\noindent 3) $\mu^\frac{1}{n}_{\frac{j+1}{n}}=
\mu^\frac{1}{n}_{\frac{j}{n}}\ast\gamma^\frac{1}{n}_{\frac{j}{n}}$ for
$j=-s,\dots,-1$.
\noindent $\bullet$) If
$\{ \mu^\frac{1}{n}_{\frac{j}{n}} \}_{j=-s}^{0}$ is a
$\{ \gamma^\frac{1}{n}_{\frac{j}{n}} \}_{j=-s}^{-1}$-sequence starting at
$(-\frac{s}{n},\mu)$, and
$\gamma^\frac{1}{n}_{\frac{j}{n}}$ is minimal in the definition of
$$\left[
G^\frac{1}{n}_\frac{j}{n}\hat U_n(\frac{j}{n},\cdot)
\right] \left(
\mu^\frac{1}{n}_{\frac{j}{n}}
\right) $$
for $j=-s,\dots,-1$, then we say that
$\{ \mu^\frac{1}{n}_{\frac{j}{n}} \}_{j=-s}^{0}$ is a minimal
$\{ \gamma^\frac{1}{n}_{\frac{j}{n}} \}_{j=-s}^{-1}$-sequence starting at
$(-\frac{s}{n},\mu)$.
\noindent $\bullet$) If
$\{ \mu^\frac{1}{n}_{\frac{j}{n}} \}_{j=-s}^{0}$ is a
$\{ \gamma^\frac{1}{n}_{\frac{j}{n}} \}_{j=-s}^{-1}$-sequence and
$t\in[\frac{-s}{n},0]$, say $t\in[\frac{j}{n},\frac{j+1}{n}]$ for some
$j\in(-s,\dots,-1)$, we let $\mu^\frac{1}{n}_t$ be the geodesic for the 2-Wasserstein distance which connects
$\mu^\frac{1}{n}_{\frac{j}{n}}$ at time $\frac{j}{n}$ with
$\mu^\frac{1}{n}_{\frac{j+1}{n}}$ at time $\frac{j+1}{n}$.
\noindent $\bullet$) For $j\in(-s+1,\dots,0)$ and
$t\in(\frac{j-1}{n},\frac{j}{n}]$ we define
$$\hat U_n(t,\mu)=\hat U_n(\frac{j}{n},\mu) . $$
\noindent $\bullet$) For $t\in [-m,0]$, we let
$$\hat U(t,\mu)=\liminf_{n\rightarrow+\infty}\hat U_n(t,\mu) . $$
When there is no ambiguity, we shall drop the $\frac{1}{n}$, and denote $\gamma^\frac{1}{n}_{\frac{j}{n}}$,
$\mu^\frac{1}{n}_{\frac{j}{n}}$ and $\mu^\frac{1}{n}_{t}$ by
$\gamma_{\frac{j}{n}}$, $\mu_{\frac{j}{n}}$ and $\mu_{t}$ respectively.
\vskip 1pc
The definitions above raise at least two questions: the first one is the convergence of a $\{ \gamma^\frac{1}{n}_{\frac{j}{n}} \}_j$-minimal sequence $\{ \mu^\frac{1}{n}_{\frac{j}{n}} \}_j$ to a minimal characteristic as $n\rightarrow+\infty$; this will have to wait until section 6 for an answer. The second one is whether
$\hat U(t,\mu)$ is finite; this is the content of proposition 3.2 below. Before proving it, we need a definition and a lemma.
\noindent $\bullet$) Let
$\{ \mu^\frac{1}{n}_{\frac{j}{n}} \}_{j=-s}^{0}$ be a
$\{ \gamma^\frac{1}{n}_{\frac{j}{n}} \}_{j=-s}^{-1}$-sequence starting at
$(-\frac{s}{n},\mu)$; we define the functional
$$I(\mu,\gamma^\frac{1}{n}_\frac{-s}{n},\dots,\gamma^\frac{1}{n}_\frac{-1}{n})=
\sum_{j=-s}^{-1}\int_{
{\bf T}^p\times{\bf R}^p
}\left[
\frac{1}{n}L^{
\frac{1}{2}\mu_\frac{j}{n}
} (\frac{j}{n},x,nv)+\log\gamma_\frac{j}{n}(x,v)
\right] \gamma_\frac{j}{n}(x,v) {\rm d} \mu_\frac{j}{n}(x) {\rm d} v . $$
We omit the proof of the next lemma, since the fact that the value function defines the Hopf-Lax semigroup is standard.
\lem{3.1} Let $\{ \bar\mu_\frac{j}{n} \}_j$ be a minimal
$\{ \bar\gamma_\frac{j}{n} \}_j$-sequence starting at $(-\frac{s}{n},\mu)$; then, $\{ \bar\gamma_\frac{j}{n} \}_j$ minimizes the functional taking $(\gamma_{\frac{-s}{n}},\gamma_\frac{-s+1}{n},\dots,\gamma_\frac{-1}{n})$ to
$$I(\mu,\gamma^\frac{1}{n}_\frac{-s}{n},\dots,\gamma^\frac{1}{n}_\frac{-1}{n})+
U(\mu_0) -
s\log\left(
\frac{n}{2\pi}
\right)^\frac{p}{2} . \eqno (3.1) $$
Moreover, the value of the minimum is equal to
$\hat U_n(-\frac{s}{n},\mu)$.
In other words, if for $s>j$ we define
$$T_{-\frac{s}{n},-\frac{j}{n}}U(\mu)\colon=
\min_{(\gamma_{\frac{-s}{n}},\gamma_\frac{-s+1}{n},\dots,\gamma_\frac{-j-1}{n})}
I(\mu,\gamma^\frac{1}{n}_\frac{-s}{n},\dots,\gamma^\frac{1}{n}_\frac{-j-1}{n})+
U(\mu_\frac{-j}{n}) -
(s-j)\log\left(
\frac{n}{2\pi}
\right)^\frac{p}{2} $$
then $T_{-\frac{s}{n},-\frac{j}{n}}$ is a semigroup in the past: for $t>s>j$,
$T_{-\frac{t}{n},-\frac{s}{n}}\circ T_{-\frac{s}{n},-\frac{j}{n}}=
T_{-\frac{t}{n},-\frac{j}{n}}$.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\prop{3.2} There is $C>0$, only depending on $m\in{\bf N}$, such that
for all $\mu\in{\cal M}_1(\T^p)$ and $t\in[-m,0]$, we have
$$|\hat U(t,\mu)|\le C . $$
\rm\vskip 1 pc\noindent{\bf Proof.\quad} It suffices to show that there is $C>0$ such that
$$\left\vert
\hat U_n(\frac{-s}{n},\mu)
\right\vert \le C\quad
\forall s\in(0,1,\dots,mn),\quad\forall n\ge 1,\quad
\forall\mu\in{\cal M}_1(\T^p) . \eqno (3.2)$$
For $j\in(-s,\dots,-1)$ let us set
$$\tilde\gamma_{\frac{j}{n}}(v)=\left(
\frac{n}{2\pi}
\right)^\frac{p}{2} e^{
-\frac{n}{2}|v|^2
} $$
and let $\{ \tilde\mu_j \}$ be a $\{ \tilde\gamma_j \}$-sequence starting at $(-\frac{s}{n},\mu)$. By lemma 3.1 we get the first inequality below; by (1.13) and the fact that $U$ is bounded, the second one follows.
$$\hat U_n(-\frac{s}{n},\mu)\le$$
$$\sum_{j=-s}^{-1}\int_{
{\bf T}^p\times{\bf R}^p
}\left[
\frac{1}{n}L^{
\frac{1}{2}\tilde\mu_\frac{j}{n}
} (\frac{j}{n},x,nv)+\log\tilde\gamma_\frac{j}{n}(x,v)
\right] \tilde\gamma_\frac{j}{n}(x,v) {\rm d} \tilde\mu_\frac{j}{n}(x) {\rm d} v+
U(\tilde\mu_0) -
s\log\left(
\frac{n}{2\pi}
\right)^\frac{p}{2}\le$$
$$\sum_{j=-s}^{-1}\int_{{\bf T}^p\times{\bf R}^p}\left[
\frac{n}{2}|v|^2+\log\tilde\gamma_\frac{j}{n}(x,v)
\right] \tilde\gamma_\frac{j}{n}(x,v) {\rm d} \tilde\mu_\frac{j}{n}(x) {\rm d} v +
\frac{s}{n}(
||V||_\infty+||W||_\infty
) + ||U||_{\sup}
-s\log\left(\frac{n}{2\pi}\right)^\frac{p}{2} . $$
Since
$$\int_{{\bf T}^p\times{\bf R}^p}
\left[
\cinn{v}+\log\tilde\gamma_\frac{j}{n}(x,v)
\right] \tilde\gamma_\frac{j}{n}(x,v) {\rm d} \tilde\mu_\frac{j}{n}(x) {\rm d} v=
\log\left(
\frac{n}{2\pi }
\right)^\frac{p}{2} $$
and $s\le nm$, we get that
$$\hat U_n(\frac{-s}{n},\mu)\le
m(||V||_\infty+||W||_\infty)+||U||_{\sup} . \eqno (3.3)$$
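The identity used above is immediate from the form of $\tilde\gamma_\frac{j}{n}$: the integrand is constant in $v$, since
$$\cinn{v}+\log\tilde\gamma_\frac{j}{n}(v)=
\cinn{v}+\log\left(
\frac{n}{2\pi}
\right)^\frac{p}{2}-\cinn{v}=
\log\left(
\frac{n}{2\pi}
\right)^\frac{p}{2} , $$
and both $\tilde\gamma_\frac{j}{n}$ and $\tilde\mu_\frac{j}{n}$ have total mass one.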
To prove the opposite inequality, we let
$\{ \mu^\frac{1}{n}_{\frac{j}{n}} \}_j$ be a minimal
$\{ \gamma^\frac{1}{n}_{\frac{j}{n}} \}_j$-sequence starting at
$(-\frac{s}{n},\mu)$, with $s\in(1,2,\dots,mn)$. By lemma 1.1, the function $\tilde\gamma_\frac{j}{n}$ defined above minimizes the integral of $A_\frac{1}{n}$; as a consequence,
$$\int_{{\bf T}^p\times{\bf R}^p}\left[
\frac{n}{2}|v|^2+\log\gamma_\frac{j}{n}(x,v)
\right] \gamma_\frac{j}{n}(x,v) {\rm d} \mu_\frac{j}{n}(x) {\rm d} v\ge$$
$$\int_{{\bf T}^p\times{\bf R}^p}\left[
\frac{n}{2}|v|^2+\log\tilde\gamma_\frac{j}{n}(x,v)
\right] \tilde\gamma_\frac{j}{n}(x,v) {\rm d} \mu_\frac{j}{n}(x) {\rm d} v=
\log\left(
\frac{n}{2\pi }
\right)^\frac{p}{2} . $$
Lemma 3.1 implies the equality below; the first inequality comes from (1.13) and the fact that $U$ is bounded, and holds with $C_7\colon=m(||V||_\infty+||W||_\infty)+||U||_{\sup}$; the second one comes from the formula above.
$$\hat U_n\left(
\frac{-s}{n},\mu
\right) =$$
$$\sum_{j=-s}^{-1}\int_{
{\bf T}^p\times{\bf R}^p
}\left[
\frac{1}{n}L^{
\frac{1}{2}\mu_\frac{j}{n}
} (\frac{j}{n},x,nv)+\log\gamma_\frac{j}{n}(x,v)
\right] \gamma_\frac{j}{n}(x,v) {\rm d} \mu_\frac{j}{n}(x) {\rm d} v+
U(\mu_0) -
s
\log\left(
\frac{n}{2\pi}
\right)^\frac{p}{2} \ge$$
$$\sum_{j=-s}^{-1}
\int_{{\bf T}^p\times{\bf R}^p}\left[
\frac{n}{2}|v|^2+\log\gamma_\frac{j}{n}(x,v)
\right] \gamma_\frac{j}{n}(x,v) {\rm d} \mu_\frac{j}{n}(x) {\rm d} v -C_7-
s
\log\left(
\frac{n}{2\pi}
\right)^\frac{p}{2} \ge -C_7 . $$
This inequality and (3.3) imply (3.2).
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\vskip 2pc
\noindent\centerline{\bf \S 4}
\noindent\centerline{\bf Differentiability of $U$}
\vskip 1pc
In this section, we want to show that the minimizer of
$\fun{}{\psi}{S(U,\mu,\psi)}$ also minimizes a problem with a linear final condition. Proposition 4.1 below deals with a single time step, while proposition 4.6 deals with the whole history.
\prop{4.1} Let $U$ be Lipschitz and differentiable on densities (see below for a definition). Let
$\mu\in{\cal M}_1(\T^p)$, and let the minimum in the definition of $(G^\frac{1}{n}_tU)(\mu)$ be attained at
$\gamma$. Let
$\rho_{\gamma}$ be the density of $\mu\ast\gamma$, and let $f$ be the differential of $U$ at $\mu\ast\gamma=\rho_{\gamma}{\cal L}^p$. Then, there is a bounded Borel function $a(x)$ such that
$$\gamma(x,v)=e^{-\cinn{v}-f(x-v)+a(x)} . \eqno (4.1)$$
Moreover, there is a constant $M>0$, independent of $\mu$ and
$h$, such that
$$||f||_\infty\le M . \eqno (4.2)$$
\rm
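Note that, by (4.1) and the fact that $\gamma(x,\cdot)$ has integral one, the function $a(x)$ is explicit:
$$a(x)=-\log\int_{{\bf R}^p}e^{-\cinn{v}-f(x-v)} {\rm d} v ; $$
together with (4.2) and the Gaussian integral $\int_{{\bf R}^p}e^{-\cinn{v}} {\rm d} v=\left(\frac{2\pi}{n}\right)^\frac{p}{2}$, this yields
$$\left\vert
a(x)-\log\left(
\frac{n}{2\pi}
\right)^\frac{p}{2}
\right\vert \le M , $$
which shows in particular that $a$ is bounded, as asserted.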
The following definition will come in handy.
\vskip 1pc
\noindent{\bf Definition.} If $\fun{f}{{\bf T}^p}{{\bf R}}$ is a bounded Borel function, we define
$$\fun{U_f}{{\cal M}_1(\T^p)}{{\bf R}},\qquad U_f(\mu)\colon=\int_{{\bf T}^p}f {\rm d} \mu .
$$
\vskip 1pc
We sketch the proof of proposition 4.1: we are going to concentrate on the particles which lie in a small ball centered at
$x\in{\bf T}^p$, i. e. we shall consider the probability measure
$\mu_{in}\colon=\frac{1}{\mu(B(x,r))}\mu|_{B(x,r)}$. We shall see that the optimal strategy for $\mu_{in}$ approximates, as
$r\rightarrow 0$, the optimal strategy for a single particle problem; the final condition is the linear $U_f$, where $f$ is the derivative of $U$ at $\mu\ast\gamma$. When the final condition is linear, the minimizer can be written explicitly by [14], and (4.1) will follow. As for the potential, in the case of the single time step it won't even enter the picture; for more time steps, i. e. in lemma 4.5, we shall see that it tends to the mean field generated by all the particles.
In general, the gradient of a function on the space of probability measures is defined as a vector field ([2] and [16]) which is rather delicate to find; however, in our case $\mu\ast\gamma$ is the convolution of a probability measure with an $L^1$ function, and thus has an $L^1$ density. As a consequence, we can use the standard definition of derivative in $L^1({\bf T}^p)$.
\vskip 1pc
\noindent{\bf Definition.} We say that $\fun{U}{{\cal M}_1(\T^p)}{{\bf R}}$ is differentiable on densities if the following two points hold.
\noindent $i$) There is a function
$\fun{\tilde U}{L^1({\bf T}^p)}{{\bf R}}$ such that
$U(\phi{\cal L}^p)=\tilde U(\phi)$ when $\phi{\cal L}^p$ is a probability measure. We ask that $\tilde U$ be differentiable at every probability density $\phi$; in other words, there is a function
$h\in L^\infty({\bf T}^p)$ such that
$$\left\vert
\tilde U(\phi+\psi)-\tilde U(\phi)-
\int_{{\bf T}^p}h\cdot\psi\hbox{{\rm d}$x$}
\right\vert = o(||\psi||_{L^1({\bf T}^p)}) . $$
We set $U^\prime(\phi{\cal L}^p)\colon=h$. Actually, we shall need the formula above only on the affine space of probability densities, i. e. when $\psi$ has zero mean.
\noindent $ii$) We also ask that there is $M>0$ such that, if
$h=U^\prime(\phi{\cal L}^p)$ for a probability density $\phi$, then
$$||h||_{L^\infty({\bf T}^p)}\le M . $$
Note that, if $U$ is Lipschitz for $d_1$, then point $ii$) holds automatically; indeed, in this case $U$ is Lipschitz also for the total variation distance, which easily implies point $ii$).
\vskip 1pc
The typical example of a function $U$ differentiable on densities is the usual one: we take $k$ bounded Borel functions
$\fun{f_1,\dots,f_k}{{\bf T}^p}{{\bf R}}$ and we set
$$U(\mu)=\left(
\int_{{\bf T}^p}f_1 {\rm d} \mu
\right)\cdot
\dots
\cdot\left(
\int_{{\bf T}^p}f_k {\rm d} \mu
\right) . $$
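For this $U$, the derivative on densities can be computed by a routine expansion of the product: if $\phi$ and $\phi+\psi$ are probability densities, then
$$U^\prime(\phi{\cal L}^p)=\sum_{i=1}^kf_i\cdot
\prod_{j\not=i}\int_{{\bf T}^p}f_j\phi\hbox{{\rm d}$x$} , $$
and point $ii$) of the definition holds with $M=k\prod_{i=1}^k||f_i||_\infty$.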
As we saw above, $\mu\ast\gamma$ has a density; this leads us to the first of the following definitions.
\vskip 1pc
\noindent {\bf Definitions.} $\bullet$) Let $U\in C({\cal M}_1(\T^p))$; we define
$$||U||_{den}\colon=\sup\{
|U(\rho{\cal L}^p)| \;\colon\; \rho\in L^1({\bf T}^p),\quad \rho\ge 0,\quad
\int_{{\bf T}^p}\rho(x)\hbox{{\rm d}$x$}=1
\} . $$
If $f\in L^\infty({\bf T}^p)$, then it is easy to see that
$$||U_f||_{den}= ||f||_{L^\infty({\bf T}^p)} . $$
Since in $L^\infty({\bf T}^p)$ we disregard null sets with respect to the Lebesgue measure, the $\sup$ of $|U_f|$ on ${\cal M}_1(\T^p)$ could be larger than $||f||_{L^\infty({\bf T}^p)}$.
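To check the equality, note that $|U_f(\rho{\cal L}^p)|\le||f||_{L^\infty({\bf T}^p)}$ for every probability density $\rho$; conversely, testing $U_f$ on the normalized characteristic function of one of the two sets $\{ f>||f||_{L^\infty}-\epsilon \}$, $\{ f<-||f||_{L^\infty}+\epsilon \}$ (at least one of which has positive Lebesgue measure), we get $||U_f||_{den}\ge||f||_{L^\infty({\bf T}^p)}-\epsilon$.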
\noindent$\bullet$) We want to isolate the particles in $B(x,r)$; thus, for $\mu\in{\cal M}_1(\T^p)$ and $x\in{\bf T}^p$, we define
$$\mu_{ext}=\mu\vert B(x,r)^c,\qquad
\mu_{in}=\frac{1}{\mu(B(x,r))}\mu\vert B(x,r) . $$
\noindent$\bullet$) Let $U$, $f$, $\mu$ and $\gamma$ be as in the hypotheses of proposition 4.1; for $\psi\in{\cal D}_\mu$, we define the function
$U^r_\psi$ as
$$U^r_{\psi}(\lambda)=\frac{1}{\mu(B(x,r))}\cdot
\left\{
U[
\mu_{ext}\ast\psi+\mu(B(x,r))\lambda
] -U(\mu\ast\gamma) +\mu(B(x,r))\cdot U_f(\mu_{in}\ast\gamma)
\right\} = $$
$$\frac{1}{\mu(B(x,r))}\cdot\big\{
U[
\mu\ast\psi+\mu(B(x,r))\cdot(\lambda-\mu_{in}\ast\psi)
] -U(\mu\ast\gamma) +\mu(B(x,r))\cdot U_f(\mu_{in}\ast\gamma)
\big\} . \eqno (4.3)$$
As for the second equality above, it comes from the fact, easy to check, that the operator $\ast$ is linear in $\mu$:
$(\mu_1+\mu_2)\ast\gamma=\mu_1\ast\gamma+\mu_2\ast\gamma$.
\vskip 1pc
Lemma 4.2 below shows that $U^r_\gamma$ is the final condition that is seen by the particles in $B(x,r)$; by lemma 4.3 below, $U^r_\gamma$ is very close to the derivative of $U$ at $\mu\ast\gamma$.
\lem{4.2} Let $U$, $\mu$ and $\gamma$ be as in proposition 4.1, let
$x\in{\bf T}^p$ and let $U^r_\gamma$ be defined as in (4.3); then $\gamma|_{B(x,r)\times{\bf R}^p}$ minimizes $\fun{}{\psi}{S(U^r_\gamma,\mu_{in},\psi)}$.
\rm\vskip 1 pc\noindent{\bf Proof.\quad} We set
$$\hat U(\nu)=U(\nu)-U(\mu\ast\gamma)+
\mu(B(x,r))\cdot U_f(\mu_{in}\ast\gamma) $$
and we note that the minima of
$S(U,\mu,\cdot)$ coincide with the minima of
$S(\hat U,\mu,\cdot)$. Indeed, adding a constant to the final condition does not change the set of minima.
Next, we isolate the particles in $B(x,r)$: taking
$\psi\in{\cal D}_\mu$, the first equality below is the definition of $S$, while the second one comes from the definition of $\mu_{in}$ and $\mu_{ext}$.
$$S(\hat U,\mu,\psi)=
\int_{{\bf T}^p\times{\bf R}^p}\left[
\cinn{v}+\log\psi
\right] \psi {\rm d} \mu(x) {\rm d} v+
\hat U(\mu\ast\psi)=$$
$$\int_{{\bf T}^p\times{\bf R}^p}\left[
\cinn{v}+\log\psi(x,v)
\right] \psi(x,v) {\rm d} \mu_{ext}(x) {\rm d} v+$$
$$\mu(B(x,r))\cdot
\int_{{\bf T}^p\times{\bf R}^p}\left[
\cinn{v}+\log\psi(x,v)
\right] \psi(x,v) {\rm d} \mu_{in}(x) {\rm d} v +
\hat U(\mu_{ext}\ast\psi+\mu(B(x,r))\mu_{in}\ast\psi) . $$
If $U^r_{\psi}$ is defined as in (4.3), then we can write the formula above as
$$S(\hat U,\mu,\psi)=$$
$$\int_{{\bf T}^p\times{\bf R}^p}\left[
\cinn{v}+\log\psi(x,v)
\right] \psi(x,v) {\rm d} \mu_{ext}(x) {\rm d} v+ \eqno (4.4)_a$$
$$\mu(B(x,r))\cdot S(U^r_{\psi},\mu_{in},\psi) .
\eqno (4.4)_b$$
Since $\gamma$ minimizes in (4.4) and $(4.4)_a$ does not depend on
$\gamma |_{B(x,r)\times{\bf R}^p}$, we have that
$\gamma|_{B(x,r)\times{\bf R}^p}$ must minimize $(4.4)_b$, i. e.
$\fun{}{\psi}{S(U^r_\psi,\mu_{in},\psi)}$. This is almost the thesis, save for the fact that we have $U^r_\psi$ instead of $U^r_\gamma$. But we can restrict to the functions $\psi$ such that
$\psi |_{B(x,r)^c\times{\bf R}^p}=\gamma |_{B(x,r)^c\times{\bf R}^p}$; in this way we have that $\mu_{ext}\ast\psi=\mu_{ext}\ast\gamma$; since $\psi$ enters the definition of $U^r_\psi$ only through
$\mu_{ext}\ast\psi$ (this is the first equality of (4.3)), we get that
$U^r_\psi=U^r_\gamma$, and the thesis follows.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\lem{4.3} Let $U$, $f$, $\mu$ and $\gamma$ be as in the hypotheses of proposition 4.1; let us suppose that $x$ is not an atom of $\mu$. Then,
$$\lim_{r\rightarrow 0}||U^r_\gamma-U_f||_{den}=0 . \eqno (4.5)$$
\rm\vskip 1 pc\noindent{\bf Proof.\quad} Let $\eta$ be a probability density on ${\bf T}^p$. The first equality below is the definition of $U^r_\gamma$, the second one comes from the fact that $U_f$ is the differential of $U$ at $\mu\ast\gamma$; in the "small oh" we have denoted by $\rho_{in}$ the density of
$\mu_{in}\ast\gamma$.
$$|
U^r_\gamma(\eta{\cal L}^p)-U_f(\eta{\cal L}^p)
| = $$
$$\Bigg\vert
\frac{1}{\mu(B(x,r))} \cdot
\{
U[
\mu\ast\gamma+\mu(B(x,r))\cdot(\eta{\cal L}^p-\mu_{in}\ast\gamma)
]-U(\mu\ast\gamma)+$$
$$\mu(B(x,r))\cdot U_f(\mu_{in}\ast\gamma)
\} -U_f(\eta{\cal L}^p)
\Bigg\vert =$$
$$ \Bigg\vert
\frac{1}{\mu(B(x,r))}
\{
U_f[
\mu(B(x,r))\cdot(\eta{\cal L}^p-\mu_{in}\ast\gamma)
]+\mu(B(x,r))\cdot U_f(\mu_{in}\ast\gamma)+$$
$$o[\mu(B(x,r))\cdot ||\eta-\rho_{in}||_{L^1({\bf T}^p)}]
\}
-U_f(\eta{\cal L}^p)
\Bigg\vert . $$
Now we note that, for every probability density $\eta$,
$$\mu(B(x,r))\cdot ||\eta-\rho_{in}||_{L^1({\bf T}^p)}\le 2\mu(B(x,r)) $$
and thus
$$|
U^r_\gamma(\eta{\cal L}^p)-U_f(\eta{\cal L}^p)
| =
\frac{1}{\mu(B(x,r))} o(\mu(B(x,r))) $$
where the "small oh" does not depend on $\eta$. Since $x$ is not an atom, $\mu(B(x,r))\rightarrow 0$ and the thesis follows.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\lem{4.4} Let $U$ and $\mu$ be as in proposition 4.1 and let $\gamma$ minimize $S(U,\mu,\cdot)$. Then, for $\mu$ a. e. $x\in{\bf T}^p$ which is not an atom,
$$\liminf_{r\rightarrow 0}S(U^r_\gamma,\mu_{in},\gamma)\ge
S(U_f,\delta_x,\gamma) \eqno (4.6)$$
and
$$\limsup_{r\rightarrow 0} S(U^r_\gamma,\mu_{in},\gamma)\le
\min_\psi S(U_f,\delta_x,\psi) . \eqno (4.7)$$
\rm\vskip 1 pc\noindent{\bf Proof.\quad} {\bf Step 1.} We begin by showing that, for $\mu$ a. e. $x$, atom or not, the three facts below hold. First, that
$$\lim_{r\rightarrow 0}
\frac{1}{\mu(B(x,r))}
\int_{B(x,r)\times{\bf R}^p}
|
\gamma(x,v)-\gamma(y,v)
| {\rm d} \mu(y) {\rm d} v=0 . \eqno (4.8)$$
Second, if $f=U^\prime(\mu\ast\gamma)$, then
$$\lim_{r\rightarrow 0}
\frac{1}{\mu(B(x,r))}
\int_{B(x,r)\times{\bf R}^p} |
f(y-v)\gamma(y,v)-f(x-v)\gamma(x,v)
| {\rm d} \mu(y) {\rm d} v =0 . \eqno (4.9)$$
Third, if $\hat\gamma(v)=\left(\frac{n}{2\pi}\right)^\frac{p}{2}
e^{-\cinn{v}}$, then
$$\lim_{r\rightarrow 0}
\int_{B(x,r)\times{\bf R}^p}|f(y-v)-f(x-v)|\hat\gamma(v) {\rm d} \mu_{in}(y) {\rm d} v
=0 . \eqno (4.10)$$
We begin with the standard proof of (4.8): we let
$\{ \gamma_m \}_{m\ge 1}$ be a dense sequence in $L^1({\bf R}^p)$ and consider the Borel measures on ${\bf T}^p$
$$\mu_m(A)\colon=\int_{A\times{\bf R}^p}
|\gamma(y,v)-\gamma_m(v)| {\rm d} \mu(y) {\rm d} v . $$
By the Lebesgue differentiation theorem, for all $x\in E$ with
$\mu(E^c)=0$ we have that, for all $m$,
$$\lim_{r\rightarrow 0} \frac{1}{\mu(B(x,r))}
\int_{B(x,r)\times{\bf R}^p}|\gamma(y,v)-\gamma_m(v)| {\rm d} \mu(y) {\rm d} v=
\int_{{\bf R}^p}|\gamma(x,v)-\gamma_m(v)| {\rm d} v . \eqno (4.11)$$
For $x\in E$ and $\epsilon>0$, we choose $\gamma_m$ such that
$$\int_{{\bf R}^p}|\gamma(x,v)-\gamma_m(v)| {\rm d} v\le\epsilon . \eqno (4.12)$$
The first inequality below is obvious, while the equality follows by (4.11) and the last inequality by (4.12).
$$\limsup_{r\rightarrow 0}\frac{1}{\mu(B(x,r))}
\int_{B(x,r)\times{\bf R}^p}
|\gamma(y,v)-\gamma(x,v)| {\rm d} \mu(y) {\rm d} v \le $$
$$\lim_{r\rightarrow 0}\frac{1}{\mu(B(x,r))}
\int_{B(x,r)\times{\bf R}^p}
|\gamma(y,v)-\gamma_m(v)|
{\rm d} \mu(y) {\rm d} v +$$
$$\lim_{r\rightarrow 0}\frac{1}{\mu(B(x,r))}
\int_{B(x,r)\times{\bf R}^p}
|\gamma_m(v)-\gamma(x,v)|
{\rm d} \mu(y) {\rm d} v = $$
$$2\int_{{\bf R}^p}|\gamma_m(v)-\gamma(x,v)| {\rm d} v\le 2\epsilon . $$
Since $\epsilon$ is arbitrary, (4.8) follows. Formulas (4.9) and (4.10) follow by the same argument, but applied to $f(y-v)\gamma(y,v)$ and
$f(y-v)\hat\gamma(v)$ respectively.
For the next steps, we suppose that $x\in E$ and $x$ is not an atom of $\mu$.
\noindent{\bf Step 2.} Here we prove (4.6). For $\epsilon>0$ and
$x\in E$ let us set
$$F\colon=\{
y\in B(x,r)\;\colon\; ||\gamma(y,\cdot)-\gamma(x,\cdot)||_{L^1({\bf R}^p)}<\epsilon
\} . $$
By (4.8) and the Chebyshev inequality we have that
$$\mu_{in}(F)\rightarrow 1 \txt{and}
\mu_{in}(B(x,r)\setminus F)\rightarrow 0
\txt{as}r\rightarrow 0 . \eqno (4.13)$$
Now,
$$\int_{B(x,r)}
{\rm d} \mu_{in}(y)\int_{{\bf R}^p}A_\frac{1}{n}(\gamma,(y,v)) {\rm d} v=$$
$$\int_{F}
{\rm d} \mu_{in}(y)\int_{{\bf R}^p}A_\frac{1}{n}(\gamma,(y,v)) {\rm d} v+
\int_{B(x,r)\setminus F}
{\rm d} \mu_{in}(y)\int_{{\bf R}^p}A_\frac{1}{n}(\gamma,(y,v)) {\rm d} v .
\eqno (4.14) $$
We saw in lemma 1.3 that the map
$$\fun{}{\psi}{\int_{{\bf R}^p}A_\frac{1}{n}(\psi,v) {\rm d} v}$$
is l. s. c. for the $L^1$ topology; thus, there is $\delta(\epsilon)\rightarrow 0$ as
$\epsilon\rightarrow 0$ such that
$$\int_{{\bf R}^p}A_\frac{1}{n}(\gamma(y,\cdot),v) {\rm d} v\ge
\int_{{\bf R}^p}A_\frac{1}{n}(\gamma(x,\cdot),v) {\rm d} v-\delta(\epsilon)\quad
\forall y\in F. $$
This implies the first inequality below, while the equality comes from (4.13).
$$\liminf_{r\rightarrow 0}\int_F {\rm d} \mu_{in}(y)
\int_{{\bf R}^p}A_\frac{1}{n}(\gamma,(y,v)) {\rm d} v\ge
\liminf_{r\rightarrow 0}\mu_{in}(F)\left[
\int_{{\bf R}^p}A_\frac{1}{n}(\gamma(x,\cdot),v) {\rm d} v
-\delta(\epsilon)
\right] = $$
$$\int_{{\bf R}^p}A_\frac{1}{n}(\gamma(x,\cdot),v) {\rm d} v -\delta(\epsilon) . $$
By (4.13) and (1.6) we get that
$$\liminf_{r\rightarrow 0}\int_{B(x,r)\setminus F} {\rm d} \mu_{in}(y)
\int_{{\bf R}^p}A_\frac{1}{n}(\gamma,(y,v)) {\rm d} v\ge 0 . $$
Thus, by the last two formulas and (4.14),
$$\liminf_{r\rightarrow 0} \int_{B(x,r)} {\rm d} \mu_{in}(y)
\int_{{\bf R}^p}A_\frac{1}{n}(\gamma,(y,v)) {\rm d} v\ge
\int_{{\bf R}^p}A_\frac{1}{n}(\gamma(x,\cdot),v) {\rm d} v -\delta(\epsilon) . $$
Since $\epsilon$ is arbitrary and $\delta(\epsilon)$ tends to zero as $\epsilon\rightarrow 0$, we have that
$$\liminf_{r\rightarrow 0} \int_{B(x,r)} {\rm d} \mu_{in}(y)
\int_{{\bf R}^p}A_\frac{1}{n}(\gamma,(y,v)) {\rm d} v\ge
\int_{{\bf R}^p}A_\frac{1}{n}(\gamma(x,\cdot),v) {\rm d} v . $$
The equality below comes from the definition of $U_f$ and of
$\mu\ast\gamma$; the limit comes from (4.9).
$$|
U_f(\mu_{in}\ast\gamma)-U_f(\delta_x\ast\gamma)
| =$$
$$\left\vert
\int_{{\bf T}^p\times{\bf R}^p}f(y-v)\gamma(y,v) {\rm d} \mu_{in}(y) {\rm d} v-
\int_{{\bf R}^p}f(x-v)\gamma(x,v) {\rm d} v
\right\vert \rightarrow 0 . $$
The first inequality below comes from the definition of $S$; the first equality follows from (4.5); the last inequality comes from the last two formulas above.
$$\liminf_{r\rightarrow 0} S(U^r_\gamma,\mu_{in},\gamma)\ge
\liminf_{r\rightarrow 0}[
S(U_f,\mu_{in},\gamma)- ||U^r_\gamma-U_f||_{den}
] =$$
$$\liminf_{r\rightarrow 0} S(U_f,\mu_{in},\gamma)=
\liminf_{r\rightarrow 0}\left[
\int_{B(x,r)} {\rm d} \mu_{in}(y)
\int_{{\bf R}^p} A_\frac{1}{n}(\gamma,(y,v)) {\rm d} v+
U_f(\mu_{in}\ast\gamma)
\right] \ge$$
$$\int_{{\bf R}^p}A_\frac{1}{n}(\gamma(x,\cdot),v) {\rm d} v+
U_f(\delta_x\ast\gamma)=S(U_f,\delta_x,\gamma) . $$
This proves (4.6).
\noindent{\bf Step 3.} We prove (4.7).
We know from [14] that the function
$\fun{}{\psi}{S(U_f,\delta_x,\psi)}$ has a unique minimum, given by
$$\tilde\gamma(v)=e^{
-\cinn{v}-f(x-v)+a(x)
} $$
with $a(x)$ such that $\tilde\gamma$ is a probability density. In order to compare the actions, we are going to plug $\tilde\gamma$ into
$S(U_\gamma^{r},\mu_{in},\cdot)$. The first inequality below holds because we have seen in lemma 4.2 that $\gamma$ minimizes
$\fun{}{\psi}{S(U^r_\gamma,\mu_{in},\psi)}$; the second one holds by the definition of $S$; the first equality is the definition of $S$ while the last one comes from the fact that
$\tilde\gamma$ does not depend on $y$ and $\mu_{in}$ is a probability measure.
$$S(U^r_\gamma,\mu_{in},\gamma)\le
S(U^{r}_\gamma,\mu_{in},\tilde\gamma)\le
S(U_f,\mu_{in},\tilde\gamma)+||U_\gamma^{r}-U_f||_{den}= $$
$$\int_{B(x,r)} {\rm d} \mu_{in}(y)
\int_{{\bf R}^p}A_\frac{1}{n}(\tilde\gamma,v) {\rm d} v+U_f(\mu_{in}\ast\tilde\gamma)+
||U_\gamma^{r}-U_f||_{den}=$$
$$\int_{{\bf R}^p}A_\frac{1}{n}(\tilde\gamma,v) {\rm d} v+
U_f(\mu_{in}\ast\tilde\gamma)+||U_\gamma^{r}-U_f||_{den} . $$
By (4.5) we have that
$$||U_\gamma^{r}-U_f||_{den}\rightarrow 0
\txt{as} r\rightarrow 0 . $$
The first inequality below comes from the definitions of
$\mu_{in}\ast\gamma$ and $\delta_x\ast\gamma$; the second one, from the definition of $\tilde\gamma$,
$\hat\gamma$ (the definitions are just above and in step 1 respectively) and the fact that $||f||_\infty\le M$; the limit comes from (4.10).
$$|U_f(\mu_{in}\ast\tilde\gamma)-U_f(\delta_x\ast\tilde\gamma)|\le
\int_{{\bf T}^p\times{\bf R}^p}|f(y-v)-f(x-v)|\tilde\gamma(v)
{\rm d} \mu_{in}(y) {\rm d} v\le$$
$$e^{2M}\int_{{\bf T}^p\times{\bf R}^p}|f(y-v)-f(x-v)|\hat\gamma(v) {\rm d} \mu_{in}(y) {\rm d} v
\rightarrow 0 . $$
From the last three formulas, we get that
$$\limsup_{r\rightarrow 0}S(U^{r}_\gamma,\mu_{in},\gamma)\le
S(U_f,\delta_x,\tilde\gamma) $$
as we wanted.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\noindent{\bf End of the proof of proposition 4.1.} By (4.6) and (4.7) we see that, for $\mu$ a. e. $x\in{\bf T}^p$ which is not an atom,
$$S(U_f,\delta_x,\gamma(x,\cdot)) \le
\min_\psi S(U_f,\delta_x,\psi) . $$
In other words, $\gamma(x,\cdot)$ coincides with a minimum of
$S(U_f,\delta_x,\cdot)$; by [14], this functional has just one minimum, which is the right hand side of formula (4.1). This proves (4.1), while (4.2) follows from point $ii$) of the definition of differentiability on densities.
It remains to prove (4.1) when $x$ is an atom of $\mu$. To show this, we have to enlarge our set of controls. Namely, let us suppose for simplicity that $\mu$ has just one atom, say $x_0$ with $\mu(\{ x_0 \})=\lambda$; let us write
$$\mu=\tilde\mu+\lambda\delta_{x_0} . $$
Then we assign to each $x\not=x_0$ a strategy $\gamma(x,\cdot)$ as before, but we assign to $x_0$ an enlarged set of controls, say
$\gamma_w(x_0,\cdot)$ with $w\in [0,\lambda]$: in a sense, we are supposing that in $x_0$ sits a continuum of particles, each parametrized by $w$ and each with a strategy $\gamma_w(x_0,\cdot)$. We define
$$K(U,\mu,\gamma)=
\int_{{\bf T}^p\times{\bf R}^p}A_\frac{1}{n}(\gamma,(x,v)) {\rm d} \tilde\mu(x) {\rm d} v+
\int_0^\lambda {\rm d} w\int_{{\bf R}^p}A_\frac{1}{n}(\gamma_w(x_0,\cdot),v) {\rm d} v+$$
$$U\left(
\tilde\mu\ast\gamma+\int_0^\lambda (\delta_{x_0}\ast\gamma_w(x_0,\cdot)) {\rm d} w
\right) . $$
Two things are clear:
\noindent 1) first, that the minimum of $K(U,\mu,\cdot)$ is lower than the minimum of $S$, simply because we have a larger set of controls.
\noindent 2) Second, that if we find a minimum $\gamma$ of $K$ such that $\gamma_w(x_0,\cdot)$ does not depend on $w$, then it is also a minimum of $S$: indeed, in this case $K(U,\mu,\gamma)=S(U,\mu,\gamma)$ and point 1) implies the assertion.
Thus, (4.1) follows if we show that any minimum
$(\gamma_w(x_0,\cdot),\gamma(x,\cdot))$ of
$K(U,\mu,\cdot)$ is given by (4.1) for ${\cal L}^1$ a. e.
$w\in[0,\lambda]$ and $\tilde\mu$ a. e. $x\in{\bf T}^p$. This is done exactly as in lemma 4.4: indeed, instead of the torus we are considering
$({\bf T}^p\setminus \{ x_0 \})\sqcup [0,\lambda]$ with the measure
$\tilde\mu$ on ${\bf T}^p\setminus \{ x_0 \}$ and ${\cal L}^1$ on $[0,\lambda]$; to this space and measure the proof of lemma 4.4 applies. We avoid repeating the details: if $w_0\in(0,\lambda)$, as in lemma 4.4 we isolate the particles $w$ with $|w-w_0|<r$ and, letting $r\rightarrow 0$, we show that
$$S(U_f,\delta_{x_0},\gamma_{w_0}(x_0,\cdot))\le
\min_\psi S(U_f,\delta_{x_0},\psi) . $$
Since the unique minimizer of the expression on the right is the one given by [14], i. e. formula (4.1), we are done.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
Proposition 4.1 gives an explicit expression for the minimizer
$\gamma_\frac{-1}{n}$; now we want to extend this result to more than one time-step, i. e. to the situation of section 3. However, if we want to find an explicit expression for the minimizer
$(\gamma_\frac{-s}{n},\dots,\gamma_\frac{-1}{n})$ of
$$I(\mu,\gamma_\frac{-s}{n},\dots,\gamma_\frac{-1}{n})+U(\mu_0),$$
we need a slightly different proof. The reason for this is that, even if we isolate the particles in $B(z_0,r)$ at the initial time
$\frac{-s}{n}$, they are going to spread over all ${\bf T}^p$ at time
$\frac{-s+1}{n}$, and after this time their trajectories coincide with the rest of the pack. In other words, after the first step, there is no way to control some particles separately from the other ones. To tackle this problem, we are going to minimize over a larger set of controls which keeps track of the initial position.
\vskip 1pc
\noindent{\bf Definition.} We consider the functions
$\psi^\frac{1}{n}_{\frac{-s}{n},z},\psi^\frac{1}{n}_{\frac{-s+1}{n},z},\dots, \psi^\frac{1}{n}_{\frac{-1}{n},z}$ depending measurably on
$z\in{\bf T}^p$; we let
$$\mu^\psi_{\frac{-s}{n},z}=\delta_z,\quad
\mu^\psi_{\frac{-s+1}{n},z}=\mu^\psi_{\frac{-s}{n},z}\ast
\psi^\frac{1}{n}_{\frac{-s}{n},z},\quad\dots,\quad
\mu^\psi_{0,z}=\mu^\psi_{\frac{-1}{n},z}\ast
\psi^\frac{1}{n}_{\frac{-1}{n},z} $$
be a $(\psi^\frac{1}{n}_{\frac{-s}{n},z},\psi^\frac{1}{n}_{\frac{-s+1}{n},z},\dots, \psi^\frac{1}{n}_{\frac{-1}{n},z})$-sequence starting at
$(-\frac{s}{n},\delta_z)$. In other words, $\mu^\psi_{\frac{j}{n},z}$ is the distribution at time $\frac{j}{n}$ of the particle which started at $z$.
If $\mu_\frac{-s}{n}$ is the initial distribution of the particles at time
$\frac{-s}{n}$, we define the total distribution of all the particles at time $\frac{j}{n}\ge\frac{-s}{n}$ by
$$\mu^\psi_\frac{j}{n}=
\int_{{\bf T}^p}\mu^\psi_{\frac{j}{n},z} {\rm d} \mu_\frac{-s}{n}(z) . $$
We say that the mean field generated by all the particles at time
$\frac{j}{n}$ is
$$W^{\mu^\psi_\frac{j}{n}}(x) . $$
We define the cost for particle $z$ analogously to the functional
$I$ of lemma 3.1; it accounts for the history of a particle subject to the mean field generated by the whole community.
$$I_\frac{1}{2}(\delta_z,\psi^\frac{1}{n}_{\frac{-s}{n},z},\dots,
\psi^\frac{1}{n}_{\frac{-1}{n},z})=$$
$$\sum_{j=-s}^{-1}\int_{{\bf T}^p\times{\bf R}^p}
[
\frac{1}{n} L^{\frac{1}{2}\mu_\frac{j}{n}^\psi}(\frac{j}{n},x,nv)
+\log\psi_{\frac{j}{n},z}(x,v)
]\psi_{\frac{j}{n},z}(x,v)
{\rm d} \mu^\psi_{\frac{j}{n},z}(x) {\rm d} v=$$
$$\sum_{j=-s}^{-1}\int_{{\bf T}^p\times{\bf R}^p}\{
A_\frac{1}{n}(\psi^\frac{1}{n}_{\frac{j}{n},z},(x,v))-
\frac{1}{n}[
V(\frac{j}{n},x)+ W^{\frac{1}{2}\mu^\psi_\frac{j}{n}}(x)]
\psi_{\frac{j}{n},z}(x,v)
\}
{\rm d} \mu_{\frac{j}{n},z}^\psi(x) {\rm d} v . $$
We have called it $I_\frac{1}{2}$ because of the coefficient $\frac{1}{2}$ in
$W^{\frac{1}{2}\mu^\psi_\frac{j}{n}}$; we shall call $I_1$ its counterpart with $W^{\mu^\psi_\frac{j}{n}}$.
Integrating over the initial distribution $\mu_\frac{-s}{n}$ and adding the final condition, we define the cost for all particles.
$$J_\frac{1}{2}[U,\quad\mu_\frac{-s}{n},\quad(\psi^\frac{1}{n}_{\frac{-s}{n},z},\psi^\frac{1}{n}_{\frac{-s+1}{n},z},\dots,
\psi^\frac{1}{n}_{\frac{-1}{n},z})]
\colon=$$
$$\int_{{\bf T}^p}
I_\frac{1}{2}(\delta_z,\psi^\frac{1}{n}_{\frac{-s}{n},z},\psi^\frac{1}{n}_{\frac{-s+1}{n},z},\dots \psi^\frac{1}{n}_{\frac{-1}{n},z})
{\rm d} \mu_\frac{-s}{n}(z) +
U\left(
\mu_0^\psi
\right) . $$
We omit the proof that $J_\frac{1}{2}(U,\mu_\frac{-s}{n},\cdot)$ has a minimum, since it is identical to that of proposition 1.4.
\lem{4.5} Let
$(\gamma^\frac{1}{n}_{\frac{-s}{n},z},\dots,\gamma^\frac{1}{n}_{\frac{-1}{n},z})$ minimize the functional
$$\fun{}{(\psi^\frac{1}{n}_{\frac{-s}{n},z},\psi^\frac{1}{n}_{\frac{-s+1}{n},z},\dots, \psi^\frac{1}{n}_{\frac{-1}{n},z})}{
J_\frac{1}{2}[U,\quad\mu_\frac{-s}{n},\quad(\psi^\frac{1}{n}_{\frac{-s}{n},z},\psi^\frac{1}{n}_{\frac{-s+1}{n},z},\dots, \psi^\frac{1}{n}_{\frac{-1}{n},z})]} . $$
Then, for $\mu_\frac{-s}{n}$ a. e. $z\in{\bf T}^p$ which is not an atom,
$(\gamma^\frac{1}{n}_{\frac{-s}{n},z},\dots,\gamma^\frac{1}{n}_{\frac{-1}{n},z})$ does not depend on $z$ and has the following expression. Let $f_0$ be the derivative of $U$ at the measure $\mu_0^\gamma$ defined above. For
$j\in(-s+1,\dots,0)$ we define by backward induction
$$f_\frac{j-1}{n}(x)=-\log\int_{{\bf R}^p}
e^{-\cinn{v}+
\frac{1}{n}V(\frac{j}{n},x)+\frac{1}{n}W^{\mu_\frac{j}{n}}(x)-
f_\frac{j}{n}(x-v)
} {\rm d} v . $$
Then, we have that
$$\gamma_\frac{j}{n}(x,v)=
e^{
-\cinn{v}-f_\frac{j}{n}(x-v)+a_\frac{j}{n}(x)
} \eqno (4.15) $$
with $a_\frac{j}{n}(x)$ chosen in such a way that
$\gamma_\frac{j}{n}(x,\cdot)$ is a probability density for all $x\in{\bf T}^p$.
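\rm\noindent Note, in passing, a consistency check: imposing that $\gamma_\frac{j}{n}(x,\cdot)$ in (4.15) have integral one and comparing with the definition of $f_\frac{j-1}{n}$, one finds that, for $j\in(-s+1,\dots,-1)$, the normalization constants are
$$a_\frac{j}{n}(x)=f_\frac{j-1}{n}(x)+
\frac{1}{n}V(\frac{j}{n},x)+\frac{1}{n}W^{\mu_\frac{j}{n}}(x) . $$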
\rm\vskip 1 pc\noindent{\bf Proof.\quad} In the first two steps below, which correspond to lemma 4.2, we isolate the particles in $B(z_0,r)$; in the third one we let
$r\rightarrow 0$ as in lemma 4.3. It is in this step that we need that
$\mu_\frac{-s}{n}(\{ z_0 \})=0$.
\noindent {\bf Step 1.} In this step, we set some notation and add a constant to $U$, as we did at the beginning of the proof of lemma 4.2.
For $j\ge -s$ we set
$$\mu^\psi_{\frac{j}{n},int}=
\frac{1}{\mu_\frac{-s}{n}(B(z_0,r))}
\int_{B(z_0,r)}\mu^\psi_{\frac{j}{n},z} {\rm d} \mu_\frac{-s}{n}(z) . $$
This is the distribution at time $\frac{j}{n}$ of the particles which started in $B(z_0,r)$ at time $\frac{-s}{n}$.
Let $(\gamma^\frac{1}{n}_{\frac{-s}{n},z},
\gamma^\frac{1}{n}_{\frac{-s+1}{n},z},\dots, \gamma^\frac{1}{n}_{\frac{-1}{n},z})$ be as in the hypotheses; as at the beginning of this section, we define
$$\tilde U(\mu)\colon=U(\mu)-
U\left(
\mu_0^\gamma
\right) +
\mu_\frac{-s}{n}(B(z_0,r))U_f(\mu^\gamma_{0,in}) $$
and
$$U^r_\psi(\lambda)=
\frac{1}{\mu_\frac{-s}{n}(B(z_0,r))}
\tilde U\left[
\int_{{\bf T}^p}\mu_{0,z}^\psi {\rm d} \mu_\frac{-s}{n}(z)+
\mu_\frac{-s}{n}(B(z_0,r))\cdot
\left(
\lambda-\mu_{0,int}^\psi
\right)
\right] . $$
As in lemma 4.2, we shall see that $U^r_\gamma$ is the final condition seen by the particles in $B(z_0,r)$. Note that, as in lemma 4.2,
$$\tilde U(\mu^\psi_0)=\mu_\frac{-s}{n}(B(z_0,r))U^r_\psi(\mu^\psi_{0,int}) . \eqno (4.16)$$
Since the addition of a constant to $U$ changes neither the set of minima of $J_\frac{1}{2}(\mu_\frac{-s}{n},\cdot)$ nor that of
$I_\frac{1}{2}(\mu_\frac{-s}{n},\cdot)$, we have that
$(\gamma^\frac{1}{n}_{\frac{-s}{n},z},\dots,\gamma^\frac{1}{n}_{\frac{-1}{n},z})$ is a minimum of the functional
$$\fun{}{
(\psi^\frac{1}{n}_{\frac{-s}{n},z},\dots,\psi^\frac{1}{n}_{\frac{-1}{n},z})
}{
J_\frac{1}{2}[\tilde U,\quad\mu_\frac{-s}{n},\quad
(\psi^\frac{1}{n}_{\frac{-s}{n},z},\dots,\psi^\frac{1}{n}_{\frac{-1}{n},z})]
} . $$
\noindent{\bf Step 2.} In this step, we deal with the mutual interaction; this is the main difference from lemma 4.2, where there was none. We define $W_{\mu_\frac{j}{n},in}$ and
$W_{\mu_\frac{j}{n},ext}$ as the potentials generated by the particles starting in $B(z_0,r)$ and $B(z_0,r)^c$ respectively, i. e.
$$W_{\mu_\frac{j}{n},in}(x)\colon=
\int_{B(z_0,r)} {\rm d} \mu_\frac{-s}{n}(z)
\int_{{\bf T}^p}W(x-y) {\rm d} \mu^\psi_{\frac{j}{n},z}(y) , \eqno (4.17)_a$$
$$W_{\mu_\frac{j}{n},ext}(x)\colon=
\int_{B(z_0,r)^c} {\rm d} \mu_\frac{-s}{n}(z)
\int_{{\bf T}^p}W(x-y) {\rm d} \mu^\psi_{\frac{j}{n},z}(y) . \eqno (4.17)_b$$
Now our particles interact among themselves only through the potential $W$. Note that $W$ appears in
$I_\frac{1}{2}(\delta_z,\psi^\frac{1}{n}_{\frac{-s}{n},z},\dots,
\psi^\frac{1}{n}_{\frac{-1}{n},z})$ in terms of the form
$$\int_{{\bf T}^p}W^{\frac{1}{2}\mu^\psi_\frac{j}{n}}(x) {\rm d} \mu_{\frac{j}{n},z}(x) .
$$
Integrating in $\mu_{\frac{-s}{n}}$, we get that $W$ appears in
$J_\frac{1}{2}[\tilde U^r_\psi,\quad\mu_\frac{-s}{n},\quad
(\psi^\frac{1}{n}_{\frac{-s}{n},z},\dots,\psi^\frac{1}{n}_{\frac{-1}{n},z})]$ in terms which have the form of the left hand side of the equation below; the first equality is the definition of
$W^{\frac{1}{2}\mu^\psi_\frac{j}{n}}$, while the second one is the definition of $\mu^\psi_\frac{j}{n}$.
$$\int_{{\bf T}^p} {\rm d} \mu_\frac{-s}{n}(z)
\int_{{\bf T}^p}W^{\frac{1}{2}\mu^\psi_\frac{j}{n}}(x) {\rm d} \mu^\psi_{\frac{j}{n},z}(x)=
\frac{1}{2}\int_{{\bf T}^p} {\rm d} \mu_\frac{-s}{n}(z)
\int_{{\bf T}^p} {\rm d} \mu^\psi_{\frac{j}{n},z}(x)
\int_{{\bf T}^p}W(x-y) {\rm d} \mu^\psi_{\frac{j}{n}}(y)=$$
$$\frac{1}{2}\int_{{\bf T}^p} {\rm d} \mu_\frac{-s}{n}(z)
\int_{{\bf T}^p} {\rm d} \mu_\frac{-s}{n}(w)
\int_{{\bf T}^p\times{\bf T}^p}W(x-y) {\rm d} \mu_{\frac{j}{n},z}^\psi(x)
{\rm d} \mu_{\frac{j}{n},w}^\psi(y) . $$
The term on the right in the formula above is the sum of the three terms below: the first one is the interaction of $B(z_0,r)^c$ with itself, the last one is the interaction of $B(z_0,r)$ with itself, while the middle one is the interaction of $B(z_0,r)$ with $B(z_0,r)^c$; note that the middle term carries no factor $\frac{1}{2}$: by the symmetry of the potential, the two mixed terms coincide and have been summed into one.
$$\int_{{\bf T}^p} {\rm d} \mu_\frac{-s}{n}(z)
\int_{{\bf T}^p}
W^{\frac{1}{2}\mu^\psi_\frac{j}{n}}(x) {\rm d} \mu_{\frac{j}{n},z}^\psi(x)=$$
$$\frac{1}{2}\int_{B(z_0,r)^c} {\rm d} \mu_\frac{-s}{n}(z)
\int_{B(z_0,r)^c} {\rm d} \mu_\frac{-s}{n}(w)
\int_{{\bf T}^p\times{\bf T}^p}W(x-y) {\rm d} \mu_{\frac{j}{n},z}^\psi(x)
{\rm d} \mu_{\frac{j}{n},w}^\psi(y) + $$
$$\int_{B(z_0,r)} {\rm d} \mu_\frac{-s}{n}(z)
\int_{B(z_0,r)^c} {\rm d} \mu_\frac{-s}{n}(w)
\int_{{\bf T}^p\times{\bf T}^p}W(x-y) {\rm d} \mu_{\frac{j}{n},z}^\psi(x)
{\rm d} \mu_{\frac{j}{n},w}^\psi(y) +$$
$$\frac{1}{2}\int_{B(z_0,r)} {\rm d} \mu_\frac{-s}{n}(z)
\int_{B(z_0,r)} {\rm d} \mu_\frac{-s}{n}(w)
\int_{{\bf T}^p\times{\bf T}^p}W(x-y) {\rm d} \mu_{\frac{j}{n},z}^\psi(x)
{\rm d} \mu_{\frac{j}{n},w}^\psi(y) . $$
Using this and $(4.17)_{a-b}$ above, we can write
$$\int_{{\bf T}^p} {\rm d} \mu_\frac{-s}{n}(z)
\int_{{\bf T}^p}W^{\frac{1}{2}\mu^\psi_\frac{j}{n}}(x)
{\rm d} \mu_{\frac{j}{n},z}^\psi(x)=
\frac{1}{2}\int_{B(z_0,r)^c} {\rm d} \mu_\frac{-s}{n}(z)
\int_{{\bf T}^p}
W_{\mu_{\frac{j}{n},ext}}(x) {\rm d} \mu^\psi_{\frac{j}{n},z}(x)+$$
$$\int_{B(z_0,r)} {\rm d} \mu_\frac{-s}{n}(z)
\int_{{\bf T}^p}
W_{\mu_{\frac{j}{n},ext}}(x) {\rm d} \mu^\psi_{\frac{j}{n},z}(x)+
\frac{1}{2}\int_{B(z_0,r)} {\rm d} \mu_\frac{-s}{n}(z)
\int_{{\bf T}^p}
W_{\mu_{\frac{j}{n},int}}(x) {\rm d} \mu^\psi_{\frac{j}{n},z}(x) . $$
The first term above crops up in $(4.18)_a$ below, the second one crops up in $(4.18)_b$ and the third one in $(4.18)_c$; we have used (4.16) to get $(4.18)_b$.
$$J_\frac{1}{2}[\tilde U,\quad\mu_\frac{-s}{n},\quad(\psi^\frac{1}{n}_{\frac{-s}{n},z},\psi^\frac{1}{n}_{\frac{-s+1}{n},z},\dots, \psi^\frac{1}{n}_{\frac{-1}{n},z})]= $$
$$\int_{B(z_0,r)^c} {\rm d} \mu_\frac{-s}{n}(z)
\sum_{j=-s}^{-1}
\int_{{\bf T}^p\times{\bf R}^p}\left[
A_\frac{1}{n}(\psi_{\frac{j}{n},z},(x,v))-
\frac{1}{n}V(\frac{j}{n},x)-\frac{1}{2n} W_{\mu_\frac{j}{n},ext}(x)
\right]
{\rm d} \mu_{\frac{j}{n},z}(x) {\rm d} v +\eqno (4.18)_a$$
$$\mu_\frac{-s}{n}(B(z_0,r))\Bigg[
\int_{B(z_0,r)}
{\rm d} \mu_{\frac{-s}{n},in}(z)
\sum_{j=-s}^{-1}
\int_{{\bf T}^p\times{\bf R}^p}
[A_\frac{1}{n}(\psi_{\frac{j}{n},z},(x,v))-\frac{1}{n}V(\frac{j}{n},x)-
\frac{1}{n}W_{\mu_\frac{j}{n},ext}(x)] {\rm d} \mu_{\frac{j}{n},z}(x) {\rm d} v+$$
$$\tilde U^r_\psi\left(
\mu_{0,int}^\psi
\right)
\Bigg] - \eqno (4.18)_b$$
$$\int_{B(z_0,r)} {\rm d} \mu_\frac{-s}{n}(z)
\sum_{j=-s}^{-1}\int_{{\bf T}^p}
\frac{1}{2n} W_{\mu_\frac{j}{n},int}(x) {\rm d} \mu_{\frac{j}{n},z}(x) .
\eqno (4.18)_c$$
We shall call $\hat J$ the term in the square parentheses in $(4.18)_b$; it is almost equal to the functional $J_1$, the only difference being that the potential is
$V(\frac{j}{n},x)+W_{\mu^\psi_{\frac{j}{n},ext}}$ instead of
$V(\frac{j}{n},x)+W^{\mu^\psi_{\frac{j}{n}}}$. Note that we have lost the constant $\frac{1}{2}$ before the potential $W$.
Now $(4.18)_a$ is not affected by
$(\gamma^\frac{1}{n}_{\frac{-s}{n},z},\dots,
\gamma^\frac{1}{n}_{\frac{-1}{n},z})$ when $z\in B(z_0,r)$; this prompts us to restrict, as in lemma 4.2, to functions
$(\psi^\frac{1}{n}_{\frac{-s}{n},z},\dots, \psi^\frac{1}{n}_{\frac{-1}{n},z})$ which coincide with
$(\gamma^\frac{1}{n}_{\frac{-s}{n},z},\dots, \gamma^\frac{1}{n}_{\frac{-1}{n},z})$ for $z\not\in B(z_0,r)$; since
$(\gamma^\frac{1}{n}_{\frac{-s}{n},z},\dots, \gamma^\frac{1}{n}_{\frac{-1}{n},z})$ is minimal, we see that
$(\gamma^\frac{1}{n}_{\frac{-s}{n},z},\dots, \gamma^\frac{1}{n}_{\frac{-1}{n},z})|_{z\in B(z_0,r)}$ must minimize $(4.18)_{b-c}$. Note that, by our choice of
$(\psi^\frac{1}{n}_{\frac{-s}{n},z},\dots, \psi^\frac{1}{n}_{\frac{-1}{n},z})$, $\tilde U^r_\psi=\tilde U^r_\gamma$; in other words,
$(\gamma^\frac{1}{n}_{\frac{-s}{n},z},\dots, \gamma^\frac{1}{n}_{\frac{-1}{n},z})|_{z\in B(z_0,r)}$ minimizes
$$\mu_\frac{-s}{n}(B(z_0,r))\cdot
\hat J[\tilde U^r_\gamma,\quad\mu_{in},\quad(\gamma^\frac{1}{n}_{\frac{-s}{n},z},\dots,
\gamma^\frac{1}{n}_{\frac{-1}{n},z})] - \eqno (4.19)_a$$
$$\int_{B(z_0,r)} {\rm d} \mu_\frac{-s}{n}(z)
\sum_{j=-s}^{-1}\int_{{\bf T}^p}
\frac{1}{2n} W_{\mu_\frac{j}{n},int}(x) {\rm d} \mu_{\frac{j}{n},z}(x) .
\eqno (4.19)_b$$
\noindent {\bf Step 3.} We want to use the fact that
$(\gamma^\frac{1}{n}_{\frac{-s}{n},z},\dots, \gamma^\frac{1}{n}_{\frac{-1}{n},z})|_{z\in B(z_0,r)}$ minimizes $(4.19)_{a-b}$ to get (4.15). First of all, we fix $z_0$, a Lebesgue point of
$$\fun{}{z}{(\gamma^\frac{1}{n}_{\frac{-s}{n},z},\gamma^\frac{1}{n}_{\frac{-s+1}{n},z},\dots, \gamma^\frac{1}{n}_{\frac{-1}{n},z})} \eqno (4.20)$$
for the measure $\mu_\frac{-s}{n}$.
Since we are supposing that $\{ z_0 \}$ is not an atom of
$\mu_\frac{-s}{n}$, by $(4.17)_a$ we have that
$$(4.19)_b=o(\mu_\frac{-s}{n}(B(z_0,r))) . \eqno (4.21)$$
Since this term is negligible with respect to $(4.19)_a$, with a proof similar to that of lemma 4.3 we get that
$$\limsup_{r\rightarrow 0}
J_1[\tilde U^r_\gamma,\quad\mu_{in},\quad(\gamma^\frac{1}{n}_{\frac{-s}{n},z},\dots,
\gamma^\frac{1}{n}_{\frac{-1}{n},z})]\le
\min_{\psi_{\frac{-s}{n},z_0},\dots,\psi_{\frac{-1}{n},z_0}}
J_1[\tilde U_{f_0},\quad\delta_{z_0},\quad(\psi_{\frac{-s}{n},z_0},\dots,
\psi_{\frac{-1}{n},z_0})] . $$
Note that here we are dealing with $J_1$: the coefficient $\frac{1}{2}$ in
$W^{\frac{1}{2}\mu^\psi_\frac{j}{n}}$ was shed already in (4.18).
Moreover, we can see as in formula (4.6) of lemma 4.3 that
$$\liminf_{r\rightarrow 0}
J_1[\tilde U^r_\gamma,\quad\mu_{in},\quad(\gamma^\frac{1}{n}_{\frac{-s}{n},z},\dots,
\gamma^\frac{1}{n}_{\frac{-1}{n},z})]\ge
J_1[\tilde U_{f_0},\quad\delta_{z_0},\quad(\gamma_{\frac{-s}{n},z_0},\dots,
\gamma_{\frac{-1}{n},z_0})] . $$
As in the proof of lemma 4.2, the last two formulas imply that
$(\gamma_{\frac{-s}{n},z_0},\dots,\gamma_{\frac{-1}{n},z_0})$ minimizes the term on the right in the formula above; now [14] prescribes that
$(\gamma_{\frac{-s}{n},z_0},\dots,\gamma_{\frac{-1}{n},z_0})$ satisfies (4.15).
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\prop{4.6} Let $U$ be $L$-Lipschitz and differentiable on densities, let $s\in(1,\dots,mn)$ and let $\{ \mu_\frac{j}{n} \}_j$ be a minimal
$\{ \gamma_\frac{j}{n} \}_j$-sequence starting at $\mu_{-\frac{s}{n}}$. Let $f_0$ be the derivative of $U$ at $\mu_0$. For
$j\in(-s+1,\dots,0)$ we define by backward induction
$$f_\frac{j-1}{n}(x)=-\log\int_{{\bf R}^p}
e^{-\cinn{v}+
\frac{1}{n}V(\frac{j}{n},x)+\frac{1}{n}W^{\mu_\frac{j}{n}}(x)-
f_\frac{j}{n}(x-v)
} {\rm d} v . $$
Then, we have that
$$\gamma_\frac{j}{n}(x,v)=
e^{
-\cinn{v}-f_\frac{j}{n}(x-v)+a_\frac{j}{n}(x)
} \eqno (4.22)$$
with $a_\frac{j}{n}(x)$ chosen in such a way that
$\gamma_\frac{j}{n}(x,\cdot)$ is a probability density for all $x\in{\bf T}^p$.
\rm\vskip 1 pc\noindent{\bf Proof.\quad} We shall prove the assertion when $\mu_\frac{-s}{n}$ has no atoms; the argument for the atoms of $\mu_\frac{-s}{n}$ is identical to the one in the proof of proposition 4.1, and we skip it.
For the functional $I$ we defined before lemma 3.1, let us set
$$\hat I_\frac{1}{2}[U,\quad\mu_\frac{-s}{n},\quad(\psi_\frac{-s}{n},\dots,\psi_\frac{-1}{n})]=
I(\mu_\frac{-s}{n},\psi_\frac{-s}{n},\dots,\psi_\frac{-1}{n})
+U(\mu_0) . $$
By lemma 3.1 it suffices to show that, if
$(\gamma_\frac{-s}{n},\dots,\gamma_\frac{-1}{n})$ minimizes $\hat I_\frac{1}{2}$, then it satisfies (4.22). We prove this. If we compare $\hat I_\frac{1}{2}$ with the functional $J_\frac{1}{2}$ of the last lemma, we see two things:
\noindent 1) the minimum of $J_\frac{1}{2}(U,\mu_\frac{-s}{n},\cdot)$ is smaller than the minimum of $\hat I_\frac{1}{2}(U,\mu_\frac{-s}{n},\cdot)$, simply because the dependence on $z\in{\bf T}^p$ gives us a larger set of strategies.
\noindent 2) If $(\gamma^\frac{1}{n}_{\frac{-s}{n},z},\gamma^\frac{1}{n}_{\frac{-s+1}{n},z},\dots, \gamma^\frac{1}{n}_{\frac{-1}{n},z})$ minimizes
$J_\frac{1}{2}(U,\mu_\frac{-s}{n},\cdot)$ and does not depend on $z$, then
$$\hat I_\frac{1}{2}[U,\quad\mu_\frac{-s}{n},\quad(\gamma^\frac{1}{n}_{\frac{-s}{n},z},\gamma^\frac{1}{n}_{\frac{-s+1}{n},z},\dots, \gamma^\frac{1}{n}_{\frac{-1}{n},z})]=
J_\frac{1}{2}[U,\quad\mu_\frac{-s}{n},\quad(\gamma^\frac{1}{n}_{\frac{-s}{n},z},\gamma^\frac{1}{n}_{\frac{-s+1}{n},z},\dots, \gamma^\frac{1}{n}_{\frac{-1}{n},z})] . $$
A consequence is the following: suppose we can find a minimizer of $J_\frac{1}{2}(U,\mu_\frac{-s}{n},\cdot)$ which does not depend on $z$; then it is also a minimizer of $\hat I_\frac{1}{2}(U,\mu_\frac{-s}{n},\cdot)$; thus, the value of the minimum for the two functionals is the same and any minimizer of $\hat I_\frac{1}{2}(U,\mu_\frac{-s}{n},\cdot)$ is a minimizer of $J_\frac{1}{2}(U,\mu_\frac{-s}{n},\cdot)$ too. In other words, the proposition follows if we prove that any minimizer $(\gamma^\frac{1}{n}_{\frac{-s}{n},z},\gamma^\frac{1}{n}_{\frac{-s+1}{n},z},\dots, \gamma^\frac{1}{n}_{\frac{-1}{n},z})$ of $J_\frac{1}{2}(U,\mu_\frac{-s}{n},\cdot)$ has the form (4.15); but that is the content of lemma 4.5.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\vskip 2pc
\centerline{\bf \S 5}
\centerline{\bf Regularity of the linearized action}
\vskip 1pc
Thanks to proposition 4.6, we can express the minimals
$\gamma^\frac{1}{n}_\frac{j}{n}$ in terms of the functions
$f^\frac{1}{n}_\frac{j}{n}$; in this section, we shall suitably normalize these functions and show, in proposition 5.2 below, that they are regular; by Ascoli-Arzel\`a\ this will imply (lemma 5.3 below) that, up to subsequences, they converge to a function $u$. We shall use proposition 5.2 in the next section, when we prove that $u$ solves Hamilton-Jacobi and that the discretized characteristics converge.
\noindent{\bf Definitions.} $\bullet$ Let $Q$ be a symmetric, positive-definite matrix and let $\alpha\in{\bf R}^p$; we denote by $N(\alpha,Q)$ the Gaussian of mean $\alpha$ and variance $Q$, i. e.
$$N(\alpha,Q)(v)=
\frac{1}{\sqrt{(2\pi)^p{\rm det}Q}}
e^{
-\frac{1}{2}\inn{Q^{-1}(v-\alpha)}{v-\alpha}
} . $$
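Note that, for the covariance $\frac{1}{n}Id$ which we shall use throughout,
$$N\left( 0,\frac{1}{n}Id \right) (v)=
\left( \frac{n}{2\pi} \right)^\frac{p}{2}
e^{-\frac{n}{2}|v|^2} ; $$
comparing with (5.1) and (5.4) below, this is the normalization which turns the kinetic factor $e^{-\cinn{v}}$ into a probability density.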
\vskip 1pc
\noindent$\bullet$ Let $m\in{\bf N}$, $s\in(0,\dots,mn)$;
let $\{ \mu^{\frac{1}{n}}_\frac{j}{n} \}_j$ be a minimal
$\{ \gamma^{\frac{1}{n}}_\frac{j}{n} \}_j$-sequence starting at
$\mu_{\frac{-s}{n}}$ and let $f_0$ be the derivative of
$U$ at $\mu_0^\frac{1}{n}$. As in proposition 4.6, we define by backward induction
$$e^{
-f_\frac{j-1}{n}(x)
}
\colon =
\int_{{\bf R}^p}e^{
-\cinn{v}-f_\frac{j}{n}(x-v)+
\frac{1}{n}V(\frac{j}{n},x)+\frac{1}{n}W^{\mu_\frac{j}{n}}(x)
} {\rm d} v =$$
$$e^{
\frac{1}{n}P_\frac{j}{n}(x)
}
\int_{{\bf R}^p}e^{-\cinn{v}}
e^{
-f_\frac{j}{n}(x-v)
} {\rm d} v ,
\qquad j\in(-s+1,\dots,0) \eqno (5.1)$$
where we have set
$$P_\frac{j}{n}(x)=V(\frac{j}{n},x)+W^{\mu_\frac{j}{n}}(x) .
\eqno (5.2)$$
Once more by proposition 4.6, we have for the minimal
$\gamma^\frac{1}{n}_\frac{j}{n}$ the expression
$$\gamma^{\frac{1}{n}}_\frac{j}{n}(x,v)=
e^{
-\cinn{v}-f_\frac{j}{n}(x-v)+a_\frac{j}{n}(x)
} \eqno (5.3)$$
where $a_\frac{j}{n}(x)$ is such that
$\gamma^{\frac{1}{n}}_\frac{j}{n}(x,\cdot)$ is a probability density for all $x$.
\noindent $\bullet$ We normalize the functions $f_\frac{j}{n}$, setting
$$\bar f_\frac{j}{n}(x)\colon=f_\frac{j}{n}(x)-
|j|\log\left(\frac{n}{2\pi}\right)^\frac{p}{2} $$
and we see that (5.1) becomes
$$e^{
-\bar f_\frac{j-1}{n}(x)
}
=
e^{
\frac{1}{n}P_\frac{j}{n}(x)
}
\int_{{\bf R}^p}N(0,\frac{1}{n}Id)(v)e^{
-\bar f_\frac{j}{n}(x-v)
} {\rm d} v \eqno (5.4) $$
or, equivalently,
$$\bar f_\frac{j-1}{n}(x)=
-\frac{1}{n}P_\frac{j}{n}(x)-
\log\left[
\int_{{\bf R}^p}
N(0,\frac{1}{n}Id)(v)e^{-\bar f_\frac{j}{n}(x-v)} {\rm d} v
\right] . \eqno (5.5)$$
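The bookkeeping behind (5.4) is elementary: since $j\le 0$ we have $|j-1|=|j|+1$, so that
$$e^{-f_\frac{j}{n}(x-v)}=
\left( \frac{n}{2\pi} \right)^{-\frac{|j|p}{2}}
e^{-\bar f_\frac{j}{n}(x-v)}
\txt{and}
e^{-f_\frac{j-1}{n}(x)}=
\left( \frac{n}{2\pi} \right)^{-\frac{(|j|+1)p}{2}}
e^{-\bar f_\frac{j-1}{n}(x)} ; $$
the factor $\left( \frac{n}{2\pi} \right)^\frac{p}{2}$ left over in (5.1) combines with $e^{-\cinn{v}}$ into the Gaussian $N(0,\frac{1}{n}Id)(v)$.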
\noindent $\bullet$ We set
$$b_\frac{j}{n}(x)=-|j-1|\log\left(
\frac{n}{2\pi}
\right)^\frac{p}{2} +a_\frac{j}{n}(x)$$
and (5.3) becomes
$$\gamma^\frac{1}{n}_\frac{j}{n}(x,v)=\left(
\frac{n}{2\pi}
\right)^\frac{p}{2}
e^{
-\cinn{v}-\bar f_\frac{j}{n}(x-v)-
|j-1|\log\left(\frac{n}{2\pi}\right)^\frac{p}{2}+
a_\frac{j}{n}(x)
} =N(0,\frac{1}{n}Id)(v)e^{
-\bar f_\frac{j}{n}(x-v)+b_\frac{j}{n}(x)
} . \eqno (5.6)$$
In the following, we shall drop the bar from $\bar f_\frac{j}{n}$ and call it simply $f_\frac{j}{n}$.
\noindent $\bullet$ We shall say that
$\{ f_\frac{j}{n}^\frac{1}{n} \}_{j=-s}^0$ is the linearized cost for the minimal characteristic starting at
$\left( \frac{-s}{n},\mu \right)$.
\noindent $\bullet$ We gather here two other bits of notation: if $P_\frac{j}{n}$ is as in (5.2), we set
$${\cal P}(x_\frac{j}{n},x_\frac{j+1}{n},\dots,x_\frac{-1}{n})=
{\rm exp}\left\{
\frac{1}{n}[P_\frac{j+1}{n}(x_\frac{j}{n})+
P_\frac{j+2}{n}(x_\frac{j+1}{n})+\dots+
P_0(x_\frac{-1}{n}) ]
\right\} . \eqno (5.7)$$
\noindent $\bullet$ We also give a name to the linear path which at time $t=\frac{j}{n}<0$ is in $0$ and at time $t=0$ is in $y$:
$$a_{y}(t)=
\frac{n}{|j|}\left( t-\frac{j}{n} \right) y . \eqno (5.8)$$
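Note that, since $j<0$,
$$a_{y}\left( \frac{j}{n} \right)=0
\txt{and}
a_{y}(0)=\frac{n}{|j|}\cdot\left( -\frac{j}{n} \right) y=y , $$
so that $a_y$ indeed joins $0$ to $y$ on the time interval $[\frac{j}{n},0]$.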
\vskip 1pc
In the next lemma we shall see that (5.4) is simply a version of the Feynman-Kac formula. This is by no means surprising: indeed, the Hopf-Cole transform $\fun{}{f}{e^{-f}}$ brings Hamilton-Jacobi into the Schr\"odinger equation, for which Feynman-Kac provides a solution. We refer the reader to [14] for a discussion of this.
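Concretely, setting $\phi_\frac{j}{n}=e^{-f_\frac{j}{n}}$, formula (5.4) reads
$$\phi_\frac{j-1}{n}(x)=
e^{
\frac{1}{n}P_\frac{j}{n}(x)
}
\int_{{\bf R}^p}N\left( 0,\frac{1}{n}Id \right) (v)
\phi_\frac{j}{n}(x-v) {\rm d} v ; $$
each backward step is thus a convolution with a Gaussian kernel followed by multiplication by $e^{\frac{1}{n}P_\frac{j}{n}}$, and iterating these two operations over $j$ is what produces the discrete Feynman-Kac formula (5.9) below.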
\lem{5.1} Let $U$ be Lipschitz and differentiable on densities.
Let $\{ \mu^\frac{1}{n}_\frac{j}{n} \}_j$ be a minimal
$\{ \gamma^\frac{1}{n}_\frac{j}{n} \}_j$-sequence starting at
$\left( \frac{-s}{n},\mu \right)$; we saw above that
$\gamma^\frac{1}{n}_\frac{j}{n}$ has the form (5.6) for a function
$f_\frac{j}{n}$ defined by (5.4).
Let $E_{0,0}$ denote the expectation of the Brownian bridge
$\tilde w$ which is in $0$ at $t=\frac{j}{n}$ and at $t=0$ (see [11] for a definition). Let ${\cal P}$ and $a_y$ be as in (5.7) and (5.8) respectively.
Then,
$$e^{-f_\frac{j}{n}(x)}=$$
$$\int_{{\bf R}^p}N\left(
0,\frac{|j|}{n}Id
\right) (x-z) e^{-f_0(z)}
E_{0,0}\left[
{\cal P}(
x-a_{x-z}(\frac{j}{n})-\tilde w(\frac{j}{n}) , \dots,
x-a_{x-z}(-\frac{1}{n})-\tilde w(-\frac{1}{n})
)
\right]
{\rm d} z . \eqno (5.9)$$
\rm\vskip 1 pc\noindent{\bf Proof.\quad} Let $j\in(-s,-s+1,\dots,-1)$; for
$v_\frac{j+1}{n},\dots,v_0\in{\bf R}^p$, we set
$$\tilde v_\frac{j+1}{n}=v_\frac{j+1}{n}
\txt{and, if}\frac{l}{n}>\frac{j+1}{n},\quad
\tilde v_\frac{l}{n}=v_\frac{j+1}{n}+
\dots+v_\frac{l}{n} . $$
Heuristically, our particle will be in $x$ at time $\frac{j}{n}$, in
$x+\tilde v_\frac{j+1}{n}$ at time $\frac{j+1}{n}$, ending up in
$x+\tilde v_0$ at time $0$; the increment at step $\frac{l}{n}$ is
$v_\frac{l}{n}$.
Given $f_0$, which is the derivative of $U$ at $\mu_0$, we can use (5.4) to get $f_\frac{-1}{n}$ and then, iterating backwards,
$f_\frac{-2}{n}$, $f_\frac{-3}{n}$, etc...; in this way, we get the first equality below, while the second one comes from the fact that the map
$\fun{}{(v_\frac{j+1}{n},\dots,v_0)}{(\tilde v_\frac{j+1}{n},\dots,\tilde v_0)}$ has determinant one.
$$e^{-f_\frac{j}{n}(x)}=$$
$$e^{
\frac{1}{n}P_\frac{j+1}{n}(x)
}
\int_{{\bf R}^p}N(0,\frac{1}{n}Id)(v_\frac{j+1}{n})
e^{\frac{1}{n}
P_\frac{j+2}{n}(x-\tilde v_\frac{j+1}{n})} {\rm d} v_\frac{j+1}{n}
\int_{{\bf R}^p}N(0,\frac{1}{n}Id)(v_\frac{j+2}{n})
e^{
\frac{1}{n}P_\frac{j+3}{n}(x-\tilde v_\frac{j+2}{n})
} {\rm d} v_\frac{j+2}{n}
\dots$$
$$\dots\int_{{\bf R}^p}N(0,\frac{1}{n}Id)(v_\frac{-1}{n})
e^{\frac{1}{n}P_0(x-\tilde v_\frac{-1}{n})}
{\rm d} v_\frac{-1}{n}
\int_{{\bf R}^p}N(0,\frac{1}{n}Id)(v_0)
e^{-f_0(x-\tilde v_0)}
{\rm d} v_0 =$$
$$e^{
\frac{1}{n}P_\frac{j+1}{n}(x)
}
\int_{{\bf R}^p}N(0,\frac{1}{n}Id)(\tilde v_\frac{j+1}{n})
e^{\frac{1}{n}
P_\frac{j+2}{n}(x-\tilde v_\frac{j+1}{n})} {\rm d} \tilde v_\frac{j+1}{n}
\cdot$$
$$\int_{{\bf R}^p}N(0,\frac{1}{n}Id)(\tilde v_\frac{j+2}{n}-
\tilde v_\frac{j+1}{n})
e^{
\frac{1}{n}P_\frac{j+3}{n}(x-\tilde v_\frac{j+2}{n})
} {\rm d} \tilde v_\frac{j+2}{n}
\dots$$
$$\dots\int_{{\bf R}^p}N(0,\frac{1}{n}Id)(\tilde v_\frac{-1}{n}-
\tilde v_\frac{-2}{n})
e^{\frac{1}{n}P_0(x-\tilde v_\frac{-1}{n})}
{\rm d} \tilde v_\frac{-1}{n}
\int_{{\bf R}^p}N(0,\frac{1}{n}Id)(\tilde v_0-\tilde v_\frac{-1}{n})
e^{-f_0(x-\tilde v_0)}
{\rm d} \tilde v_0 \eqno (5.10)$$
This equality looks complicated only because we have written in full the Wiener measure on cylinders; indeed, let $w$ be the Brownian motion with $w(\frac{j}{n})=0$ and let us denote by $E_w$ the expectation with respect to the Wiener measure; by the definition of the latter, (5.10) becomes
$$e^{-f_\frac{j}{n}(x)}=
E_w\left[
{\cal P}\left(
x-w\left(\frac{j}{n}\right) ,x-w\left(\frac{j+1}{n}\right),\dots,
x-w(-\frac{1}{n})
\right)
e^{-f_0(x-w(0))}
\right] $$
where ${\cal P}$ has been defined in (5.7).
We denote by $E_{0,y}$ the expectation of the Brownian bridge which is in $0$ at $t=\frac{j}{n}$ and in $y$ at $t=0$. By the properties of the Brownian bridge (see for instance [11]), the formula above becomes
$$e^{-f_\frac{j}{n}(x)}=
\int_{{\bf R}^p}N\left( 0,\frac{|j|}{n}Id \right) (y)
e^{-f_0(x-y)}
E_{0,y}\left[
{\cal P}\left(
x-w\left(\frac{j}{n}\right) ,x-w\left(\frac{j+1}{n}\right),\dots,
x-w\left( -\frac{1}{n}\right)
\right)
\right] {\rm d} y . \eqno (5.11) $$
If $a_y$ is as in (5.8) and $\tilde w$ is a Brownian bridge which is in $0$ at $t=\frac{j}{n}$ and at $t=0$, then we have that
$$w(t)=a_{y}(t)+\tilde w(t)$$
is a Brownian bridge which is in $0$ at $t=\frac{j}{n}$ and in $y$ at $t=0$. Thus, (5.11) becomes
$$e^{-f_\frac{j}{n}(x)}=$$
$$\int_{{\bf R}^p}N(0,\frac{|j|}{n}Id)(y)
e^{-f_0(x-y)}
E_{0,0}\left[
{\cal P}\left(
x-a_{y}\left(\frac{j}{n}\right)-\tilde w\left(\frac{j}{n}\right) , \dots,
x-a_{y}\left(-\frac{1}{n}\right)-\tilde w\left(-\frac{1}{n}\right)
\right)
\right]
{\rm d} y . $$
By the change of variables $z=x-y$ we get (5.9).
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
We fix $m\in{\bf N}$, which basically will be the time of formula (1) of theorem 1; in the following proofs, $D_i$ will always denote an increasing function from $[-m,0)$ to $(0,+\infty)$, independent of $n$ and of the starting point $\left( \frac{-s}{n},\mu \right)$ of the minimal characteristic, provided that $\frac{-s}{n}\in[-m,0)$.
\prop{5.2} There is an increasing function
$\fun{D_1}{[-m,0)}{(0,+\infty)}$ such that the following happens. If $(\frac{-s}{n},\mu)\in[-m,0)\times{\cal M}_1(\T^p)$, if
$\{ \mu^\frac{1}{n}_\frac{j}{n} \}_j$ is a minimal
$\{ \gamma^\frac{1}{n}_\frac{j}{n} \}_j$-sequence starting at
$(\frac{-s}{n},\mu)$, if $f_\frac{j}{n}$ is as in (5.4), then we have that
$$||f_\frac{j}{n}||_{C^4({\bf T}^p)}\le D_1(\frac{j}{n})
\txt{for} -s\le j\le -1 . \eqno (5.12) $$
\rm\vskip 1 pc\noindent{\bf Proof.\quad} If we set
$$g_\frac{j}{n}(x,z,\tilde w) =$$
$${\rm exp}\left\{
\frac{1}{n}\left[
P_\frac{j+1}{n}\left( x-a_{x-z}\left(\frac{j}{n}\right)-
\tilde w\left(\frac{j}{n}\right) \right)
+\dots+
P_0\left( x-a_{x-z}\left(\frac{-1}{n}\right)-
\tilde w\left(\frac{-1}{n}\right) \right)
\right]
\right\}$$
then (5.9) becomes
$$e^{-f_\frac{j}{n}(x)}=
\int_{{\bf R}^p}N\left( 0,\frac{|j|}{n}Id \right)(x-z)
e^{-f_0(z)}E_{0,0}(g_\frac{j}{n}(x,z,\tilde w)) {\rm d} z . $$
If we differentiate under the integral sign in the last formula, we see that
$\partial^l_x e^{-f_\frac{j}{n}(x)}$ is the sum of terms of the form
$$a_k(x)\colon=\int_{{\bf R}^p}
\partial_x^{l-k}N\left( 0,\frac{|j|}{n}Id \right)
(x-z)e^{-f_0(z)}
\cdot
E_{0,0}\left[
\partial_x^kg_\frac{j}{n}(x,z,\tilde w)
\right] {\rm d} z \eqno (5.13)$$
with $0\le k\le l$. We are going to estimate each of the terms in the integral above.
From (5.8) and (1.13) we see that, for $j\le l\le 0$,
$$\left\vert
\partial^r_xP_\frac{l}{n}\left( x-a_{x-z}\left(\frac{l}{n}\right)-
\tilde w\left(\frac{l}{n}\right) \right)
\right\vert \le C ,
\qquad 0\le r\le 4 $$
for a constant $C$ independent of $\tilde w$, $x$ and $z$. If we sum up in the definition of $g_\frac{j}{n}$, we get that there is
$D_2>0$ such that
$$|\partial_x^rg_\frac{j}{n}(x,z,\tilde w)|\le
D_2
\qquad\forall x\in{\bf T}^p, \qquad 0\le r\le 4 . \eqno (5.14)$$
On the other hand, a simple calculation on the Gaussian shows that there is an increasing function
$D_3$ on $[-m,0)$ such that
$$\int_{{\bf R}^p}
\left\vert
\partial^r_x N(0,\frac{|j|}{n}Id)(x-z)
\right\vert {\rm d} z \le D_3\left(
\frac{j}{n}
\right) \txt{for} 0\le r\le 4. \eqno (5.15)$$
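To see why $D_3$ must be increasing on $[-m,0)$ (the bound degenerates as $\frac{j}{n}\rightarrow 0^-$), one can compute, for instance in dimension one and for $r=1$,
$$\int_{{\bf R}}
\left\vert
\partial_x N(0,\sigma^2)(x-z)
\right\vert {\rm d} z=
\sqrt{\frac{2}{\pi}}\cdot\frac{1}{\sigma}
\txt{with} \sigma^2=\frac{|j|}{n} , $$
which is of order $\left\vert \frac{j}{n} \right\vert^{-\frac{1}{2}}$.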
By point $ii$) of the definition of differentiability on densities, we have that
$$||e^{-f_0}||_{L^\infty({\bf T}^p)}\le e^M . \eqno (5.16)$$
The first inequality below follows from (5.13) and H\"older, the second one from (5.14), (5.15) and (5.16).
$$||a_k||_\infty\le
||\partial^{l-k}_x N\left( 0,\frac{|j|}{n}Id \right) (\cdot)||_{L^1({\bf R}^p)}
\cdot ||e^{-f_0}||_{L^\infty({\bf R}^p)}\cdot
||\partial^k_x g_\frac{j}{n}(\cdot,z,\tilde w)||_{L^\infty({\bf R}^p)}\le
D_4\left( \frac{j}{n} \right) . $$
Summing over $k\in(0,\dots,4)$, the assertion follows.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\noindent{\bf Definition.} Let $f^\frac{1}{n}_\frac{j}{n}$ be the linearized cost for a discrete minimal characteristic starting at
$\left( \frac{-s}{n},\mu \right)$. For $t\ge\frac{-s}{n}$, we define the function $f^\frac{1}{n}(t,x)$ by
$$f^\frac{1}{n}(t,x)=f^\frac{1}{n}_\frac{j}{n}(x)
\txt{if} t\in[\frac{j}{n},\frac{j+1}{n}) . $$
\lem{5.3} Let $\mu\in{\cal M}_1(\T^p)$, let $T\in[-m,0]$, let $s$ be the largest integer such that $T\le\frac{-s}{n}$ and let
$\{ f^\frac{1}{n}_\frac{j}{n} \}_{j\ge s}$ be the linearized cost for a discrete minimal characteristic starting at
$\left( \frac{-s}{n},\mu \right)$. Then there is
$u\in Lip_{loc}([-m,0),C^2({\bf T}^p))$ such that, up to subsequences, for all $\epsilon\in(0,\frac{s}{n})$ we have that
$$\sup_{(t,x)\in[\frac{-s}{n},-\epsilon]\times{\bf T}^p}
|\partial^l_xf^\frac{1}{n}(t,x)-\partial^l_xu(t,x)| \rightarrow 0
\txt{as}n\rightarrow+\infty\txt{for}l=0,1,2. $$
\rm\vskip 1 pc\noindent{\bf Proof.\quad} Let us consider the maps
$$\fun{
F^\frac{1}{n}
}{
\left( \frac{-s}{n},\frac{-s+1}{n},\dots,\frac{-[n\epsilon]}{n} \right)
}{
C^2({\bf T}^p)
} , \qquad
\fun{F^\frac{1}{n}}{\frac{j}{n}}{f^\frac{1}{n}(\frac{j}{n},\cdot)} $$
where $[\cdot]$ denotes the integer part. By Ascoli-Arzel\`a\ the lemma follows if we prove that
\noindent 1) $F^\frac{1}{n}$ takes values in the same compact subset of $C^2({\bf T}^p)$ for all $n$ and
\noindent 2) the functions $F^\frac{1}{n}$ are equilipschitz.
Point 1) follows by (5.12); as for point 2), we shall show that there is an increasing function $\fun{D_2}{[-m,0)}{(0,+\infty)}$ such that
$$||
\partial^l_xf^\frac{1}{n}_\frac{j}{n}-
\partial^l_xf^\frac{1}{n}_\frac{j+1}{n}
||_{C^0({\bf T}^p)} \le\frac{1}{n} D_2\left(\frac{j+1}{n}\right)
\txt{for} l=0,1,2 . $$
We begin by showing the estimate above when $l=0$.
By (5.5) we have that
$$f_\frac{j-1}{n}(x)-f_\frac{j}{n}(x)=$$
$$\frac{-1}{n}P_\frac{j}{n}(x)-
\log\left[
\int_{{\bf R}^p}N\left( 0,\frac{1}{n}Id \right) (v)
e^{-f_\frac{j}{n}(x-v)} {\rm d} v
\right] -f_\frac{j}{n}(x) . $$
Thus, by (1.13), it suffices to show that
$$\left\vert
\log\left[
\int_{{\bf R}^p}N\left( 0,\frac{1}{n}Id \right) (v)
e^{-f_\frac{j}{n}(x-v)} {\rm d} v
\right] +f_\frac{j}{n}(x)
\right\vert \le\frac{1}{n}D_3\left( \frac{j}{n} \right)\quad
\forall x\in{\bf T}^p . $$
We can take exponentials and get that the formula above is equivalent to
$${\rm exp}\left\{
\frac{-1}{n}D_3\left( \frac{j}{n} \right)
\right\} \le
\int_{{\bf R}^p}N\left( 0,\frac{1}{n}Id \right) (v)
e^{-f_\frac{j}{n}(x-v)+f_\frac{j}{n}(x)} {\rm d} v
\le{\rm exp}\left\{
\frac{1}{n}D_3\left( \frac{j}{n} \right)
\right\} \quad
\forall x\in{\bf T}^p . \eqno (5.17)$$
We shall prove the estimate from above, since the one from below is analogous. Let us consider
$[-f_\frac{j}{n}(x-v)+f_\frac{j}{n}(x)]$ when
$v\in B(0,n^\frac{-1}{3})$; by (5.12) we can develop this function in Taylor series and get that
$$\int_{{\bf R}^p}N\left( 0,\frac{1}{n}Id \right) (v)
e^{-f_\frac{j}{n}(x-v)+f_\frac{j}{n}(x)} {\rm d} v \le$$
$$\int_{B(0,n^\frac{-1}{3})}
N\left( 0,\frac{1}{n}Id \right) (v)
e^{f^\prime_\frac{j}{n}(x)\cdot v-\frac{1}{2} f^{{}^\prime{}^\prime}_\frac{j}{n}(x)(v,v)+
r(x,v)} {\rm d} v+$$
$$\int_{B(0,n^\frac{-1}{3})^c}
N\left( 0,\frac{1}{n}Id \right) (v)
e^{-f_\frac{j}{n}(x-v)+f_\frac{j}{n}(x)}
{\rm d} v \eqno (5.18)$$
where
$$|r(x,v)|\le D_5\left( \frac{j}{n} \right) |v|^3 . $$
As for the first exponential in (5.18), we develop it in Taylor series, getting
$$\int_{B(0,n^\frac{-1}{3})}
N\left( 0,\frac{1}{n}Id \right) (v)
e^{f^\prime_\frac{j}{n}(x)\cdot v-\frac{1}{2} f^{{}^\prime{}^\prime}_\frac{j}{n}(x)(v,v)+r(x,v)} {\rm d} v=$$
$$\int_{B(0,n^\frac{-1}{3})}
N\left( 0,\frac{1}{n}Id \right) (v)\left[
1+f^\prime_\frac{j}{n}(x)\cdot v-\frac{1}{2} f^{{}^\prime{}^\prime}_\frac{j}{n}(x)(v,v)+
Bil^1_\frac{j}{n}(x)(v,v)+r^\prime(x,v)
\right] {\rm d} v$$
where $Bil^1_\frac{j}{n}$ is a positive bilinear form bounded by $D_7\left( \frac{j}{n} \right) Id$ and
$$|r^\prime(x,v)|\le D_5^\prime\left( \frac{j}{n} \right) |v|^3 . $$
By the last two formulas and by standard properties of the Gaussian we get that
$$\int_{B(0,n^\frac{-1}{3})}
N\left( 0,\frac{1}{n}Id \right) (v)
e^{f^\prime_\frac{j}{n}(x)\cdot v-\frac{1}{2} f^{{}^\prime{}^\prime}_\frac{j}{n}(x)(v,v)+r(x,v)} {\rm d} v\le
e^{
\frac{1}{n}D_6\left( \frac{j}{n} \right)
} . $$
On the other hand, (5.12) and standard properties of the Gaussian imply that
$$\int_{B(0,n^\frac{-1}{3})^c}
N\left( 0,\frac{1}{n}Id \right) (v)
e^{
-f_\frac{j}{n}(x-v)+f_\frac{j}{n}(x)
} {\rm d} v\le
e^{
D_7\left( \frac{j}{n} \right)
} e^{-Cn^\frac{1}{6}} . $$
If we apply the last two formulas to (5.18), (5.17) follows.
Note that we need to know that $f_\frac{j}{n}$ is $C^2$ to get an estimate on
$||f_\frac{j-1}{n}-f_\frac{j}{n}||_{C^0({\bf T}^p)}$; the method for the estimate on $f^\prime_\frac{j}{n}$ and $f^{{}^\prime{}^\prime}_\frac{j}{n}$ is exactly analogous; the reason for the $C^4$ estimate on
$f_\frac{j}{n}$ is that, to estimate the norm of the second derivative of $f_\frac{j-1}{n}-f_\frac{j}{n}$, we need two derivatives more on $f_\frac{j}{n}$.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
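A remark on the radius $n^\frac{-1}{3}$ used in the proof above: it is chosen so that, inside the ball, the cubic Taylor remainder is already of the order $\frac{1}{n}$ we are after,
$$|r(x,v)|\le D_5\left( \frac{j}{n} \right) |v|^3\le
\frac{D_5\left( \frac{j}{n} \right)}{n}
\txt{for} v\in B(0,n^\frac{-1}{3}) , $$
while outside the ball the Gaussian $N(0,\frac{1}{n}Id)$ only carries a mass which is exponentially small in a positive power of $n$.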
\vskip 2pc
\centerline{\bf \S 6}
\centerline{\bf Fokker-Planck and Hamilton-Jacobi}
\vskip 1pc
By the last section, $f^\frac{1}{n}_\frac{j}{n}$ is a regular function; we shall use this information in proposition 6.2 below to prove that the discrete minimal characteristic
$\{ \mu^\frac{1}{n}_\frac{j}{n} \}$ converges, as $n\rightarrow+\infty$, to a weak solution of Fokker-Planck. Moreover we shall prove, in proposition 6.3 below, that the function $u$ we defined in lemma 5.3 is a solution of the Hamilton-Jacobi equation.
We begin by showing that $\gamma_\frac{j}{n}^\frac{1}{n}(x,\cdot)$ is a good approximation of a Gaussian.
\lem{6.1} Let $\{ f_\frac{j}{n} \}$ be as in (5.4); we set
$$Q_\frac{j}{n}(x)=
\left[
Id+\frac{1}{n}f^{{}^\prime{}^\prime}_\frac{j}{n}(x)
\right]^{-1} \txt{and}
\beta_\frac{j}{n}(x)=
\frac{1}{n}Q_\frac{j}{n}(x)f^\prime_\frac{j}{n}(x) \eqno (6.1)$$
Then, there are increasing functions
$\fun{D_{3},D_{4},D_{5},D_6}{[-m,0)}{[0,+\infty)}$ such that the following holds.
For $a>0$, let $L(a)=[-\frac{1}{2} a,\frac{1}{2} a)^p$ and let
$\gamma^\frac{1}{n}_\frac{j}{n}$ be as in (5.6); we have that
$$e^{-D_{3}\left(\frac{j}{n}\right)\frac{1}{n}}\cdot
N(\beta_\frac{j}{n}(x),\frac{1}{n}Q_\frac{j}{n}(x))(v)
e^{-d_\frac{j}{n}(x,v)}\le $$
$$\gamma^\frac{1}{n}_\frac{j}{n}(x,v) \le
e^{D_{3}\left(\frac{j}{n}\right)\frac{1}{n}}\cdot
N(\beta_\frac{j}{n}(x),\frac{1}{n}Q_\frac{j}{n}(x))(v)
e^{-d_\frac{j}{n}(x,v)} \eqno (6.2)$$
where $d_\frac{j}{n}$ is a function such that
$$|d_\frac{j}{n}(x,v)|\le
\frac{D_4(\frac{j}{n})}{n} \txt{if}v\in L(n^\frac{-1}{3}) .
\eqno (6.3) $$
Moreover,
$$\int_{{\bf R}^p\setminus L(n^\frac{-1}{3})}\gamma^\frac{1}{n}_\frac{j}{n}(x,v) {\rm d} v \le
e^{D_{5}\left(\frac{j}{n}\right)}e^{-n^\frac{1}{6}}
\eqno (6.4)$$
and
$$\sup_{v\in{\bf R}^p\setminus L(n^\frac{-1}{3})}\gamma^\frac{1}{n}_\frac{j}{n}(x,v) \le
e^{D_{6}\left(\frac{j}{n}\right)}e^{-n^\frac{1}{6}} .
\eqno (6.5)$$
\rm\vskip 1 pc\noindent{\bf Proof.\quad} {\bf Step 1.} We are going to use Taylor's formula to get an equivalent expression for $\gamma^\frac{1}{n}_\frac{j}{n}$. We begin by noting that, by (5.12), there is an increasing function
$\fun{D_7}{[-m,0)}{(0,+\infty)}$, not depending on $n$, such that
$$|| \beta_\frac{j}{n} ||_\infty +
||Q_\frac{j}{n}(x)-Id||_\infty\le
D_7\left(\frac{j}{n}\right)\frac{1}{n} . \eqno (6.6)$$
The first equality below is (5.6), the second one is the definition of the function $d_\frac{j}{n}(x,v)$.
$$\gamma^\frac{1}{n}_\frac{j}{n}(x,v)=
e^{b_\frac{j}{n}(x)}
\left(\frac{n}{2\pi}\right)^\frac{p}{2}
e^{
-\cinn{v}-f_\frac{j}{n}(x-v)
}= \eqno (6.7)_a$$
$$\left(\frac{n}{2\pi}\right)^\frac{p}{2}
{\rm exp}\left\{
b_\frac{j}{n}(x)-f_\frac{j}{n}(x)
+\frac{1}{2}\inn{nQ_\frac{j}{n}(x)^{-1}\beta_\frac{j}{n}(x)}{\beta_\frac{j}{n}(x)}
\right\} \cdot \eqno (6.7)_b$$
$${\rm exp}\left[
-\frac{1}{2}\inn{nQ_\frac{j}{n}^{-1}(x)(v-\beta_\frac{j}{n}(x))}{v-\beta_\frac{j}{n}(x)} -
d_\frac{j}{n}(x,v)
\right] . \eqno (6.7)_c$$
Indeed, by Taylor's formula,
$$f_\frac{j}{n}(x-v)=f_\frac{j}{n}(x)-
\inn{f^\prime_\frac{j}{n}(x)}{v}+
\frac{1}{2}\inn{f^{{}^\prime{}^\prime}_\frac{j}{n}(x)v}{v}+\tilde d_\frac{j}{n}(x,v)$$
and an easy but lengthy computation shows that
$\tilde d_\frac{j}{n}(x,v)=d_\frac{j}{n}(x,v)$; together with (5.12) this implies that there is a function $D_8$, bounded on $[-m,-\epsilon]$ for all $\epsilon>0$, such that
$$|d_\frac{j}{n}(x,v)|\le
D_8(\frac{j}{n})|v|^3 \qquad
\forall (x,v)\in{\bf T}^p\times{\bf R}^p. $$
By the formula above, $d_\frac{j}{n}$ satisfies (6.3).
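For the reader's convenience, we make the ``easy but lengthy computation'' explicit. Recalling that, by (5.4) and (5.6), $\left( \frac{n}{2\pi} \right)^\frac{p}{2}e^{-\cinn{v}}=N(0,\frac{1}{n}Id)(v)$, i.e. $\cinn{v}=\frac{n}{2}|v|^2$, and that, by (6.1), $nQ^{-1}_\frac{j}{n}(x)\beta_\frac{j}{n}(x)=f^\prime_\frac{j}{n}(x)$, completing the square gives
$$\cinn{v}-\inn{f^\prime_\frac{j}{n}(x)}{v}+
\frac{1}{2}\inn{f^{{}^\prime{}^\prime}_\frac{j}{n}(x)v}{v}=
\frac{1}{2}\inn{nQ^{-1}_\frac{j}{n}(x)(v-\beta_\frac{j}{n}(x))}{v-\beta_\frac{j}{n}(x)}-
\frac{1}{2}\inn{nQ^{-1}_\frac{j}{n}(x)\beta_\frac{j}{n}(x)}{\beta_\frac{j}{n}(x)} ; $$
inserting this identity and the Taylor expansion above into $(6.7)_a$ yields exactly $(6.7)_{b-c}$.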
\noindent{\bf Step 2.} We want to show that the rather complicated expression in $(6.7)_b$ and $(6.7)_c$ is not too far from a Gaussian of suitable mean and variance. Note that
$(6.7)_c$ has the form $e^{-\frac{1}{2}\inn{A(v-b)}{v-b}}$; thus, we have to show that $(6.7)_b$ is the ``right'' normalization coefficient. This is the content of this step.
Since $\gamma^\frac{1}{n}_\frac{j}{n}(x,\cdot)$ is a probability density for all $x$, we get that $(6.7)_b$ is the reciprocal of the integral of
$(6.7)_c$ in the variable $v$. We calculate this integral.
First of all, since $f_\frac{j}{n}$ satisfies (5.12), we get the second inequality below, while the third one comes from standard properties of the Gaussian; the constant $C$ does not depend on anything.
$$0\le\int_{{\bf R}^p\setminus L(n^\frac{-1}{3})}
e^{
-\cinn{v}-f_\frac{j}{n}(x-v)
} {\rm d} v \le
e^{
D_1\left( \frac{j}{n} \right)
}
\int_{{\bf R}^p\setminus L(n^\frac{-1}{3})}
e^{
-\cinn{v}
} {\rm d} v\le
e^{
D_9\left(\frac{j}{n}\right)
} e^{
-Cn^\frac{1}{6}
} . $$
Now,
$$||
f_\frac{j}{n}(x)+
\frac{1}{2}\inn{nQ^{-1}_\frac{j}{n}(x)\beta_\frac{j}{n}(x)}{\beta_\frac{j}{n}(x)}
||_\infty\le D_{10}\left(\frac{j}{n}\right) \eqno (6.8)$$
by (6.6) and (5.12). By $(6.7)$ we have that
$$\int_{{\bf R}^p\setminus L(n^\frac{-1}{3})}
e^{-\cinn{v}-f_\frac{j}{n}(x-v)} {\rm d} v=
e^{
-f_\frac{j}{n}(x)+\frac{1}{2}\inn{nQ_\frac{j}{n}(x)^{-1}\beta_\frac{j}{n}(x)}{\beta_\frac{j}{n}(x)}
} \cdot $$
$$\int_{{\bf R}^p\setminus L(n^\frac{-1}{3})}
{\rm exp}\left[
-\frac{1}{2}\inn{nQ_\frac{j}{n}^{-1}(x)(v-\beta_\frac{j}{n}(x))}{v-\beta_\frac{j}{n}(x)} -
d_\frac{j}{n}(x,v)
\right] {\rm d} v . $$
The last three formulas imply that
$$0\le \int_{{\bf R}^p\setminus L(n^\frac{-1}{3})}
{\rm exp}\left[
-\frac{1}{2}\inn{nQ_\frac{j}{n}^{-1}(x)(v-\beta_\frac{j}{n}(x))}{v-\beta_\frac{j}{n}(x)} -
d_\frac{j}{n}(x,v)
\right] {\rm d} v\le
e^{
D_{11}\left(\frac{j}{n}\right)
} e^{
-Cn^\frac{1}{6}
} . \eqno (6.9)$$
Formula (6.3) implies the two inequalities below.
$$e^{
-D_4\left(\frac{j}{n}\right)\frac{1}{n}
}\cdot
\int_{L(n^\frac{-1}{3})}
e^{
-\frac{1}{2}\inn{nQ^{-1}_\frac{j}{n}(x)(v-\beta_\frac{j}{n}(x))}{v-\beta_\frac{j}{n}(x)}
} {\rm d} v\le $$
$$\int_{L(n^\frac{-1}{3})}
{\rm exp}\left[
-\frac{1}{2}\inn{nQ^{-1}_\frac{j}{n}(x)(v-\beta_\frac{j}{n}(x))}{v-\beta_\frac{j}{n}(x)}
-d_\frac{j}{n}(x,v)
\right] {\rm d} v\le$$
$$e^{
D_4\left(\frac{j}{n}\right)\frac{1}{n}
}\cdot
\int_{L(n^\frac{-1}{3})}
e^{
-\frac{1}{2}\inn{nQ^{-1}_\frac{j}{n}(x)(v-\beta_\frac{j}{n}(x))}{v-\beta_\frac{j}{n}(x)}
} {\rm d} v . $$
The integrals on the left and on the right in the last formula are easy to evaluate: indeed, the Gaussian is centered at
$\beta_\frac{j}{n}(x)$, which satisfies (6.6); thus, almost all its mass (save for an exponentially small rest) lies in $L(n^\frac{-1}{3})$. In formulas,
$$\left(
\frac{(2\pi)^p\det Q_\frac{j}{n}(x)}{n^p}
\right)^\frac{1}{2}
e^{-D_{12}\left(\frac{j}{n}\right)\frac{1}{n}}\le
\int_{L(n^\frac{-1}{3})}
{\rm exp}\left[
-\frac{1}{2}\inn{nQ_\frac{j}{n}^{-1}(x)(v-\beta_\frac{j}{n}(x))}{v-\beta_\frac{j}{n}(x)} -
d_\frac{j}{n}(x,v)
\right] {\rm d} v\le$$
$$\left(
\frac{(2\pi)^p\det Q_\frac{j}{n}(x)}{n^p}
\right)^\frac{1}{2}
e^{D_{12}\left(\frac{j}{n}\right)\frac{1}{n}} . $$
By the last formula and (6.9), we get that
$$\left(
\frac{(2\pi)^p\det Q_\frac{j}{n}(x)}{n^p}
\right)^\frac{1}{2}
e^{-D_{13}\left(\frac{j}{n}\right)\frac{1}{n}}\le
\int_{{\bf R}^p}
{\rm exp}\left[
-\frac{1}{2}\inn{nQ_\frac{j}{n}^{-1}(x)(v-\beta_\frac{j}{n}(x))}{v-\beta_\frac{j}{n}(x)} -
d_\frac{j}{n}(x,v)
\right] {\rm d} v\le$$
$$\left(
\frac{(2\pi)^p\det Q_\frac{j}{n}(x)}{n^p}
\right)^\frac{1}{2}
e^{D_{13}\left(\frac{j}{n}\right)\frac{1}{n}} . $$
We saw above that $(6.7)_b$ is the inverse of the integral above; thus,
$$\left(
\frac{n^p}{(2\pi)^p\det Q_\frac{j}{n}(x)}
\right)^\frac{1}{2}
e^{-D_{13}\left(\frac{j}{n}\right)\frac{1}{n}}\le
\left(\frac{n}{2\pi}\right)^\frac{p}{2}
{\rm exp}\left\{
b_\frac{j}{n}(x)-f_\frac{j}{n}(x)
+\frac{1}{2}\inn{nQ_\frac{j}{n}(x)^{-1}\beta_\frac{j}{n}(x)}{\beta_\frac{j}{n}(x)}
\right\} \le$$
$$\left(
\frac{n^p}{(2\pi)^p\det Q_\frac{j}{n}(x)}
\right)^\frac{1}{2}
e^{D_{13}\left(\frac{j}{n}\right)\frac{1}{n}} . \eqno (6.10)$$
\noindent{\bf End of the proof.} We saw at the end of step 1 that
$d_\frac{j}{n}$ satisfies (6.3). Formula (6.2) follows by (6.7) and (6.10). We prove (6.4) and (6.5).
We begin by writing the normalization coefficient $b_\frac{j}{n}$ in the following complicated way.
$$e^{b_\frac{j}{n}(x)}\left( \frac{n}{2\pi} \right)^\frac{p}{2}=$$
$$\left( \frac{n}{2\pi} \right)^\frac{p}{2}
{\rm exp}\left\{
b_\frac{j}{n}(x)-f_\frac{j}{n}(x)+
\frac{1}{2}\inn{nQ_\frac{j}{n}(x)^{-1}\beta_\frac{j}{n}(x)}{\beta_\frac{j}{n}(x)}
\right\} \cdot$$
$${\rm exp}\left\{
f_\frac{j}{n}(x)-
\frac{1}{2}\inn{nQ_\frac{j}{n}(x)^{-1}\beta_\frac{j}{n}(x)}{\beta_\frac{j}{n}(x)}
\right\} . $$
Formulas (6.10) and (6.8) give an estimate on the first and second term respectively in the product above; thus,
$$e^{-D_{14}\left( \frac{j}{n} \right)}
\left(
\frac{n^p}{(2\pi)^p\det Q_\frac{j}{n}(x)}
\right)^\frac{1}{2} \le
e^{b_\frac{j}{n}(x)}
\left( \frac{n}{2\pi} \right)^\frac{p}{2}\le
e^{D_{14}\left( \frac{j}{n} \right)}
\left(
\frac{n^p}{(2\pi)^p\det Q_\frac{j}{n}(x)}
\right)^\frac{1}{2} . $$
Together with $(6.7)_a$, this implies that
$$e^{-D_{14}\left( \frac{j}{n} \right)}
\left(
\frac{n^p}{(2\pi)^p\det Q_\frac{j}{n}(x)}
\right)^\frac{1}{2}
e^{
-\cinn{v}-f_\frac{j}{n}(x-v)
} \le$$
$$\gamma^\frac{1}{n}_\frac{j}{n}(x,v)\le
e^{D_{14}\left( \frac{j}{n} \right)}
\left(
\frac{n^p}{(2\pi)^p\det Q_\frac{j}{n}(x)}
\right)^\frac{1}{2}
e^{
-\cinn{v}-f_\frac{j}{n}(x-v)
} . $$
Now (6.4) and (6.5) follow from the last formula, (5.12) and well-known properties of the Gaussian.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
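Heuristically, (6.2)-(6.5) say that, up to multiplicative errors of order $e^{\pm\frac{C}{n}}$ and exponentially small tails, the increment $v$ of a particle sitting at $x$ is distributed as $N(\beta_\frac{j}{n}(x),\frac{1}{n}Q_\frac{j}{n}(x))$; since the particle moves from $x$ to $x-v$, its mean displacement per step is
$$-\beta_\frac{j}{n}(x)=
-\frac{1}{n}Q_\frac{j}{n}(x)f^\prime_\frac{j}{n}(x)\simeq
-\frac{1}{n}\partial_xu\left( \frac{j}{n},x \right) , $$
which accounts for the drift $-\partial_xu$ in the Fokker-Planck equation of proposition 6.2 below.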
\prop{6.2} Let $T\in[-m,0]$, let $s$ be the maximal integer such that $T\le\frac{-s}{n}$ and let $\mu\in{\cal M}_1(\T^p)$. Let
$\{ \mu^\frac{1}{n}_\frac{j}{n} \}_{j=-s}^0$ be a minimal
$\{ \gamma^\frac{1}{n}_\frac{j}{n} \}_{j=-s}^{-1}$-sequence starting at
$(\frac{-s}{n},\mu)$ and let the interpolating curve
$\{ \mu^\frac{1}{n}_t \}$ be defined as in section 3. Then, up to subsequences $\{ \mu^\frac{1}{n}_t \}$ converges weakly, uniformly on each interval $[T,-\epsilon]$, to a curve $\mu_t$ which is a weak solution of $(FP)_{T,-\partial_xu,\mu}$, where $u$ is the limit of lemma 5.3.
\rm\vskip 1 pc\noindent{\bf Proof.\quad} Throughout the proof, we shall deal with the sequence
$\{ n_k \}$ of lemma 5.3, but we shall drop the subscript $k$ to lighten the notation.
It is standard that $(FP)_{T,-\partial_xu,\mu}$ has a weak solution $\mu_{T,t}$; we have to prove that, if $g\in C({\bf T}^p)$ and
$\epsilon\in (0,-T)$, then
$$\sup_{t\in [T,-\epsilon]}\inn{\mu_t^\frac{1}{n}-\mu_{T,t}}{g}\rightarrow 0
\txt{as}n\rightarrow+\infty \eqno (6.11)$$
where $\inn{\cdot}{\cdot}$ denotes the duality coupling between
${\cal M}({\bf T}^p)$, the space of signed measures on ${\bf T}^p$, and
$C({\bf T}^p)$.
Let $(\gamma^\frac{1}{n}_\frac{-s}{n},\dots,\gamma^\frac{1}{n}_\frac{-1}{n})$ be as in the hypotheses; for $-s\le j\le -1$ we define
$$\fun{
S^\ast_{\frac{j}{n},\frac{j+1}{n}}
}{
{\cal M}({\bf T}^p)
}{
{\cal M}({\bf T}^p)
},\qquad
\fun{
S^\ast_{\frac{j}{n},\frac{j+1}{n}}
}{
\mu
}{
\mu\ast\gamma^\frac{1}{n}_\frac{j}{n}
} . $$
If $-s\le l\le j\le 0$, we set
$$\fun{
S^\ast_{\frac{l}{n},\frac{j}{n}}
}{
{\cal M}({\bf T}^p)
}{
{\cal M}({\bf T}^p)
},\qquad
S^\ast_{\frac{l}{n},\frac{j}{n}}(\mu)=
S^\ast_{\frac{j-1}{n},\frac{j}{n}}\circ\dots\circ
S^\ast_{\frac{l}{n},\frac{l+1}{n}}(\mu) . $$
Clearly, with this definition $S^\ast_{\frac{l}{n},\frac{j}{n}}$ has the co-cycle property
$$S^\ast_{\frac{j}{n},\frac{i}{n}}\circ
S^\ast_{\frac{l}{n},\frac{j}{n}}=S^\ast_{\frac{l}{n},\frac{i}{n}} \txt{for}
-s\le l\le j\le i\le 0 $$
and
$$\mu^\frac{1}{n}_\frac{j}{n}=S^\ast_{\frac{-s}{n},\frac{j}{n}}\mu $$
i. e. $(\mu,S^\ast_{\frac{-s}{n},\frac{-s+1}{n}}(\mu),\dots,
S^\ast_{\frac{-s}{n},0}(\mu))$ is a
$(\gamma^\frac{1}{n}_\frac{-s}{n},\dots,\gamma^\frac{1}{n}_\frac{-1}{n})$-sequence.
Let us also introduce the operator
$$\fun{F^\ast_{T,t}}{\mu}{\mu_{T,t}} $$
where $\mu_{T,t}$ is the solution, at time $t\ge T$, of the Fokker-Planck equation $(FP)_{T,-\partial_x u,\mu}$.
By the last two formulas, we can write (6.11) as
$$\sup_{t\in[T,-\epsilon]}\inn{(S_{\frac{-s}{n},\frac{[nt]}{n}}^\ast)\mu-
F^\ast_{T,t}\mu}{g}
\rightarrow 0\txt{as} n\rightarrow+\infty . \eqno (6.12)$$
Since
$S^\ast_{\frac{j}{n},\frac{j+1}{n}}\mu=
\mu\ast\gamma^\frac{1}{n}_\frac{j}{n}$, the definition of
$\mu\ast\gamma^\frac{1}{n}_\frac{j}{n}$ immediately yields that
$S^\ast_{\frac{j}{n},\frac{j+1}{n}}$ is the adjoint of the operator
$$\fun{S_{\frac{j}{n},\frac{j+1}{n}}
}{C({\bf T}^p)}{C({\bf T}^p)} \qquad
\fun{S_{\frac{j}{n},\frac{j+1}{n}}
}{g}{
\int_{{\bf R}^p}g(x-v)\gamma^\frac{1}{n}_\frac{j}{n}(x,v) {\rm d} v
} . $$
Note that $S_{\frac{j}{n},\frac{j+1}{n}}$ takes values in $C({\bf T}^p)$ by the results of section 5: indeed, in that section we have proven that
$\gamma^\frac{1}{n}_\frac{j}{n}$ is a continuous function. Actually, proposition 5.2 implies that $S_{\frac{j}{n},\frac{j+1}{n}}$ is a bounded operator; we can associate to it a co-cycle as we did with $S^\ast_{\frac{j}{n},\frac{j+1}{n}}$, with the only difference that $S_{\frac{j}{n},\frac{j+1}{n}}$ is going back in time: we are bringing a final condition $\phi$ to $\phi_\frac{-1}{n}$, $\phi_\frac{-2}{n}$, etc...
Also $F^\ast_{T,t}$ has an adjoint, which is a co-cycle going back in time; namely, for $-m\le T\le t\le 0$ and $u$ as in lemma 5.3, we can define $F_{t,T}g$ to be $\psi_T$, the solution at time $T$ of
$$\left\{
\eqalign{
\partial_t\psi&=-(\Delta\psi-\partial_x u\cdot\partial_x\psi)\cr
\psi_t(x)&=g(x)
}
\right. $$
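Here and below we use the duality relations
$$\inn{S^\ast_{\frac{l}{n},\frac{j}{n}}\mu}{g}=
\inn{\mu}{S_{\frac{l}{n},\frac{j}{n}}g}
\txt{and}
\inn{F^\ast_{T,t}\mu}{g}=\inn{\mu}{F_{t,T}g} , $$
which hold by the very definition of adjoint.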
Thus, (6.12) is equivalent to
$$\sup_{t\in[T,-\epsilon]}
\inn{\mu}{S_{\frac{-s}{n},\frac{[nt]}{n}}g-F_{T,t}g}
\rightarrow 0\txt{as}n\rightarrow+\infty $$
which in turn is implied by
$$\sup_{t\in[T,-\epsilon]}||
S_{\frac{-s}{n},\frac{[nt]}{n}}g-F_{T,t}g
||_\infty \rightarrow 0\txt{as}n\rightarrow+\infty . \eqno (6.13)$$
Let $B_\tau$ be the operator
$$\fun{B_\tau}{\psi}{-(\Delta\psi-\partial_xu(\tau,\cdot)\cdot\partial_x\psi)} . $$
Theorem 6.5 of section 1 of [7] (which holds in the autonomous case, but is easy to adapt to our situation) says that (6.13) holds if we have that (keeping track that the time is inverted)
$$||
-n[S_{\frac{j}{n},\frac{j+1}{n}}-Id]g-B_\frac{j}{n}g
||_{C^0({\bf T}^p)} \rightarrow 0 \eqno (6.14)$$
for every $g\in C^2({\bf T}^p)$ uniformly for
$\frac{j}{n}\in[\frac{-s}{n},-\epsilon]$. Thus, the proposition reduces to proving this formula. The first equality below is the definition of
$S_{\frac{j}{n},\frac{j+1}{n}}$.
$$n[
S_{\frac{j}{n},\frac{j+1}{n}}-Id
] g=
n\int_{{\bf R}^p}[
g(x-v)-g(x)
] \gamma^\frac{1}{n}_\frac{j}{n}(x,v) {\rm d} v=$$
$$n\int_{L(n^\frac{-1}{3})}[
g(x-v)-g(x)
] \gamma^\frac{1}{n}_\frac{j}{n}(x,v) {\rm d} v + \eqno (6.15)_a$$
$$n\int_{{\bf R}^p\setminus {L(n^\frac{-1}{3})}}[
g(x-v)-g(x)
] \gamma^\frac{1}{n}_\frac{j}{n}(x,v) {\rm d} v . \eqno (6.15)_b$$
By (6.4) and (6.5) of lemma 6.1 we get that
$$(6.15)_b\rightarrow 0 . \eqno (6.16)$$
As for $(6.15)_a$, we want to replace $\gamma^\frac{1}{n}_\frac{j}{n}$ by the Gaussian given by lemma 6.1. Indeed, since $g$ is continuous, there is $\delta_n\rightarrow 0$ as
$n\rightarrow+\infty$ such that the first inequality below holds; the second one follows by (6.2) and (6.3), while the last one follows from the fact that the Gaussian has integral one.
$$\left\vert
\int_{L(n^\frac{-1}{3})}
[g(x-v)-g(x)]\cdot
\left[
N(\beta_\frac{j}{n}(x),\frac{1}{n}Q_\frac{j}{n}(x))(v)-
\gamma^\frac{1}{n}_\frac{j}{n}(x,v)
\right] {\rm d} v
\right\vert \le $$
$$\delta_n\int_{L(n^\frac{-1}{3})}
\left\vert
N(\beta_\frac{j}{n}(x),\frac{1}{n}Q_\frac{j}{n}(x))(v)-
\gamma^\frac{1}{n}_\frac{j}{n}(x,v)
\right\vert {\rm d} v\le$$
$$\delta_n\int_{L(n^\frac{-1}{3})}
N(\beta_\frac{j}{n}(x),\frac{1}{n}Q_\frac{j}{n}(x))(v)
\cdot[e^{\frac{1}{n}D_4\left( \frac{j}{n} \right)} -1] {\rm d} v\le
\delta_n\frac{1}{n} D_{15}\left( \frac{j}{n} \right) . $$
We multiply by $n$ and arrange the terms in a different way.
$$n\int_{L(n^\frac{-1}{3})}[
g(x-v)-g(x)
] N\left(\beta_\frac{j}{n}(x),\frac{1}{n}Q_\frac{j}{n}(x)\right) (v) {\rm d} v
-\delta_nD_{15}\left( \frac{j}{n} \right)\le$$
$$n\int_{L(n^\frac{-1}{3})}[
g(x-v)-g(x)
] \gamma^\frac{1}{n}_{j}(x,v) {\rm d} v\le$$
$$n\int_{L(n^\frac{-1}{3})}[
g(x-v)-g(x)
] N\left(\beta_\frac{j}{n}(x),\frac{1}{n}Q_\frac{j}{n}(x)\right) (v) {\rm d} v+
\delta_nD_{15}\left( \frac{j}{n} \right) . $$
By (6.1) and lemma 5.3, if $\frac{j}{n}\rightarrow t$, then
$n\beta_\frac{j}{n}(x)\rightarrow\partial_xu(t,x)$; on the other hand,
$Q_\frac{j}{n}\rightarrow Id$ by (5.12). Since $g\in C^2({\bf T}^p)$, this implies in a standard way that the two integrals on the left and on the right of the formula above converge to
$\Delta g-\partial_xu\cdot\partial_xg$; thus,
$$(6.15)_a\rightarrow\Delta g-\partial_xu\cdot\partial_xg . $$
Now (6.14) follows from (6.15), (6.16) and the last formula.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
This immediately calls for a definition.
\vskip 1pc
\noindent{\bf Definition.} Let $\mu_s$ be a limit as in proposition 6.2; in particular, it satisfies $\mu_{T}=\mu$. Then we say that $\mu_s$ is a limit minimal characteristic starting at $(T,\mu)$.
\prop{6.3} Let $T\in[-m,0)$ and let $s$ be the maximal integer such that $T\le\frac{-s}{n}$. Let
$\{ \mu^\frac{1}{n}_\frac{j}{n} \}_j$ be a minimal
$\{ \gamma^\frac{1}{n}_\frac{j}{n} \}_j$-sequence starting at
$(\frac{-s}{n},\mu)$. Let
$f^\frac{1}{n}_t$ be defined by (5.4); then, there is
$f_0\in L^\infty({\bf T}^p)$ such that the following holds.
Up to subsequences, $f^\frac{1}{n}_t$ converges to a function $u$ which satisfies
$(HJ)_{0,\bar\mu,f_0}$, where $\bar\mu_t$ is a limit minimal characteristic. The convergence is in
$C([T,-\epsilon],C^2({\bf T}^p))$ for all $\epsilon\in(0,T)$.
\rm\vskip 1 pc\noindent{\bf Proof.\quad} {\bf Step 1.} In this step, we want to reduce to the situation of [14], i. e. to a problem where neither the potential nor the final condition depends on $n$.
As in proposition 6.2, we can interpolate the measures
$\{ \mu^\frac{1}{n}_\frac{j}{n} \}_j$ by a curve of measures
$\mu^\frac{1}{n}_t$. Taking subsequences, we can suppose that
$f^\frac{1}{n}_t\rightarrow u$ (lemma 5.3) and that
$\mu^\frac{1}{n}_t\rightarrow\bar\mu_t$ (proposition 6.2). By point $ii$) of the definition of differentiability on densities, we can further refine our subsequence in order to have
$e^{-f_0^\frac{1}{n}}\rightharpoonup e^{-f_0}$ in $L^1({\bf T}^p)$.
For $\mu^\frac{1}{n}_t$ and its limit $\bar\mu_t$, we define as above
$$P^\frac{1}{n}(t,x)=V(t,x)+
W^{\mu_t^\frac{1}{n}}(x),\qquad
\bar P(t,x)=V(t,x)+
W^{\bar\mu_t}(x) .
$$
Note that the function $\bar P$ does not depend on $n$, since it is defined in terms of $\bar\mu_t$ which does not depend on $n$.
Since the curve of measures $\mu^\frac{1}{n}_t$ converges uniformly to $\bar\mu_t$ on $[T,-\epsilon]$ for all $\epsilon>0$, by the definition of $W^{\mu^\frac{1}{n}_t}$ we have that
$$\sup_{\frac{j}{n}\in[T,-\epsilon]}||
P^\frac{1}{n}(\frac{j}{n},\cdot)-\bar P(\frac{j}{n},\cdot)
||_{C^4({\bf T}^p)} \le\delta_n \eqno (6.17)$$
with $\delta_n\rightarrow 0$ as $n\rightarrow+\infty$.
We defined $\{ f_\frac{j}{n}^\frac{1}{n} \}_j$ as the linearized cost for the problem with final condition
$f_0^\frac{1}{n}=U^\prime(\mu_0^\frac{1}{n})$ and potential
$P^\frac{1}{n}$; we let $\{ \bar f_\frac{j}{n}^\frac{1}{n} \}_j$ be the linearized cost for the problem with final condition
$f_0$ and potential $\bar P$.
Since neither the potential nor the final condition for
$\bar f^\frac{1}{n}_t$ depends on $n$, we are exactly in the case of [14]; by theorem 29 of [14], $\bar f^\frac{1}{n}_t$ converges to a solution of $(HJ)_{0,\bar\mu,f_0}$ as $n\rightarrow+\infty$; thus, it suffices to show that, for all $\epsilon>0$,
$$\sup_{\frac{j}{n}\in[T,-\epsilon]}
||
f^\frac{1}{n}_\frac{j}{n}-\bar f^\frac{1}{n}_\frac{j}{n}
||_{C^2({\bf T}^p)} \rightarrow 0 \txt{as} n\rightarrow+\infty .
\eqno (6.18)$$
\noindent {\bf Step 2.} Here we show that the Feynman-Kac formula (5.9) implies (6.18).
We define ${\cal P}_\frac{j}{n}$ as in formula (5.7); we define
$\bar{\cal P}_\frac{j}{n}$ analogously, but for the potential
$\bar P$. We set
$$c_\frac{j}{n}(x,y)\colon=
N(0,\frac{|j|}{n}Id)(x-y)
E_{0,0}\left[
{\cal P}_\frac{j}{n}(
x-a_{x-y}(\frac{j}{n})-\tilde w(\frac{j}{n}) , \dots,
x-a_{x-y}(-\frac{1}{n})-\tilde w(-\frac{1}{n})
)
\right] $$
and
$$\bar c_\frac{j}{n}(x,y)\colon=
N(0,\frac{|j|}{n}Id)(x-y)
E_{0,0}\left[
\bar{\cal P}_\frac{j}{n}(
x-a_{x-y}(\frac{j}{n})-\tilde w(\frac{j}{n}) , \dots,
x-a_{x-y}(-\frac{1}{n})-\tilde w(-\frac{1}{n})
)
\right] . $$
By (5.9) we get
$$e^{-f^\frac{1}{n}_\frac{j}{n}(x)}=
\int_{{\bf R}^p}c_\frac{j}{n}(x,y)e^{-f^\frac{1}{n}_0(y)} {\rm d} y
\txt{and}
e^{-\bar f^\frac{1}{n}_\frac{j}{n}(x)}=
\int_{{\bf R}^p}\bar c_\frac{j}{n}(x,y)e^{-f_0(y)} {\rm d} y . $$
Now, by the triangle inequality,
$$||e^{-f^\frac{1}{n}_\frac{j}{n}}-
e^{-\bar f^\frac{1}{n}_\frac{j}{n}} ||_{C^2({\bf T}^p)}\le
\int_{{\bf R}^p}||
\bar c_\frac{j}{n}(\cdot,y)-c_\frac{j}{n}(\cdot,y)
||_{C^2({\bf R}^p)} e^{-f_0(y)} {\rm d} y+$$
$$\left\vert\left\vert\int_{{\bf R}^p}
c_\frac{j}{n}(\cdot,y)
[ e^{-f_0(y)}-e^{-f_0^\frac{1}{n}(y)}] {\rm d} y
\right\vert\right\vert_{C^2({\bf R}^p)} . $$
Thus, (6.18) follows if we prove that
$$\sup_{-m\le\frac{j}{n}\le-\epsilon}\left\vert\left\vert
\int_{{\bf R}^p}
c_\frac{j}{n}(\cdot,y)[e^{-f_0(y)}-e^{-f_0^\frac{1}{n}(y)}] {\rm d} y
\right\vert\right\vert_{C^2({\bf R}^p)} \rightarrow 0
\txt{as} n\rightarrow+\infty \eqno (6.19)$$
and (recalling that $f_0$ is bounded by the definition of differentiability on densities)
$$\sup_{-m\le\frac{j}{n}\le-\epsilon}
\int_{{\bf R}^p}
||c_\frac{j}{n}(\cdot,y)-\bar c_\frac{j}{n}(\cdot,y)||_{C^2({\bf R}^p)}
{\rm d} y
\rightarrow 0 \txt{as} n\rightarrow+\infty . \eqno (6.20) $$
We begin with (6.19). For $L(a)$ defined as in lemma 6.1, we see that, by the periodicity of $f_0$ and $f_0^\frac{1}{n}$,
$$\int_{{\bf R}^p}c_\frac{j}{n}(x,y)
[e^{-f_0(y)}-e^{-f_0^\frac{1}{n}(y)}] {\rm d} y=
\int_{L(\frac{1}{2})}\left[
\sum_{k\in{\bf Z}^p}c_\frac{j}{n}(x,y+k)
\right] \cdot
[e^{-f_0(y)}-e^{-f_0^\frac{1}{n}(y)}] {\rm d} y . $$
Since we saw at the beginning of the proof that
$e^{-f_0^\frac{1}{n}}\rightharpoonup e^{-f_0}$ in $L^1({\bf T}^p)$, (6.19) follows if we prove that the set of functions of $y$
$$\left\{
\sum_{k\in{\bf Z}^p}\partial^l_x c_\frac{j}{n}(x,y+k)\;\colon\;
x\in{\bf T}^p,\quad-m\le\frac{j}{n}\le-\epsilon,\quad 0\le l\le 2
\right\} $$
is relatively compact in $L^\infty(L(\frac{1}{2}))$; since $c_\frac{j}{n}$ is the product of a Gaussian with variance greater than $\epsilon$ and a periodic function bounded in $C^4$, this follows easily by Ascoli-Arzel\`a.
We prove (6.20). We recall that, by the definition of $c_\frac{j}{n}$,
$\bar c_\frac{j}{n}$ and (5.7),
$$c_\frac{j}{n}(x,y)-\bar c_\frac{j}{n}(x,y)=
N\left( 0,\frac{|j|}{n}Id \right)(x-y)\cdot$$
$$E_{0,0}\left\{
{\rm exp}\left[
\frac{1}{n}\cdot\sum_{r=j}^{-1}
P_\frac{r+1}{n}\left(
x-a_{x-y}\left(\frac{r}{n}\right)-\tilde w\left(\frac{r}{n}\right)
\right)
\right] -
{\rm exp}\left[\frac{1}{n}\cdot\sum_{r=j}^{-1}
\bar P_\frac{r+1}{n}\left(
x-a_{x-y}\left(\frac{r}{n}\right)-\tilde w\left(\frac{r}{n}\right)
\right)
\right]
\right\} . $$
The first term in the product above is a Gaussian, which is bounded in $C^2$ if $\frac{|j|}{n}\ge\epsilon$; as for the second one, it is easy to see that it tends to zero by (6.17). This implies (6.20).
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\noindent{\bf End of the proof of theorem 1.} By proposition 6.3, the limit $u$ of the linearized value functions satisfies
$(HJ)_{0,\mu,f_0}$, while the limit minimal characteristic satisfies
$(FP)_{-m,-\partial_xu,\mu}$ by proposition 6.2; thus, the only things we have to prove are (1) and the semigroup property of
$\Lambda^m$. As for the latter, it follows in a standard way from (1) (see for instance theorem 4 of [4]); thus, we shall skip its proof.
We prove (1). Let $\mu_t$ be a limit minimal characteristic starting at $(T,\mu)$; let us call $\rho_t$ its density ($\mu_t$ has a density since the drift $-\partial_xu$ is regular by proposition 5.2) and let $u$ be the solution of the associated Hamilton-Jacobi equation; let us define the drift $Y$ as $Y=-\partial_x u$. Proposition 6.2 implies that
$\mu_t$ is the push-forward of the Wiener measure by the map
$\fun{}{\omega}{\xi(t)(\omega)}$, where $\xi$ solves the stochastic differential equation
$$\left\{
\eqalign{
{\rm d} \xi(t)&= Y(t,\xi(t)) {\rm d} t+ {\rm d} w(t)\qquad t\in[T,0]\cr
\xi(T)&= X
} \right. $$
and $X$ has distribution $\mu$. This implies that
$$\inf\{
E_w\int_T^0L_c^{\frac{1}{2}\rho}(t,\xi(t),Y(t,\xi(t))) {\rm d} t+U(\rho(0){\cal L}^p)
\} \le U(T,\mu)$$
where $E_w$ denotes expectation with respect to the Wiener measure. We must prove the opposite inequality.
Let $Y$ be a Lipschitz drift, and let
$\gamma^\frac{1}{n}_\frac{j}{n}(x,\cdot)$ be the law of
$\xi(\frac{j+1}{n})-x$, where $\xi$ solves
$$\left\{
\eqalign{
{\rm d} \xi(t)&=Y(t,\xi(t)) {\rm d} t+ {\rm d} w(t)\cr
\xi(\frac{j}{n})&=x .
}
\right. $$
Now we consider a $\{ \gamma_\frac{j}{n} \}_j$-sequence
$\{ \mu_\frac{j}{n} \}_j$ starting at $(\frac{-s}{n},\mu)$; by lemma 3.1, we have that
$$\sum_{j=-s}^{-1}
\int_{{\bf T}^p\times{\bf R}^p}\left[
\frac{1}{n}L_c^{\frac{1}{2}\mu_\frac{j}{n}}(\frac{j}{n},x,nv)+
\log\gamma_\frac{j}{n}(x,v)
\right] \gamma_\frac{j}{n}(x,v) {\rm d} \mu_\frac{j}{n}(x) {\rm d} v +
U(\mu_0)\ge
\hat U(\frac{-s}{n},\mu) . $$
It is easy to see that, if we let $n\rightarrow+\infty$, the left hand side converges to
$$E_w
\int_T^0L_c^{\frac{1}{2}\rho}(t,\xi(t),Y(t,\xi(t))) {\rm d} t+U(\rho(0){\cal L}^p) . $$
From this, the opposite inequality follows.
\par\hfill $\backslash\backslash\backslash$\vskip 1 pc
\vskip 2pc
\centerline{\bf Bibliography}
\noindent [1] L. Ambrosio, W. Gangbo, Hamiltonian ODE's in the Wasserstein space of probability measures, Communications on Pure and Applied Math., {\bf 61}, 18-53, 2008.
\noindent [2] L. Ambrosio, N. Gigli, G. Savar\'e, Gradient Flows, Birkh\"auser, Basel, 2005.
\noindent [3] L. Ambrosio, N. Gigli, G. Savar\'e, Heat flow and calculus on metric measure spaces with Ricci curvature bounded below - the compact case. Preprint 2012.
\noindent [4] U. Bessi, Viscous Aubry-Mather theory and the Vlasov equation, Discrete and Continuous Dynamical Systems,
{\bf 34}, 379-420, 2014.
\noindent [5] C. Dellacherie, P-A. Meyer, Probabilities and potential, Paris, 1978.
\noindent [6] I. Ekeland, R. Temam, Convex analysis and variational problems, Amsterdam, 1976.
\noindent [7] S. N. Ethier, T. G. Kurtz, Markov processes, Wiley, New York, 1986.
\noindent [8] J. Feng, T. Nguyen, Hamilton-Jacobi equations in space of measures associated with a system of conservation laws, J. Math. Pures Appl., {\bf 97}, 318-390, 2012.
\noindent [9] W. Gangbo, A. Tudorascu, Lagrangian dynamics on an infinite-dimensional torus; a weak KAM theorem, Adv. Math.,
{\bf 224}, 260-292, 2010.
\noindent [10] W. Gangbo, A. Tudorascu, Weak KAM theory on the Wasserstein torus with multi-dimensional underlying space, preprint.
\noindent [11] I. M. Gel'fand, A. M. Yaglom, Integration in functional spaces and its applications in Quantum Physics, J. Math. Phys. {\bf 1}, 48-69, 1960.
\noindent [12] W. Gangbo, T. Nguyen, A. Tudorascu, Hamilton-Jacobi equations in the Wasserstein space, Methods Appl. Anal., {\bf 15}, 155-183, 2008.
\noindent [13] D. Gomes, A stochastic analog of Aubry-Mather theory, Nonlinearity, {\bf 15}, 581-603, 2002.
\noindent [14] D. Gomes, E. Valdinoci, Entropy penalization method for Hamilton-Jacobi equations, Advances in Mathematics, {\bf 215}, 94-152, 2007.
\noindent [15] R. Jordan, D. Kinderlehrer, F. Otto, The variational formulation of the Fokker-Planck equation, SIAM J. Math. Anal., {\bf 29}, 1-17, 1998.
\noindent [16] C. Villani, Topics in optimal transportation, Providence, R. I., 2003.
\end
\section{Introduction}
Identifying a surface containing a solution (and/or the support of sparse solutions) is a
relevant task in optimization, since it allows one to reduce the dimension of the problem at hand and then to apply
a more sophisticated method (see, e.g., \cite{bertsekas:1982,birgin:2002,2017arXiv170307761C,cristofari:2018new,desantis:2012,hager2011gradient,hager:2006,hager2016active}).
This is the reason why, in the last decades, identification properties
of optimization methods have been the subject of extensive studies.
The Frank-Wolfe (FW) algorithm, first introduced in \cite{frank1956algorithm}, is a classic first-order optimization method that has
recently re-gained popularity thanks to the way it can easily handle the structured constraints appearing in many real-world applications.
This method and its variants have been indeed applied in the context of, e.g., submodular optimization problems \cite{bach2013learning},
variational inference problems \cite{krishnan2015barrier} and sparse neural network training \cite{grigas2019stochastic}.
It is important to notice that the FW approach has a relevant drawback with respect to other algorithms: even when dealing
with the simplest polytopes, it cannot identify the active set in finite time (see, e.g., \cite{FOIMP}).
Due to the renewed interest in the method, it has hence become a relevant issue to determine whether
some FW variants admit active set identification properties similar to those of other first order methods.
In this paper we focus on the away-step Frank-Wolfe (AFW) method and analyze active set identification properties
for problems of the form
\begin{equation*}
\textnormal{min} \left\{f(x) \ | \ x \in \Delta_{n - 1} \right\},
\end{equation*}
where the objective $f$ is a
differentiable function with Lipschitz regular gradient and the feasible set
$$\Delta_{n - 1}=\left\{x\in \mathbb R^n: \,\displaystyle \sum_{i=1}^n x_i = 1, \, x \ge 0\right\}$$
is the probability simplex. We further extend some of the active set
complexity results to general polytopes.\\
\subsection{Contributions} It is a classic result that on polytopes and under strict complementarity conditions
the AFW with exact linesearch identifies the face containing the minimum in finite time for strongly convex
objectives \cite{guelat1986some}. More general active set identification properties for Frank-Wolfe variants
have recently been analyzed in \cite{FOIMP}, where the authors proved active set identification for sequences convergent
to a stationary point, and AFW convergence to a stationary point for $C^2$ objectives with a finite number
of stationary points and satisfying a technical convexity-concavity assumption (this assumption is essentially a generalization
of a property of quadratic, possibly indefinite, functions).
The main contributions of this article with respect to \cite{FOIMP} are twofold: \\
\begin{itemize}
\item First, we give quantitative local and global active set identification complexity bounds
under suitable assumptions on the objective. The key element in the computation of those bounds
is a quantity that we call ``active set radius''. This radius determines a neighborhood of a stationary point for which the AFW at each iteration identifies an active constraint (if there is any not yet identified one).
In particular, to get the active set complexity bound it is sufficient to know how many iterations it takes
for the AFW sequence to enter this neighborhood. \\
\item Second, we analyze the identification properties of AFW without the technical
convexity-concavity $C^2$ assumption used in \cite{FOIMP} (we consider general nonconvex objectives with Lipschitz gradient instead).
More specifically, we prove active set identification under different conditions on the stepsize and some additional hypotheses on the support of stationary points. \\
\end{itemize}
In order to prove our results, we consider stepsizes dependent on
the Lipschitz constant of the gradient (see, e.g., \cite{balashov2019gradient}, \cite{iusem2003convergence} and references therein).
By exploiting the affine invariance property of the AFW (see, e.g., \cite{jaggi2013revisiting}), we also extend some of the results to generic polytopes. In our analysis we will see how the AFW identification properties are related to the value of Lagrangian multipliers on stationary points.
This, to the best of our knowledge, is the first time that some active set complexity bounds are given for a variant of the FW algorithm.
The paper is organized as follows:
after presenting the AFW method and the setting in Section~\ref{prel}, we study the local behaviour of this algorithm regarding the active set in Section~\ref{asradius}. In Section~\ref{asbds} we provide active set identification results in a quite general context, and apply these to the strongly convex case for obtaining complexity bounds. Section~\ref{S:nonconv} treats the nonconvex case, giving both global and local active set complexity bounds. In the final Section~\ref{concl} we draw some conclusions. To improve readability, some technical details are deferred to an appendix.
\subsection{Related work} In \cite{burke1988identification} the authors proved that the projected gradient method and other converging
sequential quadratic programming methods identify quasi-polyhedral faces under some nondegeneracy conditions.
In \cite{burke1994exposing} those results were extended to the case of exposed faces in polyhedral sets without the nondegeneracy assumptions.
This extension is particularly relevant to our work since the identification of exposed faces in polyhedral sets is
the framework that we will use in studying the AFW on polytopes. In \cite{wright1993identifiable} the results of
\cite{burke1988identification} were generalized to certain nonpolyhedral surfaces called ``$C^p$ identifiable'' contained
in the boundary of convex sets. A key insight in these early works was the openness of a generalized normal cone defined
for the identifiable surface containing a nondegenerate stationary point. This openness guarantees that, in a neighborhood of
the stationary point, the projection of the gradient identifies the related surface. It turns out that for linearly constrained
sets the generalized normal cone is related to positive Lagrangian multipliers on the stationary point.\\
A generalization of \cite{burke1988identification} to nonconvex sets was proved in \cite{burke1990identification}, while an extension
to nonsmooth objectives was first proved in \cite{hare2004identifying}. Active set identification results have also been proved for
a variety of projected gradient, proximal gradient and stochastic gradient related methods (see for instance \cite{sun2019we} and
references therein). \\
Recently, explicit active set complexity bounds have been given for some of the methods listed above.
Bounds for proximal gradient and block coordinate descent method were analyzed in \cite{nutini2019active} and \cite{nutini2017let}
under strong convexity assumptions on the objective.
A more systematic analysis covering many gradient related proximal methods
(like, e.g., accelerated gradient, quasi Newton and stochastic gradient proximal methods)
was carried out in \cite{sun2019we}. \\
As for FW-like methods, in addition to the results in \cite{guelat1986some} and \cite{FOIMP} discussed earlier, identification
results have been proved in \cite{clarkson2010coresets} for fully corrective variants on the probability simplex. However, since fully
corrective variants require to compute the minimum of the objective on a given face at each iteration, they are not suited
for nonconvex problems.
\section{Preliminaries}\label{prel}
In the rest of this article $f: \Delta_{n - 1} \rightarrow \mathbb R$ will be a function with gradient having Lipschitz constant
$L$ and $\mathcal{X}^*$ will be the set of stationary points of $f$. The constant $L$ will also be used as Lipschitz constant for
$\nabla f$ with respect to the norm $\n{\cdot}_1$. This does not require any additional hypothesis on $f$ since $\n{\cdot}_1 \geq \n{\cdot}$, so that
\begin{equation*}
\n{\nabla f(x) - \nabla f(y)} \leq L \n{x-y} \leq L \n{x- y }_1
\end{equation*}
for every $x, y \in \Delta_{n - 1}$. \\
For $x \in \mathbb R^n$, $X \subset \mathbb R^n$ the function $\textnormal{dist}(x, X)$ will be the standard point
set distance and for $A \subset \mathbb R^n$ the function $\textnormal{dist}(A, X)$ will be the minimal distance between points in the sets:
\begin{equation*}
\textnormal{dist}(A, X) = \inf_{a \in A, x \in X} \|a-x\| \, .
\end{equation*}
We define $\textnormal{dist}_1$ in the same way but with respect to $\| \cdot \|_1$. We use the notation
\begin{equation*}
\mathrm{supp}(x) = \{i \in [1:n] \ | \ x_i \neq 0 \}
\end{equation*}
for the support of a point $x \in \mathbb{R}^n$. \\
Given a (convex and bounded) polytope
$P$ and a vector $c$ we define the face of $P$ \textit{exposed by} $c$ as
\begin{equation*}
\mathcal{F}(c) =\textnormal{argmax}\{c^\top x \ | \ x \in P \} \ .
\end{equation*}
It follows from the definition that the face of $P$ exposed by a linear function is always unique and nonempty. \\
We now introduce the multiplier functions, which were recently used in \cite{2017arXiv170307761C} to define an active
set strategy for minimization over the probability simplex.\\
For every $x \in \Delta_{n-1}$, $i \in [1:n]$ the multiplier function $\lambda_i: \Delta_{n - 1} \rightarrow \mathbb R$
is defined as $$ \lambda_i(x) = \nabla f(x)^\top(e_i - x), $$
or in vector form
\begin{equation*}
\lambda(x) = \nabla f(x) - \left(x^\top\nabla f(x)\right)e\ .
\end{equation*}
For every $x \in \mathcal{X}^*$ these functions coincide with the Lagrangian multipliers of the constraints $x_i \geq 0$. \\
For a sequence $\{a_k\}_{k \in \mathbb{N}_0}$ we will drop the subscript and write simply $\{a_k\}$ (unless of course the sequence is defined on some other index set). \\
FW variants require a linear minimization oracle for the feasible set (the probability simplex in our case):
\begin{equation*}
\textnormal{LMO}_{\Delta_{n-1}}(r) \in \textnormal{argmin} \{ x^\top r \ | \ x \in \Delta_{n-1} \}.
\end{equation*}
Keeping in mind that
$$\Delta_{n-1}=\textnormal{conv}(\{e_i,\ i=1,\dots,n \}),$$
we can assume that $\textnormal{LMO}_{\Delta_{n-1}}(r)$ always returns a vertex of the probability simplex, that
is
$$\textnormal{LMO}_{\Delta_{n-1}}(r) = e_{\hat \imath}$$
with $\hat \imath \in \displaystyle\textnormal{argmin}_{i} r_i.$
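For concreteness, both the oracle and the multiplier function are immediate to compute numerically; the following is a minimal sketch (in Python/NumPy, with hypothetical names; it is an illustration, not part of the formal development).
\begin{verbatim}
import numpy as np

def lmo_simplex(r):
    """LMO on the probability simplex: the vertex e_i, as a one-hot
    vector, with i minimizing r_i."""
    e = np.zeros_like(r)
    e[np.argmin(r)] = 1.0
    return e

def multipliers(grad, x):
    """lambda(x) = grad f(x) - (x . grad f(x)) e; at stationary points
    these are the Lagrange multipliers of the constraints x_i >= 0."""
    return grad - np.dot(x, grad) * np.ones_like(grad)
\end{verbatim}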
Algorithm 1 is the classical FW method on the probability simplex. At each iteration, this first order method generates a descent direction
that points from the current iterate $x_k$ to a vertex $s_k$ minimizing the scalar product with the gradient, and then moves along
this search direction with a suitable stepsize if stationarity conditions are not satisfied.
\vspace{3mm}
\begin{center}
\begin{tabular}{|l|}
\hline
\textbf{Algorithm 1} Frank--Wolfe method on the probability simplex \\
\hline
1. \textbf{Initialize} $x_0 \in \Delta_{n-1}$, $k := 0$ \\
2. Set $s_k :=e_{\hat\imath},$ with $\hat\imath\in\displaystyle\textnormal{argmin}_{i} \nabla_i f(x_k)$ and $d_k^{\mathcal{FW}} := s_k - x_k$ \\
3. If $x_k$ is stationary, then STOP \\
4. Choose the step size $\alpha_k \in (0, 1]$ with a suitable criterion \\
5. Update: $x_{k+1} := x_k + \alpha_k d^{\mathcal{FW}}_k$ \\
6. Set $k := k+1$. Go to Step 2. \\
\hline
\end{tabular}
\end{center}
\vspace{3mm}
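As an illustration, one iteration of Algorithm 1 can be sketched as follows (Python/NumPy; the callable \texttt{grad\_f} and the stopping tolerance are assumptions of this sketch, and the stepsize rule is left to the caller as in Step 4).
\begin{verbatim}
import numpy as np

def fw_step(x, grad_f, alpha, tol=1e-12):
    """One Frank-Wolfe step on the probability simplex."""
    g = grad_f(x)
    d = -x.copy()
    d[np.argmin(g)] += 1.0        # d = e_ihat - x, ihat in argmin_i g_i
    if -g @ d <= tol:             # approximate stationarity check
        return x
    return x + alpha * d          # alpha in (0, 1]
\end{verbatim}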
It is well known \cite{canon1968tight,wolfe1970convergence} that
the method exhibits a zig zagging behaviour as the sequence of iterates $\{x_k\}$ approaches a solution on the
boundary of the feasible set. In particular, when this happens the sequence $\{x_k\}$ converges
slowly and, as we already mentioned, it does not identify the smallest face containing the solution in finite time.
Both of these issues are solved by the away-step variant of the FW method, reported in Algorithm 2.
The AFW at every iteration chooses between the classic FW direction and the away-step direction $d_k^{\mathcal{A}}$
calculated at Step 4. This away direction shifts weight away from the worst vertex to the other vertices
used to represent the iterate $x_k$.
Here the worst vertex (among those having
positive weight in the iterate representation) is the one with the greatest scalar product with the gradient, or, equivalently,
the one that maximizes the linear approximation of $f$ given by $\nabla f(x_k)$. The stepsize upper bound $\alpha_{k}^{\max}$ in Step 8 is the maximal
possible for the away direction given the boundary conditions. When the algorithm performs an away step, either the support
of the current iterate stays the same or it decreases by one (the component whose index is associated with the away direction is set to zero when $\alpha_k = \alpha_k^{\max}$).
On the other hand, when the algorithm performs a Frank--Wolfe step, the only vertex that can be added to the support of
the current iterate is the one returned by the $\textnormal{LMO}$. These two properties are fundamental for the active set identification of the AFW.
\vspace{3mm}
\begin{center}
\begin{tabular}{|l|}
\hline
\textbf{Algorithm 2} Away--step Frank--Wolfe on the probability simplex \\
\hline
1. \ \textbf{Initialize} $x_0 \in \Delta_{n-1}$, $k := 0$ \\
2. \ Set $s_k :=e_{\hat\imath},$ with $\hat\imath\in\displaystyle\textnormal{argmin}_{i} \nabla_i f(x_k)$ and $d_k^{\mathcal{FW}} := s_k - x_k$ \\
3. \ If $x_k$ is stationary then STOP \\
4. \ Let $v_k :=e_{\hat\jmath},$ with $\hat\jmath\in\displaystyle\textnormal{argmax}_{j\in S_k} \nabla_j f(x_k)$, $S_k:=\{j: (x_k)_j > 0 \}$ and $d_k^{\mathcal{A}} := x_k-v_k$ \\
5. \ If $-\nabla f(x_k)^\top d_k^{\mathcal{FW}} \geq -\nabla f(x_k)^\top d_k^{\mathcal{A}}$ then \\
6. \ \quad $d_k := d_k^{\mathcal{FW}}$, and $\alpha_{k}^{\max} :=1$ \\
7. \ else \\
8. \ \quad $d_k := d_k^{\mathcal{A}}$, and $\alpha_k^{\max} := (x_k)_{\hat\jmath}/(1-(x_k)_{\hat\jmath}) $ \\
9. \ End if \\
10. Choose the step size $\alpha_k \in (0, \alpha_k^{\max}]$ with a suitable criterion \\
11. Update: $x_{k+1} := x_k + \alpha_k d_k$ \\
12. $k := k+1$. Go to step 2. \\
\hline
\end{tabular}
\end{center}
\vspace{3mm}
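A minimal sketch of one iteration of Algorithm 2 follows (Python/NumPy; for illustration the stepsize is set to $\min\left(\alpha_k^{\max}, -\nabla f(x_k)^\top d_k/(L\n{d_k}^2)\right)$, which anticipates the lower bound $\bar{\alpha}_k$ used in our analysis; other admissible rules, such as exact linesearch, are of course possible).
\begin{verbatim}
import numpy as np

def afw_step(x, grad_f, L, tol=1e-12):
    """One away-step Frank-Wolfe iteration on the probability simplex."""
    g = grad_f(x)
    d_fw = -x.copy()
    d_fw[np.argmin(g)] += 1.0            # FW direction e_ihat - x
    S = np.where(x > 0)[0]               # support of the iterate
    j_hat = S[np.argmax(g[S])]           # worst active vertex
    d_aw = x.copy()
    d_aw[j_hat] -= 1.0                   # away direction x - e_jhat
    if -g @ d_fw >= -g @ d_aw:           # Step 5 of Algorithm 2
        d, a_max = d_fw, 1.0
    else:
        # x[j_hat] < 1 whenever the away branch is taken at a
        # nonstationary point, so the ratio below is well defined
        d, a_max = d_aw, x[j_hat] / (1.0 - x[j_hat])
    if -g @ d <= tol:                    # approximate stationarity
        return x
    alpha = min(a_max, (-g @ d) / (L * (d @ d)))
    return x + alpha * d
\end{verbatim}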
In our analysis, we will sometimes require a lower bound on the step size which is always satisfied by the exact linesearch and the Armijo rule for a proper choice of the parameters.
\section{Local active set variables identification property of the AFW}\label{asradius}
In this section we prove a rather technical proposition which is the key tool to give quantitative estimates for the active set complexity. It states that, when the sequence is close enough to a fixed stationary point, at every step the AFW identifies one variable violating the complementarity conditions with respect to the multiplier functions at this stationary point (if any such variable exists), and sets it to $0$ with an away step. The main difficulty is giving a tight estimate of how close the sequence must be to a stationary point for this identifying away step to take place. \\
A lower bound on the size of the nonmaximal away steps is needed in the following theorem; otherwise the steps could be arbitrarily small and there might be no convergence at all. \\
Let $\{x_k\}$ be the sequence of points generated by the AFW.
We further denote by $x^*$ a fixed point in $\mathcal{X}^*$, by $$I = \{i \in [1:n] \ | \ \lambda_i (x^*) = 0 \}$$ its {\em extended support}, and by $I^c = [1:n]\setminus I$ its complement.
Note that by complementary slackness, we have $x_j^*=0$ for all $j\in I^c$.
Before proving the main theorem we need to prove the following lemma to bound the Lipschitz constant of the multipliers on stationary points.
\begin{Lemma} \label{lipest}
Given $h>0$, $x_k \in \Delta_{n - 1}$ such that $\|x_k - x^*\|_1 \leq h$ let $$O_k = \{i \in I^c \ | \ (x_k)_i = 0\}$$ and assume that $O_k \neq I^c$. Let $\delta_{k} = \max_{i \in [1:n]\setminus O_k} \lambda_i(x^*)$. For every $i \in [1:n]$:
\begin{equation} \label{lip}
| \lambda_i (x^*) - \lambda_i(x_k) | \leq h(L + \frac{\delta_k}{2})\ .
\end{equation}
\end{Lemma}
\begin{proof}
By considering the definition of $\lambda(x)$, we can write
\begin{eqnarray} \label{split}
|{\lambda}_i(x_k) - {\lambda}_i(x^*)| &=& |\nabla f(x_k)_i - \nabla f (x^*)_i + \nabla f(x^*)^\top (x^*-x_k) + (\nabla f(x^*)-\nabla f(x_k))^\top x_k| \nonumber\\
&\leq& |\nabla f(x^*)_i - \nabla f (x_k)_i + (\nabla f(x_k)-\nabla f(x^*))^\top x_k| + | \nabla f(x^*)^\top(x^*-x_k)|\ .
\end{eqnarray}
By taking into account the fact that $x_k \in \Delta_{n-1}$ and gradient of $f$ is Lipschitz continuous, we have
\begin{eqnarray} \label{1piece}
|\nabla f(x_k)_i - \nabla f (x^*)_i + (\nabla f(x^*)-\nabla f(x_k))^\top x_k| &=& |(\nabla f(x^*)-\nabla f(x_k))^\top (x_k-e_i) | \nonumber\\
&\leq& \|\nabla f(x^*)-\nabla f(x_k)\|_1 \| x_k-e_i\|_{\infty}\\
&\leq& Lh,\nonumber
\end{eqnarray}
where the last inequality is justified by the H\"older inequality with exponents $1, \infty$. \\
We now bound the second term in the right-hand side of \eqref{split}. Let
\begin{equation*}
u_j = \max\{0, (x^*-x_k)_j\}, \ l_j = \max\{0, -(x^*-x_k)_j\}\, .
\end{equation*}
We have $\sum_{j \in [1:n]} x^*_j = \sum_{j \in [1:n]} (x_k)_j = 1 $ since $\{x^*, x_k\}\subset \Delta_{n-1}$, so that
\begin{equation*}
\sum_{j \in [1:n]} (x^*-x_k)_j = \sum_{j \in [1:n]} (u_j - l_j) = 0 \quad\mbox{and hence}\quad \sum_{j \in [1:n]} u_j = \sum_{j \in [1:n]} l_j.
\end{equation*}
Moreover, $ h'\myeq 2\sum_{j \in [1:n]} u_j = 2\sum_{j \in [1:n]} l_j = \sum_{j \in[1:n]} u_j + l_j = \sum_{j \in [1:n]} |x^*_j - (x_k)_j| \leq h$, hence
\begin{equation*}
h'/2 = \sum_{j \in [1:n]} u_j = \sum_{j \in [1:n]} l_j \leq h/2\ .
\end{equation*}
We can finally bound the second piece of \eqref{split}, using $u_j=l_j=0$ for all $j\in O_k$ (because $(x_k)_j =x_j^*=0$):
\begin{eqnarray} \label{2piece}
| \nabla f(x^*)^\top (x^*-x_k)|&=& | \nabla f(x^*)^\top k - \nabla f(x^*)^\top l |\leq \frac{h'}{2}(\nabla f(x^*)_M - \nabla f(x^*)_m) \nonumber\\
&\leq& \frac{h}{2}(\nabla f(x^*)_M - \nabla f(x^*)_m),
\end{eqnarray}
where $\nabla f(x^*)_M$ and $\nabla f(x^*)_m$ are respectively the maximum and the minimum component of $\nabla f(x^*)$ over the indices in $[1:n]\setminus O_k$.\\
Now, considering inequalities \eqref{split}, \eqref{1piece} and \eqref{2piece}, we can write
\begin{equation*}
|{\lambda}_i(x_k) - {\lambda}_i(x^*)|\leq Lh+ \frac{h}{2}(\nabla f(x^*)_M - \nabla f(x^*)_m).
\end{equation*}
By taking into account the definition of $\delta_k$ and the fact that $\lambda(x^*)_j\geq 0$ for all $j$, we can write
$$\delta_{k} = \max_{i,j \in [1:n]\setminus O_k} (\nabla f(x^*)_i-\nabla f(x^*)_j)\geq \nabla f(x^*)_M-\nabla f(x^*)_m.$$
We can finally write
$$
|{\lambda}_i(x_k) - {\lambda}_i(x^*)|\leq h(L+\frac{\delta_k}{2}),
$$
thus concluding the proof.
\end{proof}
We now show a few simple but important results that connect the multipliers and the directions selected by the AFW algorithm. Notice that for a fixed $x_k$ the multipliers $\lambda_i(x_k)$ are the values of the linear function $x \mapsto \nabla f(x_k)^\top x$ on the vertices of $\Delta_{n - 1}$ (up to a constant), which in turn are the values considered in the AFW to select the direction. This basic observation is essentially everything we need for the next results.
\begin{Lemma} \label{awstep}
Let $S_k = \{ i \in [1:n] \ | \ (x_k)_i > 0 \}$. Then
\begin{itemize}
\item[(a)] If $\max\{\lambda_i(x_k) \ | \ i \in S_k \} > \max\{- \lambda_i(x_k) \ | \ i \in [1:n] \}$, then the AFW performs an away step with $d_k = d_k^{\mathcal{A}} = x_k - e_{\hat\imath}$ for some
$\hat\imath \in \textnormal{argmax} \{\lambda_i(x_k) \ | \ i \in S_k \}$.
\item[(b)] For every $i \in [1:n] \setminus S_k$ if $\lambda_i(x_k) > 0$ then $(x_{k+1})_i =(x_k)_i = 0$.
\end{itemize}
\end{Lemma}
\begin{proof}
(a) Notice that, since the vertices of the probability simplex are linearly independent, for every $k$ the set of active atoms is necessarily $S_k$. In particular \\
$d_k^{\mathcal{A}} \in \textnormal{argmax} \{ -\nabla f(x_k)^\top d \ | \ d = x_k - e_i, i \in S_k \}$ and this implies
\begin{equation} \label{awdir}
d_k^{\mathcal{A}} = x_k - e_{\hat\imath} \quad \textnormal{for some } \hat\imath \in \textnormal{argmax}\{-\nabla f(x_k)^\top (x_k - e_i) \ | \ i \in S_k \} = \textnormal{argmax} \{\lambda_i(x_k) \ | \ i \in S_k \}\ .
\end{equation}
As a consequence of \eqref{awdir}
\begin{equation} \label{lambda1}
- \nabla f(x_k)^\top d_k^{\mathcal{A}} = \max \{- \nabla f(x_k)^\top d \ | \ d= x_k- e_i, i \in S_k \} = \max \{\lambda_i(x_k) \ | \ i \in S_k \}\ ,
\end{equation}
where the second equality follows from $\lambda_i(x_k) = -\nabla f(x_k)^\top d$ with $d = x_k-e_i$. \\
Analogously
\begin{equation} \label{lambda2}
\begin{aligned}
- \nabla f(x_k)^\top d_k^{\mathcal{FW}} & = \max \{ - \nabla f(x_k)^\top d \ | \ d = e_i - x_k, i \in [1:n] \} = \\ & = \max \{ - \lambda_i(x_k) \ | \ i \in [1:n] \}\ .
\end{aligned}
\end{equation}
We can now prove that $ -\nabla f(x_k)^\top d_k^{\mathcal{FW}} < - \nabla f(x_k)^\top d_k^{\mathcal{A}}$, so that the away direction is selected under assumption (a):
\begin{align*}
& -\nabla f(x_k)^\top d_k^{\mathcal{FW}} = \max \{ - \lambda_i(x_k) \ | \ i \in [1:n] \} < \\ & < \max \{\lambda_i(x_k) \ | \ i \in S_k \} = - \nabla f(x_k)^\top d_k^{\mathcal{A}},
\end{align*}
where we used \eqref{lambda2} and \eqref{lambda1} for the first and the second equality respectively, and the inequality is true by hypothesis. \\
(b) By considering the fact that $(x_k)_i=0$, we surely cannot choose the vertex $e_i$ to define the away-step direction. Furthermore, since $\lambda(x_k)_i=\nabla f(x_k)^\top (e_i-x_k)>0$,
direction $d=e_i-x_k$ cannot be chosen as the Frank-Wolfe direction at step $k$ as well. This guarantees that $(x_{k+1})_i=0$.
\end{proof}
We can now prove the main theorem. The strategy will be to split $[1:n]$ in three subsets $I$, $J_k \subset I^c$ and $O_k = I^c \setminus J_k$ and use Lemma $\ref{lipest}$ to control the variation of the multiplier functions on each of these three subsets.
In the proof we examine two possible cases under the assumption of being close enough to a stationary point. If $J_k = \emptyset$, which means that the current iteration of the AFW has identified the support of the stationary point, then we will show that the AFW chooses a direction contained in the support, so that also $J_{k+1} = \emptyset$.\\
If $J_k \neq \emptyset$, we will show that in the neighborhood claimed by the theorem the largest multiplier in absolute value is always positive, with index in $J_k$, and big enough, so that the corresponding away step is maximal. This means that the AFW at the iteration $k+1$ identifies a new active variable.
\begin{Th} \label{ascmain}
If $I^c$ is not the empty set,
let us define
$$\delta_{\min} = \min\{\lambda_i(x^*) \ | \ i \in I^c \} > 0, \ J_k = \{i \in I^c \ | \ (x_k)_i >0 \}\ .$$ Assume that for every $k$ such that $d_k = d_k^{\mathcal{A}}$ the step size $\alpha_k$ is either maximal with respect to the boundary condition (that is $\alpha_k = \alpha_k^{\max}$) or $\alpha_k \geq \frac{- \nabla f(x_k)^\top d_k}{L\|d_k\|^2} $. If $\| x_k - x^*\|_1 <\frac{\delta_{\min}}{\delta_{\min}+ 2L} = r_*$ then
\begin{equation} \label{actset}
|J_{k+1}| \leq \max\{0, |J_k| - 1\}\ .
\end{equation}
The latter relation also holds in the case $I^c=\emptyset$, in which we put $r_* = +\infty$.
\end{Th}
\begin{proof}
If $ I^c = \emptyset$, or equivalently, if $\lambda(x^*) = 0$, then there is nothing to prove since $J_k \subset I^c = \emptyset \Rightarrow |J_k|= |J_{k+1}|= 0$. \\
So assume $I^c \neq \emptyset$. By optimality conditions $\lambda_i(x^*) \geq 0$ for every $i$, so necessarily $\delta_{\min} > 0$. \\
For every $i \in [1:n]$, by Lemma \ref{lipest}
\begin{equation} \label{lambdaineq}
\begin{aligned}
\lambda_i(x_k)
&\geq \lambda_i(x^*) - \|x_k-x^*\|_1(L + \frac{\delta_k}{2}) >\\ &>\lambda_i(x^*) - r_*(L + \frac{\delta_k}{2}) =\lambda_i(x^*) - \frac{\delta_{\min}(L + \frac{\delta_k}{2})}{2L + \delta_{\min}}\ .
\end{aligned}
\end{equation}
We now distinguish two cases. \\
\textbf{Case 1:} $|J_k| = 0$. Then $\delta_k = 0$ because $J_k \cup I = I$ and $\lambda_i(x^*) = 0$ for every $i \in I$. Relation \eqref{lambdaineq} becomes
\begin{equation*}
\lambda_i(x_k) \geq\lambda_i(x^*) - \frac{\delta_{\min}L}{2L + \delta_{\min}},
\end{equation*}
so that for every $i \in I^c$, since $\lambda_i(x^*) \geq \delta_{\min}$, we have
\begin{equation} \label{lambdaik}
\lambda_i(x_k) \geq \delta_{\min} - \frac{\delta_{\min}L}{2L + \delta_{\min}} > 0\ .
\end{equation}
This means that for every $i \in I^c$ we have $(x_k)_i = 0$ by the Case 1 condition $J_k = \emptyset$ and $\lambda_i(x_k) > 0$ by \eqref{lambdaik}. We can then apply part (b) of Lemma \ref{awstep} and conclude $(x_{k+1})_i = 0$ for every $i \in I^c$. Hence $J_{k+1}= \emptyset = J_k $ and Theorem \ref{ascmain} is proved in this case. \\
\textbf{Case 2.} $|J_k| > 0$.
For every $i \in \textnormal{argmax}\{\lambda_j(x^*) \ | \ j \in J_k\}$, we have
$$\lambda_i(x^*) = \max_{j \in J_k} \lambda_j(x^*) = \max_{j \in J_k\cup I} \lambda_j(x^*), $$
where we used the fact that $\lambda_j(x^*) = 0 < \lambda_i(x^*)$ for every $j \in I$. Then by the definition of $\delta_k$, it follows
$$ \lambda_i(x^*) = \delta_k. $$
Thus \eqref{lambdaineq} implies
\begin{equation} \label{first}
\begin{aligned}
\lambda_i(x_k) > \lambda_i(x^*) - \frac{\delta_{\min}(L + \frac{\delta_k}{2})}{2L + \delta_{\min}} = \delta_k - \frac{\delta_{\min}(L + \frac{\delta_k}{2})}{2L + \delta_{\min}},
\end{aligned}
\end{equation}
where we used \eqref{lambdaineq} in the inequality.
But since $\delta_k \geq \delta_{\min}$ and the function $y \mapsto - \frac{y}{2L + y}$ is decreasing in $\mathbb R_{> 0}$ we have
\begin{equation} \label{first2}
\delta_k - \frac{\delta_{\min}(L + \frac{\delta_k}{2})}{2L + \delta_{\min}} \geq \delta_k - \frac{\delta_{k}(L + \frac{\delta_k}{2})}{2L + \delta_{k}} = \frac{\delta_k}{2}\ .
\end{equation}
Concatenating $\eqref{first}$ with $\eqref{first2}$, we finally obtain
\begin{equation}\label{newlab}
\lambda_i(x_k) > \frac{\delta_k}{2}\ .
\end{equation}
We will now show that $d_k = x_k - e_{\hat\jmath}$ with $\hat\jmath \in J_k$. \\
For every $j \in I$, since $\lambda_j(x^*) = 0$, again by Lemma \ref{lipest}, we have
\begin{equation} \label{Ibound}
\begin{aligned}
|\lambda_j(x_k)| &= |\lambda_j(x_k) - \lambda_j(x^*)| \leq \|x_k-x^*\|_1(L + \delta_k/2) < \\
& < r_*(L + \delta_k / 2) = \frac{\delta_{\min}(L + \frac{\delta_k}{2})}{2L + \delta_{\min}} \leq \delta_k/2,
\end{aligned}
\end{equation}
where we used $\n{x_k - x^*}_1 < r_* $, which is true by definition, in the first inequality, and rearranged \eqref{first2} to get the last inequality.
For every $j \in I^c$, by \eqref{lambdaineq}, we can write
\begin{equation*}
\lambda_j(x_k) > \delta_{\min} - \frac{\delta_{\min}(L + \frac{\delta_k}{2})}{2L + \delta_{\min}} > - \frac{\delta_k}{2}\ .
\end{equation*}
Then using this together with \eqref{Ibound} and \eqref{first}, we get $- \lambda_j (x_k) < \delta_k/2 < \lambda_h(x_k)$ for every $j \in [1:n], h \in \textnormal{argmax}\{\lambda_q(x^*) \ | \ q \in J_k \}$. So the hypothesis of Lemma \ref{awstep} is satisfied and $d_k = d_k^{\mathcal{A}} = x_k - e_{\hat\jmath}$ with $\hat\jmath \in \textnormal{argmax}\{\lambda_j(x_k) \ | \ j \in S_k \}$.
We need to show $\hat\jmath \in J_k$. But $S_k\subseteq I \cup J_k$ and by \eqref{Ibound} if $\hat\jmath \in I$ then $\lambda_{\hat\jmath}(x_k) < \delta_k/2 < \lambda_j(x_k)$ for every $j \in \textnormal{argmax}\{\lambda_q(x^*) \ | \ q \in J_k\}$. If $\hat\jmath \in O_k$ then $(x_k)_{\hat\jmath} = 0$ and $\hat\jmath \notin S_k$.
Hence we can conclude $\textnormal{argmax}\{\lambda_j(x_k) \ | \ j \in S_k \} \subseteq J_k$ and $d_k = x_k - e_{\hat\jmath}$ with $\hat\jmath \in J_k$. In particular, by \eqref{newlab} we get
\begin{equation} \label{lamb>del}
\max\{\lambda_j(x_k) \ | \ j \in J_k \}=\lambda_{\hat\jmath}(x_k) > \frac{\delta_k}{2}\, .
\end{equation}
We now want to show that $\alpha_k = \alpha_k^{\max}$. Assume by contradiction $\alpha_k < \alpha_k^{\max}$. Then by the lower bound on the stepsize and~\eqref{newlab}
\begin{equation}\label{alphai}
\begin{aligned}
\alpha_k \geq \frac{-\nabla f(x_k)^\top d_k}{L\|d_k\|^2} = \frac{\lambda_{\hat\jmath}(x_k)}{L\|d_k\|^2} \geq \frac{\delta_{\min}}{2L\|d_k\|^2}\, ,
\end{aligned}
\end{equation}
where in the last inequality we used \eqref{lamb>del} together with $\delta_k \geq \delta_{\min}$. Also, by Lemma \ref{simpelem}
\begin{equation} \label{dk<}
\begin{aligned}
\|d_k \| & = \| e_{\hat\jmath} - x_k\| \leq \sqrt{2} (e_{\hat\jmath} - x_k)_{\hat\jmath} = -\sqrt{2}(d_k)_{\hat\jmath} \Rightarrow \frac{(d_k)_{\hat\jmath}}{\|d_k\|^2} \leq\frac{(d_k)_{\hat\jmath}}{\|d_k\|\sqrt{2}}\leq - 1/2\\
(x_k)_{\hat\jmath} &= (x_k - x^*)_{\hat\jmath} \leq \frac{\|x_k - x^*\|_1}{2} < \frac{r_*}{2} = \frac{\delta_{\min}}{4L + 2\delta_{\min}}.
\end{aligned}
\end{equation}
Finally, combining \eqref{dk<} with \eqref{alphai}
\begin{align*}
(x_{k+1})_{\hat\jmath} &= (x_k)_{\hat\jmath} + (d_k)_{\hat\jmath} \alpha_k < \frac{r_*}{2} - \frac{\|d_k\|^2}{2} \alpha_k\leq \frac{r_*}{2} - \frac{\|d_k\|^2}{2} \frac{\delta_{\min}}{2L\|d_k\|^2}\\
&= \frac{\delta_{\min}}{4L + 2\delta_{\min}} - \frac{\delta_{\min}}{4L} < 0,
\end{align*}
where we used \eqref{alphai} to bound $\alpha_k$ in the first inequality, \eqref{dk<} to bound $(x_k)_{\hat\jmath}$ and $\frac{(d_k)_{\hat\jmath}}{\|d_k\|^2}$. Hence $(x_{k+1})_{\hat\jmath}< 0$, contradiction.
\end{proof}
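To make the radius concrete, note that $r_*$ only depends on the ratio between $\delta_{\min}$ and $L$; a quick numerical check (Python; the values below are hypothetical) reads as follows.
\begin{verbatim}
def active_set_radius(delta_min, L):
    """r_* = delta_min / (delta_min + 2L), as in the theorem above."""
    return delta_min / (delta_min + 2.0 * L)

# E.g. delta_min = 1, L = 2: the identifying behaviour is guaranteed
# inside the l1-ball of radius 0.2 around the stationary point.
print(active_set_radius(1.0, 2.0))   # 0.2
\end{verbatim}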
\section{Active set complexity bounds}\label{asbds}
Before giving the active set complexity bounds in several settings it is important to clarify that by the active set associated with a stationary point $x^*$ we do not mean the set $\mathrm{supp}(x^*)^c = \{i \in [1:n] \ | \ (x^*)_i = 0\}$ but the set $I^c(x^*) = \{i \in [1:n] \ | \ \lambda_i(x^*) > 0 \}$. In general $I^c(x^*) \subset \mathrm{supp}(x^*)^c$ by complementarity conditions, with
\begin{equation} \label{eq:supp}
\mathrm{supp}(x^*)^c = I^c(x^*) \Leftrightarrow \textnormal{complementarity conditions are strict in $x^*$}.
\end{equation}
The face $\mathcal{F}$ of $\Delta_{n - 1}$ defined by the constraints with indices in $I^c(x^*)$ still has a nice geometrical interpretation: it is the face of $\Delta_{n-1}$ exposed by $-\nabla f(x^*)$. \\
It is at this point natural to require that the sequence $\{x_k\}$ converges to a subset $A$ of $\mathcal{X}^*$ for which $I^c$ is constant. This motivates the following definition:
\begin{Def} \label{support}
A compact subset $A$ of $\mathcal{X}^*$ is said to have the {\em support identification property (SIP)} if there exists an index set $I^c_A \subset [1:n]$ such that
$$I^c (x)=I^c_A \quad \mbox{for all }x\in A\, .$$
\end{Def}
The geometrical interpretation of the above definition is the following: for every point in the subset $A$ the negative gradient $-\nabla f(x^*)$ exposes the same face. This is trivially true if $A$ is a singleton, and it is also true if for instance $A$ is contained in the relative interior of a face of $\Delta_{n - 1}$ and strict complementarity conditions hold for every point in this face.
We further define $$ \delta_{\min}(A) =\min\{\lambda_i(x) \ | \ x \in A, \ i \in I^c_A\}\ . $$
Notice that by the compactness of $A$ we always have $\delta_{\min}(A) > 0$ if $A$ enjoys the SIP. We can finally give a rigorous definition of what it means to solve the active set problem:
\begin{Def}
Consider an algorithm generating a sequence $\{x_k\}$ converging to a subset $A$ of $\mathcal{X}^*$ enjoying the SIP. We will say that this algorithm solves the active set problem in $M$ steps if $(x_k)_i = 0$ for every $i \in I^c_A$ and every $k\geq M$.
\end{Def}
We can now apply Theorem \ref{ascmain} to show that once a sequence is definitely close enough to a set enjoying the SIP, the AFW identifies the active set in at most $|I^c|$ steps. We first need to define a quantity that we will use as a lower bound on the stepsizes:
\begin{equation} \label{alphabound}
\bar{\alpha}_k = \min\left(\alpha_k^{\max}, \frac{-\nabla f(x_k)^\top d_k }{{L\n{d_k}^2}} \right)\ .
\end{equation}
\begin{Th} \label{activecompl}
Let $\{x_k\}$ be a sequence generated by the AFW, with stepsize $\alpha_k \geq \bar{\alpha}_k$. Let $\mathcal{X}^*$ be the set of stationary points of a function $f: \Delta_{n - 1} \rightarrow \mathbb{R}$ with $\nabla f$
having Lipschitz constant $L$. Assume that
there exists a compact subset $A$ of $\mathcal{X}^*$ with the SIP such that $x_k \rightarrow A$.
Then there exists $M$ such that
$$(x_k)_i = 0\quad\mbox{ for every }k \geq M\mbox{ and all }i \in I^c_A\,.$$
\end{Th}
\begin{proof}
Let $J_k = \{i \in I^c_A \ | \ (x_k)_i> 0\}$ and choose $\bar{k}$ such that $\textnormal{dist}_1(x_k, A) < \frac{\delta_{\min}(A)}{2L + \delta_{\min}(A)} = r_*$ for every $k \geq \bar{k}$.
Then for every $k \geq \bar{k}$ there exists $y^* \in A$ with $\|x_k - y^*\|_1 < r_*$.
But since by hypothesis for every $y^* \in A$ the support of the multiplier function is $I^c_A$ with $\delta_{\min}(A)\leq \lambda_i(y^*)$ for every $i \in I^c_A$, we can apply Theorem \ref{ascmain} with $y^*$ as fixed point and obtain that
$|J_{k+1}| \leq \max (0, |J_k| - 1)$. This means that it takes at most $|J_{\bar{k}}| \leq |I^c_A|$ steps for all the variables with indices in $I^c_A$ to be 0. Again by \eqref{actset}, we conclude by induction $|J_k| = 0$ for every $k\geq M=\bar{k}+|I^c_A|$, since $|J_{\bar{k} + |I^c_A|}|= 0$.
\end{proof}
The proof above also gives a relatively simple upper bound for the complexity of the active set problem:
\begin{Prop} \label{activecomp}
Under the assumptions of Theorem \ref{activecompl}, the active set complexity is at most
$$\min\{\bar{k} \in \mathbb{N}_0 \ | \ \textnormal{dist}_1(x_k, A) < r_* \ \forall k \geq \bar{k} \} + |I^c_A|, $$
where $r_* = \frac{\delta_{\min}(A)}{2L + \delta_{\min}(A)}$.
\end{Prop}
We now report an explicit bound for the strongly convex case, and analyze in depth the nonconvex case in Section \ref{S:nonconv}.
From strong convexity of $f$, it is easy to see that the following inequality holds for every $x \in \Delta_{n-1}$:
\begin{equation}\label{he}
f(x) \geq f(x^*)+ \frac{u_1}{2} \|x - x^* \|_1^2,
\end{equation}
where $x^*$ is the minimizer of $f$ on $\Delta_{n-1}$ and $u_1>0$ is a suitable constant.
\begin{Cor} \label{ssimplex}
Let $\{x_k\}$ be the sequence of points generated by AFW with $\alpha_k \geq \bar{\alpha}_k$.
Assume that $f$ is strongly convex and let
\begin{equation}\label{ratescc}
h_{k} \leq q^k h_0,
\end{equation}
with $q < 1$ and $h_k = f(x_k) - f_*$, be the convergence rate of the AFW.
Then the active set complexity is
$$\max\left (0, \left\lceil\frac{\ln(h_0) - \ln(u_1 r_*^2/2)}{\ln(1/q)}\right\rceil\right) +|I^c|\ . $$
\end{Cor}
\begin{proof}
Notice that by the linear convergence rate \eqref{ratescc}, and the fact that $q<1$, the number of steps needed
to reach the condition
\begin{equation} \label{hkleq}
h_k \leq \frac{u_1}{2}r_*^2
\end{equation}
is at most
$$ \bar{k} = \max \left(0, \left\lceil\frac{\ln(h_0) - \ln(u_1r_*^2/2)}{\ln(1/q)}\right\rceil\right)\ . $$
We claim that if condition \eqref{hkleq} holds then it takes at most $|I^c|$ steps for the sequence to be definitely in the active set. \\
Indeed if $q^k h_0 \leq \frac{u_1}{2}r_*^2$ then necessarily $x_k \in B_1(x^*, r_*)$ by \eqref{he}, and by monotonicity of the bound~\eqref{ratescc} we then have $x_{k+h} \in B_1(x^*, r_*)$ for every $h \geq 0$. Once the sequence is definitely in $B_1(x^*, r_*)$, by \eqref{actset} it takes at most $|J_{\bar{k}}| \leq |I^c|$ steps for all the variables with indices in $I^c$ to reach 0. Finally, since $|J_{\bar{k} + |I^c|}|= 0$, again by \eqref{actset} and induction we get $|J_m| = 0$ for every $m\geq \bar{k}+|I^c|$.
\end{proof}
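For illustration, the bound of Corollary \ref{ssimplex} is straightforward to evaluate once the constants are known; a sketch (Python; all constants below are hypothetical) follows.
\begin{verbatim}
import math

def active_set_complexity_bound(h0, q, u1, delta_min, L, n_active):
    """k_bar + |I^c|: iterations to enter B_1(x*, r_*) plus the
    identifying away steps, as in the corollary above."""
    r_star = delta_min / (delta_min + 2.0 * L)
    k_bar = max(0, math.ceil((math.log(h0) - math.log(u1 * r_star**2 / 2))
                             / math.log(1.0 / q)))
    return k_bar + n_active

# E.g. h0 = 10, q = 0.9, u1 = 1, delta_min = 1, L = 2, |I^c| = 3
print(active_set_complexity_bound(10.0, 0.9, 1.0, 1.0, 2.0, 3))  # 62
\end{verbatim}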
\begin{Rem}
We would like to notice that the strong convexity of $f$ in Corollary \ref{ssimplex} might actually be replaced by
the condition given in \eqref{he}, provided the linear rate \eqref{ratescc} still holds (which may not be the case for nonconvex objectives).
\end{Rem}
The proof of AFW active set complexity for generic polytopes in the strongly convex case requires additional theoretical results and is presented in the appendix.
\par\medskip\noindent
\section{Active set complexity for nonconvex objectives} \label{S:nonconv}
In this section, we focus on problems with nonconvex objectives. We first give a more explicit convergence rate for AFW in the nonconvex case, then we prove a general active set identification result for the method. Finally, we analyze both local and global active set complexity bounds related to AFW. A fundamental element in our analysis will be the FW gap function $g: \Delta_{n - 1} \rightarrow \mathbb R $ defined as
\begin{equation*}
g(x) = \max_{i \in [1:n]} \{-\lambda_i(x)\}\ .
\end{equation*}
We clearly have $g(x) \geq 0$ for every $x \in \Delta_{n - 1}$, with equality iff $x$ is a stationary point. The reason why this function is called FW gap is evident from the relation
\begin{equation*}
g(x_k) = -\nabla f(x_k)^\top d^{\mathcal{FW}}_k.
\end{equation*}
This is a standard quantity appearing in the analysis of FW variants (see, e.g., \cite{jaggi2013revisiting}) and is computed for free at each iteration of a FW-like algorithm.
In \cite{lacoste2016convergence}, the author uses the gap to analyze the convergence rate of the classic FW algorithm in the
nonconvex case. More specifically, a convergence rate of $O(\frac{1}{\sqrt{k}})$ is proved for the minimal FW gap up to iteration $k$:
\begin{equation*}
g^*_k = \min_{0 \leq i \leq k-1} g(x_i).
\end{equation*}
These results extend in a straightforward way those reported in \cite{nesterov2018lectures} for the convergence of gradient methods in the nonconvex case.
Inspired by the analysis of the AFW method for strongly convex objectives reported in \cite{pena2018polytope},
we now study the AFW convergence rate in the nonconvex case with respect to the sequence $\{g^*_k \}$.
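Since the gap only involves the multiplier functions, it is available essentially for free during the run; a short sketch (Python/NumPy, hypothetical names) of $g$ and of the running minimum $g^*_k$ follows.
\begin{verbatim}
import numpy as np

def gap(grad, x):
    """g(x) = max_i -lambda_i(x) = -<grad f(x), d_FW>; it vanishes
    exactly at stationary points."""
    lam = grad - np.dot(x, grad) * np.ones_like(grad)
    return np.max(-lam)

# Inside the AFW loop: g_star = min(g_star, gap(grad_f(x_k), x_k))
\end{verbatim}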
In the rest of this section we assume that the AFW starts from a vertex of the probability simplex. This is not a restrictive assumption. By exploiting affine invariance one can indeed apply the same theorems to the AFW starting from $e_{n+1}$ for $\tilde{f}: \Delta_n \rightarrow \mathbb R$ satisfying
\begin{equation*}
\tilde{f}(y) = f(y_1 e_1+\dots+y_n e_n+y_{n+1}p),
\end{equation*}
where $p \in \Delta_{n-1}$ is the desired starting point.
We will discuss more in detail the invariance of the AFW under affine transformations in Section \ref{generalafw}.
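The lifting above can be written down directly; a minimal sketch (Python/NumPy; the names are hypothetical) of $\tilde{f}$ on $\Delta_n$, given $f$ on $\Delta_{n-1}$ and the desired starting point $p$, follows.
\begin{verbatim}
import numpy as np

def lift(f, p):
    """Given f on Delta_{n-1} and p in Delta_{n-1}, return
    f~(y) = f(y_1 e_1 + ... + y_n e_n + y_{n+1} p) on Delta_n."""
    def f_tilde(y):
        return f(y[:-1] + y[-1] * np.asarray(p))
    return f_tilde

# Starting the AFW at the vertex e_{n+1} for f~ corresponds to
# starting it at p for f.
\end{verbatim}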
\subsection{Global convergence}
We start by investigating the minimal FW gap, giving estimates of its rate of convergence:
\begin{Th} \label{nonconvb}
Let $f^* = \min_{x \in \Delta_{n - 1}} f(x)$, and let $\{x_k\}$ be a sequence generated by the AFW algorithm applied to $f$ on $\Delta_{n-1}$, with $x_0$ a vertex of $\Delta_{n-1}$. Assume that the stepsize $\alpha_k$ is greater than or equal to $\bar{\alpha}_k$ (as defined in \eqref{alphabound}), and that
\begin{equation} \label{eq:rho}
f(x_k) - f(x_k + \alpha_kd_k) \geq \rho\bar{\alpha}_k \left(-\nabla f(x_k)^\top d_k\right)
\end{equation}
for some fixed $\rho > 0$.
Then for every $T \in \mathbb{N}$
$$ g_T^* \leq \max\left(\sqrt{\frac{4L (f(x_0) - f^*)}{\rho T}}, \frac{4(f(x_0) - f^*)}{T} \right)\ .$$
\end{Th}
\begin{proof}
Let $r_k = -\nabla f(x_k)$ and $g_k = g(x_k)$. We distinguish three cases. \\
\textbf{Case 1.} $\bar{\alpha}_k < \alpha^{\max}_k$.
Then $\bar{\alpha}_k = \frac{-\nabla f(x_k)^\top d_k}{{L\n{d_k}^2}} $ and relation \eqref{eq:rho} becomes
\begin{equation*}
f(x_k) - f(x_k + \alpha_kd_k) \geq \rho\bar{\alpha}_k r_k^\top d_k = \frac{\rho}{L{\n{d_k}}^2} (r_k^\top d_k)^2
\end{equation*}
and consequently
\begin{equation} \label{c1}
f(x_k) - f(x_{k+1}) \geq \frac{\rho}{ L \n{d_k}^2} (r_k^\top d_k)^2 \geq \frac{\rho}{ L \n{d_k}^2}g_k^2 \geq \frac{\rho g_k^2}{2L},
\end{equation}
where we used $r_k^\top d_k \geq g_k$ in the second inequality and $\n{d_k} \leq \sqrt{2}$ in the third one. \\ As for $S_k$, by hypothesis we have either $d_k = d_k^{\mathcal{FW}}$ so that $d_k = e_i - x_k$ or $d_k = d_k^{\mathcal{A}} = x_k - e_i$ for some $i \in \s{n}$. In particular $S_{k+1}\subseteq S_k \cup \{i\}$ so that $|S_{k+1}| \leq |S_k| + 1$. \\
\textbf{Case 2:} $\alpha_k = \bar{\alpha}_k = \alpha^{\max}_k = 1, d_k = d_k^{\mathcal{FW}}$.
By the standard descent lemma~\cite[Proposition 6.1.2]{bertsekas2015convex} applied to $f$ with center $x_k$ and $\alpha = 1$
$$ f(x_{k +1}) = f(x_k + d_k) \leq f(x_k) + \nabla f(x_k)^\top d_k + \frac{L}{2}\|d_k\|^2\ . $$
Since by the Case 2 condition $\min \left(\frac{-\nabla f(x_k)^\top d_k }{\|d_k\|^2L}, 1\right) = \alpha_k = 1 $ we have
\begin{equation*}
\frac{-\nabla f(x_k)^\top d_k}{\|d_k\|^2L} \geq 1 \, \mbox{, so }\quad -L\n{d_k}^2 \geq \nabla f(x_k)^\top d_k\, ,
\end{equation*}
hence we can write
\begin{equation} \label{c2}
f(x_k) - f(x_{k+1}) \geq -\nabla f(x_k)^\top d_k - \frac{L}{2}\|d_k\|^2 \geq - \frac{\nabla f(x_k)^\top d_k}{2} \geq \frac{1}{2}g_k\ .
\end{equation}
Reasoning as in Case 1 we also have $|S_{k +1}| \leq |S_k| + 1$. \\
\textbf{Case 3:} $\alpha_k = \bar{\alpha}_k = \alpha^{\max}_k, \ d_k = d_k^{\mathcal{A}}$. Then $d_k = x_k - e_i $ for $i \in S_k$ and $$(x_{k + 1})_j= (1+\alpha_k)(x_k)_j - \alpha_k (e_i)_j,$$
with $\alpha_k = \alpha^{\max}_k = \frac{(x_k)_i}{1 - (x_k)_i}$. Therefore $(x_{k+1})_j = 0$ for $j \in (\s{n} \setminus S_k) \cup \{i\} $ and $(x_{k+1})_j \neq 0$ for $j \in S_k \setminus \{i\}$. In particular $|S_{k+1}| = |S_k| - 1$.
\vspace{2mm}
For $i = 1,2,3$ let now $n_i(T)$ be the number of Case $i$ steps performed in the first $T$ iterations of the AFW. By induction, using the recurrence relations proved above for $|S_k|$, we have
\begin{equation}\label{n1n2}
|S_{T}| - |S_0| \leq n_1(T) + n_2(T) - n_3(T)\ ,
\end{equation}
for every $T \in \mathbb{N}$. \\
Since $n_3(T) = T-n_1(T) - n_2(T)$ from \eqref{n1n2} we get
\begin{equation*}
n_1(T) + n_2(T) \geq \frac{T + |S_{T}| -|S_0|}{2} \geq \frac{T}{2}\ ,
\end{equation*}
where we used $|S_0| = 1 \leq |S_{T}|$.
Let now $C_i^{T}$ be the set of iteration counters up to $T-1$ corresponding to Case $i$ steps for $i \in \{1,2,3\}$, which satisfies $|C_i^{T}| = n_i(T)$.
We have by summing \eqref{c1} and \eqref{c2} for the indices in $C_1^{T}$ and $C_2^{T}$ respectively
\begin{equation} \label{telescopic}
\sum_{k \in C_1^{T}} f(x_k) - f(x_{k+1}) + \sum_{k \in C_2^{T}} f(x_k) - f(x_{k+1}) \geq
\sum_{k \in C_1^{T} }\frac{\rho g_k^2}{2L} + \sum_{k \in C_2^{T} }\frac{1}{2}g_k\ .
\end{equation}
We now lower bound the right-hand side of \eqref{telescopic} in terms of $g^*_{T}$ as follows:
\begin{equation*}
\begin{aligned}
& \sum_{k \in C_1^{T} }\frac{\rho g_k^2}{2L} + \sum_{k \in C_2^{T} }\frac{1}{2}g_k \geq |C_1^{T}| \min_{k \in C_1^{T}} \frac{\rho g_k^2}{2L} + |C_2^{T}| \min_{k \in C_2^{T}} \frac{g_k}{2} \geq \\
\geq & (|C_1^{T}| + |C_2^{T}|) \min \left( \frac{\rho (g^*_T)^2}{2L}, \frac{g^*_T}{2} \right) = \left [n_1(T) + n_2(T) \right] \min \left(\rho \frac{(g^*_T)^2}{2L}, \frac{g^*_T}{2} \right) \geq \\
\geq & \frac{T}{2} \min \left( \frac{\rho (g^*_T)^2}{2L}, \frac{g^*_T}{2} \right)\ .
\end{aligned}
\end{equation*}
Since the left-hand side of $\eqref{telescopic}$ can clearly be upper bounded by $f(x_0) - f^*$ we have
\begin{equation*}
f(x_0) - f^* \geq \frac{T}{2} \min \left(\frac{\rho(g^*_{T})^2}{2L}, \frac{g^*_{T}}{2} \right)\ .
\end{equation*}
To finish, if $\frac{T}{2} \min \left( \frac{g^*_{T}}{2}, \frac{\rho(g^*_{T})^2}{2L} \right) =\frac{Tg^*_{T}}{4} $ we then have
\begin{equation} \label{g*}
g^*_{T} \leq \frac{4(f(x_0) - f^*)}{T}
\end{equation}
and otherwise
\begin{equation} \label{g*sqrt}
g^*_{T} \leq \sqrt{\frac{4L(f(x_0) - f^*)}{\rho T}}\ .
\end{equation}
The claim follows by taking the max in the system formed by \eqref{g*} and \eqref{g*sqrt}.
\end{proof}
When the stepsizes coincide with the lower bounds $\bar{\alpha}_k$ or are obtained using exact linesearch, we have the following corollary:
\begin{Cor} \label{cor:nonconvb}
Under the assumptions of Theorem \ref{nonconvb}, if $\alpha_k = \bar{\alpha}_k$ or if $\alpha_k$ is selected by exact linesearch then for every $T \in \mathbb{N}$
\begin{equation} \label{eq:g*rate}
g_T^* \leq \max\left(\sqrt{\frac{8L (f(x_0) - f^*)}{T}}, \frac{4(f(x_0) - f^*)}{T} \right)\ .
\end{equation}
\end{Cor}
\begin{proof}
By points 2 and 3 of Lemma \ref{alphacond}, relation \eqref{eq:rho} is satisfied with $\rho= \frac{1}{2}$ for both $\alpha_k= \bar{\alpha}_k$ and $\alpha_k$ given by exact linesearch, and we also have $\alpha_k \geq \bar{\alpha}_k$ in both cases. The conclusion follows directly from Theorem \ref{nonconvb}.
\end{proof}
\subsection{A general active set identification result}
We can now give a general active set identification result in the nonconvex setting.
When the stepsizes are given by \eqref{alphabound} we will not need strict complementarity; for more general stepsizes, however, this assumption will be required. Notice that if $A\subseteq \mathcal{X}^*$ enjoys the SIP and if strict complementarity is satisfied for every $x \in A$, then as a direct consequence of \eqref{eq:supp} we have
\begin{equation} \label{eq:strictcompl}
\mathrm{supp}(x) = [1:n] \setminus I^c(x) = [1:n]\setminus I^c_A
\end{equation}
for every $x \in A$. In this case we can then define $\mathrm{supp}(A)$ as the (common) support of the points in $A$.
\begin{Th} \label{nonconvid}
Let $\{x_k\}$ be the sequence generated by the AFW method with stepsizes satisfying $\alpha_k \geq \bar{\alpha}_k$ and \eqref{eq:rho}, where $\bar{\alpha}_k$ is given by \eqref{alphabound}.
Let $\mathcal{X}^*$ be the subset of stationary points of $f$. We have:
\begin{itemize}
\item[(a)] $x_k \rightarrow \mathcal{X}^*$.
\item[(b)] If $\alpha_k= \bar{\alpha}_k$ then $\{x_k\}$ converges to a connected component $A$ of $\mathcal{X}^*$. If additionally $A$ has the SIP then $\{x_k\}$ identifies $I^c_A$ in finite time.
\end{itemize}
Assume now that $\mathcal{X}^* = \bigcup_{i = 1}^{C}A_i$ with $\{A_i\}_{i=1}^C$ compact, with distinct supports and such that $A_i$ has the SIP for each $i\in [1\! : \! C]$.
\begin{itemize}
\item[(c)] If ${\alpha}_k \geq \bar\alpha_k$ and if strict complementarity holds for all points in $\mathcal{X}^*$ then $\{x_k\}$ converges to $A_l$ for some $l \in [1:C]$ and identifies $I^c_{A_{l}}$ in finite time.
\end{itemize}
\end{Th}
\begin{proof}
a) By the proof of Theorem \ref{nonconvb} and the continuity of the multiplier function we have
\begin{equation} \label{eq10:*}
x_{k(j)} \rightarrow g^{-1}(0) = \mathcal{X}^*\ ,
\end{equation}
where $\{k(j)\}$ is the sequence of indexes corresponding to Case 1 or Case 2 steps. Let $k'(j)$ be the sequence of indexes corresponding to Case 3 steps. Since for such steps $\alpha_{k'(j)} = \bar{\alpha}_{k'(j)}$ we can apply Corollary \ref{nonconv0} to obtain
\begin{equation} \label{eq10:'}
\n{x_{k'(j)} - x_{k'(j)+1}} \rightarrow 0\ .
\end{equation}
Combining \eqref{eq10:*}, \eqref{eq10:'} and the fact that there can be at most $n-1$ consecutive Case 3 steps, we get $x_k \rightarrow \mathcal{X}^*$. \\
b) By the boundedness of $f$ and point 2 of Lemma \ref{alphacond} if $\alpha_k= \bar{\alpha}_k$ then $\n{x_{k+1} - x_k} \rightarrow 0$. It is a basic topology fact that if $\{x_k\}$ is bounded and $\n{x_{k+1} - x_k} \rightarrow 0$ then the set of limit points of $\{x_k\}$ is connected. This together with point a) ensures that the set of limit points must be contained in a connected component $A$ of $\mathcal{X}^*$. By Theorem \ref{activecompl} it follows that if $A$ has constant support $\{x_k\}$ identifies $I^c_A$ in finite time. \\
c) Consider a disjoint family of subsets $\{U_i\}_{i=1}^C$ of $\Delta_{n-1}$ with $U_i = \{x \in \Delta_{n-1} \ | \ \textnormal{dist}_1(x, A_i) \leq r_i \}$ where $r_i$ is small enough to ensure some conditions that we now specify. First, we need
\begin{equation*}
r_i < \frac{\delta_{\min}(A_i)}{2L + \delta_{\min}(A_i)}
\end{equation*}
so that $r_i$ is smaller than the active set radius of every $x \in A_i$ and in particular for every $x \in U_i$ there exists $x^* \in A_i$ such that
\begin{equation} \label{eq:xinU}
\n{x-x^*}_1 < \frac{\delta_{\min}(x^*)}{2L + \delta_{\min}(x^*)}.
\end{equation}
Second, we choose $r_i$ small enough so that $\{U_i\}_{i=1}^C$ are disjoint and
\begin{equation} \label{suppincl}
\mathrm{supp}(y) \supseteq \mathrm{supp}(A_i) \ \forall y \in U_i\ ,
\end{equation}
where these conditions can be always satisfied thanks to the compactness of $A_i$. \\
Assume now by contradiction that the set $S$ of limit points of $\{x_k\}$ intersects more than one of the $\{A_i\}_{i=1}^C$. Let in particular $A_{l}$ minimize $|\mathrm{supp}(A_l)|$ among the sets containing points of $S$. By point a) $x_k \in \cup_{i=1}^C U_i$ for $k\geq M$ large enough and we can define an infinite sequence $\{t(j)\}$ of exit times greater than $M$ for $U_{l}$ so that $x_{t(j)} \in U_{l}$ and $x_{t(j) + 1} \in \cup_{i\in [1:C] \setminus l} U_i $. Up to considering a subsequence we can assume $x_{t(j) + 1} \in U_{m}$ for a fixed $m\neq l$ for every $j \in \mathbb{N}_0$. \\
We now distinguish two cases as in the proof of Theorem \ref{ascmain}, where notice that by equation \eqref{eq:xinU} the hypotheses of Theorem \ref{ascmain} are satisfied for $k=t(j)$ and some $x^* \in A_l$. \\
\textbf{Case 1.} $(x_{t(j)})_h = 0$ for every $h \in I^c_{A_l}$. In the notation of Theorem \ref{ascmain} this corresponds to the case $|J_{t(j)}| = 0$. Then by \eqref{lambdaik} we also have $\lambda_h(x_{t(j)}) > 0$ for every $h \in I^c_{A_l}$. Thus $(x_{t(j)+1})_h = (x_{t(j)})_h = 0$ for every $h \in I^c_{A_l}$ by Lemma \ref{awstep}, so that we can write
\begin{equation} \label{incleq}
\mathrm{supp}(A_m) \subseteq \mathrm{supp}(x_{t(j) + 1}) \subseteq [1:n] \setminus I^c_{A_l} = \mathrm{supp}(A_l),
\end{equation}
where the first inclusion is justified by \eqref{suppincl} for $i=m$ and the second by strict complementarity (see also \eqref{eq:strictcompl} and the related discussion).
But since by hypothesis $\mathrm{supp}(A_m) \neq \mathrm{supp}({A_l})$ the inclusion \eqref{incleq} is strict and so it is in contradiction with the minimality of $|\mathrm{supp}(A_l)|$. \\
\textbf{Case 2.} $|J_{t(j)}| > 0$. Then reasoning as in the proof of Theorem \ref{ascmain} we obtain $d_{t(j)} = x_{t(j)} - e_{\bar{h}}$ for some $\bar{h} \in J_{t(j)} \subset I^c_{A_l}$. Let $\tilde{x}^* \in A_{l}$, and let $\tilde{d} = \alpha_{t(j)} d_{t(j)}$. The sum of the components of $\tilde{d}$ is $0$ with the only negative component being $\tilde{d}_{\bar{h}}$ and therefore
\begin{equation} \label{eq:djh}
\tilde{d}_{\bar{h}} = - \sum_{h \in [1:n] \setminus \bar{h}} \tilde{d}_h = - \sum_{h \in [1:n] \setminus \bar{h}} |\tilde{d}_h|
\end{equation}
We claim that $\n{x_{t(j) + 1} - \tilde{x}^*}_1 \leq \n{x_{t(j)} - \tilde{x}^*}_1$. This is enough to finish because since $\tilde{x}^*\in A_{l}$ is arbitrary then it follows $\textnormal{dist}_1(x_{t(j)+1}, A_{l}) \leq \textnormal{dist}_1(x_{t(j)}, A_{ l})$ so that $x_{t(j) + 1} \in U_{l}$, a contradiction. \\
We have
\begin{equation*}
\begin{aligned}
& \n{\tilde{x}^* - x_{t(j) + 1}}_1 = \n{\tilde{x}^* - x_{t(j)} -\alpha_{t(j)}d_{t(j)} }_1 = \\
= &|\tilde{x}^*_{\bar{h}} - (x_{t(j)})_{\bar{h}} -\tilde{d}_{\bar{h}}| + \sum_{h \in [1:n] \setminus \bar{h}} |\tilde{x}^*_h - (x_{t(j)})_h -\tilde{d}_h| = \\
= & |\tilde{x}^*_{\bar{h}} - (x_{t(j)})_{\bar{h}}| + \tilde{d}_{\bar{h}} + \sum_{h\in [1:n] \setminus \bar{h}} |\tilde{x}^*_h - (x_{t(j)})_h -\tilde{d}_h| \leq \\
\leq & |\tilde{x}^*_{\bar{h}} - (x_{t(j)})_{\bar{h}}| + \tilde{d}_{\bar{h}} + \sum_{h \in [1:n] \setminus \bar{h}} (|\tilde{x}^*_h - (x_{t(j)})_h | + |\tilde{d}_h|) = \\
= & \n{x_{t(j)} - \tilde{x}^*}_1 + \tilde{d}_{\bar{h}} + \sum_{h \in [1:n] \setminus \bar{h}} |\tilde{d}_h| = \n{x_{t(j)} - \tilde{x}^*}_1
\end{aligned}
\end{equation*}
where in the third equality we used $0=\tilde{x}^*_{\bar{h}} \leq - \tilde{d}_{\bar{h}} \leq (x_{t(j)})_{\bar{h}}$ and in the last equality we used \eqref{eq:djh}. \\
Reasoning by contradiction we have proved that all the limit points of $\{x_k\}$ are in $A_{l}$ for some $l \in [1:C]$. The conclusion follows immediately from Theorem \ref{activecompl}.
\end{proof}
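To fix ideas, here is a minimal sketch (not the paper's pseudocode) of one AFW iteration on the probability simplex, assuming the standard choice between the FW direction $e_s - x_k$ and the away direction $x_k - e_a$, and assuming that the stepsize \eqref{alphabound} reads $\bar{\alpha}_k = \min(\alpha_k^{\max},\, p_k/(L\n{d_k}^2))$, consistently with Lemma \ref{alphacond}:
\begin{verbatim}
import numpy as np

def afw_step(x, grad, L):
    # One Away-step Frank-Wolfe iteration on the simplex (illustrative sketch).
    n = len(x)
    s = int(np.argmin(grad))                  # FW atom
    supp = np.nonzero(x > 0)[0]
    a = supp[int(np.argmax(grad[supp]))]      # away atom (argmax over the support)
    d_fw = np.eye(n)[s] - x                   # FW direction e_s - x
    d_aw = x - np.eye(n)[a]                   # away direction x - e_a
    if -grad @ d_fw >= -grad @ d_aw:
        d, alpha_max = d_fw, 1.0
    else:
        d, alpha_max = d_aw, (x[a] / (1.0 - x[a]) if x[a] < 1 else np.inf)
    p = -grad @ d                             # p_k = -grad f(x_k)^T d_k
    if p <= 0:                                # stationary: no descent direction
        return x
    alpha = min(alpha_max, p / (L * (d @ d))) # the stepsize alpha_bar
    return x + alpha * d
\end{verbatim}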
\subsection{Quantitative version of active set identification}
We now assume that the gap function $g(x)$ satisfies the H\"olderian error bound condition
\begin{equation} \label{hbg}
g(x) \geq \theta \textnormal{dist}_1(x, \mathcal{X}^*)^p
\end{equation}
for some $\theta, p > 0$ (see e.g. \cite{bolte2017error} for some examples). This is true, for instance, if the components of $\nabla f(x)$ are semialgebraic functions.
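As a simple illustration (under assumptions not required by the theorem below): if $f$ is $\mu$-strongly convex with minimizer $x^*$ and $g$ dominates the Frank--Wolfe dual gap $\max_{s \in \Delta_{n-1}} \nabla f(x)^\top (x - s)$ (as standard gap functions do), then by convexity and strong convexity
\begin{equation*}
g(x) \geq \nabla f(x)^\top (x - x^*) \geq f(x) - f(x^*) \geq \frac{\mu}{2}\n{x - x^*}^2 \geq \frac{\mu}{2n}\n{x - x^*}_1^2\ ,
\end{equation*}
so that \eqref{hbg} holds with $p = 2$ and $\theta = \mu/(2n)$. With this in place, we have the following active set complexity bound: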
\begin{Th} \label{nonconvcompl}
Assume $\mathcal{X}^* = \bigcup_{i \in [1:C]} A_i$ where $A_i$ is compact and with the SIP for every $i \in [1:C]$ and $0< d \myeq \min_{\{i,j\} \subset [1:C]} \textnormal{dist}_1(A_i, A_j)$. Let $r_*$ be the minimum active set radius of the sets $\{A_i\}_{i=1}^C$. Let $q(\varepsilon): \mathbb R_{>0} \rightarrow \mathbb{N}_0$ be such that
$f(x_k)-f(x_{k+1}) \leq \varepsilon$ for every $k \geq q(\varepsilon)$, and assume that $g(x)$ satisfies \eqref{hbg}. Assume that the stepsizes satisfy $\alpha_k = \bar{\alpha}_k$, with $\bar{\alpha}_k$ given by \eqref{alphabound}. Then the active set complexity is at most $q(\bar{\varepsilon}) + 2n$ for $\bar{\varepsilon}$ satisfying the following conditions
\begin{equation} \label{e1:epscond}
\begin{aligned}
\bar{\varepsilon} & < L\, , \quad \left(\frac{2\sqrt{L\bar{\varepsilon}}}{\theta}\right)^{\frac{1}{p}} < r_*\, \quad\mbox{and }\quad
2\left(\frac{2\sqrt{L\bar{\varepsilon}}}{\theta}\right)^{\frac{1}{p}} + 2n \sqrt{\frac{2\bar{\varepsilon}}{L}} & \leq d\ .
\end{aligned}
\end{equation}
\end{Th}
The proof is essentially a quantitative version of the argument used to prove point b) of Theorem \ref{nonconvid}.
\begin{proof}
Fix $k \geq q(\bar{\varepsilon})$, so that
\begin{equation} \label{e1:eps}
f(x_{k}) - f(x_{k+1}) \leq \bar{\varepsilon}\ .
\end{equation}
We will refer to Case $i$ steps for $i \in [1:3]$ following the definitions in Theorem \ref{nonconvb}. If the step $k$ is a Case 1 step, then by \eqref{c1} with $\rho= 1/2$ we have
\begin{equation*}
f(x_{k}) - f(x_{k+1}) \geq \frac{g(x_k)^2}{4L}
\end{equation*}
and this together with \eqref{e1:eps} implies
\begin{equation*}
2\sqrt{L\bar{\varepsilon}} \geq 2\sqrt{L(f(x_k)-f(x_{k+1}))} \geq g(x_k)\ .
\end{equation*}
Analogously, if the step $k$ is a Case 2 step, then by \eqref{c2} we have
\begin{equation*}
f(x_{k}) - f(x_{k+1}) \geq \frac{g(x_k)}{2}
\end{equation*}
so that $2\bar{\varepsilon} \geq g(x_k)$. By the leftmost condition in~\eqref{e1:epscond} we have $\bar{\varepsilon} < L$ so that $2\sqrt{L\bar{\varepsilon}} \geq 2\bar{\varepsilon}$, and therefore for both Case 1 and Case 2 steps we have
\begin{equation} \label{e1:gkbound}
g(x_k) \leq 2\sqrt{L\bar{\varepsilon}}\ .
\end{equation}
By inverting relation \eqref{eq:lim}, we also have
\begin{equation} \label{e1:nbound}
\n{x_k - x_{k+1}} \leq \sqrt{\frac{2(f(x_k) - f(x_{k+1}))}{L}} \leq \sqrt{\frac{2\bar{\varepsilon}}{L}}\ .
\end{equation}
Now let $\bar{k} \geq q(\bar{\varepsilon})$ be such that step $\bar{k}$ is a Case 1 or Case 2 step. By the error bound condition together with \eqref{e1:gkbound}
\begin{equation} \label{e1:d1bound}
\textnormal{dist}_1(x_{\bar{k}}, \mathcal{X}^*) \leq \left(\frac{g(x_{\bar{k}})}{\theta}\right)^{\frac{1}{p}} \leq \left( \frac{2\sqrt{L\bar{\varepsilon}}}{\theta}\right)^{\frac{1}{p}} < r_*\ ,
\end{equation}
where we used \eqref{e1:gkbound} in the second inequality and the second condition of \eqref{e1:epscond} in the third inequality.
In particular there exists $l$ such that $\textnormal{dist}_1(x_{\bar{k}}, A_{l}) \leq (2\sqrt{L\bar{\varepsilon}}/\theta)^{1/p}$. We now claim that $I^c_{A_{l}}$ is identified at the latest at step $\bar{k} + n$. \\
First, we claim that for every Case 1 or Case 2 step with index $\tau \geq \bar{k}$ we have $\textnormal{dist}_1(x_{\tau}, A_{l})\leq (g(x_{\tau})/\theta)^{1/p} $. We reason by induction on the sequence $\{s(k')\}$ of Case 1 or Case 2 steps following $\bar{k}$, so that in particular $s(1) = \bar{k}$ and $\textnormal{dist}_1(x_{s(1)}, A_{l})\leq (g(x_{s(1)})/\theta)^{1/p}$ holds by \eqref{e1:d1bound}. Since there can be at most $n-1$ consecutive Case 3 steps, we have $s(k'+ 1) - s(k') \leq n$ for every $k' \in \mathbb{N}_0$. Therefore
\begin{equation} \label{e1:0p}
\begin{aligned}
\n{x_{s(k')} - x_{s(k'+1)}}_1 \leq & \sum_{i=s(k')}^{s(k'+1)-1}\n{x_{i+1}-x_i}_1 \leq 2\sum_{i=s(k')}^{s(k'+1)-1}\n{x_{i+1}-x_i} \leq \\ \leq & 2[s(k'+1)-s(k')]\sqrt{\frac{2\bar{\varepsilon}}{L}} \leq 2n\sqrt{\frac{2\bar{\varepsilon}}{L}}\ ,
\end{aligned}
\end{equation}
where in the second inequality we used part 3 of Lemma \ref{simpelem} to bound each of the summands of the left-hand side, and in the third inequality we used \eqref{e1:nbound}. Assume now by contradiction that $\textnormal{dist}_1(x_{s(k'+1)}, A_{l}) > (g(x_{s(k' + 1)})/\theta)^{1/p} $. Then by \eqref{e1:d1bound} applied to $s(k'+1)$ instead of $\bar{k}$ there must necessarily exist $j\neq l$ such that $\textnormal{dist}_1(x_{s(k'+1)}, A_{j}) \leq (g(x_{s(k' + 1)})/\theta)^{1/p}$. In particular we have
\begin{equation} \label{e1:2p}
\begin{aligned}
\n{x_{s(k')} - x_{s(k'+1)}}_1 \geq & \textnormal{dist}_1(A_{l}, A_j) - \textnormal{dist}_1(x_{s(k'+1)}, A_{j}) - \textnormal{dist}_1(x_{s(k')}, A_{l}) \geq \\ \geq & d - \left( \frac{g(x_{s(k')})}{\theta} \right)^{\frac{1}{p}} - \left( \frac{g(x_{s(k'+1)})}{\theta} \right)^{\frac{1}{p}} \geq d-2 \left( \frac{2\sqrt{L\bar{\varepsilon}}}{\theta}\right)^{\frac{1}{p}}\ ,
\end{aligned}
\end{equation}
where we used~\eqref{e1:gkbound} in the last inequality.
But by the last condition of \eqref{e1:epscond}, we have
\begin{equation} \label{e1:1p}
d- 2 \left( \frac{2\sqrt{L\bar{\varepsilon}}}{\theta}\right)^{\frac{1}{p}} > 2n\sqrt{\frac{2\bar{\varepsilon}}{L}}\ .
\end{equation}
Concatenating \eqref{e1:0p}, \eqref{e1:1p} and \eqref{e1:2p} we get a contradiction and the claim is proved. Notice that an immediate consequence of this claim is $\textnormal{dist}_1(x_{\tau}, A_{l}) < r_*$ by \eqref{e1:d1bound} applied to $\tau$ instead of $\bar{k}$, where $\tau \geq \bar{k}$ is an index corresponding to a Case 1 or Case 2 step. \\
To finish the proof, first notice that there exists an index $\bar{k} \in [q(\bar{\varepsilon}), q(\bar{\varepsilon}) + n]$ corresponding to a Case 1 or Case 2 step, since there can be at most $n-1$ consecutive Case 3 steps. Furthermore, since by \eqref{e1:d1bound} we have $\textnormal{dist}_1(x_{\bar{k}}, A_{l}) < r_*$, by the local identification Theorem \ref{ascmain} in the steps immediately after $\bar{k}$ the AFW identifies the variables in $I^c_{A_{l}}$ one at a time, so that there exists $h \leq n$ such that $(x_{\bar{k} + h})_i=0$ for every $i \in I^c_{A_{l}}$. Moreover, by the claim every Case 1 and Case 2 step following step $\bar{k}$ happens at points inside $B_1(A_{l}, r_*)$, so it does not change the components corresponding to $I^c_{A_{l}}$ by the local identification Theorem \ref{ascmain}. At the same time, Case 3 steps do not increase the support, so that $(x_{\bar{k} + h + s})_i=0$ for every $i \in I^c_{A_{l}}$ and every $s \geq 0$. Thus active set identification happens in $\bar{k} + h \leq q(\bar{\varepsilon}) + n + h \leq q(\bar{\varepsilon}) + 2n$ steps.
\end{proof}
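For reference, the conditions \eqref{e1:epscond} on $\bar{\varepsilon}$ are straightforward to check numerically; the following snippet is a direct transcription (the constants $L, \theta, p, r_*, d, n$ are problem-dependent inputs).
\begin{verbatim}
import math

def eps_bar_ok(eps, L, theta, p, r_star, d, n):
    # Check the three conditions (e1:epscond) on eps_bar.
    r = (2.0 * math.sqrt(L * eps) / theta) ** (1.0 / p)
    return (eps < L) and (r < r_star) and \
           (2.0 * r + 2.0 * n * math.sqrt(2.0 * eps / L) <= d)
\end{verbatim}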
\begin{Rem}
Assume that the set of stationary points is finite, so that $A_i = \{a_i\}$ for every $i \in [1\!:\!C]$ with $a_i \in \Delta_{n-1}$. Let \begin{equation}
c_{\min} = \min_{i\in [1:C] } \min_{j:(a_i)_j \neq 0} (a_i)_j
\end{equation}
be the minimal nonzero component of a stationary point. Then one can prove a $q(\bar{\varepsilon}) + n$ active set identification bound replacing \eqref{e1:epscond} with the following condition on $\bar{\varepsilon}$ which has no explicit dependence on $n$:
\begin{equation*}
\begin{aligned}
\bar{\varepsilon} <& L, \quad r(\bar{\varepsilon}) + l(\bar{\varepsilon}) < \min(r_*, d/2, c_{\min}/2 )\ ,
\end{aligned}
\end{equation*}
where $r(\bar{\varepsilon}) = \left(\frac{2\sqrt{L\bar{\varepsilon}}}{\theta}\right)^{\frac{1}{p}}$ and $l(\bar{\varepsilon}) = 2\sqrt{\frac{2\bar{\varepsilon}}{L}}$. We do not discuss the proof, since it follows roughly the same lines as the proof of Theorem \ref{nonconvcompl}.
\end{Rem}
\begin{Rem}
When we have an explicit expression for the convergence rate $q(\varepsilon)$, then we can get an active set complexity bound using Theorem \ref{nonconvcompl}.
\end{Rem}
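As an illustration of this remark (a hypothetical but standard situation): if the per-step decrease inherits a linear rate, i.e. $f(x_k) - f(x_{k+1}) \leq f(x_k) - f^* \leq q_0^k\, h_0$ for some $q_0 \in (0,1)$ and $h_0 = f(x_0) - f^*$, then one can take
\begin{equation*}
q(\varepsilon) = \max\left(0,\ \left\lceil \frac{\ln(h_0/\varepsilon)}{\ln(1/q_0)} \right\rceil \right)\ ,
\end{equation*}
and Theorem \ref{nonconvcompl} yields an active set complexity of at most $q(\bar{\varepsilon}) + 2n$ with this explicit $q$.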
\subsection{Local active set complexity bound}
A key element to ensure local convergence to a strict local minimum will be the following property
\begin{equation}\label{strongdescent}
x_{k} \in \textnormal{argmax}\{f(x) \ | \ x \in \textnormal{conv}(x_k, x_{k+1}) \}\ ,
\end{equation}
which in particular holds when $\alpha_k = \bar{\alpha}_k$, as proved in Lemma \ref{alphacond}.
The property \eqref{strongdescent} is obviously stronger than the usual monotonicity, and it ensures that the sequence cannot escape from connected components of sublevel sets. When $f$ is convex it is immediate to check that \eqref{strongdescent} holds if and only if $\{f(x_k)\}$ is monotone nonincreasing.
\\
Let $x^*$ be a stationary point which is also a strict local minimizer isolated from the other stationary points, and set $\tilde{f} = f(x^*)$.
Let then $\beta$ be such that there exists a connected component $V_{x^*, \beta}$ of $f^{-1}((-\infty, \beta])$ satisfying
\begin{equation*}
V_{x^*, \beta} \cap \mathcal{X}^* = \{x^*\} = \textnormal{argmin}_{x \in V_{x^*, \beta}} f(x).
\end{equation*}
\begin{Th} \label{nonconv}
Let $\{x_k\}$ be a sequence generated by the AFW, with $x_0 \in V_{x^*, \beta}$ and with stepsize given by \eqref{alphabound}.
Let
\begin{equation*}
r_* = \frac{\delta_{\min}(x^*)}{2L + \delta_{\min}(x^*)}\ .
\end{equation*}
Then $x_k \rightarrow x^*$ and the sequence identifies the support in at most
\begin{equation*}
\left\lceil\max \left(\frac{4(f(x_0)-\tilde{f})}{\tau}, \frac{8L(f(x_0)-\tilde{f})}{\tau^2} \right)\right\rceil + 1 + |I^c(x^*)|
\end{equation*}
steps with
\begin{equation*}
\tau = \min\{g(x) \ | \ x \in f^{-1}([m, + \infty)) \cap V_{x^*, \beta} \}\ ,
\end{equation*}
where
$$ m = \min \{\ f(x) \ | \ x \in V_{x^*, \beta} \setminus B_{r_*}(x^*) \}\ . $$
\end{Th}
\begin{proof}
We have all the hypotheses to apply the bound given in Corollary \ref{cor:nonconvb} for $g^*_k$:
\begin{equation*}
g^*_k \leq \max \left(\sqrt{\frac{8L(f(x_0) - f^*)}{k}}, \frac{4(f(x_0) - f^*)}{k}
\right)\ .
\end{equation*}
It is straightforward to check that if
\begin{equation*}
\bar{h} = \left\lceil \max\left(\frac{4(f(x_0) - f^*)}{\tau}, \frac{8L(f(x_0)-f^*)}{\tau^2} \right) \right\rceil+1
\end{equation*}
then
\begin{equation*}
g^*_{\bar{h}} < \tau\ .
\end{equation*}
Therefore, by the definition of $\tau$, we get $f(x_{\bar{h}}) < m$.
We claim that $x_h \in B_{r_*}(x^*)$ for every $h \geq \bar{h}$. Indeed, by point 1 of Lemma \ref{alphacond}, the condition $\alpha_k = \bar{\alpha}_k$ on the stepsizes implies that $\{x_k\}$ satisfies \eqref{strongdescent}, so that it cannot leave connected components of sublevel sets. Thus, since $f(x_h) < m$, we have
$$ x_h \in V_{x^*, \beta} \cap f^{-1}((-\infty, m)) \subset B_{r_*}(x^*)\ ,$$
where the inclusion follows directly from the definition of $m$. We can then apply the local active set identification Theorem \ref{ascmain} to obtain an active set complexity of
\begin{equation*}
\bar{h} + |I^c(x^*)| = \left\lceil\max\left(\frac{4(f(x_0) - f^*)}{\tau}, \frac{8L(f(x_0)-f^*)}{\tau^2} \right) \right\rceil + 1 + |I^c(x^*)|\ ,
\end{equation*}
thus getting our result.
\end{proof}
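As a small usage note (an illustration, not part of the statement): once the problem constants are known, the step bound of Theorem \ref{nonconv} can be evaluated directly, e.g.:
\begin{verbatim}
import math

def local_active_set_bound(f0, fstar, tau, L, n_inactive):
    # ceil(max(4*h/tau, 8*L*h/tau^2)) + 1 + |I^c(x*)|, with h = f(x_0) - f(x*)
    h = f0 - fstar
    return math.ceil(max(4.0 * h / tau, 8.0 * L * h / (tau * tau))) + 1 + n_inactive
\end{verbatim}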
\section{Conclusions}\label{concl}
We proved general results on the finite time active set convergence of the AFW, giving explicit bounds on the number of steps necessary to identify the support of a solution. As applications of these results we computed the active set complexity for strongly convex and for nonconvex objectives. Possible extensions of these results would be to adapt them to other FW variants and, more generally, to other first order methods. It also remains to be seen whether these identification properties of the AFW can be extended to problems with nonlinear constraints.
\section{Appendix} \label{tech}
In several proofs we need some elementary inequalities concerning the Euclidean norm $\n{\cdot}$ and the norm $\n{\cdot}_1$.
\begin{Lemma} \label{simpelem}
Given $\{x, y\}\subset \Delta_{n-1}$, $i \in [1:n]$:
\begin{itemize}
\item[1.] $\|e_i - x\| \leq \sqrt{2}(e_i - x)_i$;
\item[2.] $(y-x)_i \leq \|y - x\|_1/2 $;
\item[3.] If $\{x_k\}$ is a sequence generated on the probability simplex by the AFW then $\n{x_{k+1}-x_k}_1 \leq 2\n{x_{k+1}-x_k}$ for every $k$.
\end{itemize}
\end{Lemma}
\begin{proof}
1. $(e_i - x)_j = -x_j$ for $j \neq i$, $(e_i - x)_i = 1-x_i = \sum_{j \neq i}x_j$. In particular
\begin{equation*}
\begin{aligned}
\|e_i - x\| = (\sum_{j\neq i} x_j^2 + (e_i-x)_i^2)^{\frac{1}{2}} \leq ((\sum_{j\neq i} x_j)^2+ (1-x_i)^2)^{\frac{1}{2}} = \sqrt{2}(\sum_{j \neq i} x_j ) = \sqrt{2}(e_i - x)_i
\end{aligned}
\end{equation*}
2. Since $\sum_{j \in [1:n]} x_j = \sum_{j \in [1:n]}y_j$, we have $\sum_{j \in [1:n]} (x-y)_j = 0$, so that
$$ (y-x)_i = \sum_{j \neq i} (x-y)_j $$
and as a consequence
\begin{equation*}
\|y-x\|_1 = \sum_{j \in [1:n]} |(y-x)_j| \geq (y-x)_i + \sum_{j \neq i} (x-y)_j = 2(y-x)_i\ .
\end{equation*}
3. We have $x_{k+1} - x_k = \alpha_k d_k$ with $d_k = \pm (e_i - x_k)$ for some $i \in [1:n]$. By homogeneity it suffices to prove $\n{d_k} \geq \frac{1}{2}\n{d_k}_1$. We have
\begin{equation*}
\n{d_k} \geq 1-(x_k)_i = \frac{1}{2} (1-(x_k)_i + \sum_{j \neq i} (x_k)_j) = \frac{1}{2}\n{d_k}_1\ ,
\end{equation*}
where in the first equality we used $\sum_{i=1}^n (x_k)_i=1$ and in the second equality we used $0\leq x_k \leq 1$.
\end{proof}
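The inequalities of Lemma \ref{simpelem} are also easy to sanity-check numerically; a minimal sketch (illustration only):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.dirichlet(np.ones(5))        # random points of the simplex
    y = rng.dirichlet(np.ones(5))
    i = int(rng.integers(5))
    e = np.eye(5)[i]
    assert np.linalg.norm(e - x) <= np.sqrt(2) * (e - x)[i] + 1e-12   # point 1
    assert (y - x)[i] <= np.abs(y - x).sum() / 2 + 1e-12              # point 2
\end{verbatim}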
\subsection{Technical results related to stepsizes} \label{nonstationary}
We now prove several properties related to the stepsize given in \eqref{alphabound}.
\begin{Lemma}\label{alphacond}
Consider a sequence $\{x_k\}$ in $\Delta_{n - 1}$ such that $x_{k+1} = x_k + \alpha_k d_k$ with $\alpha_k \in \mathbb R_{\geq 0}$, $d_k \in \mathbb R^n$. Let $\bar{\alpha}_k$ be defined as in \eqref{alphabound}, let $p_k = -\nabla f(x_k)^\top d_k$ and assume $p_k > 0$. Then:
\begin{itemize}
\item[1.] If $0 \leq \alpha_k \leq 2p_k /(\|d_k\|^2L)$, the sequence $\{x_k\}$ has the property \eqref{strongdescent}.
\item[2.] If $\alpha_k = \bar{\alpha}_k$ then \eqref{eq:rho} is satisfied with $\rho = \frac{1}{2}$. Additionally, we have
\begin{equation} \label{eq:lim}
f(x_k) - f(x_{k+1}) \geq L\frac{\n{x_{k+1} - x_k}^2}{2}\ .
\end{equation}
\item[3.] If $\alpha_k$ is given by exact linesearch, then $\alpha_k \geq \bar{\alpha}_k$ and \eqref{eq:rho} is again satisfied with $\rho= \frac{1}{2}$.
\end{itemize}
\end{Lemma}
\begin{proof}
By the standard descent lemma~\cite[Proposition 6.1.2]{bertsekas2015convex} we have
\begin{equation} \label{e11:std}
f(x_k) - f(x_k + \alpha d_k) \geq \alpha p_k - \alpha^2 \frac{L\|d_k\|^2}{2}\ .
\end{equation}
It is immediate to check
\begin{equation} \label{eq11:ds}
\alpha \nabla f(x_k)^\top d_k +\alpha^2 \frac{L\|d_k\|^2}{2} \leq 0\ ,
\end{equation}
for every $0 \leq \alpha \leq \frac{2p_k}{L\|d_k\|^2} $ and
\begin{equation} \label{eq11:lin}
\alpha p_k - \alpha^2 \frac{L\|d_k\|^2}{2} \geq \alpha p_k/2 \geq \alpha^2 \frac{L\|d_k\|^2}{2}
\end{equation}
for every $0 \leq \alpha \leq \frac{p_k}{L\|d_k\|^2} $. \\
1. For every $x \in \textnormal{conv}(x_k, x_{k+1})\subseteq \left\{x_k+ \alpha d_k \ | \ 0 \leq \alpha \leq \frac{2p_k}{L\|d_k\|^2} \right \}$, we have
\begin{equation*}
f(x) = f(x_k + \alpha d_k) \leq f(x_k) + \alpha \nabla f(x_k)^\top d_k +\alpha^2 \frac{L\|d_k\|^2}{2} \leq f(x_k)\ ,
\end{equation*}
where we used \eqref{e11:std} in the first inequality and \eqref{eq11:ds} in the second inequality. \\
2. We have
\begin{equation*}
f(x_k) - f(x_{k+1}) = f(x_k) - f(x_k + \bar{\alpha}_k d_k) \geq \bar{\alpha}_k p_k/2\ ,
\end{equation*}
where we have the hypotheses to apply \eqref{eq11:lin} since $0 \leq \bar{\alpha}_k \leq \frac{p_k}{L\|d_k\|^2} $.
Again by \eqref{eq11:lin}
\begin{equation*}
f(x_k) - f(x_{k+1}) = f(x_k) - f(x_k + \bar{\alpha}_k d_k) \geq \bar{\alpha}_k^2 \frac{L\|d_k\|^2}{2} = L \frac{\n{x_k - x_{k+1}}^2}{2}\ .
\end{equation*}
3. If $\alpha_k = \alpha_k^{\max}$ then there is nothing to prove since $\bar{\alpha}_k \leq \alpha_k^{\max}$. Otherwise we have
\begin{equation} \label{eq11:linesearch}
0 = \frac{\partial}{\partial \alpha} f(x_k + \alpha d_k) |_{\alpha= \alpha_k} =d_k^\top (\nabla f(x_k + \alpha_k d_k))
\end{equation}
and therefore
\begin{equation} \label{eq11:linesearch2}
\begin{aligned}
- d_k^\top \nabla f(x_k) & = - d_k^\top \nabla f(x_k) + d_k^\top \nabla f(x_k + \alpha_k d_k) = - d_k^\top (\nabla f(x_k) - \nabla f(x_k + \alpha_k d_k)) \\
& \leq L\n{d_k}\n{x_k - (x_k + \alpha_k d_k) } = \alpha_k L\n{d_k}^2\ ,
\end{aligned}
\end{equation}
where we used \eqref{eq11:linesearch} in the first equality and the Lipschitz condition in the inequality. From \eqref{eq11:linesearch2} it follows
\begin{equation*}
\alpha_k \geq \frac{- d_k^\top \nabla f(x_k)}{L \n{d_k}^2} \geq \bar{\alpha}_k
\end{equation*}
and this proves the first claim. As for the second,
\begin{equation*}
f(x_k) - f(x_k + \alpha_k d_k) \geq f(x_k) - f(x_k + \bar{\alpha}_kd_k) \geq \frac{\bar\alpha_k}{2}p_k\ ,
\end{equation*}
where the first inequality follows from the definition of exact linesearch and the second by point 2 of the lemma.
\end{proof}
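For a concrete sanity check of \eqref{eq:lim} (an illustration only; the quadratic instance and the FW direction below are our own choices, not part of the lemma):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
Q = M @ M.T                              # f(z) = 0.5 z^T Q z - b^T z
b = rng.standard_normal(5)
L = np.linalg.norm(Q, 2)                 # grad f is L-Lipschitz with L = ||Q||_2
f = lambda z: 0.5 * z @ Q @ z - b @ z
x = rng.dirichlet(np.ones(5))
grad = Q @ x - b
d = np.eye(5)[int(np.argmin(grad))] - x  # FW direction, alpha_max = 1
p = -grad @ d                            # p_k > 0 almost surely here
alpha = min(1.0, p / (L * (d @ d)))      # the stepsize alpha_bar
x1 = x + alpha * d
assert f(x) - f(x1) >= L * np.sum((x - x1) ** 2) / 2 - 1e-10
\end{verbatim}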
\begin{Cor}\label{nonconv0}
Under the hypotheses of Lemma \ref{alphacond}, assume that $f(x_k)$ is monotonically decreasing and assume that for some subsequence $k(j)$ we have $x_{k(j) + 1} = x_{k(j)} + \bar{\alpha}_{k(j)} d_{k(j)}$. Then $$\n{x_{k(j)} - x_{k(j)+1}} \rightarrow 0\ .$$
\end{Cor}
\begin{proof}
By~\eqref{eq:lim} we have
\begin{equation*}
f(x_{k(j)}) - f(x_{k(j) + 1}) \geq \frac{L}{2}\n{x_{k(j)} - x_{k(j)+1}}^2
\end{equation*}
and the conclusion follows by monotonicity and boundedness.
\end{proof}
\subsection{AFW complexity for generic polytopes} \label{generalafw}
As anticipated in the introduction, it is well known that every application of the AFW to a polytope can be seen as an application of the AFW to the probability simplex. \\
In this section we show the connection between the active set and the face of the polytope exposed by $-\nabla f(y^*)$, where $y^*$ is a stationary point for $f$. We then proceed to show with a couple of examples how the results proved for the probability simplex can be adapted to general polytopes. In particular we will generalize Theorem \ref{activecompl}, thus proving that under a convergence assumption the AFW identifies the face exposed by the gradients of some stationary points. An analogous result is already well known for the gradient projection algorithm, and was first proved in \cite{burke1994exposing} building on \cite{burke1988identification} which used an additional strict complementarity assumption but worked in a more general setting than polytopes, that of convex compact sets with a polyhedral optimal face. \\
Before stating the generalized theorem we need to introduce additional notation and prove a few properties mostly concerning the generalization of the simplex multiplier function $\lambda$ to polytopes.\\
Let $P$ be a polytope and $f: P \rightarrow \mathbb{R}$ be a function whose gradient has Lipschitz constant $L$. \\
To define the AFW algorithm we need a finite set of atoms $\mathcal{A}$ such that $\textnormal{conv}(\mathcal{A}) = P$. As for the probability simplex we can then define for every $a \in \mathcal{A}$ the multiplier function $\lambda_a: P \rightarrow \mathbb{R}$ by
$$\lambda_a(y) = \nabla f(y)^{\top} (a-y)\ . $$
Let finally $A$ be a matrix having as columns the atoms in $\mathcal{A}$, so that $A$ is also a linear transformation mapping $\Delta_{|\mathcal{A}| - 1}$ onto $P$ with $Ae_i = A^i \in \mathcal{A}$. \\
In order to apply Theorem \ref{ascmain} we need to check that the transformed problem
\begin{equation*}
\min \{f(Ax) \ | \ x \in \Delta_{|\mathcal{A}| - 1}\}
\end{equation*}
still has all the necessary properties under the assumptions we made on $f$. \\
Let $\tilde{f}(x) = f(Ax) $.
First, it is easy to see that the gradient of $\tilde{f}$ is still Lipschitz: since $\nabla \tilde{f}(x) = A^\top \nabla f(Ax)$, its Lipschitz constant is at most $\|A\|^2 L$.
Also $\lambda$ is invariant under affine transformation, meaning that $\lambda_{A^i}(Ax) = \lambda_i(x) $ for every $i \in [1:|\mathcal{A}|]$, $x \in \Delta_{|\mathcal{A}|-1}$. Indeed
$$\lambda_{A^i}(Ax) = \nabla f(Ax)^\top (A^i - Ax) = \nabla f(Ax)^\top A(e_i - x) = \nabla \tilde f(x)^\top (e_i-x) = \lambda_i(x)\ . $$
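This invariance is immediate to verify numerically; the following is a minimal sketch on random atoms and a simple quadratic $f$ (an illustration only; none of the names below come from the text):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
dim, n_atoms = 3, 6
A = rng.standard_normal((dim, n_atoms))          # atoms A^i as columns
c = rng.standard_normal(dim)
grad_f = lambda y: y + c                         # gradient of f(y) = 0.5|y|^2 + c^T y
x = rng.dirichlet(np.ones(n_atoms))
y = A @ x
g = grad_f(y)
lam_poly = g @ (A - y[:, None])                  # lambda_{A^i}(Ax) for all i
lam_simplex = (A.T @ g) @ (np.eye(n_atoms) - x[:, None])  # lambda_i(x)
assert np.allclose(lam_poly, lam_simplex)
\end{verbatim}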
Let $Y^*$ be the set of stationary points for $f$ on $P$, so that by invariance of multipliers $\mathcal{X}^* = A^{-1}(Y^*)$ is the set of stationary points for $\tilde{f}$. The invariance of the identification property follows immediately from the invariance of $\lambda$: if the support of the multiplier functions for $f$ restricted to $B$ is $\{A^i\}_{i \in I^c}$, then the support of the multiplier functions for $\tilde{f}$ restricted to $A^{-1}(B)$ is $I^c$. \\
We now show the connection between the face exposed by $-\nabla f$ and the support of the multiplier function. Let $y^*=Ax^* \in Y^*$ and let
\begin{equation*}
P^*(y^*) = \{y \in P \ | \ \nabla f(y^*)^\top y = \nabla f(y^*)^\top y^* \} = \textnormal{argmax}\{-\nabla f(y^*)^\top y \ | \ y \in P \} = \mathcal{F}(-\nabla f(y^*))
\end{equation*}
be the face of the polytope $P$ exposed by $ - \nabla f(y^*)$. The complementarity conditions for the generalized multiplier function $\lambda$ can be stated very simply in
terms of inclusion in $P^*(y^*)$: since $y^* \in P^*(y^*)$ we have $\lambda_a(y^*) = 0$ for every $a \in \mathcal{A} \cap P^*(y^*)$, and $\lambda_a(y^*) > 0$ for every $a \in \mathcal{A} \setminus P^*(y^*)$.
But $P$ is the convex hull of the set of atoms in $\mathcal{A}$ so that the previous relations mean that the face $P^*(y^*)$ is the convex hull of the set of atoms for which $\lambda_a(y^*) = 0$:
\begin{equation*}
P^*(y^*) = \textnormal{conv}\{a \in \mathcal{A} \ | \ \lambda_a(y^*) = 0 \}
\end{equation*}
or in other words since $\lambda_{A^i}(y^*) = 0$ if and only if $i \in I(x^*)=\{i\in [1:n]\ | \ \lambda_i(x^*)=0\}$:
\begin{equation} \label{expconv}
P^*(y^*) = \textnormal{conv} \{a \in \mathcal{A} \ | \ a=A^i, \ i \in I(x^*)\}\ .
\end{equation}
A consequence of \eqref{expconv} is that, given any subset $B$ of $Y^*$ with a constant active set, we necessarily get $P^*(w)~=~P^*(z)$ for every $w, z \in B$, since $I(w) = I(z)$. For such a subset $B$ we can then define
\begin{equation*}
P^*(B) = P^*(y^*) \textnormal{ for any }y^* \in B
\end{equation*}
where the definition does not depend on the specific $y^* \in B$ considered.
We can now restate Theorem~\ref{activecompl} in slightly different terms:
\begin{Th} \label{activecompl2}
Let $\{y_k\}$ be a sequence generated by the AFW on $P$ and let $\{x_k\}$ be the corresponding sequence of weights in $\Delta_{|\mathcal{A}|-1}$ such that $\{y_k\} = \{Ax_k\}$. Assume that the stepsizes satisfy $\alpha_k \geq \bar{\alpha}_k$ (using $\tilde{f}$ instead of $f$ in \eqref{alphabound}). If there exists a compact subset $B$ of $Y^*$ with the SIP such that $y_k \rightarrow B$, then there exists $M$ such that
$$y_k \in P^*(B) \textit{ for every }k\geq M. $$
\end{Th}
\begin{proof}
Follows from Theorem \ref{activecompl} and the affine invariance properties discussed above.
\end{proof}
A technical point concerning Theorem \ref{activecompl2} is that in order to compute $\bar{\alpha}_k $ the Lipschitz constant $L$ of $ \nabla \tilde{f}$ (defined on the simplex) is necessary. When optimizing on a general polytope, the calculation of an accurate estimate of $L$ for $\tilde{f}$ may be problematic. However, by Lemma \ref{alphacond} if the AFW uses exact linesearch, the stepsize $\bar{\alpha}_k$ (and in particular the constant $L$) is not needed because the inequality $\alpha_k \geq \bar{\alpha}_k$ is automatically satisfied.\\
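For instance (a sketch for a quadratic objective only, with names of our own choosing), exact linesearch along $d_k$ has a closed form involving no Lipschitz constant at all:
\begin{verbatim}
import numpy as np

def exact_linesearch_quadratic(Q, b, x, d, alpha_max):
    # argmin over [0, alpha_max] of f(x + alpha d), with f(z) = 0.5 z^T Q z - b^T z;
    # we assume p = -grad f(x)^T d > 0 (a descent direction).
    p = -(Q @ x - b) @ d
    curv = d @ Q @ d
    if curv <= 0:
        return alpha_max            # f keeps decreasing along d up to the boundary
    return min(alpha_max, p / curv)
\end{verbatim}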
We now generalize the analysis of the strongly convex case.
The technical problem here is that strong convexity, which is used in Corollary \ref{ssimplex}, is not maintained by affine transformations, so that instead we will have to use a weaker error bound condition. As a possible alternative, in \cite{lacoste2015global} linear convergence of the AFW is proved with dependence only on affine invariant parameters, so that any version of Theorem \ref{ascmain} and Corollary \ref{ssimplex} depending on those parameters instead of $u_1, L$ would not need this additional analysis. \\
Let $P = \{ y \in \mathbb{R}^n \ | \ Cy \leq b\}$, $y^*$ be the unique minimizer of $f$ on $P$ and $u > 0$ be such that
\begin{equation*}
f(y) \geq f(y^*)+\frac{u}{2} \|y - y^*\|^2\ .
\end{equation*}
The function $\tilde{f}$ inherits the error bound condition necessary for Corollary \ref{ssimplex} from the strong convexity of $f$:
for every $x \in \Delta_{|\mathcal{A}|-1}$, by \cite[Lemma 2.2]{beck2017linearly} we have
$$\textnormal{dist} (x, \mathcal{X}^*) \leq \theta \|Ax - y^*\| $$
where $\theta$ is the Hoffman constant related to $[C^T, [I; e; -e]^T]^T$. As a consequence if $\tilde{f}^*$ is the minimum of $\tilde{f}$
\begin{equation*}
\tilde{f}(x) - \tilde{f}^* = f(Ax) - f(y^*) \geq \frac{u}{2} \|Ax -y^*\|^2 \geq \frac{u}{2\theta^2} \textnormal{dist}(x, \mathcal{X}^*)^2
\end{equation*}
and using that $n\|\cdot \|^2 \geq \|\cdot \|_1^2$ we can finally retrieve an error bound condition with respect to $\|\cdot \|_1$:
\begin{equation} \label{errorbound}
\tilde{f}(x) - \tilde{f}^* \geq \frac{u}{2n\theta^2}\textnormal{dist}_1(x, \mathcal{X}^*)^2.
\end{equation}
Having proved this error bound condition for $\tilde{f}$ we can now generalize \eqref{awdir}:
\begin{Cor}
The sequence $\{y_k\}$ generated by the AFW is in $P^*(y^*)$ for
$$ k \geq \max\left (0, \frac{\textnormal{ln}(h_0) - \textnormal{ln}(u_P r_*^2/2)}{\textnormal{ln}(1/q)}\right ) +|I^c| $$
where $q\in(0,1)$ is the constant related to the linear convergence rate $f(y_k) - f(y^*) \leq q^k(f(y_0) - f(y^*))$, $u_P = \frac{u}{2n\theta^2}$, and $r_* = \frac{\delta_{\min}}{2L + \delta_{\min}}$ with $ \delta_{\min} = \min \{\lambda_a(y^*) \ | \ \lambda_a(y^*) > 0 \}$.
\end{Cor}
\begin{proof}
Let $I = \{i \in [1:|\mathcal{A}|] \ | \ \lambda_{A^i}(y^*) = 0 \}$, $P^* = P^*(y^*)$.
Since $P^* = \textnormal{conv}(\mathcal{A} \cap P^*)$ and, by \eqref{expconv}, $\textnormal{conv}(\mathcal{A} \cap P^*) = \textnormal{conv} \{A^i \ | \ i \in I\}$,
it suffices to prove that for every $k$ larger than the bound we have $y_k \in \textnormal{conv} \{A^i \ | \ i \in I\}$.
Let $\{x_k\}$ be the sequence generated by the AFW on the probability simplex, so that $y_k=Ax_k$. We need to prove that, for every $k$ larger than the bound, we have
$$ x_k \in \textnormal{conv }\{e_i \ | \ i \in I\}\ , $$
or in other words $(x_k)_i = 0$ for every $i \in I^c$. \\
Reasoning as in Corollary \ref{ssimplex}, we get that $\textnormal{dist}_1(x_k, \mathcal{X}^*) < r_*$ for every
\begin{equation} \label{lowbound}
k \geq \frac{\textnormal{ln}(h_0) - \textnormal{ln}(u_P r_*^2/2)}{\textnormal{ln}(1/q)}\ .
\end{equation}
Let $\bar{k}$ be the minimum index such that \eqref{lowbound} holds.
For every $k \geq \bar{k}$ there exists $x^* \in \mathcal{X}^*$ with $\|x_k - x^*\|_1 < r_*$.
But $\lambda_i(x) = \lambda_{A^i}(y^*)$ for every $x \in \mathcal{X}^*$ by the invariance of $\lambda$, so that we can apply Theorem \ref{ascmain} with fixed point $x^*$ and obtain that if
$J_k = \{i \in I^c \ | \ (x_k)_i> 0\}$ then $|J_{k+1}| \leq \max (0, |J_k| - 1)$. The conclusion follows exactly as in Corollary \ref{ssimplex}.
\end{proof}
\bibliographystyle{plain}
\newcommand\encadremath[1]{\vbox{\hrule\hbox{\vrule\kern8pt
\vbox{\kern8pt \hbox{$\displaystyle #1$}\kern8pt}
\kern8pt\vrule}\hrule}}
\def\enca#1{\vbox{\hrule\hbox{
\vrule\kern8pt\vbox{\kern8pt \hbox{$\displaystyle #1$}
\kern8pt} \kern8pt\vrule}\hrule}}
\newcommand\figureframex[3]{
\begin{figure}[bth]
\hrule\hbox{\vrule\kern8pt
\vbox{\kern8pt \vbox{
\begin{center}
{\mbox{\epsfxsize=#1.truecm\epsfbox{#2}}}
\end{center}
\caption{#3}
}\kern8pt}
\kern8pt\vrule}\hrule
\end{figure}
}
\newcommand\figureframey[3]{
\begin{figure}[bth]
\hrule\hbox{\vrule\kern8pt
\vbox{\kern8pt \vbox{
\begin{center}
{\mbox{\epsfysize=#1.truecm\epsfbox{#2}}}
\end{center}
\caption{#3}
}\kern8pt}
\kern8pt\vrule}\hrule
\end{figure}
}
\newcommand{\rf}[1]{(\ref{#1})}
\newcommand{\Eq}[1]{Eq.~(\ref{#1})}
\newcommand{\eq}[1]{eq.~(\ref{#1})}
\newcommand{\rfig}[1]{fig.~\ref{#1}}
\newcommand{\equ}[2]{\begin{equation}{\label{#1}}{#2}\end{equation}}
\newcommand\eol{\hspace*{\fill}\linebreak}
\newcommand\eop{\vspace*{\fill}\pagebreak}
\renewcommand{\and}{{\qquad {\rm and} \qquad}}
\newcommand{\td}[1]{{\tilde{#1}}}
\renewcommand{\l}{\lambda}
\newcommand{\ee}[1]{{{\rm e}^{#1}}}
\renewcommand{\d}{{{\partial}}}
\newcommand{\dmat}[2]{\mathrm{d}_{\scriptscriptstyle{#1}}[#2]}
\newcommand{\moy}[1]{\left<{#1}\right>}
\renewcommand{\Re}{{\mathrm{Re}}}
\renewcommand{\Im}{{\mathrm{Im}}}
\newcommand{\ssq}[1]{{\sqrt{\sigma({#1})}}}
\renewcommand{\a}{o}
\renewcommand{\l}{\lambda}
\renewcommand{\L}{\Lambda}
\renewcommand{\ssq}[1]{{\sqrt{\sigma({#1})}}}
\preprint{SPhT-T05/045, ccsd-00004752, math-ph/0504058}
\title{Topological expansion of the 2-matrix model correlation functions:
diagrammatic rules for a residue formula}
\author{B.\ Eynard, N. \ Orantin \\
Service de Physique Th\'eorique de Saclay, CEA/DSM/SPhT,\\
Unit\'e de Recherche associ\'ee au CNRS (URA D2306), CEA Saclay,\\
F-91191 Gif-sur-Yvette Cedex, France.\\
E-mail: eynard@spht.saclay.cea.fr, orantin@spht.saclay.cea.fr}
\abstract{We solve the loop equations of the hermitian 2-matrix
model to all orders in the topological $1/N^2$ expansion,
i.e. we obtain all non-mixed correlation functions, in terms of residues on an algebraic curve.
We give two representations of those residues as Feynman-like graphs,
one of them involving only cubic vertices. }
\keywords{Matrix Models, Differential and Algebraic Geometry}
\begin{document}
\vspace{26pt}
\pagestyle{plain}
\setcounter{page}{1}
\newsection{Introduction}
The purpose of this article is to generalize the method invented
in \cite{eynloop1mat}, for the 2-matrix model. The method of
\cite{eynloop1mat} is a diagrammatic technique for computing
correlation functions of the 1-matrix model in terms of residues
on some algebraic curve.
\smallskip
Random matrix models play an important role in physics and
mathematics \cite{Mehta}, and have a wealth of applications which
are too long to list here. In this article, we consider ``formal''
random matrix integrals, which are known to be generating
functions for counting some classes of discrete surfaces
\cite{ZJDFG, thooft, BIPZ, courseynard, eynhabilit}.
The partition function, free energy and correlation functions are all generating functions enumerating certain kinds of graphs (respectively closed graphs, connected closed graphs, open graphs), and these graphs can be seen as discrete surfaces.
In the formal model, the size $N$ of the matrices is just a complex parameter; it need not be an integer, and all observables (free energy, correlation functions)
always have a $1/N$ expansion, because at each order in the expansion parameters there is only a finite number of graphs with a given power of $N$.
The power of $N$ of a graph is its Euler characteristic, and thus the $1/N$ expansion is known as the ``topological expansion'' discovered by 't Hooft \cite{thooft}.
In the formal model, $N$ is thus an expansion parameter, and working order by order in $N$ enumerates only discrete surfaces of a given topology \cite{BIPZ}.
An efficient method for dealing with this formal model is to consider the Schwinger-Dyson equations, called
loop equations in this context \cite{ZJDFG, staudacher}.
In the large $N$ limit (i.e. for planar topologies), the solution of the loop equations is known to be related to the Toda hierarchy \cite{Virasoro,KMMM, ZJZ, PZJ}.
For this reason, the large $N$ expansion of matrix models plays an important role in integrable systems, and in many areas of physics \cite{Kos}.
It was understood by \cite{DV} that the low energy effective action of some string theory models is also described by matrix models.
In the beginning, formal matrix models were considered only in their 1-cut phase, because a potential which is a small deformation of a quadratic one has only one well, i.e. the variables perturbatively explore a single well.
However, an $N\times N$ matrix has $N$ eigenvalues, and even though each of them can perturbatively explore only one well, they need not all explore the same well.
That gives ``multicut'' solutions of matrix models, where the number of eigenvalues near each extremum of the potential is fixed (fixed filling fractions).
Multicut solutions play an important role in string theory, as they describe multi-particle states \cite{DV,DW}.
Multicut solutions correspond to enumerating surfaces with contact terms, which can be called ``foam of surfaces'' as described in \cite{BDE, eynhabilit}.
\medskip
The link between formal matrix models (which always have a $1/N$ expansion) and convergent matrix integrals (which have a $1/N$ expansion only in the 1-cut case under certain assumptions),
has been better understood after the work of \cite{BDE}.
We emphasize again, that the results developed in this article concern the formal matrix model with fixed filling fractions, and should not be applied to
convergent matrix model directly.
\medskip
Recently, it has progressively become clear that the large $N$
expansion of random matrix models has a strong link with algebraic
geometry \cite{KazMar}. The free energy and correlation functions have been
computed in terms of properties of an algebraic curve. The large
$N$ limit of the 1-point correlation function (called the
resolvent) is the solution of an algebraic equation, which thus
defines an algebraic curve. There have been many works which
computed free energy and correlation functions in terms of that
algebraic curve. The leading order resolvent and free energy were
computed in the 1-cut case (algebraic curve of genus zero) in the
pioneering work of \cite{BIPZ}; then recursive methods
for computing correlation functions and the free energy to all orders
in $1/N$ were invented in \cite{ACM,ACKM}. Those methods were first
limited to the 1-matrix, 1-cut case.
Then, for the 1-matrix model, several works have dealt with the multicut case: Akemann
and Ambj{\o}rn found the first subleading term for the multicut
resolvent and the 2-cut free energy \cite{Ak96,AkAm}, while Chekhov \cite{Chekh}
and one of the authors together with Kokotov and Korotkin \cite{EKK}
simultaneously found the first subleading term for the multi-cut free energy.
Then a (non-recursive) diagrammatic method was
invented in \cite{eynloop1mat} to find all correlation functions
to all orders, in the multicut case.
\medskip
The 1-matrix model corresponds to hyperelliptic curves only. In order to have more general algebraic curves, one needs at least a 2-matrix model.
For the 2-matrix models, the loop equations have been known since \cite{staudacher},
and have been written in a concise form in \cite{eynchain, eynchaint, eynmultimat}.
They have been used to find the subleading term of the free energy, first in the genus zero case in \cite{eynm2m}, then in the genus 1 case in \cite{eynm2mg1},
and with arbitrary genus in \cite{EKK}.
The purpose of this article is to generalize the diagrammatic method of \cite{eynloop1mat} for the computation of non-mixed correlation functions
in the 2-matrix case. We solve the loop equations and present their solutions (the expansions of the non-mixed correlation functions) in two different diagrammatic forms.
We first build a cubic diagrammatic representation before presenting an effective non-cubic theory.
\bigskip
{\bf Outline of the article:}
\begin{itemize}
\item In section 2, we introduce the model and our notations.
\item Section 3 is dedicated to the derivation of the loop equations. We derive the fundamental ``master loop equation''
before deriving the loop equations whose solutions are the non-mixed correlation functions.
\item In section 4, we show how a compact Riemann surface arises from the leading order of the master loop equation
and present notations and tools of algebraic geometry needed for the computation of correlation functions.
\item In section 5, we present a diagrammatic solution of the loop equations as cubic Feynman-like graphs.
\item Section 6 is dedicated to the presentation of another representation of the non-mixed correlation functions
as graphs of a non cubic effective theory.
\item In section 7, we study the example of the gaussian case corresponding to the 1-matrix model limit.
\end{itemize}
\newsection{Definitions and notations}
\subsection{Definition of the formal 2-matrix model with fixed filling fractions}
\label{secdef}
In this article, we are interested in the study of the formal two-matrix model
and the computation of a whole family of observables.
The partition function $Z$ is the formal matrix integral:
\begin{equation}\label{defZ}
Z:=\int_{H_n\times H_n} dM_1 dM_2\, e^{-N Tr(V_1(M_1) + V_2(M_2) - M_1 M_2 )}
\end{equation}
where $M_1$ and $M_2$ are two $N \times N$ hermitian matrices,
$dM_1$ and $dM_2$ the products of Lebesgue measures of the real components of $M_1$ and $M_2$ respectively,
and $V_1$ and $V_2$ two polynomial potentials of degree $d_1+1$ and
$d_2+1$ respectively :
\begin{equation}\label{defVpot}
V_1(x) = \sum_{k=1}^{d_1+1} {g_k\over k} x^k
{\qquad , \qquad}
V_2(y) = \sum_{k=1}^{d_2+1} {\td{g}_k\over k} y^k
\end{equation}
``Formal integral'' means that it
is computed as a formal power series expansion, order by order in
the $g_k$'s (see \cite{ZJDFG,thooft,BIPZ}), of a matrix
integral where the non-quadratic terms in the potentials $V_1$
and $V_2$ are treated as perturbations near quadratic potentials.
Such a perturbative expansion can be performed only near local
extrema of $V_1(x)+V_2(y)-xy$, i.e. near points such that: \begin{equation}
V'_1(\xi_i)=\eta_i {\qquad , \qquad} V'_2(\eta_i)=\xi_i \end{equation} which has $d_1
d_2$ solutions. Therefore, if $\overline{M}_1$ and $\overline{M}_2$ are
diagonal matrices, whose diagonal entries are some $\xi_i$'s
(resp. $\eta_i$'s), $(\overline{M}_1,\overline{M}_2)$ is a local extremum of
${\,\rm tr}\: (V_1(M_1)+V_2(M_2)-M_1 M_2)$ around which we can perform a
perturbative expansion.
The choice of such an extremum, around which the perturbative
series is computed, is equivalent to the choice of the number of
eigenvalues near each pair $(\xi_i,\eta_i)$, $i=1,\dots, d_1 d_2$,
i.e. the data of $d_1 d_2$ integers $n_i$ such that:
\begin{equation}
\sum_{i=1}^{d_1 d_2} n_i=N
\end{equation}
This means that we can choose
some contours ${\cal C}_i$, $i=1,\dots, d_1 d_2$, such that the following equality holds
order by order in the perturbative expansion:
\begin{equation}\label{fixfrac}
\left<{1\over 2i\pi}\oint_{{\cal C}_i} {\,\rm tr}\: {dx\over x-M_1}\right> =-n_i \end{equation}
The numbers ${n_i \over N}$ are called filling fractions.
Thus, in the formal model, filling fractions are fixed parameters.
\bigskip
{\bf Fat graphs and discrete random surfaces }
\smallskip
Once filling fractions are chosen, we perform the perturbative expansion.
Each term of that formal expansion is an
expectation value of a gaussian integral, and using Wick's
theorem, each term can be represented by a Feynman graph. Because
the integration variables are matrices, the graphs are ``fat
graphs'', which have a 2-dimensional structure. The Hermitian
matrix models thus enumerate oriented surfaces (other matrix
ensembles can enumerate non-oriented surfaces). This formal
expansion, equivalent to an enumerating function of Feynman graphs,
is a standard tool in physics \cite{ZJDFG,thooft}. Random matrices have thus played a
role in all theories where one needs to sum over surfaces, i.e.
string theory and quantum gravity (i.e. statistical physics on a
random lattice).
Following this interpretation, the loop equations \cite{staudacher} can be
understood as relations linking surfaces of different genera
and different numbers of boundaries.
\subsection{Notations}
\subsubsection{Notation for sets of variables}
We will consider functions of many variables $x_1,x_2,x_3,\dots, x_k$, or of a subset of those variables.
In that purpose we introduce the following notations:
Let $K$ be a $k$-tuple of integers:
\begin{equation}
K=(i_1,i_2,\dots, i_k)
\end{equation}
We denote by $k=|K|$ the length (or cardinality) of $K$.
For any $j\leq |K|$, we denote by $K_j$ the set of all $j$-tuples (i.e. subsets of length $j$) contained in $K$:
\begin{equation}
K_j:=\{J \subset K \,\,\, , \,\, |J|=j \}
\end{equation}
We define the following $k$-tuple of complex numbers:
\begin{equation}
{\mathbf x}_K:=(x_{i_1},x_{i_2},\dots, x_{i_k})
\end{equation}
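For instance, for $K = (2,5,7)$ one has $|K| = 3$, ${\mathbf x}_K = (x_2, x_5, x_7)$ and $K_2 = \{(2,5),\, (2,7),\, (5,7)\}$.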
\subsubsection{Correlation functions}
For a given $k$, we define the correlation function:
\begin{equation}
\overline{w}_{k}(x_1,\dots,x_k) := N^{k-2}\left< \prod_{i=1}^k {\,\rm tr}\:{1\over x_{i}-M_1}\right>_c
{\qquad , \qquad}
\end{equation}
i.e., with the previous notations:
\begin{equation}\label{defwbarkl}
\overline{w}_{|K|}({\mathbf x}_K) := N^{|K|-2}\left< \prod_{r=1}^{|K|} {\,\rm tr}\:{1\over x_{i_r}-M_1}\right>_c
{\qquad , \qquad}
\end{equation}
where the formal average $\langle . \rangle$ is computed with the measure in \eq{defZ},
and the subscript $c$ means connected part (cumulant).
Those correlation functions can be expanded as formal series in ${1 \over N^2}$ in the large $N$ limit:
\begin{equation}\label{defwklh}
\overline{w}_{k}({\mathbf x}_K) = \sum_{h=0}^\infty {1\over N^{2h}}\,\overline{w}_{k}^{(h)}({\mathbf x}_K)
\end{equation}
The purpose of this article is to compute $\overline{w}_{k}^{(h)}({\mathbf x}_K)$ as residues on an algebraic curve
and represent it with Feynman-like graphs of a cubic field theory on the curve.
\medskip
We also define the following auxiliary functions:
\begin{equation}\label{defubarkl}
\overline{u}_{k}(x,y;{\mathbf x}_K) := N^{|K|-1} \left< {\,\rm tr}\: {1\over x-M_1} {V'_2(y)-V'_2(M_2)\over y-M_2}\,\,
\prod_{r=1}^{|K|} {\,\rm tr}\:{1\over x_{i_r}-M_1}\right>_c
\end{equation}
\begin{equation}\label{defpkl}
p_{k}(x,y;{\mathbf x}_K) := N^{|K|-1} \left< {\,\rm tr}\: {V'_1(x)-V'_1(M_1)\over x-M_1} {V'_2(y)-V'_2(M_2)\over y-M_2}\,\,
\prod_{r=1}^{|K|} {\,\rm tr}\:{1\over x_{i_r}-M_1}\right>_c
\end{equation}
\begin{equation}\label{defakl}
a_{k}(x;{\mathbf x}_K) := N^{|K|-1} \left< {\,\rm tr}\: {1\over x-M_1} V'_2(M_2)\,\,
\prod_{r=1}^{|K|} {\,\rm tr}\:{1\over x_{i_r}-M_1}\right>_c
\end{equation}
Notice that $\overline{u}_{k}(x,y;{\mathbf x}_K)$ is a polynomial in $y$ of degree $d_2-1$, and
$p_{k}(x,y;{\mathbf x}_K)$ is a polynomial in $x$ of degree $d_1-1$ and in $y$ of degree $d_2-1$: indeed, the divided difference ${V'_2(y)-V'_2(M_2)\over y-M_2}$ is a polynomial in $y$ of degree $d_2-1$ (and similarly for $V'_1$).
It is convenient to renormalize those functions, and define:
\begin{equation}\label{defukl}
u_{k}(x,y;{\mathbf x}_K):=\overline{u}_{k}(x,y;{\mathbf x}_K) -\delta_{k,0}(V'_2(y)-x)
\end{equation}
and
\begin{equation}\label{defwkl}
w_{k}({\mathbf x}_K):=\overline{w}_{k}({\mathbf x}_K) + {\delta_{k,2} \over (x_1-x_2)^2}
\end{equation}
Let us remark that all those functions have the same kind of topological expansion as $\overline{w}_{k}({\mathbf x}_K)$ and one
defines $p_{k}^{(h)}(x,y;{\mathbf x}_K)$ and $u_{k}^{(h)}(x,y;{\mathbf x}_K)$ as well like in \eq{defwklh}.
\vspace{0.7cm}
We define the function:
\begin{equation}\label{defY}
Y(x):=V'_1(x)-w_{1}(x)
\end{equation}
which we see below, describes the algebraic curve.
The ${1 \over N^2}$ expansion of such correlation functions is
known to enumerate discrete surfaces of a given topology,
whose polygons carry a spin + or - (Ising model on a random surface \cite{Kazakov,Kos}), see \cite{eynhabilit} for the multicut case i.e. foam of Ising surfaces.
The $\overline{w}_{k}^{(h)}$ are generating functions enumerating genus $h$ discrete surfaces with $k$ boundaries of spin $+$.
As an example, $\overline{w}_{2}^{(3)}$ enumerates surfaces of genus 3 with 2 boundaries:
\begin{equation}
\overline{w}_{2}^{(3)}= \begin{array}{r}
{\epsfxsize 7cm\epsffile{surfdiscr.eps}}
\end{array}
\end{equation}
Notice that the question of boundaries with non uniform spin, i.e. with changes of boundary conditions has been solved to leading order only in \cite{EOtrmixte}.
\newsection{Loop equations}
There exist several methods for computing the free energy and correlation functions, the one we consider here is the ``loop equation'' method, which is nothing
but Schwinger-Dyson, or Ward identities \cite{ZJDFG, staudacher}.
They implement the Virasoro or W-algebra constraints on the partition function \cite{KazMar, MMM}, i.e. the fact that the matrix integral is left unchanged under a change of variable.
The loop equations are valid in the formal model, order by order in the expansion parameters.
For the 2-matrix model, loop equations have been known since \cite{staudacher}, and written
in a more systematic way in \cite{eynchain,eynchaint,eynmultimat,KazMar}.
\subsection{The master loop equation}
It is well known that in the large $N$ limit, loop equations imply an algebraic equation for the functions $w_{1}$,
i.e. for the function $Y(x)$, called the master loop equation.
Let us briefly recall how to derive it (see \cite{eynmultimat}):
$\bullet$ the change of variables $M_2 \rightarrow M_2 + \epsilon \frac{1}{x-M_1}$ implies:
\begin{equation}\label{chvara}
0=a_{0}(x) - x \overline{w}_{1}(x) + 1
\end{equation}
$\bullet$ the change of variables $M_1 \rightarrow M_1 + \epsilon \frac{1}{x-M_1}\frac{V_2'(y)-V_2'(M_2)}{y-M_2} $ implies:
\begin{eqnarray}\label{chvaru}
\overline{w}_{1}(x) \overline{u}_{0}(x,y) +{1\over N^2}u_{1}(x,y;x) &=& V'_1(x)\overline{u}_{0}(x,y) - p_{0}(x,y) - y \overline{u}_{0}(x,y)\cr
&& + V'_2(y) w_{1}(x) - a_{0}(x) \cr
\end{eqnarray}
i.e., putting everything together:
\begin{equation}
(y-Y(x))u_{0}(x,y)+{1\over N^2}u_{1}(x,y;x) = (V'_2(y)-x) (V'_1(x)-y) - p_{0}(x,y) +1
\end{equation}
We define:
\begin{equation}\label{defExy}
E(x,y) = ( V_2'(y) -x ) ( V_1'(x)-y) - p_{0}(x,y) + 1
\end{equation}
The {\em master loop equation} is thus:
\begin{equation}\label{masterloopallgenus}\encadremath{
(y-Y(x))u_{0}(x,y)+{1\over N^2}u_{1}(x,y;x) = E(x,y)}
\end{equation}
where $E(x,y)$ is a polynomial of degree $d_1+1$ in $x$ and $d_2+1$ in y.
\subsection{Loop equations for correlation functions}
We now derive the loop equations
which allow to compute recursively the k-point non-mixed correlation
functions.
$\bullet$ The change of variables $\delta M_2 ={1\over x-M_1}
\,\,\prod_{i=1}^k {\,\rm tr}\: {1\over x_i-M_1}$ implies (see
\cite{eynmultimat}): \begin{equation} a_{k}(x;{\mathbf x}_K) = x\,
\overline{w}_{k+1}(x,{\mathbf x}_K) - N^2 \overline{w}_{k}({\mathbf x}_K)\end{equation}
$\bullet$ The change of variables $\delta M_1 ={1\over x-M_1}
{V'_2(y)-V'_2(M_2)\over y-M_2}\,\,\prod_{i=1}^k {\,\rm tr}\: {1\over
x_i-M_1} $ implies (see \cite{eynmultimat}):
\begin{eqnarray}
&& w_{1}(x) \,\overline{u}_{k}(x,y;{\mathbf x}_K) +\sum_{j=0}^{k-1} \sum_{J\in
K_j} \overline{u}_{j}(x,y;{\mathbf x}_J)\, \overline{w}_{k-j+1}(x,{\mathbf x}_{K-J}) \cr
&& + {1 \over N^2} \overline{u}_{k+1}(x,y;x,{\mathbf x}_{K}) \cr
&& +
\sum_{j=1}^k {\d\over \d x_j}\,
{\overline{u}_{k-1}(x,y;{\mathbf x}_{K-\{j\}})-\overline{u}_{k-1}(x_j,y;{\mathbf x}_{K-\{j\}})
\over x-x_j} \cr &=& V'_1(x)\overline{u}_{k}(x,y;{\mathbf x}_K) -
p_{k}(x,y;{\mathbf x}_K) \cr && - y \overline{u}_{k}(x,y;{\mathbf x}_K) + V'_2(y)
\overline{w}_{k+1}(x,{\mathbf x}_K) - a_{k}(x;{\mathbf x}_K) \cr \end{eqnarray}
i.e. for $k\geq 1$:
\begin{equation} \label{loop1}
\encadremath{\begin{array}{rcl}
(y-Y(x)) \,u_{k}(x,y;{\mathbf x}_K)
&=&-\sum_{j=0}^{k-1} \sum_{J\in K_j} u_{j}(x,y;{\mathbf x}_J)\,
w_{k-j+1}(x,{\mathbf x}_{K-J}) \cr && - {1 \over N^2}
u_{k+1}(x,y;x,{\mathbf x}_{K}) \cr
&& + \sum_{j=1}^k {\d\over \d
x_j}\, {u_{k-1}(x_j,y;{\mathbf x}_{K-\{j\}}) \over x-x_j} -
p_{k}(x,y;{\mathbf x}_K) \cr
\end{array}}
\end{equation}
The purpose of this article is to solve \eq{loop1} and compute $\overline{w}_k^{(h)}$ for
all $k$ and $h$.
\newsection{Leading order and algebraic geometry}
\subsection{Leading order of the master loop equation}
To large $N$ leading order, the master loop equation
\eq{masterloopallgenus} reads:
\begin{equation}\encadremath{\label{masterloopu} (y-Y(x))u_{0}(x,y) = E(x,y)
}\end{equation}
Since $u_{0}(x,y)$ is a polynomial in $y$, it has no singularity
for finite $y$, and the LHS vanishes for $y=Y(x)$, i.e.:
\begin{equation}\label{masterloopeq} E(x,Y(x))=0 \end{equation} This defines an
algebraic curve $E(x,y)=0$.
Notice that to leading order we have: \begin{equation}\label{uoo} u_{0}(x,y) =
{E(x,y)\over y-Y(x)} \end{equation} and \begin{equation}\label{uooY} u_{0}(x,Y(x)) =
E_y(x,Y(x)) \end{equation}
\subsection{Introduction to some algebraic geometry}
We use notations similar to \cite{Fay} or \cite{Farkas}. Some useful hints for understanding this
section can be found in {\emph{Appendix A}}.
Let us parameterize the curve $E(x,y)=0$ with a running point $p$ of
a compact Riemann surface ${\cal E}$. It means that we define two
meromorphic functions $x(p)$ and $y(p)$ on ${\cal E}$ such that:
\begin{equation} E(x,y)=0 \Leftrightarrow \exists p \in {\cal E} \,\,\,\,\,
x=x(p) \,\, , \,\, y=y(p) \end{equation}
The functions $x$ and $y$ are not bijective. Indeed, since
$E(x,y)$ is a polynomial of degree $d_2+1$ in $y$, it has $d_2+1$
solutions, i.e. for a given $x$, there exist $d_2+1$ points $p$ on
$ {\cal E} $ such that $x(p)=x$. Thus, the Riemann surface is made
of $d_2+1$ $x$-sheets, respectively $d_1+1$ $y$-sheets. Hence,
from now on, we use these notations:
\begin{equation} x(p) = x \Leftrightarrow
p = p^{j}(x) \,\,\,\,\, \hbox{for} \,\,\,\, j=0,\dots,d_2 \end{equation}
\begin{equation} y(p) = y \Leftrightarrow p = \td{p}^{j}(x) \,\,\,\,\,
\hbox{for} \,\,\,\, j=0,\dots,d_1 \end{equation}
We will most often omit the
exponent 0 corresponding to the physical sheet: $p=p^{0}$.
For instance, one can write $E(x,y)$ as:
\begin{eqnarray} E(x(p),y(q)) &=& -
g_{d_1+1} \times \prod_{i=0}^{d_1} (x(p)-x(\td{q}^{i}(y))) \cr &=&
- \td{g}_{d_2+1} \times \prod_{i=0}^{d_2} (y(q)-y(p^{i}(x))) \end{eqnarray}
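For instance, in the genus-zero case, ${\cal E}$ is the Riemann sphere and one may choose a rational uniformization; a frequently used sketch of such a parameterization (precise conventions vary in the literature) is
\begin{equation}
x(p) = \gamma\, p + \sum_{k=0}^{d_2} \alpha_k\, p^{-k} \qquad , \qquad y(p) = \td{\gamma}\, p^{-1} + \sum_{k=0}^{d_1} \beta_k\, p^{k}
\end{equation}
For a given $x$, the equation $x(p)=x$ is then of degree $d_2+1$ in $p$, and $y(p)=y$ of degree $d_1+1$, in agreement with the sheet counting above.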
\vspace{0.7cm}
Since the $w_{k}^{(h)}$'s, $u_{k}^{(h)}$'s and
$p_{k}^{(h)}$'s are multivalued functions of their arguments $x$,
we now work with differentials which are single-valued on the Riemann surface.
Let us write the differentials: \begin{equation}\label{defWk}
W_{k+1}(p,{\mathbf p}_K) := w_{k+1}(x(p),{\mathbf x}(p_K)) dx(p) \prod_{i=1}^k
dx(p_i) \end{equation} \begin{equation}\label{defUk} U_{k}(p,y;{\mathbf p}_K) :=
u_{k}(x(p),y;{\mathbf x}(p_K)) dx(p) \prod_{i=1}^k dx(p_i) \end{equation}
\begin{equation}\label{defPk} P_{k}(x,y;{\mathbf p}_K) := p_{k}(x,y;{\mathbf x}(p_K))
\prod_{i=1}^k dx(p_i) \end{equation}
{\bf Note:} In the following, the arguments of a function will be
written $x(p)$ or $y(r)$ if the function is defined on the base,
and $p$ or $r$ if the function is defined on the Riemann surface
(and is thus multivalued on the base). \vspace{0.7cm}
Let us now review the notations we use in this article to denote some basic objects. For definitions and details, we refer the reader to {\emph{Appendix A}} and \cite{Fay} or \cite{Farkas}.
\bigskip
$\bullet${\bf Canonical cycles:} ${\cal A}_i$,
${\cal B}_i$ for $i=1,\dots, g$ where $g$ is the genus of the compact Riemann surface ${\cal{E}}$ ($0\leq g \leq d_1d_2 -1$),
such that:
\begin{equation}
{\cal A}_i \cap {\cal B}_j = \delta_{i,j}
\end{equation}
\bigskip
$\bullet${\bf Branch points in $x$:} They are the zeroes of $dx$ on the surface. We denote them by $a_i$, $i=1, \dots , d_2+1+2g$.
\bigskip
$\bullet${\bf Bergmann kernel:} It is the unique bilinear differential with only one double pole
at $p=q$ satisfying:
\begin{equation}\label{defB}
B(p,q)\mathop\sim_{p\to q} {dx(p)dx(q)\over (x(p)-x(q))^2}+{\rm
finite} \quad {\rm and} \quad \forall i \,\,\,\oint_{{p\in{\cal
A}_i}} B(p,q) = 0 \end{equation}
\bigskip
$\bullet${\bf Abelian differential of third kind:} It is the differential defined by
$dS_{q,r}(p) = \int_{q'=r}^{q} B(p,q')$. Notice that it has the following properties:
\begin{equation}\label{defdS} \mathop{\,\rm Res\,}_{p
\to q} dS_{q,r}(p) = 1 = -\mathop{\,\rm Res\,}_{p \to r} dS_{q,r}(p) \quad {\rm
and} \quad \forall i \,\,\,\oint_{{{\cal A}_i}} dS_{q,r}(p) = 0
\end{equation}
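As an illustration, on a genus-zero curve there are no ${\cal A}$-cycles, and if $p$ denotes a global uniformizing coordinate on the Riemann sphere, both objects take the familiar rational form
\begin{equation}
B(p,q) = {dp\, dq \over (p-q)^2} \qquad , \qquad dS_{q,r}(p) = \left( {1 \over p-q} - {1 \over p-r} \right) dp
\end{equation}
One checks directly the double pole of $B$ at $p=q$ normalized as in \eq{defB}, and the simple poles of $dS_{q,r}$ at $p=q$ and $p=r$ with residues $+1$ and $-1$, as in \eq{defdS}.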
\subsection{Fixed filling fractions}
To large $N$ leading order, the master loop equation
\eq{masterloopallgenus} reduces to the algebraic equation \eq{masterloopeq}: \begin{equation} E(x,Y(x))=0 \end{equation}
The coefficients of $E$ are determined using filling fractions.
Since $w_{1}(x) = V'_1(x)-Y(x)$, \eq{fixfrac} gives (up to a redefinition of ${\cal A}_i$):
\begin{equation}\label{deffillingfractions} {1\over 2i\pi}\oint_{{\cal A}_i} y
dx =-{1\over 2i\pi}\oint_{{\cal A}_i} x dy =\epsilon_i \end{equation}
Let us recall that (see section \ref{secdef}) the
$\epsilon_i$'s are called filling fractions, and they are given
parameters (moduli) of the model. They do not depend on the
potential or on any other parameter.
In particular, since all correlation functions
$w_{k}(x_1,\dots,x_k)$ are obtained by differentiation of $w_{1}$ with
respect to the potential $V_1$ (\cite{ACKM}), we have for $k\geq 2$:
\begin{equation}\label{vanishingAcycles} {1\over 2i\pi}\oint_{{\cal A}_i}
w_{k}(x_1,\dots,x_k) dx_1 =0 \end{equation}
Equation \eq{deffillingfractions}, together with the large $x$ and $y$
behaviors \eq{largex} and \eq{largey}, is sufficient to
determine completely all the coefficients of the polynomial
$E(x,y)$, and thus the leading large $N$ resolvent $w_{1}(x)$.
\medskip
In what follows, we assume that the leading resolvent, i.e. the
function $Y(x)$, is known, and we refer the reader to
the existing literature on that topic, for instance
\cite{MarcoF,eynmultimat,KazMar,Kri}.
\newsection{Diagrammatic solution as cubic
graphs}
In this section we present a first way of describing the solution
of the loop equation \eq{loop1} by trivalent diagrams, whose $h$-loop level
corresponds to the $h$-th term $W_k^{(h)}$ of the topological
expansion.
\subsection{Solution in the planar limit}
Before considering the full ${1 \over N^2}$ expansion, let us
focus on the structure of the leading terms corresponding to planar fat graphs.
Thus the $1/N^2$
terms in the loop equations are omitted.
From now on and particularly in this paragraph, we drop the genus
zero exponent ${(0)}$ when it is clear that we deal with the
planar limit, i.e. $w_k^{(0)}({\mathbf x}_K)\to w_k({\mathbf x}_K)$.
\smallskip
Up to now, the loop equations were written in terms of multivalued functions. It is more
appropriate to write them in terms of meromorphic differentials on the Riemann surface.
Thus, one writes \eq{loop1} in the planar limit as follows:
\begin{equation}\label{eqUk}
\begin{array}{rcl}
(y(r)-y(p)) U_{k}(p,y(r);{\mathbf p}_K) &=& - \sum_{j=0}^{k-1} \sum_{J\in K_j} {U_{j}(p,y(r);{\mathbf p}_J)\, W_{k-j+1}(p,{\mathbf p}_{K-J}) \over dx(p)} \cr
&& + \sum_{j=1}^k d_{p_j}\left( {{U}_{k-1}(p_j,y(r);{\mathbf p}_{K-\{j\}}) \over x(p)-x(p_j)}\,{dx(p)\over dx(p_j)}\right) \cr
&& - P_{k}(x(p),y(r);{\mathbf p}_K) dx(p)
\end{array}
\end{equation}
Starting from \eq{eqUk}, we determine $W_k$ and
$U_k$ for any $k$ by recursion on $k$.
Let us assume that one knows $W_{j}({\mathbf p}_J )$ for $j\leq k$ and
$U_{j}(p,{\mathbf p}_J )$ for $j \leq k-1$. The first step consists in
the determination of $W_{k+1}(p,{\mathbf p}_K)$ as a function of the
lower order correlation functions. The second step leads to the
computation of $U_{k}(p,{\mathbf p}_K)$. Once this is done, one knows
the correlation functions one order higher. The initial terms $W_2$ and $U_1$
can be found in the literature \cite{MarcoF,eynmultimat,KazMar}
and are rederived in {\emph{Appendix B}}.
\subsubsection{Determination of $W_{k+1}$ for $k\geq 2$}
If one chooses $r=p$ in \eq{eqUk}, one gets (using \eq{uoo} and \eq{uooY}):
\begin{equation}\label{eqWk}
\begin{array}{lll}
E_y(x(p),y(p)) W_{k+1}(p,{\mathbf p}_{K})
&=& - P_{k}(x(p),y(p);{\mathbf p}_K) \, dx(p) \cr
&& -\sum_{j=1}^{k-1} \sum_{J\in K_j} {U_{j}(p,y(p);{\mathbf p}_J)\, W_{k-j+1}(p,{\mathbf p}_{K-J}) \over dx(p)} \cr
&& + \sum_{j=1}^k d_{p_j}\left({{U}_{k-1}(p_j,y(p);{\mathbf p}_{K-\{j\}})\over x(p)-x(p_j)}\, {dx(p)\over dx(p_j)} \right) \cr
\end{array}
\end{equation}
Notice that the two equations \eq{eqUk} and \eq{eqWk} imply, by recursion, that $W_k$ and $U_k$ are
indeed meromorphic differentials on the curve, in all their variables.
\medskip
We define:
\begin{equation}\label{defRik}
\forall (i,j)\qquad
R_{k}^i(p^{j},p_K) := {U_{k}(p^{j},y(p^{i});p_K) \over
E_y(x(p^{j}),y(p^{i})) dx(p^{j})}
\end{equation}
Note that we have already obtained (see \eq{uoo}) that: \begin{equation}
R_{0}^i(p^{l}) = \delta_{i,l} \end{equation}
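This can be checked in one line: continuing \eq{uoo} to the sheet $l$ gives $U_{0}(p^{l},y) = {E(x,y)\, dx(p^{l}) \over y - y(p^{l})}$, so that
\begin{equation}
R_{0}^i(p^{l}) = {E(x,y(p^{i})) \over (y(p^{i})-y(p^{l}))\, E_y(x,y(p^{i}))}
\end{equation}
For $l\neq i$ the numerator vanishes, $y(p^{i})$ being a root of $E(x,\cdot)$, while for $l=i$ the $0/0$ limit gives $E_y/E_y=1$, as in \eq{uooY}.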
Using \eq{defdS}, the Cauchy formula gives: \begin{equation} {W}_{k+1}(p,{\mathbf p}_K) = -\mathop{\,\rm Res\,}_{p'\to
p} {W}_{k+1}(p',{\mathbf p}_{K}) dS_{p',\a}(p) \end{equation}
where $\a \in {\cal E}$ is an arbitrary point on the Riemann surface.
The integrand has poles in $p'$ only at $p'=p$ and the branch points
$p'=a_s$ (this can be proven recursively by differentiating with respect to the potential, ${\partial \over \partial V_1}$).
Using the Riemann bilinear identity
\eq{RiemannbilinearId}, we can then move the integration contour
and get: \begin{equation} {W}_{k+1}(p,{\mathbf p}_{K}) = \sum_s \mathop{\,\rm Res\,}_{p'\to a_s}
{W}_{k+1}(p',{\mathbf p}_{K}) dS_{p',\a}(p) \end{equation}
We now introduce the loop equation \eq{eqWk} inside this
expression and remark that only one term has poles when $p'\to
a_s$. Thus ${W}_{k+1}(p,{\mathbf p}_{K})$ can be written:
\begin{eqnarray}\label{recW} {W}_{k+1}(p,{\mathbf p}_{K}) &=& -\sum_s \mathop{\,\rm Res\,}_{p'\to
a_s} \sum_{j=1}^{k-1} \sum_{J\in K_j}
{{U}_{j}(p',y(p');{\mathbf p}_J)\over E_y(x(p'),y(p')) }
{{W}_{k-j+1}(p',{\mathbf p}_{K-J}) \over dx(p')} dS_{p',\a}(p) \cr &=&
-\sum_s \mathop{\,\rm Res\,}_{p'\to a_s} \sum_{j=1}^{k-1}\sum_{J\in K_j}
{R}_{j}^0(p',{\mathbf p}_J) {W}_{k-j+1}(p',{\mathbf p}_{K-J}) dS_{p',\a}(p) \cr
\end{eqnarray}
Notice that $U_{k}(p,y;{\mathbf p}_K)$ is a polynomial in $y$ whose degree is
equal to $d_2-1$. Considering its $d_2$ values for $y=y(p^{i})$
with $i\in [1,d_2]$, the interpolation formula reads:
\begin{equation} \forall y \,\,\,\, {(y-y(p)) U_{k}(p,y;{\mathbf p}_K) \over
E(x(p),y)} = - \sum_{i=1}^{d_2} {U_{k}(p,y(p^{i});{\mathbf p}_K)
(y(p)-y(p^{i})) \over (y-y(p^{i})) E_y(x(p),y(p^{i}))} \end{equation}
for $y=y(p)$, this gives: \begin{equation} R_{k}^0(p,{\mathbf p}_K) = -
\sum_{i=1}^{d_2} R_{k}^i(p,{\mathbf p}_K) \end{equation}
So, inserting this into \eq{recW}, one obtains the recursive formula for
$W_{k}({\mathbf p}_K)$: \begin{equation}\encadremath{\label{recWk}
W_{k+1}(p,{\mathbf p}_{K})= \sum_{i=1}^{d_2} \sum_{j=1}^{k-1}
\sum_{J\in K_j} \sum_s \mathop{\,\rm Res\,}_{p'\to a_s} {R}_{j}^i(p';{\mathbf p}_J)
{W}_{k-j+1}(p',{\mathbf p}_{K-J}) dS_{p',\a}(p) }\end{equation}
The sum over $J\in K_j$ represents the summation over all partitions of
$K$ into two subsets $J$ and $K-J$.
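As a consistency check (ours), the lowest case $k=2$ of \eq{recWk} involves only $j=1$, with $J=\{1\}$ or $J=\{2\}$, and gives
\begin{equation}
W_{3}(p,p_1,p_2)= \sum_{i=1}^{d_2} \sum_s \mathop{\,\rm Res\,}_{p'\to a_s} \left[ {R}_{1}^i(p';p_1)\, {W}_{2}(p',p_2) + {R}_{1}^i(p';p_2)\, {W}_{2}(p',p_1) \right] dS_{p',\a}(p)
\end{equation}
which is precisely the planar three-point function expanded diagrammatically in the examples below.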
\subsubsection{Determination of $R_{k}^i$}
In this section, we find a
recursion formula for $R_{k}^i$.
For this purpose, one needs an intermediate expression
for the different $U_{k}$'s, as well as a relation between the
values of \begin{equation} \sum_{j=0}^{k-1} U_{j}(p^{i},y(p);{\mathbf p}_J)
W_{k-j+1}(p^{i},{\mathbf p}_{K-J}) \end{equation} for different $i$'s.
Let us rewrite here \eq{eqUk}: \begin{eqnarray}\label{eqUkbis} (y(r)-y(q))
U_{k}(q,y(r);{\mathbf p}_K) &=& - \sum_{j=0}^{k-1} \sum_{J\in K_j}
\frac{1}{dx(q)}\, U_{j}(q,y(r);{\mathbf p}_J)\, W_{k-j+1}(q,{\mathbf p}_{K-J})
\cr && + \sum_{j=1}^k d_{p_j}\left(
{{U}_{k-1}(p_j,y(r);{\mathbf p}_{K-\{j\}}) \over
x(q)-x(p_j)}\,{dx(q)\over dx(p_j)}\right) \cr && -
P_{k}(x(q),y(r);{\mathbf p}_K) dx(q) \end{eqnarray}
In what follows, we use the
properties of rational functions defined on the base and not on the
Riemann surface (for more details, see the case $k=1$ in {\emph{Appendix B}}).
For $r=q=p^{i}$, \eq{eqUkbis} reads:
\begin{eqnarray}\label{eqRkint} 0 &=& - \sum_{j=0}^{k-1} \sum_{J\in K_j}
\frac{1}{dx(p^{i})}\, U_{j}(p^{i},y(p^{i});{\mathbf p}_J)\,
W_{k-j+1}(p^{i},{\mathbf p}_{K-J}) \cr && + \sum_{j=1}^k d_{p_j}\left(
{{U}_{k-1}(p_j,y(p^{i});{\mathbf p}_{K-\{j\}}) \over
x(p^{i})-x(p_j)}\,{dx(p^{i})\over dx(p_j)}\right) \cr && -
P_{k}(x(p^{i}),y(p^{i});{\mathbf p}_K) dx(p^{i}) \cr &=& -
\sum_{j=0}^{k-1} \sum_{J\in K_j} \frac{1}{dx(p)}\,
U_{j}(p^{i},y(p^{i});{\mathbf p}_J)\, W_{k-j+1}(p^{i},{\mathbf p}_{K-J})
\cr && + \sum_{j=1}^k d_{p_j}\left(
{{U}_{k-1}(p_j,y(p^{i});{\mathbf p}_{K-\{j\}}) \over
x(p)-x(p_j)}\,{dx(p)\over dx(p_j)}\right) \cr && -
P_{k}(x(p),y(p^{i});{\mathbf p}_K) dx(p) \end{eqnarray}
where we have used that
$x(p)=x(p^{i})$.
Now, write \eq{eqUkbis} with $r=p^{i}$ and $q=p$:
\begin{eqnarray} &&
(y(p^{i})-y(p)) U_{k}(p,y(p^{i});{\mathbf p}_K) \cr &=& -
\sum_{j=0}^{k-1} \sum_{J\in K_j} \frac{1}{dx(p)}\,
U_{j}(p,y(p^{i});{\mathbf p}_J)\, W_{k-j+1}(p,{\mathbf p}_{K-J}) \cr && +
\sum_{j=1}^k d_{p_j}\left(
{{U}_{k-1}(p_j,y(p^{i});{\mathbf p}_{K-\{j\}}) \over
x(p)-x(p_j)}\,{dx(p)\over dx(p_j)}\right) \cr && -
P_{k}(x(p),y(p^{i});{\mathbf p}_K) dx(p) \end{eqnarray}
and inserting \eq{eqRkint} we get:
\begin{eqnarray}\label{Uk} && (y(p^{i})-y(p))
U_{k}(p,y(p^{i});{\mathbf p}_K) \cr &=& - \sum_{j=0}^{k-1} \sum_{J\in
K_j} \frac{1}{dx(p)}\, U_{j}(p,y(p^{i});{\mathbf p}_J)\,
W_{k-j+1}(p,{\mathbf p}_{K-J}) \cr && + \sum_{j=0}^{k-1} \sum_{J\in K_j}
\frac{1}{dx(p)}\, U_{j}(p^{i},y(p^{i});{\mathbf p}_J)\,
W_{k-j+1}(p^{i},{\mathbf p}_{K-J}) \end{eqnarray}
This formula is in principle sufficient to compute the $U_{k}$'s
recursively, and then the $R_{k}^i$'s. However,
what we need in order to get diagrammatic rules is a closed recursion
relation for the $R^i_{k}$'s themselves. To this end, we show the
following:
\medskip
{\emph{Lemma:}} For any $k\geq 1$, one has:
\begin{eqnarray}\label{UWW}
U_{k}(p,y;{\mathbf p}_K) &=& {E(x(p),y) dx(p)\over
y-y(p)}\sum_{r=1}^{d_2} \sum_{K_1\cup\dots\cup K_r=K}
\sum_{j_1\neq j_2\neq \dots \neq j_r=1}^{d_2}\cr && \qquad
\prod_{t=1}^r {W_{|K_t|+1}(p^{j_t},{\mathbf p}_{K_t})\over
(y-y(p^{j_t}))\,dx(p)}\cr \end{eqnarray}
where the sum over
$K_1\cup\dots\cup K_r=K$ is a sum over all partitions of $K$ into
$r$ subsets.
{\emph{Proof:}} It can be proven easily by recursive action of
${\partial/\partial V_1}$, as in \cite{ACKM}; however, in order to
have a self-contained method, we want to derive it here only from
the loop equations \eq{loop1}.
The proof works by recursion on $k$. It is proven in {\emph{Appendix B}} for $k=1$.
Let us assume that it holds for any $l \leq k-1$.
Notice that, since both sides of \eq{UWW} are polynomials in $y$
of degree $d_2-1$, it is sufficient to prove that the equality
holds for $d_2$ values of $y$, namely for $y=y(p^{i})$,
$i=1,\dots,d_2$. Therefore, one has to
prove that:
\begin{eqnarray} {U_{k}(p,y(p^{i});{\mathbf p}_K) \over dx(p)} &=&
{E_y(x(p^{i}),y(p^{i})) \over y(p^{i})-y(p)}
\sum_{r=1}^{d_2} \sum_{K_1\cup\dots\cup K_r=K} \sum_{j_1\neq
j_2\neq \dots \neq j_{r-1}\neq 0,i} \cr &&
{W_{|K_r|+1}(p^{i},{\mathbf p}_{K_r})\over \,dx(p)}\,\prod_{t=1}^{r-1}
{W_{|K_t|+1}(p^{j_t},{\mathbf p}_{K_t})\over (y(p^{i})-y(p^{j_t}))\,dx(p)}
\end{eqnarray} where only the sums in which one of the $j_t$'s is equal to
$i$ contribute.
The recursion hypothesis for $j\leq k-1$, and any $J\in K_j$
gives:
\begin{eqnarray} {U_{j}(p^{i},y(p^{i});{\mathbf p}_J)\over dx(p)} &=&
E_y(x(p^{i}),y(p^{i})) \sum_{r=1}^{d_2} \sum_{J_1\cup\dots\cup
J_r=J} \sum_{j_1\neq j_2\neq \dots \neq j_r\neq i} \cr && \qquad
\prod_{t=1}^r {W_{|J_t|+1}(p^{j_t},{\mathbf p}_{J_t})\over
(y(p^{i})-y(p^{j_t}))\,dx(p)} \cr \end{eqnarray}
In order to compute
$U_{j}(p,y(p^{i});{\mathbf p}_J)$, one has to keep only the terms in the
sum for which there exists a $t$ such that $j_t=i$, i.e.
\begin{eqnarray}
{U_{j}(p,y(p^{i});{\mathbf p}_J)\over dx(p)} &=&
E_y(x(p^{i}),y(p^{i}))\, \sum_{r=1}^{d_2}
\sum_{J_1\cup\dots\cup J_r=J} \sum_{j_1\neq j_2\neq \dots \neq
j_{r-1}\neq 0,i} \cr && \qquad
{W_{|J_{r}|+1}(p^{i},{\mathbf p}_{J_{r}})\over
(y(p^{i})-y(p))\,dx(p)}\,\prod_{t=1}^{r-1}
{W_{|J_t|+1}(p^{j_t},{\mathbf p}_{J_t})\over
(y(p^{i})-y(p^{j_t}))\,dx(p)} \cr \end{eqnarray}
Insert that into \eq{Uk}:
\begin{eqnarray} && {(y(p^{i})-y(p))
U_{k}(p,y(p^{i});{\mathbf p}_K)} \cr &=& - E_y(x(p^{i}),y(p^{i}))
\sum_{j=0}^{k-1} \sum_{J\in K_j} \sum_{r=1}^{d_2}
\sum_{J_1\cup\dots\cup J_r=J} \sum_{j_1\neq j_2\neq \dots \neq
j_{r-1}\neq 0,i} \cr && \qquad W_{k-j+1}(p,{\mathbf p}_{K-J})
{W_{|J_{r}|+1}(p^{i},{\mathbf p}_{J_{r}})\over
(y(p^{i})-y(p))\,dx(p)}\,\prod_{t=1}^{r-1}
{W_{|J_t|+1}(p^{j_t},{\mathbf p}_{J_t})\over
(y(p^{i})-y(p^{j_t}))\,dx(p)} \cr && +
E_y(x(p^{i}),y(p^{i}))\,\sum_{j=0}^{k-1} \sum_{J\in K_j}
\sum_{r=1}^{d_2} \sum_{J_1\cup\dots\cup J_r=J} \sum_{j_1\neq
j_2\neq \dots \neq j_r\neq i} \cr && \qquad
W_{k-j+1}(p^{i},{\mathbf p}_{K-J}) \prod_{t=1}^r
{W_{|J_t|+1}(p^{j_t},{\mathbf p}_{J_t})\over
(y(p^{i})-y(p^{j_t}))\,dx(p)} \cr \end{eqnarray}
The difference between
these two summations keeps only $j_t\neq 0,i$, thus:
\begin{eqnarray} &&
U_{k}(p,y(p^{i});{\mathbf p}_K) \cr &=&
E_y(x(p^{i}),y(p^{i}))\,dx(p)\,\sum_{j=0}^{k-1} \sum_{J\in
K_j} \sum_{r=1}^{d_2} \sum_{J_1\cup\dots\cup J_r=J} \sum_{j_1\neq
j_2\neq \dots \neq j_r\neq i,0} \cr && \qquad
{W_{k-j+1}(p^{i},{\mathbf p}_{K-J})\over (y(p^{i})-y(p))\,dx(p)}
\prod_{t=1}^r {W_{|J_t|+1}(p^{j_t},{\mathbf p}_{J_t})\over
(y(p^{i})-y(p^{j_t}))\,dx(p)} \cr \end{eqnarray}
i.e. we have proven the
lemma for $k$, for $y=y(p^{i})$, and since both sides are
polynomials in $y$ of degree $d_2-1$, the equality holds for all
$y$.
\begin{flushright}
$\bullet$
\end{flushright}
\bigskip
{\emph{Theorem:}} For all $k\geq 1$, one has:
\begin{equation}\label{equality}
\begin{array}{lll}
&& \sum_{i=1}^{d_2} \sum_{j=0}^{k-1} \sum_{J\in K_j}
U_{j}(p^{i},y(p);{\mathbf p}_J) W_{k-j+1}(p^{i},{\mathbf p}_{K-J}) \cr &=&
\sum_{j=1}^{k-1} \sum_{J\in K_j} U_{j}(p,y(p);{\mathbf p}_J)
W_{k-j+1}(p,{\mathbf p}_{K-J})
\end{array}
\end{equation}
{\emph{Proof of the theorem:}} Let us simply perform some basic
rearrangements: \begin{eqnarray} &&\sum_{i=1}^{d_2} \sum_{j=0}^{k-1}
\sum_{J\in K_j} U_{j}(p^{i},y(p);{\mathbf p}_J)
W_{k-j+1}(p^{i},{\mathbf p}_{K-J}) \cr &=& \sum_{K_1\bigcup L = K}
\sum_{j_1=1}^{d_2} W_{|K_1|+1}(p^{j_1},{\mathbf p}_{K_1})
U_{|L|}(p^{j_1},y(p);{\mathbf p}_{L})\cr & = & {E_y(x(p),y(p))} dx(p)
\sum_{K_1\bigcup L = K} \sum_{j_1=1}^{d_2} \sum_{r=1}^{d_2}
\sum_{K_2\cup\dots\cup K_{r+1}=L} \sum_{j_2\neq j_3\neq \dots
\neq j_{r} \in [1,d_2]-\{j_1\} } \cr &&
W_{|K_1|+1}(p^{j_1},{\mathbf p}_{K_1})
{W_{|K_{r+1}|+1}(p,{\mathbf p}_{K_{r+1}})\over (y(p)-y(p^{j_1}))}
\prod_{a=2}^r {W_{|K_a|+1}(p^{j_a},{\mathbf p}_{K_a}) \over (y(p)-y(p^{j_a})) dx(p)}\cr
& = & {E_y(x(p),y(p))} dx(p) \sum_{r=1}^{d_2}
\sum_{K_1\cup\dots\cup K_{r+1}=K} \sum_{j_1\neq j_2\neq \dots \neq
j_{r}=1}^{d_2}\cr & & \prod_{a=1}^r
{W_{|K_a|+1}(p^{j_a},{\mathbf p}_{K_a})
W_{|K_{r+1}|+1}(p,{\mathbf p}_{K_{r+1}}) \over (y(p)-y(p^{j_a}))
dx(p)}\cr &=& \sum_{K_{r+1}\bigcup J = K}
W_{|K_{r+1}|+1}(p,{\mathbf p}_{K_{r+1}}) U_{|J|}(p,y(p);{\mathbf p}_J)\cr \end{eqnarray}
\begin{flushright}
$\bullet$
\end{flushright}
\vspace{0.7cm}
This identity simplifies \eq{Uk}, which now becomes: \begin{eqnarray} &&
(y(p^{i})-y(p)) R_k^i(p,{\mathbf p}_K) dx(p) = \cr &&
W_{k+1}(p^{i},{\mathbf p}_{K}) + \sum_{j=1}^{k-1} \sum_{J\in K_j}
\sum_{l\neq 0,i} {U_{j}(p^{l},y(p^{i});{\mathbf p}_J)
W_{k-j+1}(p^{l},{\mathbf p}_{K-J}) \over E_y(x(p),y(p^{i})) dx(p)}\cr
\end{eqnarray}
One can now write down the final recursion formula for
$R_k^i(p,{\mathbf p}_K)$ as follows:
\begin{equation}\label{recUk}\encadremath{
\begin{array}{rcl}
R_k^i(p,{\mathbf p}_K) &=& {W_{k+1}(p^{i},{\mathbf p}_{K}) \over
(y(p^{i})-y(p)) dx(p)} \cr && + \sum_{j=1}^{k-1} \sum_{J\in K_j}
\sum_{l\neq 0,i} {R_j^{i}(p^{l},{\mathbf p}_J)
W_{k-j+1}(p^{l},{\mathbf p}_{K-J}) \over (y(p^{i})-y(p)) dx(p)}\cr
\end{array}
}\end{equation}
\bigskip
The relations \eq{recW} and \eq{recUk} allow one to compute $W_k$ recursively for any $k$.
This solution can be represented by binary trees, as presented in section \ref{cub}.
\subsection{Solution for any genus}
In the previous paragraph, one has kept only the leading terms when
performing the changes of variables to obtain the Schwinger-Dyson
equations. Let us now write the ${1 \over N^2}$ corrective terms
for the same changes of variables, so as to obtain a system of
equations giving the whole ${1 \over N^2}$ expansion. One obtains
the following loop equations (from now on, the summation over the subsets $J\in K_j$ accompanying each sum over $j$ is left implicit):
\begin{equation}\label{eqUklg0}
\begin{array}{lll}
&& (y(r)-y(p)) {U}_{k}(p,y(r);{\mathbf p}_K) \cr &=& -
P_{k}(x(p),y(r);{\mathbf p}_K) dx(p) - \sum_{j=0}^{k-1} {1 \over dx(p)}
{U}_{j}(p,y(r);{\mathbf p}_J) {W}_{k-j+1}(p,{\mathbf p}_{K-J}) \cr && - {1 \over
N^2} {U_{k+1}(p,y(r);p,{\mathbf p}_K) \over dx(p)} + \sum_j d_{p_j}
\left({{U}_{k-1}(p_j,y(r);{\mathbf p}_{K-\{j\}}) \over x(p)-x(p_j)}
{dx(p) \over dx(p_j)} \right) \cr
\end{array}
\end{equation}
In what follows, one should keep in mind the expression of the
function $Y(x)$: \begin{equation} Y(x):=V'_1(x)-\overline{w}_{1}(x) \end{equation}
Then, for $h\geq 1$: \begin{equation} Y^{(h)}(x(p)) = -{W_{1}^{(h)}(p)\over
dx(p)} \end{equation}
Consider now the ${1 \over N^2}$ expansion of this equation order
by order. The genus $h$ term (corresponding to the ${1 \over N^{2h}}$ term) gives: \begin{equation}\label{eqUklgh}
\begin{array}{lll}
&& (y(r)-y(p)) {U}_{k}^{(h)}(p,y(r);{\mathbf p}_K) - \sum_{m=1}^h
Y^{(m)}(x(p)) {U}_{k}^{(h-m)}(p,y(r);{\mathbf p}_K) \cr &=& -
P_{k}^{(h)}(x(p),y(r);{\mathbf p}_K) dx(p) \cr && - \sum_{m=0}^h
\sum_{j=0}^{k-1} {1 \over dx(p)} {U}_{j}^{(m)}(p,y(r);{\mathbf p}_J)
{W}_{k-j+1}^{(h-m)}(p,{\mathbf p}_{K-J}) \cr && -
{U_{k+1}^{(h-1)}(p,y(r);p,{\mathbf p}_K) \over dx(p)} + \sum_j d_{p_j}
\left( {{U}_{k-1}^{(h)}(p_j,y(r);{\mathbf p}_{K-\{j\}}) \over
x(p)-x(p_j)} {dx(p) \over dx(p_j)} \right) \cr
\end{array}
\end{equation}
When $y(r)=y(p)$:
\begin{equation}\label{eqWklg0}
\begin{array}{lll}
&& \sum_{m=1}^h Y^{(m)}(x(p)) {U}_{k}^{(h-m)}(p,y(p);{\mathbf p}_K) \cr
&=& P_{k}^{(h)}(x(p),y(p);{\mathbf p}_K) dx(p) + \sum_{m=0}^h
\sum_{j=0}^{k-1} {1 \over dx(p)} {U}_{j}^{(m)}(p,y(p);{\mathbf p}_J)
{W}_{k-j+1}^{(h-m)}(p,{\mathbf p}_{K-J}) \cr && +
{U_{k+1}^{(h-1)}(p,y(p);p,{\mathbf p}_K) \over dx(p)} - \sum_j d_{p_j}
\left( {{U}_{k-1}^{(h)}(p_j,y(p);{\mathbf p}_{K-\{j\}}) \over
x(p)-x(p_j)} {dx(p) \over dx(p_j)} \right) \cr
\end{array}
\end{equation}
These two equations are the generalization of \eq{eqUk} and
\eq{eqWk} for any genus in the topological expansion. With all
these tools, we are now able to compute all the terms of the
${1\over N^2}$ expansion of non mixed traces.
In this section, we proceed in two steps to compute the
correlation function $W_{k}^{(h)}$ for any $k$ and any $h$, and
represent it as a Feynman graph with $h$ loops. The first step
consists in the determination of a recursive relation for
$W_{k}^{(h)}$, whereas the second one gives $R_{k}^{i,(h)}(p^{j},{\mathbf p}_K):={U_{k}^{(h)}(p^{j},y(p^{i});{\mathbf p}_K) \over
E_y(x(p^{j}),y(p^{i})) dx(p^{j})}$,
the lower order terms being considered as known.
In what follows, let $h$ and $k$ be two given positive integers. Let
us consider $W_{j}^{(m)}$ known for any $j$ if $m<h$ and any $j\leq
k$ if $m=h$. One also assumes that $R_{j}^{i,(m)}$ is known for any
$i$ and any $j$ if $m<h$ and any $j< k$ if $m=h$. Starting from these
assumptions, one computes $W_{k+1}^{(h)}$ and $R_{k}^{i,(h)}$,
which allows one to obtain every term recursively.
\subsubsection{A recursive formula for $W_{k+1}^{(h)}$}
Let us rewrite \eq{eqWklg0} in a form more suitable for emphasizing
that it allows us to compute $W_{k+1}^{(h)}(p,{\mathbf p}_K)$ under our
assumptions:
\begin{equation}\label{6110}
\begin{array}{lll}
&& W_{k+1}^{(h)}(p,{\mathbf p}_K) U_{0}(p,y(p)) = \cr && -
\sum_{m=0}^{h-1} W_{1}^{(h-m)}(p) {U}_{k}^{(m)}(p,y(p);{\mathbf p}_K) \cr
&& - P_{k}^{(h)}(p,y(p);{\mathbf p}_K) dx(p)^2 \cr && - \sum_{m=0}^h
\sum_{j=0, m+j \neq 0}^{k-1} {U}_{j}^{(m)}(p,y(p);{\mathbf p}_J)
{W}_{k-j+1}^{(h-m)}(p,{\mathbf p}_{K-J}) \cr && -
U_{k+1}^{(h-1)}(p,y(p);p,{\mathbf p}_K) + \sum_j d_{p_j} \left(
{{U}_{k-1}^{(h)}(p_j,y(p);{\mathbf p}_{K-\{j\}}) \over x(p)-x(p_j)}
{dx(p) \over dx(p_j)} \right) dx(p) \cr
\end{array}
\end{equation}
Remark that the RHS contains only known terms except
$P_{k}^{(h)}(p,y(p);{\mathbf p}_K)$. Fortunately, it plays no role in the
Cauchy formula.
Indeed, we write the Cauchy formula, move the integration contour,
and cancel the integrals around the cycles thanks to the Riemann bilinear identity \eq{RiemannbilinearId}. This gives: \begin{eqnarray}
{W}_{k+1}^{(h)}(p,{\mathbf p}_{K}) &=& - \mathop{\,\rm Res\,}_{p'\to p}
{W}_{k+1}^{(h)}(p',{\mathbf p}_{K}) dS_{p',\a}(p) \cr & = & \sum_s
\mathop{\,\rm Res\,}_{p'\to a_s} {W}_{k+1}^{(h)}(p',{\mathbf p}_{K}) dS_{p',\a}(p) \end{eqnarray}
We now introduce \eq{6110} inside this formula and keep only terms
which have poles at the branch points: \begin{equation}
\begin{array}{l}
W_{k+1}^{(h)}(p,{\mathbf p}_K) = \cr
- \sum_{m=0}^{h-1} \sum_s \mathop{\,\rm Res\,}_{p'\to a_s} W_{1}^{(h-m)}(p') {R}_{k}^{0,(m)}(p';{\mathbf p}_K) dS_{p',o}(p)\cr
- \sum_{m=0}^h \sum_{j=0, m+j \neq 0}^{k-1} \sum_s \mathop{\,\rm Res\,}_{p'\to a_s} {R}_{j}^{0,(m)}(p';{\mathbf p}_J) {W}_{k-j+1}^{(h-m)}(p',{\mathbf p}_{K-J}) dS_{p',o}(p)\cr
- \sum_s \mathop{\,\rm Res\,}_{p'\to a_s} R_{k+1}^{0,(h-1)}(p';p',{\mathbf p}_K) dS_{p',o}(p)
\end{array}
\end{equation}
For convenience, let us set: \begin{equation} W_{1}^{(0)}(p)\equiv 0 \end{equation}
Then, using $R_{k}^{0,(m)} = -\sum_{i=1}^{d_2} R_{k}^{i,(m)}$ (the all-genus analogue of the interpolation identity derived above), the recursive definition of $W_{k+1}^{(h)}(p,{\mathbf p}_K)$ reads:
\begin{equation}\label{soluce1}\encadremath{
\begin{array}{l}
W_{k+1}^{(h)}(p,{\mathbf p}_K) = \cr
\sum_{i=1}^{d_2} \sum_{m=0}^h \sum_{j=0, m+j \neq 0}^{k} \sum_s \mathop{\,\rm Res\,}_{p'\to a_s} {R}_{j}^{i,(m)}(p';{\mathbf p}_J) {W}_{k-j+1}^{(h-m)}(p',{\mathbf p}_{K-J}) dS_{p',o}(p)\cr
+ \sum_{i=1}^{d_2} \sum_s \mathop{\,\rm Res\,}_{p'\to a_s} R_{k+1}^{i,(h-1)}(p';p',{\mathbf p}_K) dS_{p',o}(p)\cr
\end{array}
}\end{equation}
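For instance, for $k=0$ and $h=1$, the double sum in \eq{soluce1} contains only the term $(j,m)=(0,1)$, which vanishes thanks to the convention $W_1^{(0)}\equiv 0$, so that only the last line survives:
\begin{equation}
W_{1}^{(1)}(p) = \sum_{i=1}^{d_2} \sum_s \mathop{\,\rm Res\,}_{p'\to a_s} R_{1}^{i,(0)}(p';p')\, dS_{p',o}(p)
\end{equation}
This is the one-loop correction to the one-point function computed explicitly in the examples below.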
\subsubsection{A recursive formula for $R_{k}^{i,(h)}$}
The second step consists in the derivation of an equivalent
formula for $R_{k}^{i,(h)}$. We proceed in the same way as for the
genus 0 case: we use the rational properties of some of the
correlation functions to write the recursive formula, with the aid
of a relation similar to \eq{equality}.
Let $G_k^{(h)}(x(q),y(r))$ be:
\begin{eqnarray}
G_k^{(h)}(x(q),y(r)) &=& (y(r)-y(q)) {U}_{k}^{(h)}(q,y(r);{\mathbf p}_K) + {U_{k+1}^{(h-1)}(q,y(r);q,{\mathbf p}_K) \over dx(q)} \cr
&& + \sum_{m=1}^h
\sum_{j=0}^{k} {1 \over dx(q)} {U}_{j}^{(m)}(q,y(r);{\mathbf p}_J)
{W}_{k-j+1}^{(h-m)}(q,{\mathbf p}_{K-J}) \cr
&& + \sum_{j=0}^{k-1} {1 \over dx(q)} {U}_{j}(q,y(r);{\mathbf p}_J)
{W}_{k-j+1}^{(h)}(q,{\mathbf p}_{K-J}) \cr
\end{eqnarray}
The loop equation \eq{eqUklgh} shows that $G_k^{(h)}(x(q),y(r))$ is a rational function in $x(q)$
and a polynomial in $y(r)$.
Thus, one has:
\begin{equation}
G_k^{(h)}(x(p^i),y(p^i)) = G_k^{(h)}(x(p),y(p^i))
\end{equation}
which can be written:
\begin{equation}\label{solUkh}
\begin{array}{rcl}
(y(p^{i})-y(p)) {U}_{k}^{(h)}(p,y(p^{i});{\mathbf p}_K) &=&\sum_{m=0}^{h} \sum_{j=0}^{k} {{W}_{j+1}^{(m)}(p^{i},{\mathbf p}_J) {U}_{k-j}^{(h-m)}(p^{i},y(p^{i});{\mathbf p}_{K-J}) \over dx(p)}\cr
&& + {U_{k+1}^{(h-1)}(p^{i},y(p^{i});p^{i},{\mathbf p}_K) \over
dx(p)}\cr && - \sum_{m=0}^{h} \sum_{j=0}^{k}
{{W}_{j+1}^{(m)}(p,{\mathbf p}_{J})
{U}_{k-j}^{(h-m)}(p,y(p^{i});{\mathbf p}_{K-J}) \over dx(p)}\cr && -
{U_{k+1}^{(h-1)}(p,y(p^{i});p,{\mathbf p}_K) \over dx(p)}\cr
\end{array}
\end{equation}
We now establish a relation similar to \eq{equality} in order to
present our recursive formula in such a way that it can be
graphically interpreted.
To this end, one has to determine an explicit
intermediate formula for $U_{k}^{(h)}(p,y;{\mathbf p}_K)$. One has
(for the proof, see {\emph{Appendix C}}): \begin{equation}\label{eqUW}
\begin{array}{l}
U_{k}^{(h)}(p,y(p^{i});{\mathbf p}_K)= \cr {E_y(x,y(p^{i})) \over
y(p^{i})-y(p)} \sum_{r=1}^{\min(d_2,k+h)} \sum_{ K_1 \bigcup
\dots \bigcup K_r = K} \sum_{h_{\alpha} = 0}^h
\sum_{k_\alpha=|K_\alpha|}^{k+h} \sum_{j_{\alpha,\beta} \neq
j_{\alpha',\beta'} \in [1,d_2]-\{i\}} {1 \over \Omega} \cr
{W_{k_1+1}^{(h_1)}(p^{i}, {\mathbf p}_{K_1} ,p^{j_{1,1}}, \dots
,p^{j_{1,k_1-|K_1|}}) \left(\prod_{\alpha=2}^{r}
W_{k_{\alpha}+1}^{(h_{\alpha})}(p^{j_{\alpha,0}},{\mathbf p}_{K_{\alpha}}
,p^{j_{\alpha,1}}, \dots
,p^{j_{\alpha,k_\alpha-|K_\alpha|}})\right) \over
dx(p)^{r-k-1+\sum k_\alpha} \prod_{\alpha,\beta}
(y(p^{i})-y(p^{j_{\alpha,\beta}}))}\cr
\end{array}
\end{equation}
where $\Omega = \prod_{\alpha} (k_\alpha-|K_\alpha|)!$ is a symmetry factor and one has the following constraints:
\begin{itemize}
\item $\sum_{\a} (h_{\alpha}+k_{\alpha}) = h+k$; \item $0 \leq
|K_{\alpha}| \leq k_{\alpha}$
\end{itemize}
One should note that the only external parameter entering these
constraints is $k+h$.
It is now possible to derive an equality equivalent to
\eq{equality}. One shows -- in {\emph{Appendix D}} -- that:
\begin{eqnarray}
\label{equality2} &&\sum_{m=0}^{h} \sum_{j=0;\,(j,m)\neq (k,h)}^{k}
{W}_{j+1}^{(m)}(p,{\mathbf p}_J) {U}_{k-j}^{(h-m)}(p,y(p);{\mathbf p}_{K-J}) +
{U_{k+1}^{(h-1)}(p,y(p);p,{\mathbf p}_K) } \cr &=& \sum_{i=1}^{d_2}
\sum_{m=0}^{h} \sum_{j=0;\,(j,m)\neq (k,h)}^{k}
{W}_{j+1}^{(m)}(p^{i},{\mathbf p}_J)
{U}_{k-j}^{(h-m)}(p^{i},y(p);{\mathbf p}_{K-J}) \cr && +
\sum_{i=1}^{d_2} {U_{k+1}^{(h-1)}(p^{i},y(p);p^{i},{\mathbf p}_K) }
\cr \end{eqnarray}
This equality allows us to write: \begin{equation}
\begin{array}{l}
(y(p^{i})-y(p)) {U}_{k}^{(h)}(p,y(p^{i});{\mathbf p}_K) = \cr
\sum_{m=0}^{h} \sum_{j=0;\,(j,m)\neq (k,h)}^{k} \sum_{l\neq 0,i}
{{W}_{j+1}^{(m)}(p^{l},{\mathbf p}_J)
{U}_{k-j}^{(h-m)}(p^{l},y(p^{i});{\mathbf p}_{K-J}) \over dx(p)}\cr
+ \sum_{l\neq 0,i} {U_{k+1}^{(h-1)}(p^{l},y(p^{i});p^{l},{\mathbf p}_K) \over dx(p)} + W_{k+1}^{(h)}(p^{i},{\mathbf p}_K) E_y(x,y(p^{i}))\cr
\end{array}
\end{equation}
That is to say: \begin{equation}\label{soluce2}\encadremath{
\begin{array}{rcl}
R_{k}^{i,(h)}(p,{\mathbf p}_K) &=& \sum_{m=0}^{h} \sum_{j=0;\,(j,m)\neq
(k,h)}^{k} \sum_{l\neq 0,i} {{W}_{j+1}^{(m)}(p^{l},{\mathbf p}_J)
{R}_{k-j}^{i,(h-m)}(p^{l};{\mathbf p}_{K-J}) \over (y(p^{i})-y(p))
dx(p) }\cr && + \sum_{l\neq 0,i}
{R_{k+1}^{i,(h-1)}(p^{l};p^{l},{\mathbf p}_K) \over (y(p^{i})-y(p))
dx(p)} + {W_{k+1}^{(h)}(p^{i},{\mathbf p}_K) \over (y(p^{i})-y(p))
dx(p) }\cr
\end{array}
}\end{equation}
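As a sanity check, for $k=1$ and $h=0$ the sums in \eq{soluce2} either involve $W_1^{(0)}\equiv 0$ or the absent order $h-1=-1$, so that only the last term survives:
\begin{equation}
R_{1}^{i,(0)}(p,p_1) = {W_{2}^{(0)}(p^{i},p_1) \over (y(p^{i})-y(p))\, dx(p)}
\end{equation}
in agreement with the genus-zero relation \eq{recUk} taken for $k=1$.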
\subsection{Diagrammatic solution: a cubic theory}
\label{cub}
This section is the principal part of the article. We define a
correspondence between the correlation functions and a system of
Feynman-like graphs. To every $k$-point function of genus $h$, we
associate graphs with $k$ external legs and $h$ loops; \eq{soluce1} and \eq{soluce2} then become two relations expressing these
graphs in terms of graphs with fewer legs or loops, thanks to the rules we introduce in this part.
First of all, let us represent diagrammatically \eq{prop1} and
\eq{prop2} as the propagators of the theory:
\begin{equation} W_{2}(p,q) = \begin{array}{l} {\epsfxsize
3.5cm\epsffile{W2.eps}}
\end{array}
\end{equation}
and
\begin{equation} R_{1}^i(p,p_1) = \begin{array}{l} {\epsfxsize
3.5cm\epsffile{R1.eps}}
\end{array}
\end{equation}
These two diagrams are the building blocks of the whole
representation: they allow one to draw the $k>2$ correlation
functions.
Note that the second propagator can also be seen as a vertex of valence 2,
and this is the way it will be presented in the diagrammatic rules.
\smallskip
Let us now introduce the whole diagrammatic representation:
Let $R_{k}^{i,(h)}$ and $W_{k+1}^{(h)}$ be
represented as white and black disks, respectively, with $h$ holes and $k$ external legs
(remember that $W_{k+1}^{(h)}$
is the generating function of discrete surfaces with $k+1$ boundaries and $h$ handles):
\begin{equation} W_{k+1}^{(h)}(p,p_K) := \begin{array}{l} {\epsfxsize
4cm\epsffile{Wh.eps}}
\end{array}
\end{equation}
\begin{equation} R_{k}^{i,(h)}(p,p_K) := \begin{array}{l} {\epsfxsize
4cm\epsffile{Rh.eps}}
\end{array}
\end{equation}
Let us introduce also the following propagators and vertices:
\begin{center}
\begin{tabular}{|r|l|}\hline
non-arrowed propagator:&
$
\begin{array}{r}
{\epsfxsize 2.5cm\epsffile{W2.eps}}
\end{array}
:=W_{2}(p,q)
$
\cr\hline
arrowed propagator:&
$
\begin{array}{r}
{\epsfxsize 2.5cm\epsffile{arrowedpropagator.eps}}
\end{array}
:=dS_{q,o}(p)
$
\cr\hline
Residue cubic-vertex:&
$
\begin{array}{r}
{\epsfxsize 2.5cm\epsffile{ncvertex.eps}}
\end{array}
:= \sum_s \mathop{\,\rm Res\,}_{q\rightarrow a_s}
$
\cr\hline
colored cubic-vertices:&
$
\begin{array}{r}
{\epsfxsize 2cm\epsffile{cvertex.eps}}
\end{array}
:={(1 - \delta_{l,m}) (1-\delta_{m,i}) (1-\delta_{i,l})\over (y(p^{i})-y(p^{l}))dx(p)}
$
\cr\hline
2-valent vertex:&
$
\begin{array}{r}
{\epsfxsize 2cm\epsffile{simplevertex.eps}}
\end{array}
:= {1 \over (y(p^{i})-y(p^{l}))dx(p)} (1-\delta_{i,l})
$
\cr
\hline
\end{tabular}
\end{center}
One can now simply interpret the recursion relations \eq{soluce1}
and \eq{soluce2} in terms of diagrams.
The relation \eq{soluce1} reads:
\begin{eqnarray}
\begin{array}{r}
{\epsfxsize 3cm\epsffile{Wh.eps}}
\end{array}
&=&\sum_{i=1}^{d_2} \sum_{m=0}^h \sum_{j=0, m+j\neq 0}^k
\sum_{J\in K_j} \begin{array}{l} {\epsfxsize
4cm\epsffile{eqWh1.eps}} \end{array} \cr && +
\sum_{i=1}^{d_2}\begin{array}{l} {\epsfxsize
5cm\epsffile{eqWh2.eps}} \end{array} \cr \end{eqnarray}
And given lower order $R_{l}^{i,(m)}$'s and $W_{l}^{(m)}$'s, one can
obtain $R_{k}^{i,(h)}$ diagrammatically by writing \eq{soluce2}:
\begin{equation}
\begin{array}{rcl}
\begin{array}{r}
{\epsfxsize 3cm\epsffile{Rh.eps}}
\end{array}
&=& \sum_{m=0}^h \sum_{j=0, m+j\neq 0}^k \sum_{J\in K_j}
\sum_{l=0}^{d_2} \begin{array}{l} {\epsfxsize
3.2cm\epsffile{eqRh1.eps}} \end{array}\cr &&+ \sum_{l=0}^{d_2}
\begin{array}{l} {\epsfxsize 5cm\epsffile{eqRh2.eps}} \end{array}
\cr && \qquad +\begin{array}{l} {\epsfxsize
5cm\epsffile{eqRh4.eps}} \end{array} \cr
\end{array}
\end{equation}
From these diagrammatic relations, one can see that $W_{k+1}^{(h)}$ is obtained by {\em the summation
over all diagrams with 1 root, $k$ leaves and $h$ loops}
following the rules:
{\em
\begin{itemize}
\item The vertices have valence 2 or 3; there are $2h+k-1$
trivalent vertices;
\item The edges are arrowed or not; the arrowed edges are waved
or not;
\item The subgraph made of arrowed edges forms a skeleton tree (i.e. a tree whose vertices have valence up to 3);
\item from each trivalent vertex comes one waved and one
non-waved propagator;
\item two vertices linked with a waved propagator have different indices;
\item the $k$ leaves are non-arrowed propagators finishing at
$p_j$'s (i.e. $B(.,p_j)$);
\item the root is an arrowed non waved propagator starting from $p$.
\end{itemize}
}
A practical way to draw these graphs is the following: draw every skeleton
tree of arrows; put $k$ non-arrowed propagators as leaves;
close the tree with $h$ non-arrowed propagators linking one vertex to one of its descendants,
in order to obtain $h$ loops; then put waves so that from each trivalent
vertex come one waved and one non-waved arrow, with the possibility that a
waved arrow leads to a bivalent vertex.
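As a quick counting check (ours): for $W_3^{(0)}$ one has $k=2$, $h=0$, hence $2h+k-1=1$ trivalent vertex per diagram, and for $W_1^{(1)}$ one has $k=0$, $h=1$, giving again one trivalent vertex; both counts agree with the diagrams displayed in the examples below.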
{\bf Remarks:}
\begin{itemize}
\item The residues are computed by following the arrows backwards, from the leaves
to the root.
\item $W_{k+1}$ is symmetric in its $k+1$ variables, although it is not obvious from this representation.
\item There is no symmetry factor arising in this representation, unlike in \cite{eynloop1mat}.
\end{itemize}
\subsection{Examples}
Let us briefly show some diagrams for
small $h$ and small $k$.
\subsubsection{Leading terms: tree level}
We begin with the leading terms of the first correlation functions,
i.e. for $h=0$.
$\bullet$ $k=3$:
\begin{equation}
\begin{array}{rcl}
W_{3}^{(0)}(p,p_1,p_2) &=& \sum_{i=1}^{d_2} \begin{array}{l} {\epsfxsize
5.5cm\epsffile{W3.eps}} \end{array}\cr &=& \sum_{i=1}^{d_2}\sum_s
\mathop{\,\rm Res\,}_{p' \rightarrow a_s} \left[ {B(p'^{i},p_1) B(p',p_2) \over
(y(p'^{i})-y(p')) dx(p')} + {B(p'^{i},p_2) B(p',p_1) \over
(y(p'^{i})-y(p')) dx(p')} \right] dS_{p',\a}(p)
\end{array}
\end{equation}
and
\begin{equation}
\begin{array}{rcl}
R_{2}^{i,(0)}(p,p_1,p_2) &=& \sum_{j=1}^{d_2} \begin{array}{l}
{\epsfxsize 6cm\epsffile{R2_1.eps}} \end{array} \cr && +
\sum_{j\neq i} \begin{array}{l} {\epsfxsize
7cm\epsffile{R2_2.eps}} \end{array} \cr
\end{array}
\end{equation}
\smallskip
Let us show that $W_3^{(0)}(p,p_1,p_2)$ is indeed symmetric in $p$, $p_1$ and $p_2$.
For every branch point $a$, let $\overline{q}$ be the only $q^i$ such that $dx(\overline{q}) \to 0$
when $q \to a$.
\begin{equation}
\begin{array}{rcl}
W_{3}^{(0)}(p,p_1,p_2) &=& \sum_{i=1}^{d_2} \sum_s
\mathop{\,\rm Res\,}_{q \rightarrow a_s} {B(q^{i},p_1) B(q,p_2)
+ B(q^{i},p_2) B(q,p_1) \over
(y(q^{i})-y(q)) dx(q)} dS_{q,\a}(p) \cr
&=& \sum_{i=1}^{d_2} \sum_s
\mathop{\,\rm Res\,}_{q \rightarrow a_s} \mathop{\,\rm Res\,}_{r \to q^i} {B(r,p_1) B(q,p_2)
+ B(r,p_2) B(q,p_1) \over
(y(r)-y(q)) (x(r)-x(q)) dx(q)} dS_{q,\a}(p) \cr
&=& \sum_s
\mathop{\,\rm Res\,}_{q \rightarrow a_s} \mathop{\,\rm Res\,}_{r \to \overline{q}} {B(r,p_1) B(q,p_2)
+ B(r,p_2) B(q,p_1) \over
(y(r)-y(q)) (x(r)-x(q)) dx(q)} dS_{q,\a}(p) \cr
&=& \sum_s
\mathop{\,\rm Res\,}_{q \rightarrow a_s} {B(q,p_1) B(\overline{q},p_2) dS_{q,\overline{q}}(p) \over (y(\overline{q})-y(q)) dx(q)} \cr
&=& - \sum_s
\mathop{\,\rm Res\,}_{q \rightarrow a_s} {B(q,p_1) B(q,p_2) dS_{q,\overline{q}}(p) \over (y(\overline{q})-y(q)) dx(q)} \cr
&=& \sum_s
\mathop{\,\rm Res\,}_{q \rightarrow a_s} {B(q,p_1) B(q,p_2) B(q,p) \over dx(q) dy(q)} \cr
\end{array}
\end{equation}
which is nothing but the formula found in \cite{Kri} and is a way of writing Rauch's variational formula.
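As a concrete toy evaluation of this last formula (ours; it is not needed in the sequel), consider the genus-zero hyperelliptic curve uniformized by $x(q)=q+1/q$ and $y(q)={1\over 2}(q-1/q)$, with branch points at $q=\pm 1$ (the zeroes of $dx$) and $B(q,p_i)={dq\, dp_i \over (q-p_i)^2}$. Then $dx(q)\, dy(q) = {1\over 2}(1-q^{-4})\, dq^2$ and the residues are elementary:
\begin{equation}
W_3^{(0)}(p,p_1,p_2) = {dp\, dp_1\, dp_2 \over 2} \left[ {1 \over (1-p)^2(1-p_1)^2(1-p_2)^2} - {1 \over (1+p)^2(1+p_1)^2(1+p_2)^2} \right]
\end{equation}
which is manifestly symmetric in its three variables.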
$\bullet$ $k=4$: \begin{equation}
\begin{array}{rcl}
W_4^{(0)}(p,p_1,p_2,p_3) &=& \sum_{i=1}^{d_2} \sum_{j=1}^{d_2}
\begin{array}{l} {\epsfxsize 4cm\epsffile{W41.eps}} \end{array}
\cr && + \sum_{i=1}^{d_2} \sum_{j=1}^{d_2} \begin{array}{l}
{\epsfxsize 4cm\epsffile{W47.eps}} \end{array} \cr && +
\sum_{i=1}^{d_2} \sum_{j\neq i =1}^{d_2} \begin{array}{l}
{\epsfxsize 4cm\epsffile{W48.eps}} \end{array} \cr && + (\;
\hbox{permutations of}\; \{p_1, p_2, p_3 \}\,)
\end{array}
\end{equation}
One has to consider all the permutations on the external legs.
Thus, $W_4^{(0)}$ is the sum over 18 different diagrams.
\subsubsection{Topological expansion: one- and two-loop levels}
Consider now the first non-planar examples, beginning with the
simplest one, the one-loop correction to the one-point function.
$\bullet$ $k=1$ and $h=1$:
\begin{eqnarray} W_{1}^{(1)}(p) &=& \begin{array}{l} {\epsfxsize
3cm\epsffile{W1_1.eps}}
\end{array} \cr
& = & \sum_{i=1}^{d_2} \begin{array}{l} {\epsfxsize
3cm\epsffile{R1_1.eps}} \end{array} \cr &=& \sum_{i=1}^{d_2}
\sum_s \mathop{\,\rm Res\,}_{q\rightarrow a_s} dS_{q,o}(p) {B(q,q^{i}) \over
(y(q^{i})-y(q))\, dx(q)} \cr \end{eqnarray}
One can check that this is identical to the result of \cite{EKK}.
$\bullet$ $k=2$ and $h=1$:
\begin{equation}
\begin{array}{rl}
W_{2}^{(1)} = & \sum_{i=1}^{d_2} \sum_{j\in [1,d_2]-\{i\}} \left[
\begin{array}{l} {\epsfxsize 3cm\epsffile{W211.eps}} \end{array} +
\begin{array}{l} {\epsfxsize 3cm\epsffile{W216.eps}} \end{array}
\right. \cr & \left. \qquad + \begin{array}{l} {\epsfxsize
3cm\epsffile{W217.eps}} \end{array}\right] \cr & +
\sum_{i=1}^{d_2} \sum_{j=1}^{d_2}\left[ \begin{array}{l}
{\epsfxsize 3cm\epsffile{W212.eps}} \end{array} + \begin{array}{l}
{\epsfxsize 3cm\epsffile{W213.eps}} \end{array}\right. \cr &
\qquad \left. + \begin{array}{l} {\epsfxsize
3cm\epsffile{W214.eps}} \end{array} + \begin{array}{l} {\epsfxsize
3cm\epsffile{W215.eps}} \end{array}\right] \cr
\end{array}
\end{equation}
Analytically, this reads: \begin{equation}
\begin{array}{l}
W_2^{(1)}(p,p_1) = \cr \sum_{i=1}^{d_2} \sum_{j\in [1,d_2]-\{i\}}
\sum_s \mathop{\,\rm Res\,}_{p' \rightarrow a_s} {dS_{p',o}(p) \over
(y(p'^{i})-y(p')) (y(p'^{i})-y(p'^{j})) dx^2(p')} \cr
\quad \left[ B(p',p_1) B(p'^{i},p'^{j}) + B(p'^{i},p_1) B(p',p'^{j}) + B(p',p'^{i}) B(p_1,p'^{j}) \right] \cr
+ \sum_{i=1}^{d_2} \sum_{j=1}^{d_2} \sum_{s,t} \mathop{\,\rm Res\,}_{p' \rightarrow a_s} \mathop{\,\rm Res\,}_{p'' \rightarrow a_t} {dS_{p',o}(p) \over (y(p'^{i})-y(p')) (y(p''^{j})-y(p'')) dx(p') dx(p'')} \cr
\quad \left[ B(p',p_1) B(p'',p''^{j}) dS_{p'',o}(p'^{i}) + B(p'^{i},p_1) B(p'',p''^{j}) dS_{p'',o}(p') \right. \cr
\qquad \left. + B(p'',p') B(p_1,p''^{j}) dS_{p'',o}(p'^{i}) + B(p_1,p'') B(p',p''^{j}) dS_{p'',o}(p'^{i}) \right] \cr
\end{array}
\end{equation}
$\bullet$ $k=1$ and $h=2$:
\begin{eqnarray} W_{1}^{(2)} = & \sum_{i=1}^{d_2} \sum_{j=1}^{d_2}
\sum_{k=1}^{d_2} \left[ \begin{array}{l} {\epsfxsize
3cm\epsffile{W121.eps}} \end{array} + \begin{array}{l} {\epsfxsize
3cm\epsffile{W126.eps}} \end{array} \right. \cr & \qquad \left. +
\begin{array}{l} {\epsfxsize 3cm\epsffile{W128.eps}} \end{array} +
\begin{array}{l} {\epsfxsize 3cm\epsffile{W1210.eps}} \end{array}
+ \begin{array}{l} {\epsfxsize 3cm\epsffile{W1214.eps}}
\end{array} \right] \cr & + \sum_{i=1}^{d_2} \sum_{j\in
[1,d_2]-\{i\}} \sum_{k=1}^{d_2} \left[ \begin{array}{l}
{\epsfxsize 3cm\epsffile{W123.eps}} \end{array} + \begin{array}{l}
{\epsfxsize 3cm\epsffile{W125.eps}} \end{array} \right. \cr &
\qquad \left. + \begin{array}{l} {\epsfxsize
3cm\epsffile{W129.eps}} \end{array} + \begin{array}{l} {\epsfxsize
3cm\epsffile{W1213.eps}} \end{array}\right] \cr & +
\sum_{i=1}^{d_2} \sum_{j=1}^{d_2} \sum_{k\in [1,d_2]-\{j\}} \left[
\begin{array}{l} {\epsfxsize 3cm\epsffile{W122.eps}} \end{array} +
\begin{array}{l} {\epsfxsize 3cm\epsffile{W127.eps}}
\end{array}\right. \cr & \qquad \left. + \begin{array}{l}
{\epsfxsize 3cm\epsffile{W1212.eps}} \end{array} +
\begin{array}{l} {\epsfxsize 3cm\epsffile{W1216.eps}}
\end{array}\right] \cr & + \sum_{i=1}^{d_2} \sum_{j\in
[1,d_2]-\{i\}} \sum_{k\in [1,d_2]-\{j\}} \left[ \begin{array}{l}
{\epsfxsize 3cm\epsffile{W124.eps}} \end{array} \right. \cr &
\qquad \left. + \begin{array}{l} {\epsfxsize
3cm\epsffile{W1211.eps}} \end{array} + \begin{array}{l}
{\epsfxsize 3cm\epsffile{W1215.eps}} \end{array} \right] \cr \end{eqnarray}
\newsection{An effective non-cubic theory}
The Feynman-like graphs described up to now involve cubic
vertices only, but the price to pay is the introduction of the auxiliary functions $R_k^{i,(h)}$.
Nevertheless, for some problems this property is not needed,
and one may prefer an effective diagrammatic representation involving only the $W_k^{(h)}$'s,
at the cost of vertices with valence up to $d_2+2$. This section is dedicated to
building such a diagrammatic representation. It
consists in resumming the linked waved vertices into one
multivalent vertex: \begin{equation}
\begin{array}{r}
{\epsfxsize 4cm\epsffile{wave.eps}}
\end{array}
\sim
\begin{array}{r}
{\epsfxsize 2.5cm\epsffile{sum.eps}}
\end{array}
\end{equation}
\subsection{Leading order: Genus 0}
We have already written the equations necessary to
define this effective theory. Let us consider \eq{recW} and
\eq{UWW}: \begin{equation}\label{recWeff} W_{k+1}(p,{{\mathbf p}}_K) = -\sum_s
\mathop{\,\rm Res\,}_{p'\to a_s} \sum_{j=1}^{k-1} \sum_{J\in K_j} \frac{1}{dx(p')}
{{U}_{j}(p',y(p');{\mathbf p}_J)\over E_y(x(p'),y(p')) }\,
{W}_{k-j+1}(p',{\mathbf p}_{K-J}) dS_{p',\a}(p) \end{equation}
\begin{equation} U_{k}(p,y;{\mathbf p}_K) = {E(x(p),y) dx(p)\over
y-y(p)}\sum_{r=1}^{d_2} \sum_{K_1\cup\dots\cup K_r=K}
\sum_{j_1\neq j_2\neq \dots \neq j_r=1}^{d_2} \prod_{t=1}^r
{W_{|K_t|+1}(p^{j_t},{\mathbf p}_{K_t})\over (y-y(p^{j_t}))\,dx(p)}
\end{equation}
This second equation taken for $y=y(p)$ reads:
\begin{equation}
{U_{k}(p,y(p);{\mathbf p}_K) \over E_y(x(p),y(p)) dx(p)} =
\sum_{r=1}^{d_2} \sum_{K_1\cup\dots\cup K_r=K} \sum_{j_1\neq
j_2\neq \dots \neq j_r=1}^{d_2} \prod_{t=1}^r
{W_{|K_t|+1}(p^{j_t},{\mathbf p}_{K_t})\over
(y(p)-y(p^{j_t}))\,dx(p)}
\end{equation}
Introducing it into \eq{recWeff}, one gets a closed recursive formula
for the $W_k$'s: \begin{equation}\encadremath{\label{recWeff2}
\begin{array}{rcl}
W_{k+1}(p,{{\mathbf p}}_K) &=& - \sum_s \mathop{\,\rm Res\,}_{p'\to a_s} \sum_{r=1}^{d_2}
\sum_{K_0\cup K_1\cup\dots\cup K_r=K} \sum_{j_1\neq j_2\neq \dots
\neq j_r=1}^{d_2} \cr && {W}_{|K_0|+1}(p',{\mathbf p}_{K_0})
\prod_{t=1}^r {W_{|K_t|+1}(p'^{j_t},{\mathbf p}_{K_t})\over
(y(p')-y(p'^{j_t}))\,dx(p')} dS_{p',\a}(p)
\end{array}
}\end{equation}
Let us introduce the following Feynman rules:
\begin{center}
\begin{tabular}{|r|l|}\hline
non-arrowed propagator:& $
\begin{array}{r}
{\epsfxsize 2.5cm\epsffile{W2.eps}}
\end{array}
:=W_{2}(p,q) $ \cr\hline
arrowed propagator:& $
\begin{array}{r}
{\epsfxsize 2.5cm\epsffile{arrowedpropagator.eps}}
\end{array}
:=dS_{q,o}(p)
$ \cr\hline
\begin{tabular}{c}
r+2 - vertex\cr ($1\leq r \leq d_2$)\cr with one marked \cr edge:
\end{tabular}&
$
\begin{array}{r}
{\epsfxsize 3.8cm\epsffile{multiplevertex.eps}}
\end{array}
:= \begin{array}{l}
- \sum_s \sum_{j_1\neq \dots \neq j_r \neq 0} \mathop{\,\rm Res\,}_{q\rightarrow a_s} \cr
\prod_{t=1}^r {1 \over (y(q) - y(q^{j_t})) dx(q)}\cr
\end{array}
$ \cr
\hline
\end{tabular}
\end{center}
Remark that one leg of the multiple vertex is marked: on this leg,
there is no summation over the different sheets.
\bigskip
Using these rules, one can diagrammatically write the recursive
relation as follows: \begin{equation}
\begin{array}{r}
{\epsfxsize 4.5cm\epsffile{Wk_i.eps}}
\end{array}
= \sum_{r=1}^{d_2} \sum_{K_0\cup K_1\cup\dots\cup K_r=K}
\begin{array}{l}
{\epsfxsize 5.5cm\epsffile{multieqWk.eps}}
\end{array}
\end{equation}
From this relation, one can see that $W_{k+1}(p,{{\mathbf p}}_K)$ is
obtained as the {\em summation over all trees with $k+1$ external
legs} and following the rules: {\em \begin{itemize} \item The
vertices have valence $r+2$ with $1 \leq r \leq \min(k-1,d_2)$;
\item The edges are arrowed; \item One of the legs of each vertex
is marked; \item The $k$ leaves are non-arrowed propagators
ending at $p_j$'s;
\item The root is an arrowed propagator starting from $p$.
\end{itemize}}
The drawback of these effective rules, namely the existence of
multivalent vertices, is balanced by the simplicity of the
vertices and by the absence of waved propagators.
\subsection{Any genus h}
Let us now study the extension of this theory to any genus.
Once again, the fundamental equations have already been written.
Let us recall \eq{soluce1} and
\eq{eqUW}: \begin{equation}\label{Wkh3}
\begin{array}{l}
W_{k+1}^{(h)}(p,{\mathbf p}_K) = \cr
- \sum_{m=0}^h \sum_{j=0, m+j \neq 0}^{k} \sum_s \mathop{\,\rm Res\,}_{p'\to a_s} {{U}_{j}^{(m)}(p',y(p');{\mathbf p}_J) \over E_y(x(p'),y(p'))} {W}_{k-j+1}^{(h-m)}(p',{\mathbf p}_{K-J}) dS_{p',\alpha}(p)\cr
- \sum_s \mathop{\,\rm Res\,}_{p'\to a_s} {{U}_{k+1}^{(h-1)}(p',y(p');p',{\mathbf p}_K) \over E_y(x(p'),y(p'))} dS_{p',\alpha}(p)
\end{array}
\end{equation}
and, for $i\neq 0$:
\begin{equation}\label{eqUW3}
\begin{array}{l}
U_{k}^{(h)}(p,y(p^{i});{\mathbf p}_K)= \cr {E_y(x,y(p^{i})) \over
y(p^{i})-y(p)} \sum_{r=1}^{\min(d_2,k+h)} \sum_{ K_1 \bigcup
\dots \bigcup K_r = K} \sum_{h_{\alpha} = 0}^h
\sum_{k_\alpha=|K_\alpha|}^{k+h} \sum_{j_{\alpha,\beta} \neq
j_{\alpha',\beta'} \in [1,d_2]-\{i\}} {1 \over \Omega} \cr
{W_{k_1+1}^{(h_1)}(p^{i}, {\mathbf p}_{K_1} ,p^{j_{1,1}}, \dots
,p^{j_{1,k_1-|K_1|}}) \left(\prod_{\alpha=2}^{r}
W_{k_{\alpha}+1}^{(h_{\alpha})}(p^{j_{\alpha,0}},{\mathbf p}_{K_{\alpha}}
,p^{j_{\alpha,1}}, \dots
,p^{j_{\alpha,k_\alpha-|K_\alpha|}})\right) \over
dx(p)^{r-k-1+\sum k_\alpha} \prod_{\alpha,\beta}
(y(p^{i})-y(p^{j_{\alpha,\beta}}))}\cr
\end{array}
\end{equation}
In order to introduce this second formula inside the first one,
one has to use the interpolation formula to consider the case
where $i = 0$: \begin{equation}
\begin{array}{l}
{U_{l}^{(m)}(p,y(p);{\mathbf p}_L) \over E_y(x(p),y(p))} = \cr
- \sum_{r=1}^{\min(d_2,l+m)} \sum_{ L_1 \bigcup \dots \bigcup L_r = L} \sum_{m_{\alpha} = 0}^m \sum_{l_{\alpha}=|L_\alpha|}^{l+m}
\sum_{j_1 \neq \dots \neq j_r \in [1,d_2]} {1 \over \Omega} \cr
{W_{l_1+1}^{(m_1)}(p^{j_{1,0}}, {\mathbf p}_{L_1} ,p^{j_{1,1}}, \dots
,p^{j_{1,l_1-|L_1|}}) \prod_{\alpha=2}^{r}
W_{l_{\alpha}+1}^{(m_{\alpha})}(p^{j_{\alpha,0}},{\mathbf p}_{L_{\alpha}}
,p^{j_{\alpha,1}}, \dots ,p^{j_{\alpha,l_\alpha-|L_\alpha|}})
\over dx(p)^{r-l-1+\sum l_\alpha} (y(p^{j_{1,0}})-y(p))
\prod_{\alpha,\beta}
(y(p^{j_{1,0}})-y(p^{j_{\alpha,\beta}}))}\cr
\end{array}
\end{equation}
Recursively, it is easy to check that it can be written:
\begin{eqnarray}\label{Ulm} &&{U_{l}^{(m)}(p,y(p);{\mathbf p}_L) \over E_y(x(p),y(p))
dx(p)} = \cr &&\sum_{r=1}^{\min(d_2,l+m)} \sum_{ L_1 \bigcup \dots
\bigcup L_r = L} \sum_{m_{\alpha} = 0}^m
\sum_{l_{\alpha}=|L_\alpha|}^{l+m} \sum_{j_{\alpha,\beta} \neq
j_{\alpha',\beta'} \in [1,d_2]} {1 \over \Omega '} \cr &&\quad
\prod_{\alpha=1}^{r}
{W_{l_{\alpha}+1}^{(m_{\alpha})}(p^{j_{\alpha,0}},{\mathbf p}_{L_{\alpha}}
,p^{j_{\alpha,1}}, \dots ,p^{j_{\alpha,l_\alpha-|L_\alpha|}})
\over dx(p)^{l_\alpha-|L_\alpha|+1} \prod_{\beta =
0}^{l_\alpha-|L_\alpha|} (y(p)-y(p^{j_{\alpha,\beta}}))}\cr
\end{eqnarray}
where $\Omega '$ is some other symmetry factor depending only on
the same parameters as $\Omega$.
One is now able to write an explicit recursion formula for the
$W_k^{(h)}$'s that can be graphically represented with the Feynman
rules introduced in this section. The introduction of \eq{Ulm} in
\eq{Wkh3} gives:
\begin{eqnarray}\label{Wkh4} &&W_{k+1}^{(h)}(p,{\mathbf p}_K) = \cr && - \sum_s
\mathop{\,\rm Res\,}_{p'\to a_s} \sum_{r=1}^{d_2} \sum_{ K_0 \bigcup K_1 \bigcup
\dots \bigcup K_r = K} \sum_{h_{\alpha} = 0}^h
\sum_{k_{\alpha}=|K_\alpha|}^{k+h} \sum_{j_{\alpha,\beta} \neq
j_{\alpha',\beta'} \in [1,d_2]} {1 \over \Omega '} \cr && \quad
dS_{p',o}(p) W_{|K_0|+1}^{(h_0)}(p',{\mathbf p}_{K_0}) \prod_{\alpha=1}^r
{W_{k_{\alpha}+1}^{(h_{\alpha})}(p'^{j_{\alpha,0}},{\mathbf p}_{K_{\alpha}}
,p'^{j_{\alpha,1}}, \dots
,p'^{j_{\alpha,k_\alpha-|K_\alpha|}}) \over
dx(p')^{k_\alpha-|K_\alpha|+1} \prod_{\beta =
0}^{k_\alpha-|K_\alpha|} (y(p')-y(p'^{j_{\alpha,\beta}}))}\cr
&& - \sum_s \mathop{\,\rm Res\,}_{p'\to a_s}
{{U}_{k+1}^{(h-1)}(p',y(p');p',{\mathbf p}_K) \over E_y(x(p'),y(p'))}
dS_{p',\alpha}(p) \cr \end{eqnarray}
That is to say: \begin{eqnarray}
\begin{array}{r}
{\epsfxsize 4cm\epsffile{Wh.eps}}
\end{array}
& = & \sum_{r=1}^{d_2} \sum_{h_\alpha} \sum_{K_0\cup
K_1\cup\dots\cup K_r=K} {1 \over \Omega'}
\begin{array}{l}
{\epsfxsize 5.5cm\epsffile{multiWh.eps}}
\end{array} \cr
&& + \sum_{r=1}^{d_2} \sum_{h_\alpha} \sum_{K_1\cup\dots\cup
K_r=K} {1 \over \Omega'}
\begin{array}{l}
{\epsfxsize 5.5cm\epsffile{multiWh2.eps}}
\end{array}
\end{eqnarray}
Remark that we have split the diagrams in the RHS in order to
reproduce the recursion relation. Nevertheless, the first term in
the RHS is nothing but a particular case of the second term,
where the marked leg of the vertex is left alone inside one of the
$W$'s.
\bigskip
Hence, the $h$-th order expansion term of the correlation function
$W_{k+1}^{(h)}$ is obtained as the {\em summation over all Feynman
diagrams with $k+1$ external legs and $h$ loops} following the same
rules as exposed in the genus 0 case, i.e.: {\em \begin{itemize}
\item The vertices have valence $r+2$ with $1 \leq r \leq d_2$;
\item The edges are arrowed or not; \item One of the legs of each
vertex is marked; \item The subgraph made of arrowed edges forms a
skeleton tree; \item The $k$ leaves are non-arrowed propagators
ending at $p_j$'s;
\item The root is an arrowed propagator starting from $p$;
\item a non-arrowed edge links a vertex to one of its descendants along the tree.
\end{itemize}}
\subsection{Examples}
Let us review some simple examples of this description.
\begin{equation} W_3^{(0)}(p,p_1,p_2) = \begin{array}{l} {\epsfxsize
10cm\epsffile{multiW3.eps}}
\end{array}
\end{equation}
Analytically, this reads: \begin{equation}
\begin{array}{l}
W_3^{(0)}(p,p_1,p_2) = \cr \sum_{i=1}^{d_2}\sum_s \mathop{\,\rm Res\,}_{q \rightarrow
a_s} \left[ B(q^{i},p_1) B(q,p_2) + B(q^{i},p_2) B(q,p_1)
\right] {dS_{q,\a}(p) \over (y(q^{i})-y(q)) dx(q)}
\end{array}
\end{equation}
\bigskip
\begin{eqnarray} W_1^{(1)}(p) &=& \begin{array}{l} {\epsfxsize
6cm\epsffile{multiW1_1.eps}}
\end{array}\cr
&=& \sum_s \sum_{i=1}^{d_2} \mathop{\,\rm Res\,}_{q\rightarrow a_s} dS_{q,o}(p)
{B(q,q^{i}) \over (y(q^{i})-y(q)) dx(q)} \end{eqnarray}
\bigskip
\begin{eqnarray} && W_2^{(1)}(p,p_1) =\cr && \begin{array}{l} {\epsfxsize
6cm\epsffile{multiW211.eps}}
\end{array}
+
\begin{array}{l}
{\epsfxsize 6cm\epsffile{multiW212.eps}}
\end{array} \cr
&& +{1 \over 2} \begin{array}{l} {\epsfxsize
6cm\epsffile{multiW213.eps}}
\end{array}
+
\begin{array}{l}
{\epsfxsize 6cm\epsffile{multiW214.eps}}
\end{array}\cr
&& + \begin{array}{l} {\epsfxsize 6cm\epsffile{multiW215.eps}}
\end{array}
+\begin{array}{l} {\epsfxsize 6cm\epsffile{multiW216.eps}}
\end{array}
\end{eqnarray}
\newsection{The gaussian case: the 1-matrix model limit.}
In this section, we are interested in the special case where
$d_2=1$, i.e. one has a gaussian potential in $M_2$. This
situation is very important because it links our results to the
1-matrix model studied in \cite{eynloop1mat}. Indeed, when one of
the potentials is gaussian -- $V_2$ for example --, the
integration over one of the variables -- $M_2$ in this case -- is
gaussian and can be straightforwardly performed without giving any
contribution to the formal expansion. Then, the 2-matrix model
with one gaussian potential $V_2(y)= {g_2 \over 2} y^2$ is equivalent to the 1-matrix model
with a potential $V = V_1 - {x^2 \over 2 g_2}$. We check in this
part that our results coincide with the ones obtained directly
from the 1-matrix model in \cite{eynloop1mat}. Actually, it is a
good way to better understand the structure obtained.
In this case, the Riemann surface is a hyperelliptic surface with
only two $x$-sheets. The equation $x(p)=x$ has only two solutions.
Let us call them $p$ and $\overline{p}$, i.e. $p^{0} =p$ and
$p^{1}=\overline{p}$. They obey the following relations: \begin{equation}
x(p)= x(\overline{p}) \and y(p) = - y(\overline{p}) \end{equation}
The algebraic equation generating the Riemann surface reads: \begin{equation}
E(x(p),y(r))= - g_2 (y(r)-y(p)) (y(r)-y(\overline{p})) = - g_2
(y(r)^2 - y(p)^2) \end{equation}
One can also remark that: \begin{equation} U_{k}(p,y;{\mathbf p}_K) = g_2
W_{k+1}(p,{\mathbf p}_K) \end{equation}
That is to say: \begin{equation} R_k^{0}(p,{\mathbf p}_K) = {U_{k}(p,y(p);{\mathbf p}_K)
\over E_y(x(p),y(p)) dx(p)} = - {W_{k+1}(p,{\mathbf p}_K) \over 2 y(p) dx(p)} \end{equation}
So that: \begin{equation} R_k^{0}(\overline{p},{\mathbf p}_K) = R_k^{1}(p,{\mathbf p}_K) =
{W_{k+1}(p,{\mathbf p}_K) \over 2 y(p) dx(p)} \end{equation}
\subsection*{Diagrammatic rules.}
One can now study how the diagrammatic rules
introduced earlier behave in this limit.
\begin{itemize}
\item {\bf The cubic rules}
Because $V_2$ is gaussian, the Feynman rules become:
\begin{center}
\begin{tabular}{|r|l|}\hline
non-arrowed propagator:& $
\begin{array}{r}
{\epsfxsize 2.5cm\epsffile{W2.eps}}
\end{array}
:=W_{2}(p,q) $ \cr\hline
arrowed propagator:& $
\begin{array}{r}
{\epsfxsize 2.5cm\epsffile{arrowedpropagator.eps}}
\end{array}
:=dS_{q,o}(p)
$ \cr\hline
Residue cubic-vertex:& $
\begin{array}{r}
{\epsfxsize 2.5cm\epsffile{ncvertex.eps}}
\end{array}
:= \sum_s \mathop{\,\rm Res\,}_{q\rightarrow a_s}
$ \cr\hline
simple vertex:& $
\begin{array}{r}
{\epsfxsize 2cm\epsffile{simplevertex2.eps}}
\end{array}
:= -{1 \over 2 y(p) dx(p)} $ \cr
\hline
\end{tabular}
\end{center}
The last component of the Feynman diagrams, the colored
cubic-vertex, involves three different $x$-sheets. Because there
exist only two such sheets in the gaussian case, this vertex
vanishes: \begin{equation}
\begin{array}{r}
{\epsfxsize 3.5cm\epsffile{cvertex.eps}}
\end{array}
\equiv 0 \end{equation}
Given that the bivalent and trivalent vertices only appear
together, one can merge them into a single vertex whose value is equal to $-
\sum_s \mathop{\,\rm Res\,}_{q\rightarrow a_s} {1 \over 2 y(q) dx(q)}$, and one
recovers \cite{eynloop1mat}: \begin{equation}
\begin{array}{r}
{\epsfxsize 3cm\epsffile{1MM1.eps}}
\end{array}
\rightarrow
\begin{array}{r}
{\epsfxsize 3cm\epsffile{1MM2.eps}}
\end{array}
\end{equation}
\item {\bf The effective theory}
The effect of the gaussian limit on the effective theory is to
make it cubic. One obtains the following rules:
\begin{center}
\begin{tabular}{|r|l|}\hline
non-arrowed propagator:& $
\begin{array}{r}
{\epsfxsize 2.5cm\epsffile{W2.eps}}
\end{array}
:=W_{2}(p,q) $ \cr\hline
arrowed propagator:& $
\begin{array}{r}
{\epsfxsize 2.5cm\epsffile{arrowedpropagator.eps}}
\end{array}
:=dS_{q,o}(p)
$ \cr\hline
\begin{tabular}{c}
cubic vertex \cr (only for r=1):
\end{tabular}&
$
\begin{array}{r}
{\epsfxsize 3.8cm\epsffile{multiplevertex.eps}}
\end{array}
:= - \sum_s \mathop{\,\rm Res\,}_{q\rightarrow a_s} {1 \over 2 y(q) dx(q)}
$ \cr
\hline
\end{tabular}
\end{center}
\end{itemize}
Hence, the two theories turn into a single cubic theory in this
limit, which is the one derived in \cite{eynloop1mat}. Indeed, the
corresponding recursive relation reads: \begin{eqnarray}
W_{k+1}^{(h)}(p,{\mathbf p}_K) &=& - \sum_l \mathop{\,\rm Res\,}_{q \to a_l}
{W_{k+2}^{(h-1)}(q,q,{\mathbf p}_K) dS_{q,o}(p) \over 2 y(q) dx(q)} \cr && -
\sum_{m=0}^h \sum_{j=0, j+m \neq 0}^k \sum_l \mathop{\,\rm Res\,}_{q \to a_l}
{W_{j+1}^{(m)}(q,{\mathbf p}_J) W_{k-j+1}^{(h-m)}(q,{\mathbf p}_{K-J}) dS_{q,o}(p)
\over 2 y(q) dx(q)} \end{eqnarray}
{\bf Remark:}
Diagrammatically, this limit can be easily interpreted. Starting
from the general cubic theory, in order to obtain the 1-matrix
model graphs from the 2-matrix model ones, one only has to send
the length of the waved propagators to 0. In this case, the
graphs containing at least one colored vertex vanish.
Everything works as if the waved propagators of the 2-matrix
model were unstable particles which decay into stable ones,
represented by non-waved propagators. The 1-matrix
limit is then obtained by sending the lifetime of these particles to 0.
\vspace{0.7cm}
One should also note that there are no symmetry factors in the
2-matrix model graphs of the cubic theory, whereas the 1-matrix
case exhibits symmetry factors that are not well understood. The
derivation of the 1-matrix model as a limit shows how these factors
arise: they come from identical contributions given by different
diagrams in this limit. This observation suggests that the 2-matrix
model is the more fundamental one.
\eop
\newsection{Conclusion}
In this article, we have generalized the diagrammatic technique of
\cite{eynloop1mat} to compute all non-mixed correlation functions
of the 2-matrix model, to all orders in the topological expansion.
The result can be represented diagrammatically, with some cubic
Feynman rules, which are just convenient notations for writing
residues on an algebraic curve; it is not clear whether there exists a
field theory giving rise to these graphs or not.
This shows that the method discovered in \cite{eynloop1mat} is
quite universal, i.e. it works for all algebraic curves, not only
hyperelliptic curves.
The future prospects of this work are to find the diagrammatic
rules for computing the free energy to all orders in the
topological expansion, and also all mixed correlation functions
(using the result of \cite{EOtrmixte}). Another possible extension
is to work out the multimatrix model, i.e. the chain of matrices
as in \cite{eynmultimat}, and in particular the limit of matrix
quantum mechanics. We believe that this technique could apply to
many other integrable models.
Another question is to understand the limit of critical points,
i.e. when some branch points and double points start to coalesce.
It seems that the diagrammatic technique should simply reduce to
considering only residues at the branch points which become critical. One
may expect to recover some relation with the Kontsevich integral,
in connection with KP integrable hierarchies.
\subsection*{Acknowledgments}
The authors want to thank L. Chekhov, I. Kostov and V. Kazakov for
stimulating discussions. This work was partly supported by the
European network ENIGMA (MRTN-CT-2004-5652).
\eop
\section{Introduction}
Creating entanglement over long distances is the main goal of quantum communication, with applications in quantum key distribution, fundamental tests of quantum mechanics, and distributed computing among others \cite{gisin-thew}.
However, the fragility of entanglement to environmental noise limits the effective distance of direct quantum communication.
One of the most celebrated solutions to this problem is the use of quantum repeaters \cite{repeater}.
As a drawback, this strategy consumes a number of quantum memories per repeater that grows rapidly with the distance over which entanglement is established, even when error correction is used \cite{L.Jiang, surface}.
The distribution of entanglement in quantum networks has been the focus of intense research.
Nontrivial geometry of the quantum network can be used, for instance, in entanglement percolation \cite{ent_perc} or error-correction strategies \cite{raussendorf,perseguers2,perseguers3D,AGrudka}.
However, all the known results in this direction rely on unrealistic quantum states \cite{ent_perc,lapeyre1,perseguers1,multipartite,broadfoot1,broadfoot2,broadfoot3} or networks with impractical geometries (e.g. three dimensional) \cite{raussendorf,perseguers3D,AGrudka} or the consumption of a growing amount of local resources \cite{perseguers2,lapeyre2}.
Entanglement distribution in a noisy two-dimensional network with fixed local resources is believed to be possible through one-dimensional fault-tolerant quantum-computation schemes \cite{perseguers2,AGrudka}.
However, such a scheme often requires quantum communications and operations with a very small error rate (approximately $10^{-5}$) \cite{1dcode}.
Thus, the problem of designing a realistic scalable quantum network remains largely unresolved.
In this paper, we show that it is possible to entangle two distant sites in a two-dimensional network involving realistic quantum channels.
In the present proposal, the number of quantum memories per node needed is fixed and does not scale with the communication distance.
Also, the scalability of the two-dimensional quantum network does not rely on the scalability of quantum processors.
Moreover, quantum-communication error rates of up to $1.67\%$ can be tolerated.
Our starting point is a quantum network on the square lattice (see Fig. \ref{scheme}).
Each node in the network is connected to its neighbors through a quantum channel that distributes two-qubit Werner states $\rho$ given by
\begin{eqnarray}\label{werner}
\rho=(1-q)\ketbra{\Phi_+}{\Phi_+}+q\frac{\openone}{4},
\end{eqnarray}
where $\ket{\Phi_+}=(\ket{00}+\ket{11})/\sqrt{2}$ is a maximally entangled state, $\openone/4$ is the maximally mixed state, and $0\leq q \leq 1$ is a noise parameter. This state can be understood as the result of the following process: a maximally entangled state $\ket{\Phi_+}$ is produced and sent to a neighboring site through a depolarizing channel. This channel leaves the state untouched with probability $F=\bra{\Phi_+}\rho\ket{\Phi_+}=1-3q/4$ (i.e. the fidelity between $\rho$ and $\ket{\Phi_+}$) and causes an error with probability $1-F$, which we call the channel-error rate.
Note that since any two-qubit state can be put into the form \eqref{werner} by local operations and classical communication \cite{bennett}, our results can also cover other cases of quantum states.
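As a quick numerical sanity check of the state \eqref{werner} and of the fidelity $F=1-3q/4$ (a standalone snippet assuming numpy, not part of the protocol):
\begin{verbatim}
import numpy as np

q = 0.1  # noise parameter, 0 <= q <= 1

# |Phi+> = (|00> + |11>)/sqrt(2)
phi_plus = np.zeros(4)
phi_plus[0] = phi_plus[3] = 1 / np.sqrt(2)

# Werner state: rho = (1-q)|Phi+><Phi+| + q I/4
rho = (1 - q) * np.outer(phi_plus, phi_plus) + q * np.eye(4) / 4

F = phi_plus @ rho @ phi_plus      # fidelity with |Phi+>
assert np.isclose(F, 1 - 3 * q / 4)
print(F, 1 - F)                    # fidelity and channel-error rate
\end{verbatim}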
The main goal in our scheme is to entangle two arbitrarily distant nodes, labeled by Alice and Bob, using quantum channels connecting neighboring nodes, local operations at each node and one-way classical communication among them.
Here, we will consider, apart from the communication noise, possible errors in these operations.
Our protocol is based on the surface code \cite{surfacecode} and could be generalized to other geometries \cite{colorcode}.
Apart from the four qubits in each node composing the network, we need one more qubit in each node for processing the surface code.
\begin{figure*}[tbp]
\includegraphics[width=15 cm]{scheme.pdf}
\caption{
(a) Quantum network on the square lattice.
Each node has four linking qubits, which can be entangled with neighboring nodes, while the fifth one is used to process the surface code.
The colors are used to label the nodes according to the operations to be realized during the protocol.
(b) A rectangular part of the quantum network is used to create entanglement between qubits in Alice's and Bob's sites (see text).
}
\label{scheme}
\end{figure*}
\begin{figure}[h]
\includegraphics[width=8cm]{circuit.pdf}
\caption{
Circuits for stabilizer measurements (a) $ZZZZ$ and (b) $XXXX$.
Circuits for stabilizer measurements $ZZZ$ and $XXX$ are similar.
On each subfigure, the upper two lines are the processing qubit and linking qubits of a \textit{blue} or \textit{red} node, while the lower two lines are processing qubits and linking qubits of four neighboring \textit{black} nodes.
Each wave line represents a Bell state $\left\vert\Phi ^{+} \right\rangle$ of two corresponding linking qubits.
The measurement outcome of $ZZZZ$ ($XXXX$) is $zz_{1}z_{2}z_{3}z_{4}$ ($xx_{1}x_{2}x_{3}x_{4}$) where $z$ and $z_{i}$ ($x$ and $x_{i}$) are outcomes of measurements in the $Z$ ($X$) basis of the \textit{blue} (\textit{red}) processing qubit and the $i$th black linking qubit, respectively.
Each \textit{blue} (\textit{red}) node interacts with its four neighboring \textit{black} nodes in the order left, up, right, down.
After interacting with a \textit{blue} (\textit{red}) node, a \textit{black} processing qubit needs a phase (flip) gate $Z^{x_{i}}$ ($X^{z_{i}}$) where $x_{i}$ ($z_{i}$) is the measurement outcome of the corresponding \textit{blue} (\textit{red}) linking qubit.
}
\label{circuit}
\end{figure}
\section{Scheme}
To generate remote entanglement between Alice and Bob, we consider a section of the network with a rectangular geometry as shown in Fig. \ref{scheme}(b). We divide the nodes within this section of the network into three groups, marked in \textit{black}, \textit{blue}, and \textit{red} in the figure.
Each \textit{blue} (\textit{red}) node is surrounded by four \textit{black} nodes (or three if it is along a border of the rectangle).
Alice and Bob are both in the \textit{black} group and are located on two edges in this rectangular network, e.g. two vertical sides composed of \textit{black} nodes and \textit{blue} nodes.
The other two sides are composed of \textit{black} nodes and \textit{red} nodes.
At the start of the protocol, we initialize all processing qubits in \textit{black} nodes to the state $|0\rangle$.
We then use the entanglements shared between neighbors to perform stabilizer measurements $ZZZZ$ ($ZZZ$) and $XXXX$ ($XXX$) of four (three) \textit{black} processing qubits around each \textit{blue} and \textit{red} node, respectively.
Here, $Z$ and $X$ are Pauli operators.
A circuit describing these measurements is shown in Fig. \ref{circuit}.
As soon as these stabilizer measurements are performed, the state of \textit{black} processing qubits becomes an eigenstate of the stabilizers of the surface code \cite{surfacecode}.
Finally, all \textit{black} processing qubits except Alice's and Bob's qubits are measured in the following way:
\begin{itemize}
\item All \textit{black} processing qubits along the two vertical sides are measured in the $X$ basis.
\item All \textit{black} processing qubits along the dotted line composed of \textit{black} and \textit{red} nodes connecting Alice and Bob [see Fig. \ref{scheme}(b)] are measured in the $Z$ basis.
\item Qubits in the region defined within the dashed lines in Fig. \ref{scheme}(b) are measured in the $Z$ basis, and the ones outside are measured in the $X$ basis.
\end{itemize}
Here, we choose the dotted line so that it lies in the middle of the two corresponding dashed lines when it is near Alice and Bob, or of the two horizontal lines when it is far away from Alice and Bob.
We argue that after these measurements, the processing qubits of Alice and Bob are entangled.
In order to see this, let us first consider the perfect case, i.e. when $q=0$ and all operations are perfect.
The initial state of \textit{black} processing qubits, which are all initialized in the state $|0\rangle$, is the eigenstate of $\overline{Z}_{AB}$ with the eigenvalue $+1$.
Here, $\overline{Z}_{AB}$ is the product $\prod Z$ of \textit{black} processing qubits on the line connecting Alice and Bob (the dotted line in Fig. \ref{scheme}).
The operator $\overline{Z}_{AB}$ commutes with the stabilizer operators.
Therefore, the stabilizer state is still an eigenstate of $\overline{Z}_{AB}$ with the eigenvalue $+1$.
The stabilizer state is also an eigenstate of the product of all $XXXX$ and $XXX$, which is $\overline{X}_{A}\overline{X}_{B}$, where $\overline{X}_{A}$ ($\overline{X}_{B}$) is the product $\prod X$ of \textit{black} processing qubits on the vertical side with Alice (Bob) [see Fig. \ref{scheme}(b)].
One can obtain the eigenvalue of $\overline{X}_{A}\overline{X}_{B}$ by multiplying measurement outcomes of all $XXXX$ and $XXX$.
After measuring out \textit{black} processing qubits except the processing qubits in Alice and Bob (i.e., the qubit $A$ and the qubit $B$), we can replace $Z$ and $X$ in $\overline{Z}_{AB}$ and $\overline{X}_{A}\overline{X}_{B}$ with the respective measurement outcomes.
Finally, we see that the state of qubits $A$ and $B$ is ``stabilized'', i.e., it becomes an eigenstate of $Z_{A}Z_{B}$ and $X_{A}X_{B}$, where the eigenvalues depend on measurement outcomes.
In this way, the qubit $A$ and the qubit $B$ are entangled as one of the Bell states.
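As a small numerical illustration of this last step (a standalone check assuming numpy, not part of the protocol), one can verify that the simultaneous $+1$ eigenstate of $Z_{A}Z_{B}$ and $X_{A}X_{B}$ is indeed the Bell state $\ket{\Phi_+}$:
\begin{verbatim}
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
ZZ, XX = np.kron(Z, Z), np.kron(X, X)

# Projector onto the +1 eigenspaces of both (commuting) stabilizers
P = (np.eye(4) + ZZ) @ (np.eye(4) + XX) / 4

vals, vecs = np.linalg.eigh(P)
bell = vecs[:, -1]            # unique eigenvector with eigenvalue 1
print(np.round(bell, 3))      # proportional to (1, 0, 0, 1)/sqrt(2)
\end{verbatim}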
Imperfections in quantum channels and in local operations can result in incorrect stabilizer-measurement outcomes.
In order to obtain a set of faithful stabilizer-measurement outcomes, the stabilizer measurements must be repeated $N$ times before final single-qubit measurements on \textit{black} processing qubits.
For each stabilizer measurement, the entanglement between neighboring sites needs to be regenerated. Thus, the overall time cost of our scheme is $NT$, where $T$ is the communication time for generating neighboring entanglements.
It is crucial to realize that \textit{black} processing qubits may be affected by errors during the stabilizer measurements.
However, these errors can be detected: if a stabilizer-measurement outcome differs from the outcome of the same stabilizer in the previous time step, we have an error syndrome, and we immediately conclude that incorrect stabilizer-measurement outcomes or errors on \textit{black} processing qubits have happened.
Moreover, it is possible that some qubits are wrongly initialized in states other than the state $|0\rangle$ at the very beginning.
We can detect such initialization errors based on measurement outcomes of $ZZZZ$ stabilizers, i.e., all $ZZZZ$ should be $+1$ if the qubits are initialized correctly.
Errors occurring after the last stabilizer measurement, including errors induced by the last stabilizer measurement and subsequent operations, cannot be detected by further stabilizer measurements.
Thus, we may need to measure more \textit{black} processing qubits rather than only qubits included in $\overline{Z}_{AB}$ and $\overline{X}_{A}\overline{X}_{B}$ (see the measurement pattern defined by the dashed lines in Fig. \ref{scheme}).
We then detect these errors that occur after the last stabilizer measurements through a comparison of the outcomes of single-qubit measurements with outcomes of stabilizers, i.e., the outcome of a stabilizer should be the same as the product of outcomes of individual qubits in the stabilizer.
One corrects stabilizer-measurement outcomes and all other errors by pairing error syndromes \cite{SCQC} as in the typical surface-code error correction.
\begin{figure}[tbp]
\includegraphics[width=8 cm]{thresholds.pdf}
\caption{
Error thresholds for a variety of ratios between $\epsilon_{S}$ and $\epsilon_{E}$ for independent errors.
Here, $\epsilon_{t}$ is the threshold of $\epsilon_{E}$, i.e., errors are correctable if $\epsilon_{E}<\epsilon_{t}$.
Squares represent thresholds for $\epsilon_{S}/\epsilon_{E}=1,1.5,2,2.5,3$ without correlations.
These thresholds are obtained numerically by pairing error syndromes with the minimum-weight perfect-matching algorithm \cite{MWPM,Sean}.
The line is obtained by fitting thresholds with the function $\epsilon_{t}=\epsilon_{0}-k\log(\epsilon_{S}/\epsilon_{E})$.
}
\label{thresholds}
\end{figure}
\section{Error thresholds}
The surface code works if the probability of errors is lower than a certain threshold.
The outcome of an $XXXX$ or $ZZZZ$ measurement may be wrong with a probability $\epsilon_{S}$.
Between two time steps of stabilizer measurements, phase errors $[Z]$ (flip errors $[X]$) may happen on each \textit{black} processing qubit with a probability $\epsilon_{E}$.
Here and throughout we use the form $[U]$ to denote the superoperator
$[U]( \rho ) = U\rho U^{\dagger}$.
Considering only the errors coming from quantum channels, which occur ``independently'', and taking the limit where $q$ is small, we have $\epsilon_{S}=2q$ and $\epsilon_{E}=q$.
In fact, errors corresponding to $XXXX$ and $ZZZZ$ stabilizers are correlated.
However, these two kinds of errors can be corrected separately.
Thus, correlations between them can be ignored.
Under these conditions, we find numerically that the error threshold depends on the ratio $\epsilon_{S}/\epsilon_{E}$ as $\epsilon_{t}=\epsilon_{0}-k\log(\epsilon_{S}/\epsilon_{E})$ (see Fig. \ref{thresholds}), where $\epsilon_{0}=0.0294$ and $k=0.0072$ are constants and $\epsilon_{t}$ is the threshold of $\epsilon_{E}$ (i.e. errors are correctable if $\epsilon_{E}<\epsilon_{t}$).
In our case in which $\epsilon_{S}/\epsilon_{E}=2$, the noise in quantum channels is correctable if $q=\epsilon_{E}<2.23\%$, corresponding to an error rate of $1.67\%$.
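As a minimal numerical sketch of how these numbers fit together (the base of the logarithm in the fit is not stated explicitly; base 2 is assumed here, since it reproduces the quoted $2.23\%$ within rounding):
\begin{verbatim}
import numpy as np

eps_0, k = 0.0294, 0.0072   # fit constants of the threshold curve

def eps_t(ratio):
    # Threshold of eps_E for a given eps_S/eps_E (log base: assumption)
    return eps_0 - k * np.log2(ratio)

q_th = eps_t(2.0)      # channel-only case: eps_S/eps_E = 2
print(q_th)            # ~0.022, i.e. q < ~2.2%
print(3 * q_th / 4)    # error rate 1 - F = 3q/4, ~1.67%
\end{verbatim}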
Imperfect operations, including the initialization of qubits, measurements, and controlled-NOT gates, may also result in errors, reducing the tolerable error rate of quantum channels.
Without loss of generality, we may assume that errors in operations are depolarizing with the same rate $p$.
Erroneous operations are modelled by perfect operations preceded or followed by an erroneous superoperator $E_{1}=(1-p)[I]+(p/3)([X]+[Y]+[Z])$ for single-qubit operations or
$E_{2}=(1-p)[I]+(p/15)([I_{1}X_{2}]+\cdots +[Z_{1}I_{2}]+\cdots +[X_{1}Y_{2}]+\cdots )$ for two-qubit operations.
Moreover, imperfect two-qubit gates may give rise to correlations between phase errors on \textit{black} processing qubits, which take place in the form $[Z_{red}Z_{right}]$, $[Z_{red}Z_{down}]$, and $[Z_{right}Z_{down}]$ with the same probability $\epsilon_{C}$ between two time steps of stabilizer measurements.
Here, $[Z_{red}]$ is a phase error on a \textit{red} processing qubit, which can induce an incorrect outcome of the stabilizer measurement, and $[Z_{right}]$ ($[Z_{down}]$) is a phase error on the \textit{black} processing qubit to the right (downward direction) of the \textit{red} processing qubit.
All other phase errors are independent, i.e. $[Z_{red}]$, $[Z_{right}]$, and $[Z_{down}]$ happen with the probabilities $\epsilon_{S}-2\epsilon_{C}$, $\epsilon_{E}-2\epsilon_{C}$, and $\epsilon_{E}-2\epsilon_{C}$, respectively.
Flip errors corresponding to stabilizers $ZZZZ$ are also similar.
By counting these errors, we find $\epsilon_{S}=2q+124p/15$, $\epsilon_{E}=q+76p/15$, and $\epsilon_{C}=8p/15$.
Then, we evaluate the thresholds of quantum channels with imperfect operations as shown in Fig. \ref{ERT} and show that if the error rate of operations is $10^{-3}$, the threshold of $q$ is about $1.69\%$, corresponding to an error rate of $1.27\%$.
Memory errors can occur in our scheme while we are generating neighboring entanglements.
Fortunately, these memory errors can also be detected by stabilizer measurements, and the decoherence time does not have to be comparable to the overall time cost $NT$, but only to the communication time $T$ for generating neighboring entanglements.
We suppose memory errors are given by depolarization and occur with the rate $p_{m}$ during the time $T$, which can increase $\epsilon_{E}$ by $2p_m/3$.
Thus, memory errors on processing qubits can lower the threshold but not dramatically with $p_{m}=10^{-2}$ as shown in Fig. \ref{ERT}.
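The error counting above can be summarized in a small helper (a sketch; the formulas are exactly those quoted in the text):
\begin{verbatim}
def error_params(q, p, p_m=0.0):
    # Channel noise q, operation error rate p, memory error rate p_m
    eps_S = 2 * q + 124 * p / 15            # stabilizer-outcome error
    eps_E = q + 76 * p / 15 + 2 * p_m / 3   # qubit error between rounds
    eps_C = 8 * p / 15                      # correlated phase errors
    return eps_S, eps_E, eps_C
\end{verbatim}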
\begin{figure}[tbp]
\includegraphics[width=8 cm]{ERT.pdf}
\caption{
Thresholds of the communication-noise parameter $q$ where $p$ is the error rate of local operations.
Memory errors can lower the threshold but not dramatically with the error rate $p_{m}=10^{-2}$.
Two curves are obtained by neglecting correlations and using the linear fitting in Fig. \ref{thresholds}, which are good approximations of thresholds (rounds and squares for $p_{m}=0$ and $p_{m}=10^{-2}$, respectively) obtained numerically by pairing error syndromes with the minimum-weight perfect-matching algorithm \cite{MWPM,Sean}.
}
\label{ERT}
\end{figure}
\section{Final remote entangled state}
Even within the threshold, error correction may fail because a chain of errors connecting boundaries (error chain) may not be detected through error syndromes.
There are two kinds of nontrivial error chains that can affect the final entanglement between Alice and Bob: (i) error chains that flip qubits in $\overline{Z}_{AB}$ an odd number of times and (ii) error chains that result in an odd total number of incorrect measurement outcomes of $XXXX$ stabilizers and phase errors on \textit{black} processing qubits along the two vertical sides after the last stabilizer measurement.
In order to reduce the first kind of nontrivial error chains, the network for entangling Alice and Bob is designed so that the minimum distance between two horizontal sides and the line connecting Alice and Bob (the dotted line in Fig. \ref{scheme}) is also $N$.
Upon error correction, the total probability of long nontrivial error chains with the minimum length $N$ decreases exponentially with $N$ but increases polynomially with the distance between Alice and Bob \cite{raussendorf}.
Therefore, $N$ scales only logarithmically with the communication distance, and thus, these long error chains can then be neglected.
Short nontrivial error chains with lengths shorter than $N$ are all distributed in regions around Alice and Bob, whose probabilities also decrease exponentially with their lengths.
More generally, noise in the final remote entanglement can be described by the superoperator $E_{AB}=F[I]+\epsilon_{X}[X_A]+\epsilon_{Y}[Y_A]+\epsilon_{Z}[Z_A]$, where the fidelity $F=1-\epsilon_{X}-\epsilon_{Y}-\epsilon_{Z}$.
Assuming the last stabilizer measurement is $XXXX$, by only considering short error chains, we have $\epsilon_{X}=q/2+2p_{m}/3+44p/15+O(q^2,p_{m}^2,p^2)$, $\epsilon_{Y}=4p/15+O(q^2,p_{m}^2,p^2)$, and $\epsilon_{Z}=4p/3+O(q^2,p_{m}^2,p^2)$ \cite{raussendorf2,perseguers3D}.
\section{Efficiency}
The communication time for generating neighboring entanglements $T$ relies on the distance between two nearest-neighbor nodes.
For example, for a nearest-neighbor distance of $10$~km, a neighboring entanglement can be generated with a probability $\sim 99.75\%$ in $T\sim 0.2$~ms.
Here, we have supposed a repeat of the entanglement generation in order to reach this high success probability, and the failure of generating entanglements is due to photon loss in fibers, whose attenuation is supposed to be $0.2$~dB/km in this example.
Failures of generating entanglements give rise to failures of stabilizer measurements, which are tolerable in surface codes.
The presence of these failures can reduce the threshold of noises, but only slightly if the success probability is near 1 \cite{Sean}.
With $L$ the distance between two vertical lines and $\sim N$ the distance between two horizontal lines, the probability of errors induced by long error chains scales as $\epsilon_{\text{long}}\sim Le^{-\kappa N}$. Here, $\kappa$ depends on the probability of errors, and $\kappa \sim 1$ for the probability of errors that is one-third of the threshold \cite{raussendorf3}.
Therefore, entanglement can be generated rapidly over a long distance, e.g. with $N=25$, resulting in $200$~ebits/s, and $\epsilon_{\text{long}}$ remains negligible even for $L\sim10^5$, i.e. an overall distance of $10^6$~km.
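For concreteness, a one-line numerical check of this scaling (values taken from the text):
\begin{verbatim}
import numpy as np

kappa, N, L = 1.0, 25, 1e5      # kappa ~ 1 at one third of the threshold
print(L * np.exp(-kappa * N))   # eps_long ~ 1.4e-06: negligible

# Rate estimate: one remote ebit per N*T, with T ~ 0.2 ms
print(1 / (N * 0.2e-3))         # ~200 ebits/s
\end{verbatim}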
\section{Discussion and conclusion}
In summary, we have proposed a protocol for entangling remote qubits on a two-dimensional noisy quantum network, which is scalable when the number of quantum memories in each node of the network is fixed.
In our protocol, the communication rate decreases only logarithmically with the distance.
The errors tolerable by the protocol presented here are about three orders of magnitude larger than those of possible protocols based on one-dimensional fault-tolerant quantum-computation schemes.
In this paper, we investigated the case in which each node has a five-qubit quantum memory.
Because every node interacts with only one other node at a time, memory qubits can be reused, and indeed two qubits per node are sufficient.
With more memories, entanglement distillation protocols can be used to improve the effective fidelity of quantum channels \cite{distillation}, i.e. increase the error-rate threshold.
This work is supported by the National Research Foundation and the Ministry of Education of Singapore.
D.C. acknowledges the PVE-CAPES program (Brazil). Y.L. acknowledges helpful discussions with Sean Barrett.
\section{Introduction}
\label{sec:introduction}
Data volumes are rapidly increasing in several research fields, as in bioinformatics, particle physics, earth sciences, and more. Next generation sequencing technologies, new particle detectors, recent advances in remote sensing techniques and higher resolutions in general, on both the instrumental and the simulation side, are constantly setting new challenges for data storage, processing and analysis.
Astrophysics is no different, and the upcoming generation of surveys and scientific instruments as the Square Kilometer Array (SKA) \citep{SKA}, the Cherenkov Telescope Array (CTA) \citep{CTA}, the Extremely Large Telescope (ELT) \citep{EELT}, the James Webb Space telescope \citep{jwst}, the Euclid satellite \citep{Euclid} and the eROSITA All-Sky Survey \citep{erosita} will pile up on this trend, bringing the data volumes in the exabyte-scale. Moreover, numerical simulations, a theoretical counterpart capable of reproducing the formation and evolution of the cosmic structures of the Universe, must reach both larger volumes and higher resolutions to cope with the large amount of data produced by current and upcoming surveys. State of the art cosmological N-body hydrodynamic codes (as OpenGADGET, GADGET4 \citep{gadget4} and RAMSES \citep{ramses}) can generate up to 20 petabytes of data out of a single simulation run, which are required to be further post-processed and compared with observational data \citep{2018MNRAS.475..676S,2017A&C....20...52R,2019ASPC..521..567T,2016NewA...42...49H}.
The size and complexity of these new experiments (both observational and numerical) require therefore considerable storage and computing resources for their data to be processed and analyzed, and possibly to adopt new approaches and architectures.
High Performance Computing (HPC) systems including Graphical Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs), together with the so called ``bring computing close to the data" paradigm are thus becoming key players in obtaining new scientific results \citep{asch2018}, not only by reducing the time-to-solution, but also by becoming the sole approach capable of processing datasets of the expected size and complexity.
In particular, even the last steps of the data analysis processes, which could be usually performed on researchers' workstations and laptops, are getting too resource-intensive and progressively required to be offloaded to such systems as well.
Although capable of satisfying the necessary computing and storage requirements, these systems are usually hosted in remote computing centers and managed with queue systems, in order to dynamically share their resources across different users and to optimize the workload and the throughput. This can strongly complicate the user interaction, requiring remote connections for shell and graphical access (as SSH and X protocol forwarding), careful data transfer and management, and scheduler-based access to computing resources which severely limits interactive use of the system.
Bringing along the software required for the analysis can be even more challenging: without a proper setup (in particular with respect to its dependencies), not only can it fail to compile or start, but severe reproducibility issues can also arise \citep{bhandari2019characterization}.
To address these challenges, we see an increasing effort in developing the so called \emph{science platforms} \citep{2020ASPC..527..777T,sciplat1,sciplat2,sciserv}. A science platform (SP) is an environment designed to offer users a smoother experience when interacting with remote computing and storage resources, in order to mitigate some of the issues outlined above.
In science and, more specifically, in astronomy, a number of SPs have been designed and developed over the past years.
CERN SWAN \citep{piparo2018swan} represents CERN's effort to build towards the science platform paradigm. SWAN is a service for interactive, web-based data analysis which makes Jupyter Notebooks widely available on CERN computing infrastructure together with a Dropbox-like solution for data management. However, as of today, this solution does not provide support for applications other than the Jupyter Notebooks and a built-in shell terminal, does not allow using custom or graphical software environments, and requires heavy system-level integration in order to be used on top of existing computing resources.
ESA Datalabs \citep{datalabs} is a science platform specific to astronomy and astrophysics. Similarly to CERN SWAN, it allows users to work on ESA's computing infrastructure using interactive computing environments as Jupyter Lab and Octave (or to choose from pre-packaged applications as TOPCAT). Datalabs is mainly focused on enabling users to gain direct access to ESA's datasets, it does not support using custom software environments, and it is not an open source project.
The Large Synoptic Survey Telescope (LSST) developed a similar science platform \citep{juric2017lsst}, based on a set of integrated web applications and services through which the scientific community will be able to ``access, visualize, subset and analyze LSST data". The platform vision document does not mention applications other than the Jupyter Notebooks, nor support for custom or graphical software environments, and refers to its own computing architecture.
There are also a number of initiatives entirely focused on supporting Jupyter Notebooks on cloud and HPC infrastructures (such as \citep{hpcnotebooks}, \citep{metabolomicsjupyter}, \citep{milliganjupyter} and \citep{castronova2018general}), which might fall in our SP definition to some extent; in particular, in Astronomy and Astrophysics it is worth mentioning SciServer \citep{sciserv}, Jovial \citep{jovial} and CADC Arcade \citep{canfar}.
Lastly, it has to be noted that the private sector is moving fast with respect to resource-intensive and interactive data analysis, mainly driven by the recent advances in artificial intelligence and machine learning. In this context, we want to cite Google Colab \citep{bisong2019google} and Kaggle Notebooks \citep{kagglenotebooks}, which are built around heavily customised versions of the Jupyter Notebooks, and Azure Machine Learning \citep{azureml}, which provides a nearly full-optional SP specifically targeted at machine learning workflows.
While on one hand all of the above mentioned SPs do make it easier to access and use remote computing resources, on the other, since they are mainly focused on web-based and integrated analysis environments built on top of Jupyter Notebooks or similar software, they also introduce two main drawbacks:
\begin{enumerate}
\item users are restricted in using pre-defined software packages, libraries and environments, which besides constraining their work can also lead to reproducibility issues, and
\item graphical software environments as remote desktops and GUI applications are supported only to a limited extent, if not completely unsupported.
\end{enumerate}
Moreover, the deployment options for most of the SPs developed today rely on technologies originating from the IT industry (e.g. Kubernetes) and require deep integration at system-level, which is often hard to achieve in the framework of HPC clusters and data-intensive systems. This is not only because of technological factors and legacy aspects, but also because of a generalized pushback against exogenous technologies from some parts of the HPC community \citep{nih_hpc,nih2_hpc,nih3_hpc,nih4_hpc}.
In this paper we present a science platform which aims at overcoming these limitations: \textit{Rosetta}. Built on top of a novel architecture based on framing user tasks as microservices - independent and self-contained units - Rosetta allows to fully support custom software packages, libraries and environments, including remote desktops and GUI applications, besides standard web-based analysis environments as the Jupyter Notebooks. Its user tasks are implemented as software containers \citep{suse_cont}, which allow for safe, effective and reproducible code execution \citep{boettiger2015introduction}, and that in turn allows users to add and use their own software containers on the platform.
Rosetta is also designed with real-world deployment scenarios in mind, and thus to easily integrate with existing computing and storage resources including HPC clusters and data-intensive systems, even when they do not natively support containerization.
Although astronomy remains its mainstay (Rosetta has been developed in the framework of the EU funded project ESCAPE\footnote{ESCAPE aims to address the open science challenges shared by SKA, CTA, KM3Net, EST, ELT, HL-LHC, FAIR as well as other pan-European research infrastructures as CERN, ESO, JIVE in astronomy and particle physics.}), Rosetta can virtually support any science and technology domain.
This paper is organized as follows. In Sections \ref{sec:architecture}, \ref{sec:implementation} and \ref{sec:security}, we discuss the architecture of the Rosetta platform, its implementation and the security aspects.
This is followed, in Section~\ref{sec:rosetta}, by an overview of the platform from a user perspective.
Next, we present the deployment and usage scenario in a real production environment and a few use cases we are supporting (Section~\ref{sec:usecases}), leaving the last section to conclusions and future work.
\section{Architecture}
\label{sec:architecture}
Rosetta's architecture is entirely designed to provide simplified access to remote, dynamically allocated computing and storage resources without restricting users to a set of pre-defined software packages, libraries and environments.
It unfolds in two main components: the \textit{platform architecture} and the \textit{task orchestration architecture}.
The platform architecture follows a standard approach where a set of services implement the various functionalities, and it is schematized in Figure~\ref{fig:arch}. These comprise a web application service for the main application logic and the web-based UI, a database service for storing internal data and a proxy service for securing the connections. The web application service functionalities can be further grouped in modules which are responsible for managing the software containers, interacting with the computing and storage resources, orchestrating the user tasks, handling the user authentication and so on.
In particular:
\begin{itemize}
\item \emph{Software} functionalities allow to track the software containers available on the platform, their settings and container registries\footnote{A container registry is a place where container images are stored, which can be public or private, and deployed both on premises or in the Cloud. Many container registries can co-exist at the same time.};
\item \emph{Computing} functionalities allow to interact with both standalone and clustered computing resources, hosted either on premises (e.g. via Openstack) or on cloud systems (e.g. on Amazon AWS);
\item \emph{Storage} functionalities allow browsing and operating on local and shared file system (as Ext4, NFS, BeeGFS);
\item \emph{Task} functionalities allow submitting and stopping tasks as well as viewing their logs, by interacting with the computing resources workload management systems (WMSs) as Slurm and Kubernetes and/or their container engines (e.g. Docker, Singularity, Podman);
\item \emph{Account} functionalities provide user account and profile management features including user registration, login and logout, supporting both local and external authentication (e.g. OpenID Connect, Shibbolet).
\end{itemize}
\begin{figure}[htpb]
\centering \includegraphics[scale=0.18]{images/architecture}
\caption{Rosetta main architecture. The first level of abstraction consists of the proxy, database and web application services. The web application service is further broken down into its main components (software, computing, storage, tasks and account) together with their real-world counterparts, some examples of which are given in the right part of the figure.}
\label{fig:arch}
\end{figure}
Rosetta's task orchestration architecture follows instead a novel, microservice-oriented architecture \citep{O1-124_adassxxx} based on software containers.
Microservices \citep{newman2015building} are independent, self-contained and self-consistent units that perform a given task, which can range from a simple functionality (e.g. serving a file to download) to complex computer programs (e.g. classifying images using a neural network). They are interacted with using a specific interface, usually a REST API over HTTP, which is exposed on a given port.
Microservices fit naturally in the containerisation approach, where each microservice runs in its own container, isolated from the underlying operating system, network, and storage layers.
User tasks in Rosetta are thus always executed as software containers, and treated as microservices. Rosetta can therefore stay agnostic with respect to the task interface, some examples of which include a Jupyter Notebook server, a web-based remote desktop or a virtual network computing (VNC) server, but also a secure shell (SSH) server with X protocol forwarding is a perfectly viable choice.
One of the main features of this approach, where user tasks are completely decoupled from the platform, is to make it possible for the users to add their own software containers. There is indeed no difference between ``platform'' and ``user'' containers, as long as they behave as a microservice. Rosetta users can thus upload their own software containers on a container registry, add them in the platform by setting up a few parameters (as the container image and the interface port), and then use them for their tasks.
In order to make use of this architecture for user tasks orchestration, Rosetta needs to be able to submit to the computing resources a container for execution, and to know how to reach it (i.e. on which IP address).
These functionalities are standard and built-in in most modern container orchestrators (e.g. Kubernetes); however, as mentioned in the introduction, Rosetta has been designed to also support computing resources not natively supporting containerized workloads (e.g. HPC clusters and data-intensive systems). On these computing resources, also depending on the WMS and container engine used, some key features might not be available, as full container-host filesystem isolation, network virtualization and TCP/IP traffic routing between the containers.
To work around these missing features, Rosetta relies on an \emph{agent}, which is a small software component in charge of helping to manage the task container life cycle. Its main features comprise setting up the environment for the container execution, managing dynamic port allocation, reporting the host IP address to the platform, and running the container itself. The agent internal logic is described in more detail in section \ref{subsec:tasks}.
When a container is started, its interface has to be made accessible by the user. This is achieved first by making the interface port reachable on the internal network between the computing resource and Rosetta, and then by exposing it to the outside world through Rosetta itself, thus making it accessible by the user.
The first step can make use of simple TCP/IP tunnels as well as more sophisticated techniques usually available in modern container orchestrators and WMSs, while the second one can be accomplished either by directly exposing the task interface as-is or by relaying on a proxy service, which also allows to enforce access control and connection encryption.
Once tasks are executed and their interfaces made accessible, no further operations are required, and the users can be looped in.
A diagram of this flow is presented with two examples: the first using a WMS supporting containerized workloads with direct connection to the task interface (Figure~\ref{fig:task_wms_direct}), the second using the agent to run the task container and relaying on the proxy for connecting to the task interface (Figure~\ref{fig:task_agent_proxy}).
\begin{figure}[htbp]
\centering\includegraphics[scale=0.20]{images/task_wms_direct.png}
\caption{Rosetta user task orchestration using the computing resource's WMS and a direct connection to the task interface through a TCP/IP tunnel.}
\label{fig:task_wms_direct}
\end{figure}
\begin{figure}[htbp]
\centering\includegraphics[scale=0.20]{images/task_agent_proxy.png}
\caption{Rosetta user task orchestration using the agent and the proxy service on top of a TCP/IP tunnel for connecting to the task interface.}
\label{fig:task_agent_proxy}
\end{figure}
\section{Implementation}
\label{sec:implementation}
Rosetta is entirely built using open-source technologies, in particular Python and the Django web framework, and released as an open source project\footnote{\url{https://www.ict.inaf.it/gitlab/exact/Rosetta}}.
Other technologies include HTML and JavaScript for the UI, Postgres for the database\footnote{The database service can be replaced by any other database supported by Django.} and Apache for the proxy. The platform services (not to be confused with the user tasks software containers) are containerised using the Docker engine and using Docker Compose as the default orchestrator\footnote{Other orchestrators can be supported as well, e.g. Kubernetes.}. Besides the web application, database and proxy services, Rosetta includes an optional container registry service, which can be used to store software containers locally, and a test Slurm cluster service for testing and debugging.
Rosetta deployment tools provide a set of management scripts to build, bootstrap and operate the platform and a logging system capable of handling both user-generated and system errors, exceptions and stack traces.
The web application functionalities are handled with a combination of Django object–relational mapping (ORM) models and standard Python functions and classes.
The ORM schema, which represents how the ORM models are actually stored in the database, is summarized in Figure \ref{fig:ORM}. In the following subsections we will describe their implementation according to the grouping introduced in section \ref{sec:architecture}: Software, Computing, Storage, Tasks and Account.
\begin{figure}[htpb]
\includegraphics[scale=0.25]{images/ORM}
\caption{The Rosetta Django ORM schema, showing the various models and their relationships. Some minor and less relevant models as the user profile, the login tokens and the key pairs have been excluded for the sake of simplicity.}
\label{fig:ORM}
\end{figure}
\subsection{Software}
\label{subsec:containers}
Software lives in Rosetta only as software containers. Software containers are represented using a Django ORM model which acts as a twin of the ``real'' container, providing metadata about the container itself. Rosetta relies on Open Container Initiative (OCI) containers, which must be stored on an OCI-compliant container registry.
The \verb`Container` ORM model has a \verb`name` and a \verb`description` fields to represent the container on Rosetta, and a series of attributes to identify its image: the \verb`registry` (to set on which container registry it is hosted), the \verb`image_name` (to locate it on the registry) and the \verb`image_tag` (to set a specific container version).
The \verb`image_arch`, \verb`image_os` and \verb`image_digest` attributes provide instead more fine-grained control in order to uniquely identify the image, and should be used in production environments. A container image is indeed uniquely identified on an OCI registry only if using, besides its name, either a triplet of tag, architecture and OS or an image hash digest (usually generated with SHA-256). This is because on OCI registries, multiple images can be assigned to the same tag, in order to enable multi-OS and multi-architecture support. Moreover, it has also to be noted that while a tag can be re-assigned, a digest is an immutable identifier and ensures reproducibility.
Containers can be registered in Rosetta as platform containers or user containers. A platform container is not associated with a specific user and thus available for all of them, while a user container belongs to and is accessible by a specific user only, according to its \verb`user` attribute. Containers can also be shared within (and made accessible only to) a specific \verb`group`.
An \verb`interface_port` attribute lets Rosetta know on which port the container will expose its interface, and the \verb`interface_protocol` sets the corresponding protocol (e.g. HTTP, SSH, VNC etc.).
The \verb`interface_transport` (defaulted to TCP/IP) can be used to cover non-standard scenarios (e.g. if using UDP).
Since, as explained in Section \ref{sec:architecture}, the container interfaces are made accessible to the outside world, they need to be secured. For this to happen, Rosetta allows to set up a one-time password or token at task creation-time, to be used for accessing the task interface afterwards. Task interfaces can get password-protected in two ways: by implementing a password-based authentication at task-level, or by delegating it to the proxy service. In the first case, the container must be built to support this feature and must be registered on the platform with the extra \verb`supports_interface_auth` attribute set to \verb`True`. Rosetta can then forward the password or token to the container via an environment variable. Instead, if the container makes use of an HTTP-based interface, it can delegate its access control to the proxy service, and just expose a plain, unprotected interface over HTTP. In this case, Rosetta will set up the proxy service in order to enforce user authentication when accessing the task interface, and encrypt it using SSL. Delegating the task authentication to the proxy service is the default method for HTTP-based interfaces, since it is far more secure than leaving the authentication to be implemented at task-level, as will be discussed in Section~\ref{sec:security}.
In order to support container engines missing port mapping capabilities, Rosetta provides a mechanism to let containers receive instructions on which port to start their interface on. As already mentioned in Section \ref{sec:architecture}, while most industry-standard container engines can autonomously manage TCP port mapping between containers and their host to avoid conflicts with ports already allocated (either by another service, by another container or by another instance of the same container), some of them cannot (e.g. Singularity).
In this case, the Rosetta agent can provide a specific port to the container where to make its interface to listen on, which is chosen between the free ephemeral ports of the host and passed to the container via an environment variable. To let Rosetta (and the agent) know that a given container supports this mechanism, its extra attribute \verb`supports_custom_interface_port` must be set to \verb`True` (and the \verb`interface_port` attribute is then discarded).
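Putting these attributes together, a minimal sketch of the \verb`Container` model could look as follows (field names are those described above, while field types and options are illustrative assumptions, not the actual Rosetta source):
\begin{verbatim}
from django.db import models
from django.conf import settings

class Container(models.Model):
    # null user => platform container, otherwise a user container
    user = models.ForeignKey(settings.AUTH_USER_MODEL, null=True,
                             blank=True, on_delete=models.CASCADE)
    group = models.ForeignKey('auth.Group', null=True, blank=True,
                              on_delete=models.SET_NULL)
    name = models.CharField(max_length=255)
    description = models.TextField(blank=True)
    registry = models.CharField(max_length=255)
    image_name = models.CharField(max_length=255)
    image_tag = models.CharField(max_length=255, default='latest')
    image_arch = models.CharField(max_length=36, blank=True)
    image_os = models.CharField(max_length=36, blank=True)
    image_digest = models.CharField(max_length=96, blank=True)
    interface_port = models.IntegerField(null=True, blank=True)
    interface_protocol = models.CharField(max_length=36, default='http')
    interface_transport = models.CharField(max_length=36,
                                           default='tcp/ip')
    supports_interface_auth = models.BooleanField(default=False)
    supports_custom_interface_port = models.BooleanField(default=False)
\end{verbatim}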
Rosetta comes with a number of base containers for GUI applications, generic remote desktops and Jupyter Notebooks, which can be easily extended to suit several needs:
\begin{itemize}
\item JupyterNotebook, the official Jupyter Notebook container extended to support custom interface ports;
\item GUIApplication, a container built to run a single GUI application with no desktop environment;
\item MinimalDesktop, a desktop environment based on Fluxbox where more than one application can be run in parallel;
\item BasicDesktop, a desktop environment based on Xfce for tasks requiring common desktop features as a file manager and a terminal.
\end{itemize}
The GUIApplication and Desktop containers make use of KasmVNC, a web-based VNC client built on top of modified versions of TigerVNC and NoVNC, which provides seamless clipboard sharing between the remote application or desktop and the user's local desktop environment, as well as dynamic resolution changes in order to always fit the web browser window; these are essential features in everyday use.
\subsection{Computing}
\label{subsec:computing}
Computing resources are divided in two main types: \textit{standalone} and \textit{clusters}. The first ones may or may not have a WMS in place, while the second ones always do. If a computing resource has no WMS, the task execution is synchronous, otherwise the execution is asynchronous and the tasks are queued.
The Django ORM model class used to represent computing resources is named \verb`Computing`, and it includes a \verb+type+, a \verb`name` and a \verb`description` fields for identifying a specific computing resource within Rosetta. A set of attributes describe how to access it and to submit user tasks: the \verb`access_mode` specifies how the computing resource is accessed (i.e. over SSH, using a command line interface (CLI), or a set of APIs); the \verb`auth_mode` specifies how the platform gets authorized on the computing resource; the \verb`wms` specifies the WMS in place (or if there is none) and the \verb`container_engine` specifies which container engines (and runtimes) are available. With respect to the \verb`container_engine`, if the WMS natively supports containerized workloads and there is no need of running tasks using a specific container engine or runtime, then it can be just set to the value ``\verb+internal+''.
Some example combinations of these attributes are reported in Table \ref{table:computing}, where each row corresponds to a physical computing resource.
The first row represents a classic HPC cluster using Slurm as WMS and Singularity as container engine, and requiring an accredited cluster user to submit tasks over SSH using the Slurm command line interface.
The second row represents the same cluster but supporting, besides Singularity, also the Docker engine with both runC and Kata runtimes, in order to allow Rosetta (or its users) to choose the best one for a given task.
The third row represents yet the same cluster but accessed over Slurm REST APIs using JSON web tokens (JWT) for authentication.
The fourth and fifth rows represent instead standalone computing resources, using the Docker container engine, and accessed using SSH as a standard user for the fourth and the Docker REST APIs with a platform certificate for the fifth.
The sixth, seventh and eight rows all use computing resources managed with Kubernetes, and in the eight row the container runtimes available within Kubernetes are explicitly stated.
The last row is instead an example using Fargate, a hosted container execution service from Amazon Web Services (AWS) built on top of their Elastic Container Service (ECS), and accessed using its proprietary APIs.
When deploying Rosetta on a commercial cloud infrastructure as AWS or Google Cloud Platform (GCP), there are two options. The first one is to treat such infrastructures as transparent, and simply use standard (i.e. not proprietary) access modes as SSH, Slurm, or Kubernetes. In this case there is no difference between using Rosetta with computing resources deployed on premises or on such commercial cloud systems. The second option is to instead integrate at a deeper level, using AWS or GCP proprietary APIs and/or clients to automatically start new virtual machines upon request, or to use some of their native scheduling systems, as the last example of table \ref{table:computing}.
The implementation work to support all of the combinations of access and authentication modes, container engines and WMSs is still ongoing, as we privileged SSH and Slurm since they fit well in the application scenarios we have encountered so far. However, we wanted to lay down a general framework in order to easily expand the platform in the future.
\begin{table*}[htpb]
\begin{center}
{\small %
\begin{tabular}{ |c|c|c|c|c|}
\hline
&\verb`access_mode` & \verb`auth_mode` & \verb`wms` & \verb`container_engines` \\
\hline
Computing resource \#1 & SSH+CLI & user keys & Slurm & Singularity \\
Computing resource \#2 & SSH+CLI & user keys & Slurm & Docker[runC,Kata],Singularity \\
Computing resource \#3 & API & JWT & Slurm & Docker,Singularity \\
Computing resource \#4 & SSH+CLI & user keys & none & Docker \\
Computing resource \#5 & API & platform cert. & none & Docker \\
Computing resource \#6 & CLI & platform cert. & Kubernetes & internal \\
Computing resource \#7 & SSH+CLI & platform keys & Kubernetes & internal \\
Computing resource \#8 & API & platform cert. & Kubernetes & internal[runC,Kata] \\
Computing resource \#9 & API & platform cert. & Fargate & internal \\
\hline
\end{tabular}
}
\caption{Examples of various combinations of computing resource attributes. In order to schedule containerized workloads on a given computing resource, Rosetta needs to know how to access it (\texttt{access\symbol{95}mode}), how to get authorized (\texttt{auth\symbol{95}mode}), if and what WMS to use (\texttt{wms}), and which container engines are available (\texttt{container\symbol{95}engines}), possibly with their runtimes.}
\label{table:computing}
\end{center}
\end{table*}
The \verb`Computing` model describes the computing resource architectures as well, and in particular the \verb`arch` attribute defines the native architecture (e.g. amd64, arm64/v8), the \verb`supported_archs` attribute lists extra supported architectures (e.g. 386 on amd64 architectures) and the \verb`emulated_archs` attribute lists the architectures that can be emulated.
Computing resources can be also assigned to a specific group of users, using the \verb`group` attribute which, if set, restricts access to the group members only, and the \verb`conf` attribute can be used to store some computing resource-specific configurations (e.g. the host of the computing resource). Lastly, the \verb+Computing+ ORM model implements an additional \verb`manager` property which provides common functionalities for accessing and operating on the real computing resource, as submitting and stopping tasks, viewing their logs, and executing generic commands. This property is implemented as a Python function which upon invocation instantiates and returns an object sub-classing the \verb`ComputingManager` class, based on the computing resource \verb`type`, \verb`access_mode`, \verb`auth_mode` and \verb`wms` attributes.
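A sketch of how such a factory property might look is given below (the concrete manager subclass names and stubs are illustrative assumptions, not the actual Rosetta source):
\begin{verbatim}
from django.db import models

class ComputingManager:
    def __init__(self, computing):
        self.computing = computing
    # submit/stop/log/command methods would go here

class SlurmSSHClusterComputingManager(ComputingManager):
    pass

class StandaloneSSHComputingManager(ComputingManager):
    pass

class Computing(models.Model):
    # ... fields as described above (type, access_mode, auth_mode, wms) ...

    @property
    def manager(self):
        if self.access_mode == 'SSH+CLI' and self.wms == 'slurm':
            return SlurmSSHClusterComputingManager(computing=self)
        if self.access_mode == 'SSH+CLI' and not self.wms:
            return StandaloneSSHComputingManager(computing=self)
        raise NotImplementedError(
            'No manager for "{}" with WMS "{}"'.format(
                self.access_mode, self.wms))
\end{verbatim}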
Computing resources which are accessed using SSH can be accessed both as a standard user (using its account on the computing resource) or using a superuser (e.g. a ``platform'' user), depending on the deployment requirements.
In order to access using a standard user on the computing resource, Rosetta generates a dedicated private/public key pair, the public key of which is required to be added to the computing resource account by the user. To instead access using a ``platform'' superuser (and thus using the same user for orchestrating all of the user tasks), a dedicated account and key pairs are required to be set up both on the computing resource and within Rosetta.
Accessing computing resources using SSH requires no integration with the existent infrastructure at all, provided that standard SSH is available and a container engine is installed. For this reason, it perfectly fits our requirement of operating on HPC clusters and data-intensive systems where more complex integrations are hard to achieve.
\subsection{Storage}
Storage functionalities provide a way of defining, mounting and browsing data storages. A \verb+Storage+ is defined by a set of attributes, which include a \verb+name+, a \verb+type+, an \verb+auth_mode+ and an \verb+access_mode+.
If a storage is attached to a computing resource, then the \verb+computing+ attribute can be set. In this case, if the storage and the computing resource share the same access mode, the \verb+access_through_computing+ option can be ticked so that Rosetta can just use the computing resource one. The \verb+group+ attribute, if set, specifies the set of users authorized to access the storage. The \verb+base_path+ attribute sets the internal path to the storage, and supports using two variables: the \verb+$USER+, which is substituted with the Rosetta internal user name, and the \verb+$SSH_USER+, which is substituted with the SSH username (if the access method is based on SSH). The \verb+bind_path+ sets instead where the storage is made accessible within the software containers. If a data storage is attached to a computing resource and its \verb+bind_path+ is set, it will then be made accessible from all of the containers running on that computing resource, under the location specified by the \verb+bind_path+.
For example, a storage mounted on the \verb+/data+ mount point of an SSH-based computing resource (and represented in Rosetta using \verb+generic_posix+ as type and \verb~SSH+CLI~ as access method) could have a \verb+base_path+ set to \verb+/data/users/$USER+ and a \verb+bind_path+ set to \verb+/storages/user_data+, in order to separate data belonging to different users at orchestration-level.
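The substitution logic itself is straightforward, and can be sketched as follows (a minimal sketch; function and argument names are hypothetical):
\begin{verbatim}
# Sketch of the base_path variable substitution
# (function and argument names are hypothetical).
def expand_base_path(base_path, username, ssh_username=None):
    path = base_path.replace('$USER', username)
    if ssh_username is not None:  # SSH-based access modes only
        path = path.replace('$SSH_USER', ssh_username)
    return path

# expand_base_path('/data/users/$USER', 'jdoe')
# -> '/data/users/jdoe'
\end{verbatim}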
At the moment only POSIX file systems are supported; they must be mounted on the various computing resources and are in turn exposed inside the containers using the standard binding mechanism offered by most container engines. Any file system that can be mounted as such (e.g. using FUSE) is therefore automatically supported, such as CephFS or Amazon S3.
We envision adding support for other storage types in future releases, as for example object storages, but in this case accessing the storage APIs is up to the application running inside the container, and Rosetta can only act as a file manager. How to provide access in a standardized way to non-POSIX file systems within containers is indeed still an open question.
Storage functionalities also include a set of APIs to provide support for the file manager embedded in the Rosetta web-based UI, which is built on top of the Rich File Manager\footnote{\url{https://github.com/psolom/RichFilemanager}} open source project. These APIs implement common functionalities (such as get, put, dir, rename, etc.) to perform file management operations, the internal logic of which depends on the storage type, making them easy to expand in the future.
\subsection{Tasks}
\label{subsec:tasks}
Tasks are represented using an ORM model and a set of states (\emph{queued}, \emph{running} or \emph{stopped}). Tasks running on computing resources without a WMS are directly created in the running state, while when a WMS is in place they are created in the queued state and set as running only when they get executed.
States are stored in the \verb+state+ attribute of the \verb+Task+ model, which also includes a \verb+name+ and the links with the software container and the computing resource executing the task, plus its options (the \verb+container+, \verb+computing+ and \verb+computing_options+ attributes, respectively).
A set of other attributes as the \verb+interface_ip+, \verb+interface_port+, \verb+tcp_tunnel_port+ and \verb+auth_token+ let Rosetta know how to instantiate the connection to the task (i.e. for setting up the tunnel and/or configuring the proxy service).
Once a task starts on a computing resource, its IP address and port are saved in the corresponding \verb+Task+ fields, and the task is marked as running. If the task was queued, an email is sent to the user with a link to the task, which is particularly useful to let users immediately know when their tasks are ready, thus avoiding wasted computing time on shared systems. Task functionalities also include opening the TCP/IP tunnel to the task interface port and/or configuring the HTTP proxy service in order to provide access to the task interface.
One of the main components of the task management functionalities is the agent, which, as introduced in Section \ref{sec:architecture}, allows Rosetta to seamlessly support both WMSs not natively supporting containerized workloads and container engines missing some key features. In other words, it makes all of the computing resources behave in the same way from a Rosetta perspective. The agent is implemented as a Python script which is served by the Rosetta web application and which can run both as a superuser and as a standard, unprivileged user. When required, Rosetta delivers a bootstrap script on the computing resource which pulls and executes the agent code. As soon as it gets executed, the agent calls back the Rosetta web application and communicates the IP address of its host. If the agent landed on a computing resource using a container engine missing the dynamic port mapping feature, it also searches for an available ephemeral TCP/IP port and communicates it to the web application as well. Lastly, the agent sets up the environment for the user task container, and starts it.
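A conceptual sketch of the agent callback logic is shown below; the URL and payload fields are hypothetical and simplified with respect to the real agent:
\begin{verbatim}
# Conceptual sketch of the agent callback
# (URL and payload fields are hypothetical).
import socket
import requests

def find_free_port():
    s = socket.socket()
    s.bind(('', 0))  # let the OS pick an ephemeral port
    port = s.getsockname()[1]
    s.close()
    return port

def agent_callback(rosetta_url, task_id, needs_port_mapping):
    payload = {'task_id': task_id,
               'ip': socket.gethostbyname(socket.gethostname())}
    if needs_port_mapping:  # engine lacks dynamic port mapping
        payload['port'] = find_free_port()
    requests.post(rosetta_url + '/api/v1/agent/callback',
                  data=payload)
\end{verbatim}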
\subsection{Account}
\label{subsec:account}
Account and profile functionalities provide support for both local and external authentication services (e.g. OpenID Connect). The account linking between local and external identities is based on the user email address, which is the standard approach for this kind of service.
Local and external authentication can co-exist at the same time, provided that if a user originally signed up using an external authentication service, they will then always be required to log in using that service. Whether to allow registering as local users or to rely entirely on external authentication is up to the administrators, and can be configured in the web application service.
Rosetta provides both user-based and group-based authorization, so that computing and storage resources, as well as software containers, can be made available to specific users or subsets of users only.
The user profile also supports some user-based configuration parameters for accessing the computing resources (e.g. the computing resource username if using an SSH-based access mode with user keys). Other minor functionalities, as password recovery, login tokens and time zone settings are provided as well.
\section{Security}
\label{sec:security}
Security of computing systems and web applications is a wide topic, and an extensive discussion is beyond the scope of this article; however, we want to mention the main issues and how we took them into account.
The first layer of security in Rosetta consists in using software containers for the user tasks. The base executable unit in Rosetta is indeed the container itself, meaning that users have no control outside of their containers at all: once a container is sent for execution and Rosetta handles all the orchestration, the user is dropped inside it and cannot escape.
For this reason, even if a container gets compromised, neither the other containers nor the underlying host system are affected.
However, this statement holds only in the measure in which the container engine can guarantee isolation and prevent privilege escalation. The Docker engine has an intrinsic issue in this respect, as it makes use of a daemon running with superuser privileges. Podman, a nearly drop-in replacement for Docker, runs instead in user-space and prevents this kind of issue by design, as does Singularity. Other container engines such as gVisor and Kata push security even further, providing kernel and hardware virtualization, respectively.
Moreover, when Rosetta is integrated on computing resources using SSH-based access, the administrators can opt for revoking direct SSH user access on them, leaving Rosetta - and its containerized tasks - the only access point, thus greatly improving overall security.
With respect to potential malicious software, the first line of defense usually takes place in the container registry. Docker Hub, for example, has a built-in security scanning system, and there are a number of free and open source scanners that can be used for on-premise container registries, such as Klar/Clair\footnote{\url{https://github.com/optiopay/klar}}.
Scanning for malicious software can also be done when executing task containers\footnote{\url{https://docs.docker.com/engine/scan/}}, but not all container engines support this feature. Allowing only containers coming from registries which run security scanning, or implementing these checks along the build pipeline, could be the best approach to protect against malicious software in container images \citep{brady2020docker}.
For what concerns software packages that can be installed at runtime inside the containers, Rosetta does not do any checking, as it would be technically very hard if not impossible. This is a common issue when giving users the freedom to download and execute code, including on commercial platforms such as Google Colab and Kaggle. Even restricting user permissions would not prevent the issue, given that these packages can always just be downloaded and executed from a different location (e.g. a temporary folder). Users downloading and executing malicious software by mistake is therefore very hard to prevent, and has no simple mitigation short of relying on classic antivirus software running inside the containers.
As introduced in Section \ref{sec:architecture}, since Rosetta user task interfaces are made accessible to the outside world, they are required to be secured, both in terms of access control and connection encryption. In this respect, it is necessary to make a distinction between HTTP-based and generic task interfaces. HTTP-based task interfaces can rely on the authentication and SSL encryption provided by the proxy service, and can therefore just use a plain HTTP protocol.
Generic task interfaces (e.g. a VNC or X server) are instead required to be secured at task-level, and it is the responsibility of the task container to enforce it. As explained in subsection \ref{subsec:containers}, access control setup is in this case achieved by forwarding to the task a one-time password set by the user at task creation time, which is then used by the container interface to authenticate the user. Encryption has to be set up at task-level too, and can be provided in first instance using self-signed certificates, or by implementing more complex solutions such as dynamic certificate provisioning.
An important detail in the task security context is that Rosetta makes a strong distinction between standard and power users, through a status switch in their profile. By default, only the latter can set up custom software containers using generic task interface protocols other than HTTP, since handling security at task level (which is always required in this case) is error-prone and must be treated carefully. Standard users can therefore add and use custom software containers for their tasks on the platform only if using an HTTP-based interface, which is in turn forced to be secured by the proxy service.
For what concerns the tunnel from the web application service to the tasks, this is protocol-agnostic (above the TCP/IP transport layer) and is either accomplished by a direct connection on a private and dedicated network (e.g. if using Kubernetes) or using an SSH-based TCP/IP tunnel with the users' public/private keys, as explained in Section \ref{sec:architecture}, and is thus assumed safe.
In terms of web security, we considered potential security risks originating from cross-site request forgery (CSRF), cross-origin resource sharing (CORS), cross-site scripting (XSS), and similar attacks.
The same origin policy (SOP) of modern web browsers is already a strong mitigation for these attacks, and all the platform web pages and APIs (with a few exceptions for internal functionalities) use Django's built-in CSRF token protection mechanism.
However, the SOP policy has limitations \citep{schwenk2017same,chen2018we}, in particular in our scenario where users can run custom (and possibly malicious) JavaScript code from within the platform, either using the Jupyter Notebooks or by other means (e.g. by setting up a task serving a web page).
We therefore focused on isolating user tasks from the rest of the platform even on the web browser side. Using the same domain for both the platform and the user tasks (e.g. \url{https://rosetta.platform/tasks/1}) is indeed definitely not a viable solution, as it does not allow to enforce the SOP at all. Also using dedicated subdomains (e.g. \url{https://task1.rosetta.platform}) has several issues, in particular involving the use of cookies \citep{zalewski2012tangled, zalewski2009browser, squarcina2021can}.
The secure-by-design, safe solution is to serve user tasks from a \textit{separate} domain (e.g. \url{rosetta-tasks.platform}). Then, each task can have its own subdomain (as \url{https://task1.rosetta-tasks.platform}) and stay separated from the main platform domain. However, handling and securing subdomains like this requires wildcard DNS services and SSL certificates, which are not available for many institutional domains \citep{jp_security}, including ours. For this reason, in Rosetta we opted for an intermediate solution: we serve user tasks from a separate domain (e.g. \url{rosetta-tasks.platform}), assigning each of them to a different port, under the same SSL certificate. In this way, the URL to reach task number 1 at \url{https://rosetta-tasks.platform:7001} can be secured by the same SSL certificate covering the URL for task number 2 at \url{https://rosetta-tasks.platform:7002}, while the two are treated as different origins by web browsers.
SSL certificates are indeed port-agnostic, while the SOP (which defines an origin by the triplet of protocol, host and port) is not, thus enabling web browsers to enforce it between tasks 1 and 2, and in general securing all of the user tasks against each other.
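The following snippet illustrates this point: since the SOP defines an origin by the (scheme, host, port) triple, two tasks served from different ports of the same domain are distinct origins:
\begin{verbatim}
# The SOP origin is the (scheme, host, port) triple.
from urllib.parse import urlsplit

def origin(url):
    parts = urlsplit(url)
    return (parts.scheme, parts.hostname, parts.port)

t1 = origin('https://rosetta-tasks.platform:7001')
t2 = origin('https://rosetta-tasks.platform:7002')
assert t1 != t2  # different ports, different origins
\end{verbatim}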
While this approach might lead to some issues with institutional firewalls blocking external port access beyond the standard 80 and 443 ports, we found it to be the right compromise in our environment. Moreover, switching to serving each task from its own subdomain is just a matter of a quick change in the Rosetta proxy service configuration.
\section{User Experience}
\label{sec:rosetta}
From a user perspective, Rosetta presents itself as a web application with a web-based user interface (UI), shown in Figure \ref{fig:main} as it appears upon user login.
The UI, following the architecture presented in section \ref{sec:architecture}, is organised in five main areas: the \emph{Software} section, where to browse for the software containers available on the platform or to add custom ones;
the \emph{Computing} section, where to view the available computing resources;
the \emph{Storage} section, which provides a file manager for the various data storages;
the \emph{Tasks} dashboard, where to manage and interact with the user tasks, including connecting with them and viewing their logs;
and the \emph{Account} pages, where to configure or modify user credentials and access keys.\\
\begin{figure}[H]
\centering\includegraphics[scale=0.1,cfbox=gray 0.01ex 0ex]{images/platform_main_logged.png}
\caption{The Rosetta science platform main page and menu.}
\label{fig:main}
\end{figure}
To run a typical analysis task, the user first accesses the Software section (Figure \ref{fig:platform_containers}) in order to choose (or add) the desired software container. If adding a new software container, the user has to set its registry, image name, tag, the container interface port and protocol, plus some optional advanced attributes (Figure \ref{fig:platform_add_container}). The new container will then be listed together with the other ones, so that it can be chosen for execution.
Once the software container is chosen, the user hits the ``play'' button to create a new task. The platform will then ask the user on which computing resource to run the task, and to set a task name. A one-time password token is also generated, which is usually automatically handled by Rosetta and not required to be entered manually when connecting to the task (Figure \ref{fig:platform_new_task}). For some computing resources, extra options such as the queue or partition name, CPU cores and memory requirements can be set as well. The task is then created and submitted.
As soon as the task is starting up on the computing resource, a ``connect'' button in the task dashboard becomes active. At this point, the user can connect to the task with just one click: Rosetta will automatically handle all the tunneling required to reach the task on the computing resource where it is running, and drop the user inside it
(Figures \ref{fig:platform_task_CASA} and \ref{fig:platform_task_Jupyter}).
Users can transfer files to and from the data storages (and thus the tasks) using the built-in file manager (Figure \ref{fig:platform_storages}), which is an effective solution for light datasets, analysis scripts, plots and results. Larger datasets are instead supposed to be already located on a storage, either because the data repository is located on the storage itself (in a logic of bringing the computing close to the data) or because they have been previously staged using an external procedure.
\\
\\
\begin{figure}[htpb]
\centering\includegraphics[scale=0.1,cfbox=gray 0.01ex 0ex]{images/platform_containers.png}
\caption{Software containers list. For each software entry, a brief description is provided, together with the container image name and a menu from which to select a specific version. The ``play'' button will start a new task with the given software.}
\label{fig:platform_containers}
\end{figure}
\begin{figure}[htpb]
\centering\includegraphics[scale=0.1,cfbox=gray 0.01ex 0ex]{images/platform_add_container.png}
\caption{Adding a new software container. Besides a name and a brief description, the key fields are the container registry, image and tag, plus the port and protocol of its interface. }
\label{fig:platform_add_container}
\end{figure}
\begin{figure}[htpb]
\centering\includegraphics[scale=0.1,cfbox=gray 0.01ex 0pt]{images/platform_new_task.png}
\caption{Last step of new task creation, after selecting the software container and a computing resource. The interface asks to enter a task name and possibly other task parameters, such as the required number of CPUs, memory, or queue name.}
\label{fig:platform_new_task}
\end{figure}
\begin{figure}[htpb]
\centering\includegraphics[scale=0.1,cfbox=gray 0.01ex 0pt]{images/platform_task_CASA.png}
\caption{A Rosetta user task running a GUI application from the CASA suite, in a remote desktop environment. The remote desktop server is web-based, and supports dynamic resolution changes and seamless clipboards sharing with the client, allowing for a smooth user experience.}
\label{fig:platform_task_CASA}
\end{figure}
\begin{figure}[htbp]
\centering\includegraphics[scale=0.11,cfbox=gray 0.01ex 0pt]{images/Jupyter.png}
\caption{A Rosetta user task running a Jupyter Notebook, displaying a plot using Numpy and Matplotlib. The authentication for the Notebook server is handled by the Rosetta proxy service, which also secures the connection over SSL.}
\label{fig:platform_task_Jupyter}
\end{figure}
\begin{figure}[htpb]
\vspace{2.5mm}
\hspace*{0.08in}
\includegraphics[scale=0.1,cfbox=gray 0.01ex 0pt]{images/platform_storages.png}
\caption{The Rosetta built-in file manager, which allows for browsing data storages and to upload or download data files. While not suitable for large datasets, it is an effective tool for lighter ones as well as analysis scripts, plots and results.
\\}
\label{fig:platform_storages}
\end{figure}
\section{Deployment and use cases}
\label{sec:usecases}
Rosetta is deployed in production at the computing center of INAF - Osservatorio Astronomico di Trieste \citep{2020ASPC..527..303B}, using an external, aggregated authentication system named RAP \citep{tinarelli2020authentication} and serving a set of different users with different software requirements.
To support our user community, we offer a pre-defined portfolio of containerized applications that span from generic data analysis and exploration tools (such as IPython, R and Julia) to specific astronomy and astrophysics codes. These include common astronomical data reduction software and pipelines such as IRAF, CASA, DS9 and Astropy, but also cosmological simulation visualization and analysis tools, and project-specific applications and codes. All of them are listed in the Software section of Rosetta and are accessible from the users' web browsers by running a task instance.
In the following we discuss more in detail four different use cases among the various projects we support:
\textit{the LOFAR pipelines},
\textit{the SKA data challenges},
\textit{the Astrocook quasar spectral analysis software},
and \textit{the HPC FPGA bitstream design}.
\subsection{The LOFAR pipelines}
The software collection for the LOFAR community consists of a set of tools and pipelines used to process LOFAR data, such as the Prefactor and DDFacet data reduction codes \citep{tasse2018faceting}, for which we created a set of software containers.
A typical run of the LOFAR data processing pipelines lasts several days, and requires significant computing resources (in terms of RAM, CPUs and storage) to process terabytes of data ($\sim$ 15 TB). Several checks are necessary during a pipeline run to verify the status of the data processing and the convergence of the results.
In this context, we are using Rosetta to run the pipelines within a software container that provides both the pipelines themselves and visual tools to check the status of the processing phase. Rosetta tasks run on an HPC cluster managed using the Slurm WMS, which allocates a set of resources in terms of RAM and CPUs as requested by the scientists in the task creation phase. These tasks compete with other standard Slurm jobs running on the cluster, thus ensuring an optimized allocation of the available resources among all users.
Scientists running the pipelines in this mode are not required to interact with the Slurm WMS or to manually deploy any software on the cluster; instead, they can just rely on Rosetta and update the containers with new software if necessary.
The container source codes are available online as part of the LOFAR Italian collaboration\footnote{\url{https://www.ict.inaf.it/gitlab/lofarit/containers}} and, once built, the containers are registered in an INAF private container registry in order to account for both public and private codes, as required by the different LOFAR Key Projects collaborations.
\subsection{The SKA data challenges}
INAF participated in the SKA Data Challenges\footnote{\url{https://sdc2.astronomers.skatelescope.org/sdc2-challenge}} as infrastructure provider. The purpose of these challenges is to allow the scientific community to get familiar with the data that SKA will produce, and to optimise their analyses for extracting scientific results from them.
The participants in the second SKA Data Challenge analysed a simulated dataset of 1 TB in size, in order to find and characterise the neutral hydrogen content of galaxies across a sky area of 20 square degrees. To process and visualize such a large dataset, it was necessary to use at least 512 GB of RAM, and INAF offered a computing infrastructure where such resources were available.
We used Rosetta to provide simplified access to this computing infrastructure (an HPC cluster managed using the Slurm WMS) and, as for the LOFAR pipelines use case, we provided a software container offering all of the tools and applications necessary to complete the challenge (such as CASA, CARTA, WSClean, Astropy and Sofia) in a desktop environment.
Most notably, users were able to ask for specific computing resource requirements when starting their analysis tasks (512 GB of RAM, in this case), and the cluster parallel file system used to store the dataset provided high I/O performance ($>$ 4 GB/s) and plenty of disk space, so that users could focus on the scientific aspects of the challenge and not worry about orchestration and performance issues.
\subsection{The Astrocook quasar spectral analysis software}
Astrocook \citep{cupani2020astrocook} is a quasar spectral analysis software built with the aim of providing many built-in recipes to process a spectrum. While this software is not necessarily resource-intensive in general, applying the various recipes can require considerable computing power.
Astrocook comes as a GUI application with some common and less common Python dependencies which are sometimes hard to install (such as Astropy, StatsModels and wxPython), and it is a great example of how to use Rosetta to provide one-click access to a GUI application which might require some extra computing power.
Figure \ref{figure:astrocook} shows Astrocook running in a Rosetta task on a mid-sized, standalone computing resource, and accessed using the web-based remote desktop interface.
\begin{figure}
\centering\includegraphics[scale=0.11,cfbox=gray 0.01ex 0pt]{images/Astrocook_desktop.png}
\caption{The Astrocook quasar spectral analysis software running in a Rosetta task on a mid-sized computing resource. The ``Spectrum'' and ``Spectrum detail'' windows are the main components of Astrocook, while the ``Sessions'' and ``System table'' windows recap the analysis steps and parameters.}
\label{figure:astrocook}
\end{figure}
\subsection{The HPC FPGA bitstream design}
Field Programmable Gate Arrays (FPGAs) can be used as accelerators in the context of physics simulations and scientific computing, and they have been adopted as low-energy acceleration devices for exascale testbeds.
One of these testbeds is ExaNeSt's (European Exascale System Interconnect and Storage) prototype \cite{KATEVENIS201858}, a liquid-cooled cluster composed of proprietary Quad-FPGA daughterboard computing nodes, interconnected with a custom network and equipped with a BeeGFS parallel filesystem.
To use this cluster it is necessary to re-engineer codes and algorithms \cite{9041710, 978-3-030-32520-6,computation8020034}: the substantial programming effort required to program FPGAs using the standard approach based on Hardware Description Languages (HDLs), together with the resulting weak code portability, have long been the main challenges in using FPGA-enabled HPC clusters such as the ExaNeSt prototype.
However, thanks to the High Level Synthesis (HLS) approach, FPGAs can be programmed using high level languages, thus greatly reducing the programming effort and improving portability. HLS tools use high level input languages such as C, C++, OpenCL and SystemC which, after a process involving intermediate analysis, logic synthesis and algorithmic optimization, are translated into FPGA-compatible code, the so-called ``bitstream'' files.
This last step in particular requires a considerable amount of resources: 128 GB of RAM, extensive multi-threading support and 100 GB of hard disk space are the requirements for creating the bitstream files for the above mentioned FPGA-enabled HPC cluster. Moreover, from a user perspective, the design of an FPGA bitstream requires the interactive use of several GUI applications (as nearly all the HLS tools are) and letting the software work for several hours.
Rosetta was adopted as the primary tool for programming INAF's FPGA cluster prototype, and suited the use case very well. By enabling access to persistent, web-based remote desktops with the required computing and storage resources, users were indeed capable of using HLS tools from their standard computing equipment, and of letting them work for as many hours as needed, even if disconnecting and reconnecting the day after.
\\
\\
\\
\section{Discussion}
\label{sec:discussion}
In designing and implementing Rosetta we faced two main challenges: supporting custom software packages, libraries and environments, and integrating with computing resources not natively supporting containerized workloads.
We addressed the first challenge by developing a novel architecture based on framing user tasks as microservices. This allowed Rosetta to fully support custom software packages, libraries and environments (including GUI applications and remote desktops) and together with software containers allowed to ensure safe, consistent and reproducible code execution across different computing resources.
With respect to the second challenge, it has first to be noted that HPC clusters and data-intensive systems still rely on Linux users for a number of reasons, including accounting purposes and local permission management. This means that most of the containerisation solutions born in the IT industry, which assume superuser privileges, are in general not suitable. For this reason, the Singularity container engine was built to operate exclusively at user-level, and quickly became the standard in the HPC space.
However, Singularity is not designed to provide full isolation between the host system and the containers: by default, directories such as the home folder, \verb+/tmp+, \verb+/proc+, \verb+/sys+, and \verb+/dev+ are all shared with the host, environment variables are exported as they are set on the host, the PID namespace is not created from scratch, and the network and sockets are likewise shared with the host. Also, the temporary file system provided by Singularity in order to make the container file system writable (which is required for some software) is a relatively weak solution, since it is stored in memory, often with a default size of 16 MB, and thus very easy to fill up.
We therefore had to address all these issues before being able to use Singularity as a container engine from Rosetta. In particular, we used a combination of command line flags (\verb`--cleanenv`, \verb`--containall`, \verb`--pid`) and ad-hoc runtime sandboxing for the key directories which require write access (such as the user home), orchestrated by the agent. This step was key to the success of our approach and proved to remove nearly all the issues related to running Singularity containers on different computing systems.
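As an illustration, the container invocation assembled by the agent can be sketched as follows; paths and the image name are purely illustrative:
\begin{verbatim}
# Sketch of a Singularity invocation as assembled by the
# agent (paths and image name are illustrative).
import subprocess

cmd = ['singularity', 'run',
       '--cleanenv',    # do not export host env variables
       '--containall',  # fresh namespaces, minimal mounts
       '--pid',         # separate PID namespace
       # ad-hoc writable sandbox for the user home:
       '--bind', '/tmp/rosetta/task1/home:/home/user',
       'docker://registry.example.org/user/container:v1']
subprocess.run(cmd, check=True)
\end{verbatim}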
Similarly, we had to work around a series of features lacking in WMSs not natively supporting containerized workloads (such as Slurm), including container life cycle management itself, network virtualization and TCP/IP traffic routing between the containers, all solved using the agent as explained in the previous sections.
Once we were able to ensure a standardised behaviour of container engines and WMSs, we were able to make task execution uniform across different kinds of computing resources, providing the very same user experience. In this sense, Rosetta can be considered as an umbrella for a variety of computing resources, and can act as a sort of bridge in the transition towards software containers.
\section{Conclusions and future work}
\label{sec:conclusions}
We presented Rosetta, a science platform for resource-intensive, interactive data analysis which runs user tasks as software containers. Its main characteristic lies in providing simplified access to remote computing and storage resources without restricting users to a set of pre-defined software packages, libraries and environments.
To achieve this goal, we developed a novel architecture based on framing user tasks as microservices - independent and self-contained units - which we implemented as software containers. This approach allowed us to fully support custom software packages, libraries and environments, including remote desktops and GUI applications besides standard web-based solutions such as Jupyter Notebooks. Moreover, adopting software containers allowed for safe, effective and reproducible code execution, and enabled us to let our users add and use their own software containers on the platform.
We also took real-world deployment scenarios in mind, and designed Rosetta to easily integrate with existent computing resources, even where they lacked native support for containerized workloads. This proved to be particularly helpful for integrating with HPC clusters and data-intensive systems.
We successfully tested Rosetta for a number of use cases, including the LOFAR data reduction pipelines at INAF computing centers in the context of the ESCAPE project which funded this work, the SKA data challenges, and other minor use cases of our user community.
The benefits of seamlessly offloading data analysis tasks to a sort of ``virtual workstation'', hosted on a computing system capable of providing CPU, RAM and storage resources on request, were immediately clear, removing constraints and speeding up the various activities.
Although astronomy and astrophysics remains its mainstay, Rosetta can virtually support any science and technology domain requiring resource-intensive, interactive data analysis, and it is currently being tested and evaluated in other institutions.
Future work includes adding support for distributed workloads (e.g. MPI, Ray) and for computing resources with mixed architectures, developing a command line interface, integrating with data staging solutions, and continuing the implementation efforts for integrating with current and new WMSs (e.g. Torque, OpenShift, Rancher, Nomad, and more).
\section{Acknowledgements}
\label{sec:ack}
This work was supported by the European Science Cluster of Astronomy and Particle Physics ESFRI Research Infrastructures project, funded by the European Union's Horizon 2020 research and innovation programme under Grant Agreement no. 824064. We also acknowledge the computing center of INAF- Osservatorio Astronomico di Trieste, \citep{2020ASPC..527..303B,2020ASPC..527..307T}, for the availability of computing resources and support.
\section{Introduction} Let $A_0,A_1,\ldots,A_n$ be real symmetric $k\times k$ matrices. The set $$\left\{ (x_1,\ldots, x_n)\in\R^n\mid A_0 +x_1A_1+ \cdots +x_nA_n\succeq 0\right\},$$ where $\succeq 0$ means positive semidefiniteness, is called a \textit{spectrahedron}. Spectrahedra are generalizations of polyhedra and occur as feasible sets for semidefinite optimization.
A projection of a spectrahedron to a subspace of $\R^n$ is often called a \textit{semidefinitely representable set}. Helton and Nie \cite{HeltonNieNecSuffSDP} conjecture that every convex semialgebraic set is such a projection.
See for example \cite{MR1284712, HeltonNieSDPrepr,HeltonNieNecSuffSDP,MR2292953,LasserreConvSets,NeSDP,NePlSch} for more detailed information on spectrahedra and their projections.
We prove that the convex hull of finitely many projections of spectrahedra is again a projection of a spectrahedron. This generalizes Theorem 2.2 from Helton and Nie \cite{HeltonNieNecSuffSDP}, which is the same result in the case that all sets are bounded or that the convex hull is closed.
\section{Result}
\begin{Prop} If $S\subseteq \R^n$ is a projection of a spectrahedron, then so is $\cc(S)$, the conic hull of $S$.
\end{Prop}
\begin{proof} Since $S$ is a projection of a spectrahedron we can write $$S=\left\{x\in\R^n\mid \exists z\in\R^m\colon A+\sum_{i=1}^nx_iB_i +\sum_{j=1}^m z_jC_j\succeq 0\right\},$$ with suitable real symmetric $k\times k$-matrices $A,B_i,C_j$. Then with \begin{align*}C:=\{ x\in\R^n\mid & \exists \la, r \in\R, z\in\R^m\colon \la A+\sum_{i=1}^n x_iB_i +\sum_{j=1}^m z_jC_j \succeq 0\ \wedge \\ & \quad \bigwedge_{i=1}^n \left(\begin{array}{cc}\la & x_i \\x_i & r\end{array}\right)\succeq 0\}\end{align*} we have $C=\cc(S)$ (note that $C$ is a projection of a spectrahedron, since the conjunction can be eliminated, using block matrices).
To see ``$\subseteq$'', let some $x$ fulfill all the conditions from $C$, first with some $\la>0$. Then $a:=\frac{1}{\la}\cdot x$ belongs to $S$, using the first condition only. Since $x=\la\cdot a$, $x\in\cc(S)$. If $x$ fulfills the conditions with $\la=0$, then $x=0$, by the last $n$ conditions in the definition of $C$. So clearly also $x\in\cc(S)$.
For ``$\supseteq$'' take $x\in \cc(S)$. If $x\neq 0$ then there is some $\la>0$ and $a\in S$ with $x=\la a$. Now there is some $z\in\R^m$ with $A+\sum_i a_iB_i +\sum_j z_jC_j\succeq 0$. Multiplying this inequality by $\la$ shows that $x$ fulfills the first condition in the definition of $C$. But since $\la> 0$, the other conditions can clearly also be satisfied with some big enough $r$. So $x$ belongs to $C$. Finally, $x=0$ belongs to $C$, too.
\end{proof}
\begin{Rem}
The additional $n$ conditions in the definition of $C$ avoid problems that could occur in the case $\la=0.$ This is the main difference to the approach of Helton and Nie in \cite{HeltonNieNecSuffSDP}.
\end{Rem}
\begin{Cor} If $S_1,\ldots, S_t\subseteq\R^n$ are projections of spectrahedra, then also the convex hull $\conv(S_1\cup\cdots\cup S_t)$ is a projection of a spectrahedron.
\end{Cor}
\begin{proof} Consider $\widetilde{S}_i:=S_i\times\{1\}\subseteq\R^{n+1}$, and let $K_i$ denote the conic hull of $\widetilde{S}_i$ in $\R^{n+1}$. All $\widetilde{S}_i$ and therefore all $K_i$ are projections of spectrahedra, and thus the Minkowski sum $K:=K_1+\cdots + K_t$ is also such a projection. Now one easily checks $$\conv(S_1\cup\cdots\cup S_t)=\left\{x\in\R^n\mid (x,1)\in K\right\},$$ which proves the result.
\end{proof}
\begin{Ex} Let $S_1:=\{ (x,y)\in \R^2\mid x\geq 0, y\geq 0, xy\geq 1\}$ and $S_2=\{(0,0)\}$. Both subsets of $\R^2$ are spectrahedra, so the convex hull of their union, $$\conv(S_1\cup S_2)= \{ (x,y)\in\R^2\mid x>0, y>0 \} \cup \{ (0,0)\} ,$$ is a projection of a spectrahedron.
\end{Ex}
{\linespread{1}\bibliographystyle{dpbib}
\section{Introduction}
\input{tex/introduction}
\section{Background: Polyhedral Compilation}
\label{sec:poly}
\input{tex/poly}
\section{Related Work}
\label{sec:related}
\input{tex/related}
\section{PolyGym's Markov Decision Process}
\label{sec:mdp}
\input{tex/mdp}
\section{Evaluation}
\label{sec:eval}
\input{tex/evaluation}
\section{Conclusions}
\label{sec:conclusions}
\input{tex/conclusions}
\section*{Acknowledgments}
\input{tex/acknowledgments}
\bibliographystyle{IEEEtran}
\subsection{Experimental setup}
We implemented the \acp{MDP} of the schedule space as an OpenAI Gym Environment, so it is conveniently usable for future machine learning heuristics~\cite{brockman2016openai}.
The algorithms are implemented in Python, relying on the ISL binding islpy\footnote{\url{https://github.com/inducer/islpy}} for operations on integer sets.
To transform the polytopes into the generator representation, we used Polyite's implementation of the Chernikova algorithm\footnote{\url{https://github.com/stganser/chernikova}}.
Finally, we use LLVM and Polly in version 3.9 to extract polyhedral representations of SCoPs and to transform them according to the computed schedules.
All schedules were evaluated on the same system, consisting of an AMD 3960X with 24 cores and 48 threads, organized in 8 units, each with 3 cores.
The units have a dedicated 16 MB L3 cache.
Each core has a dedicated 32 KB L1 and a 512 KB L2 cache.
The benchmarks are executed in a single unit, that is, using at most 3 cores (and 6 threads).
We do so because of the low interference between units, which allows us to run up to four experiments in parallel without adding too much noise to the observations. This configuration with four units was identified to have negligible side-effects in preliminary experiments.
For our evaluation we use the Polybench benchmark suite~\cite{polybench}, which consists of $30$ numerical computations from various domains, such as image processing, statistics, and linear algebra.
Each of the benchmarks contains a \ac{SCoP} kernel, i.e. a loop nest that satisfies the conditions of the polyhedral model.
We had to exclude the \texttt{ludcmp}, \texttt{heat-3d}, and \texttt{adi} kernels because of their large number of dependence polytopes.
The sheer size of these polytopes significantly slows down the runtime of the Chernikova algorithm and thus the construction of the schedule space.
Since programs generated by unfavorable schedules could potentially run for a long time, we set an execution timeout of 10 times the LLVM O3 runtime.
We compare the schedules found by PolyGym against standard optimizing compilers and against ISL.
As standard optimizing compiler we use LLVM in version 12 with the \texttt{-O3} flag (LLVM O3). The ISL experiments were run with flags \texttt{-polly -polly-parallel=true -polly-vectorizer=none -polly-tiling=true -polly-default-tile-size=64}.
In the experiments, we used 4 measurements and report the minimum to eliminate measurement inaccuracies.
\subsection{Result analysis}
To analyze how many profitable schedules the space spawned by our MDP formulation includes, we generate $1000$ schedules per benchmark kernel with a simple bias of the schedule space exploration towards the \texttt{select\_coeff0} action. This bias results in overall less complex schedules.
Figure~\ref{fig:speedup_distributions} shows the distributions of speedups by individual samples, along with the performance of ISL and LLVM O3.
We can observe that the search space contains many profitable schedules.
Many benchmarks, like \texttt{2mm}, \texttt{doitgen}, \texttt{gemm}, \texttt{jacobi-2d} or \texttt{seidel-2d}, show a significant distribution of points better than those found by ISL.
This suggests that an agent could learn to achieve this better performance results without iteratively executing the kernel.
Figure~\ref{fig:speedups_by_sample} shows the maximum measured speedups for individual kernels of the polybench suite.
We find schedules with an overall speedup of $3.39$x over O3-clang12, which is $1.83$x faster than the schedules of ISL-clang12 and $1.67$x faster than those of ISL-clang3.9.
For 20 of the 26 kernels, the heuristic iteratively finds more profitable schedules than ISL-clang12; for 22 of 26, more profitable ones than ISL-clang3.9.
The results are not directly comparable to Polyite~\cite{ganser2017iterative,ganser2018speeding}, because they were obtained on a different hardware system. However, the overall results seem to be comparable in terms of the improvement, which is not surprising, since we use a similar search space and a similar random sampling process.
Compared to Polyite, our MDP formulation is shape-agnostic.
This can enable an agent to learn to navigate this space without requiring an iterative execution.
\begin{figure}[h]
\centering
\input{plots/max_speedup_by_num_samples.tex}
\caption{Maximum speedup over LLVM O3 achieved by the different heuristics for action selection with increasing number of explored schedules.
The plot depicts the geometric mean over all selected benchmarks of the Polybench suite.
As a reference, the aggregated speedups of ISL over LLVM O3 in different versions (ISL-clang3.9, ISL-clang12) are also included.}
\label{fig:speedup_development_by_heuristics}
\end{figure}
We further analyze the influence that a potential heuristic has on the sampling process.
We employ different trivial heuristics for demonstration.
\begin{itemize}
\item In \emph{bias\_select\_dep}, we bias the schedule space construction phase towards the \texttt{select\_dep} action, which results in fewer schedule dimensions.
\item In \emph{bias\_coeff\_0}, we bias towards the \texttt{select\_coeff0} action in the schedule space exploration phase.
\item In \emph{uniform}, we select actions uniformly at random.
\end{itemize}
Figure~\ref{fig:speedup_development_by_heuristics} shows the performance of the PolyGym search space with different heuristics on the Polybench suite.
The plot shows the geometric mean across the different benchmarks as the number of schedules sampled increases.
We see that the different heuristics impact the space exploration, achieving different overall performance.
All evaluated heuristics outperform the schedules found by ISL-clang3.9 and ISL-clang12 in less than $40$ sampling iterations.
Most notably, when biasing the selection towards the coefficient 0 (\emph{bias\_coeff\_0}), the cross-over is at 11 iterations for ISL-clang12 and after 13 iterations for ISL-clang3.9.
Crucially, this shows that there is potential to learn to explore this space efficiently.
All of the evaluated heuristics, however, are very simple and make no use of the MDP's states.
In future work, this heuristic can be a Deep Neural Network that is optimized to take the best actions using Reinforcement Learning algorithms.
\subsection{Schedule space construction}
We look for a schedule space in its general form, and thus define it as a multi-dimensional space~\cite{feautrier1992schedulingII,pouchet2008iterative}.
As illustrated in Algorithm~\ref{algo:construction}, we construct a $k$-dimensional schedule space iteratively, by going through the dimensions of the space.
For each dimension, we decide which dependencies to include as strong dependencies, until we have selected all dimensions.
Crucially, the function that decides this, \texttt{select\_dependency}, is left as an unspecified, free function.
It then calculates the polytope of possible schedules strongly satisfying these dependencies and weakly satisfying the remaining unselected dependencies.
Using Chernikova's algorithm~\cite{chernikova} it calculates generators as vertices and rays for this polytope.
This algorithm goes back to the principles outlined by Feautrier~\cite{feautrier1992schedulingII}, which is the same basis for the heuristic in Section 3.2 in~\cite{pouchet2008iterative} and Algorithm~1 in~\cite{ganser2017iterative}.
Algorithm~\ref{algo:construction} generalizes these principles by leaving the decision function \texttt{select\_dependency} unspecified, instead of proposing a concrete heuristic.
Note that we write \texttt{select\_dependency}($d$) to specify that this function depends on the representation of the dependency $d$.
In general, this decision could also depend on other parameters like properties of the \ac{SCoP}.
\begin{algorithm}
\caption{General construction of the schedule space}
\label{algo:construction}
\begin{algorithmic}[1]
\Input{A set $D$ of dependencies}
\Output{A schedule space $S = (G_1,G_2,\ldots,G_k)$, where each $G_i$ corresponds to the set of generators of the lattice polytope $P_i$ for the $i$-th dimension.}
\State $i \leftarrow 1$
\While{ $D \neq \emptyset$}
\State $\operatorname{Deps}_i \leftarrow \emptyset$
\For{ $d \in D$}
\If{ \texttt{select\_dependency(d)}}
\State $\operatorname{Deps}_i \leftarrow \operatorname{Deps}_i \cup \{ d \}$
\EndIf
\EndFor
\State $D_i \leftarrow \texttt{strong\_deps}(\operatorname{Deps}_i) \cap \texttt{weak\_deps}(D \setminus \operatorname{Deps}_i)$
\State $G_i \leftarrow \texttt{chernikova}(D_i)$
\State $D \leftarrow D \setminus \operatorname{Deps}_i$
\State $i \leftarrow i + 1$
\EndWhile
\Return $(G_1,\ldots,G_{i-1})$
\end{algorithmic}
\end{algorithm}
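For illustration, Algorithm~\ref{algo:construction} can be sketched in Python as follows, where \texttt{select\_dependency} is a pluggable callable and \texttt{strong\_deps}, \texttt{weak\_deps} and \texttt{chernikova} stand in for the corresponding polyhedral operations (assumed given); termination relies on \texttt{select\_dependency} eventually selecting every dependency:
\begin{verbatim}
# Python sketch of Algorithm 1; the polyhedral operations
# are stand-ins for the islpy/Chernikova implementations.
def construct_schedule_space(D, select_dependency,
                             strong_deps, weak_deps,
                             chernikova):
    generators = []
    remaining = set(D)
    while remaining:
        deps_i = {d for d in remaining if select_dependency(d)}
        # Schedules carrying deps_i strongly, the rest weakly:
        polytope_i = (strong_deps(deps_i) &
                      weak_deps(remaining - deps_i))
        generators.append(chernikova(polytope_i))
        remaining -= deps_i
    return generators
\end{verbatim}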
This simple change, making \texttt{select\_dependency} a free function, has profound consequences.
The greedy heuristics of~\cite{feautrier1992schedulingII,pouchet2008iterative} construct one deterministic schedule space, which fundamentally limits the space of possible schedules that can be found with the method.
On the other hand, the randomized heuristic of~\cite{ganser2017iterative} could in principle produce any schedule space.
The statistical bias of the randomized algorithm, however, implies that this is never the case in practice, by the law of large numbers.
While this is good for finding good heuristics in most cases, it fundamentally limits the ceiling of possible improvement.
By leaving \texttt{select\_dependency} unspecified, a model could \emph{learn} a good heuristic, which can also leverage properties of the concrete instance of the problem.
Based on Algorithm~\ref{algo:construction} we can define an \ac{MDP} that constructs this space.
It considers the free \texttt{select\_dependency} function as an action, which is combined with two additional actions for controlling the iteration from Algorithm~\ref{algo:construction}.
This allows a walk through the \ac{MDP} to steer the iteration through the construction.
The state space of the \ac{MDP} is the countably infinite space
\begin{align*} S_\text{cons} = \{ (i_\text{dim},i_\text{dep},d_1,\ldots,d_{|D|}) \\ \mid i_\text{dim},i_\text{dep}, d_1,\ldots,d_{|D|} \in \mathbb{N}, i_\text{dim} > 0 \}. \end{align*}
In this space, the first component $i_\text{dim}$ represents the dimension, the second $i_\text{dep}$ represents the current dependency being selected, while the other components represent the strong dependencies included in that dimension. We define the set of actions as $\operatorname{Act}_\text{cons} = \{ \texttt{next\_dim}, \texttt{next\_dep}, \texttt{select\_dep}\}$.
The transition probabilities $\textbf{P}_\text{cons} : S_\text{cons} \times \operatorname{Act}_\text{cons} \times S_\text{cons} \rightarrow [0,1]$ are defined as follows:
\begin{align*}
\textbf{P}_\text{cons}((i_\text{dim},i_\text{dep},d_1,\ldots,d_{|D|}),\texttt{next\_dim}, \\
(i'_\text{dim}+1,i'_\text{dep},d'_1,\ldots,d'_{|D|}) ) \\
= \delta_{i_\text{dim},i'_\text{dim}} \cdot \delta_{i_\text{dep},i'_\text{dep}} \cdot \delta_{d_1,d'_1} \cdots \delta_{d_{|D|},d'_{|D|}}, \\
\textbf{P}_\text{cons}((i_\text{dim},i_\text{dep},d_1,\ldots,d_{|D|}),\texttt{next\_dep},\\
(i'_\text{dim},i'_\text{dep}+n,d'_1,\ldots,d'_{|D|}) ) \\
= \delta_{i_\text{dim},i'_\text{dim}} \cdot \delta_{i_\text{dep},i'_\text{dep}} \cdot \delta_{d_1,d'_1} \cdots \delta_{d_{|D|},d'_{|D|}}, \\
\textbf{P}_\text{cons}((i_\text{dim},i_\text{dep},d_1,\ldots,d_{|D|}),\texttt{select\_dep},\\
(i'_\text{dim},i'_\text{dep},d'_1,\ldots,d'_{i_\text{dep}}+ i_\text{dim},\ldots,d'_{|D|}) ) \\
= \delta_{i_\text{dim},i'_\text{dim}} \cdot \delta_{i_\text{dep},i'_\text{dep}} \cdot \delta_{d_1,d'_1} \cdots \delta_{d_{|D|},d'_{|D|}} \cdot \delta_{d_{i_\text{dep}},0},
\end{align*}
where $\delta_{i,j} = 0$ if $i \neq j$ and $1$ if $i = j$ is the Kronecker delta and $n$ is a value that depends on the concrete state\footnote{We omit the indices indicating this dependency for readability.}.
Concretely, we distinguish between two cases.
If there is a $k$ with $k > i_\text{dep}$ such that $d_k = 0$, then we choose the first (minimal) such $k$ and set $n = k - i_\text{dep}$.
If there is none, i.e. $d_i > 0$ for all $i > i_\text{dep}$, then we start back from $0$ and choose the smallest $k$ with $d_k = 0$ (without any additional requirements on $k$).
In this case we also set $n = k - i_\text{dep}$.
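The following helper restates this wrap-around rule as code (a sketch with hypothetical names, using the 1-based indices of the text):
\begin{verbatim}
# Sketch of the next_dep target index (1-based, as in the
# text); d[k-1] == 0 means dependency k is still unselected.
# Assumes at least one unselected dependency remains.
def next_dep_target(i_dep, d):
    later = [k for k in range(i_dep + 1, len(d) + 1)
             if d[k - 1] == 0]
    if later:
        return later[0]  # first case: minimal k > i_dep
    return min(k for k in range(1, len(d) + 1)
               if d[k - 1] == 0)

# n is then next_dep_target(i_dep, d) - i_dep
\end{verbatim}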
This means that, as their names suggest, \texttt{next\_dim} increases the dimension and \texttt{select\_dep} adds the current dependency to the set of strong dependencies if it was not previously added.
The action \texttt{next\_dep} increases the current available dependency, skipping those that have already been selected.
Accepting states are all states of the form $(i,j,d_1,\ldots,d_{|D|})$ with $i,j,d_1,\ldots,d_{|D|} > 0$, where all dependencies have been chosen to be strongly carried.
Finally, the initial state is $(1,1,0,\ldots,0) \in S_\text{cons}$, starting with no dependencies selected.
We will discuss the rewards later, in Section~\ref{sec:rl_rewards}.
Note that while the state space depends on the concrete instance, i.e. the \ac{SCoP} being considered, the set of actions does not.
This is important since it allows us to learn a policy to navigate these spaces in a fashion that is independent of the problem instance.
\begin{figure}
\centering
\resizebox{0.45\textwidth}{!}{
\includegraphics{external_tikz/ext_construction_matvec.pdf}
}
\caption{An example of the schedule space construction \ac{MDP} and a concrete action sequence for the matrix-vector multiplication example.}
\label{fig:construction_matvect}
\end{figure}
Consider the example in Figure~\ref{fig:construction_matvect}.
It shows the state space for the schedule space construction of the example of the matrix-vector multiplication from Section~\ref{sec:poly}.
The figure also shows an example run through the state space, corresponding to the sequence of actions
\texttt{next\_dim}, \texttt{next\_dep}, \texttt{next\_dim}, \texttt{next\_dep}, \texttt{next\_dep}, \texttt{select\_dep}, \texttt{next\_dim}, \texttt{next\_dep}, \texttt{select\_dep}.
The first action, \texttt{next\_dim}, increases the dimension, which is the first component $(1,\ldots) \rightarrow (2,\ldots)$.
Then, the \texttt{next\_dep} action changes the second index, indicating the current dependency: $(2,1,0,0) \rightarrow (2,2,0,0)$.
Since no dependency has been selected so far, the value of $k$ is $2$, corresponding to the first case outlined above, yielding $n = 1$.
After increasing the dimension with a \texttt{next\_dim} action, the next \texttt{next\_dep} action again changes the dependency.
This time there is no unselected dependency with index $>2$, and thus $k = 1$ by the second case, corresponding to $n = -1$.
A second \texttt{next\_dep} moves the dependency index back to $2$, returning to the state $(3,2,0,0)$.
At this point the \texttt{select\_dep} action records the current dimension in the entry of the current dependency ($2$), marking the current dimension as strongly carrying this dependency: $(3,2,0,3)$.
The dependencies $S \rightarrow T$ and $T \rightarrow T$ are sorted, in that order.
This process continues until it reaches the state $(4,1,4,3)$.
This corresponds to a $4$-dimensional space, as indicated by the first component.
As indicated by the third component\footnote{Since the first two indicate the current dimension and dependency.}, the fourth dimension strongly carries the first dependency, $S \rightarrow T$, while the third dimension strongly carries the second dependency, $T \rightarrow T$, as indicated by the last component. In particular, the accepting state itself uniquely defines the schedule space, as the order in which the dependencies are selected within a dimension is irrelevant to the construction; it only matters which dimensions specifically carry the dependencies strongly.
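In code, this decoding amounts to the following small illustrative snippet:
\begin{verbatim}
# Decoding the accepting state (4, 1, 4, 3) of the example:
# entry d_k holds the dimension that strongly carries
# dependency k.
i_dim, i_dep, *d = (4, 1, 4, 3)
carried = {k + 1: dim for k, dim in enumerate(d)}
# -> {1: 4, 2: 3}: S -> T carried by dimension 4,
#    T -> T carried by dimension 3
\end{verbatim}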
\subsection{Schedule space exploration}
Once the schedule space is generated, we look into how to find profitable schedules in this space.
From Algorithm~\ref{algo:construction}, the schedule space is given by a vector $S = (G_1,G_2,\ldots,G_k)$, where each $G_d$ corresponds to the generators of a lattice polytope for the $d$-th dimension.
A valid schedule consists precisely of a point in each of the lattice polytopes corresponding to the dimensions.
To construct each of these points we need to consider the generators $G_d$.
These are given as a set of vertices $v_1,\ldots,v_s$ and rays $r_1,\ldots,r_t$ as explained in Section~\ref{sec:poly}.
An arbitrary point $p$ is in the polytope iff it can be written as a convex combination of vertices and a positive linear combination of rays, i.e.
\[p = \sum_{i=1}^s \lambda_i v_i + \sum_{i=1}^t \alpha_i r_i,\]
where $\lambda_i \geq 0$ for all $i = 1,\ldots, s$ and $\sum_{i=1}^s \lambda_i = 1$ and $\alpha_i \geq 0$ for all $i = 1,\ldots,t$. Note that some work uses a third generator type, lines. This can always be converted to an equivalent set of generators as we define here.
We choose to have only rays instead of rays and lines as it is more uniform this way, making this representation more amenable for \ac{RL}.
Schedules correspond to the points in the lattice polytope, which is a subset of this general polytope.
This means that $p$ has to have integer coefficients.
To formulate this as an \ac{MDP}, we introduce Algorithm~\ref{algo:exploration} which generates points in the polytopes following the same principles as~\cite{feautrier1992schedulingI,feautrier1992schedulingII,pouchet2007iterative,pouchet2008iterative,ganser2017iterative,ganser2018speeding}.
Once again, our algorithm leaves an unspecified, free function \texttt{select\_coeff}.
The goal of this function is to find the values of the coefficients $\lambda_i, i=1,\ldots,s$ and $\alpha_i, i=1,\ldots,t$.
We do this by iterating over all vertices and rays and selecting a coefficient in this iteration.
Like the authors in~\cite{ganser2017iterative}, we use a correction step, multiplying by the \ac{LCD} in the end to ensure the point is in the lattice polytope, i.e. has integer coefficients.
While this can be avoided through the design of the \texttt{select\_coeff} function, this correction step can simplify the design of the function.
\begin{algorithm}
\caption{General exploration of the schedule space}
\label{algo:exploration}
\begin{algorithmic}[1]
\Input{A schedule space $S = (G_1,G_2,\ldots,G_k)$ as a vector, where each $G_i$ corresponds to the generators of a lattice polytope for the $i$-th dimension.}
\Output{A point in the schedule polytope for each dimension $p = (p_1,\ldots,p_k)$}
\For{ $i \in \{1,\ldots,k\}$}
\State $v_1,\ldots,v_s,r_1,\ldots,r_t \leftarrow G_i$
\State $p_i \leftarrow 0$
\For{ $x \in \{v_1,\ldots,v_s,r_1,\ldots,r_t\}$}
\State $p_i \leftarrow p_i + \texttt{select\_coeff}(x) \cdot x$
\EndFor
\If{ $p_i \in \mathbb{Q}^n \setminus \mathbb{Z}^n$ }
\State $p_i \leftarrow \operatorname{LCD}(p_i) \cdot p_i$
\EndIf
\EndFor
\Return $(p_1,\ldots,p_k)$
\end{algorithmic}
\end{algorithm}
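A Python sketch of Algorithm~\ref{algo:exploration} is given below; \texttt{select\_coeff} is the free function, assumed to return a non-negative rational per generator, and the convex-combination normalization of the vertex coefficients discussed below is omitted:
\begin{verbatim}
from fractions import Fraction
from math import lcm

# Sketch of Algorithm 2; S is given as a list of
# (vertices, rays) generator pairs, one per dimension.
def explore_schedule_space(S, select_coeff):
    points = []
    for vertices, rays in S:
        p = [Fraction(0)] * len(vertices[0])
        for x in list(vertices) + list(rays):
            c = Fraction(select_coeff(x))
            p = [pi + c * xi for pi, xi in zip(p, x)]
        # Correction step: scale by the LCD to get a
        # lattice point with integer coefficients.
        denom = lcm(*(q.denominator for q in p))
        points.append([int(q * denom) for q in p])
    return points
\end{verbatim}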
To define a corresponding \ac{MDP}, we start by defining the state space. For an integer $N > 0$, we define the space as the finite set\footnote{This is thus parameterized by $N$. Parameters like these are sometimes also called hyper-parameters, especially in the context of machine learning.} of the coefficients
\begin{align*}
S_\text{expl} = \{ (\lambda_{1,1},\ldots,\lambda_{1,s_1},\alpha_{1,1},\ldots,\alpha_{1,t_1},\ldots,\lambda_{k,1},\ldots,\alpha_{k,t_k}) \\
\mid \lambda_{i,j}, \alpha_{i,j} \in \{ 0, \ldots, N, \bot \} \text{ for all } i,j \}
\end{align*}
Note that since we have multiple polytopes $P_1,\ldots,P_k$, corresponding to the multiple dimensions, we use two indices for the generators, where the first index $d$ corresponds to the polytope $P_d$ and the second index iterates over the generators in $G_d$. This can also be conceptually understood as unrolling the two loops in Algorithm~\ref{algo:exploration} in the state space.
We use the symbol $\bot$ to mark coefficients that have not been selected, as this is distinct from selecting $0$ as a coefficient.
Choosing the coefficients $\lambda_i \in \{0, \ldots, N\}$ means that in most cases, $\sum_{i=1}^s \lambda_i > 1$.
In that case we normalize the coefficients, building the convex combination as $\sum_{i=1}^s \lambda_i v_i / \sum_{i=1}^s \lambda_i$.
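For example, with $s=3$ vertices and selected coefficients $(\lambda_1,\lambda_2,\lambda_3)=(2,1,0)$, the normalized point is
\[
\frac{2 v_1 + v_2 + 0\cdot v_3}{2+1+0} = \tfrac{2}{3} v_1 + \tfrac{1}{3} v_2\ ,
\]
whose coefficients are nonnegative and sum to $1$, as required for a convex combination.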
Additionally, when $s = 1$ it is clear that $\lambda_1 = 1$, since for a single point there is only one possible convex combination.
We thus remove the corresponding coefficients entirely from the state space\footnote{This was the case for almost all examples we evaluated in this paper.}.
Since we need to choose a coefficient for each term, we do not include actions to steer the exploration as we did for the schedule space construction.
Thus, the action space corresponds directly to the function \texttt{select\_coeff}.
We define actions \texttt{select\_coeff0}, \ldots, \texttt{select\_coeffN} accordingly.
We set
\begin{align*}
\textbf{P}_\text{expl}((a_1,\ldots,a_i,\bot,\ldots,\bot),\texttt{select\_coeffX}, \\
(a_1,\ldots,a_i,X,\bot,\ldots,\bot)) = 1,
\end{align*}
and for all other states $s,s' \in S_\text{expl}$ and actions $a \in \operatorname{Act}_\text{expl}$, we set $\textbf{P}_\text{expl}(s, a, s') = 0$.
The initial state corresponds to the starting configuration with no coefficients defined, i.e., $(\bot,\ldots,\bot)$.
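The following minimal Python sketch of this deterministic transition (an illustration of ours, not the PolyGym API; \texttt{None} stands in for $\bot$) makes the mechanics explicit:
\begin{minted}{python}
def initial_state(num_coeffs):
    # all coefficients unselected, i.e., (bot, ..., bot)
    return (None,) * num_coeffs

def step(state, x):
    # action select_coeffX: fill the first unselected slot with X
    i = state.index(None)
    state = state[:i] + (x,) + state[i + 1:]
    return state, None not in state  # next state, terminal flag

s = initial_state(3)
for a in (2, 0, 1):
    s, done = step(s, a)
# now s == (2, 0, 1) and done == True
\end{minted}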
Similar to the space construction, a crucial aspect of this formulation is that the actions are independent of the \ac{SCoP}.
This way, a heuristic model can learn to find good schedules by selecting the correct coefficients.
Consider again the example of the matrix-vector multiplication kernel.
With the schedule space constructed as depicted in Figure~\ref{fig:construction_matvect}, the schedule space has four dimensions, where the third carries the dependency $S \rightarrow T$ strongly and the fourth one $T \rightarrow T$.
This yields four polytopes, one for each dimension; after applying Chernikova's algorithm, each is generated by a single vertex and by $10$, $10$, $10$, and $13$ rays, respectively.
These all live in a $7$ dimensional vector space.
Note that neither the $4$ dimensions of the schedule space nor the $7$ dimensions of the polytopes for each schedule space dimension have a direct interpretation in terms of the loop bounds.
They represent loop schedules according to Farkas' lemma (cf. Section~\ref{sec:poly}) and they are not intuitively easy to understand in terms of the loops' \ac{AST}.
As mentioned above, the coefficient selection for the vertex is not part of the coefficient selection in the state space of the \ac{MDP}, since it is a single vertex in all four dimensions.
\begin{figure}
\centering
\resizebox{0.45\textwidth}{!}{
\includegraphics{external_tikz/ext_exploration_matvect.pdf}
}
\caption{An example of the schedule space exploration \ac{MDP} and a concrete action sequence for the matrix-vector multiplication example.}
\label{fig:exploration_matvect}
\end{figure}
Figure~\ref{fig:exploration_matvect} shows the beginning of a sample run through this space, given by the following action sequence (omitting the \texttt{select\_coeff} part of the name):
\texttt{0}, \texttt{0}, \texttt{0}, \texttt{0}, \texttt{0}, \texttt{2}, \texttt{2}, \texttt{1}, \texttt{0}, \texttt{2}, \texttt{0}, \texttt{0}, \texttt{0}, \texttt{0}, \texttt{2}, \texttt{0}, \texttt{0}, \texttt{1}, \texttt{0}, \texttt{2}, \texttt{0}, \texttt{0}, \texttt{0}, \texttt{1}, \texttt{0}, \texttt{2}, \texttt{0}, \texttt{0}, \texttt{0}, \texttt{0}, \texttt{0}, \texttt{0}, \texttt{2}, \texttt{0}, \texttt{0}, \texttt{0}, \texttt{0}, \texttt{0}, \texttt{0}, \texttt{0}, \texttt{0}, \texttt{0}, \texttt{0}.
The coefficients of the rays are selected dimension by dimension: $\alpha_{1,1} = 0, \ldots, \alpha_{1,6} = 2, \alpha_{1,7} = 2, \alpha_{1,8} = 1, \ldots,\alpha_{1,10} = 2,\alpha_{2,1} = 0,\ldots,\alpha_{4,13} = 0$.
After selecting all coefficients with that sequence, the resulting points are given by:
\begin{align*}
p_1 =~ & (0,0,0,0,0,0,0)~ + (0,0,0,0,0,-1,0) \\
& + 2 \cdot (0,0,0,0,0,-1,-1)~ + 2 \cdot (0,0,0,-1,-1,0,0) \\
& + (-1,-1,0,0,0,0,0)~ = (-1,-1,0,-2,-2,-3,-1) \\
\vdots \\
p_4 =~& (0,0,1,0,0,0,0)~ + 2 \cdot (0,0,1,0,0,0,0) \\
& = (0,0,3,0,0,0,0).
\end{align*}
We see how this \ac{MDP} can produce concrete schedules.
Finally, these points can be translated into a valid transformation $(i,j) \rightarrow (k,l)$ as explained in Section~\ref{sec:poly}.
This translates to the following transformed C kernel:
\begin{minted}{C}
if (N >= 1)
for (int k = -N; k <= 0; k += 1) {
if (N + k >= 1)
for (int l = 0; l < N; l += 1)
T: y[-k] += A[-k][l]*x[l];
if (k <= -1)
S: y[-k - 1] = 0;}
\end{minted}
Note again that the correspondence between the representation as points in the schedule space and the transformed kernel is not directly visible.
The dimensions of a multidimensional schedule define how the individual points in the schedule are ordered (lexicographically); they do not directly translate to, e.g., loop nesting.
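As a small illustration (not tied to the kernel above): under a two-dimensional schedule, an iteration mapped to the point $(0,5)$ executes before one mapped to $(1,0)$, since
\[
(0,5) \prec_{\mathrm{lex}} (1,0)
\]
holds already in the first component, independently of how the generated loops end up being nested.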
This concrete example was in fact found by a simple heuristic, with a bias towards defining coefficients to be $0$.
We describe the heuristic in Section~\ref{sec:eval}.
\subsection{Rewards}
\label{sec:rl_rewards}
The rewards of an \ac{MDP} define a feedback loop -- the quality of an action taken in a given state.
This is needed for a learning algorithm so it can decide which action is favorable to take in a given state and adjust the agent's model in case the action was unfavorable.
Reinforcement Learning can then be used to train an agent that navigates an \ac{MDP}.
Recent work has shown that it is possible to use Deep Neural Networks as agent models that are able to generalize over the large search space and learn to make good decisions in previously unseen states~\cite{mnih2015human,silver2016mastering}.
\begin{figure}
\centering
\tikzsetnextfilename{ext_pipeline}
\input{figures/pipeline.tex}
\caption{The PolyGym flow}
\label{fig:pipeline}
\end{figure}
Figure~\ref{fig:pipeline} shows the flow with two \ac{MDP} models in PolyGym in a \ac{RL} context.
Note that PolyGym does not define an agent, it only defines the environment to learn with such an agent.
The two separate \acp{MDP} produce a single schedule, which is then compiled with LLVM/Polly and executed, to determine the reward.
There is no immediate reward to the construction of the schedule space, nor to the partial states in the exploration step, since these do not define an actual schedule.
As such, we define the rewards for both individual \acp{MDP} uniformly, based on the final constructed schedule, as follows.
If the action leads to a complete schedule, which is a terminal state, then we give the speedup over a traditional optimizing compiler, e.g., LLVM with the \texttt{-O3} flag, as the reward; this is a sample-independent metric.
If it leads to an incomplete schedule, we give $0$ as reward.
Finally, if it leads to any invalid state, we give a negative reward. Invalid states can occur when constructing an empty polytope in the \ac{MDP} of the schedule space construction.
They can also occur in the exploration \ac{MDP}, when there are multiple vertices in a polytope and all vertex coefficients are selected to be $0$ (leading to no convex combination).
In this way, (positive) rewards are only given for the final construction which obtains a valid schedule.
This constitutes what is called a sparse reward setting.
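A schematic version of this reward function (a sketch under our assumptions; in particular, the magnitude of the negative penalty is not specified here and is chosen arbitrarily) is:
\begin{minted}{python}
INVALID_PENALTY = -1.0  # assumed value; the text only requires it be negative

def reward(is_invalid, is_complete, t_baseline=None, t_schedule=None):
    if is_invalid:                  # empty polytope or no convex combination
        return INVALID_PENALTY
    if not is_complete:             # partial schedule: sparse-reward setting
        return 0.0
    return t_baseline / t_schedule  # speedup over -O3 as the terminal reward

# e.g., reward(False, True, t_baseline=140.0, t_schedule=1.0) == 140.0
\end{minted}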
For example, with the schedule for the matrix-vector multiplication kernel\footnote{For the vector dimension $N=2000$, with data generated procedurally using the Polybench tools.} found by the examples described in Figures~\ref{fig:construction_matvect} and~\ref{fig:exploration_matvect}, the resulting kernel runs an impressive $140 \times$ faster than the original kernel compiled with \texttt{-O3}\footnote{The full setup is described in Section~\ref{sec:eval}.}.
This level of improvement is, however, not as easy to obtain in more complex kernels and with larger input sizes.
Thus, the final action \texttt{select\_coeff0} (not shown in Figure~\ref{fig:exploration_matvect} for space reasons) has a reward of $140$, while all other actions have a reward of $0$.
State-of-the-art methods in \ac{RL} trickle down this reward to the other actions~\cite{mnih2015human,silver2016mastering}.
Importantly, this reward should also be trickled down to the schedule space construction in a learning setting, even if the \ac{MDP} is conceptually a different one.
\subsection{Limitations}
While our formulation of the schedule search problem as an \ac{MDP} with an instance-independent action set enables reinforcement learning for this problem, it still has some limitations.
An important limitation is that some accepting states do not yield schedules, as discussed previously.
There are two possibilities for this happening.
The first corresponds to the schedule space construction.
Not every schedule space constructed with Algorithm~\ref{algo:construction} actually has schedules.
If too many dependencies are carried strongly in a dimension, the intersection of the corresponding polytopes can be empty, leading to a space without any valid schedules.
The randomized construction of~\cite{ganser2017iterative}, similarly, does not provide such a (formal) guarantee.
In practice, however, biasing the construction towards many dimensions, as they do, yields non-empty spaces.
We are confident this fact can easily be learned by pertinent methods.
The second possibility has to do with the vertices in a generating set for a polytope.
If all vertices have a coefficient of $0$, Algorithm~\ref{algo:exploration} does not produce a schedule, since the point constructed is not in the lattice polytope.
The authors in~\cite{ganser2017iterative} choose the vertex at random.
We cannot do the same, as it breaks the \ac{MDP} abstraction by making the reward of a state non-deterministic.\footnote{Since the execution times of the \acp{SCoP} are non-deterministic as well, this is the case for all rewards. However, if the statistical variance of execution times is not negligible, then the whole problem of selecting an optimal schedule is ill-posed in the first place. This is not the case in practice.}
We did not run into this problem in any example considered in this paper.
Having more than one vertex seems to be an extremely rare occurrence in practice.
This is both a benefit and a problem: While we mostly do not have to deal with this problem, if at some point we ever do, we will probably not have enough samples to learn to deal with it.
A final limitation corresponds to the exploration phase.
There are two ways in which we exclude some points.
By choosing a finite $N$ as a hyper-parameter, we technically limit the possible coefficients of the schedule space.
This is necessary, however, since otherwise there are infinitely many possible schedules\footnote{Many of these are equivalent, since the number of orderings is finite.}.
Similarly, the fact that we use integer coefficients and calculate the \ac{LCD} in Algorithm~\ref{algo:exploration} might exclude some points that would otherwise be found with rational coefficients.
With an unbounded $N$ this problem would not exist.
Consequently, a larger $N$ mitigates it.
While the limitations discussed here technically exclude some solutions or allow Algorithms~\ref{algo:construction} and~\ref{algo:exploration} to fail, they do so only in rare corner-cases.
Moreover, our reward design and the hyper-parameter $N$ allow us to avoid or at least mitigate these limitations in those rare corner-cases.
\section{Introduction}
The equations of motion for a spinning test particle in a given gravitational background were deduced
by Mathisson and Papapetrou \cite{math37,papa51} and read
\begin{eqnarray}
\label{papcoreqs1}
\frac{DP^{\mu}}{{\rm d} \tau_U}
&=&-\frac12R^{\mu}{}_{\nu\alpha\beta}U^{\nu}S^{\alpha\beta}\equiv F^{\rm (sc)}{}^{\mu}
\ , \\
\label{papcoreqs2}
\frac{DS^{\mu\nu}}{{\rm d} \tau_U}&=&P^{\mu}U^{\nu}-P^{\nu}U^{\mu}\ ,
\end{eqnarray}
where $P^{\mu}$ is the total four-momentum of the particle, $S^{\mu\nu}$ is the antisymmetric spin tensor, and
$U$ is the 4-velocity of the timelike ``center of mass line'' used to make the multipole reduction.
In order to have a closed set of equations, Eqs.~(\ref{papcoreqs1}) and (\ref{papcoreqs2}) must be completed
by adding supplementary conditions (SC) whose standard choices in the literature are the
\begin{itemize}
\item[1.]
Corinaldesi-Papapetrou \cite{cori51} conditions (CP): $S^{t\nu}=0$,
where the index $\nu$ corresponds to a coordinate component and $t$ is a timelike slicing coordinate,
\item[2.]
Pirani \cite{pir56} conditions (P): $S^{\mu\nu}U_\nu=0$,
\item[3.]
Tulczyjew \cite{tulc59} conditions (T): $S^{\mu\nu}P_\nu=0$.
\end{itemize}
Only solutions of the combined equations for which both $U$ and $P$ are timelike vectors are considered, in order to have a meaningful interpretation describing a spinning test particle with nonzero rest mass and physical momentum.
Not much is known about actual solutions of these equations in explicit spacetimes which satisfy the Einstein equations.
In a previous article \cite{bdfg1}, we considered the simplest special case of a spinning test particle moving uniformly along a circular orbit in the static spherically symmetric Schwarzschild spacetime, but because these equations are still complicated, we looked for solutions with constant frame components of the spin tensor in the natural symmetry adapted static frame, i.e., coinciding with a static tensor field along the path. Such a static spin tensor is a very strong restriction on the solutions of these equations of motion, leading
to special solutions in which the spin vector is perpendicular to the plane of the orbit,
and contributes to an adjustment in the acceleration of the orbit.
Here we consider the slightly less restrictive case where the spin components are not constant, but the motion is still circular.
However, in this case it is clear that if the spin tensor has time-dependent components, its feedback into the acceleration of the test particle path will break the static symmetry of that path unless the spin precession is very closely tied to the natural Frenet-Serret rotational properties of the path itself. Indeed we find that only the Pirani supplementary conditions permit such specialized solutions since they allow the spin tensor to be described completely by a spatial spin vector in the local rest space of the path itself. By locking spin vector precession to the Frenet-Serret rotational velocity of the path, solutions are found with a spin vector Fermi-Walker transported along an accelerated center of mass world line. The remaining choices for the supplementary conditions have no natural relationship to the Frenet-Serret properties of the particle path and do not admit such specialized solutions.
With the assumption of circular motion, one can solve the equations of motion explicitly up to constants of integration. By a process of elimination, one can express them entirely in terms of the spin components and particle mass as a constant coefficient linear system of first and second order differential equations. By systematic solving and backsubstitution, one gets decoupled linear second order constant coefficient equations for certain spin components, which are easily solved to yield exponential or sinusoidal or quadratic solutions as functions of the proper time, from which the remaining variables may be calculated. Imposing the choice of supplementary conditions then puts constraints on the constants of integration or leads to inconsistencies. The details of the decoupling and solution of the equations of motion are left to the Appendix, leaving the imposition of the supplementary conditions to the main text.
\section{Circular orbits in the Schwarzschild spacetime}
Consider the case of the Schwarzschild spacetime, with the metric written in standard coordinates
\begin{equation}\fl\quad
\label{metric}
{\rm d} s^2 = -\left(1-\frac{2M}r\right){\rm d} t^2 + \left(1-\frac{2M}r\right)^{-1} {\rm d} r^2
+ r^2 ({\rm d} \theta^2 +\sin^2 \theta {\rm d} \phi^2)\ ,
\end{equation}
and introduce the usual orthonormal frame adapted to the static observers following the time lines
\begin{equation}\fl\quad
\label{frame}
e_{\hat t}=(1-2M/r)^{-1/2}\partial_t, \,
e_{\hat r}=(1-2M/r)^{1/2}\partial_r, \,
e_{\hat \theta}=\frac{1}{r}\partial_\theta, \,
e_{\hat \phi}=\frac{1}{r\sin \theta}\partial_\phi ,
\end{equation}
with dual frame
\begin{equation}\fl\quad
\omega^{{\hat t}}=(1-2M/r)^{1/2}{\rm d} t\,, \,
\omega^{{\hat r}}=(1-2M/r)^{-1/2}{\rm d} r\,, \,
\omega^{{\hat \theta}}=r {\rm d} \theta\,, \,
\omega^{{\hat \phi}}=r\sin \theta {\rm d}\phi\,,
\end{equation}
where $\{\partial_t, \partial_r, \partial_\theta, \partial_\phi\}$ and $\{{\rm d} t, {\rm d} r, {\rm d} \theta,{\rm d} \phi\}$ are the coordinate basis and its dual, respectively.
In order to investigate the simplest special solutions of the combined equations of motion,
we explore the consequences of assuming that the test particle 4-velocity $U$ corresponds to a timelike constant speed circular orbit confined to the equatorial plane $\theta=\pi/2$. Then it must have the form
\begin{equation}
\label{orbita}
U=\Gamma [\partial_t +\zeta \partial_\phi ]
=\gamma [e_{\hat t} +\nu e_{\hat \phi}], \qquad
\gamma=(1-\nu^2)^{-1/2}\ ,
\end{equation}
where $\zeta$ is
the angular velocity with respect to infinity, $\nu$ is the azimuthal velocity as seen by the static observers, $\gamma$ is the associated gamma factor, and $\Gamma$ is a normalization factor which assures that $U\cdot U=-1$. These are related by
\begin{equation}\fl\quad
\zeta=(-g_{tt}/g_{\phi\phi})^{1/2} \nu \ ,\qquad
\Gamma =\left( -g_{tt}-\zeta^2g_{\phi\phi} \right)^{-1/2}
=(-g_{tt})^{-1/2} \gamma
\ ,
\end{equation}
so that $\zeta\Gamma = \gamma\nu/(g_{\phi\phi})^{1/2}$, which reduces to
$\gamma\nu/r$ in the equatorial plane.
Here $\zeta$ and therefore $\nu$ are assumed to be constant along the world line.
We limit our analysis to the equatorial plane $\theta=\pi/2$; as a convention, the physical (orthonormal)
component along $-\partial_\theta$ which is perpendicular to the equatorial plane will be referred to as ``along the positive $z$-axis" and will be indicated by the index $\hat z$ when convenient: $e_{\hat z}=-e_{\hat \theta}$. Note both $\theta=\pi/2$ and $r=r_0$ are constants along any given circular orbit, and that the azimuthal coordinate along the orbit depends on the coordinate time $t$ or proper time $\tau$ along that orbit according to
\begin{equation}\label{eq:phitau}
\phi -\phi_0 = \zeta t = \Omega_U \tau_U \ ,\quad
\Omega_U =\gamma\nu/r
\ ,
\end{equation}
defining the corresponding coordinate and proper time orbital angular velocities $\zeta$ and $\Omega_U$. These determine the rotation of the spherical frame with respect to a nonrotating frame at infinity.
Among all circular orbits the timelike circular geodesics merit special attention, whether co-rotating $(\zeta_+)$
or counter-rotating $(\zeta_-)$ with respect to increasing values of the azimuthal coordinate $\phi$ (counter-clockwise motion). Their time coordinate angular velocities
$\zeta_\pm\equiv \pm\zeta_K=\pm (M/r^3)^{1/2}$, which are identical with the Newtonian Keplerian values, lead to the expressions
\begin{equation}\fl\quad
\label{Ugeos}
U_\pm=\gamma_K [e_{\hat t} \pm \nu_K e_{\hat \phi}]\ , \qquad
\nu_K=\left[\frac{M}{r-2M}\right]^{1/2}\ , \qquad
\gamma_K=\left[\frac{r-2M}{r-3M}\right]^{1/2}\ ,
\end{equation}
where the timelike condition $\nu_K < 1$ is satisfied if $r>3M$. At $r=3M$ these circular geodesics go null.
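Indeed, this is immediate from the definition of $\nu_K$:
\[
\nu_K^2=\frac{M}{r-2M}<1 \iff M<r-2M \iff r>3M\ .
\]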
It is convenient to introduce the Lie relative curvature \cite{idcf1,idcf2} of each orbit
\begin{equation}
k_{\rm (lie)}=-\partial_{\hat r} \ln \sqrt{g_{\phi\phi}}=-\frac1r\left(1-\frac{2M}{r}\right)^{1/2}=-\frac{\zeta_K}{\nu_K}\ ,
\end{equation}
and a Frenet-Serret intrinsic frame along $U$ \cite{iyer-vish}, defined by
\begin{equation}
\label{FSframe}
E_{0}=U\ , \quad
E_{1}=e_{\hat r}\ , \quad
E_{2}=\gamma[\nu e_{\hat t} +e_{\hat \phi}]\ , \quad
E_{3}=e_{\hat z}
\end{equation}
satisfying the following system of evolution equations along the constant radial acceleration orbit
\begin{equation}
\label{FSeqs}\fl\quad
\frac{DU}{d\tau_U} \equiv a(U)=\kappa E_{1}\ ,\
\frac{DE_{1}}{d\tau_U}=\kappa U+\tau_1 E_{2}\ ,\
\frac{DE_{2}}{d\tau_U}=-\tau_1 E_{1}\ ,\
\frac{DE_{3}}{d\tau_U}=0\ ,
\end{equation}
where in this case
\begin{eqnarray}\label{kappatau1}
\kappa &=& k_{\rm (lie)}\gamma^2[\nu^2-\nu_K^2]
= -\frac{\gamma^2(\nu^2-\nu_K^2)}{\nu_K} \zeta_K
\ , \nonumber\\
\tau_1 &=& -\frac{1}{2\gamma^2} \frac{d\kappa}{d\nu}
=-k_{\rm (lie)}\frac{\gamma^2}{\gamma_K^2}\nu
= -\frac{\gamma^2\nu}{\gamma_K^2\nu_K} \zeta_K
\ .
\end{eqnarray}
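In particular, Eq.~(\ref{kappatau1}) makes the geodesic condition transparent:
\[
\kappa=0 \iff \nu^2=\nu_K^2 \iff \nu=\pm\nu_K\ ,
\]
recovering the circular geodesics (\ref{Ugeos}); note also that for $\nu=\pm\nu_K$ one has $\gamma=\gamma_K$ and hence $\tau_1=\mp\zeta_K$.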
The projection of the spin tensor into the local rest space of the static observers defines the spin vector by spatial duality
\begin{equation}
S^\beta=\frac12 \eta_\alpha{}^{\beta\gamma\delta}(e_{\hat t})^\alpha S_{\gamma\delta}\ ,
\end{equation}
where $\eta_{\alpha\beta\gamma\delta}=\sqrt{-g} \epsilon_{\alpha\beta\gamma\delta}$ is the unit volume 4-form constructed from the Levi-Civita alternating symbol $\epsilon_{\alpha\beta\gamma\delta}$ ($\epsilon_{\hat t\hat r\hat\theta\hat\phi}=1$),
leading to the correspondence
\begin{equation}
\label{spinvecehatt}
(S^{\hat r},S^{\hat\theta}=-S^{\hat z},S^{\hat\phi})
=(S_{\hat \theta \hat \phi}, -S_{\hat r \hat \phi} , S_{\hat r \hat \theta})
\ .
\end{equation}
For the CP supplementary conditions only these components of the spin tensor remain nonzero, while in the remaining cases the other nonzero components are determined from these through the corresponding orthogonality condition.
The total spin scalar is also useful
\begin{equation}
\label{sinv}
s^2
=\frac12 S_{\mu\nu}S^{\mu\nu}
=-S_{\hat r\hat t }^2 -S_{\hat \theta \hat t }^2 -S_{\hat \phi \hat t }^2
+S_{\hat r \hat \theta}^2 +S_{\hat r \hat \phi}^2+S_{\hat \theta \hat \phi}^2\ ,
\end{equation}
and in general is not constant along the trajectory of the spinning particle. In the Schwarzschild field the total spin must be small enough compared to the masses of the test particle and of the black hole, $|s|/(mM)\ll 1$, for the approximation of the Mathisson-Papapetrou model to be valid. This inequality follows from requiring that
the characteristic length scale $|s|/m$ associated with the particle's internal structure be small compared to the natural length scale $M$ associated with the background field in order that the particle backreaction can be neglected, i.e., that the description of a test particle on a background field make sense \cite{mol}.
\section{Solving the equations of motion: preliminary steps}
Consider first the evolution equation for the spin tensor (\ref{papcoreqs2}).
By contracting both sides of Eq.~(\ref{papcoreqs2}) with $U_\nu$, one obtains the following expression for the total 4-momentum
\begin{equation}
\label{Ps}
P^{\mu}=-(U\cdot P)U^\mu -U_\nu \frac{DS^{\mu\nu}}{{\rm d} \tau_U}
\equiv
mU^\mu +P_s^\mu\ ,
\end{equation}
which then defines the particle's mass $m$, which a priori does not have to be constant along the orbit, while $P_s^\mu =U_\alpha DS^{\alpha\mu}/{{\rm d} \tau_U}$ is the part of the 4-momentum orthogonal to $U$.
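Explicitly, contracting Eq.~(\ref{papcoreqs2}) with $U_\nu$ and using $U\cdot U=-1$ gives
\[
U_\nu\frac{DS^{\mu\nu}}{{\rm d}\tau_U}
=P^{\mu}\,(U_\nu U^{\nu})-P^{\nu}U_\nu\,U^{\mu}
=-P^{\mu}-(U\cdot P)\,U^{\mu}\ ,
\]
which, solved for $P^{\mu}$, yields Eq.~(\ref{Ps}); the orthogonality $U\cdot P_s=0$ then follows from the antisymmetry of $DS^{\mu\nu}/{\rm d}\tau_U$.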
Finally let $U_p$ denote the timelike unit vector associated with the total 4-momentum $P=||P||\,U_p\equiv\mu \, U_p$.
Backsubstituting this representation Eq.~(\ref{Ps}) of the momentum into the spin evolution Eq.~(\ref{papcoreqs2}) expressed in the static observer frame leads to
\begin{eqnarray}
\label{eqs1}
0&=&\frac{{\rm d} S_{\hat r\hat \phi}}{{\rm d} \tau_U}-\nu \frac{{\rm d} S_{\hat t\hat r}}{{\rm d} \tau_U}+\gamma\frac{\zeta_K}{\nu_K}(\nu^2-\nu_K^2)S_{\hat t\hat \phi}\ , \\
\label{eqs2}
0&=&\frac{{\rm d} S_{\hat \theta\hat \phi}}{{\rm d} \tau_U}-\nu \frac{{\rm d} S_{\hat t\hat \theta}}{{\rm d} \tau_U}-\frac{\gamma\nu}{\gamma_K^2}\frac{\zeta_K}{\nu_K}S_{\hat r\hat \theta}\ , \\
\label{eqs3}
0&=&\frac{{\rm d} S_{\hat r\hat \theta}}{{\rm d} \tau_U}-\gamma\nu_K\zeta_K S_{\hat t\hat \theta}+\gamma\nu\frac{\zeta_K}{\nu_K}S_{\hat \theta\hat \phi}\ .
\end{eqnarray}
From (\ref{Ps}), using the definition of $P_s$ and equations~(\ref{eqs1})--(\ref{eqs3}), it follows that the total 4-momentum $P$ can be written in the form
\begin{eqnarray}\fl\quad
\label{Ptot}
P&=&\gamma(m+\nu m_s)e_{\hat t}
+\frac1{\gamma}\left[\frac{{\rm d} S_{\hat t\hat r}}{{\rm d} \tau_U}
-\gamma\nu\frac{\zeta_K}{\nu_K} S_{\hat t\hat \phi}\right]e_{\hat r}
+\frac1{\gamma}\left[\frac{{\rm d} S_{\hat t\hat \theta}}{{\rm d} \tau_U}
-\gamma\nu_K\zeta_K S_{\hat r\hat \theta}\right]e_{\hat \theta}
\nonumber\\
\fl\quad
&&+\gamma(m\nu+m_s)e_{\hat \phi}
\nonumber\\
\fl\quad
&=&mU+m_sE_{\hat \phi}+\frac1{\gamma}\left[\frac{{\rm d} S_{\hat t\hat r}}{{\rm d} \tau_U}-\gamma\nu\frac{\zeta_K}{\nu_K} S_{\hat t\hat \phi}\right]e_{\hat r}
+\frac1{\gamma}\left[\frac{{\rm d} S_{\hat t\hat \theta}}{{\rm d} \tau_U}
-\gamma\nu_K\zeta_K S_{\hat r\hat \theta}\right]e_{\hat \theta}\ ,
\end{eqnarray}
with
\begin{equation}
\label{msdef}
m_s=\frac{{\rm d} S_{\hat t\hat \phi}}{{\rm d} \tau_U}
+\gamma\nu\frac{\zeta_K}{\nu_K} S_{\hat t\hat r}
-\gamma\nu_K\zeta_K S_{\hat r\hat \phi}\ .
\end{equation}
Next consider the equation of motion (\ref{papcoreqs1}).
The Riemann tensor spin-curvature-coupling force term is
\begin{eqnarray}\fl\quad
\label{Fspin}
F^{\rm (sc)}
=\gamma\zeta_K^2 \left\{\nu S_{\hat t\hat \phi} e_{\hat t}
+ \left[2S_{\hat t\hat r}+\nu S_{\hat r\hat \phi}\right] e_{\hat r}
-\left[S_{\hat t\hat \theta}+2\nu S_{\hat \theta\hat \phi}\right] e_{\hat \theta}
-S_{\hat t\hat \phi}e_{\hat \phi}\right\}\ .
\end{eqnarray}
Using (\ref{Ps}), the balance condition which allows a circular orbit of this type to exist can be written as
\begin{equation}
\label{baleq}
ma(U)=F^{\rm(so)} + F^{\rm (sc)}\ ,
\end{equation}
where $a(U)$ is the acceleration of the $U$-orbit and $F^{\rm(so)}\equiv -DP_s/d\tau_{U}$ defines the spin-orbit coupling force term, which arises from the variation of the spin along the orbit.
Taking (\ref{Ptot}) and (\ref{msdef}) into account, Eq.~(\ref{papcoreqs1}) gives rise to the following set of ordinary differential equations
\begin{eqnarray}\fl\quad
\label{eqm1}
0&=&\nu\frac{{\rm d}^2S_{\hat t\hat \phi}}{{\rm d} \tau_U^2}-2\gamma\nu \nu_K\zeta_K\frac{{\rm d} S_{\hat r\hat \phi}}{{\rm d} \tau_U}+\gamma\frac{\zeta_K}{\nu_K}(\nu^2+\nu_K^2)\frac{{\rm d} S_{\hat t\hat r}}{{\rm d} \tau_U}\nonumber\\
\fl\quad
&&-\gamma^2\nu\zeta_K^2(\nu^2-\nu_K^2)S_{\hat t\hat \phi}+\frac{{\rm d} m}{{\rm d} \tau_U}\ , \\
\fl\quad
\label{eqm2}
0&=&\frac{{\rm d}^2S_{\hat t\hat r}}{{\rm d} \tau_U^2}-\nu\frac{{\rm d}^2S_{\hat r\hat \phi}}{{\rm d} \tau_U^2}-2\frac{\gamma\nu}{\gamma_K^2}\frac{\zeta_K}{\nu_K}\frac{{\rm d} S_{\hat t\hat \phi}}{{\rm d} \tau_U}+\nu\gamma^2\zeta_K^2(\nu^2-\nu_K^2)S_{\hat r\hat \phi} \nonumber\\
\fl\quad
&&-\zeta_K^2\left[\frac{\gamma^2}{\gamma_K^2}\frac{\nu^2}{\nu_K^2}+2\right]S_{\hat t\hat r}-m\gamma\frac{\zeta_K}{\nu_K}(\nu^2-\nu_K^2)\ , \\
\fl\quad
\label{eqm3}
0&=&\frac{{\rm d}^2S_{\hat t\hat \theta}}{{\rm d} \tau_U^2}-\nu\frac{{\rm d}^2S_{\hat \theta\hat \phi}}{{\rm d} \tau_U^2}+\gamma\frac{\zeta_K}{\nu_K}(\nu^2-\nu_K^2)\frac{{\rm d} S_{\hat r\hat \theta}}{{\rm d} \tau_U}+2\nu\zeta_K^2S_{\hat \theta\hat \phi}+\zeta_K^2S_{\hat t\hat \theta}\ , \\
\fl\quad
\label{eqm4}
0&=&\frac{{\rm d}^2S_{\hat t\hat \phi}}{{\rm d} \tau_U^2}-\gamma\frac{\zeta_K}{\nu_K}(\nu^2+\nu_K^2)\frac{{\rm d} S_{\hat r\hat \phi}}{{\rm d} \tau_U}+2\gamma\nu\frac{\zeta_K}{\nu_K}\frac{{\rm d} S_{\hat t\hat r}}{{\rm d} \tau_U}-\zeta_K^2\left[\frac{\gamma^2}{\gamma_K^2}\frac{\nu^2}{\nu_K^2}-1\right]S_{\hat t\hat \phi}\nonumber\\
\fl\quad
&&+\nu\frac{{\rm d} m}{{\rm d} \tau_U}\ .
\end{eqnarray}
Note that there are two equations containing the second derivative of $S_{\hat t\hat \phi}$; this is due to the presence of its first derivative in two different components of $P$ (more precisely, in $P^{\hat t}$ and $P^{\hat \phi}$, see Eqs.~(\ref{Ptot}) and (\ref{msdef})).
Once the system of constant coefficient linear differential equations (\ref{eqs1})--(\ref{eqs3}) and (\ref{eqm1})--(\ref{eqm4}) is solved for $m$ and the spin tensor components, one may then calculate $P$. The system must be decoupled, leading to functions which are either exponentials, sinusoidals, or at most quadratic functions of the proper time along the particle world line. The elimination method for decoupling the equations is crucially different depending on whether
$\nu$ has the values 0 or $\pm \nu_K$ or none of these values, since one or the other or neither term drops out of the spin equations (\ref{eqs1}) and so must be considered separately.
From the details of their derivations discussed in the Appendix, one sees why there are several zones approaching the horizon where the solutions change character.
\section{Particles at rest: the $\nu=0$ case}
When the particle is at rest, the solutions for the components of the spin tensor and the varying mass $m$ of the spinning particle are given by
\begin{enumerate}
\label{solnueq0}
\item $2M<r<3M$:
\begin{eqnarray}
\label{solnueq0I}
S_{\hat \theta\hat \phi}
&=&c_1
\ , \nonumber\\
S_{\hat t\hat r}
&=&c_2e^{\omega_1\tau}+c_3e^{-\omega_1\tau}+\frac{\nu_K}{\zeta_K}\frac{c_m}{2+\nu_K^2}
\ , \nonumber\\
m&=&-\nu_K\zeta_K [c_2e^{\omega_1\tau}+c_3e^{-\omega_1\tau}]+\frac{2c_m}{2+\nu_K^2}
\ , \nonumber\\
S_{\hat t\hat \theta}
&=&c_4 e^{{\bar \omega}_0\tau}+c_5 e^{-{\bar \omega}_0\tau}
\ , \nonumber\\
S_{\hat t\hat \phi}
&=&c_6 e^{{\bar \omega}_0\tau}+c_7 e^{-{\bar \omega}_0\tau}
\ , \nonumber\\
S_{\hat r\hat \theta}
&=&\gamma_K\nu_K\left[c_4 e^{{\bar \omega}_0\tau}-c_5 e^{-{\bar \omega}_0\tau}\right]+c_8
\ , \nonumber\\
S_{\hat r\hat \phi}
&=&\gamma_K\nu_K\left[c_6 e^{{\bar \omega}_0\tau}-c_7 e^{-{\bar \omega}_0\tau}\right]+c_9
\ ;
\end{eqnarray}
\item $r=3M$:
\begin{eqnarray}
\label{solnueq0req3M}
S_{\hat \theta\hat \phi}&=&c_1\ , \nonumber\\
S_{\hat t\hat r}&=&c_2e^{\tau/(3M)}+c_3e^{-\tau/(3M)}+\sqrt{3}Mc_m\ , \nonumber\\
m&=&-\frac{\sqrt{3}}{9M} [c_2e^{\tau/(3M)}+c_3e^{-\tau/(3M)}]+\frac23c_m\ , \nonumber\\
S_{\hat t\hat \theta}&=&c_4\tau+c_5\ , \nonumber\\
S_{\hat t\hat \phi}&=&c_6\tau+c_7\ , \nonumber\\
S_{\hat r\hat \theta}&=&\frac{\sqrt{3}}{9M}\left[\frac{c_4}2\tau^2+c_5\tau\right]+c_8\ , \nonumber\\
S_{\hat r\hat \phi}&=&\frac{\sqrt{3}}{9M}\left[\frac{c_6}2\tau^2+c_7\tau\right]+c_9\ ;
\end{eqnarray}
\item $r>3M$:
\begin{eqnarray}
\label{solnueq0II}
S_{\hat \theta\hat \phi}
&=&c_1
\ , \nonumber\\
S_{\hat t\hat r}
&=&c_2e^{\omega_1\tau}+c_3e^{-\omega_1\tau}+\frac{\nu_K}{\zeta_K}\frac{c_m}{2+\nu_K^2}
\ , \nonumber\\
m&=&-\nu_K\zeta_K [c_2e^{\omega_1\tau}+c_3e^{-\omega_1\tau}]+\frac{2c_m}{2+\nu_K^2}
\ , \nonumber\\
S_{\hat t\hat \theta}
&=&c_4 \cos\omega_0\tau+c_5 \sin\omega_0\tau
\ , \nonumber\\
S_{\hat t\hat \phi}
&=&c_6 \cos\omega_0\tau+c_7 \sin\omega_0\tau
\ , \nonumber\\
S_{\hat r\hat \theta}
&=&\gamma_K\nu_K\left[c_4 \sin\omega_0\tau-c_5 \cos\omega_0\tau\right]+c_8
\ , \nonumber\\
S_{\hat r\hat \phi}
&=&\gamma_K\nu_K\left[c_6 \sin\omega_0\tau-c_7 \cos\omega_0\tau\right]+c_9
\ ,
\end{eqnarray}
\end{enumerate}
where $c_m, c_1, \ldots, c_9$ are integration constants and
\begin{equation}\fl\quad
\label{freqnueq0}
\omega_0=i{\bar \omega}_0=\frac{\zeta_K}{\gamma_K}=\sqrt{\frac{M(r-3M)}{r^3(r-2M)}}
\ , \quad
\omega_1=\zeta_K(2+\nu_K^2)^{1/2}=\sqrt{\frac{M(2r-3M)}{r^3(r-2M)}}\ .
\end{equation}
From Eq.~(\ref{Ptot}) the total 4-momentum $P$ then has the value
\begin{equation}\fl\quad
P=m e_{\hat t}+\omega_1[c_2e^{\omega_1\tau}-c_3e^{-\omega_1\tau}]e_{\hat r}-\frac{\zeta_K}{\nu_K}\left[S_{\hat r\hat \theta}-\frac{c_8}{\gamma_K^2}\right]e_{\hat \theta}-\frac{\zeta_K}{\nu_K}\left[S_{\hat r\hat \phi}-\frac{c_9}{\gamma_K^2}\right]e_{\hat \phi}\
\end{equation}
in cases (i) and (iii), and
\begin{equation}\fl\,
P=m e_{\hat t}+\frac1{3M}[c_2e^{\tau/(3M)}-c_3e^{-\tau/(3M)}]e_{\hat r}-\left[\frac{\sqrt{3}}{9M}S_{\hat r\hat \theta}-c_4\right]e_{\hat \theta}-\left[\frac{\sqrt{3}}{9M}S_{\hat r\hat \phi}-c_6\right]e_{\hat \phi}\
\end{equation}
in case (ii).
At this point the supplementary conditions impose constraints on the constants of integration which appear in the solution.
For a particle at rest ($\nu=0$), the CP and P conditions coincide and imply that
$S_{\hat t \hat a}=0$, namely
\begin{equation}
c_2=c_3=c_4=c_5=c_6=c_7=0\ , \qquad c_m=0\ ,
\end{equation}
leaving arbitrary values for $c_1, c_8, c_9$.
As a consequence, $m$ should be $0$ as well, implying that $P$ should be spacelike and therefore physically inconsistent.
The T supplementary conditions when $\nu=0$ imply instead
\begin{eqnarray}
\label{Tcondsoldnueq0}
0&=&S_{\hat t\hat \theta}\frac{{\rm d} S_{\hat t\hat \theta }}{{\rm d} \tau_U}
+S_{\hat t\hat r}\frac{{\rm d} S_{\hat t\hat r }}{{\rm d} \tau_U}
+S_{\hat t\hat \phi}\frac{{\rm d} S_{\hat t\hat \phi }}{{\rm d} \tau_U}
-\nu_K\zeta_K[S_{\hat t\hat \phi}S_{\hat r\hat \phi}
+S_{\hat t\hat \theta}S_{\hat r\hat \theta}]
\ , \nonumber\\
0&=&S_{\hat r\hat \theta}\frac{{\rm d} S_{\hat t\hat \theta }}{{\rm d} \tau_U}
+S_{\hat r\hat \phi}\frac{{\rm d} S_{\hat t\hat \phi}}{{\rm d} \tau_U}
-\nu_K\zeta_K (S_{\hat r\hat \theta}^2+S_{\hat r\hat \phi}^2)
-mS_{\hat t\hat r}
\ , \nonumber\\
0&=&S_{\hat \theta \hat \phi}\frac{{\rm d} S_{\hat t\hat \phi}}{{\rm d} \tau_U}
-S_{\hat r\hat \theta}\frac{{\rm d} S_{\hat t\hat r }}{{\rm d} \tau_U}
-\nu_K\zeta_K S_{\hat \theta \hat \phi}S_{\hat r\hat \phi}-mS_{\hat t\hat \theta}
\ , \nonumber\\
0&=&S_{\hat \theta\hat \phi}\frac{{\rm d} S_{\hat t\hat \theta }}{{\rm d} \tau_U}
+S_{\hat r\hat \phi}\frac{{\rm d} S_{\hat t\hat r }}{{\rm d} \tau_U}
-\nu_K\zeta_K S_{\hat r\hat \theta}S_{\hat \theta\hat \phi}
+mS_{\hat t\hat \phi}\ .
\end{eqnarray}
By substituting the solutions given by Eqs.~(\ref{solnueq0I})--(\ref{solnueq0II}) into these equations, one finds that all the integration constants except $c_1$ must vanish.
This in turn implies $m=0$, which again leads to a spacelike $P$.
Thus a spinning particle with nonzero rest mass cannot remain at rest in the given gravitational field.
\section{Geodesic motion: the $\nu=\pm \nu_K$ case}
When the test particle's center of mass moves along a geodesic (the orbit has zero acceleration $a(U)=0$) with azimuthal velocity $\nu=\pm\nu_K$, the spin-curvature and the spin-orbit forces balance each other (see Eq.~(\ref{baleq})): $F^{\rm (so)}=-F^{\rm (sc)}$. The solution of Eqs.~(\ref{eqm1})--(\ref{eqm4}) determines the spin which leads to this balancing.
In the Schwarzschild spacetime, timelike circular geodesics only exist for $r>3M$. We consider
separately the various cases:
\begin{enumerate}
\item $3M<r<6M$:
\begin{eqnarray}\fl\quad
\label{solnuKII}
S_{\hat \theta\hat \phi}&=&c_3\cos\omega_4\tau+c_4\sin\omega_4\tau\ , \nonumber\\
\fl\quad
S_{\hat t\hat \theta}&=&c_5\cos\omega_3\tau+c_6\sin\omega_3\tau\pm\frac{S_{\hat \theta\hat \phi}}{\nu_K}\ , \nonumber\\
\fl\quad
S_{\hat r\hat \theta}&=&\gamma_K\nu_K\left[c_5\sin\omega_3\tau-c_6\cos\omega_3\tau\right]\ , \nonumber\\
\fl\quad
m&=&c_m\ , \nonumber\\
\fl\quad
S_{\hat t\hat r}&=&c_7e^{{\bar \omega}_2\tau}+c_8 e^{-{\bar \omega}_2\tau}\pm2\frac{\gamma_K}{\zeta_K}\frac{c_2}{4-3\gamma_K^2}\ , \nonumber\\
\fl\quad
S_{\hat r\hat \phi}&=&\pm\nu_K[c_7e^{{\bar \omega}_2\tau}+c_8 e^{-{\bar \omega}_2\tau}]+c_1+2\frac{\nu_K}{\zeta_K}\gamma_K\frac{c_2}{4-3\gamma_K^2}\ , \nonumber\\
\fl\quad
S_{\hat t\hat \phi}&=&\pm\frac2{\gamma_K}(3\gamma_K^2-4)^{-1/2}[c_8e^{-{\bar \omega}_2\tau}-c_7e^{{\bar \omega}_2\tau}]+c_9-3\gamma_K^2\frac{c_2}{4-3\gamma_K^2}\tau\ ;
\end{eqnarray}
\item $r=6M$:
\begin{eqnarray}
\label{solnuKreq6M}
S_{\hat \theta\hat \phi}&=&c_3\cos\frac{\sqrt{3}\tau}{18M}+c_4\sin\frac{\sqrt{3}\tau}{18M}\ , \nonumber\\
S_{\hat t\hat \theta}&=&c_5\cos\frac{\sqrt{6}\tau}{36M}+c_6\sin\frac{\sqrt{6}\tau}{36M}\pm2S_{\hat \theta\hat \phi}\ , \nonumber\\
S_{\hat r\hat \theta}&=&\frac{\sqrt{3}}3\left[c_5\sin\frac{\sqrt{6}\tau}{36M}-c_6\cos\frac{\sqrt{6}\tau}{36M}\right]\ , \nonumber\\
m&=&c_m\ , \nonumber\\
S_{\hat t\hat r}&=&c_7\tau+c_8\ , \nonumber\\
S_{\hat r\hat \phi}&=&\pm\frac12[c_7\tau+c_8]+c_1\ , \nonumber\\
S_{\hat t\hat \phi}&=&\mp\frac{\sqrt{2}\tau}{12M}\left[\frac{c_7}2\tau+c_8\right]+c_2\ ;
\end{eqnarray}
\item $r>6M$:
\begin{eqnarray}\fl\quad
\label{solnuKIII}
S_{\hat \theta\hat \phi}&=&c_3\cos\omega_4\tau+c_4\sin\omega_4\tau\ , \nonumber\\
\fl\quad
S_{\hat t\hat \theta}&=&c_5\cos\omega_3\tau+c_6\sin\omega_3\tau\pm\frac{S_{\hat \theta\hat \phi}}{\nu_K}\ , \nonumber\\
\fl\quad
S_{\hat r\hat \theta}&=&\gamma_K\nu_K\left[c_5\sin\omega_3\tau-c_6\cos\omega_3\tau\right]\ , \nonumber\\
\fl\quad
m&=&c_m\ , \nonumber\\
\fl\quad
S_{\hat t\hat r}&=&c_7\cos\omega_2\tau+c_8\sin\omega_2\tau\pm2\frac{\gamma_K}{\zeta_K}\frac{c_2}{4-3\gamma_K^2}\ , \nonumber\\
\fl\quad
S_{\hat r\hat \phi}&=&\pm\nu_K[c_7\cos\omega_2\tau+c_8\sin\omega_2\tau]+c_1+2\frac{\nu_K}{\zeta_K}\gamma_K\frac{c_2}{4-3\gamma_K^2}\ , \nonumber\\
\fl\quad
S_{\hat t\hat \phi}&=&\pm\frac2{\gamma_K}(4-3\gamma_K^2)^{-1/2}[c_8\cos\omega_2\tau-c_7\sin\omega_2\tau]+c_9-3\gamma_K^2\frac{c_2}{4-3\gamma_K^2}\tau\ ,
\end{eqnarray}
\end{enumerate}
where $c_m, c_1, \ldots, c_9$ are integration constants, and three real frequencies are defined for each open interval of radial values by
\begin{eqnarray}\fl\quad
&&\omega_2=i{\bar \omega}_2=\zeta_K(4-3\gamma_K^2)^{1/2}
=\sqrt{\frac{M(r-6M)}{r^3(r-3M)}}\ , \qquad
\omega_3=\zeta_K=\left(\frac{M}{r^3}\right)^{1/2}\ , \nonumber\\
\fl\quad
&&\omega_4=i{\bar \omega}_4=\zeta_K(3\gamma_K^2-2)^{1/2}
=\frac{1}{r}\sqrt{\frac{M}{r-3M}}
\ .
\end{eqnarray}
Consider first the open interval cases $r\not=6M$.
From Eq.~(\ref{Ptot}), the total 4-momentum $P$ is given by
\begin{eqnarray}\fl\quad
\label{PtotnuK}
P&=&\left\{m\gamma_K\mp\zeta_K\left[S_{\hat r\hat \phi}-(1-2\nu_K^2)\gamma_K^2c_1-2\frac{\nu_K}{\gamma_K\zeta_K}\frac{c_2}{1-4\nu_K^2}\right]\right\}e_{\hat t}\nonumber\\
\fl\quad
&&\mp\frac{\gamma_K^2\zeta_K}{2}\left\{(1+2\nu_K^2)\left[S_{\hat t\hat \phi}+3\frac{c_2}{1-4\nu_K^2}\tau\right]+(1-4\nu_K^2)c_9\right\}e_{\hat r}\nonumber\\
\fl\quad
&&+\left\{ \frac{\zeta_K}{\nu_K}S_{\hat r\hat \theta}\mp(1+2\nu_K^2)^{1/2}\left[c_3 \sin\omega_4\tau -c_4 \cos\omega_4\tau\right] \right\}e_{\hat \theta}\nonumber\\
\fl\quad
&&\pm\left\{m\gamma_K\nu_K\mp\frac{\zeta_K}{\nu_K}\left[S_{\hat r\hat \phi}-(1-2\nu_K^2)\gamma_K^2c_1-2\frac{\nu_K}{\gamma_K}\frac{c_2}{1-4\nu_K^2}\right]\right\}e_{\hat \phi}\ .
\end{eqnarray}
We next impose the standard supplementary conditions.
The CP conditions imply that $S_{\hat t \hat a}=0$, namely
\begin{equation}
c_2=c_3=c_4=c_5=c_6=c_7=c_8=c_9=0\ ,
\end{equation}
so that the only nonvanishing component of the spin tensor is
$ S^{\hat z}=S_{\hat r\hat \phi}=c_1 \equiv s$, leaving arbitrary values for $c_m$ as well.
From Eq.~(\ref{PtotnuK}), the total 4-momentum $P$ becomes (using $m_s=s\gamma\nu_K\zeta_K$ which follows from Eq.~(\ref{msdef}))
\begin{equation}
P=mU_{\pm}+s\gamma\nu_K\zeta_K E_{\hat \phi}\ ,
\end{equation}
with $U_{\pm}$ given by Eq.~(\ref{Ugeos}).
Re-examining Eq.~(\ref{Fspin}) shows that the spin-curvature force then acts radially, balancing the radial spin-orbit force.
The P conditions imply
\begin{equation}
S_{\hat t \hat \phi}=0\ , \qquad S_{\hat r \hat t}\pm\nu_K S_{\hat r \hat \phi}=0\ , \qquad S_{\hat \theta \hat t}\pm\nu_K S_{\hat \theta \hat \phi}=0\ ,
\end{equation}
which lead only to the trivial solution
\begin{equation}
c_1=c_2=c_3=c_4=c_5=c_6=c_7=c_8=c_9=0\ ,
\end{equation}
with $c_m$ arbitrary, or in other words the components of the spin tensor must all be zero, which means that a non-zero spin is incompatible with geodesic motion for a spinning particle.
The T supplementary conditions when $\nu=\pm\nu_K$ imply
\begin{eqnarray}\fl\quad
\label{TcondsoldnuK}
0&=&S_{\hat t\hat \theta}\frac{{\rm d} S_{\hat t\hat \theta }}{{\rm d} \tau_U}+S_{\hat t\hat r}\frac{{\rm d} S_{\hat t\hat r }}{{\rm d} \tau_U}+\gamma_K^2S_{\hat t\hat \phi}\frac{{\rm d} S_{\hat t\hat \phi }}{{\rm d} \tau_U}\pm m\gamma_K^2\nu_K S_{\hat t\hat \phi}\nonumber\\
\fl\quad
&&\mp\gamma_K\zeta_K\{[S_{\hat t\hat r}S_{\hat t\hat \phi}\pm\nu_K S_{\hat t\hat \theta}S_{\hat r\hat \phi}]-\gamma_K^2S_{\hat t\hat \phi}[S_{\hat t\hat r}\mp\nu_K S_{\hat r\hat \phi}]\}\ , \nonumber\\
\fl\quad
0&=&S_{\hat r\hat \theta}\frac{{\rm d} S_{\hat t\hat \theta }}{{\rm d} \tau_U}-\gamma_K^2[\pm\nu_K S_{\hat t\hat r}-S_{\hat r\hat \phi}]\frac{{\rm d} S_{\hat t\hat \phi}}{{\rm d} \tau_U}+\gamma_K\frac{\zeta_K}{\nu_K}[S_{\hat t\hat r}^2-\nu_K^2 S_{\hat r\hat \theta}^2]\nonumber\\
\fl\quad
&&-\gamma_K^3\frac{\zeta_K}{\nu_K}\left[S_{\hat t\hat r}^2\mp\nu_K(1+\nu_K^2)S_{\hat t\hat r}S_{\hat r\hat \phi}+\nu_K^2S_{\hat r\hat \phi}^2\right]-m\gamma_K^2[S_{\hat t\hat r}\mp\nu_K S_{\hat r\hat \phi}]\ , \nonumber\\
\fl\quad
0&=&\gamma_K^2[\pm\nu_K S_{\hat t\hat \theta}-S_{\hat \theta \hat \phi}]\left[\frac{{\rm d} S_{\hat t\hat \phi}}{{\rm d} \tau_U}-\gamma_K\nu_K\zeta_K S_{\hat r\hat \phi}\right]+S_{\hat r\hat \theta}\frac{{\rm d} S_{\hat t\hat r }}{{\rm d} \tau_U}\nonumber\\
\fl\quad
&&+\gamma_K^2[S_{\hat t\hat \theta}\mp\nu_K S_{\hat \theta \hat \phi}]\left[m+\gamma_K\frac{\zeta_K}{\nu_K}S_{\hat t\hat r}\right]-\gamma_K\frac{\zeta_K}{\nu_K}[S_{\hat t\hat r}S_{\hat t\hat \theta}\pm\nu_K S_{\hat r\hat \theta}S_{\hat t\hat \phi}]\ , \nonumber\\
\fl\quad
0&=&\pm\gamma_K^2\nu_K S_{\hat t\hat \phi}\frac{{\rm d} S_{\hat t\hat \phi }}{{\rm d} \tau_U}+S_{\hat \theta\hat \phi}\frac{{\rm d} S_{\hat t\hat \theta }}{{\rm d} \tau_U}+S_{\hat r\hat \phi}\frac{{\rm d} S_{\hat t\hat r }}{{\rm d} \tau_U}+\gamma_K^3\frac{\zeta_K}{\nu_K}S_{\hat t\hat \phi}[S_{\hat t\hat r}\mp\nu_K^3S_{\hat r\hat \phi}]\nonumber\\
\fl\quad
&&+m\gamma_K^2S_{\hat t\hat \phi}-\gamma_K\frac{\zeta_K}{\nu_K}[\nu_K^2S_{\hat r\hat \theta}S_{\hat \theta\hat \phi}+S_{\hat t\hat \phi}(S_{\hat t\hat r}\pm\nu_K S_{\hat r\hat \phi})]\ .
\end{eqnarray}
By substituting into these equations the solutions given by Eqs.~(\ref{solnuKII})--(\ref{solnuKIII}), we obtain the conditions
\begin{equation}
c_2=c_3=c_4=c_5=c_6=c_7=c_8=0\ ,
\end{equation}
implying that the only nonvanishing components of the spin tensor are
\begin{equation}
S^{\hat z} =-S_{\hat r \hat \phi}=-c_1\ , \qquad S_{\hat t \hat \phi}=c_9\ ,
\end{equation}
and either
\begin{equation}
c_1, c_9\quad \mbox{arbitrary}\ , \qquad c_m=\pm\gamma_K\zeta_K c_1\ ,
\end{equation}
which implies that the spin component $S^{\hat z}$ is proportional to the mass, locking them together by a constant of proportionality depending on the orbital velocity, or
\begin{equation}
\label{physcond1}
c_1=0=c_9\ , \qquad c_m \quad \mbox{arbitrary}\ ,
\end{equation}
corresponding to the zero spin case where geodesic motion is of course allowed.
In the former case the total spin invariant (\ref{sinv}) reduces to
\begin{equation}
s^2=-c_9^2+c_1^2\ ,
\end{equation}
so that the condition $|s|/(mM)\ll1$ preserving the validity of the Mathisson-Papapetrou model reads
\begin{equation}
\frac{|s|}{mM}=\frac{1}{M\gamma_K\zeta_K}\left(1-\frac{c_9^2}{c_1^2}\right)^{1/2}\ll1\ ,
\end{equation}
implying either $c_1\gtrsim c_9$ or $r\simeq 3M$ (where $\gamma_K\to\infty$).
In the limit $r\to3M$, where the circular geodesics become null and require a separate treatment, one has a solution for which the spin component $S^{\hat z}$ is fixed to a value
determined by the constant mass $m$ and the azimuthal velocity, while the $t$-$\phi$ component of the spin is arbitrary. If one takes the limit $m\to0$, then the component of the spin vector out of the orbit vanishes, leaving the spin vector locked to the direction of motion, as found
by \cite{mashnull}, who discussed the null geodesic case using the P supplementary conditions, the only physically relevant ones in this limit.
Finally consider the remaining case $r=6M$.
Eq.~(\ref{Ptot}) then shows that the total 4-momentum $P$ is given by
\begin{eqnarray}\fl\quad
\label{PtotnuKreq6M}
P&=&\left\{\frac23\sqrt{3}m\mp\frac{\sqrt{6}}{108M}\left[3S_{\hat r\hat \phi}-2c_1\right]\right\}e_{\hat t}+\left\{\mp\frac{\sqrt{6}}{36M}S_{\hat t\hat \phi}+\frac{\sqrt{3}}2c_7\right\}e_{\hat r}\nonumber\\
\fl\quad
&&-\left\{\frac{\sqrt{6}}{18M}S_{\hat r\hat \theta}\pm\frac{1}{6M}\left[c_3 \sin\frac{\sqrt{3}\tau}{18M} -c_4 \cos\frac{\sqrt{3}\tau}{18M}\right]\right\}e_{\hat \theta}\nonumber\\
\fl\quad
&&\pm\frac12\left\{\frac23\sqrt{3}m\mp\frac{\sqrt{6}}{108M}\left[3S_{\hat r\hat \phi}-2c_1\right]\right\}e_{\hat \phi}\ .
\end{eqnarray}
Imposing the standard supplementary conditions gives rise to the same result as for the general case $r\not=6M$.
The CP conditions imply
\begin{equation}
c_2=c_3=c_4=c_5=c_6=c_7=c_8=0\ ,
\end{equation}
so that the only nonvanishing component of the spin tensor is $S^{\hat z}=-S_{\hat r\hat \phi}=-c_1$, for arbitrary values of $c_m$, leading to constant mass $m$.
The P conditions give only the trivial solution
\begin{equation}
c_1=c_2=c_3=c_4=c_5=c_6=c_7=c_8=0\ ,
\end{equation}
with $c_m$ arbitrary, leading to constant mass $m$.
Finally, the T conditions imply
\begin{equation}
c_3=c_4=c_5=c_6=c_7=c_8=0\ ,
\end{equation}
so that the only nonvanishing components of the spin tensor are
\begin{equation}
S^{\hat z} =-S_{\hat r \hat \phi}=-c_1\ , \qquad S_{\hat t \hat \phi}=c_2\ ,
\end{equation}
and either
\begin{equation}
c_1, c_2\quad \mbox{arbitrary}\ , \qquad c_m=\pm\frac{\sqrt{2}}{18M}c_1\
\end{equation}
or
\begin{equation}
\label{physcond2}
c_1=0=c_2\ , \qquad c_m \quad \mbox{arbitrary}\ ,
\end{equation}
with constant mass $m$ in both cases.
In the former case the spin invariant (\ref{sinv}) reduces to
\begin{equation}
s^2=-c_2^2+c_1^2\ ,
\end{equation}
so that the condition $|s|/(mM)\ll1$ reads
\begin{equation}
\frac{|s|}{mM}=\frac{18}{\sqrt{2}}\left(1-\frac{c_2^2}{c_1^2}\right)^{1/2}\ll1\ ,
\end{equation}
implying $c_1\gtrsim c_2$.
Thus if the center of mass of the test particle is constrained to move along a circular geodesic, either the spin is forced to be zero or the single nonzero component $S^{\hat z}$ of the spin vector, out of the plane of the orbit, takes an arbitrary constant value.
\section{The general case: $\nu\not=0$ and $\nu\not= \pm \nu_K$}
For general circular orbits excluding the previous cases $\nu=0$ and $\nu= \pm \nu_K$, the solutions of the equations of motion for the components of the spin tensor and the mass $m$ of the spinning particle are
\begin{eqnarray}\fl\quad
\label{sthphisol}
S_{\hat \theta\hat \phi}&=&A\cos\Omega\tau+B\sin\Omega\tau\ , \\
\fl\quad
\label{stthsol}
S_{\hat t\hat \theta}&=&C\cos\Omega_1\tau+D\sin\Omega_1\tau
+F\, S_{\hat \theta\hat \phi}\ ,\quad
F=\frac{3\nu \nu_K^2}{[\nu^2(1+2\nu_K^2)-\nu_K^2(1-\nu_K^2)]}\ , \\
\fl\quad
\label{srthsol}
S_{\hat r\hat \theta}
&=&-\frac{\gamma\nu\zeta_K(1+2\nu_K^2)(\nu^2-\nu_K^2)}{\Omega\nu_K[\nu^2(1+2\nu_K^2)
-\nu_K^2(1-\nu_K^2)]}[A\sin\Omega\tau-B\cos\Omega\tau]
\nonumber\\
\fl\quad
&&-\nu_K\gamma_K[D\cos\Omega_1\tau-C\sin\Omega_1\tau]\ , \\
\fl\quad
\label{eqm4e}
S_{\hat r\hat \phi}
&=&\frac{\nu_K}{\zeta_K}\frac{\nu^2-\nu_K^2}{\frac{\nu^2}{\gamma_K^2}+\nu_K^2(2+\nu_K^2)} \left[\gamma\nu c_m - \gamma_K^2\frac{\nu_K}{\zeta_K} \frac{\nu^2(1-4\nu_K^2) +\nu_K^2(2+\nu_K^2)}{(\nu^2-\nu_K^2)^2}c_0\right]
\nonumber\\
\fl\quad
&&+c_1 e^{\Omega_+\tau}+c_2 e^{-\Omega_+\tau}+c_3 e^{\Omega_-\tau}+c_4 e^{-\Omega_-\tau}
\ , \\
\fl\quad
\label{eqm2d}
S_{\hat t\hat r}
&=&\frac{\nu_K}{\zeta_K}\frac{1}{\frac{\nu^2}{\gamma_K^2}+\nu_K^2(2+\nu_K^2)} \left[-\gamma (\nu^2-\nu_K^2)c_m+\nu\frac{\nu_K}{\zeta_K}c_0\right] +\frac{1}{2\nu\nu_K^2(1+2\nu_K^2)}\cdot\nonumber\\
\fl\quad
&&\cdot\{(3\nu_K^2+\Phi)
\left[c_1 e^{\Omega_+\tau} +c_2 e^{-\Omega_+\tau}\right]
+(3\nu_K^2-\Phi)\left[c_3 e^{\Omega_-\tau}+c_4 e^{-\Omega_-\tau}\right]\}\ , \\
\fl\quad
\label{eqs1c}
S_{\hat t\hat \phi}
&=&\frac{1}{\gamma}\frac{\nu_K}{\zeta_K}\frac1{\nu^2-\nu_K^2}
\bigg\{\Omega_+\left[\frac{3\nu_K^2+\Phi}{2\nu_K^2(1+2\nu_K^2)}-1\right]
\left[c_1 e^{\Omega_+\tau}-c_2 e^{-\Omega_+\tau}\right]
\nonumber\\
\fl\quad
&&+\Omega_-\left[\frac{3\nu_K^2-\Phi}{2\nu_K^2(1+2\nu_K^2)}-1\right] \left[c_3 e^{\Omega_-\tau}-c_4 e^{-\Omega_-\tau}\right]\bigg\}\ , \\
\fl\quad
\label{massvar}
m&=&\gamma\frac{\zeta_K}{\nu_K}[\nu S_{\hat r\hat \phi}- \nu_K^2 S_{\hat t\hat r}]+c_m\ ;
\end{eqnarray}
where $A, B, C, D, c_m, c_0, \ldots, c_4$ are integration constants, and the real positive frequencies $\Omega$ and $\Omega_1$ are given by
\begin{equation}
\Omega=\gamma\zeta_K (1+2\nu_K^2)^{1/2}\frac{|\nu|}{\nu_K} \ , \qquad
\Omega_1=\frac{\gamma\zeta_K}{\gamma_K}\ ,
\end{equation}
assumed to be distinct for the above equations to be valid,
and the remaining abbreviations are
\begin{equation}
\label{Omegapm}
\Omega_{\pm}=-\frac{\gamma}{\gamma_K}\frac{\zeta_K}{\nu_K}\left[{\bar \nu}^2-\nu^2\pm\frac{\gamma_K^2}2\Phi\right]^{1/2}, \quad
\Phi=3\nu_K^2\left[1-\frac{\nu^2}{{\tilde \nu}^2}\right]^{1/2},
\end{equation}
with
\begin{equation}
{\bar \nu}^2=\frac{\gamma_K^2\nu_K^2}{2}(1+2\nu_K^2)\ , \quad
{\tilde \nu}^2=\frac98\frac{\gamma_K^2\nu_K^2}{(1+2\nu_K^2)}\ .
\end{equation}
The behaviors of the azimuthal velocities ${\bar \nu}$, ${\tilde \nu}$ and $\nu_K$ as functions of the radial parameter $r/M$ are compared in Fig.~\ref{fig:1}. They all coincide at $r=6M$, where ${\bar \nu}={\tilde \nu}=\nu_K=1/2$; for $2M<r<6M$ one has ${\tilde \nu}<{\bar \nu}$, while ${\tilde \nu}>{\bar \nu}$ for $r>6M$.
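The coincidence at $r=6M$ is easily checked: there $\nu_K^2=1/4$ and $\gamma_K^2=4/3$, so
\[
{\bar \nu}^2=\frac{4}{3}\cdot\frac{1}{4}\cdot\frac{3/2}{2}=\frac14\ , \qquad
{\tilde \nu}^2=\frac98\cdot\frac{(4/3)(1/4)}{3/2}=\frac14\ .
\]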
\begin{figure}
\typeout{*** EPS figure 1}
\begin{center}
\includegraphics[scale=0.35]{fig1.eps}
\end{center}
\caption{The azimuthal velocities ${\bar \nu}$ ($\nu_{\rm b}$ in the plot), ${\tilde \nu}$ ($\nu_{\rm t}$ in the plot) and $\nu_K$ (dashed curve) are plotted as functions of the radial parameter $r/M$. The dashed vertical line corresponds to the value $r/M=6$, where they all coincide. There are five different regions (explicitly indicated in the plot), depending on the relative sign of the azimuthal velocity $\nu$ with respect to ${\bar \nu}$ and ${\tilde \nu}$, which correspond to intervals for which $\Omega_{\pm}$ are real, purely imaginary or neither, as explained in the text.
}
\label{fig:1}
\end{figure}
The quantities $\Omega_{\pm}$ also lead to angular velocities for certain intervals of values of the azimuthal velocity $\nu$. In fact we are interested in those values for which $\Omega_{+}$ and/or $\Omega_{-}$ are purely imaginary, since the imaginary parts can be interpreted as additional frequencies characterizing spin precession.
One must distinguish the cases $2M<r<6M$ and $r>6M$, referring to Fig.~\ref{fig:1} and to Eq.~(\ref{Omegapm}):
\begin{itemize}
\item[a)] $r>6M$:
\begin{itemize}
\item[-] if $\nu>{\tilde \nu}$ (region I), the quantities $\Omega_{\pm}$ are both complex;
\item[-] if $\nu={\tilde \nu}$, $\Omega_{+}=\Omega_{-}$ is purely imaginary, since ${\tilde \nu}>{\bar \nu}$;
\item[-] if ${\bar \nu}<\nu<{\tilde \nu}$ (region II), $\Omega_{-}$ is purely imaginary, while $\Omega_{+}$ can be either real (even zero) or purely imaginary;
\item[-] if $\nu<{\bar \nu}$ (region III), $\Omega_{+}$ is purely imaginary, while $\Omega_{-}$ can be either real (even zero) or purely imaginary;
\end{itemize}
\item[b)] $2M<r<6M$:
\begin{itemize}
\item[-] if $\nu>{\tilde \nu}$ (region IV), the quantities $\Omega_{\pm}$ are both complex;
\item[-] if $\nu={\tilde \nu}$, $\Omega_{+}=\Omega_{-}$ is real, since ${\tilde \nu}<{\bar \nu}$;
\item[-] if $\nu<{\tilde \nu}$ (region V), $\Omega_{+}$ is real, since ${\tilde \nu}<{\bar \nu}$, while $\Omega_{-}$ can be either real (even zero) or purely imaginary.
\end{itemize}
\end{itemize}
All of these remarks so far assume that the two frequencies $\Omega$ and $\Omega_1$ are distinct, necessary for the decoupling procedure which leads to this solution. A different result follows in the special case $\Omega=\Omega_1$. This occurs for the particular value of the azimuthal velocity
\begin{equation}
\label{nusolomeqom1}
\nu_0=\pm\frac{\nu_K}{\gamma_K}(1+2\nu_K^2)^{-1/2}
=\pm \sqrt{\frac{M(r-3M)}{r(r-2M)}}\ ,
\end{equation}
which vanishes at $r=3M$ and is real for $r>3M$,
rising to its peak speed at $r\approx 3.934M$ and decreasing asymptotically towards
the geodesic speed from below as $r\to \infty$.
The solutions for the components $S_{\hat \theta\hat \phi}$, $S_{\hat t\hat \theta}$ and $S_{\hat r\hat \theta}$ of the spin tensor are given by
\begin{eqnarray}\fl\quad
\label{sthphisolomeqom1}
S_{\hat \theta\hat \phi}&=&A\cos\Omega\tau+B\sin\Omega\tau\ , \\
\fl\quad
\label{stthsolomeqom1}
S_{\hat t\hat \theta}&=&\left[C-\frac32\frac{\gamma_0^2\nu_0}{\Omega}\zeta_K^2(A-B\Omega\tau)\right]\cos\Omega\tau+\left[D-\frac32\frac{\gamma_0^2\nu_0}{\Omega^2}\zeta_K^2A\tau\right]\sin\Omega\tau\ , \\
\fl\quad
\label{srthsolomeqom1}
S_{\hat r\hat \theta}
&=&\frac12\frac{\zeta_K}{\nu_K}\frac{\gamma_0^3}{\Omega^2}\bigg\{
\left[3\nu_0\nu_K^2\zeta_K^2(B+A\Omega\tau)+2\frac{\Omega^2}{\gamma_0^2}(\nu_0 B-\nu_K^2D)\right]\cos\Omega\tau \nonumber\\
\fl\quad
&&+\left[3\nu_0\nu_K^2\zeta_K^2(2A+B\Omega\tau)-2\frac{\Omega^2}{\gamma_0^2}(\nu_0 A-\nu_K^2C)\right]\sin\Omega\tau
\bigg\}\ ,
\end{eqnarray}
with
\begin{equation}\fl\quad
\Omega\equiv\Omega_1=\frac{\zeta_K}{\nu_K}\left[\frac{1+2\nu_K^2}{1+\nu_K^2(1+\nu_K^2)}\right]^{1/2}=\frac{\sqrt{M}}{r}\left[\frac{r-3M}{r(r-3M)+3M^2}\right]^{1/2}\ ,
\end{equation}
while those corresponding to the remaining components, as well as to the varying mass $m$, are obtained simply by evaluating the general solutions (\ref{eqm4e})--(\ref{massvar}) at $\nu=\nu_0$.
The reality properties of the quantities $\Omega_{\pm}$ can be determined as done in the general case, noting that $\nu_0<{\tilde \nu}$ (corresponding to region V) always holds in the interval $3M<r<6M$. For $r>6M$ however, we must distinguish two different regions,
a) $6M<r<{\bar r}_0$, with ${\bar r}_0=6M(1+\sqrt{2}/2)\approx10.24M$ such that $\nu_0={\bar \nu}$, where $\nu_0<{\bar \nu}$ (corresponding to region III), and
b) $r>{\bar r}_0$, where $\nu_0>{\bar \nu}$ (corresponding to region II).
The behavior of $S$, $U$ and $P$ along the world line itself is completely
determined by the initial conditions
\begin{eqnarray}
&& S_{\hat \alpha\hat \beta}(0)\ ,\quad
\frac{{\rm d} S_{\hat \alpha\hat \beta}}{{\rm d} \tau_U}\bigg\vert_{\tau=0}\ ,
\end{eqnarray}
and the corresponding conditions on the mass $m$ of the particle which follow from Eq.~(\ref{massvar}).
Thus in the special case in which the ``center of mass line" is directed along a circular orbit, the completion of the scheme for the spinning test particle is equivalent to a choice of initial conditions.
In principle the components of the spin tensor which are not constants should precess with the
different frequencies which appear in
Eqs.~(\ref{sthphisol})--(\ref{eqs1c}), leading to non-periodic motion, a feature that seems to characterize the general situation in the Schwarzschild \cite{maeda} and Kerr \cite{semerak,hartl1,hartl2} spacetimes.
However, this does not occur in practice once the CP, P and T supplementary conditions are imposed, as we will
see below. It turns out that the nonvanishing components of the spin tensor are all constant in the CP and T cases,
while the motion is periodic with a unique frequency in the P case. As one might expect, the particle mass $m$ turns out to be constant in all three cases.
\subsection{The CP supplementary conditions}
The CP supplementary conditions require
\begin{equation}
S_{\hat t \hat r}=0\ , \qquad S_{\hat t \hat \theta}=0\ , \qquad S_{\hat t \hat \phi}=0\ .
\end{equation}
From Eq.~(\ref{eqm2d}) this forces
\begin{equation}
c_1=c_2=c_3=c_4=0\ , \qquad
c_0=\frac{\gamma}{\nu}\frac{\zeta_K}{\nu_K}(\nu^2-\nu_K^2)c_m\ .
\end{equation}
Substituting these values into Eq.~(\ref{eqm4e}) then gives
\begin{equation}
c_m=-\frac{\gamma\nu}{\gamma_K^2}\frac{\zeta_K}{\nu_K}S_{\hat r \hat \phi}\ ,
\end{equation}
so from Eq.~(\ref{massvar}) we get
\begin{equation}
\label{ssolCPgen}
S_{\hat r \hat \phi}=s=\frac{m}{\gamma\nu}\frac1{\nu_K\zeta_K}\ .
\end{equation}
Finally, from Eqs.~(\ref{sthphisol}) and (\ref{srthsol}) it follows that
\begin{equation}
S_{\hat \theta \hat \phi}=0\ , \qquad S_{\hat r \hat \theta}=0\ .
\end{equation}
However, from Eq.~(\ref{Ptot}) it follows that
\begin{equation}
P=-s\nu_K\zeta_K e_{\hat \phi}\ ,
\end{equation}
since $m_s=-s\gamma\nu_K\zeta_K=-m/\nu$, a consequence of Eqs.~(\ref{msdef}) and (\ref{ssolCPgen}).
This result is unphysical since the total 4-momentum $P$ is spacelike.
\subsection{The P supplementary conditions}
The P supplementary conditions require
\begin{equation}
\label{Pconds}
S_{\hat t \hat \phi}=0\ , \qquad
S_{\hat r \hat t}+S_{\hat r \hat \phi}\nu=0\ , \qquad
S_{\hat \theta \hat t}+S_{\hat \theta \hat \phi}\nu=0\ .
\end{equation}
Under these conditions the components of the spin vector $S_U$ in the local rest space of the particle,
$S_U^\beta=\frac12 \eta_\alpha{}^{\beta\gamma\delta}U^\alpha S_{\gamma\delta}$,
expressed in the Frenet-Serret frame, are just
\begin{equation}
\label{spinFS}
(S_U^{1},S_U^{2},S_U^{3})
=(\gamma^{-1} S_{\hat \theta \hat \phi},
S_{\hat r \hat \theta},
\gamma^{-1} S_{\hat r \hat \phi})
\ .
\end{equation}
Comparing the first Eq.~(\ref{Pconds}) with Eq.~(\ref{eqs1c}) we get
\begin{equation}
c_1=c_2=c_3=c_4=0\ ,
\end{equation}
so that $S_{\hat t \hat r}$, $S_{\hat r \hat \phi}$ and the particle mass $m$ are all constant.
Eqs.~(\ref{eqm4e}) and (\ref{eqm2d}) together with the second of the Pirani conditions Eq.~(\ref{Pconds}) imply
\begin{equation}
c_0=\frac{\gamma}{\nu}\frac{\zeta_K}{\nu_K}\frac{1+\nu^2}{\gamma_K^2}\frac{(\nu^2-\nu_K^2)^2}{\nu^2(2-5\nu_K^2)+\nu_K^2(1+2\nu_K^2)}\,c_m\ ,
\end{equation}
hence from Eq.~(\ref{massvar})
\begin{equation}
c_m=\left[1+\frac{1}{\gamma_K^2}\frac{\nu^2-\nu_K^2}{\nu^2(1-4\nu_K^2)+\nu_K^2(2+\nu_K^2)}\right]m\ .
\end{equation}
Next by substituting these values of the constants $c_0$ and $c_m$ into Eq.~(\ref{eqm4e}), we obtain
\begin{equation}\label{eq:S3}
\gamma S_U^3 =S^{\hat z}
=S_{\hat r \hat \phi}=-m\frac{\gamma}{\nu}\frac{\nu_K}{\zeta_K}\frac{\nu^2-\nu_K^2}{\frac{\gamma^2}{\gamma_K^2}(\nu^2-\nu_K^2)+3\nu_K^2}\ .
\end{equation}
Finally comparing the last of the Pirani conditions Eq.~(\ref{Pconds}) with Eqs.~(\ref{sthphisol})--(\ref{stthsol}) and Eqs.~(\ref{sthphisolomeqom1})--(\ref{stthsolomeqom1}) leads to two possibilities: either
\begin{equation}
\hbox{case P1:\qquad}
A=B=C=D=0\ ,
\end{equation}
which places no constraint on $\nu$ and the spin vector is constant and out of the plane of the orbit, or
\begin{equation}
\hbox{case P2:\qquad}
C=0=D\ , \quad F=\nu\ ,
\end{equation}
the latter of which (again from Eq.~(\ref{stthsol})) leads to the special azimuthal velocity
\begin{equation}\fl\quad
\label{nupirani2}
\nu = \nu_{(P2)}
=\pm 2\nu_K \left(\frac{1-\nu_K^2/4}{1+2\nu_K^2}\right)^{1/2}
=\pm 2\left(\frac{M(r-9M/4)}{r(r-2M)}\right)^{1/2}\ .
\end{equation}
The case P1 has been already considered previously \cite{bdfg1},
leaving only P2 to be considered here.
The corresponding azimuthal speed $\nu_{(P2)}$ vanishes at $r=9M/4$ and is real for $r>9M/4$, rising to a maximum speed of 1 at $r=3M$, corresponding to the two null circular geodesics, and then decreasing, approaching twice the geodesic speed from below as $r\to \infty$.
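As an illustrative aside (not part of the original analysis), these limiting behaviors are easily checked numerically. The sketch below uses geometric units with $M=1$ and the geodesic value $\nu_K=[M/(r-2M)]^{1/2}$ implied by the second equality in Eq.~(\ref{nupirani2}).
\begin{verbatim}
import numpy as np

def nu_K(r):
    # geodesic azimuthal speed nu_K = [M/(r-2M)]^(1/2), with M = 1
    return np.sqrt(1.0 / (r - 2.0))

def nu_P2(r):
    # Eq. (nupirani2), positive branch, with M = 1
    return 2.0 * np.sqrt((r - 2.25) / (r * (r - 2.0)))

print(nu_P2(2.25))                     # 0: vanishes at r = 9M/4
print(nu_P2(3.0))                      # 1: null limit at r = 3M
print(nu_P2(1e8) / (2.0 * nu_K(1e8)))  # -> 1 from below as r -> infinity
\end{verbatim}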
The corresponding values of $\gamma$ and $\Omega$ are respectively
\begin{equation}
\label{eq:gammaP2}
\gamma_{(P2)}=\frac{(1+2\nu_K^2)^{1/2}}{1-\nu_K^2}=\frac{\sqrt{r(r-2M)}}{r-3M}\ ,
\end{equation}
and
\begin{equation}
\label{eq:OmegaP2}
\Omega_{(P2)}
= \gamma_K^2\zeta_K[(4-\nu_K^2)(1+2\nu_K^2)]^{1/2}
= \frac{\sqrt{M}}{r}\frac{\sqrt{4r-9M}}{r-3M}\ ,
\end{equation}
using Eq.~(\ref{eq:phitau}).
To get the angular velocity of precession with respect to a frame which is nonrotating with respect to infinity, one must subtract the precession angular velocity $\Omega_U=\gamma\nu/r$ of the spherical frame. In the case P2 one finds
$\Omega_{(P2)}-\Omega_U=0$, so the spin does not precess with respect to a frame which is nonrotating at infinity.
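A quick numerical confirmation of this cancellation (again an illustrative sketch, not part of the original analysis, in geometric units with $M=1$, using Eqs.~(\ref{nupirani2}), (\ref{eq:gammaP2}) and (\ref{eq:OmegaP2})):
\begin{verbatim}
import numpy as np

for r in (4.0, 6.0, 10.0, 50.0):
    nu = 2.0 * np.sqrt((r - 2.25) / (r * (r - 2.0)))     # Eq. (nupirani2)
    gamma = np.sqrt(r * (r - 2.0)) / (r - 3.0)           # Eq. (eq:gammaP2)
    Omega_P2 = np.sqrt(4.0 * r - 9.0) / (r * (r - 3.0))  # Eq. (eq:OmegaP2)
    print(Omega_P2 - gamma * nu / r)  # Omega_U = gamma*nu/r; differences ~ 0
\end{verbatim}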
Substituting these values back into Eq.~(\ref{eq:S3}) then leads to
\begin{equation}
S_U^{3}
= \pm\frac{m}{2\Omega_{(P2)}}\ .
\end{equation}
The remaining nonzero spin components (\ref{sthphisol})--(\ref{srthsol}) can then be expressed in the form
\begin{eqnarray}
\label{spinsolsP}
S_U^1=\gamma^{-1}S_{\hat \theta \hat \phi}
&=&\gamma^{-1}[ A\cos\Omega_{(P2)}\tau_U+B\sin\Omega_{(P2)}\tau_U]\ ,\nonumber\\
S_U^2=S_{\hat r \hat \theta}
&=&\gamma^{-1} [A\sin\Omega_{(P2)}\tau_U-B\cos\Omega_{(P2)}\tau_U]\ ,
\end{eqnarray}
with
\begin{eqnarray}
\fl\quad
S_U^1(0)=\gamma^{-1}A\ , \qquad
S_U^2(0)=-\gamma^{-1}B\ , \qquad
S_U^3(0)=\pm\frac{m}{2\Omega_{(P2)}}\ ,
\end{eqnarray}
leading to
\begin{equation}
\pmatrix{
S_U^{1}\cr
S_U^{2}\cr
S_U^{3}\cr}=
\pmatrix{\cos\phi &\sin\phi & 0\cr
-\sin\phi &\cos\phi & 0\cr
0 & 0& 1\cr}
\pmatrix{
S_U^{1}(0)\cr
S_U^{2}(0)\cr
S_U^{3}(0)\cr}\ .
\end{equation}
The spin invariant (\ref{sinv}) becomes in this case
\begin{equation}
s^2=\frac{1}{\gamma^2}\left[A^2+B^2+\frac{m^2}{4\zeta_K^2}\frac1{4-\nu_K^2}\right]\ .
\end{equation}
The Mathisson-Papapetrou model is valid if the condition $|s|/(mM)\ll1$ is satisfied.
From the previous equation we have that either $\gamma\to\infty$ or the sum of the bracketed terms must be small, i.e.,
\begin{equation}
\label{scoeffs}
\frac{A^2}{m^2M^2}\ll1\ , \quad \frac{B^2}{m^2M^2}\ll1\ , \quad [4M^2\zeta_K^2(4-\nu_K^2)]^{-1}\ll1\ .
\end{equation}
The latter possibility cannot occur for any allowed value of $r/M$, since the third (dimensionless) term of (\ref{scoeffs}) is always greater than $\approx1.88$, as is easily verified. The former possibility is realized only in the case of ultrarelativistic motion, which Eq.~(\ref{eq:gammaP2}) implies occurs only as $r\to 3M$, where the orbits approach null geodesics; there the limit $m\to0$ forces the component of the spin vector out of the plane of the orbit to vanish, locking the spin vector to the direction of motion exactly as discussed by Mashhoon \cite{mashnull}.
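The quoted lower bound $\approx1.88$ can be verified with a short numerical scan (illustrative only; geometric units with $M=1$, taking $\zeta_K^2=M/r^3$ and $\nu_K^2=M/(r-2M)$, the values consistent with Eq.~(\ref{eq:OmegaP2})):
\begin{verbatim}
import numpy as np

r = np.linspace(2.26, 100.0, 200001)   # allowed region r > 9M/4
third_term = r**3 / (4.0 * (4.0 - 1.0 / (r - 2.0)))
i = third_term.argmin()
print(third_term[i], r[i])             # ~1.88, attained near r ~ 2.6M
\end{verbatim}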
It is well known that the spin vector $S_U = S_U^i E_i$ lying in the local rest space of $U$ is Fermi-Walker transported along $U$ in the P case, so it must satisfy
\begin{equation}\fl\quad
\label{fweq}
0=\frac{D_{(\rm fw)}S_U}{{\rm d} \tau_U}\equiv P(U)\frac{DS_U}{{\rm d} \tau_U}
=
\left[\frac{{\rm d} S_U^{1}}{{\rm d} \tau_U}+S_U^2\tau_1 \right] E_1
+\left[\frac{{\rm d} S_U^{2}}{{\rm d} \tau_U}-S_U^1\tau_1 \right] E_2\ ,
\end{equation}
from (\ref{FSeqs}), where $P(U)^\mu_\alpha=\delta^\mu_\alpha+U^\mu U_\alpha$ projects into the local rest space of $U$.
To check this we must show that the following two equations are identically satisfied
\begin{equation}
\label{fwconds}
\frac{{\rm d} S_U^{1}}{{\rm d} \tau_U}+\tau_1S_U^2=0\ , \qquad
\frac{{\rm d} S_U^{2}}{{\rm d} \tau_U}-\tau_1 S_U^1=0\ .
\end{equation}
But these two equations follow immediately from (\ref{spinsolsP}), since $\tau_1=\Omega_{(P2)}$ results from the direct evaluation of the expression (\ref{kappatau1}) for $\tau_1$, with $\nu$ given by (\ref{nupirani2}).
Thus, given the rest mass $m$ of the test particle, the constant component of the spin orthogonal to the orbit is fixed by the orbit parameters, while the component in the plane of the orbit, as seen within the local rest space of the particle itself, is locked to a direction which is fixed with respect to the distant observers: the angle of precession with respect to the spherical axes is exactly the azimuthal angle of the orbit, but in the opposite sense. In other words, the precession of the spin, which introduces a time-varying force, must be locked to the first torsion of the orbit itself in order to maintain the alignment of the 4-velocity with a static direction in the spacetime, and the spin does not precess with respect to observers at spatial infinity.
Furthermore, the specific spin of the test particle cannot be made arbitrarily small except near the limiting radius where the 4-velocity of this solution goes null, and the spin vector is then locked to the direction of motion.
Apparently the imposition of a circular orbit on the center of mass world line of the test particle is just too strong a condition to describe an interesting spin precession.
The total 4-momentum $P$ given by Eqs.~(\ref{Ptot}) and (\ref{msdef}) can be written in this case as
\begin{eqnarray}
\label{PtotP}
P &=& mU + m_s E_{2}+\gamma^{-1}\left[\frac{{\rm d} S_U^{2}}{{\rm d} \tau_U}
-\gamma\nu_K\zeta_K S_{\hat r\hat \theta}\right]e_{\hat \theta}
\nonumber\\
&=& mU+m_s E_{2}-\left(\frac{\gamma\nu^2}r +\nu_K \zeta_K\right)S_U^2 e_{\hat z}
\ ,
\end{eqnarray}
with $\nu$ given by (\ref{nupirani2}) and $m_s$ a constant
\begin{equation}
\label{msP}
m_s=\gamma\frac{\zeta_K}{\nu_K}(\nu S_{\hat t\hat r}-\nu_K^2 S_{\hat r\hat \phi})
=\gamma^2\frac{\zeta_K}{\nu_K}(\nu^2-\nu_K^2)S_U^3\ ,
\end{equation}
but the final term in $P$ (out of the plane of the orbit) oscillates as the spin precesses in the plane of the orbit.
Note that the radial component of $P$ is zero.
The spin-curvature force (\ref{Fspin}) simplifies to
\begin{eqnarray}
\label{FspinP}
F^{\rm (sc)}&=&\gamma\zeta_K^2 \left\{\left[2S_{\hat t\hat r}+\nu S_{\hat r\hat \phi}\right]e_{\hat r}-\left[S_{\hat t\hat \theta}+2\nu S_{\hat \theta\hat \phi}\right]e_{\hat \theta}\right\}\nonumber\\
&=&3\gamma^2\nu \zeta_K^2 (S_U^2 e_{\hat r}-S_U^1 e_{\hat \theta})
\ ,
\end{eqnarray}
while the term on the left hand side of Eq.~(\ref{papcoreqs1}) can be written as
\begin{eqnarray}
\label{motradP}
\frac{DP}{{\rm d} \tau_U}&=&[m\kappa-m_s\tau_1]e_{\hat r}-\gamma\zeta_K^2 \left[S_{\hat t\hat \theta}+2\nu S_{\hat \theta\hat \phi}\right]e_{\hat \theta}\nonumber\\
&=&[m\kappa-m_s\tau_1]e_{\hat r}-3\gamma^2 \nu \zeta_K^2 S_U^1\, e_{\hat \theta}\ .
\end{eqnarray}
The force balance equation (\ref{baleq}) reduces to
\begin{eqnarray}
\label{baleqP}
ma(U)_{\hat r}&=&F^{\rm(so)}_{\hat r} + F^{\rm (sc)}_{\hat r}\ , \nonumber\\
ma(U)_{\hat \theta}&=&0=F^{\rm(so)}_{\hat \theta} + F^{\rm (sc)}_{\hat \theta}\ ,
\end{eqnarray}
where
\begin{eqnarray}\fl\quad
&&ma(U)_{\hat r}
= m\kappa\ , \ \,
F^{\rm(so)}_{\hat r}
= -m_s\left(\frac{DE_{\hat\phi}}{d\tau_{U}}\right)_{\hat r}=m_s\tau_1\ , \ \,
\nonumber\\
&&F^{\rm (sc)}_{\hat r}
=3\gamma^2\nu \zeta_K^2 S_U^3
\ ,
\quad
F^{\rm (sc)}_{\hat \theta}
= -F^{\rm(so)}_{\hat \theta}
= -3\gamma^2\nu \zeta_K^2 S_U^1
\ .
\end{eqnarray}
\subsection{The Tulczyjew (T) supplementary conditions}
The T supplementary conditions imply from (\ref{Ps})
\begin{eqnarray}\fl\quad
\label{Tcondsold}
0&=&S_{\hat t\hat \theta}\frac{{\rm d} S_{\hat t\hat \theta }}{{\rm d} \tau_U}+S_{\hat t\hat r}\frac{{\rm d} S_{\hat t\hat r }}{{\rm d} \tau_U}+\gamma^2S_{\hat t\hat \phi}\frac{{\rm d} S_{\hat t\hat \phi }}{{\rm d} \tau_U}+m\gamma^2\nu S_{\hat t\hat \phi}\nonumber\\
\fl\quad
&&-\gamma\frac{\zeta_K}{\nu_K}\{[\nu S_{\hat t\hat r}S_{\hat t\hat \phi}+\nu_K^2S_{\hat t\hat \theta}S_{\hat r\hat \phi}]-\gamma^2S_{\hat t\hat \phi}[\nu S_{\hat t\hat r}-\nu_K^2S_{\hat r\hat \phi}]\}\ , \nonumber\\
\fl\quad
0&=&S_{\hat r\hat \theta}\frac{{\rm d} S_{\hat t\hat \theta }}{{\rm d} \tau_U}-\gamma^2[\nu S_{\hat t\hat r}-S_{\hat r\hat \phi}]\frac{{\rm d} S_{\hat t\hat \phi}}{{\rm d} \tau_U}+\gamma\frac{\zeta_K}{\nu_K}[S_{\hat t\hat r}^2-\nu_K^2 S_{\hat r\hat \theta}^2]\nonumber\\
\fl\quad
&&-\gamma^3\frac{\zeta_K}{\nu_K}\left[S_{\hat t\hat r}^2-\nu(1+\nu_K^2)S_{\hat t\hat r}S_{\hat r\hat \phi}+\nu_K^2S_{\hat r\hat \phi}^2\right]-m\gamma^2[S_{\hat t\hat r}-\nu S_{\hat r\hat \phi}]\ , \nonumber\\
\fl\quad
0&=&\gamma^2[\nu S_{\hat t\hat \theta}-S_{\hat \theta \hat \phi}]\left[\frac{{\rm d} S_{\hat t\hat \phi}}{{\rm d} \tau_U}-\gamma\nu_K\zeta_K S_{\hat r\hat \phi}\right]+S_{\hat r\hat \theta}\frac{{\rm d} S_{\hat t\hat r }}{{\rm d} \tau_U}\nonumber\\
\fl\quad
&&+\gamma^2[S_{\hat t\hat \theta}-\nu S_{\hat \theta \hat \phi}]\left[m+\gamma\frac{\zeta_K}{\nu_K}S_{\hat t\hat r}\right]-\gamma\frac{\zeta_K}{\nu_K}[S_{\hat t\hat r}S_{\hat t\hat \theta}+\nu S_{\hat r\hat \theta}S_{\hat t\hat \phi}]\ , \nonumber\\
\fl\quad
0&=&\gamma^2\nu S_{\hat t\hat \phi}\frac{{\rm d} S_{\hat t\hat \phi }}{{\rm d} \tau_U}+S_{\hat \theta\hat \phi}\frac{{\rm d} S_{\hat t\hat \theta }}{{\rm d} \tau_U}+S_{\hat r\hat \phi}\frac{{\rm d} S_{\hat t\hat r }}{{\rm d} \tau_U}+\gamma^3\frac{\zeta_K}{\nu_K}S_{\hat t\hat \phi}[S_{\hat t\hat r}-\nu\nu_K^2S_{\hat r\hat \phi}]\nonumber\\
\fl\quad
&&+m\gamma^2S_{\hat t\hat \phi}-\gamma\frac{\zeta_K}{\nu_K}[\nu_K^2S_{\hat r\hat \theta}S_{\hat \theta\hat \phi}+S_{\hat t\hat \phi}(S_{\hat t\hat r}+\nu S_{\hat r\hat \phi})]\ .
\end{eqnarray}
By solving for the first derivatives, a straightforward calculation shows that the above set of equations simplifies to
\begin{eqnarray}
\label{Tcond1}
0&=&\frac{{\rm d} S_{\hat t\hat r}}{{\rm d} \tau_U}+m\frac{S_{\hat t\hat \theta}}{S_{\hat r\hat \theta}}-\gamma\nu\frac{\zeta_K}{\nu_K} S_{\hat t \hat \phi}\ , \\
\label{Tcond2}
0&=&\frac{{\rm d} S_{\hat t\hat \phi}}{{\rm d} \tau_U}+m\nu+\gamma\frac{\zeta_K}{\nu_K}[\nu S_{\hat t\hat r}-\nu_K^2 S_{\hat r\hat \phi}]\ , \\
\label{Tcond3}
0&=&\frac{{\rm d} S_{\hat t\hat \theta}}{{\rm d} \tau_U}-m\frac{S_{\hat t\hat r}}{S_{\hat r\hat \theta}}-\gamma\nu_K\zeta_K S_{\hat r \hat \theta}\ , \\
\label{Tcond4}
0&=&\frac{m}{\gamma S_{\hat r\hat \theta}}[S_{\hat t\hat \theta}S_{\hat r\hat \phi}-S_{\hat r\hat \theta}S_{\hat t\hat \phi}-S_{\hat t\hat r}S_{\hat \theta \hat \phi}]\ ,
\end{eqnarray}
provided that $S_{\hat r\hat \theta}\not=0$.
Substituting Eqs.~(\ref{eqs1b}), (\ref{massvar}) and then Eqs.~(\ref{eqm2c}), (\ref{eqm4d}) into Eq.~(\ref{Tcond2}) (see the equations listed in \ref{appa3}) leads to
\begin{equation}
\label{Tcond2b}
0=S_{\hat r\hat \phi}-\frac{\nu_K}{\zeta_K}\left[\gamma\nu c_m-\frac{\nu_K}{\zeta_K}\frac{c_0}{\nu^2-\nu_K^2}\right]\ .
\end{equation}
Substituting Eq.~(\ref{eqm4e}) into Eq.~(\ref{Tcond2b}) leads to
\begin{equation}
c_1=c_2=c_3=c_4=0\ , \quad c_0=\frac{\gamma\nu}{\gamma_K^2}\frac{\zeta_K}{\nu_K}\left[1+\frac{1}{\gamma^2}\frac{1}{2+\nu_K^2}\right]c_m\ ,
\end{equation}
implying that $S_{\hat t\hat \phi}=0$, and $S_{\hat t\hat r}$, $S_{\hat r\hat \phi}$ are constant, from Eq.~(\ref{eqs1c}), and Eqs.~(\ref{eqm4e}) and (\ref{eqm2d}) respectively.
But from Eq.~(\ref{Tcond1}) it follows that $S_{\hat t\hat \theta}=0$, or $A=B=C=D=0$, from Eq.~(\ref{stthsol}), so that $S_{\hat \theta\hat \phi}=0$
and $S_{\hat r\hat \theta}=0$ as well, from Eqs.~(\ref{sthphisol}) and (\ref{srthsol}) respectively. This contradicts the assumption $S_{\hat r\hat \theta}\not=0$ so only the case $S_{\hat r\hat \theta}=0$ remains to be considered.
If $S_{\hat r\hat \theta}=0$, the set of equations (\ref{Tcondsold}) reduces to
\begin{eqnarray}
\label{Tcond1c2}
0&=&\frac{{\rm d} S_{\hat t\hat r}}{{\rm d} \tau_U}-\left[\frac{m}{\nu S_{\hat t\hat r}-S_{\hat r\hat \phi}}+\gamma\nu\frac{\zeta_K}{\nu_K}\right]S_{\hat t\hat \phi}\ , \\
\label{Tcond2c2}
0&=&\frac{{\rm d} S_{\hat t\hat \phi}}{{\rm d} \tau_U}-m\frac{\nu S_{\hat r\hat \phi}-S_{\hat t\hat r}}{\nu S_{\hat t\hat r}-S_{\hat r\hat \phi}}+\gamma\frac{\zeta_K}{\nu_K}[\nu S_{\hat t\hat r}-\nu_K^2S_{\hat r\hat \phi}]\ ,
\end{eqnarray}
provided that $\nu S_{\hat t\hat r}-S_{\hat r\hat \phi}\not=0$.
Substituting Eqs.~(\ref{eqs1b}), (\ref{massvar}) and then Eqs.~(\ref{eqm2c}), (\ref{eqm4d}) into Eq.~(\ref{Tcond2c2}) we obtain
\begin{equation}\fl\,
\label{Tcond2c2b}
0=\zeta_K^2(\nu^2-\nu_K^2)[\nu_K^2S_{\hat t\hat r}^2-S_{\hat r\hat \phi}^2]+\nu_K^2c_0[\nu S_{\hat t\hat r}-S_{\hat r\hat \phi}]-\gamma\nu_K\zeta_Kc_m(\nu^2-\nu_K^2)[S_{\hat t\hat r}-\nu S_{\hat r\hat \phi}]\,.
\end{equation}
Substituting Eq.~(\ref{eqm4e}) into Eq.~(\ref{Tcond2c2b}) then gives
\begin{equation}
c_1=c_2=c_3=c_4=0\ ,
\end{equation}
implying
\begin{equation}
\label{vinc1}
S_{\hat t\hat \phi}=0\ , \qquad \frac{{\rm d} S_{\hat t\hat r}}{{\rm d} \tau_U}=0=\frac{{\rm d} S_{\hat r\hat \phi }}{{\rm d} \tau_U}\ ,
\end{equation}
from Eq.~(\ref{eqs1c}), and Eqs.~(\ref{eqm4e}) and (\ref{eqm2d}) respectively.
Hence, Eq.~(\ref{Tcond1c2}) is identically satisfied; moreover
\begin{eqnarray}\fl\,
\label{c0pmsol}
c_0^{(\pm)}&=&\frac{c_m}2\frac{\gamma\nu}{\gamma_K^2}\frac{\zeta_K}{\nu_K}\bigg\{[2-\nu_K^2(1-5\nu_K^2)](\nu^2-\nu_K^2)^2+3\nu_K^2\{\nu^2[3-\nu_K^2(7-\nu_K^2)]\nonumber\\
\fl\,
&&+\nu_K^4(4-\nu_K^2)\}\pm\frac{\nu_K}{\gamma^2\nu}\left[\frac{\nu^2}{\gamma_K^2}+\nu_K^2(2+\nu_K^2)\right][\nu^2(13\nu_K^2+4\nu^2)-8\nu_K^4]^{1/2}\bigg\}\cdot\nonumber\\
\fl\,
&&\cdot\bigg\{\left[\frac{\nu^2}{\gamma_K^2}+\nu_K^2(2+\nu_K^2)\right]^2-9\nu^2\nu_K^4\bigg\}^{-1}\ .
\end{eqnarray}
Next substituting Eqs.~(\ref{eqm4e}) and (\ref{eqm2d}) and then Eq.~(\ref{c0pmsol}) into Eq.~(\ref{massvar}) leads to
\begin{eqnarray}\fl\,
\label{cmpmsol}
c_m^{(\pm)}&=&-\frac{m}2\frac{\{\nu_K^2\nu^4-\nu^2[1-\nu_K^2(3-\nu_K^2)]\}^{-1}}{\nu^2+2\nu_K^2}\bigg\{2\frac{\nu^6}{\gamma_K^2}-\nu^4[1-\nu_K^2(3+\nu_K^2)]\nonumber\\
\fl\,
&&+\nu^2\nu_K^2[2-\nu_K^2(18-\nu_K^2)]+4\nu_K^4(2+\nu_K^2)\nonumber\\
\fl\,
&&\pm\frac{\nu}{\nu_K}\{\nu^2[1-\nu_K^2(3+\nu_K^2)]+\nu_K^2(2+\nu_K^4)\}[\nu^2(13\nu_K^2+4\nu^2)-8\nu_K^4]^{1/2}\bigg\}\ .
\end{eqnarray}
Finally substituting Eqs.~(\ref{c0pmsol}) and (\ref{cmpmsol}) into Eqs.~(\ref{eqm4e}) and (\ref{eqm2d}), we obtain expressions for the only nonvanishing components of the spin tensor
\begin{eqnarray}\fl\quad
\label{strsolT}
S_{\hat t\hat r}&=&\frac{m}{2\gamma}\frac{\nu_K}{\zeta_K}\frac{\nu^2-\nu_K^2}{\nu^2+2\nu_K^2}\frac{\nu^2(2-3\nu_K^2)+4\nu_K^2\pm\nu\nu_K[\nu^2(13\nu_K^2+4\nu^2)-8\nu_K^4]^{1/2}}{\nu_K^2(\nu^4-2)-\nu^2[1-\nu_K^2(3-\nu_K^2)]}\ , \\
\fl\quad
\label{srphisolT}
S_{\hat r\hat \phi}&=&\frac{m}{2\gamma}\frac{\nu_K}{\zeta_K}\frac{\{\nu_K^2(\nu^4-2)-\nu^2[1-\nu_K^2(3-\nu_K^2)]\}^{-1}}{\nu^2+2\nu_K^2}\{\nu\nu_K[(\nu^2+2\nu_K^2)(4\nu^2-3-\nu_K^2)\nonumber\\
\fl\quad
&&-2(\nu^2-\nu_K^2)^2]\pm[\nu^2(1-3\nu_K^2)-2\nu_K^2][\nu^2(13\nu_K^2+4\nu^2)-8\nu_K^4]^{1/2}\}\ ,
\end{eqnarray}
which are in agreement with the condition $\nu S_{\hat t\hat r}-S_{\hat r\hat \phi}\not=0$ assumed above. This solution, having constant spin components, was already found in previous work \cite{bdfg1}.
Eq.~(\ref{vinc1}), together with the fact that $S_{\hat t\hat \theta}=0$ and $S_{\hat r\hat \theta}=0$, shows that the total 4-momentum $P$ (see Eq.~(\ref{Ptot})) also lies in the cylinder of the circular orbit
\begin{equation}
\label{PtotT}
P=mU+m_sE_{\hat \phi}\ ,
\end{equation}
with
\begin{equation}
\label{msT}
m_s=\gamma\frac{\zeta_K}{\nu_K}(\nu S_{\hat t\hat r}-\nu_K^2 S_{\hat r\hat \phi})\ .
\end{equation}
It can therefore be written in the form $P=\mu \, U_p$ with
\begin{equation}\fl\quad
\label{PtotTcirc}
U_p=\gamma_p\, [e_{\hat t}+\nu_p e_{\hat \phi}]\ , \qquad \nu_p=\frac{\nu+m_s/m}{1+\nu m_s/m}\ ,\qquad \mu=\frac{\gamma}{\gamma_p}(m+\nu m_s)\ ,
\end{equation}
where $\gamma_p=(1-\nu_p^2)^{-1/2}$, provided that $m+\nu m_s\not=0$.
The T supplementary conditions can then be written as
\begin{equation}
\label{Tcondsnew}
S_{\hat t \hat \phi}=0\ , \qquad S_{\hat r \hat t}+S_{\hat r \hat \phi}\nu_p=0\ , \qquad S_{\hat \theta \hat t}+S_{\hat \theta \hat \phi}\nu_p=0\ ,
\end{equation}
the last condition being identically satisfied, and with the equivalent azimuthal velocity $\nu_p$ given by
\begin{equation}
\label{nupsol}
\nu_p^{(\pm)}=\frac12\frac{\nu_K}{\nu^2+2\nu_K^2}\{3\nu\nu_K\pm[\nu^2(13\nu_K^2+4\nu^2)-8\nu_K^4]^{1/2}\}\ ,
\end{equation}
from Eqs.~(\ref{strsolT}) and (\ref{srphisolT}).
The reality of (\ref{nupsol}) requires that $\nu$ take values outside the interval $(-{\hat \nu},{\hat \nu})$,
with ${\hat \nu}=\nu_K\sqrt{2}\sqrt{-13+3\sqrt{33}}/4\simeq 0.727 \nu_K$; moreover,
the timelike condition $|\nu_p| <1$ is satisfied for all values of $\nu$ outside the same interval.
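This value of ${\hat \nu}$ follows directly: writing $x=\nu^2$, the radicand in (\ref{nupsol}) is nonnegative when $4x^2+13\nu_K^2\,x-8\nu_K^4\geq0$, and the positive root of the corresponding quadratic,
\begin{equation}
x=\frac{\nu_K^2}{8}\left(3\sqrt{33}-13\right)\ ,
\end{equation}
gives ${\hat \nu}=\nu_K\sqrt{2}\sqrt{-13+3\sqrt{33}}/4$ upon taking the square root.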
From (\ref{Tcondsnew}) the spin vector orthogonal to $U_p$ is just
$ {\gamma_p}^{-1} S_{\hat r \hat \phi} E_{\hat \theta}$.
The spin-curvature force (\ref{Fspin}) turns out to be radially directed
\begin{eqnarray}
\label{FspinT}
F^{\rm (sc)}
= \gamma\zeta_K^2 \left[2S_{\hat t\hat r}+\nu S_{\hat r\hat \phi}\right]e_{\hat r}\ .
\end{eqnarray}
The term on the left hand side of Eq.~(\ref{papcoreqs1}) can be written as
\begin{equation}
\label{motradT}
\frac{DP}{{\rm d} \tau_U}=[m\kappa-m_s\tau_1]e_{\hat r}\ ,
\end{equation}
so that the balance equation (\ref{baleq}) reduces to
\begin{equation}
\label{baleqT}
ma(U)_{\hat r}=F^{\rm(so)}_{\hat r} + F^{\rm (sc)}_{\hat r}\ ,
\end{equation}
where
\begin{equation}\fl\quad
ma(U)_{\hat r}=m\kappa\ , \quad F^{\rm(so)}_{\hat r}=m_s\tau_1\ , \quad
F^{\rm (sc)}_{\hat r}=\gamma\zeta_K^2\left[2S_{\hat t\hat r}+\nu S_{\hat r\hat \phi}\right]\ .
\end{equation}
\section{Conclusions}
Spinning test particles in circular motion around a Schwarzschild black hole have been discussed in the framework of the
Mathisson-Papapetrou approach supplemented by the standard CP, P and T conditions.
One finds that, apart from very special (and indeed artificially constrained) orbits in which either the spin tensor is closely matched to the curvature and torsion properties of the world line of the test particle, or the spin vector seen by the static observers is constant and orthogonal to the plane of the orbit, the assumption of circular motion is not compatible with these equations. Indeed, even in the former case the test particle assumption is violated, except in the limit of massless particles following null geodesics, where the spin vector must be aligned with the direction of motion on general grounds.
The spin-curvature force generically forces the motion away from circular orbits, so one needs a much more complicated machinery to attempt to study explicit solutions of this problem, solutions which must break the stationary axisymmetry.
\section{Introduction}
The numerous point sources that appear around
M87 (NGC 4486) were first identified as its globular cluster system
by Baum (1955). Since then, the globular cluster system, hereafter GCS, of
this giant elliptical galaxy near the core of the Virgo cluster has been
one of the most carefully studied systems and has led the way in the study
of globular clusters in external galaxies. Racine (1968a, 1968b) first investigated
the magnitude and color distribution of the clusters using photographic plates
and reported that they are on average bluer than the galaxy
background. Strom {\it et al.} (1981) corroborated this result and also suggested
that the mean color of the clusters shows a radial trend, with the more distant clusters being bluer than ones that are closer to the nucleus.
More recent studies by Lee \& Geisler (1993), Cohen {\it et al.} (1998) and Neilsen {\it et al.} (1998) have confirmed the radial variation in color.
The effective radius of the GCS was reported to be much larger than that of the underlying galaxy by Harris \&
Racine (1979), and has since been verified by other groups
(Harris 1986; Lauer \& Kormendy 1986; Grillmair, Pritchet \& van den Bergh 1986; McLaughlin 1995). More recently the GCS of M87
has been used to measure the Hubble constant by
exploiting the apparent constancy of the globular cluster luminosity function (GCLF).
Whitmore {\it et al.} (1995) studied the
globular clusters in the core region of the galaxy using the WFPC2 camera onboard the HST and were able to detect clusters that were two magnitudes
fainter than the turnover luminosity of the GCLF for the first time, hence providing a more secure measurement of the turnover. The superior angular resolution of the
HST enabled them to distinguish globular clusters from the galactic background
to a limiting magnitude of V=26 mag. This dataset provides the
best determined GCLF of {\it any} galaxy currently available, better even than
for the Milky Way or M31 due to the much larger number of clusters.
In this paper, we follow up on the work of Whitmore {\it et al.} (1995), hereafter referred to as Paper 1, and study the properties of the
globular clusters in
the inner region of M87 in the context of the formation and evolution of
the GCS.
In Paper 1 we found that the color distribution of the clusters is bimodal,
which was confirmed by the observations of Elson \&
Santiago (1996a, 1996b). This is consistent with the merger scenario of
globular cluster formation proposed by Ashman \&
Zepf (1992), who suggested that globular clusters can be formed by the
interaction or merger of galaxies. These clusters would be more metal
rich, and hence redder, than the older population associated with the
interacting galaxies (see Ashman \& Zepf 1997 for a detailed discussion). Building
upon the merger scenario, Whitmore {\it et al.} (1997) have modelled the evolution of the color distribution of globular clusters with time, and they suggest an age of 9$\pm$5 Gyr for the population of red clusters in M87.
In this paper we study the two populations
and attempt to resolve questions about their formation and evolutionary histories.
It has been suggested that the universality of the turnover luminosity of the
GCLF is caused by destruction mechanisms that conspire to
preferentially destroy clusters outside a narrow range of luminosities,
and hence masses, near the turnover (Fall \& Rees 1977).
The major players are dynamical friction that preferentially destroys
the high mass clusters, and tidal shocking and evaporation which act more efficiently on low density,
low mass clusters (Murali \& Weinberg 1997a, 1997b; Ostriker \& Gnedin 1997;
Gnedin \& Ostriker 1997 and references therein).
Theory predicts that their efficiency is greatest within 2-8 kpc of the
galactic center, depending on the model.
If this is true then the luminosity distribution should vary most strongly
in this region, and it raises concerns about the
suitability of using a universal luminosity function to describe the distribution
of cluster luminosities. However, previous studies (Grillmair {\it et al.}
1986; Lauer \& Kormendy 1986; McLaughlin, Harris \& Hanes 1994) have found no evidence for variation of
the GCLF with radial distance from the center of the galaxy. The only
claim to the contrary is by Gnedin
(1997) who reports that the clusters in the inner region are brighter
than those in the outer region by 0.26 mag. In order to
examine the importance of destruction
mechanisms in modifying the globular cluster system we study the variation of
the GCLF and cluster density with galactocentric distance. We also measure the sizes of the clusters to determine if there is a radial variation in the size distributions
due to dynamical evolution of the clusters.
\section{Observations and Reductions}
We observed the inner region of M87 with the Wide Field and Planetary
Camera 2 (henceforth WF=Wide Field Camera and PC=Planetary Camera) onboard
the HST on February 4, 1995. Four 600 sec exposures were taken with
the broad band filters F555W and F814W. These four separate exposures provide
excellent cosmic ray removal,
which was performed using the STSDAS task GCOMBINE. This was followed by the IRAF task
COSMICRAYS to remove hot pixels. Fig 1 shows the mosaiced V (F555W) image
from the four WFPC2 chips. Numerous cluster candidates can easily be
identified. Fig 2 is the residual V image of the region near the center of the
galaxy after a median filtered image has been subtracted. It
demonstrates the superior
ability of the HST to resolve point sources in the presence of a strong
background. The optical jet and several cluster candidates can easily be seen
right into the core.
\subsection{Object Identification}
The initial detection of candidate clusters was carried out using the
DAOFIND routine in DAOPHOT (Stetson 1987) using the V + I image. This list was
then refined by manual inspection. The criteria that we used to manually
identify clusters
were: 1) the objects must be point-like, 2) they must
be present in the V, I and V+I image, 3) at least 3 adjacent pixels must
be above the background level, 4) the objects must have a reasonable shape
(e.g. all the bright pixels should not be along a column or a row). At
this stage, only the sources
that obviously appeared to be extended and likely to be background galaxies
were eliminated. During this process $\approx$10\% of the objects detected by
DAOFIND were rejected and another $\approx$15\% added to the list.
Another object list was produced using a completely objective method of
identification similar to the one used by Whitmore {\it et al.} (1997). One of the shortcomings of DAOFIND is that it uses
a constant value of $\sigma$ for
the background noise, which leads to spurious detection of objects in
regions of high noise (corresponding to regions of high surface brightness). To
overcome this problem, we developed a method of identifying sources
using the local S/N ratio. Initially, potential objects in the V+I image
were identified by using a very low cutoff (2$\sigma$) in DAOFIND.
This step typically returned a few thousand sources in each of the chips.
Aperture photometry was then performed on each of these sources with radii of
0.5 pixels and 3 pixels and a sky annulus between 5 and 8 pixels.
The standard deviation from the median of the sky value for
each source was taken to be a measure of the background
noise at that point. At this stage, any object where the flux within 0.5 pixels was
at least 3 times greater than the standard deviation of the sky pixels, or was at
least 20 counts above the
background, was identified as a candidate cluster. Aperture photometry was then
performed on this list of objects in the F555W
and F814W images separately. Only objects that had a S/N $>$ 1.5 in both the
images were considered bona fide detections.
From our aperture correction tests (see $\S$ 2.2) we found that for a typical
cluster in the PC, the flux within 3
pixels is $\approx$7.5 times the flux within 0.5 pixels and the ratio of fluxes is
$\approx$5.5 for the WF. Allowing for variation in the sizes of the clusters we concluded that $\frac{Counts_{3pix}}{Counts_{0.5pix}}$ $<$ 12 for the PC and
$\frac{Counts_{3pix}}{Counts_{0.5pix}}$ $<$ 8 for the WF is a good discriminant to weed
out background galaxies (This is similar to the concentration criteria used in
Paper 1). Additionally, a lower limit cutoff of $\frac{Counts_{3pix}}{Counts_{0.5pix}}$ $>$ 1.5
is useful in rejecting
chip defects or hot pixels that might be present after the
initial reduction. Both our final objective list and the DAOFIND/manual object
lists were passed through these filters.
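A minimal sketch of this two-aperture selection (illustrative only; the array names counts05, counts3 and sky_sigma are hypothetical stand-ins for the 0.5-pixel and 3-pixel aperture sums and the local sky scatter, with the thresholds quoted above):
\begin{verbatim}
import numpy as np

def select_candidates(counts05, counts3, sky_sigma, chip="PC"):
    # local-S/N detection: flux within 0.5 pix exceeds 3x the sky noise,
    # or is at least 20 counts above background (see text)
    detected = (counts05 > 3.0 * sky_sigma) | (counts05 > 20.0)
    # concentration filter: reject extended background galaxies (ratio
    # too large) and hot pixels/defects (ratio too small)
    ratio = counts3 / counts05
    upper = 12.0 if chip == "PC" else 8.0
    return detected & (ratio > 1.5) & (ratio < upper)
\end{verbatim}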
There was excellent agreement between the sources found by the two methods. The
objective method identified 1057 sources in all 4 chips, while 1012 sources were
found by the manual method. Of these, 989 sources were identical in the two lists.
The cluster candidates that did not match were either near the detection limit
and/or had large photometric errors. A comparison of the photometry based on
the manual and objective lists reveal no significant effect of different
selection criteria, hence we use the objective list for further analysis.
The V and I images were also cross correlated to search for spatial shifts
between the two sets. The PC images were found to have offsets less than 0.07
pixels while the WF chips had offsets within 0.035 pixel. Due
to the excellent spatial registration of both sets of images, identical pixel
positions, determined from the V image, are used for the rest of the analysis.
\subsection{Aperture Photometry}
Close inspection of the radial profiles of the cluster candidates shows
that they are statistically broader than true point sources. We
use this to study the sizes of the clusters in $\S$ 3.7.
However, this also means that the aperture
corrections derived for point sources would be slightly too small if applied to
the clusters in M87. In order to quantify this difference we computed the flux within
apertures of various sizes on a sample of bright clusters in all the chips and compared
these with profiles for stars in the calibration field NGC 5139
($\omega$ Cen) and the standard star GRW+70D5824. The average cluster
is found to be 0.09 mag brighter in both V and I when compared to numbers
reported in Paper 1 using point source corrections (Holtzman {\it et al.} 1995a).
{\it This 0.09 mag difference, and the more objectively determined candidate list,
are the primary changes between this paper and Paper 1.}
The photometric zeropoints are adopted from Whitmore {\it et al.} (1997)
who derived the values by comparing HST and ground based photometry for 6 galaxies. The average of
the 6 galaxies yields a zeropoint of 22.573 mag to convert from F555W to
Johnson V and 21.709 mag to convert
from F814W to Cousins I. These numbers agree closely with the Holtzman {\it et al.}
(1995b) zeropoints of 22.56 mag in V and 21.69 mag in I.
In Paper 1 we used Burstein and Heiles's (1984) value
of A$_B$=0.09$\pm$0.06 mag and reddening curves from Mathis (1990) to derive an
extinction of A$_V$=0.067$\pm$0.04 mag and A$_I$=0.032$\pm$0.02 mag. Sandage \&
Tammann (1996), commenting on Paper 1, argue that the extinction in the
direction of M87 should be 0.00 mag. We continue to use the above values of the
extinction to maintain continuity with Paper 1 since there is no compelling
reason to assume that there is no reddening in the direction of M87.
The zeropoints and aperture corrections used to perform aperture photometry
using the PHOT task in the DAOPHOTX package within IRAF are listed
in Table 1. A correction of 0.1 magnitude has been added to normalize
the measurements made within a radius of 0.5$''$ to infinite radius
following Holtzman {\it et al.} (1995b). Since the difference between the aperture corrections in the three WF chips is smaller than the rms variation within a
chip, we have used the
same correction for all the WF chips to determine the brightness of the
candidate globular clusters. An aperture radius of 2 pixels for the WFs, and 3 pixels for the PC is chosen so that the percentage of encircled energy is approximately the same on both chips (Note: an aperture radius of 3 pixels was used for both
the PC and the WF in Paper 1). The sky annulus is defined to be between 5-8
pixels, fairly close to the
source, because the large galaxy background gradient introduces errors in
the determination of the background when an aperture with a larger radius is
used. Table 2 lists the magnitudes and positions of the 100 clusters closest
to the nucleus.
\subsection{Completeness Corrections}
For statistical studies of globular cluster systems it is
necessary to know the detection limit of clusters in the field. To quantify
this effect completeness tests were performed on the images.
First, clusters with high S/N were extracted from regions
of low background noise to be used as model objects for the tests. A random number
routine was used
to add objects of known magnitude and V-I colors between 0.9 mag and 1.3 mag. In each pass one hundred
objects were added per chip and the objective algorithm
described above was then used to detect candidate
clusters in the chips. In all, approximately 10000 simulated objects
were added to the two sets of images in V and I. The completeness limits
determined from these tests are plotted in Fig 3. It can
be seen that the detection limit migrates
toward progressively fainter magnitudes with increasing distance from the
center of the galaxy, as is to be expected from the background light
distribution. In the innermost region of the PC (0-10$''$) the 50\% completeness limit is at V=24.3 mag while in the region between 20-30$''$, it is at V=25.3
mag. The detection limit in the WF chips varies from V=24.1 mag to V=25.7 mag
while the average 50\% threshold for the entire field (WF+PC) is at V=25.5 mag.
At first glance it might seem surprising that the detection threshold in
the innermost bin of the WF (6-30$''$) is brighter than the threshold in the PC.
However, the galaxy background at comparable radii is larger for the WF than the
PC due to the larger linear size of the WF pixels. Since the nucleus of the galaxy
is near the apex of the WFPC2 the background counts per pixel due to the
galaxy is larger in the inner regions of the WF than everywhere but very
near the center of the PC. The Poisson noise in the background is the limiting factor in
object identification,
hence the lower threshold.
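The artificial-object test described above can be sketched schematically as follows (illustrative only; add_model, detect and match are hypothetical stand-ins for the scaled-cluster insertion, the objective detection routine of $\S$ 2.1, and the positional cross-match):
\begin{verbatim}
import numpy as np

def completeness_curve(image, model, mags, add_model, detect, match,
                       n_per_pass=100):
    # fraction of artificial objects recovered at each trial magnitude
    frac = []
    ny, nx = image.shape
    for m in mags:
        xs = np.random.uniform(0.0, nx, n_per_pass)
        ys = np.random.uniform(0.0, ny, n_per_pass)
        test = add_model(image.copy(), model, m, xs, ys)
        frac.append(match(detect(test), xs, ys) / n_per_pass)
    return np.array(frac)  # the 50% limit is where frac drops below 0.5
\end{verbatim}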
\subsection{Contamination Corrections}
One of the biggest advantages of the high resolving power of the HST is that
background galaxies can be identified and rejected fairly easily. However, the
background galaxy count increases sharply near the limit of our completeness
threshold and a few are probably still lurking in our object lists
masquerading as clusters. Furthermore, the lists may be contaminated by 2-4
stars in each of the chips. In order to quantify this correction, we used the automated routine to identify sources in two images,
one each in F606W and F814W, from a Medium Deep Survey field (Griffiths {\it et al.} 1994) located about
9.3 degrees from the
program galaxy. We then performed aperture photometry on the objects that
passed our normal filters for cluster candidates, taking into
account an offset of 0.29 magnitudes for the photometric transformation
from F606W to F555W (derived using the SYNPHOT package). Fig 4 shows the
luminosity function for these objects. There are only
about 20 contaminating objects with V$<$25 mag within the WFPC2 field of view
but the number increases sharply near V=25.5 mag.
\subsection{Surface Photometry}
One of the main aims of this paper is to study the spatial variations
of various properties of the globular clusters in M87 and to compare them to the underlying
galaxy. A previous study by Boroson, Thompson \& Shectman (1983) found that V-I$\approx$1.6 mag
in the inner regions of the galaxy and decreases with radius. On the
other hand, Zeilinger, Moller \& Stiavelli
(1993), employing more modern CCD techniques, reported that the V-I color
distribution of the galaxy is nearly constant at 1.3 mag in the inner region of
the galaxy. To resolve this issue we derived the surface
profiles of the galaxy from our image. First, the images from the 4 CCDs
were mosaiced using the STSDAS task WMOSAIC. Then, after visual inspection,
a mask was used to blot out the optical jet and the blank region outside the PC. The STSDAS
tasks ELLIPSE and BMODEL were used to model the smooth galaxy background and
fit circular profiles with a fixed center. Sparks, Laing \& Jenkins (1988) report 0.08$<$10(1-$\frac{a}{b}$)$<$0.79 in the inner 1 arcmin of
the galaxy. Thus, the ratio of the axes $\frac{a}{b}$
is between 0.92 and 0.99 and the assumption of circular isophotes does not
introduce a significant source of error.
The small field of view of the WFPC2 presents some problems because the nuclear
region of M87 completely fills
the aperture, making accurate sky background measurements difficult.
The background counts were calculated from the average
brightness of the MDS field used for background object detection. After applying a
photometric correction to the sky intensity found in the F606W image in order to convert to F555W, the derived background level in the V image is
9.2 counts per pixel. The
corresponding correction for the F814W image is 6.0 counts per pixel. While this method of calculating
the background is not very secure, we found that subtracting background counts of this order
affected the color profile by less than 0.05 mag (vis-a-vis no background correction) in the inner 15$''$
of the galaxy where it is brightest. On comparing our V and I band profiles with that of
Zeilinger {\it et al.} (1993) we find that they match up very well in both the
bandpasses. The V-I color of the galaxy
in this region is constant at 1.3 mag and is consistent with the photometry of
Zeilinger {\it et al.} (1993). On the other hand our V-band profile matches the Boroson {\it et al.} (1983) data quite well, but our I band profile is $\approx$0.2 mag dimmer than
theirs. This difference may be because Boroson {\it et al.} use a Gunn i filter (Wade {\it et al.} 1979) which has an effective wavelength
$\lambda_{i}=8200$\AA, while our I magnitudes are in the Cousins system
($\lambda_{I}=8000$\AA). Therefore, we use the Zeilinger {\it et al.} (1993) photometry for further analysis and interpretation.
\section{Results and Discussion }
\subsection{ The Luminosity Function}
The most significant result reported in Paper 1 was that the GCLF can
be measured roughly 2 magnitudes deeper than the turnover luminosity. We have re-evaluated
these calculations to see how the numbers change due to the new detection
routines and aperture corrections.
In Fig 5 we plot the luminosity function for the object list used
in Paper 1 and the objectively identified candidate
list from the current paper. The new aperture, completeness, and background
corrections from Table 1, Fig 3, and Fig 4 respectively, have been applied to the
GCLF in the top panel, the objectively identified clusters, while the bottom
panel is a plot of the luminosity function calculated in Paper 1.
The turnover magnitude in the V band
for the objective set is m${_V}{^0}$=23.67$\pm$0.06 mag for the best fitting Gaussian compared to
m${_V}{^0}$=23.74$\pm$0.06 mag in Paper 1. We use the objectively
identified list as our best estimate, hence the turnover is 0.07 magnitudes
brighter than that reported in the earlier analysis, primarily due to the new aperture corrections. A similar analysis of the luminosity function in the I
band gives m${_I}{^0}$=22.55$\pm$0.06 mag, as compared to the Paper 1 value of
m${_I}{^0}$=22.67 mag.
\subsection{Spatial Variation of the Luminosity Distribution }
The fact that the GCLF of luminous elliptical
galaxies appears to be nearly universal (Harris 1991; Jacoby {\it et al.} 1992;
Whitmore 1996; Ashman \& Zepf 1997) is a rather remarkable result which
implies that, for any reasonable range of M/L
ratio, the underlying mass distribution of globular clusters is similar.
It is rather amazing that a large number of galaxies
of various shapes and sizes have similar GCLFs in spite of the destruction
mechanisms that may be constantly acting upon the clusters with a relative efficiency
that depends strongly on the shape and/or size of the galaxy. In particular,
theoretical models predict that these processes should have a
measurable effect in the inner regions of a galaxy, with
dynamical friction and tidal shocking being suggested as the most
important mechanisms (Aguilar, Hut \& Ostriker 1988; Murray \& Lin 1992). While dynamical friction operates more efficiently on high mass
clusters and causes them to spiral towards the nucleus of the galaxy, tidal
shocking selectively destroys the low density clusters. There is
the added possibility that the clusters are destroyed or captured by a massive central
object, e.g., a black hole (McLaughlin 1995). Even though all of
these mechanisms are predicted to be most efficient from anywhere within the
inner 2 kpc to the inner 8 kpc depending on the model, most observations of the
core region of M87 (Grillmair {\it et al.} 1986; Lauer \& Kormendy 1986; McLaughlin,
Harris \& Hanes 1994; Harris, Harris \& McLaughlin 1998) have
revealed no evidence of a radial variation of the GCLF. The only claim to the
contrary has been made by Gnedin (1997), who suggests that the clusters in the
inner regions of M87 are systematically brighter than those in the outer
regions by 0.26 magnitudes, which he attributes to the preferred destruction
of low mass clusters in the stronger tidal field of the inner regions of the galaxy. However, ground-based data may be strongly affected by selection effects
which can lead to spurious results: fainter (low mass) clusters are harder
to detect near the center of the galaxy, where the background light is
strongest, and this can mimic an apparent radial variation in brightness if
not accurately compensated for. Our data go much deeper than previous observations, with higher completeness fractions near the center of
M87, and hence give
us a better chance of identifying any radial variations in the LF
and addressing the question of the universality of the GCLF.
Fig 6 is a plot of the radial distribution of cluster
magnitudes for all objects with photometric errors less than 0.3 magnitudes. A
least squares fit straight line shows a weak correlation between the brightness
of the clusters and the galactocentric radius, but this is almost
certainly an artifact of the fainter completeness limits at larger radii.
In order to make a more meaningful test of the variation in the LF with
distance, we divided the sample into 5 roughly equal radial bins and corrected each of the distributions for completeness and contamination. Both the raw and
the corrected luminosity distribution for each of the bins, in both the V and
the I bands, is plotted in Fig 7. Close inspection of the corrected profiles indicates that the GCLF is remarkably similar at all radial distances. In order
to quantify this observation we fit Gaussian profiles to the corrected
luminosity distributions. The peak
magnitude and $\sigma$ (the dispersion) for the best fitting Gaussian are plotted in Fig 8. Although the four inner bins of the V band
data do seem to show that the turnover magnitude is brighter in the
inner bins, the outermost bin, for which the turnover luminosity is most securely determined, does not conform to this trend. The turnover luminosity
of the innermost bin is less than 1$\sigma$ brighter than the mean, while the
peak of the bins at 53.5$''$ and 66.6$''$ are $\sim$1$\sigma$ dimmer than the mean. The I band luminosities show no significant radial trends whatsoever.
Therefore it appears that both the peak
of the distribution and the half-width are constant to within the
uncertainties, with no obvious radial dependence that can be attributed to a
particular destruction mechanism. At this point one could argue that the ad hoc
choice of a Gaussian has no physical basis, and that a variation
in the shape of the luminosity function may be hidden under the parameters
of the Gaussian fit. In order to address this issue
we plot the cumulative profile of the corrected distributions in 5 radial
bins in Fig 9. We have considered only clusters brighter than 24 mag in V, and
23 mag in I, where the completeness and background correction are minimal.
Since the normalized profiles in neither the V, nor the I band show a
consistent, distance-dependent variation, we are led to conclude that we see no
believable evidence of the effect of destruction mechanisms in the luminosity
function of M87. K-S tests, performed on the unbinned data in the bright portion of the distributions which are unaffected by varying completeness limits, confirm that
all the distributions are statistically identical (with typical confidence limits of 70\% that the distributions are related). Our conclusion does not agree with Gnedin (1997), who claims to see evidence of cluster destruction based on his interpretation of the
McLaughlin, Harris \& Hanes (1994) data. Gnedin finds that the turnover magnitude of the GCLF in his inner region (1.21$'$$<$R$<$3.64$'$) at
V=23.13 mag, is 0.24 mag brighter than the turnover magnitude of his outer
region (3.64$'$$<$R$<$6.82$'$) at V=23.37 mag, which he interprets as
evidence of tidal shocks in the inner region. If this is a real physical effect
then we expect our sample of clusters, which have an
average radial distance R$\approx$0.83$'$, to have a turnover magnitude brighter
than Gnedin's inner clusters. However, the overall turnover luminosity
of our sample at V=23.67 mag as well as the turnover luminosities of each of
the radial bins (Fig 8) are dimmer than even Gnedin's outer sample and are clearly
at odds with his data. Though unlikely, this discrepancy could possibly be
due to a zeropoint error in either our photometry or that of the McLaughlin,
Harris \& Hanes (1994) data on which Gnedin bases his analysis. However, as Harris
{\it et al.} (1998) find no zeropoint offset between their observations and the
McLaughlin {\it et al.} (1994) data, and the turnover luminosity of the Harris
{\it et al.} (1998) observations matches ours (they estimate m${_V}{^0}$=23.7 mag), we conclude that
the offset is unlikely to be due to zeropoint errors. We believe that the apparent brightening observed by
Gnedin is probably due to undercompensation of completeness corrections in the inner
regions where the dimmer clusters are harder to detect against the strong
galaxy background.
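The two-sample K-S comparison quoted earlier in this subsection can be sketched in a couple of lines (illustrative only; v_inner and v_outer are hypothetical arrays of bright-end, V$<$24 mag, cluster magnitudes for two radial bins):
\begin{verbatim}
from scipy.stats import ks_2samp

# bright end only, so the varying completeness limits do not bias
# the comparison
stat, p_value = ks_2samp(v_inner, v_outer)
print(f"K-S D = {stat:.3f}, P(same parent population) = {p_value:.2f}")
\end{verbatim}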
Another interesting result that we see in Fig 8 is that the scatter
in the turnover magnitude and $\sigma$ is marginally smaller in the I band than
in the V band ($m{_V}{^0}$~=~23.71$\pm$0.11 mag; $m{_I}{^0}$~=~23.67$\pm$0.06 mag: $\sigma{_V}$~=~1.37$\pm$0.16 mag; $\sigma{_I}$~=~1.41$\pm$0.11 mag). This agrees with the suggestion of Ashman, Conti \&
Zepf (1995) that the luminosity function in the I band is less affected by
variations in the metallicities of the clusters and may be a better choice
for distance measurements.
\subsection{ The Color Distribution }
As discussed in Paper 1, the color distribution of the globular clusters
in the inner regions of M87 is bimodal. The mean color of all the clusters was estimated
to be V-I~=~1.10$\pm$0.01~mag, with the blue peak at V-I~=~0.95~mag and the red
peak at V-I~=~1.20~mag. The color distribution of
clusters having
photometric error less than 0.2 magnitudes in V-I, as derived in Paper 1, is
compared with the current list in Fig 10.
The mean color of the clusters in the objectively identified list is
V-I~=~1.09$\pm$0.01 ~mag. A simultaneous fit of two Gaussians to
the color distribution of the data shows that the blue and
the red peaks are at V-I~=~0.95$\pm$0.02 mag and V-I~=~1.20$\pm$0.02 ~mag
respectively, which, within the uncertainties, is identical to the peak values of
0.97$\pm$0.02 ~mag and 1.21$\pm$0.02 ~mag found
by fitting Gaussians to the Paper 1 list. So even though the luminosity function of the Paper 1 list and the objectively identified list are slightly different due to the new aperture corrections, the
color distributions are nearly identical.
We also use the KMM mixture modelling algorithm, described in Ashman, Bird
\& Zepf (1994), to independently test for bimodality, and to partition the objectively identified candidate list into
sub-populations. Since KMM is sensitive to outliers in the dataset, only
the candidate objects that are within the range 0.6$<$V-I$<$1.6 are
considered. This reduces the number of objects to 997. We find that the
distribution can be divided into 2 sub-populations. Though Lee \& Geisler (1993)
suggest that the distribution may be trimodal, we find no reasonable
partition with 3 groups that supports this claim. In the case of a homoscedastic
partition (2 populations forced to have equal variances) there are 428 blue
objects with mean V-I=0.963 ~mag and 569 red objects with V-I=1.201 ~mag
and a common dispersion of 0.134 mag. The threshold color dividing the two
populations is V-I=1.064 mag, which is slightly smaller than the value derived from
fitting Gaussians. For a heteroscedastic partition (red and blue populations
allowed different variances) we obtain 336 blue candidates with a mean
V-I=0.935 ~mag and 0.123 mag dispersion, and 661 red candidates with V-I=1.18
mag and 0.141 mag dispersion. For the rest of the paper we shall adopt V-I=0.95 mag as the peak of the blue clusters, V-I=1.20 mag as the peak of the red clusters, and V-I=1.09 mag as the mean of the entire distribution. From here on
we define clusters that have V-I$<$1.09 mag as blue clusters and those with
V-I$>$1.09 mag as red clusters.
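KMM itself is the mixture-modelling code of Ashman, Bird \& Zepf (1994); an analogous two-component partition can be sketched with scikit-learn (illustrative only; vi is a hypothetical array of cluster V-I colors restricted to 0.6$<$V-I$<$1.6):
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

X = vi.reshape(-1, 1)
# covariance_type="full" allows different variances (heteroscedastic);
# "tied" would force a common variance (homoscedastic)
gmm = GaussianMixture(n_components=2, covariance_type="full").fit(X)
for mu, var, w in zip(gmm.means_.ravel(),
                      gmm.covariances_.ravel(), gmm.weights_):
    print(f"peak V-I = {mu:.3f} mag, sigma = {np.sqrt(var):.3f} mag, "
          f"fraction = {w:.2f}")
\end{verbatim}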
The relationship between the broad band colors and metallicities of globular clusters is known to be roughly linear, and the most commonly used
expression, derived by Couture, Harris \& Allwright (1990), using the Milky
Way globular clusters, is V-I = 0.198[Fe/H] + 1.207. However, we found in Kundu
\& Whitmore (1998) that the slope of the equation changes significantly due
to the choice of the independent variable used to derive this relationship and
that the above equation probably overestimates the metallicity of metal rich
clusters. Therefore we used the following relationship derived (using Milky Way clusters)
in Kundu \& Whitmore (1998) to convert broad band colors to metallicities:
\begin{equation}
[Fe/H] = -5.89 + 4.72 (V-I)
\end{equation}
Using equation 1, the average metallicity of the clusters in our field is [Fe/H]=-0.74 dex, which is in close agreement with the value of [Fe/H]=-0.86$\pm$0.20
derived by Lee \& Geisler (1993) and close to the value of [Fe/H]=-0.95 reported by Cohen, Blakeslee \&
Ryzhov (1998). The slightly higher metallicity of our sample is probably a
sign of the color/metallicity gradient. The blue
clusters have a mean metallicity of [Fe/H]=-1.41 dex (using V-I=0.95 mag) while the
mean metallicity of the red clusters is [Fe/H]=-0.23 dex.
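A trivial numerical check of these conversions (illustrative only):
\begin{verbatim}
def feh(vi):
    # Eq. (1): Kundu & Whitmore (1998) color-metallicity calibration
    return -5.89 + 4.72 * vi

print(feh(1.09))  # -0.74 dex: mean of the full sample
print(feh(0.95))  # -1.41 dex: blue peak
print(feh(1.20))  # -0.23 dex: red peak
\end{verbatim}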
\subsection{The Radial Color Distribution And Its Implications On Formation Scenarios}
The reason for the bimodality in the color distributions of the GCS of many elliptical galaxies is open to interpretation. A possible
scenario, proposed by Ashman \& Zepf (1992) and Zepf \& Ashman (1993), is that a merger produces
a metal rich population of globular clusters that is redder than the original
metal poor population, thereby leading to a bimodality in the color
distribution. If the red clusters are created during a gas rich merger we might
expect them to have formed closer to the center of the galaxy due to the higher degree of
gaseous dissipation than suffered by the older blue population. This would
manifest itself as a negative color gradient with respect to galactic
radius.
A recent paper by Forbes, Brodie \& Grillmair (1997) points out that the correlation between increasing Sn and the fraction of red-to-blue
clusters predicted in this scenario (Zepf \& Ashman 1993) does not hold for cD
galaxies such as M87. With a Sn of roughly 16 we would expect 4 times more red
clusters than blue ones, while we find roughly equal numbers of red
and blue clusters in the inner region (The red clusters slightly outnumber the blue ones). This is an important result and probably indicates
that the simple equation spiral + spiral = elliptical is too
simplistic, especially for cD galaxies like M87 which have probably had
a more complicated formation history. Another possibility is that cD galaxies accrete dwarf galaxies
and their cluster population (Zepf, Ashman \& Geisler 1995) or acquire
some of their globular clusters by tidal stripping (Forbes, Brodie \&
Grillmair 1997). This may partly explain the abundance of blue clusters
since most of the accreted globular clusters will be metal poor and bluish in color. C\^{o}t\'{e}, Marzke, \& West (1998) on the other hand assume that the
red clusters represent the galaxy's intrinsic population while the entire blue
population is acquired through mergers of smaller galaxies or tidal stripping.
We discuss the relative merits of the various models in the following sections.
The issue of radial gradients in the color distribution of globular
clusters in M87 has been rather controversial. Though Strom {\it et al.} (1981)
reported that the average color of the clusters tends to be bluer at larger
galactocentric radii, later observations by Cohen (1988) and Couture {\it et al.} (1990)
contested this claim. However, more recent observations of the region between
50$''$$<$R$<$500$''$ by Lee \& Geisler
(1993) and Cohen, Blakeslee \& Ryzhov (1998) have confirmed the radial trend
in metallicity (color) observed by Strom {\it et al.} (1981).
Interestingly, Harris {\it et al.} (1998) have studied the color distribution
of clusters in the inner
140$''$ of M87, a field which is nearly identical to ours, and conclude that the color distribution is essentially flat within the core radius of $\sim$1$'$
and then becomes bluer with radius in a manner that is consistent
with the gradients seen at larger radii.
In the top panel of Fig 11 we plot the distribution of the
globular cluster colors vs galactocentric radius for our data. The surface color
distribution of the galaxy within the inner 15$''$ as measured by us and the profiles from Zeilinger {\it et al.} (1993) are overlaid on the plot. The globular cluster color
distribution shows only a weak trend with galactocentric distance. A
linear fit to the GC color vs distance for clusters with uncertainty of less
than 0.2 mag in V-I gives :
\begin{equation}
V-I = 1.11(\pm0.01) - 3.5(\pm2.5)\times10^{-4}\,R
\end{equation}
where R is the
distance in arcsecs from the galactic center. The small negative
gradient in the color distribution suggests that the mean metallicity (color)
of the clusters decreases with galactocentric radius. Even though the slope
derived in equation 2 is just a 1.4$\sigma$ result, the derived metallicity
gradient is consistent with Lee \& Geisler's (1993) data. In order to
compare our observations with earlier results we calculated the mean color of the globular clusters in ten radial bins and the corresponding
metallicity using equation 1. The metallicity of the clusters derived from this
dataset, Lee \& Geisler (1993) and Harris {\it et al.} (1998) are plotted as a function of logarithmic galactocentric radius in the bottom panel of Fig 11 (Note that we
used equation 1 to convert the Harris {\it et al.} colors to [Fe/H] and not the
expression quoted by them, in order to be self-consistent). Even though the
derived metallicity of the clusters is highly sensitive to the photometric zeropoint and the coefficients of the conversion relation (equation 1), our
metallicity estimates appear to agree well with previous observations. Our
data is also consistent with the trend of decreasing metallicity of globular
clusters with distance observed by Lee \& Geisler (1993). Even though Harris
{\it et al.} (1998) argue that the mean
metallicity within the core is constant, our observations (Fig 11) do
not provide any
compelling supporting evidence for this claim. Given the
uncertainties in the calibration of the metallicity scale, we opt to fit
a straight line between log(R) and [Fe/H] in order to describe the metallicity gradient analytically. The
expression for the best fitting line is:
\begin{equation}
[Fe/H] = 0.06(\pm0.14) - 0.44(\pm0.07)\,\log(R)
\end{equation}
where R is in arcsecs.
It is apparent that the mean metallicity of the clusters
decreases with galactocentric distance, but what is the underlying reason for this trend? One possibility is that the distribution of metallicities varies smoothly with galactocentric radius and the gradient simply reflects the metal enrichment of the infalling
gas associated with the collapse phase of the galaxy. The other explanation
is that the bimodal metallicity distribution represents two distinct
cluster systems with different spatial distributions and that the difference
in the relative number of the two sets of clusters causes the metallicity
gradient. In order to search for evidence to corroborate either of these
hypotheses we have divided the cluster population into 5 radial bins, each
having approximately one-fifth of the total clusters, and plotted the color distributions in Fig 12. We see clear evidence of bimodality in each of the
bins, with the blue and the red peaks located in the same place in each of the
figures. A close look at the figures
suggests that in the inner region there are more red clusters (V-I$>$1.09) than blue
ones (V-I$<$1.09), while in the outer region blue clusters outnumber red ones. This trend is relatively weak: Kolmogorov-Smirnov tests cannot distinguish most of the binned distributions from one another, the one exception being the innermost sample at 20.9$''$ compared to the outermost sample at 82.6$''$, for which there is only a 0.33\% chance that the two distributions are identical.
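A comparison of this type is easy to reproduce; in the minimal sketch below, the two color samples are synthetic stand-ins for the measured V-I distributions of the innermost and outermost bins.
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Synthetic V-I colors standing in for the innermost (redder) and
# outermost (bluer) radial bins of cluster candidates.
vi_inner = rng.normal(1.12, 0.10, 200)
vi_outer = rng.normal(1.05, 0.10, 200)

# Two-sample KS test: a small p-value means the two color
# distributions are unlikely to share the same parent population.
stat, p = ks_2samp(vi_inner, vi_outer)
print("KS statistic = %.3f, p = %.4f" % (stat, p))
\end{verbatim}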
More convincing evidence of this trend comes from WFPC2 observations of clusters in other fields around M87 (Elson \& Santiago 1996b, Neilsen {\it et al.} 1997, Neilsen {\it et al.} 1998). Neilsen {\it et al.} (1998)
have studied 4 fields in and around M87 including this dataset and the Elson
and Santiago (1996b) field, and find that in each case there is a blue peak
near V-I=0.95 mag, while the red peak at V-I=1.2 mag becomes progressively
weaker with distance. For comparison, we find approximately equal numbers of
red and blue clusters in our field, while Elson \& Santiago (1996b) found twice as many blue globular clusters as red ones in a field 2.5$'$ from the center of the galaxy. Based on the evidence at hand we conclude that there are two
distinct populations of clusters in M87, with the red clusters being
more centrally concentrated than the blue ones. Moreover, the radial trend in
color is a natural consequence of the increasing ratio of blue to red clusters
with distance. Recently Geisler,
Lee \& Kim (1996) have found that the red cluster population in NGC4472 is
similarly more centrally concentrated than its blue counterpart, suggesting
that this phenomenon is fairly typical in giant elliptical galaxies.
The Zepf \&
Ashman (1993) merger model successfully explains most of the features of the color distribution described above. However, we note that though the red
clusters slightly outnumber the blue ones in our field, the blue population
has a larger spatial extent, so that the galaxy as a whole contains a significant number of both. Hence the overabundance of blue clusters also contributes significantly
to the high S$_N$ of M87.
The
excess blue clusters may have been acquired through cannibalization of metal poor satellite galaxies or by tidally stripping them of their clusters; possibly the entire
blue cluster distribution is a cannibalized population as suggested by C\^{o}t\'{e} {\it et al.} (1998). Another possibility is the Harris {\it et al.} (1998)
scenario according to which the
blue clusters were formed during the collapse phase of a massive proto-galaxy
and supernova powered galactic winds drove out a large portion of the
gas, leaving behind a blue cluster rich galaxy. The extended spatial distribution and overabundance of
blue clusters predicted by this theory would also agree with our observations.
We also observe that the mean cluster color is 0.2 mag bluer than the stellar
background. This is much smaller than the difference of 0.5 mag between the
mean cluster color and the galaxy background reported by Couture et
al. (1990) using the Boroson {\it et al.} (1983) value of V-I = 1.6 mag for the stellar background. We believe that this large difference may partly be due
to the fact that the Couture {\it et al.} (1990) I band photometry is based on the Kron-Cousins
system while Boroson {\it et al.} (1983) reported their I magnitudes in the Gunn system (see $\S$2.5).
The observed difference of only 0.1 mag between the red clusters and the galaxy background weakens the argument of Couture {\it et al.} (1990) that the GCS formed at a much earlier epoch than the
bulk of the stars and strengthens the case for a second burst of cluster
formation late in the metal enrichment history of the galaxy, possibly
due to a merger.
\subsection{ The Effect Of Color On The Luminosity Function }
Ashman, Conti \& Zepf (1995) (henceforth
ACZ) have modelled the luminosity of globular clusters with different metal
abundances and they find that the absolute magnitude of the peak of the GCLF
should vary with metallicity if we assume identical mass functions. Therefore, according to ACZ, bluer, metal poor clusters should be
brighter than redder, metal rich clusters in the V band due to the effects of metallicity on stellar evolution.
In Paper 1 we found that the blue clusters are 0.13 mag brighter than
the red ones in the V band, consistent with the sense of the prediction although
smaller in magnitude. Elson \& Santiago (1996b)
also reported a difference of 0.3 magnitude in the same sense between the two
populations of clusters in their sample. We find that the blue clusters
are 0.23 mag brighter than the red ones in the V band for the candidate clusters
used in this paper. The blue
population identified by a homoscedastic partition in
$\S$ 3.3 is 0.30 mag brighter than the corresponding red population in V, while the
difference is 0.22 mag for the heteroscedastically partitioned set.
However, the magnitude of the difference is still smaller than
the value of $\approx$0.5 mag predicted by ACZ. In
the I band, ACZ predict that the blue population should be brighter than the
red population by $\approx$0.1 mag. However, the I band magnitudes show
a very small trend in the opposite sense, i.e., we find that the red clusters
are brighter than the blue ones by $\approx$0.06 mag. This apparent
inconsistency may be a result of the simplifying assumption made by ACZ that
both the populations are the same age. By relaxing this assumption, and allowing
the age of the metal rich population to be younger, the colors and magnitude
shifts in both bandpasses can be explained self-consistently. Using the
evolutionary tracks of Whitmore {\it et al.} (1997) derived from the Bruzual \& Charlot (1998) models, we estimate that the red clusters are 3-6 Gyr younger than a
15 Gyr old blue population. A similar analysis based on the Worthey (1994) models gives an age of
9 Gyr for the red clusters. The 9-12 Gyr age of the red clusters supports the
hypothesis that they were formed during a second event, later in the history
of the galaxy than the blue clusters.
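The age estimate amounts to inverting a model color-age relation at the appropriate metallicity; the sketch below illustrates the idea with a purely invented grid, not the actual Bruzual \& Charlot or Worthey tracks, so the numbers are for illustration only.
\begin{verbatim}
import numpy as np

# Invented V-I versus age grid at fixed metallicity; the real
# analysis uses the Whitmore et al. (1997) and Worthey (1994) tracks.
age_gyr = np.array([3.0, 6.0, 9.0, 12.0, 15.0])
vi_model = np.array([1.02, 1.08, 1.13, 1.17, 1.20])

def age_from_color(vi_obs):
    # Invert the model grid by linear interpolation.
    return np.interp(vi_obs, vi_model, age_gyr)

# A red-peak color bluer than the 15 Gyr model color implies a
# younger age for the red population.
print(age_from_color(1.15))
\end{verbatim}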
We have established previously that the luminosity function of the
entire cluster distribution shows no clear evidence of radial variation, but the
red and blue cluster distributions separately might show a spatial variation.
In order to address this question we, once again, divided the clusters into five radial bins with approximately
equal numbers of objects and then divided them into red and blue populations
using the mean value of V-I = 1.09 as the breaking point. We calculated the luminosity function in each radial bin, and the difference in the luminosity
functions of the red and blue clusters. We found that the slope of the straight line that best fits the difference in the luminosity function of the red and blue clusters with magnitude is, in each case, either statistically zero or
a very small negative number (i.e., a $\approx$1$\sigma$ result), suggesting
that the blue clusters are slightly brighter than the red ones. On the whole
the red and blue clusters seem to have very similar luminosity distributions everywhere. In order to illustrate this we present the mean magnitudes
for clusters brighter than 23.5 mag in V, where incompleteness is not
a factor, in Table 3.
The blue clusters also show a slightly smaller statistical variation in
magnitude with
radius than the red clusters suggesting that they
might be a more stable indicator of the
turnover luminosity than the combined population. Such small scale variations
notwithstanding, the most significant result is that on the whole the
luminosity distribution is remarkably constant everywhere.
\subsection{ Surface Density of Clusters}
The core radius of the globular cluster system of M87 has been
shown to be larger than that of the underlying galaxy in earlier
studies by Grillmair {\it et al.} (1986),
Harris {\it et al.} (1986) and Lauer \& Kormendy (1986). In Fig 13 we plot the logarithmic
surface density of clusters for the various datasets as a function of logarithmic radius. Comparison of the cluster density distribution with the
underlying galaxy's brightness profile confirms previous observations that
the cluster profile is much flatter than that of the galaxy light in the
inner region of the galaxy.
The HST's superior ability
to identify globular clusters near the center of an elliptical galaxy is also immediately
apparent from the figure.
The density profiles plotted in Fig 13 are offset from one another since the
datasets have varying completeness limits. In order to calculate the corrected density profile of the cluster system, we calculated the projected total number of clusters at each point assuming that the luminosity function at each point
has the same turnover and halfwidth as the entire population (note that we calculate the total number of clusters that would be visible if clusters everywhere followed the GCLF plotted in Fig
5 and if the completeness were 100\% everywhere). A similar correction was made to the Grillmair {\it et al.} (1986)
dataset after applying a color correction of B-V=0.67 mag (Couture {\it et al.} 1990).
We did not use the other datasets to calculate the core radius because
the completeness limit is not well defined for the Harris (1986) dataset
and is much lower for the Lauer \& Kormendy (1986) data.
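The correction described above amounts to dividing the observed counts by the fraction of a Gaussian GCLF brighter than the local completeness limit; a minimal sketch follows, with the GCLF dispersion entered as a placeholder value.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

m_turnover = 23.67   # V-band GCLF turnover from our fit
sigma_gclf = 1.4     # placeholder for the fitted halfwidth (mag)

def projected_total(n_obs, m_limit):
    # Fraction of the Gaussian GCLF brighter than the local limit,
    # assuming the same GCLF and 100% completeness everywhere.
    frac = norm.cdf(m_limit, loc=m_turnover, scale=sigma_gclf)
    return n_obs / frac

# e.g., 40 clusters detected to a local completeness limit of V=24.0
print(projected_total(40, 24.0))
\end{verbatim}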
The projected total density distribution is plotted in Fig 14. The best fitting King model (King 1966) with a concentration
index of 2.5 is overlaid for comparison. The King radius, r$_0$ (sometimes referred to as the core radius), derived from the fit is 56.2$''$ and is much larger than the reported core
radius of the galactic light (R$_c$=6.8$''$; Kormendy 1985). Though our
GCS core radius is smaller than the 88$''$ reported by Lauer \& Kormendy
(1986), it is consistent with the $\approx$1$'$ value derived by McLaughlin (1995).
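For illustration, a profile of this kind can be fitted to binned surface densities as sketched below; note that the fit quoted above uses the dynamical King (1966) models, so the empirical King (1962) law and the binned densities here are only stand-ins.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def king1962(r, k, r0, rt):
    # Empirical King (1962) surface-density profile.
    a = (1.0 + (r / r0) ** 2) ** -0.5
    b = (1.0 + (rt / r0) ** 2) ** -0.5
    return k * (a - b) ** 2

r = np.array([10., 25., 45., 70., 100., 140.])        # arcsec
dens = np.array([450., 420., 350., 250., 160., 90.])  # arcmin^-2

popt, _ = curve_fit(king1962, r, dens, p0=[460., 56., 5000.])
k0, r0, rt = popt
print("r0 = %.1f arcsec, c = %.2f" % (r0, np.log10(rt / r0)))
\end{verbatim}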
Though destruction
of clusters due to tidal shocking and dynamical friction could have conceivably caused this turnover in the density distribution, the fact that we see no clear spatial change
in the luminosity function ($\S$ 3.2) makes it unlikely. We agree with previous studies
(Grillmair {\it et al.} 1986; Harris {\it et al.} 1998 etc.) that conclude that the spatial
constancy of the GCLF suggests that the large core is a
relic of the cluster formation process with the initial distribution of clusters being less peaked than the underlying galaxy's light profile. If the
blue clusters were formed during the collapse of a huge proto-galaxy as
suggested by Harris {\it et al.} (1998) it is possible that they mimic the density
distribution of the mammoth proto-galaxy and hence the discrepancy in the
core radii. Similarly, if the population of blue clusters is entirely
cannibalized, as suggested by C\^{o}t\'{e} {\it et al.} (1998), the cluster distribution would be predicted to have a large core radius. In a recent HST study of
14 ellipticals with kinematically distinct cores, Forbes {\it et al.} (1996)
found that the globular cluster density distributions rose less steeply than
the galaxy background in the nuclear region of all 14 galaxies. They discounted
destruction processes being the cause of this turnover for the same reasons
(i.e. the luminosity function is similar everywhere).
The projected central density of clusters is 460 clusters arcmin$^{-2}$.
For comparison, Lauer \& Kormendy (1986) calculated a central density of 72 clusters
arcmin$^{-2}$ from their observations. This large difference in densities is
another reminder of the HST's superior ability to resolve point sources
in a strong background.
\subsection{Shape of the cluster distribution}
The shape of the globular cluster distribution may hold some important
clues about the formation history of the galaxy since the spatial distribution of the clusters retains information about the shape of the proto-galaxy from which it
formed and/or the signatures of violent interactions that may have changed the morphology of the galaxy. McLaughlin {\it et al.} (1994) studied the spatial distribution of the
globular clusters around M87 in great detail and concluded that the cluster
distribution is elliptical ($\sim$E2.5 Hubble distribution) and aligned with the major axis of the galaxy
in the region 1.9$'$$<$R$<$4.5$'$.
In the top panel of Fig 15 we plot the number of cluster candidates within 55 arcsecs
of the center of the galaxy, binned in 30$\arcdeg$ sectors, as a function of angle East from North. Note that we have used reflection around the
center of the galaxy to complete the L-shaped wedge that is not covered by the
PC. Although the isophotes of the galaxy in this region are nearly circular,
we can clearly see that the spatial distribution of
globular clusters is flattened. Histograms of the red and blue cluster distributions show that they are both flattened with the red clusters
having a larger ellipticity than the blue ones. We have also plotted the actual positions of the red and blue clusters on the WFPC2 chip in the lower panels of
Fig 15.
In order to quantify the ellipticity of the cluster distribution we created a fake image in which we added a
Gaussian source at the location of each cluster brighter than V=23.5 (in order
to minimize the effects of incompleteness), and then smoothed it by
convolving with a wide Gaussian of FWHM $\approx$ 30$''$. At a distance of
55$''$ from the center of the galaxy (major axis), we find that while
the blue clusters follow an E2.8$\pm$0.2 (Hubble type)
profile, the red cluster distribution is flatter and follows an
E3.6$\pm$0.2 shape. One concern is that
smoothing of the fake image by a broad Gaussian may lead us to significantly
underestimate the ellipticity of the cluster distribution. Numerical
tests described in Kundu \& Whitmore (1998) indicate that this is a minor effect in the case of M87, hence we estimate that the shape of the
blue cluster distribution is E3$\pm$0.5, while that of the red clusters is
E4$\pm$0.5. We also find that the position angle of the major axis of the red clusters is 185$\pm$5$\arcdeg$ East of North, while that of the blue clusters
is 195$\pm$5$\arcdeg$ East of North.
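A simpler variant of this measurement computes second moments of the cluster positions directly, rather than smoothing a fake image; the sketch below uses synthetic positions as stand-ins for the V $<$ 23.5 candidate list.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
# Synthetic cluster positions (arcsec) about the galaxy center,
# standing in for the bright candidate list.
x = rng.normal(0.0, 30.0, 300)
y = rng.normal(0.0, 20.0, 300)

mxx, myy, mxy = np.mean(x*x), np.mean(y*y), np.mean(x*y)
tr, det = mxx + myy, mxx*myy - mxy**2
lam1 = 0.5 * (tr + np.sqrt(tr**2 - 4*det))  # major-axis eigenvalue
lam2 = 0.5 * (tr - np.sqrt(tr**2 - 4*det))  # minor-axis eigenvalue
ell = 1.0 - np.sqrt(lam2 / lam1)            # ellipticity 1 - b/a
pa = 0.5 * np.degrees(np.arctan2(2*mxy, mxx - myy))
print("Hubble type ~ E%.1f, PA = %.0f deg" % (10*ell, pa))
\end{verbatim}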
The fact that both the red and the blue clusters have flattened distributions that are roughly coincident, while the galaxy light profile is circular, is an intriguing result since no formation or destruction mechanism associated with the present day
spherical halo can induce the observed shape of the clusters. The elliptical
profiles of the clusters must then be a signature of the shape of M87 at an earlier epoch when it was flatter, maybe even with a disk
component. If the cluster distribution of M87 is largely formed by accretion of
companions, the elliptical shape may reflect the shape of the Virgo cluster
itself. Binggeli's (1982) discovery that the position angles of cD galaxies
correlate with those of their host clusters seems to support this hypothesis.
We also note that the position angles of the major axes of both the blue and red clusters differ from the position angle of the major axis of the galaxy,
which is at 155$\arcdeg$, by a statistically significant amount. This is in direct contrast to the observation of McLaughlin {\it et al.} (1994), who reported
that the major axis of the clusters in the region 1.9$'$$<$R$<$4.5$'$ is aligned
with the major axis of the galaxy. The cluster distribution apparently
has twisted iso-density curves. Intriguingly, the position angle of the clusters
within our field of view, especially the red ones, seems to coincide with the
181$\arcdeg$ position angle of the nuclear ionized disk. While it is
speculative
to link the large scale structure of the globular cluster system with a 1$''$ radius
ionized disk, it is possible that the nuclear disk is actually a remnant of an
accretion event that produced many of the central red clusters.
\subsection{ The Size Distribution}
As we noted in the aperture photometry section, the profiles of the globular clusters are on average broader than stellar
profiles, indicating that they are spatially resolved.
In order to model the light distribution of the globular clusters
in M87 we assume that they are similar to the Milky Way
clusters and that the surface brightness profile can be described by
King models (King 1962). The size of a cluster can then be defined by two
parameters, the King radius (r$_0$), and the
concentration parameter c = log$_{10}$($\frac{r_{tidal}}{r_0}$). However,
there are two important considerations to be made before we model the observed light
distribution of the clusters. We must take into account the fact that the
WFPC2 point spread function varies across each of the chips. Also,
the undersampling of the PSF by the WFPC2 camera has
the unpleasant effect of modifying the observed light profile
depending on the location of the peak within a pixel.
We created 4-fold oversampled Tiny Tim models (Krist 1995) of the PC and
WF PSFs at five evenly spaced locations on each of the chips.
Each of these PSFs was convolved with a range of King models, varying c between 0.5 and 2.5,
and r$_0$ between 0.5 and 16 pixels of the oversampled PSF. We then resampled the models to normal size
for eight different locations of the peak within a pixel. For each of the
models, we performed photometry for a range of aperture radii to create
a cumulative light profile and normalized all the profiles in our library to
the observed light within 3.3 pixels. The profiles of the candidate objects
were created using the same aperture photometry parameters. We deduced the
structural parameters of an individual cluster by finding the
best fit (in a least-squares sense) within the set of models that were closest
to the candidate cluster with respect to the chip location and centering
within a pixel. As explained in Kundu \& Whitmore (1998), our numerical tests
indicate that there
are significant correlated errors when we fit c and r$_0$ simultaneously. On the other hand, if we restrict the fits to a single
concentration index, we can measure the relative sizes of the
cluster candidates reliably even in
regions with a strong galaxy background. We quantify the sizes in terms of the physically meaningful half light radii (r$_h$) because unlike the King radius,
r$_0$, it is largely unaffected by the choice of the King model c parameter. We chose to fit c=1.25 and c=1.5 King models since the median King model concentration parameter of old globular clusters in the Milky Way (Harris 1996), M31 (Crampton {\it et al.} 1985; Grillmair {\it et al.} 1996), and the LMC
(Elson \& Freeman 1985) all lie within this range.
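The matching step itself can be outlined as follows; here the PSF image, the dictionary of King-model images, and the aperture radii are placeholders for the Tiny Tim PSFs and the model grid described above.
\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve

def growth_curve(img, radii):
    # Cumulative aperture photometry about the image center.
    yy, xx = np.indices(img.shape)
    c = (np.array(img.shape) - 1) / 2.0
    rr = np.hypot(xx - c[1], yy - c[0])
    return np.array([img[rr <= r].sum() for r in radii])

def best_r0(cluster_img, psf, king_models, radii, norm_r=3.3):
    # Least-squares match of the observed growth curve against
    # PSF-convolved King models, all normalized at norm_r pixels.
    obs = growth_curve(cluster_img, radii)
    obs = obs / np.interp(norm_r, radii, obs)
    best, best_chi2 = None, np.inf
    for r0, model in king_models.items():  # {r0: model image}
        prof = growth_curve(fftconvolve(model, psf, mode="same"),
                            radii)
        prof = prof / np.interp(norm_r, radii, prof)
        chi2 = np.sum((obs - prof) ** 2)
        if chi2 < best_chi2:
            best, best_chi2 = r0, chi2
    return best
\end{verbatim}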
The half light radii of the clusters, r$_h$, obtained by fitting the cluster profiles to the c=1.25 King model convolved PSFs, are plotted as a function of the cluster brightness
in Fig 16. While we see no obvious relationship between r$_h$ and V, we note
the striking resemblance of our plot with Fig 1 of van den Bergh (1996) which
plots the same quantities for the Milky Way globular clusters. As in the
Milky Way, the clusters with half light radii between 2 and 4 pc are more luminous than both smaller and larger clusters. The lack of any strong correlation between the sizes and the brightness (mass) of the clusters also
implies that larger clusters are in general more diffuse, low density objects.
Fig 17 is a plot of the size distribution of the clusters as a
function of the projected distance from the center of M87. We see no significant radial trends in the figure. However, a careful inspection of the plot shows that there may be a lack of large
clusters within the innermost 10$''$, possibly because large, diffuse
clusters in the innermost regions are destroyed by tidal forces.
Given the small number statistics we cannot conclusively determine whether
this is indeed a real effect.
The mean half light radii (r$_h$)
of the cluster candidates in the PC and WF, fitted to c=1.25 and c=1.5 models, are shown in
Table 4. The measured value of $r_h$$\approx$2.5 pc is comparable to the mean
half light radius of $\approx$3 pc for the brightest Milky Way clusters (van den
Bergh 1996). Table 4 also shows the average sizes of the blue and red clusters in the PC and WF chips. Interestingly, we find that the blue clusters are statistically
larger than the red clusters in both the WF and the PC.
We have observed a similar effect in the NGC 3115 globular
cluster system (Kundu \& Whitmore 1998). In order to verify the reliability of the size difference we developed a different
algorithm to estimate the size of the clusters. First, we obtained counts in a set of radial bins around the candidate objects and calculated the FWHM of the Gaussian curve that best fit the intensity distribution
of each individual cluster. WFPC2 observations of a set of program
stars from a field in $\Omega$ Cen were then convolved with Gaussian
distributions of varying widths and fitted with the same routine to
establish a relationship between the convolved width and resulting FWHM.
The calculated sizes of the blue clusters in M87 are larger
than the red clusters in all the chips, in both the V and I filters using
this method.
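This second estimator reduces to fitting a circular Gaussian to the binned radial intensity profile of each candidate, as sketched below with hypothetical bin values.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss(r, amp, sigma):
    # Circular Gaussian radial intensity profile.
    return amp * np.exp(-0.5 * (r / sigma) ** 2)

# Hypothetical radially binned counts for one cluster candidate.
r_bins = np.array([0.5, 1.5, 2.5, 3.5, 4.5])   # pixels
counts = np.array([980., 610., 240., 70., 20.])

(amp, sigma), _ = curve_fit(gauss, r_bins, counts, p0=[1000., 1.5])
fwhm = 2.3548 * sigma    # FWHM = 2*sqrt(2 ln 2)*sigma
# A calibration curve from the Gaussian-convolved Omega Cen stars
# then maps this observed FWHM to an intrinsic size.
print("FWHM = %.2f pixels" % fwhm)
\end{verbatim}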
We tested 5 independent star fields in $\Omega$ Cen to see whether this might be an instrumental effect. In each of the test images we selected unresolved/barely resolved objects with colors between 0.8$<$V-I$<$1.4 and used our algorithms to determine the size. We find that the size of objects with V-I$>$1.06 mag is statistically
identical to the size of objects with V-I$<$1.06 in both the V and I filters. We also tested each
quadrant of all four WFPC2 chips and discovered no position dependent effect
that may be inducing the difference in size between red and blue objects. We therefore conclude that the observed size difference between the
red and blue clusters is real and not due to instrumental biases.
van den Bergh (1994) showed that the half light radius of the Milky Way globular clusters
increases with galactocentric distance. Since the blue clusters are on average more distant from the center of the
galaxy than the red disk clusters, the difference in size between the red and
the blue clusters may be indicative of a similar relationship in NGC 3115 and
M87. The physical reason for the difference in sizes between the red and blue
clusters remains unclear. It may be a relic of the different formation mechanisms
of blue and red clusters, a signature of the radial variation in efficiency of various destruction mechanisms superimposed on the different spatial scales
of the two systems, a reflection of slightly accelerated core collapse of the
red clusters which are closer to the center of the galaxy, or a combination of
all of the above.
\section{Summary}
We observed the inner region of M87 with the WFPC2 camera onboard the
HST and identified 1057 globular cluster candidates. These observations
reached two magnitudes deeper than the turnover magnitude of the
GCLF and allowed us to study the luminosity, color, and size distribution of the globular
clusters in the inner region. The main results gleaned from this study are:
1) Within statistical errors, we find no variation in the luminosity
function with radius, in either the V or the I band, and hence no
obvious evidence of
evolution of the luminosity function due to destruction processes. The
constancy of the GCLF bolsters our confidence in the turnover magnitude
as a secondary
distance indicator. Additionally, I band magnitudes show slightly less scatter than the
V band values presumably since the magnitudes
are less affected by metallicity differences. This suggests that there might be
some merit in using I band GCLFs instead of V band GCLFs as distance indicators.
2) The turnover magnitude of the GCLF is m$_V$$^0$=23.67 mag in the V band and
m$_I$$^0$=22.55 mag in the I band.
3) The color distribution of the clusters is bimodal with a
blue peak at V-I = 0.95 mag, a red peak at V-I = 1.20 mag and a mean color
V-I = 1.09 mag. The bimodality in the color distribution reflects the underlying bimodal metallicity distribution of clusters in M87. The blue
peak has a metallicity of [Fe/H]=-1.41 dex while the more metal rich, red peak
has a metallicity of [Fe/H]=-0.23 dex. The difference in the color of the red population
and the underlying galaxy is only 0.1 magnitude, smaller than the 0.5 magnitudes
previously reported in the literature, which suggests that the red population
may have formed at
a fairly late stage in the metal enrichment history of the galaxy, probably during a metal rich merger event.
4) The color distribution is bimodal
in all five radial bins and there is weak evidence that the red clusters
are more centrally concentrated than the blue ones within our WFPC2 field of
view. Combining our data with other observations, we infer that the average
metallicity of globular clusters decreases with distance, most likely due to
the increasing ratio of blue to red clusters with galactocentric distance.
5) The luminosity functions of the red and blue
clusters are similar at all radii, though the blue candidates are on average 0.2 magnitudes brighter than the red ones in the V band. The difference in the color and brightness between the two populations suggests that the red clusters were formed roughly 9-12 Gyr ago, assuming a 15 Gyr old blue population.
6) The core radius of the globular cluster density distribution is 56.2$''$ for the best fitting King model with a concentration index c=2.5. This is much larger than that of the underlying galaxy light (R$_c$=6.8$''$). Since we
see no evidence of cluster destruction processes in the luminosity function,
this is most likely a relic of the formation history of the clusters.
7) Even though M87 has a $\sim$E0 shape, the globular cluster distributions
appear to be flattened. While the blue cluster population has an E3$\pm$1
profile, the red clusters have an even flatter E4$\pm$1 shape. This implies that
M87 may have had a much flatter profile during the epoch in which the clusters formed.
8) The half light radius of the clusters is $\approx$2.5 pc with
no obvious radial variation in the size distribution. On average, the
blue clusters appear to be 20\% larger than the red clusters.
We conclude from this study that the globular clusters in the inner
region of M87 have remarkably homogeneous spatial properties and that the Globular
Cluster Luminosity Function and color distribution of the clusters
are similar throughout the regions studied by us. However, small differences
in the properties of the red and blue clusters suggest that these two
populations might have different formation histories and that the red population
formed later in the metal enrichment history of the galaxy than the
blue population, most likely during a major merger.
We are grateful to Alberto Conti for helping us with the KMM mixture modelling.
We would also like to thank Ivan King for supplying us with King model
profiles and Bryan Miller for help in the early stages of the project. We also
wish to thank the anonymous referee for many useful comments and suggestions.
\newpage
\section{Introduction}
Acute mental health provider shortages alarmed the Association of American Medical Colleges in 2016 \cite{butryn2017shortage}. Eighteen percent of the counties in the U.S. had reported a lack of providers, such as psychologists, social workers, advanced practitioners, therapists, and counselors \cite{thomas2009county}. Nearly every county (i.e., 96\% of $3,140$ counties in the U.S.) needed more psychiatrists \cite{thomas2009county}. Rural counties and low-income counties have higher levels of need concerning healthcare access \cite{thomas2009county}. This issue remains a global problem to this day. In developed countries, there are approximately 6.6 psychiatrists per 100,000 people, while in lower-income countries there are as few as 0.1 psychiatrists per 100,000 people \cite{oladeji2016brain}. An estimated additional 1.18 million mental health workers are needed in low- and middle-income countries for basic mental health interventions \cite{oladeji2016brain}. Developing cost-effective and feasible mental health support systems to mitigate these shortages will be critical.
Chatbots are digital tools that interact with users in natural language, often with the goal of helping a user complete a task \cite{laranjo2018conversational, vaidyam2019chatbots}. Chatbots, in the form of voice agents with speech functions, became widely available to the public through off-the-shelf products, such as Apple's Siri, Microsoft's Cortana, and Amazon's Alexa \cite{belfin2019graph}. Having been adopted in multiple fields, including business \cite{adam2020ai}, governance \cite{androutsopoulou2019transforming}, and education \cite{georgescu2018chatbots}, the potential of chatbots has also been demonstrated in mental health, where chatbots have been labeled "the future of therapy" \cite{vaidyam2019chatbots} and are believed to increase the opportunities for individuals to receive therapeutic or emotional support services \cite{moore2019bot}. In 2019, Facebook Messenger had more than 300,000 chatbots and many were classified as related to healthcare and wellbeing \cite{zhang2020artificial}. Several studies have proven their efficacy: therapy services delivered through chatbots were as effective as face-to-face therapy in diagnosing and treating anxiety and depression symptoms \cite{fulmer2018using, ho2018psychological, inkster2018empathy, roniotis2017detecting}. Therapy chatbots provide economic and easily accessible services through multiple forms (e.g., text, chat app, social media) and personalized, immediate support available 24/7 \cite{palanica2019physicians, zhang2020artificial}. Especially when integrated into health and ambient assistive technologies (e.g., "ambiently used sensor-based information and communication technologies, aiming at contributing to a person's health and health care as well as to her or his quality of life" \cite{haux2016health}), therapy chatbots will bring increasing possibilities in personal healthcare, including prevention, intervention, and therapy, by coordinating with other sensors and ambient, smart health solutions, as home settings will become a critical place to perform clinical care, in addition to hospitals and clinics \cite{haux2016health}.
Therapy chatbots need to address users' emotional needs, unlike other social chatbots that offer information only \cite{sharma2018digital}. Furthermore, therapy chatbots need to be distinguished from generic, social chatbots in that they involve persuasion, which is conceptualized as a more complex task than those of social chatbots that engage users in general conversations (e.g., talking about movies or weather) \cite{zhang2020artificial}. Therapy contexts would require the chatbots to generate responses to complicated questions and allow users to lead the conversation, which is challenging but could be achieved by the increasingly advanced artificial intelligence techniques developed in recent years \cite{abd2019overview, fulmer2018using, gao2018neural}. While social chatbots may be evaluated with standard metrics, such as BLEU, METEOR, and the ROUGE-N family \cite{chen2017survey, lin2004rouge}, for therapy chatbots evaluating the quality of responses, such as their continuity, engagement, compassion, and coherence, is critical \cite{chen2017survey, liu2016not}. However, very few studies have evaluated the speech quality of chatbots \cite{chen2017survey, liu2016not}, even though the need has been documented in depth \cite{chaves2020should, zhang2020artificial}. Few studies have discussed the possible negative consequences of deploying undesirable therapy chatbots, especially the ethical problems \cite{fleischmann2019good}, even though many therapy chatbots have been deployed with constraints in delivering therapy support.
Numerous platforms are being released using advancing machine learning techniques, such as RNN (Recurrent Neural Network), LSTM (Long Short Term Memory), and the Seq2Seq model (Encoder-Decoder Sequence to Sequence Models) \cite{dsouza2019chat}. These platforms bring opportunities for building effective therapy chatbots at a low cost. Most therapy chatbots, however, force users to choose among pre-determined response options. Such forms of communication do not suit therapy contexts, in which patients need to be open, honest, and expressive. With an interest in building a therapy chatbot that allows more freedom of conversation for therapy, we investigate basic qualities of a state-of-the-art technique called the Generative Pre-Training Model-2 (GPT-2) \cite{radford2019language, OpenAIwebsite} for therapy chatbot contexts. Based on our results, we discuss the implications of designing and building therapy chatbots, contributing to the field's discussion around human-centered AI design.
\section{Related Work}
In this section, we walk through a few seminal approaches to building and evaluating therapy chatbots.
\subsection{Building Therapy Chatbot Models}
Two main conversational approaches make up the models for building chatbots: retrieval-based and generative-based \cite{dsouza2019chat, mudikanwi2011student}. The key distinction between the two is that retrieval-based chatbots find matching responses from a database of manually created utterances, whereas generative-based chatbots auto-generate responses via machine learning algorithms.
To date, most therapy chatbots apply a retrieval-based approach \cite{abd2019overview, laranjo2018conversational}. Based on response matching, retrieval-based chatbots rely on dialogue management frameworks to track conversation states \cite{chen2017survey} and decide future actions \cite{swartout2013virtual}. Most therapy chatbots have used hand-crafted dialogue management frameworks \cite{thomson2013statistical} of finite states \cite{sutton1998universal} and frame-based, also known as form-filling, frameworks \cite{goddeau1996form}. In the finite-state framework, the dialogue is constrained to a sequence of pre-determined steps or states \cite{laranjo2018conversational}, and users are required to select responses for single-turn conversations \cite{chen2017survey}. This works well for straightforward and well-structured tasks but will fail if users need to take the initiative in the conversation \cite{laranjo2018conversational}. In a frame-based, or form-filling, framework, the flow of the dialogue is not pre-determined \cite{laranjo2018conversational} but proceeds according to the pre-specified action for each pre-defined set of known concepts called slots \cite{thomson2013statistical}. This kind of framework is usually used in information-seeking conversations \cite{bohus2009ravenclaw}, where users seek information according to a set of constraints. An example is a user providing information to fill slots, such as the departure and arrival cities, to search for a route. However, this framework sometimes struggles to adapt to other kinds of conversations \cite{thomson2013statistical} and often causes users to provide more information than needed due to the non-predetermined dialogue flow \cite{laranjo2018conversational}. Popular techniques for realizing these dialogue management frameworks include Artificial Intelligence Markup Language (AIML) and ChatScript \cite{dsouza2019chat}. AIML, first adopted by ALICE (Artificial Linguistic Internet Computer Entity), is an XML-compliant language that allows for efficient pattern matching in a tree structure for retrieving responses. Seminal therapy chatbots reported in the literature, such as VICA, a virtual agent equipped with voice communication \cite{sakurai2019vica}, a conversational agent for alcohol misuse intervention \cite{elmasri2016conversational}, and a counseling agent in the IT industry \cite{shinozaki2015context}, all applied AIML to build the chatbot. Vivibot \cite{greer2019use}, Woebot \cite{fitzpatrick2017delivering}, and a virtual agent for post-traumatic stress disorder \cite{tielman2017therapy} also applied decision tree structures. An embodied conversational agent for education \cite{sebastian2017changing} applied the option-choice format for user replies. However, although the retrieval-based design allows chatbots to reply with more coherent answers than the generative-based design \cite{dsouza2019chat}, it restrains free conversations \cite{klopfenstein2017rise, laranjo2018conversational, zhang2020artificial} due to pre-created outputs \cite{mudikanwi2011student}. It is insufficient for multi-linear conversations due to the decision tree mechanism \cite{dsouza2019chat}. Additionally, it will fail the task if users' inputs do not match any entry in the database \cite{dsouza2019chat}, making it difficult to improve usability.
Alternatively, generative-based chatbots allow for conversational flexibility. This model applies machine learning techniques to train the chatbots to learn and generate responses based on a large amount of training data \cite{dsouza2019chat}. Popular artificial intelligence techniques are RNN, LSTM, and the Seq2Seq model \cite{dsouza2019chat, trivedi2019chatbot}. Few studies have tried to apply a generative-based approach to build therapy chatbots. Among the generative-based models, the state-of-the-art models include Bidirectional Encoder Representations from Transformers (BERT) \cite{devlin2018bert} and the OpenAI Generative Pre-Training-2 Model (GPT-2) \cite{radford2019language}, which has been expanded to a third-generation, autoregressive language model (GPT-3) \cite{brown2020language}. These models are open-sourced, efficient for model training, and tailorable for task-oriented dialogue generation \cite{qiu2020pre, zhang2020artificial}. The OpenAI GPT-2, a generative unsupervised pre-trained model, was released in 2019 and trained on a large unlabeled corpus, which can reduce manual annotation costs, avoid training a new model from scratch, and allow for deep language models \cite{OpenAIwebsite, qiu2020pre}. Tests showed the model achieved state-of-the-art performance on language tasks like question answering, reading comprehension, summarization, and translation \cite{OpenAIwebsite, radford2019language}. The chatbot can also be fine-tuned with different domain data for unique purposes for its target users \cite{lee2020patent, vig2019multiscale}. However, problems exist, such as users having difficulty understanding the responses and the model generating errors that violate common sense \cite{zhang2020artificial}. One general solution is to incorporate pre-trained models to facilitate conversations in specialized domains by fine-tuning with domain-specific datasets \cite{OpenAIwebsite}.
\subsection{Evaluation of Therapy Chatbots}
Evaluations of chatbots \cite{laranjo2018conversational} range from technical performance and user experience to speech quality.
\textbf{Technical performance.} Retrieval-based chatbots are evaluated based on the rate of successful task completion and the recognition accuracy of speech \cite{laranjo2018conversational}. Typical measurements include accuracy, which refers to the percentage of labels matched, and Precision, Recall, and F-measure, which are based on relevance \cite{dsouza2019chat}. In contrast, generative-based chatbots are evaluated using Word Similarity Metrics such as BLEU, METEOR, and the ROUGE-N family for their technical performance \cite{chen2017survey, lin2004rouge}. Furthermore, datasets such as the CNN/Daily Mail corpus \cite{nallapati2016abstractive}, the Gigaword corpus \cite{napoles2012annotated}, the 2004 Document Understanding Conference dataset \cite{harman2004effects}, arXiv \cite{scharpf2020classification}, and PubMed \cite{dynomant2019doc2vec} are widely used to evaluate the generated responses of chatbots, allowing researchers to compare models' performances based on the Word Similarity Metrics. Although these metrics are frequently used, researchers have found that they correlate weakly or not at all with human judgments, even though they can serve to distinguish state-of-the-art models from baselines \cite{liu2016not}. One promising method is to employ an approach that distinguishes models' outputs from those produced by humans \cite{chen2017survey}.
\textbf{User experience.} Research in therapy chatbots has applied user research to evaluate user experience, including measuring users' trust and comfort \cite{sakurai2019vica}, emotional states \cite{fitzpatrick2017delivering, fulmer2018using, greer2019use}, overall satisfaction \cite{elmasri2016conversational}, and acceptability and usability outcomes \cite{fitzpatrick2017delivering,laranjo2018conversational, tielman2017therapy}. Several researchers used the Positive and Negative Affect Schedule (PANAS) to test emotional states \cite{fitzpatrick2017delivering, fulmer2018using, joerin2019psychological}. However, user research is often costly or limited to small samples \cite{fitzpatrick2017delivering, greer2019use}.
\textbf{Speech quality.} Speech quality \cite{moller2000new} examines the gap between the user's perception and expectation during the conversation based on the context. Unlike technical performance evaluations, which focus on the general performance of chatbots in language generation, speech quality measures the effectiveness of the conversation delivered by the chatbots in the specific application context. Zhang et al. discussed measuring the speech quality of chatbots through either subjective evaluation from the user's perspective (e.g., coherence, naturalness, and fluency) or objective evaluation (e.g., linguistic analyses of contents, lengths of conversations, and amounts of information exchanged) \cite{zhang2020artificial}. Objective evaluation, including meta-information of the conversation (e.g., utterance length, turn-taking, and words used), is especially suitable for the generative-based approach, where responses are auto-generated by the chatbots and their quality cannot be guaranteed, unlike the human-curated responses of the retrieval-based approach. Previous therapy chatbot research used similar evaluations to measure speech performance, such as the average number of interactions in a single dialogue session, called Conversation-turns Per Session (CPS) \cite{sakurai2019vica, shinozaki2015context,shum2018eliza}.
The OpenAI GPT-2 model has shown that it reached its benchmark in terms of its technical performance \cite{OpenAIwebsite, radford2019language}. However, such performance evaluations are not enough to explain the requirements needed for sensitive contexts, such as the safety and credibility that users experience. Assessing user experience requires putting human subjects at risk by exposing them to untested therapy chatbots. Given that this is the first step into evaluating a therapy chatbot using the generative-based model, we begin with assessing the basic qualities of speech and conversation measured through the meta-information of the chatbot responses.
\section{Methods}
The pre-trained model refers to the released factory model without additional fine-tuning with training data from an application area. The fine-tuned model is tuned with a domain-specific dataset on top of the pre-trained model for an application goal. Our goal was to investigate how the pre-trained and fine-tuned models of the OpenAI GPT-2 perform as therapy chatbots. As a preliminary step in this long journey, we first focused on whether the chatbots' responses satisfy basic conditions associated with speech quality that can be measured using meta-information: the words, response lengths, and sentiments of the chatbots' responses:
\begin{itemize}
\item {RQ1}: How do chatbots with pre-trained and fine-tuned models perform in generating understandable responses?
\item{RQ2}: How do the pre-trained and fine-tuned models perform in adjusting the information load to users' inputs when compared to the therapists?
\item{RQ3}: What are the sentiments of the pre-trained and fine-tuned models compared to that of the therapists?
\end{itemize}
Below, we walk through the following: (1) the generative-based model and the dataset we used to fine-tune and test the models, and (2) the background of how we evaluated RQ2 and RQ3 in terms of adjusting the information load and the sentiments used in therapist-patient interaction.
\subsection{Dataset and Fine-tuning}
Due to concerns about malicious applications, OpenAI applied a staged release plan and shared four GPT-2 model sizes: 117, 345, 762, and 1542 million parameters, respectively called the 117M, 345M, 762M, and 1.5GB models. We applied the OpenAI GPT-2 345M model, the largest model available when we initiated the experiment in September of 2019. As of December of 2020 (when we are writing this draft after completing the analysis), the 1.5GB model is the most up-to-date version. The OpenAI GPT-2 outperforms other language models trained on specific domains without fine-tuning \cite{OpenAIwebsite,radford2019language}, and as an open-sourced pre-trained model trained on over 8 million human-filtered documents totaling 40 GB of text, it can reduce manual annotation costs, time, and computational resources. We wanted to investigate how well the pre-trained model performs in a therapist-patient dialogue context. Meanwhile, we also fine-tuned the model to see whether fine-tuning with training data brings better results than the pre-trained model; fine-tuning is a potential method to reduce problems like topic changes and the model generating errors that violate common sense \cite{OpenAIwebsite}.
Although larger models might result in better outcomes for some research goals \cite{lee2020patent}, some research has compared outcomes from different model sizes and demonstrated that the results are consistent across model sizes \cite{xu2019neural}. GPT-3 is a more recent model that attempts to remove the fine-tuning process and build a general language model. However, researchers found that GPT-3 did not yield satisfying performance because it generated off-topic, confusing answers and had ethical issues, such as cultural biases in its responses \cite{floridi2020gpt}. OpenAI has not open-sourced the model's code yet, and only an API is available for testing the model's capacity \cite{mcguffie2020radicalization}. Because GPT-3 does not allow fine-tuning, our approach of evaluating GPT-2 remains the most up-to-date approach to testing generative-based models that allow for fine-tuning, which is essential for our domain context of therapist-patient dialogue.
We had access to 306 transcribed therapy sessions between 152 family caregivers of individuals with dementia and their respective mental health therapists. Among them, 59 transcriptions were excluded because they were transcripts unrelated to the main therapy sessions (e.g., the closing session of therapy where therapists reflect on the whole process and say farewell to patients). Then, 8 duplicate sessions were excluded, resulting in 239 sessions. Following the common machine learning practice of splitting datasets into train and test sets \cite{boyd2019deep}, we then fine-tuned the model with 75\% of the 239 remaining sessions, i.e., 179 sessions from 123 caregivers. The remaining 25\% of the sessions were used for evaluation, i.e., 60 sessions from 52 caregivers, which consisted of 9261 patient-therapist response pairs.
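A minimal sketch of the session-level split and the fine-tuning step is shown below, using the open-source gpt-2-simple wrapper as one common way of fine-tuning the released 345M checkpoint; the transcript loader and file names are placeholders, not our actual pipeline.
\begin{verbatim}
import random
import gpt_2_simple as gpt2

# Session-level 75/25 split (sessions, not utterances, are the
# unit, so no session is divided across train and evaluation).
random.seed(42)
sessions = load_sessions("transcripts/")   # hypothetical loader
random.shuffle(sessions)
cut = int(0.75 * len(sessions))
train, evaluation = sessions[:cut], sessions[cut:]

with open("train.txt", "w") as f:
    f.write("\n".join(train))

# Fine-tune the released GPT-2 345M checkpoint on the sessions.
gpt2.download_gpt2(model_name="345M")
sess = gpt2.start_tf_sess()
gpt2.finetune(sess, "train.txt", model_name="345M", steps=1000)
\end{verbatim}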
\subsection{Evaluation: Non-word outputs, response length, sentiment analyses}
To answer the three research questions, we used three measurements: the proportion of non-word outputs (RQ1), the length of response (number of words in a response) (RQ2), and sentiment components (RQ3).
\subsubsection{Measurement for RQ1: The proportion of non-word outputs}
When the model's responses did not contain any English words (e.g., punctuation only, such as "????????"), we considered them non-word outputs that failed the initial step of being evaluated for the model's speech quality. We conducted a two-proportion test \cite{newcombe1998interval, newcombe1998two} to compare the proportions of non-word outputs of the pre-trained versus fine-tuned models.
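The non-word check and the comparison of proportions can be reproduced as sketched below, with the counts reported in the Results section; the z-test shown is one standard implementation of a two-proportion test, not necessarily the exact procedure used in the study.
\begin{verbatim}
import re
from statsmodels.stats.proportion import proportions_ztest

def is_non_word(response):
    # True when a response contains no alphabetic character at all.
    return re.search(r"[A-Za-z]", response) is None

# Non-word counts out of 308 conversation pairs (see Results):
# 18 for the pre-trained model, 125 for the fine-tuned model.
stat, p = proportions_ztest([18, 125], [308, 308])
print("z = %.2f, p = %.2e" % (stat, p))
\end{verbatim}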
\subsubsection{Measurement for RQ2: Length of response (number of words)}
To guarantee successful communication, speakers and listeners should collaborate to allow the conversation to continue \cite{olmstead2020role}. According to conversation theory \cite{pask1976conversation} and the multi-action cascade model (MACM) of conversation \cite{gunaratne2019multi}, conversation participants can act in three ways to allow the dialogue to continue: (1) initiate a new conversation, (2) contribute to an existing conversation, and (3) acknowledge shared information (e.g., "I see," "That's great!") \cite{heritage2017conversation}.
For conversations to proceed, two people who are conversing with one another generally maintain a balance of information, specifically regarding the length of responses \cite{olmstead2020role}. A low information load may fail to prompt the other person to contribute to the conversation because there is too little information to respond to. A high information load can make it harder for individuals to digest the information right away. Information overload refers to an information load that is greater than the receiver's information processing capacity \cite{roetzel2019information, streufert1965conceptual}. An ideal information load is neither too low nor too high for the receiver's capacity, accounting for their characteristics (such as serial processing ability or limited short-term memory) and limited task-related resources (such as time or budget) \cite{roetzel2019information}. If therapy chatbots provide an unsuitable information load to users, dissatisfying or negative outcomes will occur. Hence, we evaluated the length of each output to assess whether the models responded with longer or shorter utterances compared to the therapists' responses.
Given the factors discussed above, the length of a response, calculated as the total number of words per response, is used as an indicator of the amount of information shared in each response \cite{calvo2014finding}. We then conducted a {\itshape One-way Repeated ANOVA} \cite{kim2015statistical} to test whether there was any significant difference in response lengths among the three outputs from the pre-trained model, the fine-tuned model, and the therapists. If the {\itshape One-way Repeated ANOVA} result indicated a significant difference among the three outputs, we performed {\itshape Tukey's HSD} test \cite{abdi2010tukey} for pairwise comparisons.
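A sketch of this analysis using standard Python statistics tools follows; the input file and column names are placeholders for our table of response lengths.
\begin{verbatim}
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One row per (conversation pair, source) with columns: pair_id,
# source in {therapist, pretrained, finetuned}, and n_words.
df = pd.read_csv("response_lengths.csv")   # hypothetical file

# One-way repeated-measures ANOVA across the three sources.
print(AnovaRM(df, depvar="n_words", subject="pair_id",
              within=["source"]).fit())

# Tukey's HSD pairwise comparisons, if the ANOVA is significant.
print(pairwise_tukeyhsd(df["n_words"], df["source"]))
\end{verbatim}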
\subsubsection{Measurement for RQ3: Sentiment analysis}
The therapists in the transcript data used Problem-Solving Therapy, an intervention for managing depression and anxiety \cite{zhang2018effectiveness}. In this technique, positive reinforcement is one of the fundamental key components in establishing a therapeutic alliance \cite{nezu2006problem}. To evaluate the level of positive reinforcement the models perform, we created a keyword list that would identify therapists' original conversation pairs that included positive reinforcement among the 60 sessions set aside as evaluation data. In generating this keyword list, we used the SocialSent Casual Conversation lexicon \cite{hamilton2016inducing}, a lexicon known to effectively convey sentiments that may differ according to the context of the conversation. We selected the keywords from the lexicon within 2 standard deviations of the mean according to the SocialSent Casual Conversation's positive sentiment scores. Of these 4,621 keywords, 143 keywords appeared in the 60 sessions and 9261 conversation pairs. Two authors of this study conducted a manual annotation of 100 randomly selected conversation pairs to determine whether the keyword included in the conversation pair was relevant to therapists positively reinforcing the patients' responses. The Cohen's Kappa for inter-rater reliability for this task was 0.62. For disagreements on the inclusion of keywords, the authors discussed the differences and reached agreement. The overall team then discussed the final keywords to include, resulting in a list of 35 keywords. The resulting keywords included "Good," "Yeah," and "Nice." If the therapist's utterance included at least one of these keywords, we selected the conversation pair for evaluation. This process resulted in a total of 308 conversation pairs, covering 54 sessions from 47 caregivers. We extracted the patient's utterances as inputs to the pre-trained and fine-tuned models to generate response outputs from the two models (See Fig. \ref{fig:Prisma}).
\begin{figure}[htbp]
\centering
\includegraphics[width=10cm]{PrismaDiagram.eps}
\caption{Diagram showing the process of filtering conversational pairs to evaluate and select the training dataset for the fine-tuned model.}
\label{fig:Prisma}
\end{figure}
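The selection step itself is a simple keyword filter over the therapist utterances, as sketched below; the three keywords shown are examples from the final 35-keyword list, and pairs is an assumed list of (patient, therapist) utterance tuples.
\begin{verbatim}
# Example entries from the final 35-keyword list.
POSITIVE_KEYWORDS = {"good", "yeah", "nice"}

def has_positive_keyword(therapist_utterance):
    words = therapist_utterance.lower().split()
    return any(w.strip(".,!?") in POSITIVE_KEYWORDS for w in words)

# pairs: assumed list of (patient, therapist) utterance tuples.
selected = [(p, t) for p, t in pairs if has_positive_keyword(t)]
\end{verbatim}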
We then compared the responses from the pre-trained and fine-tuned models against the therapists' original responses to evaluate their comparative sentiments. To do this, we used SÉANCE version 1.05 \cite{crossley2017sentiment} to calculate expanded sentiment scores on positivity for the two models' outputs. SÉANCE is a freely available sentiment analysis tool, which contains an extensive database of dictionaries of words. Unlike most sentiment analysis tools, it integrates the dictionaries of multiple sentiment analysis tools, including the Harvard IV-4 dictionary lists used by the General Inquirer \cite{stone1966general}, the Lasswell dictionary lists \cite{lasswell1969value}, the Affective Norms for English Words \cite{bradley1999affective}, Hu–Liu polarity lists \cite{hu2004mining}, Geneva Affect Label Coder \cite{scherer2005emotions}, EmoLex \cite{mohammad2010emotions,mohammad2013crowdsourcing}, SenticNet \cite{cambria2012senticnet}, and the Valence Aware Dictionary for Sentiment Reasoning \cite{hutto2014vader}. SÉANCE generated 20 weighted component scores from the indices of these databases through principal component analysis. We chose the 10 component scores relevant to positive reinforcement: "Negative Adjectives," "Positive Adjectives," "Joy," "Affect for Friends and Family," "Fear and Disgust," "Positive Nouns," "Respect," "Trust Verbs," "Failure," and "Positive Verbs." We disregarded the remaining 10 weighted component scores generated by SÉANCE because they were not applicable in the context of this study.
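SÉANCE itself is a standalone tool, but the flavor of lexicon-based scoring can be illustrated with VADER, one of the lexicons it integrates; the stand-in below does not reproduce the component scores used in the study.
\begin{verbatim}
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

for source, response in [("therapist", "That's great."),
                         ("fine-tuned", "It went really well.")]:
    scores = analyzer.polarity_scores(response)
    print(source, scores["pos"], scores["compound"])
\end{verbatim}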
\section{Results}
\subsection{RQ1 findings: The proportion of non-word outputs}
The fine-tuned model performed worse than the pre-trained model, generating more outputs that were not English words. The proportions of non-word outputs of the pre-trained model versus the fine-tuned model were 5.8\% (18 out of 308 conversation pairs) and 40.6\% (125 out of 308 conversation pairs). The two-proportion test \cite{newcombe1998interval,newcombe1998two} showed a significant difference between these two proportions: the 96\% {\itshape confidence interval} is [0.281, 0.408] and the {\itshape sample estimates} were 5.5\% and 39.9\% respectively. Examples of non-word outputs included: "????????", "" and "\_\_\_\_." Examples of retained outputs included: "I see why he would want to keep doing this," "Wow! And these are things that you've sung with her before," and "It went really well."
\subsection{RQ2 findings: Length of response (number of words)}
We excluded all conversation pairs where the generated output from either the pre-trained or the fine-tuned model was a non-word, leaving 177 conversation pairs for analysis. We then counted the total number of words in each response. The therapists' responses contained, on average, 14.05 words ($SD = 40.14$). The pre-trained model, on average, generated 75.23 words per response ($SD = 114.40$). The fine-tuned model, on average, generated responses that contained 18.44 words ($SD = 43.55$). The {\itshape One-way Repeated ANOVA} among the three outputs showed a significant difference ($F(2, 176) = 39.42$, $p<0.001$). {\itshape Tukey's HSD} paired contrasts \cite{abdi2010tukey} showed a significant difference between the pre-trained model and the therapists ($p<0.001$) but not between the fine-tuned model and the therapists ($p=0.84$). (For the boxplot, see Fig. \ref{Boxplot}.)
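As an illustration, this comparison can be run along the following lines (a sketch only: the file name, column names, and long-format layout are hypothetical stand-ins, and \texttt{pairwise\_tukeyhsd} treats the groups as independent, standing in here for the paired contrasts reported above):
\begin{verbatim}
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Long-format data: one row per (conversation pair, source), with
# columns "pair" (1..177), "source" in {"therapist", "pretrained",
# "finetuned"}, and "n_words" (response length in words).
df = pd.read_csv("word_counts_long.csv")

# One-way repeated-measures ANOVA (within-subject factor: source).
res = AnovaRM(df, depvar="n_words", subject="pair",
              within=["source"]).fit()
print(res.anova_table)

# Tukey HSD contrasts between the three response sources.
print(pairwise_tukeyhsd(df["n_words"], df["source"], alpha=0.05))
\end{verbatim}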
\begin{figure}[h]
\centering
\includegraphics[width=10cm]{Boxplot.eps}
\caption{Boxplot of the number of words for each response of three outputs (The red points are mean values for each). Therapists' responses and the fine-tuned model responses are similar in length, while the pre-trained model had a significantly higher number of words compared to the therapists' responses in the transcripts.}
\label{Boxplot}
\end{figure}
The example below shows the length differences among the three outputs: the pre-trained model responded with a much longer response than the therapist's, while the fine-tuned model responded with a length similar to the therapist's. According to the MACM and conversation theory \cite{gunaratne2019multi, pask1976conversation}, this is a conversation context in which no new information is being introduced, so the responder should acknowledge existing information, as the therapist does in this example. However, the pre-trained model attempts to add new information with a higher information load, unlike the therapist's response and the fine-tuned model's. This example shows that the fine-tuned model follows the ground truth (i.e., the therapist's response) much better than the pre-trained model in terms of information load:
\textbf{Input}: {\itshape "You know, it's kind of funny because my brother is four years younger than me and for some reason, I don't remember growing up with him. But in the last seven years taking care of my mom, we got really close. And as far as when we would sit there and talk about our childhood, I say, well where were you during this time, I don't remember you. Because he was home most of the time, and I was outgoing to college and stuff. He started filling me in on stuff going on in the house that I didn't see. And I was shocked. We're starting to talk about things now."}
\begin{itemize}
\item {\textbf{Therapist's response}}: {\itshape "That's great."}
\item {\textbf{The pre-trained model's response}}: {\itshape "What did you promise your mother? ????? B: I promised her that I would guide and care for Jamy to the best of my knowledge. I don't think she was aware of this until this morning. I love my mother very much and wish her well-loved sons. We both miss her. I cannot afford to receive her well care, but if she needed anything personal, something you wanted, I can follow up or get it. If you need anything from the rest, NOT HARMFUL LIKE 5 TL;DR I will take care of you, and I only ask for pins and drama that you are safe and are safe in my refuge. She is a mental health professional... And most importantly, if you need anything, please do WHAT YOU NEED!!!"}
\item{\textbf{The fine-tuned model's response}}: {\itshape "Wow!"}
\end{itemize}
\subsection{RQ3 findings: Sentiment analysis}
Table \ref{tab:Descriptive} shows the descriptive statistics of the ten component sentiment scores of the three outputs, whose differences of means were tested with a {\itshape One-way Repeated ANOVA}.
\begin{table}
\caption{Descriptive statistics of 10 component sentiment scores of (1) The therapists' original responses, (2) the pre-trained model's responses, and (3) the fine-tuned model's responses.}
\label{tab:Descriptive}
\begin{tabular}{p{250pt}p{50pt}p{50pt}p{50pt}}
\toprule
Sentiment components ($n=177$) & Therapists' original responses (Mean, SD) & Pre-trained model's responses (Mean, SD) & Fine-tuned model's responses (Mean, SD)\\
\midrule
{Negative Adjectives (e.g. "afraid", "abnormal")} & -1.04, 0.49 & 0.19, 1.03 & -0.24, 0.80\\
{Positive Adjectives (e.g. "accessible", "acceptable", "accord")} & 1.72, 1.09 & -0.07, 0.48 & 0.32, 0.89\\
{Joy (e.g. "tantalizing", "lovable", "greeting")} & 2.82, 3.52 & 0.44, 0.80 & 0.59, 1.56\\
{Affect Friends and Family (e.g. "bride", "brother")} & 0.32, 0.49 & 0.23, 0.26 & 0.13, 0.24\\
{Fear and Disgust (e.g. "inept", "perverted")} & 0.02, 0.09 & 0.13, 0.28 & 0.05, 0.20\\
{Positive Nouns (e.g. "abundance")} & 0.04, 0.38 & -0.09, 0.50 & 0.02, 0.30\\
{Respect (e.g. "acknowledgment", "admiration")} & 0.02, 0.10 & 0.06, 0.21 & 0.02, 0.12\\
{Trust Verbs (e.g. "proven", "merchant", "pawn")} & 0.16, 0.23 & 0.13, 0.19 & 0.07, 0.18\\
{Failure (e.g. "arrest", "attack")} & 0.01, 0.07 & 0.05, 0.13 & 0.02, 0.09\\
{Positive Verbs (e.g., "abound")} & 0.05, 0.24 & -0.03, 0.44 & 0.04, 0.30\\
\bottomrule
\end{tabular}
\end{table}
The {\itshape One-way Repeated ANOVA} among the three outputs showed that "Positive Verbs" ($F(2, 176)=2.88$, $p=0.06$) and "Respect" ($F(2,176)=2.64$, $p=0.07$) did not differ significantly among the three outputs. Both the pre-trained and fine-tuned models generated responses similar to the therapists' on the "Positive Verbs" and "Respect" component scores. The remaining eight component scores differed significantly among the three response types (all $p<0.05$). We further tested these eight component scores (i.e., "Negative Adjectives," "Positive Adjectives," "Joy," "Affect Friends and Family," "Fear and Disgust," "Positive Nouns," "Trust Verbs," "Failure") with {\itshape Tukey's HSD} paired contrasts \cite{abdi2010tukey}. We grouped the components along two paired-contrast dimensions: the therapists' original responses versus the pre-trained model outputs, and the therapists' original responses versus the fine-tuned model outputs. This yielded three non-empty categories, because every component differed significantly from the therapists' responses in at least one of the two dimensions. Table \ref{tab:sentimentcomponents} shows the frequency in each category.
\begin{table}[htbp]
\centering
\caption{The number of sentiment components whose scores of the pre-trained model or the fine-tuned model significantly differ from therapists' responses or not: seven out of eight sentiment components differ significantly between the pre-trained model and therapists while five out of eight sentiment components differ significantly between the fine-tuned model and therapists.}
\begin{tabular}{p{100pt}p{100pt}p{100pt}p{100pt}}
\toprule
\multicolumn{2}{c}{{Number of sentiment components}} & \multicolumn{2}{c}{Therapists vs. The pre-trained model} \\
\cmidrule{3-4}
\multicolumn{2}{c}{} & Significantly different & Not significantly different \\
\midrule
{Therapists' vs. The fine-tuned model} & Significantly different & N=4 (Negative Adjectives, Positive Adjectives, Joy, Affect Friends and Family) & N=1 (Trust Verbs) \\
\cmidrule{2-2} & Not significantly different & N=3 (Fear and Disgust, Failure, Positive Nouns) & N=0 \\
\bottomrule
\end{tabular}%
\label{tab:sentimentcomponents}%
\end{table}%
\subsubsection{The sentiment component whose score significantly differed between the fine-tuned model and therapists but did not significantly differ between the pre-trained model and therapists.}
Only one sentiment component satisfied this condition: "Trust Verbs" ($p<0.001$). Based on the mean values (therapists' original responses 0.16; pre-trained model 0.13; fine-tuned model 0.07), the fine-tuned model used fewer positive words on the "Trust Verbs" component than the therapists did. The example below demonstrates how the fine-tuned model produced fewer positive "Trust Verbs" words than the therapists:
\textbf{Input}: {\itshape "I guess somebody who was actually interested in me and didn't come to the door and say, "How's ?" instead of-- Well, although many people have said, "How are you? How are you holding up?" And many have said, "How can you stand this, day after day?" But that was another affirmation that I am maybe worth something or something like that."}
\begin{itemize}
\item {\textbf{Therapist's response}}: {\itshape "Oh, we were glad to be there for you. That's the whole point."}
\item {\textbf{The pre-trained model's response}}: {\itshape "How am I holding up NOW?" I: ?"Do I need help him a little? Do I have to? Cause I'm here ?" and any other nonsense. MYMITT HIGH SCHOOL SYMP: ?"MYMITT"<|endoftext|>Background and Synthesis:"}
\item{\textbf{The fine-tuned model's response}}: {\itshape "Well, you sound like a very open person. I mean, you have a lot of positives around you"}
\end{itemize}
\subsubsection{The sentiment components whose score significantly differed between the pre-trained model and therapists but did not significantly differ between the fine-tuned model and therapists.}
Three components were in this category: "Fear and Disgust" ($p<0.001$), "Failure" ($p<0.001$), and "Positive Nouns" ($p=0.010$). Based on the means in Table \ref{tab:Descriptive}, the pre-trained model showed more of the negative sentiment components "Fear and Disgust" and "Failure," and fewer of the positive sentiment components such as "Positive Nouns," than the therapists.
For instance, the example below shows how the "Fear and Disgust" component score of the pre-trained model is more negative than the therapists' (means: therapists' original responses 0.02; pre-trained model 0.13; fine-tuned model 0.05):
\textbf{Input}: {\itshape "It's the first year, actually, that they haven't but, um – so my daughter, actually we are thinking of – it's her idea – sending her alone on a plane to Florida to visit her grandparents and her cousins."}
\begin{itemize}
\item {\textbf{Therapist's response}}: {\itshape "Oh, good. Well, she's old enough to do that."}
\item {\textbf{The pre-trained model's response}}: {\itshape "Albuquerque was‖sport bad, it's perhaps lucky every food recovered."}
\item{\textbf{The fine-tuned model's response}}: {\itshape "? Well, that's great."}
\end{itemize}
Below is an example of how the "Failure" component score of the pre-trained model is more negative than the therapists' (means: therapists' original responses 0.01; pre-trained model 0.05; fine-tuned model 0.02):
\textbf{Input}: {\itshape"So, he's a dear, he's really trying to be helpful too, but it's just that he is busy with his wife and his family. The only thing is if I do something, I usually have to sandwich it in with everything else that is going on in my life, and I can't just walk away and say, oh, well I'm going to go, we're going to go somewhere new or do something. As you know. So that's why I really appreciated these ladies because they have helped me to just stick up for myself a little bit more."}
\begin{itemize}
\item {\textbf{Therapist's response}}: {\itshape "That is great, I'm glad to hear that, and I thank you for sharing it with me. It is helpful to us."}
\item {\textbf{The pre-trained model's response}}: {\itshape"? I think the fact that you have said that it is very challenging because you said you were "21 years old me and my marriage, decided at 17 … that it was time to part ways from her" is something you weren't going to do. So far, you've kind of backed off. What has excited you from that?" CSI Yeow Dengpetian Goldman Dear Richard, thank you. Can I return your question about you but point fingers back at it? You said that you wouldn't pursue your law license. I went back and looked at the years I spent at himieva. I found 18 of 18 major events in his name, including pickpocketing, Pimp Involvement, Driver Felony, MS Dump him, Josephine Smith, etc. I really wanted to dig a little deeper on that because at least there was some information there, but now, like Eric, he immediately added insult to injury (...)"}
\item{\textbf{The fine-tuned model's response}}: {\itshape "Sure, sure."}
\end{itemize}
Below is an example of how the "Positive Nouns" component score of the pre-trained model is lower than the therapists' (means: therapists' original responses 0.04; pre-trained model -0.09; fine-tuned model 0.02):
\textbf{Input}: {\itshape"at many times for you. But there are others in similar circumstances. So hopefully, this will be a little bit of help to you and hopefully not taking up too much of your time 'cause I know things are busy."}
\begin{itemize}
\item {\textbf{Therapist's response}}: {\itshape "No, I like it already. I mean, it's great. I mean"}
\item {\textbf{The pre-trained model's response}}: {\itshape"what is your personal opinion about burritos reppin where people watch tv and look at the [inappropriate sexual slang] bunch?"}
\item{\textbf{The fine-tuned model's response}}: {\itshape "Well, thank you."}
\end{itemize}
This example also showed the model using inappropriate slang out of context, which can generate possible harm.
\subsubsection{The sentiment components whose score significantly differed between the pre-trained model and therapists and between the fine-tuned model and therapists.}
Four components were in this category: "Negative Adjectives" (both models: $p<0.001$), "Positive Adjectives" (both models: $p<0.001$), "Joy" (both models: $p<0.001$), and "Affect Friends and Family" (pre-trained model: $p=0.038$; fine-tuned model: $p<0.001$). Based on the means in Table \ref{tab:Descriptive}, both models show more negative words on the "Negative Adjectives" component, and fewer positive words on the "Positive Adjectives," "Joy," and "Affect Friends and Family" components, than the therapists.
Below is an example of how both the pre-trained and fine-tuned models generated more negative words than the therapists on the "Negative Adjectives" component (means: therapists' original responses -1.04; pre-trained model 0.19; fine-tuned model -0.24):
\textbf{Input}: {\itshape "And even though you have this big yard and you want to go back in your house, you're trapped because they're standing in the whole doorway. And they're making you listen to them, and it's something you disagree with. And you don't want to get in an argument about it. And so finally I had to tell people because there was a little bit of friction going on, and I had to say, you know what? This isn't about you. This whole situation is not about you. It's not about me. It's about our mother. And that's what we have to be happy about and try and deal with. Not our petty problems."}
\begin{itemize}
\item {\textbf{Therapist's response}}: {\itshape "That is so great, that is so healthy!"}
\item {\textbf{The pre-trained model's response}}: {\itshape "But that's so terrible making the ordinary tiny little things feel bigger."}
\item{\textbf{The fine-tuned model's response}}: {\itshape "Yeah. I'm sure it was stressful for many people."}
\end{itemize}
Notice how the therapist focused on staying positive, whereas both GPT-2 models responded negatively.
Similarly, both GPT-2 models--the pre-trained model and the fine-tuned model--generated fewer positive words than the therapists on the "Positive Adjectives" component (means: therapists' original responses 1.72; pre-trained model -0.07; fine-tuned model 0.32):
\textbf{Input}: {\itshape "I know. Can you imagine? Oh, my gosh, I would have just died."}
\begin{itemize}
\item {\textbf{Therapist's response}}: {\itshape "Well, it's a good thing she noticed right away."}
\item {\textbf{The pre-trained model's response}}: {\itshape "Wow, I feel you. That timing is incredible for a good old-fashioned real-life rock star like me, earthbound."}
\item{\textbf{The fine-tuned model's response}}: {\itshape "I've thought about that. If I ?"}
\end{itemize}
Below is another example of how both GPT-2 models generated fewer positive words than the therapists on the "Joy" component (means: therapists' original responses 2.82; pre-trained model 0.44; fine-tuned model 0.59):
\textbf{Input}: {\itshape "it's very rewarding. it's really hard, but it's, um – I love my job. So –"}
\begin{itemize}
\item {\textbf{Therapist's response}}: {\itshape "Yeah. Good for you!"}
\item {\textbf{The pre-trained model's response}}: {\itshape "????NR again. ????"}
\item{\textbf{The fine-tuned model's response}}: {\itshape "ia, ok."}
\end{itemize}
Neither GPT-2 model's response made sense here, where the user's input called for positive reinforcement, which is exactly what the therapist was attempting to provide.
Lastly, the following example shows how the GPT-2 models produced less "Affect Friends and Family" than the therapists' responses (means: therapists' original responses 0.32; pre-trained model 0.23; fine-tuned model 0.13):
\textbf{Input}: {\itshape "They are. They are still together, and they are--my folks just turned 80, both of them. And so they are--it's hard to be far from them. But actually, we do a lot of Skyping, a lot of video conferencing together. So that's nice."}
\begin{itemize}
\item {\textbf{Therapist's response}}: {\itshape "Well, my folks. But we still have the kids and still, get together. And I know that we're going to leave them--when we go, we're going to leave them in good shape."}
\item {\textbf{The pre-trained model's response}}: {\itshape "Hey Glenstar!"}
\item{\textbf{The fine-tuned model's response}}: {\itshape "What do you do? How does that work?"}
\end{itemize}
In summary, we compared the pre-trained and fine-tuned versions of the generative-based model against the therapists' original responses along three research questions and their corresponding analyses: the proportion of non-word outputs, the length of responses (number of words), and sentiment. Results showed that the fine-tuned model created more non-word outputs than the pre-trained model. For the length of responses, the fine-tuned model performed closer to the therapists than the pre-trained model did. For the sentiment analysis, both the pre-trained and fine-tuned models were likely to generate more negative or fewer positive outputs than the therapists. Below, we discuss the implications of our findings for designing human-AI interaction in therapy chatbots.
\section{Discussion}
Our findings showed limitations of using generative-based chatbot models in therapy contexts. Assessing simplified speech quality measures on non-word proportions, length, and sentiment, we saw that much work is still needed before generative-based language models can be used in therapy contexts, despite their proven technical performance. In health contexts especially, safety, credibility, a personality suitable for the context, and nuanced responses are critical qualities for chatbots to deliver. Our findings highlight substantial challenges in designing human-AI interaction, given the models' unpredictable responses and the need for significantly larger training datasets. Below, we expand on our main findings and discuss potential reasons for the results and how future work can address these challenges.
Both GPT-2 models---pre-trained and fine-tuned---generated a substantial proportion of non-word outputs. Such outputs would confuse users, interfering with the fidelity of the patient-therapy chatbot interaction. The reason why both models created non-word outputs, and why the fine-tuned model created more of them than the pre-trained model, could be the difference between the datasets used for pre-training and our data used for fine-tuning and evaluation. The datasets for pre-training are based on a web corpus filtered and modified by humans, in which each sentence is complete and well-formatted \cite{radford2019language}. By contrast, the transcripts of therapy conversations used for fine-tuning and evaluation were conversation-based dialog pairs reflecting speakers' habits, rife with informal usage and partial segments of sentences. Therefore, when the models encountered such inputs, unfamiliar compared to the data used for pre-training, they might generate non-word outputs accordingly. Although researchers claimed that the OpenAI GPT-2 could process any format of input regardless of pre-processing, tokenization, or vocabulary size \cite{radford2019language}, the model still needs improvement. This is a common problem for other pre-trained models, such as BERT \cite{devlin2018bert}, ERNIE (THU) \cite{zhang2019ernie}, BART \cite{lewis2019bart}, RoBERTa \cite{liu2019roberta}, and InfoWord \cite{kong2019mutual}, which are also trained on formal text such as Wiki, book, Web, news, and story corpora \cite{qiu2020pre}. Researchers have found a similar phenomenon in that BERT is not robust to misspellings \cite{sun2020adv}. To avoid generating non-word outputs, therapy chatbots would need to check all responses in real time, detect and filter out non-word outputs, and regenerate responses. But such a solution would delay the model's responses and cost computational resources. Recent work showed that both the generalization and the robustness of pre-trained models for natural language processing can be improved through adversarial pre-training or fine-tuning, which trains the model on adversarial examples so that it can withstand strong adversarial attacks \cite{liu2020adversarial,zhu2019freelb}. Adversarial training has been widely used in computer vision \cite{shafahi2019adversarial}; however, it is still challenging for text \cite{qiu2020pre}. Future studies should consider using adversarial training to reduce the proportion of non-word outputs; a sketch of the idea follows.
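The sketch below applies a fast-gradient-method (FGM) style perturbation to the word-embedding weights during one training step. This is one simple variant of the adversarial training cited above, not the exact procedure of \cite{liu2020adversarial} or \cite{zhu2019freelb}; the model, embedding module, batch, and epsilon are placeholders, and the model is assumed to return an object with a \texttt{loss} attribute when labels are supplied, as Hugging Face language models do:
\begin{verbatim}
import torch

def fgm_step(model, embedding, batch, optimizer, epsilon=1.0):
    # batch: dict of tensors (input_ids, attention_mask, labels).
    loss = model(**batch).loss
    loss.backward()                        # clean gradients

    grad = embedding.weight.grad
    norm = torch.norm(grad)
    if norm != 0 and not torch.isnan(norm):
        delta = epsilon * grad / norm      # worst-case direction
        embedding.weight.data.add_(delta)  # perturb embeddings
        model(**batch).loss.backward()     # adversarial gradients
        embedding.weight.data.sub_(delta)  # restore the weights

    optimizer.step()
    optimizer.zero_grad()
\end{verbatim}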
The OpenAI GPT-2 performed well in adjusting the information load of its output toward the ground truth (i.e., the therapist's response) during the fine-tuning process. Therapists in our dataset maintained an ideal information load in their responses based on patients' input and the conversation context. The average length of the pre-trained model's responses was significantly longer than that of the therapists, which could result in information overload. Researchers have found that information overload impacts the speaker's responsiveness and that the likelihood of a response is suppressed if users are overloaded \cite{gunaratne2020effects}. After fine-tuning, the model generated responses of similar length to the therapists'. This result indicates that the fine-tuning process of GPT-2 potentially adjusted information overload, which is a critical factor in the successful continuation of a conversation. To maintain an appropriate information load, language models must decide when to stop generating longer responses. There are trade-offs between generating lengthy, adequate textual outputs and generating them efficiently in short outputs \cite{gatt2018survey, rieser2009natural}. Early approaches to natural language generation include modular architectures and planning perspectives. Modular architectures treat language generation as a pipeline of sub-tasks, including text planning, sentence planning, and realization \cite{reiter1997building}. Planning perspectives view language generation as planning links to satisfy a particular goal, with generation stopping once the goal is achieved \cite{rieser2009natural}. These approaches tended to sacrifice the efficiency of short responses in favor of lengthy, adequate information \cite{gatt2018survey}. This study shows, however, that the more recent approach, which emphasizes statistical learning of correspondences between inputs and outputs, can manage the information load through fine-tuning on domain datasets.
The sentiment analysis results imply that we must be cautious about directly applying generative-based models without any human filtering in therapy chatbot applications. Both the pre-trained and fine-tuned models were likely to generate more negative adjectives and fewer positive adjectives and words than the therapists. The pre-trained model generated more fear, disgust, and failure sentiments and fewer positive nouns than the therapists, while the fine-tuned model generated fewer trust-related verbs. This behavior could cause adverse events when therapy chatbots provide services in therapy contexts. Patients avoid seeking information from providers if they feel discomfort due to negative responses from the therapist \cite{case2005avoiding, ferlatte2019perceived}. Such responses may produce potentially harmful interactions, result in ineffective therapy, discourage patients from seeking help when they need mental health support, and create negative experiences. Patients' prior experience of seeking mental help greatly affects their likelihood of seeking mental health help in the future \cite{jon2019perceived, planey2019barriers}. Deploying therapy chatbots whose responses are not perfectly moderated and approved, as with the approach taken by this study's chatbot, can therefore be problematic.
Possible reasons for obtaining more negative or less positive outputs from the models lie in two aspects: the transformer-based model architecture and the dataset size for fine-tuning. The OpenAI GPT-2 is a transformer-based model \cite{oluwatobi2020dlgnet}. The transformer is a model architecture that directly models the dependency between every two words in a sequence, allowing the model to learn language representations and generate natural-language-like outputs \cite{qiu2020pre}. Other transformer-based pre-trained models include GPT \cite{radford2018improving}, GPT-3 \cite{brown2020language}, BERT \cite{devlin2018bert}, TransformerXL \cite{dai2019transformer}, ERNIE \cite{zhang2019ernie}, and ERNIE 2.0 \cite{sun2020ernie}. However, the context influencing the direction of a conversation is missed in such architectures because they fail to include human cognitive factors such as the speaker's intents. This could explain why both models generated more negative or less positive responses than therapists, since therapists intend to apply more positive reinforcement in therapy than occurs in ordinary conversation. In addition, the complex, deeply non-linear transformer architecture is hard to interpret, and its low degree of transparency makes it hard to improve accordingly \cite{qiu2020pre}. The downside of this approach is that we have no access to the meaning and impact of each parameter of the deep transformer architecture. Explainable artificial intelligence, which aims to make model architectures transparent and understandable, could be a potential solution to this problem \cite{xu2019explainable, zednik2019solving}.
In addition, the small size of the fine-tuning dataset could have influenced the performance of the fine-tuned model. The OpenAI GPT-2 medium model has 345 million parameters and was trained on over 8 million documents comprising 40 GB of text in the pre-training process \cite{radford2019language}. The fine-tuning dataset in this project is less than 7 MB, significantly smaller than the data that trained the pre-trained model. Pre-trained models are created to mitigate this problem: they are expected to avoid overfitting on small data \cite{erhan2010does}, to learn universal language representations, and to provide a model initialization for better generalization performance \cite{qiu2020pre}. However, the problem persists because of parameter inefficiency and because every application task requires its own fine-tuned parameters. Large-scale domain datasets for fine-tuning pre-trained models are still needed. In medical domains, however, due to privacy concerns, sensitive datasets such as therapist-patient conversations are especially challenging to collect at the scale these models require. Open-source data platforms in healthcare, such as the Inter-university Consortium for Political and Social Research (ICPSR) \cite{taylor1975inter}, Healthdata.gov (http://www.healthdata.gov), the Centers for Disease Control and Prevention data and statistics (https://www.cdc.gov/datastatistics/index.html), and the CMS Data Navigator (https://dnav.cms.gov/Default.aspx), provide data in various formats, including interviews, bio-measures, and questionnaires; unlike these general datasets, however, therapy conversation data have little chance of being shared on open-source data platforms due to confidentiality agreements \cite{bond2002law}. A recent scoping review indicated a delay in applying artificial intelligence to chatbots in mental health compared to chatbots in other fields such as customer service \cite{abd2019overview}. Some researchers have proposed fixing the original parameters of pre-trained models and adding small fine-tunable adaptation modules for each specific task \cite{stickland2019bert}; a sketch of this idea follows. Future studies could consider applying such solutions to improve model performance.
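The adapter idea can be sketched as follows (an illustration under our own assumptions, not the exact architecture of \cite{stickland2019bert}): the pre-trained parameters are frozen and only small bottleneck modules are trained. Here \texttt{pretrained\_model} is a placeholder for the frozen language model, and the widths match GPT-2 medium:
\begin{verbatim}
import torch
import torch.nn as nn

class Adapter(nn.Module):
    # Bottleneck with a residual connection; only these small
    # modules are trained while the pre-trained weights stay frozen.
    def __init__(self, d_model=1024, d_bottleneck=64):
        super().__init__()                 # 1024 = GPT-2 medium width
        self.down = nn.Linear(d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, hidden):
        return hidden + self.up(self.act(self.down(hidden)))

# Freeze every pre-trained parameter; train only adapter parameters.
for p in pretrained_model.parameters():
    p.requires_grad = False

n_layers = 24                              # GPT-2 medium depth
adapters = nn.ModuleList(Adapter() for _ in range(n_layers))
optimizer = torch.optim.Adam(adapters.parameters(), lr=1e-4)
\end{verbatim}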
The measurements we used to evaluate the chatbot were preliminary. As a next step, we should examine chatbots' responses at the sentence level and the task level to investigate whether each response is suitable within the larger context. Having examined response lengths, the next step is to develop more sophisticated measures of information overload for each conversational pair's context. For instance, the MACM of conversation \cite{gunaratne2019multi} shows how the intentions and acts of the speakers can change the level of information load. Depending on where the conversation sits within the therapy context, expectations for information load should differ. We should also expand the sentiment analysis beyond positive reinforcement-related conversation pairs, analyzing whether sentiment is appropriate across the therapy session as a whole.
\section{Conclusion}
Our study is the first to evaluate the basic qualities of the meta-information of generative-based therapy chatbot responses. As generative-based models become widely disseminated as AI solutions, and as more healthcare tools adopt AI in the process, we must understand the possible opportunities and negative consequences of these new technical solutions. Our work contributes to the increasing interest in building therapy chatbots and to the rapidly evolving field of social and everyday computing, in which a myriad of AI and machine learning-based solutions are becoming integrated and pervasive.
\bibliographystyle{unsrt}
\section{Introduction}
The action for a $p$-brane comprises a kinetic term and a WZ (Wess-Zumino) term \cite{green84,Bergshoeff87,ach87}. The WZ term ensures that a local ``$\k$ symmetry" is present, which means that only half the fermionic degrees of freedom are physical \cite{siegel83-kappa}. The Lagrangian is not manifestly invariant under the global action of the supertranslation group (it is not ``left invariant") due to the quasi-invariance (invariance up to a total derivative) of the WZ term. The WZ term is the pullback of a $(p+1)$-form $B$ which is a potential for a field strength $H$. Although $H$ is left invariant, in standard superspace it is impossible to find a left invariant potential $B$. In terms of CE (Chevalley-Eilenberg) cohomology \cite{chevalley48} this means that $H$ is a nontrivial cocycle. In fact, $H$ is characterized as the unique nontrivial CE $(p+2)$-cocycle of dimension $p+1$ \cite{azc89-2}. Two avenues of research have resulted from this fact.
The first area of research concerns topological charge algebras. The Noether charges associated with left invariance of an action are phase space generators of the left group action (``left generators''). For manifestly left invariant actions, the algebra of Noether charges is the same as the underlying algebra of symmetries. This is the ``minimal algebra'' of Noether charges. However, Lagrangians are often quasi-invariant under the action of symmetry transformations. In this case the Noether charges need to be modified in order to ensure their conservation. The conserved charges then obey an algebra which is a modification of the minimal algebra by a topological ``anomalous term'' \cite{witten78}. This is the case for $p$-brane actions, where quasi-invariance of the WZ term under the action of the supertranslation group means that the Noether charges satisfy an algebra which is an extension of the supertranslation algebra by a topological term \cite{azc89}. In the conventional formulation of superspace, the fermionic directions have trivial topology \cite{dewitt}. In this case, the anomalous term simplifies to a form which can be related to PBRS (partial breaking of rigid supersymmetry) \cite{sorokin97,townsend97}. The algebra of constraints for the action is also modified in the presence of the WZ term \cite{azc91}. The constraints can be identified with generators of the right group action (``right generators"), thus leading to a modified algebra of right generators. The modified algebras of Noether charges and constraints can also be related to a construction involving ghost fields \cite{azc91}. A BRST-style ``ghost differential" $s$ acting on an infinite dimensional ``loop superspace" is introduced. The anomalous term is then the result of solving cohomological descent equations. A similar construction of a finite dimensional nature has also been considered \cite{Bergshoeff98}.
In a second line of research, superspaces associated with extensions of the supertranslation algebra have been discovered which allow manifestly left invariant WZ terms to be constructed for the $p$-brane action \cite{siegel94,bergshoeff95,chrys99}. The resulting actions can be considered equivalent to the standard action since the Lagrangians differ only by a total derivative (and the extra superspace coordinates appear only in this derivative). Due to manifest left invariance of the Lagrangian, the Noether charges are not modified and they satisfy the minimal algebra (which in this case reflects the underlying extended supertranslation algebra). There is partial correspondence with the standard superspace formulation of the action here. The anomalous term in the standard superspace formulation can be identified with a corresponding term in the minimal algebra of an extended superspace formulation \cite{chrys99}. However, the full extended superalgebra is not generated in this way. In the extended superspace formulation there are fermionic Noether charges which complete the full algebra. However, in the standard superspace formulation, the assumption of trivial fermionic topology prevents the existence of any analog of these fermionic charges. As a result, one obtains only the ``first line" of the full superalgebra. In this paper we will show that a full agreement between the algebras of the different action formulations is possible. In doing so we will also address the fact that there is more than one extended superspace which allows a manifestly left invariant WZ term to be constructed. Which of these extended superspaces should be generated by the anomalous term of the standard action?
There are hints that incorporating fermionic charges (whether they are topological in nature or arise from fermionic boundary conditions) in brane theory may yield interesting results. For example, the action of supersymmetries on bosonic charges clearly produces fermionic charges \cite{hatsuda00}. Should these charges vanish? Quantizing in the standard flat background allows one to choose a trivial representation for the fermionic charges. However, in certain superspaces nonvanishing fermionic charges are actually \textit{required} \cite{peeters03}. The construction of topological anomalous terms has always allowed for nontrivial topology of the bosonic coordinates (otherwise even the classical charges would vanish). However, the terms that would result from nontrivial fermionic topology have usually been omitted.
In this paper we investigate $p$-brane superalgebras by focusing on the underlying double complex cohomology of the anomalous terms. A number of new results follow. The anomalous terms of the algebras of left/right generators are shown to derive from representatives of a single complex cocycle associated with the $p$-brane. The presence of gauge freedom for these representatives leads to the identification of a new freedom in the anomalous term of the Noether charge algebra. It follows that this anomalous term is not well defined as a form, but as an entire \textit{cohomology class} $[M]$. In the standard superspace background, $[M]$ is shown to be a unique, nontrivial class which may be constructed on the basis of the same dimensionality and Lorentz invariance requirements used to construct $H$ in \cite{azc89-2}. It is also shown that $[M]$ defines a spectrum of extended superalgebras. When fermionic charges are allowed, these superalgebras are realized as the topological charge algebras of the action.
The construction is then applied to the GS (Green-Schwarz) superstring. The topological charges are identified as extra generators of the Noether charge algebra. The resulting topological charge algebra is shown to be a one parameter spectrum of extended superalgebras. When fermionic charges are retained, this spectrum contains three extended algebras of interest. The first is an algebra developed by Green, which has a fermionic ``central" extension \cite{green89}. The second is an algebra which extends the Green algebra by a noncentral bosonic generator. Both of these algebras have been used to construct string Lagrangians that have \textit{manifest} left invariance, and are thus of physical significance \cite{siegel94,bergshoeff95,chrys99}. The third algebra, which is of the type considered in \cite{hatsuda00,peeters03}, results from the action of supersymmetry on the bosonic charge. It thus emerges naturally that if fermionic charges are retained, \textit{all} the known extended algebras of the superstring appear in the spectrum of topological charge algebras of the standard action. Since the spectrum is not simply obtained by rescaling known algebras, new superalgebras also result from the process.
The structure of this paper is as follows. In section \ref{sec:Extended algebras} our conventions are outlined and the properties of $p$-branes are reviewed. The extended superspaces used for the GS superstring are also presented. In section \ref{The p-brane double complex} the ghost differential $s$ is defined, and is then used to define a superspace double complex. An exactness theorem for $s$ is presented, and a total differential $D$ is defined which is shown to evaluate the CE cohomology of the WZ term. It is shown that the $p$-brane has a naturally associated $D$ cocycle which is defined by the $(p+2)$-form $H$. In section \ref{sec:Algebra modifications} the construction of topological anomalous terms is reviewed in a fully integrated approach. It is shown that the anomalous term defines an extension of the underlying superalgebra by an ideal. Modified generators of the right action are defined. The resulting modified algebra is shown to derive from the representative $H$ of the $p$-brane $D$ cocycle. The relationship between the right generator algebra and the constraint algebra is given. Cohomological properties of the anomalous term that follow from the CE properties of $H$ are presented in two theorems. The first theorem defines the anomalous term as a cohomology class. The second theorem states that in standard superspace this class can be constructed using uniqueness of the cocycle and dimensional analysis. In section \ref{sec:Application to the GS superstring} the construction is applied to the GS superstring. Both standard and extended superspace actions are investigated. We point out that the reader may find it helpful to read the explicit examples of this section in conjunction with the general theory of the preceding sections. The algebra of right generators and the constraint algebra are evaluated. Both are shown to agree with the cocycle construction. The topological charge algebra is found by solving the descent equations. The most general gauge transformation of the anomalous term is shown to contain a single degree of freedom. This freedom is used to generate a spectrum of algebras that includes the known extended superalgebras of the superstring. Properties of the extended superspace actions are shown to be consistent with the general construction. In section \ref{sec:Conclusion} some comments on future directions for research are made.
\section{Preliminaries}
\label{sec:Extended algebras}
\subsection{$p$-branes}
\label{sec:Preliminaries}
The superalgebra of the supertranslation group is\footnote{The charge conjugation matrix will not be explicitly shown. It will only be used to raise/lower indices on gamma matrices, which have the standard position $\Gamma^{\a}{}_{\b}$. $\Gamma_{\a\b}$ is assumed to be symmetric. Majorana spinors are assumed throughout (thus, for example, $\bar\t_{\a}=\t^{\b}C_{\b\a}$).}:
\bea
\{Q_{\alpha},Q_{\beta}\}&=&\Gamma^{a}{}_{\alpha\beta}P_{a}.
\eea
The corresponding group manifold can be parameterized:
\be
g(Z)=e^{x^{a}P_{a}}e^{\theta^{\a}Q_{\a}},
\ee
where $Z$ is the combined notation for coordinates:
\[
Z^{A}=\{x^{a},\theta^{\a}\}.
\]
This group can be constructed as the coset space consisting of the super-Poincar\'{e} group modulo the Lorentz subgroup; however, for our purposes this is an unnecessary complication. In this paper it is valid to assume that expressions are Lorentz invariant if upper indices are contracted with lower ones.
The left vielbein is defined by:
\begin{eqnarray}
L(Z)&=&g^{-1}(Z)dg(Z)\\
&=&dZ^{M}L_{M}{}^{A}(Z)T_{A}\nn,
\end{eqnarray}
where $T_A$ represents the full set of superalgebra generators. The right vielbein is defined similarly:
\begin{eqnarray}
R(Z)&=&dg(Z)g^{-1}(Z)\\
&=&dZ^{M}R_{M}{}^{A}(Z)T_{A}\nn.
\end{eqnarray}
The left group action is defined by:
\be
g(Z')=g(\e)g(Z),
\ee
where $\e^A$ is an infinitesimal constant. The corresponding superspace transformation is generated by the operators:
\be
\label{2:scalar left gen}
Q_{A}=R_{A}{}^{M}\del_{M},
\ee
where $R_{A}{}^{M}$ are the inverse right vielbein components, defined by:
\be
R_{A}{}^{M}R_{M}{}^{B}=\d_{A}{}^{B}.
\ee
$Q_A$ are generators of the left group action, and will be referred to as the ``left generators." Forms that are invariant under the global left group action will be called ``left invariant." The vielbein components $L^{A}$ are left invariant by construction. Their explicit structure is:
\begin{eqnarray}
L^{a}&=&dx^{a}-\half \dtgtu{a}\\
L^{\alpha}&=&d\theta^{\alpha}\nn.
\end{eqnarray}
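These components follow from a short computation (a sketch; overall signs depend on the conventions chosen for $\bar\t$). Since $[Q_{\a},P_{a}]=0$, and since $(\t^{\a}Q_{\a})^{2}=0$ (the symmetric bracket $\{Q_{\a},Q_{\b}\}$ is contracted with the antisymmetric combination $\t^{\a}\t^{\b}$), the exponential truncates to $e^{\t Q}=1+\t^{\a}Q_{\a}$, and one finds:
\bea
g^{-1}dg&=&dx^{a}P_{a}+d\t^{\a}Q_{\a}-\half[\t^{\b}Q_{\b},d\t^{\g}Q_{\g}]\\
&=&dx^{a}P_{a}+d\t^{\a}Q_{\a}-\half d\bar\t\G^{a}\t\, P_{a}\nn,
\eea
where the commutator is evaluated using $[\t^{\b}Q_{\b},d\t^{\g}Q_{\g}]=-\t^{\b}d\t^{\g}\{Q_{\b},Q_{\g}\}$, in agreement with the components displayed above.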
Indices $A,B,C,D$ will be used to indicate components with respect to this basis, while $M,N,L,P$ will be used for the coordinate basis. The right group action is defined by:
\be
g(Z')=g(Z)g(\e).
\ee
The corresponding superspace transformation is generated by the operators:
\be
\label{right generators}
D_{A}=L_{A}{}^{M}\del_{M},
\ee
where $L_{A}{}^{M}$ are the inverse left vielbein components, defined by:
\be
L_{A}{}^{M}L_{M}{}^{B}=\d_{A}{}^{B}.
\ee
$D_A$ are generators of the right supertranslation group action, and will be referred to as the ``right generators." They are also commonly known as ``supercovariant derivatives" since they commute with the left generators as a result of the associativity of group multiplication. However, unlike the $Q_{A}$ they do not generate global symmetries of the action. The left and right vielbein and inverse vielbein components have been evaluated and placed in appendix \ref{sec:app:standard vielbein components} for reference.
The NG (Nambu-Goto) action for a $p+1$ dimensional manifold embedded in the background superspace is:
\be
S=-\int d^{p+1}\s\sqrt{-g}.
\ee
The integral is over the $p+1$ dimensional ``worldvolume," which has coordinates $\s^{i}$ and is embedded in superspace. The worldvolume metric $g_{ij}$ is defined using pullbacks of the left vielbein:
\bea
L_{i}{}^{A}&=&\del_{i}Z^{M}L_{M}{}^{A}\\
g_{ij}&=&L_{i}{}^{a}L_{j}{}^{b}\eta_{ab}\nn,
\eea
and $g$ denotes $\det g_{ij}$. A $p$-brane is the $\k$-symmetric generalization of the NG action. The $p$-brane action is:
\be
\label{3:p-brane action}
S=-\int d^{p+1}\s\sqrt{-g}+\int B.
\ee
The first term of the action is the ``kinetic" term. The second term is the WZ term, which is the integral over the worldvolume of the pullback of a superspace form $B$. $B$ is defined by the property\footnote{Wedge product multiplication of forms is understood.}:
\bea
\label{3:H def}
dB&=&H\\
&\propto& d\t^{\a}d\t^{\b}L^{a_{1}}\ldots L^{a_{p}}(\G_{a_{1}\ldots a_{p}})_{\a\b}.\nn
\eea
The proportionality constant depends on $p$ and is determined by requiring $\k$ symmetry of the action. There are certain identities required to ensure the consistency of this definition. Firstly, closure of $H$ requires a Fierz identity:
\be
\label{3:p-brane fierz}
\G^{[a_{1}...a_{p}]}{}_{(\a\b}\G_{a_p\d\e)}=0.
\ee
This condition on the gamma matrices can only be satisfied for certain combinations of $p$ (spatial dimension of the brane) and $d$ (superspace dimension) \cite{evans88}. The allowed values of $(p,d)$ (called the ``minimal branescan") are such that:
\be
\label{3:p brane Gamma symmetry requirement}
(\G_{[a_{1}...a_{p}]})_{\a\b}=(\G_{[a_{1}...a_{p}]})_{\b\a}.
\ee
This ensures that $H$ can be nonzero.
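To see how the Fierz identity (\ref{3:p-brane fierz}) arises, note that the Maurer-Cartan equations give $dL^{\a}=0$ and $dL^{a}\propto d\bar\t\G^{a}d\t$. Applying $d$ to (\ref{3:H def}) then yields, schematically (signs and constants suppressed):
\be
dH\propto d\t^{\a}d\t^{\b}d\t^{\d}d\t^{\e}L^{a_{1}}\ldots L^{a_{p-1}}(\G_{a_{1}\ldots a_{p-1}b})_{\a\b}\G^{b}{}_{\d\e}.
\ee
Since the $d\t$ factors commute with one another, only the totally symmetrized combination $\G^{[a_{1}\ldots a_{p}]}{}_{(\a\b}\G_{a_{p}\d\e)}$ survives the contraction, so $dH=0$ for arbitrary $L^{a}$ precisely when (\ref{3:p-brane fierz}) holds. For the same reason, the symmetry requirement (\ref{3:p brane Gamma symmetry requirement}) is needed for the contraction $d\t^{\a}d\t^{\b}(\G_{a_{1}\ldots a_{p}})_{\a\b}$ in (\ref{3:H def}) not to vanish identically.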
\subsection{Green algebra}
The $p$-brane action (\ref{3:p-brane action}) can also be used in extended superspace backgrounds. In the general construction we will not specify the background being used in order to allow for this possibility\footnote{However, extended backgrounds that are extensions of standard superspace by an ideal are most naturally applied (see appendix \ref{sec:app:algebra conditions}). The extended algebras associated with $p$-branes have this property.}. In section \ref{sec:Application to the GS superstring} we will consider the GS superstring in both standard and extended superspaces. There are two known extended superspaces which allow the construction of manifestly left invariant superstring WZ terms. The first is described by a superalgebra that was introduced by Green \cite{green89}. It has a fermionic generator $\Sigma^\a$ that defines a central extension of the supertranslation group\footnote{Of course, when Lorentz generators are included, $\Sigma^\a$ is no longer central.}:
\bea
\label{2:extalg}
\{Q_{\alpha},Q_{\beta}\}&=&\Gamma^{a}{}_{\alpha\beta}P_{a}\\
\ [Q_{\b},P_{a}]&=&\Gamma_{a\b\g}\Sigma^{\g}\nn.
\eea
The corresponding group manifold can be parameterized\footnote{Parameterizations are not unique. In particular we note the Green algebra can alternatively be parameterized to yield a linear realization of the left group action \cite{derig97}.}:
\begin{eqnarray}
g(Z)=e^{x^{a}P_{a}}e^{\theta^{\a}Q_{\a}}e^{\phi_{\b}\Sigma^{\b}},
\end{eqnarray}
where:
\[
Z^{A}=\{x^{a},\theta^{\a},\phi_{\a}\}.
\]
Standard superspace is obtained by omitting the extra generator $\Sigma^{\a}$ (and its associated coordinate $\p_{\a}$). The resulting left vielbein components are:
\begin{eqnarray}
L^{a}&=&dx^{a}-\half \dtgtu{a}\\
L^{\alpha}&=&d\theta^{\alpha}\nn\\
L_{\alpha}&=&d\phi_{\alpha}-dx^{b}\gtl{b}{\alpha}+\sixth\dtgtu{b}\gtl{b}{\alpha}\nn.
\end{eqnarray}
The left and right vielbein and inverse vielbein components for the Green algebra have been evaluated and placed in appendix \ref{sec:app:green algebra vielbein components}.
\subsection{Extended Green algebra}
Addition to the Green algebra of a noncentral bosonic generator $\Sigma^{a}$ results in the extended Green algebra \cite{bergshoeff95, chrys99}:
\bea
\label{2:ext Green algebra}
\{Q_{\alpha},Q_{\beta}\}&=&\Gamma^{a}{}_{\alpha\beta}P_{a}+\Gamma_{a}{}_{\alpha\beta}\Sigma^{a}\\
\ [Q_{\b},P_{a}]&=&\Gamma_{a\b\g}\Sigma^{\g}\nn\\
\ [Q_{\b},\Sigma^{a}]&=&\Gamma^a{}_{\b\g}\Sigma^{\g}\nn.
\eea
The Green algebra results from the reduction:
\bea
P'_a&=&P_{a}+\eta_{ab}\Sigma^{b}\\
\Sigma'^{\a}&=&2\Sigma^{\a},\nn
\eea
where $\eta_{ab}$ is the Minkowski metric. The extended Green algebra group manifold can be parameterized:
\be
g(Z)=e^{x^{a}P_{a}}e^{y_{b}\Sigma^{b}}e^{\theta^{\a}Q_{\a}}e^{\phi_{\b}\Sigma^{\b}},
\ee
with coordinates\footnote{Coordinate indices will not be raised/lowered in this paper. In the notation being used $\{Z^{a},Z^{\a},Z_{a},Z_{\a}\}$ are all independent coordinates.}:
\[
Z^{A}=(x^{a},\theta^{\a},y_{a},\phi_{\a}).
\]
The left vielbein components are found to be:
\begin{eqnarray}
L^{a}&=&dx^{a}-\half \dtgtu{a}\\
L^{\alpha}&=&d\theta^{\alpha}\nn\\
L_{a}&=&dy_{a}-\half \dtgtl{a}\nn\\
L_{\alpha}&=&d\phi_{\alpha}-dx^{b}\gtl{b}{\alpha}-dy_b\gtu{b}{\alpha}+\third\dtgtu{b}\gtl{b}{\alpha}\nn.
\end{eqnarray}
The left/right vielbein and inverse vielbein components for the extended Green algebra have been evaluated and placed in appendix \ref{sec:app:ext green algebra vielbein components}.
\section{Double complex for the $p$-brane}
\label{The p-brane double complex}
\subsection{Cocycles from WZ terms}
\label{sec:WZ terms as a double complex}
The exterior derivative $d$ together with the space of differential forms constitutes the de Rham complex. The operator $d$ is nilpotent (i.e. $d^{2}=0$) and can therefore be used to define cohomology classes. The $n$-th de Rham cohomology is the set of equivalence classes:
\be
H_d^{n}=Z^{n}/B^{n}
\ee
where $Z^{n}$ are the closed $n$-forms (i.e. those in the kernel of $d$) and $B^{n}$ are the exact $n$-forms (those in the image of $d$). The de Rham complex can be extended into a double complex by the addition of a second nilpotent operator that commutes with $d$. The operator used in this paper is a ``ghost differential" $s$. This operator was introduced in \cite{azc91} acting on an infinite dimensional ``loop superspace." We now define the analogous operator for use on finite dimensional superspaces. The introduction of a ghost partner $e^{A}$ for each coordinate is required. The ghost fields have the opposite grading to coordinates:
\bea
\ [e^{A},Z^{M}\}&=&0\\
\{e^{A},e^{B}]&=&0\nn,
\eea
where $[\quad,\quad\}$ and $\{\quad,\quad]$ are the graded commutator/anticommutator:
\bea
[X_A,X_B\}&=&-(-1)^{AB}[X_B,X_A\}\\
\{X_A,X_B]&=&(-1)^{AB}\{X_B,X_A]\nn.
\eea
They are independent of the fields $Z^{M}$, and hence satisfy $de^{A}=0$. A general element of the double complex is a ``ghost form valued differential form." The space of all such ``generalized forms" of differential degree $m$ and ghost degree $n$ will be denoted by $\Omega^{m,n}$. The collection of these spaces will be denoted $\Omega^{*,*}$. Generalized forms $Y\in \Omega^{m,n}$ will be written using a comma to separate ghost indices from space indices:
\be
Y=e^{B_{n}}\ldots e^{B_{1}}L^{A_{m}}\ldots L^{A_{1}}Y_{A_{1}\ldots A_{m},B_{1}\ldots B_{n}}\frac{1}{m!n!}.
\ee
We then define the ghost differential by the following properties:
\begin{itemize}
\item
$s$ is a right derivation\footnote{Our conventions are such that $d$ is also a right derivation (with respect to the differential degree).}. That is, if $X$ and $Y$ are generalized forms and $n$ is the ghost degree of $Y$ then:
\be
s(XY)=Xs(Y)+(-1)^{n}s(X)Y.
\ee
\item
If $X$ has ghost degree zero then:
\be
\label{3:s operator Q part}
sX=e^{A}Q_{A}X.
\ee
$Q_{A}$ denotes a Lie derivative with respect to the vector field (\ref{2:scalar left gen}) associated with the global left action.
\item
\be
se^{A}=\half e^{C}e^{B}t_{BC}{}^{A},
\ee
where $t_{BC}{}^A$ are the structure constants of the superalgebra associated with the background superspace (we henceforth refer to this superalgebra as the ``background superalgebra").
\end{itemize}
One verifies that\footnote{To prove nilpotency of $s$ one needs to use the Jacobi identity for the background superalgebra.}:
\bea
\label{6:sdprops}
s^{2}&=&0\\
\ [s,d]&=&0\nn.
\eea
Hence $s$ extends the de Rham complex into a double complex. $s$ is similar to a BRST operator in that it requires the introduction of ghost fields; however, unlike a BRST operator, it has not been derived from constraints or gauge symmetries.
There is a total differential $D$ that is naturally associated with the double complex:
\bea
\label{3:d-s differential D}
D&=&s+(-1)^{n+1}d\\
D^{2}&=&0\nn,
\eea
where $n$ is the ghost degree of the generalized form upon which $D$ acts. The spaces $\Omega^l_{D}$ of the single complex upon which $D$ acts are the sum along the anti-diagonal of the spaces of the double complex:
\begin{figure}[t]
\begin{center}
\begin{picture}(120,130)(0,-10)
\put(0,18){\vector(1,0){123}}
\put(0,18){\vector(0,1){85}}
\put(10,10){\makebox(50,50){
\large $\begin{array}{cccccccc}
& 3\ & dB & & &\\
& 2\ & B &\diamondsuit& &\\
\uparrow & 1\ & & W &\diamondsuit&\\
d & 0\ & & & N & sN\\
& & 0 & 1 & 2 & 3\\
& & s & \rightarrow & &
\end{array}$
}}
\end{picture}
\caption{Descending sequence for the string}
\label{6fig:basic HB box}
\end{center}
\end{figure}
\be
\Omega_{D}^{l}=\{\oplus\Omega^{m,n}:\qquad m+n=l\}.
\ee
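The degree-dependent sign in (\ref{3:d-s differential D}) is what makes $D$ nilpotent even though $s$ and $d$ carry different degrees. Explicitly, for $Y$ of ghost degree $n$, a one-line check using (\ref{6:sdprops}) gives:
\bea
D^{2}Y&=&s^{2}Y+(-1)^{n+2}dsY+(-1)^{n+1}sdY+d^{2}Y\\
&=&(-1)^{n}(ds-sd)Y\ =\ 0\nn.
\eea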
The $l$-th cohomology of $D$ is:
\be
H_D^{l}=Z_D^{l}/B_D^{l},
\ee
where $Z_D^{l}$ are the $D$ closed generalized $l$-forms (``$D$ cocycles"), and $B_D^{l}$ are the generalized $l$-forms in the image of $D$ (``$D$ coboundaries"). The restriction of $H_D^l$ to representatives within $\Omega^{m,l-m}$ will be denoted $H^{m,l-m}$. The representatives of $H^{m,0}$ can be used to define descending cohomology sequences. We now illustrate this for the $(p+2)$-form $H$ that defines the WZ term.
Firstly, $H$ is a left invariant, closed form with ghost number zero. It is therefore closed under both $s$ and $d$. Using the fact that $s$ and $d$ commute, $sH=0$ implies that $dsB=0$, and thus $sB=-dW$ for some $W\in \Omega^{p,1}$. This argument does not apply globally, but is valid on every coordinate patch\footnote{The fields of the double complex can be viewed as Cech cochains. In this case an expression like $dB$ represents something closed but not necessarily exact. The de Rham triviality of such fields does not affect the double complex cohomology studied in this paper.}. The same logic that was applied to $B$ can then be applied to $W$. This gives $sW=dN$ for some $N\in \Omega^{p-1,2}$. For the string, the last nonzero element of the sequence is $sN\in H^{0,3}$. For a $p$-brane, the sequence continues until we reach an element of $H^{0,p+2}$. The descending cohomology sequence can be graphically depicted using a ``tic-tac-toe box" \cite{bott82}. The string case is depicted in figure \ref{6fig:basic HB box}. The symbol $\diamondsuit$ indicates ``zero with respect to the operator $D$." Precisely, for a $p$-brane, denote the ``potentials" of the sequence by $B^{p+1-m,m}$ (e.g. $W=B^{p,1}$). Then each $\diamondsuit$ represents a relation:
\be
sB^{p-m+1,m}+(-1)^{m}dB^{p-m,m+1}=0.
\ee
These are the ``descent equations" (note that the first descent equation, not represented in the above, is $H=dB^{p+1,0}$).
We have defined the tic-tac-toe construction on the double complex so that its endpoints would be linked via a coboundary of the $D$ complex. For example, in the string case:
\be
-dB\oplus sN=D(B\oplus W\oplus N).
\ee
That is, $H=dB\in H^{3,0}$ is $D$ cohomologous to $sN\in H^{0,3}$. We may write this as:
\[H\simeq sN.\]
In general one finds that:
\be
H\simeq sB^{p+1-m,m} \qquad\forall m.
\ee
The $D$ cocycle represented by $H$ can therefore be alternatively represented by $s$ acting on any of the potentials of the sequence.
We will call a nilpotent operator ``exact" if its associated cohomology is trivial. For example, the de Rham differential $d$ on an open set is exact; the cohomology $H_d$ is trivial as a result of the Poincar\'{e} lemma. That is, given $Y\in H_{d}^{m}$, then for all $m\geq 1$ we can write $Y=dX$ for some $X\in \Omega_{d}^{m-1}$. Note that the ``exactness" of an operator is dependent on the space upon which it acts. By definition, $d$ is not exact (globally) on a manifold that possesses nontrivial de Rham cohomology. There are important consequences for $D$ cohomology if we can show that the ghost differential $s$ is exact.
\begin{theorem}[exactness]
$s$ is exact on open sets.
\end{theorem}
To prove this we find a chain map for which the operators $s$ and $d$ become ``dual" to each other. A chain map between two complexes is one that commutes with the differentials of the complexes. In our case, the required chain map $\Psi$ must satisfy:
\bea
\Psi(d)\Psi(Y)&=&\Psi(dY)\\
\Psi(s)\Psi(Y)&=&\Psi(sY)\nn
\eea
for any $Y\in \Omega^{*,*}$. The chain map is the ``check map" defined by:
\bea
\Psi:\Omega^{*,*}&\rightarrow&\check\Omega^{*,*}\\
L^{A}&\rightarrow&e^{A}\nn\\
e^{A}&\rightarrow&R^{A}\nn.
\eea
The map takes $(m,n)$-forms to $(n,m)$-forms. On $\check\Omega^{*,*}$ we have the operators $\check s$ and $\check d$ defined by:
\begin{itemize}
\item
\be
\check s=d.
\ee
\item
$\check d$ is a right derivation.
\item
If $X$ has ghost degree zero then:
\be
\check dX=e^{A}D_{A}X,
\ee
where $D_{A}$ is a Lie derivative with respect to the vector field (\ref{right generators}) associated with the global right action.
\item
\be
\check de^{A}=-\half e^{C}e^{B}t_{BC}{}^{A}.
\ee
\end{itemize}
If we think of $s$ as a generalized left variation, then $\check d$ is the analogous right variation. The check map is clearly invertible. Let $Y$ be any $s$ closed generalized form of ghost degree one or more over an open set. Then, using $\check s=d$ and the exactness of $d$ on an open set, one shows that $Y$ is an $s$ coboundary:
\bea
sY&=&0\\
\Rightarrow \check s\check Y&=&0\nn\\
\Rightarrow \check Y&=&\check s\check X\nn\\
\Rightarrow Y&=&sX\nn.
\eea
Therefore $s$ is exact on open sets, since the check map gives $H^m_{s}\cong H^m_{d}$ and the latter is trivial on open sets for $m\geq 1$.
In \cite{azc89-2} it was shown that CE cohomology can be restated as the restriction of de Rham cohomology to left invariant forms. Now, the $(p+2)$-form $H$ is a $D$ coboundary when it can be written $H=DB$. Equivalently, $H$ is a $D$ coboundary if a left invariant potential $B$ can be found. This is precisely the definition of a trivial CE cocycle. A nontrivial $D$ cocycle is one for which we must necessarily have $sB\neq 0$, which is equivalent to the definition of a nontrivial CE cocycle. CE cohomology is therefore the restriction of $D$ cohomology to forms that have ghost degree zero. $H_D$ is the natural extension of CE cohomology into the double complex $\Omega^{*,*}$. Since $s$ is exact, we may reverse descending tic-tac-toe sequences into ascending ones, starting with any element of $H_D$ and finding an associated left invariant element of $H_d$. This establishes an isomorphism between $H_D$ and CE cohomology that would not exist if $s$ were not exact.
\subsection{Gauge freedom}
Using the tic-tac-toe construction, the form $H\in H^{p+2,0}$ may be identified with any of the other representatives $sB^{p+1-m,m}$ of the $p$-brane $D$ cocycle. This is a well defined map between $H^{m,n}$ cohomologies, but \textit{not} between the forms themselves. In general there is gauge freedom for representatives. Although this freedom is associated with $D$ coboundaries, there is no reason for these coboundaries to be exact. In this way we will see that the gauge freedom can affect the topological charge algebra.
We now explicitly derive the gauge transformations for the string. Consider the relation $H=dB$. Given $H$, this defines $B$ only up to a closed form. Thus, given a solution $B$, the alternative solution $B'=B-d\psi$ is equally valid. We write this as:
\be
\label{delta B=d psi}
\D B=-d\psi.
\ee
In an extended superspace that allows a manifestly invariant WZ term, a transformation of this type is all that separates the standard WZ term from the invariant one (see section \ref{sec:Application to the GS superstring} for an explicit example). What then is the effect of the transformation (\ref{delta B=d psi}) on $W$? Since the variation $\D$ commutes with $s$ and $d$, we have:
\bea
d\D W&=&\D dW\\
&=&-\D sB\nn\\
&=&ds\psi\nn.
\eea
The general solution may be written:
\be
\label{3:delta W}
\D W=s\psi+d\lambda,
\ee
where $\lambda$ is a new gauge field. The gauge transformations of the field $N$ are derived similarly. Directly from (\ref{3:delta W}) we have:
\bea
d\D N&=&\D dN\\
&=&\D sW\nn\\
&=&ds\lambda\nn.
\eea
This has the general solution:
\be
\D N=s\lambda+C,
\ee
where $C$ is a $d$ closed $(0,2)$-form. If one progressed in the other direction (an ascending sequence starting from $sN$) one would also find an $s$ closed $(2,0)$-form gauge field $C'$ for $B$. One can then write the gauge transformations in totality as:
\be
\D(B\oplus W\oplus N)=D(\psi\oplus \lambda)\oplus C \oplus C'.
\ee
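Unpacked by form and ghost degree (combining the transformations derived above), this reads:
\bea
\D B&=&-d\psi+C'\\
\D W&=&s\psi+d\lambda\nn\\
\D N&=&s\lambda+C\nn.
\eea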
One verifies that each potential has two gauge fields: one that is $d$ closed and one that is $s$ closed.
These gauge transformations are additive. For example, the field $W$ has two gauge transformations: one for $\psi$ and one for $\lambda$, with $\D W$ given by (\ref{3:delta W}). The gauge fields are independent (they are not required to satisfy descent equations like those that relate $B$, $W$ and $N$). They may also affect more than one field. For example, $\psi$ is a transformation that leaves $dB$ invariant ($\D B=-d\psi$), and also $sW$ invariant ($\D W=s\psi$). Although $sB$ and $dW$ are not gauge invariant, the $\psi$ transformation is such that the descent equation $sB=-dW$ is true in all gauges. The construction ensures that in general, descent equations are preserved by the gauge transformations. The gauge transformations are identically the same as the $D$ coboundaries as a result of the exactness of the operators $s$ and $d$. The alternative representatives $sB$, $sW$ and $sN$ of the $D$ cocycle defined by $H$ are therefore well defined elements of their $H^{m,n}$ cohomologies.
\section{Algebra modifications}
\label{sec:Algebra modifications}
\subsection{The algebra of left generators}
\label{sec:The algebra of left generators}
The action is formulated in terms of $(Z^{M},\dot Z^{M})$, which may be viewed as coordinates for the superspace tangent bundle. The Hamiltonian formulation of dynamics is cast in terms of coordinates $Z^{M}$ and their associated conjugate momenta $P_{M}$, which together constitute the ``phase space." The momenta are defined by:
\be
\label{3:momentadef}
P_{M}=\frac{\del L}{\del \dot Z^{M}}.
\ee
The phase space can be viewed as coordinates for the superspace cotangent bundle. The Lagrangian then provides a map (a Legendre transform), defined by (\ref{3:momentadef}), from the tangent bundle to the cotangent bundle.
We use the following fundamental (graded) Poisson brackets on phase space\footnote{Brackets of unspecified type will in general be Poisson brackets. Exceptions should be clear within context.}:
\be
\label{3:basicbrackets}
\ [P_{M}(\s),Z^{N}(\s')\}=\d_{M}{}^{N}\d(\sa-\sa'),
\ee
where it is assumed $\s'^{0}=\s^{0}$ (i.e. equal time brackets). The Dirac delta function notation is shorthand for the product of the $p$ delta functions on the spatial coordinates of the worldvolume. One can use (\ref{3:basicbrackets}) and the following Poisson bracket identities to evaluate general brackets:
\bea
\label{3:bracket identities}
\ [X_{A},X_{B}X_{C}\}&=&[X_{A},X_{B}\}X_{C}+(-1)^{AB}X_{B}[X_{A},X_{C}\}\\
\ [X_{A},X_{B}(Y)\}&=&[X_{A},Y^{N}\}\frac{\del X_{B}}{\del Y^{N}}\nn.
\eea
The above relations can all be derived from an integral form of the Poisson bracket, which can be useful for certain proofs. The form we use is:
\be
[X_{A},X_{B}\}=\int d^{p}\s\frac{\d X_{A}}{\d P_{M}(\s)}\frac{\d X_{B}}{\d Z^{M}(\s)}(-1)^{MA+M}-(-1)^{AB}[A\leftrightarrow B].
\ee
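For example, applying the second identity of (\ref{3:bracket identities}) to a function $f$ of the coordinates alone gives:
\be
[P_{M}(\s),f(Z(\s'))\}=\d(\sa-\sa')\frac{\del f}{\del Z^{M}}(\s').
\ee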
We define the following regularly used ``bar map" by its action on superspace forms:
\be
\label{3:form correspondence}
\bar Y^{m-p,n}(\s)=(-1)^{p(p+m+1)}i_{\del_{1}}\ldots i_{\del_{p}}Y^{m,n}(\s).
\ee
Here, $i_{V}$ denotes interior derivation with respect to the vector $V$, and $\del_i$ is the $i$-th worldvolume tangent vector. When $Y\in\Omega^{p,n}$ we will also indicate an integrated version of this map using the same symbol:
\be
\label{integrated bar map}
\bar Y^{0,n}=\int d^p\s \bar Y^{0,n}(\s).
\ee
Even though we may omit the argument in (\ref{3:form correspondence}), it should be clear within context which of these maps is implied. We now show that this map generates the algebra modifications of the $p$-brane from its associated $D$ cocycle.
The Noether charges associated with a manifestly left invariant Lagrangian will be denoted $\bar Q_A$. One finds\footnote{``Bar" above $Q_A$ or $D_A$ is a definition, not an action of the map (\ref{3:form correspondence}). The notation indicates that $\bar Q_A$ and $\bar D_A$ naturally act upon elements in the image of this map.}:
\be
\bar Q_{A}=\int d^{p}\s R_{A}{}^{M}P_{M}.
\ee
These charges are the phase space analog of the left generators (\ref{2:scalar left gen}). They satisfy the same algebra as the background superalgebra, but with the sign reversed:
\be
[\bar Q_{A},\bar Q_{B}\}=-t_{AB}{}^{C}\bar Q_{C}.
\ee
This is the ``minimal algebra." In general, the $p$-brane Lagrangian is not manifestly left invariant (i.e. it is only symmetric up to a total derivative) due to quasi-invariance of the WZ term. Using the definitions of section \ref{sec:WZ terms as a double complex}, the variation of the WZ form is $Q_AB=-dW_A$. From this we have:
\bea
Q_A\mathcal{L}&=&Q_A\mathcal{L}_{WZ}\\
&=&\del_{i}w_{A}{}^{i}\nn,
\eea
where
\be
\label{4:w related to W}
w_{A}{}^{i}=-\frac{1}{p!}\tilde \e^{i_{p}...i_{1}i}W_{i_{1}...i_{p},A}
\ee
and $\tilde \e$ is the antisymmetric Levi-Civita symbol. Now, upon using the EL (Euler-Lagrange) equations:
\be
\frac{\del \mathcal{L}}{\del Z^{M}}-\del_{i}\frac{\del \mathcal{L}}{\del(\del_{i}Z^{M})}=0,
\ee
we have identically:
\be
Q_A\mathcal{L}=\del_{i}\bigg [Q_AZ^{M}\frac{\del\mathcal{L}}{\del(\del_{i}Z^{M})}\bigg ].
\ee
Hence, ``on-shell" there are conserved currents:
\bea
\label{3:conserved current}
\tilde q_{A}{}^{i}&=&Q_{A}Z^{M}\frac{\del \mathcal{L}}{\del(\del_{i}Z^{M})}-w_{A}{}^{i}\\
\del_{i}\tilde q_{A}{}^{i}&=&0\nn.
\eea
The associated conserved charges are:
\be
\label{6:cons charges}
\widetilde {\bar Q}_{A}=\bar Q_{A}+\bar W_{A}.
\ee
Using (\ref{6:cons charges}), the $\widetilde {\bar Q}_{A}$ obey a modified version of the minimal algebra:
\be
[\widetilde {\bar Q}_{A},\widetilde {\bar Q}_{B}\}=-t_{AB}{}^{C}\widetilde {\bar Q}_{C}+\bar M_{AB},
\ee
with
\be
\label{6:M def}
\bar M_{AB}=[\bar Q_{A},\bar W_{B}\}+[\bar W_{A},\bar Q_{B}\}+t_{AB}{}^{C}\bar W_{C}.
\ee
This is the algebra of conserved charges. Now define the special representative $M=sW$ of the $p$-brane $D$ cocycle. The definition of $\bar M$ given here agrees with that obtained from the bar map (\ref{integrated bar map}) acting upon $M$. We refer to both $M$ and $\bar M$ as ``anomalous terms." If we need to distinguish between the two, $\bar M$ will be referred to as the ``topological anomalous term", and $M$ as its ``superspace representation." The bar map ensures that elements in its image contain no time derivatives, or equivalently no dependence upon the phase space momenta. The anomalous term $\bar M$ then results from Poisson brackets involving at most one momentum variable, which leads to a simplified structure.
One verifies that:
\bea
\bar M_{AB}&=&(-1)^p\int d^{p}\s M_{p\ldots 1,AB}(\s)\\
&=&(-1)^p\int \Phi^* M_{AB}\nn,
\eea
where the map $\Phi$ embeds the spatial section of the worldvolume into superspace. We assume that the spatial section is a closed manifold. $\bar M_{AB}$ is therefore just a topological integral over the spatial section of the closed $p$-form $M_{AB}$. The result of the integral will be determined by the topology of the spatial section, and the class of the associated de Rham cohomology to which $M_{AB}$ belongs.
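For the string ($p=1$), for instance, this is simply:
\be
\bar M_{AB}=-\int d\s^{1}M_{1,AB}(\s),
\ee
which is sensitive only to the topology of the spatial section and the de Rham class of the closed 1-form $M_{AB}$.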
In prior literature the topological anomalous term was found to be proportional to the pullback of the $p$-form \cite{azc89}:
\be
\G_{m_1\ldots m_p\a\b}dx^{m_1}\ldots dx^{m_p}.
\ee
A current associated with this form can be defined, and this current is conserved identically since the form is closed (see (\ref{4:conserved current}) below). However, this structure for the anomalous term assumes that integrals of the form:
\be
\label{6:closed theta integral}
\int d\s^{1}\del_{1}Y(\t)
\ee
vanish, where $Y$ is an arbitrary function. This amounts to the requirement that the fermionic directions (corresponding to the coordinates $\t$) must have trivial topology. The topological integrals of closed forms with $\t$ differentials and single valued coefficients must vanish in this case. However, recent work \cite{peeters03} suggests that for certain spaces more general than flat superspace, fermionic charges in the modified algebra are \textit{required} on the basis of Jacobi identities. In flat space it is consistent to set the fermionic charges to zero but in other spaces this can cause inconsistencies. Although we assume flat background spaces in this work, we will formally allow nonvanishing fermionic charges in order to see which features appear as a result. Since $\bar M$ is still derived from a closed form, the associated current is still conserved identically:
\bea
\label{4:conserved current}
m_{AB}^{i}&=&\tilde \e^{i_{p}...i_{1}i}M_{i_{1}...i_{p},AB}\frac{1}{p!}\\
\del_{i}m_{AB}^{i}&=&\tilde \e^{i_{p}...i_{1}i}\del_i \del_{i_1}N_{i_{2}...i_{p},AB}\frac{1}{(p-1)!}\nn\\
&=&0\nn.
\eea
There is no obvious reason to expect that it should be possible to incorporate the topological anomalous term $\bar M$ into the definition of an extended algebra. However, using its superspace representation $M$ we now show that this is indeed possible. In section \ref{sec:Application to the GS superstring} we will explicitly derive the extended algebras that result from the superstring anomalous term.
\begin{theorem}[extension]
The anomalous term of the Noether charge algebra defines an extension of the background superalgebra by an ideal. The resulting extended superalgebra is solvable.
\end{theorem}
First we need to show closure of the algebra. This requires that the anomalous term, and all brackets resulting from it, can be expressed using a finite number of new generators. To find the extended algebra one could investigate the Poisson brackets of $\widetilde{\bar Q}_A$ and $\bar M_{AB}$. However, one can equivalently use the double complex. In this case the anomalous term is represented by a set of superspace forms $M_{AB}$ (one for each bracket of the minimal algebra). The minimal algebra is generated by the left generators $Q_A$ of the double complex. Define the modified left generators as:
\be
\widetilde Q_A=Q_A+W_A.
\ee
The Poisson bracket algebra generated by $\widetilde {\bar Q}_A$ and $\bar M$ is found to be the same as the ``operator-form" algebra generated by $\widetilde Q_A$ and $M$ (forms are assumed to commute with other forms). We therefore use the operator-form algebra since it is more convenient. If required, the Poisson bracket algebra can be obtained by replacing all generators with barred ones in the operator-form algebra. Let $\mathcal{G}=\{Q_A\}$ denote the minimal algebra, and $\mathcal{\widetilde G}=\{\widetilde Q_A,\Sigma_A\}$ denote the full algebra that is assumed to result by addition of the anomalous term. Now consider the following schematic representation of the action of the left generators on forms\footnote{For this proof we will assume for brevity that $\mathcal{G}$ is the standard superalgebra. The same principles are also valid in the case where $\mathcal{G}$ is one of the extended superalgebras (e.g. those of section \ref{sec:Extended algebras}).}:
\begin{center}
$\begin{array}{llllll}
x &\rightarrow & \t &\rightarrow & $const $ \rightarrow & 0\\
dx &\rightarrow & d\t &\rightarrow & 0. &
\end{array}$
\end{center}
If a form has coefficients with a polynomial structure then each action of $Q_A$ brings it closer to annihilation. The requirements of Lorentz invariance and fixed dimensionality (see section \ref{sec:Anomalous term cohomology}) ensure that all valid forms have this polynomial structure. It follows that the anomalous term will be annihilated by the left generators in a finite number of steps. There is then a stepwise process to define the extended algebra. At the first step we may factor out any Lorentz invariant tensors from $M_{AB}$ (which become new structure constants). The remaining form is then written in terms of a minimal set of independent closed forms $\Sigma_A$, which become new generators of the algebra. The $\Sigma_A$ commute with themselves and satisfy:
\be
\label{modifed operator with form bracket}
[\widetilde Q_A,\Sigma_B\}=[Q_A,\Sigma_B\}
\ee
since $W_A$ commutes with $\Sigma_B$. We then act again with the $Q_A$ and introduce new generators to deal with any forms that cannot be written in terms of those generators previously defined. By the above annihilation argument it follows that this process is finite. That is, there will be a finite number of new generators. The resulting algebra has the structure:
\bea
\ [\widetilde Q,\widetilde Q]&\subset& \widetilde Q\oplus\Sigma\\
\ [\widetilde Q,\Sigma]&\subset& \Sigma\nn
\eea
The second line shows that $\Sigma$ is an ideal of the new algebra. The algebra $\mathcal{\widetilde G}$ is said to be \textit{solvable} if:
\be
(\mathsf{Ad}_\mathcal{\widetilde G})^m(\mathcal{\widetilde G})=0
\ee
for some finite integer $m$, where $\mathsf{Ad}_\mathcal{\widetilde G}$ is the adjoint action. The minimal algebra $\mathcal{G}$ is solvable. The annihilation argument shows that $\mathcal{\widetilde G}$ is also solvable, since the action of $\mathcal{\widetilde G}$ annihilates the new generators in a finite number of steps.
This shows that the new algebra closes; however, to show that $\mathcal{\widetilde G}$ is a valid \textit{superalgebra} we must also show that the super-Jacobi identities are satisfied. There are four cases to test. The first is:
\be
(-1)^{AC}[\widetilde Q_{A},[\widetilde Q_{B},\widetilde Q_{C}\}\}+\mathsf{cycles},
\ee
where ``$\mathsf{cycles}$" indicates the terms obtained from two repetitions of the cycling $A\rightarrow B\rightarrow C$. Using $M=sW$ one can show that this reduces to:
\be
(-1)^{AC}t_{BC}{}^Dt_{AD}{}^E Q_{E}+\mathsf{cycles},
\ee
which vanishes since the original structure constants satisfy the Jacobi identity. The second case is:
\be
(-1)^{AC}[\widetilde Q_{A},[\widetilde Q_{B},\Sigma_{C}\}\}+\mathsf{cycles}.
\ee
By (\ref{modifed operator with form bracket}) it is valid to replace $\widetilde Q$ by $Q$ in the above expression since $\Sigma$ is an ideal. The Jacobi identity is then identically satisfied since it reflects an action of the minimal algebra. The final two cases:
\bea
(-1)^{AC}[\widetilde Q_{A},[\Sigma_{B},\Sigma_{C}\}\}+\mathsf{cycles}\\
(-1)^{AC}[\Sigma_{A},[\Sigma_{B},\Sigma_{C}\}\}+\mathsf{cycles}\nn
\eea
are trivially satisfied. The Jacobi identity therefore holds, and $\mathcal{\widetilde G}$ is an extended superalgebra.
\subsection{The algebras of right generators and constraints}
\subsubsection{The algebra of right generators}
The right generators and their algebra are modified in a similar way to the left generators. The minimal right generators for the phase space are:
\be
\label{3:unmodified right gen def}
\bar D_{A}=L_{A}{}^{M}P_{M}.
\ee
The $\bar D_{A}$ satisfy the minimal algebra:
\be
[\bar D_{A}(\s),\bar D_{B}(\s')\}=\d(\sa-\sa')t_{AB}{}^{C}\bar D_{C}(\s).
\ee
If a WZ term is added to the NG action, the new momenta are related to the NG action momenta $P^{(NG)}_{M}$ via:
\be
\label{3:momenta extra term}
P_{M}=P^{(NG)}_{M}+\bar B_{M}.
\ee
The WZ term of the Lagrangian may be written in terms of $\bar B$ as:
\be
\label{3:Adef}
\mathcal{L}_{WZ}=\dot Z^{M}\bar B_{M}.
\ee
We define the modified right generators for the phase space such that they are constructed from the NG momenta:
\bea
\label{3:modified right gen}
\widetilde {\bar D}_{A}&=&L_{A}{}^{M}P^{(NG)}_{M}\\
&=&\bar D_{A}-\bar B_{A}\nn.
\eea
This is motivated by the modification of the standard superspace action constraints in the presence of the WZ term \cite{azc91}, and the relation of constraints to right generators (see section \ref{3:sec:constraints}). Again, the components of $\bar B$ contain no time derivatives. Thus, the modification to the right generators for the phase space contains no momentum dependence (just as in the left generator case). If one imposes a condition that $B$ must be single valued then the modified algebra derives from $H$:
\be
\label{3:right gen modified algebra}
[\widetilde {\bar D}_{A}(\s),\widetilde {\bar D}_{B}(\s')\}=\d(\sa-\sa')[t_{AB}{}^{C}\widetilde {\bar D}_{C}-\bar H_{AB}](\s).
\ee
This shows that the result stated in \cite{azc91} for the standard superspace action also holds for extended superspace actions. The bar map is again seen to commute with the bracket operation:
\bea
\label{6:Fdef}
\d(\sa-\sa')\bar H_{AB}(\s)&=&[\bar D_{A}(\s),\bar B_{B}(\s')\}+[\bar B_{A}(\s),\bar D_{B}(\s')\}\\
&&-\d(\sa-\sa')t_{AB}{}^{C}\bar B_{C}(\s)\nn.
\eea
\subsubsection{The algebra of constraints}
\label{3:sec:constraints}
The $p$-brane action (\ref{3:p-brane action}) yields constraint equations for the phase space variables. That is, equations of the form:
\be
\label{3:constraint defined}
C_{M}(Z,P)=0
\ee
for some functions $C_{M}$, which reduce to identities once the definitions (\ref{3:momentadef}) of momenta are used.
This results in a reduction of phase space. For the content of this paper it will only be necessary to find (not eliminate) the constraints.
Evaluating $\frac{\del L}{\del \dot x^{m}}$ and $\frac{\del L}{\del \dot \t^{\m}}$ for the NG action one finds:
\bea
P^{(NG)}_{m}&=&-\gmh g^{0i}L_{i}{}^{a}\eta_{am}\\
P^{(NG)}_{\m}&=&-\half(\G^{n}\t)_{\m}P^{(NG)}_{n}\nn.
\eea
One thus identifies the fermionic constraints of the NG action as:
\bea
C_{\a}&=&\d_{\a}{}^{\m}P^{(NG)}_{\m}+\half(\G^{n}\t)_{\a}P^{(NG)}_{n}\\
&=&L_{\a}{}^{M}P^{(NG)}_{M}.\nn
\eea
Comparing with (\ref{3:unmodified right gen def}) we see that these are just the odd, minimal right generators for the phase space. The $C_{\a}$ thus satisfy the algebra:
\be
\{C_{\a}(\s),C_{\b}(\s')\}=\d(\sa-\sa')t_{\a\b}{}^{A}\bar D_{A}(\s).
\ee
Upon the addition of a WZ term, the momenta (including those associated to new coordinates) pick up the extra terms $\bar B_M$ as in (\ref{3:momenta extra term}). It will be assumed that the background superspace is either standard superspace, or an extension of standard superspace by an ideal (e.g. the superalgebras of section \ref{sec:Extended algebras}). We find that the constraints $\widetilde C_{A}$ in the presence of the WZ term can then be written:
\be
\label{6:fermionic mod}
\widetilde C_{A}=L_{A}{}^{M}(P_{M}-\bar B_{M}),\qquad A\neq a.
\ee
Details of the calculation may be found in appendix \ref{sec:app:algebra conditions}. Thus, the constraints $\widetilde C_{A}$ (where $A\neq a$) are the modified right generators for the phase space (\ref{3:modified right gen}), and their algebra is the same:
\be
[\widetilde C_{A}(\s),\widetilde C_{B}(\s')\}=\d(\sa-\sa')[t_{AB}{}^{C}\widetilde {\bar D}_{C}-\bar H_{AB}](\s).
\ee
Note that although there is no constraint $\widetilde C_{a}$, $\widetilde {\bar D}_{a}$ can still appear on the RHS.
The constraint surface must be invariant under the action of the Noether symmetries of the action. The constraints must therefore be left invariant in the sense:
\be
[\bar {Q}_{A},C_{\b}(\s)\}\approx 0,
\ee
where $\approx$ means ``equal on the constraint surface." For the NG action this is an example (in PB form) of the commutativity of the left and right actions. When the WZ term is added, this condition must continue to hold (i.e. upon replacing $\bar {Q}_{A}$ and $C_{A}$ by their modified counterparts). In fact, if one assumes that $W$ is single valued then one can use the descent equation $sB=-dW$ to show:
\be
[\widetilde {\bar Q}_{A},\widetilde {\bar D}_{B}(\s)\}=0.
\ee
This generalizes a result in \cite{azc91} for the standard background to the case of extended backgrounds. Since the constraints are a subset of the modified right generators, their left invariance is guaranteed by the double complex cohomology. Furthermore, since the equation $sB=-dW$ is preserved by the gauge transformations, the left invariance of the constraints is independent of the gauge.
\subsection{Cohomology of algebra modifications}
\label{sec:Anomalous term cohomology}
We are now in a position to determine how gauge freedom affects the algebras of left/right generators. Before proceeding, however, we need to establish some facts about the $D$ cohomology of $H$. First let us review why the equation (\ref{3:H def}) defining $H$ takes the form it does. $H$ must have the following properties:
\subsubsection{Properties of $H$}
\begin{itemize}
\label{Properties of H}
\item
$H$ is closed.
\item
$H$ is left invariant.
\item
$H$ has dimensionality $p+1$.
\item
$H$ is Lorentz invariant.
\end{itemize}
In standard superspace, $H$ is the \textit{unique} $(p+2)$-form (up to a constant of proportionality) with the properties \ref{Properties of H} \cite{azc89-2}. Furthermore, it is a nontrivial CE cocycle. In the double complex construction this implies that in standard superspace, $H$ is the unique Lorentz invariant element of $H^{p+2,0}$ with dimension $p+1$. One can verify that the last two items in the list \ref{Properties of H} are preserved by the operators $d$ and $s$. We conclude that Lorentz invariance and dimensionality $p+1$ must be a property of \textit{all} elements of the double complex (including potentials and gauge transformations). The exactness of $s$ means that the $D$ cohomology of the single complex is equal to the de Rham cohomology of the first column of the double complex. Since we should restrict ourselves to Lorentz invariant forms of dimension $p+1$, by the uniqueness of $H$, this cohomology is equal to the field of scalars we are using (the constant of proportionality multiplying $H$ labels the class).
The uniqueness of $H$ implies that the modification to the right generator algebra $\bar H$ is also unique. It is gauge invariant, and even independent of the background superalgebra used (since the same definition of $H$ is always used). Note however that the right generator algebra obtained in an extended background can be different to the right generator algebra obtained in standard superspace (even though the modification is the same) because then the \textit{minimal} algebra that we start with is already different.
The left generator algebra is less straightforward. Note that due to nilpotency of the operators, moving twice in any one direction on the tic-tac-toe box gives zero. An interesting consequence of this is that the gauge freedom in the WZ term (resulting from $\psi$, $C'$) has no effect upon the anomalous term $M$. Note however that using a different background superspace will not only change the minimal algebra but can also change the modification $M$ (since the descent equations may have different solutions). We note that left invariant WZ terms can only be constructed in such extended superspace backgrounds.
The result of main interest is that the topological anomalous term $\bar M$ is not gauge invariant. Using (\ref{3:delta W}):
\bea
\label{Delta M=-sd lamda}
\D M&=&s\D W\\
&=&sd\lambda\nn.
\eea
Although $\D M$ is a $D$ coboundary, it need not be exact. $\D \bar M$ can therefore be nonzero in the presence of nontrivial topology (just as $\bar M$ can be). How much freedom do we have? At first it seems that we have full gauge freedom at our disposal, but in practice the requirements of Lorentz invariance and correct dimensionality are restrictive. In section \ref{sec:Application to the GS superstring} we will see that in the case of the string, these requirements on the gauge fields reduce the freedom in the anomalous term down to a single, global degree of freedom. A corresponding free constant parameterizes the ``spectrum of algebras" obtained from the process.
Identifying gauge freedom in the anomalous term forces us to reevaluate its mathematical nature. Since there is an orbit of gauge equivalent representatives, and there is no natural basis upon which to fix a gauge, one can no longer speak of ``the" anomalous term if one defines it as a particular form or modified left generator algebra. In order that the anomalous term be a well defined object it must be defined as an entire $D$ cohomology class $[M]$. We have already seen that the representatives $M$ of this class are $D$ cohomologous to $H$. Since $s$ is exact, this correspondence is a \textit{bijection} between the $H^{p+2,0}$ and $H^{p,2}$ cohomologies to which $H$ and $M$ belong. That is, to each cohomology class $[H]\in H^{p+2,0}$ is associated a unique class $[M]\in H^{p,2}$ of the \textit{same triviality}, and vice versa. The nature of the resulting class $[M]$ depends on the background space being used.
First consider standard superspace. Since the class $[H]$ is unique and nontrivial, $[M]$ must also be unique and nontrivial. The classes $[M]$ must be labeled by a single proportionality constant belonging to the field of scalars (just as the classes $[H]$ are). The difference between $[H]$ and $[M]$ is that $[H]$ consists of $H$ only; there are no \textit{coboundaries} for the $H^{p+2,0}$ cohomology. In general there \textit{are} coboundaries for the $H^{p,2}$ cohomology; they are precisely the $\lambda$ gauge transformations (and we will see that explicit, nonvanishing examples of such gauge transformations do exist). The $D$ cocycle of the $p$-brane therefore has a set of equivalent representatives belonging to $\Omega^{p,2}$; it is this full set which makes the anomalous term a well defined object.
If an extended superspace is used then $H$ is a $D$ coboundary. Based on the historical derivation, one might argue that in this case the anomalous term should not even exist (since a manifestly left invariant WZ term is possible). However, from the cohomology point of view the anomalous term should consist of all possible modifications to the Noether charges that are consistent with charge conservation. In the double complex construction, charge conservation is guaranteed by the descent equations. The anomalous term $[M]$ therefore becomes the space of $D$ coboundaries within $\Omega^{p,2}$. This is identically equal to the representatives $\D M$ resulting from the $\lambda$ gauge transformations. Note that $D$ coboundaries need not be exact; it is therefore possible to obtain nonzero topological integrals for $\bar M$ even in the case of a manifestly left invariant WZ term.
We summarize with the following:
\begin{theorem}[cohomology]
The anomalous term is the restriction of $H^{p,2}$ to forms that are $D$ cohomologous to $H$.
\end{theorem}
\begin{theorem}[uniqueness]
In the standard background, the anomalous term is the unique, Lorentz invariant, $D$ nontrivial class of dimensionality $p+1$ (uniqueness is up to a proportionality constant).
\end{theorem}
From the second of these we conclude that in standard superspace it is possible to find the anomalous term without solving descent equations. If a single $D$ nontrivial representative within $H^{p,2}$ can be found then the entire anomalous term will be generated by the $\lambda$ gauge transformations. This class is unique (up to the constant of proportionality which labels the classes). In superspaces which allow manifestly left invariant Lagrangians, the anomalous term is the set of $D$ coboundaries generated by the $\lambda$ gauge transformations.
Note that the above arguments apply only to the superspace representation $M$ of the anomalous term. The associated topological anomalous term $\bar M$ may vanish for topological reasons separate from $D$ cohomology. For example, if we choose to compactify \textit{no} dimensions, or if the brane does not ``wrap", then topological integrals such as $\bar M$ must identically vanish. If we compactify only \textit{some} dimensions then we may find that in standard superspace there do exist gauges in which the topological anomalous term vanishes, since a gauge transformation may shift the form $M$ into a trivial sector of the cohomology of the spatial section. We will see an explicit example of this in section \ref{sec:Application to the GS superstring}.
To summarize, the $p$-brane has an associated $D$ cocycle defined by the representative $H\in H^{p+2,0}$. The Noether charge algebra can be modified by a topological anomalous term deriving from cocycle representatives $M\in H^{p,2}$. The representatives are not unique due to the presence of $\lambda$ gauge transformations of the cocycle. These transformations themselves represent topological integrals which can be nonzero. The anomalous term is well defined as a cohomology class, where elements related by $\lambda$ gauge transformations are to be considered equivalent. Since each representative of the anomalous term defines an extended supertranslation algebra, each algebra in the spectrum can be considered as being equivalent from a $D$ cohomology point of view.
It is interesting to note that all the cocycle representatives of ghost degree two or less have physical interpretations:
\begin{itemize}
\item
$H$ measures the modification to the right generator algebra.
\item
$sB$ measures the left variation of the WZ term.
\item
$sW$ measures the modification to the left generator algebra.
\end{itemize}
One may ask if any other representatives are significant. The only one remaining in the case of the string is the ghost degree three element $sN$. Consider the following modified algebra\footnote{We present this for the sake of interest only since we have no physical interpretation for modified algebras resulting from $N$.}:
\be
[Q_{A},Q_{B}\}=-t_{AB}{}^{C}Q_{C}+N_{AB}.
\ee
One finds that the Jacobi identity of this algebra is generated by $sN$:
\be
(-1)^{AC}sN_{ABC}=(-1)^{AC}[Q_A,[Q_B,Q_C\}\}+\mathsf{cycles}.
\ee
$sN$ therefore determines whether or not $N$ can define an extended superalgebra. Based on the cocycle triviality arguments we conclude that $N$ defines extensions of extended backgrounds, but not of the standard background. Applying the same argument to $M$ (and using $sM=0$), we verify the claim of section \ref{sec:The algebra of left generators} that $M$ generates extensions of both standard and extended backgrounds.
We finally note that the argument which shows that $H^{p+2,0}$ is unique in standard superspace implies the same for $H^{0,p+2}$. That is, the class containing $sB^{0,p+1}$ consists of one element. The components of $sB^{0,p+1}$ must also be proportional to those of $H$ since the construction of a nontrivial representative in $H^{0,p+2}$ has the same mathematical content as the construction of a nontrivial representative in $H^{p+2,0}$.
\section{Application to the GS superstring}
\label{sec:Application to the GS superstring}
To illustrate the above formalism we consider the case of the GS superstring. After presenting the action, the modified algebras of the left/right generators are found. The effect of the cocycle gauge transformations is then investigated.
\subsection{Superstring actions}
We wish to study the effects that the following may have upon the results:
\begin{itemize}
\item
Extending the background superspace (in order to allow manifestly symmetric WZ terms to be used).
\item
Changing the WZ term.
\end{itemize}
For this purpose we use an action that has free parameters (``switches"). The action can be used in the standard superspace background and also in the two extended ones of section \ref{sec:Extended algebras}. The switches allow one of three WZ terms to be used, or alternatively no WZ term at all. The action is:
\bea
\label{3:action}
S_{k,s,\bar s}&=&-\int d^{2}\s \sqrt{-g}\ \bigg [1-\frac{k}{2}\e^{ij}\bigg (\tb\G_{i}\del_{j}\t-s\bigg [1-\frac{\bar s}{2}\bigg ]\del_{i}\t^{\m}\del_{j}\p_{\m}\\
&&-s\bar s\del_{i}y_{n}\del_{j}x^{n}\bigg )\bigg ].\nn
\eea
The switches $k$, $s$ and $\bar s$ are restricted to the following values:\\
\\
$\begin{array}{ll}
k=\{-1,0,1\} & $controls the existence and sign of the WZ term.$\\
s=\{0,1\} & $switches on a manifestly invariant WZ term.$\\
\bar s=\{0,1\} & $controls the type of invariant WZ term.$
\end{array}$\\
\\
$k=0$ gives the NG action. For $k\neq 0$ we have three possibilities.
\begin{itemize}
\item
$s=0$ gives the standard WZ term on standard superspace. This results in the standard $\k$ symmetric GS superstring action. The corresponding Lagrangian is only left invariant up to a total derivative.
\item
$(s,\bar s)=(1,0)$ gives a manifestly left invariant WZ term that exists on the superspace of the Green algebra \cite{siegel94,bergshoeff95}. The resulting action can be brought to the form:
\be
S_{k,1,0}=-\int d^{2}\s \sqrt{-g} \bigg [1+\frac{k}{2}\e^{ij}(L_{i}{}^{\a}L_{j\a})\bigg ],
\ee
showing clearly the manifest left invariance. The WZ 2-form in this case is:
\be
B=\frac{k}{2}L^{\a}L_{\a}.
\ee
\item
$(s,\bar s)=(1,1)$ gives another manifestly left invariant WZ term that exists on the superspace of the extended Green algebra \cite{chrys99}. In this case:
\be
\label{2 term LI WZ form}
B=-\frac{k}{2}L^{a}L_{a}+\frac{k}{4}L^{\a}L_{\a}.
\ee
\end{itemize}
\subsection{Constraint and right generator algebras}
The action (\ref{3:action}) yields the bosonic momentum:
\be
P_{m}=-\gph g^{0i}L_{i}{}^{a}\eta_{am}-\frac{k}{2}\tgdotl{m}.
\ee
The momenta other than $P_{m}$ can be written in terms of $P_{m}$ and $Z^{M}$. These equations are then written in the form of constraints on phase space\footnote{These are the ``modified" constraints of the general section. We have dropped the tilde since we are no longer considering the NG and GS actions separately.}:
\bea
C_{\m}&=&P_{\m}
+\half \gtu{m}{\m}P_{m}
+\frac{k}{2}L_{1}^{a}\gtl{a}{\m}
+\frac{k}{4}s\bar s(\G^{n}\t)_{\m}\del_{1}y_{n}\\
&&-\frac{sk}{2}\bigg [1-\frac{\bar s}{2}\bigg ]\del_{1}\p_{\m}\nn\\
C^{m}&=&P^{m}-\frac{k}{2}s\bar s\del_{1}x^{m}\nn\\
C^{\m}&=&P^{\m}-\frac{sk}{2}\bigg [1-\frac{\bar s}{2}\bigg ]\del_{1}\t^{\m}\nn.
\eea
The 1-form $\bar B$ is found to be:
\bea
\bar B_{m}&=&-\frac{k}{2}\tgdotl{m}-\frac{k}{2}s\bar s\del_{1}y_{m}\\
\bar B_{\m}&=&\frac{ks}{2}\bigg [1-\frac{\bar s}{2}\bigg ]\del_{1}\p_{\m}-\frac{k}{2}\del_{1}x^{m}\gtl{m}{\m}\nn\\
\bar B^{m}&=&\frac{k}{2}s\bar s\del_{1}x^{m}\nn\\
\bar B^{\m}&=&\frac{ks}{2}\bigg [1-\frac{\bar s}{2}\bigg ]\del_{1}\t^{\m}\nn.
\eea
In standard superspace, the $C_{\m}$ coincide with the right generators for the phase space, but for the extended algebras it is the linear combinations of section \ref{3:sec:constraints} that generate the right action. These are:
\be
\label{3:rightwithA}
C_{A}=L_{A}{}^{M}P_{M}-\bar B_{A},
\ee
where $\bar B_{A}=L_{A}{}^{M}\bar B_{M}$.
For the string we require $D\in\{3,4,6,10\}$ \cite{evans88}. The Fierz identity becomes:
\be
\label{2:simplifying}
\G^{a}{}_{(\a\b}\G_{a\d)\e}=0.
\ee
Using this, the Poisson bracket algebra of the constraints is found to be:
\bea
\label{3:rightalg}
\{C_{\a}(\s),C_{\b}(\s')\}&=&\d(\sa-\sa')(\G^{a}{}_{\a\b}\widetilde{\bar D}_{a}+k\G_{a\a\b}L_1{}^a)(\s)\\
\ [C^{a}(\s),C_{\b}(\s')]&=&-\d(\sa-\sa')\G^a{}_{\b\g}C^{\g}(\s)\nn,
\eea
with all other brackets vanishing. The second bracket is an example illustrating the fact that although the modification $H^a{}_\b$ vanishes, the associated constraint bracket is nonzero for the extended Green algebra because the minimal algebra has a noncentral generator $\Sigma^a$. Note that the constraint $C^a$ does not exist on standard or Green superspaces, and in these cases only the first bracket is present.
The algebra of right generators is slightly more general than (\ref{3:rightalg}) since there is a generator $D_{a}$ that is not reflected as a constraint. Using the bar map (\ref{3:form correspondence}) and the components of $H$:
\be
H_{c\b\a}=k\G_{c\b\a}
\ee
we obtain:
\bea
\bar H_{\a\b}&=&-k\G_{a\a\b}L_{1}{}^{a}\\
\bar H_{a\b}&=&k(\G_{a}\del_{1}\t)_{\b}\nn
\eea
as the only nonzero components of the modification. The first of these is seen to agree with the first bracket of (\ref{3:rightalg}). The second is not present in the constraint case.
\subsection{Left generator algebra}
\subsubsection{Standard superspace action}
\label{sec:standard superspace action}
Let us find a representative of the anomalous term by solving the descent equations. First, using the Fierz identity one finds for the variation of the WZ form:
\bea
\label{6:std variation}
Q_{\a}B&=&-\frac{k}{2}L^{b}(\G_{b}d\t)_{\a}\\
&=&-\frac{k}{2}d\bigg [(dx^{b}-\sixth d\bar\t\G^b\t)(\G_{b}\t)_{\a}\bigg ]\nn.
\eea
The bosonic symmetries are manifest (i.e. $Q_{a}B=0$). Thus:
\be
\label{6:std solution for W}
W=\frac{k}{2}e^{\a}(dx^{b}-\sixth d\bar\t\G^b\t)(\G_{b}\t)_{\a}
\ee
is a solution for the potential $W$. Evaluating $M=sW$ and using the Fierz identity we find that all $\t$ dependence is lost:
\be
\label{6:standard modification}
M_{\a\b}=kdx^{m}\G_{m\a\b},
\ee
with all other components vanishing. Using the map (\ref{integrated bar map}) we then find $\bar M$:
\be
\label{6:standard physical modification}
\bar M_{\a\b}=-k\int d\s^{1}\del_{1}x^{m}\G_{m\a\b}.
\ee
This integral can be nonzero whenever the spatial section has nontrivial topology in the bosonic sector. It is equivalent to the previously known result \cite{azc89} except that we have not needed to assume trivial fermionic topology. One of the new points is that (\ref{6:std variation}) determines $W$ only up to a gauge transformation (which we have called $\lambda$). The resulting anomalous term $M$ is not gauge invariant under such transformations. In fact, we now show that if fermionic topology is trivial then the topological anomalous term (\ref{6:standard physical modification}) is gauge equivalent to zero.
The following gauge field satisfies the conditions of Lorentz invariance and dimensionality $p+1=2$:
\be
\label{6:explicit lambda}
\lambda=-ke^a x^{b}\eta_{ab}.
\ee
Let us find its effect upon the solutions (\ref{6:std solution for W}) and (\ref{6:standard modification}) for $W$ and $M$. Firstly:
\bea
\D W&=&d\lambda\\
&=&-ke^a dx^{b}\eta_{ab}\nn.
\eea
Using $\D M=sd\lambda$ we then find:
\bea
\D M_{\a\b}&=&-kdx^{m}\G_{m\a\b}\\
\D M_{a \b}&=&-\frac{k}{2}(\G_{a}d\t)_{\b}\nn\\
\D M_{ab}&=&0\nn.
\eea
We see that $\D M_{\a\b}$ is closed but not exact whenever $dx^{m}$ is. Now, the de Rham nontriviality of $dx^{m}$ is the condition under which the original representative (\ref{6:standard physical modification}) is nonzero. Therefore, in this case the gauge transformation $\D\bar M$ is nonzero whenever $\bar M$ itself is.
After the gauge transformation, the alternative representative $M'$ is:
\be
\label{6:mirror symmetry gauge}
M'_{a \b}=-\frac{k}{2}(\G_{a}d\t)_{\b}.
\ee
We have thus traded nonzero $M_{\a\b}$ for nonzero $M_{a\b}$. However, when converted to the topological anomalous term this becomes a topological $\t$ integral of the type (\ref{6:closed theta integral}):
\be
\label{6:QP top charge}
\bar M'_{a \b}=\frac{k}{2}\int d\s^{1}(\G_{a}\del_{1}\t)_{\b}.
\ee
Therefore, even if the standard quasi-invariant Lagrangian is used, when fermionic topology is trivial, the topological charge algebra is gauge equivalent to the minimal algebra.
The more interesting case occurs when nontrivial fermionic topology is formally allowed. In this case, the integral (\ref{6:QP top charge}) can be nonzero. Let us repeat the above procedure using instead the associated one parameter family of gauge transformations parameterized by a constant $a$:
\be
\label{6:gauge xfm one parameter}
\lambda=-ake^a x^{b}\eta_{ab}.
\ee
First we show that this is in fact the most general gauge transformation. There are two more possibilities for $\lambda$ with the correct Lorentz and dimensionality properties. The first is:
\be
\label{6:redundant gauge xfm}
\lambda'=-\frac{ak}{2}x^{a}\bar e\G_{a}\t.
\ee
Defining $\D'W=d\lambda '$, one can verify that although $\D W$ differs from $\D' W$, the algebra modifications $\D M$ and $\D'M$ are the same. In the context of this paper it is the algebra itself that is important, not any particular representation of its generators. The transformation (\ref{6:redundant gauge xfm}) is therefore equivalent to (\ref{6:gauge xfm one parameter}). The only other possibilities appear to be gauge fields of the form:
\be
\lambda''=\bar e\G^{a_1\ldots a_b}\t \bar\t\G_{a_1\ldots a_b}\t,
\ee
where $b$ is such that $\G_{a_1\ldots a_b\a\b}$ is antisymmetric. These transformations leave $M$ invariant, and are hence redundant. We may therefore take (\ref{6:gauge xfm one parameter}) as the most general transformation. Applying this to the representative (\ref{6:standard physical modification}) one finds the equivalence class $[\bar M]$ of topological anomalous terms, with representatives parameterized by the gauge parameter $a$:
\bea
\ [\bar M]_{\a\b}&=&-(1-a)k\G_{m\a\b}\int d\s^{1}\del_{1}x^{m}\\
\ [\bar M]_{a \b}&=&\frac{ak}{2}\int d\s^{1}(\G_{a}\del_{1}\t)_{\b}\nn.
\eea
By introducing appropriately defined new generators we now show that this anomalous term generates extended superalgebras. The new generators are simply the topological charges:
\bea
\bar \Sigma^{a}&=&\frac{k}{2}\int d\s^{1}\del_{1}x^{a}\\
\bar \Sigma^{\g}&=&\frac{k}{2}\int d\s^{1}\del_{1}\t^{\g}\nn.
\eea
Note that $\bar \Sigma^{a}$ and $\bar \Sigma^{\g}$ are nonzero only when the associated superspace dimension is compact and the spatial section of the string wraps around it. Upon adding these to the set of conserved charges:
\bea
\widetilde {\bar Q}_{\a}&=&R_{\a}{}^{M}P_{M}-\frac{k}{2}\int d\s^{1}(\del_{1}x^{m}-\sixth\del_{1}\bar\t\G^{m}\t)\gtl{m}{\a}\\
\widetilde {\bar P}_{a}&=&R_{a}{}^{M}P_{M}+ak\int d\s^{1}\del_{1}x^{m}\eta_{ma}\nn,
\eea
we then obtain the following algebra under Poisson bracket:
\bea
\label{6:gauge fixed algebra}
\{\widetilde {\bar Q}_{\alpha},\widetilde {\bar Q}_{\beta}\}&=&-\Gamma^{b}{}_{\alpha\beta}\widetilde {\bar P}_{b}
-2(1-a)\Gamma_{b}{}_{\alpha\beta}\bar \Sigma^{b}\\
\ [\widetilde {\bar Q}_{\a},\widetilde {\bar P}_{b}]&=&-a\Gamma_{b\a\g}\bar \Sigma^{\g}\nn\\
\ [\widetilde {\bar Q}_{\a},\bar \Sigma^{b}]&=&-\half \Gamma^b{}_{\a\g}\bar \Sigma^{\g}\nn.
\eea
We will check that the Jacobi identity is satisfied. The only nontrivial possibility is:
\be
[\widetilde {\bar Q}_{\a},\{\widetilde {\bar Q}_{\b},\widetilde {\bar Q}_{\g}\}]+\mathsf{cycles}=3\G^{a}{}_{(\a\b}\G_{a\g)\d}\bar\Sigma^{\d},
\ee
which vanishes by the Fierz identity. We note three special cases:
\begin{itemize}
\item
For $a=1$ the extra generator $\bar \Sigma^{a}$ is redundant and may be excluded since it appears nowhere on the RHS of a bracket. We then recover the Green algebra\footnote{Negative signs relative to the background superalgebras are expected due to the use of operators instead of superalgebra generators. Redefinition of the operators with a sign reversal gives the background superalgebra.}.
\item
For $a=\half$ we rescale $\bar \Sigma^\a$ with a factor of $\half$ and recover the extended Green algebra.
\item
Turning off the gauge transformation altogether ($a=0$) results in a variant in which $\widetilde {\bar P}_{a}$ is central. The structure of this algebra is of the type considered in \cite{peeters03}:
\bea
\{Q,Q\}&\sim &P+P'\\
\ [Q,P']&\sim &\Sigma\nn.
\eea
\end{itemize}
An important point is that the spectrum (\ref{6:gauge fixed algebra}) cannot be obtained by simply rescaling the known algebras. It is therefore a generalization which yields new superalgebras.
We see that the outcome of the construction is a spectrum of superalgebras parameterized by a free constant. The algebras are constructed by identifying topological charges with new superalgebra generators. One can then decompose the ideal arising from the topological anomalous term. The anomalous term, which is the modification to the Noether charge algebra in the presence of the nontrivial WZ term, contains a gauge freedom. The free constant of the algebra represents the part of the gauge freedom which is consistent with Lorentz invariance and dimensionality requirements. The spectrum of algebras contains the three superalgebra extensions that have so far been associated with the string. We emphasize two departures from prior literature that were required:
\begin{itemize}
\item
Since representatives of the anomalous term are not gauge invariant, it is well defined only as an entire cohomology class. A free constant parameterizes the class.
\item
The fermionic extensions of the superalgebra resulting from the anomalous term are topological integrals. If we formally allow nontrivial fermionic topology, these charges can be \textit{physically} realized. However, regardless of topological considerations, the extended superalgebras generated by the mechanism can always be \textit{abstractly} realized in the operator-form representation.
\end{itemize}
\subsubsection{Extended superspace actions}
\label{4:subsubsec:manifest examples}
The motivation behind using an extended background superspace was to enable a manifestly left invariant WZ term to be used. The left invariant WZ form is generated by a $\psi$ gauge transformation on the standard WZ form:\\
\\
$\bar s=0$:
\bea
\D B&=&-d\psi\\
&=&\frac{k}{2}d\t^{\m}d\p_{\m}\nn.
\eea
$\bar s=1$:
\bea
\D B&=&-d\psi\\
&=&-\frac{k}{2}dx^{m}dy_m+\frac{k}{4}d\t^{\m}d\p_{\m}\nn.
\eea
The manifest left invariance of the $s=1$ action allows us to choose vanishing components for $W$. $M=0$ is then a representative of the anomalous term. As expected, $M$ is therefore $D$ trivial for the extended superspace actions as a result of manifest left invariance.
When using extended superspaces it is just as valid to use the standard WZ term as any other one (since they are gauge equivalent). Let us therefore consider using the standard action on an extended superspace. In this case the extra available coordinates still trivialize the anomalous term. For example, in the case of the Green algebra one can modify $W$ from (\ref{6:std solution for W}) using:
\bea
\D W&=&d\lambda\\
&=&-\frac{k}{2}e^{\a}d\p_{\a}\nn.
\eea
This completes $W$ into an $s$ closed form:
\be
W=-\frac{k}{2}e^\a L_\a.
\ee
Therefore $M=0$ in this gauge (even though $W$ is non-zero). We see that even when a quasi-invariant WZ term is used, the $D$ cocycle is still trivialized by extending the superspace appropriately. This is consistent with the general observation made in section \ref{sec:Anomalous term cohomology} that changing the WZ term does not affect the anomalous term; only changing the background will have an effect.
In the extended superspace case there are many more possibilities for the $\lambda$ gauge transformations since one can form new $\lambda$ fields using the extra coordinates. One might further extend the extended background superalgebras in this way. However, the number of possibilities for $\lambda$ is considerable and the algebras obtained can be large. Since there is currently no direct physical application of such algebras (unlike the extensions of \textit{standard} superspace considered in this paper) we will not pursue this possibility here.
\section{Comments}
\label{sec:Conclusion}
In section \ref{sec:standard superspace action}, the spectrum of algebras for the superstring was shown to contain three known extended algebras. Two of these (the Green algebra and extended Green algebra) have already found application in allowing a manifestly left invariant string WZ term to be constructed. It turns out that the entire spectrum (\ref{6:gauge fixed algebra}) of algebras for the superstring can be used this way. We find that the general solution for the WZ form $B$ takes the same form as in (\ref{2 term LI WZ form}) (the calculation is quite simple and will not be given here). In the general $p$-brane case it is possible that the cocycle approach may generate those superalgebras which allow the construction of left invariant WZ forms. Work on this issue is currently in preparation.
In this work our attention has been restricted to $p$-branes only for brevity. Similar principles to those of the $p$-brane WZ term also apply to the WZ terms of D-branes and M-branes \cite{chrys99}. The additional feature of these branes is the presence of worldvolume gauge fields. With minimal modifications to allow for these fields, the cocycle construction can also be applied to these branes. For the traditional (bosonic topology only) approach to Noether charge algebras of D-branes and M-branes see \cite{sorokin97,hammer97,Hackett03-M-brane-charges}. Work on D-brane charge algebras using the methods of this paper is currently in preparation.
The previously derived structure of the anomalous term arises in the cocycle construction in a particular choice of gauge. This simplified structure relates to the PBRS construction, where the modified algebra is written in the form of a projector \cite{townsend97}. This represents the physical situation in supergravity field theory where half the supersymmetries are broken. The work of this paper shows that allowing for $\lambda$ gauge freedom results in an expanded definition of the anomalous term. It would be interesting to revisit the PBRS construction to determine whether the new possibilities for the anomalous term can be incorporated. Ideally one would like to find a generalization of PBRS which is $\lambda$ covariant.
\subsection{Acknowledgments}
I would like to thank I. N. McArthur for helpful suggestions and critical reading of the manuscript. I would also like to thank S. M. Kuzenko for critical reading of the manuscript.
\section{Introduction}
Neural sequence-to-sequence models with attention have become the \textit{de facto} methods for machine translation~\cite{bahdanau2014neural,vaswani2017attention}. NMT models require a large amount of parallel data to surpass the quality of phrase-based statistical models, and they are very sensitive to data quality~\cite{koehn2017six}. As a conditional text generation task, machine translation contains both \textit{intrinsic} uncertainty, where a given sentence usually has multiple valid reference translations, and \textit{extrinsic} uncertainty, due to noise in the sentence alignment that produces parallel training data~\cite{ott2018analyzing}. %
As an option for handling data uncertainty, latent variable models such as variational autoencoders (VAE) have been investigated in language modeling and conditional text generation~\cite{miao2016neural,zhang2016variational,yang2017improved}. However, in contrast to their success when applied to computer vision tasks~\cite{kingma2013auto,rezende2014stochastic}, VAEs in natural language processing suffer from \textit{posterior collapse}, where the learnt latent code is ignored by the decoder~\cite{bowman2015generating}.
In this work, we propose to address posterior collapse when using latent variable models in neural machine translation. First, we provide an analysis of the evidence lower bound (ELBO) of the conditional variational autoencoder (CVAE) commonly used in conditional text generation. Our analysis reveals that optimizing CVAE's ELBO not only inevitably leads to vanishing divergence of the posterior from the prior during training, but also to decreasing mutual information between latent codes and data. Based on this insight, we propose two modifications of CVAE's ELBO to address this problem: 1) we explicitly add mutual information back to the training objective in a principled way, and 2) we use a factorized decoder, predicting ``bag of words" as an auxiliary decoding distribution to regularize latent variables, finding that both are complementary. We summarize our contributions as follows:
\begin{enumerate}
\item
We improve CVAE by enhancing mutual information between latent variables and data, effectively mitigating posterior collapse in conditional text generation.
\item
We apply the proposed model to neural machine translation with the Transformer architecture. Experiments demonstrate that latent variables are not ignored even in the presence of the powerful autoregressive decoder. Compared to variational NMT with the CVAE architecture or the non-latent Transformer, the proposed improvements yield greater robustness and data-efficiency.
\item We extend the proposed model to semi-supervised learning with monolingual data, and show that it achieves superior performance on self-training by effectively learning from source-side monolingual data.
\end{enumerate}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{model_joint2.pdf}
\end{center}
\caption{Model architecture, including training with only parallel data, and joint training with monolingual data.}
\label{fig:training}
\end{figure*}
\section{Background}
\subsection{Neural Machine Translation}
Problem instances in machine translation are pairs of sequences \((\bm{x} \triangleq [x_1, \ldots, x_m], \bm{y} \triangleq [y_1, \ldots, y_n])\), where \(\bm{x}\) and \(\bm{y}\) represent the source and target sentences, respectively.
Conventionally, a neural machine translation (NMT) model is a parameterized conditional distribution whose likelihood factors in an autoregressive fashion:
\begin{equation}
p_\theta\left(\bm{y}\mid\bm{x}\right) = \prod_{i=1}^{|\bm{y}|} p_\theta\left(y_i \mid \bm{x}, \bm{y}_{<i}\right)\text{.}
\end{equation}
The dominant translation paradigm first represents the source sentence as a sequence of contextualized vectors (using the \emph{encoder}), then decodes this representation token-by-token into a target hypothesis according to the above factorization.
The parameters \(\theta\) are learned by optimizing the log-likelihood of training pairs with stochastic gradient methods \cite{bottou2004large}. Decoding the model occurs in a deterministic fashion, using an efficient approximate search like beam search \cite{tillmann-ney-2003-word}. Recently, Transformer with multi-head attention has become the state of the art for NMT \cite{vaswani2017attention}.
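To make this concrete, the following minimal Python sketch implements greedy decoding under the autoregressive factorization above; \texttt{model.encode} and \texttt{model.step} are hypothetical stand-ins for an encoder pass and a single decoder step rather than the API of any particular library, and beam search generalizes the same loop by keeping the $k$ best prefixes at each step.
\begin{verbatim}
import torch

def greedy_decode(model, src_ids, bos_id, eos_id, max_len=128):
    # Greedy instance of p(y|x) = prod_i p(y_i | x, y_<i).
    memory = model.encode(src_ids)      # contextualized source vectors
    ys = [bos_id]
    for _ in range(max_len):
        # One decoder step: scores for the next token y_i.
        logits = model.step(memory, torch.tensor([ys]))
        next_id = int(logits[0, -1].argmax())
        ys.append(next_id)
        if next_id == eos_id:
            break
    return ys
\end{verbatim}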
\subsection{Conditional Variational Autoencoder (CVAE)}
Our NMT approach extends the conditional variational autoencoder (CVAE) \cite{sohn2015learning}, of which variational NMT \cite{zhang2016variational} is a particular case. It introduces a latent variable $\bm{z}$ to model the conditional distribution:
\begin{equation}
p_\theta(\bm{y} \mid \bm{x}) = \int_{\bm{z}}p_\theta(\bm{y}\mid \bm{z}, \bm{x}) \cdot p(\bm{z} \mid \bm{x})\, \mathrm{d}\bm{z}
\text{.}
\label{eqn:log-likelihood}
\end{equation}
However, it is intractable to directly marginalize \(\bm{z}\). Instead, the CVAE objective is to maximize the \textbf{evidence lower bound (ELBO)} of the \mbox{(log-)}likelihood:
\begin{multline}
\mathcal{L}_{\mathrm{CVAE}}(\phi, \theta; \bm{x}, \bm{y}) = \Expect_{q_{\phi}(\bm{z}\mid \bm{x}, \bm{y})} \left[\log p_\theta(\bm{y}\mid \bm{x}, \bm{z})\right] \\
- \KL(q_{\phi}(\bm{z}\mid \bm{x}, \bm{y}) \parallel p_\theta(\bm{z} \mid \bm{x}))
\text{,}
\label{eqn:cvae}
\end{multline}
where $\KL$ represents the Kullback--Leibler (KL) divergence between two distributions.
Learning is done by amortized variational inference, where the variational distribution \(q_{\phi}(\bm{z}\mid \bm{x}, \bm{y})\) is an inference network parameterized by \(\phi\).
\subsection{Posterior Collapse}
Posterior collapse can be explained mathematically by analysis of the ELBO objective, as well as from the perspective of a powerful decoder. We consider both in this subsection.
We first provide an analysis of CVAE's objective and identify its drawback. Recall that our computed loss approximates the loss on the true data distribution by using a finite number of samples:
\begin{equation}
\mathcal{L} = \Expect_{p_{\mathcal{D}}(\bm{x}, \bm{y})} \left[ \mathcal{L}_{\mathrm{CVAE}}(\phi, \theta; \bm{x}, \bm{y}) \right]
\end{equation}
Thus, the KL term is:
\begin{align}
&\Expect_{p_\mathcal{D}(\bm{x}, \bm{y})} \left[\KL(q_\phi(\bm{z} \mid \bm{x}, \bm{y}) \parallel p_\theta(\bm{z} \mid \bm{x})) \right] \nonumber \\
&\triangleq \Expect_{p_\mathcal{D}(\bm{x}, \bm{y})}\Expect_{q_{\phi}(\bm{z}\mid \bm{x}, \bm{y})} \left[\log q_{\phi}(\bm{z}\mid \bm{x}, \bm{y}) - \log p(\bm{z}\mid \bm{x})\right] \nonumber \\
&= \sum_{\bm{x}, \bm{y}} q(\bm{x}, \bm{y}, \bm{z}) \log \frac{q (\bm{x}, \bm{y}, \bm{z})}{p(\bm{x}, \bm{y}, \bm{z})} \nonumber \\
&= \Expect_{\bm{x}, \bm{y}, \bm{z}}\log \frac{q(\bm{x}, \bm{y} \mid \bm{z}) q(\bm{z})}{p(\bm{x}, \bm{y}) p(\bm{z})} \nonumber \\
%
&= \Expect_{q_\phi(\bm{z})}\Expect_{q_\phi(\bm{x}, \bm{y} \mid \bm{z})} \log \frac{q(\bm{x}, \bm{y} \mid \bm{z})}{p_\mathcal{D}(\bm{x}, \bm{y})} + \Expect_{q_\phi(\bm{x}, \bm{y}, \bm{z})} \log \frac{q(\bm{z})}{p(\bm{z})} \nonumber \\
&= \underbrace{-\Entr(\bm{x}, \bm{y} \mid \bm{z}) + \Entr(\bm{x}, \bm{y})}_{\triangleq \MI_{q_{\phi}}(\bm{z}; \bm{x}, \bm{y})} %
+ \underbrace{
\Expect_{q_\phi(\bm{z})} \log \frac{q(\bm{z})}{p(\bm{z})}
%
}_{\triangleq \KL(q_{\phi}(\bm{z}) \parallel p(\bm{z} ) )} \label{eqn:kl}
\end{align}
The third line comes from multiplying the numerator and denominator by \(p_\mathcal{D}(\bm{x}, \bm{y})\) following \citet{hoffman2016elbo}, the fact that \(p(\bm{z} \mid \bm{x})\) is conditionally independent of \(\bm{y}\), and defining \(p_{\mathcal{D}}(\bm{x}, \bm{y}) \triangleq \frac{1}{N}\) for all \(N\) training samples \((\bm{x}, \bm{y}) \in \mathcal{D}\). The fifth line comes from factoring and conditional independence.
As the two resulting terms are non-negative \cite{cover-thomas-2006-elements}, the global minimum of \Cref{eqn:kl} is \(\MI_{q_{\phi}}(\bm{z}; \bm{x}, \bm{y}) = \KL(q_{\phi}(\bm{z}) \parallel p(\bm{z} ) ) = 0 \). Unfortunately, at this point, the consequence of the optimization is that \(\bm{z}\) is conditionally independent of the data.
Another explanation of posterior collapse is the ``powerful decoder" perspective: an autoregressive model with large capacity comes to approximate a complex distribution \emph{without using the latent variables} \cite{bowman2015generating,he2019lagging}. This is a challenge for NMT, which requires a powerful decoder such as Transformer with direct attention to the encoder. %
\section{Addressing Posterior Collapse}
\subsection{CVAE Guided by Mutual Information}
\subsubsection{Adding $\MI_{q_{\phi}}(\bm{z}; \bm{x},\bm{y})$ to ELBO}
To combat the optimization dilemma from \cref{eqn:kl}, we explicitly add the mutual information term to the CVAE's ELBO and obtain a new training objective:
\begin{multline}
\label{eq:micvae}
\mathcal{L}_{\mathrm{MICVAE}} =\mathcal{L}_{\mathrm{CVAE}} + \MI_{q_{\phi}}(\bm{z}; \bm{x}, \bm{y}) \\ =
\Expect_{q_{\phi}(\bm{z}\mid \bm{x}, \bm{y})}\log p(\bm{y}\mid \bm{x}, \bm{z}) - \KL(q_{\phi}(\bm{z}) \parallel p(\bm{z} ) )
%
\text{.}
\end{multline}
The new training objective aims to match the aggregated posterior distribution of the latent variable $q_{\phi}(\bm{z})$ to the aggregated prior distribution $p(\bm{z})$. It can be seen as an extension of InfoVAE~\cite{zhao2017infovae} to conditional generative models, where we have overcome the mismatch between the (joint) data distribution \(p_\mathcal{D}(\bm{x}, \bm{y})\) and the (conditional) log-likelihood objective \(p_\theta(\bm{y} \mid \bm{x})\). %
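To illustrate how the new KL term can be computed in practice, the sketch below estimates $\KL(q_{\phi}(\bm{z}) \parallel p(\bm{z}))$ for a single categorical latent variable by averaging per-example posteriors over a minibatch; the function name, tensor shapes, and the uniform prior are illustrative assumptions rather than a description of our exact implementation.
\begin{verbatim}
import torch

def aggregated_kl(posteriors, prior_probs, eps=1e-8):
    # posteriors:  [batch, n_classes] rows of q_phi(z | x, y)
    # prior_probs: [n_classes] prior p(z), e.g. uniform
    q_agg = posteriors.mean(dim=0)  # q_phi(z) ~ (1/B) sum_b q(z|x_b,y_b)
    return torch.sum(q_agg * (torch.log(q_agg + eps)
                              - torch.log(prior_probs + eps)))

# Example: a batch of 4 posteriors over 16 latent classes.
q = torch.softmax(torch.randn(4, 16), dim=-1)
p = torch.full((16,), 1.0 / 16)
kl = aggregated_kl(q, p)   # replaces the per-example KL of CVAE
\end{verbatim}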
\subsubsection{Guiding $\bm{z}$ to Encode Global Information}
Several existing approaches weaken the decoder to encourage latent variables to be utilized, which is not preferred in practice \cite{bowman2015generating,guljarani2016pixelvae}. Here we propose a different approach: explicitly guiding the information encoded in $\bm{z}$ without reducing the decoder's capacity.
Inspired by an information-theoretic view of posterior collapse using Bits-Back Coding theory~\cite{wallace-freeman-1987-estimation,Hinton:1993:KNN:168304.168306,chen2016variational}, we add an auxiliary loss for $\bm{z}$ to encode information which cannot be modelled locally by the autoregressive decoder distribution $\prod_t p_\theta(y_t \mid \bm{x}, \bm{y}_{<t})$. We use bag-of-words (BoW) prediction as the auxiliary loss. %
It encodes global information while having a non-autoregressive factorization $\prod_t p_\psi(y_t \mid \bm{z})$. The auxiliary decoder complements the autoregressive decoder (which is locally factorized) by combining predictions at the Softmax layer, i.e.\ $p(y_t \mid \bm{x}, \bm{y}_{<t}, \bm{z})$ is a \textbf{mixture of softmaxes} \cite{yang2018breaking}:
\begin{multline}
p(y_t \mid \cdot) = (1-\lambda) \cdot p_{\theta}(y_t \mid \bm{x}, \bm{y}_{<t}, \bm{z}) \\
+ \lambda \cdot p_{\psi}(y_t \mid \bm{z})
\text{.}
\end{multline}
Thus, the bag-of-words objective regularizes the log-likelihood bound.
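For concreteness, the mixture at the Softmax layer can be sketched as follows, assuming the two logit tensors come from the autoregressive decoder and the BoW decoder, respectively; the mixing happens at the probability level, so the result remains a valid distribution.
\begin{verbatim}
import torch

def mixture_of_softmaxes(logits_ar, logits_bow, lam=0.1):
    # p(y_t | .) = (1 - lam) p_theta(y_t | x, y_<t, z)
    #              + lam     p_psi(y_t | z)
    p_ar = torch.softmax(logits_ar, dim=-1)    # [batch, vocab]
    p_bow = torch.softmax(logits_bow, dim=-1)  # [batch, vocab]
    return (1.0 - lam) * p_ar + lam * p_bow
\end{verbatim}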
\subsection{Architecture}
\paragraph{ Inference Network} We use discrete latent variables with reparameterization via Gumbel-Softmax~\cite{jang2016categorical} to allow backpropagation through discrete sampling. Compared to the multivariate Gaussian distribution commonly used in VAE and CVAE, our parameterization allows us to explicitly account for multiple modes in the data. To make our model more general, we introduce a \emph{set} of discrete latent variables \(\bm{z} = \{\bm{z}_1, \ldots, \bm{z}_K\}\) which are independently sampled from their own inference networks $\Phi_k$. Specifically, each $\Phi_k$ computes dot product attention with encoder outputs $\bm{h}\in \mathbb{R}^d $:
\begin{equation}
\bm{C}_k = \text{Softmax}(\frac{\bm{e}_{k}\bm{W}^k(\bm{h}\bm{W}^h)^\top}{\sqrt{d}})\bm{h}\bm{W}^h
\text{.}
\end{equation}
We can now sample $\bm{z}_k$ by the Gumbel-Softmax reparameterization trick~\cite{jang2016categorical}:
\begin{equation}
\begin{split}
\bm{z}_k = \text{GumbelSoftmax}(\bm{C}_k)
=\text{softmax}\left(\frac{\bm{C}_k + \bm{g}}{\tau}\right),
\end{split}
\end{equation}
where $\bm{g}=-\log(-\log(\bm{u})), \bm{u}\sim \text{Uniform}(0,1)$ is the Gumbel noise and $\tau$ is a fixed temperature (we use $\tau=1$ in this paper). At inference time, we use a discrete version by directly sampling from the latent variable distribution.
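A self-contained sketch of this sampling step is given below; the optional straight-through discretization is one common way to realize the discrete version used at inference, shown here as an illustrative choice.
\begin{verbatim}
import torch

def gumbel_softmax(logits, tau=1.0, hard=False):
    # Reparameterized sample from a categorical distribution.
    u = torch.rand_like(logits).clamp(1e-9, 1.0)
    g = -torch.log(-torch.log(u))          # Gumbel(0, 1) noise
    y = torch.softmax((logits + g) / tau, dim=-1)
    if hard:
        # Straight-through: one-hot forward, soft gradients backward.
        idx = y.argmax(dim=-1, keepdim=True)
        y_hard = torch.zeros_like(y).scatter_(-1, idx, 1.0)
        y = (y_hard - y).detach() + y
    return y
\end{verbatim}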
\paragraph{BoW Auxiliary Decoder}
Given an inferred sample $\bm{z}_k \sim \Phi_k(\bm{h})$, the BoW decoder predicts all tokens at once without considering their order.
We compute the cross-entropy loss for the predicted tokens over the output vocabulary space \(V\):
\begin{equation}
\mathcal{L}_{\mathrm{BoW}} = -\sum_{i=1}^{|V|} p_i \log \hat{p}_{\psi}(y_i \mid \bm{z}), \quad \sum_{i=1}^{|V|} p_i = 1
\text{.}
\end{equation}
We take the empirical distribution $p_i$ to be a token's frequency within a sentence normalized by its total frequency within a mini-batch, mitigating the effect of frequent (stop) words. $\hat{p}_{\psi}$ is computed by conditioning on the latent code only, without direct attention to encoder outputs. We use dot-product attention between the latent embeddings and the token embeddings (each of dimensionality \(d\)):
\begin{equation}
\label{eq:bow_loss}
p_{\psi}(y_i \mid \bm{z}) = \text{Softmax}_i \left(\frac{\Embedding(\bm{z})\Embedding^T(V)}{\sqrt{d}}\right)
\text{.}
\end{equation}
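Putting the normalized empirical distribution and \Cref{eq:bow_loss} together, a schematic implementation of the BoW objective could read as follows; tensor shapes and names are illustrative.
\begin{verbatim}
import torch

def bow_loss(latent_emb, vocab_emb, target_counts, eps=1e-8):
    # latent_emb:    [batch, d]     embedding of the latent code z
    # vocab_emb:     [vocab, d]     output token embeddings
    # target_counts: [batch, vocab] token counts per sentence
    d = latent_emb.size(-1)
    logits = latent_emb @ vocab_emb.t() / d ** 0.5
    log_p = torch.log_softmax(logits, dim=-1)  # log p_psi(y_i | z)
    # Per-sentence counts normalized by batch-level frequency,
    # down-weighting frequent (stop) words, then renormalized.
    p = target_counts / (target_counts.sum(dim=0, keepdim=True) + eps)
    p = p / (p.sum(dim=-1, keepdim=True) + eps)
    return -(p * log_p).sum(dim=-1).mean()
\end{verbatim}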
\subsection{Training}
\label{sec:model_training}
We train our model using amortized variational inference, where samples $\bm{z}$ are drawn from the posterior distributions to get a Monte Carlo estimate of the gradient. In addition to standard CVAE supervised learning with parallel data, we also extend our model to be jointly trained by adding monolingual data.
\paragraph{Semi-supervised learning} We apply the same modification to VAE's ELBO, following \citet{zhao2017infovae}. For jointly training with source-side monolingual data, we add $\MI_{q_{\phi}}(\bm{z}; \bm{x})$ to the ELBO\footnote{Learning to copy the source text has proven useful for low-resource NMT \cite{currey-etal-2017-copied}.}, and for target-side monolingual data, we add $\MI_{q_{\phi}}(\bm{z}; \bm{y})$. %
The joint objective sums the modified CVAE and VAE objectives%
:
\begin{equation}
\label{eq:mono_loss}
\begin{split}
\mathcal{L}_{\mathrm{Mono}} = {} & \Expect_{q_{\phi}(\bm{z}\mid \bm{x})} \log p(\bm{x} \mid \bm{z}) \\
&- \KL\left(\frac{1}{N} \sum_{n=1}^N q_{\phi}(\bm{z} \mid \bm{x}_n) \;\bigg{|\bigg|}\; p(\bm{z})\right)
\end{split}
\end{equation}
\begin{equation}
\label{eq:joint_loss}
\mathcal{L}_{\mathrm{Joint}} = \mathcal{L}_{\mathrm{MICVAE}} + \mathcal{L}_{\mathrm{Mono}}
\end{equation}
\Cref{alg:main} describes the overall training strategy.
\begin{algorithm}[t]
\caption{\label{alg:main}Training Strategy}
\begin{algorithmic}[1]
\STATE $\Phi_{enc}, \Phi_{k=1, ..., K}, \Theta_{dec},\Theta_{BoW} \gets \text{initialize parameters}$
\WHILE{$\Phi_{enc}, \Theta_{dec},\Theta_{BoW}, \Phi_{k=1, ..., K}$ have not converged}
\STATE{Sample $(\mathbf{x}, \mathbf{y})$ from $D^{\text{bitext}} $}
\STATE{Compute $\mathcal{L}_{\mathrm{MICVAE}}$ with \Cref{eq:micvae}}
\STATE{Train $\Phi_{enc}, \Theta_{dec}, \Phi_{k=1, ..., K}$ with $\mathcal{L}_{\mathrm{MICVAE}}$}
\STATE{Compute $\mathcal{L}_{\mathrm{BoW}}$ with \Cref{eq:bow_loss}}
\STATE{Train $\Phi_{enc},\Theta_{BoW}, \Phi_{k=1, ..., K}$ with $\mathcal{L}_{\mathrm{BoW}}$}
\IF{\text{self\_training}}
\STATE{Sample $\mathbf{x}$ from $D^{\text{mono}} $}
\STATE{Compute $\mathcal{L}_{\mathrm{Mono}}$ with \Cref{eq:mono_loss}}
\STATE{Train $\Phi_{enc}, \Phi_{k=1, ..., K}$ with $\mathcal{L}_{\mathrm{Mono}}$}
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\section{Experiments}
Here we describe our experiments, showing that our techniques have practical value for both mitigating posterior collapse and improving translation quality.
\subsection{Setup}
\paragraph{Datasets}
First, we evaluate our models on standard WMT benchmark datasets. Second, we focus on two representative challenges in NMT: low-resource and robustness to noisy data.
\begin{description}
\item[WMT14 German--English] We use data from the WMT14 news translation shared task, which has 3.9M sentence pairs for training with BPE tokenization.
\item[WMT16 Romanian--English] We use data from the WMT16 news translation shared task. We use the same BPE-preprocessed \cite{sennrich-etal-2016-neural} train, dev and test splits as in \citet{gu2017non} with 608k sentence pairs for training.
\item[Low resource benchmark (FLoRes) Sinhala--English] We use the same preprocessed data as in \citet{guzman2019two}. There are 646k sentence pairs.
\item[MT for Noisy Text (MTNT) French--English] We use 30K subword units built jointly from source and target sentences, and only keep sentences with less than 100 tokens. For training, there are 34,380 sentence pairs for English-French and 17,616 sentence pairs for French--English \cite{michel2018mtnt}. We also used 18,676 \emph{monolingual} sentences per language from the same data source (Reddit).
\end{description}
\paragraph{Implementation} All of our models are implemented using the Transformer architecture. For WMT14 De--En and WMT16 Ro--En, we use the base configuration \cite{vaswani2017attention}: 6 blocks, with 512-dimensional embeddings, 2048-dimensional FFN, and 8 attention heads. For FLoRes (low-resource) and MTNT (both low-resource and noisy), we use a smaller Transformer: 4 layers, 256-dimensional embeddings, 1024-dimensional inner layers, and 4 attention heads.
Input and output embeddings are shared between the inference network and the decoder. We use $K=4$ categorical latent variables, each of dimension 16, found by grid search on the validation set. Auxiliary bag-of-words predictions are combined with the decoder prediction with $\lambda=0.1$. All models are optimized using Adam with $\beta_1=0.9$, $\beta_2=0.98$, $\epsilon=10^{-8}$, weight decay of 0.001, and the same warmup and learning rate schedule as in \citet{ott2018scaling}. All models are trained on 8 \textsc{Nvidia} V100 GPUs with 32K tokens per mini-batch. We train WMT14 De--En with 200k updates and all other models with 100k updates.
We employ joint BPE vocabularies. The sizes are 32k for En--De and En--Ro; 30k for Fr--En; and 3k for Si--En. In addition, we use a word dropout rate of 0.4 during training of the baseline and latent variable models, which is complementary to our approach.
\paragraph{Baselines} We compare our model to three baselines: 1) \textit{Transformer, non-latent}: the standard Transformer model without latent variables (denoted as non-latent), 2) \textit{VNMT}: a CVAE model with Gaussian latent variables as proposed in Variational NMT by \citet{zhang2016variational}, which we reimplemented using Transformer, and 3) \textit{DCVAE}: a CVAE model with the same discrete latent variable parameterization as ours but without the proposed enhancements for promoting mutual information, i.e., the only differences from our model are the modified ELBO and the bag-of-words regularizer.
\section{Main Results}
\subsection{Preventing Posterior Collapse}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\textwidth]{kl-ablation-all.pdf}
\end{center}
\caption{Row (A): comparison of KL and mutual information between baseline (DCVAE, solid triangle, orange color) and our model (solid circle, teal color). Row (B) and (C): ablation study on relative contribution from MICVAE and BoW. All metrics are computed on WMT16 Ro--En validation set during training.}
\label{fig:kl_mi}
\end{figure*}
In this set of experiments, we compare our model to a standard DCVAE without the proposed enhancement of mutual information. We report four metrics of posterior collapse on the validation set of WMT16 Ro--En:
\begin{enumerate}
\item Kullback--Leibler divergence (KL).
\item Mutual information between the latent variable and the data: $\MI_{q_{\phi}}(\bm{z}, \bm{x})$ and $\MI_{q_{\phi}}(\bm{z},\bm{y})$.
%
\item Negative log-likelihood (NLL) per token.
\end{enumerate}
\Cref{tab:collapse_metrics} shows that when using the standard DCVAE ELBO, even with the common practice of KL annealing (KLA), both the KL loss and the mutual information settle to almost 0, which is consistent with the analysis in \Cref{eqn:kl}. We also plot the progression of \(\KL\), \(\MI_{q_{\phi}}(\bm{z}; \bm{x})\), and \(\MI_{q_{\phi}}(\bm{z}; \bm{y})\) during training in \Cref{fig:kl_mi}. The posterior collapse of the baseline model is apparent: both the \(\KL\) and mutual information terms drop to 0 at the beginning of training as a result of the ELBO's design. On the other hand, our model, without using any annealing schedule, can effectively increase mutual information and prevent the KL loss from settling to a degenerate solution early on.
\begin{table}
\caption{Results on improving posterior collapse, computed on the WMT16 Ro--En validation set. The KL value refers to $\KL(q_{\phi}(\bm{z}\mid \bm{x}, \bm{y}) \parallel p(\bm{z} \mid \bm{x} ))$ for DCVAE and $\KL(q_{\phi}(\bm{z}) \parallel p(\bm{z}))$ for our model.}
\smallskip
\centering
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{l r r r r}
\toprule
Model & \(\KL\) & $\MI_{q_{\phi}}(\bm{z},\bm{x})$ & $\MI_{q_{\phi}}(\bm{z},\bm{y})$ & NLL \\
\midrule
DCVAE + KLA & 0.001 & 0.001 & 4.2\textsc{e}-6 & 3.17 \\
Our model & 0.17 & 0.18 & 0.31 & 3.16 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:collapse_metrics}
\end{table}
\subsection{Translation Quality}
We report corpus-level BLEU \cite{papineni2002bleu}\footnote{Particularly, we use detokenized SacreBLEU \cite{post-2018-call}.} on the test sets where the translations are generated by sampling each $z_k$ with soft-assignment (vs. argmax). %
\paragraph{Supervised Learning on Parallel Data} First, we evaluate our model's performance when trained with parallel data on standard WMT datasets. \Cref{parallel_results} shows that our model consistently outperforms both the VNMT and DCVAE models---which require ad-hoc KL annealing (KLA)---while remaining on par with a strong Transformer baseline.
\begin{table}
\caption{BLEU score on WMT benchmarks.}
\smallskip
\centering
\adjustbox{max width=\linewidth}{
\begin{tabular}{l r r r r }
\toprule
& \multicolumn{2}{c}{WMT16} & \multicolumn{2}{c}{WMT14} \\
%
\cmidrule(lr){2-3} \cmidrule(lr){4-5} %
Model & Ro--En & En--Ro & De--En & En--De \\ %
\midrule
VNMT & 34.20 & 34.27 & 30.35 & 25.84 \\
DCVAE & 34.16 & 34.51 & 29.76 & 25.46 \\
Our model & 34.76 & 34.97 & 31.39 & 26.42 \\
\midrule
Non-latent & 34.73 & 34.54 & 30.89 & 26.36 \\
\bottomrule
\end{tabular}}
\label{parallel_results}
\end{table}
\paragraph{Semi-supervised with Source-side Monolingual Data}
Leveraging monolingual data is a common practice to improve low-resource NMT. Current approaches have mostly focused on using target-side monolingual data through ``back-translation'' as a form of data augmentation, while how to effectively leverage source-side monolingual data to facilitate self-training is still an open challenge \cite{sennrich2015improving,zhang2016exploiting}. We use the joint training objective described in \Cref{eq:joint_loss}. For a fair comparison, we also extend VNMT and DCVAE with the same joint training algorithm, i.e., the newly added monolingual data is used to train their corresponding sequence encoder and inference network with the standard VAE ELBO. That is, the only difference is that our model is trained to promote mutual information $\MI_{q_{\phi}}(\bm{z}, \bm{x})$ and $\MI_{q_{\phi}}(\bm{z}, \bm{y})$. As shown in \Cref{table:mono}, the proposed model thereby brings larger gains during self-training with source-side monolingual data.
\begin{table}
\caption{Translation performance (BLEU) of utilizing source-side monolingual data.}
\smallskip
\centering
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{l r r}
\toprule
Model & Fr--En & En--Fr \\
\midrule
DCVAE & 26.37 & 26.11 \\
+ source mono & 27.30 & 26.40 \\
Our model & 28.58 & 26.31 \\
+ source mono & 29.81 & 26.69 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{table:mono}
\end{table}
\paragraph{Robustness to noisy data}
While high-quality parallel data is scarce for low-resource language pairs, weakly aligned sentence pairs can be mined from massive unpaired data such as Paracrawl\footnote{\url{https://paracrawl.eu/}}. We evaluate our model's performance when augmenting the training set with increasingly noisy parallel data filtered by Zipporah \cite{xu2017zipporah}. \Cref{fig:si_en} %
shows the results in the Sinhala--English direction. Our model consistently outperforms the standard Transformer, which struggles as more (and noisier) data is added.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{bar_plot-2.pdf}
\caption{BLEU when increasing the amount of noisy parallel data in training, Si--En.}
\label{fig:si_en}
\end{figure}
\section{Analysis}
\subsection{Ablation Study}
We further investigate how the different ingredients of our proposed approach contribute to preventing posterior collapse and improving translation quality. We conduct further experiments with two variants of the proposed model: 1) \textit{modified ELBO only}: adding only the mutual information term to the training objective, without gradients from $\mathcal{L}_{\mathrm{BoW}}$, and 2) \textit{BoW only}: equivalent to DCVAE combined with the BoW decoder.
First, we perform the same collapse-metric evaluation as in \Cref{tab:collapse_metrics}. \Cref{fig:kl_mi} (B) suggests that by explicitly adding the mutual information term back to the training objective, both $\MI_{q_{\phi}}(\bm{z}, \bm{x})$ and $\MI_{q_{\phi}}(\bm{z}, \bm{y})$ are effectively raised, while the remaining aggregated KL term is still optimized to zero. Such behavior is consistent with the analysis in \Cref{eqn:kl}. On the other hand, regularizing $\bm{z}$ with the BoW decoder only, as shown in \Cref{fig:kl_mi} (C), is very effective in preventing KL vanishing as well as in increasing mutual information. When the two approaches are combined, as shown in \Cref{fig:kl_mi} (A), the model retains higher mutual information for both $\MI_{q_{\phi}}(\bm{z}, \bm{x})$ and $\MI_{q_{\phi}}(\bm{z}, \bm{y})$.
Next, we look into whether such differences in mutual information lead to differences in translation quality. We compare two models---BoW only (\Cref{fig:kl_mi} (C)) and both (\Cref{fig:kl_mi} (A))---on the WMT14 De--En and WMT16 Ro--En test sets. \Cref{table:ablation-bleu} reveals that the difference matters more in the low-data regime.
\begin{table}
\caption{Ablation study on translation quality (BLEU).}
\smallskip
\centering
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{l r r}
\toprule
Model & De--En (3.9M) & Ro--En (608K) \\
\midrule
Both & 31.39 & 34.76 \\
BoW only &31.14 & 34.22 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{table:ablation-bleu}
\end{table}
\subsection{Analysis of Outputs}
Delving into model predictions helps us understand how our model outperforms the others. We provide some 1-best predictions from the Romanian--English data in \Cref{tab:outputs}.
Several examples show that our model produces more fluent and accurate translations than the baseline or VNMT. VNMT often struggles by introducing disfluent words, and both VNMT and the baseline can select justifiable but incorrect words. For instance, in our second example, the gender and animacy of the possessor are not specified in Romanian. Our model selects a more plausible pronoun for this context.
More broadly, we find that the reference translations are quite loose and context-dependent (rather than word-for-word translations), making it difficult for models to reproduce---they give reasonable translations with greater fidelity to source word order and content. (As an extreme example, the English translation of \emph{ed miliband isi cunostea dusmanii} adds information to the beginning: \emph{for all his foolishness ed miliband knew who his enemies were}; no model is able to add this.) Our model often makes superior judgments in terms of lexical choice and fluency.
\begin{table}[t]
\centering
\caption{Translation examples from the baseline Transformer, VNMT, and our model. Disfluent words or absences are marked in \textcolor{red}{red}, and slightly incorrect lexical choice is marked in \textcolor{blue}{blue}. Romanian diacritics have been stripped.}
\smallskip
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{l}
\toprule
\textbf{Source}: ma intristeaza foarte tare .\\
\textbf{Reference}: that really saddens me . \\
\textbf{Base}: i am very saddened .\\
\textbf{VNMT}: i am saddened very \textcolor{red}{loudly} . \hfill\emph{(Wrong sense of \emph{tare})}\\
\textbf{Ours}: i am very saddened .\\
\midrule
\textbf{Source}: cred ca executia sa este gresita .\\
\textbf{Reference}: i believe his execution is wrong .\\
\textbf{Base}: i believe that \textcolor{blue}{its} execution is wrong .\\
\textbf{VNMT}: i believe that \textcolor{blue}{its} execution is wrong .\\
\textbf{Ours}: i believe that his execution is wrong .\\
\midrule
\textbf{Source}: da , chinatown\\
\textbf{Reference}: yes , chinatown\\
\textbf{Base}: yes , chinatown\\
\textbf{VNMT}: yes , \textcolor{red}{thin} \textcolor{blue}{.}\\
\textbf{Ours}: yes , chinatown\\
\midrule
\textbf{Source}: nu stiu cine va fi propus pentru aceasta functie .\\
\textbf{Reference}: i do not know who will be proposed for this position .\\
\textbf{Base}: i do not know who will be proposed for this \textcolor{blue}{function} .\\
\textbf{VNMT}: i do not know who will be proposed for this \textcolor{blue}{function} .\\
\textbf{Ours}: i do not know who will be proposed for this position .\\
\midrule
\textbf{Source}: recrutarea , o prioritate tot mai mare pentru companii\\
\textbf{Reference}: recruitment , a growing priority for companies\\
\textbf{Base}: recruitment , \textcolor{blue}{an increasing} priority for companies\\
\textbf{VNMT}: recruitment , \textcolor{red}{[article missing]} increasing priority for companies\\
\textbf{Ours}: recruitment , a growing priority for companies\\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:outputs}
\end{table}
\subsection{Analysis of Latent Variables}
Finally, we probe whether different latent variables encode different information. We randomly sample 100 sentences from two test sets of distinct domains, MTNT (Reddit comments) and WMT (news), with 50 sentences each. We plot the t-SNE projection of the corresponding latent variable samples $z_k$ inferred from $\Phi_k$, $k=1,2,3,4$, respectively. Figure \ref{fig:z_tsne} indicates that different latent variables learn to organize the data in different manners, although there is no clear signal that any of them exclusively specializes in encoding a domain label. We leave a thorough analysis of their information specialization to future work.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{z-plots-2.pdf}
\end{center}
\caption{t-SNE visualization of $\bm{z}_k$, $k=1,2,3,4$ samples inferred from 100 sentences from two datasets with distinct domains, MTNT (orchid) and WMT news (green).}
\label{fig:z_tsne}
\end{figure}
\section{Related Work}
Unlike most prior work in (conditional) text generation, we are able to address posterior collapse without requiring an annealing schedule \cite{bowman2015generating}, a weakened decoder \cite{guljarani2016pixelvae}, or a restriction on the variational family \cite{razavi2018preventing}.
Unlike \citet{ma-etal-2018-bag}, who also employ bag-of-words as an objective for NMT, our bag-of-words decoder only has access to \(\bm{z}\), not the encoder states. Conversely, unlike \citet{weng-etal-2017-neural}, our generative decoder has access to both the latent variable and the encoder states, and the bag-of-words prediction is handled by a separate set of parameters.
Posterior collapse for text VAE was first identified in language modeling \cite{bowman2015generating}. %
VNMT \cite{zhang2016variational} applies CVAE with Gaussian priors to conditional text generation. VRNMT \cite{su2018variational} extends VNMT by modeling the translation process in greater granularity. All of them needed manually designed annealing schedules to increase KL loss to mitigate posterior collapse. Discrete latent variables have been applied to NMT \cite{gu2017non,shen2019mixture,kaiser2017one} but did not use variational inference or address posterior collapse. Tackling posterior collapse has received more attention lately, with general approaches such as aggressively trained inference networks \cite{he2019lagging}, skip connections \cite{dieng2018avoiding}, and more expressive priors \cite{razavi2018preventing,tomczak2017vae}.
\section{Conclusion}
We have presented a conditional generative model with latent variables whose distribution is learned with variational inference, and applied it to the task of machine translation. Our approach does not require an annealing schedule or a hamstrung decoder to avoid posterior collapse. Instead, by providing a new analysis of the conditional VAE objective, improving it in a principled way, and incorporating an auxiliary decoding objective, we measurably rely on the latent variables. In addition to preventing posterior collapse, our approach improves translation quality in terms of BLEU. Empirical evaluation demonstrates that the proposed method has improved performance in dealing with uncertainty in data, including weakly supervised learning from source-side monolingual data as well as noisy parallel data.
\section{Introduction}
\label{sec:I}
Neutron stars, which are stellar remnants of core-collapse supernova
explosions that occur at the end of the lives of massive stars, are
composed of matter under extreme conditions, namely, such high
density and large neutron excess as to be very difficult to realize on Earth.
In fact, the density inside the star can significantly exceed the
normal nuclear density under strong gravitational field,
while the neutron excess can become extremely large under charge neutral
and beta equilibrium conditions \citep{NS}. Thus, observations of
neutron stars are expected to help us to probe the physics under such
extreme conditions, particularly given the difficulty in terrestrial
laboratories in obtaining relevant information about matter in
neutron stars. Even at the present time when information from two solar
mass neutron stars and a binary neutron star merger is available \citep{A2018},
however, the equation of state (EOS) of neutron star matter and
hence the neutron star structure are still uncertain.
In spite of the uncertainties in the EOS of neutron star matter,
a schematic view of neutron star structure can be drawn
as follows. Under the envelope composed mostly of a liquid
metal, the matter is considered to have a lattice structure via the
inter-ionic Coulomb interaction. This crystalline region is
referred to as a crust. In the deepest region of the crust,
below which the matter becomes uniform and constitutes a core, nuclei present
are so closely packed that the nuclear shape, which is normally roughly
spherical, could turn to cylinder (``spaghetti''), slab (``lasagna''), tube or
cylindrical hole (``anti-spaghetti''), and bubble or spherical hole (``Swiss
cheese'') as the density increases.
Such exotic shapes are often called nuclear pasta \citep{LRP1993,O1993}.
This nuclear pasta is embedded in a gas of dripped neutrons and thus
can be viewed as a liquid-gas mixed phase of nuclear matter. Since the
crystalline order of the phases of cylindrical nuclei, slab-like nuclei,
and tubes is low-dimensional, furthermore, these phases are liquid
crystals \citep{PP1998}. Interestingly, it is known that the appearance
of pasta structures depends strongly on the slope parameter of nuclear
symmetry energy \citep{OI2007}, of which the determination is one of
the important problems in nuclear physics \citep{L2017}.
Observational evidence for the presence of nuclear pasta would thus be
highly desired.
To extract information of neutron star interiors from observations,
asteroseismology is a very powerful technique, just like the
seismology for the Earth and the helioseismology for the Sun.
That is, since the characteristic frequencies observed from
neutron stars may well be more or less related to the interior
properties, one could obtain the interior information by somehow
observing such frequencies, identifying them as eigenfrequencies of
some global oscillation modes, and then solving an inverse problem.
Such frequencies could be obtained from gravitational waves that
would radiate from the interiors and reach us due to the strong permeability.
In fact, many possibilities of extracting the
neutron star properties via direct detection of the gravitational waves
have been proposed (e.g., \cite{AK1996,STM2001,SKH2004,SYMT2011,DGKK2013}).
Study in this direction is so promising as to make us expect to obtain
important information on the neutron star interiors in the near future.
As long as neutron star asteroseismology is concerned,
quasi-periodic oscillations (QPOs) in X-rays have been the only known
electromagnetic signals of global oscillations. Up to now, three
giant flares have been observed from different soft gamma repeaters (SGRs).
In two of them, namely, SGR 1900+14 and SGR 1806-20, several QPOs have
been found in the X-ray afterglow following the respective
giant flare, where the observed QPO frequencies are in the range of tens of Hz
up to kHz \citep{I2005,SW2005,SW2006}. In SGR 1806-20, another QPO,
i.e., the 57 Hz QPO, was also found from the shorter and less energetic
30 recurrent bursts \citep{QPO2}. Since the central object in the SGR is
considered to be a strongly magnetized neutron star, the observed QPOs
may well come from the global oscillations of the
neutron star. Given that typically, the frequencies induced by
acoustic oscillations in the star are around kHz \citep{VH1995}, one
has difficulty in identifying the QPOs of frequency lower than
$\sim 100$ Hz as the acoustic oscillations. In practice, it is
generally accepted that the mechanisms for explaining such lower QPO
frequencies are either the crustal torsional oscillations, the
magnetic oscillations, or the coupled modes (magneto-elastic oscillations).
However, calculations of the magnetic oscillation frequencies suffer
several uncertainties. The geometry and strength distribution of the
internal magnetic fields are poorly known, although the magnetic
oscillations depend strongly on such magnetic structure
\citep{GCFMS2013}. In addition, one has to take into account the
uncertain core EOS if the magnetic fields penetrate into
the core region. To avoid such uncertainties, in this study we focus on the
crustal torsional oscillations by ignoring the coupling to the magnetic
oscillations. Note that even in the absence of this coupling, the calculated
eigenfrequencies of the crustal torsional oscillations are still controlled by
several physical parameters that are relatively well-known but not yet
determined, i.e., such crustal properties as the shear modulus and the
superfluid density, as well as the star's radius $R$ and mass $M$. By
identifying the observed QPO frequencies as such eigenfrequencies,
therefore, one can manage to constrain the crustal properties \citep{SA2007,SW2009,GNJL2011,SNIO2012,PA2012,SNIO2013a,SNIO2013b,S2014,S2016,SIO2016,SIO2017a,SIO2018}.
In most of these earlier studies of the crustal torsional
oscillations, it was assumed that only the phase of spherical nuclei
oscillates quasiperiodically, while the pasta phases remain free from
oscillations. Since most of the pasta phases are liquid crystals, however,
their elastic properties could be responsible for global oscillations. In
contrast to a naive view that the shear modulus decreases continuously
in the pasta phases and eventually vanishes at the crust-core boundary,
which was adopted in \cite{S2011,PP2016}, we have recently attempted
to introduce a more realistic effect of the pasta elasticity into
the torsional oscillations \citep{SIO2017a,SIO2018}. In this attempt,
it was noted that for slab-like nuclei, the transverse shear response
vanishes for long-wavelength perturbations \citep{dGP1993,PP1998}.
That is, within the linear analysis, the phase of slab-like nuclei
behaves as a fluid. This indicates that the torsional oscillations
that could be excited within the phases of spherical and cylindrical
nuclei would be separable from those within the phases of
tubes and bubbles.
In our recent study \citep{SIO2018}, we calculated eigenfrequencies
of the torsional oscillations that occur inside the phases of
spherical and cylindrical nuclei and showed that the QPO frequencies observed
in SGR 1806-20 and SGR 1900+14, except for the 26 Hz QPO observed in SGR
1806-20, can be explained in terms of such oscillations. Additionally,
since the torsional oscillations are supposed to be confined
within a layer composed of spherical and cylindrical nuclei, we
discussed the overtone torsional oscillations, which have radial
nodes in such a manner that is dependent on the thickness of the layer. By
identifying the kHz QPO observed in SGR 1806-20 as the 1st overtone
torsional oscillation, we attempted to constrain the incompressibility
of symmetric nuclear matter for given $M$ and $R$. By combining the
resultant constraint with the constraint from empirical data for nuclear
giant monopole resonances, furthermore, not only did we manage to
constrain $M$ and $R$, but we obtained a more severe constraint on the
slope parameter $L$ of nuclear symmetry energy.
Even before our previous work \citep{SIO2018}, we suggested the
possibility that the 26 Hz QPO in SGR 1806-20 stems from torsional
oscillations that occur only in a deeper layer of the crust than the
slab phase, i.e., in a layer composed of tubes and bubbles. As a first
step \citep{SIO2017a}, we focused on the torsional oscillations in the
bubble phase alone by simply assuming zero elasticity in the tube phase.
It was noted that the lowest fundamental frequency in the bubble
phase could explain the 26 Hz QPO because the enthalpy density is
relatively small in the bubble phase. In this work, by taking into
account the effect of the tube phase, we will give a more realistic
evaluation of the eigenfrequencies of torsional oscillations that occur
in the tube-bubble layer and thereby examine whether one could still
explain the 26 Hz QPO. Within the same framework, moreover,
we will discuss possible identification of newly found QPOs in SGR
1806-20 by a Bayesian procedure \citep{MCS18}.
In Sec.\ \ref{sec:II}, we summarize a model for the neutron star crust
that is constructed in such a way as to depend on the EOS of
nuclear matter. Section \ref{sec:III} is devoted to description of the
shear modulus that is consistent with the crust model summarized in Sec.\
\ref{sec:II}. In Sec.\ \ref{sec:IV}, we calculate the eigenfrequencies of
fundamental shear torsional oscillations in two elastic layers within the
crust and compare them with the low-lying QPO frequencies observed from
SGR 1806--20. Finally, concluding remarks and details of such comparison
are given in Sec.\ \ref{sec:V} and Appendix \ref{sec:appendix_1}, respectively.
Throughout the text, we use units in which $c=G=1$, where $c$ and $G$ denote
the speed of light and the gravitational constant, respectively.
\section{Model for neutron star crust}
\label{sec:II}
We start with construction of a neutron star crust in a spherically
symmetric configuration. This is because for neutron stars observed as SGRs
the magnetic and rotational energies are much smaller than the
gravitational binding energy \citep{K1998,H1999}. Then, the crust can be
constructed by integrating the Tolman-Oppenheimer-Volkoff (TOV) equation
together with the EOS of matter in the crust. Correspondingly, the
metric is given in spherical polar coordinates as
\begin{equation}
ds^2 = -e^{2\Phi(r)} dt^2 + e^{2\Lambda(r)} dr^2 + r^2 d \theta^2 + r^2\sin^2\theta d\phi^2,
\end{equation}
where $\Lambda(r)$ is directly connected to the mass function, $m(r)$, via
$\exp(-2\Lambda)=1-2m/r$.
It is advantageous that we dispense with the core EOS, which is
significantly uncertain. In integrating the TOV equation, therefore,
we set the values of $R$ and $M$ and then go inward from the star's
surface down to the crust-core boundary \citep{IS1997}.
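Schematically, this inward integration can be carried out as in the following Python sketch, where \texttt{eos\_eps\_of\_p} (the crustal EOS in the form $\varepsilon(p)$) and the crust-core transition pressure \texttt{p\_cc} are placeholders for the inputs described in the remainder of this section, and units are geometrized ($c=G=1$) as in the text.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def tov_rhs(r, y, eos_eps_of_p):
    # TOV equations for y = (m, p), integrated in r.
    m, p = y
    eps = eos_eps_of_p(p)
    dm = 4.0 * np.pi * r**2 * eps
    dp = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) \
         / (r * (r - 2.0 * m))
    return [dm, dp]

def integrate_crust(R, M, eos_eps_of_p, p_cc, p_surf=1e-18):
    # Fix R and M at the surface and integrate inward until the
    # pressure reaches the crust-core value p_cc.
    hit_core = lambda r, y: y[1] - p_cc
    hit_core.terminal = True
    return solve_ivp(tov_rhs, (R, 0.5 * R), [M, p_surf],
                     args=(eos_eps_of_p,), events=hit_core,
                     rtol=1e-8, atol=1e-12)
\end{verbatim}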
To construct the crust in equilibrium, one has to prepare the EOS of
crustal matter that is in beta equilibrium and globally charge neutral.
This EOS can in turn be constructed in such a way that is dependent on
the bulk energy of zero temperature nuclear matter per baryon, which can
generally be expanded in the vicinity of the saturation point of
symmetric nuclear matter with respect to the baryon number density
($n_{\rm b}$) and the neutron excess ($\alpha$) (see \cite{L1981}):
\begin{equation}
w(n_{\rm b}, \alpha) = w_0 + \frac{K_0}{18n_0^2}(n_{\rm b} - n_0)^2 + \left[S_0 + \frac{L}{3n_0}(n_{\rm b} - n_0)\right]\alpha^2. \label{eq:w}
\end{equation}
Here $w_0$ and $K_0$ are the bulk energy and the incompressibility of
the symmetric nuclear matter at the saturation density of $n_{\rm b}=n_0$,
while $S_0$ and $L$ are the parameters that characterize
the nuclear symmetry energy, $S(n_{\rm b})$, i.e., $S_0=S(n_0)$ and
$L=3n_0(dS/dn_{\rm b})$ at $n_{\rm b}=n_0$. Among these five saturation
parameters, $n_0$, $w_0$, and $S_0$ are fairly well
constrained from empirical data for masses and charge radii of stable
nuclei. On the other hand, the constraints on the remaining two parameters,
$K_0$ and $L$, are relatively more difficult to obtain, because
these are related to the density change from $n_{\rm b}=n_0$. In this
study, therefore, we adopt the phenomenological EOSs of crustal
matter that were constructed by \cite{OI2003,OI2007} in such a way as to
depend on $K_0$ and $L$ (hereafter referred to as OI-EOSs). These
EOSs allow us to systematically examine the dependence of the crustal
oscillations on $K_0$ and $L$.
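For reference, Eq.\ (\ref{eq:w}) translates directly into code; the default parameter values in the sketch below are merely illustrative numbers taken from the empirical ranges and from Table~\ref{tab:EOS}.
\begin{verbatim}
def w_bulk(nb, alpha, n0=0.16, w0=-16.0, K0=230.0, S0=31.0, L=42.6):
    # Bulk energy per baryon w(nb, alpha) near the saturation
    # point; nb in fm^-3, alpha = neutron excess, parameters in MeV.
    dn = nb - n0
    return (w0 + K0 / (18.0 * n0**2) * dn**2
            + (S0 + L / (3.0 * n0) * dn) * alpha**2)
\end{verbatim}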
Let us briefly summarize the OI-EOSs. The expression for the
energy of bulk nuclear matter used in the OI-EOSs was
constructed in a Pade form with respect to the density and in a
parabolic approximation with respect to the neutron excess, and fitted to
empirical data for masses and charge radii of stable nuclei within the
Thomas-Fermi approach \citep{OI2003}. Consequently, the saturation
parameters in Eq.\ (\ref{eq:w}) were given for more than 200 sets
of $K_0$ and $y\equiv -K_0S_0/(3n_0L)$. Then, within the Wigner-Seitz
approximation for five nuclear shapes, i.e., sphere, cylinder, slab,
tube, and bubble, the equilibrium nuclear shape and size in the crust
were determined as a function of $n_{\rm b}$ by optimizing the
energy density functional in the presence of a neutralizing
uniform electron gas and a gas of dripped neutrons~\citep{OI2007}.
In this study we confine ourselves to several sets of the saturation
parameters, which cover not only typical but also extreme cases as in
Table~\ref{tab:EOS}. We remark that the typical values are
empirically deduced as, e.g., $30 \lesssim L \lesssim 80$ MeV \citep{Newton2014}
and $K_0=230\pm 40$ MeV~\citep{KM2013} or $250 \lesssim K_0 \lesssim 315$
MeV~\citep{SSM2014}.
Since we focus on the torsional oscillations that are trapped inside
the phases of tubes and bubbles in this study, we also show the
transition densities from the slab to the tube phase (S-CH),
from the tube to the bubble phase (CH-SH), and from the bubble to
the uniform phase (SH-U) in Table~\ref{tab:EOS}. As already
predicted by \cite{OI2007}, pasta structures are less likely
to appear for larger $L$. In fact, some of the
pasta structures are predicted to disappear for the cases with
$L=76.4$ and 146.1 MeV, which are denoted by the asterisk
in the column of $K_0$ in Table~\ref{tab:EOS}. We remark that
the thickness of each pasta phase strongly depends on not only $K_0$
and $L$ but also the stellar compactness ($M/R$) \citep{SIO2017b}.
We also remark that the transition densities tabulated in
Table~\ref{tab:EOS} are not obtained at constant pressure; in a
real situation, the density jumps at the transition pressures, but this
jump is tiny because the transitions are of weak first order.
\begin{table}
\centering
\begin{minipage}{100mm}
\caption{
The transition densities at the S-CH, CH-SH, and SH-U boundaries are shown for
several sets of the OI-EOSs characterized by $K_0$ and $L$. In the
cases in which the asterisk is affixed to the value of $K_0$, some
of the pasta phases are not predicted to appear. The values with $*1$
and $*2$ denote the transition densities from the cylindrical-hole to the
uniform phase and from the phase with spherical nuclei to the
uniform phase, respectively.
}
\begin{tabular}{cc|ccc}
\hline\hline
$K_0$ (MeV) & $L$ (MeV) & S-CH (fm$^{-3}$) & CH-SH (fm$^{-3}$) & SH-U (fm$^{-3}$) \\
\hline
180 & 17.5 & 0.09811 & 0.10206 & 0.10321 \\
180 & 31.0 & 0.08739 & 0.09000 & 0.09068 \\
180 & 52.2 & 0.07733 & 0.07885 & 0.07899 \\
230 & 23.7 & 0.09515 & 0.09817 & 0.09866 \\
230 & 42.6 & 0.08411 & 0.08604 & 0.08637 \\
230 & 73.4 & 0.07284 & 0.07344 & 0.07345 \\
360 & 40.9 & 0.09197 & 0.09379 & 0.09414 \\
$^*$360 & 76.4 & 0.07890 & --- & 0.07918$^{*1}$ \\
$^*$360 & 146.1 & --- & --- & 0.06680$^{*2}$ \\
\hline\hline
\end{tabular}
\label{tab:EOS}
\end{minipage}
\end{table}
In considering the torsional oscillations, furthermore, the
effective enthalpy, $\tilde{H}$, that participates in the oscillations is
another important factor, because the shear velocity $v_s$ is given by
$v_s^2=\mu/\tilde{H}$, where $\mu$ is the shear modulus to be discussed
in the next section, and because the fundamental frequency of the
torsional oscillations is proportional to $v_s$ \citep{HC1980}. In practice,
for the torsional oscillations in the tube and bubble phases, the effective
enthalpy can be expressed as
\begin{equation}
\tilde{H} = \frac{N_i + {\cal R}(A - N_i)}{A}H, \label{eq:H}
\end{equation}
where $N_i$ denotes the number of neutrons inside a single tube or
bubble, $A$ is the total nucleon number in a Wigner-Seitz cell, and $H$
denotes the local enthalpy given by $H=\varepsilon + p$ with the energy density
($\varepsilon$) and pressure ($p$). The coefficient ${\cal R}$ is a parameter
that characterizes a participant ratio, i.e., what fraction of the nucleons
outside the tube or bubble comoves with it non-dissipatively via
entrainment, namely, Bragg scattering off the corresponding lattice. Note
that the non-participant nucleons behave as a superfluid. There are two
extreme cases: All the nucleons inside the Wigner-Seitz cell contribute
to the effective enthalpy for ${\cal R}=1$ (maximum enthalpy), while
no nucleons outside the tube or bubble do so for ${\cal R}=0$ (minimum
enthalpy). In general, ${\cal R}$ has an intermediate value that depends
on the band structure and pairing gap for the nucleons outside the tube or
bubble and hence changes with density. In this study, we simply
consider only the extreme cases in which ${\cal R}$ is constant at
1 or 0 in the whole region of the tube and bubble phases. We remark that the
value of ${\cal R}$ in the bubble phase is predicted to be
$\sim 0.34-0.38$ at $n_{\rm b}=0.08$ fm$^{-3}$, according to the band
calculations by \cite{Chamel2012}. Incidentally, we naively assume that
all the $N_i$ neutrons comove with the interface of the tube or bubble, just
like bubbles in boiled water. This might not be always the case with the
superfluid system considered here in which a non-dissipative hydrodynamic flow
could arise in such a way that some neutrons go across the interface
\citep{MU2016}.
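In practice, Eq.\ (\ref{eq:H}) and its two limits amount to the following simple computation (a schematic Python sketch with illustrative variable names):
\begin{verbatim}
def effective_enthalpy(Ni, A, eps, p, R_part):
    # Effective enthalpy: Ni neutrons inside one tube or bubble,
    # A nucleons per Wigner-Seitz cell, local enthalpy H = eps + p,
    # participant ratio R_part (1: maximum, 0: minimum enthalpy).
    H = eps + p
    return (Ni + R_part * (A - Ni)) / A * H

# The two extreme cases considered in this study:
# H_max = effective_enthalpy(Ni, A, eps, p, 1.0)
# H_min = effective_enthalpy(Ni, A, eps, p, 0.0)
\end{verbatim}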
\section{Shear modulus}
\label{sec:III}
Let us now proceed to the shear modulus, $\mu$, which is
associated with the distortion energy to be produced
by adding long-wavelength transverse shear deformation
of each of the five phases of inhomogeneous nuclear matter.
The distortion energy comes mainly from the change of the Coulomb
energy due to the deformation, and a particular form of
the corresponding shear modulus was adopted in our previous
studies except for the tube phase.
In the case of a bcc Coulomb lattice composed of
spherical nuclei, the effective shear modulus was
originally derived as
\begin{equation}
\mu_{\rm sp} = 0.1194\frac{n_i(Ze)^2}{a}, \label{eq:musp}
\end{equation}
where $n_i$, $Z$, and $a$ denote the ion number density,
the charge number of the nuclei, and the radius of the
Wigner-Seitz cell, i.e., $n_i=(4\pi a^3/3)^{-1}$
\citep{OI1990,SHOII1991}. Note that this $\mu_{\rm sp}$
was obtained via Monte Carlo method by averaging over all
directions of the wave vector of the distortion
with the assumption that each nucleus is a point particle.
Recently, this shear modulus has been
modified by taking into account the effect of electron
screening \citep{KP2013} and the effect of polycrystalline
nature \citep{KP2015}. In this study, however,
we adopt the traditional formula given by Eq.\ (\ref{eq:musp})
for simplicity.
The elastic properties in the rod and slab phases
have been discussed by \cite{dGP1993,PP1998}. The shear
modulus in the phase of cylindrical nuclei was
derived through the deformation energy to be produced
by adding a two-dimensional displacement perpendicular
to the elongated direction of the equilibrium
configuration of cylindrical nuclei. In practice, it can
be effectively expressed as
\begin{equation}
\mu_{\rm cy} = \frac{2}{3}E_{\rm Coul} \times 10^{2.1(w_2-0.3)}, \label{eq:mucy}
\end{equation}
where $E_{\rm Coul}$ and $w_2$ denote the Coulomb energy
per volume of a Wigner-Seitz cell and the volume
fraction of cylindrical nuclei, respectively, and
the coefficient of $2/3$ comes from the average over
all directions between the wave-vector of the distortion
and the elongated direction under the assumption that
crystallites of cylindrical nuclei randomly point.
We remark that
in the liquid drop model $E_{\rm Coul}$ is given by
\begin{equation}
E_{\rm Coul} = \frac{\pi}{2} (\rho_p R_p)^2 w_2\left[\ln\left(\frac{1}{w_2}\right)-1+w_2\right],
\label{eq:E_coul}
\end{equation}
where $\rho_p$ and $R_p$ are the proton charge density and the proton radius
of a cylindrical liquid drop \citep{RPW1983}.
By following a similar line of argument,
it was shown that the deformation energy in the phase of
slab-like nuclei becomes of higher order with respect
to the displacement. That is, this phase behaves as a
fluid within the linear response. This is the reason
why one can consider the torsional oscillations inside the
phases of spherical and cylindrical nuclei separately
from those inside the phases of tubes and bubbles.
The shear modulus in the tube (bubble) phase, i.e., $\mu_{\rm ch}$
($\mu_{\rm sh}$), can be derived in a similar fashion to that
in the phase of cylindrical (spherical) nuclei, because the liquid
crystalline structure of tubes (bubbles) is the same as that in the phase
of cylindrical (spherical) nuclei. In this study, therefore, we adopt
Eq.\ (\ref{eq:mucy}) for the tube phase and Eq.\ (\ref{eq:musp}) for the bubble
phase by properly replacing the relevant quantities in these
formulae: In the tube phase, $w_2$ in Eq.\ (\ref{eq:mucy})
(including $E_{\rm Coul}$) is replaced
by the volume fraction of a gas of dripped neutrons, while in the
bubble phase $n_i$ and $Z$ are replaced by the number density of
bubbles and the effective charge number $Z_{\rm bubble}$ of a bubble,
respectively \citep{SIO2017a}. In practice, $Z_{\rm bubble}$ is given
by $Z_{\rm bubble}=n_QV_{\rm bubble}$, with the volume of the bubble,
$V_{\rm bubble}$, and the effective charge number density of the bubble,
$n_Q$, defined by the difference of the charge number density
inside the bubble from that outside the bubble, i.e.,
$n_Q=-n_{\rm e}-(n_{\rm p}-n_{\rm e})=-n_{\rm p}$ with the proton number density
outside the bubble ($n_{\rm p}$) and the number density of a uniform electron
gas ($n_{\rm e}$).
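In the same illustrative spirit, the following sketch (ours) evaluates
Eqs.\ (\ref{eq:mucy}) and (\ref{eq:E_coul}); the values of $\rho_p$,
$R_p$, and $w_2$ below are assumed placeholders, not results of the
present liquid-drop calculations.
\begin{verbatim}
import math

# Illustrative (assumed) inputs; placeholders only.
rho_p = 1.0e28    # proton charge density of the drop [esu/cm^3]
R_p   = 5.0e-13   # proton radius of the drop [cm]
w2    = 0.2       # volume fraction of cylindrical nuclei

# Coulomb energy per volume of a Wigner-Seitz cell, Eq. (E_coul)
E_coul = (0.5 * math.pi * (rho_p * R_p) ** 2 * w2
          * (math.log(1.0 / w2) - 1.0 + w2))

# Effective shear modulus of the cylindrical phase, Eq. (mucy)
mu_cy = (2.0 / 3.0) * E_coul * 10.0 ** (2.1 * (w2 - 0.3))
print(f"E_Coul = {E_coul:.3e} erg/cm^3, mu_cy = {mu_cy:.3e} erg/cm^3")
\end{verbatim}
For the tube and bubble phases, the replacements described above
($w_2$ by the dripped-neutron gas fraction, and $n_i$ and $Z$ by the
bubble values) would be applied before evaluating these expressions.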
\begin{figure*}
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.5]{mu-K180} &
\includegraphics[scale=0.5]{mu}
\end{tabular}
\end{center}
\caption{
(Color online)
Left: Profile of the shear modulus in the tube phase (thin lines) and bubble phase
(thick lines), calculated for the neutron star models with
$M=1.4M_\odot$ and $R=12$ km. Here, $K_0$ is fixed at 180 MeV,
while $L$ takes the value as labeled in the unit of MeV.
Right: For the neutron star model with $M=1.4M_\odot$ and
$R=12$ km constructed with $K_0=180$ MeV and $L=55.2$ MeV,
the profile of the shear modulus in
the phase of spherical nuclei (Sp) and in the phase of cylindrical nuclei (Cy)
is shown as well as that in the tube (CH) phase and the bubble (SH) phase.
\label{fig:mu}}
\end{figure*}
In Fig.~\ref{fig:mu}, we illustrate the profile of the shear modulus inside the tube
and bubble phases for neutron star models constructed from the first
three sets of the OI-EOSs listed in Table~\ref{tab:EOS}. From this figure,
one can observe that the shear modulus becomes discontinuous at the transition
between the tube and bubble phases, which is similar to the case of the
transition between the phases of spherical and cylindrical nuclei
\citep{SIO2018}. In addition, it is to be noted that the shear modulus
in the tube phase can decrease as the density increases and that this
tendency becomes stronger for larger $L$. This tendency may well
come from the decrease of the volume fraction of a gas of dripped
neutrons with density (e.g., \cite{WI2003}).
\section{Torsional oscillation frequencies and comparison with QPOs}
\label{sec:IV}
We now turn to evaluations of the eigenfrequencies of fundamental
torsional oscillations in the sphere-cylinder and tube-bubble layers of the
crust of a neutron star. To this end, we start with the perturbation
equation in a spherical coordinate system, which is given by linearizing
the relativistic equation of motion that determines the torsional
oscillations \citep{ST1983,Sotani2007} as
\begin{equation}
{\cal Y}'' + \left[\left(\frac{4}{r} + \Phi' - \Lambda'\right)+\frac{\mu'}{\mu}\right]{\cal Y}'
+ \left[\frac{\tilde{H}}{\mu}\omega^2e^{-2\Phi} - \frac{(\ell+2)(\ell-1)}{r^2}\right]e^{2\Lambda}{\cal Y}=0,
\label{eq:perturbation}
\end{equation}
where ${\cal Y}$ denotes the Lagrangian displacement in the $\phi$ direction,
while $\tilde{H}$ is the effective enthalpy given by Eq.\ (\ref{eq:H}).
With the appropriate boundary conditions, the problem to solve becomes an
eigenvalue problem, where $\omega$ is the eigenvalue. Then, the
eigenfrequency of the torsional oscillations $f$ is given by $f=\omega/(2\pi)$.
As for the boundary conditions relevant to the torsional
oscillations that are excited inside the tube and bubble phases,
there are two boundaries, namely, the boundary between the bubble phase
and uniform matter, which corresponds to the inner boundary, and the boundary
between the slab and tube phases, which corresponds to the outer
boundary. In practice, one has to impose the zero-traction conditions
at the inner and outer boundaries, i.e., ${\cal Y}'=0$. In addition, one has
to impose the junction condition at the boundary between the tube and bubble
phases, where the traction should be continuous, i.e.,
\begin{equation}
\mu_{\rm ch}{\cal Y}' = \mu_{\rm sh}{\cal Y}'.
\end{equation}
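To indicate how the eigenfrequencies may be obtained in practice, the
following Python sketch implements a shooting method for a simplified,
flat-spacetime version of Eq.\ (\ref{eq:perturbation}) with
$\Phi=\Lambda=0$ and constant $\mu$ and $\tilde{H}$; all numbers
(layer boundaries, $\mu$, $\tilde{H}$) are assumed placeholders, not
values from the present stellar models.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

mu, H, ell = 1.0e30, 1.0e14, 2     # [erg/cm^3], [g/cm^3], multipole
r_in, r_out = 1.10e6, 1.15e6       # assumed layer boundaries [cm]

def rhs(r, y, omega):
    Y, dY = y
    d2Y = (-(4.0 / r) * dY
           - (H * omega**2 / mu
              - (ell + 2.0) * (ell - 1.0) / r**2) * Y)
    return [dY, d2Y]

def traction_at_outer(omega):
    # Zero-traction start at r_in (Y = 1, Y' = 0); return Y'(r_out).
    sol = solve_ivp(rhs, (r_in, r_out), [1.0, 0.0], args=(omega,),
                    rtol=1e-8, atol=1e-10)
    return sol.y[1, -1]

# Bracket the lowest root of Y'(r_out) = 0 and refine it.
omegas = np.linspace(10.0, 2.0e3, 400)
vals = [traction_at_outer(w) for w in omegas]
for w1, w2, v1, v2 in zip(omegas, omegas[1:], vals, vals[1:]):
    if v1 * v2 < 0.0:
        omega0 = brentq(traction_at_outer, w1, w2)
        print(f"f = {omega0 / (2.0 * np.pi):.1f} Hz")
        break
\end{verbatim}
In the realistic problem the metric functions and the radial profiles
of $\mu$ and $\tilde{H}$ enter the coefficients, and the junction
condition at the tube-bubble interface must be imposed in addition.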
\begin{figure*}
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.5]{t2B1-M14R12a} &
\includegraphics[scale=0.5]{t2B0-M14R12a}
\end{tabular}
\end{center}
\caption{
(Color online)
Fundamental frequencies of the $\ell=2$ torsional oscillations in the tube
and bubble phases, as obtained for the neutron star models with
$M=1.4M_\odot$ and $R=12$ km as well as with various sets of $L$
and $K_0$. Here, the left and right panels correspond to the results
for the maximum and minimum enthalpies that participate in the
oscillations, i.e., ${\cal R}=1$ and 0, respectively (see the text for
details). The solid line denotes the fitting given by Eq.\ (\ref{eq:fitting}).
\label{fig:t2L2-M14R12}}
\end{figure*}
In Fig.~\ref{fig:t2L2-M14R12}, we plot the $\ell=2$ fundamental
frequencies of torsional oscillations in the tube and bubble phases
that are calculated for the neutron star models with $M=1.4M_\odot$ and
$R=12$ km, with various EOS parameter sets shown in
Table~\ref{tab:EOS}, and with the maximum and minimum enthalpies
(${\cal R}=1$ and 0). From this figure, one can observe that the frequency
depends only weakly on $K_0$, but shows a significant sensitivity
to $L$. In fact, we find that the $\ell=2$ fundamental frequencies can
be well fitted as a function of $L$ via
\begin{equation}
{}_0t_2 = c_2^{(0)} + c_2^{(1)}\sqrt{L} + c_2^{(2)}L, \label{eq:fitting}
\end{equation}
where $c_2^{(0)}$, $c_2^{(1)}$, and $c_2^{(2)}$ are the adjustable
parameters that depend on $M$ and $R$ as well as the value of
${\cal R}$. The obtained fitting [Eq.\ (\ref{eq:fitting})] is also
shown in Fig.~\ref{fig:t2L2-M14R12}. We remark that this fitting has
a different functional form from that obtained for the fundamental
frequencies of crustal torsional oscillations in the phases of
spherical and cylindrical nuclei \citep{SIO2018}. We also remark that the
fundamental frequency in the tube and bubble phases can be smaller than
that in the phases of spherical and cylindrical nuclei and that
the fundamental frequencies with general values of $\ell$ can
also be well fitted in the same functional form:
\begin{equation}
{}_0t_\ell = c_\ell^{(0)} + c_\ell^{(1)}\sqrt{L} + c_\ell^{(2)}L,
\label{eq:fitting1}
\end{equation}
with a different set of the adjustable parameters $c_\ell^{(0)}$,
$c_\ell^{(1)}$, and $c_\ell^{(2)}$. Hereafter we will thus attempt to
identify the observed QPO frequencies by using the fitting formula
(\ref{eq:fitting1}) for the tube-bubble layer, in addition to
the formula given in \cite{SIO2018} for the sphere-cylinder layer.
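As an illustration of how such coefficients can be extracted, a
minimal least-squares sketch is shown below; the $(L,f)$ pairs are
invented placeholders standing in for computed frequencies, not data
from this work.
\begin{verbatim}
import numpy as np

# Assumed sample of (L [MeV], frequency [Hz]) pairs; placeholders.
L = np.array([30.0, 40.0, 55.2, 70.0, 85.0])
f = np.array([20.0, 17.5, 15.0, 13.2, 11.8])

# Design matrix for  0t_l = c0 + c1*sqrt(L) + c2*L,  Eq. (fitting1).
A = np.column_stack([np.ones_like(L), np.sqrt(L), L])
c, *_ = np.linalg.lstsq(A, f, rcond=None)
print("c0, c1, c2 =", c)
\end{verbatim}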
Note that the obtained fundamental frequencies in the tube-bubble
layer are generally lower than those obtained in our earlier
analysis by assuming that only the bubble phase is oscillating
\citep{SIO2017a}. This tendency is more significant for larger values
of $L$. This is partly because for larger $L$, the bubble phase is
less likely to appear, as shown in Table~\ref{tab:EOS}, and partly
because the shear modulus and hence the shear velocity is relatively
small in the tube phase, as shown in Fig.\ \ref{fig:mu}. As we shall
see later, therefore, the 26 Hz QPO observed from SGR 1806-20 is
identified as the $\ell=4$ fundamental torsional oscillation in the
tube-bubble layer, in contrast to our earlier analysis \citep{SIO2017a}
in which it was identified as the $\ell=2$ fundamental torsional
oscillation.
Up to now, we have made many attempts to identify the QPOs
observed in SGR 1806-20 and SGR 1900+14 with the crustal torsional
oscillations. As long as we adopt the QPO frequencies derived in the
conventional non-Bayesian analyses of RXTE data for the X-ray afterglows
of the giant flares and the recurrent X-ray outbursts, such identification
has worked out relatively well
\citep{SNIO2012,SNIO2013a,SNIO2013b,SIO2016,SIO2017a,SIO2018}.
In fact, the observed QPOs, except for the 26 Hz QPO in SGR 1806-20,
can be identified as the torsional oscillations inside the phases of
spherical and cylindrical nuclei in such a way that the QPOs of
frequencies lower than $\sim 200$ Hz correspond to the fundamental
oscillations with various values of $\ell$, while the higher QPOs observed
in SGR 1806-20 correspond to the overtones \citep{SIO2018}.
In this case, since it is still uncertain what fraction of the dripped
neutrons in the phase of cylindrical nuclei would be locked to the
motion of protons in the nuclei, we introduced a parameter $N_s/N_d$,
where $N_d$ and $N_s$ respectively denote the number of dripped neutrons
outside the cylindrical nuclei and the number of a superfluid part
of the dripped neutrons that behave independently of the oscillations,
and examined the extreme cases with $N_s/N_d=0$ and 1. We remark that
all (none) of the dripped neutrons outside the cylindrical nuclei
participate in the oscillations for $N_s/N_d=0$ $(1)$. We also
remark that for the corresponding value of $N_s/N_d$ in the phase of
spherical nuclei, we adopt the results by \cite{Chamel2012}, which are
based on the band theory.
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{1806-L-M14R12Ns10a}
\end{center}
\caption{
(Color online)
The $\ell=2$, 3, and 4 fundamental frequencies (painted regions)
of the torsional oscillations in the layer of the tube and bubble
phases, calculated as a function of $L$ for the neutron star
models with $M=1.4M_\odot$ and $R=12$ km. The lower and upper
boundaries of the painted regions correspond to the results obtained
for the maximum enthalpy (${\cal R}=1$) and the minimum enthalpy
(${\cal R}=0$), respectively. For reference, the low-lying QPO
frequencies derived by the conventional non-Bayesian analysis for
SGR 1806-20 are shown by horizontal lines. The QPO frequencies
except 26 Hz can be interpreted as manifestations of the
$\ell=2$, 3, 6, and 10 fundamental torsional oscillations that are
excited in the layer composed of spherical and cylindrical nuclei
\citep{SIO2018}, as illustrated by the solid lines that
denote the corresponding eigenfrequencies obtained by assuming that
the dripped neutrons outside the cylindrical nuclei do not
participate in the oscillations (minimum enthalpy, i.e.,
$N_s/N_d=1$). The vertical thick line, i.e., $L=73.4$ MeV, denotes
the optimal value of $L$ for explaining the observed QPOs except
the 26 Hz QPO in terms of the torsional oscillations in the
sphere-cylinder layer with minimum enthalpy, while the vertical thin line,
i.e., $L=70.4$ MeV, denotes the corresponding value of $L$
in the case of maximum enthalpy, i.e., $N_s/N_d=0$ \citep{SIO2018}.
\label{fig:M14R12}}
\end{figure}
Let us now illustrate how the newly examined torsional
oscillations in the tube-bubble layer could be reconciled
with the QPO observations of frequencies lower than 100 Hz,
including 26 Hz, for typical $M$-$R$ sets of the stellar models.
For $M=1.4M_\odot$ and $R=12$ km, such illustration can be seen
from Fig.~\ref{fig:M14R12}, in which the 18, 29, 57, and
92.5 Hz QPOs in SGR 1806-20 are as usual identified as
the $\ell=2$, 3, 6, and 10 fundamental frequencies in the
sphere-cylinder layer, whereas the 26 Hz QPO, which is difficult
to explain in terms of the oscillation in the sphere-cylinder layer,
can reasonably be identified as the $\ell=4$ fundamental
frequency in the tube-bubble layer. We remark that the
optimal value of $L$ for explaining the observed
low-lying QPOs ranges between 70.4 and 73.4 MeV
for neutron stars with $M=1.4M_\odot$ and $R=12$ km.
\begin{figure*}
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.5]{1806-L-M13R13Ns10a} &
\includegraphics[scale=0.5]{1806-L-M18R12Ns10a}
\end{tabular}
\end{center}
\caption{
(Color online)
Same as Fig.~\ref{fig:M14R12}, but for the neutron star models
with $M=1.3M_\odot$ and $R=13$ km in the left panel and
with $M=1.8M_\odot$ and $R=12$ km in the right panel.
The optimal values of $L$ denoted by the vertical
thick and thin lines are $L=70.8$ and 67.5 MeV,
respectively, in the left panel and $L=63.5$ and 59.6 MeV,
respectively, in the right panel.
\label{fig:M13M18}}
\end{figure*}
We then examine whether or not the above-mentioned identification,
which strictly holds for $(M, R)=(1.4M_\odot,12 {\rm km})$, still works
out for other sets of $(M, R)$. For neutron star models with
$(M, R)=(1.3M_\odot,13 {\rm km})$ and $(1.8M_\odot,12{\rm km})$,
we again calculate the eigenfrequencies of the double-layer torsional
oscillations, as shown in Fig.~\ref{fig:M13M18}. We find that
the 18, 29, 57, and 92.5 Hz QPOs in SGR 1806-20 can still be consistent
with the $\ell=2,3,6$, and 10 fundamental frequencies in the sphere-cylinder
layer for such a range of the optimal $L$ as 67.5--70.8 MeV and
59.6--63.5 MeV, respectively. This shift of the optimal $L$ could
open up an opportunity of selecting $M$ and/or $R$ because
the $L$ dependence of the fundamental frequencies
in the sphere-cylinder layer is different from that
in the tube-bubble layer.
In fact, one can observe the tendency that the more
massive the neutron star, the more difficult it is to explain
the 26 Hz QPO in terms of the $\ell=4$ fundamental
oscillation in the tube-bubble layer,
as long as we adopt the optimal value of $L$ that enables us to identify
the 18, 29, 57, and 92.5 Hz QPOs as the oscillations in the
sphere-cylinder layer. Note that the
fundamental frequencies scale as $R^{-1}$ both in the tube-bubble
and sphere-cylinder layers, implying that $R$ is not constrained
in the present approach. We can thus conclude that
light neutron star models are favored over
heavy ones in our identification. Incidentally, the
$(M, R)=(1.3M_\odot,13 {\rm km})$ case is consistent with
the neutron star models considered to be relevant
as a result of the comparison of the constraint on $K_0$,
which is obtained by assuming that the lowest
overtone frequency in the sphere-cylinder layer is
equal to the QPO frequency of 626.5 Hz observed from
SGR 1806-20, with the terrestrial constraint on $K_0$
\citep{SIO2018}. Furthermore, the
$(M, R)=(1.3M_\odot,13 {\rm km})$ case is consistent with
the mass and radius formulas for low-mass neutron
stars \citep{SIOO2014}, given the optimal
value of $L\sim 70$ MeV, and also with the constraint
on the mass and radius of each of the merging binary
neutron stars \citep{A2018}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{1806-L-M13R13Ns10c}
\end{center}
\caption{
(Color online) Relations between the newly found
QPOs of 51.4, 97.3, and 157 Hz in SGR 1806-20
\citep{MCS18}, which are shown by horizontal solid lines,
and a selected set of the crustal torsional oscillations
for the neutron star models with $M=1.3M_\odot$ and $R=13$
km. The 51.4 and 97.3 Hz QPOs are identifiable as the
$\ell=8$ and 15 fundamental torsional oscillations in the
tube-bubble layer, while the 157 Hz QPO is identifiable as
the $\ell=17$ fundamental torsional oscillations in the
sphere-cylinder layer. The dashed and dotted lines
denote the originally discovered QPOs, which except for the 26 Hz QPO
have already been identified by us as manifestations of the fundamental
torsional oscillations in the sphere-cylinder layer, while the 26 Hz QPO
is identified as the $\ell=4$ oscillation in the tube-bubble layer, as mentioned in the text.
\label{fig:M13R13c}}
\end{figure}
Thanks to the smaller shear modulus in the tube phase,
which leads to fundamental frequencies in the
tube-bubble layer that are smaller than those in the sphere-cylinder
layer, we have a chance to explain not only the originally
discovered QPOs but also the QPOs newly found by
a Bayesian procedure, e.g., the 51.4, 97.3, and 157 Hz
QPOs in SGR 1806-20 \citep{MCS18}\footnote{In appendix
\ref{sec:appendix_1}, we tabulate a possible
correspondence between the crustal torsional oscillations and
all the 26 QPOs shown in Table 1 in \cite{MCS18}.}.
In practice, we illustrate the identification of
these three QPOs for the neutron star models with
$M=1.3M_\odot$ and $R=13$ km in Fig.~\ref{fig:M13R13c}.
As already shown in \cite{SIO2018}, the frequencies of 18, 29,
57, 92.5, and 150 Hz can be identified as the
$\ell=2$, 3, 6, 10, and 16 fundamental frequencies in
the sphere-cylinder layer. In a similar way, we find
that the newly found QPO of 157 Hz can also be
identified as the $\ell=17$ fundamental frequency, while
the newly found QPOs of 51.4 and 97.3 Hz
can be identified as the fundamental oscillations in
the tube-bubble layer, as is the case with
the 26 Hz QPO, in such a way that 26, 51.4, and 97.3 Hz
correspond to the $\ell=4$, 8, and 15 fundamental oscillations
in the tube-bubble layer.
\section{Conclusion}
\label{sec:V}
We have calculated the eigenfrequencies of the torsional oscillations in
the tube-bubble layer, in contrast to our previous work in which we
calculated those only in the bubble layer, and successfully identified
the newly found QPOs as the fundamental oscillations either in
the tube-bubble or sphere-cylinder layer. In the course of the
calculations, we find that the shear modulus, which characterizes the
torsional oscillations, decreases in the tube phase as the slope
parameter $L$ increases. As a result, the fundamental frequencies in the
tube-bubble layer can become smaller than those in the
sphere-cylinder layer. We also find that the fundamental frequencies
in the tube-bubble layer can be parameterized as a function of $L$,
and that the dependence on $L$ is different from that obtained for
the fundamental frequencies in the sphere-cylinder layer.
Remarkably, such a different dependence on $L$ helps us to explain
not only the QPO frequencies originally discovered in SGR 1806-20 but
also those newly found in the same object by a Bayesian procedure
in terms of the eigenfrequencies of the fundamental torsional oscillations
either in the tube-bubble or sphere-cylinder layer of a relatively low
mass neutron star constructed from the EOS of $L\sim 70$ MeV.
We also remark that such a neutron star model and the suitable value
of $L$ are consistent with the mass and radius formulas of low-mass
neutron stars and the constraint from the gravitational waves from the
neutron star binary merger, GW170817.
As a possible extension of this study, it would be of interest to
analyze the QPO widths, which could give us information of the internal
magnetic structure via possible coupling of the crustal torsional
oscillations with the Alfv\'{e}n continuum in the core \citep{MCS18}.
Generally, magnetars are considered to have a toroidal field that is
an order of magnitude higher than the poloidal field. The question of
whether or not this picture is relevant might possibly be answered.
This work was supported in part by Grants-in-Aid for Scientific Research
through Grant Nos.\ 17K05458, 18H01211, and 18H05406
provided by Japan Society for the Promotion of Science (JSPS).
\section{Introduction}
One way that we study the geometry of moduli spaces of curves is by computing how their subvarieties intersect. When a collection of subvarieties intersect in a finite number of points, such a number is called an intersection number, and it is generally difficult to calculate. One way to approach this is to exploit the recursive structure that intersection numbers possess. For monomials of $\psi$-classes, examples of recursive structure are given by the string and dilaton equations (see Section \ref{sssec:psi} or \cite{VakilGW}). We are interested in computing intersection numbers of monomials of $\psi$-classes against the pullback of the stratum
\[ \Delta = \parbox{4cm}{
\begin{tikzpicture}
\Vertex[x=0, label = 1, size=1]{A}
\Vertex[x=2, size = 0.5]{B}
\draw (B) to [out =315, in=45, looseness = 10] (B);
\draw (A) -- (B);
\end{tikzpicture}}
\]
via the forgetful morphism $\pi_{[n]}:\Mo{2}{n} \to \Mo{2}{}$. To do so, we apply the projection formula, string equation, and dilaton equation and obtain the following theorem.
\begin{manualtheorem}{3.1}
For any positive integer $n$ and partition $k_1+\dots+k_n=n+1$ of the integer $n+1$,
\[\int_{\PI{n}}\psi_1^{k_1}\dots\psi_n^{k_n}=\frac{1}{24}\binom{n+1}{k_1,k_2,\dots,k_n}\]
\end{manualtheorem}
Theorem \ref{thm:pullback} allows us to give a simple proof of the $\lambda_g$ conjecture for genus two,
\begin{manualtheorem}{3.2}
For any positive integer $n$ and partition $k_1+\dots+k_n=n+1$ of the integer $n+1$,
\[\I{2}{n}\lambda_2\psi_1^{k_1}\dots\psi_n^{k_n}=\frac{7}{24\cdot 8\cdot 30}\binom{n+1}{k_1,k_2,\dots,k_n}\]
\end{manualtheorem}
\noindent The proof of this theorem follows from Theorem \ref{thm:pullback} and a formula of Pixton \cite{pixton} which allows us to write $\lambda$-classes as linear combinations of strata. \\
\indent In Section 2 we establish the relevant background information on the Deligne-Mumford compactification of the moduli space of pointed curves and tautological intersection theory within these moduli spaces. Then in Section 3 we establish the results above, in particular: the relations analogous to the dilaton and string equations and the proofs of Theorem \ref{thm:pullback} and Theorem \ref{thm:lambdatwo}.
\section{Background}
\subsection{Moduli Space of Stable n-Pointed Curves}
Here we give a concise presentation of the theory on moduli spaces of curves which is needed for our result. More information and references regarding the material introduced in this section may be found in \cite{kockQC} and \cite{VakilGW}.
By $M_{0,n}$ we denote the fine moduli space whose points parameterize $n$-tuples of distinct points in $\mathbb{P}^1$ up to projective equivalence. To understand $M_{0,n}$ we first look at the case for $n=4$. Two quadruples $P\vcentcolon=(p_1,p_2,p_3,p_4)$ and $Q\vcentcolon=(q_1,q_2,q_3,q_4)$ are \textbf{projectively equivalent} if there exists a M\"{o}bius transformation $\varphi$ such that $\varphi(p_i)=q_i$ for $i = 1, \ldots, 4$. For any three distinct points $x,y,z\in\mathbb{P}^1$, there exists a unique M\"{o}bius transformation $\varphi\in\textrm{Aut}(\mathbb{P}^1)$ such that $x$, $y$, and $z$ are taken to $0$, $1$ and $\infty$ respectively under $\varphi$. It follows that any quadruple $P$ is equivalent to a unique quadruple of the form $(0,1,\infty, t)$, and therefore $M_{0,4}$ is isomorphic to $\mathbb{P}^1\setminus\{0,1,\infty\}$. Generalizing this idea, we find $M_{0,n}$ as the fine moduli space given by the $(n-3)$-fold product of $M_{0,4}$ without diagonals, i.e.
\[M_{0,n}\cong M_{0,4}\times\dots\times M_{0,4}\setminus\{\cup \delta_{ij}\},\]
where $\delta_{ij}$ denotes the locus of points where the $i$-th and $j$-th coordinates are equal.
Since $M_{0,4}\cong \mathbb{P}^1\setminus\{0,1,\infty\}$ has dimension one, this implies that
$\dim{M_{0,n}}=n-3$.
The space $\M{0}{n}$ is not compact; however, it admits a compactification of interest: the Deligne-Mumford compactification, $\Mo{0}{n}$. This moduli space parameterizes a broader class of curves which we now characterize.
\begin{definition}
A \textbf{tree of projective lines} is a connected curve such that
\begin{enumerate}
\item Each irreducible component is isomorphic to a projective line.
\item The points of intersection of the components are ordinary double points, i.e., nodes.
\item The fundamental group is trivial.
\end{enumerate}
We call the irreducible components \textbf{twigs}.
\end{definition}
\begin{definition}
For $n\geq 3$ a \textbf{stable n-pointed rational curve} is a tree of projective lines with $n$ marked distinct points in the smooth locus, such that the sum of the number of marked points and nodes on each twig is at least three.
\end{definition}
$\Mo{0}{n}$ parameterizes isomorphism classes of stable $n$-pointed curves and contains $\M{0}{n}$ as a dense open set. We call $\Mo{0}{n}\setminus \M{0}{n}$ the boundary of $\Mo{0}{n}$ and points in the boundary correspond to nodal marked curves. The space $\Mo{0}{n}$ admits a natural stratification, given by the equivalence classes of the equivalence relation that declares two points equivalent if their corresponding curves are homeomorphic as pointed curves.
\begin{definition}
Given a rational stable $n$-pointed curve $C$ with marked points $p_1,\dots, p_n$ its \textbf{dual graph} is a tree defined to have:
\begin{enumerate}
\item a vertex for each twig of $C$.
\item an edge for each node of $C$ connecting the appropriate vertices.
\item a labeled half edge corresponding to each marked point on the appropriate vertex.
\end{enumerate}
\end{definition}
Two marked curves are homeomorphic if and only if they have the same dual graph, and therefore dual graphs index the strata of $\Mo{0}{n}$.
\begin{example} \normalfont
Below are three examples of dual graphs of stable rational pointed curves\footnote{Legs on these graphs should be labeled, but, throughout the paper, we omit unimportant labels to keep the pictures cleaner.},
\[
\begin{tikzpicture}
\Vertex[x=0, size = 0.5]{B}
\node[above left =0.5cm of B] (D) {};
\node[above right =0.5cm of B] (E) {};
\node[below = 0.5cm of B] (F) {};
\draw (B) -- (D);
\draw (B) -- (E);
\draw (B) -- (F);
\end{tikzpicture}
\ \ \ \ \ \ \ \
\begin{tikzpicture}
\Vertex[x=0, size = 0.5]{A}
\Vertex [x=1, size = 0.5]{B}
\node[above right =0.5cm of B] (D) {};
\node[right =0.5cm of B] (E) {};
\node[below right= 0.5cm of B] (F) {};
\draw (B) -- (D);
\draw (B) -- (E);
\draw (B) -- (F);
\draw (A) -- (B);
\node[above left = 0.5cm of A] (G) {};
\node[below left = 0.5cm of A] (H) {};
\draw (A) -- (G);
\draw (A) -- (H);
\end{tikzpicture}
\ \ \ \ \ \ \ \
\begin{tikzpicture}
\Vertex[x=0, size = 0.5]{A}
\Vertex [x=2, size = 0.5]{B}
\node[above right =0.5cm of B] (D) {};
\node[right =0.5cm of B] (E) {};
\node[below right= 0.5cm of B] (F) {};
\draw (B) -- (D);
\draw (B) -- (E);
\draw (B) -- (F);
\node[above left = 0.5cm of A] (G) {};
\node[below left = 0.5cm of A] (H) {};
\draw (A) -- (G);
\draw (A) -- (H);
\Vertex[x=1, size = 0.5]{C}
\draw (A) -- (C);
\draw (C) -- (B);
\node[above = 0.5cm of C] (I) {};
\draw (C) --(I);
\end{tikzpicture}\]
On the other hand, the next two are the dual graphs of an unstable pointed rational curve and a stable but not rational curve respectively.
\[
\parbox{4cm}{
\begin{tikzpicture}
\Vertex[x=0, size = 0.5]{A}
\Vertex [x=2, size = 0.5]{B}
\node[above right =0.5cm of B] (D) {};
\node[right =0.5cm of B] (E) {};
\node[below right= 0.5cm of B] (F) {};
\draw (B) -- (D);
\draw (B) -- (E);
\draw (B) -- (F);
\node[above left = 0.5cm of A] (G) {};
\node[below left = 0.5cm of A] (H) {};
\draw (A) -- (G);
\draw (A) -- (H);
\Vertex[x=1, size = 0.5]{C}
\draw (A) -- (C);
\draw (C) -- (B);
\end{tikzpicture}}
\ \ \ \ \ \ \ \
\parbox{4cm}{
\begin{tikzpicture}
\Vertex[x=0, y=1, size = 0.5]{A}
\Vertex[x=0, y=-1, size=0.5]{C}
\Vertex[x=-1, y=0, size = 0.5]{B}
\Vertex[x=1, y=0, size=0.5]{D}
\draw (A) -- (B);
\draw (B) -- (C);
\draw (C) -- (D);
\draw (D) -- (A);
\node[above = 0.5cm of A] (E) {};
\node[left = 0.5cm of B] (F) {};
\node[below = 0.5cm of C] (G) {};
\node[right = 0.5cm of D] (H) {};
\draw (A) -- (E);
\draw (B) -- (F);
\draw (C) -- (G);
\draw (D) -- (H);
\end{tikzpicture}}
\]
\end{example}
This compactified moduli space has a coarse generalization for curves of any genus; the space parameterizing curves of genus $g$ with $n$ marked points is denoted $\Mo{g}{n}$. The theory of the moduli space of curves of genus $g$ is more complex, however the dimension of this space faithfully generalizes that of the genus $0$ space with $\dim \Mo{g}{n}=3g-3+n$ \cite{harrisMod}. For our purposes we will only need the cases for $g\leq 2$, and really only care about $g=0$ and $g=1$.
\subsection{Natural Morphisms}
We now discuss two natural morphisms on these moduli spaces: the forgetful and gluing morphisms. We also briefly cover a form of the projection formula which will be of use.
\subsubsection{Forgetful Morphism}
Given an $(n+1)$-pointed curve $(C,p_1,\dots,p_{n+1})$ one can forget $p_{n+1}$ to obtain an $n$-pointed curve $(C,p_1,\dots,p_n)$, this idea leads to the forgetful morphism $\pi_{n+1}:\Mo{g}{n+1}\rightarrow \Mo{g}{n}$. \\
\indent Of course, the full definition requires addressing subtleties, which we address now. First, no forgetful morphism exists for $(g,n)=(0,3)$ or $(g,n)=(1,1)$, due to stability conditions. Secondly, we can't always just forget a marked point, since doing so may result in an unstable curve. In such cases, an extra stabilization process must be introduced, called contraction. This happens in the following two cases.
\begin{enumerate}
\item If $p_{n+1}$ is on a twig (a name we reserve for rational components) with two nodes and no other marked points, then we contract that twig after forgetting $p_{n+1}$. For example
\[
\begin{tikzpicture}
\Vertex[x=0, size = 0.5]{A}
\Vertex [x=2, size = 0.5]{B}
\node[above right =0.5cm of B] (D) {};
\node[right =0.5cm of B] (E) {};
\node[below right= 0.5cm of B] (F) {};
\draw (B) -- (D);
\draw (B) -- (E);
\draw (B) -- (F);
\node[above left = 0.5cm of A] (G) {};
\node[below left = 0.5cm of A] (H) {};
\draw (A) -- (G);
\draw (A) -- (H);
\Vertex[x=1, size = 0.5]{C}
\draw (A) -- (C);
\draw (C) -- (B);
\node[above = 0.5cm of C] (I) {$p_{n+1}$};
\draw (C) --(I);
\end{tikzpicture}\]
will be sent to
\[
\begin{tikzpicture}
\Vertex[x=0, size = 0.5]{A}
\Vertex [x=2, size = 0.5]{B}
\node[above right =0.5cm of B] (D) {};
\node[right =0.5cm of B] (E) {};
\node[below right= 0.5cm of B] (F) {};
\draw (B) -- (D);
\draw (B) -- (E);
\draw (B) -- (F);
\node[above left = 0.5cm of A] (G) {};
\node[below left = 0.5cm of A] (H) {};
\draw (A) -- (G);
\draw (A) -- (H);
\draw (A) -- (B);
\end{tikzpicture}\]
under the forgetful morphism $\pi_{n+1}$.
\item If $p_{n+1}$ is on a twig with exactly one other marked point ${p_i}$ and exactly one node, then the twig is contracted after forgetting $p_{n+1}$ and ${p_i}$ is placed on what used to be the node. For example,
\[\begin{tikzpicture}
\Vertex[x=0, size = 0.5]{A}
\Vertex[x=1, size = 0.5]{B}
\node[above left =0.5cm of A](C){$p_{n+1}$};
\node[below left =0.5cm of A](D){$p_i$};
\node[above right =0.5cm of B](E){};
\node[below right =0.5cm of B](F){};
\node[right=0.5cm of B](G){};
\draw (A)--(C);
\draw (A)--(D);
\draw (B)--(E);
\draw (B)--(F);
\draw (B)--(G);
\draw(A)--(B);
\end{tikzpicture}\]
is sent to
\[\begin{tikzpicture}
\Vertex[x=0, size=0.5]{B}
\node[left =0.5cm of B](D){$p_i$};
\node[above right =0.5cm of B](E){};
\node[below right =0.5cm of B](F){};
\node[right=0.5cm of B](G){};
\draw (B)--(D);
\draw (B)--(E);
\draw (B)--(F);
\draw (B)--(G);
\end{tikzpicture}\]
under the forgetful morphism $\pi_{n+1}$.
\end{enumerate}
\subsubsection{Gluing Morphisms}
Another natural idea is to identify two marked points, thus gluing two curves together. That is, given an $(n+1)$-pointed curve of genus $g$ and an $(n'+1)$-pointed curve of genus $g'$ we may obtain a genus $g+g'$ curve with $(n+n')$-marked points, defining a map $\textrm{gl}:\Mo{g}{n+1}\times \Mo{g'}{n'+1}\rightarrow \Mo{g+g'}{n+n'}$.
One could do this with a genus $g$ curve with $(n+2)$-marked points to obtain a curve of genus $g+1$ with $n$-marked points, giving rise to a gluing morphism $\textrm{gl}:\Mo{g}{n+2}\rightarrow \Mo{g+1}{n}$.
In fact one may naturally generalize this idea to gluing multiple pairs of points, and observe that the closure of any stratum in $\Mo{g}{n}$ is the image of some appropriately defined gluing morphism.
\begin{example}\normalfont
\label{ex:glue334}
Given the following two curves
\[
\begin{tikzpicture}
\Vertex[x=0, size = 0.5]{B}
\node[above left =0.5cm of B] (D) {};
\node[above right =0.5cm of B] (E) {};
\node[below = 0.5cm of B] (F) {$p$};
\draw (B) -- (D);
\draw (B) -- (E);
\draw (B) -- (F);
\Vertex[x=2, size = 0.5]{A}
\node[above left =0.5cm of A] (H) {};
\node[above right =0.5cm of A] (I) {};
\node[below = 0.5cm of A] (J) {$p'$};
\draw (A) -- (H);
\draw (A) -- (I);
\draw (A) -- (J);
\end{tikzpicture}\]
we may identify $p$ and $p'$ under the gluing morphism $\textrm{gl}:\Mo{0}{3}\times \Mo{0}{3}\rightarrow \Mo{0}{4}$ to obtain the curve
\[
\begin{tikzpicture}
\Vertex[x=1,size=0.5]{A}
\Vertex[x=0,size=0.5]{B}
\draw(A)--(B);
\node[above left =0.5cm of B] (D) {};
\node[below left =0.5cm of B] (E) {};
\node[above right =0.5cm of A] (F) {};
\node[below right =0.5cm of A] (G) {};
\draw(B)--(D);
\draw(B)--(E);
\draw(A)--(F);
\draw(A)--(G);
\end{tikzpicture}\]
\end{example}
\begin{example}\normalfont
Given the following curve,
\[
\begin{tikzpicture}
\Vertex[x=0, size = 0.5]{B}
\node[above left =0.5cm of B] (D) {};
\node[above right =0.5cm of B] (E) {$p'$};
\node[below = 0.5cm of B] (F) {$p$};
\draw (B) -- (D);
\draw (B) -- (E);
\draw (B) -- (F);
\end{tikzpicture}
\]
we may identify $p'$ and $p$ via the gluing morphism $\textrm{gl}:\Mo{0}{3}\rightarrow \Mo{1}{1}$ to obtain
\[
\begin{tikzpicture}
\Vertex[x=2, size = 0.5]{B}
\draw (B) to [out =315, in=45, looseness = 10] (B);
\node[left=0.5cm of B] (A){};
\draw(A)--(B);
\end{tikzpicture}\]
\end{example}
\subsubsection{The Projection Formula}
The projection formula allows us to intersect, via appropriate applications of pushforwards and pullbacks, Chow classes that live on two different spaces connected by a well behaved morphism. For a more in depth coverage of the projection formula consult \cite{andallthat}, Section 1.3.6.
Given a flat and proper morphism $f:X\rightarrow Y$, a Chow class $\beta$ of $Y$ and $\alpha$ of $X$, the following relation holds:
\begin{description}
\item[Projection Formula] \[f_*((f^*\beta)\cdot \alpha)=(f_*\alpha)\cdot \beta \]
\end{description}
with multiplication in the Chow ring.
The following picture illustrates the content of the projection formula.
\tikzset{every picture/.style={line width=0.75pt}}
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw (242,22) -- (398,22) -- (398,178) -- (242,178) -- cycle ;
\draw (241,239) -- (402,239) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (252,166) .. controls (641,124) and (17,83) .. (379,35) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ][line width=0.75] (355.91,239.9) .. controls (355.9,237.96) and (357.47,236.34) .. (359.4,236.27) .. controls (361.34,236.21) and (362.91,237.73) .. (362.92,239.66) .. controls (362.92,241.6) and (361.36,243.22) .. (359.43,243.29) .. controls (357.49,243.35) and (355.92,241.83) .. (355.91,239.9) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ][line width=0.75] (315,239.5) .. controls (315,237.57) and (316.57,236) .. (318.5,236) .. controls (320.43,236) and (322,237.57) .. (322,239.5) .. controls (322,241.43) and (320.43,243) .. (318.5,243) .. controls (316.57,243) and (315,241.43) .. (315,239.5) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ][line width=0.75] (277,238.5) .. controls (277,236.57) and (278.57,235) .. (280.5,235) .. controls (282.43,235) and (284,236.57) .. (284,238.5) .. controls (284,240.43) and (282.43,242) .. (280.5,242) .. controls (278.57,242) and (277,240.43) .. (277,238.5) -- cycle ;
\draw (319,190) -- (319,226) ;
\draw [shift={(319,228)}, rotate = 270] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] (281,23) -- (281,178) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] (321,21) -- (321,177) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] (362,21) -- (362,178) ;
\draw (350,252.4) node [anchor=north west][inner sep=0.75pt] {$\textcolor[rgb]{0.29,0.56,0.89}{f_\ast\alpha} \cdot \textcolor[rgb]{0.82,0.01,0.11}{\beta }=f_{*}(\textcolor[rgb]{0.29,0.56,0.89}{\alpha} \cdot \textcolor[rgb]{0.82,0.01,0.11}{f^\ast\beta })$};
\end{tikzpicture}
In this picture we see the pushforward of $f^*\beta\cdot\alpha$ gives $f_*\alpha\cdot\beta$ which we have represented by taking the points of $\beta$ and outlining them in blue.\\
\indent We remark that forgetful morphisms are flat and proper; hence one can use the projection formula.
\subsection{Tautological Intersection theory on $\Mo{g}{n}$}
We are mainly interested in the intersection numbers of particular Chow classes in $\Mo{g}{n}$. In these compactified moduli spaces, the strata have a closure whose fundamental class gives a natural Chow class to work with. In genus 0 and 1, these classes are of particular importance for this project. In genus 0, these classes generate the Chow ring and in genus 1 they generate a particularly important subring of the Chow ring, the tautological ring \cite{VakilGW}.
\begin{remark} \normalfont
We follow the convention of \cite{pixton}. Every dual graph identifies a stratum and the stratum is an image of a gluing morphism. However, these gluing morphisms are not always $1:1$, so the fundamental class of the stratum is not always the same as the pushforward of the fundamental class via the gluing morphism. By dual graphs we denote the pushforward of fundamental classes of products of moduli spaces under the gluing morphism. As such, if a stratum is identified by a dual graph $\Gamma$, the class of the closure of the stratum is then $\frac{1}{|\textrm{Aut}(\Gamma)|}[\Gamma]$.
\end{remark}
The following examples illustrate this point.
\begin{example}\normalfont
\label{ex:bij}
In Example \ref{ex:glue334}, we obtained a stratum of $\Mo{0}{4}$ by gluing two strata from smaller moduli spaces. The gluing morphism provides a bijection onto its image,
\[\textrm{gl}:\Mo{0}{3}\times\Mo{0}{3}\rightarrow \Mo{0}{4}\]
Therefore, $ \textrm{gl}_*(1_{\Mo{0}{3} \times \Mo{0}{3}})$ agrees with the class of the stratum represented by the following dual graph
\[
\begin{tikzpicture}
\Vertex[x=1,size=0.5]{A}
\Vertex[x=0,size=0.5]{B}
\draw(A)--(B);
\node[above left =0.5cm of B] (D) {};
\node[below left =0.5cm of B] (E) {};
\node[above right =0.5cm of A] (F) {};
\node[below right =0.5cm of A] (G) {};
\draw(B)--(D);
\draw(B)--(E);
\draw(A)--(F);
\draw(A)--(G);
\end{tikzpicture}\]
\end{example}
\begin{example}\normalfont
Though the gluing morphism in Example \ref{ex:bij} is a bijection onto its image, injectivity often fails. Take $\textrm{gl}: \Mo{0}{1,2,*,\bullet}\rightarrow\Mo{1}{2}$, which is a $2:1$ map onto its image. Take a point in the stratum corresponding to the dual graph, $\Gamma$, depicted below
\[\begin{tikzpicture}
\Vertex[x=0, size=0.5]{A}
\node[above left =0.5cm of A] (B) {1};
\node[below left=0.5cm of A] (C) {2};
\draw (A) to [out =315, in=45, looseness = 10] (A);
\draw (A) -- (B);
\draw (A) -- (C);
\end{tikzpicture}\]
As noted in the remark, dual graphs denote the pushforward of fundamental classes of products and quotients of moduli spaces under the gluing morphism; here we have:
\[[\Gamma]=\textrm{gl}_*[1_{\Mo{0}{1,2,*,\bullet}}]\]
This point's preimage under $\textrm{gl}$ is given by two points which we may schematically depict as
\[\begin{tikzpicture}
\Vertex[x=0, size=0.5]{A}
\node[above left =0.5cm of A] (B) {1};
\node[below left=0.5cm of A] (C) {2};
\node[above right =0.5cm of A] (D) {$*$};
\node[below right=0.5cm of A] (E) {$\bullet$};
\draw (A) -- (B);
\draw (A) -- (C);
\draw (A) -- (D);
\draw (A) -- (E);
\Vertex[x=6, size=0.5]{J}
\node[above left =0.5cm of J] (F) {1};
\node[below left=0.5cm of J] (G) {2};
\node[above right =0.5cm of J] (H) {$\bullet$};
\node[below right=0.5cm of J] (I) {$*$};
\draw (J) -- (F);
\draw (J) -- (G);
\draw (J) -- (H);
\draw (J) -- (I);
\end{tikzpicture}\]
So, the class of the closure of the stratum is given by $\frac{1}{2}[\Gamma]$.\end{example}
We particularly care about two other families of classes given as Chern classes of some natural bundles on $\Mo{g}{n}$, which we now discuss.
\subsubsection{$\psi$-classes} \label{sssec:psi}
Given $\Mo{g}{n}$ we have the universal curve $\pi: C_{g,n}\rightarrow \Mo{g}{n}$. Let $s_i$ be the section of $\pi$ corresponding to the $i$-th marked point. Then the pullback by $s_i$ of the relative dualizing sheaf defines a bundle denoted $\mathbb{L}_i$. We then define the $i$-th \textbf{$\psi$-class} to be the first Chern class of $\mathbb{L}_i$, i.e., \[\psi_i=c_1(\mathbb{L}_i)\in A^{1}(\Mo{g}{n})\]
A more complete treatment of this may be found in \cite{kocknotes}.
In $g=0$ and $g=1$, the intersection numbers of monomials of $\psi$-classes are completely determined by the following two recursive structures and initial conditions.
\begin{description}
\item[String Equation] Given $g,n, k_1, \dots, k_n\in \mathbb{Z}^+$ with $k_1+\dots+k_n=3g-3+n+1$ and $2g-2+n>0$ then
\[\I{g}{n+1}\psi_1^{k_1}\dots\psi_n^{k_n}=\sum_{i=1}^n\I{g}{n}\psi_1^{k_1}\dots \psi_i^{k_i-1}\dots \psi_n^{k_n},\]
adopting the notational convention that $\psi_i^{-1} = 0$.
\item[Dilaton Equation] Given $g,n, k_1, \dots, k_n\in \mathbb{Z}^+$ with $k_1+\dots+k_n=3g-3+n$ and $2g-2+n>0$ then
\[\I{g}{n+1}\psi_1^{k_1}\dots \psi_n^{k_n}\psi_{n+1}=(2g-2+n)\I{g}{n}\psi_1^{k_1}\dots \psi_n^{k_n} . \]
\item[Initial Conditions] \[\I{1}{1}\psi_1=\frac{1}{24}\qquad \textrm{and}\qquad \I{0}{3}1=1.\]
\end{description}
The above two relations are called the \textbf{string equation} and the \textbf{dilaton equation} \cite{kocknotes} respectively and they are the most important results of the background section for the purposes of this paper. For a derivation of the initial condition see \cite{VakilGW} Section 3.13.
\begin{remark} \normalfont
While it is most common to see the string and dilaton equations expressed as identities among intersection numbers, in the course of their proof one readily sees that they are really identities among push-forwards of cycles (\cite{kocknotes}, Section 1.4). In particular, the dilaton equation is derived from the fact that $\pi_{n*}(\psi_n) = (2g-2+n)[1_{\Mo{g}{n}}]$, while string follows from the relation:
\begin{equation}
\label{eq:cyclestring}
\pi_{n*}(\psi_1^{k_1}\dots\psi_n^{k_n})=\sum_{i=1}^n \psi_1^{k_1}\dots \psi_i^{k_i-1}\dots \psi_n^{k_n}.
\end{equation}
\end{remark}
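Since the string and dilaton equations, together with the initial conditions, determine all $\psi$-intersection numbers in genus $0$ and $1$, the recursion is easy to implement. The following Python sketch (our illustration, not taken from the literature) encodes it with exact rational arithmetic; for instance, it returns $\int_{\Mo{0}{5}}\psi_1\psi_2=2$ and $\int_{\Mo{1}{3}}\psi_1\psi_2\psi_3=\frac{1}{12}$.
\begin{verbatim}
from fractions import Fraction
from functools import lru_cache

# <psi_1^{k1} ... psi_n^{kn}>_{g,n} for g = 0, 1 via string/dilaton.
# Base cases: <1>_{0,3} = 1 and <psi_1>_{1,1} = 1/24.
@lru_cache(maxsize=None)
def psi_int(g, K):
    n = len(K)
    if sum(K) != 3 * g - 3 + n:          # dimension check
        return Fraction(0)
    if (g, K) == (0, (0, 0, 0)):
        return Fraction(1)
    if (g, K) == (1, (1,)):
        return Fraction(1, 24)
    if 0 in K:                            # string equation
        i = K.index(0)
        rest = K[:i] + K[i + 1:]
        return sum((psi_int(g, rest[:j] + (rest[j] - 1,) + rest[j + 1:])
                    for j in range(n - 1) if rest[j] > 0),
                   Fraction(0))
    # all exponents >= 1; for g <= 1 the dimension forces some k_i = 1
    i = K.index(1)                        # dilaton equation
    rest = K[:i] + K[i + 1:]
    return (2 * g - 2 + (n - 1)) * psi_int(g, rest)

print(psi_int(0, (1, 1, 0, 0, 0)))        # 2
print(psi_int(1, (1, 1, 1)))              # 1/12
\end{verbatim}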
\subsubsection{$\lambda$-classes}
Another type of class is obtained from the push-forward of the relative dualizing sheaf of the universal curve, which we call the \textbf{Hodge bundle}, denoted $\mathbb{E}_{g,n}$. With this bundle we may define \textbf{$\lambda$-classes} in a similar fashion to the $\psi$-classes as
\[\lambda_i=c_i(\mathbb{E}_{g,n}).\]
An important property of these classes is that they are stable under pullback via forgetful morphisms, i.e. $\pi_n^\ast \lambda_i = \lambda_i$.
The $\lambda_g$ conjecture (\cite{lambda}, later proven in \cite{lambdaPf}) gives a simple formula for calculating intersection numbers of monomials of $\psi$-classes on $\Mo{g}{n}$ along with a factor of the $\lambda_g$ class. For ease of reference, the theorem is explicitly stated:
\begin{theorem}[$\lambda_g$ Conjecture, \cite{lambdaPf}] \label{lambdag}
Let $k_1,\dots,k_n\in\mathbb{Z}^+$ such that $k_1+\dots+k_n=2g-3+n$. Then
\[\I{g}{n}\psi_1^{k_1}\dots\psi_n^{k_n}\lambda_g=\binom{2g+n-3}{k_1,\dots,k_n}\I{g}{1}\psi_1^{2g-2}\lambda_g\]
with initial condition
\[\I{g}{1}\psi_1^{2g-2}\lambda_g=\frac{2^{2g-1}-1}{2^{2g-1}}\frac{|B_{2g}|}{(2g)!}\]
where $B_n$ is the $n$-th Bernoulli number. \textnormal{\cite{Wiki}}
\end{theorem}
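For $g=2$, the initial condition can be made explicit: since $B_4=-\frac{1}{30}$, we have
\[\I{2}{1}\psi_1^{2}\lambda_2=\frac{2^{3}-1}{2^{3}}\,\frac{|B_{4}|}{4!}=\frac{7}{8}\cdot\frac{1}{30\cdot 24}=\frac{7}{24\cdot 8\cdot 30},\]
which is exactly the constant appearing in Theorem \ref{thm:lambdatwo} below.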
We are interested in giving an elementary proof of this theorem for $g=2$. In doing so we will make use of a formula of Pixton, found in section 6 of \cite{pixton}, which gives $\lambda_g$ as a linear combination of strata. For genus two, this formula has the form
\begin{equation} \label{for:pix}
\lambda_2 = \frac{1}{240}
\left[\parbox{1cm}{\scalebox{0.5}{\begin{tikzpicture}[scale = 1]
\Vertex[x=0, label = 1, size=1]{A}
\Edge[loopsize=1cm](A)(A)
\node[above right =0.3cm of A] (D) {$\psi$};
\end{tikzpicture}}}
\right]
+\frac{1}{1152}
\left[\parbox{1.2cm}{\scalebox{0.5}{\begin{tikzpicture}[scale = 1]
\Vertex[x=4, size =0.5]{B}
\draw (B) to [out =315, in=45, looseness = 10] (B);
\draw (B) to [out =225, in=135, looseness = 10] (B);
\end{tikzpicture}}}
\right],
\end{equation}
where the decoration of $\psi$ on a half edge of a graph means pushing forward the $\psi$ class at the corresponding mark via the gluing morphism.
\section{Results}
Throughout the following section, ``nontrivial partition'' refers to any integer partition of a number $n$ that is not $1+1+\dots+1=n$. Further, $\Delta$ refers to the dual graph
\[
\begin{tikzpicture}
\Vertex[x=0, label = 1, size=1]{A}
\Vertex[x=2, size = 0.5]{B}
\draw (B) to [out =315, in=45, looseness = 10] (B);
\draw (A) -- (B);
\end{tikzpicture},
\]
and the corresponding class in the tautological ring of $\Mo{2}{}$.
Recall that $\dim \Mo{g}{n}=3g-3+n$, hence the intersection number of a monomial of $\psi$-classes over $\Mo{g}{n}$ is nonzero if and only if the monomial's degree is $3g-3+n$. Indeed, for each $n$, there are $2^n$ strata in $\PI{n}$, since each stratum in $\PI{n-1}$ gives two strata in $\PI{n}$. Calculating the intersection numbers from Theorem \ref{lambdag} amounts to calculating a sum of $2^n$ intersection numbers, one for each stratum of $\PI{n}$. If we let $i$ denote one such stratum, $\int_{i}\Psi^K$ is equal to a product of two intersection numbers by Fubini's theorem, one being the intersection number of the monomial of $\psi$-classes over the genus 1 twig of $\Delta$ and the other being the intersection number of the monomial of $\psi$-classes over the genus 0 twig.
\begin{example} \normalfont \label{example:31}
To make the above discussion more concrete, let's investigate the $n=1$ case. For $n=1$, $\PI{1}$ contains two strata
\[
\begin{tikzpicture}
\Vertex[x=0, label = 1, size=1]{A}
\Vertex[x=2, size = 0.5]{B}
\draw (B) to [out =315, in=45, looseness = 10] (B);
\draw (A) -- (B);
\node[above left =0.5cm of A] (D) {};
\Edge (D)(A)
\Vertex[x=5, label = 1, size=1]{F}
\Vertex[x=7, size = 0.5]{G}
\draw (G) to [out =315, in=45, looseness = 10] (G);
\draw (F) -- (G);
\node[above left =0.5cm of G] (H) {};
\Edge (H)(G)
\end{tikzpicture}
\]
which we refer to as $A$ and $B$ respectively. Now $\Delta$ is of codimension 2. Therefore the pullback of $\Delta$ under $\pi$, $\PI{1}$, is of codimension 2 in the four-dimensional $\Mo{2}{1}$, and is therefore of dimension 2. \\
\indent Since $\dim \PI{1}=2$, a monomial of $\psi$-classes must be of degree 2 in order for the intersection number to be nonzero; further, $n=1$ so only one $\psi$-class is available to us. Hence, the intersection number of interest is $\int_{\PI{1}}\psi^2$.\\
\indent The pullback of $\Delta$ under $\pi$ consists of A and B so
\[\int_{\PI{1}}\psi^2=\int_A\psi^2+\int_B\psi^2\]
Then by Fubini's theorem and Section \ref{sssec:psi} we have
\begin{align*}
\int_{\PI{1}}\psi^2=\int_A\psi^2+\int_B\psi^2 &= \I{1}{2}\psi^2\I{0}{3}1+\I{1}{1}1\I{0}{4}\psi^2\\
&=\frac{1}{24}
\end{align*}
\end{example}
The only partitions of $n+1$ that can appear as exponent vectors for the monomials of $\psi$-classes are nontrivial. Indeed, for any positive integer $n$, intersection numbers are nonzero if and only if the monomial of $\psi$-classes in question has degree $n+1$. However, we only have one $\psi$-class for each marked point, i.e. $n$ $\psi$-classes. Therefore the partition $1+1+\dots+1=n+1$ is unattainable. \\
It's convenient for us to adopt the following notational convention:
\begin{itemize}
\item Given a nontrivial partition $K=k_1+\dots+k_n$ of $3g-3+n\in \mathbb{Z}^+$,
\[\I{g}{n}\Psi^K\vcentcolon= \I{g}{n}\psi_1^{k_1}\dots\psi_n^{k_n}\]
\end{itemize}
Let $c:X\rightarrow \textrm{pt}$ be the constant morphism; we observe that the integral symbol is just a notation for the degree of the pushforward of a class via $c$: $\int_X\alpha = c_*(\alpha)$. We have the following commutative diagram,
\begin{equation}\label{pushdia}
\begin{tikzcd}
{\overline{M}_{g,n+1}} \\
& {\textrm{pt}} \\
{\overline{M}_{g,n}} \\
{\overline{M}_g}
\arrow["{\pi_{[n-1]}}", from=3-1, to=4-1]
\arrow["{\pi_n }"', from=1-1, to=3-1]
\arrow["{\pi_{[n]}}"', bend right, from=1-1, to=4-1]
\arrow["c", from=1-1, to=2-2]
\arrow["{\tilde{c}}"', from=3-1, to=2-2]
\end{tikzcd}
\end{equation}
which will be useful in the proofs of the following lemmas.
We are now ready to provide the relations analogous to the dilaton and string equations with the following two lemmas.
\begin{lemma}\label{lem:dil}
Let $K$ be a nontrivial partition of $n\in \mathbb{Z}^+$, then
\[\int_{\PI{n}}\Psi^K\psi_n=(n+1)\int_{\PI{n-1}}\Psi^K\]
\end{lemma}
\begin{proof}
For a nontrivial partition $K$ of $n$, we start by expressing each $\psi$ class as the pull-back via the morphism forgetting the last mark, plus the appropriate boundary correction (see \cite{kocknotes}, Lemma 1.3.1)
\[\Psi^K\psi_n=\prod_i(\pi^*_n\psi_i+D_{i,n})^{k_i}\psi_n\]
where $D_{i,n}$ denotes the boundary divisor where the $i$-th and $n$-th marked points are together on a rational component with no other mark. Notice for all $i$, $\psi_n\cdot D_{i,n}=0$ and therefore
\begin{equation}\label{eq:puba}
\Psi^K\psi_n=\pi_n^*\left(\prod_i\psi_i^{k_i}\right)\psi_n\end{equation}
Chasing the commutative diagram \eqref{pushdia}, one has
\begin{align*}
\int_{\PI{n}}\Psi^K\psi_n&=c_*(\pi^*_n(\PI{n-1})\Psi^K\psi_n)\\
&=\tilde{c}_*\pi_{n*}(\pi^*_n(\PI{n-1})\Psi^K\psi_n).
\end{align*}
Then, by \eqref{eq:puba}, we may bring $\Psi^K$ within the parentheses, yielding
\[\tilde{c}_*\pi_{n*}(\pi^*_n(\PI{n-1}\Psi^K)\psi_n).\]
By the projection formula this reduces to
\[\tilde{c}_*(\PI{n-1}\Psi^K\cdot \pi_{n*}\psi_n).\]
This allows us to apply the dilaton equation, finally giving
\begin{align*}
\tilde{c}_*(\PI{n-1}\Psi^K\cdot\pi_{n*}\psi_n)&=(2g-2+n)\int_{\PI{n-1}}\Psi^K\\
&=(n+1)\int_{\PI{n-1}}\Psi^K.
\end{align*}
\end{proof}
\begin{lemma} \label{lem:string}
Let $K$ be a nontrivial partition of $n+1$, $n\in \mathbb{Z}^+$ with $k_n=0$ (in other words the length of $K$ is at most $n-1$). If $j\in [n-1]$ then we define $K(j)$ as
\[K(j)\vcentcolon= \begin{cases}
(k_1,\dots,k_j-1,\dots,k_n) & k_j>0 \\
0 & k_j=0
\end{cases}
\]
Then
\[\int_{\PI{n}}\Psi^K=\sum_{j\in [n-1]}\int_{\PI{n-1}}\Psi^{K(j)}\]
\end{lemma}
\begin{proof}
We start again with diagram \eqref{pushdia},
\begin{align*}
\int_{\PI{n}}\Psi^K &= c_*(\PI{n}\Psi^K)\\
&=\tilde{c}_*\pi_{n*}(\pi_n^*(\PI{n-1})\Psi^K).
\end{align*}
Then, by the projection formula
\[\tilde{c}_*\pi_{n*}(\pi_n^*(\PI{n-1})\Psi^K)=\tilde{c}_*(\PI{n-1}\cdot \pi_{n*}\Psi^K).\]
Finally the cycle theoretic string equation \eqref{eq:cyclestring} gives
\[\tilde{c}_*(\PI{n-1}\cdot \pi_{n*}\Psi^K)=\tilde{c}_*\left(\PI{n-1}\cdot \sum_{j\in[n-1]}\Psi^{K(j)}\right).\]
In terms of intersection numbers
\[\int_{\PI{n}}\Psi^K=\sum_{j\in [n-1]}\int_{\PI{n-1}}\Psi^{K(j)}.\]
\end{proof}
This next lemma is just a generalization of Pascal's rule for binomial coefficients. It may be proved by an elementary induction argument, so we omit its proof.
\begin{lemma}\label{lem:pas}
For $m\in \mathbb{Z}^+$, $m \geq 2$, $k_1,\dots,k_m\in \mathbb{Z}^+$ with $n=k_1+\dots+k_m\geq 1$ we have
\[\sum_{j\in[m]}\binom{n-1}{K(j)}=\binom{n}{K}\]
\end{lemma}
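Since we omit the proof, a quick computational check may reassure the reader. The following sketch (ours) verifies the identity for an arbitrary test partition, summing over the components with $k_j>0$.
\begin{verbatim}
from math import factorial

def multinom(n, K):
    # binom(n; k1,...,km), assuming sum(K) == n
    out = factorial(n)
    for k in K:
        out //= factorial(k)
    return out

def lhs(K):
    n = sum(K)
    total = 0
    for j, kj in enumerate(K):
        if kj > 0:                      # K(j) = 0 contributes nothing
            Kj = K[:j] + (kj - 1,) + K[j + 1:]
            total += multinom(n - 1, Kj)
    return total

K = (3, 2, 0, 1)                        # arbitrary test partition
print(lhs(K) == multinom(sum(K), K))    # True
\end{verbatim}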
These three lemmas allow us to prove the following first major result of this paper.
\begin{theorem} \label{thm:pullback}
Given a nontrivial partition $K$ of $n+1$ with $n\in \mathbb{Z}^+$,
\begin{equation}\label{eq:bino}\int_{\PI{n}}\Psi^K=\frac{1}{24}\binom{n+1}{K}\end{equation}
\end{theorem}
\begin{proof}
We proceed by induction with Example \ref{example:31} providing the base case.
Assume \eqref{eq:bino} holds for all nontrivial partitions of $n$ and let $K$ be a partition of $n+1$ with one component equal to $0$. Then we may apply Lemma \ref{lem:string} giving
\[\int_{\PI{n}}\Psi^K=\sum_{j\in[n-1]}\int_{\PI{n-1}}\Psi^{K(j)}\]
By the inductive hypothesis
\[ \sum_{j\in[n-1]}\int_{\PI{n-1}}\Psi^{K(j)}=\sum_{j\in[n-1]}\frac{1}{24}\binom{n}{K(j)} \]
Then Lemma \ref{lem:pas} and the initial condition yield
\[ \sum_{j\in[n-1]}\frac{1}{24}\binom{n}{K(j)}=\frac{1}{24}\binom{n+1}{K} \]
If $K$ is a nontrivial partition of $n+1$ with no component equal to $0$, then some component must be $1$. Indeed, if all components were greater than $1$ then their sum would exceed $n+1$. Hence we may write $\Psi^K=\Psi^{K'}\psi_n$ and apply Lemma \ref{lem:dil}, finding
\[ \int_{\PI{n}}\Psi^{K'}\psi_n=(n+1)\int_{\PI{n-1}}\Psi^{K'}. \]
So by the inductive hypothesis
\[ (n+1)\int_{\PI{n-1}}\Psi^{K'}=(n+1)\frac{1}{24}\binom{n}{K'}=\frac{1}{24}\binom{n+1}{K} \]
\end{proof}
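As a consistency check (ours, not part of the original argument), the recursions of Lemma \ref{lem:dil} and Lemma \ref{lem:string} can be run directly against the closed form \eqref{eq:bino}, using the symmetry of the integral under permutations of the marked points to apply the string-type relation at any zero exponent.
\begin{verbatim}
from fractions import Fraction
from functools import lru_cache
from math import factorial

def multinom(n, K):
    out = factorial(n)
    for k in K:
        out //= factorial(k)
    return out

@lru_cache(maxsize=None)
def pullback_int(K):
    # int over pi*Delta (n marks) of Psi^K; K has length n, sum n + 1.
    n = len(K)
    if n == 1:                         # base case, Example 3.1
        return Fraction(1, 24)
    if 0 in K:                         # string-type relation
        i = K.index(0)
        rest = K[:i] + K[i + 1:]
        return sum((pullback_int(rest[:j] + (rest[j] - 1,) + rest[j + 1:])
                    for j in range(n - 1) if rest[j] > 0),
                   Fraction(0))
    i = K.index(1)                     # dilaton-type relation
    rest = K[:i] + K[i + 1:]
    return (n + 1) * pullback_int(rest)

K = (2, 2, 1, 1, 0)                    # partition of 6, n = 5
print(pullback_int(K) == Fraction(multinom(6, K), 24))   # True
\end{verbatim}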
\noindent With Theorem \ref{thm:pullback} we now use Pixton's formula \eqref{for:pix} to give a proof of the $\lambda_g$ conjecture for $g=2$.
\begin{theorem}\label{thm:lambdatwo}
Given a nontrivial partition $K$ of $n+1$ for $n\in \mathbb{Z}^+$,
\[\I{2}{n}\lambda_2\Psi^K=\frac{7}{24\cdot 8\cdot 30}\binom{n+1}{K}.\]
\end{theorem}
\begin{proof}
We first convert \eqref{for:pix} into a linear combination of strata. We have $ \left[\parbox{.7cm}{\scalebox{0.3}{\begin{tikzpicture}[scale = 1]
\Vertex[x=0, label = 1, size=1]{A}
\Edge[loopsize=1cm](A)(A)
\node[above right =0.3cm of A] (D) {$\psi$};
\end{tikzpicture}}}
\right] =\textrm{gl}_*(\psi_1)$ for the gluing morphism $\textrm{gl}:\Mo{1}{2}\rightarrow\overline{M}_2$ and $\psi_1$ on $\Mo{1}{2}$.
On $\Mo{1}{1}$ we can express the class $\psi_1$ as:
$$\psi_1 = \frac{1}{24} \left[\parbox{ 1cm}{\scalebox{0.5}{\begin{tikzpicture}
\Vertex[x=0, size=0.5]{A}
\node[right =0.5 of A](B){};
\draw (A) -- (B);
\draw (A) to [out =225, in=135, looseness = 10] (A);
\end{tikzpicture}}}
\right]$$
Therefore, by \cite[Lemma 1.3.1]{kocknotes} $\psi_1$ on $\Mo{1}{2}$ can be expressed as
\[
\frac{1}{24}
\left[\parbox{ 1cm}{\scalebox{0.5}{\begin{tikzpicture}
\Vertex[x=0, size=0.5]{A}
\node[above right =0.5cm of A] (D) {};
\node[below right =0.5cm of A](P){};
\draw (A) -- (P);
\draw (D)--(A);
\draw (A) to [in =135, out=225, looseness = 10] (A);
\end{tikzpicture}
}}
\right]
+
\left[\parbox{1.5cm}{\scalebox{0.5}{\begin{tikzpicture}
\Vertex[x=5, label = 1, size=1]{F}
\Vertex[x=7, size = 0.5]{G}
\draw (F) -- (G);
\node[above right =0.5cm of G] (H) {};
\node[below right =0.5cm of G] (I) {};
\draw (G) -- (H);
\draw (G) -- (I);
\end{tikzpicture}
}}
\right].
\]
After pushforward we can then rewrite \eqref{for:pix} as:
\begin{equation} \label{eq:almostthere}
\lambda_2=\left(\frac{1}{1152}+\frac{1}{240\cdot24}\right)\Delta_0+\frac{1}{240}\Delta,\end{equation}
where $\Delta_0$ denotes $\textrm{gl}_*(\overline{M}_{0,5})$ via the gluing morphism that glues two distinct pairs of nodes.
From \eqref{eq:almostthere} and the fact that $\lambda_2$ is stable under pull-back, we have
\begin{align*}
\I{2}{n}\Psi^{K}\lambda_2 &= \I{2}{n}\Psi^{K}\pi_{[n]}^\ast\left(\left(\frac{1}{1152}+\frac{1}{240\cdot 24}\right)\Delta_0+\frac{1}{240}\Delta\right) \\
&= \left(\frac{1}{1152}+\frac{1}{240\cdot 24}\right)\int_{\pi^*_{[n]}(\Delta_0)}\Psi^K+\frac{1}{240}\int_{\PI{n}}\Psi^K
\end{align*}
Since $\pi_{[n]}^\ast(\Delta_0)=\textrm{gl}_*(\overline{M}_{0,2g+n})$ and in our case $g=2$, we have
\begin{equation} \label{last}
\int_{\pi^*_{[n]}(\Delta_0)}\Psi^K= \int_{\overline{M}_{0,2g+n}}\Psi^K=\binom{2g+n-3}{K}=\binom{n+1}{K}\end{equation}
Therefore by \eqref{last} and Theorem \ref{thm:pullback}
\begin{align*}
\left(\frac{1}{1152}+\frac{1}{240\cdot 24}\right)\int_{\pi^*_{[n]}(\Delta_0)}\Psi^K+\frac{1}{240}\int_{\PI{n}}\Psi^K &= \left(\frac{1}{1152}+\frac{1}{240\cdot 24}+\frac{1}{240\cdot 24}\right)\binom{n+1}{K}\\
&=\frac{7}{8\cdot 24\cdot 30}\binom{n+1}{K}.
\end{align*}
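Indeed, over the common denominator $5760=8\cdot 24\cdot 30$, one has $\frac{1}{1152}+\frac{2}{240\cdot 24}=\frac{5}{5760}+\frac{2}{5760}=\frac{7}{5760}$.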
\end{proof}
\section{Acknowledgments}
The first author would like to acknowledge Professor Renzo Cavalieri for suggesting the topic of this paper and his invaluable mentorship.
\bibliographystyle{alpha}
\section{\label{s:I-intro}INTRODUCTION}
An earlier theoretical paper\cite{Bindi05prb} (hereafter I) on magnetization
steps from a diluted Heisenberg antiferromagnet on the square lattice was
largely devoted to the nearest-neighbor (NN) cluster model.
To obtain magnetization steps (MST's), the NN exchange constant \J1 must
be antiferromagnetic (AF).
The present paper discusses cluster models with two exchange constants:
the largest, called \JJ1, and the second-largest, \JJ2.
Both exchange constants are assumed to be AF.
Some of the discussion in the present paper applies to any \JJ1-\JJ2 cluster
model, regardless of the neighbors associated with \JJ1 and \JJ2.
However, the main focus is on the \J1-\J2 and \J1-\J3 models.
In these models $J^{(1)}{=}J_1$, and \JJ2 is either the second-neighbor
exchange constant \J2, or the third-neighbor exchange constant \J3.
As in I, the following assumptions are made:
1) Thermal equilibrium, at temperature $T$ and magnetic field $B$, prevails.
2) All the magnetic ions are identical.
3) The cation sites form a square lattice, and the magnetic ions are randomly
distributed over these sites.
4) Only a fraction $x$ of all cations sites are occupied by magnetic ions.
This fraction is well below the site percolation concentration $x_c$ for the
relevant cluster model.
5) None of the magnetic interactions is anisotropic.
Background material for the present paper may be found in I,
in a recent review,\cite{Shapira02jap} and in earlier
papers.\cite{Vu92,Bindi98prl,Bindi99jap}
This paper is organized as follows.
The principal results are presented in the main text.
Supplementary material is relegated to appendices.
The main text starts with a brief discussion of the crucial role of cluster
``configurations'' in the theory.
All cluster types in the \J1-\J2 and \J1-\J3 models, subject to the restriction
$n_c{\le}5$ on the cluster size $n_c$, are then identified.
The statistics for these cluster types is expressed using perimeter polynomials
(PP's) for cluster types.
As discussed in I, the PP's for cluster types are analogous to the conventional
PP's for cluster sizes.\cite{Stauffer94}
The contribution of any cluster type to the magnetization $M$ is calculated by
combining the results for the energy eigenvalues with those for the cluster
statistics of that cluster type.
The total magnetization $M(T,B)$ is the sum of the contributions from all
cluster types.
In some respects the derivative curve, $\rmd M/\rmd B$\ versus $B$, is more
informative than the magnetization curve, $M$ versus $B$.
Examples of calculated magnetization and derivative curves at constant $T$ are
given for widely different ratios $J^{(2)}{/}J^{(1)}$.
The statistics for cluster types is independent of the spin $S$ of the
individual magnetic ions.
However, the energy eigenvalues, and therefore the magnetization curve, depend
on $S$.
Although much of the discussion is for any $S$, all the numerical examples are
for $S{=}5/2$, which is the appropriate value for the $\mathrm{Mn}^{2+}$ and
$\mathrm{Fe}^{3+}$ ions.
Both these ions are $S$-state ions, and they usually have low crystalline
anisotropy.
Such ions are useful for testing theories in which anisotropy is neglected.
The structures of the magnetization and derivative curves are much simpler when
the ratio $J^{(2)}{/}J^{(1)}$\ is ``very small.''
Cluster models with such widely different magnitudes of \JJ2 and \JJ1 are
called ``lopsided cluster models.''
In addition to their interesting physics, lopsided models are useful because
they apply to many materials.
Lopsided cluster models are discussed in detail in the following
paper.\cite{Bindi06eprint2}
\section{\label{s:II-config}CLUSTER CONFIGURATIONS}
\subsection{\label{ss:IIA}Configurations}
The calculation of the magnetization $M(T, B)$ in any cluster model requires:
1) the identification of all cluster types $c$ in that model;
2) a calculation of the average magnetic moment $\mu_c(T, B)$ per
realization,\cite{note5} for each cluster type $c$; and
3) an evaluation of the probabilities of finding the various cluster types.
Before carrying out the first and third of these tasks it is necessary to
identify all the ``cluster configurations'' that exist in the particular cluster
model.
These cluster configurations are the fundamental building blocks of the theory,
and also of the computer programs that are used to implement the theory.
Cluster configurations were discussed in Sec.~IIIB of I.
The discussion below brings out some new features.
\subsubsection{\label{sss:IIA1}Cluster configurations}
A spin cluster consists of a finite number of exchange-coupled magnetic ions
(spins) that occupy a set of cation sites.
Spin clusters are considered to have the same configuration if and
only if the sets of cation sites occupied by these clusters
can be obtained from each other by symmetry operations of the space group of
the cation structure.
Each such set of cation sites is called a ``realization'' of the
configuration.
The symmetry operations of the space group of the cation structure are the only
symmetry operations considered in the present work.
They will be referred to, simply, as ``symmetry operations.''
The symmetry operations that are relevant to the present work are the operations
of the $P4m$ space group of the square lattice, including the lattice
translations.
Realizations of the same configuration have the following important geometrical
property.
Starting from one realization, a rigid object can be constructed by
joining all pairs of cation sites in that realization by straight-line segments.
The straight-line segments may be viewed as ``struts'' that give the object its
rigidity.
Rigid objects constructed in this manner from different realizations of the same
configuration, either have identical shapes or are chiral isomers (mirror
images) of each other.
Because of this geometrical property, configurations of clusters are sometimes
viewed as the geometrical shapes of clusters.
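To make this equivalence concrete, the following minimal Python sketch (ours; the programs used in this work are not reproduced here) decides whether two finite sets of square-lattice sites are realizations of the same configuration, by comparing canonical forms under the eight point operations of $P4m$ combined with translations. Consistent with the discussion above, mirror images are identified because the reflections belong to $P4m$.
\begin{verbatim}
# The 8 point operations of the square lattice (rotations and reflections).
POINT_OPS = [
    lambda x, y: (x, y),   lambda x, y: (-y, x),
    lambda x, y: (-x, -y), lambda x, y: (y, -x),
    lambda x, y: (x, -y),  lambda x, y: (-x, y),
    lambda x, y: (y, x),   lambda x, y: (-y, -x),
]

def canonical(sites):
    """Lexicographic minimum over the point operations, each followed by
    the translation that moves the bounding-box corner to the origin."""
    forms = []
    for op in POINT_OPS:
        pts = [op(x, y) for (x, y) in sites]
        x0 = min(p[0] for p in pts)
        y0 = min(p[1] for p in pts)
        forms.append(tuple(sorted((x - x0, y - y0) for (x, y) in pts)))
    return min(forms)

def same_configuration(a, b):
    return canonical(a) == canonical(b)

# An L-shaped triplet and its mirror image share one configuration.
print(same_configuration([(0,0),(1,0),(1,1)], [(0,0),(0,1),(1,0)]))  # True
\end{verbatim}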
\subsubsection{\label{sss:IIA2} Cluster configurations of one specific cluster
model}
A cluster model is specified by the set of exchange constants (the $J$'s) that
are included in the model.
Any spin cluster in this model consists of spins that are coupled to each other,
but not to other spins, by the $J$'s of the model.
Thus, any two spins in a cluster are connected by at least one continuous path
of exchange bonds associated with this set of $J$'s.
No continuous path of such exchange bonds is allowed to exist between spins in
different clusters.
The restriction on the allowed $J$'s is a restriction on the allowed sets of
cation sites associated with the clusters of the model.
For example, in the \J1-\J2 model, any cation site of a cluster must
have a NN site or a 2nd-neighbor site in the same cluster.
The cluster configurations of the model are all the configurations of the sets
of cation sites that are allowed by the $J$'s of the model.
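As an illustration, the connectivity rule of the \J1-\J2 model can be sketched in a few lines of Python (our code; the neighbor offsets follow the square-lattice geometry).
\begin{verbatim}
from collections import deque

NN     = [(1,0), (-1,0), (0,1), (0,-1)]     # J1 (NN) bonds
SECOND = [(1,1), (1,-1), (-1,1), (-1,-1)]   # J2 (2nd-neighbor) bonds

def is_single_cluster(sites, offsets=NN + SECOND):
    """True if every two sites are joined by a path of allowed bonds."""
    sites = set(sites)
    start = next(iter(sites))
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        for dx, dy in offsets:
            nb = (x + dx, y + dy)
            if nb in sites and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return seen == sites

# (0,0) and (1,1) are 2nd neighbors: one cluster in the J1-J2 model,
# but two separate clusters in the pure J1 model.
print(is_single_cluster([(0,0), (1,1)]))              # True
print(is_single_cluster([(0,0), (1,1)], offsets=NN))  # False
\end{verbatim}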
\subsubsection{\label{sss:IIA3}Identical configurations in different cluster
models}
Sometimes, several cluster models are considered.
It may then happen that the set of $J$'s in one cluster model is only a subset
of all the exchange constants in another cluster model.
Any cluster configuration in the cluster model with the fewer $J$'s is then also
a configuration in the model with the larger set of $J$'s.
For example, the exchange constant \J1 of the NN cluster model is a subset of
the exchange constants in the \J1-\J2 model.
Any cluster configuration in the \J1 model is also a configuration in the
\J1-\J2 model.
The converse is not true; many cluster configurations that exist in the \J1-\J2
model do not exist in the \J1 model.
For the same reason, all cluster configurations that exist in the \J1 model
also exist in the \J1-\J3 model.
Figure~2 of I shows an example of the same configuration in different cluster
models.
Figure~2(b) of I shows the configuration in the \J1 model, and Figs.~2(c) and
2(d) show the same configuration in the \J1-\J2 and \J1-\J3 models,
respectively.
\subsection{\label{ss:IIB}From cluster configurations to cluster types}
Once a cluster model is specified, it is necessary to identify the cluster types
that exist in the model.
In the present work this identification was carried out by a series of computer
programs.
The first set of programs identified all the (different) configurations that
exist in the specified cluster model.
For each configuration, a realization that has one spin at the origin was
generated.
This realization is called the ``prototype'' of the configuration.
Only configurations with no more than 5 cation sites were considered explicitly
in the present work.
A cluster type $c$ is specified by a cluster size $n_c$ and by a bond
list.\cite{Bindi05prb}
To identify the cluster types that exist in the model, the prototypes of all the
(different) configurations were first classified by size, i.e., by the number of
spins in the prototype.
The next step was to generate the bond lists for all prototypes of a given size,
$n_c$.
The final step was to identify all prototypes of the same size $n_c$ that have
identical bond lists.
Such prototypes are, by definition,\cite{note5} realizations of the same cluster
type, $c$.
In fact, they are the prototypes of all the configurations $r_c$ of cluster type
$c$.
To summarize, the classification of the prototypes of different configurations
by both size and bond list leads to:
1) all cluster types $c$, for each cluster size $n_c$;
2) the bond list for each of these cluster types;
and 3) the prototypes of all the configurations of each cluster type.
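To illustrate what a bond list records, the short sketch below (ours) extracts the \J1 and \J2 bonds of a given realization, indexing the spins in sorted site order; an actual implementation must in addition bring the list to a canonical labeling before prototypes are compared.
\begin{verbatim}
def bond_list(sites):
    sites = sorted(sites)
    bonds = []
    for i in range(len(sites)):
        for j in range(i + 1, len(sites)):
            dx = abs(sites[i][0] - sites[j][0])
            dy = abs(sites[i][1] - sites[j][1])
            if dx + dy == 1:
                bonds.append((i, j, "J1"))   # nearest neighbors
            elif dx == 1 and dy == 1:
                bonds.append((i, j, "J2"))   # 2nd neighbors
    return bonds

# A J1 pair attached to a third spin by a J2 bond (triplet type 3-3).
print(bond_list([(0,0), (1,0), (2,1)]))
# [(0, 1, 'J1'), (1, 2, 'J2')]
\end{verbatim}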
\subsection{\label{ss:IIC}Statistics of cluster types}
The goal of the statistics is to find, for each cluster type $c$, the
probability $P_c$ that a randomly-chosen spin is in one of the realizations of
this cluster type.
The main assumption is that the magnetic ions are randomly distributed over the
cation sites.
The procedure for calculating $P_c$ as a function of $x$ was outlined in
Sec.~III C of I.
The procedure starts from the configurations $r_c$ of cluster type $c$.
Any realization of cluster type $c$ must also be a realization of one of the
configurations $r_c$ of that cluster type.
The probability $P_{r_c}$ that a randomly-chosen spin is in some realization of
the configuration $r_c$ is given by Eq.~(4) of I.
This equation contains two parameters that depend on the configuration:
the lattice-combinatorial parameter $n_{r_c}$, and the perimeter $\nu_{r_c}$.
After these two parameters are evaluated for each of the configurations $r_c$,
the probability $P_c$ is obtained by summing $P_{r_c}$ over all the configurations
$r_c$ of cluster type $c$. This sum is given by Eq.~(5) of I.
The lattice-combinatorial parameter $n_{r_c}$ is the number of (distinct)
realizations of the configuration $r_c$ that have one spin at the origin.
This $n_{r_c}$ depends only on the configuration.
If the same configuration exists in more than one cluster model, then the
corresponding lattice-combinatorial parameters are the same in all these models.
The computer program that was used to obtain $n_{r_c}$ was based on the
principle that all realizations of a configuration can be generated from the
prototype of the configuration by applying the symmetry operations of the $P4m$
space group, including the lattice translations.
If the same realization was generated by different symmetry operations, the
count for $n_{r_c}$ included this realization only once.
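Reusing \texttt{POINT\_OPS} from the sketch in Sec.~\ref{sss:IIA1}, this counting principle can be illustrated as follows (our sketch, not the actual program).
\begin{verbatim}
def n_rc(prototype):
    """Count the distinct realizations of a configuration that have one
    spin at the origin (each realization counted once)."""
    realizations = set()
    for op in POINT_OPS:
        pts = [op(x, y) for (x, y) in prototype]
        for (tx, ty) in pts:       # translate each spin to the origin
            realizations.add(tuple(sorted((x - tx, y - ty)
                                          for (x, y) in pts)))
    return len(realizations)

print(n_rc([(0,0), (1,0)]))   # 4: the NN pair has 4 such realizations
\end{verbatim}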
In contrast to $n_{r_c}$, the perimeter $\nu_{r_c}$ depends not only on the
configuration but also on the cluster model.
If the same configuration exists in two cluster models, the perimeters in the
two models are, in general, not equal.
For example, any configuration that is present in the \J1 model is also present
in the \J1-\J2 model.
The perimeter in the \J1 model will be called the \J1 perimeter.
Given any realization of the configuration $r_c$, the \J1 perimeter is the
number of (cation) sites that are NN's of the sites in the realization, but are
not themselves sites of the realization.
The perimeter in the \J1-\J2 model (called the \J1-\J2 perimeter) is the
number of sites that are either NN's and/or 2nd-neighbors of the sites in the
realization, but are not themselves sites of the realization.
Therefore, for the same configuration, the \J1-\J2 perimeter is larger than the
\J1 perimeter.
As a consequence, the probability that a randomly-chosen spin is in a
realization of this configuration will be lower in the \J1-\J2 model than in
the \J1 model [see Eq.~(4) of I].
Similar results apply to a configuration that exists in both
the \J1 and the \J1-\J3 models.
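The model dependence of the perimeter is easily made concrete. In the sketch below (ours; \texttt{NN} and \texttt{SECOND} as in the sketch of Sec.~\ref{sss:IIA2}), the perimeter of a \J1 pair grows from 6 in the \J1 model to 10 in the \J1-\J2 model.
\begin{verbatim}
def perimeter(sites, offsets):
    """Number of empty sites coupled to the realization by the
    bonds of the model."""
    sites = set(sites)
    boundary = {(x + dx, y + dy) for (x, y) in sites
                                 for (dx, dy) in offsets}
    return len(boundary - sites)

pair = [(0,0), (1,0)]                  # a J1 pair
print(perimeter(pair, NN))             # 6:  the J1 perimeter
print(perimeter(pair, NN + SECOND))    # 10: the larger J1-J2 perimeter
\end{verbatim}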
\section{\label{s:III-types}CLUSTER TYPES }
\begin{figure*}\includegraphics[scale=1]{fig1}
\caption{\label{Fig1}Cluster types of the \J1-\J2 model, up to cluster size $n_c{=}4$.
Solid circles represent spins. Solid and dotted lines represent \J1 bonds and
\J2 bonds, respectively. The labels for the cluster types are discussed in the text.}
\end{figure*}
\begin{figure*}\includegraphics[scale=1]{fig2}
\caption{\label{Fig2} Cluster types of quintets $(n_c{=}5$) in the \J1-\J2
model.}
\end{figure*}
\subsection{\label{ss:IIIA}Generic and specific models}
A \JJ1-\JJ2 cluster model in which the symmetry classes of the neighbors
associated with \JJ1 and \JJ2 are not specified will be called a ``generic'' model.
If the symmetry classes of the neighbors are specified, the cluster model is
``specific'' rather than generic.
In this paper, only two specific \JJ1-\JJ2 models are considered:
the \J1-\J2 model (with $|J_1|{>}|J_2|$), and the \J1-\J3 model (with
$|J_1|{>}|J_3|$).
The reason for focusing on these specific models is that exchange constants
often tend to decrease as the distance $r$ of the relevant neighbor increases.
Although the decrease is not always monotonic,\cite{Bindi98prl,Bindi99jap} we
expect that in the vast majority of diluted antiferromagnets with a square cation
lattice, $J^{(1)}{=}J_1$ and \JJ2 is either \J2 or \J3.
The cluster types in the \J1-\J2 model are not trivially related to the cluster
types in the \J1-\J3 model.
That is, the bond lists for the two models cannot be obtained from each other by
replacing the \J2 bonds by \J3 bonds.
The site percolation concentrations for these two specific models are also
different.\cite{Malarz05pre}
The non-trivial dependence of the bond lists, and hence of the cluster types, on
the specific cluster model implies that bond lists and cluster types cannot be
given for a generic model. They can only be given for a specific model.
\subsection{\label{ss:IIIB}Parent cluster models}
The ``parent'' cluster models of the \J1-\J2 model are the \J1 model and the
\J2 model, each of which has only one of the exchange constants of the \J1-\J2
model.
The \J1-\J2 model is not a simple combination of its parent models.
Similarly, the \J1-\J3 model is not a simple combination of the \J1 and \J3
models, which are its parent models.
There are many interesting relations between the \J1-\J2 and \J1-\J3 models
and their respective parent models.
To avoid repeated interruptions in the main text of the paper, discussions of
these relations are relegated to Appendices.
For the limited purpose of calculating magnetization curves numerically, the
results in these Appendices are not essential.
However, these results give a deeper insight into the physics.
Some of these results will be quoted, and used, in the main texts of the present
paper and of the following paper.
Appendix~\ref{a:iso} describes the strong similarity (called ``isomorphism'')
between the three parent models, i.e., the \J1, \J2, and \J3 models.
These are the only ``parent models'' considered in the present work.
\subsection{\label{ss:IIIC} Cluster types in the \J1-\J2 model}
\begin{figure*}\includegraphics[scale=1]{fig3}
\caption{\label{Fig3}Cluster types of the \J1-\J3 model, up to cluster size $n_c{=}4$.
The format, including the format of the labels for the cluster types, is similar to that in \Fig1,
except that the dotted lines represent \J3 bonds.}
\end{figure*}
\begin{figure*}\includegraphics[scale=1]{fig4}
\caption{\label{Fig4} Cluster types of quintets $(n_c{=}5$) in the \J1-\J3
model.}
\end{figure*}
\subsubsection{\label{sss:IIIC1}Cluster types}
In the \J1-\J2 model on the square lattice, there are 67 cluster types of sizes
$n_c{\le}5$. These cluster types are shown in \Figs12.
Cluster types of the same size, $n_c$, appear in the same column.
The four columns in \Fig1 show the cluster types with $n_c {=} 1, 2, 3$, and $4$.
The single column in \Fig2 shows the cluster types with $n_c{=}5$.
Spins are represented by solid circles, \J1 bonds by solid lines, and \J2 bonds
by dotted lines.
Only one configuration for each cluster type is shown.
The bond lists for these cluster types, which specify all intra-cluster exchange
interactions,\cite{Bindi05prb} are given in Appendix~\ref{a:bondlists}.
Each cluster type is labeled by two numbers separated by a hyphen.
The first number is the cluster size $n_c$.
The second is a serial number (SN) within this cluster size.
Thus, the format for any label is [$n_c$-(SN)].
There is only one type of single (type 1-1), but two types of pairs:
2-1 which is a \J1-pair, and 2-2 which is a \J2-pair.
There are four types of triplets ($n_c{=} 3$), 15 types of quartets
($n_c{=}4$), and 45 types of quintets ($n_c{=} 5$).
\subsubsection{\label{sss:IIIC2}Four categories of cluster types}
The number of cluster types in \Figs12 is rather large.
It is therefore useful to classify them.
The classification scheme is not unique. In one scheme the 67 cluster types
in \Figs12 are divided into four broad categories:
\begin{enumerate}\setlength{\topsep}{0pt}\setlength{\itemsep}{-0.25\baselineskip}
\item ``single,'' which has no exchange bonds;
\item ``pure \J1'' cluster types, with only \J1 exchange bonds;
\item ``pure \J2'' cluster types, with only \J2 bonds;
\item ``mixed'' cluster types, with both \J1 and \J2 bonds.
\end{enumerate}
The single (type 1-1) is at the left of the bottom row of \Fig1.
The only pure \J2 cluster types are the other 5 cluster types in the same bottom
row together with the four cluster types in the bottom row of \Fig2.
The only pure \J1 cluster types are: 2-1, 3-1, 4-1, and 5-1.
All the remaining 53 cluster types are ``mixed'' types.
The pure \J1 cluster types, and the pure \J2 cluster types, are related to
cluster types in the (parent) \J1 and \J2 models, respectively.
These relations are discussed in Appendix~\ref{a:relations}.
\subsection{\label{ss:IIID}Cluster types in the \J1-\J3 model}
In the \J1-\J3 model there are 82 cluster types of sizes $n_c{\le}5$.
They are shown in \Figs34.
The format is the same as in \Figs12, except that the dotted lines now
represent \J3 bonds.
The bond lists for the cluster types in \Figs34 are given in
Appendix~\ref{a:bondlists}.
The labels for the cluster types of the \J1-\J3 model
have the same format as those for the cluster types in the \J1-\J2 model.
Therefore, many of the labels used in the two models are identical.
When the same label appears in both models, it often refers to different cluster
types (i.e., different bond lists).
Unless it is clear from the context, it is then necessary to specify
the cluster model to which the label refers.
Once again the cluster types in \Figs34 can be divided into four categories:
\begin{enumerate}\setlength{\topsep}{0pt}\setlength{\itemsep}{-0.25\baselineskip}
\item the ``single'' (type 1-1), at the left of the bottom row of \Fig3;
\item the 9 ``pure \J3'' cluster types, consisting of the other 5 cluster types
in the bottom row of \Fig3 together with the 4 cluster types in the bottom row of \Fig4;
\item the 5 ``pure \J1'' cluster types: 2-1, 3-2, 4-3, 4-5, and 5-5;
\item the 67 remaining cluster types, which are all ``mixed'' types.
\end{enumerate}
\section{\label{s:IV} CLUSTER STATISTICS AND PERIMETER POLYNOMIALS}
\subsection{\label{ss:IVA}Results for small clusters}
The probabilities $P_c$ as a function of $x$ were obtained using the procedure
discussed in Sec.~\ref{ss:IIC}.
Only cluster types with $n_c{\le}5$ were considered.
Some of the results for the \J1-\J2 model are shown in \Fig5.
They include the $P_c$'s for the single (type 1-1), for the two types of pairs
(2-1 and 2-2), and for the four types of triplets (3-1, 3-2, 3-3 and 3-4).
These labels for the cluster types refer to \Fig1.
Also shown in \Fig5 are:
the sum $P_4$ of the probabilities for all 15 quartet types in \Fig1;
the sum $P_5$ of the probabilities for all the quintet types in \Fig2;
and the sum $P_{>5}$ of the probabilities for all cluster types with sizes
$n_c{>}5$.
\begin{figure*}\includegraphics[scale=1]{fig5}
\caption{\label{Fig5}
Probabilities $P_c$ as a function of $x$ for some cluster types of the
\J1-\J2 model.
The solid curves are for individual cluster types, labeled as in \Fig1.
Curve 1-1 is for singles, curves 2-1 and 2-2 are for the two types of pairs, and
curves 3-1 up to 3-4 are for the four types of triplets.
The dashed curves are probabilities for some combinations of cluster types:
$P_4$ is the sum of the $P_c$'s for all the 15 quartet types in \Fig1;
$P_5$ is the sum of the $P_c$'s for all 45 quintet types in \Fig2;
and $P_{>5}$ is the sum of the probabilities for all cluster types of sizes
$n_c{>}5$.
}
\end{figure*}
\Figure6 shows the corresponding probabilities for the \J1-\J3 model.
In this case the cluster types refer to \Figs34.
In both \Fig5 and \Fig6 the highest value of $x$ is below the relevant site
percolation concentration, $x_c{=}0.407$ for the \J1-\J2 model,
and $x_c{=}0.337$ for the \J1-\J3 model.\cite{Malarz05pre,Peters79}
For the \J1 model, $x_c{=}0.593$.
\begin{figure*}\includegraphics[scale=1]{fig6}
\caption{\label{Fig6}
Probabilities $P_c$ as a function of $x$ for some cluster types of the \J1-\J3
model.
The labels for the various curves are similar to those in \Fig5, except that
the cluster types refer to \Figs34 for the \J1-\J3 model.}
\end{figure*}
\subsection{\label{ss:IVB}Perimeter polynomials}
As discussed in I, the probability $P_c$ can be expressed succinctly in the form
\begin{equation}
P_c = n_cx^{n_c{-}1}D_c(q), \label{Eq:1}
\end{equation}
where $D_c(q)$ is a polynomial in $q{=}1{-}x$, defined as the perimeter
polynomial (PP) for cluster type $c$.
Appendix~\ref{a:bondlists} gives the PP's for cluster types with sizes
$n_c{\le}5$ in the \J1-\J2 model. These are the cluster types in \Figs12.
Appendix~\ref{a:bondlists} also gives the PP's for cluster types of the \J1-\J3
model whose sizes are $n_c{\le}5$. These are the cluster types in \Figs34.
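For numerical work, Eq.~(\ref{Eq:1}) is straightforward to evaluate once the configuration data are known. The sketch below (ours) assumes the natural per-configuration form $P_{r_c}=n_{r_c}x^{n_c-1}q^{\nu_{r_c}}$, summed over the configurations of the type; the illustrative values $n_{r_c}{=}4$ and $\nu_{r_c}{=}6$ are those of the NN pair in the pure \J1 model, while in practice the tabulated PP's of Appendix~\ref{a:bondlists} would be used.
\begin{verbatim}
def P_c(x, n_c, configs):
    """configs: list of (n_rc, nu_rc) pairs for one cluster type."""
    q = 1.0 - x
    return sum(n_rc * x**(n_c - 1) * q**nu_rc
               for (n_rc, nu_rc) in configs)

# Probability that a spin belongs to a J1 pair in the pure J1 model:
print(P_c(0.05, n_c=2, configs=[(4, 6)]))   # ~0.147 at x = 0.05
\end{verbatim}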
For the \J1-\J2 model, the following check on the results for $D_c(q)$ in
Appendix~\ref{a:bondlists} was performed.
The conventional PP's, $D(q)$, for cluster sizes (not types) were given
for this model by Peters et al.\cite{Peters79}
As expected, the polynomial $D(q)$ for any cluster size $s$ is equal to the sum
$\sum D_c(q)$ of the PP's in Appendix~\ref{a:bondlists} over all cluster types
$c$ that have the size $n_c{=}s$.
Appendix~\ref{a:comments} discusses some relations between the probabilities
$P_c$ for the pure-\J1 and pure-\J2 cluster types in the \J1-\J2 model and the
probabilities for the same cluster types in the (parent) \J1 and \J2 models.
The relations between the probabilities $P_c$ for pure-\J1 and pure-\J3 cluster
types in the \J1-\J3 model and the probabilities for the same cluster types in
the (parent) \J1 and \J3 models are also discussed.
\section{\label{s:V}THE MAGNETIZATION CURVE}
\subsection{\label{ss:VA}Calculation procedure}
The procedure for calculating the magnetization $M(T,B)$ was discussed
previously.\cite{Bindi05prb,Shapira02jap}
Briefly, the Hamiltonian of one realization\cite{note5} of each cluster type $c$
is diagonalized, and the results are used to obtain the average magnetic moment
$\mu_c(T,B)$ per realization.
For singles, $\mu_c(T,B)$ follows the Brillouin function for spin $S$.
For each of the other cluster types, $\mu_c(T,B)$ exhibits a series of MST's as
a function of $B$ at very low $T$.
The total magnetization $M(T,B)$ is a statistically-weighted sum of $\mu_c(T,B)$.
This sum is given by Eq.~(13) of I, namely,
\begin{equation}
M(T,B) = \sum_c N_c \mu_c(T,B) = N_\mathrm{total}\sum_c
\frac{P_c}{n_c}\mu_c(T,B),
\label{Eq:2}
\end{equation}
where $N_c{=}P_c N_\mathrm{total}/n_c$ is the population of cluster type $c$,
and $N_\mathrm{total}$ is the total number of spins.
The quantities $M$, $N_c$ and $N_\mathrm{total}$ are all either per unit mass
or per unit volume.
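A minimal sketch (ours) of the truncated sum per spin is given below; the Brillouin function supplies $\mu_c$ for singles, as noted above, while the moments of larger clusters, which come from diagonalizing their Hamiltonians, are passed in as precomputed values.
\begin{verbatim}
import math

def brillouin(S, y):
    """Brillouin function B_S(y), y = g*mu_B*S*B/(k_B*T), y > 0.
    Singles contribute mu = g*mu_B*S*B_S(y)."""
    coth = lambda z: math.cosh(z) / math.sinh(z)
    return ((2*S + 1) / (2*S)) * coth((2*S + 1) * y / (2*S)) \
           - (1 / (2*S)) * coth(y / (2*S))

def m_per_spin(terms):
    """Truncated Eq. (2) per spin: terms is a list of (P_c, n_c, mu_c),
    with mu_c in units of g*mu_B."""
    return sum(P_c / n_c * mu_c for (P_c, n_c, mu_c) in terms)
\end{verbatim}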
The infinite sum in Eq.~(\ref{Eq:2}) cannot be evaluated exactly because
$\mu_c(T,B)$ and $P_c$ (or $N_c$) are known only for a finite number of cluster
types $c$.
Usually they are known only for cluster types whose sizes $n_c$ are no
larger than some maximum size $n_{\mathrm{max}}$.
The sum in Eq.~(\ref{Eq:2}) is therefore truncated after the finite sum over
cluster types with $n_c{\le} n_\mathrm{max}$ is evaluated exactly.
The remainder (REM) from clusters of sizes $n_c{>}n_\mathrm{max}$, is then
approximated by the ``remainder correction'' $R(T,B)$.
In the present work, $n_\mathrm{max}{=} 5$, so that the REM is from clusters larger
than quintets.
The remainder correction $R(T,B)$ was obtained by the corrective quintets
(CQUIN's) method.
The application of this method to the NN cluster model was discussed in I.
For a model with two exchange constants the CQUIN's method is considerably more
involved.
The number of spins in the REM is $P_{>5}N_\mathrm{total}$.
The accuracy of the CQUIN's method is not an important issue if $P_{>5}{\ll}1$.
Because $P_{>5}$ increases with $x$, the accuracy is not a significant issue if
$x$ is sufficiently small.
All magnetization curves in the present paper are for $x {\leq} 0.09$,
and are based on the \J1-\J2 model.
Under these conditions, $P_{>5} {\leq} 3.6\%$, so that errors in $M$ resulting from
the CQUIN's method are not significant.
The description of the CQUIN's method is postponed to the following paper, which
also includes some examples for higher $x$.
\subsection{\label{ss:VB}The two reduced magnetic fields}
The two exchange constants, \JJ1 and \JJ2, lead to two energy scales for the
Zeeman energy.
In analogy to Eq.~(10) of I, the primary reduced magnetic field $b_1$ is defined
as
\begin{subequations}\label{Eq:3}
\begin{equation}
b_1 = g\mu_\mathrm{B} B/|J^{(1)}|. \label{Eq:3a}
\end{equation}
The secondary reduced magnetic field $b_2$ is
\begin{equation}
b_2 = g\mu_\mathrm{B} B/|J^{(2)}|. \label{Eq:3b}
\end{equation}
\end{subequations}
Thus, at any given $B$,
\begin{equation}
{b_1}/{b_2} = \left|{J^{(2)}}/{J^{(1)}}\right| < 1.\label{Eq:4}
\end{equation}
The reduced magnetization $m$ is defined as in I. That is,
\begin{equation}
m = M/M_0, \label{Eq:5}
\end{equation}
where $M_0$ is the true saturation value of $M$.
The reduced parameters $b_1$, $b_2$, and $m$, will be used in plots and
discussions of the magnetization curves.
\subsection{\label{ss:VC}MST's from the two different types of pairs}
At a low $T$ the calculated magnetization curve, $M$ versus $B$, includes a
superposition of many series of MST's.
Each series arises from some cluster type $c$ of size
$2{\le}n_c{\le}n_\mathrm{max}$.
The magnitude $(\Delta M)_c$ of a magnetization jump at each MST in the series
is proportional to the population $N_c$ of the relevant cluster type.
For low $x$ the largest jumps $(\Delta M)_c$ are for \JJ1 pairs and \JJ2 pairs.
In both \J1-\J2 and \J1-\J3 models these pairs are cluster types 2-1 and 2-2,
respectively (see \Figs13).
The MST's from \JJ1 pairs occur at the primary reduced fields
\begin{subequations}\label{Eq:6}
\begin{equation}
b_1 = 2, 4, 6,\dots,4S. \label{Eq:6a}
\end{equation}
The MST's from \JJ2 pairs occur when the secondary reduced field has the same
values, i.e., at
\begin{equation}
b_2 = 2, 4, 6,\dots,4S. \label{Eq:6b}
\end{equation}
\end{subequations}
Experimental values of the magnetic fields $B$ at the MST's from pairs are often
used to determine \JJ1 and \JJ2.
The temperature requirement for resolving the MST's from \JJ2 pairs is
$k_\mathrm{B} T{<}|J^{(2)}|$.
This is a more stringent requirement than $k_\mathrm{B} T{<}| J^{(1)} |$ for resolving
the MST's from \JJ1 pairs.
Equations (\ref{Eq:6a}) and (\ref{Eq:6b}) use the reduced fields $b_1$ and $b_2$.
When the MST positions are expressed in terms of the magnetic field $B$ itself,
rather than in the reduced fields, the ranges of $B$ for the MST series from the
two types of pairs may or may not overlap.
The condition for avoiding overlap is
\begin{equation}
{J^{(2)}}/{J^{(1)}} < {1}/{2S}. \label{Eq:7}
\end{equation}
For magnetic ions with $S{=}5/2$ (e.g., $\mathrm{Mn^{2+}}$ or $\mathrm{Fe^{3+}}$)
overlap is avoided if $J^{(2)}{/}J^{(1)}{<}0.2$.
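A small numerical sketch (ours) of Eqs.~(\ref{Eq:6}) and (\ref{Eq:7}) places both MST series on the $b_1$ axis, using $b_1{/}b_2{=}|J^{(2)}{/}J^{(1)}|$ from Eq.~(\ref{Eq:4}); for $S{=}5/2$ it reproduces the threshold of 0.2.
\begin{verbatim}
def pair_msts_b1(S, ratio):
    """MST positions of J1 pairs and J2 pairs, both on the b1 axis.
    ratio = |J2/J1|; a J2-pair step at b2 = n sits at b1 = n*ratio."""
    steps = range(2, int(4 * S) + 1, 2)        # b = 2, 4, ..., 4S
    return [float(n) for n in steps], [n * ratio for n in steps]

def series_overlap(S, ratio):
    j1, j2 = pair_msts_b1(S, ratio)
    return max(j2) > min(j1)                   # overlap iff 4S*ratio > 2

print(series_overlap(2.5, 0.28))    # True:  the two series overlap
print(series_overlap(2.5, 0.028))   # False: a gap opens between them
\end{verbatim}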
\subsection{\label{ss:VD}Two examples of magnetization curves for very low $x$}
The main purpose of the following two examples is to illustrate the dependence
of the MST pattern on the ratio $J^{(2)}{/}J^{(1)}$.
The examples are for $x{=}0.01$ in the \J1-\J2 cluster model.
To optimize the resolution of the spectra, the examples are for $T{=}0$.
The value $S{=}5/2$ is assumed.
\Figures78 are for $J_2{/}J_1{=}0.28$.
\Figure7 shows the reduced magnetization $m$ as a function of the primary
reduced magnetic field $b_1$.
MST's from \J1 pairs (cluster type 2-1) are indicated by long arrows, and
those from \J2 pairs (cluster type 2-2) by shorter arrows.
The main part of \Fig7 shows the magnetization curve up to $b_1=7.5$.
The full magnetization curve is shown in the inset.
\begin{figure}\includegraphics[scale=1]{fig7}
\caption{\label{Fig7}Magnetization curve at $T{=}0$, calculated from the \J1-\J2
model using the parameters $x{=}0.01$, $S{=}5/2$, and $J_2{/}J_1{=}0.28$.
The ordinate is the reduced magnetization, $m{=}M{/}M_0$, where $M_0$ is the true
saturation magnetization. The abscissa is the primary reduced magnetic field
$b_1{=}g\mu_\mathrm{B} B{/}|J_1|$, up to 7.5.
MST's from \J1 pairs (cluster type 2-1) are indicated by long upward arrows.
MST's from \J2 pairs (cluster type 2-2) are indicated by shorter downward
arrows. The inset shows the full magnetization curve, up to complete
saturation.}
\end{figure}
\begin{figure}\includegraphics[scale=1]{fig8}
\caption{\label{Fig8}The derivative of the magnetization curve in \Fig7.
The ordinate is the derivative $\mathrm{d} m/\mathrm{d} b_1{=}(|J_1|{/}g\mu_\mathrm{B} M_0) (\mathrm{d} M{/}\mathrm{d} B)$.
The lower abscissa scale is for the primary reduced field $b_1$.
The upper abscissa scale is for the secondary reduced magnetic field $b_2$.
The derivative peaks at MST's from cluster types 2-1 (\J1 pairs) and 2-2 (\J2
pairs) are indicated by long and short arrows, respectively.}
\end{figure}
\Figure8 shows the derivative $\rmd M/\rmd B$, in normalized units,\cite{note10}
in fields up to $b_1=7$, corresponding to the main part of \Fig7.
The upper abscissa scale is for the secondary reduced magnetic field $b_2$.
The derivative peaks at MST's from \J1-pairs (type 2-1) and from \J2-pairs (type 2-2),
are indicated by long and short arrows, respectively.
Their reduced magnetic fields are given by Eqs.~(\ref{Eq:6a}) and (\ref{Eq:6b}).
Because the ratio $J_2{/}J_1{=}0.28$ is higher than 0.2, the field ranges for the
series of MST's from the two types of pairs overlap.
\Figure9 shows the magnetization (top) and derivative (bottom) curves for
$x{=}0.01$ when the ratio $J_2{/}J_1{=}0.028$, i.e., smaller by a factor of
10 compared to the ratio in \Figs78. All other parameters are the same as for
\Figs78.
Because the ratio $J_2{/}J_1$ is now well below 0.2, the MST series from \J1- and
\J2-pairs occur in field ranges that do not overlap.
In the ``gap'' between these two field ranges, the magnetization (upper curve)
exhibits a plateau of apparent saturation, labeled as $m_s$.
The apparent saturation value, $m_s{=}0.961$, agrees with the value of $m_s$
obtained from the NN cluster model for this $x$ (Ref.\onlinecite{Bindi05prb}).
However, in contrast to the NN cluster model, the apparent saturation is reached
only after the series of MST's from the \J2 pairs (type 2-2) is completed.
Clearly, the MST pattern for $J_2{/}J_1{=}0.028$ (\Fig9) is much simpler than
that for $J_2{/}J_1{=}0.28$ (\Figs78).
\begin{figure}\includegraphics[scale=1]{fig9}
\caption{\label{Fig9}Zero-temperature magnetization curve (upper curve) and its
derivative (lower curve) for $x{=}0.01$.
These curves are calculated using the \J1-\J2 model and the ratio
$J_2{/}J_1{=}0.028$.
As in \Fig8 the lower and upper abscissa scales are for $b_1$ and $b_2$,
respectively.
The left ordinate scale is for $m$. The right ordinate scale is for $\rmd m/\rmd b_1$.
The arrows indicate the locations of the MST's from \J1 pairs (type 2-1).
The MST's from \J2 pairs (type 2-2) are bunched up at low fields.
The plateau of apparent saturation is labeled as $m_s$.}
\end{figure}
\Figure7 for $J_2{/}J_1{=}0.28$ shows a short magnetization plateau
immediately after the initial magnetization rise.
This plateau, which ends at the first MST from \J2 pairs (not \J1 pairs), is not
the plateau of apparent saturation predicted by the NN cluster model.
The value $m{=}0.925$ at the first short plateau in \Fig7 is well below the
apparent saturation value $m_s{=}0.961$ in the NN cluster model.
The reason for the lower value is that the series of MST's from \J2 pairs has
not been completed before the start of this short plateau.
\section{\label{s:VI}THE MST SPECTRUM}
\subsection{\label{ss:VIA}Spectrum}
The derivative $\rmd M/\rmd B$\ as a function of $B$ exhibits a peak at each MST.
The pattern of the peaks in the derivative curve will be called the ``MST spectrum.''
\Figure8 and the lower curve in \Fig9 are examples of such MST spectra.
Each peak in $\rmd M/\rmd B$\ is a ``spectral line.''
Two or more spectral lines associated with MST's arising from different cluster
types may overlap.
When a spectral line is due to only one MST from one cluster type $c$, the
integral of the spectral line with respect to $B$ is equal to the magnetization
jump $(\Delta M)_c$.
There are several advantages of using the spectrum.
A plot of the spectrum, $\rmd M/\rmd B$\ versus $B$, is very effective in conveying
information visually.
Another advantage is that the calculated spectrum is the \emph{exact} spectrum
from cluster types with $n_c{\le}n_\mathrm{max}$.
As discussed earlier, the infinite sum in Eq.~(\ref{Eq:2}) is split into a sum
over clusters with $n_c{\le}n_\mathrm{max}$, and a remainder (REM) from larger
clusters ($n_c{>}5$ in the present work).
The reason for the split is that the Hamiltonians of clusters with
$n_c{>}n_\mathrm{max}$ have not been diagonalized.
The finite sum is evaluated exactly, but the REM is approximated by $R(T,B)$.
The derivative of $R(T,B)$ does not give the spectral lines from the clusters
with $n_c{>}n_\mathrm{max}$.
These lines can be obtained only if cluster Hamiltonians for
$n_c{>}n_\mathrm{max}$ are diagonalized.
A wealth of information, such as accurate exchange constants, can be obtained
from a comparison of an experimental spectrum with the calculated exact spectrum for
$n_c{\le}n_\mathrm{max}$.
All the spectra shown in this paper are the derivatives of the exact finite sum
up to $n_\mathrm{max}{=}5$, without the remainder (see Ref.~\onlinecite{note11}).
\subsection{\label{ss:VIB}Additional examples of MST patterns and spectra}
The examples in Sec.~\ref{ss:VD}, for $J_2{/}J_1{=}0.28$ and $0.028$, assumed
$x{=}0.01$.
The following examples are for the same ratios \rJ2, but for higher $x$.
All other parameters ($S{=}5/2$, $T{=}0$), and the cluster model (\J1-\J2), are
the same as in Sec.~\ref{ss:VD}.
An example of a zero-temperature spectrum calculated from the \J1-\J3 model will
be given in the following paper.\cite{Bindi06eprint2}
Calculated spectra at finite temperatures will be shown in
Ref.~\onlinecite{exp15mK} in connection with analysis of experimental data.
\subsubsection{\label{sss:VIB1} Magnetization curves and spectra for $x{=}0.09$}
Magnetization curves and spectra for $x{=}0.09$ are shown in \Figs{10}{11}.
From \Fig5 the probabilities $P_c$ for all four triplet types increase when $x$
changes from $0.01$ to $0.09$.
The largest increase is for triplet type 3-3, which is a \J1 pair attached to
a third spin by a \J2 bond.
As a result, spectral lines from type 3-3 triplets are readily seen in
\Figs{10}{11}.
\Figure{10} shows the MST pattern (top curve) and the MST spectrum (lower curve)
for $x{=}0.09$ when $J_2{/}J_1{=}0.28$.
The range of the primary reduced field is $b_1{<}7.5$.
The cluster types responsible for some prominent spectral lines are indicated.
Clearly, for $J_2{/}J_1=0.28$ the spectrum is quite complicated
because of the overlap between different series of MST's from different cluster
types.
\begin{figure}\includegraphics[scale=1]{fig10}
\caption{\label{Fig10}The reduced magnetization $m$ (top curve) and the MST
spectrum (lower curve) at $T{=}0$ for $x{=}0.09$, $S{=}5/2$.
These curves are for $J_2{/}J_1{=}0.28$.
Only the results in the range $b_1{<}7.5$ are shown.
The cluster types responsible for some of the spectral lines are indicated.}
\end{figure}
\begin{figure}\includegraphics[scale=1]{fig11}
\caption{\label{Fig11}Zero-temperature magnetization curve (top curve) and
spectrum (lower curve) for $x{=}0.09$, calculated from the \J1-\J2 model when
$J_2{/}J_1{=}0.028$.
The range of the primary reduced field is limited to $b_1{<}7.5$.
The plateau of apparent saturation is labeled as $m_s$.}
\end{figure}
\Figure{11} shows the results for the same $x$ when $J_2{/}J_1{=}0.028$.
For this much lower \rJ2 ratio, the spectrum is much simpler.
All the discernable spectral lines occur in two separate field ranges.
The top of the low-field range is slightly above $b_1{=}0.42$ where the series
from type 3-4 triplets ends.
The bottom of the high-field range is near $b_1{=}0.935$ where a small MST from
quartet type 4-2 is barely discernable.
The two field ranges are separated by a gap, i.e., by a field range in which
there are no discernable spectral lines.
The absence of discernable lines implies that $m$ has reached a plateau.
The value $m{=}0.715$ at this plateau agrees with the apparent saturation value
$m_s$ calculated from the NN cluster model.
In the field range of \Fig{11} the \J1 pairs, which exist both in the \J1
and the \J1-\J2 models, give rise to large MST's at $b_1{=}2, 4, 6$.
These are the 2-1 lines in \Fig{11}.
Near each 2-1 line there is also a line from the 3-3 triplets.
The 3-3 lines do not exist in the \J1 model.
Each 3-3 line together with the stronger nearby 2-1 line may be viewed as a
fine structure (FS) that has evolved from a single spectral line,
due to \J1 pairs, in the \J1 model.
The separation $\Delta b_2$, in the secondary reduced field, between the 3-3
line and the nearby 2-1 line is of order $1$.
The corresponding magnetic field separation $\Delta B$ is $g\mu_\mathrm{B} \Delta B{\sim}|J_2|$.
\Figure{11} also shows a spectral line at $b_1{=}7$ labeled as 3-2 \& 3-1.
It corresponds to the coincidence of the first MST from triplets of type 3-1
and the first MST from triplets of type 3-2.
The ``intensity'' of this combined line is just the sum of the two intensities.
The other lines from the 3-1 and 3-2 triplets, at $b_1{=}9,11,13$ and $15$
(above the field range of \Fig{11}), also coincide when $J_2{/}J_1{=}0.028$.
\Figure{10} shows that the 3-1 and 3-2 lines still coincide when $J_2{/}J_1{=}0.28$.
It can be shown that this remains true as long as $J_2{/}J_1 {\le} 1$.
For additional details see Ref.~\onlinecite{note:coincidence}.
\subsubsection{\label{sss:VIB2}Spectrum for $x{=}0.20$}
For $x{=}0.20$, approximately 40\% of the spins are in clusters of sizes
$n_c{\ge}5$ (see \Fig5).
When such a large fraction of the spins are in the REM of Eq.~(\ref{Eq:2}),
the accuracy of the CQUIN's method of treating the magnetization from the REM
is open to question.
Under these circumstances, a plot of the exact spectrum for clusters with
$n_c{\le}5$ is probably preferable to a plot of the magnetization curve.
\begin{figure}\includegraphics[scale=1]{fig12}
\caption{\label{Fig12}Zero-temperature spectrum in the \J1-\J2 model for
$J_2{/}J_1{=}0.028$ when $x{=}0.20$.
Cluster types responsible for some spectral lines are indicated.}
\end{figure}
\begin{figure}\includegraphics[scale=1]{fig13}
\caption{\label{Fig13}Expanded view of the spectrum in \Fig{12} $(x{=}0.20)$ for
the range $0{<} b_1 {<} 2.5$.}
\end{figure}
The calculated spectrum for $x{=}0.20$ will be shown only for the low ratio
$J_2{/}J_1{=}0.028$.
\Figure{12} shows this spectrum in the range $0{<}b_1{<}7.5$, at $T{=}0$.
Cluster types responsible for some of the spectral lines are indicated.
An expanded view of the spectrum in the range $b_1{<}2.5$ is shown in \Fig{13}.
The comparison of \Figs{12}{13} with \Fig{11} shows that the
increase of $x$ from $0.09$ to $0.20$ has led to the following changes:
\begin{enumerate}\setlength{\topsep}{0pt}\setlength{\itemsep}{-0.25\baselineskip}
\item There are new discernable lines. Some of these new lines are in the FS
near $b_1{=}2$.
Two of the lines in this FS are from cluster type 4-12.
One of these 4-12 lines is slightly above the 2-1 line
(from pure \J1 pairs in the \J1-\J2 model), and the other 4-12 line is
slightly below it.
Actually there is even a stronger new line in the same FS from quartets of type
4-10. However, this line is not resolved because it is very close to the still
stronger line from the 3-3 triplets.
\item A gap, in which there are no discernable spectral lines, separates the
low-field and high-field parts of the spectrum.
On the scale of \Fig{13} there are no discernable lines between $b_1{=}0.63$
and $b_1{=}0.92$.
\end{enumerate}
\section{\label{s:VII}LOPSIDED MODELS}
The spectra for a ``very small'' ratio \rJ2 are shown in Figs.~\ref{Fig9}
and \ref{Fig11}--\ref{Fig13}.
These spectra are relatively simple.
They suggest\cite{note7A} the following five features for such small \rJ2 ratios.
\begin{enumerate}\setlength{\topsep}{0pt}\setlength{\itemsep}{-0.25\baselineskip}
\item The spectrum consists of a low-field part and a high-field part,
separated by a gap in which there are no discernable spectral lines.
\item In the field range of the gap, the magnetization exhibits apparent
saturation, with an apparent saturation value $m_s$ equal to that
given by the NN cluster model.
\item In the high-field part of the spectrum, many spectral lines that exist
in the parent \J1 model develop a FS.
The most conspicuous FS evolves from those lines in the \J1 model
that are due to \J1-pairs, i.e., the lines at $b_1{=}2, 4, 6,$ etc.
\item Separations between adjacent lines in the FS that has evolved from a
single line in the \J1 model are of order $\Delta b_2 {\sim} 1$.
The corresponding separations $\Delta B$ satisfy $g\mu_\mathrm{B}\Delta B{\sim}|J_2|$.
\item In the low-field part of the spectrum, the separations between adjacent
lines are also of order $\Delta b_2{\sim}1$, i.e., $g\mu_\mathrm{B}\Delta B{\sim}|J_2|$.
\end{enumerate}
The same five features are also found in simulations that use the \J1-\J3 model
when the ratio \rJ3 is ``very small.''
When the ratio $J^{(2)}{/}J^{(1)}$\ is sufficiently small that the five features
listed above appear in the spectrum, the \JJ1-\JJ2 model is called
``lopsided.''
Spectra from lopsided models are discussed in detail in the following paper.
Far from a mere curiosity, lopsided models actually apply to many materials.
In Ref.~\onlinecite{exp15mK}, which appears in this issue, the theoretical
results for lopsided models will be used to interpret data
obtained in $\mathrm{(C_3NH_3)_2Mn_xCd_{1-x}Cl_4}$ near $20\;\mathrm{mK}$.
\acknowledgments
This work was supported by CNPQ and FAPESP. Travel funds for Y.~S. were
also provided by FAPESP.
\section{Introduction}
\label{sec:introduction}
\setlength{\parskip}{0pt}
Catastrophic forgetting is a recurrent problem in artificial neural networks, and a great handicap for continual learning. Neural networks tend to abruptly forget what they previously learned upon learning new information. The performance of a neural network on a specific task is evaluated with a loss function, and the model updates its parameters to optimize that loss function without considering its performance on any other task. The region of weight space where the network performs reliably on a specific task is likely to be distinct from the region where it achieved success on a different task. This effectively means that when we retrain a neural network on a different task, it will update its weights aiming solely at optimizing the new loss function, without regard for preserving its performance on previously learned tasks.
On the other hand, we humans clearly learn sequentially and build upon previous knowledge to learn new skills. We first learn to walk, then to run, and later to ride a bike; but we do not forget how to walk when we learn to ride a bike. Although it is true that we tend to slowly forget learned information throughout our lifetime (especially if we do not revisit it), new learning does not catastrophically interfere with consolidated knowledge. There are a wide variety of neurophysiological principles that control the interplay between stability and plasticity of the different human brain regions and that help in the development and specialization of the brain based on our sensorimotor experiences throughout life~(\cite{lewk,Murray2016MultisensoryPA,power,zenke}).
In this paper we will focus on memory replay and on how we can draw inspiration from this biological phenomenon to tackle catastrophic forgetting in artificial neural networks. Neural replay sequences were first discovered by studying the hippocampus in rats, but hippocampal replay has also been observed in cats, rabbits, songbirds and monkeys~(\cite{buhry,nokia,Dave2000SongRD,skaggs}). Memory replay has been found to be an important mechanism for learning implemented by biological brains. It consists in firing or replaying neural activity during sleep or wakeful rest so as to reinforce cell activations that occurred beforehand during learning. An entire replay sequence can last a fraction of a second; nevertheless, it can play through multiple seconds' worth of real-life experience. Disrupting brain activity during replay in rodents has experimentally been proven to hinder learning, which supports the importance of memory replay~(\cite{EgoStengel2010DisruptionOR,jadhav}).
\section{Memory Replay in Artificial Neural Networks}
\label{sec:Memory Replay in Artificial Neural Networks}
Limited network capacity is not behind catastrophic forgetting in artificial neural networks. The problem resides in how to train the model on different tasks sequentially. Bio-inspired memory replay has been proposed as a tool for continual learning in neural networks by several researchers. We could potentially try to store all previously encountered examples and retrain on them simultaneously when learning a new task, but this method does not scale. Storing previously seen data and alternating it with new data~(\cite{Rebuffi2017iCaRLIC,chaudhry2019tiny}) can be problematic due to both restricted storage capacity and privacy issues. Hence, some researchers have explored using learned generative neural network models of past observations to generate data for replay~(\cite{Robins1995CatastrophicFR,vandeVen2018GenerativeRW,vandeven2020}). This type of approach has also been criticized for shifting the catastrophic forgetting problem to the training of the generative model~(\cite{schwarz2018progress}).
Human memory is not perfect, but we surely store memories. Although much is still unknown about how memories are created and accessed within the brain, it is reasonable to suggest that having memories is an evolutionary mechanism to help us survive by making better informed decisions based on past experiences. Having perfect memory is not efficient from a storage perspective and it is not necessary either. We can hypothesize that to make future decisions we only need a compressed version of what we experienced in the past. That compressed version of the experience is a memory that encapsulates the key information of the event without having to store the whole raw input signal. What is more, many times memories relate together information coming from different sensory signals: the visual appearance of a place may remind us of a smell or a song, for example. This may be an indication that we are storing multimodal data in a compressed latent space.
In this paper we will draw inspiration from these suggestions and store previously seen data in latent space which will be used for memory replay. We will first train two neural networks on a given task. The first neural network, which we will call the compressor, will encode the input data into latent space which will be passed on to a classifier. Initially both networks will be trained and updated simultaneously on a given task (effectively working as a single model). Once we have trained on the first task we will store a fraction of the original data in latent space. Next we will proceed to learn a second similar classification task, but we will freeze the first network. The classifier will be updated based on both the latent space memory and new raw input data that will be passed through the first (frozen) neural network for encoding. Using this approach we are able to preserve almost the original performance on the first task while learning a second task in a memory efficient way. Latent replay has also been explored in real-time continual learning scenarios~(\cite{latent_replay}).
The rest of the paper has the following structure. Section~$\ref{Datasets}$ briefly reviews the three datasets used. Section~$\ref{Neural Network Models}$ covers the two neural network models used: the encoder and the classifier. Section~$\ref{Training procedure}$ discusses the training procedure in more detail. In Section~$\ref{Results}$ we show the obtained results and lastly, in Section~$\ref{sec:Conclusion}$ we summarize the final conclusions.
\section{Datasets}
\label{Datasets}
In this work we use three datasets: the MNIST (\cite{726791,deng2012mnist}), the Fashion-MNIST (\cite{xiao2017fashionmnist}), and the Kuzushiji-MNIST dataset (\cite{Clanuwat2018DeepLF}).
MNIST is a famous database of handwritten digits, 0 to 9, that is commonly used for training various machine learning algorithms. It was created by remixing samples from the original NIST database~(\cite{Grother1995NISTSD}). MNIST contains a total of 70,000 images; each image fits into a 28 × 28 pixel bounding box and has grayscale levels. Figure~$\ref{fig:MNIST}$ and Figure~$\ref{fig:MNIST_pca}$ display some samples from the dataset.
\begin{figure}[!htb]
\includegraphics[width=\linewidth]{MNIST.png}
\caption{Sample MNIST images.}
\label{fig:MNIST}
\end{figure}
\begin{figure}[!htb]
\includegraphics[width=\linewidth]{mnist_pca.jpg}
\caption{MNIST dataset after applying principal component analysis.}
\label{fig:MNIST_pca}
\end{figure}
Fashion-MNIST also comprises 28 × 28 grayscale images of a total of 70,000 fashion products from 10 categories, with 7,000 images per category. It was based on the assortment on Zalando's website; see Figure~$\ref{fig:FashionMNIST}$. Fashion-MNIST was intended to be a drop-in replacement for MNIST and to provide a more challenging alternative for benchmarking machine learning models.
\begin{figure}[!htb]
\includegraphics[width=\linewidth]{Fashion_MNIST_samples.png}
\caption{Sample Fashion-MNIST images.}
\label{fig:FashionMNIST}
\end{figure}
Lastly, we also work with the Kuzushiji-MNIST dataset which focuses
on Kuzushiji (cursive Japanese). The dataset was created by the National Institute of Japanese Literature as part of a national project to digitize about 300,000 old Japanese books. It contains 70,000 images in 28 × 28 pixel grayscale with 7,000 images per category as well. It has 10 classes with one character to represent each of the 10 rows of Hiragana, see Figure~$\ref{fig:kuzuMNIST}$. Hiragana is a Japanese phonetic lettering system, one component of the Japanese writing system, alongside katakana, kanji and sometimes Latin script. By comparing Figure~$\ref{fig:MNIST}$ and Figure~$\ref{fig:kuzuMNIST}$, it is immediately obvious that the Kuzushiji-MNIST dataset is a more challenging dataset than the original MNIST.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\linewidth]{di.png}
\caption{Sample Kuzushiji-MNIST images. The 10 classes of Kuzushiji-MNIST are displayed, with the first column showing each character's modern hiragana counterpart.}
\label{fig:kuzuMNIST}
\end{figure}
\section{Neural Network Models}
\label{Neural Network Models}
As previously mentioned, we use two neural network models. The first model is a fully convolutional network that maps the input images into the latent space. The network described in Table~$\ref{tab:network1summary}$ maps the input images of dimensions 28 × 28 pixels into a compressed version with 6 × 6 pixels (note that images are in grayscale and both input and output of the network have a single channel).
The second network is a classifier which has convolutional layers, a fully connected region, and a final log softmax activation function. The architecture is summarized in Table~$\ref{tab:network2summary}$. This second network classifies the output of the first network in 10 classes, that is, it learns to classify the compressed latent version of the original inputs.
Table~$\ref{tab:network1summary}$ and Table~$\ref{tab:network2summary}$ provide detailed information about the number of layers, convolutional channels, the number of neurons used in dense layers, kernel sizes and more.
\begin{table}[htb!]
\centering
\caption{Description of the first neural network: the compressor. Layer 1 has 10 convolutional output channels and Layer 2 has 1; both use a kernel size of 5. The maxpooling layer uses a kernel size of 3.}
\begin{tabular}{@{}cl@{}}
\toprule
Layer number & Description \\ \midrule
1 & Convolutional (2d) \\
& Batchnormalization \\
& ELU activation \\
2 & Convolutional (2d) \\
& Batchnormalization \\
& ELU activation \\
3 & Maxpool (2d) \\
\bottomrule
\end{tabular}
\label{tab:network1summary}
\end{table}
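The following PyTorch sketch (ours) implements the compressor of Table~$\ref{tab:network1summary}$; details not stated in the table, such as the absence of padding, are assumptions chosen so that 28 × 28 inputs map to the 6 × 6 latent code.
\begin{verbatim}
import torch
import torch.nn as nn

compressor = nn.Sequential(
    nn.Conv2d(1, 10, kernel_size=5),   # 28x28 -> 24x24, 10 channels
    nn.BatchNorm2d(10),
    nn.ELU(),
    nn.Conv2d(10, 1, kernel_size=5),   # 24x24 -> 20x20, 1 channel
    nn.BatchNorm2d(1),
    nn.ELU(),
    nn.MaxPool2d(kernel_size=3),       # 20x20 -> 6x6 latent code
)

print(compressor(torch.zeros(2, 1, 28, 28)).shape)  # [2, 1, 6, 6]
\end{verbatim}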
\begin{table}[htb!]
\centering
\caption{Description of the second neural network: the classifier. All convolutional layers have 50 convolutional channels and a kernel size of 3. Dense Layer 7 has 16 neurons, and Layer 8 has 10, one for each class.}
\begin{tabular}{@{}cl@{}}
\toprule
Layer number & Description \\ \midrule
1 & Convolutional (2d) \\
& Batchnormalization \\
& ELU activation \\
2 & Convolutional (2d) \\
& Batchnormalization \\
& ELU activation \\
3 & Convolutional (2d) \\
& Batchnormalization \\
& ELU activation \\
4 & Convolutional (2d) \\
& Batchnormalization \\
& ELU activation \\
5 & Convolutional (2d) \\
& Batchnormalization \\
& ELU activation \\
6 & Convolutional (2d) \\
& Batchnormalization \\
& ELU activation \\
& Flatten\\
7 & Dense \\
& Batchnormalization \\
& ELU activation \\
8 & Dense \\
& Log Softmax \\
\bottomrule
\end{tabular}
\label{tab:network2summary}
\end{table}
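A matching PyTorch sketch (ours) of the classifier in Table~$\ref{tab:network2summary}$ follows; since the table does not state the padding, a padding of 1 is assumed so that six 3 × 3 convolutions fit the 6 × 6 latent input.
\begin{verbatim}
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ELU()]

classifier = nn.Sequential(
    *conv_block(1, 50),
    *[m for _ in range(5) for m in conv_block(50, 50)],  # layers 2-6
    nn.Flatten(),                      # 50 * 6 * 6 = 1800 features
    nn.Linear(50 * 6 * 6, 16),
    nn.BatchNorm1d(16),
    nn.ELU(),
    nn.Linear(16, 10),                 # one output per class
    nn.LogSoftmax(dim=1),
)

print(classifier(torch.zeros(2, 1, 6, 6)).shape)  # [2, 10]
\end{verbatim}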
\section{Training procedure}
\label{Training procedure}
All datasets used in this work have 10 classes. We will train the classifier on 5 classes first and on the other 5 classes later. For that we organize the data in each dataset in 10 folds, one for each class.
Initially, we train the first and second neural networks at the same time on predicting the first 5 classes. Effectively, at this stage both neural networks work as a single model: the input images go through the first neural network and the outputs are given to the classifier; the weights of both models are updated. We randomly divide the data in the first 5 folds and use $90\%$ for training and $10\%$ for testing.
Once we are done training, we freeze the first neural network, which has learned how to ``effectively compress'' the input data based on the first 5 classes. We pass a small percentage of the original data through the compressor and store the compressed version of the data, which will be our latent space based memory. Note that we need to store both the compressed latent space form of the original data and the corresponding labels.
Next we proceed to train the classifier on the remaining 5 classes. The new data is passed through the (frozen) compressor before feeding it to the classifier, whose weights are updated to optimize for the new data and the stored latent space based memory simultaneously. This way we are able to preserve almost the original model performance on the test set used for the first 5 classes while training on the new data.
For training we use a stochastic gradient descent optimizer, a batch size of 1000, a learning rate of 0.001, and the cross entropy loss function. Also, we replay the latent space based memory every time the model is updated on the new data, that is, for every batch we update the model parameters based on the new data and then based on the latent space memory.
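Putting the pieces together, the sketch below (ours, continuing the \texttt{compressor} and \texttt{classifier} sketches above) shows the second-task training loop; loader and buffer names such as \texttt{task2\_loader} and \texttt{memory\_images} are hypothetical. Because the classifier already ends in a log softmax, the cross entropy loss is applied as a negative log-likelihood on its outputs.
\begin{verbatim}
import torch
import torch.nn.functional as F

opt = torch.optim.SGD(classifier.parameters(), lr=0.001)

# After task 1: freeze the compressor and build the latent memory.
for p in compressor.parameters():
    p.requires_grad = False
compressor.eval()
with torch.no_grad():
    mem_z = compressor(memory_images)   # small fraction of task-1 data
mem_y = memory_labels

for epoch in range(num_epochs):
    for x_new, y_new in task2_loader:   # batches of 1000, new classes
        # Update on the new data, passed through the frozen compressor.
        opt.zero_grad()
        F.nll_loss(classifier(compressor(x_new)), y_new).backward()
        opt.step()
        # Replay the latent memory after every batch of new data.
        opt.zero_grad()
        F.nll_loss(classifier(mem_z), mem_y).backward()
        opt.step()
\end{verbatim}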
\section{Results}
\label{Results}
In this section we explore how much latent space based memory is required to preserve good performance on the original task. In latent space the input images of dimensions 28 × 28 = 784 pixels are represented by just 6 × 6 = 36 pixels ($4.59\%$ of the original number of pixels). So, for example, if we were to store $5\%$ of the original data in the latent space based memory, that would amount to storing $0.23\%$ of the number of pixels in the original training set. Table~$\ref{tab:training_cases}$ summarizes the different training cases explored in this work.
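These percentages follow directly from the latent compression ratio; a short check (ours):
\begin{verbatim}
latent_fraction = (6 * 6) / (28 * 28)           # ~4.59% of the pixels
print(round(100 * latent_fraction, 2))          # 4.59
print(round(100 * 0.05 * latent_fraction, 2))   # 0.23 (5% of the images)
\end{verbatim}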
Table~$\ref{tab:training1performance_mnist}$, Table~$\ref{tab:training1performance_fashionmnist}$, and Table~$\ref{tab:training1performance_kuzushijimnist}$ report the results obtained by the model on the training and testing set when trained on the initial 5 classes for each dataset respectively.
Table~$\ref{tab:training2performance_mnist}$, Table~$\ref{tab:training2performance_fashionmnist}$, and Table~$\ref{tab:training2performance_kuzushijimnist}$ show the performance of the model on the new classes and also the performance on the original testing data after retraining using memory replay. In the leftmost column of the tables, the training case is specified: the first number refers to the classes used for training and testing (see Table~$\ref{tab:training_cases}$), and the percentage number denotes the amount of latent space memory used. For example, Case $1-0.05\%$ performs the $2^{nd}$ model training based on classes $[5,6,7,8,9]$, stores $1575/31500$ of the original images in latent space, and was originally trained on classes $[0,1,2,3,4]$.
Lastly, for comparison, we also include the performance of the model on the original testing set after retraining on the new classification task when memory replay is not used and the first neural network is not frozen. That is, we retrain the whole model on the new task and check its performance on the original classification classes: catastrophic forgetting occurs. See Table~\ref{tab:cata_performance_mnist}, Table~\ref{tab:cata_performance_fashionmnist}, and Table~\ref{tab:cata_performance_kuzushijimnist}.
\clearpage
\begin{table}[htb!]
\centering
\caption{Training cases explored in this work. More training combinations are possible. The $1^{st}$ training classes column refers to the classes used to originally train the two neural networks (compressor included). The $2^{nd}$ training classes are the classes on which the classifier was additionally trained using latent space based memory replay. These cases are applicable to all datasets.}
\begin{tabular}{@{}ccc@{}}
\toprule
Case & $1^{st}$ training & $2^{nd}$ training \\ \midrule
1 & [0,1,2,3,4] & [5,6,7,8,9] \\
2 & [5,6,7,8,9] & [0,1,2,3,4] \\
3 & [0,1,2,5,6] & [3,4,7,8,9] \\
4 & [5,6,7,0,1] & [8,9,2,3,4] \\
\bottomrule
\label{tab:training_cases}
\end{tabular}
\end{table}
Table~\ref{tab:training1performance_mnist}, Table~\ref{tab:training2performance_mnist}, and Table~\ref{tab:cata_performance_mnist} contain the relevant information for the MNIST dataset. Comparing the rightmost columns of Table~\ref{tab:training2performance_mnist} and Table~\ref{tab:cata_performance_mnist} makes it clear that using a small proportion of the original data we are able to avoid catastrophic forgetting, whereas without memory replay the model forgets the original classification task.
\begin{table}[htb!]
\centering
\caption{Performance (accuracy) for $1^{st}$ training classes using MNIST dataset.}
\begin{tabular}{@{}ccc@{}}
\toprule
Case & Training & Testing \\ \midrule
1 & 0.9904 & 0.9885 \\
2 & 0.9914 & 0.9837 \\
3 & 0.9901 & 0.9871 \\
4 & 0.9908 & 0.9881 \\
\bottomrule
\label{tab:training1performance_mnist}
\end{tabular}
\end{table}
\begin{table}[htb!]
\centering
\caption{Performance (accuracy) for $2^{nd}$ training classes and performance on the original testing set for MNIST.}
\begin{tabular}{@{}cccc@{}}
\toprule
Case & Training & Testing & Original Testing \\ \midrule
1 - 0.05 \% & 0.9902 & 0.9685 & 0.9392 \\
1 - 0.23 \% & 0.9829 & 0.9658 & 0.9555 \\
1 - 1.15 \% & 0.9897 & 0.9667 & 0.9787 \\
2 - 0.05 \% & 0.9906 & 0.9891 & 0.9594\\
2 - 0.23 \% & 0.9902 & 0.9905 & 0.9667 \\
2 - 1.15 \% & 0.9760 & 0.9717 & 0.9658 \\
3 - 0.05 \% & 0.9908 & 0.9749 & 0.9788\\
3 - 0.23 \% & 0.9914 & 0.9764 & 0.9800 \\
3 - 1.15 \% & 0.9712 & 0.9620 & 0.9811 \\
4 - 0.05 \% & 0.9886 & 0.9686 & 0.8732 \\
4 - 0.23 \% & 0.9900 & 0.9669 & 0.9569 \\
4 - 1.15 \% & 0.9904 & 0.9680 & 0.9779 \\
\bottomrule
\label{tab:training2performance_mnist}
\end{tabular}
\end{table}
\begin{table}[htb!]
\centering
\caption{Retraining on new task without memory replay for MNIST.}
\begin{tabular}{@{}cccc@{}}
\toprule
Case & Training & Testing & Original Testing \\ \midrule
1 & 0.9910 & 0.9804 & 0.0423 \\
2 & 0.9916 & 0.9902 & 0.0269 \\
3 & 0.9906 & 0.9824 & 0.2003 \\
4 & 0.9923 & 0.9833 & 0.0142 \\
\bottomrule
\label{tab:cata_performance_mnist}
\end{tabular}
\end{table}
Table~\ref{tab:training1performance_fashionmnist}, Table~\ref{tab:training2performance_fashionmnist}, and Table~\ref{tab:cata_performance_fashionmnist} summarize the results for the Fashion-MNIST dataset, for which we achieve a slightly worse performance than for MNIST. Nevertheless, the latent space based memory replay still gives satisfactory results and we are able to avoid catastrophic forgetting. Note from Table~\ref{tab:cata_performance_fashionmnist} that forgetting is severe when we simply retrain the whole model, whereas in Table~\ref{tab:training2performance_fashionmnist} we see that the performance on the original testing set is very similar to the one obtained before retraining.
\begin{table}[htb!]
\centering
\caption{Performance (accuracy) for $1^{st}$ training classes using Fashion-MNIST.}
\begin{tabular}{@{}ccc@{}}
\toprule
Case & Training & Testing \\ \midrule
1 & 0.9902 & 0.8897 \\
2 & 0.9900 & 0.9611 \\
3 & 0.9643 & 0.8825 \\
4 & 0.9784 & 0.9174 \\
\bottomrule
\label{tab:training1performance_fashionmnist}
\end{tabular}
\end{table}
\begin{table}[htb!]
\centering
\caption{Performance (accuracy) for $2^{nd}$ training classes and performance on the original testing set for Fashion-MNIST.}
\begin{tabular}{@{}cccc@{}}
\toprule
Case & Training & Testing & Original Testing \\ \midrule
1 - 0.05 \% & 0.9811 & 0.9500 & 0.7959 \\
1 - 0.23 \% & 0.9907 & 0.9477 & 0.8500 \\
1 - 1.15 \% & 0.9906 & 0.9468 & 0.8831 \\
2 - 0.05 \% & 0.9204 & 0.8988 & 0.9211\\
2 - 0.23 \% & 0.9434 & 0.8912 & 0.9426 \\
2 - 1.15 \% & 0.9110 & 0.8897 & 0.9540 \\
3 - 0.05 \% & 0.9696 & 0.9546 & 0.8345 \\
3 - 0.23 \% & 0.9682 & 0.9531 & 0.8600 \\
3 - 1.15 \% & 0.9680 & 0.9401 & 0.8779 \\
4 - 0.05 \% & 0.9127 & 0.8931 & 0.8974 \\
4 - 0.23 \% & 0.9915 & 0.9008 & 0.8722 \\
4 - 1.15 \% & 0.9873 & 0.9028& 0.9100 \\
\bottomrule
\label{tab:training2performance_fashionmnist}
\end{tabular}
\end{table}
\begin{table}[htb!]
\centering
\caption{Retraining on new task without memory replay for Fashion-MNIST.}
\begin{tabular}{@{}cccc@{}}
\toprule
Case & Training & Testing & Original Testing \\ \midrule
1 & 0.9860 & 0.9566 & 0.0528 \\
2 & 0.9772 & 0.9008 & 0.0285\\
3 & 0.9787 & 0.9614 & 0.0011 \\
4 & 0.9398 & 0.9159 & 0.0022 \\
\bottomrule
\label{tab:cata_performance_fashionmnist}
\end{tabular}
\end{table}
In Table~\ref{tab:training1performance_kuzushijimnist}, Table~\ref{tab:training2performance_kuzushijimnist}, and Table~\ref{tab:cata_performance_kuzushijimnist} the results for the Kuzushiji-MNIST dataset are displayed. Again, memory replay proves successful at helping the network retain its original performance in classifying the first 5 classes using a small amount of latent space based memory. In this case we see a clear drop in accuracy when we use only $0.05\%$ of the original data as compared to using $0.23\%$. This could be attributed to the fact that the Kuzushiji-MNIST dataset contains a quite diverse set of images under the same class (Figure~\ref{fig:kuzuMNIST}); hence more examples in memory are required to capture all the diversity in the original data.
\begin{table}[htb!]
\centering
\caption{Performance (accuracy) for $1^{st}$ training classes using Kuzushiji-MNIST.}
\begin{tabular}{@{}ccc@{}}
\toprule
Case & Training & Testing \\ \midrule
1 & 0.9904 & 0.9683 \\
2 & 0.9902 & 0.9769 \\
3 & 0.9904 & 0.9663 \\
4 & 0.9888 & 0.9737 \\
\bottomrule
\label{tab:training1performance_kuzushijimnist}
\end{tabular}
\end{table}
\begin{table}[htb!]
\centering
\caption{Performance (accuracy) for $2^{nd}$ training classes and performance on the original testing set for Kuzushiji-MNIST.}
\begin{tabular}{@{}cccc@{}}
\toprule
Case & Training & Testing & Original Testing \\ \midrule
1 - 0.05 \% & 0.9901 & 0.9697 & 0.7282 \\
1 - 0.23 \% & 0.9904 & 0.9674 & 0.9108 \\
1 - 1.15 \% & 0.9886 & 0.9626 & 0.9506 \\
2 - 0.05 \% & 0.9822 & 0.9631 & 0.7825\\
2 - 0.23 \% & 0.9903 & 0.9665 & 0.9631 \\
2 - 1.15 \% & 0.9903 & 0.9668 & 0.9663 \\
3 - 0.05 \% & 0.9901 & 0.9737 & 0.7067\\
3 - 0.23 \% & 0.9903 & 0.9703 & 0.9083 \\
3 - 1.15 \% & 0.9900 & 0.9728 & 0.9091 \\
4 - 0.05 \% & 0.9907 & 0.9668 & 0.7751 \\
4 - 0.23 \% & 0.9902 & 0.9645& 0.9223\\
4 - 1.15 \% & 0.9908 & 0.9637 & 0.9525 \\
\bottomrule
\label{tab:training2performance_kuzushijimnist}
\end{tabular}
\end{table}
\begin{table}[htb!]
\centering
\caption{Retraining on new task without memory replay for Kuzushiji-MNIST.}
\begin{tabular}{@{}cccc@{}}
\toprule
Case & Training & Testing & Original Testing \\ \midrule
1 & 0.9851 & 0.9699 & 0.0122 \\
2 & 0.9852 & 0.9711 & 0.0229 \\
3 & 0.9859 & 0.9728 & 0.0120 \\
4 & 0.9868 & 0.9680 & 0.0117 \\
\bottomrule
\label{tab:cata_performance_kuzushijimnist}
\end{tabular}
\end{table}
\section{Conclusion}
\label{sec:Conclusion}
It has been experimentally found that memory replay may be a key element for continual learning in biological brains, where this mechanism reinforces cell activation after learning a new task. In this work, we drew inspiration from these findings and applied latent space based memory replay to classification problems using the MNIST, Fashion-MNIST, and Kuzushiji-MNIST datasets. We found that storing a small percentage of the original data in a compressed latent space form is sufficient for memory replay and for preserving good performance on previous tasks.
We only explored this technique for similar tasks. In the future it would be interesting to apply this line of thought and similar latent space based memory replay approaches to more diverse sets of tasks that are less similar to one another. Efficient memory replay will probably require better encoding mechanisms. Nevertheless, we have shown that memory replay is indeed a good approach for tackling catastrophic forgetting, and we have managed to do so while storing only a very small percentage of the original data.
\section*{Acknowledgements}
This work was funded in part by the Canada Research Chairs program, the Fonds
qu\'{e}b\'{e}cois de recherche sur la nature et les technologies (FQRNT) and
NSERC. Calculations were done using resources from the R\'eseau Qu\'eb\'ecois
de Calcul de Haute Performance (RQCHP). H.K. is grateful to the Tunisian
Government for a scholarship.
\section{1. Introduction}
An irreducible complex phase in the weak-interaction quark-mixing (CKM) matrix of the Standard Model (SM) describes CP violation \cite{CKM:1}. The constraint between the first and third generations is given by the equation $V_{ud}V_{ub}^*+V_{cd}V_{cb}^*+V_{td}V_{tb}^* = 0$. This equation can be visualized in the form of the ``Unitarity triangle'' in the complex plane, and we label the three angles of this triangle $\phi_1,\phi_2$ and $\phi_3$. \footnote{$\phi_1=\beta,\phi_2=\alpha$ and $\phi_3=\gamma$ in a different convention} B-factories using $\Upsilon(4S) \rightarrow B^0 \bar B^0$ production measure CP violation arising from the interference between $B^0 - \bar B^0$ mixing and decay.
\section{2. $\phi_1 (\beta)$ measurement }
\subsection{2.1 Time dependent asymmetry}
If both $B^0$ and $\bar B^0$ decay to a common CP eigenstate $f_{CP}$, we can define the time-dependent CP asymmetry $A_{CP}(t)$ as
\begin{equation}
A_{CP}(t)=\frac{\Gamma(\bar B^0 \rightarrow f_{CP})-\Gamma(B^0 \rightarrow f_{CP})}{\Gamma(\bar B^0 \rightarrow f_{CP})+\Gamma(B^0 \rightarrow f_{CP})}
= S_{f_{CP}}\sin(\Delta m\,t) + A_{f_{CP}}\cos(\Delta m\,t)
\end{equation}
where $\Gamma(\bar B^0 (B^0) \rightarrow f_{CP})$ is the decay rate for a $\bar B^0$ ($B^0$) to decay to $f_{CP}$ at a proper time $t$ after the $\Upsilon(4S)$ production, while $\Delta m$ is the mass difference between the two $B^0$ mass eigenstates.
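As a purely numerical illustration of Eq.~(1), the toy snippet below evaluates $A_{CP}(t)$; the $S$ and $A$ values are those of the $J/\psi K^0$ measurement quoted in Sec.~2.2, while the mixing frequency $\Delta m_d \simeq 0.507~\text{ps}^{-1}$ is a standard external input, not taken from this text.
\begin{verbatim}
# Toy evaluation of A_CP(t) = S sin(dm t) + A cos(dm t) from Eq. (1).
# dm ~ 0.507 ps^-1 is the world-average B0 mixing frequency (external).
import numpy as np

S, A, dm = 0.652, 0.010, 0.507      # S, A: J/psi K0 values of Sec. 2.2
t = np.linspace(-8.0, 8.0, 161)     # proper time in ps
a_cp = S * np.sin(dm * t) + A * np.cos(dm * t)
print(f"max |A_CP| = {np.abs(a_cp).max():.3f}")   # ~ sqrt(S^2 + A^2)
\end{verbatim}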
\subsection{2.2 $b \rightarrow c \bar c s$ decay}
This quark transition is the best place to measure $\phi_1$ because of its small theoretical uncertainty. The CP-violation parameters are $S_{b \rightarrow c \bar c s} = - \xi \sin 2\phi_1$, with $\xi$ the CP eigenvalue of the final state, and $A_{b \rightarrow c \bar c s} = 0$. The ``golden channel'' $B \rightarrow J/\psi K^0$ gives the best results for $\phi_1$ because of its good signal-to-noise ratio. From 386M $B\bar B$ events BELLE \cite{BELLE-beta:2} has selected 5,264 $B \rightarrow J/\psi K_S$ and 4,792 $B \rightarrow J/\psi K_L$ candidates. The updated results are \\
$S_{J/\psi K^0} = 0.652 \pm 0.039(stat) \pm 0.020(syst)$\\
$A_{J/\psi K^0} = 0.010 \pm 0.026(stat) \pm 0.036(syst)$\\
BABAR \cite{BABAR-beta:3} has started with 227M $B\bar B$ events and selected 7,730 events in both modes. The updated result is now $\sin 2\beta = 0.722 \pm 0.040(stat) \pm 0.023(syst)$.
The direct CP-violation parameter $A_{J/\psi K^0}$ is consistent with 0. Fig.~\ref{fig:1} shows the proper-time distributions and raw asymmetries for both $B \rightarrow J/\psi K_S$
and $B \rightarrow J/\psi K_L$, measured in both experiments. The different sign of the raw asymmetries stems from the different CP eigenvalues of $K_S$ and $K_L$.
\begin{figure}
\includegraphics[width=6.0cm]{fig14ac_raw-asym_jpsiks_pipm_goodtag.eps}\includegraphics[width=6.0cm]{fig14bd_raw-asym_jpsikl_goodtag.eps}\includegraphics[width=6.0cm]{fig2.eps}
\caption{Measurement of $S_{J/\psi K^0}$ from BELLE (left, center) and BABAR (right). Upper part: proper time distribution $\Delta t$ and lower part: raw asymmetry for (left) $B \rightarrow J/\psi K_S$ and (right) $B \rightarrow J/\psi K_L$ for BELLE. For BABAR the two upper plots show $B \rightarrow J/\psi K_S$, while the two lower plots show $B \rightarrow J/\psi K_L$ }
\label{fig:1}
\end{figure}
\subsection{2.3 Resolving the $\phi_1$ ambiguity}
Although the CP-violating parameter $\phi_1$ can be measured with relatively high precision, a four-fold ambiguity in $\phi_1$ remains. The Standard Model predicts a positive value of $\cos 2\phi_1$, and BELLE \cite{BELLE-cos:4} has obtained the best result with a time-dependent Dalitz analysis of the neutral D meson in the channel $B \rightarrow D^{(*)}_{CP} \pi^0 (\eta,\omega)$, with the $D_{CP}$ decaying to $K_S\pi^+\pi^-$. Using 386M $B\bar B$ events BELLE obtained \\
$\sin 2\phi_1 = 0.78 \pm0.44(stat)\pm0.22(syst)$ and
$\cos 2\phi_1 =+1.87 ^{+0.40}_{-0.53}(stat) ^{+0.22}_{-0.32}(syst)$\\
The negative $\cos 2\phi_1$ solution is disfavoured at $\sim\!2\sigma$ significance.
BABAR \cite{BABAR-cos:5} has used a different channel and obtains $\cos 2\beta = +2.72 ^{+0.50}_{-0.79}(stat)\pm0.27(syst)$, thus supporting this solution for $\phi_1$ ($\beta$).
\subsection{2.4 $b \rightarrow q \bar q s $ decay modes and the influence of penguins}
In the SM, final states from $b \rightarrow s \bar s s$ or $b \rightarrow d \bar d s$ transitions offer an independent test by comparing the CP-violating parameters measured in loop processes with those from tree-dominated ones. These decays are dominated by gluonic penguin amplitudes, but ``new'' non-SM physics could contribute to the loop amplitudes and affect the time-dependent asymmetries.
Therefore $S_{b \rightarrow s} = \sin 2\phi_1^{eff}$ may deviate from its ``nominal'' value. One prominent pure penguin mode is $B^0 \rightarrow \Phi K^0$, which is shown in fig.~\ref{fig:2}. BELLE has combined all $b \rightarrow q \bar q s$ decay modes for 386M $B \bar B$ events and obtained
$\sin 2\phi_1^{eff} = 0.652\pm0.039(stat)\pm0.020(syst)$.\\
With the current statistics, the HFAG compilation \cite{HFAG:10} shown in fig.~\ref{fig:2} (right) exhibits no significant deviation from the ``nominal'' value; more statistics are therefore needed to settle this question.
\begin{figure}
\includegraphics[width=6.0cm]{fig15_phik0_highr_dt6bin_sigfit_kldtflip.eps}\includegraphics[width=6.0cm]{sPengS_CP.eps}
\caption{Left:Measurement of $S_{\Phi K^0}$ from BELLE(\cite{BELLE-beta:2}: Upper part: proper time distribution $\Delta t$ and lower part: raw asymmetry. Right: Summary of measurements of CP-violating parameters S in the $b \rightarrow q \bar q s$ modes \cite{HFAG:10}. No significant deviations from $S_{b \rightarrow c \bar c s} = sin2\phi_1$ is observed }
\label{fig:2}
\end{figure}
\section{3. $\phi_2 (\alpha)$ measurement}
\subsection{3.1 $B \rightarrow \pi^+\pi^-$}
Similarly to $\phi_1$, the angle $\phi_2$ is determined by measuring the time-dependent CP asymmetries, here in the $b\rightarrow u$ transition. The time-dependent CP analysis has been performed for $B^0 \rightarrow \pi^+\pi^-$ by both experiments. BELLE \cite{BELLE-alpha:6} obtains, for 666$\pm$43 signal candidates from 275M $B\bar B$ events:\\
$S_{\pi^+ \pi^-} = -0.67\pm0.16(stat)\pm0.06(syst) $ and
$A_{\pi^+ \pi^-} = 0.56\pm0.12(stat)\pm0.06(syst) $\\
while BABAR \cite{BABAR-alpha:7} obtained from 227M $B\bar B$ events 467$\pm$33 signal candidates: \\
$S_{\pi^+ \pi^-} = -0.30\pm0.17(stat)\pm0.03(syst) $ and
$A_{\pi^+ \pi^-} = +0.09\pm0.15(stat)\pm0.04(syst) $\\
as can be seen in fig.~\ref{fig:3}.
A discrepancy between the two measurements remains unresolved; in particular, BELLE claims direct CP violation ($A \ne 0$) with more than 4$\sigma$ significance, while BABAR sees none. The appearance of direct CP violation hints that the contribution from $b \rightarrow d$ penguin diagrams cannot be ignored, and therefore $\sin 2\phi_2$ as measured via $S_{\pi^+ \pi^-}$ differs from its ideal value.
\begin{figure}
\includegraphics[width=6.0cm]{fig2belle.eps}\hspace{1.0cm}\includegraphics[width=6.0cm]{fig2_babar_neu.eps}
\caption{Measurement of $S_{\pi^+ \pi^-}$ : Upper part: proper time distribution $\Delta t$ for q=+1 ($B^0$-tag) and q=-1 ($\bar B^0$-tag) for BELLE (left) and BABAR (right). The lower part shows the raw asymmetries for both experiments, (for BELLE divided according to the purity $r$ of the tagged sample)}
\label{fig:3}
\end{figure}
\subsection{3.2 $B \rightarrow \rho^+\rho^-$}
The decay $B^0 \rightarrow \rho^+\rho^-$ is also sensitive to the angle $\phi_2$, but it is more complicated since the final state of this $P \rightarrow VV$ decay is not a pure CP eigenstate. Fortunately, the longitudinal polarization fraction is nearly 100\%, as measured by BELLE \cite{BELLE-rho:8}, \\$f_L =0.941^{+0.034}_{-0.040}(stat)\pm0.030(syst)$, \\and the final state is therefore effectively a CP eigenstate, since the longitudinal polarization dominates. BELLE obtains from 275M $B\bar B$ events the following values: \\
$S_{\rho^+ \rho^-} = 0.08\pm0.41(stat)\pm0.09(syst)$ and
$A_{\rho^+ \rho^-} = 0.00\pm0.30(stat)\pm0.09(syst)$ \\
while BABAR \cite{BABAR-rho:9} has measured the following values (see fig. ~\ref{fig:4}): $f_L =0.978\pm0.014(stat)^{+0.021}_{-0.029}(syst)$\\
$S_{\rho^+ \rho^-} = -0.33\pm0.24(stat)^{+0.08}_{-0.14}(syst)$ and
$A_{\rho^+ \rho^-} = 0.03\pm0.18(stat)\pm0.09(syst)$\\
using 232M $B\bar B$ events.
Both experiments use these results for a measurement of $\phi_2 (\alpha)$, i.e.\\BELLE : $\phi_2 = (88\pm17)[deg]$ and BABAR : $\alpha = (100\pm13)[deg]$.
The combined value from both $B \rightarrow \pi^+\pi^-$ and $B \rightarrow \rho^+\rho^-$ measurements is given in table ~\ref{tab:a}, as compiled by \cite{HFAG:10} and \cite{Fitter:14}.
\begin{figure}
\includegraphics[width=6.0cm]{plotDt_LeptonKaon.eps}
\caption{BABAR: Measurement of $S_{\rho^+ \rho^-}$ : Upper part: proper time distribution $\Delta t$ for $B^0$-tag and $\bar B^0$-tag. The lower part shows the raw asymmetry}
\label{fig:4}
\end{figure}
\section{4. $\phi_3 (\gamma)$ measurement}
The method with the best results obtained so far uses a Dalitz plot analysis of the $K_S \pi^+ \pi^-$ decay of the neutral D from the $B^\pm \rightarrow D K^\pm$ process. Assuming no CP asymmetry between neutral D mesons, we can describe the amplitudes $M_+$ ($M_-$) as functions of the Dalitz plot variables: \\
$B^+ \rightarrow [K_S\pi^+\pi^-]K^+$ : $M_+ =f(m^2_+,m^2_-)+re^{i(+\phi_3+\delta)}f(m^2_-,m^2_+)$\\
$B^- \rightarrow [K_S\pi^+\pi^-]K^-$ : $M_- =f(m^2_-,m^2_+)+re^{i(-\phi_3+\delta)}f(m^2_+,m^2_-)$\\
where $m_+$ ($m_-$) is the invariant mass of the $K_S$ with the $\pi^+$ ($\pi^-$), $\delta$ is the strong phase, and $\phi_3$ is the weak phase to be fitted. $r$ is the ratio of the colour-suppressed to the colour-allowed decay amplitudes, and the amplitudes $f(m^2_+,m^2_-)$ and $f(m^2_-,m^2_+)$ were obtained from an independent $D^0$ sample of $D^{*+} \rightarrow D^0\pi^+$ decays. Using 386M events we got $331\pm17$ $B^+ \rightarrow DK^+$, $81\pm8$ $B^+ \rightarrow D^*K^+$ and $54\pm8$ $B^+ \rightarrow DK^{*+}$ candidates. Combining all modes, BELLE \cite{BELLE-gamma:11} obtained \\
$\phi_3 = (53^{+15}_{-18}(stat)\pm3(syst)\pm9(model))[deg]$ \\
where the model error reflects the uncertainty in the D-meson decay amplitude $f(m^2_-,m^2_+)$.
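The structure of these amplitudes can be illustrated with a small numerical toy model; the amplitude \texttt{f} below is an arbitrary smooth placeholder (in the real analysis it comes from the independent $D^{*+}$ sample), and all parameter values are merely indicative.
\begin{verbatim}
# Toy model of the B+- -> [K_S pi+ pi-] K+- Dalitz amplitudes above.
# f() is a hypothetical stand-in for the complex D0 decay amplitude.
import numpy as np

def f(m2p, m2m):
    return np.exp(-0.5 * (m2p - 1.0)**2) + 0.3j * m2m

def M(charge, m2p, m2m, r, delta, phi3):
    # charge = +1 for B+, -1 for B-; the weak phase flips sign
    if charge > 0:
        return f(m2p, m2m) + r * np.exp(1j * (phi3 + delta)) * f(m2m, m2p)
    return f(m2m, m2p) + r * np.exp(1j * (-phi3 + delta)) * f(m2p, m2m)

r, delta, phi3 = 0.1, np.deg2rad(30.0), np.deg2rad(53.0)
# The B+ and B- Dalitz densities differ only through the weak phase phi3:
print(abs(M(+1, 1.2, 0.8, r, delta, phi3))**2,
      abs(M(-1, 1.2, 0.8, r, delta, phi3))**2)
\end{verbatim}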
\section{5. Other constraints on the Unitarity triangle}
It is not only important to look at the angles of the Unitarity triangle; there is also much interest in determining its side lengths. The two most difficult are $|V_{td}|$ and $|V_{ub}|$, i.e. transitions from the third to the first generation, as these are obviously very rare decays with small branching ratios.
\subsection{5.1 $|V_{td}/V_{ts}|$ measured with the FCNC process $b\rightarrow d \gamma$}
Starting with 386M $B \bar B$ events, BELLE \cite{BELLE-dsg:12} has measured a branching fraction of Br($\bar B \rightarrow (\rho,\omega)\gamma$) = $(1.32^{+0.34}_{-0.31}(stat)^{+0.10}_{-0.09}(syst))\times10^{-6}$ with 5.1$\sigma$ significance (systematics included) from the fit of fig.~\ref{fig:6}.\\ The ratio Br($\bar B \rightarrow (\rho,\omega)\gamma$)/Br($\bar B \rightarrow K^* \gamma$) is proportional to $|V_{td}/V_{ts}|^2$ up to a kinematical factor and an SU(3) correction, and we deduce:\\
$|V_{td}/V_{ts}|$ = $0.199^{+0.026}_{-0.025}(exp)^{+0.018}_{-0.015}(theo)$.\\
Recently this quantity has also been measured by CDF \cite{CDF:15} by directly determining the oscillation frequency $\Delta m_s$ of $B_s \bar B_s$ mixing, and they obtain the improved result
$|V_{td}/V_{ts}|$ = $0.208^{+0.001}_{-0.002}(exp)^{+0.008}_{-0.006}(theo)$, consistent with BELLE.
\begin{figure}
\includegraphics[width=12.0cm]{fig2d.eps}
\caption{BELLE: Fit Results in the two projections $\Delta E$ and $M_{bc}$ for $\bar B \rightarrow (\rho,\omega) \gamma$. Curves show signal (dashed), continuum (dotted), $\bar B \rightarrow \bar K^* \gamma$ (dot-dashed), background (dot-dot-dashed) and the total fit result (solid)}
\label{fig:6}
\end{figure}
\subsection{5.2 $|V_{ub}|$ from the purely leptonic decay $B^- \rightarrow \tau \bar\nu_{\tau}$}
$|V_{ub}|$ is traditionally measured via the semileptonic decays $b \rightarrow u \ell \nu$, but this requires knowledge of the form factor, which is difficult to obtain experimentally. BELLE has therefore aimed to obtain this branching ratio through a model-independent measurement.
BELLE \cite{BELLE-tau:13} started with 447M B meson pairs and used only events where one side, the tag side ($B_{tag}$), was fully reconstructed, and compared the properties of the remaining particles on the signal side ($B_{sig}$), assuming it decays into a $\tau$ and a neutrino, with the expectations from MC for signal and background. The $\tau$ lepton is identified in five decay modes ($\mu^-\bar \nu_{\mu}\nu_{\tau}$, $e^-\bar \nu_{e}\nu_{\tau}$, $\pi^-\nu_{\tau}$, $\pi^-\pi^0\nu_{\tau}$ and $\pi^-\pi^+\pi^-\nu_{\tau}$), which cover approximately 81\% of all $\tau$ decays. For all modes except $\pi^-\pi^0\nu_{\tau}$, $\pi^0$ candidates are also rejected on the signal side. Fig.~\ref{fig:7} shows the energy $E_{ECL}$ of photons which are associated with neither $B_{tag}$ nor the $\pi^0$ candidate from $\tau^-\rightarrow \pi^-\pi^0\nu_{\tau}$. Signal events should peak at low $E_{ECL}$, i.e. no photon energy, while background events show higher values of $E_{ECL}$ due to additional neutral clusters.
We find $17.2^{+5.3}_{-4.7}$ signal events from a fit to a sample of 54 events, corresponding to a 3.5$\sigma$ significance including systematics. Our preliminary value of the branching fraction is:\\
Br($B^- \rightarrow \tau \bar\nu_{\tau}$) = $(1.79^{+0.56}_{-0.49}(stat)^{+0.39}_{-0.46}(syst))\times10^{-4}$\\
Using the equation
Br($B^- \rightarrow \tau \bar\nu_{\tau}$) = $\frac{G^2_F m_B m^2_{\tau}}{8\pi}\left(1-\frac{m^2_{\tau}}{m^2_B}\right)^2 f^2_B|V_{ub}|^2\tau_B$ \\
we get $f_B|V_{ub}|$ = $(10.1^{+1.6}_{-1.4}(stat)^{+1.1}_{-1.3}(syst))\times10^{-4}$~GeV.
Using the most recent HFAG \cite{HFAG:10} value from semileptonic decays, $V_{ub}=(4.39\pm0.33)\times10^{-3}$, we can extract the decay constant $f_B= 229^{+36}_{-31}(stat)^{+30}_{-34}(syst)$~MeV.
This is in full agreement with a recent unquenched lattice calculation,
$f_B= 216\pm 22$~MeV, and thus the HFAG value for $V_{ub}$ is confirmed by this measurement.
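As a numerical cross-check of this extraction, the snippet below reproduces the quoted $f_B|V_{ub}|$ and $f_B$ values from the formula above; $G_F$, the masses and the $B$ lifetime are standard external (PDG-style) inputs, not taken from this text.
\begin{verbatim}
# Cross-check of the f_B |V_ub| extraction from Br(B -> tau nu).
import math

G_F   = 1.1664e-5                # Fermi constant, GeV^-2 (external)
m_B   = 5.279                    # B meson mass, GeV (external)
m_tau = 1.777                    # tau mass, GeV (external)
tau_B = 1.638e-12 / 6.582e-25    # B+- lifetime converted to GeV^-1
Br    = 1.79e-4                  # measured branching fraction (above)
V_ub  = 4.39e-3                  # HFAG semileptonic value (above)

pref = (G_F**2 * m_B * m_tau**2 / (8 * math.pi)
        * (1 - m_tau**2 / m_B**2)**2 * tau_B)  # Br = pref * (f_B V_ub)^2
fB_Vub = math.sqrt(Br / pref)
print(f"f_B |V_ub| = {fB_Vub:.2e} GeV")        # ~1.0e-3, cf. 10.1e-4
print(f"f_B = {1e3 * fB_Vub / V_ub:.0f} MeV")  # ~229 MeV, as quoted
\end{verbatim}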
\section{6. Summary}
Both experiments, BELLE and BABAR, have shown that the CKM ansatz works extremely well for tree processes, and this fact can be used as an ``anchor point'' for New Physics. Even contributions from penguin processes show no statistically significant deviation up to now, and this can be seen both in the angles and in the side lengths of the unitarity triangle. To pin down effects from NP, the statistics of the relevant processes should be increased considerably.
\begin{table}
\begin{tabular}{lrr}
\hline
& \tablehead{1}{r}{b}{BELLE [deg.]}
& \tablehead{1}{r}{b}{World Average [deg.]}\\
\hline
$\phi_1$ & $20.3\pm1.8$ & $21.7^{+1.3}_{-1.2}$\\
$\phi_2$ & $93^{+12}_{-11}$ & $99^{+12}_{-9}$\\
$\phi_3$ & $53^{+15}_{-18}$ & $62^{+35}_{-25}$\\
\hline
\end{tabular}
\caption{Constraints on the angles of the
Unitarity triangle}
\label{tab:a}
\end{table}
\begin{figure}
\includegraphics[width=5.0cm]{fig5.eps}
\caption{BELLE: Distribution of the energy in the Electromagnetic Calorimeter ($E_{ECL}$) after all selection cuts: The solid curve shows the result of the fit with the sum of the signal shape (dashed) and the background shape (dotted)}
\label{fig:7}
\end{figure}
\section{Introduction}
The Talbot effect was originally observed by Henry Fox Talbot and described in his seminal work \cite{Talbot_Original}. When a plane wave is incident on a periodic aperture, the image of the aperture is self-replicated at specific discrete distances away from the aperture. At other distances in between, the aperture is self-imaged with a smaller period, resulting in a higher repetition rate of the periodic illumination. This effect is known as the spatial Talbot effect. The Talbot effect has found extensive applications in imaging, optical communication, optical computing, and optical interconnection, to name a few; a more comprehensive review can be found in \cite{Talbot_Review}. The Talbot effect has also been translated to temporally periodic signals. In the temporal Talbot effect, the space-time duality is exploited to achieve self-imaging of periodic pulse trains, whose repetition rates are increased using simple phase-only filtering techniques \cite{TemporalTalbot}\cite{Azana_Talbot_Temporal}.
While the Talbot effect has conventionally been used for repetition-rate multiplication of either a periodic spatial aperture or a periodic pulse train, it has recently been extended to achieve repetition-rate division as well \cite{Talbot_Amplification}\cite{Talbot_Averaging_Jose}. However, to the best of my knowledge, the period of the aperture has so far been either multiplied or divided by an \emph{integer} factor only. In this work, this concept of period scaling is generalized to \emph{non-integer} factors, where the input periodic aperture can be imaged with either a higher or a lower repetition rate, scaled by an arbitrary real number. The specific contributions of this work are: 1) the proposal of a generalized Talbot effect, described in the spatial domain, for arbitrary scaling of aperture periodicities; 2) specific implementations of the generalized Talbot system using metasurfaces \cite{meta2}\cite{meta3}, i.e. 2D arrays of subwavelength particles providing abrupt phase discontinuities in space.
\section{Principle}
\subsection{Conventional Talbot Effect}
\begin{figure}[htbp]
\begin{center}
\psfrag{c}[c][c][0.7]{$|\psi(x,y,0)|$}
\psfrag{e}[c][c][0.7]{$|\psi_T(x,y,\Delta z)|$}
\psfrag{f}[c][c][0.7]{$\psi(x,y,z)$}
\psfrag{x}[c][c][0.7]{$x$}
\psfrag{y}[c][c][0.7]{$y$}
\psfrag{z}[c][c][0.7]{$z$}
\psfrag{n}[c][c][0.7]{Periodic Aperture}
\psfrag{o}[l][c][0.7]{$\Lambda = m X_0\quad\text{where}\quad m\in\mathcal{R}$}
\psfrag{p}[l][c][0.7]{$\Lambda = X_0/m\quad\text{where}\quad m\in\mathcal{I}, \ge1$}
\psfrag{s}[c][c][0.7]{$X_0$}
\psfrag{t}[c][c][0.7]{$\Delta z$}
\psfrag{A}[c][c][0.7]{$T(x,y)$}
\includegraphics[width=0.9\columnwidth]{TalbotEffect2.eps}
\caption{Generalized spatial Talbot effect where the period of the 2D array of sources is scaled by an arbitrary real number $m$ in the output plane, as opposed to integer-only values in the conventional Talbot effect.}\label{Fig:Principle}
\end{center}
\end{figure}
Consider an aperture consisting of a 2D array of holes with period $\Lambda = X_0$ at $z=0$, as shown in Fig.~\ref{Fig:Principle}(a), illuminated by a plane wave of frequency $\omega$ (or wavelength $\lambda$). Such an aperture acts as a 2D array of point sources, which then radiate along the $z$-axis. Due to diffraction, the periodic sources interfere with each other, forming complex diffraction patterns. However, at integer multiples of the distance $\Delta z = X_0^2/\lambda = z_T$, known as the Talbot distance, the input aperture distribution is reconstructed with the same periodicity as a result of free-space interference. This phenomenon is called the integer Talbot effect. At other distances $z = z_T/m$, the fields are self-imaged with $m$ times the spatial repetition rate of the input aperture, as shown in the bottom of Fig.~\ref{Fig:Principle}(a), i.e. $\Lambda = X_0/m$. This phenomenon is called the fractional Talbot effect. Therefore, in the conventional spatial Talbot effect, the repetition rate of the input aperture is always increased by an \emph{integer} number $m$, i.e.
\begin{equation}
\overbrace{\psi_T(x,y,z_T/m)}^{\Lambda = X_0/m}= \underbrace{\psi_T(x,y,0)}_{\Lambda = X_0} \ast h(x,y),\label{Eq:ConventionalTalbotEffect}
\end{equation}
\noindent where $h(x,y)$ is the impulse response of free-space.
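This self-imaging condition can be checked numerically with a 1D angular-spectrum sketch; all parameters below are illustrative, and the window is chosen as an exact multiple of $X_0$ so that the FFT's periodic boundary matches the periodic aperture.
\begin{verbatim}
# 1D angular-spectrum check of the Talbot revival at z_T = X0^2/lambda.
import numpy as np

lam, X0, dx = 1.0, 20.0, 0.1
N = int(64 * X0 / dx)                     # window L = 64 * X0
x = (np.arange(N) - N // 2) * dx
field = sum(np.exp(-(x - a * X0)**2 / 2.0) for a in range(-32, 33))

k = 2 * np.pi / lam
kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)
z_T = X0**2 / lam
out = np.fft.ifft(np.fft.fft(field) * np.exp(1j * kx**2 * z_T / (2 * k)))

# The pattern revives with the same period at z_T, laterally shifted by
# X0/2 (the well-known half-period shift at odd multiples of X0^2/lambda).
shift = int(X0 / 2 / dx)
print(np.allclose(np.abs(out), np.roll(np.abs(field), shift), atol=1e-6))
\end{verbatim}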
\subsection{Generalized Talbot Effect}
The conventional Talbot effect is observed under the assumption that there is no phase discontinuity across the aperture so that $\psi_T(x,y,0) = |\psi_T(x,y,0)|$. This assumption consequently restricts the repetition-rate increase factor $m$ to integer values only. However, if the aperture is allowed to feature abrupt phase discontinuities across it, this restriction of integer-only values of $m$ is lifted. This can be understood by considering two fractional Talbot distances $z_1 = z_T/p$ and $z_2 = z_T/q$ such that
\begin{align}
\psi_T(x,y,z_T/p) = |\psi_T(x,y,0)|\ast h(x,y)\\
\psi_T(x,y,z_T/q) = |\psi_T(x,y,0)|\ast h(x,y).
\end{align}
\noindent Alternatively, the above equations can be written in a complex spatial frequency domain as
\begin{align}
\tilde{\psi}_T(k_x,k_y,z_T/p) = |\tilde{\psi}_T(k_x,k_y,0)| \times \tilde{H}(k_x,k_y)\\
\tilde{\psi}_T(k_x,k_y,z_T/q) = |\tilde{\psi}_T(k_x,k_y,0)| \times \tilde{H}(k_x,k_y).
\end{align}
\noindent Substituting $|\tilde{\psi}_T(k_x,k_y,0)|$ from the second equation into the first and using $\tilde{H}(k_x,k_y,z) = e^{-ik_z z}$, one gets
\begin{align}
\tilde{\psi}_T(k_x,k_y,z_T/p) &= \tilde{\psi}_T(k_x,k_y,z_T/q) \exp\left[-ik_z z_T\left(\frac{1}{p}- \frac{1}{q}\right)\right]
\end{align}
\noindent In the case considered here, $p < q$. Inverse Fourier transforming the above equation leads to
\begin{align}
\psi_T(x, y ,\Delta z) &= \psi_T(x,y,0) \ast h(x,y, \Delta z),
\end{align}
\noindent with $\Delta z = z_T(1/p - 1/q)$, where $z = z_T/q$ is chosen as the new reference plane. This equation can then be further written as
\begin{align}
\overbrace{\psi_T(x, y ,\Delta z)}^{\Lambda= X_0/p} &= [\underbrace{|\psi_T(x,y,0)|}_{\Lambda= X_0/q} \times \overbrace{T(x,y)}^\text{Metasurface}] \ast h(x,y, \Delta z),
\end{align}
\noindent where $T(x,y) = \exp\{i\angle \psi_T(x,y,0)\} = \exp\{i\angle \psi_T(x,y,z_T/q)\}$. This equation tells us that an input aperture field with period $\Lambda= X_0/q$, when multiplied by the phase function $T(x,y)$, results in another periodic field at $z=\Delta z$ with period $\Lambda= X_0/p$. In other words, the period has been scaled by the non-integer value $m = q/p$ between the input and output planes. Such a system is referred to here as the \emph{generalized spatial Talbot system}, and the corresponding effect as the generalized Talbot effect, as illustrated in the bottom of Fig.~\ref{Fig:Principle}(b). The phase function $T(x,y)$ represents a spatial phase-discontinuity profile which enables the scaling of the repetition rate of the input aperture fields by a non-integer value. Such a phase discontinuity can easily be introduced using a metasurface, as will be shown in Sec.~IV.
In summary, when the input aperture field with period $\Lambda = X_0$ is spatially cascaded with a metasurface of transmittance $T(x, y) = \exp\{i\angle\psi_T(x,y,z_T/q)\}$ and propagated over a distance $\Delta z = z_T(1/p - 1/q)$, the output field distribution exhibits the scaled period $\Lambda = (q/p) X_0$.
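This recipe can be verified numerically. The sketch below (arbitrary illustrative parameters, $m = q/p = 3/2$) derives the mask from a simulated fractional self-image rather than from the closed forms of Sec.~III, and checks the input and output periods:
\begin{verbatim}
# Numerical sketch of the generalized Talbot recipe (m = q/p = 3/2).
import numpy as np

lam, X0, p, q, dx = 1.0, 24.0, 2, 3, 0.05
N = int(16 * X0 / dx)                    # window L = 16 * X0
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi / lam
kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)
z_T = X0**2 / lam

def prop(f, z):                          # paraxial angular-spectrum step
    return np.fft.ifft(np.fft.fft(f) * np.exp(1j * kx**2 * z / (2 * k)))

base = sum(np.exp(-(x - a * X0)**2 / 0.5) for a in range(-8, 9))
frac = prop(base, z_T / q)               # fractional image, period X0/q
psi_in, mask = np.abs(frac), np.exp(1j * np.angle(frac))
out = prop(psi_in * mask, z_T * (1 / p - 1 / q))

period = lambda f, P: np.allclose(np.abs(f),
                                  np.roll(np.abs(f), int(round(P / dx))),
                                  atol=1e-6)
print(period(psi_in, X0 / q), period(out, X0 / p))  # True True: m = q/p
\end{verbatim}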
\section{Metasurface Transmittance Functions}
\begin{figure*}[htbp]
\begin{center}
\psfrag{a}[c][c][0.7]{$y~(\mu\text{m})$}
\psfrag{b}[c][c][0.7]{$x~(\mu\text{m})$}
\psfrag{c}[c][c][0.7]{$|\psi(x,y,0)|^2$~(dB)}
\psfrag{d}[c][c][0.7]{$\angle T(x,y)$~($ \pi$~rad)}
\psfrag{e}[c][c][0.7]{$|\psi(x,y,z_1)|^2$~(dB)}
\psfrag{h}[c][c][0.7]{$|\psi(x,y,\Delta z)|^2$~(dB)}
\psfrag{f}[c][c][0.7]{$\boxed{1/3~\text{Division}}$}
\psfrag{g}[c][c][0.7]{$\boxed{1/4~\text{Division}}$}
\psfrag{n}[c][c][0.7]{\color{white}{$-4.15$~dB}}
\psfrag{m}[c][c][0.7]{\color{white}{$-3.5$~dB}}
\psfrag{p}[c][c][0.7]{$\Delta z = (1/3 - 1/4)z_T$}
\psfrag{q}[c][c][0.7]{$\Delta z = (1/2 - 1/3)z_T$}
\psfrag{r}[c][c][0.7]{$\boxed{m = 4/3 = 1.33}$}
\psfrag{s}[c][c][0.7]{$\boxed{m = 3/2 = 1.50}$}
\includegraphics[width=1.7\columnwidth]{Result.eps}
\caption{Computed fields at the output plane when a periodic array of sources are phased following \eqref{Eq:PhaseEven} and \eqref{Eq:PhaseOdd}, to achieve a repetition rate scaling by a factor of $m=1.33$ and $m=1.5$, respectively. Here $X_0=100~\mu$m and the individual sources are assumed to be Gaussian functions $\psi(x,y) = \exp[-(x^2 + y^2)/2w_0^2]$, with $w_0 = 5~\mu$m. The design frequency is $250$~THz. Only a small part of the overall aperture is shown for clarity.}\label{Fig:IdealResults}
\end{center}
\end{figure*}
As described in the previous section, the metasurface of Fig.~\ref{Fig:Principle} implements a phase-only function that mimics the phase distributions of the fractional Talbot self-images, i.e. $T(x,y) = \exp\{i\angle \psi_T(x,y,z_T/m)\}$. In this section, closed-form expressions for these phase distributions are developed.
Let us assume for simplicity, a one-dimensional periodic array of sources with the amplitude distribution
\begin{align}
\psi_\text{in}(x) &= \sum_{a = -\infty}^{+\infty} \delta(x-aX_0).\label{Eq:Input}
\end{align}
\noindent After propagation through free space along the $z$-axis, the output field is given by $\psi(x) = \psi_\text{in}(x)\ast h(x,z)$, where the impulse response is $h(x)= \exp(-i\pi x^2/\lambda z)$ under paraxial conditions (the time convention used here is $e^{i\omega t}$). Using this equation with \eqref{Eq:Input} and simplifying the convolution integral, we get
\begin{equation}
\psi(x) = e^{-i\frac{\pi x^2}{\lambda z}} \sum_{a = -\infty}^{+\infty} \exp\left(-i\pi a^2 n\right)\exp\left(in\frac{2a \pi }{X_0}x\right),\label{Eq:TalbotProp}
\end{equation}
\noindent where $X_0^2/\lambda z = n$ is assumed to be an integer, i.e. $n\in \mathcal{I}$. This corresponds to the propagation distance $z= z_T/n$, where $z_T = X_0^2/\lambda$ is known as the Talbot distance.
\begin{enumerate}
\item Case 1: when $n$ is \emph{even}, $\exp\left(-i\pi a^2 n\right) = 1$, and thus \eqref{Eq:TalbotProp} reduces to
\begin{align}
\psi(x) & = \frac{X_0}{n} \sum_{a =-\infty}^{+\infty} \delta\left(x-a\frac{X_0}{n}\right)\exp\left(-i\frac{\pi a^2}{n}\right), \label{Eq:Even}
\end{align}
\item Case 2: when $n$ is \emph{odd}, \eqref{Eq:TalbotProp} reduces after straightforward manipulation to
\begin{align}
\psi(x) = \frac{X_0}{n}\sum_{a= -\infty}^{+\infty} \delta\left(x-a\frac{X_0}{2n}\right) [1 - e^{i\pi a}]\exp\left(-i\frac{\pi a^2}{4n}\right) \label{Eq:Odd}
\end{align}
\end{enumerate}
These equations are derived using the identity $\sum_{a=-\infty}^{\infty} \exp(2\pi i ax/X_0) = X_0 \sum_{a=-\infty}^{\infty} \delta(x - aX_0)$. Equations \eqref{Eq:Even} and \eqref{Eq:Odd} reveal that the period of the output field $\psi(x)$ is smaller by a factor of $n$ compared to that of the input field in both cases, as expected at the fractional Talbot distance $z= z_T/n$. Repeating the same procedure for a 2D array of sources, the complex phase of the output self-imaged patterns at location $(a,b)$ on the aperture can be verified to be \cite{Talbto_Phase_Analytical}
\begin{subequations}
\begin{equation}
\phi(a, b) = -\frac{2\pi (a^2 + b^2)}{n}\label{Eq:PhaseEven}
\end{equation}
\begin{equation}
\phi(a, b) = -\frac{\pi [(2a+1)^2 + (2b+1)^2]}{4n}\label{Eq:PhaseOdd}
\end{equation}
\end{subequations}
\noindent These equations can be used to determine the complex phase of the fields at any fractional Talbot distance lying in $[0,\; z_T]$, and thus to construct the metasurface transmittance function $T(x,y)$ for the generalized Talbot effect described in Sec.~II. It should be noted that, while the above phase functions are developed only for the special case $z\in [0,\; z_T]$, a similar procedure can be carried out to cover $z\in [z_T,\; 2z_T]$ and so on \cite{Talbto_Phase_Analytical}.
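A small helper that evaluates these closed forms into a phase-only mask is sketched below; the $(a,b)$ lattice indexing follows \eqref{Eq:PhaseEven} and \eqref{Eq:PhaseOdd}, and the parameter values are illustrative.
\begin{verbatim}
# Phase-only metasurface mask from Eqs. (PhaseEven)/(PhaseOdd) above.
import numpy as np

def metasurface_phase(n, cells):
    a, b = np.meshgrid(np.arange(cells), np.arange(cells), indexing="ij")
    if n % 2 == 0:
        phi = -2 * np.pi * (a**2 + b**2) / n               # even n
    else:
        phi = -np.pi * ((2*a + 1)**2 + (2*b + 1)**2) / (4 * n)  # odd n
    return np.exp(1j * phi)              # unit-modulus transmittance

T = metasurface_phase(n=3, cells=6)      # e.g. the mask for q = 3 (m = 3/2)
print(np.round(np.angle(T) / np.pi, 2))  # phases in units of pi
\end{verbatim}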
Figure~\ref{Fig:IdealResults} shows two examples where the above principle is applied to scale the spatial period of the input field by a non-integer factor. In the first example, the input field has a period of 25~$\mu$m and the desired increase in the period is $m = 4/3 = q/p$, so that the output period is $\Lambda = (4/3)\times 25~\mu$m. The metasurface transfer function $T(x,y)$ is constructed using \eqref{Eq:PhaseEven} with $n = q = 4$, as shown in the middle of Fig.~\ref{Fig:IdealResults}. The output of the metasurface is then propagated in free space by a distance $\Delta z = (1/p - 1/q)z_T$, leading to the output fields shown on the right of Fig.~\ref{Fig:IdealResults}; as expected, an increase of the period by a factor of $4/3$ is observed. The second example in Fig.~\ref{Fig:IdealResults} shows a similar result with a period increase by a factor of $3/2$. It should be noted that, while these examples only illustrate a period increase by a non-integer factor, similar demonstrations can easily be made for a reduction as well by utilizing fractional Talbot distances between $[z_T,\; 2z_T]$.
\section{Metasurface Implementation}
Metasurfaces are two-dimensional arrays of sub-wavelength electromagnetic scatterers, the dimensional reduction of more general volumetric metamaterial structures. By engineering the electromagnetic properties of the scattering particles, a metasurface can be used to manipulate and engineer the spatial wavefront of incident waves. In this way, metasurfaces provide a powerful tool to transform incident fields into specified transmitted and reflected fields \cite{Metasurface_Synthesis_Caloz}. More specifically, metasurfaces can impart amplitude transformations, phase transformations, or both, making them applicable in a diverse range of applications involving lensing, imaging \cite{meta2}\cite{meta3}, field transformations \cite{MetaFieldTransformation}, cloaking \cite{MetaCloak} and holography \cite{MetaHolo}, to name a few. Therefore, considering their versatile field-transformation properties and their electrically thin dimensions, they are ideally suited to provide the abrupt phase discontinuities in free space required in the proposed generalized Talbot effect.
\begin{figure}[htbp]
\begin{center}
\subfigure[]{
\psfrag{a}[c][c][0.7]{$\mathbf{p}$}
\psfrag{b}[c][c][0.7]{$\mathbf{m}$}
\psfrag{c}[c][c][0.7]{$2r_1$}
\psfrag{d}[c][c][0.7]{$\Lambda$}
\psfrag{e}[c][c][0.7]{$2r_2$}
\psfrag{f}[c][c][0.7]{$t$}
\psfrag{g}[c][c][0.7]{$n_0$}
\psfrag{h}[c][c][0.7]{$n_h$}
\psfrag{x}[c][c][0.7]{$x$}
\psfrag{y}[c][c][0.7]{$y$}
\psfrag{z}[c][c][0.7]{$z$}
\psfrag{j}[c][c][0.7]{$|R| = 0$}
\includegraphics[width=0.5\columnwidth]{MSCell.eps}}
\subfigure[]{
\psfrag{a}[c][c][0.8]{frequency $f$ (THz)}
\psfrag{b}[c][c][0.8]{$|T|^2, |R|^2$~(dB)}
\psfrag{d}[c][c][0.8]{$\mathbf{p}$}
\psfrag{c}[c][c][0.8]{$\mathbf{p}$, $\mathbf{m}$}
\psfrag{e}[c][c][0.8]{$\mathbf{m}$}
\psfrag{f}[c][c][0.5]{\shortstack{$r_2=27$~nm\\ $\kappa=0.419$}}
\psfrag{g}[c][c][0.5]{\shortstack{$r_2=55$~nm\\ $\kappa=0.425$}}
\includegraphics[width=0.8\columnwidth]{Huygens.eps}}
\caption{Huygens' source metasurface based on all-dielectric resonators. a) Unit cell periodic in $x-$ and $y-$ directions. b) Typical amplitude transmission (solid) and reflection (dashed) for the two cases when the two dipole moments $\mathbf{p}$ and $\mathbf{m}$ are frequency aligned, and not frequency aligned, respectively. Design parameters: $r_1=300$~nm, $t=220$~nm, $\Lambda = 666$~nm, $n_h= 1.66$~(Silica) and $n_0=3.45$ and $\tan\delta = 0.001$~(Silicon).}\label{Fig:HuyDiMs}
\end{center}
\end{figure}
To provide the needed phase-only filtering characteristics, the metasurface must ideally exhibit unit-amplitude transmission without any reflection, i.e. $|T(x,y)| = 1~\forall~\angle T(x,y) \in [0,\;2\pi]$ and reflectance $|R(x,y)| = 0$. These specifications can be conveniently achieved using a so-called \emph{Huygens' metasurface}. A Huygens' configuration consists of orthogonally placed electric and magnetic dipole moments \cite{Kerker_Scattering}, $\mathbf{p}$ and $\mathbf{m}$, respectively, as shown in Fig.~\ref{Fig:HuyDiMs}(a), resulting in a complete cancellation of backscattering due to the destructive interference of the fields generated by the two dipole moments. A metasurface consisting of such scattering particles is perfectly matched to free space and thus exhibits zero reflection, i.e. $|R(x,y)| = 0$. Under lossless conditions, a Huygens' metasurface acts as an all-pass surface, with $|T(x,y)| = 1$ and $\angle T(x,y) = \phi_0 \in [0,\; 2\pi]$.
\begin{figure*}[htbp]
\begin{center}
\psfrag{X}[c][c][0.75]{$\begin{tabular}{ c|cc }
& $r_2$~(nm) & $\kappa$ \\ \hline
\#1 & 22.5 & 0.4 \\
\#2 & 27 & 0.419 \\
\#3& 29 & 0.43 \\
\hline
\end{tabular}$}
\psfrag{a}[c][c][0.8]{frequency $f$ (THz)}
\psfrag{c}[c][c][0.7]{$|T(x,y)|$~(dB)}
\psfrag{d}[c][c][0.8]{$\angle T(x,y)$~($\pi$~rad)}
\psfrag{f}[c][c][0.8]{$20\log|T|$~(dB)}
\psfrag{e}[c][c][0.7]{Phase $\angle T$ ($\pi$ rad)}
\psfrag{g}[c][c][0.7]{$|\psi(x,y, \Delta z)|^2$~(dB)}
\psfrag{m}[c][c][0.7]{$x~\mu$m}
\psfrag{n}[c][c][0.7]{$y~\mu$m}
\psfrag{h}[c][c][0.7]{$20\log |R| < -10$~dB}
\includegraphics[width=1.7\columnwidth]{HFSS.eps}
\caption{Demonstration of the generalized Talbot effect using an all-dielectric metasurface, to achieve a period scaling by a factor of $m=1.5$, as an example. a) FEM-HFSS simulated transmission and phase responses of three different metasurface unit cells to approximate the required discrete phases. b) Amplitude and phase transmittance of the metasurface aperture using the unit cells of (a). c) The output fields at $\Delta z = (1/p - 1/q)z_T$ under plane-wave excitation of the metasurface aperture computed using Fourier propagation, i.e. $\psi(x,y, \Delta z) = \psi(x,y, 0)\ast h(x,y)$. Only a small part of the overall aperture is shown for clarity. }\label{Fig:HFSS}
\end{center}
\end{figure*}
A practical Huygens' metasurface is conveniently realized using all-dielectric resonator arrays, which naturally produce orthogonal $\mathbf{p}$ and $\mathbf{m}$ with lower losses compared to their plasmonic counterparts \cite{AllDielectricKivshar}. A good review of such all-dielectric metasurfaces can be found in \cite{AllDieelctricMTMS}. A generalized unit cell of the all-dielectric Huygens' metasurface used in this work is shown in Fig.~\ref{Fig:HuyDiMs}(a), consisting of a high-dielectric holey elliptical puck embedded in a host medium of lower refractive index $n_h$.
The puck has an ellipticity $\kappa$, and the hole inside the puck is elliptical as well, but rotated by $90^\circ$. This configuration is particularly useful because its transmission phase at a fixed frequency can be conveniently tuned by varying only the inner radius $r_2$ and $\kappa$, without affecting the thickness, the lattice size of the unit cell or the outer radius $r_1$, while simultaneously maintaining a good match to free space.
Fig.~\ref{Fig:HuyDiMs}(b) shows a typical response of such a unit cell for two sets of parameters $r_2$ and $\kappa$. In the first case, the two dipole moments $\mathbf{p}$ and $\mathbf{m}$ are properly excited at the same design frequency ($250$~THz in this example), resulting in an optimal interaction of the two dipoles and a near-perfect transmission of the wave, as expected from a Huygens' source. This situation corresponds to the phase-only transmission response to be used shortly in the generalized Talbot system. The second case, however, shows frequency-misaligned dipoles, resulting in a strong reflection from the unit cell; this configuration thus corresponds to a near-perfect reflector.
Using these two configurations, a metasurface aperture can now be constructed to demonstrate the generalized Talbot effect. Let us take an example where the required period scaling at the output plane is $m = 1.5 = q/p = 3/2$. Since $q=3$ is odd, the discrete phase values are first computed using \eqref{Eq:PhaseOdd}. Next, the metasurface unit cell of Fig.~\ref{Fig:HuyDiMs}(a) is designed to approximate these phase values. Fig.~\ref{Fig:HFSS}(a) shows the transmission and phase of three such unit-cell designs. The reflection in all cases is $< -10$~dB, which is sufficiently low for typical practical situations. Using these transmission responses and the perfect-reflector unit-cell configuration, a metasurface aperture is formed as shown in Fig.~\ref{Fig:HFSS}(b). This completes the metasurface design. A plane wave incident on this aperture, after propagating a distance $\Delta z = (1/p - 1/q)z_T = (1/2 - 1/3)(100~\mu\text{m})^2/\lambda(250~\text{THz})$, transforms into the output fields shown in Fig.~\ref{Fig:HFSS}(c). As expected and required, the output period is now $50~\mu$m, i.e. $m=1.5$ times that at the input. The metasurface thus successfully performs the specified non-integer period scaling.
\section{Conclusions}
A generalized spatial Talbot effect has been proposed, where the period of the input aperture is scaled by a non-integer real value, as opposed to the integer-only factors of the conventional Talbot effect. This has been achieved by engineering phase-discontinuity distributions in space using metasurfaces, in conjunction with free-space propagation. Specific implementations using all-dielectric metasurfaces have also been presented, and non-integer scalings of the input aperture have been demonstrated using numerical results based on Fourier propagation. While the generalized Talbot effect has been discussed here in the space domain, the proposed principle is equally applicable in the time domain based on space-time duality, where the repetition rate of periodic pulse trains may be scaled by a non-integer factor using temporal phase modulators.